Tuesday, November 6, 2012

meminfo: Detailed memory information on *nix servers

meminfo provides information about the distribution and utilization of memory. The fields shown vary with the server's architecture, so not every field appears in every server's output.

$cat /proc/meminfo

MemTotal:        8062872 kB
MemFree:          927352 kB
Buffers:          225360 kB
Cached:          6142440 kB
SwapCached:            0 kB
Active:           949748 kB
Inactive:        5816904 kB
Active(anon):     330948 kB
Inactive(anon):    68100 kB
Active(file):     618800 kB
Inactive(file):  5748804 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4194296 kB
SwapFree:        4194296 kB
Dirty:                44 kB
Writeback:             0 kB
AnonPages:        398812 kB
Mapped:            61616 kB
Shmem:               192 kB
Slab:             253172 kB
SReclaimable:     221144 kB
SUnreclaim:        32028 kB
KernelStack:        1936 kB
PageTables:         7340 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     8225732 kB
Committed_AS:     862100 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      289592 kB
VmallocChunk:   34359440136 kB
HardwareCorrupted:     0 kB
AnonHugePages:    342016 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10240 kB
DirectMap2M:     8378368 kB

Here are details about some of the important parameters:

MemTotal: Total usable RAM (i.e. physical RAM minus a few reserved bits and the kernel binary code)
MemFree: The free RAM portion
Buffers: Temporary storage for raw disk blocks
Cached: In-memory cache for files read from disk (the page cache)
SwapCached: Memory that was once swapped out and has been swapped back in, but is still also in the swap file (if memory is needed, it does not have to be swapped out again because it is already in the swap file; this saves I/O)
Active: Memory that has been used more recently, usually not reclaimed unless absolutely necessary
Inactive: Memory which has been used less recently; it is more eligible to be reclaimed for other purposes

SwapTotal: Total amount of swap space available
SwapFree: Amount of swap space currently unused
Dirty: Memory which is waiting to be written back to disk
Writeback: Memory which is actively being written back to disk
AnonPages: Non-file-backed pages mapped into userspace page tables
Mapped: Files which have been mmapped, such as libraries
Slab: In-kernel data structures cache
SReclaimable: The part of the Slab that might be reclaimed, such as caches
SUnreclaim: The part of the Slab that cannot be reclaimed under memory pressure
PageTables: Amount of memory dedicated to the lowest level of page tables
NFS_Unstable: NFS pages sent to the server but not yet committed to stable storage
Bounce: Memory used for block-device "bounce buffers"
WritebackTmp: Memory used by FUSE for temporary writeback buffers
VmallocTotal: Total size of the vmalloc memory area
VmallocUsed: Amount of the vmalloc area which is used
VmallocChunk: Largest contiguous block of the vmalloc area which is free
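As a quick worked example, the fields above can be combined to estimate how much memory is genuinely in use (MemTotal minus MemFree, Buffers and Cached). The helper function name below is my own invention; it reads any meminfo-style file, so it can be pointed at /proc/meminfo or a saved copy:

```shell
# meminfo_used_kb: print an estimate of "really used" memory in kB,
# computed as MemTotal - MemFree - Buffers - Cached.
# $1 is the path to a meminfo-style file (e.g. /proc/meminfo).
# The anchored patterns ensure e.g. "SwapCached:" is not mistaken for "Cached:".
meminfo_used_kb() {
    awk '/^MemTotal:/ {total=$2}
         /^MemFree:/  {free=$2}
         /^Buffers:/  {buffers=$2}
         /^Cached:/   {cached=$2}
         END {print total - free - buffers - cached}' "$1"
}
```

For the sample output above, this yields 8062872 - 927352 - 225360 - 6142440 = 767720 kB actually in use by applications.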

Troubleshooting MySQL database performance

Here are some useful ways to monitor the database performance of your MySQL installation.

Monitoring any database or application is an iterative, continuous process. You need to determine which graphs matter for your application/database, and measure which values are signs of an alert or disaster-recovery situation and what may cause them.

Below are the main items you can use to monitor your system:

1. MySQL error log
2. mysqladmin processlist
3. mysqladmin extended (absolute values)
4. mysqladmin extended -i10 -r (relative values)
5. mysql -e "SHOW ENGINE INNODB STATUS"
6. OS data: vmstat/iostat

1) MySQL error log
The error log is a great source of information when something goes wrong. Nothing should be written to the error log after the MySQL server has completed its initialization sequence, so anything that appears there after initialization should be checked immediately.
2) mysqladmin processlist or the SHOW FULL PROCESSLIST command
This returns the number of threads connected and running, along with other statistics, and is a good way to check how long the running queries are taking. If there are some very long-running queries, the admin/application team should be informed.
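As a small illustration (the function name and the 60-second threshold are my own), the tabular output of mysqladmin processlist can be filtered with awk to show only queries that have been running longer than a given number of seconds:

```shell
# long_queries: read `mysqladmin processlist` style table rows on stdin
# and print only those whose Time column (7th "|"-separated field)
# exceeds the threshold given as $1, in seconds.
long_queries() {
    awk -F'|' -v t="$1" '
        NF >= 9 {
            time = $7
            gsub(/ /, "", time)                 # strip padding spaces
            if (time ~ /^[0-9]+$/ && time + 0 > t) print
        }'
}
```

Usage would look something like: mysqladmin -u root -p processlist | long_queries 60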

3) mysqladmin extended (absolute values)
This is a client utility shipped with MySQL that reports the server's status variables. The following parameters are worth checking:

# Slave_running: If replication is configured, this reports the slave status. It should be checked on the slave server.
# Threads_connected: The number of clients currently connected to the MySQL server. This should be less than the configured value (max_connections).
# Threads_running: If the database is overloaded, the number of running queries increases. This should also stay below some preset value; it is OK for it to exceed the limit for very short periods. You can then alert when Threads_running exceeds the preset value and does not fall back within 5 seconds.

4) mysqladmin extended -i10 -r (relative values)
The idea is to store each performance counter value and compute the difference from the previous reading. The interval between readings should be more than 10 seconds. The following values are good candidates for checking:

* Aborted_clients: The number of clients that were aborted because they did not properly close the connection to the MySQL server. For some applications this can be OK, but for others you may want to track the value, as aborted connections may indicate some sort of application failure.
* Questions: The total number of queries the server has received. To get queries per second, divide Questions by Uptime.
* Handler_*: If you want to monitor low-level database load, these are good values to track. If Handler_read_rnd_next is abnormal relative to the value you would normally expect, it may indicate optimization or index problems. Handler_rollback shows the number of transactions that were rolled back; you may wish to investigate them.
* Opened_tables: The number of table cache misses. If the value is large, you probably need to increase table_cache. Typically you want fewer than 1 or 2 opened tables per second.
* Select_full_join: The number of joins performed without keys. This should be zero, and it is a good way to catch development errors, as just a few such queries can degrade the system's performance.
* Select_scan: The number of queries that performed a full table scan. In some cases these are OK, but their ratio to all queries should be constant. If the value is growing, it can indicate a problem with the optimizer, a lack of indexes, or some other issue.
* Slow_queries: The number of queries that took longer than --long-query-time or that are not using indexes. These should be a small fraction of all queries; if the fraction grows, the system will have performance problems.
* Threads_created: This should be low. Higher values may mean that you need to increase thread_cache, or that the number of connections is increasing, which also indicates a potential problem.
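To turn the cumulative Questions counter into a rate, divide it by Uptime as noted above. Here is a hedged sketch (the function name is mine) that does this for `mysqladmin extended` style output:

```shell
# qps: read `mysqladmin extended` style "| Variable_name | Value |" rows
# on stdin and print the average queries per second since server start.
qps() {
    awk -F'|' '
        $2 ~ /^ *Questions *$/ {q = $3}
        $2 ~ /^ *Uptime *$/    {u = $3}
        END {if (u + 0 > 0) printf "%.1f\n", q / u}'
}
```

Usage: mysqladmin -u root -p extended | qps. Note this is a lifetime average; for a current rate, use the -i10 -r form and divide the Questions delta by the interval.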

5) mysql -e "SHOW ENGINE INNODB STATUS"
This statement produces a great deal of information, from which you should extract the parts you are interested in. The first thing to check is "Per second averages calculated from the last xx seconds"; InnoDB refreshes these statistics about once a minute.

* Pending normal aio reads: This is the InnoDB I/O request queue size. If it is bigger than 10-20, you might have a bottleneck.
* reads/s, avg bytes/read, writes/s, fsyncs/s: These are I/O statistics. Large values for reads/writes mean the I/O subsystem is heavily loaded. Proper values depend on your system configuration.
* Buffer pool hit rate: The hit rate depends a lot on your application. Establish a baseline and compare against it when there are problems.
* inserts/s, updates/s, deletes/s, reads/s: These are the low-level row operations that InnoDB performs. You can use them to check whether your load is in the expected range.
Detailed information about this output can be found in the MySQL reference manual.

6) OS data
Good tools to see system status are vmstat, iostat and mpstat. iostat provides information about I/O on *nix systems.
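For instance, a rough CPU-idle average can be pulled out of vmstat output with awk. This is only a sketch (the function name is mine); it locates the "id" column from the header row rather than hard-coding its position:

```shell
# avg_idle: read `vmstat <interval> <count>` output on stdin and print
# the average of the CPU idle ("id") column across the sample rows.
avg_idle() {
    awk '
        /^procs/ { next }                        # skip the group header
        $1 == "r" {                              # the column-name row
            for (i = 1; i <= NF; i++) if ($i == "id") col = i
            next
        }
        col && $1 ~ /^[0-9]+$/ { sum += $col; n++ }
        END { if (n) printf "%.1f\n", sum / n }'
}
```

Usage: vmstat 5 3 | avg_idle. The same header-driven approach works for picking columns such as %util out of iostat -x output.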

Friday, October 19, 2012

Some Useful Oracle Commands

Currently running jobs in Oracle:
SELECT sid, r.job, log_user, r.this_date, r.this_sec
FROM dba_jobs_running r, dba_jobs j
WHERE r.job = j.job;

Query to list instance parameters, including hidden ones (requires access to the SYS x$ views):
SELECT a.ksppinm name, b.ksppstvl VALUE, b.ksppstdf isdefault,
       DECODE(a.ksppity, 1, 'boolean', 2, 'string', 3, 'number', 4, 'file',
           a.ksppity) TYPE,
       a.ksppdesc description
FROM   sys.x$ksppi a, sys.x$ksppcv b
WHERE  a.indx = b.indx
  AND  a.ksppinm NOT LIKE '\_%' ESCAPE '\'
ORDER  BY name;

Datafiles and their sizes per tablespace:
SET pages 99
col file_name format a45
col tablespace_name format a13
col tablespace_name heading ts_name
col blocks format 999,999
SELECT file_name,tablespace_name,bytes,blocks
FROM dba_data_files
ORDER BY tablespace_name,file_name;

SQL query to find Oracle active processes

SET pagesize 55;
SET linesize 170;
col SQL format a80;
col SERVER heading 'SVR' format a3;
col EVENT heading 'WAITING' format a30 fold_after;
col OSUSER heading 'OSUSER' format a8;
col USERNAME heading 'USERNAME' format a8;
col PID heading 'OSPID' format 99999;
col DISK_READS heading 'DISK I/O' format 99999999;
col BUFFER_GETS heading 'BUFFER|GETS' format 99999999;

     V$SESSION_WAIT.EVENT != 'client message'

Monday, October 1, 2012

MySQL Enterprise Edition: Policy-based Auditing!


One of the most commonly requested features for MySQL is quick and easy logging of audit events. This is mainly due to how web-based applications have evolved from nice-to-have enablers into mission-critical revenue generators, and to the important role MySQL plays in that dynamic environment. In today's virtual marketplace, PCI compliance guidelines ensure credit card data is secure within e-commerce apps, while Sarbanes-Oxley, HIPAA and other regulations guard the medical, financial, public-sector and other personal-data-centric industries. For the supporting applications, the audit policies and controls that monitor who has viewed and acted upon the most sensitive data are most commonly implemented on the back-end database.

With this in mind, MySQL 5.5 introduced an open audit plugin API that enables all MySQL users to write their own auditing plugins based on application-specific requirements. While the supporting docs are very complete and provide working code samples, writing an audit plugin requires time and low-level expertise to develop, test, implement and maintain. To help those who don't have the time and/or expertise to develop such a plugin, Oracle now ships MySQL 5.5.28 and higher with an easy to use, out-of-the-box auditing solution: MySQL Enterprise Audit.

MySQL Enterprise Audit

The premise behind MySQL Enterprise Audit is simple; we wanted to provide an easy to use, policy-based auditing solution that enables you to quickly and seamlessly add compliance to your MySQL applications. MySQL Enterprise Audit meets this requirement by enabling you to:

1. Easily install the needed components.

Installation requires an upgrade to MySQL 5.5.28 (Enterprise edition), which can be downloaded from the My Oracle Support portal or the Oracle Software Delivery Cloud. After installation, you simply add the following to your my.cnf file to register and enable the audit plugin:


plugin-load=audit_log.so (keep in mind the audit_log file suffix is platform dependent: .dll on Windows, etc.)

or alternatively you can load the plugin at runtime:

mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';

2. Dynamically enable and disable the audit stream for a specific MySQL server.

A new global variable called audit_log_policy allows you to dynamically enable and disable audit stream logging for a specific MySQL server. The variable parameters are described below.

3. Define audit policy based on what needs to be logged (everything, logins, queries, or nothing), by server.

The new audit_log_policy variable accepts the following descriptively named values to enable or disable audit stream logging and to filter which audit events are logged:

  • "ALL" - enable audit stream and log all events
  • "LOGINS" - enable audit stream and log only login events
  • "QUERIES" - enable audit stream and log only query events
  • "NONE" - disable audit stream

4. Manage audit log files using basic MySQL log rotation features.

A new global variable, audit_log_rotate_on_size, lets you automate the rotation and archiving of audit log files based on size; when a new file is opened for logging, the archived log file is renamed with an appended datetime stamp.

5. Integrate the MySQL audit stream with MySQL, Oracle tools and other third-party solutions.

The MySQL audit stream is written as XML, using UTF-8, and can be easily formatted for viewing using a standard XML parser. This enables you to leverage tools from MySQL and others to view the contents. The audit stream was also developed to meet the Oracle database audit stream specification, so combined Oracle/MySQL shops can import and manage MySQL audit images using the same Oracle tools they use for their Oracle databases.

So, assuming a successful MySQL 5.5.28 upgrade or installation, a common setup combines the options above: load the plugin, set the audit policy, and configure log rotation.
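As a hedged sketch of such a setup (the rotation size and file values here are illustrative assumptions, not recommendations), the relevant my.cnf fragment could look like:

```
[mysqld]
# register and enable the audit plugin at startup
plugin-load=audit_log.so
# log all auditable events (ALL | LOGINS | QUERIES | NONE)
audit_log_policy=ALL
# rotate the audit log once it reaches roughly 1GB
audit_log_rotate_on_size=1073741824
```

The policy can then be changed at runtime, for example with SET GLOBAL audit_log_policy='LOGINS';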

It should be noted that MySQL Enterprise Audit was designed to be transparent at the application layer: you can control the mix of log output buffering and asynchronous or synchronous disk writes to minimize the overhead that comes with enabling the audit stream. The net result is that, depending on the chosen audit log options, most application users will see little to no difference in response times when the audit stream is enabled.


Shutdown a MySQL Instance

How can you shut down your MySQL instance? You can use SIGTERM, mysqladmin or SIGKILL. All three methods end up with a dead mysqld, but the one you choose depends on the situation and can even result in lost data. On Linux, the difference between SIGTERM and SIGKILL is significant and often misunderstood.

Before processes start to die, it is important to understand the relationship between mysqld_safe and mysqld. mysqld_safe is the watchdog script for mysqld: it is responsible for starting mysqld and keeping an eye on it. It does this by waiting for mysqld to exit and then checking the return code. On a safe shutdown, such as one done by mysqladmin or a SIGTERM, mysqld returns zero. When mysqld_safe sees a zero return code, it also exits. If the return code is anything else, mysqld_safe assumes mysqld crashed and starts a new instance of mysqld. This difference helps explain why mysqld sometimes just won't go away.

It is also important to understand the difference between mysqladmin and SIGTERM. mysqladmin creates a shutdown thread, and a TERM signal also creates a shutdown thread. The major difference is that mysqladmin makes a connection to MySQL and sends a shutdown packet, which means it can be used from a remote host; it also means that mysqladmin must pass MySQL's permission checks before the shutdown is allowed. SIGTERM is delivered through the signal mechanism in Linux and only has to pass the Linux rules governing which users may signal a process. For example, if mysqld is running as root, an unprivileged user cannot deliver a signal to it without sudo access.

SIGTERM is handy for safely shutting down mysqld when the root password is lost, or when there are too many connections and the reserved super-user connection is also taken. There are several ways to find the mysqld process and send it a TERM signal; here are a few examples, where nnnn is the pid of the mysqld process. These also work with SIGKILL. Note that killall defaults to a TERM signal.

  • kill -TERM nnnn
  • killall mysqld
  • kill -TERM `pidof mysqld`
  • kill -TERM `cat /var/run/mysqld.pid`

Getting into SIGKILL (kill -KILL) is where things get interesting. On Linux, SIGKILL isn't really a signal in the sense that it is never actually delivered to the process. Since it is never delivered, it can't be caught or blocked.

SIGKILL simply removes a process from existence: the process never gets another chance to run, so it cannot attempt to catch or block the signal; Linux just cleans it up. What this means for MySQL is that it never gets a chance to perform any shutdown tasks, such as flushing indexes for MyISAM tables. To MySQL it is effectively the same as pulling the plug on the server, except that the filesystem still has a chance to flush modified data to disk.
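The catchable/uncatchable distinction is easy to demonstrate outside of MySQL. This tiny shell sketch traps SIGTERM in a background process and runs cleanup code; with SIGKILL, the trap would never fire:

```shell
# A background loop that installs a TERM handler to do "cleanup".
# kill -TERM lets the handler run; kill -KILL never would.
tmp=$(mktemp)
( trap 'echo cleaned > "$tmp"; exit 0' TERM; while :; do sleep 1; done ) &
pid=$!
sleep 1                    # give the child time to install its trap
kill -TERM "$pid"
wait "$pid" 2>/dev/null
result=$(cat "$tmp")       # "cleaned": the handler got a chance to run
rm -f "$tmp"
```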

SIGKILL should really be a last resort, or used only when you know your mysqld is safe to shut down. There is plenty of discussion on this with respect to making consistent backups, and the rules are basically the same: when you take a filesystem or LVM snapshot, the state captured in the snapshot is effectively the same as if mysqld had been hit with SIGKILL at that moment.

Insight of MySQL Row based Replication

RBR provides a safe way of replicating between master and slave. It has several advantages over statement-based replication (SBR); the absence of problems with non-deterministic operations jumps to mind rather quickly. However, there is one drawback that drives some users away from it: RBR ships the changes over to the slave, instead of the operations as happens in SBR. This means that RBR may exhibit a large binary log footprint for operations that make a rather big number of changes or operate on sizable rows.

Below are some interesting RBR enhancements in MySQL 5.6 replication.

1. Optimized row image:
In RBR, one row event contains changes to one or more rows in a given table. A row change, in turn, may consist of one or two full copies of the row being changed, generally known as row images. The first, known as the before image (BI), contains the data as it was before the row was modified. The second, known as the after image (AI), is what the row looks like after the changes were done. Each image has a different purpose: the BI is used for locating, in the storage engine, the row to be updated or deleted, while the AI is used for replaying the actual changes. Not both images are needed for every operation: deletes only need the BI, inserts only need the AI, while updates need both.

Large data transfers between master and slave under RBR are a thing of the past. Why log complete rows on the master when you can log just the changed columns? MySQL 5.6 enables a DBA to configure the master to log only the changed columns in the row: the before image (BI) contains just the columns needed to identify the row, and the after image (AI) contains just the columns that changed.

Before MySQL 5.6: 

As one can observe in the illustration, the row images were full, i.e. all the columns in the row were logged for both the before image (BI) and the after image (AI).

In MySQL 5.6:

In MySQL 5.6 the DBA can configure whether full, minimal or noblob row images are logged in the binary log:

1. Start the server with --binlog-row-image=MINIMAL|FULL|NOBLOB


With a minimal image, if there is a primary key (PK) or any kind of unique index/key (UK) that can identify a row uniquely on the slave, the BI contains just those columns, and the AI logs only the changed columns.
With NOBLOB, BLOB fields are skipped from logging unless they changed. With FULL, logging continues as before.

As you can see, the amount of data transferred can be reduced considerably, especially if you have proper indexes/keys on the tables on both the master and the slave.
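For reference, a minimal my.cnf sketch enabling this on a 5.6 master (the values are illustrative) might be:

```
[mysqld]
# log events in row format
binlog-format=ROW
# log only the columns needed: PK/UK columns in the BI, changed columns in the AI
binlog-row-image=MINIMAL
```

The binlog_row_image variable can also be changed per session, e.g. SET SESSION binlog_row_image='MINIMAL';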

2. Batch operations on tables without a PK or UK:

This is one of the long-awaited optimizations for RBR. It enables the DBA to configure which algorithm the slave uses to find the appropriate rows for updating/deleting. MySQL 5.6 introduces HASH_SCAN, in which an in-memory hash is built on the slave, providing a way to find and apply all rows in a row event in O(n) time. This enhancement gives great scope for performance improvement on the slave for events logged in row format.

3. Informational log events:
One of the difficulties DBAs have faced with RBR until now is that it is hard to troubleshoot: row images are binary in nature and not suitable for human "consumption". This is where this enhancement comes in handy. There are two ways of enabling it:

    1. Enabling the switch --binlog-rows-query-log-events at server startup

    2. Setting the corresponding session variable

Once enabled, the original query for each row event is logged in the binary log as a special event, which the DBA can inspect using either of two methods.


1. Check these special events in the binary log using SHOW BINLOG EVENTS. The query corresponding to the row event is logged with event type Rows_query. Below is one such example:




| Log_name      | Pos | Event_type  | S..id | End... | Info                                          |
| master.000001 |   4 | Format_desc |     1 |    114 | Server ver: 5.6.7-m5-debug-log, Binlog ver: 4 |
| master.000001 | 114 | Query       |     1 |    200 | use `test`; CREATE TABLE NUM_INTS (a INT)     |
| master.000001 | 200 | Query       |     1 |    268 | BEGIN                                         |
| master.000001 | 268 | Rows_query  |     1 |    323 | # INSERT INTO NUM_INTS VALUES (1), (2), (3)   |
| master.000001 | 323 | Table_map   |     1 |    364 | table_id: 54 (test.NUM_INTS)                  |
| master.000001 | 364 | Write_rows  |     1 |    408 | table_id: 54 flags: STMT_END_F                |
| master.000001 | 408 | Query       |     1 |    477 | COMMIT                                        |



In the above (snipped) terminal output, the event at pos=268 is an informational event for the Table_map and Write_rows events that follow.



2. Another way is to use the mysqlbinlog utility with the -vv switch (verbosity level 2 or more):


shell> mysqlbinlog -vv /master.000001

# at 268
#110401 14:24:29 server id 1  end_log_pos 323   Rows_query
# INSERT INTO t1 VALUES (1), (2), (3)
# at 323
#110401 14:24:29 server id 1  end_log_pos 364   Table_map: `test`.`NUM_INTS` mapped to number 54
# at 364
#110401 14:24:29 server id 1  end_log_pos 408   Write_rows: table id 54 flags: STMT_END_F



Foreign Key in MySQL

Foreign keys are a way of implementing relationships/constraints between columns in different tables. For example, we may want to make sure that the value of the county column in the towns table has an associated entry in the counties table. That way, no one can place a town in a non-existent county, and similarly no one can remove a county and leave orphaned towns behind.

We refer to the towns table as the child and the counties table as the parent.

There are different categories of constraints that influence how they’re enforced when a row is updated or deleted from the parent table:

  • Cascade: If a row is deleted from the parent then any rows in the child table with a matching FK value will also be deleted. Similarly for changes to the value in the parent table.
  • Restrict: A row cannot be deleted from the parent table if this would break a FK constraint in the child table. Similarly for changes to the value in the parent table.
  • No Action: Very similar to “Restrict” except that any events/triggers on the parent table will be executed before the constraint is enforced – giving the application writer the option to resolve any FK constraint conflicts using a stored procedure.
  • Set NULL: If NULL is a permitted value for the FK column in the child table then it will be set to NULL if the associated data in the parent table is updated or deleted.
  • Set Default: If there is a default value for the FK column in the child table then it will be used if the associated data in the parent table is updated or deleted. Note that this is not implemented in this version.
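To make this concrete, here is a hypothetical towns/counties schema with a cascading foreign key (the table and column names are illustrative, not from any particular sample database):

```sql
CREATE TABLE counties (
  id   INT NOT NULL PRIMARY KEY,
  name VARCHAR(50) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE towns (
  id     INT NOT NULL PRIMARY KEY,
  name   VARCHAR(50) NOT NULL,
  county INT,
  CONSTRAINT fk_towns_county
    FOREIGN KEY (county) REFERENCES counties (id)
    ON DELETE CASCADE      -- deleting a county deletes its towns
    ON UPDATE CASCADE      -- renumbering a county propagates to towns
) ENGINE=InnoDB;
```

Swapping CASCADE for RESTRICT, NO ACTION or SET NULL selects one of the behaviors described above. Note that foreign keys are enforced by InnoDB, not MyISAM.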

Oracle Announces MySQL 5.6 Release Candidate

Hi Guys,
Finally, MySQL 5.6 is here, with great features for DBAs and developers. Below are some of the key features from Oracle's announcement.

Release Candidate Delivers a Wide Range of Innovations
MySQL 5.6 RC provides enhanced linear scalability, helping users to leverage modern hardware capabilities.

With this Release Candidate, users can experience simplified query development and execution, better transactional throughput and application availability, flexible NoSQL access, improved replication and enhanced instrumentation.

New features and benefits include:

• Better Query Execution Times and Diagnostics, provided through an enhanced MySQL Optimizer that includes:

• Subquery Optimizations: simplify query development by optimizing subqueries prior to execution. New efficiencies in how result sets are selected, sorted and returned deliver substantial improvements in query execution times.

• The addition of Index Condition Pushdown (ICP) and Batched Key Access (BKA) can improve selected query throughput by up to 280x(1).

• Enhanced Optimizer Diagnostics: EXPLAIN for INSERT, UPDATE, and DELETE operations. EXPLAIN plan output in JSON format delivers more precise optimizer metrics and better readability, and Optimizer Traces enable you to track the optimizer's decision-making process.

• Better Transactional Throughput and Application Availability, with an improved InnoDB storage engine:

• Better Transactional and Read-Only Throughput: InnoDB has been re-factored to minimize legacy threading, flushing and purge mutex contentions and bottlenecks, enabling better concurrency on heavily loaded OLTP systems and resulting in significantly improved throughput for read-only workloads(2).

• Enhanced Availability: Online DDL operations enable DBAs to add indexes and perform table alterations while the application remains available for updates.

• Full-Text Search with InnoDB: allows developers to build FULLTEXT indexes on InnoDB tables to represent text-based content and speed up application searches for words and phrases.

• Simple Key-Value Lookup: flexible NoSQL access to InnoDB provides simple key-value lookup of InnoDB data via the familiar Memcached API. Users get the "best of both worlds," combining key-value operations and complex SQL queries in the same database.

• Improved Scale-Out and High Availability, with new features in MySQL replication including:

• Self-Healing Replication Clusters: the addition of Global Transaction Identifiers and HA utilities makes it simple to automatically detect and recover from failures. Crash-safe replication enables the binary log and slaves to automatically recover correct positions in the replication stream after a crash and resume replication without administrator intervention. Checksums maintain data integrity across the cluster by automatically detecting and alerting on errors.

• High-Performance Replication Clusters: up to 5x faster replication through multi-threaded slaves(3); Binlog Group Commit and optimized row-based replication enable users to maximize the performance and efficiency of replication as they scale out their workloads across commodity systems.

• Time-Delayed Replication: provides protection against operational errors made on the master, for example accidentally dropping tables.

• Enhanced PERFORMANCE_SCHEMA: new instrumentation enables users to better monitor the most resource-intensive queries, objects, users and applications. New summaries with aggregated statistics grouped by query, thread, user, host and object are also available. The enhancements allow for easier default configuration with less than five percent overhead.

The MySQL 5.6 RC includes additional enhancements and is a feature-complete aggregation of the Development Milestone Releases Oracle previously delivered to the MySQL community.