Friday, October 19, 2012

Some Useful Oracle Commands

Currently running Jobs in Oracle
SELECT sid, r.job, log_user, r.this_date, r.this_sec
FROM dba_jobs_running r, dba_jobs j
WHERE r.job = j.job

List instance parameters (excluding hidden underscore-prefixed parameters):

SELECT a.ksppinm name, b.ksppstvl value, b.ksppstdf isdefault,
       DECODE(a.ksppity, 1, 'boolean', 2, 'string', 3, 'number', 4, 'file',
              a.ksppity) type,
       a.ksppdesc description
FROM   sys.x$ksppi a, sys.x$ksppcv b
WHERE  a.indx = b.indx
  AND  a.ksppinm NOT LIKE '\_%' ESCAPE '\'
ORDER  BY name

SET pages 99
col file_name format a45
col tablespace_name format a13
col tablespace_name heading ts_name
col blocks format 999,999
SELECT file_name,tablespace_name,bytes,blocks
FROM dba_data_files
ORDER BY tablespace_name,file_name

SQL query to find Oracle active processes

SET pagesize 55;
SET linesize 170;
col SQL format a80;
col SERVER heading 'SVR' format a3;
col EVENT heading 'WAITING' format a30 fold_after;
col OSUSER heading 'OSUSER' format a8;
col USERNAME heading 'USERNAME' format a8;
col PID heading 'OSPID' format 99999;
col DISK_READS heading 'DISK I/O' format 99999999;
col BUFFER_GETS heading 'BUFFER|GETS' format 99999999;

     V$SESSION_WAIT.EVENT != 'client message'
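A query that fits the column formats above might look like the following sketch; the joins and predicates are illustrative, not the original query, so adapt them to your environment:

```sql
-- Sketch: active foreground sessions with their current wait event and SQL.
-- Column aliases line up with the COL formats defined above.
SELECT s.username,
       s.osuser,
       p.spid pid,
       DECODE(s.server, 'DEDICATED', 'DED', s.server) server,
       w.event,
       a.disk_reads,
       a.buffer_gets,
       a.sql_text sql
FROM   v$session s, v$process p, v$session_wait w, v$sqlarea a
WHERE  s.paddr          = p.addr
  AND  s.sid            = w.sid
  AND  s.sql_address    = a.address
  AND  s.sql_hash_value = a.hash_value
  AND  s.status         = 'ACTIVE'
  AND  s.username IS NOT NULL
  AND  w.event != 'client message';
```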

Monday, October 1, 2012

MySQL Enterprise Edition: Policy-based Auditing!

One of the most commonly requested features for MySQL is quick and easy logging of audit events. This is mainly because web-based applications have evolved from nice-to-have enablers into mission-critical revenue generators, with MySQL playing an important role in that dynamic environment. In today's virtual marketplace, PCI compliance guidelines ensure credit card data is secure within e-commerce apps, while from a corporate standpoint Sarbanes-Oxley, HIPAA and other regulations guard the medical, financial, public sector and other personal-data-centric industries. For the supporting applications, the audit policies and controls that monitor who has viewed and acted upon the most sensitive data are most commonly implemented on the back-end database.

With this in mind, MySQL 5.5 introduced an open audit plugin API that enables all MySQL users to write their own auditing plugins based on application-specific requirements. While the supporting docs are very complete and provide working code samples, writing an audit plugin requires time and low-level expertise to develop, test, implement and maintain. To help those who don't have the time or expertise to develop such a plugin, Oracle now ships MySQL 5.5.28 and higher with an easy-to-use, out-of-the-box auditing solution: MySQL Enterprise Audit.

MySQL Enterprise Audit

The premise behind MySQL Enterprise Audit is simple: we wanted to provide an easy-to-use, policy-based auditing solution that enables you to quickly and seamlessly add compliance to your MySQL applications. MySQL Enterprise Audit meets this requirement by enabling you to:

1. Easily install the needed components.

Installation requires an upgrade to MySQL 5.5.28 (Enterprise edition), which can be downloaded from the My Oracle Support portal or the Oracle Software Delivery Cloud. After installation, you simply add the following to your my.cnf file to register and enable the audit plugin:

[mysqld]
plugin-load=audit_log.so

(keep in mind the audit_log suffix is platform dependent, so .so on Linux, .dll on Windows, etc.)

or alternatively you can load the plugin at runtime:

mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';

2. Dynamically enable and disable the audit stream for a specific MySQL server.

A new global variable called audit_log_policy allows you to dynamically enable and disable audit stream logging for a specific MySQL server. The variable parameters are described below.

3. Define audit policy based on what needs to be logged (everything, logins, queries, or nothing), by server.

The new audit_log_policy variable accepts the following descriptively named values, which enable or disable audit stream logging and filter which events are logged to the audit stream:

  • "ALL" - enable the audit stream and log all events
  • "LOGINS" - enable the audit stream and log only login events
  • "QUERIES" - enable the audit stream and log only query events
  • "NONE" - disable the audit stream

4. Manage audit log files using basic MySQL log rotation features.

A new global variable, audit_log_rotate_on_size, lets you automate the rotation and archiving of audit stream log files based on size; when a new file is opened for logging, the archived file is renamed with a datetime stamp appended.
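For example, a my.cnf fragment that rotates the audit log once it reaches roughly 1 GB might look like this (the size value is illustrative):

```ini
[mysqld]
plugin-load=audit_log.so
# Rotate and archive the audit log when it exceeds ~1 GB (value in bytes)
audit_log_rotate_on_size=1073741824
```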

5. Integrate the MySQL audit stream with MySQL, Oracle tools and other third-party solutions.

The MySQL audit stream is written as XML, using UTF-8, and can easily be formatted for viewing using a standard XML parser. This enables you to leverage tools from MySQL and others to view the contents. The audit stream was also developed to meet the Oracle Database audit stream specification, so combined Oracle/MySQL shops can import and manage MySQL audit images using the same Oracle tools they use for their Oracle databases.

So assuming a successful MySQL 5.5.28 upgrade or installation, a common setup and use case scenario might look something like this:
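A sketch of such a session (the statements and account names are illustrative; plugin loading follows the defaults described above):

```sql
-- Enable full auditing for this server
SET GLOBAL audit_log_policy = 'ALL';

-- Activity such as the following is now captured in the audit stream
CREATE USER 'app'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT ON test.* TO 'app'@'localhost';
SELECT COUNT(*) FROM test.orders;

-- Audit events are written as XML to the audit log file
-- in the server's data directory (default location).
```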

It should be noted that MySQL Enterprise Audit was designed to be transparent at the application layer by allowing you to control the mix of log output buffering and asynchronous or synchronous disk writes, minimizing the overhead that comes with enabling the audit stream. The net result is that, depending on the chosen audit log options, most application users will see little to no difference in response times when the audit stream is enabled.


Shutdown a MySQL Instance

How can you shut down your MySQL instance? You can use SIGTERM, mysqladmin or SIGKILL. All three methods end up with a dead mysqld, but the one you choose depends on the situation, and the wrong choice can even result in lost data. On Linux the difference between SIGTERM and SIGKILL is significant and often misunderstood.

Before processes start to die it is important to understand the relationship between mysqld_safe and mysqld. mysqld_safe is the watchdog script for mysqld. It is responsible for starting mysqld and keeping an eye on it. It does this by waiting for mysqld to exit, then checking the return code. On a safe shutdown, such as one done by mysqladmin or a SIGTERM, mysqld will return zero. When mysqld_safe sees a zero return code it will also exit. If the return code is anything else, mysqld_safe assumes mysqld crashed and starts a new instance of mysqld. This difference helps explain why mysqld sometimes just won't go away.

It is important to understand the difference between mysqladmin and SIGTERM. mysqladmin will create a shutdown thread. Using a TERM signal will also create a shutdown thread. The major difference is that mysqladmin makes a connection to MySQL and sends a shutdown packet. This means it can be used from a remote host. It also means that mysqladmin must pass MySQL's permission checks before being allowed to shut down the server. SIGTERM is delivered through the signal mechanism in Linux and only has to pass the Linux rules for which system users may deliver a signal. For example, if mysqld is running as root then the anil user can't deliver a signal to it without sudo access.
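A safe shutdown over a client connection might look like this (hostname and credentials are placeholders; the account needs the SHUTDOWN privilege):

```shell
# Safe shutdown via the MySQL protocol; works from a remote host
mysqladmin --host=db1.example.com --user=root -p shutdown
```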

SIGTERM is handy for safely shutting down mysqld when the root password is lost, or when there are too many connections and the reserved super-user connection is also taken. There are several ways to find the mysqld process and send it a TERM signal. Here are a few examples, where nnnn is the pid of the mysqld process. These also work with SIGKILL. Note that killall defaults to a TERM signal.

  • kill -TERM nnnn
  • killall mysqld
  • kill -TERM `pidof mysqld`
  • kill -TERM `cat /var/run/`

Getting into SIGKILL, or kill -KILL, is where things get interesting. In Linux, SIGKILL isn't really a signal in the sense that it never actually gets delivered to the process. Since it is never delivered to the process, it can't be caught or blocked.

SIGKILL simply removes a process from existence. The process never gets another chance to run or to attempt to block the signal; Linux just cleans it up. What this means for MySQL is that it doesn't get a chance to perform any shutdown tasks, like flushing indexes for MyISAM tables. To MySQL it is effectively the same as pulling the plug on the server, except that the filesystem still has a chance to flush modified data to disk.

SIGKILL should really be a last resort, or only be used when you know your mysqld is safe to shut down. There is plenty of discussion on this with respect to making consistent backups, and the rules are basically the same: when you take a filesystem or LVM snapshot, the way the snapshot looks is effectively the same as running SIGKILL on mysqld to remove it from existence.

Insight into MySQL Row-Based Replication

RBR provides a safe way of replicating between master and slave. It has several advantages over statement-based replication (SBR); its safe handling of non-deterministic operations jumps to mind rather quickly. However, there is one drawback that drives some users away from it: RBR ships the row changes over to the slave, instead of the statements as happens in SBR. This means that RBR may exhibit a large binary log footprint for operations that make a rather large number of changes or operate on sizable rows.

Below are some interesting enhancements in RBR in MySQL 5.6 Replication.

1. Optimized row image:
In RBR, one row event contains changes to one or more rows in a given table. A row change, in turn, may consist of one or two full copies of the row being changed. These are generally known as row images. The first, known as the before image (BI), contains the data as it was before the row was modified. The second, known as the after image (AI), is what the row looks like after the changes were done. Each image has a different purpose: the BI is used for locating, in the storage engine, the row to be updated or deleted, while the AI is used for replaying the actual changes. Needless to say, not every operation needs both images: deletes only need the BI, inserts just need the AI, and updates need both.

Large data transfers between master and slave while using RBR are a thing of the past. Why log complete rows on the master? Why not just the changed columns? The MySQL 5.6 release enables a DBA to configure the master to log only the changed columns in the row: the before image (BI) will contain only the columns needed to identify the row, and the after image (AI) will contain just the columns that have changed.

Before MySQL 5.6: 

Before MySQL 5.6, row images were always full: all the columns in the row were logged for both the before image (BI) and the after image (AI).

In MySQL 5.6:

In MySQL 5.6 the DBA can configure whether full, minimal or noblob row images should be logged in the binary log in the following way:

1. Start the server with --binlog-row-image=MINIMAL|FULL|NOBLOB


With MINIMAL, if there is a primary key (PK) or any kind of unique index/key (UK) that can identify a row uniquely on the slave, the BI contains just those columns, and only the changed columns are logged in the AI.
With NOBLOB, BLOB columns are skipped from logging unless they have changed. With FULL, logging continues as before.
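binlog_row_image is also a system variable, so on a 5.6 server it can be inspected and changed at runtime; this is a sketch assuming appropriate privileges:

```sql
-- Check the current row-image mode
SHOW VARIABLES LIKE 'binlog_row_image';

-- Log minimal row images from this point on
SET GLOBAL binlog_row_image = 'MINIMAL';
```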

As you can see, the amount of data transferred can be reduced considerably, especially if you have proper indexes/keys on the tables on both the master and the slave.

2. Batch operations on tables without a PK or UK:

This is one of the long-awaited optimizations for RBR. It enables the DBA to configure which algorithm the slave uses to find the appropriate row for updating or deleting. We have introduced the concept of HASH_SCAN, in which an in-memory hash is created on the slave, providing a way to find and apply the rows in a row event in O(n) time. This enhancement provides great scope for performance improvement on the slave for events logged in row format.
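In MySQL 5.6 this is controlled by the slave_rows_search_algorithms system variable; the following sketch assumes the 5.6 interface, so verify against your server's documentation:

```sql
-- Prefer an index scan when a usable key exists,
-- and fall back to a hash scan for tables without a PK or UK
SET GLOBAL slave_rows_search_algorithms = 'INDEX_SCAN,HASH_SCAN';
```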

3. Informational Log events: 
One of the difficulties DBAs have faced until now while using RBR is that it is difficult to maintain: the row images are binary in nature and not suitable for human "consumption". This is where this enhancement comes in handy. There are two ways of using it:

    1. Enable the switch --binlog-rows-query-log-events at server startup.

    2. Change the corresponding session variable, binlog_rows_query_log_events.

Once enabled, this causes the original query for a row event to be logged in the binary log as a special event, which the DBA can inspect using one of two methods.
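Enabling it for the current session is a one-liner (a sketch using the 5.6 variable named above):

```sql
-- Log the original statement alongside each row event
SET SESSION binlog_rows_query_log_events = ON;
```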


1. One can check these special events in the binary log by using SHOW BINLOG EVENTS. The query corresponding to the row event is logged with event type Rows_query. Below is one such example:




| Log_name      | Pos | Event_type  | Server_id | End_log_pos | Info                                          |
|---------------|-----|-------------|-----------|-------------|-----------------------------------------------|
| master.000001 |   4 | Format_desc |         1 |         114 | Server ver: 5.6.7-m5-debug-log, Binlog ver: 4 |
| master.000001 | 114 | Query       |         1 |         200 | use `test`; CREATE TABLE NUM_INTS (a INT)     |
| master.000001 | 200 | Query       |         1 |         268 | BEGIN                                         |
| master.000001 | 268 | Rows_query  |         1 |         323 | # INSERT INTO NUM_INTS VALUES (1), (2), (3)   |
| master.000001 | 323 | Table_map   |         1 |         364 | table_id: 54 (test.NUM_INTS)                  |
| master.000001 | 364 | Write_rows  |         1 |         408 | table_id: 54 flags: STMT_END_F                |
| master.000001 | 408 | Query       |         1 |         477 | COMMIT                                        |



In the above (snipped) terminal output, one can see that the event at pos = 268 is an informational event for the following Table_map and Write_rows events.



2. Another way is to use the mysqlbinlog utility with the -vv option (verbosity level 2 or more).


shell> mysqlbinlog -vv /master.000001
# at 268
#110401 14:24:29 server id 1  end_log_pos 323   Rows_query
# INSERT INTO NUM_INTS VALUES (1), (2), (3)
# at 323
#110401 14:24:29 server id 1  end_log_pos 364   Table_map: `test`.`NUM_INTS` mapped to number 54
# at 364
#110401 14:24:29 server id 1  end_log_pos 408   Write_rows: table id 54 flags: STMT_END_F



Foreign Key in MySQL

Foreign Keys are a way of implementing relationships/constraints between columns in different tables. For example, we want to make sure that the value of the county column in the towns table has an associated entry in the counties table. In this way, no-one can place a town in a non-existent county and similarly no one can remove a county and leave orphaned towns behind.

We refer to the towns table as the child and the counties table as the parent (both in the world database).

There are different categories of constraints that influence how they’re enforced when a row is updated or deleted from the parent table:

  • Cascade: If a row is deleted from the parent then any rows in the child table with a matching FK value will also be deleted. Similarly for changes to the value in the parent table.
  • Restrict: A row cannot be deleted from the parent table if this would break a FK constraint in the child table. Similarly for changes to the value in the parent table.
  • No Action: Very similar to “Restrict” except that any events/triggers on the parent table will be executed before the constraint is enforced – giving the application writer the option to resolve any FK constraint conflicts using a stored procedure.
  • Set NULL: If NULL is a permitted value for the FK column in the child table then it will be set to NULL if the associated data in the parent table is updated or deleted.
  • Set Default: If there is a default value for the FK column in the child table then it will be used if the associated data in the parent table is updated or deleted. Note that this is not implemented in this version.
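A sketch of these tables with a foreign key (column names and actions are illustrative; in MySQL, InnoDB is required for foreign key enforcement):

```sql
CREATE TABLE counties (
  id   INT PRIMARY KEY,
  name VARCHAR(50) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE towns (
  id        INT PRIMARY KEY,
  name      VARCHAR(50) NOT NULL,
  county_id INT,
  FOREIGN KEY (county_id) REFERENCES counties (id)
    ON DELETE RESTRICT   -- a county with towns cannot be deleted
    ON UPDATE CASCADE    -- a county id change propagates to its towns
) ENGINE=InnoDB;

-- This fails: county 99 does not exist, so no orphaned town is possible
-- INSERT INTO towns VALUES (1, 'Tralee', 99);
```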

Oracle Announces MySQL 5.6 Release Candidate

Hi guys,
MySQL 5.6 is finally here, with great features for DBAs and developers. Below are some of the key features from the Oracle keynote.

Release Candidate Delivers a Wide Range of Innovations
MySQL 5.6 RC provides enhanced linear scalability, helping users to leverage modern hardware capabilities.

With this Release Candidate, users can experience simplified query development and execution, better transactional throughput and application availability, flexible NoSQL access, improved replication and enhanced instrumentation.

New features and benefits include:

  • Better Query Execution Times and Diagnostics provided through an enhanced MySQL Optimizer that includes:

  • Subquery Optimizations: simplify query development by optimizing subqueries prior to execution. New efficiencies in how result sets are selected, sorted and returned deliver substantial improvement in query execution times.

  • The addition of Index Condition Pushdown (ICP) and Batch Key Access (BKA) can improve selected query throughput by up to 280x(1).

  • Enhanced Optimizer Diagnostics: with EXPLAIN for INSERT, UPDATE, and DELETE operations. EXPLAIN plan output in JSON format delivers more precise optimizer metrics and better readability, and Optimizer Traces enable DBAs to track the optimizer's decision-making process.

  • Better Transactional Throughput and Application Availability with an improved InnoDB storage engine:

  • Better Transactional and Read-Only Throughput: InnoDB has been re-factored to minimize legacy threading, flushing and purge mutex contentions and bottlenecks, enabling better concurrency on heavily loaded OLTP systems and resulting in significantly improved throughput for read-only workloads(2).

  • Enhanced Availability: Online DDL operations enable DBAs to add indexes and perform table alterations while the application remains available for updates.

  • Full-Text Search with InnoDB: allows developers to build FULLTEXT indexes on InnoDB tables to represent text-based content and speed up application searches for words and phrases.

  • Simple Key-Value Lookup: flexible NoSQL access to InnoDB provides simple key-value lookup of InnoDB data via the familiar Memcached API. Users get the "best of both worlds," combining key-value operations and complex SQL queries in the same database.

  • Improved Scale-Out and High Availability: with new features in MySQL replication including:

  • Self-Healing Replication Clusters: the addition of Global Transaction Identifiers and HA utilities makes it simple to automatically detect and recover from failures. Crash-Safe Replication enables the binary log and slaves to automatically recover correct positions in the replication stream in case of a crash, and resume replication without administrator intervention. Checksums maintain data integrity across the cluster by automatically detecting and alerting on errors.

  • High-Performance Replication Clusters: up to 5x faster replication through Multi-Threaded Slaves(3), Binlog Group Commit and Optimized Row-Based Replication enables users to maximize the performance and efficiency of replication as they scale out their workloads across commodity systems.

  • Time-Delayed Replication: provides protection against operational errors made on the master, for example accidentally dropping tables.

  • Enhanced PERFORMANCE_SCHEMA: new instrumentation enables users to better monitor the most resource-intensive queries, objects, users and applications. New summaries with aggregated statistics grouped by query, thread, user, host and object are also available. The enhancements allow for easier default configuration with less than five percent overhead.
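The enhanced optimizer diagnostics above can be tried directly on a 5.6 server; a sketch, with an illustrative table name:

```sql
-- EXPLAIN now works for data-changing statements
EXPLAIN UPDATE towns SET name = 'Dingle' WHERE id = 1;

-- EXPLAIN output can be emitted as JSON for more precise metrics
EXPLAIN FORMAT=JSON SELECT * FROM towns WHERE county_id = 3;
```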

The MySQL 5.6 RC includes additional enhancements and is a feature complete aggregation of the Development Milestones Releases Oracle previously delivered to the MySQL community.