Sunday, August 12, 2012

MySQL Prepared Statements

MySQL has many good features, and prepared statements are one of the features I like most. When I was a developer I used them to complete many operations. After a long time I was refreshing my development concepts, and I'm sharing this so others can enjoy the feature too.

Statement Handling
MySQL supports the prepared statement syntax. For the most part, a prepared statement is much like a 'normal', immediate statement. The main difference is in the way the statement is processed by the server.
Immediate Statements
When an immediate statement is issued, it is processed directly. Processing comprises the following steps:
  1. Parsing: lexical and syntactic analysis of the statement
  2. Planning: optimizer devises a strategy to realise the required result or action, the execution plan or query plan
  3. Execution: retrieval/seeking, writing and reading of data and, if applicable, the construction of a resultset

After these steps, the server responds to the request by sending the client a result set (if applicable), or an acknowledgement that the statement was executed. Of course, all these actions are performed in concert, and the client is not aware of these different steps taking place. This becomes clear when typing the following statement into the MySQL command line client tool:

mysql> select count(*) from information_schema.schemata;

+----------+
| count(*) |
+----------+
|        6 |
+----------+
1 row in set (0.10 sec)

Immediately (well, almost) the request to select the number of schemata is responded to by returning the result set.

Prepared Statements
A prepared statement is initiated by the PREPARE statement. Preparing a query equivalent to the previous one could look like this:

mysql> prepare stmt from
    -> 'select count(*) from information_schema.schemata';
Query OK, 0 rows affected (0.00 sec)
Statement prepared

This time, we didn't get a result set.
The PREPARE statement instructs the server to parse the query, and possibly, to devise the execution plan. PREPARE associates an identifier with the statement, stmt, which acts as a handle to refer to the statement and the corresponding execution plan.

Actual execution is postponed until called for by the EXECUTE statement, using the handle to identify the prepared statement to execute.

mysql> execute stmt;
+----------+
| count(*) |
+----------+
|        6 |
+----------+
1 row in set (0.00 sec)

Which is the same result as we got when we issued the immediate statement.

This seems like an overly complex way to do what we could also do with just one statement. It is - until we execute it again:

mysql> execute stmt;
+----------+
| count(*) |
+----------+
|        6 |
+----------+
1 row in set (0.00 sec)

....and again and again and again.

This actually touches upon the major purpose of the prepared statement concept: when a statement is to be repeatedly executed, a prepared statement is potentially more efficient. Because only the execution step needs to be repeated, there is less time wasted on parsing and building the query plan each time the result is required.
Increased efficiency is even more evident for statements that do not return a result set (such as INSERT, UPDATE and DELETE) because the actions needed to construct a result set are generally more time-consuming than parsing and creating an execution plan.
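To sketch this, here is what preparing and repeatedly executing a DML statement might look like (assuming a hypothetical table test.t1 with an integer id column):

mysql> prepare ins_stmt from 'insert into test.t1 (id) values (1)';
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> execute ins_stmt;
Query OK, 1 row affected (0.00 sec)

mysql> execute ins_stmt;
Query OK, 1 row affected (0.00 sec)

The INSERT is parsed and planned only once; only the execution step is repeated for each row.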

Now, we need to elaborate just a little on this efficiency argument. In the reference manual, you will bump into this phrase pretty quickly:

MySQL 5.0 provides support for server-side prepared statements. This...takes advantage of the efficient client/server binary protocol...provided that you use an appropriate client programming interface. Candidate interfaces include the MySQL C API client library (for C programs), MySQL Connector/J (for Java programs), and MySQL Connector/NET.

An alternative SQL interface to prepared statements is available. This interface is not as efficient as using...a prepared statement API...

A little further on, the manual explains that the prepared statement syntax is available from within the SQL language primarily for development purposes; NOT to gain efficiency.

(I did some tests suggesting that the SQL prepared statement syntax is slower than immediate statements, but I don't know whether this has to do with the query cache. I used INSERT statements, BTW.)

Using Parameters
A very powerful feature of prepared statements is the ability to bind parameters to them. Parameters are specified by writing special ? placeholders inside the SQL statement that is to be prepared:

mysql> prepare stmt from
    -> 'select count(*) from information_schema.schemata where schema_name = ?';
Query OK, 0 rows affected (0.00 sec)
Statement prepared

When EXECUTE-ing the statement, these placeholders must be bound to
user variables with the USING syntax:

mysql> set @schema := 'test';
Query OK, 0 rows affected (0.00 sec)

mysql> execute stmt using @schema;
+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

So, even though the statement has been prepared, we can still enjoy the flexibility of controlling the query result.
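For example, binding a different value to the same prepared statement changes the result without re-preparing anything. Since the information_schema database is always present, this count is 1:

mysql> set @schema := 'information_schema';
Query OK, 0 rows affected (0.00 sec)

mysql> execute stmt using @schema;
+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)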

One thing to keep in mind is that parameters are not implemented using simple string substitution. For example, the placeholder in the previous example is not quoted inside the statement. It merely provides a slot for a value, and the binding process takes care of transferring that value to the slot inside the statement. Quotes are merely a syntactic device to distinguish a string from the surrounding code; because binding happens after parsing is complete, it does not make sense to use quotes there.

You can't just put a placeholder wherever you like. Parameter placeholders can appear only in those places inside the statement where you would normally expect an expression. In particular, you cannot use parameter placeholders to parameterize identifiers or entire statement structures. So, the following attempt fails with a syntax error, because the parameter placeholder appears where you would normally put an identifier:

mysql> prepare stmt from 'create table ? (id int unsigned)';
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your
MySQL server version for the right syntax to use near '? (id int unsigned)' at line 1

However, it is not impossible to parameterize identifiers using the prepared statement syntax; I'll show that in a bit. It's just that you can't do it using parameters, because parameters are just a special case of expressions, like column references, literals, etc.

Multiple Parameters
We are not restricted to just one parameter; we can use several:

mysql> prepare stmt from
    -> 'select count(*)
    -> from information_schema.schemata
    -> where schema_name = ? or schema_name = ?';
Query OK, 0 rows affected (0.00 sec)
Statement prepared
mysql> set @schema1 := 'test', @schema2 := 'mysql';
Query OK, 0 rows affected (0.00 sec)

mysql> execute stmt
    -> using @schema1, @schema2;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)

Parameter binding occurs in a positional manner. Each ? placeholder must be matched by exactly one user variable in the USING clause: the first placeholder matches the first user variable, the second placeholder matches the second user variable, and so on.

You really must match each placeholder with exactly one user variable, or else you will encounter an error:

ERROR 1210 (HY000): Incorrect arguments to EXECUTE
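For example, supplying only one user variable for the two-placeholder statement prepared above produces exactly this error:

mysql> execute stmt using @schema1;
ERROR 1210 (HY000): Incorrect arguments to EXECUTE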

Dynamic SQL
PREPARE accepts either a string literal or a user variable to define the statement to prepare. It would've been just as conceivable for it to accept a bare statement, like so:

mysql> prepare stmt from
    -> select count(*) from information_schema.schemata;

But no, this does not work. This just results in a syntax error.

Anyway, we already saw how PREPARE accepts a statement in the form of a string literal. It's pretty much the same for a user variable:

mysql> set @sql_text := 'select count(*) from information_schema.schemata';

mysql> prepare stmt from
    -> @sql_text;

Query OK, 0 rows affected (0.00 sec)
Statement prepared

Because we can freely assign whatever value we want to the @sql_text user variable, we can use this as a device to employ dynamic SQL.

Now we know how to dynamically manipulate our identifiers too: we just build the string before assigning it to the user variable, like so:

mysql> set @table_name := 'mytable';
Query OK, 0 rows affected (0.02 sec)

mysql> set @sql_text:=concat('create table ',@table_name,'(id int unsigned)');
Query OK, 0 rows affected (0.00 sec)
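With the statement text assembled, the remaining steps are the familiar PREPARE/EXECUTE cycle (a sketch, continuing from the @sql_text assignment above):

mysql> prepare stmt from @sql_text;
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> execute stmt;
Query OK, 0 rows affected (0.02 sec)

After this, the table mytable exists, even though its name was decided at runtime.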

Cleaning up
There's one extra step in the process of using prepared statements that I did not mention yet: cleaning up (I guess that tells you something about me, right?). Once you've prepared a statement, the handle and the associated objects on the server side continue to exist until the client's session ends. This means the client keeps some of the server's resources occupied. Therefore, it's good practice to clean up afterwards. When you're sure you're done working with the statement, you should DEALLOCATE it:

mysql> deallocate prepare stmt;

This just tells the server to get rid of all the resources associated with the statement handle, and to forget about the statement handle as well.
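Once deallocated, the handle is gone; trying to EXECUTE it again fails with an "unknown statement handler" error:

mysql> execute stmt;
ERROR 1243 (HY000): Unknown prepared statement handler (stmt) given to EXECUTE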

How to check Index usage information in SQL Server

Today I was asked a question by one of my team-mates: how do we check index usage information in SQL Server? After some R&D we found a way to get that information. Below are the queries for the same.
The sys.dm_db_index_operational_stats function gives you information about the insert, update and delete operations that occur on a particular index. In addition, it also offers data about locking, latching and access methods. There are several columns returned, but these are some of the more interesting ones:

leaf_insert_count - total count of leaf level inserts
leaf_delete_count - total count of leaf level deletes
leaf_update_count - total count of leaf level updates
Here is a sample query that provides some of the key columns that are helpful to determine insert, update and delete operations.
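A sketch of such a query, joining to sys.indexes for readable names (sys.dm_db_index_operational_stats takes database_id, object_id, index_id and partition_number arguments, where NULL means "all"; DB_ID() limits it to the current database):

SELECT OBJECT_NAME(ios.object_id) AS table_name,
       i.name                     AS index_name,
       ios.leaf_insert_count,
       ios.leaf_update_count,
       ios.leaf_delete_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) ios
JOIN sys.indexes i
  ON i.object_id = ios.object_id
 AND i.index_id  = ios.index_id
ORDER BY ios.leaf_insert_count DESC;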


The sys.dm_db_index_usage_stats view gives you information about the overall access patterns of your indexes. There are several columns returned from this DMV, but here are some helpful columns about index usage:

user_seeks - number of index seeks
user_scans- number of index scans
user_lookups - number of index lookups
user_updates - number of insert, update or delete operations
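A sketch of a query over this DMV, again joined to sys.indexes and filtered to the current database:

SELECT OBJECT_NAME(ius.object_id) AS table_name,
       i.name                     AS index_name,
       ius.user_seeks,
       ius.user_scans,
       ius.user_lookups,
       ius.user_updates
FROM sys.dm_db_index_usage_stats ius
JOIN sys.indexes i
  ON i.object_id = ius.object_id
 AND i.index_id  = ius.index_id
WHERE ius.database_id = DB_ID()
ORDER BY ius.user_seeks + ius.user_scans + ius.user_lookups DESC;

Indexes that show many user_updates but few seeks, scans or lookups are being maintained without ever being read, which makes them candidates for review.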


Enjoy!

MySQL: How to recover a deleted ibdata1

An interesting article to read.
Recently I had a case where a customer deleted the InnoDB main table space – ibdata1 – and the redo logs – ib_logfile*.
MySQL keeps InnoDB files open all the time. The following recovery technique is based on this fact, and it allowed us to salvage the database.
Actually, the files had been deleted a long time ago – 6 months or so. As long as a file is open, it still physically exists in the file system and is reachable to the processes that have it open.
Thus, from the user's perspective nothing changed after the deletion. By the way, this is a good reason to monitor the existence of these files!
But after a restart InnoDB will detect that there is neither a system table space nor log files, so it will create empty ones. The InnoDB dictionary will be empty and InnoDB won't be able to use the existing ibd files. That situation would be a job for
ibdconnect, but as long as MySQL isn't restarted it is possible to recover the database quickly. Let me illustrate how.
Let’s simulate the accident. For that I will delete /var/lib/mysql/ib* files, while sysbench generates read/write activity:
root@localhost:~# sysbench --num-threads=16 --max-requests=0 --test=oltp --oltp-table-size=1000000 --max-time=3600 --mysql-user=root run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

No DB drivers specified, using mysql
Running the test with following options:
Number of threads: 16

Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
root@localhost:/var/lib/mysql# rm ib*
Now the files are gone, but MySQL is still running. They no longer exist in /var/lib/mysql, but are still reachable through the /proc file system:
root@localhost:/var/lib/mysql# ls -la  /proc/14101/fd/ | grep -e ibdata -e ib_
lrwx------ 1 root  root  64 Aug  7 23:29 3 -> /var/lib/mysql/ibdata1 (deleted)
lrwx------ 1 root  root  64 Aug  7 23:29 8 -> /var/lib/mysql/ib_logfile0 (deleted)
lrwx------ 1 root  root  64 Aug  7 23:29 9 -> /var/lib/mysql/ib_logfile1 (deleted)
where 14101 is the PID of the mysqld process.
However, we can't just copy them back, because at any given point in time there are modified pages in the buffer pool. Those pages are not yet written to disk, and the changes will be lost if they are not made permanent. This can lead to corruption and data loss.
For the same reason we can’t make MySQL backup by just copying the files.
So, we have to make sure all modifications are written to the disk.
For that we have to stop any further writes and wait until InnoDB flushes all pages.
To stop write activity we can either stop application or lock tables:
mysql> flush tables with read lock;
Query OK, 0 rows affected (0.37 sec)
Now let’s wait until all dirty pages are flushed on disk. For that we will monitor checkpoint age. The checkpoint age is a difference between current log sequence number and last checkpoint in “SHOW ENGINE INNODB STATUS” output. If checkpoint age is zero, all pages are flushed:
Log sequence number 363096003
Log flushed up to   363096003
Last checkpoint at  363096003
Max checkpoint age    7782360
To speed up flushing we can set dirty pages percentage to zero:
mysql> set global innodb_max_dirty_pages_pct=0;
Query OK, 0 rows affected (0.01 sec)
It is also important to ensure all other background processes have finished their work.
One of them is the insert buffer thread. Its size should be no more than 1 (it's never less than 1):
Ibuf: size 1, free list len 398, seg size 400,
Another thread writing in the background is the purge thread.
It should have purged all transactions up to the very last one:
Trx id counter 0 16644
Purge done for trx's n:o < 0 16644 undo n:o < 0 0
But if the last transaction wasn't something that requires a purge operation (a SELECT, for example), the Trx id counter will be bigger.
In that case, at least ensure InnoDB isn't doing any writes:
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
332 OS file reads, 47 OS file writes, 32 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
When all modified pages are flushed, it is safe to copy the InnoDB files back:
root@localhost:/var/lib/mysql# cp /proc/14101/fd/3 /var/lib/mysql/ibdata1
root@localhost:/var/lib/mysql# cp /proc/14101/fd/8 /var/lib/mysql/ib_logfile0
root@localhost:/var/lib/mysql# cp /proc/14101/fd/9 /var/lib/mysql/ib_logfile1
Let’s fix the owner:
root@localhost:/var/lib/mysql# chown -R mysql ib*
And restart MySQL:
root@localhost:/var/lib/mysql# /etc/init.d/mysql restart
After the restart all InnoDB tables are reachable:
mysql> select count(*) from sbtest;
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (0.19 sec)


Lessons learned:
  • Add checks to your monitoring system that the InnoDB files ibdata and ib_logfile* do exist
  • Do not restart MySQL until the further recovery strategy is clear