First, I created a simple MEM test environment:
[root@mylab matt]# cat createsimpletestenv.sh
#!/bin/sh
echo -n "Stopping any default mysqld instance..."
/etc/init.d/mysqld stop
echo " done."
echo -n "Setting up fresh mysqld setup..."
cp -R /var/lib/mysqlfresh /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql*
echo " done."
echo -n "Starting fresh default mysqld instance..."
/etc/init.d/mysqld start
echo " done."
echo
echo "### Beggining MEM install: `date`"
echo
echo -n "Installing monitor..."
/root/mysqlmonitor.bin --mode "unattended" --unattendedmodeui "minimal" \
--installdir "/opt/mysql/enterprise/monitor" \
--adminuser "manager" --adminpassword "manager"
echo " done."
echo -n "Installing agent in 'phone home' mode..."
# Install Agent in "phone home" mode, to enable auto monitoring
/root/mysqlmonitoragent.bin --mode "unattended" --unattendedmodeui "minimal" \
--mysqlhost "localhost" --mysqlport 3306 --managerhost "localhost" \
--mysqluser "root" --mysqlpassword "root" --managerport 18443 \
--mysqlhost "localhost" --mysqlport 3306 --managerhost "localhost" \
--mysqluser "root" --mysqlpassword "root" --managerport 18443 \
--agentuser "agent" --agentpassword "agent" --generaluser "matt" \
--generalpassword "matt" --limiteduser "sid" \
--generalpassword "matt" --limiteduser "sid" \
--limitedpassword "sid" --agent_autocreate 1 --mysqlconnectiongroup "Production"
/etc/init.d/mysql-monitor-agent start > /dev/null 2>&1
echo " done."
echo
echo "### Ending MEM install: `date`"
echo
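As a quick sanity check (not part of the script above, and assuming the control scripts land in their default locations for these installers), I could verify that both the service manager and the agent actually came up:

# Optional post-install sanity check (assumes the default install paths used above).
# Verify the MEM service manager processes are running:
/opt/mysql/enterprise/monitor/mysqlmonitorctl.sh status
# Verify the agent service is running:
/etc/init.d/mysql-monitor-agent status
# Confirm the dashboard answers on the bundled Tomcat SSL port:
curl -k -s -o /dev/null -w "Dashboard HTTP status: %{http_code}\n" https://localhost:18443/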
Then I created some sample data in my new MySQL 5.6 instance using the sakila and employees sample databases:
[root@mylab matt]# cat loadtestdata.sh
#!/bin/sh
echo -n "Loading the sakila database ... "
mysql < /opt/mysql/sakila-db/sakila-schema.sql
mysql < /opt/mysql/sakila-db/sakila-data.sql
echo "done."
echo -n "Loading the employees database ... "
cd /opt/mysql/employees_db
mysql < employees.sql
echo "done."
Then I started monitoring the new MySQL instance within MEM:
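As a command-line alternative for confirming that the agent is connected to the instance (using the 'agent' user name I chose above), you can also check the running sessions:

# Optional: verify the agent's connection to the monitored instance.
mysql -e "SELECT user, host, db, command FROM information_schema.processlist WHERE user = 'agent'"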
Then I created a simple script to execute a slow query on our sample data:
[root@mylab matt]# cat runslowquery.sh
#!/bin/sh
ITERATIONS=$1
if test -z "$ITERATIONS" || [ "$ITERATIONS" -le 0 ]; then
    ITERATIONS=10
fi
mysqlslap --port=3306 -c 50 -i ${ITERATIONS} --create-schema=test \
-q "select * from employees.employees, employees.salaries where employees.emp_no = salaries.emp_no order by salary desc limit 10"
exit 0
Then, just as an FYI, I adjusted the CPU usage advisor a bit to make sure I would also get a simulated machine-level (rather than MySQL-instance-level) critical alert about high CPU usage:
Now let's finally simulate the situation where a developer adds a new query to the application, one that is in dire need of some performance tuning:
[root@mylab matt]# ./runslowquery.sh 50
Benchmark
        Average number of seconds to run all queries: 66.574 seconds
        Minimum number of seconds to run all queries: 59.725 seconds
        Maximum number of seconds to run all queries: 78.115 seconds
        Number of clients running queries: 50
        Average number of queries per client: 1
Then let's come back to MEM to see the critical alerts for this MySQL instance:
The most serious issue is that we're currently experiencing a query pileup: we simply can't execute this query as fast as it's coming in. For example, the developer may have added a new query that's executed each time a user logs in, and it's examining 2 billion rows. So we know right away that there's at least one problematic query behind this. And when we look at the Query Analyzer for this instance (note: no additional setup is needed for MySQL 5.6 and later, as it uses Performance Schema for the QUAN data), the problematic query immediately jumps out at us, since it sits at the top of our list with a QRTi score of 0:
From there we can get every possible statistic about the query. We can see when the query was first introduced (did a new application that uses this DB instance get deployed this morning!?), how many rows it examines each time, whether a temp table is used, whether a sort file on disk is used, what the join order was, and so on. We can also see a sample query and the EXPLAIN output for the problematic query. With all of this information, we can then determine exactly how to go about improving the problematic query.
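Purely as an illustration of the kind of fix this data points you toward (my own sketch, not something MEM prescribes): the EXPLAIN output shows millions of joined rows being filesorted just to return 10 of them, so one obvious candidate is an index that lets the ORDER BY salary DESC LIMIT 10 be resolved by a backward index scan instead:

# Illustrative only: check the plan, then add an index so the ORDER BY ... LIMIT
# can walk an index on salary backwards instead of filesorting the whole join result.
# (The index name idx_salary is just an example.)
mysql -e "EXPLAIN SELECT * FROM employees.employees, employees.salaries
          WHERE employees.emp_no = salaries.emp_no
          ORDER BY salary DESC LIMIT 10\G"
mysql -e "ALTER TABLE employees.salaries ADD INDEX idx_salary (salary)"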
As you can see, it's extremely easy to identify a problematic query with the Query Analyzer. And since it also provides every possible detail about the query, determining how to improve it becomes very easy as well.
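And if you're curious where all of that detail comes from on MySQL 5.6, you can peek at the underlying Performance Schema statement digest data yourself. This is a rough equivalent of what feeds the Query Analyzer, not the exact query MEM runs:

# Rough equivalent of the data behind the Query Analyzer view
# (SUM_TIMER_WAIT is in picoseconds, hence the division to get seconds).
mysql -e "SELECT DIGEST_TEXT, COUNT_STAR, SUM_ROWS_EXAMINED,
                 ROUND(SUM_TIMER_WAIT/1000000000000, 2) AS total_latency_sec
          FROM performance_schema.events_statements_summary_by_digest
          ORDER BY SUM_TIMER_WAIT DESC
          LIMIT 5"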
I hope that this was helpful! Please let me know if you have any thoughts or comments. And as always, thank you for using MySQL!
P.S. If you're not already using MySQL Enterprise, then I encourage you to try it out today! You can get started right now using a 30 day trial license.
Matt,