MySQL query cache invalidation

07-Sep-2015 18:09

The query cache is a feature of MySQL that allows it to return the same results when the same query is executed more than once, without having to fetch data and redo its calculations.

As quoted from the MySQL query cache documentation: "Server workload has a significant effect on query cache efficiency. A query mix consisting almost entirely of a fixed set of SELECT statements is much more likely to benefit from enabling the cache than a mix in which frequent INSERT statements cause continual invalidation of results in the cache."
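If you want to check where your own server stands, the cache's configuration and behaviour are exposed through standard variables and counters. A quick way to look, on any MySQL version that still ships the query cache:

-- is the cache enabled, and how much memory does it have?
SHOW GLOBAL VARIABLES LIKE 'query_cache%';

-- how is it being used? compare Qcache_hits to Com_select for the hit rate;
-- Qcache_lowmem_prunes counts results evicted for lack of memory
SHOW GLOBAL STATUS LIKE 'Qcache%';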

You can see we had 250 threads waiting to execute while the site was under stress, with a maximum of 8 threads executing simultaneously.

With the limit set to maximum, you can see peaks that go over 50 instead, meaning that, before, we had hundreds of threads that were blocked from executing.

This means some connections are waiting before they can get the necessary information about the table to continue executing.
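Our graphs come from our own monitoring, but the underlying counters are standard, so you can sample the same signal by hand. Assuming your load problem looks like ours, the symptom is a large, persistent gap between these two numbers:

-- Threads_connected = open connections; Threads_running = threads actually
-- trying to execute right now; the difference is work piling up behind a choke point
SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_connected', 'Threads_running');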

For the reports used to pull data for teachers to see, there are some huge queries. We were afraid that disabling the query cache could cause more harm than good by removing a bottleneck just to add another.

So you can imagine our surprise when we found out that the query cache was in fact a huge hindrance to our database performance. There were two reasons:

1) If you have a workload that's write heavy for a particular table, there will be constant invalidation of your cache.

2) There's a mutex around the code that handles query cache writes, which means only one connection writes to the cache at a time.
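Reason 1 is easy to demonstrate by hand. Here is a small sketch (the reports table is just a stand-in for one of ours): a single write to a table discards every cached result that references it.

-- warm the cache with a read, then invalidate it with one write
SELECT SQL_CACHE COUNT(*) FROM reports;            -- result goes into the cache
SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache'; -- counter goes up

INSERT INTO reports (teacher_id) VALUES (1);       -- any write to `reports`...
SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache'; -- ...and its cached results are gone

Reason 2 you can see in SHOW PROCESSLIST: under concurrency, sessions pile up in the 'Waiting for query cache lock' state.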

For our debugging, in most cases what we did was one of two things:

1) List the total maximum wait by event inside of InnoDB:

-- "example of tracking down exactly what the contention points are"
SELECT EVENT_NAME,
       SUM_TIMER_WAIT/1000000000 WAIT_MS,
       COUNT_STAR
  FROM performance_schema.events_waits_summary_global_by_event_name
 ORDER BY SUM_TIMER_WAIT DESC, COUNT_STAR DESC
 LIMIT 30;

-- table handler waits on top; they also had the biggest increase through time
+----------------------------------------+-----------------+--------------+
| EVENT_NAME                             | WAIT_MS         | COUNT_STAR   |
+----------------------------------------+-----------------+--------------+
| idle                                   | 4661443248.1257 |   6165953390 |
| wait/io/table/sql/handler              | 1755211603.3699 | 381701231771 |
| wait/io/file/innodb/innodb_log_file    |  459199280.6118 |    139800252 |
| wait/io/file/innodb/innodb_data_file   |   83050382.6978 |    197884296 |
| wait/io/file/myisam/kfile              |   56080735.9075 |   5274545307 |
| wait/io/file/myisam/dfile              |   13172549.9320 |    725383142 |
| wait/lock/table/sql/handler            |    5669784.9629 |  15013313221 |
| wait/io/file/sql/binlog                |    2407201.1162 |    389713292 |
...

2) Show what the current, or last completed, wait for each session was, and for exactly how long they waited:

SELECT NAME,
       IF(PPS.PROCESSLIST_ID IS NULL, 'Internal Thread', CONCAT(IPS.USER, '@', IPS.HOST)) USER,
       DB, COMMAND, STATE, TIME,
       EVENT_NAME LAST_WAIT,
       IF(TIMER_WAIT IS NULL, 'Still Waiting', TIMER_WAIT/1000000000) LAST_WAIT_MS
  FROM performance_schema.events_waits_current
  JOIN performance_schema.threads PPS USING (THREAD_ID)
  LEFT JOIN INFORMATION_SCHEMA.PROCESSLIST IPS ON PPS.PROCESSLIST_ID = IPS.ID;

We would then see whether increased waits correlated with the periods where we ran into problems.


Now here's what we actually changed so that our MySQL would perform again.
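Everything above points at one change first: switching the query cache off. A minimal sketch of how that looks, at runtime plus the my.cnf lines so the server never takes the query cache mutex again after a restart:

-- at runtime: stop using the cache and release its memory
SET GLOBAL query_cache_type = 0;
SET GLOBAL query_cache_size = 0;

-- and in my.cnf, so it stays off:
-- [mysqld]
-- query_cache_type = 0
-- query_cache_size = 0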