I’ll be speaking at the data retrieval miniconf at the Linux Conference in Wellington, New Zealand (Full Text Search with MySQL, Program).
I’ll cover some new Sphinx Search features (online updates).
Sometimes we have a lot of small and rather fast queries which use GROUP BY/ORDER BY, thus creating temporary tables. Some of those queries retrieve text fields, so MySQL has to use on-disk (MyISAM) temporary tables. Those queries usually run for less than 1-2 seconds, so they do not get into the slow query log; however, they can sometimes add serious load on the system. Here is a stat example:

bash-3.00$ /usr/local/mysql/bin/mysqladmin -uroot -p -i 2 -r extended-status | grep tmp_disk
...
| Created_tmp_disk_tables | 109 |
| Created_tmp_disk_tables | 101 |
| Created_tmp_disk_tables | 122 |
...

40-50 tmp_disk_tables are created per second. So, how can we grab those queries? Usually we have to temporarily enable the general log, filter out queries with GROUP BY/ORDER BY and profile them all. On Solaris/Mac we can use DTrace instead. Here is a simple script which will find the list of queries creating tmp_disk_tables:

#!/usr/sbin/dtrace -s
#pragma D option quiet

dtrace:::BEGIN
{
    printf("Tracing... Hit Ctrl-C to end.\n");
}

/* grab the query text as MySQL starts parsing a statement */
pid$target::*mysql_parse*:entry
{
    self->query = copyinstr(arg1);
}

/* count queries that end up creating an on-disk MyISAM temporary table */
pid$target::*create_myisam_tmp_table*:return
{
    @query[self->query] = count();
}

Put it into tmpdisktable.d, chmod +x tmpdisktable.d and run it; hit Ctrl+C after 5 seconds or so and you will see the queries:

# ./tmpdisktable.d -p `pgrep -x mysqld`
Tracing... Hit Ctrl-C to end.
^C

Queries are truncated to “strsize”, which can be tweaked:

#pragma D option strsize=N

We can increase the “strsize” length and run the script again to get real query examples. Please note: running DTrace for a while can decrease performance, so do not run it for more than a couple of minutes on production systems.

Reporting queries (I will use this term here) are queries which summarize and group data over a certain period of time. For example, on a social network site we may want to know how many messages have been sent for a given period of time, grouped by region and status (sent, received, etc) and ordered by the number of messages sent. As an example I will take a table which is used to send SMS (text messages).
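A minimal sketch of such a table and the reporting query (the table name, the columns other than submition_date, and the date range are illustrative assumptions):

CREATE TABLE sms (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    region_id INT UNSIGNED NOT NULL,   -- region the message was sent to
    status TINYINT UNSIGNED NOT NULL,  -- sent, received, etc
    submition_date DATETIME NOT NULL,  -- when the message was submitted
    message TEXT,
    KEY (submition_date)
);

-- how many messages were sent per region in December, busiest regions first
SELECT region_id, COUNT(*) AS cnt
FROM sms
WHERE submition_date BETWEEN '2009-12-01' AND '2009-12-31'
GROUP BY region_id
ORDER BY cnt DESC;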
This query will do a range scan over submition_date and perform a filesort. There are common, well-known approaches which can be used to optimize such queries (“covering index”, “summary tables”, using an external data warehouse, etc). Sometimes those approaches do not work or are too complex. Yet another approach is to use an external search/index solution, for example Sphinx Search (http://www.sphinxsearch.com). In this case the data will be stored in MySQL and Sphinx will be used as an external indexer/searcher, with SQL protocol support.

Using Sphinx

Starting with version 0.9.9-rc2, the Sphinx searchd daemon supports the MySQL binary network protocol and can be accessed with the regular MySQL API. For instance, the ‘mysql’ CLI client program works well. Here’s an example of querying Sphinx using the MySQL client:
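A minimal sketch of such a session (the index name sms and port 9306 are assumptions; the port just has to match the mysql41 listener in the config file at the end of this post):

bash-3.00$ mysql -h 127.0.0.1 -P 9306

mysql> SELECT * FROM sms WHERE MATCH('some text') LIMIT 5;

Full-text searching goes through the MATCH() operator, while attribute filtering, grouping and sorting look like ordinary SQL.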
As Sphinx can use attributes (“fields”) and group/sort on them, it can be used for our report. Also, an application can simply connect to the Sphinx server with the MySQL protocol: the application will think it is talking to MySQL (there are minor differences in Sphinx SQL, like “@count” and support of timestamps only instead of datetime). Here is an example of the above query in Sphinx:
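A sketch of the same report in Sphinx SQL, assuming the sms index defined in the config file below; note that the date range has to be expressed as Unix timestamps, since Sphinx supports timestamps only:

mysql> SELECT * FROM sms
    -> WHERE submition_date >= 1259625600 AND submition_date <= 1262217600
    -> GROUP BY region_id
    -> ORDER BY @count DESC;

Here @count is the Sphinx-side equivalent of COUNT(*) for each group.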
Another speed comparison, this time grouping by 2 fields:
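A sketch of the two-field grouping, assuming a Sphinx version that supports multi-column GROUP BY (older versions only grouped on a single, possibly computed, attribute):

mysql> SELECT * FROM sms
    -> WHERE submition_date >= 1259625600 AND submition_date <= 1262217600
    -> GROUP BY region_id, status
    -> ORDER BY @count DESC;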
Conclusion

If you need fast ad-hoc reporting queries, Sphinx Search can be a good option.
Disadvantages:

- the data has to be indexed into Sphinx separately (see the config file below)
- minor differences in Sphinx SQL (e.g. “@count”)
- support of timestamps only, instead of datetime
Sphinx config file
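A minimal sphinx.conf sketch for the setup above (hosts, paths, credentials and the exact sql_query are assumptions; the mysql41 listener is what lets searchd speak the MySQL protocol used in the examples):

source sms
{
    type               = mysql
    sql_host           = localhost
    sql_user           = root
    sql_pass           =
    sql_db             = test
    # feed documents to the indexer; the date goes in as a Unix timestamp
    sql_query          = SELECT id, region_id, status, \
                         UNIX_TIMESTAMP(submition_date) AS submition_date, \
                         message FROM sms
    sql_attr_uint      = region_id
    sql_attr_uint      = status
    sql_attr_timestamp = submition_date
}

index sms
{
    source = sms
    path   = /var/lib/sphinx/sms
}

searchd
{
    # mysql41 makes searchd speak the MySQL binary protocol on this port
    listen   = localhost:9306:mysql41
    log      = /var/log/sphinx/searchd.log
    pid_file = /var/run/sphinx/searchd.pid
}

The index would then be (re)built with something like indexer --all --rotate.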
I’ve started my new blog on MySQL. I’ll focus on MySQL full text search, performance tuning and High Availability (HA).