slow query for getting slots in range
Reported here.
Recently a couple more people have been complaining about the speed, some do not even have many appointments or rules.
Comments (13)
- changed title to slow query for getting slots in range
We get a factor of 10 improvement in SQL query response times when the queries are served from cache.
Here are my observations, FWIW:
I feel that it would be worthwhile benchmarking the current approach, doing all the filtering in SQL and running a separate SQL query for each availability rule,
versus
running a single SQL query from midnight on today’s date until 23:59 on the last day of the scheduling window, and filtering for all of the availability rules at once in PHP. This way, instead of running multiple slow queries, there’s only one slow query, which will be served from cache for the next search.
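A minimal sketch of that second approach, assuming a hypothetical appointments table (the plugin’s real table and column names will differ, and the dates are only illustrative):

```sql
-- One wide query covering the whole scheduling window (hypothetical schema).
-- Fetch everything from midnight today to 23:59 on the last day of the
-- window, then filter per availability rule in PHP instead of issuing
-- one query per rule.
SELECT id, staff_id, start_time, end_time
FROM appointments
WHERE start_time >= '2024-06-01 00:00:00'  -- midnight on today's date
  AND end_time   <= '2024-06-30 23:59:59'  -- end of the scheduling window
ORDER BY start_time;
```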
The issue, as I see it, is that when an appointment is added to a cart the cache is invalidated, meaning that all those availability rules have to be evaluated for the next person performing a search, taking a search from a reasonable 6 seconds to a seemingly unending 60 seconds - most website users will assume the script is broken and click off before this point.
Whichever case turns out to be faster in the benchmark, there is another way to mitigate the slow response of the front end: after an appointment is added to a cart, run either
all of the availability SQL queries (for the first case)
or
the single SQL query (for the second case)
on the server and discard the result. This has the benefit of pre-loading the results into the cache, making the next search quicker by a factor of 10.
-
There have been updates to BookingWP Appointments, but no movement on this issue.
-
We’re evaluating the improvements. The response should be instant, but at the moment it takes too long. The issue is mainly apparent when staff are used; when staff are not used, it loads very quickly in most cases.
-
It’s the inner join queries. I found a very interesting article about speeding up inner join queries by redesigning them to run as separate queries instead, here:
So whilst an inner join might be correct in terms of SQL, it’s incorrect in terms of performance! (Or so it would seem)
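To illustrate that decomposition with hypothetical table names (not the plugin’s actual schema), the joined and split forms look like this:

```sql
-- Joined form: one query, but the optimizer may pick a poor join plan.
SELECT a.start_time, a.end_time
FROM appointments a
INNER JOIN availability_rules r ON r.staff_id = a.staff_id
WHERE r.kind = 'time:range';

-- Decomposed form: two simple indexed lookups; the IN list in the second
-- query is built in application code from the first result set.
SELECT staff_id FROM availability_rules WHERE kind = 'time:range';
SELECT start_time, end_time FROM appointments
WHERE staff_id IN (/* ids from the first query */);
```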
-
Thank you for the suggestion. I’ll check whether these suggestions can work in our case.
In the past couple of months, we were gradually adding performance improvements by reducing the duplicated queries, so you should have seen some gains with each update. This issue is still open and we’re creating an infrastructure to remove the bottleneck here as well.
-
We have lots of availability rules and checking availability has been getting faster and faster with each new version, going from 30 seconds down to 6 seconds for uncached, and then down to 4 seconds for cached.
I recently upgraded MariaDB and it wiped out my my.cnf.
Availability queries got much, much slower.
With the use of mysqltuner.pl I have recovered most of the speed.
However, the suggestions are quite extreme.
It recommended increasing tmp_table_size and max_heap_table_size until they were:
tmp_table_size = 256M
max_heap_table_size = 256M
It has also been telling me to increase join_buffer_size (> 220.0M, or always use indexes with JOINs).
As you can see, I’m already at 220M, and that’s per connection. I think perhaps adding some indices to the columns used in the INNER JOIN would be the better fix.
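For example, assuming the join is on a staff column and the WHERE clause filters a time range (hypothetical table and column names), indexes on both sides of the join condition could be added like this:

```sql
-- Composite index covering the join key plus the range column used in WHERE,
-- and a matching index on the other side of the join. After adding them,
-- check with EXPLAIN that the join uses "ref" access instead of a full scan.
CREATE INDEX idx_appointments_staff_start
    ON appointments (staff_id, start_time);
CREATE INDEX idx_rules_staff
    ON availability_rules (staff_id);
```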
-
After extensive testing, I have found that the correct MySQL (MariaDB) setup can give a 10-fold increase in speed.
In the quest for ever more speed, we upgraded our MariaDB from 10.3 to 10.5, but this made queries run MUCH SLOWER.
The key is to get the right settings in the /etc/mysql/my.cnf or /etc/mysql/mariadb.conf.d/50-server.cnf
Here are some of the important settings:
plugin-load-add = auth_socket
tmpdir = /dev/shm/ # If you have /dev/shm enabled
concurrent_insert=ALWAYS
join_buffer_size = 160M
aria_pagecache_buffer_size = 256M
tmp_table_size = 256M
max_heap_table_size = 256M
query_cache_type = 0
query_cache_size = 0
innodb_stats_on_metadata=0
innodb_undo_directory = /dev/shm/
innodb-temp-data-file-path=../../../dev/shm/ibtmp1:128M:autoextend # Make sure this path is correct for your server
innodb_buffer_pool_dump_pct = 100
innodb_buffer_pool_size = 5G # This should be enough to hold your entire database - mysqltuner should give you the correct size
innodb_buffer_pool_dump_at_shutdown=ON
innodb_buffer_pool_load_at_startup=ON
optimizer_search_depth=3 # You can play around with this number, but 3 worked best for us
character_set_server=utf8
innodb_page_cleaners = 8
innodb_write_io_threads=16
innodb_read_io_threads=16
innodb_thread_concurrency = 16
table_open_cache_instances=16
innodb_log_file_size = 640M
innodb_open_files=4000
innodb_max_dirty_pages_pct=90
innodb_max_dirty_pages_pct_lwm=10
innodb_io_capacity=35000
innodb_io_capacity_max=40000
innodb_buffer_pool_instances = 5
We also run our webserver on OpenLiteSpeed and use the LiteSpeed Cache WordPress plugin. The key speed-up in this setup is using Redis as an object cache for the database. This means that after an appointment is put into a cart, the next query will come from MySQL, but any subsequent queries will be served almost instantly from the Redis cache (that is, until another appointment is put in the cart).
If you use a different webserver, such as Nginx or Apache, there are a couple of Redis cache plugins in the WordPress repository.
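One way to sanity-check the innodb_buffer_pool_size value that mysqltuner suggests is to measure the total data and index size directly from information_schema:

```sql
-- Total size of all InnoDB data and indexes, in GB. If the whole database
-- is to stay in memory, innodb_buffer_pool_size should comfortably exceed
-- this figure.
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2)
           AS total_gb
FROM information_schema.tables
WHERE engine = 'InnoDB';
```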
-
Reporter: improvements a user made here.
-
Reporter: more complaints here.
https://bookingwp.com/forums/topic/booking-product-page-loads-extremely-slow-then-crashes/#post-757125