Bug #11639
Status: closed
Events Manager causing excessive database queries due to bbPress integration
Added by Boone Gorges over 5 years ago. Updated over 5 years ago.
Updated by Boone Gorges over 5 years ago
- Subject changed from Events Manager causing excessive database queries due to bb to Events Manager causing excessive database queries due to bbPress integration
- Category name set to WordPress Plugins
- Target version set to 1.15.6
In the last couple of days, my monitoring tools have shown a pattern of excessive database queries of the following form:
[17] => Array (
    [0] => SELECT option_value FROM wp_3999_options WHERE option_name='wp_user_roles'
    [1] => 0.00070500373840332
    [2] => require('wp-blog-header.php'), require_once('wp-load.php'), require_once('wp-config.php'), require_once('wp-settings.php'), do_action('init'), WP_Hook->do_action, WP_Hook->apply_filters, wp_widgets_init, do_action('widgets_init'), WP_Hook->do_action, WP_Hook->apply_filters, bpeo_group_widget_init, get_blog_option, restore_current_blog, do_action('switch_blog'), WP_Hook->do_action, WP_Hook->apply_filters, wp_switch_roles_and_user, WP_Roles->for_site, WP_Roles->get_roles_data, get_option, apply_filters('option_wp_3999_user_roles'), WP_Hook->apply_filters, _bbp_reinit_dynamic_roles, bbp_get_dynamic_roles, bbp_get_caps_for_role, apply_filters('bbp_get_caps_for_role'), WP_Hook->apply_filters, em_bbp_get_caps_for_role
    [3] => 1563211377.3423
)
This'll appear thousands of times, occurring on different sites but always querying the same handful of options tables (wp_1, wp_3999, wp_8266). The backtrace shows that em_bbp_get_caps_for_role is doing a direct DB query, which means it skips the cache. events-manager and bbpress haven't been updated in a long time, so I'm unsure why this is just now cropping up. But it should be possible to write a workaround that at least avoids duplicate queries, if it's not possible to use the cached get_option() call for some reason.
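As a rough illustration, a workaround of that kind might look like the sketch below. This is not a proposed patch: it assumes em_bbp_get_caps_for_role() is a plain function hooked to bbp_get_caps_for_role (as the backtrace suggests), that its result depends only on the role and the current site, and that it is already registered by the end of plugins_loaded; none of that has been verified against the plugin source.

<?php
// Sketch only: collapse repeated em_bbp_get_caps_for_role() calls into one
// lookup per site/role combination for the duration of the request.
add_action( 'plugins_loaded', function() {
	$priority = has_filter( 'bbp_get_caps_for_role', 'em_bbp_get_caps_for_role' );

	if ( false === $priority ) {
		return;
	}

	// Replace Events Manager's callback with a memoizing wrapper.
	remove_filter( 'bbp_get_caps_for_role', 'em_bbp_get_caps_for_role', $priority );

	add_filter( 'bbp_get_caps_for_role', function( $caps, $role ) {
		static $memo = array();

		$key = get_current_blog_id() . ':' . $role;

		if ( ! isset( $memo[ $key ] ) ) {
			// First time we see this site/role pair: run the original callback.
			$memo[ $key ] = em_bbp_get_caps_for_role( $caps, $role );
		}

		return $memo[ $key ];
	}, $priority, 2 );
}, 100 );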
Updated by Boone Gorges over 5 years ago
I've noticed a few DB restarts over the last few days, and I wonder whether there's a connection between those and the issue described above. As such, I'm going to put a fix for this issue in place right away. In https://github.com/cuny-academic-commons/cac/commit/764e6255759774cdb207ef031d5ebc75b0f2ce1b (and follow-up https://github.com/cuny-academic-commons/cac/commit/d121840014f19c126cc2d742253688736a3edb85) I've added a non-persistent caching layer to the checks that EM does. I'll deploy and monitor.
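Roughly, a non-persistent caching layer of this kind might look like the sketch below. This is a sketch under assumptions, not the code in the linked commits: the function name cac_em_get_cached_option() and the cac_em_bbp cache group are made up for illustration, and the assumption is that the expensive step is an uncached per-site option read.

<?php
// Hypothetical helper: read a per-site option at most once per page load.
// The 'cac_em_bbp' group is registered as non-persistent, so cached values
// never leave the current request.
wp_cache_add_non_persistent_groups( array( 'cac_em_bbp' ) );

function cac_em_get_cached_option( $blog_id, $option_name ) {
	$cache_key = $blog_id . ':' . $option_name;

	$value = wp_cache_get( $cache_key, 'cac_em_bbp', false, $found );
	if ( $found ) {
		return $value;
	}

	// get_blog_option() goes through the normal options cache rather than
	// querying wp_{$blog_id}_options directly.
	$value = get_blog_option( $blog_id, $option_name );

	wp_cache_set( $cache_key, $value, 'cac_em_bbp' );

	return $value;
}

Keeping the group non-persistent collapses the duplicate reads to one per request without pushing anything extra into the shared object cache.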
Updated by Anonymous over 5 years ago
Boone Gorges wrote:
I've noticed a few DB restarts over the last few days, and I wonder whether there's a connection between those and the issue described above. As such, I'm going to put a fix for this issue in place right away. In https://github.com/cuny-academic-commons/cac/commit/764e6255759774cdb207ef031d5ebc75b0f2ce1b (and follow-up https://github.com/cuny-academic-commons/cac/commit/d121840014f19c126cc2d742253688736a3edb85) I've added a non-persistent caching layer to the checks that EM does. I'll deploy and monitor.
Hi Boone,
You can ignore the DB restarts that took place between 12am and 3am. That is the window when DB backups take place. R1soft in particular briefly locks down the DB in order to back it up, and it is known that it sometimes cannot acquire the lock because of ongoing queries; it will keep trying for a while, which can push the number of DB threads above 200.
Outside of this time slot, we suspect a routine in the code is what caused the excessive DB queries.
Lihua
Updated by Boone Gorges over 5 years ago
- Status changed from New to Resolved
Thanks for the context, Lihua.
This issue seems to have gone away.