A bit of background: some time ago we started seeing very high CPU system time on one of our MySQL databases. The database was also suffering from high disk utilization, so we assumed the two were connected. Since we were already planning to migrate it to SSDs, we expected that to fix both problems.
It helped... but not for long.
The problem came out of nowhere, without any visible change in load or application logic.
Database stats:
- MySQL version - 5.7.20
- OS - Debian
- DB size - 1.2 TB
- RAM - 700 GB
- CPU cores - 56
- Peak load - around 5k reads/s and 600 writes/s (although the SELECT queries are often quite complex)
- Threads - 50 running, 300 connected
- Around 300 tables, all InnoDB
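If it helps, the thread and throughput figures above are easy to re-check on the server itself; a minimal sketch, assuming the mysql client can connect with credentials from ~/.my.cnf:

# rough sanity check of the figures above: thread counts plus total query counters
mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN
          ('Threads_running','Threads_connected','Queries','Questions','Uptime')"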
MySQL configuration:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /opt/mysql-data
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
explicit_defaults_for_timestamp
sql_mode = STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
log-error = /opt/mysql-log/error.log
# Replication
server-id = 76
gtid-mode = ON
enforce-gtid-consistency = true
relay-log = /opt/mysql-log/mysql-relay-bin
relay-log-index = /opt/mysql-log/mysql-relay-bin.index
replicate-wild-do-table = dbname.%
log-bin = /opt/mysql-log/mysql-bin.log
expire_logs_days = 7
max_binlog_size = 1024M
binlog-format = ROW
log-bin-trust-function-creators = 1
log_slave_updates = 1
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
# Here goes
skip_name_resolve = 1
general_log = 0
slow_query_log = 1
slow_query_log_file = /opt/mysql-log/slow.log
long_query_time = 3
max_allowed_packet = 16M
max_connections = 700
max_execution_time = 200000
open_files_limit = 32000
table_open_cache = 8000
thread_cache_size = 128
innodb_buffer_pool_size = 550G
innodb_buffer_pool_instances = 28
innodb_log_file_size = 15G
innodb_log_files_in_group = 2
innodb_flush_method = O_DIRECT
max_heap_table_size = 16M
tmp_table_size = 128M
join_buffer_size = 1M
sort_buffer_size = 2M
innodb_lru_scan_depth = 256
query_cache_type = 0
query_cache_size = 0
innodb_temp_data_file_path = ibtmp1:12M:autoextend:max:30G
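One thing this config leaves at defaults are the InnoDB settings usually looked at when spin/mutex contention shows up; confirming what the running server actually uses costs nothing. A minimal sketch, again assuming client credentials are available:

# spin/contention related settings are not set explicitly above, so check the runtime values
mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
          ('innodb_spin_wait_delay','innodb_sync_spin_loops',
           'innodb_adaptive_hash_index','innodb_thread_concurrency')"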
Other observations
perf of the mysqld process during peak load:
68,31% 68,31% mysqld [kernel.kallsyms] [k] _raw_spin_lock
- _raw_spin_lock
+ 51,63% 0x7fd118e9dbd9
+ 48,37% 0x7fd118e9dbab
+ 37,36% 0,02% mysqld libc-2.19.so [.] 0x00000000000f4bd9
+ 33,83% 0,01% mysqld libc-2.19.so [.] 0x00000000000f4bab
+ 26,92% 0,00% mysqld libpthread-2.19.so [.] start_thread
+ 26,82% 0,00% mysqld mysqld [.] pfs_spawn_thread
+ 26,82% 0,00% mysqld mysqld [.] handle_connection
+ 26,81% 0,01% mysqld mysqld [.] do_command(THD*)
+ 26,65% 0,02% mysqld mysqld [.] dispatch_command(THD*, COM_DATA const*, enum_server_command)
+ 26,29% 0,01% mysqld mysqld [.] mysql_parse(THD*, Parser_state*)
+ 24,85% 0,01% mysqld mysqld [.] mysql_execute_command(THD*, bool)
+ 23,61% 0,00% mysqld mysqld [.] handle_query(THD*, LEX*, Query_result*, unsigned long long, unsigned long long)
+ 23,54% 0,00% mysqld mysqld [.] 0x0000000000374103
+ 19,78% 0,00% mysqld mysqld [.] JOIN::exec()
+ 19,13% 0,15% mysqld mysqld [.] sub_select(JOIN*, QEP_TAB*, bool)
+ 13,86% 1,48% mysqld mysqld [.] row_search_mvcc(unsigned char*, page_cur_mode_t, row_prebuilt_t*, unsigned long, unsigned long)
+ 8,48% 0,25% mysqld mysqld [.] ha_innobase::general_fetch(unsigned char*, unsigned int, unsigned int)
+ 7,93% 0,00% mysqld [unknown] [.] 0x00007f40c4d7a6f8
+ 7,57% 0,00% mysqld mysqld [.] 0x0000000000828f74
+ 7,25% 0,11% mysqld mysqld [.] handler::ha_index_next_same(unsigned char*, unsigned char const*, unsigned int)
It shows that MySQL spends a lot of time on spin locks. I was hoping this would give some clue about where those locks come from, but sadly I had no luck.
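For what it is worth, re-capturing the profile with call graphs might reveal which lock the spinning actually comes from; a sketch of one way to do that, assuming perf and the mysqld debug symbols are installed:

# record 30 seconds of call graphs from the running mysqld, then print the hottest stacks
perf record -g -p "$(pidof mysqld)" -- sleep 30
perf report --stdio --no-children | head -n 60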
Query profiling during high load shows a huge number of context switches. I profiled select * from MyTable where pk = 123; MyTable has around 90M rows. Profile output:
Status Duration CPU_user CPU_system Context_voluntary Context_involuntary Block_ops_in Block_ops_out Messages_sent Messages_received Page_faults_major Page_faults_minor Swaps Source_function Source_file Source_line
starting 0,000756 0,028000 0,012000 81 1 0 0 0 0 0 0 0
checking permissions 0,000057 0,004000 0,000000 4 0 0 0 0 0 0 0 0 check_access sql_authorization.cc 810
Opening tables 0,000285 0,008000 0,004000 31 0 0 40 0 0 0 0 0 open_tables sql_base.cc 5650
init 0,000304 0,012000 0,004000 31 1 0 0 0 0 0 0 0 handle_query sql_select.cc 121
System lock 0,000303 0,012000 0,004000 33 0 0 0 0 0 0 0 0 mysql_lock_tables lock.cc 323
optimizing 0,000196 0,008000 0,004000 20 0 0 0 0 0 0 0 0 optimize sql_optimizer.cc 151
statistics 0,000885 0,036000 0,012000 99 6 0 0 0 0 0 0 0 optimize sql_optimizer.cc 367
preparing 0,000794 0,000000 0,096000 76 2 32 8 0 0 0 0 0 optimize sql_optimizer.cc 475
executing 0,000067 0,000000 0,000000 10 1 0 0 0 0 0 0 0 exec sql_executor.cc 119
Sending data 0,000469 0,000000 0,000000 54 1 32 0 0 0 0 0 0 exec sql_executor.cc 195
end 0,000609 0,000000 0,016000 64 4 0 0 0 0 0 0 0 handle_query sql_select.cc 199
query end 0,000063 0,000000 0,000000 3 1 0 0 0 0 0 0 0 mysql_execute_command sql_parse.cc 4968
closing tables 0,000156 0,000000 0,000000 20 4 0 0 0 0 0 0 0 mysql_execute_command sql_parse.cc 5020
freeing items 0,000071 0,000000 0,004000 7 1 0 0 0 0 0 0 0 mysql_parse sql_parse.cc 5596
cleaning up 0,000533 0,024000 0,008000 62 0 0 0 0 0 0 0 0 dispatch_command sql_parse.cc 1902
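For reference, this is roughly how such a profile can be reproduced; profiling is per-connection (and deprecated in 5.7, though still available), so everything has to run in one session:

# enable profiling, run the test query, then dump all profile columns for it
mysql -e "
  SET profiling = 1;
  SELECT * FROM MyTable WHERE pk = 123;
  SHOW PROFILES;
  SHOW PROFILE ALL;
"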
Peter Zaitsev recently published a post about context switches, where he says:
However, in the real world, I would not worry about contention being a big problem if there are fewer than ten context switches per query.
But here it shows more than 600 switches!
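The 600+ figure appears to be the sum of the Context_voluntary and Context_involuntary columns over all stages of the profile above (roughly 600 voluntary plus about 20 involuntary in the output shown). It can be computed in one go from INFORMATION_SCHEMA.PROFILING, which is also session-scoped; a sketch:

# total context switches per profiled query, summed over all execution stages
mysql -e "
  SET profiling = 1;
  SELECT * FROM MyTable WHERE pk = 123;
  SELECT QUERY_ID,
         SUM(CONTEXT_VOLUNTARY)   AS ctx_voluntary,
         SUM(CONTEXT_INVOLUNTARY) AS ctx_involuntary
  FROM INFORMATION_SCHEMA.PROFILING
  GROUP BY QUERY_ID;
"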
What could be causing these symptoms, and what can be done about it? I would be glad for any advice or pointers on this; everything I have found so far is either outdated and/or inconclusive.
PS I will gladly provide additional information if needed.
Output of show global status and show variables
I cannot post it here because the content exceeds the post size limit.
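In case it helps, both outputs can be dumped to files and attached or linked instead; a trivial sketch:

# dump the full outputs to files, since they do not fit in the post
mysql -e "SHOW GLOBAL STATUS"    > global_status.txt
mysql -e "SHOW GLOBAL VARIABLES" > global_variables.txt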
iostat
avg-cpu: %user %nice %system %iowait %steal %idle
7,35 0,00 5,44 0,20 0,00 87,01
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
fd0 0,00 0,00 0,00 0,00 0,00 0,00 8,00 0,00 32,00 32,00 0,00 32,00 0,00
sda 0,04 2,27 0,13 0,96 0,86 46,52 87,05 0,00 2,52 0,41 2,80 0,28 0,03
sdb 0,21 232,57 30,86 482,91 503,42 7769,88 32,21 0,34 0,67 0,83 0,66 0,34 17,50
avg-cpu: %user %nice %system %iowait %steal %idle
9,98 0,00 77,52 0,46 0,00 12,04
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
fd0 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
sda 0,00 1,60 0,00 0,60 0,00 8,80 29,33 0,00 0,00 0,00 0,00 0,00 0,00
sdb 0,00 566,40 55,60 981,60 889,60 16173,60 32,90 0,84 0,81 0,76 0,81 0,51 53,28
avg-cpu: %user %nice %system %iowait %steal %idle
11,83 0,00 72,72 0,35 0,00 15,10
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
fd0 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
sda 0,00 2,60 0,00 0,40 0,00 12,00 60,00 0,00 0,00 0,00 0,00 0,00 0,00
sdb 0,00 565,20 51,60 962,80 825,60 15569,60 32,32 0,85 0,84 0,98 0,83 0,54 54,56
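The samples above are extended per-device statistics; output in this shape can be reproduced with something like:

# extended device statistics, 5-second interval, three samples
iostat -x 5 3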
Update 2018-03-15
> show global status like 'uptime%'
Uptime;720899
Uptime_since_flush_status;720899
> show global status like '%rollback'
Com_rollback;351422
Com_xa_rollback;0
Handler_rollback;371088
Handler_savepoint_rollback;0
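On their own the rollback counters say little; they only mean something next to the commit counters accumulated over the same uptime. A sketch of pulling both sides at once:

# compare rollbacks against commits accumulated over the same uptime
mysql -e "
  SHOW GLOBAL STATUS WHERE Variable_name IN
    ('Uptime','Com_commit','Com_rollback','Handler_commit','Handler_rollback');
"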
Is there anything in the global status that correlates with the increase in CPU usage? I don't think I can get that data at the moment. If I find something new, I will ask a separate question.
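If sampling does become possible later, a simple timestamped loop over SHOW GLOBAL STATUS is usually enough to line counter deltas up against CPU spikes; a sketch:

# sample global status every 10 seconds with a timestamp,
# so counter deltas can later be correlated with CPU usage
while true; do
    date '+%F %T'
    mysql -e "SHOW GLOBAL STATUS"
    sleep 10
done > status_samples.txt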
select * from MyTable where pk = 123
How long does that take on average (elapsed time)?
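One way to answer that without timing it by hand is the statement digest summary in performance_schema (on by default in 5.7); timers there are in picoseconds. A sketch, where the LIKE pattern is just an assumption about how the normalized digest text looks:

# average and maximum elapsed time for the normalized statement, converted to seconds
mysql -e "
  SELECT DIGEST_TEXT,
         COUNT_STAR,
         AVG_TIMER_WAIT / 1e12 AS avg_seconds,
         MAX_TIMER_WAIT / 1e12 AS max_seconds
  FROM performance_schema.events_statements_summary_by_digest
  WHERE DIGEST_TEXT LIKE '%MyTable%pk%'
  ORDER BY COUNT_STAR DESC
  LIMIT 5;
"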