1. masterha_check_ssh --conf=/etc/app1.conf
When this check fails, in my experience about 90% of the time it is an SSH connectivity problem between the nodes. Make sure every node can reach every other node over passwordless (key-based) SSH!
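For reference, a minimal sketch of how that passwordless access is usually set up, assuming root is the SSH user (as in the log further down) and using the node IPs from that log; adjust the user, IPs and config path to your environment:

# run on the manager and on every MySQL node
ssh-keygen -t rsa                              # accept the defaults, empty passphrase
for host in 103.75.1.22 103.75.1.23 103.75.1.24; do
    ssh-copy-id root@${host}                   # push this node's public key to every other node
done
masterha_check_ssh --conf=/etc/app1.conf       # re-run the check afterwards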
2. masterha_check_repl --conf=/etc/app1.conf
(1) Error output:
The message says, roughly, that the replication user (copyuser) does not have the required privileges on some node. The fix is simply to create this user on every node. If master-slave replication is already running, remember to run stop slave; on the slaves first, then create the user on each node separately.
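A minimal sketch of that fix, run on every node; the user name copyuser comes from the error message, while the host mask and password here are only placeholders to adjust:

mysql -uroot -p -e "STOP SLAVE;"        # only on slaves where replication is already running
mysql -uroot -p -e "CREATE USER 'copyuser'@'103.75.1.%' IDENTIFIED BY 'ChangeMe';
                    GRANT REPLICATION SLAVE ON *.* TO 'copyuser'@'103.75.1.%';
                    FLUSH PRIVILEGES;"
mysql -uroot -p -e "START SLAVE;"       # on the slaves, once the user exists everywhere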
With this MHA version, it seems that binary logging and relay logging need to be enabled on all database nodes, the grants should be identical everywhere, and the configuration files should be largely the same. Under those preconditions, installing and running MHA should not hit many problems. I am not yet sure whether this approach is the correct one, though.
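As an illustration, a minimal sketch of the [mysqld] settings assumed here on every node; the values and file names are placeholders, server_id must be unique per node, and read_only belongs on the slaves only (the log below also flags it when missing):

cat >> /etc/my.cnf <<'EOF'
[mysqld]
server_id = 1            # unique on every node
log_bin   = mysql-bin    # binary log
relay_log = relay-bin    # relay log
read_only = 1            # on the slaves only
EOF
systemctl restart mysqld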
(2) Error output:
Tue Apr 30 09:26:44 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Apr 30 09:26:44 2019 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Tue Apr 30 09:26:44 2019 - [info] Reading server configuration from /etc/mha/app1.cnf..
Tue Apr 30 09:26:44 2019 - [info] MHA::MasterMonitor version 0.56.
Tue Apr 30 09:26:45 2019 - [info] GTID failover mode = 0
Tue Apr 30 09:26:45 2019 - [info] Dead Servers:
Tue Apr 30 09:26:45 2019 - [info] Alive Servers:
Tue Apr 30 09:26:45 2019 - [info]   103.75.1.22(103.75.1.22:3306)
Tue Apr 30 09:26:45 2019 - [info]   103.75.1.23(103.75.1.23:3306)
Tue Apr 30 09:26:45 2019 - [info]   103.75.1.24(103.75.1.24:3306)
Tue Apr 30 09:26:45 2019 - [info] Alive Slaves:
Tue Apr 30 09:26:45 2019 - [info]   103.75.1.23(103.75.1.23:3306)  Version=5.7.25-log (oldest major version between slaves) log-bin:enabled
Tue Apr 30 09:26:45 2019 - [info]     Replicating from 103.75.1.22(103.75.1.22:3306)
Tue Apr 30 09:26:45 2019 - [info]     Primary candidate for the new Master (candidate_master is set)
Tue Apr 30 09:26:45 2019 - [info]   103.75.1.24(103.75.1.24:3306)  Version=5.7.25-log (oldest major version between slaves) log-bin:enabled
Tue Apr 30 09:26:45 2019 - [info]     Replicating from 103.75.1.22(103.75.1.22:3306)
Tue Apr 30 09:26:45 2019 - [info] Current Alive Master: 103.75.1.22(103.75.1.22:3306)
Tue Apr 30 09:26:45 2019 - [info] Checking slave configurations..
Tue Apr 30 09:26:45 2019 - [info]  read_only=1 is not set on slave 103.75.1.24(103.75.1.24:3306).
Tue Apr 30 09:26:45 2019 - [info] Checking replication filtering settings..
Tue Apr 30 09:26:45 2019 - [info]  binlog_do_db= , binlog_ignore_db=
Tue Apr 30 09:26:45 2019 - [info]  Replication filtering check ok.
Tue Apr 30 09:26:45 2019 - [info] GTID (with auto-pos) is not supported
Tue Apr 30 09:26:45 2019 - [info] Starting SSH connection tests..
Tue Apr 30 09:26:53 2019 - [info] All SSH connection tests passed successfully.
Tue Apr 30 09:26:53 2019 - [info] Checking MHA Node version..
Tue Apr 30 09:26:57 2019 - [info]  Version check ok.
Tue Apr 30 09:26:57 2019 - [info] Checking SSH publickey authentication settings on the current master..
Tue Apr 30 09:26:58 2019 - [info] HealthCheck: SSH to 103.75.1.22 is reachable.
Tue Apr 30 09:26:59 2019 - [info] Master MHA Node version is 0.56.
Tue Apr 30 09:26:59 2019 - [info] Checking recovery script configurations on 103.75.1.22(103.75.1.22:3306)..
Tue Apr 30 09:26:59 2019 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data --output_file=/data/mastermha/app1//save_binary_logs_test --manager_version=0.56 --start_file=master-bin.000008
Tue Apr 30 09:26:59 2019 - [info]   Connecting to root@103.75.1.22(103.75.1.22:22)..
  Failed to save binary log: Binlog not found from /data! If you got this error at MHA Manager, please set "master_binlog_dir=/path/to/binlog_directory_of_the_master" correctly in the MHA Manager's configuration file and try again.
 at /usr/bin/save_binary_logs line 123
  eval {...} called at /usr/bin/save_binary_logs line 70
  main::main() called at /usr/bin/save_binary_logs line 66
Tue Apr 30 09:27:00 2019 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln158] Binlog setting check failed!
Tue Apr 30 09:27:00 2019 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln405] Master configuration failed.
Tue Apr 30 09:27:00 2019 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/bin/masterha_check_repl line 48
Tue Apr 30 09:27:00 2019 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Tue Apr 30 09:27:00 2019 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK!
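The log itself points at the fix: master_binlog_dir in the manager's configuration does not match the directory where the master actually writes master-bin.000008. A minimal sketch of applying that fix, assuming the real binlog directory turns out to be /var/lib/mysql (that path is only a placeholder; check it on 103.75.1.22 first):

# on the master, find where the binlogs really live
mysql -uroot -p -e "SHOW VARIABLES LIKE 'log_bin_basename';"
# then, in /etc/mha/app1.cnf, point the manager at that directory, e.g.
#   [server default]
#   master_binlog_dir=/var/lib/mysql
# and re-run the check
masterha_check_repl --conf=/etc/mha/app1.cnf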