If all of your databases use InnoDB only, I have some good news.
You should be dumping all of the databases in parallel from a slave.
In fact, you can force all of the databases to the same point in time.
The first thing to remember about a slave is that it does not need binary logging enabled if it is not a master to other slaves.
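For illustration, a minimal sketch of such a slave's my.cnf, with log-bin simply left out (the server-id value is an assumption; use any ID distinct from the master's):

[mysqld]
server-id = 2
relay-log = relay-bin
# log-bin deliberately omitted: this slave feeds no other slaves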
You cannot use the --master-data option for parallel dumps, because each dump would record a different position (written into line 22 of each dump file). It is better to record the master's last log file and the position the slave has executed, using SHOW SLAVE STATUS\G. That way, all of the databases share the same point-in-time position.
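Those recorded coordinates are exactly what you would feed to CHANGE MASTER TO if you later rebuild a replica from these dumps; a sketch with placeholder values:

CHANGE MASTER TO
    MASTER_HOST='...',                  -- placeholder master host
    MASTER_USER='...',                  -- placeholder replication user
    MASTER_PASSWORD='...',
    MASTER_LOG_FILE='mysql-bin.000123', -- the recorded Relay_Master_Log_File
    MASTER_LOG_POS=4107264;             -- the recorded Exec_Master_Log_Pos
START SLAVE;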
You can collect the list of databases and script a parallel dump of all of them:
DBLIST=/tmp/ListOfDatabasesToParallelDump.txt
SSS=/tmp/ShowSlaveStatusDisplay.txt
BACKUP_BASE=/backups
BACKUP_DATE=`date +"%Y%m%d_%H%M%S"`
BACKUP_HOME=${BACKUP_BASE}/${BACKUP_DATE}
mkdir ${BACKUP_HOME}
cd ${BACKUP_HOME}
mysql -h... -u... -p... -e"STOP SLAVE;"
mysql -h... -u... -p... -e"SHOW SLAVE STATUS\G" > ${SSS}
LOGFIL=`cat ${SSS} | grep "Relay_Master_Log_File" | awk '{print $2}'`
LOGPOS=`cat ${SSS} | grep "Exec_Master_Log_Pos" | awk '{print $2}'`
echo "Master was at ${LOGFIL} Position ${LOGPOS} for this Backup" > Master_Log_FilePos.txt
mysql -h... -u... -p... -AN -e"SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','mysql','performance_schema')" > ${DBLIST}
for DB in `cat ${DBLIST}`
do
mysqldump -h... -u... -p... --hex-blob --routines --triggers ${DB} | gzip > ${DB}.sql.gz &
done
wait
mysql -h... -u... -p... -e"START SLAVE;"
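Loading those dumps back later is just the reverse; a minimal sketch, assuming you are restoring into an empty server with the same placeholder credentials:

cd ${BACKUP_HOME}
for DUMP in *.sql.gz
do
    DB=`basename ${DUMP} .sql.gz`
    mysql -h... -u... -p... -e"CREATE DATABASE IF NOT EXISTS \`${DB}\`"
    gunzip < ${DUMP} | mysql -h... -u... -p... ${DB}
done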
If there are simply too many databases, dump them 10 or 20 at a time like this:
DBLIST=/tmp/ListOfDatabasesToParallelDump.txt
SSS=/tmp/ShowSlaveStatusDisplay.txt
BACKUP_BASE=/backups
BACKUP_DATE=`date +"%Y%m%d_%H%M%S"`
BACKUP_HOME=${BACKUP_BASE}/${BACKUP_DATE}
mkdir ${BACKUP_HOME}
cd ${BACKUP_HOME}
mysql -h... -u... -p... -e"STOP SLAVE;"
mysql -h... -u... -p... -e"SHOW SLAVE STATUS\G" > ${SSS}
LOGFIL=`cat ${SSS} | grep "Relay_Master_Log_File" | awk '{print $2}'`
LOGPOS=`cat ${SSS} | grep "Exec_Master_Log_Pos" | awk '{print $2}'`
echo "Master was at ${LOGFIL} Position ${LOGPOS} for this Backup" > Master_Log_FilePos.txt
mysql -h... -u... -p... -AN -e"SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','mysql','performance_schema')" > ${DBLIST}
COMMIT_LIMIT=20
COMMIT_COUNT=0
for DB in `cat ${DBLIST}`
do
mysqldump -h... -u... -p... --hex-blob --routines --triggers ${DB} | gzip > ${DB}.sql.gz &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
wait
mysql -h... -u... -p... -e"START SLAVE;"
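If you would rather not hand-roll the batching, GNU xargs can keep a fixed number of dumps running at all times instead of waiting for each batch to finish; a rough equivalent sketch (same ${DBLIST} and placeholder credentials assumed):

cat ${DBLIST} | xargs -I{} -P 20 sh -c 'mysqldump -h... -u... -p... --hex-blob --routines --triggers {} | gzip > {}.sql.gz'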
If you ever need to restore an individual table, you can dump the tables in parallel, 20 at a time, in size order.
Try this:
TBLIST=/tmp/ListOfTablesToParallelDump.txt
SSS=/tmp/ShowSlaveStatusDisplay.txt
BACKUP_BASE=/backups
BACKUP_DATE=`date +"%Y%m%d_%H%M%S"`
BACKUP_HOME=${BACKUP_BASE}/${BACKUP_DATE}
mkdir ${BACKUP_HOME}
cd ${BACKUP_HOME}
mysql -h... -u... -p... -e"STOP SLAVE;"
mysql -h... -u... -p... -e"SHOW SLAVE STATUS\G" > ${SSS}
LOGFIL=`cat ${SSS} | grep "Relay_Master_Log_File" | awk '{print $2}'`
LOGPOS=`cat ${SSS} | grep "Exec_Master_Log_Pos" | awk '{print $2}'`
echo "Master was at ${LOGFIL} Position ${LOGPOS} for this Backup" > Master_Log_FilePos.txt
mysql -h... -u... -p... -AN -e"SELECT CONCAT(table_schema,'.',table_name) FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','mysql','performance_schema') ORDER BY data_length" > ${TBLIST}
COMMIT_LIMIT=20
COMMIT_COUNT=0
for DBTB in `cat ${TBLIST}`
do
DB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $1}'`
TB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $2}'`
DUMPFILE=DB-${DB}-TBL-${TB}.sql.gz
mysqldump -h... -u... -p... --hex-blob --triggers ${DB} ${TB} | gzip > ${DUMPFILE} &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
wait
mysql -h... -u... -p... -e"START SLAVE;"
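Restoring a single table from those per-table dumps is then a one-liner; for example, with a hypothetical schema mydb and table mytable:

gunzip < DB-mydb-TBL-mytable.sql.gz | mysql -h... -u... -p... mydb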
Now that you have scripts for dumping the databases or individual tables, it is up to you to load that data at your discretion. If you need to replay SQL from the binary logs on the master, you can give mysqlbinlog the position or a datetime and send the SQL output to another text file. You just have to perform due diligence to find the amount of data you need from whatever timestamps the binary logs carry. Just remember that each binary log's timestamp in the OS represents the last time it was written to.
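For example, to extract everything from the position recorded in Master_Log_FilePos.txt up to a chosen datetime, you could run something like this on the master (the binary log path, position, and datetime shown are illustrative; --start-position and --stop-datetime are standard mysqlbinlog options):

mysqlbinlog --start-position=4107264 --stop-datetime="2023-06-01 12:00:00" /var/lib/mysql/mysql-bin.000123 > PointInTime.sql
mysql -h... -u... -p... < PointInTime.sql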