MongoDB replica set secondary stuck in ROLLBACK state


11

During a recent automatic failover of our MongoDB PRIMARY, the PRIMARY that stepped down permanently went into the ROLLBACK state.

After several hours in the ROLLBACK state, there was still no rollback .bson file in the rollback directory under the MongoDB database directory. That, plus this line in our log file: [rsSync] replSet syncThread: 13410 replSet too much data to roll back, seems to indicate that the ROLLBACK process failed.

I would like some help analyzing what exactly went wrong.

  • It appears from our logs that two different rollbacks occurred. Is that the case, or did a single rollback take 3 hours?
  • If the first rollback (at 19:00) succeeded, why did nothing appear in our rollback directory?
  • Any guesses as to the cause of all those warnings? Could they be related to the rollback failure?
  • Did we lose 18 seconds of data due to the first ROLLBACK?
  • Is there a generic solution to the "stuck in ROLLBACK state" problem? We ended up having to blow away our entire database and re-sync from the primary.

The relevant log lines are:

# Primary coming back after restart...
Tue May 15 19:01:01 [initandlisten] MongoDB starting : pid=3684 port=27017 dbpath=/var/lib/mongodb 64-bit host=magnesium
Tue May 15 19:01:01 [initandlisten] db version v2.0.5, pdfile version 4.5
# ... init stuff
Tue May 15 19:01:01 [initandlisten] journal dir=/var/lib/mongodb/journal
Tue May 15 19:01:01 [initandlisten] recover : no journal files present, no recovery needed
# ... More init stuff
Tue May 15 19:01:03 [rsStart] trying to contact rs1arb1.c9w.co:27017
Tue May 15 19:01:03 [rsStart] trying to contact rs1m2.c9w.co:27017
Tue May 15 19:01:03 [rsStart] replSet STARTUP2
Tue May 15 19:01:03 [rsHealthPoll] replSet member rs1arb1.c9w.co:27017 is up
Tue May 15 19:01:03 [rsHealthPoll] replSet member rs1arb1.c9w.co:27017 is now in state ARBITER
Tue May 15 19:01:03 [rsSync] replSet SECONDARY
Tue May 15 19:01:05 [rsHealthPoll] replSet member rs1m2.c9w.co:27017 is up
Tue May 15 19:01:05 [rsHealthPoll] replSet member rs1m2.c9w.co:27017 is now in state PRIMARY
Tue May 15 19:01:09 [rsSync] replSet syncing to: rs1m2.c9w.co:27017
Tue May 15 19:01:09 [rsSync] replSet our last op time written: May 15 19:00:51:6
Tue May 15 19:01:09 [rsSync] replSet rollback 0
Tue May 15 19:01:09 [rsSync] replSet ROLLBACK
Tue May 15 19:01:09 [rsSync] replSet rollback 1
Tue May 15 19:01:09 [rsSync] replSet rollback 2 FindCommonPoint
Tue May 15 19:01:09 [rsSync] replSet info rollback our last optime:   May 15 19:00:51:6
Tue May 15 19:01:09 [rsSync] replSet info rollback their last optime: May 15 19:01:09:19
Tue May 15 19:01:09 [rsSync] replSet info rollback diff in end of log times: -18 seconds
Tue May 15 19:01:10 [rsSync] replSet WARNING ignoring op on rollback no _id TODO : nimbus.system.indexes { ts: Timestamp 1337108400000|17, h: 1628369028235805797, op: "i", ns: "nimbus.system.indexes", o: { unique: true, name: "pascalquery_ns_key_start_ts_keyvals", key: { __ns__: 1, _key: 1, start_ts: 1, _keyval.a: 1, _keyval.b: 1, _keyval.c: 1, _keyval.d: 1, _keyval.e: 1, _keyval.f: 1, _keyval.g: 1, _keyval.h: 1 }, ns: "nimbus.wifi_daily_series", background: true } }
# ...
# Then for several minutes there are similar warnings
# ...
Tue May 15 19:03:52 [rsSync] replSet WARNING ignoring op on rollback no _id TODO : nimbus.system.indexes { ts: Timestamp 1337097600000|204, h: -3526710968279064473, op: "i", ns: "nimbus.system.indexes", o: { unique: true, name: "pascalquery_ns_key_start_ts_keyvals", key: { __ns__: 1, _key: 1, start_ts: 1, _keyval.a: 1, _keyval.b: 1, _keyval.c: 1, _keyval.d: 1, _keyval.e: 1, _keyval.f: 1, _keyval.g: 1, _keyval.h: 1 }, ns: "nimbus.wifi_daily_series", background: true } }
Tue May 15 19:03:54 [rsSync] replSet rollback found matching events at May 15 15:59:13:181
Tue May 15 19:03:54 [rsSync] replSet rollback findcommonpoint scanned : 6472020
Tue May 15 19:03:54 [rsSync] replSet replSet rollback 3 fixup

Then, for some reason, another rollback occurred...

Tue May 15 22:14:24 [rsSync] replSet rollback re-get objects: 13410 replSet too much data to roll back
Tue May 15 22:14:26 [rsSync] replSet syncThread: 13410 replSet too much data to roll back
Tue May 15 22:14:37 [rsSync] replSet syncing to: rs1m2.c9w.co:27017
Tue May 15 22:14:37 [rsSync] replSet syncThread: 13106 nextSafe(): { $err: "capped cursor overrun during query: local.oplog.rs", code: 13338 }
Tue May 15 22:14:48 [rsSync] replSet syncing to: rs1m2.c9w.co:27017
Tue May 15 22:15:30 [rsSync] replSet our last op time written: May 15 19:00:51:6
Tue May 15 22:15:30 [rsSync] replSet rollback 0
Tue May 15 22:15:30 [rsSync] replSet rollback 1
Tue May 15 22:15:30 [rsSync] replSet rollback 2 FindCommonPoint
Tue May 15 22:15:30 [rsSync] replSet info rollback our last optime:   May 15 19:00:51:6
Tue May 15 22:15:30 [rsSync] replSet info rollback their last optime: May 15 22:15:30:9
Tue May 15 22:15:30 [rsSync] replSet info rollback diff in end of log times: -11679 seconds
# More warnings matching the above warnings
Tue May 15 22:17:30 [rsSync] replSet rollback found matching events at May 15 15:59:13:181
Tue May 15 22:17:30 [rsSync] replSet rollback findcommonpoint scanned : 7628640
Tue May 15 22:17:30 [rsSync] replSet replSet rollback 3 fixup

The only useful information about rollbacks I have found is in these notes, which do not address the "stuck in rollback" situation: http://www.mongodb.org/display/DOCS/Replica+Sets+-+Rollbacks http://www.snailinaturtleneck.com/blog/2011/01/19/how-to-use-replica-set-rollbacks/


Answers:


7

When a MongoDB instance goes into the rollback state and the data to roll back is greater than 300MB, you have to intervene manually. The instance will stay in the rollback state until you take action to save/remove/move that data; the node (now a secondary) should then re-sync to bring it back in line with the primary. This does not have to be a full resync, but that is the simplest way.
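For reference, when a rollback does succeed, MongoDB writes the rolled-back documents as BSON dumps under the rollback/ directory of the dbpath, where they can be inspected with bsondump and, if the writes should be kept, re-applied with mongorestore. A minimal sketch (the file names and paths below are illustrative, not from the logs above):

```shell
# Rollback files land under <dbpath>/rollback/ as BSON dumps
# (one file per affected collection). Paths here are illustrative.
ls /var/lib/mongodb/rollback/

# Inspect what was rolled back before deciding whether to re-apply it.
bsondump /var/lib/mongodb/rollback/nimbus.wifi_daily_series.bson

# If the rolled-back writes should be kept, restore them into the
# current primary (consider targeting a scratch collection first).
mongorestore --host rs1m2.c9w.co --port 27017 \
    --db nimbus --collection wifi_daily_series \
    /var/lib/mongodb/rollback/nimbus.wifi_daily_series.bson
```

These commands need a live replica set and real rollback files, so treat the block as a procedure outline rather than a runnable script.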

Multiple rollbacks are a symptom rather than the cause of the problem. A rollback only happens when a secondary that was not in sync (due to lag or a replication problem) becomes primary and takes writes. So the causes of those issues are what need to be addressed first - the rollback itself is something you need to handle as an administrator - there are too many potential pitfalls for MongoDB to reconcile the data automatically.
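To catch that underlying lag before a failover turns it into a rollback, the oplog window and per-secondary lag can be checked from the mongo shell. A sketch using the 2.0-era shell helpers (hostname taken from the question's logs; output format varies by version):

```shell
# From a mongo shell connected to the primary:
# db.printReplicationInfo() shows the oplog size and its time window;
# db.printSlaveReplicationInfo() shows how far behind each secondary is.
mongo --host rs1m2.c9w.co --eval '
  db.printReplicationInfo();
  db.printSlaveReplicationInfo();
'
```

If the lag reported for a secondary ever approaches the oplog window, that member is at risk of exactly the kind of divergence described above.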

If you want to reproduce this for testing purposes, I have outlined how to do so here:

http://comerford.cc/2012/05/28/simulating-rollback-on-mongodb/

Eventually, that data will be stored in a collection (in the local database) rather than dumped to disk, which will present opportunities to handle it more cleanly:

https://jira.mongodb.org/browse/SERVER-4375

For now, however, as you found, once a rollback happens, manual intervention is required.

Finally, the manual now contains information similar to Kristina's blog post:

https://docs.mongodb.com/manual/core/replica-set-rollbacks


2

The "solution" we ended up using was to completely blow away the database on the machine stuck in ROLLBACK mode and allow the newly-emptied SECONDARY to re-sync from the primary. This seems like a suboptimal solution because, as far as I could tell, we still had plenty of room in the oplog, so in theory the machine should have been able to re-sync.
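For the record, the blow-away-and-resync route boils down to the following steps (service names and paths assume a stock Ubuntu/Debian mongodb install with dbpath /var/lib/mongodb, matching the logs above; adapt to your setup):

```shell
# On the member stuck in ROLLBACK. Moving the old data directory
# aside is safer than deleting it outright.
sudo service mongodb stop
sudo mv /var/lib/mongodb /var/lib/mongodb.bak   # keep the old data just in case
sudo mkdir /var/lib/mongodb
sudo chown mongodb:mongodb /var/lib/mongodb
sudo service mongodb start
# mongod rejoins the replica set with an empty dbpath and performs
# an initial sync from the primary.
```

Once the initial sync completes and the member reaches SECONDARY state, the /var/lib/mongodb.bak copy can be deleted.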

Hopefully someone will come up with a better answer for this.


Thanks for reporting back on what you did. If you find any more information about this problem, please come back and update your answer (or provide another one).
Nick Chammas
Licensed under cc by-sa 3.0 with attribution required.