Add retention leases replication tests #38857
Conversation
Pinging @elastic/es-distributed
}
}
}
group.promoteReplicaToPrimary(newPrimary).get();
Discuss: should we align the retention-leases when a new primary is promoted?
Under what circumstances can the new primary not hold an up-to-date set of leases already? It might perhaps be missing some renewals but I think that's ok.
We are adding two new leases to the old primary: L1 and L2. L1 was synced to replica r1 and L2 was synced to r2, but the old primary crashed before both leases were properly synced to both replicas. If either replica is promoted, then the retention leases between the copies are not aligned.
We sync by copying all the leases from the primary to its replicas, so I don't follow how r2 could receive L2 without also receiving L1 (assuming they were added in this order).
However, I think I do see a potential problem:
- primary A shares a lease with one replica B, but not with another replica C
- A crashes
- C discards some history that the lease would have retained
- B is promoted to primary and shares its lease with C
- C cannot accept this lease since it has already discarded this history
I think we can prevent this, with peer-recovery retention leases, by insisting that leases do not "go backwards", i.e. they only retain history that is already being retained by another lease. This would mean that C could not discard the history in the situation above because it must already hold a different lease that retains that history.
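To make that "no going backwards" rule concrete, here is a minimal sketch of the check a copy could apply when a lease is added (all names here are illustrative assumptions, not the actual Elasticsearch implementation):

```java
import java.util.stream.LongStream;

final class RetentionLeaseInvariantSketch {

    /**
     * A new lease may only retain history that some existing lease already retains,
     * i.e. its retaining sequence number must not fall below the minimum sequence
     * number currently retained. (If there are no existing leases we accept the new
     * lease here; the real design would lean on an always-present peer-recovery
     * lease so that this case does not arise.)
     */
    static boolean doesNotGoBackwards(final long newRetainingSeqNo, final LongStream existingRetainingSeqNos) {
        final long minimumRetained = existingRetainingSeqNos.min().orElse(Long.MIN_VALUE);
        return newRetainingSeqNo >= minimumRetained;
    }
}
```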
run elasticsearch-ci/1
Looks good - I needed to make similar adjustments to support peer-recovery retention leases. I left some minor comments. Also there are now merge conflicts.
@Override
public void backgroundSync(ShardId shardId, RetentionLeases retentionLeases) {
    sync(shardId, retentionLeases, ActionListener.wrap(r -> {}, e -> fail("fail to sync retention leases [" + e + "]")));
I slightly prefer throw new AssertionError("failed to sync retention leases", e); rather than putting the inner stack trace in the message like this.
done
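For reference, the revised handler would look roughly like this (a sketch based on the diff context above; the surrounding test class and the sync(...) signature are assumed rather than reproduced exactly):

```java
@Override
public void backgroundSync(ShardId shardId, RetentionLeases retentionLeases) {
    // Propagate a sync failure as an AssertionError, keeping the original exception
    // as the cause instead of flattening its stack trace into the message.
    sync(shardId, retentionLeases, ActionListener.wrap(
            r -> {},
            e -> {
                throw new AssertionError("failed to sync retention leases", e);
            }));
}
```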
public static WritePrimaryResult<Request, Response> performOnPrimary(final Request request,
                                                                     final IndexShard primary,
                                                                     final Logger logger) {
Can we assert that we hold an operation permit here?
The sync action is too thin now, so I inlined it directly into the test.
    return performOnReplica(request, replica, logger);
}
public static WriteReplicaResult<Request> performOnReplica(final Request request, final IndexShard replica, final Logger logger) { |
Can we assert that we hold an operation permit here?
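The assertion being asked for might look like the sketch below, assuming IndexShard exposes getActiveOperationsCount() (an assumption on my part, and not necessarily how the PR resolves it, since the sync action was later inlined into the test). The same check would sit at the top of both performOnPrimary and performOnReplica:

```java
private static void assertOperationPermitHeld(final IndexShard shard) {
    // Retention lease syncs are expected to run under an operation permit that
    // the caller has already acquired on this shard copy.
    assert shard.getActiveOperationsCount() > 0
            : "retention lease sync must hold an operation permit on " + shard.shardId();
}
```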
Thanks for looking @DaveCTurner. I have addressed your comments. Can you have another look?
LGTM
Thanks @DaveCTurner.
* master:
  Mute failing CCR retention lease unfollow test
  Add support for ccr follow info api to HLRC. (elastic#39115)
  Do not create the missing index when invoking getRole (elastic#39039)
  Relax history check in ShardFollowTaskReplicationTests (elastic#39162)
  Add retention leases replication tests (elastic#38857)
  Edits to text in Phrase Suggester doc (elastic#38966)
  Edits to text in API Conventions docs (elastic#39001)
This commit introduces retention leases to ESIndexLevelReplicationTestCase, then adds some tests verifying that retention lease replication works correctly in the presence of primary failover or out-of-order delivery of retention lease sync requests.
Relates #37165
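A minimal sketch of the kind of failover test the commit message describes (helper names such as addRetentionLease and the exact assertions are assumptions, not the PR's actual code; only promoteReplicaToPrimary appears in the diff context above):

```java
public void testRetentionLeasesAfterPrimaryFailover() throws Exception {
    try (ReplicationGroup group = createGroup(2)) { // one primary, two replicas
        group.startAll();

        // Add a lease on the primary; the sync should copy it to both replicas.
        // addRetentionLease(...) is an assumed helper here.
        group.addRetentionLease("lease-1", 0L, "test");

        // Fail over to one of the replicas.
        final IndexShard newPrimary = randomFrom(group.getReplicas());
        group.promoteReplicaToPrimary(newPrimary).get();

        // After promotion and a subsequent sync, every copy should hold the same
        // set of retention leases as the new primary.
        for (IndexShard shard : group) {
            assertThat(shard.getRetentionLeases(), equalTo(newPrimary.getRetentionLeases()));
        }
    }
}
```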