Remove dangling index auto import functionality #59698


Merged
35 changes: 9 additions & 26 deletions docs/reference/modules/gateway.asciidoc
@@ -56,29 +56,12 @@ NOTE: These settings only take effect on a full cluster restart.
[[dangling-indices]]
==== Dangling indices

When a node joins the cluster, if it finds any shards stored in its local data
directory that do not already exist in the cluster, it will consider those
shards to be "dangling". Importing dangling indices
into the cluster using `gateway.auto_import_dangling_indices` is not safe.
Instead, use the <<dangling-indices-api,Dangling indices API>>. Neither
mechanism provides any guarantees as to whether the imported data truly
represents the latest state of the data when the index was still part of
the cluster.

`gateway.auto_import_dangling_indices`::

deprecated:[7.9.0, This setting will be removed in 8.0. You should use the dedicated dangling indices API instead.]
Whether to automatically import dangling indices into the cluster
state, provided no indices already exist with the same name. Defaults
to `false`.

WARNING: The auto-import functionality was intended as a best effort to help users
who lose all master nodes. For example, if a new master node were to be
started which was unaware of the other indices in the cluster, adding the
old nodes would cause the old indices to be imported, instead of being
deleted. However, there are several issues with automatic importing, and
its use is strongly discouraged in favour of the
<<dangling-indices-api,dedicated API>>.

WARNING: Losing all master nodes is a situation that should be avoided at
all costs, as it puts your cluster's metadata and data at risk.
When a node joins the cluster, if it finds any shards stored in its local
data directory that do not already exist in the cluster, it will consider
those shards to belong to a "dangling" index. You can list, import or
delete dangling indices using the <<dangling-indices-api,Dangling indices
API>>.

NOTE: The API cannot offer any guarantees as to whether the imported data
truly represents the latest state of the data when the index was still part
of the cluster.
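
As a brief illustration of the API the new text points to, here is a minimal sketch using the dangling indices REST endpoints (the `<index-uuid>` placeholder comes from the list response; `accept_data_loss=true` acknowledges that the imported data may be stale):

[source,console]
----
GET /_dangling

POST /_dangling/<index-uuid>?accept_data_loss=true

DELETE /_dangling/<index-uuid>?accept_data_loss=true
----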
@@ -39,7 +39,6 @@
import java.util.concurrent.atomic.AtomicReference;

import static org.elasticsearch.cluster.metadata.IndexGraveyard.SETTING_MAX_TOMBSTONES;
import static org.elasticsearch.gateway.DanglingIndicesState.AUTO_IMPORT_DANGLING_INDICES_SETTING;
import static org.elasticsearch.indices.IndicesService.WRITE_DANGLING_INDICES_INFO_SETTING;
import static org.elasticsearch.rest.RestStatus.ACCEPTED;
import static org.elasticsearch.rest.RestStatus.OK;
@@ -70,7 +69,6 @@ private Settings buildSettings(int maxTombstones) {
// when we delete an index, it's definitely considered to be dangling.
.put(SETTING_MAX_TOMBSTONES.getKey(), maxTombstones)
.put(WRITE_DANGLING_INDICES_INFO_SETTING.getKey(), true)
.put(AUTO_IMPORT_DANGLING_INDICES_SETTING.getKey(), false)
.build();
}

@@ -31,7 +31,6 @@
import org.elasticsearch.env.TestEnvironment;
import org.elasticsearch.gateway.GatewayMetaState;
import org.elasticsearch.gateway.PersistedClusterStateService;
import org.elasticsearch.indices.IndicesService;
import org.elasticsearch.node.Node;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.InternalTestCluster;
@@ -41,12 +40,8 @@
import java.util.List;
import java.util.Locale;

import static org.elasticsearch.action.support.WriteRequest.RefreshPolicy.IMMEDIATE;
import static org.elasticsearch.gateway.DanglingIndicesState.AUTO_IMPORT_DANGLING_INDICES_SETTING;
import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;
import static org.elasticsearch.indices.recovery.RecoverySettings.INDICES_RECOVERY_MAX_BYTES_PER_SEC_SETTING;
import static org.elasticsearch.test.NodeRoles.nonMasterNode;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;
@@ -312,67 +307,6 @@ public void test3MasterNodes2Failed() throws Exception {
ensureStableCluster(4);
}

public void testAllMasterEligibleNodesFailedDanglingIndexImport() throws Exception {
internalCluster().setBootstrapMasterNodeIndex(0);

Settings settings = Settings.builder()
.put(AUTO_IMPORT_DANGLING_INDICES_SETTING.getKey(), true)
.build();

logger.info("--> start mixed data and master-eligible node and bootstrap cluster");
String masterNode = internalCluster().startNode(settings); // node ordinal 0

logger.info("--> start data-only node and ensure 2 nodes stable cluster");
String dataNode = internalCluster().startDataOnlyNode(settings); // node ordinal 1
ensureStableCluster(2);

logger.info("--> index 1 doc and ensure index is green");
client().prepareIndex("test").setId("1").setSource("field1", "value1").setRefreshPolicy(IMMEDIATE).get();
ensureGreen("test");
assertBusy(() -> internalCluster().getInstances(IndicesService.class).forEach(
indicesService -> assertTrue(indicesService.allPendingDanglingIndicesWritten())));

logger.info("--> verify 1 doc in the index");
assertHitCount(client().prepareSearch().setQuery(matchAllQuery()).get(), 1L);
assertThat(client().prepareGet("test", "1").execute().actionGet().isExists(), equalTo(true));

logger.info("--> stop data-only node and detach it from the old cluster");
Settings dataNodeDataPathSettings = Settings.builder()
.put(internalCluster().dataPathSettings(dataNode), true)
.put(AUTO_IMPORT_DANGLING_INDICES_SETTING.getKey(), true)
.build();
assertBusy(() -> internalCluster().getInstance(GatewayMetaState.class, dataNode).allPendingAsyncStatesWritten());
internalCluster().stopRandomNode(InternalTestCluster.nameFilter(dataNode));
final Environment environment = TestEnvironment.newEnvironment(
Settings.builder()
.put(internalCluster().getDefaultSettings())
.put(dataNodeDataPathSettings)
.put(AUTO_IMPORT_DANGLING_INDICES_SETTING.getKey(), true)
.build());
detachCluster(environment, false);

logger.info("--> stop master-eligible node, clear its data and start it again - new cluster should form");
internalCluster().restartNode(masterNode, new InternalTestCluster.RestartCallback(){
@Override
public boolean clearData(String nodeName) {
return true;
}
});

logger.info("--> start data-only only node and ensure 2 nodes stable cluster");
internalCluster().startDataOnlyNode(dataNodeDataPathSettings);
ensureStableCluster(2);

logger.info("--> verify that the dangling index exists and has green status");
assertBusy(() -> {
assertThat(indexExists("test"), equalTo(true));
});
ensureGreen("test");

logger.info("--> verify the doc is there");
assertThat(client().prepareGet("test", "1").execute().actionGet().isExists(), equalTo(true));
}

public void testNoInitialBootstrapAfterDetach() throws Exception {
internalCluster().setBootstrapMasterNodeIndex(0);
String masterNode = internalCluster().startMasterOnlyNode();