
Fix handling of duplicate error messages in test cluster logs #69494

Merged
merged 2 commits on Feb 24, 2021

Conversation

mark-vieira
Contributor

As part of #68333 I mistakenly broke support for de-duplication of error messages. We use a normalized form of the error message as a Map key to handle de-duplication for us. I naively replaced that key with the original message so that we wouldn't lose the original message timestamp, but since each timestamped message is unique, this completely breaks de-duplication. The other side effect is that for very large logs with lots of repeated errors we end up loading them all into memory and blowing out the build JVM heap 😨

This change reverts to using the normalized message as the Map key while also retaining the original message. The net result is that we go back to dumping log messages formatted like so (a sketch of the keying scheme follows the example output):

»    ↓ errors and warnings from /home/mark/workspaces/elasticsearch-7.x/qa/mixed-cluster/build/testclusters/v6.8.15-0/logs/v6.8.15.log ↓
» [2021-02-23T15:12:17,203][ERROR][o.e.x.c.t.IndexTemplateRegistry] [v6.8.15-0] error adding index template [.deprecation-indexing-settings] from [/org/elasticsearch/xpack/deprecation/deprecation-indexing-settings.json] for [deprecation]
»  org.elasticsearch.transport.RemoteTransportException: [v6.8.15-3][127.0.0.1:37793][cluster:admin/component_template/put]
»  Caused by: org.elasticsearch.transport.ActionNotFoundTransportException: No handler for action [cluster:admin/component_template/put]
»  	at org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1033) ~[elasticsearch-7.13.0-SNAPSHOT.jar:7.13.0-SNAPSHOT]
»  	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:932) ~[elasticsearch-7.13.0-SNAPSHOT.jar:7.13.0-SNAPSHOT]
»  	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:763) ~[elasticsearch-7.13.0-SNAPSHOT.jar:7.13.0-SNAPSHOT]
»  	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:53) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
»  	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) ~[?:?]
»  	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
»  	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[?:?]
»  	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[?:?]
»  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[?:?]
»  	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) ~[?:?]
»  	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[?:?]
»  	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) ~[?:?]
»  	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) ~[?:?]
»  	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) ~[?:?]
»  	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) ~[?:?]
»  	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) ~[?:?]
»  	at java.lang.Thread.run(Thread.java:748) [?:?]
»   ↑ repeated 4794 times ↑
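For illustration only, here is a minimal sketch of that keying scheme (class, method, and regex details are hypothetical, not the actual build code): the normalized message serves as the Map key, while the value keeps the first original, timestamped message plus a repeat count.

import java.util.AbstractMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration of the de-duplication scheme: key by the
// normalized message, keep the first original (timestamped) message
// plus a repeat count as the value.
class ErrorDeduplicator {
    private final Map<String, Map.Entry<String, Integer>> seen = new LinkedHashMap<>();

    void add(String originalMessage) {
        // Normalize by stripping the leading "[timestamp]" block so that
        // repeats of the same error collapse onto a single key.
        String normalized = originalMessage.replaceFirst("^\\[[^\\]]+\\]\\s*", "");
        seen.merge(
            normalized,
            new AbstractMap.SimpleEntry<>(originalMessage, 1),
            (existing, ignored) -> new AbstractMap.SimpleEntry<>(existing.getKey(), existing.getValue() + 1)
        );
    }

    void dump() {
        seen.values().forEach(entry -> {
            System.out.println(entry.getKey()); // original message, timestamp intact
            if (entry.getValue() > 1) {
                System.out.println("  ↑ repeated " + entry.getValue() + " times ↑");
            }
        });
    }
}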

@mark-vieira added the :Delivery/Build (Build or test infrastructure) label on Feb 23, 2021
@elasticmachine added the Team:Delivery (Meta label for Delivery team) label on Feb 23, 2021
@elasticmachine
Collaborator

Pinging @elastic/es-delivery (Team:Delivery)

@mark-vieira
Contributor Author

@elasticmachine run elasticsearch-ci/1

@mark-vieira
Contributor Author

@elasticmachine update branch


package org.elasticsearch.gradle.util;

public class Pair<L, R> {
Contributor

Is there no existing tuple class already on the classpath that we could use?

Contributor Author

Gradle has one, but it's in an internal package, so I opted to just write one since it's so trivial.
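For reference, a minimal completion of that trivial Pair class might look like the following (a sketch only; the actual implementation added in this PR may differ in naming and details):

package org.elasticsearch.gradle.util;

public class Pair<L, R> {
    private final L left;
    private final R right;

    private Pair(L left, R right) {
        this.left = left;
        this.right = right;
    }

    public static <L, R> Pair<L, R> of(L left, R right) {
        return new Pair<>(left, right);
    }

    public L left() {
        return left;
    }

    public R right() {
        return right;
    }
}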

Labels
:Delivery/Build (Build or test infrastructure), Team:Delivery (Meta label for Delivery team), v7.12.1, v7.13.0, v8.0.0-alpha1
4 participants