Add support for TLS Server Name Indication (SNI) extension #32517
Comments
Pinging @elastic/es-security
@jaymode @tbrooks8 my plan for this was to add SNI support to ConnectionProfiles and then pass it down to
That sounds reasonable to me
This is only relevant for security transports right? Is the plaintext transport supposed to throw exceptions if a connection profile that requests SNI features is passed down? Also I think this is actually (maybe) going to be a little tricky? Is the SNI name that is used always the hostname? Or can it be something else? In both NIO and Netty here is how security channels are created:
It is somewhere around there that we will need to set the SNI parameters on the
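For reference, a hedged sketch of what setting the SNI parameter on a JDK `SSLEngine` looks like; the helper name and hostnames are illustrative, not the actual transport code:

```java
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;
import java.util.Collections;

public class SniClientSketch {
    // Hypothetical helper: create a client-mode SSLEngine that will send the
    // given host as the TLS SNI server name in its ClientHello.
    static SSLEngine clientEngineWithSni(SSLContext context, String host, int port) {
        SSLEngine engine = context.createSSLEngine(host, port);
        engine.setUseClientMode(true);
        SSLParameters params = engine.getSSLParameters();
        params.setServerNames(Collections.singletonList(new SNIHostName(host)));
        engine.setSSLParameters(params);
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = clientEngineWithSni(SSLContext.getDefault(), "node1.example.com", 9300);
        System.out.println(engine.getSSLParameters().getServerNames());
    }
}
```

In both the Netty and NIO transports this would happen where the channel's engine is created, before the handshake begins.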
@tbrooks8 those are a lot of questions. The most relevant one is this I guess:
@jaymode @alexbrasetvik is the hostname enough for iteration 1 or do we need to do anything else here? I also wonder if we need to set this parameter conditionally, i.e. only for CCR / CCS connections but not for within-cluster connections?
I think a hostname should be enough for iteration 1. That would not let us route to specific nodes, but that is fine for now. It could be useful to have an optional hostname to override what's presented as the SNI-hostname, as otherwise we'll have to have the publicly visible (and actually resolving) hostname be a SAN in the target certificate. cc @nkvoll @AlexP-Elastic @franekrichardson to keep me honest
I had a chat with @alexbrasetvik and subsequently with @tbrooks8 and it seems doable to pass down a per connection SNI to the transport. My idea for this on the API end is this:
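The snippet that originally followed here was not captured. As a hedged sketch only (names are illustrative, not the actual Elasticsearch API), the per-connection idea amounts to a connection profile carrying an optional SNI hint:

```java
// Illustrative sketch only: a connection profile that carries an optional
// per-connection SNI server name, falling back to the node's hostname.
public class ConnectionProfileSketch {
    private final String sniServerName; // null means "use the hostname"

    public ConnectionProfileSketch(String sniServerName) {
        this.sniServerName = sniServerName;
    }

    public String sniServerNameOr(String fallbackHostName) {
        return sniServerName != null ? sniServerName : fallbackHostName;
    }

    public static void main(String[] args) {
        ConnectionProfileSketch explicit = new ConnectionProfileSketch("public.example.com");
        ConnectionProfileSketch defaulted = new ConnectionProfileSketch(null);
        System.out.println(explicit.sniServerNameOr("10.0.0.1"));
        System.out.println(defaulted.sniServerNameOr("node1.internal"));
    }
}
```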
this would also work for transport clients if they use sniffing through a proxy, which is nice. I hope this makes sense
This is related to elastic#32517. This commit passes the DiscoveryNode to the initiateChannel method for different Transport implementations. This will allow additional attributes (besides just the socket address) to be used when opening channels.
Hi @s1monw next I am going to add an optional SNI server_name to the discovery node and, if it is present, add it to the
Pinging @elastic/es-core-infra
Oh - I see that there is an attribute map already. So you want the SNI server name to go in the map with a key
Yeah, I'd make this attribute optional, i.e. by default we would just use the hostname, and if the attribute is present we use it as well.
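A hedged sketch of that fallback over a node's attribute map — the `server_name` key follows the discussion above; the class and method names are illustrative:

```java
import java.util.Map;

public class SniAttributeSketch {
    // Resolve the SNI name for a node: use the optional "server_name"
    // attribute when present, otherwise fall back to the node's hostname.
    static String resolveSniName(String hostName, Map<String, String> attributes) {
        String attr = attributes.get("server_name");
        return attr != null ? attr : hostName;
    }

    public static void main(String[] args) {
        System.out.println(resolveSniName("node1.internal", Map.of("server_name", "node1.example.com")));
        System.out.println(resolveSniName("node1.internal", Map.of()));
    }
}
```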
This adds support for connecting to a remote cluster through a tcp proxy. A remote cluster can be configured with an additional `search.remote.$clustername.proxy` setting. This proxy will be used to connect to remote nodes for every node connection established. We still try to sniff the remote cluster and connect to nodes directly through the proxy, which has to support some kind of routing to these nodes. Yet, this routing mechanism requires the handshake request to include some kind of information about where to route to, which is not yet implemented. The effort to use the hostname and an optional node attribute for routing is tracked in elastic#32517. Closes elastic#31840
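A hedged configuration example of the proxy setting described above; the cluster name and addresses are illustrative:

```yaml
# elasticsearch.yml (illustrative values)
search.remote.cluster_one.seeds: ["10.0.0.1:9300"]
search.remote.cluster_one.proxy: "proxy.example.com:9300"
```

With this in place, every connection to a node of `cluster_one` is opened against the proxy address rather than the node's own address.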
This commit is related to #32517. It allows a "server_name" attribute on a DiscoveryNode to be propagated to the server using the TLS SNI extension. This functionality is only implemented for the netty security transport.
@tbrooks8 can we close this or do we wait until SNI is implemented on the other secure transport? Do we have an issue for the latter?
We can close it after the backport to 6.x (which is just waiting on the build to pass).
Closing as the work has been merged
This commit is related to elastic#32517. It allows an "sni_server_name" attribute on a DiscoveryNode to be propagated to the server using the TLS SNI extension. Prior to this commit, this functionality was only supported for the netty transport. This commit adds this functionality to the security nio transport.
In elastic#33062 we introduced the `cluster.remote.*.proxy` setting for proxied connections to remote clusters, but left it deliberately undocumented since it needed followup work so that it could work with SNI. However, since elastic#32517 is now closed we can add this documentation and remove the comment about its lack of documentation.
Java 8+ supports the TLS server name indication extension, which we should add support for. SNI will enable remote connections through a proxy to be routed to the correct endpoint. This is especially important for scenarios like cross cluster search where the connections from one cluster to another need to go through a proxy to establish a connection.
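For context, the JDK also exposes SNI on the accepting side through `SNIMatcher`, which is how a proxy or server can inspect the name a client sends and route or reject accordingly. A hedged sketch (the helper name and hosts are illustrative) of restricting a server's `SSLParameters` to one expected SNI name:

```java
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIMatcher;
import javax.net.ssl.SSLParameters;
import java.util.Collections;

public class SniServerSketch {
    // Illustrative helper: only accept TLS handshakes whose SNI server name
    // matches the expected host. createSNIMatcher takes a regular expression,
    // so literal dots in the hostname are escaped.
    static SSLParameters matchOnly(SSLParameters params, String expectedHost) {
        SNIMatcher matcher = SNIHostName.createSNIMatcher(expectedHost.replace(".", "\\."));
        params.setSNIMatchers(Collections.singletonList(matcher));
        return params;
    }

    public static void main(String[] args) {
        SSLParameters params = matchOnly(new SSLParameters(), "node1.example.com");
        SNIMatcher matcher = params.getSNIMatchers().iterator().next();
        System.out.println(matcher.matches(new SNIHostName("node1.example.com")));   // true
        System.out.println(matcher.matches(new SNIHostName("other.example.com")));   // false
    }
}
```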