Include templated systemd service file in packages #24867
Thanks for opening this @tylerjl, this is something that I had been thinking about too and I very much appreciate that you laid out everything so clearly here. I don't think we need to worry about SysV init at all. I for one have come to accept our new init system overlord.
+1. Here is an example of something I came across while trying to put together an install using the RPM (with an additional systemd unit for the 2nd instance). 2nd instance systemd script: […]
The problem here is that if the 2nd instance uses the same ES_HOME, the /usr/share/elasticsearch/bin/elasticsearch script will source /usr/share/elasticsearch/bin/elasticsearch-env, and when it gets to the part that sources the packaged environment file (/etc/sysconfig/elasticsearch on RPM systems), the environment variables set for the 2nd instance get overridden.
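For illustration only, a minimal sketch of the kind of second-instance unit being described; the unit name, paths, and limits are assumptions for this example, not the commenter's original file:

```ini
# /etc/systemd/system/elasticsearch-es-02.service — hypothetical second instance
[Unit]
Description=Elasticsearch (instance es-02)
After=network-online.target
Wants=network-online.target

[Service]
User=elasticsearch
Group=elasticsearch
# Per-instance values set here are the ones that get clobbered when
# elasticsearch-env later sources the packaged /etc/sysconfig/elasticsearch.
Environment=ES_HOME=/usr/share/elasticsearch
Environment=ES_PATH_CONF=/etc/elasticsearch-es-02
Environment=PID_DIR=/var/run/elasticsearch-es-02
ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```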
@ppf2 indeed; the […]
Linking #28159 to see if this is a bug we can address until there is formal support for multiple instances.
This is not a bug, this is behaving as intended. Reasonable people can disagree on whether or not that behavior is desirable, but it is not behaving differently than I intended it to.
@jasontedor fair enough, my main concern was whether the workaround (i.e., removing any env vars in question from the packaged environment file and setting them in the environment instead) is something that will remain supported.
That is completely supported. Note that this is the only way to supply custom environment variables in the archive distributions (there's no environment variable file to source; the script doesn't even include such a line, and we default `ES_PATH_CONF` to `$ES_HOME/config`).
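As a concrete illustration of that archive workflow (the install path and heap settings below are made-up examples):

```sh
# tar.gz/zip distribution: there is no /etc/sysconfig or /etc/default file;
# custom settings are exported in the environment before launching.
export ES_PATH_CONF=/opt/elasticsearch/node-2/config   # otherwise defaults to $ES_HOME/config
export ES_JAVA_OPTS="-Xms2g -Xmx2g"
/opt/elasticsearch/bin/elasticsearch
```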
Hey @jasontedor, maybe I missed the point somewhere in this conversation, but I still have the question: how is one expected to supply custom environment variables with the RPMs? From a product point of view, I believe all the deliverables (zip, rpm, msi, ...) should behave the same way. We had always been able to configure environment variables previously, and it gave us good flexibility; now that this is only supported in the archive, it becomes a problem for us. I would vote to add this flexibility to the RPM scripts as well. What do you think?
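For context, with the RPM/DEB packages the conventional place for environment variables is the packaged environment file; a sketch with illustrative values only (the exact set of honoured variables depends on the version):

```sh
# /etc/sysconfig/elasticsearch (RPM) or /etc/default/elasticsearch (DEB)
ES_PATH_CONF=/etc/elasticsearch
ES_JAVA_OPTS="-Xms4g -Xmx4g"
RESTART_ON_UPGRADE=true
```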
FWIW, this also broke the elasticsearch Chef cookbook on v6.x+. It's pretty weird that the init scripts respect existing variables but elasticsearch-env doesn't. (Is that change documented? I couldn't find it when I reviewed the guidance on upgrading and breaking changes.) It sounds like we should always remove /etc/sysconfig/elasticsearch or /etc/default/elasticsearch, and always write an elasticsearch-env file instead? Is that the way forward? How do we write multiple elasticsearch-env files for multiple instances? Or should we never share ES_HOME either?
We discussed this today in FixItThursday and agreed that we either need to document examples of how to run multiple nodes, or fully support it within our systemd service file(s). Given that we would want testing for such documentation, actually adding support makes that testing easier, so we have agreed to move forward with adding templated service files.
Any progress or hope that this will be included in the upcoming 7.14 or 8 release?
+1 on the template file (although changes to either elasticsearch-env or /etc/default/elasticsearch will be needed, see #28159 (comment))
FWIW, just using the command line argument […]
Any shared config between the instances goes in the config file. Any instance-specific settings go in the systemd template, as shown above. With this, you can spin up instances with […]
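A hedged sketch of the workflow that kind of templated unit enables once something like `elasticsearch@.service` is installed (the instance names and the override shown are examples, not the exact file discussed above):

```sh
# Each instance is addressed by the name after the "@" (systemd's %i specifier).
sudo systemctl enable --now elasticsearch@node-1
sudo systemctl enable --now elasticsearch@node-2

# Instance-specific settings can live in a per-instance drop-in:
sudo systemctl edit elasticsearch@node-2
#   [Service]
#   Environment=ES_PATH_CONF=/etc/elasticsearch/node-2

# List the running instances:
systemctl list-units 'elasticsearch@*'
```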
Describe the feature:
Although running more than one Elasticsearch process per machine is more of the exception than the rule, it's a common enough use case that both the ansible and puppet modules support this by manually defining service files for instances of Elasticsearch rather than using the service files that ship with the upstream packages. It isn't the prettiest way to run Elasticsearch, but we run into it a lot.
The primary problem here is that any upstream changes to the service files get lost once Puppet/Ansible/etc. suffer drift in the service file contents. Even if the config management service templates are kept in lock-step sync with upstream, there's the potential for different versions of service files to clash with the specific version of Elasticsearch being used (we haven't seen this yet, but it's more of a lucky streak than something we can rely on).
Prior to systemd, supporting this with the upstream Elasticsearch packages' SysV service files wasn't really an option; the desired instances at runtime can't be anticipated with service files like `/etc/init.d/elasticsearch-es-01`, `/etc/init.d/elasticsearch-es-02`, and so on. Older distros that don't support systemd will probably have to stick with the fragile method the config management solutions use today and keep the service files committed as templates in their respective modules.

With all that being said, systemd specifiers permit "template" units that allow multiple instances of a service to be instantiated from a single service file. If the Elasticsearch packages were to ship both `elasticsearch.service` and `elasticsearch@%i.service`, existing users could continue to use `elasticsearch.service` in a backwards-compatible manner, while all users would have the additional option of using discrete services of the form `elasticsearch@<instance>.service` to manage more than one instance per host if they desire to.

Pros:

[…]
Cons:

- While something similar is technically possible under SysV (by creating `/etc/init.d/elasticsearch@` and symlinking instances to it), it's not supported and very hacky.
- It assumes instance-specific conventions, such as `CONF_DIR` residing in `/etc/elasticsearch/$instance`. Fortunately, most (all?) of the CLI tools associated with Elasticsearch can handle `CONF_DIR` values outside the default, as long as it's set appropriately at the time they're invoked.

As a real-world example of what this could look like, consider the Arch Linux service template file that does exactly this (though we'd obviously need to base it on our `elasticsearch.service` file instead).
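To make the proposal concrete, here is a hedged sketch of what such a templated unit might look like, loosely in the spirit of the Arch Linux file referenced above; the paths, user, and limits are assumptions, and a real implementation would be derived from the upstream `elasticsearch.service`:

```ini
# /usr/lib/systemd/system/elasticsearch@.service — illustrative sketch only
[Unit]
Description=Elasticsearch instance %i
After=network-online.target
Wants=network-online.target

[Service]
User=elasticsearch
Group=elasticsearch
# %i expands to the instance name, e.g. "es-01" for elasticsearch@es-01,
# giving each instance its own config and PID locations.
Environment=ES_HOME=/usr/share/elasticsearch
Environment=CONF_DIR=/etc/elasticsearch/%i
Environment=PID_DIR=/var/run/elasticsearch/%i
ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
```

With a file like this in place, instances would be managed individually (e.g. `systemctl start elasticsearch@es-01`) while the plain `elasticsearch.service` would keep working unchanged for the single-instance case.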