Puppet Enterprise deployments provisioned using the peadm module can be upgraded using the peadm module as well. The peadm::upgrade plan requires as input the version of PE to upgrade to, and the names of each PE infrastructure host: master, replica, compilers, and so on.
The following is an example parameters file for upgrading an Extra Large architecture deployment of PE 2019.0.1 to PE 2019.2.2.
{
  "version": "2019.2.2",
  "master_host": "pe-master-09a40c-0.us-west1-a.c.reidmv-peadm.internal",
  "puppetdb_database_host": "pe-psql-09a40c-0.us-west1-a.c.reidmv-peadm.internal",
  "master_replica_host": "pe-master-09a40c-1.us-west1-b.c.reidmv-peadm.internal",
  "puppetdb_database_replica_host": "pe-psql-09a40c-1.us-west1-b.c.reidmv-peadm.internal",
  "compiler_hosts": [
    "pe-compiler-09a40c-0.us-west1-a.c.reidmv-peadm.internal",
    "pe-compiler-09a40c-1.us-west1-b.c.reidmv-peadm.internal",
    "pe-compiler-09a40c-2.us-west1-c.c.reidmv-peadm.internal",
    "pe-compiler-09a40c-3.us-west1-a.c.reidmv-peadm.internal"
  ]
}
The upgrade plan may be run as:
bolt plan run peadm::upgrade --params @params.json
The peadm::upgrade plan downloads installation content from an online repository by default. To perform an offline installation, you can prefetch the needed content and place it in the staging directory. If content is available in the staging directory, peadm::upgrade will not try to download it.
The default staging directory is /tmp. If a different staging directory is being used, it can be specified using the stagingdir parameter to the peadm::upgrade plan.
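For example, to stage content in /opt/pe-staging (an arbitrary path chosen for illustration), add the parameter to the params file. The sketch below shows only a minimal standard-architecture params file; for the Extra Large example above, keep the other host parameters and simply add the stagingdir key:

{
  "version": "2019.2.2",
  "master_host": "pe-master-09a40c-0.us-west1-a.c.reidmv-peadm.internal",
  "stagingdir": "/opt/pe-staging"
}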
The content needed is the PE installation tarball for the target version. The tarball should be placed in the staging directory and keep its original name, e.g. /tmp/puppet-enterprise-2019.2.2-el-7-x86_64.tar.gz.
Installation content can be downloaded from https://puppet.com/try-puppet/puppet-enterprise/download/.
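As a sketch, prefetching for an offline upgrade might look like the following; the URL shown assumes Puppet's usual pm.puppetlabs.com download layout, so confirm the exact link against the download page above:

# Download the PE tarball for the target version into the staging
# directory, keeping its original filename.
curl -o /tmp/puppet-enterprise-2019.2.2-el-7-x86_64.tar.gz \
  https://pm.puppetlabs.com/puppet-enterprise/2019.2.2/puppet-enterprise-2019.2.2-el-7-x86_64.tar.gz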
The peadm::upgrade plan can also be configured to download installation content directly to hosts. To configure online installation, set the download_mode parameter of the peadm::upgrade plan to direct. Direct mode is often more efficient when PE hosts have a route to the internet.
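For example, download_mode can be set in the same params file (again shown as a minimal standard-architecture sketch rather than the full Extra Large example):

{
  "version": "2019.2.2",
  "master_host": "pe-master-09a40c-0.us-west1-a.c.reidmv-peadm.internal",
  "download_mode": "direct"
}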
The peadm::upgrade plan can be used with the Orchestrator (pcp) transport, provided that the Bolt executor is running as root on the master. To use the Orchestrator transport, prepare an inventory file such as the following to set the default transport to pcp, but the master specifically to local.
---
version: 2
config:
  transport: pcp
  pcp:
    cacert: /etc/puppetlabs/puppet/ssl/certs/ca.pem
    service-url: https://pe-master-ad1d88-0.us-west1-a.c.reidmv-peadm.internal:8143
    task-environment: production
    token-file: /root/.puppetlabs/token
groups:
  - name: pe-targets
    targets:
      - name: "pe-master-ad1d88-0.us-west1-a.c.reidmv-peadm.internal"
        config:
          transport: local
      - name: "pe-master-ad1d88-1.us-west1-b.c.reidmv-peadm.internal"
      - name: "pe-compiler-ad1d88-0.us-west1-a.c.reidmv-peadm.internal"
      - name: "pe-compiler-ad1d88-1.us-west1-b.c.reidmv-peadm.internal"
      - name: "pe-compiler-ad1d88-2.us-west1-c.c.reidmv-peadm.internal"
      - name: "pe-compiler-ad1d88-3.us-west1-a.c.reidmv-peadm.internal"
      - name: "pe-psql-ad1d88-0.us-west1-a.c.reidmv-peadm.internal"
      - name: "pe-psql-ad1d88-1.us-west1-b.c.reidmv-peadm.internal"
Additionally, you MUST pre-stage a copy of the PE installation media in /tmp on the PuppetDB PostgreSQL node(s), if present. The Orchestrator transport cannot be used to send large files to remote systems, and the plan will fail if it tries.
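One way to pre-stage the tarball is to copy it over SSH before relying on the pcp transport. This sketch assumes the tarball sits in the current working directory and that SSH access to the PostgreSQL nodes is still available:

bolt file upload \
  puppet-enterprise-2019.2.2-el-7-x86_64.tar.gz \
  /tmp/puppet-enterprise-2019.2.2-el-7-x86_64.tar.gz \
  --targets pe-psql-ad1d88-0.us-west1-a.c.reidmv-peadm.internal,pe-psql-ad1d88-1.us-west1-b.c.reidmv-peadm.internal \
  --transport ssh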
With the installation media pre-staged and an inventory definition such as the example above, the peadm::upgrade plan can be run as normal. It will not rely on the Orchestrator service to operate on the master, and it will use the Orchestrator transport to operate on the other PE nodes.
bolt plan run peadm::upgrade --params @params.json
In the event a manual upgrade is required, the steps can be followed by reading directly from the upgrade plan, which is itself the most accurate technical description of the steps required. In general form, the upgrade process is as follows.
Note: it is assumed that the Puppet master is in cluster A when the upgrade starts, and that the replica is in cluster B. If the master is in cluster B, the A/B designations in the instructions should be inverted.
Phase 1: stop puppet service

- Stop the puppet service on all PE infrastructure nodes to prevent normal automatic runs from interfering with the upgrade process (see the example following this list)
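For example, the puppet service can be stopped across all infrastructure nodes with Bolt's built-in service task, assuming the pe-targets inventory group from the example above:

bolt task run service action=stop name=puppet --targets pe-targets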
Phase 2: upgrade HA cluster A

- Shut down the pe-puppetdb service on the compilers in cluster A
- If different from the master, run the install-puppet-enterprise script for the new PE version on the PuppetDB PostgreSQL node for cluster A
- Run the install-puppet-enterprise script for the new PE version on the master
- Run puppet agent -t on the master
- If different from the master, run puppet agent -t on the PuppetDB PostgreSQL node for cluster A
- Perform the standard curl upgrade.sh | bash procedure on the compilers for cluster A (sketched after this list)
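The compiler upgrade step generally takes the following form, run on each cluster A compiler; <master-fqdn> is a placeholder for the master's certname, and the exact script URL should be confirmed against the PE documentation for the target version:

# Run on each compiler in cluster A. The -k flag skips TLS verification;
# alternatively, pass --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem.
curl -k https://<master-fqdn>:8140/packages/current/upgrade.sh | bash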
Phase 3: upgrade HA cluster B

- Shut down the pe-puppetdb service on the compilers in cluster B
- If different from the master (replica), run the install-puppet-enterprise script for the new PE version on the PuppetDB PostgreSQL node for cluster B
- If different from the master (replica), run puppet agent -t on the PuppetDB PostgreSQL node for cluster B
- Perform the standard curl upgrade.sh | bash procedure on the master (replica)
- Perform the standard curl upgrade.sh | bash procedure on the compilers for cluster B
Phase 4: resume puppet service

- Ensure the puppet service on all PE infrastructure nodes is running again (see the example following this list)
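Mirroring phase 1, the service can be started again with Bolt's built-in service task, again assuming the pe-targets inventory group:

bolt task run service action=start name=puppet --targets pe-targets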