
Disconnected Red Hat Satellite Upgrade – Upgrading RHEL7 Satellite 6.10 to RHEL8 Satellite 6.13 : Part 2

Following on from the previous post, you should now have a Satellite Server that is running Red Hat Enterprise Linux 7.9 with the latest patches and Satellite 6.11.5.4.

The next step of this guide is to upgrade the Satellite Server operating system from Red Hat Enterprise Linux 7.9 to Red Hat Enterprise Linux 8.8, using the LEAPP upgrade process.

Preparing for the LEAPP upgrade

This process involves following these guidelines, but for a disconnected environment.

Download required LEAPP packages on to a connected host

The leapp packages must be downloaded on a connected helper host and then copied over to the disconnected Satellite server for local installation:

[connected-host] # yum --downloadonly --downloaddir=/var/tmp/leapp-files install leapp-upgrade
[connected-host] # tar -cvf leapp-packages.tar /var/tmp/leapp-files/*

Copy the tar file to the satellite server and install the packages

[satellite-server] # mkdir /var/tmp/leapp-files
[satellite-server] # tar -xf leapp-packages.tar -C /var/tmp/leapp-files
[satellite-server] # foreman-maintain packages unlock
[satellite-server] # yum install -y /var/tmp/leapp-files/*.rpm
[satellite-server] # foreman-maintain packages lock

Prepare yum repos for LEAPP Upgrade

From the previous post, the RHEL 8.8 DVD ISO was mounted at /media/rhel-8 and the Satellite repositories were configured using the mounted Satellite DVD ISOs. However, the leapp utility cannot read file:/// based repositories and requires http based repositories. To facilitate this, we will use the Satellite's public HTML directory to serve the repositories over http.

Create a symlink from /media to /var/www/html/pub

[satellite-server] # ln -s /media /var/www/html/pub/media

Update the Satellite yum repositories to point to the local http server and the relevant path:

[satellite-server] # cat <<EOF> /etc/yum.repos.d/satellite-6.11-rhel8.repo
[satellite-6.11-rhel8]
baseurl=http://rhel7-satellite-demo/pub/media/satellite-6.11-rhel8/Satellite
name=Satellite-6.11-rhel8
enabled=1
gpgcheck=0

[satellite-maintenance-6.11-rhel8]
baseurl=http://rhel7-satellite-demo/pub/media/satellite-6.11-rhel8/Maintenance/
name=Satellite-Maintenance-6.11-rhel8
enabled=1
gpgcheck=0
EOF

Disable the rhel7 and Satellite-6.11-rhel7 repositories:

[satellite-server] # sed -i 's/enabled=1/enabled=0/g' /etc/yum.repos.d/rhel7.repo
[satellite-server] # sed -i 's/enabled=1/enabled=0/g' /etc/yum.repos.d/satellite-6.11-rhel7.repo

Pre-requisite tasks for LEAPP upgrade

Two kernel modules must be removed prior to the upgrade process: floppy & pata_acpi

[satellite-server] # rmmod floppy pata_acpi

Ensure that there are no NFS mount points specified in /etc/fstab:

[satellite-server] # sed -i -E '/\bnfs4?\b/s/^/#/' /etc/fstab

Run LEAPP preupgrade check

Run the leapp preupgrade command to perform the pre-upgrade phase:

[satellite-server] # leapp preupgrade --no-rhsm --iso /<path>/rhel-8-dvd.iso --enablerepo=satellite-6.11-rhel8 --enablerepo=satellite-maintenance-6.11-rhel8

Once the pre-upgrade checks have completed, a report is generated and an answer file is written to /var/log/leapp/answerfile. Either edit the answer file to confirm removal of the pam_pkcs11 module (uncomment the line and set the answer to True), or run the following command:

[satellite-server] # leapp answer --section remove_pam_pkcs11_module_check.confirm=True
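For reference, if you choose to edit the answer file directly, the relevant section should end up looking something like this (a sketch based on the standard leapp answer file format):

[remove_pam_pkcs11_module_check]
confirm = True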

Re-run the leapp preupgrade command again and confirm that there are no unresolved issues or inhibitors.

Backup / Snapshot

At this stage it is highly recommended to backup or take a snapshot of the Satellite server before performing the upgrade.
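For example, an offline backup can be taken with satellite-maintain (the target directory below is just an example path; a hypervisor-level snapshot is an equally valid option):

[satellite-server] # satellite-maintain backup offline /var/backup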

Upgrade to RHEL8 using LEAPP utility

To upgrade the Satellite server to RHEL8 using the leapp utility, run the following command:

[satellite-server] # leapp upgrade --no-rhsm --iso /<path>/rhel-8-dvd.iso --enablerepo=satellite-6.11-rhel8 --enablerepo=satellite-maintenance-6.11-rhel8

The upgrade process will start. It will determine the packages it requires and at some point will reboot and continue the upgrade. Once it has completed the last stages of the upgrade process it will reboot a few times and the Satellite server will now be running RHEL8. This process can take up to an hour. Once that is completed, there are some post-upgrade tasks which are required.
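Once the final reboot has completed, you can quickly confirm the operating system and kernel versions, for example:

[satellite-server] # cat /etc/redhat-release
[satellite-server] # uname -r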

Post Upgrade Tasks

Once the upgrade has finished, carry out the following post-upgrade tasks:

Check Satellite Service Status

You may run into an error where the Satellite services do not start correctly and/or httpd fails to start. If so, run the following command to check the service status:

# satellite-maintain service list

If you find that any services are disabled, then use the following command to enable them:

# satellite-maintain service enable --only dynflow-sidekiq@,foreman-proxy,httpd,postgresql,puppetserver,redis,tomcat

The httpd service may fail to start due to a duplicate MPM module being loaded. If that is the case, disable the duplicate module in the following file:

# vi /etc/httpd/conf.modules.d/00-mpm.conf
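Only one MPM LoadModule line should be left uncommented. As a sketch (which module you keep depends on your configuration; event is the RHEL8 default):

LoadModule mpm_event_module modules/mod_mpm_event.so
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
#LoadModule mpm_worker_module modules/mod_mpm_worker.so

Restart httpd afterwards with systemctl restart httpd.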

Set up the RHEL8 repo from the install media

cat <<EOF> /etc/yum.repos.d/rhel-8.repo
[BaseOS]
name=Red Hat Enterprise Linux 8 for x86_64 BaseOS
baseurl=file:///media/rhel-8/BaseOS
gpgcheck=0
enabled=1
 
[AppStream]
name=Red Hat Enterprise Linux 8 for x86_64 AppStream
baseurl=file:///media/rhel-8/AppStream
gpgcheck=0
enabled=1
EOF

Remove remaining RHEL7 packages, kernels and leapp files

yum config-manager --save --setopt exclude=''
cd /lib/modules && ls -d *.el7*
[ -x /usr/sbin/weak-modules ] && /usr/sbin/weak-modules --remove-kernel 3.10.0-1160.25.1.el7.x86_64
/bin/kernel-install remove 3.10.0-1160.25.1.el7.x86_64 /lib/modules/3.10.0-1160.25.1.el7.x86_64/vmlinuz
rpm -qa | grep -e '\.el[67]' | grep -vE '^(gpg-pubkey|libmodulemd|katello-ca-consumer)' | xargs yum remove -y
yum remove leapp-deps-el8 leapp-repository-deps-el8
rm -rf /lib/modules/*el7*
rm -rf /var/log/leapp /root/tmp_leapp_py3 /var/lib/leapp
rm -rf /boot/vmlinuz-*rescue* /boot/initramfs-*rescue*
dnf reinstall -y kernel-core-$(uname -r)
grubby --info=ALL | grep "\.el7" || echo "Old kernels are not present in the bootloader."

If there are still el7 kernels, find them and remove them:

rpm -q kernel|grep el7
dnf remove kernel-<version>.el7.x86_64
grubby --remove-kernel=/boot/vmlinuz-<version>.el7.x86_64

Finally, verify that the rescue kernel and initramfs images reference the new RHEL8 kernel:

ls /boot/vmlinuz-*rescue* /boot/initramfs-*rescue*
lsinitrd /boot/initramfs-*rescue*.img | grep -qm1 "$(uname -r)/kernel/" && echo "OK" || echo "FAIL"
grubby --info $(ls /boot/vmlinuz-*rescue*)

This completes the upgrade from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.

Update to latest RHEL8 Packages

The Satellite server has now been upgraded to RHEL8, but it is not completely up to date with the latest security patches and bug fixes. The next step is to synchronise the BaseOS and AppStream repositories using a connected helper host, and then transfer the content onto the Satellite server in order to perform the update.

Use connected helper host to sync RHEL8 OS & Satellite repos

Use the reposync tool on the connected host to copy the rhel8 BaseOS and AppStream repositories. What I prefer to do is create a separate logical volume for the repository data: /repos

This allows me to manage the content separately from the rest of the OS on the connected host.
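As a rough sketch, assuming a volume group named rhel with enough free space (the names and size here are illustrative only):

[connected-host] # lvcreate -L 100G -n repos rhel
[connected-host] # mkfs.xfs /dev/rhel/repos
[connected-host] # mkdir /repos
[connected-host] # mount /dev/rhel/repos /repos

Add a corresponding entry to /etc/fstab if the mount should persist across reboots.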

[connected-host] # reposync --gpgcheck --newest-only --download-metadata --repoid=rhel-8-for-x86_64-baseos-rpms -p /repos/
[connected-host] # reposync --gpgcheck --newest-only --download-metadata --repoid=rhel-8-for-x86_64-appstream-rpms -p /repos/

Once completed, tar the rhel-8-for-x86_64-baseos-rpms and rhel-8-for-x86_64-appstream-rpms directories in the /repos directory and copy them across to the disconnected Satellite server.
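For example (the archive name is arbitrary):

[connected-host] # cd /repos
[connected-host] # tar -cvf /var/tmp/rhel8-repos.tar rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms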

Create the local repos on the Satellite server

On the Satellite server, create an additional logical volume mounted at /repos and unpack the tar files into that directory.
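For example, assuming the archive from the previous step has been copied to /var/tmp on the Satellite server:

[satellite-server] # mkdir -p /repos
[satellite-server] # tar -xvf /var/tmp/rhel8-repos.tar -C /repos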

Create the yum repo with the following command:

[satellite-server] # cat <<EOF> /etc/yum.repos.d/rhel8-local.repo
[rhel-8-for-x86_64-baseos-rpms]
name=Red Hat Enterprise Linux 8 for x86_64 BaseOS
baseurl=file:///repos/rhel-8-for-x86_64-baseos-rpms
gpgcheck=1
enabled=1

[rhel-8-for-x86_64-appstream-rpms]
name=Red Hat Enterprise Linux 8 for x86_64 AppStream
baseurl=file:///repos/rhel-8-for-x86_64-appstream-rpms
gpgcheck=1
enabled=1
EOF

The Satellite server can now be updated with the latest RHEL8 packages from these local repositories.
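One way to do this, following the same unlock/lock pattern used earlier (a sketch; satellite-maintain and foreman-maintain are interchangeable here):

[satellite-server] # satellite-maintain packages unlock
[satellite-server] # dnf update -y
[satellite-server] # satellite-maintain packages lock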


Updating Disconnected OpenShift Clusters

Intro

Disconnected OpenShift clusters are becoming increasingly popular, as more and more organisations seek to deploy and manage their own private cloud environments. In this blog post, I will show you how to mirror repository content to update a disconnected OpenShift cluster as well as some best practices for managing your environment.

Disconnected OpenShift clusters are important for a number of reasons. First and foremost, they allow organisations to maintain complete control and security over their environment, without relying on external sources for updates, patches, or security fixes. This is especially important for organisations that operate in highly regulated industries, where data privacy and security are paramount.

Updating a disconnected OpenShift cluster can be a complex process, but it’s not impossible. Here are the basic steps:

  1. Set up a local mirror repository: A local mirror repository is a copy of the OpenShift container images that you’ll use to install and manage your cluster. You can create a mirror using the oc-mirror plugin.
  2. Move the mirrored content into the registry in the disconnected / isolated environment.
  3. Configure the cluster to use the local, mirrored registry: Once you have a local mirror registry, you’ll need to apply the manifests to configure the OpenShift cluster to use it.
  4. Update the cluster!
  5. Manage the cluster: Once your disconnected cluster is updated, you’ll need to manage it. This includes applying updates, patches, and security fixes. To do this, you’ll need to periodically update your local mirror repository with the latest container images.

The oc-mirror plugin is a command-line tool that can be used to mirror images, manifests, and other artifacts, from external repositories in order to sync them to a mirrored registry for use by a disconnected cluster. This is an essential tool for organisations that need to manage OpenShift clusters in remote or offline environments.

The oc-mirror plugin works by creating a local mirror of the upstream repositories into a local registry or filesystem on removable media. This can then be transferred into the disconnected environment, to update or deploy OpenShift applications and services in the disconnected cluster.

The oc-mirror plugin is a powerful tool that can be used to manage OpenShift clusters in disconnected environments. It is easy to use and can be used to mirror a wide range of artifacts from a connected cluster.

Pre-Reqs – Things you need.

  • A host which has connectivity to the upstream repositories and has the oc-mirror plugin installed
  • Pull secret from console.redhat.com to access the repositories

Download & Install oc-mirror plugin

Download the oc-mirror CLI plugin from the Downloads page of the OpenShift Cluster Manager Hybrid Cloud Console.

Extract the archive:

$ tar xvzf oc-mirror.tar.gz

If necessary, update the plugin file to be executable:

$ chmod +x oc-mirror

Copy the oc-mirror binary to /usr/local/bin
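For example:

$ sudo cp ./oc-mirror /usr/local/bin/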

Download Pull Secret

Download the registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager.

Make a copy of your pull secret in JSON format:

cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json>

Save the file either as ~/.docker/config.json or $XDG_RUNTIME_DIR/containers/auth.json on both the connected host and the host in the disconnected environment.
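For example (assuming the JSON copy created above was saved as pull-secret.json):

$ mkdir -p ~/.docker
$ cp pull-secret.json ~/.docker/config.json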

Generate the base64-encoded user name and password or token for your mirror registry:

echo -n '<user_name>:<password>' | base64 -w0

Edit the JSON file and add a section that describes your registry to it:

  "auths": {
        "<mirror_registry>": {
        "auth": "<credentials>",
        "email": "you@example.com"
        }
  },

The JSON file should look like this:

{
  "auths": {
        "registry.example.com": {
        "auth": "BGVtbYk3ZHAtqXs=",
        "email": "you@example.com"
        },
        "cloud.openshift.com": {
        "auth": "b3BlbnNo...",
        "email": "you@example.com"
        },
        "quay.io": {
        "auth": "b3BlbnNo...",
        "email": "you@example.com"
        },
        "registry.connect.redhat.com": {
        "auth": "NTE3Njg5Nj...",
        "email": "you@example.com"
        },
        "registry.redhat.io": {
        "auth": "NTE3Njg5Nj...",
        "email": "you@example.com"
        }
  }
}

Step 1 : Mirror the Content Using oc-mirror

The first thing to do is mirror the content, and the first thing to check is the update path, which can be found here.

Once you have identified your upgrade path, take note of all the releases you need to pass through to reach the desired release, as you will have to configure the ImageSetConfiguration to mirror all of the releases and operators you require.

You also need the oc-mirror plugin installed on a host that has connectivity to the upstream content, as described above.

Next, you have to configure the imageset-config.yaml file to specify what you need to mirror. The ImageSetConfiguration below is an example:
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 4
storageConfig:
  registry:
    imageURL: <quay.domain.com/mirror/oc-mirror-metadata>
    skipTLS: false
mirror:
  platform:
    channels:
    - name: stable-4.11
      minVersion: 4.11.0
      maxVersion: 4.11.40
      shortestPath: True
    - name: stable-4.12
      minVersion: 4.12.17
      maxVersion: 4.12.17
      type: ocp
    graph: true
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11
    packages:
    - name: rhacs-operator
      channels:
      - name: stable-4.11
    - name: odf-operator
      channels:
      - name: stable-4.11
    - name: local-storage-operator
      channels:
      - name: stable-4.11
    - name: odf-csi-addons-operator
      channels:
      - name: stable-4.11
    - name: ocs-operator
      channels:
      - name: stable-4.11
    - name: mcg-operator
      channels:
      - name: stable-4.11
    - name: openshift-gitops-operator
      channels:
      - name: stable-4.11
    - name: oadp-operator
      channels:
      - name: stable-4.11
    - name: pipelines-operator
      channels:
      - name: stable-4.11
    - name: openshift-update-service
      channels:
      - name: stable-4.11
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
    packages:
    - name: rhacs-operator
      channels:
      - name: stable-4.12
    - name: odf-operator
      channels:
      - name: stable-4.12
    - name: local-storage-operator
      channels:
      - name: stable-4.12
    - name: odf-csi-addons-operator
      channels:
      - name: stable-4.12
    - name: ocs-operator
      channels:
      - name: stable-4.12
    - name: mcg-operator
      channels:
      - name: stable-4.12
    - name: openshift-gitops-operator
      channels:
      - name: stable-4.12
    - name: oadp-operator
      channels:
      - name: stable-4.12
    - name: pipelines-operator
      channels:
      - name: stable-4.12
    - name: openshift-update-service
      channels:
      - name: stable-4.11
  additionalImages:
  - name: registry.redhat.io/ubi8/ubi:latest
  helm: {}

The above config mirrors the 4.11.0, 4.11.40 and 4.12.17 OCP releases, as per the upgrade path, as well as the operators required for updating and operating the cluster. Now let's run the oc-mirror command to generate the image set and save the contents to disk, on removable media:

oc mirror --config=./imageset-config.yaml file://<path_to_output_directory>

Depending on your network speed and your imageset configuration, this will take some time. Once it has completed, you can transfer the content into the disconnected environment, where it will need to be pushed into the mirror registry using the same command.

Step 2 : Copy Contents into Mirror Registry

On the helper host in the disconnected environment, attach the removable media and use the oc-mirror plugin to mirror the contents of a generated image set to the target mirror registry.

oc mirror --from=./mirror_seq1_000000.tar docker://quay.domain.com:5000

This command updates the mirror registry with the image set and generates the ImageContentSourcePolicy and CatalogSource resources.

Step 3 : Apply the generated YAML manifests

This step will change the cluster settings to use the mirror registry instead of the upstream repositories.

  1. Navigate into the oc-mirror-workspace/ directory that was generated.
  2. Navigate into the results directory, for example, results-1639608409/.
  3. Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources.

Apply the YAML files from the directory to the cluster, using the oc apply command:

oc apply -f ./oc-mirror-workspace/results-1639608409/

Apply the release signatures to the cluster:

oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/

Verify that the ImageContentSourcePolicy resources were successfully applied:

oc get imagecontentsourcepolicy --all-namespaces

Verify that the CatalogSource resources were successfully installed by running the following command:

oc get catalogsource --all-namespaces

Optional Step – Create the graph-data image

This should be done automatically when using oc-mirror, but I found that the last time I did this, there was a problem with the container and thus the pod would not run. So it may be necessary to manually create the graph-data image and push it into the mirror registry for the update service operator to function correctly.

Create a directory, download the graph data archive and extract its contents into the directory:

$ mkdir graph-data
$ wget https://mirror.openshift.com/pub/openshift-v4/clients/graph-data-4.12.0.tar.gz
$ tar -zxvf graph-data-4.12.0.tar.gz -C graph-data

Create the Dockerfile to build the image:

FROM registry.access.redhat.com/ubi8/ubi:8.1

RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data

RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner

CMD ["/bin/bash", "-c" ,"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"]

It may be the case that you are not able to reach the internet to download the tar file. If so, download the tar file onto the removable media from the connected environment and copy it across to the disconnected environment. The Dockerfile will then look like this:

FROM registry.access.redhat.com/ubi8/ubi:8.1

COPY cincinnati-graph-data.tar.gz . 

RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner

CMD ["/bin/bash", "-c" ,"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"]

Build the Image:

podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest

Push the graph data container image to the mirror registry that is accessible to the OpenShift Update Service:

podman push registry.example.com/openshift/graph-data:latest

Conclusion

The steps above show how to use the oc-mirror command to set up and maintain a mirror registry in a completely disconnected environment. Keeping a consistent ImageSetConfiguration file and filesystem location will maintain the content in the mirror registry, auto-pruning if necessary.

One consideration is to ensure that the mirror registry has the same availability level as the cluster. If the registry is not available and the cluster tries to schedule new pods, it may not be able to download the images and thus, fail.


Auto pruning of Openshift Tekton pipelines.

Tekton pipelines are cool. But what's not cool are all the resources left hanging around from all the pipelineruns that have executed. Need to tidy that up.

You could do it manually. This would mean that you literally have f all else to do with your day. Or.. you could use annotations to let Openshift pipelines clear up the mess, automatically.

Here's how to do it and what to look out for.

DON’T USE THE CONSOLE.

It doesn't work. Well.. it kinda works, but then kinda not, so in my book – that means it doesn't. Stay away from it and use the oc command.

Edit the target namespace

You’re going to add the following annotations, in the annotations section (obvs)

    operator.tekton.dev/prune.keep-since: "60"
    operator.tekton.dev/prune.resources: taskrun,pipelinerun

You need to edit the namespace/project that you are running the pipelines in, and set the keep-since value in minutes; resources OLDER than this value are removed. I set it to 60 minutes.

The next line determines which resources you want to remove – task runs and pipeline runs seem the most obvious to me – but if anyone has any examples of any other useful resources to remove – let me know.

I will use pipelines-testing namespace for this example.

Log in and run the following

$ oc edit namespace pipelines-testing

Add the two prune annotations as shown in the example below.

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: scott
    openshift.io/sa.scc.mcs: s0:c27,c19
    openshift.io/sa.scc.supplemental-groups: 1000740000/10000
    openshift.io/sa.scc.uid-range: 1000740000/10000
    operator.tekton.dev/prune.hash: 13962c29c3ca7981c17ed1ca562e231d37990a38ab15e738bff7a665433b8614
    operator.tekton.dev/prune.keep-since: "60"
    operator.tekton.dev/prune.resources: taskrun,pipelinerun

Save and exit. You’re done. You now have more time to spend on mastodon and no more worries about unused PVCs consuming your storage.
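As an alternative to editing the namespace interactively, the same annotations can be applied with oc annotate (adjust the namespace name to suit):

$ oc annotate namespace pipelines-testing operator.tekton.dev/prune.keep-since="60" operator.tekton.dev/prune.resources="taskrun,pipelinerun" --overwrite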

If this was helpful and you would like to buy me a coffee, it would be greatly appreciated!
