
Updating Disconnected OpenShift Clusters

Intro

Disconnected OpenShift clusters are becoming increasingly popular, as more and more organisations seek to deploy and manage their own private cloud environments. In this blog post, I will show you how to mirror repository content to update a disconnected OpenShift cluster as well as some best practices for managing your environment.

Disconnected OpenShift clusters are important for a number of reasons. First and foremost, they allow organisations to maintain complete control and security over their environment, without relying on external sources for updates, patches, or security fixes. This is especially important for organisations that operate in highly regulated industries, where data privacy and security are paramount.

Updating a disconnected OpenShift cluster can be a complex process, but it’s not impossible. Here are the basic steps:

  1. Set up a local mirror repository: A local mirror repository is a copy of the OpenShift container images that you’ll use to install and manage your cluster. You can create a mirror using the oc-mirror plugin.
  2. Move the mirrored content into the registry in the disconnected / isolated environment.
  3. Configure the cluster to use the local, mirrored registry: Once you have a local mirror registry, you’ll need to apply the manifests to configure the OpenShift cluster to use it.
  4. Update the cluster!
  5. Manage the cluster: Once your disconnected cluster is updated, you’ll need to manage it. This includes applying updates, patches, and security fixes. To do this, you’ll need to periodically update your local mirror repository with the latest container images.

The oc-mirror plugin is a command-line tool that can be used to mirror images, manifests, and other artifacts from external repositories and sync them to a mirror registry for use by a disconnected cluster. This is an essential tool for organisations that need to manage OpenShift clusters in remote or offline environments.

The oc-mirror plugin works by creating a local mirror of the upstream repositories into a local registry or filesystem on removable media. This can then be transferred into the disconnected environment to update or deploy OpenShift applications and services in the disconnected cluster.

The oc-mirror plugin is a powerful tool that can be used to manage OpenShift clusters in disconnected environments. It is easy to use and can mirror a wide range of artifacts from a connected host into the disconnected environment.

Pre-Reqs – Things you need.

  • A host which has connectivity to the upstream repositories and has the oc-mirror plugin installed
  • Pull secret from console.redhat.com to access the repositories

Download & Install oc-mirror plugin

Download the oc-mirror CLI plugin from the Downloads page of the OpenShift Cluster Manager Hybrid Cloud Console.

Extract the archive:

$ tar xvzf oc-mirror.tar.gz

If necessary, update the plugin file to be executable:

$ chmod +x oc-mirror

Copy the oc-mirror binary to /usr/local/bin
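For example (assuming the binary was extracted into the current directory and you have sudo rights):

$ sudo cp ./oc-mirror /usr/local/bin/
$ oc mirror --help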

Download Pull Secret

Download the registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager.

Make a copy of your pull secret in JSON format:

cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json>

Save the file either as ~/.docker/config.json or $XDG_RUNTIME_DIR/containers/auth.json on both the connected host and the host in the disconnected environment.
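For example, to place it for the current user (using the JSON copy created above):

$ mkdir -p ~/.docker
$ cp <path>/<pull_secret_file_in_json> ~/.docker/config.json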

Generate the base64-encoded user name and password or token for your mirror registry:

echo -n '<user_name>:<password>' | base64 -w0

Edit the JSON file and add a section that describes your registry to it:

  "auths": {
        "<mirror_registry>": {
        "auth": "<credentials>",
        "email": "you@example.com"
        }
  },

The JSON file should look like this:

{
  "auths": {
        "registry.example.com": {
        "auth": "BGVtbYk3ZHAtqXs=",
        "email": "you@example.com"
        },
        "cloud.openshift.com": {
        "auth": "b3BlbnNo...",
        "email": "you@example.com"
        },
        "quay.io": {
        "auth": "b3BlbnNo...",
        "email": "you@example.com"
        },
        "registry.connect.redhat.com": {
        "auth": "NTE3Njg5Nj...",
        "email": "you@example.com"
        },
        "registry.redhat.io": {
        "auth": "NTE3Njg5Nj...",
        "email": "you@example.com"
        }
  }
}

Step 1 : Mirror the Content Using oc-mirror

The first thing that needs to be done is to mirror the content, and before doing so you should check the update path, which can be found here

Once you have identified your upgrade path, take note of all the releases you need to pass through to reach the desired release, as you will have to configure the ImageSetConfiguration to mirror all of the releases and operators you require.

First, install the oc-mirror plugin on a host that has connectivity to the upstream content – details are covered in the install section above.

Next, you have to configure the imageset-config.yaml file to specify what you need to mirror. The ImageSetConfiguration below is an example:
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 4
storageConfig:
  registry:
    imageURL: <quay.domain.com/mirror/oc-mirror-metadata>
    skipTLS: false
mirror:
  platform:
    channels:
    - name: stable-4.11
      minVersion: 4.11.0
      maxVersion: 4.11.40
      shortestPath: True
    - name: stable-4.12
      minVersion: 4.12.17
      maxVersion: 4.12.17
      type: ocp
    graph: true
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11
    packages:
    - name: rhacs-operator
      channels:
      - name: stable-4.11
    - name: odf-operator
      channels:
      - name: stable-4.11
    - name: local-storage-operator
      channels:
      - name: stable-4.11
    - name: odf-csi-addons-operator
      channels:
      - name: stable-4.11
    - name: ocs-operator
      channels:
      - name: stable-4.11
    - name: mcg-operator
      channels:
      - name: stable-4.11
    - name: openshift-gitops-operator
      channels:
      - name: stable-4.11
    - name: oadp-operator
      channels:
      - name: stable-4.11
    - name: pipelines-operator
      channels:
      - name: stable-4.11
    - name: openshift-update-service
      channels:
      - name: stable-4.11
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
    packages:
    - name: rhacs-operator
      channels:
      - name: stable-4.12
    - name: odf-operator
      channels:
      - name: stable-4.12
    - name: local-storage-operator
      channels:
      - name: stable-4.12
    - name: odf-csi-addons-operator
      channels:
      - name: stable-4.12
    - name: ocs-operator
      channels:
      - name: stable-4.12
    - name: mcg-operator
      channels:
      - name: stable-4.12
    - name: openshift-gitops-operator
      channels:
      - name: stable-4.12
    - name: oadp-operator
      channels:
      - name: stable-4.12
    - name: pipelines-operator
      channels:
      - name: stable-4.12
    - name: openshift-update-service
      channels:
      - name: stable-4.11
  additionalImages:
  - name: registry.redhat.io/ubi8/ubi:latest
  helm: {}

The above config mirrors the 4.11.0, 4.11.40 and 4.12.17 OCP releases, as per the upgrade path, as well as the operators required for updating and operating the cluster. Now let's run the oc-mirror command to generate the image set and save the contents to disk on removable media:

oc mirror --config=./imageset-config.yaml file://<path_to_output_directory>

Depending on your network speed and your imageset configuration, this will take some time. Once it has completed, you can transfer the content into the disconnected environment, where it will need to be pushed into the mirror registry using the same command.

Step 2 : Copy Contents into Mirror Registry

On the helper host in the disconnected environment, attach the removable media and use the oc-mirror plugin to mirror the contents of a generated image set to the target mirror registry.

oc mirror --from=./mirror_seq1_000000.tar docker://quay.domain.com:5000

This command updates the mirror registry with the image set and generates the ImageContentSourcePolicy and CatalogSource resources.
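For reference, the generated ImageContentSourcePolicy will look something like the sketch below – the mirror repositories shown are assumptions and will reflect whatever registry and repository paths your image set was pushed to:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: release-0
spec:
  repositoryDigestMirrors:
  - mirrors:
    - quay.domain.com:5000/openshift/release-images
    source: quay.io/openshift-release-dev/ocp-release
  - mirrors:
    - quay.domain.com:5000/openshift/release
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev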

Step 3 : Apply the generated YAML manifests

This step will change the cluster settings to use the mirror registry instead of the upstream repositories.

  1. Navigate into the oc-mirror-workspace/ directory that was generated.
  2. Navigate into the results directory, for example, results-1639608409/.
  3. Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources.

Apply the YAML files from the directory to the cluster, using the oc apply command:

oc apply -f ./oc-mirror-workspace/results-1639608409/

Apply the release signatures to the cluster:

oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/

Verify that the ImageContentSourcePolicy resources were successfully applied:

oc get imagecontentsourcepolicy --all-namespaces

Verify that the CatalogSource resources were successfully installed by running the following command:

oc get catalogsource --all-namespaces
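At this point the cluster can actually be updated (step 4 in the list at the top). On a disconnected cluster that is not using the OpenShift Update Service, the update is typically triggered by pointing at the mirrored release image by digest – the repository path and digest below are placeholders for your own mirrored release:

oc adm upgrade --allow-explicit-upgrade \
  --to-image=quay.domain.com:5000/openshift/release-images@sha256:<release_digest>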

Optional Step – Create the graph-data image

This should be done automatically when using oc-mirror, but the last time I did this there was a problem with the container and the pod would not run. So it may be necessary to manually create the graph-data image and push it into the mirror registry for the update service operator to function correctly.

Create a directory, download the graph data archive and extract the contents into the directory:

$ mkdir graph-data
$ wget https://mirror.openshift.com/pub/openshift-v4/clients/graph-data-4.12.0.tar.gz
$ tar -zxvf graph-data-4.12.0.tar.gz -C graph-data

Create the Dockerfile to build the image:

FROM registry.access.redhat.com/ubi8/ubi:8.1

RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data

RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner

CMD ["/bin/bash", "-c" ,"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"]

It may be the case that you are not able to reach the internet to download the tar file. If so, download the tar file onto the removable media from the connected environment and copy it across to the disconnected environment. The Dockerfile will then look like this:

FROM registry.access.redhat.com/ubi8/ubi:8.1

COPY cincinnati-graph-data.tar.gz . 

RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner

CMD ["/bin/bash", "-c" ,"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"]

Build the Image:

podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest

Push the graph data container image to the mirror registry that is accessible to the OpenShift Update Service:

podman push registry.example.com/openshift/graph-data:latest
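With the graph-data image pushed, it can be referenced from an OpenShift Update Service instance. The sketch below is an assumption-based example – adjust the namespace, the releases repository (your mirrored release images) and the graphDataImage to match your environment:

apiVersion: updateservice.operator.openshift.io/v1
kind: UpdateService
metadata:
  name: update-service
  namespace: openshift-update-service
spec:
  replicas: 2
  releases: quay.domain.com:5000/openshift/release-images
  graphDataImage: registry.example.com/openshift/graph-data:latest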

Conclusion

The steps above show how to use the oc-mirror command to set up a mirror registry in a completely disconnected environment. Keeping a consistent ImageSetConfiguration file and filesystem location will maintain the content in the mirror registry, auto-pruning if necessary.

One consideration is to ensure that the mirror registry has the same availability level as the cluster. If the registry is not available and the cluster tries to schedule new pods, it may not be able to download the images and will therefore fail.


Automated OpenShift backups with OADP

I have recently been working with a client, assisting them with their OCP rollout, and part of the delivery was to help them back up all of the projects in their cluster/s. The simplest way to do this is by using the OADP Operator.

OADP (OpenShift APIs for Data Protection) is an operator in OpenShift, provided by Red Hat, that provides backup and restore APIs. It can be used to back up and restore cluster resources (yaml files), internal images and persistent volume data. I won’t go into all the nitty gritty of OADP/Velero; more detailed information is provided here and here.

Installing the OADP Operator.

Let’s start with installing the OADP operator – which is really simple in OpenShift. In the OCP console, click on Operators, search for the Red Hat OADP operator and install it. It will take a few minutes, but once done, the operator is ready and installed in the openshift-adp namespace.
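If you prefer the CLI (or GitOps) over the console, an OLM Subscription along the following lines should achieve the same result – a sketch only; the channel name is an assumption and depends on the OADP version you want:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-adp
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: oadp-operator-group
  namespace: openshift-adp
spec:
  targetNamespaces:
  - openshift-adp
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
spec:
  channel: stable-1.2          # assumption - use the channel matching your OADP version
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace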

Setup Object Storage

It’s a good idea to set up the object storage you would like to back up to before getting started with the OADP configuration. There are lots of examples and, over time, I will add additional storage providers, such as Azure/GCP/AWS/ODF. You can also use a private object storage platform such as MinIO – but for this example, I will be backing up to a cloud-based provider, Wasabi.

Wasabi – create bucket and access key

I have chosen Wasabi for this example because I like the interface and the pricing. There are plenty of other providers which can also be configured and used in the same way.

Wasabi provides AWS ‘S3 type’ object storage and as such, the provider is configured in OADP as ‘aws’ – but we will get to that later. The first thing is to set up an account in Wasabi. Once that is done, create a bucket and an access key. Make sure you make a note of the bucket name and SAVE THE ACCESS KEY CREDENTIALS SOMEWHERE SAFE!

Create cloud-credentials secret

Now that you have the access key and secret created in the previous step, the next thing to do is to create the cloud-credentials secret in OpenShift. Securely store the credentials in a file similar to the one below:

$ cat << EOF > ./credentials-wasabi
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
EOF

Then create the secret using the oc command:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file wasabi=credentials-wasabi

Alternatively, you can do this using the OCP console, in administrator mode. Ensure that you are in the openshift-adp project and then navigate to Workloads -> Secrets -> Create New Secret.

Create an instance of the Data Protection Application & Backup Storage Location/s

Log into the OCP console, switch to administrator mode and navigate to: Operators -> Installed Operators. Click on the OADP operator. From the menu at the top, select DataProtectionApplication and click ‘Create DataProtectionApplication’.

At this point I find it easier to switch to the YAML view and edit the configuration.

Change the name to something you prefer and remove everything below the spec: directive.

Underneath the spec: we can set the configuration:

kind: DataProtectionApplication
apiVersion: oadp.openshift.io/v1alpha1
metadata:
  name: velero
  namespace: openshift-adp
spec:
  configuration:
    restic:
      enable: true
    velero:
      defaultPlugins:
        - openshift
        - aws
        - kubevirt
  backupImages: true
  backupLocations:
    - name: wasabi-s3
      velero:
        config:
          profile: default
          region: us-east-1
          s3ForcePathStyle: 'true'
          s3Url: 'https://s3.wasabisys.com'
        credential:
          key: wasabi
          name: cloud-credentials
        default: true
        objectStorage:
          bucket: ocp-backup
          prefix: velero
        provider: aws
  snapshotLocations:
    - velero:
        config:
          profile: default
          region: us-east-1
        provider: aws

This part is the configuration, which sets up the default plugins and enables restic backups of any pod volumes. For this example, we are backing up to S3 storage, therefore the aws plugin is required. Other plugins are used for different storage types: azure, gcp etc. The backupImages setting specifies whether or not you wish to back up images.

The next part of the configuration is the backup location. Here I have configured a backup location to use the Wasabi S3 object storage that I set up earlier. The region and s3Url must be configured as per the provider’s configuration.

The configuration is set to use the cloud-credentials secret and retrieve the access and secret data from the key named ‘wasabi’ – which I created earlier.

The next part of the config defines the object storage configuration – set the bucket name, any prefix folder you wish to use and the provider – in this example, aws.

Lastly, I want to set up the location for snapshot backups.

Click ‘Create’

This can also be done using the oc command:

oc create -n openshift-adp -f oadp.yaml

We now see the created application.

Check that the status is ‘Reconciled’ and then click on BackupStorageLocation. There will be a storage location present, which was created when setting up the DataProtectionApplication.
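The same check can be made from the CLI – resource names as per the example configuration above; the backup storage location should report a phase of Available:

oc get dataprotectionapplication -n openshift-adp
oc get backupstoragelocations -n openshift-adp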

Create a Backup!

We are now able to back up a project/namespace! In the example here, I will back up an application – in this case, I am using the python-basic sample application, which is running in the sample-app1 project.

To execute a backup – return to the OADP operator and create a backup instance.

There is a form which can be used to configure the backup, or you can select the yaml tab and edit the yaml directly. In this example, I will edit the yaml to configure the backup as follows:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - sample-app1
  defaultVolumesToRestic: true
  storageLocation: velero-1

It is quite a simple configuration that determines which namespaces should be backed up, whether to back up pod volumes and where to back up to. Upon clicking create, the backup will begin and the status will show ‘InProgress’; if all goes well, it should then show as ‘Completed’.
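The backup can also be followed from the CLI (names as per the Backup resource above):

oc get backups.velero.io -n openshift-adp backup -o jsonpath='{.status.phase}'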

Restore Backup to Different Namespace

Now that there is a successful backup (and you can check in your S3 bucket to verify it is there), it is possible to conduct a restore. In this example, I will restore the backup to a different project/namespace – restored-sample-app1

Return to the OADP operator, click on Restore and then select ‘Create Restore’. Again, there is a form to complete or you can switch to yaml view; for consistency, this example will show the yaml config:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore
  namespace: openshift-adp
spec:
  includedNamespaces:
    - sample-app1
  namespaceMapping:
    sample-app1: restored-sample-app1
  backupName: backup

Again, the configuration is quite simple – select the name of the backup we created, include the namespace (if left blank it will restore all namespaces) and, the interesting bit, set the namespaceMapping: we want to restore sample-app1 to restored-sample-app1. Click create. Again, the status will show ‘InProgress’.

At this point, it is possible to change to the new namespace and switch to developer view, to view the progress of the restore.
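A couple of quick CLI checks work just as well (names as per the Restore example above):

oc get restores.velero.io -n openshift-adp restore -o jsonpath='{.status.phase}'
oc get pods -n restored-sample-app1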

Scheduled Backups.

For obvious reasons, you may not want to be conducting manual backups, so a good option is to create a schedule to fit your backup requirements. Navigate to the OADP operator and click on the Schedule tab and click Create Schedule. Again, there are options to either complete the form or edit the yaml. I prefer to edit the yaml:

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup-1am
  namespace: openshift-adp
spec:
  schedule: 00 1 * * *
  template:
    defaultVolumesToRestic: true
    includedNamespaces:
      - sample-app1
    storageLocation: velero-1

The schedule configuration is fairly straightforward – it uses cron notation to determine when to back up. There are also options to back up volumes, as in the backup configuration earlier. The remaining configuration items are the namespace/s to back up and the location to back up to.

This schedule will run at 1am every day, backing up the sample-app1 namespace to the velero-1 backup storage location.
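To confirm the schedule is in place and to see the backups it creates, a quick look from the CLI (namespace as per the examples above):

oc get schedules.velero.io -n openshift-adp
oc get backups.velero.io -n openshift-adp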

Conclusion

The above demonstration shows how to create and restore backups in OpenShift, using the OADP/Velero operator, to an S3 cloud storage provider (Wasabi). The demonstration also highlights how to restore to a different namespace on the cluster and how to run backups periodically, using a schedule.

One last note/thought to consider when doing backups is that secrets are not encrypted. So it is highly recommended to use a storage solution/provider that provides data-at-rest protection.

Feel free to add any comments / corrections / suggestions! If you found this helpful and would like to buy me a coffee, I would greatly appreciate it!


Auto pruning of Openshift Tekton pipelines.

Tekton pipelines are cool. But what’s not cool are all the resources that are left hanging around from all the pipelineruns that have executed. Need to tidy that up.

You could do it manually. This would mean that you literally have f-all else to do with your day. Or… you could use annotations to let OpenShift Pipelines clear up the mess automatically.

Here’s how to do it and what to look out for.

DON’T USE THE CONSOLE.

It doesn’t work. Well… it kinda works, but then kinda not, so in my book – that means it doesn’t. Stay away from it and use the oc command.

Edit the target namespace

You’re going to add the following annotations in the annotations section (obvs):

    operator.tekton.dev/prune.keep-since: "60"
    operator.tekton.dev/prune.resources: taskrun,pipelinerun

You need to edit the namespace/project that you are running the pipelines in and set the number of minutes – resources OLDER than this value will be removed. I set it to 60 minutes.

The next line determines which resources you want to remove – task runs and pipeline runs seem the most obvious to me – but if anyone has any examples of any other useful resources to remove – let me know.

I will use the pipelines-testing namespace for this example.

Log in and run the following:

$ oc edit namespace pipelines-testing

Add the two prune annotations, shown at the bottom of the metadata below.

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: scott
    openshift.io/sa.scc.mcs: s0:c27,c19
    openshift.io/sa.scc.supplemental-groups: 1000740000/10000
    openshift.io/sa.scc.uid-range: 1000740000/10000
    operator.tekton.dev/prune.hash: 13962c29c3ca7981c17ed1ca562e231d37990a38ab15e738bff7a665433b8614
    operator.tekton.dev/prune.keep-since: "60"
    operator.tekton.dev/prune.resources: taskrun,pipelinerun

Save and exit. You’re done. You now have more time to spend on mastodon and no more worries about unused PVCs consuming your storage.
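As an alternative to editing the namespace by hand, the same two annotations can be applied in one go with oc annotate (namespace name as per the example above):

$ oc annotate namespace pipelines-testing \
    operator.tekton.dev/prune.keep-since=60 \
    operator.tekton.dev/prune.resources=taskrun,pipelinerun \
    --overwrite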

If this was helpful and you would like to buy me a coffee, it would be greatly appreciated!
