1. Overview

Prisma Cloud Console is deployed as a ReplicationController, which ensures it’s always running. Prisma Cloud Defenders are deployed as a DaemonSet, which ensures that an instance of Defender runs on every node in the cluster. You can run Defenders on OpenShift master and infrastructure nodes using node selectors.

The Prisma Cloud Console and Defender container images can be stored either in the internal OpenShift registry or your own Docker v2 compliant registry. Alternatively, you can configure your deployments to pull images from Prisma Cloud’s cloud registry.

This guide shows you how to generate deployment YAML files for both Console and Defender, and then deploy them to your OpenShift cluster with the oc client.

1.1. Cluster context

Prisma Cloud can segment your environment by cluster. For example, you might have three clusters: test, staging, and production. The cluster pivot in Prisma Cloud lets you inspect resources and administer security policy on a per-cluster basis.

Defenders in each DaemonSet are responsible for reporting which resources belong to which cluster. When deploying a Defender DaemonSet, Prisma Cloud tries to determine the cluster name through introspection. First, it tries to retrieve the cluster name from the cloud provider. As a fallback, it tries to retrieve the name from the corresponding kubeconfig file saved in the credentials store. Finally, you can override these mechanisms by manually specifying a cluster name when deploying your Defender DaemonSet.

Both the Prisma Cloud UI and twistcli tool accept an option for manually specifying a cluster name. Let Prisma Cloud automatically detect the name for provider-managed clusters. Manually specify names for self-managed clusters, such as those built with kops.

Radar lets you explore your environment cluster-by-cluster. You can also create stored filters (also known as collections) based on cluster names. Finally, you can scope policy by cluster. Vulnerability and compliance rules for container images and hosts can all be scoped by cluster name.

There are some things to consider when manually naming clusters:

  • If you specify the same name for two or more clusters, they’re treated as a single cluster.

  • For GCP, if you have clusters with the same name in different projects, they’re treated as a single cluster. Consider manually specifying a different name for each cluster.

  • Manually specifying names isn’t supported in Manage > Defenders > Manage > DaemonSet. This page lets you deploy and manage DaemonSets directly from the Prisma Cloud UI. For this deployment flow, cluster names are retrieved from the cloud provider or the supplied kubeconfig only.

2. Preflight checklist

To ensure that your installation goes smoothly, work through the following checklist and validate that all requirements are met.

2.1. Minimum system requirements

Validate that the components in your environment (nodes, host operating systems, orchestrator) meet the specs in System requirements.

For OpenShift installs, we recommend using the overlay or overlay2 storage drivers due to a known issue in RHEL. For more information, see https://bugzilla.redhat.com/show_bug.cgi?id=1518519.

2.2. Permissions

Validate that you have permission to:

  • Push to a private docker registry. For most OpenShift setups, the registry runs inside the cluster as a service. You must be able to authenticate with your registry with docker login.

  • Pull images from your registry. This might require the creation of a docker-registry secret (see the sketch after this list).

  • Pull from and push to the registry with the correct role bindings. For more information, see Accessing the Registry.

  • Create and delete projects in your cluster. For OpenShift installations, a project is created when you run oc new-project.

  • Run oc create commands.
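
For reference, creating a docker-registry secret and linking it to a service account for image pulls might look like the following sketch. The secret name and credential values are placeholders, not values this install requires:

$ oc create secret docker-registry twistlock-pull-secret \
  --docker-server=<REGISTRY> \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD> \
  -n twistlock

$ oc secrets link default twistlock-pull-secret --for=pull -n twistlock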

2.3. Internal cluster network communication

TCP: 8083 (Console UI and API), 8084 (Defender-to-Console communication)

2.4. External cluster network communication

TCP: 443

The Prisma Cloud Console connects to the Prisma Cloud Intelligence Stream (https://intelligence.twistlock.com) on TCP port 443 for vulnerability updates. If your Console is unable to contact the Prisma Cloud Intelligence Stream, follow the guidance for offline environments.

3. Install Prisma Cloud

Use twistcli to install the Prisma Cloud Console and Defenders. The twistcli utility is included with every release. After completing this procedure, both Prisma Cloud Console and Prisma Cloud Defenders will be running in your OpenShift cluster.

3.1. Download the Prisma Cloud software

Download the latest Prisma Cloud release to any system where the OpenShift oc client is installed.

  1. Go to Releases, and copy the link to the current recommended release.

  2. Download the release tarball to your cluster controller.

    $ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE>
  3. Unpack the release tarball.

    $ mkdir twistlock
    $ tar xvzf twistlock_<VERSION>.tar.gz -C twistlock/

3.2. Create an OpenShift project for Prisma Cloud

Create a project named twistlock.

  1. Log in to the OpenShift cluster and create the twistlock project:

    $ oc new-project twistlock

3.3. (Optional) Push the Prisma Cloud images to a private registry

When Prisma Cloud is deployed to your cluster, the images are retrieved from a registry. You have a number of options for storing the Prisma Cloud Console and Defender images:

  • OpenShift internal registry.

  • Private Docker v2 registry. You must create a docker-registry secret so the cluster can authenticate with the registry.

Alternatively, you can pull the images from the Prisma Cloud cloud registry at deployment time. Your cluster nodes must be able to connect to the Prisma Cloud cloud registry (registry-auth.twistlock.com) with TLS on TCP port 443.

This guide shows you how to use both the OpenShift internal registry and the Prisma Cloud cloud registry. If you’re going to use the Prisma Cloud cloud registry, you can skip this section. Otherwise, this procedure shows you how to pull, tag, and upload the Prisma Cloud images to the OpenShift internal registry’s twistlock imageStream.

  1. Determine the endpoint for your OpenShift internal registry. Use either the internal registry’s service name or cluster IP.

    $ oc get svc -n default
    NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)       AGE
    docker-registry    ClusterIP   172.30.163.181   <none>        5000/TCP      88d
  2. Pull the images from the Prisma Cloud cloud registry using your access token. The major, minor, and patch numerals in the <VERSION> string are separated with underscores. For example, 18.11.128 becomes 18_11_128.

      $ docker pull \
        registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION>
    
      $ docker pull \
        registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/console:console_<VERSION>
  3. Tag the images for the OpenShift internal registry.

    $ docker tag \
      registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/defender:defender_<VERSION> \
      172.30.163.181:5000/twistlock/private:defender_<VERSION>
    
    $ docker tag \
      registry-auth.twistlock.com/tw_<ACCESS_TOKEN>/twistlock/console:console_<VERSION> \
      172.30.163.181:5000/twistlock/private:console_<VERSION>
  4. Push the images to the twistlock project’s imageStream.

    $ docker push 172.30.163.181:5000/twistlock/private:defender_<VERSION>
    $ docker push 172.30.163.181:5000/twistlock/private:console_<VERSION>
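
If the push is rejected with an authentication error, log in to the internal registry first. A sketch for OpenShift 3.x, assuming the cluster IP from step 1 and your current session token:

$ docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.163.181:5000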

3.4. Install Console

Use the twistcli tool to generate YAML files or a Helm chart for Prisma Cloud Compute Console. The twistcli tool is bundled with the release tarball. There are versions for Linux, macOS, and Windows.

The twistcli tool generates YAML files or helm charts for a Deployment and other service configurations, such as a PersistentVolumeClaim, SecurityContextConstraints, and so on. Run the twistcli command with the --help flag for additional details about the command and supported flags.

You can optionally customize twistlock.cfg to enable additional features, such as custom compliance SCAP scanning. Then run twistcli from the root of the extracted release tarball.

Prisma Cloud Console uses a PersistentVolumeClaim to store data. There are two ways to provision storage for Console:

  • Dynamic provisioning: Allocate storage for Console on-demand at deployment time. When generating the Console deployment YAML files or Helm chart with twistcli, specify the name of the storage class with the --storage-class flag. Most customers use dynamic provisioning. A command for listing the available storage classes follows this list.

  • Manual provisioning: Pre-provision a persistent volume for Console, then specify its label when generating the Console deployment YAML files. OpenShift uses NFS mounts for backend infrastructure components (e.g., registry, logging). The NFS server is typically one of the master nodes. For guidance on creating an NFS-backed PersistentVolume, see Appendix: NFS PersistentVolume example.
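
To list the storage class names you can pass to the --storage-class flag (assuming you have permission to view cluster-scoped resources):

$ oc get storageclass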

3.4.1. Option #1: Deploy with YAML files

Deploy Prisma Cloud Compute Console with YAML files.

  1. Generate a deployment YAML file for Console. A number of command variations are provided. Use them as a basis for constructing your own working command.

    Prisma Cloud Console + dynamically provisioned PersistentVolume + image pulled from the OpenShift internal registry.

    $ <PLATFORM>/twistcli console export openshift \
      --storage-class "<STORAGE-CLASS-NAME>" \
      --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
      --service-type "ClusterIP"

    Prisma Cloud Console + manually provisioned PersistentVolume + image pulled from the OpenShift internal registry. Using the NFS-backed PersistentVolume described in Appendix: NFS PersistentVolume example, pass the label to the --persistent-volume-labels flag to specify the PersistentVolume to which the PersistentVolumeClaim will bind.

    $ <PLATFORM>/twistcli console export openshift \
      --persistent-volume-labels "app-volume=twistlock-console" \
      --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
      --service-type "ClusterIP"

    Prisma Cloud Console + manually provisioned PersistentVolume + image pulled from the Prisma Cloud cloud registry. If you omit the --image-name flag, the Prisma Cloud cloud registry is used by default, and you are prompted for your access token.

    $ <PLATFORM>/twistcli console export openshift \
      --persistent-volume-labels "app-volume=twistlock-console" \
      --service-type "ClusterIP"
  2. Deploy Console.

    $ oc create -f ./twistlock_console.yaml
    You can safely ignore the error that says the twistlock project already exists.
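
To confirm that Console came up before moving on, a quick check; exact pod names vary by deployment:

$ oc get pods -n twistlock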

3.4.2. Option #2: Deploy with Helm chart

Deploy Prisma Cloud Compute Console with a Helm chart.

Prisma Cloud Console Helm charts fail to install on OpenShift 4 clusters due to a Helm bug. If you generate a Helm chart, and try to install it in an OpenShift 4 cluster, you’ll get the following error:

Error: unable to recognize "": no matches for kind "SecurityContextConstraints" in version "v1"

To work around the issue, you’ll need to manually modify the generated Helm chart.

  1. Generate a deployment helm chart for Console. A number of command variations are provided. Use them as a basis for constructing your own working command.

    Prisma Cloud Console + dynamically provisioned PersistentVolume + image pulled from the OpenShift internal registry.

    $ <PLATFORM>/twistcli console export openshift \
      --storage-class "<STORAGE-CLASS-NAME>" \
      --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
      --service-type "ClusterIP" \
      --helm

    Prisma Cloud Console + manually provisioned PersistentVolume + image pulled from the OpenShift internal registry. Using the NFS-backed PersistentVolume described in Appendix: NFS PersistentVolume example, pass the label to the --persistent-volume-labels flag to specify the PersistentVolume to which the PersistentVolumeClaim will bind.

    $ <PLATFORM>/twistcli console export openshift \
      --persistent-volume-labels "app-volume=twistlock-console" \
      --image-name "172.30.163.181:5000/twistlock/private:console_<VERSION>" \
      --service-type "ClusterIP" \
      --helm

    Prisma Cloud Console + manually provisioned PersistentVolume + image pulled from the Prisma Cloud cloud registry. If you omit the --image-name flag, the Prisma Cloud cloud registry is used by default, and you are prompted for your access token.

    $ <PLATFORM>/twistcli console export openshift \
      --persistent-volume-labels "app-volume=twistlock-console" \
      --service-type "ClusterIP" \
      --helm
  2. Unpack the chart into a temporary directory.

    $ mkdir helm-console
    $ tar xvzf twistlock-console-helm.tar.gz -C helm-console/
  3. Open helm-console/twistlock-console/templates/securitycontextconstraints.yaml for editing.

  4. Change apiVersion from v1 to security.openshift.io/v1.

    {{- if .Values.openshift }}
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: twistlock-console
    ...
  5. Repack the Helm chart.

    $ cd helm-console/
    $ tar cvzf twistlock-console-helm.tar.gz twistlock-console/
  6. Install the updated Helm chart.

    $ helm install --namespace=twistlock -g twistlock-console-helm.tar.gz
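
To confirm the release installed, a quick check, assuming Helm 3 (which the -g flag above implies):

$ helm list -n twistlock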

3.5. Create an external route to Console

Create an external route to Console so that you can access the web UI and API.

  1. From the OpenShift web interface, go to the twistlock project.

  2. Go to Application > Routes.

  3. Select Create Route.

  4. Enter a name for the route, such as twistlock-console.

  5. Hostname = URL used to access the Console, e.g. twistlock-console.apps.ose.example.com

  6. Path = /

  7. Service = twistlock-console

  8. Target Port = 8083 → 8083

  9. Select the Security > Secure Route radio button.

  10. TLS Termination = Passthrough (if using 8083)

    If you plan to issue a custom, trusted certificate for the Prisma Cloud Console that allows TLS to be established with Console, select Passthrough TLS for TCP port 8083.

  11. Insecure Traffic = Redirect

  12. Click Create.
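
Alternatively, you can create an equivalent passthrough route with the oc client. A sketch; the hostname is a placeholder for your environment:

$ oc create route passthrough twistlock-console \
  --service twistlock-console \
  --port 8083 \
  --hostname twistlock-console.apps.ose.example.com \
  -n twistlock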

3.6. Configure Console

Create your first admin user, enter your license key, and configure Console’s certificate so that Defenders can establish a secure connection to it.

  1. In a web browser, navigate to the external route you configured for Console, e.g. https://twistlock-console.apps.ose.example.com.

  2. Create your first admin account.

  3. Enter your license key.

  4. Add a SubjectAlternativeName to Console’s certificate to allow Defenders to establish a secure connection with Console.

    Use either Console’s service name, twistlock-console or twistlock-console.twistlock.svc, or Console’s cluster IP.

    $ oc get svc -n twistlock
    NAME                TYPE           CLUSTER-IP     EXTERNAL-IP                 PORT(S)
    twistlock-console   LoadBalancer   172.30.41.62   172.29.61.32,172.29.61.32   8084:3184...
    1. Go to Manage > Defenders > Names.

    2. Click Add SAN and enter Console’s service name.

    3. Click Add SAN and enter Console’s cluster IP.

3.7. Install Defender

Prisma Cloud Defenders run as containers on the nodes in your OpenShift cluster. They are deployed as a DaemonSet. Use the twistcli tool to generate the DaemonSet deployment YAML or helm chart.

The command has the following basic structure. It creates a YAML file named defender.yaml or a Helm chart named twistlock-defender-helm.tar.gz in the working directory.

Example for export of a YAML file:

$ <PLATFORM>/twistcli defender export openshift \
  --address <ADDRESS> \
  --cluster-address <CLUSTER-ADDRESS> \
  --cri

Example for export of a Helm chart:

$ <PLATFORM>/twistcli defender export openshift \
  --address <ADDRESS> \
  --cluster-address <CLUSTER-ADDRESS> \
  --helm \
  --cri

The command connects to Console’s API, specified in --address, to generate the Defender DaemonSet YAML config file or helm chart. The location where you run twistcli (inside or outside the cluster) dictates which Console address should be supplied.

The --cluster-address flag specifies the address Defender uses to connect to Console. For Defenders deployed inside the cluster, specify Prisma Cloud Console’s service name, twistlock-console or twistlock-console.twistlock.svc, or its cluster IP address. For Defenders deployed outside the cluster, specify Console’s external address, which is exposed by your external route.

If SELinux is enabled on the OpenShift nodes, pass the --selinux-enabled argument to twistcli.

For managed clusters, Prisma Cloud automatically gets the cluster name from the cloud provider. To override the cloud provider’s cluster name, use the --cluster option. For self-managed clusters, manually specify a cluster name with the --cluster option.
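
For example, a sketch that sets a cluster name at export time; the name "production" is illustrative:

$ <PLATFORM>/twistcli defender export openshift \
  --address https://twistlock-console.apps.ose.example.com \
  --cluster-address 172.30.41.62 \
  --cluster "production" \
  --cri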

3.7.1. Option #1: Deploy with YAML files

Deploy the Defender DaemonSet with YAML files.

  1. Generate the Defender DaemonSet YAML. A number of command variations are provided. Use them as a basis for constructing your own working command.

    Outside the OpenShift cluster + pull the Defender image from the Prisma Cloud cloud registry. Use the OpenShift external route for your Prisma Cloud Console, --address https://twistlock-console.apps.ose.example.com. Designate Prisma Cloud’s cloud registry by omitting the --image-name flag. Define CRI-O as the default container engine by using the --cri flag.

    $ <PLATFORM>/twistcli defender export openshift \
      --address https://twistlock-console.apps.ose.example.com \
      --cluster-address 172.30.41.62 \
      --selinux-enabled \
      --cri

    Outside the OpenShift cluster + pull the Defender image from the OpenShift internal registry. Use the --image-name flag to designate an image from the OpenShift internal registry. Define CRI-O as the default container engine by using the --cri flag.

    $ <PLATFORM>/twistcli defender export openshift \
      --address https://twistlock-console.apps.ose.example.com \
      --cluster-address 172.30.41.62 \
      --selinux-enabled \
      --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
      --cri

    Inside the OpenShift cluster + pull the Defender image from the Prisma Cloud cloud registry. When generating the Defender DaemonSet YAML with twistcli from a node inside the cluster, use Console’s service name (twistlock-console) or cluster IP in the --address flag. This flag specifies the endpoint for the Prisma Cloud Compute API and must include the port number (8083). Define CRI-O as the default container engine by using the --cri flag.

    $ <PLATFORM>/twistcli defender export openshift \
      --address https://172.30.41.62:8083 \
      --cluster-address 172.30.41.62 \
      --selinux-enabled \
      --cri

    Inside the OpenShift cluster + pull the Defender image from the OpenShift internal registry. Use the --image-name flag to designate an image in the OpenShift internal registry. Define CRI-O as the default container engine by using the --cri flag.

    $ <PLATFORM>/twistcli defender export openshift \
      --address https://172.30.41.62:8083 \
      --cluster-address 172.30.41.62 \
      --selinux-enabled \
      --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
      --cri
  2. Deploy the Defender DaemonSet.

    $ oc create -f ./defender.yaml

3.7.2. Option #2: Deploy with Helm chart

Deploy the Defender DaemonSet with a Helm chart.

Prisma Cloud Defenders Helm charts fail to install on OpenShift 4 clusters due to a Helm bug. If you generate a Helm chart, and try to install it in an OpenShift 4 cluster, you’ll get the following error:

Error: unable to recognize "": no matches for kind "SecurityContextConstraints" in version "v1"

To work around the issue, you’ll need to manually modify the generated Helm chart.

  1. Generate the Defender DaemonSet helm chart. A number of command variations are provided. Use them as a basis for constructing your own working command.

    Outside the OpenShift cluster + pull the Defender image from the Prisma Cloud cloud registry. Use the OpenShift external route for your Prisma Cloud Console, --address https://twistlock-console.apps.ose.example.com. Designate Prisma Cloud’s cloud registry by omitting the --image-name flag. Define CRI-O as the default container engine by using the --cri flag.

    $ <PLATFORM>/twistcli defender export openshift \
      --address https://twistlock-console.apps.ose.example.com \
      --cluster-address 172.30.41.62 \
      --helm \
      --cri

    Outside the OpenShift cluster + pull the Defender image from the OpenShift internal registry. Use the --image-name flag to designate an image from the OpenShift internal registry. Define CRI-O as the default container engine by using the --cri flag.

    $ <PLATFORM>/twistcli defender export openshift \
      --address https://twistlock-console.apps.ose.example.com \
      --cluster-address 172.30.41.62 \
      --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
      --helm \
      --cri

    Inside the OpenShift cluster + pull the Defender image from the Prisma Cloud cloud registry. When generating the Defender DaemonSet YAML with twistcli from a node inside the cluster, use Console’s service name (twistlock-console) or cluster IP in the --address flag. This flag specifies the endpoint for the Prisma Cloud Compute API and must include the port number (8083). Define CRI-O as the default container engine by using the --cri flag.

    $ <PLATFORM>/twistcli defender export openshift \
      --address https://172.30.41.62:8083 \
      --cluster-address 172.30.41.62 \
      --helm \
      --cri

    Inside the OpenShift cluster + pull the Defender image from the OpenShift internal registry. Use the --image-name flag to designate an image in the OpenShift internal registry. Define CRI-O as the default container engine by using the --cri flag.

    $ <PLATFORM>/twistcli defender export openshift \
      --address https://172.30.41.62:8083 \
      --cluster-address 172.30.41.62 \
      --image-name 172.30.163.181:5000/twistlock/private:defender_<VERSION> \
      --helm \
      --cri
  2. Unpack the chart into a temporary directory.

    $ mkdir helm-defender
    $ tar xvzf twistlock-defender-helm.tar.gz -C helm-defender/
  3. Open helm-defender/twistlock-defender/templates/securitycontextconstraints.yaml for editing.

  4. Change apiVersion from v1 to security.openshift.io/v1.

    {{- if .Values.openshift }}
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: twistlock-defender
    ...
  5. Repack the Helm chart.

    $ cd helm-defender/
    $ tar cvzf twistlock-defender-helm.tar.gz twistlock-defender/
  6. Install the updated Helm chart.

    $ helm install --namespace=twistlock -g twistlock-defender-helm.tar.gz

3.8. Confirm the Defenders were deployed

  1. In Prisma Cloud Console, go to Manage > Defenders > Manage to see a list of deployed Defenders.

  2. In the OpenShift Web Console, go to the Prisma Cloud project’s monitoring window to see which pods are running.

  3. Use the OpenShift CLI to check the DaemonSet pod count.

    $ oc get ds -n twistlock
    NAME                    DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    twistlock-defender-ds   4         3         3         3            3           <none>          29m
    In this example, the desired and current pod counts do not match. A nodeSelector is restricting where Defenders can be scheduled; the next section explains how to adjust it.

4. Control Defender deployments with NodeSelector

You can deploy Defenders to all nodes in an OpenShift cluster (master, infra, compute). Depending upon the nodeSelector configuration, Prisma Cloud Defenders may not get deployed to all nodes. Adjust the guidance in the following procedure according to your organization’s deployment strategy.

  1. Review the following OpenShift configuration settings.

    1. The OpenShift master nodeSelector configuration can be found in /etc/origin/master/master-config.yaml. Look for any nodeSelector and nodeSelectorLabelBlacklist settings.

      defaultNodeSelector: compute=true
    2. Prisma Cloud project - The nodeSelector can be defined at the project level.

      $ oc describe project twistlock
      Name:                   twistlock
      Created:                10 days ago
      Labels:                 <none>
      Annotations:            openshift.io/description=
                              openshift.io/display-name=
                              openshift.io/node-selector=node-role.kubernetes.io/compute=true
                              openshift.io/sa.scc.mcs=s0:c17,c9
                              openshift.io/sa.scc.supplemental-groups=1000290000/10000
                              openshift.io/sa.scc.uid-range=1000290000/10000
      Display Name:           <none>
      Description:            <none>
      Status:                 Active
      Node Selector:          node-role.kubernetes.io/compute=true
      Quota:                  <none>
      Resource limits:        <none>

      In this example, the Prisma Cloud project’s node selector instructs OpenShift to deploy Defenders only to nodes labeled node-role.kubernetes.io/compute=true.

  2. The following command removes the node selector annotation from the Prisma Cloud project.

    $ oc annotate namespace twistlock openshift.io/node-selector=""
  3. Add a Deploy_PrismaCloud=true label to all nodes to which Defender should be deployed. (Label keys can’t contain spaces.)

      $ oc label node ip-172-31-0-55.ec2.internal Deploy_PrismaCloud=true
    
      $ oc describe node ip-172-31-0-55.ec2.internal
      Name:               ip-172-31-0-55.ec2.internal
      Roles:              compute
      Labels:             Deploy_PrismaCloud=true
                          beta.kubernetes.io/arch=amd64
                          beta.kubernetes.io/os=linux
                          kubernetes.io/hostname=ip-172-31-0-55.ec2.internal
                          logging-infra-fluentd=true
                          node-role.kubernetes.io/compute=true
                          region=primary
      Annotations:        volumes.kubernetes.io/controller-managed-attach-detach=true
      CreationTimestamp:  Sun, 05 Aug 2018 05:40:10 +0000
  4. Set the nodeSelector in the Defender DaemonSet deployment YAML, then re-apply the file so the change takes effect (see the sketch after this list).

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: twistlock-defender-ds
      namespace: twistlock
    spec:
      template:
        metadata:
          labels:
            app: twistlock-defender
        spec:
          serviceAccountName: twistlock-service
          nodeSelector:
            Deploy_PrismaCloud: "true"
          restartPolicy: Always
          containers:
          - name: twistlock-defender-2-5-127
          ...
  5. Check the desired and current count for the Defender DaemonSet deployment.

    $ oc get ds -n twistlock
    
    NAME                    DESIRED   CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR
    twistlock-defender-ds   4         4        4      4           4          Deploy_PrismaCloud=true
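
As noted in step 4, re-applying the edited DaemonSet YAML might look like this sketch:

$ oc apply -f defender.yaml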

5. Uninstall

To uninstall Prisma Cloud, delete the twistlock project, then delete the Prisma Cloud PersistentVolume.

  1. Delete the twistlock Project

    $ oc delete project twistlock
  2. Delete the twistlock PersistentVolume

    $ oc delete pv twistlock
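
If you’re unsure of the PersistentVolume’s name, you can look it up by the label used in the appendix example; a sketch:

$ oc get pv -l app-volume=twistlock-console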

6. Appendix: NFS PersistentVolume example

Create an NFS mount for the Prisma Cloud Console’s PV on the host that serves the NFS mounts.

  1. Create the directory: mkdir /opt/twistlock_console

  2. Check the SELinux status: sestatus

  3. Set the SELinux context: chcon -R -t svirt_sandbox_file_t -l s0 /opt/twistlock_console

  4. sudo chown nfsnobody /opt/twistlock_console

  5. sudo chgrp nfsnobody /opt/twistlock_console

  6. Check the permissions with ls -lZ /opt/twistlock_console (expect drwxr-xr-x. nfsnobody nfsnobody system_u:object_r:svirt_sandbox_file_t:s0).

  7. Create /etc/exports.d/twistlock.exports.

  8. Add the following line to /etc/exports.d/twistlock.exports: /opt/twistlock_console *(rw,root_squash)

  9. Reload the NFS exports: sudo exportfs -ra

  10. Confirm the export with showmount -e.

  11. Get the IP address of the master node to use in the PersistentVolume definition (eth0; OpenShift uses the 172.x address range for node-to-node communication). Make sure TCP port 2049 (NFS) is allowed between nodes.

  12. Create a PersistentVolume for Prisma Cloud Console.

    The following example uses a label on the PersistentVolume together with the volume/claim pre-binding feature. The PersistentVolumeClaim uses the app-volume: twistlock-console label to bind to the PV. The claimRef pre-binding ensures that the PersistentVolume is not claimed by another PersistentVolumeClaim before Prisma Cloud Console is deployed.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: twistlock
      labels:
        app-volume: twistlock-console
    spec:
      storageClassName: standard
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /opt/twistlock_console
        server: 172.31.4.59
      persistentVolumeReclaimPolicy: Retain
      claimRef:
        name: twistlock-console
        namespace: twistlock
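
To create the PersistentVolume, assuming the YAML above is saved as twistlock-pv.yaml (the filename is illustrative):

$ oc create -f twistlock-pv.yaml
$ oc get pv twistlock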

7. Appendix: Implementing SAML federation with a Prisma Cloud Console inside an OpenShift cluster

When federating a Prisma Cloud Console that is accessed through an OpenShift external route with a SAML v2.0 Identity Provider (IdP), the AssertionConsumerServiceURL value in the SAML authentication request must be modified. Prisma Cloud automatically generates the AssertionConsumerServiceURL value based on Console’s configuration. When Console is accessed through an OpenShift external route, the URL for Console’s API endpoint is most likely not the same as the automatically generated AssertionConsumerServiceURL. Therefore, you must configure the AssertionConsumerServiceURL value that Prisma Cloud sends in the SAML authentication request.

  1. Log into Prisma Cloud Console.

  2. Go to Manage > Authentication > SAML.

  3. In Console URL, define the AssertionConsumerServiceURL.

    In this example, enter https://twistlock-console.apps.ose.example.com