1. Overview

This guide shows you how to deploy Prisma Cloud in an ECS cluster with a single infrastructure node and two worker nodes. Console runs on the infrastructure node. An instance of Defender runs on each node in the cluster.

Console is the Prisma Cloud management interface. It runs as a service in your ECS cluster. The parameters of the service are described in a task definition, and the task definition is written in JSON format.

Defender protects your containerized environment according to the policies you set in Prisma Cloud Console. It also runs as a service in your ECS cluster. To automatically deploy an instance of Defender on each node in your cluster, you’ll run the Defender task as a daemon service.

The installation described in this article is meant to be highly available. Data is persisted across nodes. If an infrastructure node were to go down, ECS can reschedule the Console service on any healthy node, and Console will continue to have access to its state. To enable this capability, you’ll attach storage that’s accessible from each of your infrastructure nodes, and Amazon Elastic File System (EFS) is an excellent option.

When you have multiple infrastructure nodes, ECS can schedule Console on any of them. Defenders need a reliable way to connect to Console. A load balancer automatically directs traffic to the node where Console runs, and offers a stable interface that Defenders can use to connect to Console and that operators can use to access its web interface.

We assume you are deploying Prisma Cloud to the default VPC. If you are not using the default VPC, adjust your settings accordingly.
This guide assumes you know very little about AWS ECS. As such, it is extremely prescriptive, and includes steps for building your cluster. If you are already familiar with AWS ECS and do not need assistance navigating the interface, simply read each section synopsis, which summarizes the key configurations.

1.1. Cluster context

Prisma Cloud can segment your environment by cluster. For example, you might have three clusters: test, staging, and production. The cluster pivot in Prisma Cloud lets you inspect resources and administer security policy on a per-cluster basis.

[Figure: Radar clusters pivot view]

Defenders in each DaemonSet are responsible for reporting which resources belong to which cluster. When deploying a Defender DaemonSet, Prisma Cloud tries to determine the cluster name through introspection. First, it tries to retrieve the cluster name from the cloud provider. As a fallback, it tries to retrieve the name from the corresponding kubeconfig file saved in the credentials store. Finally, you can override these mechanisms by manually specifying a cluster name when deploying your Defender DaemonSet.

Both the Prisma Cloud UI and twistcli tool accept an option for manually specifying a cluster name. Let Prisma Cloud automatically detect the name for provider-managed clusters. Manually specify names for self-managed clusters, such as those built with kops.

Radar lets you explore your environment cluster-by-cluster. You can also create stored filters (also known as collections) based on cluster names. Finally, you can scope policy by cluster. Vulnerability and compliance rules for container images and hosts, runtime rules for container images, and trusted images rules can all be scoped by cluster name.

There are some things to consider when manually naming clusters:

  • If you specify the same name for two or more clusters, they’re treated as a single cluster.

  • For GCP, if you have clusters with the same name in different projects, they’re treated as a single cluster. Consider manually specifying a different name for each cluster.

  • Manually specifying names isn’t supported in Manage > Defenders > Manage > DaemonSet. This page lets you deploy and manage DaemonSets directly from the Prisma Cloud UI. For this deployment flow, cluster names are retrieved from the cloud provider or the supplied kubeconfig only.

2. Download the Prisma Cloud software

The Prisma Cloud release tarball contains all the release artifacts.

  1. Download the latest recommended release.

  2. Retrieve the release tarball.

    $ wget <LINK_TO_CURRENT_RECOMMENDED_RELEASE_LINK>
  3. Unpack the Prisma Cloud release tarball.

    $ mkdir twistlock
    $ tar xvzf prisma_cloud_compute_edition_<VERSION>.tar.gz  -C twistlock/

3. Create a cluster

Create an empty cluster named pc-ecs-cluster. Later, you will create launch configurations and auto-scaling groups to start EC2 instances in the cluster.

  1. Log into the AWS Management Console.

  2. Go to Services > Containers > Elastic Container Service.

  3. Click Create Cluster.

  4. Select Networking only, then click Next Step.

  5. Enter a cluster name, such as pc-ecs-cluster.

  6. Click Create.
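If you prefer to script your setup, the same empty cluster can be created with the AWS CLI. This is a sketch, assuming the AWS CLI is installed and configured with credentials for your account:

```shell
# Create the empty cluster. The name must match the ECS_CLUSTER value
# used in the user data scripts later in this guide.
aws ecs create-cluster --cluster-name pc-ecs-cluster
```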

4. Create a security group

Create a new security group named pc-security-group that opens the following ports. This security group will be associated with resources in your cluster.

Port   Description

8083   Prisma Cloud Console’s UI and API.

8084   Prisma Cloud secure websocket for Console-Defender communication.

2049   NFS for Prisma Cloud Console to access its state.

22     SSH for managing nodes.

You can harden this configuration as required. For example, you might want to limit access to port 22 to specific source IPs.

  1. Go to Services > Compute > EC2.

  2. In the left menu, click NETWORK & SECURITY > Security Groups.

  3. Click Create Security Group.

  4. In Security group name, enter a name, such as pc-security-group.

  5. In Description, enter Prisma Cloud ports.

  6. In VPC, select your default VPC.

  7. Under the Inbound rules section, click Add Rule.

    1. Under Type, select Custom TCP.

    2. Under Port Range, enter 8083-8084.

    3. Under Source, select Anywhere.

  8. Click Add Rule.

    1. Under Type, select NFS.

    2. Under Source, select Anywhere.

  9. Click Add Rule.

    1. Under Type, select SSH.

    2. Under Source, select Anywhere.

  10. Click Create security group.
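The console steps above can also be scripted. A sketch using the AWS CLI, assuming you are using the default VPC; tighten the 0.0.0.0/0 source ranges as your environment requires:

```shell
# Look up the default VPC, then create pc-security-group in it.
VPC_ID=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true \
    --query 'Vpcs[0].VpcId' --output text)

SG_ID=$(aws ec2 create-security-group \
    --group-name pc-security-group \
    --description "Prisma Cloud ports" \
    --vpc-id "$VPC_ID" \
    --query 'GroupId' --output text)

# 8083-8084: Console UI/API and Console-Defender websocket.
# 2049: NFS for Console state. 22: SSH for managing nodes.
for PORTS in 8083-8084 2049 22; do
    aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
        --protocol tcp --port "$PORTS" --cidr 0.0.0.0/0
done
```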

5. Create an EFS file system for Console

Create the Console EFS file system, and then get the command that will be used to mount the file system on every infrastructure node.

The EFS file system and ECS cluster must be in the same VPC and security group.

Prerequisites: Prisma Cloud Console depends on an EFS file system with the following performance characteristics:

  • Performance mode: General purpose.

  • Throughput mode: Provisioned. Provision 0.1 MiB/s per deployed Defender. For example, if you plan to deploy 10 Defenders, provision 1 MiB/s of throughput.

  1. Log into the AWS Management Console.

  2. Go to Services > Storage > EFS.

  3. Click Create File System.

  4. Enter a value for Name, such as pc-efs-console.

  5. Select a VPC.

  6. Click Customize.

  7. Set throughput mode to Provisioned, and set Throughput to 0.1 MiB/s per Defender to be deployed.

    For example, if you plan to deploy ten Defenders, set throughput to 1 MiB/s (10 Defenders * 0.1 MiB/s = 1 MiB/s).

  8. Click Next.

  9. For each mount target, select the pc-security-group.

  10. Click Next.

  11. In File System Policy, click Next.

  12. Review your settings and click Create.

  13. Click View file system.

  14. Click Attach, copy the NFS client mount command, and set it aside for later.

    You will use the mount command when setting up Console’s launch configuration.
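The provisioned-throughput rule above is simple enough to compute mechanically. A small sketch; the Defender count of 10 is an example value:

```shell
DEFENDERS=10      # example: number of Defenders you plan to deploy
PER_DEFENDER=0.1  # MiB/s of provisioned throughput per Defender

# 10 Defenders * 0.1 MiB/s = 1.0 MiB/s
THROUGHPUT=$(awk -v n="$DEFENDERS" -v p="$PER_DEFENDER" \
    'BEGIN { printf "%.1f", n * p }')
echo "Provision ${THROUGHPUT} MiB/s of EFS throughput"
```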

6. Set up a load balancer

Set up an AWS Classic Load Balancer, and capture the Load Balancer DNS name.

You’ll create two load balancer listeners. One is used for Console’s UI and API, which are served on port 8083. Another is used for the websocket connection between Defender and Console, which is established on port 8084.

For detailed instructions on how to create a load balancer for Console, see Configure an AWS Load Balancer for ECS.
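For reference, the two listeners can also be created with the AWS CLI. This is a sketch; the load balancer name pc-console-lb is an assumption, and <SUBNET_ID> and <SG_ID> are placeholders for your public subnet and the pc-security-group ID:

```shell
# Create a Classic Load Balancer with TCP listeners for the Console
# UI/API (8083) and the Console-Defender websocket (8084).
aws elb create-load-balancer \
    --load-balancer-name pc-console-lb \
    --listeners \
      Protocol=TCP,LoadBalancerPort=8083,InstanceProtocol=TCP,InstancePort=8083 \
      Protocol=TCP,LoadBalancerPort=8084,InstanceProtocol=TCP,InstancePort=8084 \
    --subnets <SUBNET_ID> \
    --security-groups <SG_ID>

# The returned DNSName is the stable address Defenders and operators use.
aws elb describe-load-balancers --load-balancer-names pc-console-lb \
    --query 'LoadBalancerDescriptions[0].DNSName' --output text
```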

7. Deploy Console

Launch an infrastructure node that runs in the cluster, then start Prisma Cloud Console as a service on that node.

7.1. Create a launch configuration for the infrastructure node

Launch configurations are templates that are used by an auto-scaling group to start EC2 instances in your cluster.

Create a launch configuration named pc-infra-node that:

  • Uses an instance type of t2.xlarge or higher. For more information about Console’s minimum requirements, see the system requirements.

  • Runs Amazon ECS-Optimized Amazon Linux 2 AMI.

  • Uses the ecsInstanceRole IAM role.

  • Runs a user data script that joins the pc-ecs-cluster and defines a custom attribute named purpose with a value of infra. Console tasks will be placed on this instance.

  1. Go to Services > Compute > EC2.

  2. In the left menu, click Auto Scaling > Launch Configurations.

  3. Click Create launch configuration.

  4. In Name, enter a name for your launch configuration, such as pc-infra-node.

  5. In Amazon machine image, select Amazon ECS-Optimized Amazon Linux 2 AMI.

    You can get a complete list of per-region Amazon ECS-optimized AMIs from here.

  6. Under instance type, select t2.xlarge.

  7. Under Additional Configuration:

    1. In IAM instance profile, select ecsInstanceRole.

      If this role doesn’t exist, see Amazon ECS Container Instance IAM Role.
    2. Under User data, select Text, and paste the following code snippet, which installs the NFS utilities and mounts the EFS file system:

      #!/bin/bash
      cat <<'EOF' >> /etc/ecs/ecs.config
      ECS_CLUSTER=pc-ecs-cluster
      ECS_INSTANCE_ATTRIBUTES={"purpose": "infra"}
      EOF
      
      yum install -y nfs-utils
      mkdir /twistlock_console
      <CONSOLE_MOUNT_COMMAND> /twistlock_console
      
      mkdir -p /twistlock_console/var/lib/twistlock
      mkdir -p /twistlock_console/var/lib/twistlock-backup
      mkdir -p /twistlock_console/var/lib/twistlock-config

      ECS_CLUSTER must match your cluster name. If you’ve named your cluster something other than pc-ecs-cluster, then update the user data script accordingly.

      <CONSOLE_MOUNT_COMMAND> is the Console mount command you copied from the AWS Management Console after creating your Console EFS file system. The local mount point must be /twistlock_console, not the directory shown in the sample command.

    3. (Optional) In IP Address Type, select Assign a public IP address to every instance.

      With this option, you can easily SSH to this instance to troubleshoot issues.

  8. Under Security groups:

    1. Select Select an existing security group.

    2. Select pc-security-group.

  9. Under Key pair (login), select an existing key pair, or create a new key pair so that you can access your instances.

  10. Click Create launch configuration.

7.2. Create an auto scaling group for the infrastructure node

Launch a single instance of the infrastructure node into your cluster.

  1. Go to Services > Compute > EC2.

  2. In the left menu, click Auto Scaling > Auto Scaling Groups.

  3. Click Create an Auto Scaling group.

  4. In Choose launch template or configuration:

    1. In Auto Scaling group Name, enter pc-infra-autoscaling.

    2. In Launch template, click Switch to launch configuration.

    3. Select pc-infra-node.

    4. Click Next.

  5. Under Configure settings:

    1. In VPC, select your default VPC.

    2. In Subnet, select a public subnet, such as 172.31.0.0/20.

    3. Click Skip to review.

  6. Review the configuration and click Create Auto Scaling Group.

    After the auto scaling group spins up (it will take some time), validate that your cluster has one container instance, where a container instance is the ECS vernacular for an EC2 instance that has joined the cluster and is ready to accept container workloads:

    • Go to Services > Containers > Elastic Container Service. The count for Container instances should be 1.

    • Click on the cluster, then click on the ECS Instances tab. In the status table, there should be a single entry. Click on the link under the EC2 Instance column. In the details page for the EC2 instance, record the Public DNS.
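The same validation can be done from the AWS CLI. A sketch, assuming the AWS CLI is configured for the cluster’s account and region:

```shell
# List the container instances registered to the cluster; there
# should be exactly one ARN in the output at this point.
aws ecs list-container-instances --cluster pc-ecs-cluster

# Confirm the instance carries the custom "purpose=infra" attribute
# set by the user data script.
aws ecs list-attributes --cluster pc-ecs-cluster \
    --target-type container-instance \
    --attribute-name purpose --attribute-value infra
```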

7.3. Copy the Prisma Cloud config file into place

The Prisma Cloud API serves the version of the configuration file used to instantiate Console. Use scp to copy twistlock.cfg from the Prisma Cloud release tarball to /twistlock_console/var/lib/twistlock-config on the infrastructure node.

  1. Upload twistlock.cfg to the infrastructure node.

    1. Go to the directory where you unpacked the Prisma Cloud release tarball.

    2. Copy twistlock.cfg to the infrastructure node.

      $ scp -i <PATH-TO-KEY-FILE> twistlock.cfg ec2-user@<ECS_INFRA_NODE_DNS_NAME>:~
  2. SSH to the infrastructure node.

    $ ssh -i <PATH-TO-KEY-FILE> ec2-user@<ECS_INFRA_NODE_DNS_NAME>
  3. Copy the twistlock.cfg file into place.

    $ sudo cp twistlock.cfg /twistlock_console/var/lib/twistlock-config
  4. Close your SSH session.

    $ exit

7.4. Create a Prisma Cloud Console task definition

Prisma Cloud provides a task definition template for Console. Download the template, then update the variables specific to your environment. Finally, load the task definition in ECS.

Prerequisites:

  • The task definition provisions sufficient resources for Console to operate. The template specifies reasonable defaults. For more information, see the system requirements.

  1. Download the Prisma Cloud Compute Console task definition, and open it for editing.

  2. Update the value for image.

    Replace the following placeholder strings with the appropriate values:

    • <ACCESS-TOKEN> — Your Prisma Cloud access token. All characters must be lowercase.

    • <VERSION> — Version of the Console image to use. For example, for version 20.04.177, specify 20_04_177. The image and tag will look like console:console_20_04_177.

  3. Update <CONSOLE-DNS> to the Load Balancer’s DNS name.

  4. Go to Services > Containers > Elastic Container Service.

  5. In the left menu, click Task Definitions.

  6. Click Create new Task Definition.

  7. Select EC2, and then click Next step.

  8. In Step 2: Configure task and container definitions, scroll to the bottom of the page and click Configure via JSON.

  9. Delete the default task definition, and replace it with the Prisma Cloud Compute Console task definition.

  10. Click Save.

  11. (Optional) Change the name of the task definition. By default, its name is pc-console.

  12. Click Create.
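Alternatively, the edited template can be registered directly from the AWS CLI, skipping the Configure via JSON steps. The file name pc-console.json is an assumption for wherever you saved the downloaded task definition:

```shell
# Register the Console task definition from the edited JSON file.
aws ecs register-task-definition --cli-input-json file://pc-console.json
```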

7.5. Start the Prisma Cloud Console service

Create the Console service using the previously defined task definition. A single instance of Console will run on the infrastructure node.

  1. Go to Services > Containers > Elastic Container Service.

  2. In the left menu, click Clusters.

  3. Click on your cluster.

  4. In the Services tab, click Create.

  5. In Step 1: Configure service:

    1. For Launch type, select EC2.

    2. For Task Definition, select pc-console.

    3. In Service Name, enter pc-console.

    4. In Number of tasks, enter 1.

    5. Click Next Step.

  6. In Step 2: Configure network:

    1. For Load Balancer type, select Classic Load Balancer.

    2. For Service IAM role, leave the default ecsServiceRole.

    3. For Load Balancer Name, select the previously created load balancer.

    4. Unselect Enable Service discovery integration.

    5. Click Next Step.

  7. In Step 3: Set Auto Scaling, accept the defaults, and click Next.

  8. In Step 4: Review, click Create Service.

  9. Wait for the service to launch, and then click View Service.

  10. Wait for Last status to change to RUNNING (it can take a few minutes), and then proceed to the next step.
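For reference, a CLI sketch of the same service. The load balancer name (pc-console-lb) and container name (twistlock-console) are assumptions; use the names from your environment and your task definition:

```shell
# Create the Console service: one task, EC2 launch type, registered
# behind the Classic Load Balancer on the Console UI/API port.
aws ecs create-service \
    --cluster pc-ecs-cluster \
    --service-name pc-console \
    --task-definition pc-console \
    --desired-count 1 \
    --launch-type EC2 \
    --role ecsServiceRole \
    --load-balancers \
      loadBalancerName=pc-console-lb,containerName=twistlock-console,containerPort=8083
```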

7.6. Configure Prisma Cloud Console

Navigate to Console’s web interface, create your first admin account, and enter your license.

  1. Start a browser, then navigate to https://<LB_DNS_NAME>:8083

  2. At the login page, create your first admin account. Enter a username and password.

  3. Enter your license key, then click Register.

8. Deploy Defender

Create worker nodes in your ECS cluster, create a task definition for the Prisma Cloud Defender, and then create a service of type Daemon to deploy Defender to every node in the cluster.

If you already have worker nodes in your cluster, skip directly to creating the Defender task definition.

8.1. Create a launch configuration for worker nodes

Create a launch configuration named pc-worker-node that:

  • Runs the Amazon ECS-Optimized Amazon Linux 2 AMI.

  • Uses the ecsInstanceRole IAM role.

  • Runs a user data script that joins the pc-ecs-cluster. Defender itself is deployed to these nodes later by the daemon service.

  1. Go to Services > Compute > EC2.

  2. In the left menu, click Auto Scaling > Launch Configurations.

  3. Click Create Launch Configuration

  4. In Name, enter a name for your launch configuration, such as pc-worker-node.

  5. In Amazon machine image, select Amazon ECS-Optimized Amazon Linux 2 AMI.

    You can get a complete list of per-region Amazon ECS-optimized AMIs from here.

  6. Choose an instance type, such as t2.medium.

  7. Under Additional configuration:

    1. In IAM instance profile, select ecsInstanceRole.

    2. Under User data, select Text, and paste the following code snippet:

      #!/bin/bash
      echo ECS_CLUSTER=pc-ecs-cluster >> /etc/ecs/ecs.config

      Where:

      • ECS_CLUSTER must match your cluster name. If you’ve named your cluster something other than pc-ecs-cluster, then modify your user data script accordingly.

    3. (Optional) In IP Address Type, select Assign a public IP address to every instance.

      With this option, you can easily SSH to this instance to troubleshoot issues.

  8. Under Security groups:

    1. Select Select an existing security group.

    2. Select pc-security-group.

  9. Under Key pair (login), select an existing key pair, or create a new key pair so that you can access your instances.

  10. Click Create launch configuration.

8.2. Create an auto scaling group for worker nodes

Launch two worker nodes into your cluster.

  1. Go to Services > Compute > EC2.

  2. In the left menu, click Auto Scaling > Auto Scaling Groups.

  3. Click Create an Auto Scaling group.

  4. In Choose launch template or configuration:

    1. In Auto Scaling group Name, enter pc-worker-autoscaling.

    2. In Launch template, click Switch to launch configuration.

    3. Select pc-worker-node.

    4. Click Next.

  5. Under Configure settings:

    1. In VPC, select your default VPC.

    2. In Subnet, select a public subnet, such as 172.31.0.0/20.

    3. Click Next.

  6. In Configure advanced options, accept the defaults, and click Next.

  7. In Configure group size and scaling policies:

    1. Set Desired capacity to 2.

    2. Leave Minimum capacity at 1.

    3. Set Maximum capacity to 2.

    4. Click Skip to review.

  8. Review the configuration and click Create Auto Scaling Group.

    After the auto scaling group spins up (it will take some time), validate that your cluster has three container instances.

    1. Go to Services > Containers > Elastic Container Service.

    2. The count for Container instances in your cluster should now be a total of three.

8.3. Create a Prisma Cloud Defender task definition

Generate a task definition for Defender in Prisma Cloud Console.

  1. Log into Prisma Cloud Compute Console.

  2. Go to Manage > Defenders > Deploy > Defenders.

  3. In Deployment method, select Orchestrator.

  4. For orchestrator type, select ECS.

  5. For the name that Defender uses to connect to Console, select the DNS name of the load balancer that sits in front of Console.

  6. In Specify a cluster name, leave the field blank.

    Console will automatically retrieve the cluster name from AWS. Only enter a value if you want to override the cluster name assigned in AWS.

  7. In Specify ECS task name, leave the field blank.

    By default, the task name is pc-defender.

  8. Click Download to download the task definition.

  9. Log into AWS.

  10. Go to Services > Containers > Elastic Container Service.

  11. In the left menu, click Task Definitions.

  12. Click Create new Task Definition.

  13. In Step 1: Select launch type compatibility, select EC2, then click Next step.

  14. In Step 2: Configure task and container definitions, scroll to the bottom of the page and click Configure via JSON.

  15. Delete the contents of the window, and replace it with the Prisma Cloud Defender task definition you just generated.

  16. Click Save.

  17. (Optional) Change the name of the task definition before creating it. The default name is pc-defender.

  18. Click Create.

8.4. Start the Prisma Cloud Defender service

Create the Defender service using the task definition. With Daemon scheduling, ECS schedules one Defender per node.

  1. Go to Services > Containers > Elastic Container Service.

  2. In the left menu, click Clusters.

  3. Click on your cluster.

  4. In the Services tab, click Create.

  5. In Step 1: Configure service:

    1. For Launch type, select EC2.

    2. For Task Definition, select pc-defender.

    3. In Service Name, enter pc-defender.

    4. In Service Type, select Daemon.

    5. Click Next Step.

  6. In Step 2: Configure network, accept the defaults, and click Next step.

  7. In Step 3: Set Auto Scaling, accept the defaults, and click Next step.

  8. In Step 4: Review, click Create Service.

  9. Click View Service.

  10. Verify that you have Defenders running on each node in your ECS cluster.

    1. Go to your Prisma Cloud Console and view the list of Defenders in Manage > Defenders > Manage. There should be a total of three Defenders, one for each EC2 instance in the cluster.
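For reference, a CLI sketch of the same service. With the DAEMON scheduling strategy, ECS places exactly one Defender task on every container instance in the cluster, so no desired count is specified:

```shell
# Create the Defender service as a daemon across the cluster.
aws ecs create-service \
    --cluster pc-ecs-cluster \
    --service-name pc-defender \
    --task-definition pc-defender \
    --launch-type EC2 \
    --scheduling-strategy DAEMON
```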

9. Using a private registry

For maximum control over your environment, you might want to store the Console container image in your own private registry, and then install Prisma Cloud from your private registry. When the Console service is started, ECS retrieves the image from your registry. This procedure shows you how to push the Console container image to Amazon’s Elastic Container Registry (ECR).

Prerequisites:

  • AWS CLI is installed on your machine. It is required to push the Console image to your registry.

  1. Go to the directory where you unpacked the Prisma Cloud release tarball.

    $ cd prisma_cloud_compute_edition/
  2. Load the Console image.

    $ docker load < ./twistlock_console.tar.gz
  3. Go to Services > Containers > Elastic Container Service.

  4. In the left menu, click Repositories.

  5. Click Create repository.

  6. Follow the AWS instructions for logging in to the registry, tagging the Console image, and pushing it to your repo.

    Be sure to update your Console task definition so that the value for image points to your private registry.
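The login, tag, and push sequence can be sketched as follows. <ACCOUNT_ID>, <REGION>, and <VERSION> are placeholders, the repository name twistlock/console is an assumption (use the repository you created above), and the local image name is whatever `docker images` shows after the load:

```shell
# Registry and repository for the Console image (placeholders).
REPO=<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/twistlock/console

# Authenticate Docker to ECR; ${REPO%%/*} strips the repository path,
# leaving just the registry host.
aws ecr get-login-password --region <REGION> | \
    docker login --username AWS --password-stdin "${REPO%%/*}"

# Tag the locally loaded Console image and push it to ECR.
docker tag twistlock/private:console_<VERSION> "$REPO:console_<VERSION>"
docker push "$REPO:console_<VERSION>"
```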