

Integrating with the AWS Cloud Provider for Kubernetes

Audience: System Administrators

Content Summary: The Immuta Helm installation integrates well with Kubernetes on AWS. This guide walks through the AWS-specific components that can be set up, such as backups and load balancing. Prerequisite: an Amazon EKS cluster with a recommended minimum of 3 m5.xlarge worker nodes.

If using a Kubernetes namespace...

If deploying Immuta into a Kubernetes namespace other than the default, you must include the --namespace option in all helm and kubectl commands provided throughout this section.
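
For example, assuming a hypothetical namespace named immuta, the commands would take the following form (the release and chart names shown are placeholders for those used during your Helm installation):

# Pass the namespace to every helm command.
helm upgrade <release name> <chart> --values immuta-values.yaml --namespace immuta

# Pass the namespace to every kubectl command as well.
kubectl get pods --namespace immuta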

Deployment

When deploying Immuta on a Kubernetes cluster using the AWS cloud provider, you can mostly follow the Kubernetes Helm installation guide.

The only deviations from that guide are in the custom values file(s) you create: incorporate any changes referenced throughout this guide, particularly those in the Backups and Load Balancing sections below.

Backups

On AWS, we recommend backing up to S3. To begin, you will need an IAM role that Immuta can use to access the S3 bucket from your Kubernetes cluster. There are four options for role assumption:

  1. IAM Roles for Service Accounts (IRSA): recommended for EKS.
  2. Kube2iam or kiam: recommended if you have other workloads running in the cluster.
  3. Instance profile: recommended if only Immuta is running in the cluster.
  4. AWS secret access keys: simplest setup if access keys and secrets are allowed in your environment.

Necessary IAM Permissions

The role you choose above must have at least the following IAM permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ImmutaS3Backups00",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket>",
                "arn:aws:s3:::<bucket>/<prefix>/database/*",
                "arn:aws:s3:::<bucket>/<prefix>/query-engine/*"
            ]
        }
    ]
}
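
As a rough sketch, assuming the policy document above is saved locally as immuta-backup-policy.json and that the policy and role names below are placeholders, the policy can be created and attached to your chosen role with the AWS CLI:

# Create a managed policy from the document above.
aws iam create-policy \
  --policy-name ImmutaS3Backups \
  --policy-document file://immuta-backup-policy.json

# Attach the policy to the role that Immuta will assume.
aws iam attach-role-policy \
  --role-name <role name> \
  --policy-arn arn:aws:iam::<account-id>:policy/ImmutaS3Backups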

Sample Helm Values

backup:
  enabled: true
  type: s3
  # Uncomment to enable restore
  # restore:
  #   enabled: true
  # Uncomment if using kube2iam or kiam
  # podAnnotations:
  #   iam.amazonaws.com/role: <ROLE NAME>
  # Uncomment if using EKS
  # serviceAccountAnnotations:
  #   eks.amazonaws.com/role-arn: <ROLE ARN>
  s3:
    bucket: <BUCKET NAME>
    # Uncomment if using a bucket prefix. Remember to reflect this prefix in your IAM Role Policy.
    # bucketPrefix: <BUCKET PREFIX>
    awsRegion: <REGION>
    # Uncomment if using AWS Access Keys
    # awsAccessKeyId:
    # awsSecretAccessKey:
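
Once the backup values are in place, apply them as part of the standard Helm installation; for example (release name, chart reference, and values file name are placeholders):

helm upgrade --install <release name> <chart> \
  --values immuta-values.yaml \
  --namespace <namespace>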

Load Balancing

The easiest way to expose your Immuta deployment running on Kubernetes with the AWS cloud provider is to set up nginx ingress with serviceType: LoadBalancer and let the chart handle creation of an ELB.

If you would like to use automatic ELB provisioning, you can use the following values:

nginxIngress:
  enabled: true
  controller:
    service:
      type: LoadBalancer
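
Once the chart has provisioned the ELB, its DNS name can be read from the controller Service; for example (the label selector matches the one used elsewhere in this guide):

# Print the hostname of the ELB created for the nginx ingress controller Service.
kubectl get svc \
  -l app=immuta,component=controller \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'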

You can then manually edit the ELB configuration in the AWS console to use ACM TLS certificates, ensuring your HTTPS traffic is secured by a trusted certificate. For instructions, see Amazon's guide on how to Configure an HTTPS Listener for Your Classic Load Balancer.

Another option is to set up nginx ingress with serviceType: NodePort and configure load balancers outside of the cluster.

For example,

nginxIngress:
  enabled: true
  controller:
    service:
      type: NodePort

To determine which node ports to configure the load balancer for, examine the Service configuration:

kubectl get svc \
  -l app=immuta,component=controller \
  -o go-template='{{range .items}}{{range .spec.ports}}{{ printf "%s: %v\n" .name .nodePort}}{{end}}{{end}}'

This prints each port name and its assigned node port. For example,

https: 32757
postgres: 31517
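
These node ports can then be registered with a load balancer managed outside the cluster. As one possible sketch (target group name, port, and VPC ID below are placeholders), an NLB target group for the HTTPS port could be created with the AWS CLI:

# Create a TCP target group pointing at the HTTPS node port reported above.
aws elbv2 create-target-group \
  --name immuta-https \
  --protocol TCP \
  --port 32757 \
  --vpc-id <vpc-id> \
  --target-type instance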

Maintenance

The Immuta deployment to EKS has a very low maintenance burden. Immuta recommends the following basic procedures for monitoring and periodic maintenance of your installation:

Failure Recovery Scenarios

Ensure that your Immuta deployment is taking regular backups to S3 per the backup instructions above.

Your Immuta deployment is highly available and resilient to failure. For some catastrophic failures, recovery from backup may be required. Below is a list of failure conditions and the steps necessary to ensure Immuta is operational.

  • Internal Immuta Service Failure: Because Immuta is running in a Kubernetes deployment, no action should be necessary. Should a failure occur that is not automatically resolved, follow Immuta backup restoration procedures.
  • EKS Cluster Failure: Should your EKS cluster experience a failure, simply create a new cluster and follow Immuta backup restoration procedures.
  • Availability Zone Failure: Because EKS, ELB, and the Immuta installation within EKS are designed to tolerate the failure of an availability zone, no steps are needed in this case.
  • Region Failure: To provide recovery capability in the unlikely event of an AWS Region failure, Immuta recommends periodically copying database backups into an S3 bucket in a different AWS region. Should you experience a region failure, simply create a new cluster in a working region and follow Immuta backup restoration procedures.
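
As a sketch of that cross-region copy (bucket names, prefix, and regions below are placeholders), the backups can be replicated on a schedule with the AWS CLI; S3 Cross-Region Replication on the backup bucket is another way to achieve the same goal:

# Copy Immuta backups from the primary backup bucket to a bucket in another region.
aws s3 sync \
  s3://<primary backup bucket>/<prefix> \
  s3://<secondary backup bucket>/<prefix> \
  --source-region us-east-1 \
  --region us-west-2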