Amazon EKS
Amazon EKS is a managed Kubernetes service that enables the execution of containerized workloads in the AWS cloud at scale.
Seqera Platform offers native support for Amazon EKS clusters to streamline the deployment of Nextflow pipelines.
Requirements
You must have an EKS cluster up and running. Follow the cluster preparation instructions to create the resources required by your Seqera instance. In addition to the generic Kubernetes instructions, you must make a number of EKS-specific modifications.
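Before continuing, you can optionally confirm that the cluster is reachable with your AWS credentials by updating your kubeconfig and listing the nodes. This check assumes the AWS CLI and kubectl are installed locally; the region and cluster name are placeholders for your own values.
aws eks update-kubeconfig --region <YOUR-REGION> --name <YOUR-CLUSTER-NAME>
kubectl get nodes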
Service account role
Assign a service account role to the AWS IAM user that Seqera uses to access the EKS cluster:
- Modify the EKS auth configuration:
kubectl edit configmap -n kube-system aws-auth
- In the editor that opens, add this entry (a complete example ConfigMap is sketched after this list):
mapUsers: |
  - userarn: <AWS USER ARN>
    username: tower-launcher-user
    groups:
      - tower-launcher-role
- Retrieve your user ARN from the AWS IAM console, or with the AWS CLI:
aws sts get-caller-identity
The same user must be used when specifying the AWS credentials in the Seqera compute environment configuration.
- The AWS user must have this IAM policy applied.
See the AWS documentation for more details.
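For orientation, the sketch below shows where the mapUsers entry sits in a typical aws-auth ConfigMap. The account ID, IAM user, and node instance role names are illustrative placeholders, not values prescribed by this guide; your ConfigMap already contains a mapRoles section managed by EKS, which should be left unchanged.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # Existing node role mappings managed by EKS; leave these unchanged.
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/<YOUR-NODE-INSTANCE-ROLE>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  # Entry added for the Seqera launcher user; use the ARN returned by `aws sts get-caller-identity`.
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/<YOUR-SEQERA-IAM-USER>
      username: tower-launcher-user
      groups:
        - tower-launcher-role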
Seqera compute environment
Your Seqera compute environment uses resources that you may be charged for in your AWS account. See Cloud costs for guidelines to manage cloud resources effectively and prevent unexpected costs.
After you have prepared your Kubernetes cluster and assigned a service account role to your Seqera IAM user, create a Seqera EKS compute environment:
- In a workspace, select Compute environments > New environment.
- Enter a descriptive name for this environment, e.g., Amazon EKS (eu-west-1).
- From the Provider drop-down menu, select Amazon EKS.
- Under Storage, select either Fusion storage (recommended) or Legacy storage. The Fusion v2 virtual distributed file system allows access to your AWS S3-hosted data (s3:// URLs). This eliminates the need to configure a shared file system in your Kubernetes cluster. See Fusion v2 below.
- From the Credentials drop-down menu, select existing AWS credentials, or select + to add new credentials. If you choose to use existing credentials, skip to the Region selection step below.
The user must have the IAM permissions required to describe and list EKS clusters, per service account role requirements in the previous section.
- Enter a name, e.g., EKS Credentials.
- Add the IAM user Access key and Secret key. This is the IAM user with the service account role detailed in the previous section.
- (Optional) Under Assume role, specify the IAM role to be assumed by the Seqera IAM user to access the compute environment AWS resources.
When using AWS keys without an assumed role, the associated AWS user account must have Seqera Launch and Forge permissions. When an assumed role is provided, the keys are only used to retrieve temporary credentials impersonating the role specified. In this case, Seqera Launch and Forge permissions must be granted to the role instead of the user account.
- Select a Region, e.g., eu-west-1 - Europe (Ireland).
- Select a Cluster name from the list of available EKS clusters in the selected region.
- Specify the Namespace created in the cluster preparation instructions, which is tower-nf by default.
- Specify the Head service account created in the cluster preparation instructions, which is tower-launcher-sa by default.
If you enable Fusion v2 (Fusion storage in step 4 above), the head service account must have access to the S3 storage bucket specified as your work directory.
- Specify the Storage claim created in the cluster preparation instructions, which serves as a scratch filesystem for Nextflow pipelines. The storage claim is called tower-scratch in the provided examples.
The Storage claim isn't needed when Fusion v2 is enabled.
- Apply Resource labels to the cloud resources consumed by this compute environment. Workspace default resource labels are prefilled.
- Expand Staging options to include:
- Optional pre- or post-run Bash scripts that execute before or after the Nextflow pipeline execution in your environment.
- Global Nextflow configuration settings for all pipeline runs launched with this compute environment. Values defined here are pre-filled in the Nextflow config file field in the pipeline launch form. These values can be overridden during pipeline launch.
Configuration settings in this field override the same values in the pipeline repository nextflow.config file. See Nextflow config file for more information on configuration priority. A minimal configuration example is sketched after this list.
- Specify custom Environment variables for the Head job and/or Compute jobs.
- Configure any advanced options described in the next section, as needed.
- Select Create to finalize the compute environment setup.
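The Global Nextflow configuration field in the Staging options step accepts any standard Nextflow configuration. The snippet below is a minimal, illustrative example; the retry settings shown are assumptions, not values required by Seqera or EKS.
// Illustrative global Nextflow configuration (example values only)
process {
    errorStrategy = 'retry'   // retry failed tasks automatically
    maxRetries    = 2         // allow up to two retries per task
}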
Advanced options
Seqera Platform compute environments for EKS include advanced options for storage and work directory paths, resource allocation, and pod customization.
- The Storage mount path is the file system path where the Storage claim is mounted (default: /scratch).
- The Work directory is the file system path used as a working directory by Nextflow pipelines. This must be the storage mount path (default) or a subdirectory of it.
- The Compute service account is the service account used by Nextflow to submit tasks (default: the default account in the given namespace).
- The Pod cleanup policy determines when to delete terminated pods.
- Use Custom head pod specs to provide custom options for the Nextflow workflow pod (nodeSelector, affinity, etc.). For example:
spec:
  nodeSelector:
    disktype: ssd
- Use Custom service pod specs to provide custom options for the compute environment pod. See the nodeSelector example above, or the affinity sketch after this list.
- Use Head Job CPUs and Head Job memory to specify the hardware resources allocated for the Nextflow workflow pod.
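For a slightly richer custom pod spec, the sketch below uses node affinity instead of a plain nodeSelector. It assumes the field accepts standard Kubernetes pod spec fragments, as the nodeSelector example above suggests; the disktype label is purely illustrative.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd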
See Launch pipelines to start executing workflows in your EKS compute environment.
Fusion v2
To use Fusion v2 in your Seqera EKS compute environment:
- Use Seqera Platform version 23.1 or later.
- Use an S3 bucket as the pipeline work directory.
- Both the head service and compute service accounts must have access to the S3 bucket specified as the work directory.
Configure IAM to use Fusion v2
- Allow the IAM role access to your S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<YOUR-BUCKET>"]
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectTagging",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::<YOUR-BUCKET>/*"],
      "Effect": "Allow"
    }
  ]
}
Replace <YOUR-BUCKET> with a bucket name of your choice.
- The IAM role must have a trust relationship with your Kubernetes service account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<YOUR-ACCOUNT-ID>:oidc-provider/oidc.eks.<YOUR-REGION>.amazonaws.com/id/<YOUR-CLUSTER-ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<YOUR-REGION>.amazonaws.com/id/<YOUR-CLUSTER-ID>:aud": "sts.amazonaws.com",
          "oidc.eks.<YOUR-REGION>.amazonaws.com/id/<YOUR-CLUSTER-ID>:sub": "system:serviceaccount:<YOUR-EKS-NAMESPACE>:<YOUR-EKS-SERVICE-ACCOUNT>"
        }
      }
    }
  ]
}
Replace <YOUR-ACCOUNT-ID>, <YOUR-REGION>, <YOUR-CLUSTER-ID>, <YOUR-EKS-NAMESPACE>, and <YOUR-EKS-SERVICE-ACCOUNT> with your corresponding values.
- Annotate the Kubernetes service account with the IAM role:
kubectl annotate serviceaccount <YOUR-EKS-SERVICE-ACCOUNT> --namespace <YOUR-EKS-NAMESPACE> eks.amazonaws.com/role-arn=arn:aws:iam::<YOUR-ACCOUNT-ID>:role/<YOUR-IAM-ROLE>
Replace <YOUR-EKS-SERVICE-ACCOUNT>, <YOUR-EKS-NAMESPACE>, <YOUR-ACCOUNT-ID>, and <YOUR-IAM-ROLE> with your corresponding values. A command-line sketch of these IAM steps follows.
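As a rough outline of how these steps can be performed from the command line, the sketch below assumes the two JSON documents above are saved locally as trust-policy.json and bucket-policy.json, and uses tower-fusion-role and tower-fusion-s3-access as example names of your own choosing. The AWS console or infrastructure-as-code tooling are equally valid alternatives.
# Retrieve the cluster's OIDC issuer URL; the trailing ID is the <YOUR-CLUSTER-ID> value
aws eks describe-cluster --name <YOUR-CLUSTER-NAME> --region <YOUR-REGION> \
  --query "cluster.identity.oidc.issuer" --output text

# Create the IAM role with the trust relationship, then attach the S3 access policy
aws iam create-role --role-name tower-fusion-role \
  --assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name tower-fusion-role \
  --policy-name tower-fusion-s3-access \
  --policy-document file://bucket-policy.json

# After running the kubectl annotate command from the last step, confirm the annotation
kubectl describe serviceaccount <YOUR-EKS-SERVICE-ACCOUNT> --namespace <YOUR-EKS-NAMESPACE>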
See the AWS documentation for further details.