Multiple NodePools in Karpenter let us use different node pools for different workloads and applications, depending on their needs. With this capability we can choose the most suitable node pool for each workload based on performance, cost, and other factors.
Typical use cases for multiple NodePools include separating workloads by team, or by node requirements such as instance family, architecture, or capacity type.
Setup

First, let's delete the resources created earlier.
kubectl delete deployment inflate
kubectl delete nodepools.karpenter.sh default
kubectl delete ec2nodeclasses.karpenter.k8s.aws default
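As an optional sanity check before creating the new resources, we can confirm that nothing is left over (the command below should report that no resources were found):
kubectl get nodepools.karpenter.sh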
Deploy the NodePool and EC2NodeClass:
mkdir -p ~/environment/karpenter
cd ~/environment/karpenter
cat << EoF > karpenter_multi_nodepool_node_class.yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    metadata:
      labels:
        eks-immersion-team: default
    spec:
      nodeClassRef:
        name: default
      requirements:
        - key: "karpenter.k8s.aws/instance-category"
          operator: In
          values: ["c", "m", "r"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["amd64"]
        - key: "karpenter.sh/capacity-type" # If not included, the webhook for the AWS cloud provider will default to on-demand
          operator: In
          values: ["on-demand"]
      kubelet:
        cpuCFSQuota: true
  disruption:
    consolidateAfter: 30s
    consolidationPolicy: WhenEmpty
    expireAfter: Never
  limits:
    cpu: "10"
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        alpha.eksctl.io/cluster-name: $CLUSTER_NAME
  subnetSelectorTerms:
    - tags:
        alpha.eksctl.io/cluster-name: $CLUSTER_NAME
  tags:
    intent: apps
    managed-by: karpenter
    eks-immersion-team: my-team
EoF
kubectl create -f karpenter_multi_nodepool_node_class.yaml
Output:
nodepool.karpenter.sh/default created
ec2nodeclass.karpenter.k8s.aws/default created
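Optionally, we can also confirm that the EC2NodeClass was created:
kubectl get ec2nodeclasses.karpenter.k8s.aws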
Let's create our first custom NodePool, which will be deployed alongside the default NodePool. The custom NodePool will only launch nodes that satisfy specific workload requirements (for example, "t"-family instance types).
First, let's see how many NodePools we currently have in the cluster:
kubectl get nodepools.karpenter.sh
We should see the following output:
NAME      NODECLASS
default   default
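To see the constraints a NodePool enforces (for example, the "c", "m", and "r" instance categories we configured above), we can optionally describe it:
kubectl describe nodepools.karpenter.sh default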
Now, let's deploy a new custom NodePool named "team-nodepool". We'll also create a new EC2NodeClass, likewise named "team-nodepool":
mkdir -p ~/environment/karpenter
cd ~/environment/karpenter
cat <<EoF> team-nodepool.yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: team-nodepool
spec:
  disruption:
    consolidateAfter: 30s
    consolidationPolicy: WhenEmpty
    expireAfter: Never
  limits:
    cpu: "50"
  template:
    metadata:
      labels:
        eks-immersion-team: team-nodepool
    spec:
      nodeClassRef:
        name: team-nodepool
      requirements:
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values:
            - t
        - key: kubernetes.io/arch
          operator: In
          values:
            - amd64
        - key: kubernetes.io/os
          operator: In
          values:
            - linux
        - key: karpenter.sh/capacity-type
          operator: In
          values:
            - on-demand
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: team-nodepool
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        alpha.eksctl.io/cluster-name: $CLUSTER_NAME
  subnetSelectorTerms:
    - tags:
        alpha.eksctl.io/cluster-name: $CLUSTER_NAME
  tags:
    intent: apps
    managed-by: karpenter
EoF
kubectl apply -f team-nodepool.yaml
Output:
nodepool.karpenter.sh/team-nodepool created
ec2nodeclass.karpenter.k8s.aws/team-nodepool created
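We can also optionally list the EC2NodeClasses; both default and team-nodepool should now appear:
kubectl get ec2nodeclasses.karpenter.k8s.aws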
Check how many NodePools are now in the cluster. You'll see that a second NodePool named "team-nodepool" has been created:
kubectl get nodepools.karpenter.sh
Output:
NAME            NODECLASS
default         default
team-nodepool   team-nodepool
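To compare the two pools, we can optionally print the team-nodepool spec and review its requirements (the "t" instance category, amd64, linux, on-demand):
kubectl get nodepools.karpenter.sh team-nodepool -o yaml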
Now, let's test our new NodePool by deploying a workload with specific requirements. The Deployment below uses a required node affinity to request a "t"-family instance type (t3.micro):
cd ~/environment/karpenter
cat <<EoF> team-workload.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node.kubernetes.io/instance-type
                    operator: In
                    values:
                      - t3.micro
EoF
kubectl apply -f team-workload.yaml
Output:
deployment.apps/inflate created
Karpenter will launch a new node of a "t" instance type, which is compatible with the workload above.
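Since the team-nodepool template labels its nodes with eks-immersion-team: team-nodepool, we can optionally verify that the new node came from that pool and check its instance type (-L adds the label value as an extra column):
kubectl get nodes -l eks-immersion-team=team-nodepool -L node.kubernetes.io/instance-type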
Meanwhile, check the Karpenter logs to see the scaling action:
kubectl -n karpenter logs -l app.kubernetes.io/name=karpenter | grep provisioner
The log output should show Karpenter provisioning a new node for the pending pod.
Now, check that our inflate application is up and running:
kubectl get pods
Output:
NAME                       READY   STATUS    RESTARTS   AGE
inflate-594c784f6c-6wndc   1/1     Running   0          3m6s
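To confirm the pod landed on the new node, an optional wide listing shows the node each pod is running on:
kubectl get pods -l app=inflate -o wide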
Let's delete our "inflate" Deployment:
kubectl delete deployment inflate
Delete the NodePool and the EC2NodeClass:
kubectl delete nodepools.karpenter.sh team-nodepool
kubectl delete ec2nodeclasses.karpenter.k8s.aws team-nodepool
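Optionally, verify the cleanup; only the default NodePool should remain:
kubectl get nodepools.karpenter.sh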
With team-nodepool, Karpenter added a new node that satisfied the Deployment's node affinity requirement. Once that node was running, our workload was scheduled onto it. Although the cluster had both the "default" and "team-nodepool" NodePools, Karpenter chose "team-nodepool" because its requirements matched the workload.