📄️ Myth: Kubernetes Scheduler Considers Resource Limits for Scheduling
During a design review in a large-scale cluster, I once noticed a deployment configured with extremely low requests but very high limits. Engineers expected the scheduler to recognize the pod's potential to burst and avoid placing it on already constrained nodes. Instead, the pod was consistently scheduled onto small nodes, causing CPU throttling and memory contention. This sparked a debate in which several team members argued that the scheduler must be considering limits, actual usage, or both. The incident highlighted a widespread misunderstanding about how Kubernetes makes placement decisions.
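For illustration, here is a minimal sketch of the kind of manifest involved (the names, image, and values are hypothetical, not from the original incident): the scheduler fits the pod onto a node using only the requests, while the much larger limits are enforced at runtime and play no part in placement.

```yaml
# Hypothetical burstable workload: the scheduler places this pod
# based only on its small requests; the large limits never factor
# into node selection.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bursty-app            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bursty-app
  template:
    metadata:
      labels:
        app: bursty-app
    spec:
      containers:
      - name: app
        image: nginx          # placeholder image
        resources:
          requests:           # scheduler fits the pod using these values
            cpu: "50m"
            memory: "64Mi"
          limits:             # enforced at runtime (throttling / OOM kill),
            cpu: "4"          # ignored for scheduling
            memory: "8Gi"
```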
📄️ Myth: A Higher-Priority Pod Will Always Preempt a Lower-Priority Pod
I once reviewed an incident where a team created a high-priority pod expecting it to preempt other pods on the node. When the cluster stayed fully packed and the new pod remained unscheduled, they concluded “preemption is broken.”
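A minimal sketch of why "higher priority" alone is not enough (names are hypothetical): preemption is governed by the class's preemptionPolicy, and even the default PreemptLowerPriority only evicts victims when doing so would actually make the pending pod schedulable.

```yaml
# Hypothetical PriorityClass and pod. High priority alone does not
# guarantee preemption: with preemptionPolicy: Never the pod queues
# ahead of lower-priority pods but never evicts them, and even the
# default PreemptLowerPriority preempts only if removing victims
# would actually let this pod fit on the node.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority         # hypothetical name
value: 1000000
preemptionPolicy: Never       # opts this class out of preempting others
globalDefault: false
---
apiVersion: v1
kind: Pod
metadata:
  name: important-job         # hypothetical name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: busybox            # placeholder image
    command: ["sleep", "3600"]
```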
📄️ Myth: Kubernetes Has a Concept of Node Anti-Affinity
During a platform engineering interview, a candidate was asked:
📄️ Myth: Pod Memory Requests Are Only Used for Scheduling
During an incident review, a team noticed that several Pods were evicted even though they were well below their configured memory limits.
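A minimal sketch of the configuration at play (names and values are hypothetical): under node memory pressure, the kubelet ranks pods for eviction by how far their actual usage exceeds their requests, so a pod can be evicted long before it approaches its limit.

```yaml
# Hypothetical Burstable pod. Under node memory pressure, the kubelet
# ranks eviction candidates by usage relative to requests, not limits.
# A pod using 600Mi against a 128Mi request can be evicted well before
# it ever reaches its 1Gi limit.
apiVersion: v1
kind: Pod
metadata:
  name: cache-worker          # hypothetical name
spec:
  containers:
  - name: app
    image: redis              # placeholder image
    resources:
      requests:
        memory: "128Mi"       # eviction ranking compares actual usage to this
      limits:
        memory: "1Gi"         # only bounds the container (OOM kill at 1Gi);
                              # irrelevant to kubelet eviction ordering
```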

