In a modern cluster, the hardest problem isn't running workloads; it's sharing resources fairly. Red Hat OpenShift clusters are seeing a surge of AI-accelerated workloads, from GPU-intensive training jobs to large batches of inference requests. At the same time, other tenants still need consistent throughput for their everyday CI/CD pipelines and data processing tasks. The result is a constant battle for resources: some jobs wait too long, others consume more than their fair share, and administrators are left fighting bottlenecks.

This is exactly the challenge that Kueue, a Kubernetes-native job queueing and scheduling framework, was built to solve. It introduces structured queues, priorities, and quota enforcement to bring fairness and predictability back to scheduling. With Red Hat Build of Kueue, these upstream innovations are packaged, hardened, and delivered in Red Hat OpenShift as a supported, enterprise-ready solution, enabling clusters to run efficiently while giving every workload a fair chance.

Topology-aware scheduling

Once workloads are queued fairly, the next challenge is where they actually land. For distributed jobs, placement can matter as much as allocation: pods that constantly exchange data perform very differently depending on whether they're co-located or scattered across zones.

This is where topology-aware scheduling (TAS) comes in. Rather than treating the cluster as a flat pool of machines, TAS considers the physical and logical layout of the infrastructure (racks, blocks, zones) and makes scheduling decisions that optimize communication and efficiency. Workloads that talk a lot can be placed closer together, multi-pod jobs can start in sync through gang scheduling, and fairness across tenants is preserved even as locality is optimized. In practice, this means model-training jobs finish sooner, build pipelines run with fewer bottlenecks, and overall cluster utilization improves.
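As a rough sketch of how queueing and topology awareness fit together, the manifests below define a topology, attach it to a resource flavor, and submit a job that asks for rack-level co-location. This assumes the alpha TAS API in upstream Kueue; the label keys (`cloud.example.com/...`), queue name, and image are hypothetical, and field names may change while the feature is alpha.

```yaml
# Illustrative sketch only: TAS is alpha and its API may change.
apiVersion: kueue.x-k8s.io/v1alpha1
kind: Topology
metadata:
  name: default-topology
spec:
  levels:
  - nodeLabel: "cloud.example.com/block"   # coarsest level
  - nodeLabel: "cloud.example.com/rack"    # finest level
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: tas-flavor
spec:
  nodeLabels:
    cloud.example.com/node-group: "gpu"    # hypothetical node label
  topologyName: default-topology           # links the flavor to the topology
---
apiVersion: batch/v1
kind: Job
metadata:
  name: training-job
  labels:
    kueue.x-k8s.io/queue-name: team-a-queue  # submit through a LocalQueue
spec:
  parallelism: 4
  completions: 4
  template:
    metadata:
      annotations:
        # Ask Kueue to keep all pods of the job within one rack if possible
        kueue.x-k8s.io/podset-preferred-topology: "cloud.example.com/rack"
    spec:
      containers:
      - name: worker
        image: registry.example.com/trainer:latest
        resources:
          requests:
            nvidia.com/gpu: 1
      restartPolicy: Never
```

The idea is that the Topology object describes the hierarchy of the hardware, and the per-job annotation expresses how tightly the pods should be packed within that hierarchy; Kueue then admits the whole pod set together (gang-style) at a placement that satisfies the request.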
It's scheduling with awareness: placement itself becomes a performance feature. Today TAS is in alpha, but it's already pointing to a future where Red Hat OpenShift clusters squeeze more efficiency out of the same hardware simply by being smarter about where workloads run.

Kueue with dynamic resource allocation

Placement is only one side of the story. Increasingly, workloads also depend on specialized hardware, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), smart network interface cards (SmartNICs), and even GPU slices like NVIDIA multi-instance GPU (MIG) partitions. These resources don't behave like generic CPUs: they need precise allocation, careful sharing, and guardrails to prevent monopolization.

Kueue's integration with the Kubernetes dynamic resource allocation (DRA) API addresses this gap. DRA provides a consistent framework for workloads to request and bind to devices, while Kueue extends its fairness and queueing model to manage those requests. Instead of static assignments or manual juggling, specialized hardware can now be orchestrated as cleanly as CPU or memory.

For Red Hat OpenShift users, this means accelerators stop being exceptions and become core resources in the scheduling process. Tenants can trust that their jobs get the devices they request, no one team can monopolize scarce hardware, and administrators gain predictability across the cluster. Like TAS, DRA support in Kueue is alpha and feature-gated.

Why it matters

Features like TAS and DRA are still in their early stages, but that's precisely what makes them interesting. These upstream efforts show where scheduling in Kubernetes, and by extension Red Hat OpenShift, is headed: clusters that are aware of topology, hardware, and fairness across tenants.
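To make the DRA model described above concrete, here is a hedged sketch of what a device request might look like: a DeviceClass selecting a driver, a ResourceClaimTemplate requesting a device from that class, and a queued job consuming the claim. The driver name, class name, queue name, and image are hypothetical, and because both DRA and Kueue's DRA support are still alpha/feature-gated, the API versions and fields shown are assumptions that may shift between releases.

```yaml
# Illustrative sketch only: DRA APIs and Kueue's DRA integration are not stable.
apiVersion: resource.k8s.io/v1beta1
kind: DeviceClass
metadata:
  name: example-gpu
spec:
  selectors:
  - cel:
      expression: device.driver == "gpu.example.com"  # hypothetical driver
---
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: gpu-claim-template
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: example-gpu
---
apiVersion: batch/v1
kind: Job
metadata:
  name: inference-job
  labels:
    kueue.x-k8s.io/queue-name: team-b-queue  # Kueue queues the claim-bearing job
spec:
  template:
    spec:
      resourceClaims:
      - name: gpu
        resourceClaimTemplateName: gpu-claim-template
      containers:
      - name: worker
        image: registry.example.com/inference:latest
        resources:
          claims:
          - name: gpu   # container consumes the allocated device
      restartPolicy: Never
```

The point of the pattern is that the device request is expressed declaratively through the claim rather than as an opaque extended resource, which is what lets Kueue apply the same quota and fairness accounting to accelerators that it already applies to CPU and memory.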
As these capabilities mature upstream, they will continue to flow downstream through Red Hat Build of Kueue, where they are tested, hardened, and integrated into OpenShift's enterprise-grade ecosystem. By keeping pace with Kueue's development today, OpenShift users can stay ahead of the curve and prepare for the next generation of intelligent workload management.

Next steps

To explore how these capabilities evolve downstream in Red Hat Build of Kueue, check out the following resources:

Product trial: Red Hat OpenShift Container Platform