Amazon Aurora PostgreSQL now supports integration with Kiro powers

<p>Today, AWS announces Amazon Aurora PostgreSQL-Compatible Edition integration with Kiro powers, enabling developers to build Aurora PostgreSQL-backed applications faster with AI agent-assisted development using Kiro. <a href="https://kiro.dev/powers/" target="_blank">Kiro powers</a> is a repository of curated and pre-packaged Model Context Protocol (MCP) servers, steering files, and hooks validated by Kiro partners to accelerate specialized software development and deployment use cases. The Kiro power for Aurora PostgreSQL packages the MCP server with targeted database development guidance, giving the Kiro agent instant expertise in Aurora PostgreSQL operations and schema design.<br /> <br /> The Kiro power for Aurora PostgreSQL bundles direct database connectivity through the Aurora PostgreSQL MCP server for data plane operations (queries, table creation, schema management) and control plane operations (cluster creation), along with a steering file containing Aurora PostgreSQL–specific best practices. When developers work on database tasks, the power dynamically loads relevant guidance – whether creating new Aurora clusters, designing schemas, or optimizing queries – so AI agents receive only the context needed for the specific task at hand.<br /> <br /> The Aurora PostgreSQL power is available within the <a href="https://kiro.dev/powers/#how-do-i-install-powers" target="_blank">Kiro IDE</a> and on the <a href="https://kiro.dev/powers/" target="_blank">Kiro powers webpage</a> for one-click installation and can create and manage Aurora PostgreSQL clusters in all AWS Regions. For more information about development use cases, read this <a href="https://aws.amazon.com/blogs/database/introducing-amazon-aurora-powers-for-kiro/" target="_blank">blog post</a>. To learn more about the Aurora PostgreSQL MCP server, visit our <a href="https://awslabs.github.io/mcp/servers/postgres-mcp-server" target="_blank">documentation</a>.<br /> <br /> Amazon Aurora is designed for unparalleled high performance and availability at global scale with full PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, and automated multi-Region replication. To get started with Amazon Aurora, visit our getting started page.</p>

Read article →

Amazon EC2 C7i instances are now available in the Asia Pacific (Hyderabad) Region

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Asia Pacific (Hyderabad) Region. These custom Intel processors are available only on AWS.<br /> <br /> C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad-serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators (Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology), which facilitate efficient offload and acceleration of data operations and optimize performance for workloads.<br /> <br /> C7i instances support new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. Customers can attach up to 128 EBS volumes to a C7i instance vs. up to 28 EBS volumes to a C6i instance. This allows you to process larger amounts of data, scale workloads, and improve performance over C6i instances.<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/ec2/instance-types/c7i/">Amazon EC2 C7i Instances</a>. To get started, see the <a href="https://console.aws.amazon.com/">AWS Management Console</a>.</p>

Read article →

Amazon Aurora DSQL now supports cluster creation in seconds

<p>Amazon Aurora DSQL now supports faster cluster creation, reducing setup time from minutes to seconds.<br /> <br /> With cluster creation now in seconds, developers can instantly provision Aurora DSQL databases to rapidly prototype new ideas. Developers can use the integrated query editor in the AWS console to immediately start building without needing to configure external clients, or connect through the Aurora DSQL Model Context Protocol (MCP) server to enable AI-powered development tools. Whether prototyping or running production workloads, Aurora DSQL delivers virtually unlimited scalability, active-active high availability, zero infrastructure management, and pay-for-what-you-use pricing, ensuring your database effortlessly scales alongside your application needs.<br /> <br /> This enhancement is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">Regions where Aurora DSQL is offered</a>. Get started with Aurora DSQL for free with the <a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;all-free-tier.sort-order=asc&amp;awsf.Free%20Tier%20Types=*all&amp;awsf.Free%20Tier%20Categories=categories%23databases">AWS Free Tier</a>. To learn more, visit the Aurora DSQL <a href="https://aws.amazon.com/rds/aurora/dsql/">webpage</a> and <a href="https://docs.aws.amazon.com/aurora-dsql/latest/userguide/what-is-aurora-dsql.html">documentation</a>.</p>
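Because clusters now provision in seconds, creating one programmatically at the start of a prototyping session becomes practical. A minimal sketch using boto3, assuming a recent SDK version that includes the Aurora DSQL (`dsql`) client; the tag values are placeholders:

```python
import boto3

# Assumes a recent boto3 release that ships the Aurora DSQL ("dsql") client.
dsql = boto3.client("dsql", region_name="us-east-1")

# Create a cluster; with this launch the cluster is typically ready in
# seconds rather than minutes.
cluster = dsql.create_cluster(
    deletionProtectionEnabled=False,   # fine for throwaway prototypes
    tags={"project": "prototype"},     # placeholder tag
)
print("Cluster ARN:", cluster["arn"])

# Check status, then connect with any PostgreSQL client or the console
# query editor once the cluster reports ACTIVE.
status = dsql.get_cluster(identifier=cluster["identifier"])["status"]
print("Status:", status)
```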

Read article →

Amazon EC2 I7i instances now available in additional AWS regions

<p>Amazon Web Services (AWS) announces the availability of high-performance storage optimized Amazon EC2 I7i instances in the AWS Asia Pacific (Singapore), Asia Pacific (Jakarta), and Europe (Stockholm) Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances.<br /> <br /> I7i instances are ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size datasets (multi-TBs). I7i instances support the torn write prevention feature with up to 16KB block sizes, enabling customers to eliminate database performance bottlenecks.<br /> <br /> I7i instances are available in eleven sizes (nine virtual sizes up to 48xlarge and two bare metal sizes), delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth.<br /> To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/i7i/">I7i instances page</a>.</p>

Read article →

Amazon EC2 High Memory U7i instances now available in additional regions

<p>Amazon EC2 High Memory U7i instances with 24TB of memory (u7in-24tb.224xlarge) are now available in the AWS Europe (Frankfurt) Region, U7i instances with 16TB of memory (u7in-16tb.224xlarge) are now available in the AWS Asia Pacific (Mumbai) Region, and U7i instances with 6TB of memory (u7i-6tb.112xlarge) are now available in the AWS Europe (Paris) Region. U7i instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable processors (Sapphire Rapids). U7in-24tb instances offer 24TiB of DDR5 memory, U7in-16tb instances offer 16TiB of DDR5 memory, and U7i-6tb instances offer 6TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.</p> <p>U7i-6tb instances offer 448 vCPUs, while U7in-16tb and U7in-24tb instances offer 896 vCPUs. All three sizes support up to 100Gbps of Elastic Block Store (EBS) bandwidth for faster data loading and backups and support ENA Express; U7i-6tb instances deliver up to 100Gbps of network bandwidth, and U7in-16tb and U7in-24tb instances deliver up to 200Gbps. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.</p> <p>To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/">High Memory instances page</a>.</p>

Read article →

Amazon Cognito identity pools now support private connectivity with AWS PrivateLink

<p>Amazon Cognito identity pools now support AWS PrivateLink, enabling you to securely exchange federated identities for AWS credentials through private connectivity between your virtual private cloud (VPC) and Cognito. This eliminates the need to route authentication traffic over the public internet, providing enhanced security for your workloads. Identity pools map authenticated and guest identities to your AWS Identity and Access Management (IAM) roles and provide temporary AWS credentials; with this new feature, they can do so over a secure, private connection.<br /> <br /> You can use PrivateLink connections in all AWS Regions where Amazon Cognito identity pools are available, except the AWS China (Beijing) Region, operated by Sinnet, and the AWS GovCloud (US) Regions. Creating VPC endpoints on AWS PrivateLink will incur additional charges; refer to the <a href="https://aws.amazon.com/privatelink/pricing/" target="_blank">AWS PrivateLink pricing page</a> for details. You can get started by creating an AWS PrivateLink VPC interface endpoint for Amazon Cognito identity pools using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (CDK), or AWS CloudFormation. To learn more, refer to the documentation on <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html" target="_blank">creating a VPC interface endpoint</a> and <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/vpc-interface-endpoints.html" target="_blank">Amazon Cognito’s developer guide</a>.</p>
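Creating the interface endpoint works the same way as for other PrivateLink-enabled services. A minimal boto3 sketch; the service-name lookup avoids hard-coding the Cognito identity service name, and the VPC, subnet, and security-group IDs are placeholders, not values from this announcement:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Discover the interface endpoint service for Cognito identity pools in this
# Region (typically com.amazonaws.<region>.cognito-identity).
services = ec2.describe_vpc_endpoint_services()["ServiceNames"]
cognito_identity_services = [s for s in services if "cognito-identity" in s]

# Create the interface endpoint in your VPC; replace the IDs with your own.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",               # placeholder
    ServiceName=cognito_identity_services[0],
    SubnetIds=["subnet-0123456789abcdef0"],       # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder
    PrivateDnsEnabled=True,  # SDK calls to cognito-identity then resolve privately
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```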

Read article →

Amazon WorkSpaces Secure Browser introduces Web Content Filtering

<p>Amazon WorkSpaces Secure Browser now includes Web Content Filtering, a comprehensive security and compliance feature that enables organizations to control and monitor web content access. This new capability allows administrators to define granular access policies, block specific URLs or entire domain categories using 25+ predefined categories, and seamlessly integrate with Session Logger for enhanced monitoring and compliance reporting.<br /> <br /> While existing Chrome policies for domain control remain supported, Web Content Filtering provides a more comprehensive way to control web access through category-based filtering and improved logging capabilities. Organizations can better manage their remote work security and compliance requirements through centralized policy management that scales across the enterprise. IT security teams can implement default-deny policies for high-security environments, while compliance officers benefit from detailed logging and monitoring capabilities. The feature maintains flexibility by allowing customized policies and exceptions based on specific business needs.<br /> <br /> This feature is available at no additional cost in 10 AWS Regions, including US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt, London, Ireland), and Asia Pacific (Tokyo, Mumbai, Sydney, Singapore). WorkSpaces Secure Browser offers pay-as-you-go <a href="https://aws.amazon.com/workspaces/secure-browser/pricing/" target="_blank">pricing</a>.<br /> <br /> To get started with WorkSpaces Secure Browser, see <a href="https://docs.aws.amazon.com/workspaces-web/latest/adminguide/getting-started.html" target="_blank">Getting Started with Amazon WorkSpaces Secure Browser</a>. You can enable this feature in your AWS console and automatically migrate any browser policies for URL Blocklists or URL Allowlists. To learn more about the feature, please refer to the feature <a href="https://docs.aws.amazon.com/workspaces-web/latest/adminguide/web-content-filtering.html" target="_blank">documentation</a>.</p>

Read article →

AWS Application Migration Service supports IPv6

<p>AWS Application Migration Service (MGN) now supports Internet Protocol version 6 (IPv6) for both service communication and application migrations. Organizations can migrate applications that use IPv6 addressing, enabling transitions to modern network infrastructures.</p> <p>You can connect to AWS MGN using new <a href="https://docs.aws.amazon.com/general/latest/gr/mgn.html">dual-stack service endpoints</a> that support both IPv4 and IPv6 communications. When migrating applications, you can transfer replication data using IPv4 or IPv6 while maintaining network connections and security. Then, during testing and cutover phases, you can use your chosen network configuration (IPv4, IPv6, or dual-stack) to launch servers in your target environment.<br /> <br /> This feature is available in every AWS Region that supports AWS MGN and Amazon Elastic Compute Cloud (Amazon EC2) dual-stack endpoints. For supported regions, see the <a href="https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html#supported-regions">AWS MGN Supported AWS Regions</a> and <a href="https://docs.aws.amazon.com/ec2/latest/devguide/ec2-endpoints.html#ipv6">Amazon EC2 Endpoints</a> documentation.<br /> <br /> To learn more about AWS MGN, visit our <a href="https://aws.amazon.com/application-migration-service/">product page</a> or <a href="https://docs.aws.amazon.com/mgn/latest/ug/getting-started.html">documentation</a>. To get started, sign in to the <a href="https://console.aws.amazon.com/mgn/home">AWS Application Migration Service</a> Console.</p>
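If your automation runs in an IPv6-only or dual-stack network, you can point the SDK at the dual-stack endpoints. A small boto3 sketch, assuming a recent botocore that honors the generic `use_dualstack_endpoint` setting for the new MGN endpoints:

```python
import boto3
from botocore.config import Config

# Ask the SDK to resolve the dual-stack (IPv4 + IPv6) endpoint for MGN.
# The same behavior can be enabled via the AWS_USE_DUALSTACK_ENDPOINT
# environment variable or the use_dualstack_endpoint entry in ~/.aws/config.
dualstack = Config(use_dualstack_endpoint=True)

mgn = boto3.client("mgn", region_name="us-east-1", config=dualstack)

# List source servers over the dual-stack endpoint.
for server in mgn.describe_source_servers()["items"]:
    print(server["sourceServerID"], server["lifeCycle"]["state"])
```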

Read article →

Amazon ECS now supports custom container stop signals on AWS Fargate

<p><a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service</a> (Amazon ECS) now supports custom container stop signals for Linux tasks running on AWS Fargate, honoring the stop signal configured in Open Container Initiative (OCI) images when tasks are stopped. The enhancement improves graceful shutdown behavior by aligning Fargate task termination with each container’s preferred termination signal.<br /> <br /> Previously, when an Amazon ECS task running on AWS Fargate was stopped, each Linux container always received SIGTERM followed by SIGKILL after the configured timeout. With the new behavior, the Amazon ECS container agent reads the stop signal from the container image configuration and sends that signal when stopping the task. Containers that rely on signals such as SIGQUIT or SIGINT for graceful shutdown can now run on Fargate with their intended termination semantics. If no STOPSIGNAL is configured, Amazon ECS continues to send SIGTERM by default.<br /> <br /> Customers can use custom stop signals on Amazon ECS with AWS Fargate by adding a STOPSIGNAL instruction (for example, STOPSIGNAL SIGQUIT) to their OCI‑compliant container images. Support for container‑defined stop signals is available in all AWS Regions. To learn more, refer to the <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-lifecycle-explanation.html">ECS Developer Guide</a>.</p>

Read article →

Amazon CloudWatch SDK supports optimized JSON, CBOR protocols

<p><a href="https://aws.amazon.com/cloudwatch" target="_blank">Amazon CloudWatch</a> announces support for both the JSON and Concise Binary Object Representation (CBOR) protocols in the CloudWatch SDK, enabling lower latency and improved performance for CloudWatch customers. The SDK will automatically use JSON or CBOR as its new default communication protocol, offering customers a lower end-to-end processing latency as well as reduced payload sizes, application client side CPU, and memory usage.<br /> <br /> Customers use the CloudWatch SDK either directly or through Infrastructure as Code solutions to manage their monitoring resources. Reducing control plane operations latency and payload size helps customer optimize their operational maintenance and resources usage and costs. JSON and the CBOR data formats are standards designed to enable better performance over the traditional AWS Query protocol.<br /> <br /> The CloudWatch SDK for JSON and CBOR protocols support is available in all AWS Regions where Amazon CloudWatch is available and for all generally available AWS SDK language variants.<br /> <br /> To leverage the performance improvements, customers can install the <a href="https://docs.aws.amazon.com/sdkref/latest/guide/version-support-matrix.html" target="_blank">latest SDK version here</a>. To learn more about the AWS SDK, see <a href="https://aws.amazon.com/developer/tools/" target="_blank">Amazon Developer tools</a>.</p>

Read article →

Amazon ElastiCache Serverless now supports same-slot WATCH command

<p>Today, we are announcing that <a href="https://aws.amazon.com/elasticache/" target="_blank">Amazon ElastiCache Serverless</a> now supports the WATCH command for same-slot transactions, helping developers build more reliable applications with improved data consistency in high-concurrency scenarios. With this launch, the WATCH command makes transactions conditional, ensuring they execute only when monitored keys remain unchanged.<br /> <br /> For ElastiCache Serverless, the WATCH command works with transactions that operate on keys within the same hash slot as the watched keys. When applications attempt to watch keys that are not in the same hash slot, they'll receive a CROSSSLOT error. Developers can control key placement by using hash tags in their key names to ensure keys hash to the same slot. The transaction will also be aborted when ElastiCache Serverless cannot guarantee the state of watched keys.<br /> <br /> WATCH command support is available in all AWS regions where ElastiCache Serverless is supported at no additional cost. To get started, create transactions using the WATCH command through your preferred client library. To learn more about conditional transactions and the WATCH command, see the <a target="_blank"></a><a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/serverless.html" target="_blank">ElastiCache Serverless documentation</a>, and the Valkey <a href="https://valkey.io/topics/transactions/" target="_blank">transactions documentation</a>.</p>

Read article →

Amazon EC2 X8g instances now available in Asia Pacific (Sydney) region

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) X8g instances are available in the Asia Pacific (Sydney) Region. These instances are powered by AWS Graviton4 processors and deliver up to 60% better performance than AWS Graviton2-based Amazon EC2 X2gd instances. X8g instances offer up to 3 TiB of total memory and increased memory per vCPU compared to other Graviton4-based instances. They have the best price performance among EC2 X-series instances and are ideal for memory-intensive workloads such as electronic design automation (EDA) workloads, in-memory databases (Redis, Memcached), relational databases (MySQL, PostgreSQL), real-time big data analytics, real-time caching servers, and memory-intensive containerized applications.<br /> <br /> X8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 3TiB) than Graviton2-based X2gd instances. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Elastic Fabric Adapter (EFA) networking support is offered on 24xlarge, 48xlarge, and bare metal sizes, and Elastic Network Adapter (ENA) Express support is available on instance sizes larger than 12xlarge.<br /> <br /> To learn more, see <a href="https://aws.amazon.com/ec2/instance-types/x8g/">Amazon EC2 X8g Instances</a>. To quickly migrate your workloads to Graviton-based instances, see <a href="https://aws.amazon.com/ec2/graviton/fast-start/">AWS Graviton Fast Start program</a>. To get started, see the <a href="https://console.aws.amazon.com/">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, and <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html">AWS SDKs</a>.</p>

Read article →

Now generally available: Amazon EC2 C8gb instances

<p>Today, AWS announces the general availability of the new Amazon Elastic Block Store (Amazon EBS) optimized Amazon Elastic Compute Cloud (Amazon EC2) C8gb instances. These instances are powered by AWS Graviton4 processors to deliver up to 30% better compute performance than AWS Graviton3 processors. At up to 150 Gbps of EBS bandwidth, these instances offer higher EBS performance compared to same-sized equivalent Graviton4-based instances. Take advantage of the higher block storage performance offered by these new EBS optimized EC2 instances to scale the performance and throughput of workloads such as high-performance file systems, while optimizing the cost of running your workloads.</p> <p>For increased scalability, these instances offer instance sizes up to 24xlarge, including a metal-24xl size, up to 192 GiB of memory, up to 150 Gbps of EBS bandwidth, and up to 200 Gbps of networking bandwidth. These instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, and metal-24xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.</p> <p>The new C8gb instances are available in the US East (N. Virginia) and US West (Oregon) Regions. Metal sizes are only available in the US East (N. Virginia) Region.</p> <p>To learn more, see <a href="https://aws.amazon.com/ec2/instance-types/c8g/">Amazon EC2 C8gb Instances</a>. To begin your Graviton journey, visit the <a href="https://aws.amazon.com/ec2/graviton/level-up-with-graviton/" target="_blank">Level up your compute with AWS Graviton page</a>. To get started, see <a href="https://console.aws.amazon.com/" target="_blank">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/" target="_blank">AWS Command Line Interface (AWS CLI)</a>, and <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html" target="_blank">AWS SDKs</a>.</p>

Read article →

Amazon Braket now supports Qiskit 2.0

<p>Amazon Braket now supports Qiskit 2.0, enabling quantum developers to use the latest version of the most popular quantum software framework with native primitives and client-side compilation capabilities.<br /> <br /> With this release, Braket provides native implementations of Qiskit's Sampler and Estimator primitives that leverage Braket's program sets for optimized batching, reducing execution time and costs compared to generic wrapper approaches. The native primitives handle parameter sweeps and observable measurements service-side, eliminating the need for customers to implement this logic manually. Additionally, the bidirectional circuit conversion capability enables customers to use Qiskit's extensive compilation framework for client-side transpilation before submitting to Braket devices, providing the control and reproducibility that enterprise users and researchers require for device characterization experiments and custom compilation passes.<br /> <br /> Qiskit 2.0 support is available in all AWS Regions where Amazon Braket is available. To get started, see the <a href="https://qiskit-community.github.io/qiskit-braket-provider/">Qiskit-Braket provider documentation</a> and the Amazon Braket <a href="https://docs.aws.amazon.com/braket/latest/developerguide/what-is-braket.html">Developer Guide</a>.</p>
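A small sketch of the client-side transpilation flow described above, using the Qiskit-Braket provider. The `BraketProvider` interface and the `SV1` device name follow the provider's documented usage, but treat the exact class and backend names as assumptions to confirm against the provider documentation:

```python
from qiskit import QuantumCircuit, transpile
from qiskit_braket_provider import BraketProvider

# Pick a Braket device through the Qiskit provider (SV1 is the managed
# state-vector simulator; QPU backends are selected the same way).
backend = BraketProvider().get_backend("SV1")

# Build a Bell-state circuit and transpile it client-side with Qiskit's
# compilation framework before submitting to Braket.
bell = QuantumCircuit(2, 2)
bell.h(0)
bell.cx(0, 1)
bell.measure([0, 1], [0, 1])
compiled = transpile(bell, backend=backend, optimization_level=2)

job = backend.run(compiled, shots=1000)
print(job.result().get_counts())
```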

Read article →

AWS Support Center Console now supports screen sharing for troubleshooting support cases

<p>Today, AWS announces that the AWS Support Center Console now supports screen sharing for troubleshooting support cases. With this new feature, you can request a virtual meeting while in an active chat or call and join support calls with one click through a meeting bridge link. With the new virtual meetings, you can share your screen during the meeting and maintain seamless access to case details for efficient troubleshooting. This enhancement simplifies your support experience by keeping all support interactions within the AWS Support Center console.</p> <p>To learn more, visit the <a href="https://docs.aws.amazon.com/awssupport/latest/user/case-management.html" target="_blank">AWS Support page</a>.</p>

Read article →

Amazon EC2 C8gn instances are now available in additional regions

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the AWS US East (Ohio) and Middle East (UAE) Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards, and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances.<br /> <br /> Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference.<br /> <br /> For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.<br /> <br /> C8gn instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Stockholm), Asia Pacific (Singapore, Malaysia, Sydney, Thailand), and Middle East (UAE).<br /> <br /> To learn more, see <a href="https://aws.amazon.com/ec2/instance-types/c8g/" target="_blank">Amazon EC2 C8gn Instances</a>. To begin your Graviton journey, visit the <a href="https://aws.amazon.com/ec2/graviton/level-up-with-graviton/" target="_blank">Level up your compute with AWS Graviton page</a>. To get started, see <a href="https://console.aws.amazon.com/" target="_blank">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/" target="_blank">AWS Command Line Interface (AWS CLI)</a>, and <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html" target="_blank">AWS SDKs</a>.</p>

Read article →

Amazon GameLift Servers enhances AWS Console for game developers with AI powered assistance

<p>Today, Amazon GameLift Servers is launching AI-powered assistance in the AWS Console, leveraging Amazon Q Developer to provide tailored guidance for game developers. This new feature integrates specialized GameLift Servers knowledge to help customers navigate complex workflows, troubleshoot issues, and optimize their game server deployments more efficiently.<br /> <br /> Developers can now access AI-assisted recommendations for game server integration, fleet configuration, and performance optimization directly within the AWS Console via Amazon GameLift Servers. This enhancement aims to streamline decision making processes, reduce troubleshooting time, and improve overall resource utilization, leading to cost savings and better player experiences.<br /> <br /> AI-powered assistance is now available in all Amazon GameLift Servers <a href="https://docs.aws.amazon.com/gameliftservers/latest/developerguide/gamelift-regions.html">supported regions</a>, except AWS China. To learn more about this new feature, visit the Amazon GameLift Servers <a href="https://docs.aws.amazon.com/gameliftservers/latest/developerguide/release-notes.html">documentation</a>.</p>

Read article →

Amazon RDS and Aurora now support resource tagging for Automated Backups

<p>Amazon RDS and Aurora now support resource tagging for automated backups and cluster automated backups. You can now tag your automated backups separately from the parent DB instance or DB cluster, enabling Attribute-Based Access Control (ABAC) and simplifying resource management and cost tracking.</p> <p>With this launch, you can tag automated backups in the same way as other RDS resources using the AWS Management Console, API, or SDK. Use these tags with IAM policies to control access and permissions to automated backups. Additionally, these tags can help you categorize your resources by application, project, department, environment, and more, as well as manage, organize, and track costs of your automated backups. For example, create application specific tags to control permissions for describing, deleting, or restoring automated backups and to organize and track backup costs of the application.</p> <p>This capability is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>, including the AWS GovCloud (US) Regions where Aurora and RDS are available.</p> <p>To learn more about tagging <a href="https://aws.amazon.com/rds/aurora/">Aurora</a> and <a href="https://aws.amazon.com/rds/">RDS</a> automated backups, see the Amazon documentation on <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Tagging.html">Tagging Amazon Aurora resources</a>, <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html">Tagging Amazon RDS resources</a>, and <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/security_iam_service-with-iam.html#security_iam_service-with-iam-tags">Using tags for attribute-based access control</a>.</p>
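Tagging an automated backup works like tagging any other RDS resource once you have its ARN. A short boto3 sketch; the instance identifier and tag values are placeholders, and the backup ARN is looked up via the describe call rather than constructed by hand:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Find the automated backup for a DB instance, then tag it independently of
# the parent instance.
backups = rds.describe_db_instance_automated_backups(
    DBInstanceIdentifier="my-database"   # placeholder
)["DBInstanceAutomatedBackups"]
backup_arn = backups[0]["DBInstanceAutomatedBackupsArn"]

rds.add_tags_to_resource(
    ResourceName=backup_arn,
    Tags=[
        {"Key": "application", "Value": "payments"},
        {"Key": "environment", "Value": "production"},
    ],
)

# The tags can now drive ABAC policies and cost tracking for the backup itself.
print(rds.list_tags_for_resource(ResourceName=backup_arn)["TagList"])
```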

Read article →

Amazon EC2 X8g instances now available in Europe (Stockholm) region

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) X8g instances are available in the Europe (Stockholm) Region. These instances are powered by AWS Graviton4 processors, and they offer up to 3 TiB of total memory and increased memory per vCPU compared to other Graviton4-based instances. X8g instances are ideal for memory-intensive workloads, such as electronic design automation (EDA) workloads, in-memory databases (Redis, Memcached), relational databases (MySQL, PostgreSQL), real-time big data analytics, real-time caching servers, and memory-intensive containerized applications.</p> <p>X8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 3TiB) than Graviton2-based X2gd instances. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Elastic Fabric Adapter (EFA) networking support is offered on 24xlarge, 48xlarge, and bare metal sizes, and Elastic Network Adapter (ENA) Express support is available on instance sizes larger than 12xlarge.</p> <p>X8g instances are currently available in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>: US East (N. Virginia, Ohio), US West (Oregon), and Europe (Frankfurt, Stockholm).</p> <p>To learn more, see <a href="https://aws.amazon.com/ec2/instance-types/x8g/">Amazon EC2 X8g Instances</a>. To quickly migrate your workloads to Graviton-based instances, see <a href="https://aws.amazon.com/ec2/graviton/fast-start/">AWS Graviton Fast Start program</a>. To get started, see the <a href="https://console.aws.amazon.com/">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, and <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html">AWS SDKs</a>.</p>

Read article →

AWS Partner Central now includes opportunity deal sizing

<p>Today, AWS announces a deal sizing capability in AWS Partner Central. This new feature, available within the APN Customer Engagements (ACE) Opportunities, uses AI to provide deal size estimates and AWS service recommendations. The deal sizing capability allows Partners to save time on deal management by simplifying the process of estimating AWS monthly recurring revenue (MRR) when creating or updating opportunities.<br /> <br /> Partners can optionally import AWS Pricing Calculator URLs to automatically populate AWS service selections and corresponding spend estimates into their opportunities, reducing the need for manual re-entry. When a Pricing Calculator URL is provided, deal sizing delivers enhanced insights including pricing strategy optimization recommendations, potential cost savings analysis, Migration Acceleration Program (MAP) eligibility indicators, and modernization pathway analysis. These enhanced insights help Partners refine their technical approach and strengthen funding applications, accelerating the funding approval process.<br /> <br /> Deal sizing is now available in AWS Partner Central worldwide. The feature is accessible through both AWS Partner Central and the AWS Partner Central API for Selling, which is available in the US East (N. Virginia) Region.<br /> <br /> To get started, log in to <a href="https://aws.amazon.com/partners/partner-central/" target="_blank">AWS Partner Central</a> in the console to create or update opportunities and view deal sizing insights. For API integration with your CRM system, see the <a href="https://docs.aws.amazon.com/partner-central/latest/APIReference/welcome.html" target="_blank">AWS Partner Central API Documentation</a>. To learn more about deal sizing, visit the <a href="https://docs.aws.amazon.com/partner-central/latest/sales-guide/creating-opportunity.html" target="_blank">Partner Central Sales Guide</a>.</p>

Read article →

AWS Directory Service for Microsoft AD and AD Connector available in Asia Pacific (New Zealand) Region

<p>AWS Directory Service for Microsoft Active Directory, also known as <a href="https://aws.amazon.com/directoryservice/">AWS Managed Microsoft AD</a>, and <a href="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html">AD Connector</a> are now available in the Asia Pacific (New Zealand) Region.</p> <p><br /> Built on actual Microsoft Active Directory (AD), AWS Managed Microsoft AD enables you to migrate AD-aware applications while reducing the work of managing AD infrastructure in the AWS Cloud. You can use your Microsoft AD credentials to domain join EC2 instances, and also manage containers and Kubernetes clusters. You can keep your identities in your existing Microsoft AD or create and manage identities in your AWS managed directory.<br /> <br /> AD Connector is a proxy that enables AWS applications to use your existing on-premises AD identities without requiring AD infrastructure in the AWS Cloud. You can also use AD Connector to <a href="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ad_connector_join_instance.html">join Amazon EC2 instances</a> to your on-premises AD domain and manage these instances using your existing group policies.<br /> <br /> Please see all <a href="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/regions.html">AWS Regions</a> where AWS Managed Microsoft AD and AD Connector are available. To learn more, see <a href="https://aws.amazon.com/directoryservice/">AWS Directory Service.</a></p>

Read article →

Announcing Spatial Data Management on AWS to accelerate spatial-data insights

<p>Today, AWS is announcing Spatial Data Management on AWS (SDMA), a solution that enables customers to store, enrich, and connect spatial data at scale. SDMA enables customers to store their multimodal spatial data representing their physical assets (3D, geospatial, behavioral, temporal data) in a secure, centralized cloud environment. SDMA serves as a collaborative hub enabling connectivity between customers’ spatial data, their ISV SaaS applications, and AWS services. In addition, customers can use SDMA’s collection rules to define how their spatial data is organized and enriched, helping maintain consistency and governance. Customers can use SDMA’s APIs, desktop application, and web interface to efficiently manage spatial data to accelerate insights and informed decision making around physical operations.</p> <p>SDMA centralizes customers’ spatial data in a secure and highly available cloud repository to enhance data transparency and accessibility across workflows. Leveraging SDMA's automated metadata extraction for spatial data file formats (starting with .LAZ, .E57, .GLB, and .GLTF), customers can improve data discoverability and relationships. SDMA’s REST APIs and customizable connectors simplify integrations with external applications, eliminating manual file handling and enhancing cloud and on-premises interoperability. SDMA's intuitive web and desktop interfaces enable users across technical skill levels to manage spatial data efficiently. Auto-generated file previews are designed to improve workflow speed and data accuracy by allowing users to view and validate data without downloading large files.</p> <p>SDMA is available in the following AWS Regions: Asia Pacific (Tokyo, Singapore, Sydney), Europe (Frankfurt, Ireland, London), US East (N. Virginia, Ohio), and US West (Oregon).</p> <p>To learn more, visit the <a href="https://aws.amazon.com/solutions/implementations/spatial-data-management-on-aws/" target="_blank">SDMA Product page</a>.</p>

Read article →

Amazon Quick Suite integrates Quick Research with Quick Flows for report automation

<p>Amazon Quick Suite now includes Quick Research as a step within Quick Flows. This integration enables teams to generate comprehensive research reports as part of automated, multi-step workflows, transforming research projects into reusable workflows that can be shared across their organization.<br /> <br /> Quick Suite is Amazon's new AI-powered workspace that helps organizations get answers from their business data and move quickly from insights to action. With this integration, teams can trigger research automatically within their flows rather than conducting separate analysis. This addresses a critical productivity challenge by enabling teams to capture and scale proven research methods across hundreds of automated use cases. The integration also allows users to automate research workflows through scheduled triggers so users can set up flows that automatically generate research at specific times. Common use cases include automated account plan creation, standardizing product compliance analysis, and scheduled industry reports.<br /> <br /> Users benefit from pre-configured flows that generate research based on flow creator instructions and optional user inputs. The generated research report can be used further to automatically trigger downstream actions like updating a Salesforce opportunity for an account team to follow up on, posting on a Jira ticket for a compliance team to review, or creating an Asana task for a patent lawyer to approve. This unlocks "set and forget" workflows that deliver consistent analysis without manual heavy lifting. Now operating within these automated workflows, Quick Research maintains its core strength of streamlining analysis across diverse enterprise data sources while delivering verified, source-traced insights. For existing Flow users, this provides access to more comprehensive analysis.<br /> <br /> Quick Research with Flows integration is available in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a>: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). To learn more about automating your research needs, read the <a href="https://docs.aws.amazon.com/quicksuite/latest/userguide/quick-research-steps-in-flows.html" target="_blank">Quick Suite user guide</a>.&nbsp;&nbsp;</p>

Read article →

Amazon Q can now analyze SES email sending

<p>Today, <a href="https://aws.amazon.com/q/" style="cursor: pointer;">Amazon Q</a> (Q) added support for analyzing email sending in <a href="https://aws.amazon.com/ses/" style="cursor: pointer;">Amazon Simple Email Service</a> (SES). Now customers can ask Q questions about their SES resource setup and usage patterns, and Q will help them optimize their configuration and troubleshoot deliverability problems. This makes it easier to manage SES operational activities with less technical knowledge.<br /> <br /> Previously, customers could use SES features such as <a href="https://docs.aws.amazon.com/ses/latest/dg/vdm.html" style="cursor: pointer;">Virtual Deliverability Manager</a> to manage and explore their SES resource configuration and usage. SES provided convenient dashboard views and query tools to help customers find information, however customers needed deep understanding of email sending concepts to interact with the service. Now, customers can ask Q for help in optimizing resource configuration and troubleshooting deliverability challenges. Q will evaluate customer’s usage patterns and SES resource configuration, find the answers customers need, and help them understand the context without requiring pre-knowledge or manual exploration.<br /> <br /> Q supports SES resource analysis in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS Regions</a> where SES and Q are available.<br /> <br /> For more information, see the <a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/chat-email.html" target="_blank">Q documentation</a> for information about interacting with SES through Q.</p>

Read article →

AWS Elastic Beanstalk now supports Python 3.14 on Amazon Linux 2023

<p>AWS Elastic Beanstalk now enables customers to build and deploy Python 3.14 applications on the Amazon Linux 2023 (AL2023) platform. This latest platform support allows developers to leverage the newest features and improvements in Python while taking advantage of the enhanced security and performance of AL2023.</p> <p>AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Python 3.14 on AL2023 delivers enhanced interactive interpreter capabilities, improved error messages, and important security and API improvements. Developers can create Elastic Beanstalk environments running Python 3.14 on AL2023 through the Elastic Beanstalk Console, CLI, or API.</p> <p>This platform is available in all commercial AWS Regions where Elastic Beanstalk is available, as well as the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>.</p> <p>To learn more about Python 3.14 on Amazon Linux 2023, see the AWS Elastic Beanstalk <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-apps.html">Developer guide</a>. For additional information, visit the AWS Elastic Beanstalk <a href="https://aws.amazon.com/elasticbeanstalk/">product page</a>.</p>
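Environments for the new platform can be created from the console, the EB CLI, or the API. A small boto3 sketch that discovers the Python 3.14 solution stack name instead of hard-coding a version string; the application and environment names are placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Look up the current AL2023 solution stack for Python 3.14 rather than
# hard-coding a platform version that changes over time.
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
python314 = next(s for s in stacks
                 if "Amazon Linux 2023" in s and "Python 3.14" in s)

eb.create_environment(
    ApplicationName="my-app",          # placeholder
    EnvironmentName="my-app-py314",    # placeholder
    SolutionStackName=python314,
    VersionLabel="v1",                 # an application version you have already uploaded
)
```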

Read article →

AWS launches simplified enablement of AWS CloudTrail events in Amazon CloudWatch

<p>Today, AWS launches simplified enablement of AWS CloudTrail events in Amazon CloudWatch, a monitoring and logging service that helps you collect, monitor, and analyze log data from your AWS resources and applications. With this launch, you can now centrally configure collection of CloudTrail events in <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/telemetry-config-cloudwatch.html">CloudWatch</a> alongside other popular AWS log sources such as Amazon VPC flow logs and Amazon EKS Control Plane Logs. CloudWatch's ingestion experience provides a consolidated view that simplifies collecting telemetry from different sources for accounts in your AWS Organization, ensuring comprehensive monitoring and data collection across your AWS environment.<br /> <br /> This new integration leverages <a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-service-linked-channels.html">service-linked channels</a> (SLCs) to receive events from CloudTrail without requiring trails, and also provides additional benefits such as safety checks and termination protection. You incur both <a href="https://aws.amazon.com/cloudtrail/pricing/">CloudTrail event delivery charges</a> and <a href="https://aws.amazon.com/cloudwatch/pricing/">CloudWatch Logs ingestion fees</a> based on custom logs pricing.<br /> <br /> To learn more about enablement of CloudTrail events in CloudWatch and supported AWS Regions, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/telemetry-config-cloudwatch.html">Amazon CloudWatch documentation</a>.</p>

Read article →

AWS Elastic Beanstalk now supports Node.js 24 on Amazon Linux 2023

<p>AWS Elastic Beanstalk now enables customers to build and deploy Node.js 24 applications on the Amazon Linux 2023 (AL2023) platform. This latest platform support allows developers to leverage the newest features and improvements in Node.js while taking advantage of the enhanced security and performance of AL2023.</p> <p>AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Node.js 24 on AL2023 delivers updates to the V8 JavaScript engine, npm 11, and security and performance improvements. Developers can create Elastic Beanstalk environments running Node.js 24 on AL2023 through the Elastic Beanstalk Console, CLI, or API.</p> <p>This platform is available in all commercial AWS Regions where Elastic Beanstalk is available, as well as the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>.</p> <p>To learn more about Node.js 24 on Amazon Linux 2023, see the AWS Elastic Beanstalk <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-apps.html">Developer guide</a>. For additional information, visit the AWS Elastic Beanstalk <a href="https://aws.amazon.com/elasticbeanstalk/">product page</a>.</p>

Read article →

TwelveLabs’ Pegasus 1.2 model now in 23 new AWS regions via Global cross-region inference

<p>Amazon Bedrock introduces <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/global-cross-region-inference.html">Global cross-Region inference</a> for TwelveLabs' Pegasus 1.2, expanding model availability to 23 new regions in addition to the seven regions where the model was already available. You can now also access the model in all EU regions in Amazon Bedrock using <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/geographic-cross-region-inference.html">Geographic cross-Region inference</a>. Geographic cross-Region inference is ideal for workloads with data residency or compliance requirements within a specific geographic boundary, while Global cross-Region inference is recommended for applications that prioritize availability and performance across multiple geographies.<br /> <br /> Pegasus 1.2 is a powerful video-first language model that can generate text based on the visual, audio, and textual content within videos. Specifically designed for long-form video, it excels at video-to-text generation and temporal understanding. With Pegasus 1.2's availability in these additional regions, you can now build video-intelligence applications closer to your data and end users, reducing latency and simplifying your architecture.<br /> <br /> For a complete list of supported inference profiles and regions for Pegasus 1.2, refer to the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html#inference-profiles-support-system">Cross-Region Inference documentation</a>. To get started with Pegasus 1.2, visit the Amazon Bedrock console. To learn more, read the <a href="https://aws.amazon.com/bedrock/twelvelabs/">product page</a> and <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-twelvelabs.html">Amazon Bedrock documentation</a>.</p>

Read article →

Amazon SageMaker now supports self-service migration of Notebook instances to latest platform versions

<p>Amazon SageMaker notebook instances now support self-service migration, allowing you to update your notebook instance platform identifier through the UpdateNotebookInstance API. This enables you to seamlessly transition from unsupported platform identifiers (notebook-al1-v1, notebook-al2-v1, notebook-al2-v2) to supported versions (notebook-al2-v3, notebook-al2023-v1).<br /> <br /> With the new <a href="https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateNotebookInstance.html#sagemaker-UpdateNotebookInstance-request-PlatformIdentifier">PlatformIdentifier</a> parameter in the UpdateNotebookInstance API, you can update to newer versions of the notebook instance platform while preserving your existing data and configurations. The <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-jl.html#nbi-jl-version-maintenance">platform identifier</a> determines which operating system and JupyterLab version combination your notebook instance runs. This self-service capability simplifies the migration process and helps you keep your notebook instances current.<br /> <br /> This feature is supported through the AWS CLI (version 2.31.27 or newer) and SDKs, and is available in all AWS Regions where Amazon SageMaker notebook instances are supported. To learn more, see <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-update.html">Update a Notebook Instance</a> in the Amazon SageMaker Developer Guide.</p>
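A minimal boto3 sketch of the migration call. The instance name is a placeholder, and, as with other notebook instance updates, the sketch assumes the instance needs to be stopped before the update is applied:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")
name = "my-notebook"  # placeholder

# Notebook instance attributes are updated while the instance is stopped.
sm.stop_notebook_instance(NotebookInstanceName=name)
sm.get_waiter("notebook_instance_stopped").wait(NotebookInstanceName=name)

# Move from an unsupported platform identifier to AL2023 with a current
# JupyterLab version, preserving the attached volume's data and configuration.
sm.update_notebook_instance(
    NotebookInstanceName=name,
    PlatformIdentifier="notebook-al2023-v1",
)

sm.start_notebook_instance(NotebookInstanceName=name)
```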

Read article →

Amazon Connect launches WhatsApp channel for Outbound Campaigns

<p>Amazon Connect Outbound Campaigns now supports WhatsApp, expanding on the WhatsApp Business messaging capabilities that already allow customers to contact your agents. You can now engage customers through proactive, automated campaigns on their preferred messaging platform, delivering timely communications such as appointment reminders, payment notifications, order updates, and product recommendations directly through WhatsApp. Setting up WhatsApp campaigns uses the same familiar Amazon Connect interface, where you can define your target audience, choose personalized message templates, schedule delivery times, and apply compliance guardrails, just as you do for SMS, voice, and email campaigns.<br /> <br /> Previously, Outbound Campaigns supported SMS, email, and voice channels, while WhatsApp was available only for customers to initiate conversations with your agents. With WhatsApp support in Outbound Campaigns, you can now proactively reach customers through an additional messaging platform while maintaining a unified campaign management experience. You can personalize WhatsApp messages using real-time customer data, track delivery and engagement metrics, and manage communication frequency and timing to ensure compliance. This expansion provides greater flexibility to connect with customers on their preferred platforms while streamlining your omnichannel outreach strategy.<br /> <br /> This feature is available in all AWS Regions where Amazon Connect Outbound Campaigns is supported. To learn more, visit the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/how-to-create-campaigns.html#create-campaigns-channel-configurations">Amazon Connect Outbound Campaigns documentation</a>.</p>

Read article →

SES Mail Manager is now available in 10 additional AWS Regions, 27 total

<p>Amazon SES announces that the SES Mail Manager product is now available in 10 additional <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/" target="_blank">commercial AWS Regions</a>. This expands coverage beyond the 17 commercial AWS Regions where Mail Manager was already available, meaning that Mail Manager is now offered in all commercial Regions where SES offers its core Outbound service.<br /> <br /> SES Mail Manager allows customers to configure email routing and delivery mechanisms for their domains, and to have a single view of email governance, risk, and compliance solutions for all email workloads. Organizations commonly deploy Mail Manager to replace legacy hosted mail relays or simplify integration with third-party mailbox providers and email security solutions. Mail Manager also supports onward delivery to WorkMail mailboxes, built-in archiving with search and export capabilities, and integration with third-party security add-ons directly within the console.<br /> <br /> The 10 new Mail Manager Regions are Middle East (Bahrain), Asia Pacific (Jakarta), Africa (Cape Town), Middle East (UAE), Asia Pacific (Hyderabad), Asia Pacific (Malaysia), Europe (Milan), Israel (Tel Aviv), Canada West (Calgary), and Europe (Zurich). The full list of Mail Manager Region availability is <a href="https://docs.aws.amazon.com/general/latest/gr/ses.html" target="_blank">here</a>. <br /> <br /> To learn more, see the Amazon SES Mail Manager <a href="https://docs.aws.amazon.com/ses/latest/dg/eb.html" target="_blank">product page</a> and the SES Mail Manager documentation. You can start using Mail Manager in these new Regions through the Amazon SES console.</p>

Read article →

Amazon SES adds VPC support for API endpoints

<p>Today, Amazon <a href="https://aws.amazon.com/ses/">Simple Email Service</a> (SES) added support for accessing SES API endpoints through Virtual Private Cloud (VPC) endpoints. Customers use VPC endpoints to enable access to SES APIs for sending emails and managing their SES resource configuration. This release helps customers increase security in their VPCs.<br /> <br /> Previously, customers who ran their workloads in a VPC could access SES APIs by configuring an internet gateway resource in their VPC. This enabled traffic from the VPC to flow into the internet, and reach SES public API endpoints. Now, customers can use the VPC endpoints to access SES APIs without the need for an internet gateway, reducing the chances for activity in the VPC to be exposed to the internet..<br /> <br /> SES supports VPC for SES API endpoints in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a> where SES is available.<br /> <br /> For more information, see the documentation for information about <a href="https://docs.aws.amazon.com/ses/latest/dg/send-email-set-up-vpc-endpoints.html">setting up VPC endpoints with Amazon SES</a>.</p>

Read article →

Amazon OpenSearch Service now supports automatic semantic enrichment

<p>Amazon OpenSearch Service now brings automatic semantic enrichment to managed clusters, matching the capability we launched for <a href="https://aws.amazon.com/about-aws/whats-new/2025/08/amazon-opensearch-serverless-introduces-automatic-semantic-enrichment/" style="cursor: pointer;">OpenSearch Serverless</a> earlier this year. This feature allows you to leverage the power of semantic search with minimal configuration effort.<br /> <br /> Traditional lexical search only matches exact phrases, often missing relevant content. Automatic semantic enrichment understands context and meaning, delivering more relevant results. For example, a search for "eco-friendly transportation options" finds matches about "electric vehicles" or "public transportation"—even when these exact terms aren't present. This new capability handles all semantic processing automatically, eliminating the need to manage machine learning models. It supports both English-only and multi-lingual variants, covering 15 languages including Arabic, French, Hindi, Japanese, Korean, and more. You pay only for actual usage during data ingestion, billed as OpenSearch Compute Unit (OCU) - Semantic Search. View the <a href="https://aws.amazon.com/opensearch-service/pricing/" style="cursor: pointer;">pricing page</a> for cost details and a pricing example.<br /> <br /> This feature is now available for Amazon OpenSearch Service domains running OpenSearch version 2.19 or later. Currently, this feature supports non-VPC domains in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).<br /> <br /> Get started with our <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/opensearch-semantic-enrichment.html" style="cursor: pointer;">documentation</a> on automatic semantic enrichment.</p>

Read article →

Amazon Connect Customer Profiles launches new segmentation capabilities (Beta)

<p>Amazon Connect Customer Profiles now offers new segmentation capabilities powered by Spark SQL (Beta), enabling you to build sophisticated customer segments using your complete Customer Profiles data with AI assistance.<br /> <br /> You can:<br /> </p> <ul> <li><b>Access complete profile data</b>: Use both custom objects and standard objects for segmentation</li> <li><b>Leverage SQL capabilities</b>: Join objects, filter with statistical functions like percentiles, and standardize date fields for complex analysis</li> <li><b>Build segments with AI assistance</b>: Use natural language prompts with the Segment AI assistant to automatically generate segment definitions in Spark SQL, or write SQL directly</li> <li><b>Validate before deployment</b>: Review AI-generated SQL, view natural language explanations, and get automatic segment estimates</li> </ul> <p>For example, you can create segments like "customers who called customer service more than 3 times in the past month about new purchases they made" or "high-value customers in the 90th percentile of lifetime spend" to enable precise targeting for outbound campaigns and personalized customer experiences.<br /> <br /> These new segmentation capabilities are offered alongside existing segmentation features. Both integrate seamlessly with segment membership calls, Flow blocks, and Outbound Campaigns, allowing you to choose the approach that best fits your use case.<br /> <br /> <b>Getting started</b>: Enable Data store from the Customer Profiles page to use the new segmentation capabilities.<br /> <br /> <b>Availability</b>: Available in all AWS Regions where Amazon Connect Customer Profiles is offered.<br /> <br /> For more information, see <a href="https://docs.aws.amazon.com/connect/latest/adminguide/customer-segments-building-segments.html">Build customer segments in Amazon Connect</a> in the Amazon Connect Administrator Guide.</p>
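For illustration only, the 90th-percentile lifetime-spend example above might be expressed in Spark SQL along these lines; the object and field names are hypothetical and would need to be replaced with the standard or custom objects defined in your own Customer Profiles domain.

```python
# Hypothetical object and field names for illustration; the SQL string is what you
# would paste into the segment builder, or have the Segment AI assistant generate.
segment_sql = """
SELECT profile_id
FROM profile
WHERE lifetime_spend >= (
  SELECT percentile_approx(lifetime_spend, 0.9) FROM profile
)
"""
```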

Read article →

Amazon Bedrock now supports Responses API from OpenAI

<p>Amazon Bedrock now supports Responses API on new OpenAI API-compatible service endpoints. Responses API enables asynchronous inference for long-running workloads, simplifies tool use integration for agentic workflows, and supports stateful conversation management. Instead of requiring developers to pass the entire conversation history with each request, Responses API enables them to automatically rebuild context without manual history management. These new service endpoints support both streaming and non-streaming modes, enable reasoning effort support within Chat Completions API, and require only a base URL change for developers to integrate within existing codebases with OpenAI SDK compatibility.<br /> <br /> Chat Completions with reasoning effort support is available for all Amazon Bedrock models that are powered by Mantle, a new distributed inference engine for large-scale machine learning model serving on Amazon Bedrock. Mantle simplifies and expedites onboarding of new models onto Amazon Bedrock, provides highly performant and reliable serverless inference with sophisticated quality of service controls, unlocks higher default customer quotas with automated capacity management and unified pools, and provides out-of-the-box compatibility with OpenAI API specifications. Responses API support is available today starting with OpenAI's GPT OSS 20B/120B models, with support for other models coming soon.<br /> <br /> To get started, visit the service documentation <b><a href="https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html">here</a></b>.</p>
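As a minimal sketch of that base-URL swap with the OpenAI Python SDK: the endpoint URL shape, the use of a Bedrock API key, and the model ID below are assumptions to verify against the Amazon Bedrock documentation for your Region.

```python
from openai import OpenAI

# Assumptions: endpoint URL, API key mechanism, and model ID -- confirm the exact
# values for your Region in the Amazon Bedrock documentation.
client = OpenAI(
    base_url="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1",
    api_key="<your-amazon-bedrock-api-key>",
)

response = client.responses.create(
    model="openai.gpt-oss-120b-1:0",  # GPT OSS 120B placeholder ID
    input="Draft a two-sentence summary of why VPC endpoints improve security.",
)
print(response.output_text)
```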

Read article →

Announcing new Amazon EC2 M9g instances powered by AWS Graviton5 processors (Preview)

<p>Starting today, new general purpose Amazon Elastic Compute Cloud (Amazon EC2) M9g instances, powered by AWS Graviton5 processors, are available in preview. AWS Graviton5 is the latest in the Graviton family of processors that are custom designed by AWS to provide the best price performance for workloads in Amazon EC2. These instances offer up to 25% better compute performance, and higher networking and Amazon Elastic Block Store (Amazon EBS) bandwidth than AWS Graviton4-based M8g instances. They are up to 30% faster for databases, up to 35% faster for web applications, and up to 35% faster for machine learning workloads compared to M8g.<br /> <br /> M9g instances are built on the AWS Nitro System, a collection of hardware and software innovations designed by AWS. The <a href="https://aws.amazon.com/ec2/nitro/" style="cursor: pointer;">AWS Nitro System</a> enables the delivery of efficient, flexible, and secure cloud services with isolated multitenancy, private networking, and fast local storage. Amazon EC2 M9g instances are ideal for workloads such as application servers, microservices, gaming servers, midsize data stores, and caching fleets.<br /> <br /> To learn more or request access to the M9g preview, see Amazon EC2 M9g instances. To begin your Graviton journey, visit the <a href="https://aws.amazon.com/ec2/graviton/level-up-with-graviton/" style="cursor: pointer;">Level up your compute with AWS Graviton page</a>.</p>

Read article →

Amazon Bedrock now supports reinforcement fine-tuning delivering 66% accuracy gains on average over base models

<p>Amazon Bedrock now supports reinforcement fine-tuning, helping you improve model accuracy without needing deep machine learning expertise or large amounts of labeled data. Amazon Bedrock automates the reinforcement fine-tuning workflow, making this advanced model customization technique accessible to everyday developers. Models learn to align with your specific requirements using a small set of prompts rather than the large amounts of data needed for traditional fine-tuning methods, enabling teams to get started quickly. This capability teaches models through feedback on multiple possible responses to the same prompt, improving their judgment of what makes a good response. Reinforcement fine-tuning in Amazon Bedrock delivers 66% accuracy gains on average over base models, so you can use smaller, faster, and more cost-effective model variants while maintaining high quality.</p> <p>Organizations struggle to adapt AI models to their unique business needs, forcing them to choose between generic models with average performance or expensive, complex customization that requires specialized talent, infrastructure, and risky data movement. Reinforcement fine-tuning in Amazon Bedrock removes this complexity by making advanced model customization fast, automated, and secure. You can train models by uploading training data directly from your computer or choose from datasets already stored in Amazon S3, eliminating the need for any labeled datasets. You can define reward functions using verifiable rule-based graders or AI-based judges along with built-in templates to optimize your models for both objective tasks such as code generation or math reasoning, and subjective tasks such as instruction following or chatbot interactions. Your proprietary data never leaves AWS's secure, governed environment during the entire customization process, mitigating security and compliance concerns.</p> <p>You can get started with reinforcement fine-tuning in Amazon Bedrock through the <a contenteditable="false" href="https://console.aws.amazon.com/bedrock" style="cursor: pointer;">Amazon Bedrock console</a> and via the <a contenteditable="false" href="https://docs.aws.amazon.com/bedrock/latest/APIReference/welcome.html" style="cursor: pointer;">Amazon Bedrock APIs</a>. At launch, you can use reinforcement fine-tuning with Amazon Nova 2 Lite with support for additional models coming soon. To learn more about reinforcement fine-tuning in Amazon Bedrock, read the <a href="https://aws.amazon.com/blogs/aws/improve-model-accuracy-with-reinforcement-fine-tuning-in-amazon-bedrock/">launch blog</a>, <a contenteditable="false" href="https://aws.amazon.com/bedrock/pricing/" style="cursor: pointer;">pricing page</a>, and <a contenteditable="false" href="https://docs.aws.amazon.com/bedrock/latest/userguide/reinforcement-fine-tuning.html" style="cursor: pointer;">documentation</a>.</p>

Read article →

Introducing elastic training on Amazon SageMaker HyperPod

<p>Amazon SageMaker HyperPod now supports elastic training, enabling organizations to accelerate foundation model training by automatically scaling training workloads based on resource availability and workload priorities. This represents a fundamental shift from training with a fixed set of resources, as it saves hours of engineering time spent reconfiguring training jobs based on compute availability.</p> <p>Any change in compute availability previously required manually halting training, reconfiguring training parameters, and restarting jobs—a process that requires distributed training expertise and leaves expensive AI accelerators sitting idle during training job reconfiguration. Elastic training automatically expands training jobs to absorb idle AI accelerators and seamlessly contracts them when higher-priority workloads need resources—all without halting training entirely.</p> <p>By eliminating manual reconfiguration overhead and ensuring continuous utilization of available compute, elastic training can help save time previously spent on infrastructure management, reduce costs by maximizing cluster utilization, and accelerate time-to-market. Training can start immediately with minimal resources and grow opportunistically as capacity becomes available.</p> <p>Elastic training is available in all AWS Regions where Amazon SageMaker HyperPod is currently available. Organizations can enable elastic training with zero code changes using HyperPod recipes for publicly available models including Llama and GPT OSS. For custom model architectures, customers can integrate elastic training capabilities through lightweight configuration updates and minimal code modifications, making it accessible to teams without requiring distributed systems expertise.</p> <p>To get started, visit the <a contenteditable="false" href="https://aws.amazon.com/sagemaker/ai/hyperpod/" style="cursor: pointer;">Amazon SageMaker HyperPod</a> product page and see the <a contenteditable="false" href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-elastic-training.html" style="cursor: pointer;">elastic training documentation</a> for implementation guidance.</p>

Read article →

Announcing TypeScript support in Strands Agents (preview) and more

<p>In May, we open sourced the Strands Agents SDK, an open source Python framework that takes a model-driven approach to building and running AI agents in just a few lines of code. Today, we’re announcing that TypeScript support is available in preview. Now, developers can choose between Python and TypeScript for building Strands Agents.<br /> <br /> TypeScript support in Strands has been designed to provide an idiomatic TypeScript experience with full type safety, async/await support, and modern JavaScript/TypeScript patterns. Strands can easily run in client applications, in browsers, and in server-side applications in runtimes like AWS Lambda and Bedrock AgentCore. Developers can also build their entire stack in TypeScript using the AWS CDK.<br /> <br /> We’re also announcing three additional updates for the Strands SDK. First, edge device support for Strands Agents is generally available, extending the SDK with bidirectional streaming and additional local model providers like llama.cpp that let you run agents on small-scale devices using local models. Second, Strands steering is now available as an experimental feature, giving developers a modular prompting mechanism that provides feedback to the agent at the right moment in its lifecycle, steering agents toward a desired outcome without rigid workflows. Finally, Strands evaluations is available in preview. Evaluations gives developers the ability to systematically validate agent behavior, measure improvements, and deploy with confidence during development cycles.<br /> <br /> Head to the Strands Agents <a href="https://github.com/strands-agents" target="_blank">GitHub</a> to get started building.</p>
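For comparison, here is a minimal agent in the existing Strands Python SDK; the TypeScript preview is described as exposing an analogous Agent interface, so treat this as a sketch of the model-driven pattern rather than the TypeScript API itself.

```python
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Uses the SDK's default model provider and whatever AWS credentials are
# configured in the environment.
agent = Agent(tools=[word_count])
agent("How many words are in the sentence 'Strands makes agents simple'?")
```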

Read article →

New serverless model customization capability in Amazon SageMaker AI

<p>Amazon Web Services (AWS) announces a new serverless model customization capability that empowers AI developers to quickly customize popular models with supervised fine-tuning and the latest techniques like reinforcement learning.&nbsp;Amazon SageMaker AI is&nbsp;a fully managed service that brings together a broad set of tools to enable high-performance, low-cost AI model development for any use case.&nbsp;</p> <p>Many AI developers seek to customize models with proprietary data for improved accuracy, but this often requires lengthy iteration cycles. For example, AI developers must define a use case and prepare data, select a model and customization technique, train the model, then evaluate the model for deployment.&nbsp;Now AI developers can&nbsp;simplify the end-to-end model customization workflow, from data preparation to evaluation and deployment, and accelerate the process. With an easy-to-use interface, AI developers can quickly get started and customize popular models, including Amazon Nova, Llama, Qwen, DeepSeek, and GPT-OSS, with their own data. They can use supervised fine-tuning and the latest customization techniques such as reinforcement learning and direct preference optimization. In addition, AI developers can use the AI agent-guided workflow (in preview), and use natural language to&nbsp;generate synthetic data, analyze data quality, and handle model training and evaluation—all entirely serverless.&nbsp;</p> <p>You can use this easy-to-use interface in the following AWS Regions: Europe (Ireland),&nbsp;US East (N. Virginia),&nbsp;Asia Pacific (Tokyo), and US West (Oregon).&nbsp;To join the waitlist to access the AI agent-guided workflow, visit the <a href="https://pages.awscloud.com/AmazonSageMakerAI-preview.html" target="_blank">sign-up page</a>.&nbsp;</p> <p>To learn more, visit the&nbsp;<a href="https://aws.amazon.com/sagemaker/ai/model-customization/" target="_blank">SageMaker AI model customization page</a>&nbsp;and <a href="https://aws.amazon.com/blogs/aws/new-serverless-customization-in-amazon-sagemaker-ai-accelerates-model-fine-tuning/" target="_blank">blog</a>.</p>

Read article →

Amazon SageMaker HyperPod now supports checkpointless training

<p>Amazon SageMaker HyperPod now supports checkpointless training, a new foundation model training capability that mitigates the need for a checkpoint-based job-level restart for fault recovery. Checkpointless training maintains forward training momentum despite failures, reducing recovery time from hours to minutes. This represents a fundamental shift from traditional checkpoint-based recovery, where failures require pausing the entire training cluster, diagnosing issues manually, and restoring from saved checkpoints, a process that can leave expensive AI accelerators idle for hours and waste compute.</p> <p>Checkpointless training transforms this paradigm by preserving the model training state across the distributed cluster, automatically swapping out faulty training nodes on the fly and using peer-to-peer state transfer from healthy accelerators for failure recovery.&nbsp;By mitigating checkpoint dependencies during recovery, checkpointless training can help your organization save on&nbsp;idle AI accelerator costs and recover from failures faster. Even at larger scales, checkpointless training on Amazon SageMaker HyperPod enables upwards of 95% training goodput on cluster sizes with thousands of AI accelerators.</p> <p>Checkpointless training on SageMaker HyperPod is available in all AWS Regions where Amazon SageMaker HyperPod is currently available.&nbsp;You can enable checkpointless training with zero code changes using HyperPod recipes for popular publicly available models such as Llama and GPT OSS. For custom model architectures, you can integrate checkpointless training components with minimal modifications for PyTorch-based workflows,&nbsp;making it accessible to your teams regardless of their distributed training expertise.</p> <p>To get started, visit the <a href="https://aws.amazon.com/sagemaker/ai/hyperpod/" target="_blank">Amazon SageMaker HyperPod</a> product page and see the <a href="https://github.com/aws/sagemaker-hyperpod-checkpointless-training" target="_blank">checkpointless training GitHub page</a> for implementation guidance.</p>

Read article →

Announcing new memory optimized Amazon EC2 X8aedz instances

<p>AWS announces Amazon EC2 X8aedz, next-generation memory optimized instances, powered by 5th Gen AMD EPYC processors (formerly code named Turin). These instances offer the highest maximum CPU frequency in the cloud at 5 GHz. They deliver up to 2x higher compute performance compared to previous generation X2iezn instances.<br /> <br /> X8aedz instances are built using the latest sixth generation <a contenteditable="false" href="https://aws.amazon.com/ec2/nitro/" style="cursor: pointer;">AWS Nitro Cards</a> and are ideal for electronic design automation (EDA) workloads such as physical layout and physical verification jobs, and relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis.<br /> <br /> X8aedz instances feature a 32:1 ratio of memory to vCPU and are available in 8 sizes ranging from 2 to 96 vCPUs with 64 to 3,072 GiB of memory, including two bare metal variants, and up to 8 TB of local NVMe SSD storage.<br /> <br /> X8aedz instances are now available in the US West (Oregon) and Asia Pacific (Tokyo) Regions. Customers can purchase X8aedz instances via Savings Plans, On-Demand instances, and Spot instances. To get started, sign in to the AWS Management Console. For more information visit the <a href="https://aws.amazon.com/ec2/instance-types/x8aedz" target="_blank">Amazon EC2 X8aedz instance page</a> or <a href="https://aws.amazon.com/blogs/aws/introducing-amazon-ec2-x8aedz-instances-powered-by-5th-gen-amd-epyc-processors-for-memory-intensive-workloads/" target="_blank">AWS news blog</a>.</p>

Read article →

Amazon Bedrock AgentCore now includes Policy (preview), Evaluations (preview) and more

<p>Today, Amazon Bedrock AgentCore introduces new offerings, including Policy (preview) and Evaluations (preview), to give teams the controls and quality assurance they need to confidently scale agent deployment across their organization, transforming agents from prototypes to solutions in production.<br /> <br /> Policy in AgentCore integrates with AgentCore Gateway to intercept every tool call in real time, ensuring agents stay within defined boundaries without slowing down. Teams can create policies using natural language that automatically convert to Cedar—the AWS open-source policy language—helping development, compliance, and security teams set up, understand, and audit rules without writing custom code. AgentCore Evaluations helps developers test and continuously monitor agent performance based on real-world behavior to improve quality and catch issues before they cause widespread customer impact. Developers can use 13 built-in evaluators for common quality dimensions, such as helpfulness, tool selection, and accuracy, or create custom model-based scoring systems, drastically reducing the effort required to develop evaluation infrastructure. All quality metrics are accessible through a unified dashboard powered by Amazon CloudWatch.<br /> <br /> We’ve also added new features to AgentCore Memory, AgentCore Runtime, and AgentCore Identity to support more advanced agent capabilities. AgentCore Memory now includes episodic memory, enabling agents to learn and adapt from experiences, building knowledge over time to create more humanlike interactions. AgentCore Runtime supports bidirectional streaming for natural conversations where agents simultaneously listen and respond while handling interruptions and context changes mid-conversation, unlocking powerful voice agent use cases. AgentCore Identity now supports custom claims for enhanced authentication rules across multi-tenant environments while maintaining seamless integration with your chosen identity providers.<br /> <br /> AgentCore Evaluations is available in preview in four AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt). Policy in AgentCore is available in preview in all <a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-regions.html">AWS Regions</a> where AgentCore is available.<br /> <br /> Learn more about new AgentCore updates through the <a href="https://aws.amazon.com/blogs/aws/amazon-bedrock-agentcore-adds-quality-evaluations-and-policy-controls-for-deploying-trusted-ai-agents" target="_blank">blog</a>, deep dive using AgentCore resources, and get started with the AgentCore Starter Toolkit. AgentCore offers consumption-based pricing with no upfront costs.</p>

Read article →

Amazon Bedrock adds 18 fully managed open weight models, the largest expansion of new models to date

<p>Amazon Bedrock is a platform for building generative AI applications and agents at production scale. Amazon Bedrock provides access to a broad selection of fully managed models from leading AI companies through a unified API, enabling you to evaluate, switch, and adopt new models without rewriting applications or changing infrastructure. Today, Amazon Bedrock is adding 18 fully managed open weight models to its model offering, the largest expansion of new models to date.</p> <p>You can now access the following models in Amazon Bedrock:<br /> </p> <p><b>Google:&nbsp;</b>Gemma 3 4B,&nbsp;Gemma 3 12B,&nbsp;Gemma 3 27B</p> <p><b>MiniMax AI:&nbsp;</b>MiniMax M2</p> <p><b>Mistral AI:&nbsp;</b>Mistral Large 3, Ministral 3 3B, Ministral 3 8B, Ministral 3 14B, Magistral Small 1.2, Voxtral Mini 1.0, Voxtral Small 1.0</p> <p><b>Moonshot AI:&nbsp;</b>Kimi K2 Thinking</p> <p><b>NVIDIA:&nbsp;</b>NVIDIA Nemotron Nano 2 9B,&nbsp;NVIDIA Nemotron Nano 2 VL 12B</p> <p><b>OpenAI:&nbsp;</b>gpt-oss-safeguard-20b,&nbsp;gpt-oss-safeguard-120b</p> <p><b>Qwen:&nbsp;</b>Qwen3-Next-80B-A3B,&nbsp;Qwen3-VL-235B-A22B</p> <p>For the full list of available AWS Regions, refer to the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html" target="_blank">documentation</a>.</p> <p>To learn more about all the models that Amazon Bedrock offers, view the <a href="https://aws.amazon.com/bedrock/model-choice/" target="_blank">Amazon Bedrock model choice page</a>. To get started using these models in Amazon Bedrock, read the launch blog and&nbsp;visit the <a href="https://console.aws.amazon.com/bedrock/" target="_blank">Amazon Bedrock console</a>.</p>
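Because these models sit behind the same unified Bedrock API, trying one is mostly a model ID change in a Converse call; the model ID below is a hypothetical placeholder, so look up the exact identifier for your chosen model and Region before running this.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.converse(
    modelId="google.gemma-3-27b-v1:0",  # hypothetical placeholder ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize the CAP theorem in two sentences."}],
    }],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```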

Read article →

Amazon S3 Tables now offer the Intelligent-Tiering storage class

<p>Amazon S3 Tables now offer the Intelligent-Tiering storage class, which optimizes costs based on access patterns, without performance impact or operational overhead. Intelligent-Tiering automatically transitions data in tables across three low-latency access tiers as access patterns change, reducing storage costs by up to 80%. Additionally, S3 Tables automated maintenance operations such as compaction, snapshot expiration, and unreferenced file removal never move your data back to a more expensive access tier. This helps you keep your tables optimized while saving on storage costs.<br /> <br /> With the Intelligent-Tiering storage class, data in tables not accessed for 30 consecutive days automatically transitions to the Infrequent Access tier (40% lower cost than the Frequent Access tier). After 90 days without access, that data transitions to the Archive Instant Access tier (68% lower cost than the Infrequent Access tier). You can now select Intelligent-Tiering as the storage class when you create a table or set it as the default for all new tables in a table bucket.<br /> <br /> The Intelligent-Tiering storage class is available in all <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-regions-quotas.html" style="cursor: pointer;">AWS Regions where S3 Tables are available</a>. For pricing details, visit the <a contenteditable="false" href="https://aws.amazon.com/s3/pricing/" style="cursor: pointer;"><u>Amazon S3 pricing page</u></a>. To learn more about S3 Tables, visit the <a contenteditable="false" href="https://aws.amazon.com/s3/features/tables/" style="cursor: pointer;">product page</a>, <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/tables-intelligent-tiering.html" style="cursor: pointer;">documentation</a>, and read the <a href="https://aws.amazon.com/blogs/aws/announcing-replication-support-and-intelligent-tiering-for-amazon-s3-tables" target="_blank">AWS News Blog</a>.</p>

Read article →

Amazon SageMaker AI announces serverless MLflow capability for faster AI development

<p>Amazon SageMaker AI now offers a serverless MLflow capability that dynamically scales to support AI model development tasks. With MLflow, AI developers can begin tracking, comparing, and evaluating experiments without waiting for infrastructure setup.<br /> <br /> As customers across industries accelerate AI development, they require capabilities to track experiments, observe behavior, and evaluate the performance of AI models, applications and agents. However, managing MLflow infrastructure requires administrators to continuously maintain and scale tracking servers, make complex capacity planning decisions, and deploy separate instances for data isolation. This infrastructure burden diverts resources away from core AI development and creates bottlenecks that impact team productivity and cost effectiveness.<br /> <br /> With this update, MLflow now scales dynamically to deliver fast performance for demanding and unpredictable model development tasks, then scales down during idle time. Administrators can also enhance productivity by setting up cross-account access via Resource Access Manager (RAM) to simplify collaboration across organizational boundaries.<br /> <br /> The serverless MLflow capability on Amazon SageMaker AI is offered at no additional charge and works natively with familiar Amazon SageMaker AI model development capabilities like SageMaker AI JumpStart, SageMaker Model Registry and SageMaker Pipelines. Customers can access the latest version of MLflow on Amazon SageMaker AI with automatic version updates.<br /> <br /> Amazon SageMaker AI with MLflow is now available in select AWS Regions. To learn more, see the <a contenteditable="false" href="https://docs.aws.amazon.com/sagemaker/latest/dg/mlflow.html" style="cursor: pointer;" target="_blank">Amazon SageMaker AI user guide</a> and the <a href="https://aws.amazon.com/blogs/aws/accelerate-ai-development-using-amazon-sagemaker-ai-with-serverless-mlflow" target="_blank">AWS News Blog</a>.</p>
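A minimal tracking sketch, assuming the serverless capability keeps the existing pattern of pointing MLflow at a SageMaker AI tracking server ARN; the ARN and experiment name below are placeholders, and the sagemaker-mlflow plugin is assumed to be installed alongside mlflow.

```python
import mlflow

# Placeholder ARN -- use the tracking server ARN shown for your SageMaker AI setup.
mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/my-tracking-server"
)
mlflow.set_experiment("churn-model-experiments")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("val_accuracy", 0.93)
```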

Read article →

Amazon Bedrock AgentCore Runtime now supports bi-directional streaming

<p>Amazon Bedrock AgentCore Runtime now supports bi-directional streaming, enabling real-time conversations where agents listen and respond simultaneously while handling interruptions and context changes mid-conversation. This feature eliminates conversational friction by enabling continuous, two-way communication where context is preserved throughout the interaction.<br /> <br /> Traditional agents require users to wait for them to finish responding before providing clarification or corrections, creating stop-start interactions that break conversational flow and feel unnatural, especially in voice applications. Bi-directional streaming addresses this limitation by enabling continuous context handling, helping power voice agents that deliver natural conversational experiences where users can interrupt, clarify, or change direction mid-conversation, while also enhancing text-based interactions through improved responsiveness. Built into AgentCore Runtime, this feature eliminates months of engineering effort required to build real-time streaming capabilities, so developers can focus on building innovative agent experiences rather than managing complex streaming infrastructure.<br /> <br /> This feature is available in all nine <a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-regions.html">AWS Regions</a> where Amazon Bedrock AgentCore Runtime is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).<br /> <br /> To learn more about AgentCore Runtime bi-directional streaming, read the blog, visit the AgentCore documentation and get started with the <a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-get-started-toolkit.html">AgentCore Starter Toolkit</a>. With AgentCore Runtime's consumption-based pricing, you only pay for <a href="https://aws.amazon.com/bedrock/agentcore/pricing/">active resources consumed</a> during agent execution, with no charges for idle time or upfront costs.&nbsp;</p>

Read article →

Amazon CloudWatch GenAI observability now supports Amazon AgentCore Evaluations

<p>Amazon CloudWatch now enables automated quality assessment of AI agents through AgentCore Evaluations. This new capability helps developers continuously monitor and improve agent performance based on real-world interactions, allowing teams to identify and address quality issues before they impact customers.<br /> <br /> AgentCore Evaluations comes with 13 pre-built evaluators covering essential quality dimensions like helpfulness, tool selection, and response accuracy, while also supporting custom model-based scoring systems. You can access unified quality metrics and agent telemetry in CloudWatch dashboards, with end-to-end tracing capabilities to correlate evaluation metrics with prompts and logs. The feature integrates seamlessly with CloudWatch's existing capabilities including Application Signals, Alarms, Sensitive Data Protection, and Logs Insights. This capability eliminates the need for teams to build and maintain custom evaluation infrastructure, accelerating the deployment of high-quality AI agents. Developers can monitor their entire agent fleet through the AgentCore section in the CloudWatch GenAI observability console.</p> <p>AgentCore Evaluations is now available in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Sydney). To get started, visit the <a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/evaluations.html">documentation</a> and <a href="https://aws.amazon.com/bedrock/agentcore/pricing/">pricing</a> details. Standard CloudWatch <a href="https://aws.amazon.com/cloudwatch/pricing/">pricing</a> applies for underlying telemetry data.</p>

Read article →

Announcing Amazon EC2 M4 Max Mac instances (Preview)

<p>Amazon Web Services announces the preview of Amazon EC2 M4 Max Mac instances, powered by the latest Mac Studio hardware. Amazon EC2 M4 Max Mac instances are the next-generation EC2 Mac instances that enable Apple developers to migrate their most demanding build and test workloads onto AWS. These instances are ideal for building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari.<br /> <br /> M4 Max Mac instances are powered by the AWS Nitro System, providing up to 10 Gbps network bandwidth and 8 Gbps of Amazon Elastic Block Store (Amazon EBS) storage bandwidth. These instances are built on Apple M4 Max Mac Studio computers featuring a 16-core CPU, 40-core GPU, 16-core Neural Engine, and 128 GB of unified memory. Compared to EC2 M4 Pro Mac instances, M4 Max instances offer twice the GPU cores and more than 2.5x the unified memory, offering customers more choice to match instance capabilities to their specific workload requirements and further expanding the selection of Apple silicon Mac hardware on AWS.</p> <p>To learn more or request access to the Amazon EC2 M4 Max Mac instances preview, visit the <a href="https://aws.amazon.com/ec2/instance-types/mac/">Amazon EC2 Mac page.</a></p>

Read article →

Announcing Amazon EC2 Memory optimized X8i instances (Preview)

<p>Amazon Web Services is announcing the preview of Amazon EC2 X8i, next-generation memory optimized instances. X8i instances are powered by custom Intel Xeon 6 processors delivering the highest performance and fastest memory among comparable Intel processors in the cloud. X8i instances offer 1.5x more memory capacity (up to 6 TB) and up to 3.4x more memory bandwidth compared to previous generation X2i instances.<br /> <br /> X8i instances will be SAP-certified and deliver 46% higher SAPS than X2i instances for mission-critical SAP workloads. X8i instances are a great choice for memory-intensive workloads, including in-memory databases and analytics, large-scale traditional databases, and Electronic Design Automation (EDA). X8i instances offer 35% higher performance than X2i instances with even higher gains for some workloads.<br /> <br /> To learn more or request access to the X8i instances preview, visit the Amazon EC2 X8i page.</p>

Read article →

Announcing the Apache Spark upgrade agent for Amazon EMR

<p>AWS announces the Apache Spark upgrade agent, a new capability that accelerates Apache Spark version upgrades for Amazon EMR on EC2 and EMR Serverless. The agent converts complex upgrade processes that typically take months into projects spanning weeks through automated code analysis and transformation. Organizations invest substantial engineering resources analyzing API changes, resolving conflicts, and validating applications during Spark upgrades. The agent introduces conversational interfaces where engineers express upgrade requirements in natural language, while maintaining full control over code modifications.<br /> <br /> The Apache Spark upgrade agent automatically identifies API changes and behavioral modifications across PySpark and Scala applications. Engineers can initiate upgrades directly from SageMaker Unified Studio, the Kiro CLI, or the IDE of their choice through MCP (Model Context Protocol) compatibility. During the upgrade process, the agent analyzes existing code and suggests specific changes that engineers can review and approve before implementation. The agent validates functional correctness through data quality validations. The agent currently supports upgrades from Spark 2.4 to 3.5 and maintains data processing accuracy throughout the upgrade process.<br /> <br /> The Apache Spark upgrade agent is now available in all AWS Regions where SageMaker Unified Studio is available. To start using the agent, visit SageMaker Unified Studio and select IDE Spaces or install the Kiro CLI. For detailed implementation guidance, reference documentation, and migration examples, visit the <a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/spark-upgrades.html" target="_blank">documentation</a>.</p>

Read article →

Amazon S3 Storage Lens adds performance metrics, support for billions of prefixes, and export to S3 Tables

<p>Amazon S3 Storage Lens provides organization-wide visibility into your storage usage and activity to help optimize costs, improve performance, and strengthen data protection. Today, we are adding three new capabilities to S3 Storage Lens that give you deeper insights into your S3 storage usage and application performance: performance metrics that provide insights into how your applications interact with S3 data, analytics for billions of prefixes in your buckets, and metrics export directly to S3 Tables for easier querying and analysis.<br /> <br /> We are adding three specific types of performance metrics. Access pattern metrics identify inefficient requests, including those that are too small and create unnecessary network overhead. Request origin metrics, such as cross-Region request counts, show when applications access data across regions, impacting latency and costs. Object access count metrics reveal when applications frequently read a small subset of objects that could be optimized through caching or moving to high-performance storage.<br /> <br /> We are expanding the prefix analytics in S3 Storage Lens to enable analyzing billions of prefixes per bucket, whereas previously metrics were limited to the largest prefixes that met minimum size and depth thresholds. This gives you visibility into storage usage and activity across all your prefixes. Finally, we are making it possible to export metrics directly to managed S3 Tables, making them immediately available for querying with AWS analytics services like Amazon QuickSight and enabling you to join this data with other AWS service data for deeper insights.<br /> <br /> To get started, enable performance metrics or expanded prefixes in your S3 Storage Lens advanced metrics dashboard configuration. These capabilities are available in all AWS Regions, except for AWS China Regions and AWS GovCloud (US) Regions. You can enable metrics export to managed S3 Tables in both free and advanced dashboard configurations in AWS Regions where S3 Tables are available. To learn more, visit the <a href="https://aws.amazon.com/s3/storage-lens/" target="_blank">S3 Storage Lens overview page</a>, <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens.html" target="_blank">documentation</a>, <a href="https://aws.amazon.com/s3/pricing/" target="_blank">S3 pricing page</a>, and read the AWS News Blog.</p>

Read article →

Announcing Amazon EC2 General purpose M8azn instances (Preview)

<p>Starting today, new general purpose, high-frequency, high-network Amazon Elastic Compute Cloud (Amazon EC2) M8azn instances are available for preview. These instances are powered by fifth generation AMD EPYC (formerly code named Turin) processors, offering the highest maximum CPU frequency in the cloud at 5 GHz. The M8azn instances offer up to 2x the compute performance of previous generation M5zn instances. These instances also deliver 24% higher performance than M8a instances.<br /> <br /> M8azn instances are built on the AWS Nitro System, a collection of hardware and software innovations designed by AWS. The <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a> enables the delivery of efficient, flexible, and secure cloud services with isolated multitenancy, private networking, and fast local storage. These instances are ideal for applications such as gaming, high-performance computing, high-frequency trading (HFT), CI/CD, and simulation modeling for the automotive, aerospace, energy, and telecommunication industries.<br /> <br /> To learn more or request access to the M8azn instances preview, visit the <a href="https://aws.amazon.com/ec2/instance-types/m8a">Amazon EC2 M8a page</a>.</p>

Read article →

Amazon SageMaker Catalog now exports asset metadata as queryable dataset

<p>Amazon SageMaker Catalog now exports asset metadata as an Apache Iceberg table through Amazon S3 Tables. This allows data teams to query catalog inventory and answer questions such as, "How many assets were registered last month?", "Which assets are classified as confidential?", or "Which assets lack business descriptions?" using standard SQL without building custom ETL infrastructure for reporting.</p> <p>This capability automatically converts catalog asset metadata into a queryable table accessible from Amazon Athena, SageMaker Unified Studio notebooks, AI agents, and other analytics and BI tools. The exported table includes technical metadata (such as resource_id, resource_type), business metadata (such as asset_name, business_description), ownership details, and timestamps. Data is partitioned by snapshot_date for time travel queries and automatically appears in SageMaker Unified Studio under the aws-sagemaker-catalog bucket.</p> <p>This capability is available in all AWS Regions where SageMaker Catalog is supported at no additional charge. You pay only for underlying services including S3 Tables storage and Amazon Athena queries. You can control storage costs by setting retention policies on the exported tables to automatically remove records older than your specified period.<br /> <br /> To get started, activate dataset export using the AWS CLI, then access the asset table through S3 Tables or SageMaker Unified Studio's Data tab within 24 hours. Query using Amazon Athena, Studio notebooks, or connect external BI tools through the S3 Tables Iceberg REST Catalog endpoint. For instructions, see the Amazon SageMaker <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/export-asset-metadata.html">user guide</a>.&nbsp;</p>
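Once the export is active, the exported asset table can be queried like any other Iceberg table. Below is a sketch using Athena; the catalog, database, table, and output-location names are assumptions to adjust to how the exported table is registered in your account, while the column names come from the announcement above.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical catalog/database/table names -- check how the aws-sagemaker-catalog
# table bucket is registered with Athena in your account.
query = """
SELECT resource_type, COUNT(*) AS assets_missing_description
FROM asset_metadata
WHERE business_description IS NULL
GROUP BY resource_type
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Catalog": "s3tablescatalog", "Database": "aws_sagemaker_catalog"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-query-results/"},
)
print(execution["QueryExecutionId"])
```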

Read article →

AWS previews EC2 C8ine instances

<p>AWS launches the preview of Amazon EC2 C8ine instances, powered by custom sixth-generation Intel Xeon Scalable processors (Granite Rapids) and the latest AWS Nitro v6 card. These instances are designed specifically for dataplane packet processing workloads.<br /> <br /> Amazon EC2 C8ine instance configurations can deliver up to 2.5 times higher packet performance per vCPU versus prior generation C6in instances. They can offer up to 2x higher network bandwidth through internet gateways and up to 3x more Elastic Network Interface (ENI) compared to existing C6in network optimized instances. They are ideal for packet processing workloads requiring high performance at small packet sizes. These workloads include security virtual appliances, firewalls, load balancers, DDoS protection systems, and Telco 5G UPF applications.<br /> <br /> These instances are available for preview upon request through your AWS account team. Connect with your account representatives to signup.<br /> </p>

Read article →

Amazon API Gateway adds MCP proxy support

<p>Amazon API Gateway now supports Model Context Protocol (MCP) proxy, allowing you to transform your existing REST APIs into MCP-compatible endpoints. This new capability enables organizations to make their APIs accessible to AI agents and MCP clients. Through integration with Amazon Bedrock AgentCore's Gateway service, you can securely convert your REST APIs into agent-compatible tools while enabling intelligent tool discovery through semantic search.<br /> <br /> The MCP proxy capability, alongside Bedrock AgentCore Gateway services, delivers three key benefits. First, it enables REST APIs to communicate with AI agents and MCP clients through protocol translation, eliminating the need for application modifications or managing additional infrastructure. Second, it provides comprehensive security through dual authentication - verifying agent identities for inbound requests while managing secure connections to REST APIs for outbound calls. Finally, it enables AI agents to search and select the most relevant REST APIs that best match the prompt context.<br /> <br /> To learn about pricing for this feature, please see the <a contenteditable="false" href="https://aws.amazon.com/bedrock/agentcore/pricing/" style="cursor: pointer;">Amazon Bedrock AgentCore pricing page. </a>Amazon API Gateway MCP proxy capability is available in the nine AWS Regions that Amazon Bedrock AgentCore is available in: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Dublin), Europe (Frankfurt), US East (N. Virginia), US East (Ohio), and US West (Oregon). To get started, visit <a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/mcp-server.html" target="_blank">Amazon API Gateway documentation</a>.</p>

Read article →

Amazon S3 Batch Operations introduces performance improvements

<p>Amazon S3 Batch Operations now completes jobs up to 10x faster at a scale of up to 20 billion objects in a job, helping you accelerate large-scale storage operations.<br /> <br /> With S3 Batch Operations, you can perform operations at scale such as copying objects between staging and production buckets, tagging objects for S3 Lifecycle management, or computing object checksums to verify the content of stored datasets. S3 Batch Operations now pre-processes objects, executes jobs, and generates completion reports up to 10x faster for jobs processing millions of objects with no additional configuration or cost. To get started, create a job in the AWS Management Console and specify operation type as well as filters like bucket, prefix, or creation date. S3 automatically generates the object list, creates an AWS Identity and Access Management (IAM) role with permission policies as needed, then initiates the job.<br /> <br /> S3 Batch Operations performance improvements are available in all AWS Regions, except for AWS China Regions and AWS GovCloud (US) Regions. For pricing information, please visit the Management &amp; Insights tab of the <a href="https://aws.amazon.com/s3/pricing/">Amazon S3 pricing page</a>. To learn more about S3 Batch Operations, visit the <a href="https://aws.amazon.com/s3/features/batch-operations/">overview page</a> and <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops.html">documentation</a>.</p>
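As a sketch of that console flow expressed in code, a job with a prefix and creation-date filter can also be created through the S3 Control API; the account ID, bucket names, role ARN, and filter values below are placeholders.

```python
from datetime import datetime, timezone

import boto3

s3control = boto3.client("s3control", region_name="us-east-1")

response = s3control.create_job(
    AccountId="111122223333",  # placeholder account ID
    Priority=10,
    ConfirmationRequired=False,
    RoleArn="arn:aws:iam::111122223333:role/batch-operations-role",
    Operation={"S3PutObjectTagging": {"TagSet": [{"Key": "archive", "Value": "true"}]}},
    Report={
        "Bucket": "arn:aws:s3:::amzn-s3-demo-reports-bucket",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "ReportScope": "AllTasks",
    },
    # S3 generates the object list from the bucket, prefix, and creation-date filter.
    ManifestGenerator={
        "S3JobManifestGenerator": {
            "SourceBucket": "arn:aws:s3:::amzn-s3-demo-source-bucket",
            "EnableManifestOutput": False,
            "Filter": {
                "KeyNameConstraint": {"MatchAnyPrefix": ["logs/2024/"]},
                "CreatedBefore": datetime(2024, 7, 1, tzinfo=timezone.utc),
            },
        }
    },
)
print(response["JobId"])
```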

Read article →

AWS Support transformation: AI-powered operations with the human expertise you trust

<p>AWS Support announces a transformation of its Support portfolio, simplified into three intelligent, experience-driven plans: Business Support+, Enterprise Support, and Unified Operations. Each plan combines the speed and precision of AI with the expertise of AWS engineers. Each higher plan builds on the previous one, adding faster response times, proactive guidance, and smarter operations. The result: reduced engineering burden, stronger reliability and resiliency, and streamlined cloud operations.<br /> <br /> Business Support+ delivers 24/7 AI-powered assistance that understands your context, with direct engagement with AWS experts for critical issues within 30 minutes—twice as fast as current plans. Enterprise Support expands on this with designated Technical Account Managers (TAMs) who blend generative AI insights with human judgment to provide strategic operational guidance across resiliency, cost, and efficiency. It also includes <a href="https://aws.amazon.com/security-incident-response/">AWS Security Incident Response</a> at no additional cost, which customers can activate to automate security alert investigation and triage. Unified Operations, the top plan, is designed for mission-critical workloads—offering a global team of designated experts who deliver architecture reviews, guided testing, proactive optimization, and five-minute context-specific response times for critical incidents. Customers using AWS DevOps Agent (preview) can engage with AWS Support with one click from an investigation when needed, giving AWS experts immediate context for faster resolution. AWS DevOps Agent is a frontier agent that resolves and proactively prevents incidents, continuously improving reliability and performance of applications in AWS, multicloud, and hybrid environments.<br /> <br /> Business Support+, Enterprise Support, and Unified Operations are available in all commercial AWS Regions. Existing customers can continue with their current plans or explore the new offerings for enhanced performance and efficiency. To see how AWS blends AI intelligence and human expertise to transform your cloud operations, visit the <a href="https://aws.amazon.com/premiumsupport">AWS Support product page</a>.</p>

Read article →

Announcing Amazon EC2 Trn3 UltraServers for faster, lower-cost generative AI training

<p>AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn3 UltraServers powered by our fourth-generation AI chip <a href="https://aws.amazon.com/ai/machine-learning/trainium/" target="_blank">Trainium3</a>, our first 3nm AWS AI chip purpose-built to deliver the best token economics for next-generation agentic, reasoning, and video generation applications.<br /> <br /> Each AWS Trainium3 chip provides 2.52 petaflops (PFLOPs) of FP8 compute and increases memory capacity by 1.5x and memory bandwidth by 1.7x over Trainium2, to 144 GB of HBM3e memory and 4.9 TB/s of memory bandwidth. Trainium3 is designed for both dense and expert-parallel workloads with advanced data types (MXFP8 and MXFP4) and improved memory-to-compute balance for real-time, multimodal, and reasoning tasks.<br /> <br /> Trn3 UltraServers can scale up to 144 Trainium3 chips (362 FP8 PFLOPs total) and are available in EC2 UltraClusters 3.0 to scale to hundreds of thousands of chips. A fully configured Trn3 UltraServer delivers up to 20.7 TB of HBM3e and 706 TB/s of aggregate memory bandwidth. The next-generation Trn3 UltraServers feature NeuronSwitch-v1, an all-to-all fabric that doubles interchip interconnect bandwidth over Trn2 UltraServers.<br /> <br /> Trn3 delivers up to 4.4x higher performance, 3.9x higher memory bandwidth, and 4x better performance/watt compared to our Trn2 UltraServers, providing the best price-performance for training and serving frontier-scale models, including reinforcement learning, Mixture-of-Experts (MoE), reasoning, and long-context architectures. On Amazon Bedrock, Trainium3 is our fastest accelerator, delivering up to 3× faster performance than Trainium2 with over 5× higher output tokens per megawatt at similar latency per user.<br /> <br /> New Trn3 UltraServers are built for AI researchers and powered by the <a href="https://aws.amazon.com/ai/machine-learning/neuron/">AWS Neuron SDK</a> to unlock breakthrough performance. With native PyTorch integration, developers can train and deploy without changing a single line of model code. For AI performance engineers, we’ve enabled deeper access to Trainium3 so they can fine-tune performance, customize kernels, and push models even further. Because innovation thrives on openness, we are committed to engaging with our developers through open-source tools and resources.</p>
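As a generic illustration of the Neuron SDK's PyTorch path (not Trainium3-specific code), a training step targets the Neuron device through the torch-neuronx / PyTorch-XLA integration; this sketch assumes torch-neuronx is installed on a Trn instance.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # provided via torch-neuronx on Trn instances

device = xm.xla_device()                 # resolves to a NeuronCore
model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 512).to(device)
labels = torch.randint(0, 10, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
xm.optimizer_step(optimizer)             # steps the optimizer and triggers XLA execution
```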

Read article →

Announcing Amazon Nova 2 Sonic for real-time conversational AI

<p>Today, Amazon announces the availability of Amazon Nova 2 Sonic, our speech-to-speech model for natural, real-time conversational AI that delivers industry-leading quality and price for voice-based applications. It offers best-in-class streaming speech understanding with robustness to background noise and users’ speaking styles, efficient dialog handling, and speech generation with expressive voices that can speak natively in multiple languages (Polyglot voices). It has superior reasoning, instruction following, and tool invocation accuracy over the previous model.</p> <p>Nova 2 Sonic builds on the capabilities introduced in the original Nova Sonic model with new features including expanded language support (Portuguese and Hindi), polyglot voices that enable the model to speak different languages with native expressivity using the same voice, and turn-taking controllability to allow developers to set low, medium, or high pause sensitivity. The model also adds cross-modal interaction, allowing users to seamlessly switch between voice and text in the same session, asynchronous tool calling to support multi-step tasks without interrupting conversation flow, and a one-million token context window for sustained interactions.</p> <p>Developers can integrate Nova 2 Sonic directly into real-time voice systems using Amazon Bedrock’s bidirectional streaming API. Nova 2 Sonic now also seamlessly integrates with Amazon Connect and other leading telephony providers, including Vonage, Twilio, and AudioCodes, as well as open source frameworks such as LiveKit and Pipecat.</p> <p>Amazon Nova 2 Sonic is available in Amazon Bedrock in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Stockholm). To learn more, read the AWS News Blog and the <a href="https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html" style="cursor: pointer;" target="_blank">Amazon Nova Sonic User Guide</a>. To get started with Nova 2 Sonic in Amazon Bedrock, visit the <a href="https://console.aws.amazon.com/bedrock" style="cursor: pointer;" target="_blank">Amazon Bedrock console</a>.</p>

Read article →

Amazon GuardDuty Extended Threat Detection now supports Amazon EC2 and Amazon ECS

<p>AWS announces further enhancements to Amazon GuardDuty Extended Threat Detection with new capabilities to detect multistage attacks targeting Amazon Elastic Compute Cloud (Amazon EC2) instances and Amazon Elastic Container Service (Amazon ECS) clusters running on AWS Fargate or Amazon EC2. GuardDuty Extended Threat Detection uses artificial intelligence and machine learning algorithms trained at AWS scale to automatically correlate security signals and detect critical threats. It analyzes multiple security signals across network activity, process runtime behavior, malware execution, and AWS API activity over extended periods to detect sophisticated attack patterns that might otherwise go unnoticed.<br /> <br /> With this launch, GuardDuty introduces two new critical-severity findings: AttackSequence:EC2/CompromisedInstanceGroup and AttackSequence:ECS/CompromisedCluster. These findings provide attack sequence information, allowing you to spend less time on initial analysis and more time responding to critical threats, minimizing business impact. For example, GuardDuty can identify suspicious processes followed by persistence attempts, crypto-mining activities, and reverse shell creation, representing these related events as a single, critical-severity finding. Each finding includes a detailed summary, events timeline, mapping to MITRE ATT&amp;CK® tactics and techniques, and remediation recommendations.<br /> <br /> While GuardDuty Extended Threat Detection is automatically enabled for GuardDuty customers at no additional cost, its detection comprehensiveness depends on your enabled GuardDuty protection plans. To improve attack sequence coverage and threat analysis of Amazon EC2 instances, enable Runtime Monitoring for EC2. To enable detection of compromised ECS clusters, enable Runtime Monitoring for Fargate or EC2 depending on your infrastructure type.<br /> <br /> To get started, enable GuardDuty protection plans via the Console or API. New GuardDuty customers can start with a <a href="https://portal.aws.amazon.com/billing/signup?pg=guarddutyprice&amp;cta=herobtn&amp;redirect_url=https%3A%2F%2Faws.amazon.com%2Fregistration-confirmation" style="cursor: pointer;">30-day free trial</a>, and existing customers who haven't used Runtime Monitoring can also try it free for 30 days. For additional information, visit the blog post and <a href="https://aws.amazon.com/guardduty/" target="_blank">Amazon GuardDuty product page</a>.</p>
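If you manage GuardDuty programmatically, Runtime Monitoring can be turned on with an update_detector call along these lines; the feature and configuration names reflect the existing Runtime Monitoring API and should be confirmed against the GuardDuty documentation for your setup.

```python
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

guardduty.update_detector(
    DetectorId=detector_id,
    Features=[
        {
            "Name": "RUNTIME_MONITORING",
            "Status": "ENABLED",
            "AdditionalConfiguration": [
                # Automated agent management for EC2 instances and Fargate ECS tasks.
                {"Name": "EC2_AGENT_MANAGEMENT", "Status": "ENABLED"},
                {"Name": "ECS_FARGATE_AGENT_MANAGEMENT", "Status": "ENABLED"},
            ],
        }
    ],
)
```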

Read article →

Build agents to automate production UI workflows with Amazon Nova Act (GA)

<p>We are excited to announce the general availability of Amazon Nova Act, a new AWS service for developers to build and manage fleets of highly reliable agents for automating production UI workflows. Nova Act is powered by a custom Nova 2 Lite model and provides high reliability with unmatched cost efficiency, fast time-to-value, and ease of implementation at scale.<br /> <br /> Nova Act can reliably complete repetitive UI workflows in the browser, execute APIs or tools (e.g. write to PDF), and escalate to a human supervisor when appropriate. Developers who need to automate repetitive processes across the enterprise can define workflows combining the flexibility of natural language with more deterministic Python code. Technical teams using Nova Act can start prototyping quickly on the online playground at <a contenteditable="false" href="http://nova.amazon.com/act" style="cursor: pointer;">nova.amazon.com/act</a>, refine and debug their scripts using the Nova Act IDE extension, and deploy to AWS in just a few steps.<br /> <br /> Nova Act is available today in the US East (N. Virginia) AWS Region.<br /> <br /> <a href="https://aws.amazon.com/nova/act" target="_blank">Learn more about Nova Act</a>.</p>
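<p>To illustrate mixing natural-language steps with deterministic Python, here is a rough sketch in the style of the Nova Act Python SDK's research-preview examples; the package name, method names, starting URL, and instructions are assumptions, so check the current SDK documentation before relying on them.</p>
<pre><code># Sketch only: assumes the nova-act Python SDK (pip install nova-act)
# and an API key from nova.amazon.com/act; interface details may differ.
from nova_act import NovaAct

with NovaAct(starting_page="https://www.example.com") as agent:
    # Natural-language step executed in the browser by the agent.
    agent.act("search for wireless headphones and open the first result")

    # Deterministic Python around the agent call, e.g. structured extraction.
    result = agent.act("return the product title and price as JSON")
    print(result.response)
</code></pre>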

Read article →

Amazon Nova Forge: Build your own Frontier Models using Nova

<p>We are excited to announce the general availability of Nova Forge, a new service to build your own frontier models using Nova.<br /> <br /> With Nova Forge, you can start your model development on SageMaker AI from early Nova checkpoints across pre-training, mid-training, or post-training phases. You can blend proprietary data with Amazon Nova-curated data to train the model. You can also take advantage of model development features available exclusively on Nova Forge, including the ability to execute Reinforcement Fine Tuning (RFT) with reward functions in your environment and to implement custom safety guardrails using the built-in responsible AI toolkit. Nova Forge allows you to build models that deeply understand your organization’s proprietary knowledge and reflect your expertise, while preserving general capabilities like reasoning and minimizing risks like catastrophic forgetting. In addition, Nova Forge customers get early access to new Nova models, including Nova 2 Pro and Nova 2 Omni.<br /> <br /> Nova Forge is available today in the US East (N. Virginia) AWS Region and will be available in additional Regions in the coming months. Learn more about Nova Forge on the <a href="https://aws.amazon.com/blogs/aws/amazon-nova-premier-our-most-capable-model-for-complex-tasks-and-teacher-for-model-distillation">AWS News Blog</a>, the <a href="https://aws.amazon.com/nova/">Amazon Nova product page</a>, or the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/nova-forge.html">Amazon Nova user guide</a>. You can get started with Nova Forge today from the <a href="https://console.aws.amazon.com/sagemaker/home">Amazon SageMaker AI console</a>.</p>

Read article →

AWS Security Agent (Preview): AI agent for proactive app security

<p>Today, AWS announces the preview of AWS Security Agent, an AI-powered agent that proactively secures your applications throughout the development lifecycle. AWS Security Agent conducts automated security reviews tailored to your organizational requirements and delivers context-aware penetration testing. By continuously validating security from design to deployment, it helps prevent vulnerabilities early in development across all your environments.</p> <p>Security teams define organizational security requirements once in the AWS Security Agent console, such as approved encryption libraries, authentication frameworks, and logging standards. AWS Security Agent then automatically validates these requirements throughout development by evaluating architectural documents and code against your defined standards, providing specific guidance when violations are detected. For deployment validation, security teams define their penetration testing scope and AWS Security Agent develops application context, executes sophisticated attack chains, and discovers and validates vulnerabilities. This delivers consistent security policy enforcement across all teams, scales security reviews to match development velocity, and transforms penetration testing from a periodic bottleneck into an on-demand capability that dramatically reduces risk exposure.</p> <p>AWS Security Agent (Preview) is currently available in the US East (N. Virginia) Region. All of your data remains safe and private. Your queries and data are never used to train models. AWS Security Agent logs API activity to AWS CloudTrail for auditing and compliance.</p> <p>To learn more about AWS Security Agent, visit the product page and read the launch announcement. For technical details and to get started, see the AWS Security Agent documentation.</p>

Read article →

Introducing Amazon Nova 2 Omni in Preview

<p>We are excited to announce Amazon Nova 2 Omni, an all-in-one model for multimodal reasoning and image generation. It is the industry’s first reasoning model that supports text, images, video, and speech inputs while generating both text and image outputs. It enables multimodal understanding, image generation and editing using natural language, and speech transcription.<br /> <br /> Unlike traditional approaches that often force organizations to stitch together various specialized models, each supporting different input and output types, Nova 2 Omni eliminates the complexity of managing multiple AI models. This helps to accelerate application development while reducing complexity and costs, enabling developers to tackle diverse tasks from marketing content creation and customer support call transcription to video analysis and documentation with visual aids.<br /> <br /> The model supports a 1M-token context window, 200+ languages for text processing, and 10 languages for speech input. It can generate and edit high-quality images using natural language, enabling character consistency, text rendering within images, as well as object and background modification. Nova 2 Omni delivers superior speech understanding with native reasoning to transcribe, translate, and summarize multi-speaker conversations. And with flexible reasoning controls for depth and budget, developers can ensure optimal performance, accuracy, and cost management across different use cases.<br /> <br /> Nova 2 Omni is in preview, with early access available to all Nova Forge customers and other authorized customers. Please reach out to your AWS account team for access. To learn more about Amazon Nova 2 Omni, read the <a href="https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html">user guide</a>.&nbsp;</p>

Read article →

Announcing New Compute-Optimized Amazon EC2 C8a Instances

<p>AWS announces the general availability of new compute-optimized Amazon EC2 C8a instances. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances.<br /> <br /> C8a instances deliver 33% more memory bandwidth compared to C7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 C7a instances, they are up to 57% faster for GroovyJVM workloads, allowing better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes. This range of instance sizes allows customers to precisely match their workload requirements.<br /> <br /> C8a instances are built on the <a href="https://aws.amazon.com/ec2/nitro/" style="cursor: pointer;">AWS Nitro System</a> and are ideal for high performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly-scalable multiplayer gaming, and video encoding.<br /> <br /> C8a instances are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information, visit the Amazon EC2 C8a instance page.</p>
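<p>Launching a C8a instance uses the same run_instances call as any other instance family; in this boto3 sketch, the AMI ID and subnet are placeholders to substitute with values from your account.</p>
<pre><code>import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders: substitute a real AMI ID and subnet from your account.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c8a.4xlarge",   # one of the 12 C8a sizes
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
</code></pre>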

Read article →

AWS Lambda announces durable functions for multi-step applications and AI workflows

<p>AWS Lambda announces durable functions, enabling developers to build reliable multi-step applications and AI workflows within the Lambda developer experience. Durable functions automatically checkpoint progress, suspend execution for up to one year during long-running tasks, and recover from failures, all without requiring you to manage additional infrastructure or write custom state management and error handling code.<br /> <br /> Customers use Lambda for the simplicity of its event-driven programming model and built-in integrations. While traditional Lambda functions excel at handling single, short-lived tasks, developers building complex multi-step applications, such as order processing, user onboarding, and AI-assisted workflows, previously needed to implement custom state management logic or integrate with external orchestration services. Lambda durable functions address this need by extending the Lambda programming model with new operations like "steps" and "waits" that let you checkpoint progress and pause execution without incurring compute charges. The service handles state management, error recovery, and efficient pausing and resuming of long-running tasks, allowing you to focus on your core business logic.<br /> <br /> Lambda durable functions are generally available in US East (Ohio) with support for Python (versions 3.13 and 3.14) and Node.js (versions 22 and 24) runtimes. For the latest Region availability, visit the AWS Capabilities by Region <a href="https://builder.aws.com/build/capabilities">page</a>.<br /> <br /> You can activate durable functions for new Python or Node.js based Lambda functions using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), AWS SDK, and AWS Cloud Development Kit (AWS CDK). For more information on durable functions, visit the <a href="https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html">AWS Lambda Developer Guide</a> and launch blog post. To learn about pricing, visit <a href="https://aws.amazon.com/lambda/pricing/">AWS Lambda pricing</a>.&nbsp;</p>
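<p>To make the "steps" and "waits" idea concrete, here is a hypothetical sketch of a durable multi-step handler; the decorator-free shape, the ctx.step and ctx.wait names, and the local stub are illustrative only and are not taken from the Lambda SDK, so consult the Developer Guide for the actual programming model.</p>
<pre><code># Hypothetical sketch of a durable multi-step order workflow.
# The ctx.step and ctx.wait names are illustrative only; see the
# AWS Lambda Developer Guide for the real durable functions API.

def charge_payment(order):
    # Call a payment provider here.
    return {"charge_id": "ch_123", "order_id": order["id"]}

def ship_order(charge):
    # Call a fulfillment system here.
    return {"shipment_id": "sh_456", "charge_id": charge["charge_id"]}

def handler(event, ctx):
    # Each step is checkpointed, so a retry after a failure resumes here
    # instead of re-running steps that already completed.
    charge = ctx.step("charge-payment", charge_payment, event["order"])

    # Suspend without consuming compute (up to one year) while waiting,
    # e.g. for a human approval or an external callback.
    ctx.wait("await-fulfillment-window", seconds=3600)

    return ctx.step("ship-order", ship_order, charge)

class _LocalCtx:
    """Minimal local stand-in so the sketch runs outside Lambda."""
    def step(self, name, fn, *args):
        return fn(*args)
    def wait(self, name, seconds):
        pass

if __name__ == "__main__":
    print(handler({"order": {"id": "o-1"}}, _LocalCtx()))
</code></pre>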

Read article →

Amazon CloudWatch launches unified management and analytics for operational, security, and compliance data

<p>Amazon CloudWatch now provides new data management and analytics capabilities that allow you to unify operational, security, and compliance data across your AWS environment and third-party sources. DevOps teams, security analysts, and compliance officers can now access all their data in a single place, eliminating the need to maintain multiple separate data stores and complex extract, transform, load (ETL) pipelines. CloudWatch now offers greater flexibility in where and how customers gain insights into this data, both natively in CloudWatch and with any Apache Iceberg-compatible tool.<br /> <br /> With the unified data store enhancements, customers can now easily collect and aggregate logs across AWS accounts and Regions aligned to geographic boundaries, business units, or persona-specific requirements. With AWS Organization-wide enablement for AWS sources such as AWS CloudTrail, Amazon VPC, and AWS WAF, and managed collectors for third-party sources such as CrowdStrike, Okta, and Palo Alto Networks, CloudWatch makes it easy to bring more of your logs together. Customers can use pipelines to transform and enrich their logs to standard formats such as Open Cybersecurity Schema Framework (OCSF) for security analytics, and define facets to accelerate insights on their data. Customers can make their data available in managed Amazon S3 Tables at no additional storage charge, enabling teams to query data in Amazon SageMaker Unified Studio, Amazon Quick Suite, Amazon Athena, Amazon Redshift, or any Apache Iceberg-compatible analytics tool.<br /> <br /> To get started, visit the Ingestion page in the CloudWatch console and add one or more data sources. To learn more about the Amazon CloudWatch unified data store, visit the product page, pricing page, and <a href="https://docs.aws.amazon.com/cloudwatch/" target="_blank">documentation</a>. For Regional availability, visit the <a href="https://builder.aws.com/build/capabilities" target="_blank">AWS Builder Center</a>.</p>
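<p>Because the data lands in Apache Iceberg-compatible S3 Tables, any Iceberg-aware engine can query it; the sketch below uses Amazon Athena through boto3, with the catalog, database, table, and column names as placeholders for whatever your unified data store exposes.</p>
<pre><code>import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholders: point these at the catalog, database, and table that
# your CloudWatch-managed S3 Tables surface.
query = (
    "SELECT time, class_name, activity_name, severity "
    "FROM security_logs.ocsf_findings "
    "WHERE severity = 'Critical' LIMIT 20"
)

run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Catalog": "my_iceberg_catalog", "Database": "security_logs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Poll until the query finishes, then print the rows.
qid = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
</code></pre>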

Read article →

Amazon OpenSearch Service adds GPU-accelerated and auto-optimized vector indexes

<p>You can now build billion-scale vector databases in under an hour on Amazon OpenSearch Service with GPU acceleration, and auto-optimize vector indexes for optimal trade-offs between search quality, speed, and cost.<br /> <br /> Previously, large-scale vector indexes took days to build, and optimizing them required experts to spend weeks on manual tuning. The time, cost, and effort weighed down innovation velocity, and customers often forwent cost and performance optimizations. You can now run serverless auto-optimize jobs to generate optimization recommendations. You simply specify search latency and recall requirements, and these jobs evaluate index configurations (k-NN algorithms, quantization, and engine settings) automatically. Then, you can use vector GPU acceleration to build an optimized index up to 10X faster at a quarter of the indexing cost. Serverless GPUs dynamically activate and accelerate your domain or collection, so you’re only billed when you benefit from speed boosts, all without managing GPU instances yourself.<br /> <br /> These capabilities help you scale AI applications including semantic search, recommendation engines, and agentic systems more efficiently. By simplifying and accelerating the time to build large-scale, optimized vector databases, your team will be empowered to innovate faster.<br /> <br /> Vector <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-auto-optimize.html" target="_blank">GPU acceleration</a> is available for vector collections and OpenSearch 3.1+ domains in the US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Europe (Ireland) Regions. Vector <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-auto-optimize.html" target="_blank">auto-optimize</a> is available for vector collections and OpenSearch 2.17+ domains in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland) Regions. Learn more.</p>
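<p>The configurations that auto-optimize evaluates are the familiar k-NN index settings; for context, here is a plain k-NN vector index created with the opensearch-py client. The endpoint, credentials, index name, and HNSW parameters are placeholders, and the auto-optimize and GPU-acceleration job APIs themselves are not shown here.</p>
<pre><code>from opensearchpy import OpenSearch

# Placeholder endpoint and credentials for an OpenSearch 3.1+ domain.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("user", "password"),
    use_ssl=True,
)

# A standard k-NN vector index; auto-optimize evaluates choices like the
# algorithm, quantization, and engine settings on your behalf.
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 768,
                "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
            }
        }
    },
}
client.indices.create(index="product-embeddings", body=index_body)
</code></pre>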

Read article →

Amazon RDS for Oracle and SQL Server now support up to 256 TiB storage with additional storage volumes

<p><a href="https://aws.amazon.com/rds/oracle/" style="cursor: pointer;">Amazon Relational Database Service (Amazon RDS) for Oracle</a> and <a href="https://aws.amazon.com/rds/sqlserver/" style="cursor: pointer;">SQL Server</a> now support up to 256 TiB storage size, a 4x increase in storage size per database instance. Customers can add up to three additional storage volumes in addition to the primary storage volume, each up to 64 TiB storage, to their database instance. Additional storage volumes can be added, scaled up, or removed from the database instance without application downtime, so customers have the flexibility to add and adjust storage volumes over time based on changing workload requirements.<br /> <br /> With additional storage volumes, customers can continue to scale database storage beyond the maximum storage size available in the primary volume. Also, customers can temporarily add volumes when they have a short-term requirement for additional storage, such as for month-end data processing or importing data from local storage, and remove unused volumes when they are no longer required. Furthermore, customers can optimize cost performance by using a combination of high-performance Provisioned IOPS SSD (io2) volumes and General Purpose (gp3) volumes for their database instance. For example, data that requires consistent IOPS performance can be stored on an io2 volume, and infrequently accessed historical data can be stored on a gp3 volume to optimize storage cost.<br /> <br /> To get started, customers can create additional storage volumes in a new or existing database instance through the AWS Management Console, AWS CLI, or SDKs. For more information, visit the RDS for Oracle User Guide and RDS for SQL Server User Guide. To learn more about how customers can benefit from additional storage volumes, visit the AWS news blog post. Additional storage volumes are available in <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">all commercial AWS Regions and the AWS GovCloud (US) Regions</a>.&nbsp;</p>

Read article →

Announcing Database Savings Plans with up to 35% savings

<p>Today, AWS announces Database Savings Plans, a new flexible pricing model that helps you save up to 35% in exchange for a commitment to a consistent amount of usage (measured in $/hour) over a one-year term with no upfront payment.<br /> <br /> Database Savings Plans automatically apply to eligible serverless and provisioned instance usage regardless of supported engine, instance family, size, deployment option, or AWS Region. For example, with Database Savings Plans, you can change between Aurora db.r7g and db.r8g instances, shift a workload from Europe (Ireland) to US East (Ohio), modernize from Amazon RDS for Oracle to Amazon Aurora PostgreSQL or from RDS to Amazon DynamoDB, and still benefit from the discounted pricing offered by Database Savings Plans.<br /> <br /> Database Savings Plans will be available starting today in all AWS Regions, except China Regions, with support for Amazon Aurora, Amazon RDS, Amazon DynamoDB, Amazon ElastiCache, Amazon DocumentDB (with MongoDB compatibility), Amazon Neptune, Amazon Keyspaces (for Apache Cassandra), Amazon Timestream, and AWS Database Migration Service (DMS).<br /> <br /> You can get started with Database Savings Plans from the AWS Billing and Cost Management Console or by using the AWS CLI. To realize the largest savings, you can make a commitment to Savings Plans by using purchase recommendations provided in the console. For a more customized analysis, you can use the Savings Plans Purchase Analyzer to estimate potential cost savings for custom purchase scenarios. For more information, visit the Database Savings Plans pricing page and the AWS Savings Plans FAQs.</p>

Read article →

Mistral Large 3 and Ministral 3 family now available first on Amazon Bedrock

<p>Mistral Large 3 and the Ministral 3 family of models are now available first on Amazon Bedrock, alongside additional models including Voxtral Mini 1.0, Voxtral Small 1.0, and Magistral Small 1.2. Amazon Bedrock is a platform for building generative AI applications and agents at production scale.</p> <p>Mistral Large 3 is a state-of-the-art, open-weight, general-purpose multimodal model with a granular Mixture-of-Experts architecture featuring 41B active parameters and 675B total parameters, designed for reliability and long-context comprehension. The Ministral 3 family, consisting of 14B, 8B, and 3B models, offers competitive checkpoints across language, vision, and instruct variants, enabling developers to select the right scale for customization and deployment. Amazon Bedrock is the first platform to offer these cutting-edge models, giving customers early access to Mistral AI's latest innovations. Mistral Large 3 excels at production-grade assistants, retrieval-augmented systems, and complex enterprise workflows with support for a 256K context window and powerful agentic capabilities. The Ministral 3 family complements this with flexible deployment options: Ministral 3 14B delivers advanced multimodal capabilities for local deployment, Ministral 3 8B provides best-in-class text and vision capabilities for edge deployment and single-GPU operation, and Ministral 3 3B offers robust capabilities in a compact package for low-resource environments. Together, these models span the full spectrum from frontier intelligence to efficient edge computing.</p> <p>These models are now available in Amazon Bedrock. For the full list of available AWS Regions, refer to the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html" target="_blank">documentation</a>.</p> <p>To get started with these models in Amazon Bedrock, visit the <a href="https://aws.amazon.com/bedrock/mistral/" target="_blank">Amazon Bedrock Mistral AI page</a>.</p>
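<p>Once model access is enabled, these models are invoked through the same Bedrock Converse API as any other Bedrock model; the model ID string in this boto3 sketch is a placeholder, so look up the exact Mistral Large 3 identifier in the Bedrock console or documentation.</p>
<pre><code>import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model ID; confirm the exact Mistral Large 3 identifier
# in the Amazon Bedrock console or model documentation.
response = bedrock.converse(
    modelId="mistral.mistral-large-3-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize our Q3 incident reports in three bullet points."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
</code></pre>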

Read article →

Amazon S3 Vectors is now generally available with 40 times the scale of preview

<p>Amazon S3 Vectors, the first cloud object storage with native support to store and query vectors, is now generally available. S3 Vectors delivers purpose-built, cost-optimized vector storage for AI agents, inference, Retrieval Augmented Generation (RAG), and semantic search at billion-vector scale. S3 Vectors is designed to provide the same elasticity, durability, and availability as Amazon S3 and reduces the total costs to upload, store, and query vectors by up to 90%. With general availability, you can store and query up to two billion vectors per index and elastically scale to 10,000 vector indexes per vector bucket. Infrequent queries continue to return results in under one second, with more frequent queries now resulting in latencies around 100 milliseconds or less. Your application can achieve write throughput of 1,000 vectors per second when streaming single-vector updates into your indexes, retrieve up to 100 search results per query, and store up to 50 metadata keys alongside each vector for fine-grained filtering in your queries.</p> <p>With S3 Vectors you get a new bucket type—a vector bucket—that is optimized for durable, low-cost vector storage. Within vector buckets, you organize your vector data with vector indexes and get a dedicated set of APIs to store, access, and query vectors without provisioning any infrastructure. By default, S3 Vectors encrypts all vector data in a vector bucket with server-side encryption using S3-managed keys (SSE-S3) or optionally, you can use AWS Key Management Service (SSE-KMS) to set a default customer-managed key to encrypt all new vector indexes in the vector bucket. You can now also set a dedicated customer-managed key per vector index, helping you build scalable multi-tenant applications and meet regulatory and governance requirements. You can also tag vector buckets and indexes for attribute-based access control (ABAC) as well as to track and organize costs using AWS Billing and Cost Management.</p> <p>S3 Vectors integrates with Amazon Bedrock Knowledge Bases to reduce the cost of using large vector datasets for RAG. When creating a Knowledge Base in Amazon Bedrock or Amazon SageMaker Unified Studio, you can choose an existing Amazon S3 vector index or create a new one using the Quick Create workflow. With Amazon OpenSearch Service, you can optimize costs for hybrid search workloads by configuring OpenSearch to automatically manage vector storage in S3.</p> <p>S3 Vectors is now generally available in 14 AWS Regions, expanding from 5 Regions in preview. To learn more, visit the <a href="https://aws.amazon.com/s3/features/vectors/" target="_blank">product page</a>, <a href="https://aws.amazon.com/s3/pricing/" target="_blank">S3 pricing page</a>, <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-vectors.html" target="_blank">documentation</a>, and AWS News blog.</p>
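<p>A minimal sketch of writing and querying vectors with boto3 follows; it assumes an existing vector bucket and index, and the client name, operation names, and parameter shapes are based on the preview APIs, so verify them against the current S3 Vectors documentation.</p>
<pre><code>import boto3

# Client and parameter shapes follow the S3 Vectors preview APIs; verify
# against the current documentation before relying on them.
s3vectors = boto3.client("s3vectors", region_name="us-east-1")

# Write a small batch of vectors (embedding values truncated for brevity).
s3vectors.put_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="product-embeddings",
    vectors=[
        {
            "key": "doc-001",
            "data": {"float32": [0.12, 0.48, 0.33, 0.91]},
            "metadata": {"category": "manuals", "language": "en"},
        }
    ],
)

# Query the index with metadata filtering.
result = s3vectors.query_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="product-embeddings",
    queryVector={"float32": [0.10, 0.50, 0.30, 0.90]},
    topK=5,
    filter={"category": "manuals"},
    returnMetadata=True,
)
for match in result["vectors"]:
    print(match["key"], match.get("metadata"))
</code></pre>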

Read article →

Introducing AWS AI Factories

<p>AWS AI Factories are now available, providing rapidly deployable, high-performance AWS&nbsp;AI infrastructure in your own&nbsp;data centers. By combining the latest AWS Trainium accelerators and NVIDIA GPUs, specialized low-latency networking, high-performance storage, and AWS AI services, AI Factories accelerate your AI buildouts by months or years compared to building independently. Leveraging nearly&nbsp;two decades of AWS cloud leadership expertise, AWS&nbsp;AI Factories eliminate the complexity of procurement, setup, and optimization that typically delays AI initiatives.</p> <p>With integrated AWS AI services like Amazon Bedrock and Amazon SageMaker, you gain immediate access to leading foundation models without negotiating separate contracts with individual model providers.&nbsp; AWS AI Factories operate&nbsp;as dedicated environments built exclusively for you or your designated trusted community, ensuring complete separation and operating independence while integrating with the broader set of AWS services. This approach helps governments and enterprises meet digital sovereignty requirements while benefiting from the unparalleled security, reliability, and capabilities of the AWS Cloud. You provide the data center space and power capacity you've already acquired, while AWS deploys and manages the infrastructure.&nbsp;</p> <p>AWS AI Factories deliver advanced AI technologies to enterprises across all industries and government organizations seeking secure, isolated environments with strict data residency requirements. These dedicated environments provide access to the same advanced technologies available in public cloud Regions, allowing you to build AI-powered applications as well as&nbsp;train and deploy large language models using your own proprietary data. Rather than spending years building capacity independently, AWS accelerates deployment timelines so you can focus on innovation instead of infrastructure complexity.&nbsp;</p> <p>Contact your AWS account team to learn more about deploying AWS AI Factories in your data center and accelerating your AI initiatives with AWS proven expertise in building and maintaining dedicated AI infrastructure at scale.</p>

Read article →

Amazon Bedrock adds 18 fully managed open-weight models, the largest expansion of new models to date

<p>Amazon Bedrock is a platform for building generative AI applications and agents at production scale. Amazon Bedrock provides access to a broad selection of fully managed models from leading AI companies through a unified API, enabling you to evaluate, switch, and adopt new models without rewriting applications or changing infrastructure. Today, Amazon Bedrock is adding 18 fully managed open weight models to its model offering, the largest expansion of new models to date.</p> <p>You can now access the following models in Amazon Bedrock:<br /> </p> <p><b>Google:&nbsp;</b>Gemma 3 4B,&nbsp;Gemma 3 12B,&nbsp;Gemma 3 27B</p> <p><b>MiniMax AI:&nbsp;</b>MiniMax M2</p> <p><b>Mistral AI:&nbsp;</b>Mistral Large 3, Ministral 3 3B, Ministral 3 8B, Ministral 3 14B, Magistral Small 1.2, Voxtral Mini 1.0, Voxtral Small 1.0</p> <p><b>Moonshot AI:&nbsp;</b>Kimi K2 Thinking</p> <p><b>NVIDIA:&nbsp;</b>NVIDIA Nemotron Nano 2 9B,&nbsp;NVIDIA Nemotron Nano 2 VL 12B</p> <p><b>OpenAI:&nbsp;</b>gpt-oss-safeguard-20b,&nbsp;gpt-oss-safeguard-120b</p> <p><b>Qwen:&nbsp;</b>Qwen3-Next-80B-A3B,&nbsp;Qwen3-VL-235B-A22B</p> <p>For the full list of available AWS Regions, refer to the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html" target="_blank">documentation</a>.</p> <p>To learn more about all the models that Amazon Bedrock offers, view the <a href="https://aws.amazon.com/bedrock/model-choice/" target="_blank">Amazon Bedrock model choice page</a>. To get started using these models in Amazon Bedrock, read the launch blog and&nbsp;visit the <a href="https://console.aws.amazon.com/bedrock/" target="_blank">Amazon Bedrock console</a>.</p>

Read article →

Introducing AWS DevOps Agent (preview), frontier agent for operational excellence

<p>We're excited to launch AWS DevOps Agent in preview, a frontier agent that resolves and proactively prevents incidents, continuously improving reliability and performance of applications in AWS, multicloud, and hybrid environments. AWS DevOps Agent investigates incidents and identifies operational improvements as an experienced DevOps engineer would: by learning your resources and their relationships, working with your observability tools, runbooks, code repositories, and CI/CD pipelines, and correlating telemetry, code, and deployment data across all of them to understand the relationships between your application resources.</p> <p>AWS DevOps Agent autonomously triages incidents and guides teams to rapid resolution to reduce Mean Time to Resolution (MTTR). AWS DevOps Agent begins investigating the moment an alert comes in, whether at 2 AM or during peak hours, to quickly restore your application to optimal performance. It analyzes patterns across historical incidents to provide actionable recommendations that strengthen key areas including observability, infrastructure optimization, and deployment pipeline enhancement. AWS DevOps Agent helps access the untapped insights in your operational data and tools without changing your workflows.</p> <p>AWS DevOps Agent is available at no additional cost during preview in the US East (N. Virginia) Region. To learn more, read the AWS News Blog and see getting started.</p>

Read article →

Amazon EMR Serverless eliminates local storage provisioning for Apache Spark workloads

<p>Amazon EMR Serverless now offers serverless storage that eliminates local storage provisioning for Apache Spark workloads, reducing data processing costs by up to 20% and preventing job failures from disk capacity constraints. You no longer need to configure local disk type and size for each application. EMR Serverless automatically handles intermediate data operations such as shuffle, with no local storage charges. You pay only for the compute and memory resources your job consumes.<br /> <br /> EMR Serverless offloads intermediate data operations to fully managed, auto-scaling serverless storage that encrypts data in transit and at rest with job-level isolation. Serverless storage decouples storage from compute, allowing Spark to release workers immediately when idle rather than keeping workers active to preserve temporary data. It eliminates job failures from insufficient disk capacity and reduces costs by avoiding idle worker charges. This is particularly valuable for jobs using dynamic resource allocation, such as recommendation engines processing millions of customer interactions, where initial stages process large datasets with high parallelism then narrow as data aggregates.<br /> <br /> This feature is generally available for EMR release 7.12 and later. See <a href="https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/jobs-serverless-storage.html#jobs-serverless-storage-regions">Supported AWS Regions</a> for availability. To get started, see the serverless storage documentation for EMR Serverless.&nbsp;</p>
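<p>In practice this means submitting Spark jobs without any disk sizing in the worker configuration; the application ID, execution role ARN, and script location in this boto3 sketch are placeholders for resources in your account.</p>
<pre><code>import boto3

emr = boto3.client("emr-serverless", region_name="us-east-1")

# Placeholders: an EMR Serverless (release 7.12+) Spark application,
# an execution role, and a PySpark script in S3.
response = emr.start_job_run(
    applicationId="00abcdefghijklmn",
    executionRoleArn="arn:aws:iam::111122223333:role/EMRServerlessJobRole",
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/scripts/aggregate_clicks.py",
            # Note: no local disk settings; per the announcement, shuffle
            # and other intermediate data go to serverless storage.
            "sparkSubmitParameters": "--conf spark.dynamicAllocation.enabled=true",
        }
    },
)
print(response["jobRunId"])
</code></pre>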

Read article →

Amazon S3 increases the maximum object size to 50 TB

<p>Amazon S3 has increased the maximum object size to 50 TB, a 10x increase from the previous 5 TB limit. This simplifies the processing of large objects such as high-resolution videos, seismic data files, AI training datasets, and more. You can store 50 TB objects in all S3 storage classes and use them with all S3 features.<br /> <br /> Optimize upload and download performance for your large objects by using the latest AWS Common Runtime (CRT) and S3 Transfer Manager in the AWS SDK. You can apply S3's storage management capabilities to these objects. For example, use S3 Lifecycle to automatically archive infrequently accessed objects to S3 Glacier storage classes, or use S3 Replication to copy objects across AWS accounts or Regions.<br /> <br /> Amazon S3 supports objects up to 50 TB in all AWS Regions. To learn more about working with large objects, visit the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html">S3 User Guide</a>.&nbsp;</p>
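<p>As a simple baseline (the CRT-based transfer path mentioned above is the recommended route for the very largest objects), this boto3 sketch uses the standard S3 Transfer Manager with 5 GiB parts so that a multi-terabyte object stays within S3's 10,000-part multipart limit; the bucket name and file paths are placeholders.</p>
<pre><code>import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Large multipart chunks keep even a ~50 TB object within the
# 10,000-part multipart upload limit (5 GiB parts cover roughly 50 TB).
config = TransferConfig(
    multipart_threshold=128 * 1024 * 1024,
    multipart_chunksize=5 * 1024 * 1024 * 1024,
    max_concurrency=32,
    use_threads=True,
)

# Placeholders: a local (or mounted) source file and a destination bucket.
s3.upload_file(
    Filename="/data/seismic/survey-2025.segy",
    Bucket="my-large-objects-bucket",
    Key="seismic/survey-2025.segy",
    Config=config,
)
</code></pre>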

Read article →

Amazon S3 Tables now support automatic replication of Apache Iceberg tables

<p>Amazon S3 Tables now support automatic replication of Apache Iceberg tables across AWS Regions and accounts. This new capability replicates your complete table structure, including all snapshots and metadata to reduce query latency and improve data accessibility for global analytics workloads.<br /> <br /> S3 Tables replication automatically creates read-only replica tables in your destination table buckets, backfills them with the latest state of the source table, and continuously monitors for new updates to keep replicas in sync. Replica tables can be configured with independent snapshot retention policies and encryption keys from source tables to meet compliance and data protection requirements. You can query replica tables using Amazon SageMaker Unified Studio or any Iceberg-compatible engine including Amazon Athena, Amazon Redshift, Apache Spark, and DuckDB.<br /> <br /> S3 Tables replication is now available in all <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-regions-quotas.html" style="cursor: pointer;">AWS Regions where S3 Tables are supported</a>. For pricing details, visit the <a href="https://aws.amazon.com/s3/pricing/" style="cursor: pointer;"><u>Amazon S3 pricing page</u></a>. To learn more about S3 Tables, visit the <a href="https://aws.amazon.com/s3/features/tables/" style="cursor: pointer;">product page</a>, <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-replication-tables.html" style="cursor: pointer;">documentation</a>, and read the AWS News Blog.</p>

Read article →

AWS Security Hub is now generally available with near real-time risk analytics

<p>Amazon Web Services (AWS) announces the general availability of AWS Security Hub, a unified cloud security solution that prioritizes your critical security issues and helps you respond at scale, reduce security risks, and improve team productivity. With general availability, Security Hub now includes near real-time risk analytics, advanced trends, unified enablement and management, and streamlined pricing across multiple AWS security services. Security Hub detects critical risks by correlating and enriching security signals from Amazon GuardDuty, Amazon Inspector, and AWS Security Hub CSPM, enabling you to quickly surface and prioritize active risks in your cloud environment.<br /> <br /> Security Hub now delivers near real-time risk analytics and advanced trends, transforming correlated security signals into actionable insights through enhanced visualizations and contextual enrichment. You can enable Security Hub for individual accounts or across your entire AWS Organization with centralized deployment and management. These new capabilities complement existing features, including exposure findings, security-focused resource inventory, attack path visualization, and automated response workflows with ticketing system integration. This centralized management reduces the need for manual correlation across multiple consoles and enables streamlined remediation at scale while helping minimize potential operational disruptions, now with improved <a href="https://us-east-1.console.aws.amazon.com/securityhub/v2/home?region=us-east-1#costEstimator">cost predictability</a> through streamlined pricing that consolidates charges across multiple AWS security services. The service automatically visualizes potential attack paths by showing how adversaries could chain together threats, vulnerabilities, and misconfigurations to compromise critical resources, providing deeper risk context powered by more comprehensive analytics.<br /> <br /> For more information about AWS commercial Regions where Security Hub is available, see the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/?p=ngi&amp;loc=4" style="cursor: pointer;">AWS Region table</a>. The service integrates with existing AWS security services, providing a more comprehensive security posture without additional operational overhead. To learn more about Security Hub and get started, visit the <a href="https://console.aws.amazon.com/securityhub/home?cp=bn&amp;pg=ln" style="cursor: pointer;">AWS Security Hub console</a> or the AWS Security Hub <a href="https://aws.amazon.com/security-hub/" style="cursor: pointer;">product page.</a></p>

Read article →

Announcing Amazon Nova 2 foundation models now available in Amazon Bedrock

<p>Today, AWS announces Amazon Nova 2, our next generation of general models that deliver reasoning capabilities with industry-leading price performance. The new models available today in Amazon Bedrock are:</p> <p>• Amazon Nova 2 Lite, a fast, cost-effective reasoning model for everyday workloads.</p> <p>• Amazon Nova 2 Pro (Preview), our most intelligent model for highly complex, multistep tasks.</p> <p>Amazon Nova 2 Lite and Amazon Nova 2 Pro (Preview) offer significant advancements over our previous generation models. These models support extended thinking with step-by-step reasoning and task decomposition and include three thinking intensity levels (low, medium, and high), giving developers control over the balance of speed, intelligence, and cost. The models also offer built-in tools such as code interpreter and web grounding, support remote MCP tools, and provide a one-million-token context window for richer interactions.</p> <p>Nova 2 Lite can be used for a broad range of your everyday tasks. It offers the best combination of price, performance, and speed. Early customers are using Nova 2 Lite for customer service chatbots, document processing, and business process automation. Amazon Nova 2 Pro (Preview) can be used for highly complex agentic tasks such as multi-document analysis, video reasoning, and software migrations. Nova 2 Pro is in preview with early access available to all Amazon Nova Forge customers. If interested, reach out to your AWS account team regarding access. Nova 2 Lite can be customized using supervised fine-tuning (SFT) on Amazon Bedrock and Amazon SageMaker, and full fine-tuning is available on Amazon SageMaker.</p> <p>Amazon Nova 2 Lite and Nova 2 Pro (Preview) are now available in Amazon Bedrock via global cross-Region inference in multiple locations.</p> <p>Learn more at the AWS News Blog, Amazon Nova models product page, and Amazon Nova user guide.</p>
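<p>A quick sketch of calling Nova 2 Lite through the Bedrock Converse API follows; both the model ID and the field used to set thinking intensity are assumptions here, so confirm them in the Amazon Nova user guide before use.</p>
<pre><code>import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    # Assumed model identifier; confirm the exact ID in the Bedrock console.
    modelId="amazon.nova-2-lite-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Plan a three-step rollout for migrating a nightly cron job to EventBridge."}],
        }
    ],
    inferenceConfig={"maxTokens": 800},
    # Assumed field name for the low/medium/high thinking intensity control;
    # see the Amazon Nova user guide for the actual request shape.
    additionalModelRequestFields={"thinking": {"intensity": "low"}},
)
print(response["output"]["message"]["content"][0]["text"])
</code></pre>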

Read article →

Amazon RDS for SQL Server now supports Developer Edition

<p><a href="https://aws.amazon.com/rds/sqlserver/" target="_blank">Amazon Relational Database Service (Amazon RDS) for SQL Server</a> now offers Microsoft SQL Server 2022 Developer Edition. SQL Server Developer Edition is a free edition of SQL Server that contains all the features of Enterprise Edition and can be used in any non-production environment. This enables customers to build, test, and demonstrate applications using SQL Server while reducing costs and maintaining consistency with their production database configurations.<br /> <br /> Previously, customers that created Amazon RDS for SQL Server instances for development and test environments had to use SQL Server Standard Edition or SQL Server Enterprise Edition, which resulted in additional database licensing costs for non-production usage. Now, customers can lower the cost of their Amazon RDS development and testing instances by using SQL Server Developer Edition. Furthermore, Amazon RDS for SQL Server features such as automated backups, automated software updates, monitoring, and encryption for development and testing purposes will work on Developer Edition.<br /> <br /> The license for Microsoft SQL Server Developer Edition strictly limits its use to development and testing purposes. It cannot be used in a production environment, or for any commercial purposes that directly serve end-users. For more information, refer to the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/sqlserver-dev-edition.html" target="_blank">Amazon RDS for SQL Server User Guide</a>. See <a href="https://aws.amazon.com/rds/sqlserver/pricing/" target="_blank">Amazon RDS for SQL Server Pricing</a> for pricing details and regional availability.&nbsp;</p>

Read article →

Amazon EC2 P6e-GB300 UltraServers accelerated by NVIDIA GB300 NVL72 are now generally available

<p>Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6e-GB300 UltraServers. P6e-GB300 UltraServers, accelerated by NVIDIA GB300 NVL72, provide 1.5x GPU memory and 1.5x FP4 compute (without sparsity) compared to P6e-GB200.&nbsp;</p> <p>Customers can optimize performance for the most powerful models in production with P6e-GB300 for applications that require higher context and implement emerging inference techniques like reasoning and Agentic AI.</p> <p>To get started with P6e-GB300 UltraServers, please contact your AWS sales representative.</p> <p>To learn more about P6e UltraServers and instances, visit <a href="https://aws.amazon.com/ec2/instance-types/p6/" target="_blank">Amazon EC2 P6 instances</a>.</p>

Read article →

Announcing new memory-optimized Amazon EC2 X8aedz Instances

<p>AWS announces Amazon EC2 X8aedz, next-generation memory-optimized instances powered by 5th Gen AMD EPYC processors (formerly code named Turin). These instances offer the highest maximum CPU frequency in the cloud, at 5 GHz. They deliver up to 2x higher compute performance and up to 31% better price-performance compared to previous generation X2iezn instances.<br /> <br /> X8aedz instances are built using the latest sixth generation <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro Cards</a> and are ideal for electronic design automation (EDA) workloads such as physical layout and physical verification jobs, and relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis.<br /> <br /> X8aedz instances feature a 32:1 ratio of memory to vCPU and are available in 8 sizes ranging from 2 to 96 vCPUs with 64 to 3,072 GiB of memory, including two bare metal variants, and up to 8 TB of local NVMe SSD storage.<br /> <br /> X8aedz instances are now available in the US West (Oregon) and Asia Pacific (Tokyo) Regions. Customers can purchase X8aedz instances via Savings Plans, On-Demand instances, and Spot instances. To get started, sign in to the AWS Management Console. For more information, visit the Amazon EC2 X8aedz instance page or the AWS News Blog.</p>

Read article →

Amazon RDS for SQL Server launches optimize CPU with new generation instances for up to 55% lower price

<p>Amazon RDS for SQL Server launches optimize CPU with support for M7i and R7i instance families, which reduce prices by up to 55% compared to equivalent previous generation instances. Optimize CPU adjusts the Simultaneous Multi-Threading (SMT) configuration to reduce commercial software charges. Customers can lower costs by upgrading to M7i and R7i instances from similar 6th generation instances. Furthermore, for memory- or IO-intensive database workloads, customers can get additional cost reductions by fine-tuning the optimize CPU configuration.<br /> <br /> The RDS for SQL Server price for database instance hours consumed is inclusive of Microsoft Windows and Microsoft SQL Server software charges. Optimize CPU disables SMT for instances with 2 or more physical CPU cores. This reduces the number of vCPUs, and the corresponding commercial software charges, by 50% while providing the same number of physical CPU cores and near equivalent performance. The most significant savings are available on 2xlarge and larger instances, and on instances that use Multi-AZ deployment, where RDS optimizes to charge SQL Server software for only a single active node for most usage. For workloads that are memory or IO intensive, customers can fine-tune the number of active physical CPU cores for further savings.<br /> <br /> RDS for SQL Server supports M7i and R7i instances in all AWS Regions. With unbundled instance pricing, database costs are calculated with separate charges for third-party licensing fees per vCPU hour, and third-party licensing fees are not eligible towards your organization’s discounts with AWS. You can view Microsoft Windows and SQL Server charges associated with your usage in AWS Billing and Cost Management and in monthly bills. For more details, visit <a href="https://aws.amazon.com/rds/sqlserver/pricing/" target="_blank">RDS for SQL Server pricing</a>, the Amazon RDS User Guide, and the AWS News Blog.</p>
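<p>Fine-tuning active cores maps to the existing processor features setting on the instance; in this boto3 sketch, the instance identifier, class, and core count are placeholders, and the optimize CPU defaults for M7i and R7i are described in the Amazon RDS User Guide.</p>
<pre><code>import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder instance; moves it to an R7i class and trims active cores.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod-1",
    DBInstanceClass="db.r7i.4xlarge",
    ProcessorFeatures=[
        # Fewer active physical cores for a memory- or IO-bound workload.
        {"Name": "coreCount", "Value": "8"},
        # One thread per core disables SMT, halving the licensed vCPUs.
        {"Name": "threadsPerCore", "Value": "1"},
    ],
    ApplyImmediately=False,  # apply during the next maintenance window
)
</code></pre>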

Read article →

Amazon FSx for NetApp ONTAP now supports Amazon S3 access

<p>You can now attach Amazon S3 Access Points to your Amazon FSx for NetApp ONTAP file systems so that you can access your file data as if it were in S3. With this new capability, your file data in FSx for NetApp ONTAP is effortlessly accessible for use with the broad range of artificial intelligence, machine learning, and analytics services and applications that work with S3, while your file data continues to reside in your FSx for NetApp ONTAP file system.<br /> <br /> Amazon FSx for NetApp ONTAP is the first and only complete, fully managed NetApp ONTAP file system in the cloud, allowing you to migrate on-premises applications that rely on NetApp ONTAP or other NAS appliances to AWS without having to change how you manage your data. An S3 Access Point is an endpoint that helps control and simplify how different applications or users can access data. Now, with S3 Access Points for FSx for NetApp ONTAP, you can discover new insights, innovate faster, and make even better data-driven decisions with the data you migrate to AWS. For example, you can use your data to augment generative AI applications with Amazon Bedrock, train machine learning models with Amazon SageMaker, run analysis using AWS Glue or a wide range of AWS Data and Analytics Competency Partner solutions, and run workflows using S3-based cloud-native applications.<br /> <br /> Get started with this capability by creating and attaching S3 Access Points to new FSx for NetApp ONTAP file systems using the Amazon FSx console, the AWS Command Line Interface (AWS CLI), or the AWS Software Development Kit (AWS SDK). Support for existing FSx for NetApp ONTAP file systems will come in an upcoming weekly maintenance window. This new capability is available in <a href="https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/accessing-data-via-s3-access-points.html#access-points-for-fsx-ontap-supported-regions" target="_blank">select AWS Regions</a>.<br /> <br /> To get started, see the following list of resources:<br /> </p> <ul> <li><a href="https://aws.amazon.com/fsx/netapp-ontap/" target="_blank">Amazon FSx for NetApp ONTAP</a></li> <li><a href="https://aws.amazon.com/s3/features/access-points/" target="_blank">Amazon S3 Access Points</a></li> <li><a href="https://aws.amazon.com/blogs/aws/amazon-fsx-for-openzfs-now-supports-amazon-s3-access-without-any-data-movement/" target="_blank">AWS News Blog</a></li> </ul>
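<p>For reference, once an access point is attached, reading the file data is ordinary S3 access against the access point ARN; the ARN and object key in this boto3 sketch are placeholders, with the key mirroring the file path within the volume.</p>
<pre><code>import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Placeholder access point ARN attached to the FSx for ONTAP volume.
access_point_arn = (
    "arn:aws:s3:us-east-1:111122223333:accesspoint/fsx-ontap-research-data"
)

# The object key mirrors the file path within the volume.
obj = s3.get_object(Bucket=access_point_arn, Key="experiments/run-42/results.csv")
print(obj["Body"].read()[:200])
</code></pre>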

Read article →

AWS Transform adds new agentic AI capabilities for enterprise VMware migrations

<p>AWS Transform adds powerful new agentic AI capabilities to automate VMware migrations to AWS. The migration agent collaborates with migration teams to understand business priorities and intelligently plan and migrate hundreds of applications spanning thousands of servers, significantly reducing manual effort, time, and complexity.<br /> <br /> The agent can now discover your on-premises environment and prioritize applications for migration using the <a contenteditable="false" href="https://aws.amazon.com/blogs/migration-and-modernization/introducing-the-aws-transform-discovery-tool/" style="cursor: pointer;">AWS Transform discovery tool</a>, inventory data from various third-party discovery tools, and unstructured data such as documents, notes, and business rules. It analyzes infrastructure, database, and application details, maps dependencies, and generates migration plans grouped by business and technical priorities such as ownership, department, function, subnet, and operating systems. It generates networks with hub-and-spoke and isolated network configurations, provides flexible IP address management options, deploys to multiple accounts, generates network configurations for your AWS landing zones, and migrates from source environments like NSX, Palo Alto, FortiGate, and Cisco ACI. The agent migrates servers to AWS securely and iteratively in waves and provides clear progress updates throughout the deployment. It also migrates Windows and Linux x86 servers, hypervisors such as VMware, Hyper-V, Nutanix, and KVM, and bare-metal physical environments to multiple target accounts. Throughout your migration, you can ask the agent questions as it guides your decisions, whether that’s repeating or skipping steps, or adjusting plans. To simplify internal approvals, the agent also generates a detailed report with the migration plan and mapping of networks, servers, and applications.<br /> <br /> With AWS Transform, you can accelerate time to value, lower risk, and reduce the complexity of VMware migrations. These new capabilities are available in all <a contenteditable="false" href="https://docs.aws.amazon.com/transform/latest/userguide/regions.html" style="cursor: pointer;">AWS Regions where AWS Transform</a> is offered, with support for migrating servers and networks to&nbsp;<a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-app-vmware-acct-connections.html">16 AWS Regions</a>.<br /> <br /> Learn more on the <a contenteditable="false" href="https://aws.amazon.com/transform/vmware/?refid=48ebaf74-0ade-44c7-b8c2-12a0e7718d21" style="cursor: pointer;">product page</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/transform/latest/userguide/transform-app-vmware.html?refid=48ebaf74-0ade-44c7-b8c2-12a0e7718d21" style="cursor: pointer;">user guide</a>, and get started with <a contenteditable="false" href="https://console.aws.amazon.com/transform/home?refid=48ebaf74-0ade-44c7-b8c2-12a0e7718d21" style="cursor: pointer;">AWS Transform</a>.</p>

Read article →

AWS Transform for mainframe now supports application reimagining

<p>AWS Transform for mainframe delivers new data and activity analysis capabilities to extract comprehensive insights to drive the reimagining of mainframe applications. These insights can be combined with business logic extraction to inform decomposition of legacy applications into logical business domains. Together, these form the basis of a comprehensive specification for coding agents like Kiro to reimagine applications into cloud-native architectures.<br /> <br /> The new capabilities empower organizations to reimagine legacy workloads, providing a comprehensive reverse engineering workflow that includes automated code and data structure analysis, activity analysis, technical documentation generation, business logic extraction, and intelligent code decomposition. Through in-depth data and activity analysis, AWS Transform helps identify application components with high utilization or business value, allowing teams to optimize their modernization efforts and make data-informed architectural decisions.<br /> <br /> In the AI-powered chat interface, users can customize their modernization approach through flexible job plans that allow them to select predefined comprehensive workflows—full modernization, analysis focus, or business logic focus—or create their own combination of capabilities based on specific objectives.<br /> <br /> The reimagine capabilities in AWS Transform for mainframe are available today in US East (N. Virginia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">Regions</a>.<br /> <br /> To learn more about reimagining mainframe applications with AWS Transform for mainframe, read the <a href="https://aws.amazon.com/blogs/aws/aws-transform-for-mainframe-introduces-reimagine-capabilities-and-automated-testing-functionality/">AWS News Blog post</a> or visit the <a contenteditable="false" href="https://aws.amazon.com/transform/mainframe/" style="cursor: pointer;">AWS Transform product page</a>.&nbsp;</p>

Read article →

AWS Transform expands .NET transformation capabilities and enhances developer experience

<p>Today, AWS announces the general availability of expanded .NET transformation capabilities and an enhanced developer experience in AWS Transform. Customers can now modernize .NET Framework and .NET code to .NET 10 or .NET Standard. New transformation capabilities include UI porting of ASP.NET Web Forms to Blazor on ASP.NET Core and porting Entity Framework ORM code. The new developer experience, available with the <a contenteditable="false" href="https://aws.amazon.com/visualstudio/" style="cursor: pointer;">AWS Toolkit for Visual Studio 2026 or 2022</a>, is customizable, interactive, and iterative. It includes an editable transformation plan, estimated transformation time, real-time updates during transformation, the ability to repeat transformations with a revised plan, and next steps markdown for easy handoff to AI code companions. With these enhancements, AWS Transform provides a path to modern .NET for more project types, supports the latest releases of .NET and Visual Studio, and gives developers oversight and control of transformations.</p> <p>Developers can now streamline their .NET modernization through an enhanced IDE experience. The process begins with automated code analysis that produces a customizable transformation plan. Developers can customize the transformation plan, such as fine-tuning package updates. Throughout the transformation, they benefit from transparent progress tracking and detailed activity logs. Upon completion, developers receive a Next Steps document that outlines remaining tasks, including Linux readiness requirements, which they can address through additional AWS Transform iterations or by leveraging AI code companion tools such as Kiro.</p> <p>AWS Transform is available in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>: US East (N. Virginia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London).</p> <p>To get started with AWS Transform, refer to the <a contenteditable="false" href="https://docs.aws.amazon.com/transform/latest/userguide/dotnet.html" style="cursor: pointer;">AWS Transform documentation</a>.</p>

Read article →

AWS Transform for mainframe delivers new testing automation capabilities

<p>AWS Transform for mainframe now offers test planning and automation features to accelerate mainframe modernization projects. New capabilities include automated test plan generation, test data collection scripts, and test case automation scripts, alongside functional test environment tools for continuous delivery and regression testing, helping accelerate and de-risk testing and validation during mainframe modernization projects.<br /> <br /> The new capabilities address key testing challenges across the modernization lifecycle, reducing the time and effort required for mainframe modernization testing, which typically consumes over 50% of project duration. Automated test plan generation helps teams reduce upfront planning efforts and align on critical functional tests needed to mitigate risk and ensure modernization success, while test data collection scripts accelerate the error-prone, complex process of capturing mainframe data. Test automation scripts then enable scalable execution of test cases by automating test environment staging, test case execution, and results validation against expected outcomes.<br /> <br /> By automating complex testing tasks and reducing dependency on scarce mainframe expertise, organizations can now modernize their applications with greater confidence while improving accuracy through consistent, automated processes.<br /> <br /> The new testing capabilities in AWS Transform for mainframe are available today in US East (N. Virginia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">Regions</a>.<br /> <br /> To learn more about automated testing in AWS Transform for mainframe, and how it can help your organization accelerate modernization, read the AWS News Blog, visit the <a href="https://aws.amazon.com/transform/mainframe/" target="_blank">AWS Transform for mainframe product page</a>, or explore the AWS Transform <a contenteditable="false" href="https://docs.aws.amazon.com/transform/latest/userguide/transform-app-mainframe.html" style="cursor: pointer;">User Guide</a>.</p>

Read article →

AWS launches AWS Transform custom to accelerate organization-wide application modernization

<p>AWS Transform custom is now generally available, accelerating organization-specific code and application modernization at scale using agentic AI. AWS Transform is the first agentic AI service to accelerate the transformation of Windows, mainframe, VMware, and more—reducing technical debt and making your tech stack AI-ready. Technical debt accumulates when organizations maintain legacy systems and outdated code, requiring them to allocate 20-30% of their software development resources to repeatable, cross-codebase transformation tasks that must be performed manually. AWS Transform can automate repeatable transformations of version upgrades, runtime migrations, framework transitions, and language translations at scale, reducing execution time by over 80% in many cases while eliminating the need for specialized automation expertise.</p> <p>The custom transformation agent in AWS Transform provides both pre-built and custom solutions. It includes out-of-the-box transformations for common scenarios, such as Python and Node.js runtime upgrades, Lambda function modernization, AWS SDK updates across multiple languages, and Java 8 to 17 upgrades (supporting any build system including Gradle and Maven). For organization-specific needs, teams can define custom transformations using natural language, reference documents, and code samples. Users can trigger autonomous transformations with a simple one-line CLI command, which can be scripted or embedded into any existing pipeline or workflow. Within your organization, the agent continually learns from developer feedback and execution results, improving transformation accuracy and tightly aligning the agent’s performance with your organization’s preferences. This approach enables organizations to systematically address technical debt at scale, with the agent continually improving while developers can focus on innovation and high-impact tasks.</p> <p>AWS Transform custom is now available in the US East (N. Virginia) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region</a>.</p> <p>To learn more, visit the user guide, overview page, and <a href="https://aws.amazon.com/transform/pricing/">pricing page</a>.</p>

Read article →

AWS Transform launches an AI agent for full-stack Windows modernization

<p>AWS Transform is expanding beyond its .NET modernization agent to include a full-stack Windows modernization agent that handles both .NET applications and their associated databases. The new agent automates the transformation of .NET applications and Microsoft SQL Server databases to Amazon Aurora PostgreSQL and deploys them to containers on Amazon ECS or Amazon EC2 Linux. AWS Transform accelerates full-stack Windows modernization by 5x across application and database layers, while reducing operating costs by up to 70%.<br /> <br /> With AWS Transform, customers can accelerate their full-stack modernization journey through automated discovery, transformation, and deployment. The full-stack Windows modernization agent scans Microsoft SQL Server databases in Amazon EC2 or Amazon RDS instances, and it scans .NET application code from source repositories (GitHub, GitLab, Bitbucket, or Azure Repos) to create customized, editable modernization plans. It automatically transforms SQL Server schemas to Aurora PostgreSQL and migrates databases to new or existing Aurora PostgreSQL target clusters. For .NET application transformation, the agent updates database connections in the source code and modifies database access code written in Entity Framework and ADO.NET to be compatible with Aurora PostgreSQL—all in a unified workflow with human supervision. All the transformed code is committed to a new repository branch. Finally, the transformed application along with the databases can be deployed into a new or existing environment to validate the transformed applications and databases. Customers can monitor transformation progress through worklog updates and interactive chat, and they can use the detailed transformation summaries for next-step recommendations and for easy handoff to AI code companions.<br /> <br /> AWS Transform for full-stack Windows modernization is available in the US East (N. Virginia) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region</a>.<br /> <br /> To learn more, visit the <a href="https://aws.amazon.com/transform/windows" target="_blank">overview page</a> and <a href="https://docs.aws.amazon.com/transform/latest/userguide/dotnet.html" target="_blank">AWS Transform documentation</a>.</p>

Read article →

Amazon Connect now provides native testing and simulation capabilities

<p>Amazon Connect now allows you to test and simulate contact center experiences in just a few clicks, making it easy to validate workflows, self-service voice interactions, and their outcomes. For each test, you can configure the test parameters including the caller's phone number or customer profile, the reason for the call (such as "I need to check my order status"), the expected responses (such as "Your request has been processed"), and business conditions like after-hours scenarios or full call queues. After executing tests, results show success or failure based on your defined criteria, along with the path taken by the simulated interaction and detailed logs to quickly diagnose potential issues.<br /> <br /> With this launch, you can run multiple tests simultaneously to validate scenarios and workflows at scale, reducing testing time. Companies can view test results and identify common failure patterns across all their tests in Connect's analytics dashboards. These capabilities enable you to rapidly validate changes to your workflows and confidently deploy new experiences to adapt to your ever-changing business needs.<br /> <br /> To learn more about these features, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/what-is-amazon-connect.html" target="_blank">Amazon Connect Administrator Guide</a>. These features are available in <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">all AWS regions</a> where Amazon Connect is available. To learn more about Amazon Connect, AWS’s AI-native customer experience solution, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

AWS Partner Central is now available in the AWS Management Console

<p>Today, AWS announces the availability of AWS Partner Central in the AWS Management Console, simplifying access for AWS Partners to Partner Central and the AWS Marketplace Management Portal, and introducing APIs that offer integration and process automation capabilities.<br /> <br /> The integration of AWS Partner Central into the AWS Console delivers an enhanced experience and new capabilities for Partners. With an expanded set of APIs, partners can automate co-selling processes, streamline AWS Marketplace activities, and unlock AWS Partner Network benefits more seamlessly. Enhanced security and user management features, built on AWS Identity and Access Management (IAM), allow for granular permissions and single sign-on (SSO), improving operational efficiency and scalability.<br /> <br /> AWS Partner Central in the console is available for AWS Partners today. This new experience is available in all AWS Regions, providing Partners with a consistent and secure way to manage their AWS business across the globe. Existing Partners can begin their migration to the new experience using the migration feature in the existing Partner Central portal, which provides step-by-step guidance for migrating to the AWS Console. To learn more about the new AWS Partner Central experience and how to get started, read the blog.</p>

Read article →

Amazon Connect enhances its agent assistance capabilities

<p>Amazon Connect now provides customer service representatives with new AI agents that guide them through customer interactions by recommending actions, retrieving information, and executing tasks on their behalf. For example, an AI agent can guide a representative through processing a product return by automatically pulling order history, calculating refund amounts, and initiating the return process. These AI agents analyze conversation context and customer sentiment in real-time, actively completing tasks such as preparing documentation and handling routine processes. This enables representatives to focus on building customer relationships and handling complex situations while AI manages the background work, enhancing productivity and ensuring consistent outcomes. You can get started with out-of-the-box agents provided by Amazon Connect or easily customize AI agent behavior and actions to align with your business needs.<br /> <br /> To learn more about Amazon Connect AI agents, please visit the <a href="https://aws.amazon.com/connect/q/" target="_blank">website</a> or see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/amazon-q-connect.html" target="_blank">help documentation</a>. For region availability, please see the availability of <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html" target="_blank">Amazon Connect features by Region</a>. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

Introducing multi-product solutions in AWS Marketplace

<p>AWS Marketplace now supports solution-centric procurement with multi-product solutions — a combination of products and services from one or more AWS Partners tailored to specific industries and customer use cases. Partners can now implement their vertical expertise and solution-selling strategies in AWS Marketplace. This new capability allows customers to discover and purchase complete solutions through a seamless procurement process.<br /> <br /> Partners, from Independent Software Vendors (ISVs) to System Integrators, can now sell comprehensive solutions in AWS Marketplace by combining their own software and services with products they are authorized to resell from other AWS Partners. Each component maintains distinct pricing and terms, giving partners and customers flexibility in how they structure the sale. Partners can position solutions to their target audience by outlining use cases and explaining how components work together. Customers benefit from streamlined procurement with a single point of contact for negotiation, total cost assessment, and one-time approval covering all products. After purchase, customers have the flexibility to independently manage renewals and term lengths for each component, making this approach valuable for organizations addressing complex use cases that require multiple products and services.<br /> <br /> This new capability is available in all <a href="https://docs.aws.amazon.com/marketplace/latest/buyerguide/supported-regions.html" target="_blank">AWS Regions</a> where AWS Marketplace operates, supporting SaaS, Server, AI Agents and Tools, Machine Learning, and Professional Services product types. <br /> <br /> To learn more about solution-centric procurement in AWS Marketplace, review this blog. Partners can start listing multi-product solutions through <a href="https://us-east-1.console.aws.amazon.com/partnercentral/solutions?region=us-east-1" target="_blank"><u>AWS Partner Central</u></a> after reviewing the <a href="https://docs.aws.amazon.com/marketplace/latest/userguide/multi-product-solutions.html" target="_blank">seller documentation</a>. Customers can explore multi-product solutions in <a href="https://aws.amazon.com/marketplace/search/results?NUMBER_OF_PRODUCTS=MULTI_PRODUCT&amp;filters=NUMBER_OF_PRODUCTS" target="_blank"><u>AWS Marketplace</u></a>.</p>

Read article →

AWS Launches Resilience Software Competency to help customers build highly available applications

<p>Today, AWS announced the expansion of its AWS Resilience Competency program to include Technology Partners, helping customers identify and implement software solutions that enhance the availability and resilience of their critical cloud workloads. This new offering addresses the growing demand for "always on, always available" applications and services.<br /> <br /> The AWS Resilience Software Competency validates partner solutions across three essential categories: Design (high availability solutions including proxy and load balancing), Recovery (disaster recovery and data replication), and Operate (continuous resilience through observability and chaos engineering). All participating partners undergo rigorous technical validation by AWS experts to ensure they meet strict performance and operational requirements.<br /> <br /> As Werner Vogels, CTO of Amazon, explains: "Everything fails, all the time. With validated and curated solutions from AWS Resilience Partners, customers can achieve in AWS, with a fraction of the cost, a higher system availability than they could ever experience if still running critical workloads on-premises." This program follows AWS's shared responsibility model, where AWS manages cloud infrastructure resilience while providing customers with trusted tools and partners to ensure workload resilience.<br /> <br /> To get started with the AWS Resilience Software Competency program and browse qualified partners, visit the <a href="https://aws.amazon.com/resilience/partners/" target="_blank">AWS Resilience Competency page</a>. Solutions are available through AWS Marketplace for streamlined procurement.</p>

Read article →

Announcing AWS Lambda Managed Instances, a capability to run functions on your Amazon EC2 instances

<p>AWS Lambda Managed Instances lets you run Lambda functions on your Amazon EC2 instances while maintaining Lambda's operational simplicity. With Lambda Managed Instances, you can access specialized compute configurations and drive cost efficiency through EC2 pricing advantages, without managing infrastructure.<br /> <br /> Lambda Managed Instances fully manages all infrastructure tasks, including instance lifecycle, OS and runtime patching, built-in routing, load balancing, and auto-scaling based on configurable parameters, so you can focus on writing code. This operational simplicity extends to the extensive EC2 instance catalog, giving you access to the latest-generation processors like AWS Graviton4 and high-bandwidth networking options. You can process parallel requests within each execution environment, maximizing resource utilization and improving price-performance.<br /> <br /> Lambda Managed Instances is ideal for customers requiring specialized hardware configurations, as well as those with steady-state or predictable workloads seeking to optimize costs while maintaining Lambda's serverless experience. You can further improve costs by leveraging EC2 pricing models including Compute Savings Plans and Reserved Instances.<br /> <br /> Getting started is straightforward: you can continue building functions with familiar development workflows, including the Lambda console and your preferred IDEs. First, create a capacity provider that defines your compute preferences, including VPC configuration, optional instance requirements and scaling policies. Then, attach your Lambda functions to the capacity provider via the AWS Lambda Console, APIs, or Infrastructure as Code tooling. Lambda Managed Instances integrates seamlessly with all Lambda event sources and tools like Amazon CloudWatch, AWS X-Ray and AWS Config. The latest versions of the Java, Node.js, Python, and .NET runtimes are supported.<br /> <br /> Lambda Managed Instances is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. To learn more, visit the launch blog and AWS Lambda Managed Instances documentation.</p>

Read article →

AWS expands AI Competency with new Agentic AI categories

<p>AWS announces a major expansion of the AI Competency (formerly Generative AI Competency) in the largest Specialization launch to date, including 60 validated partners across three new Agentic AI categories: Agentic AI Tools, Agentic AI Applications, and Agentic AI Consulting Services. These categories help customers identify and work with AWS Partners who specialize in developing and implementing autonomous AI systems that can perceive, reason, and act with minimal human oversight. To streamline the partner validation process, AWS today launched an AI agent in AWS Partner Central that provides partners with immediate feedback on their AI Specialization applications, significantly accelerating the path to competency attainment.<br /> <br /> As organizations move beyond AI experimentation toward production-ready autonomous systems, they need partners with proven expertise in deploying AI agents that can orchestrate complex workflows, maintain contextual awareness, and collaborate across multiple platforms. The new Agentic AI categories validate partners who can deliver sophisticated solutions and offerings using Amazon Bedrock AgentCore, Strands Agents, Amazon SageMaker AI, and other AWS AI services while maintaining strong commitments to responsible AI development, governance, and monitoring.<br /> <br /> AWS Partners in these categories undergo rigorous technical validation and must demonstrate successful customer implementations that meet AWS's high standards for security, reliability, and operational excellence. These validated partners are uniquely positioned to help customers deploy production-grade autonomous AI systems that drive real business value.<br /> <br /> Apply to the AWS AI Competency on <a href="https://partnercentral.awspartner.com/partnercentral2/s/" target="_blank">Partner Central</a>, learn more about the <a href="https://aws.amazon.com/ai/generative-ai/partners/" target="_blank">AWS AI Competency</a> through our APN Blog, and explore validated partners in the new Agentic AI categories.</p>

Read article →

AWS Glue now supports Apache Iceberg based materialized views

<p>AWS Glue now supports materialized views, a new capability that makes it easier for data teams to transform data and accelerate query performance. Materialized views are managed tables in the AWS Glue Data Catalog that store precomputed query results in Apache Iceberg format and automatically keep them up to date as source data changes. This feature is designed to make it easy for data engineers and analytics teams to transform data through multiple stages, from raw data to final analytical tables, while reducing engineering effort and operational overhead.<br /> <br /> Customers can now create materialized views using standard Spark SQL syntax with a data refresh schedule. The service automatically handles the refresh schedule, change detection, incremental updates, and compute infrastructure management. Spark engines across Amazon Athena, Amazon EMR, and AWS Glue intelligently rewrite queries to use these materialized views, accelerating performance by up to 8x while reducing compute costs. You can use SQL query engines like Athena and Redshift to access the materialized views as Iceberg tables from SQL editors and Amazon SageMaker notebooks.<br /> <br /> Materialized views in AWS Glue are available in Europe (Stockholm), Asia Pacific (Thailand), Asia Pacific (Mumbai), Europe (Paris), US East (Ohio), Europe (Ireland), Europe (Frankfurt), South America (Sao Paulo), Asia Pacific (Hong Kong), US East (N. Virginia), Asia Pacific (Seoul), Asia Pacific (Malaysia), Europe (London), Asia Pacific (Tokyo), US West (Oregon), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), and Europe (Spain). To learn more, visit <a href="https://docs.aws.amazon.com/lake-formation/latest/dg/materialized-views.html" target="_blank">Working with Materialized Views</a> in the AWS Glue developer guide.</p>
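<p>As a rough illustration of the Spark SQL workflow described above, the following PySpark sketch creates and then queries a materialized view. The database, table, and column names are hypothetical, and Glue-specific DDL options such as the refresh schedule are omitted; consult the AWS Glue developer guide for the exact clauses.</p>
<pre><code>
# Illustrative sketch only: names are placeholders and the refresh-schedule
# clause is intentionally left out (see the Glue developer guide for syntax).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("glue-materialized-view-example").getOrCreate()

# Create a materialized view in the Glue Data Catalog over a raw orders table.
spark.sql("""
    CREATE MATERIALIZED VIEW analytics_db.daily_order_totals
    AS
    SELECT order_date, SUM(amount) AS total_amount
    FROM sales_db.raw_orders
    GROUP BY order_date
""")

# The view can be read like any Iceberg table; compatible Spark engines can
# also rewrite matching queries against the base table to use it.
spark.sql(
    "SELECT * FROM analytics_db.daily_order_totals ORDER BY order_date"
).show()
</code></pre>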

Read article →

AWS announces a preview of the AWS MCP Server

<p>Today, AWS announces the AWS MCP Server, a managed remote Model Context Protocol (MCP) server that helps AI agents and AI-native IDEs perform real-world, multi-step tasks across one or more AWS services. The AWS MCP Server consolidates capabilities from the existing AWS API MCP and AWS Knowledge servers into a unified interface, providing access to AWS documentation, generating and executing calls to over 15,000 AWS APIs including those for newly released services, and following pre-built workflows called Agent standard operating procedures (SOPs) that guide AI agents through common tasks on AWS.<br /> <br /> With the AWS MCP Server, you can ask AI assistants to perform tasks like hosting static websites on S3, provisioning EC2 instances, troubleshooting Lambda issues, and configuring CloudWatch alarms using Agent SOPs to provide step-by-step guidance. The server handles authentication and authorization through AWS Identity and Access Management (IAM) and provides audit logging through AWS CloudTrail, giving you full control over resources and permissions while enabling AI agents to execute tasks across multiple AWS services, helping you complete real-world tasks faster.<br /> <br /> The AWS MCP Server is available at no additional cost in the US East (N. Virginia) Region. You pay only for AWS resources you create and applicable data transfer costs. To learn more, see the <a href="https://docs.aws.amazon.com/aws-mcp/latest/userguide/understanding-mcp-server-tools.html" target="_blank">AWS MCP Server documentation</a>.</p>

Read article →

Amazon SageMaker Catalog provides automatic data classification using AI agents

<p>Amazon SageMaker Catalog now provides automated data classification that suggests business glossary terms during data publishing, reducing manual tagging effort and improving metadata consistency across organizations.<br /> <br /> This capability analyzes table metadata and schema information using Amazon Bedrock's language models to recommend relevant terms from organizational business glossaries. Data producers receive AI-generated suggestions for business terms defined within their glossaries, which include both functional terms and sensitive data classifications such as PII and PHI, making it easy to tag their datasets with standardized vocabulary. Producers can accept or modify these suggestions before publishing, ensuring consistent terminology across data assets and improving data discoverability for business users.<br /> <br /> Automated data classification is available in the US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo, Seoul, Singapore, Sydney, Mumbai), and Europe (Frankfurt, Ireland, London, Paris) AWS Regions where Amazon SageMaker operates.<br /> <br /> To get started, go to SageMaker Unified Studio and configure your business glossary to generate recommendations for glossary terms. You can also use the AWS CLI or SDKs to programmatically manage glossary term suggestions. For more information, see the SageMaker Catalog <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/autodoc.html" target="_blank">user guide</a>.</p>

Read article →

AWS Marketplace now supports variable payments for professional services

<p>Today, AWS Marketplace announces the general availability of variable payments, a new billing option that allows professional services sellers to bill customers as work is delivered. This capability allows sellers to set contract caps and create payment requests throughout project delivery, rather than requiring upfront payment or fixed installment schedules.</p> <p>Professional services engagements often involve complexity and uncertainty, making it challenging to accurately scope and price deliverables before work begins. Variable payments supports a flexible engagement approach, while providing transparency and control for buyers. AWS Marketplace Sellers can create private offers for professional services and utilize variable payments to bill up to a predetermined contract maximum. This allows sellers to bill in AWS Marketplace based on outcomes, as milestones are completed, or as time and materials are consumed. Throughout the engagement, sellers create payment requests that describe deliverables and specify milestones or time and materials. Customers receive email notifications and can review and approve each request manually, or enable auto-approval for streamlined processing.</p> <p>To learn more, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/marketplace/latest/userguide/proserv-variable-payments.html" style="cursor: pointer;" target="_blank">variable payments documentation</a>, or access the <a contenteditable="false" href="https://aws.amazon.com/marketplace/management/offers/" style="cursor: pointer;">AWS Marketplace Management Portal</a> to create a professional services private offer with variable payment.</p>

Read article →

Amazon Connect now enables business users to create custom UIs to adjust contact center configurations in real time

<p>Amazon Connect now gives business users greater control over daily contact center operations without requiring technical resources. With new capabilities to create custom UIs that adjust queues, routing behavior, and customer experience settings in real time, business users can respond to changing conditions immediately while maintaining enterprise-grade governance and security. For example, during a weather disruption, an airline contact center operations manager can shift agents to rebooking queues, update after-hours routing, and activate a pre-approved protocol that refreshes IVR prompts and triggers customer notifications, all in minutes and without technical team intervention. This reduces wait times, increases agent productivity, and improves the customer experience at moments of peak demand.<br /> <br /> Contact center administrators can start by defining key business configurations such as queue assignments, operating hours, skill mappings, and escalation rules in data tables that directly drive contact flows. Guides can then be configured to surface role-specific actions for each business user within persona-based workspaces. Together, these updates enable a business-led operating model that keeps contact center operations fast, consistent, and secure, all without relying on IT.<br /> <br /> These new capabilities are available in all <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region" target="_blank">AWS regions</a> where Amazon Connect is available. To learn more, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/use-contact-segment-attributes.html" target="_blank">Amazon Connect Administrator Guide</a>. To learn more about Amazon Connect, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

Amazon Connect now provides improved analytics and monitoring for AI agents

<p>Amazon Connect now provides analytics and monitoring capabilities for AI agents across self-service and agent assistance experiences. With this launch, you can measure and continuously improve AI agent performance and customer outcomes through easy-to-customize dashboards that provide key metrics like the number of AI agent-led interactions, hand-off rates, conversation turns, and average handle time. You can also compare AI agent performance across versions to identify optimal configurations and review insights to understand where AI agents are performing well and where improvements are needed. Additionally, with this launch, you can configure rules to trigger automated actions, such as sending alerts when self-service contacts are transferred to human agents with low sentiment scores. Amazon Connect also provides AI agent traces via APIs with detailed information such as request and response payloads and tool invocations, enabling you to easily understand AI agent actions and decision-making for faster troubleshooting.<br /> <br /> These capabilities are available in all AWS Regions where Amazon Connect AI agents are offered. To learn more about AI agent analytics, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/amazon-connect-get-started.html" target="_blank">Amazon Connect Administrator Guide</a>. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

Amazon Connect now provides granular access controls for performance evaluations

<p>Amazon Connect now enables businesses to restrict access to specific performance evaluation forms, preventing unauthorized access to evaluation form templates and completed evaluations. Businesses can provide managers access to modify or use only the evaluation form templates that are relevant to their business line or function, improving security and making it easier for managers to select the right form while completing evaluations. Additionally, both managers and agents can be restricted from viewing certain completed evaluations. For example, you can restrict agents from viewing test evaluations completed using a form template that has not yet been finalized.<br /> <br /> This feature is available in all regions where <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region" target="_blank">Amazon Connect</a> is offered. To learn more, please visit our <a href="https://docs.aws.amazon.com/connect/latest/adminguide/create-evaluation-forms.html" target="_blank">documentation</a> and our <a href="https://aws.amazon.com/connect/contact-lens/" target="_blank">webpage</a>.</p>

Read article →

Amazon Connect launches Model Context Protocol (MCP) support

<p>Amazon Connect now supports Model Context Protocol (MCP), enabling AI agents for end-customer self-service and employee assistance to use standardized tools for retrieving information and completing actions. With this launch, businesses can enhance their AI agents with extensible tool capabilities that improve issue resolution. For example, an AI agent can automatically look up order status, process refunds, and update customer records during a self-service interaction without requiring human intervention.<br /> <br /> With this launch, Amazon Connect provides out-of-the-box MCP tools for common tasks such as updating contact attributes and retrieving case information. You can also use flow modules as MCP tools to reuse the same business logic across both deterministic and generative AI workflows. Additionally, you can integrate custom tools or third-party services through flow modules or the Amazon Bedrock AgentCore Gateway.<br /> <br /> For region availability, please see the availability of <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html" target="_blank">Amazon Connect features by Region</a>. To learn more about Connect’s AI agents please visit the <a href="https://aws.amazon.com/connect/q/" target="_blank">website</a> or see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/amazon-q-connect.html" target="_blank">help documentation</a>. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

Amazon Connect now supports AI agent assistance and summarization for Agentforce Service

<p>Amazon Connect launches real-time AI agent assistance and contact summarization for <a href="https://aws.amazon.com/partners/amazon-connect-and-salesforce/" target="_blank">Salesforce Contact Center with Amazon Connect (SCC-AC)</a>. It enables Connect AI agents to automatically leverage customer information and knowledge base articles from Salesforce CRM for accelerated issue resolution and consistent outcomes across voice and chat interactions.<br /> <br /> When human intervention is required, the seamless integration within SCC-AC connects customers to agents who have a unified view of customer data, issue context, and interaction history within Agentforce Service and Agentforce Sales. Agents receive real-time voice transcripts and contextual recommendations, while supervisors gain enhanced call monitoring capabilities directly in Salesforce. Upon resolution, automated post-contact summarization enables agents to easily update Salesforce cases, streamlining administrative tasks. Administrators can deploy and configure this integrated contact center solution in minutes, leveraging Amazon Connect's voice, digital channels, and intelligent routing capabilities.<br /> <br /> This feature is available in all <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region" target="_blank">AWS Regions</a> where Amazon Connect is available. To learn more and get started, see the <a href="https://amazon-connect.github.io/amazon-connect-sccac-docs/" target="_blank">Salesforce Contact Center with Amazon Connect documentation</a>. To learn more about Amazon Connect, see <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect</a> and <a href="https://aws.amazon.com/partners/amazon-connect-and-salesforce/" target="_blank">our strategic Salesforce partnership</a>.</p>

Read article →

Amazon Connect introduces agentic self-service with more natural, expressive, and adaptive voice interactions

<p>Amazon Connect is introducing agentic self-service capabilities that enable AI agents to understand, reason, and take action across voice and messaging channels to automate routine and complex customer service tasks. Connect enables you to blend deterministic and agentic experiences, allowing you to deploy these AI agents at scale, reliably and safely. With integration with advanced speech models from Amazon Nova Sonic, voice self-service experiences now deliver more natural and adaptive interactions. Connect's self-service voice AI agents understand not only what customers say but how they say it, adapting voice responses to match customer tone and sentiment while maintaining natural conversational pace across multiple languages and accents. For example, when a customer calls about an order issue, your AI agent can greet them by name, ask clarifying questions, look up their order status, and process a refund, with voice interactions that adapt to the customer's tone and respond expressively throughout the conversation. This enables your contact center to automate complex troubleshooting, account management, and consultative interactions while maintaining the ability to escalate to a live representative at any point.<br /> <br /> Nova Sonic support with Amazon Connect is available in two commercial AWS Regions, US East (N. Virginia) and US West (Oregon), and is fully available in English and Spanish, with French, Italian, and German in preview. To learn more about this feature, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/monitor-flow-performance.html" target="_blank">Amazon Connect Administrator Guide</a> and <a href="https://aws.amazon.com/connect/pricing/" target="_blank">Amazon Connect pricing page</a>. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

Amazon Connect adds support for third-party speech-to-text and text-to-speech AI models for end-customer self-service

<p>Amazon Connect now supports third-party speech providers for end-customer self-service, giving you greater flexibility in how you deliver voice experiences. You can integrate Deepgram for speech-to-text and ElevenLabs for text-to-speech directly within Amazon Connect, using them together with Amazon Connect's native speech capabilities, built-in orchestration, analytics, and compliance controls.<br /> <br /> This feature is available with <a href="https://aws.amazon.com/connect/pricing/" target="_blank">Amazon Connect unlimited AI</a> and in all commercial <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region" target="_blank">AWS regions</a> where Amazon Connect is offered. For more information, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide" target="_blank">Amazon Connect Administrator Guide</a>. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

New automated integration for CrowdStrike Falcon Next-Gen SIEM in AWS Marketplace

<p>Today, AWS and CrowdStrike are making it easier to unify cloud-native security monitoring with a new automated integration experience for CrowdStrike Falcon Next-Gen Security Information and Event Management (SIEM), available in AWS Marketplace. CrowdStrike Falcon Next-Gen SIEM unifies threat detection, investigation, and response capabilities by correlating data from AWS services including <a href="https://aws.amazon.com/security-hub/">AWS Security Hub</a>, <a href="https://aws.amazon.com/guardduty/">Amazon GuardDuty</a>, and <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a>. This new streamlined experience accelerates the configuration and integration process, eliminating manual setup across multiple AWS service consoles.<br /> <br /> The guided wizard interface automates AWS service connector setup, provisioning AWS IAM roles with least privilege access, Amazon SQS queues, Amazon EventBridge rules, and Amazon SNS topics. Security teams can immediately begin leveraging agentic AI-assisted investigation capabilities, advanced correlation, and automated response features to detect and stop breaches in real-time across their AWS Organization.<br /> <br /> CrowdStrike now offers pay-as-you-go pricing in AWS Marketplace, allowing customers to quickly subscribe without long-term commitments. To get started, visit the AWS Marketplace listing for CrowdStrike.</p>

Read article →

AWS announces gated preview of AWS Interconnect - last mile

<p>AWS launches AWS Interconnect - last mile, a fully managed connectivity offering that allows customers to connect their branch offices, data centers, and remote locations to AWS with just a few clicks, eliminating the friction of discovering partners and the complexity of network setup. As a milestone collaboration between AWS and Lumen, AWS Interconnect - last mile combines AWS cloud innovation with Lumen’s extensive network footprint to redefine how businesses connect to the cloud.<br /> <br /> Customers can now instantly establish private, high-speed connections to AWS by simply entering their location, selecting bandwidth, and choosing their AWS Region. The launch simplifies the connectivity experience by automating complex network configuration including BGP peering, VLAN configuration, and ASN assignment. Customers can dynamically scale bandwidth from 1 Gbps to 100 Gbps through the AWS console and benefit from zero-downtime maintenance. The service is architected for high availability and backed by SLAs. MACsec encryption is enabled by default for enhanced security between AWS Direct Connect and partner devices.<br /> <br /> AWS Interconnect - last mile is available as a gated preview through Lumen, our launch partner, for customers in the US starting today. Request access <a href="https://aws.amazon.com/interconnect/lastmile" target="_blank">here</a>.</p>

Read article →

AWS Marketplace introduces agent mode and AI-enhanced search to accelerate solution discovery

<p>AWS Marketplace introduces two new AI-powered capabilities, agent mode and enhanced search, to accelerate solution discovery across over 30,000 listings. These capabilities reflect the shift to solution-centric procurement in AWS Marketplace, helping you more quickly discover and evaluate solutions to your business challenges.<br /> <br /> Agent mode, a conversational discovery experience that’s purpose-built for software procurement, helps you reach informed purchasing decisions fast. Describe your use case, upload business requirements documentation, and discover solutions that match your needs. Through interactive dialogue, you can ask questions and explore product insights drawn from AWS data, security and compliance records, verified vendor information, and real-time web intelligence. Agent mode accelerates your evaluation with dynamic side-by-side comparisons personalized to your requirements that can be customized with natural language. Once you’re ready to buy, you can initiate a purchase or create a downloadable detailed purchasing proposal to share with your internal stakeholders for approvals. You can also get the same tailored discovery experience on your preferred AI application through an integration with the AWS Marketplace MCP server.<br /> <br /> AI-enhanced search helps you find the right solutions fast and start evaluating your options on product pages or in agent mode. You can describe your needs and receive relevant solution results with AI-generated summaries to better understand your options and key consideration factors. New smart categories dynamically adapt to your specific search, helping you narrow down results with tailored topics. With the AWS Specializations badge added to search results, you can easily identify technically validated Partners across industries, use cases, and services.<br /> <br /> To start discovering products, visit the <a href="https://aws.amazon.com/marketplace">AWS Marketplace website</a> to use AI-enhanced search and agent mode. To learn more about Marketplace MCP, visit the MCP server documentation.</p>

Read article →

Announcing AWS AI League 2026 Championship

<p>Today, AWS announces the AWS AI League 2026 Championship, expanding its flagship AI tournament with new challenges and doubling the prize pool to $50,000 for builders to compete and innovate. AI League transforms how builders use AWS AI services through gamified competition centered on solving real world business challenges.<br /> <br /> The program provides participants with a quick orientation, then focuses on tournaments with two challenge tracks: the Model Customization challenge using <a href="https://aws.amazon.com/sagemaker-ai/">Amazon SageMaker AI</a> to fine-tune foundation models for specific domains, and the Agentic AI challenge using <a href="https://aws.amazon.com/bedrock/agentcore/">Amazon Bedrock AgentCore</a> to build intelligent agents that can reason, plan, and execute complex tasks. Enterprises can <a href="https://pages.awscloud.com/AWS-AI-League-2025_Application.html">apply</a> to host internal tournaments and receive AWS credits, creating environments where teams collaborate and compete while building AI solutions relevant to their specific business needs. Individual developers can participate at AWS Summits, testing their abilities against peers while working directly with AWS AI services.<br /> <br /> For more information about the AWS AI League and how to participate, please visit the <a href="https://aws.amazon.com/ai/aileague">AWS AI League page</a>.</p>

Read article →

AWS announces IAM Policy Autopilot to help builders generate IAM policies from code

<p>AWS Identity and Access Management (IAM) announces IAM Policy Autopilot, an open source Model Context Protocol (MCP) server and command-line tool that helps your AI coding assistants quickly create baseline IAM policies that you can refine as your application evolves, so you can build faster. IAM Policy Autopilot analyzes your application code locally to create identity-based policies to control access for application roles, reducing the time you spend on writing IAM policies and troubleshooting access issues.<br /> <br /> IAM Policy Autopilot integrates with AI coding assistants like Kiro, Claude Code, and Cursor, and supports Python, TypeScript, and Go applications. It stays up to date with the latest AWS services and features so that builders and coding assistants have access to the latest AWS IAM permissions knowledge.<br /> <br /> IAM Policy Autopilot is available at no additional cost and can be used from your own machine. To start using IAM Policy Autopilot, visit the GitHub repository and follow the setup instructions for the MCP server. You can also learn more about IAM Policy Autopilot by visiting the AWS News Blog.</p>
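<p>The following sketch shows the kind of least-privilege, identity-based policy a tool like IAM Policy Autopilot aims to produce, hand-written here for illustration only: it is not generated output, and the bucket, table, and account identifiers are placeholders for an application that reads one S3 bucket and writes one DynamoDB table.</p>
<pre><code>
# Hand-written illustration of a baseline identity-based policy; the
# resources and account ID below are placeholders, not real output from
# IAM Policy Autopilot.
import json

baseline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadAppAssets",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-assets/*",
        },
        {
            "Sid": "WriteOrders",
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        },
    ],
}

print(json.dumps(baseline_policy, indent=2))
</code></pre>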

Read article →

Introducing Amazon Route 53 Global Resolver for secure anycast DNS resolution (preview)

<p>Today, AWS announced the preview of Amazon Route 53 Global Resolver, a new internet-reachable DNS resolver that provides easy, secure, and reliable DNS resolution from anywhere for queries made by your authorized clients.<br /> <br /> With Global Resolver, authorized clients in your organization can achieve split DNS resolution by resolving public domains on the internet and private domains associated with Route 53 private hosted zones, from anywhere. Global Resolver also allows you to create rules that protect your clients from DNS-based data exfiltration attacks. Using DNS Firewall rules for Global Resolver, you can filter queries for domains based on threat categories (e.g. Malware, Spam), web content (e.g. Adult and Mature Content, Gambling), or advanced DNS threats (DNS tunneling, Domain Generation Algorithms), and log all queries centrally for easy auditing. Global Resolver enables you to achieve high availability of DNS resolution for your clients, by allowing you to select two or more regions for anycast DNS resolution with automatic failover to the closest available region.<br /> <br /> With the launch of Global Resolver, we are renaming Route 53 Resolver to Route 53 VPC Resolver, to help clarify the distinction between the two services. Route 53 VPC Resolver allows you to resolve DNS queries from AWS resources in your Amazon VPCs for public domain names, VPC-specific DNS names, and Amazon Route 53 private hosted zones, and is available by default in each VPC. You can also associate Resolver endpoints with the VPC Resolver to forward DNS queries between your on-premises and Amazon VPCs.<br /> <br /> Visit the service page for Global Resolver pricing and feature details. During the preview, Global Resolver will be available at no additional cost. For more information about AWS Regions where Global Resolver is available during preview, see here. To get started with a step-by-step walkthrough, see the AWS News Blog or documentation.</p>
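<p>To make the split-DNS idea concrete, here is a minimal sketch using the dnspython library against a placeholder resolver endpoint address. How authorized clients actually reach and authenticate to Global Resolver is covered in the service documentation; the snippet only illustrates that public and private names can be resolved through the same endpoint.</p>
<pre><code>
# Conceptual sketch of split-horizon resolution; the endpoint address and the
# private domain name are placeholders.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["198.51.100.10"]  # placeholder resolver endpoint

# A public domain resolves exactly as it would on the internet.
for record in resolver.resolve("example.com", "A"):
    print("public:", record.to_text())

# A name in a Route 53 private hosted zone resolves through the same endpoint.
for record in resolver.resolve("app.corp.example.com", "A"):
    print("private:", record.to_text())
</code></pre>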

Read article →

Amazon CloudWatch incident reports now support Five Whys analysis

<p>Amazon CloudWatch launched incident report generation capabilities with an AI-powered root-cause workflow that guides customers through the "Five Whys" analysis technique. The feature is modeled on the correction of errors process used by both teams within Amazon and our customers to improve their operations.<br /> <br /> The incident report generation capability now supports a guided, chat-based workflow powered by Amazon Q that walks customers through identifying the "Five Whys" behind an incident. Teams can use this process to help identify the underlying root causes. The capability leverages both human input and AI-based analysis of incident data to recommend specific measures operators can take to prevent future occurrences and improve their operations.<br /> <br /> The incident report generation feature is available at no additional cost for CloudWatch customers and is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Spain), and Europe (Stockholm).<br /> <br /> You can create an incident report by first creating a <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Investigations.html" target="_blank">CloudWatch investigation</a> and then clicking “<i>Incident report</i>”. To initiate the Five Whys workflow, scroll down to the “<i>Five Why’s</i>” section of your report and select “<i>Guide Me</i>”. To learn more, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Investigations-Incident-Reports.html" target="_blank">CloudWatch incident reports documentation</a>.</p>
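<p>Independent of the CloudWatch workflow itself, the following sketch illustrates the shape of a Five Whys analysis: each answer becomes the subject of the next "why" until a fifth answer is treated as the root cause. The incident details are invented for illustration.</p>
<pre><code>
# Conceptual illustration of a Five Whys chain; not tied to any CloudWatch API.
from dataclasses import dataclass, field


@dataclass
class FiveWhys:
    symptom: str
    whys: list = field(default_factory=list)
    corrective_action: str = ""

    def ask_why(self, answer: str) -> None:
        if len(self.whys) >= 5:
            raise ValueError("Five Whys analysis already has five answers")
        self.whys.append(answer)

    @property
    def root_cause(self) -> str:
        return self.whys[-1] if len(self.whys) == 5 else "analysis incomplete"


analysis = FiveWhys(symptom="Checkout API returned 5xx errors for 12 minutes")
for answer in [
    "The service exhausted its database connection pool",
    "A deployment doubled the number of worker processes",
    "The connection pool size is hard-coded per process",
    "Pool sizing is not reviewed as part of deployment changes",
    "The deployment runbook has no capacity review step",
]:
    analysis.ask_why(answer)

analysis.corrective_action = "Add a capacity review step to the deployment runbook"
print("Root cause:", analysis.root_cause)
print("Corrective action:", analysis.corrective_action)
</code></pre>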

Read article →

AWS announces preview of AWS Interconnect - multicloud

<p>AWS announces preview of AWS Interconnect - multicloud, providing simple, resilient, high-speed private connections to other cloud service providers (CSPs), starting in preview with Google Cloud as the first launch partner and then with Microsoft Azure later in 2026.<br /> <br /> Customers have been adopting multicloud strategies while migrating more applications to the cloud. They do so for many reasons including interoperability requirements, the freedom to choose technology that best suits their needs, and the ability to build and deploy applications on any environment with greater ease and speed. Previously, when interconnecting workloads across multiple cloud providers, customers had to take a ‘do-it-yourself’ multicloud approach, taking on the complexity of managing global, multi-layered networks at scale. AWS Interconnect - multicloud is the first purpose-built product of its kind and a new way for clouds to connect and communicate with each other. It enables customers to quickly establish private, secure, high-speed network connections with dedicated bandwidth and built-in resiliency between their Amazon VPCs and other cloud environments. Interconnect - multicloud makes it easy to connect AWS networking services such as AWS Transit Gateway, AWS Cloud WAN, and Amazon VPC to other CSPs quickly, rather than over weeks or months.<br /> <br /> Interconnect - multicloud is available in preview in five AWS Regions. You can enable this capability using the AWS Management Console. CSPs can also easily adopt it via a published open API package on GitHub. For more information, see the AWS Interconnect - multicloud documentation pages.</p>

Read article →

Multimodal retrieval for Bedrock Knowledge Bases now generally available

<p>Today, AWS announces the general availability of multimodal retrieval in Bedrock Knowledge Bases. Amazon Bedrock Knowledge Bases offers managed, end-to-end Retrieval Augmented Generation (RAG) workflows to create accurate, low latency, and custom generative AI applications by incorporating contextual information from your company's data sources. Support for multimodal retrieval in Knowledge Bases enables developers to build AI-powered search and question-answering applications that work across text, images, audio, and video files. For example, a user could ask their assistant "show me Q1 projections for Amazon Bedrock" and Bedrock Knowledge Bases will retrieve relevant text from documents, graphs, video snippets, and audio related to revenue projections for Bedrock, allowing the assistant to generate richer and more complete answers for the end user. Previously, customers could only search through text documents and images. Now they can unlock insights from all their enterprise data formats through one unified, fully managed workflow.</p> <p>Organizations struggle to extract insights from their growing multimedia data—videos, audio recordings, images, and documents—because building AI applications that can search across these different modalities is complex. As a result, valuable information trapped in terabytes of meeting recordings, training videos, and visual documentation remains inaccessible, preventing organizations from making data-driven decisions quickly and accurately. With multimodal retrieval for Knowledge Bases, developers can ingest multimodal content with full control of the parsing, chunking, embedding (e.g. Amazon Nova multimodal), and vector storage options. From there, they can then send a text query or an image as input and get relevant text, image, audio, and video segments back in order to generate a response in their generative AI applications using their choice of LLM.</p> <p>For more information about creating multimodal Knowledge Bases in Bedrock, please refer to the documentation. Region availability depends on the features selected for multimodal support; please refer to the documentation for details.</p>
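<p>For developers who want to try retrieval programmatically, here is a minimal boto3 sketch that issues a text query against an existing knowledge base. The knowledge base ID is a placeholder, and the modalities returned (text, image, audio, or video segments) depend on how the knowledge base was configured at creation time.</p>
<pre><code>
# Minimal retrieval sketch; the knowledge base ID is a placeholder and the
# region should match where your knowledge base lives.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve(
    knowledgeBaseId="KB1234567890",  # placeholder ID
    retrievalQuery={"text": "Q1 projections for Amazon Bedrock"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {"numberOfResults": 5}
    },
)

for result in response["retrievalResults"]:
    # Each result includes the retrieved content plus its source location.
    print(result.get("location", {}), result.get("content", {}))
</code></pre>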

Read article →

AWS Clean Rooms supports synthetic dataset generation for custom ML model training

<p>AWS Clean Rooms now enables you and your partners to generate privacy-enhancing synthetic datasets from your collective data to train regression and classification machine learning (ML) models.</p> <p>Synthetic dataset generation allows you and your partners to create training datasets with similar statistical properties to the original data, without the training code having access to real records. This new capability de-identifies subjects—such as people or entities about whom data has been collected—in the original data, mitigating the risk that a model will memorize information about individuals in the training data. This unlocks new ML model training use cases that were previously restricted by privacy concerns, such as campaign optimization, fraud detection, and medical research. For example, an airline with a proprietary algorithm wants to collaborate with a hotel brand to offer joint promotions to high-value customers, but neither organization wants to share sensitive consumer data. Using AWS Clean Rooms ML, they can generate a synthetic version of their collective dataset to train the model without exposing raw data—enabling more accurate promotions targeting while protecting customer privacy.</p> <p>For more information about the AWS Regions where AWS Clean Rooms ML is available, see the <a href="https://docs.aws.amazon.com/general/latest/gr/clean-rooms-ml.html">AWS Regions</a> table. To learn more, visit<a href="https://aws.amazon.com/clean-rooms/ml/"> AWS Clean Rooms ML</a>.</p>
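<p>As a toy illustration of the underlying idea (not the AWS Clean Rooms ML API), the sketch below fits simple statistics to a small numeric dataset and samples synthetic records that preserve those statistics without reproducing any original row. Real synthetic data generation applies stronger privacy-enhancing techniques than this.</p>
<pre><code>
# Toy example only: illustrates "similar statistical properties without real
# records" using NumPy; it is not how AWS Clean Rooms ML generates data.
import numpy as np

rng = np.random.default_rng(seed=7)

# Pretend these are two numeric features from the collective dataset.
original = rng.multivariate_normal(
    mean=[52.0, 310.0], cov=[[25.0, 40.0], [40.0, 900.0]], size=1000
)

# Fit the statistical properties of the original data...
mean = original.mean(axis=0)
cov = np.cov(original, rowvar=False)

# ...and draw brand-new synthetic records from the fitted distribution.
synthetic = rng.multivariate_normal(mean=mean, cov=cov, size=1000)

print("original mean:  ", np.round(mean, 1))
print("synthetic mean: ", np.round(synthetic.mean(axis=0), 1))
</code></pre>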

Read article →

AWS Marketplace introduces express private offers for fast personalized pricing

<p>AWS Marketplace customers can now receive private offers for third-party products in minutes through express private offers. This capability enables customers to access personalized pricing and terms which previously required lengthy sales discussions, thereby accelerating software procurement and reducing customer time-to-value.<br /> <br /> Previously, obtaining personalized pricing required customers to initiate contact with sales teams, engage in discussions to negotiate terms, and navigate multiple review cycles before receiving a customized offer. Now, customers can use express private offers to get an offer within minutes by responding to a few questions. Customers are invited to use the new AI-powered experience on participating products, where they specify their purchase requirements and contract duration. AWS generates a private offer by automatically evaluating these requirements against a seller’s pre-configured pricing rules. For more customized solutions that fall outside these parameters, customers can be connected to sales representatives for additional assistance. This streamlines access to personalized pricing while ensuring customers receive offers tailored to their needs.<br /> <br /> This capability is available today in all <a href="https://docs.aws.amazon.com/marketplace/latest/buyerguide/supported-regions.html" target="_blank">AWS Regions</a> where the AWS Marketplace website is supported.<br /> <br /> To learn more about requesting express private offers, visit the AWS Marketplace Buyer Guide. For sellers interested in enabling this feature on their listing page, visit the AWS Marketplace Seller Guide.</p>

Read article →

Announcing Amazon EKS Capabilities

<p>Amazon Elastic Kubernetes Service (EKS) announces the general availability of EKS Capabilities, a fully managed, extensible set of Kubernetes-native platform features for workload deployment, AWS cloud resource management, and Kubernetes resource composition and orchestration. EKS Capabilities provides out-of-the-box platform features and offloads operations to AWS, improving the performance and security of your platform components.<br /> <br /> EKS Capabilities streamlines building and scaling with Kubernetes, allowing you to focus on deploying applications rather than maintaining platform infrastructure. These capabilities run in AWS-owned infrastructure separate from your clusters, with AWS handling auto scaling, patching, and upgrading. Application developers get ready-to-use platform capabilities that enable faster workload deployment and scaling across the organization, while platform teams can offload operational tasks to AWS. Three capabilities are available at launch: continuous deployment with Argo CD, AWS resource management through AWS Controllers for Kubernetes (ACK), and dynamic resource orchestration using Kube Resource Orchestrator (KRO).<br /> <br /> EKS Capabilities is available today in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a>, except AWS GovCloud (US) and China Regions. To get started with EKS Capabilities, use the EKS API, CLI, eksctl, AWS Console, or your favorite infrastructure as code tooling to enable it in a new or existing EKS cluster. To learn more, visit the EKS Capabilities <a href="https://aws.amazon.com/eks/features/" target="_blank">feature webpage</a>, <a href="https://docs.aws.amazon.com/eks/latest/userguide/capabilities.html" target="_blank">user guide</a>, <a href="https://aws.amazon.com/eks/pricing/" target="_blank">pricing webpage</a>, and AWS News Launch blog.</p>
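<p>To give a feel for the ACK capability mentioned above, the sketch below emits the kind of Kubernetes manifest an ACK controller reconciles into an AWS resource. The API group and version shown for the S3 controller are assumptions based on the open source ACK project; check the ACK and EKS Capabilities documentation for the exact values your cluster serves.</p>
<pre><code>
# Illustrative only: the apiVersion shown is an assumption and the bucket
# names are placeholders.
import yaml  # requires PyYAML

bucket_manifest = {
    "apiVersion": "s3.services.k8s.aws/v1alpha1",
    "kind": "Bucket",
    "metadata": {"name": "team-data-bucket"},
    "spec": {"name": "example-team-data-bucket"},  # the S3 bucket to create
}

# Applying this manifest (e.g. with kubectl) lets the ACK controller create
# and manage the bucket from inside the cluster.
print(yaml.safe_dump(bucket_manifest, sort_keys=False))
</code></pre>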

Read article →

Amazon Connect now supports multiple knowledge bases and integrates with your Amazon Bedrock Knowledge Bases

<p>Amazon Connect now allows you to bring your own Amazon Bedrock Knowledge Bases and supports multiple knowledge bases per AI agent, giving you greater flexibility in how you organize and access knowledge content for your AI agents. You can now connect your existing Bedrock Knowledge Bases directly to Amazon Connect AI agents in just a few clicks, with no additional setup or data duplication required. This allows you to leverage your current data sources and the Amazon Bedrock Knowledge Base connectors, including Adobe Experience Manager, Confluence, SharePoint, and OneDrive, giving you flexibility to use existing content repositories.<br /> <br /> With support for multiple knowledge bases per AI agent, you can configure AI agents to query multiple sources in parallel for more comprehensive responses. For example, a financial services company can easily connect separate knowledge bases for compliance documentation, product information, and internal policies, enabling AI agents to provide complete guidance across all relevant content during customer interactions.<br /> <br /> This feature is available in all AWS Regions where Amazon Connect AI agents and Amazon Bedrock Knowledge Bases <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html" target="_blank">are offered.</a> To learn more about these features, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/amazon-connect-get-started.html" target="_blank">Amazon Connect Administrator Guide</a>. To learn more about Amazon Connect, the AWS cloud-based contact center, and Amazon Connect AI agents please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect Website</a>.</p>

Read article →

Amazon Connect now simplifies linking related contacts to cases using flows

<p>Amazon Connect now makes it easier to link related contacts such as email replies, call transfers, persistent chats, and queued callbacks to the same case so agents can view the complete customer journey and resolve issues faster. You can use flows to link a follow-up contact to an existing case, eliminating the need for custom logic or manual linking.<br /> <br /> Amazon Connect Cases is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases <a href="https://aws.amazon.com/connect/cases/" target="_blank">webpage</a> and <a href="https://docs.aws.amazon.com/connect/latest/adminguide/cases.html" target="_blank">documentation</a>.</p>

Read article →

Amazon Connect introduces new criteria to automatically select relevant contacts for performance evaluation

<p>Amazon Connect now provides managers with new criteria for setting up automated evaluations, making it easier to identify relevant contacts for evaluation and providing additional insights to automatically populate evaluation forms. For example, managers can specify that inbound contacts with no connectivity issues, handled by agents in a specific department, should be automatically evaluated using a particular evaluation form. Additionally, managers can use new metrics criteria on agent call avoidance, contact handling efficiency, and audibility to automatically fill the selected form.<br /> <br /> This feature is available in all regions where <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region" target="_blank">Amazon Connect</a> is offered. To learn more, please visit our <a href="https://docs.aws.amazon.com/connect/latest/adminguide/build-rules-for-contact-lens.html" target="_blank">documentation</a> and our <a href="https://aws.amazon.com/connect/contact-lens/" target="_blank">webpage</a>.</p>

Read article →

Amazon Connect now supports creation of custom metrics for use in dashboards and APIs

<p>Amazon Connect now supports creation of custom metrics, enabling contact center supervisors to analyze tailored performance measurements without requiring technical skills. This feature provides a simple, no-code interface for performing mathematical operations (e.g., addition, subtraction, sum, average) on existing Connect data to build metrics that align with your organization's specific business requirements. Custom metrics are available for use in dashboards and APIs.<br /> <br /> With custom metrics, you can track performance in ways that matter most to your business. For example, create average handle time metrics for premium versus standard customer segments, calculate total agent time on outbound calls by product line, or measure queue performance filtered by contact type such as callbacks versus incoming calls.<br /> <br /> This new feature is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS regions</a> where Amazon Connect is offered. To learn more about Amazon Connect custom metrics, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/amazon-connect-metrics.html">Administrator Guide</a>. To learn more about Amazon Connect, see the <a href="https://aws.amazon.com/connect/features/">Amazon Connect website</a>.</p>

Read article →

Amazon Connect Chat now supports in-flight data redaction and message processing

<p>Amazon Connect now supports message processing that intercepts and processes chat messages before they reach any participant. This new capability enables automatic redaction of sensitive data and custom message processing, helping businesses maintain compliance and security standards while delivering personalized customer experiences.<br /> <br /> The built-in sensitive data redaction can automatically detect and remove sensitive information like credit card numbers and social security numbers across multiple languages, including English, French, Portuguese, German, Italian, and Spanish variants. You can choose to redact selected or all sensitive data entities, with options to replace them with generic or entity-specific placeholders (e.g., [PII] or [NAME]). Businesses can also integrate custom processors for use cases such as language translation or profanity filtering, ensuring compliant and effective communications for their specific business needs.<br /> <br /> These message processing capabilities are now available in the following regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (London), and Africa (Cape Town). To learn more about Amazon Connect, visit the Amazon Connect documentation and pricing pages.</p>

Read article →

Amazon Connect launches AI-powered predictive insights (Preview)

<p>Today, Amazon Connect is launching AI-powered predictive insights that transform how businesses understand and serve their customers. This new feature set builds upon Connect's existing customer profiles, introducing five recommendation algorithms that leverage AI to analyze customer behavior patterns and interaction history. These AI-powered insights are available for both self-service and agent interactions, enabling businesses to transform all customer touchpoints – from suggesting complementary products during service calls to providing smart product discovery through intelligent chat experiences – by leveraging their existing customer data within Connect Customer Profiles. Businesses can also leverage these AI-powered insights to build Connect AI agents specialized for sales.<br /> <br /> The five recommendation algorithms are as follows: “Recommended for You” provides tailored suggestions based on individual user interaction patterns with any catalog; “Similar Items” uses generative AI to suggest alternative products or services; “Frequently Paired Items” powers cross-selling by identifying complementary product or service combinations; “Popular Items” surfaces top-performing product recommendations; and “Trending Now” captures real-time customer interest for timely engagement.<br /> <br /> With Amazon Connect Customer Profiles, you pay as you go, only for the profiles you use. Public preview for AI-powered predictive insights is available in Europe (Frankfurt), US East (N. Virginia), Asia Pacific (Seoul), Asia Pacific (Tokyo), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), and Canada (Central).<br /> <br /> To learn more, visit our webpage for <a href="https://docs.aws.amazon.com/connect/latest/adminguide/segmentation-admin-website.html" target="_blank">Customer Profiles</a>.</p>

Read article →

Amazon Connect agent workspace now supports custom visual themes

<p>Amazon Connect now allows you to customize the visual appearance of the agent workspace. You can apply a custom theme, including a logo, font, and color palette for light and dark modes, so the agent workspace aligns with the brand identity of your company or business unit.<br /> <br /> Contact center agents spend hours each day in the Amazon Connect agent workspace, which provides them with all of the customer information, applications, and step-by-step guidance they need to deliver superior customer experiences. With today’s launch, organizations can change the default Amazon Connect theme to their own branded experience, creating a more familiar and intuitive experience for agents who use the agent workspace and other company applications. The agent workspace also has a new header bar where agents can easily access their settings, including their preference for light or dark mode, contributing to greater agent satisfaction and efficiency.<br /> <br /> The Amazon Connect agent workspace is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (London), and AWS GovCloud (US-West).<br /> <br /> To learn more and get started, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/what-is-amazon-connect.html">administrator guide</a> and <a href="https://docs.aws.amazon.com/agentworkspace/latest/devguide/what-is-service.html">developer guide</a>.</p>

Read article →

Amazon Connect launches automated email responses using conditional keywords and phrases

<p>Amazon Connect now allows you to automate email responses and agent routing logic using keyword and phrase conditions, helping organizations increase self-service, reduce manual handling time, and improve routing accuracy. For example, if a customer sends an email asking if a certain product is in stock, or is checking on their shipment status, an automated response can be sent without involving an agent.<br /> <br /> To enable this feature, add the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/get-stored-content.html">Get stored content</a> block to your flows and use accompanying flow blocks such as <a href="https://docs.aws.amazon.com/connect/latest/adminguide/check-contact-attributes.html">Check contact attributes</a> and <a href="https://docs.aws.amazon.com/connect/latest/adminguide/send-message.html">Send message</a> to configure automated email responses and routing.<br /> <br /> Amazon Connect email is available in the US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">regions</a>. To learn more and get started, please refer to the help <a href="https://docs.aws.amazon.com/connect/latest/adminguide/setup-email-channel.html">documentation</a> or visit the <a href="https://aws.amazon.com/connect/">Amazon Connect</a> website.</p>

Read article →

Amazon Connect now provides AI-powered case summaries

<p>Amazon Connect now provides AI-powered case summaries that give agents complete context into customer issues, reduce manual wrap-up work, and help resolve cases faster. With a single click, agents can generate a concise case summary even when the case spans multiple interactions, follow-up tasks, and teams, capturing key details such as issue background, actions taken, and next steps. Administrators can configure custom prompts and guardrails to ensure that summaries align with organizational style and preferences.<br /> <br /> Amazon Connect Cases is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases <a href="https://aws.amazon.com/connect/cases/">webpage</a> and <a href="https://docs.aws.amazon.com/connect/latest/adminguide/cases.html">documentation</a>.</p>

Read article →

Amazon Connect now streams messages for AI-powered interactions

<p>Amazon Connect now supports message streaming for AI-powered chat interactions. This new capability shows Connect AI agent responses as they're being generated, which reduces perceived wait times and improves the customer experience.<br /> <br /> When using Amazon Connect AI agents, customers see status updates like “One moment while I review your account” during processing, and watch responses appear progressively. This experience gives customers confidence their request is actively being worked on while AI agents reason, invoke tools, and craft comprehensive solutions.<br /> <br /> Message streaming for AI-powered interactions is now available in the following regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (London), and Africa (Cape Town). To learn more, visit the Amazon Connect documentation.</p>

Read article →

Amazon Connect now provides native testing and simulation capabilities (Preview)

<p>Amazon Connect now allows you to test and simulate contact center experiences in just a few clicks, making it easy to validate workflows, self-service voice interactions, and their outcomes. For each test, you can configure the test parameters including the caller's phone number or customer profile, the reason for the call (such as “I need to check my order status”), the expected responses (such as “Your request has been processed”), and business conditions like after-hours scenarios or full call queues. After executing tests, results show success or failure based on your defined criteria, along with the path taken by the simulated interaction and detailed logs to quickly diagnose potential issues.<br /> <br /> With this launch, you can run multiple tests simultaneously to validate scenarios and workflows at scale, reducing testing time. Companies can view test results and identify common failure patterns across all their tests in Connect's analytics dashboards. These capabilities enable you to rapidly validate changes to your workflows and confidently deploy new experiences to adapt to your ever-changing business needs.<br /> <br /> To learn more about these features, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/what-is-amazon-connect.html" target="_blank">Amazon Connect Administrator Guide</a>. These features are available in <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">all AWS regions</a> where Amazon Connect is available. To learn more about Amazon Connect, AWS’s AI-native customer experience solution, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

Amazon Connect Outbound Campaigns now supports multi-step, multi-channel customer engagement journey builder

<p>Amazon Connect Outbound Campaigns now supports a visual journey builder, a new feature that lets you create multi-step, multi-channel customer engagements directly in the Amazon Connect console. You can design end-to-end engagement experiences that combine voice, SMS, email, and WhatsApp interactions to reach customers proactively and reduce inbound contact volume.<br /> <br /> Outbound Campaigns help you automate personalized communication flows based on customer behavior or time-based triggers. For example, you can send an appointment reminder by SMS, follow up with a voice call if the customer does not respond, and send a confirmation email once the appointment is booked. You can also configure steps in the journey builder that offer customers the option to connect with a live agent through Amazon Connect when additional support is needed. You can use existing Amazon Connect Flow integrations, AI capabilities, and customer data from Amazon Connect Customer Profiles to tailor each interaction. This helps contact centers improve engagement rates, reduce manual effort, and deliver more consistent customer experiences.<br /> <br /> This feature is available in all AWS Regions where Amazon Connect Outbound Campaigns is supported. To learn more, visit the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/how-to-create-campaigns.html#create-campaigns-channel-configurations">Amazon Connect Outbound Campaigns documentation</a>.</p>

Read article →

Amazon Connect Chat now supports agent-initiated workflows

<p>Amazon Connect now supports agent-initiated workflows, enabling agents to send interactive forms to collect sensitive data or share general policies and disclosures within customer chat conversations, increasing efficiency and improving customer experience. For example, when a customer needs to update their address, agents can now send a form that customers complete without leaving the chat interface.<br /> <br /> Agents can trigger these workflows at any point during a chat conversation, making interactions more dynamic and responsive to customer needs. By handling everything within the ongoing chat conversation, businesses can maintain security and compliance standards while helping customers get faster solutions.<br /> <br /> These new agent capabilities are now available in the following regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (London), and Africa (Cape Town). To learn more, visit the Amazon Connect documentation.</p>

Read article →

Amazon Connect now provides automated performance evaluations for self-service interactions

<p>Amazon Connect now provides businesses with the ability to automatically evaluate the quality of self-service interactions and get aggregated insights to improve customer experience. Managers can define custom criteria to assess the quality of self-service interactions, which can be filled manually or automatically using insights from conversational analytics and other Connect data. For example, you can automatically assess if the AI agent repeatedly fails to understand the customer, resulting in poor customer sentiment and transfer to a human agent. Managers can review these insights in aggregate and on individual contacts, alongside self-service interaction recordings and transcripts, to identify opportunities to improve AI agent performance.<br /> <br /> Manually filled evaluations of self-service interactions are available in all regions where <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region" target="_blank">Amazon Connect</a> is offered. Automated evaluations of self-service interactions are available in the following AWS regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Europe (Frankfurt). For information about Amazon Connect pricing, please visit our <a href="https://aws.amazon.com/connect/pricing/" target="_blank">pricing page</a>. To learn more, please visit our <a href="https://docs.aws.amazon.com/connect/latest/adminguide/create-evaluation-forms.html" target="_blank">documentation</a> and our <a href="https://aws.amazon.com/connect/contact-lens/" target="_blank">webpage</a>.</p>

Read article →

Amazon Aurora now supports PostgreSQL 17.6, 16.10, 15.14, 14.19, and 13.22

<p>Amazon Aurora PostgreSQL-Compatible Edition has added support for PostgreSQL versions 17.6, 16.10, 15.14, 14.19, and 13.22. The <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQLReleaseNotes/AuroraPostgreSQL.Updates.html">update</a> includes the PostgreSQL community's product improvements and bug fixes, and also includes Aurora-specific enhancements.</p> <p>Dynamic Data Masking (DDM) (16.10 and 17.6 only) is a new database-level security feature that protects sensitive data like personally identifiable information by masking column values dynamically at query time based on role-based policies, without altering the actual stored data. This release also includes a shared plan cache, as well as improved performance and recovery time objective (RTO) for Global Database switchovers.<br /> <br /> To use the new versions, create a new Aurora PostgreSQL-compatible database with just a few clicks in the Amazon RDS Management Console. You can also upgrade your existing database. Please review <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html">the Aurora documentation</a> to learn more about upgrading. Refer to the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html">Aurora version policy</a> to help you decide how often to upgrade and how to plan your upgrade process. These releases are available in all commercial AWS Regions and the AWS GovCloud (US) Regions.<br /> <br /> Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.html">getting started page</a>.</p>
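
For teams that script their upgrades, a minimal boto3 sketch along these lines can list the available engine versions and request the upgrade; the cluster identifier is a placeholder, and the exact Aurora engine version string is an assumption to verify against the release notes linked above:

```python
# Hypothetical sketch: upgrading an existing Aurora PostgreSQL cluster with boto3.
# The cluster identifier is a placeholder, and the "17.6" version string is an
# assumption -- confirm available versions with describe_db_engine_versions first.
import boto3

rds = boto3.client("rds")

# List the Aurora PostgreSQL engine versions currently offered in this Region.
versions = rds.describe_db_engine_versions(Engine="aurora-postgresql")
print([v["EngineVersion"] for v in versions["DBEngineVersions"]])

# Upgrade an existing cluster (placeholder identifier) to the new version.
rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-pg-cluster",  # placeholder
    EngineVersion="17.6",                        # assumed engine version string
    AllowMajorVersionUpgrade=True,               # needed when crossing major versions
    ApplyImmediately=False,                      # apply during the next maintenance window
)
```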

Read article →

Amazon S3 Metadata expands to 22 additional AWS Regions

<p>Amazon S3 Metadata is now available in twenty-two additional AWS Regions: Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Canada West (Calgary), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), Europe (Zurich), Israel (Tel Aviv), Middle East (Bahrain), Middle East (UAE), South America (Sao Paulo), and US West (N. California).<br /> <br /> Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily-queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like size and source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating. S3 Metadata automatically populates metadata for both new and existing objects, providing you with a comprehensive, queryable view of your data.<br /> <br /> With this expansion, S3 Metadata is now generally available in twenty-eight <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/metadata-tables-restrictions.html#metadata-tables-regions">AWS Regions</a>. For pricing details, visit the <a href="https://aws.amazon.com/s3/pricing/">S3 pricing page</a>. To learn more, visit the <a href="https://aws.amazon.com/s3/features/metadata/">product page</a>, <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/metadata-tables-overview.html">documentation</a>, and <a href="https://aws.amazon.com/blogs/storage/analyzing-amazon-s3-metadata-with-amazon-athena-and-amazon-quicksight/">AWS Storage Blog</a>.</p>

Read article →

Amazon SageMaker AI now supports Flexible Training Plans capacity for Inference

<p>Amazon SageMaker AI’s Flexible Training Plans (FTP) now support inference endpoints, giving customers guaranteed GPU capacity for planned evaluations and production peaks. Now, customers can reserve the exact instance types they need and rely on SageMaker AI to bring up the inference endpoint automatically, without doing any infrastructure management themselves.</p> <p>As customers plan their ML development cycles, they need confidence that the GPUs required for model evaluation and pre-production testing will be available on the exact dates they need them. FTP makes it easy for customers to access GPU capacity to run ML workloads. With FTP support for inference endpoints, you choose your preferred instance types, compute requirements, reservation length, and start date for your inference workload. When creating the endpoint, you simply reference the reservation ARN and SageMaker AI automatically provisions and runs the endpoint on that guaranteed capacity for the entire plan duration. This removes weeks of infrastructure management and scheduling effort, letting you run inference predictably while focusing your time on improving model performance.</p> <p>Flexible Training Plans support for SageMaker AI Inference is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), and US East (Ohio).</p> <p>To learn more about using FTP reservations for inference endpoints, visit the SageMaker AI Inference API reference <a href="https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_SearchTrainingPlanOfferings.html#API_SearchTrainingPlanOfferings_RequestSyntax">here</a>.</p>

Read article →

Amazon Bedrock introduces Reserved Service tier

<p>Today, Amazon Bedrock introduces a new Reserved service tier designed for workloads requiring predictable performance and guaranteed tokens-per-minute capacity. The Reserved tier provides the ability to reserve prioritized compute capacity, keeping service levels predictable for your mission-critical applications. It also includes the flexibility to allocate different input and output tokens-per-minute capacities to match the exact requirements of your workload and control cost. This is particularly valuable because many workloads have asymmetric token usage patterns. For instance, summarization tasks consume many input tokens but generate fewer output tokens, while content generation applications require less input and more output capacity. When your application needs more tokens-per-minute capacity than you reserved, the service automatically overflows to the pay-as-you-go Standard tier, ensuring uninterrupted operations. The Reserved tier targets 99.5% uptime for model response and is available today for Anthropic Claude Sonnet 4.5. Customers can reserve capacity for a 1-month or 3-month duration. Customers pay a fixed price per 1K tokens-per-minute and are billed monthly.<br /> <br /> With the Reserved service tier, Amazon Bedrock continues to provide more choice to customers, helping them develop, scale, and deploy applications and agents that improve productivity and customer experiences while balancing performance and cost requirements.<br /> <br /> For more information about the AWS Regions where Amazon Bedrock Reserved is available, refer to the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/service-tiers-inference.html">documentation</a>. To get access to the Reserved tier, please contact your AWS account team.</p>

Read article →

AWS announces support for Apache Iceberg V3 deletion vectors and row lineage

<p>AWS now supports deletion vectors and row lineage as defined in the Apache Iceberg Version 3 (V3) specification. These new features are available with Apache Spark on Amazon EMR 7.12, AWS Glue, Amazon SageMaker notebooks, Amazon S3 Tables, and the AWS Glue Data Catalog.<br /> <br /> These Iceberg V3 capabilities help customers build petabyte-scale data lakes with improved performance for data modifications and functionality to easily track changed records. Deletion vectors write optimized delete files that speed up data pipelines and reduce data compaction costs. Row lineage provides metadata fields on each record to track changes with a simple SQL query, eliminating the computational expense of finding small changes in large tables.<br /> <br /> Get started creating V3 tables by setting the table property to 'format-version = 3' in the CREATE TABLE command in Spark or a SageMaker notebook. To upgrade existing tables, simply update the table property in metadata with the new format version. When you do this, AWS query engines that support V3 will automatically begin to use deletion vectors and row lineage.<br /> <br /> Iceberg V3 deletion vectors and row lineage are now available in all AWS Regions where each respective service/feature—Amazon EMR, AWS Glue, SageMaker notebooks, S3 Tables, and AWS Glue Data Catalog—is supported. To learn more about AWS support for Iceberg V3, visit <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/working-with-apache-iceberg-v3.html" target="_blank">Apache Iceberg V3 on AWS</a>, and read the <a href="https://aws.amazon.com/blogs/big-data/accelerate-data-lake-operations-with-apache-iceberg-v3-deletion-vectors-and-row-lineage/" target="_blank">blog post</a>.</p>
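
To make the table property concrete, here is a minimal PySpark sketch; the catalog, namespace, and table names are placeholders, and the session is assumed to run on an Iceberg-enabled engine such as AWS Glue 5.x or Amazon EMR 7.12:

```python
# Minimal PySpark sketch of the Iceberg V3 workflow described above.
# Catalog and table names are placeholders for your own environment.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-v3-example").getOrCreate()

# Create a new table on the Iceberg V3 spec (deletion vectors, row lineage).
spark.sql("""
    CREATE TABLE glue_catalog.sales.orders (
        order_id BIGINT,
        status   STRING
    )
    USING iceberg
    TBLPROPERTIES ('format-version' = '3')
""")

# Upgrade an existing table by updating the same property in its metadata.
spark.sql("""
    ALTER TABLE glue_catalog.sales.orders_v2
    SET TBLPROPERTIES ('format-version' = '3')
""")
```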

Read article →

The AWS API MCP Server is now available on AWS Marketplace

<p>AWS announces the availability of the AWS API MCP Server on AWS Marketplace, enabling customers to deploy the Model Context Protocol (MCP) server to Amazon Bedrock AgentCore. The marketplace entry includes step-by-step instructions for deploying the AWS API MCP Server as a managed service with built-in authentication and session isolation on the Amazon Bedrock AgentCore Runtime.<br /> <br /> The AWS Marketplace deployment simplifies container management while providing enterprise-grade security, scalability, and session isolation through Amazon Bedrock AgentCore Runtime. Customers can deploy the AWS API MCP Server with configurable authentication methods (SigV4 or JWT), implement least-privilege IAM policies, and leverage AgentCore's built-in logging and monitoring capabilities. The deployment lets customers configure IAM roles, authentication methods, and network settings according to their security requirements.<br /> <br /> The AWS API MCP Server can now be deployed from AWS Marketplace in all AWS Regions where Amazon Bedrock AgentCore is supported.<br /> <br /> Get started by visiting the <a href="https://aws.amazon.com/marketplace/pp/prodview-lqqkwbcraxsgw">AWS API MCP Server listing on AWS Marketplace</a> or explore the deployment guide in the <a href="https://github.com/awslabs/mcp/blob/main/src/aws-api-mcp-server/DEPLOYMENT.md">AWS Labs GitHub repository</a>. Learn more about Amazon Bedrock AgentCore in the <a href="https://aws.amazon.com/bedrock/agentcore/">AWS documentation</a>.</p>

Read article →

AWS Knowledge MCP Server now supports topic-based search

<p>Today, AWS announces enhanced search capabilities for the AWS Knowledge MCP Server, which now supports topic-based search across specialized AWS documentation domains. The AWS Knowledge MCP Server is a Model Context Protocol (MCP) server that provides AI agents and developers with programmatic access to AWS documentation and knowledge resources. This enhancement enables more precise and relevant search results by allowing MCP clients and agentic frameworks to query specific documentation domains such as Troubleshooting, AWS Amplify, AWS CDK, CDK Constructs, and AWS CloudFormation, reducing noise and improving response accuracy for domain-specific queries.<br /> <br /> These topic-based searches complement existing capabilities for searching API references, What's New announcements, and general AWS documentation. Developers building AI agents can now retrieve targeted information for specific use cases—for example, searching Troubleshooting documentation for error resolution, Amplify documentation for frontend development guidance, or CDK Constructs for production-ready architectural patterns. This focused approach accelerates development workflows and improves the quality of AI-generated responses for AWS-specific queries.<br /> <br /> The enhanced search capabilities are available immediately at no additional cost through the AWS Knowledge MCP Server. Usage remains subject to standard rate limits. To learn more and get started, see the <a contenteditable="false" href="https://github.com/awslabs/mcp/tree/main/src/aws-knowledge-mcp-server" style="cursor: pointer;">AWS Knowledge MCP Server documentation.</a></p>

Read article →

Amazon SageMaker HyperPod now supports programmatic node reboot and replacement

<p>Today, Amazon SageMaker HyperPod announces the general availability of new APIs that enable programmatic rebooting and replacement of SageMaker HyperPod cluster nodes. SageMaker HyperPod helps you provision resilient clusters for running machine learning (ML) workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs). The new BatchRebootClusterNodes and BatchReplaceClusterNodes APIs enable customers to programmatically reboot or replace unresponsive or degraded cluster nodes, providing a consistent, orchestrator-agnostic approach to node recovery operations.<br /> <br /> The new APIs enhance node management capabilities for both Slurm and EKS orchestrated clusters, complementing existing node reboot and replacement workflows. Existing orchestrator-specific methods, such as Kubernetes labels for EKS clusters and Slurm commands for Slurm clusters, remain available alongside the newly introduced programmatic capabilities for reboot and replace operations through these purpose-built APIs. When cluster nodes become unresponsive due to issues such as memory overruns or hardware degradation, recovery operations such as node reboots and replacements may be necessary and can be initiated through these new APIs. These capabilities are particularly valuable when running time-sensitive workloads. For instance, when a Slurm controller, login, or compute node becomes unresponsive, administrators can trigger a reboot operation using the API and monitor its progress to get nodes back to operational status. Similarly, EKS cluster administrators can replace degraded worker nodes programmatically. Each API supports batch operations of up to 25 instances, enabling efficient management of large-scale recovery scenarios.<br /> <br /> The reboot and replace APIs are currently supported in three AWS regions where SageMaker HyperPod is available: US East (Ohio), Asia Pacific (Mumbai), and Asia Pacific (Tokyo). The APIs can be accessed through the AWS CLI, SDK, or API calls. For more information, see the Amazon SageMaker HyperPod documentation for <a href="https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_BatchRebootClusterNodes.html" target="_blank">BatchRebootClusterNodes</a> and <a href="https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_BatchReplaceClusterNodes.html" target="_blank">BatchReplaceClusterNodes</a>.</p>
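
A hedged boto3 sketch of the new calls is shown below; the API names come from this announcement, while the parameter names (ClusterName, NodeIds) and the node IDs are assumptions modeled on existing HyperPod batch APIs, so verify them against the linked API reference:

```python
# Hedged sketch of programmatic node recovery on SageMaker HyperPod with boto3.
# API names are from the announcement; parameter names and IDs are assumptions.
import boto3

sm = boto3.client("sagemaker")

# Reboot a set of unresponsive nodes (each API supports up to 25 instances per call).
sm.batch_reboot_cluster_nodes(
    ClusterName="my-hyperpod-cluster",            # placeholder cluster name
    NodeIds=["hyperpod-i-0123456789abcdef0"],     # placeholder node IDs
)

# Replace degraded nodes instead of rebooting them.
sm.batch_replace_cluster_nodes(
    ClusterName="my-hyperpod-cluster",
    NodeIds=["hyperpod-i-0fedcba9876543210"],
)
```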

Read article →

Amazon SageMaker AI Inference now supports bidirectional streaming

<p>Amazon SageMaker AI Inference now supports bidirectional streaming for real-time speech-to-text transcription, enabling continuous speech processing instead of batch input. Models can now receive audio streams and return partial transcripts simultaneously as users speak, enabling you to build voice agents that process speech with minimal latency.<br /> <br /> As customers build AI voice agents, they need real-time speech transcription to minimize delays between user speech and agent responses. Data scientists and ML engineers lack managed infrastructure for bidirectional streaming, making it necessary to build custom WebSocket implementations and manage streaming protocols. Teams spend weeks developing and maintaining this infrastructure rather than focusing on model accuracy and agent capabilities. With bidirectional streaming on Amazon SageMaker AI Inference, you can deploy speech-to-text models by invoking your endpoint with the new Bidirectional Stream API. The client opens an HTTP2 connection to the SageMaker AI runtime, and SageMaker AI automatically creates a WebSocket connection to your container. This can process streaming audio frames and return partial transcripts as they are produced. Any container implementing a WebSocket handler following the SageMaker AI contract works automatically, with real-time speech models such as Deepgram running without modifications. This eliminates months of infrastructure development, enabling you to deploy voice agents with continuous transcription while focusing your time on improving model performance.<br /> <br /> Bidirectional streaming is available in the following AWS Regions: Canada (Central), South America (São Paulo), Africa (Cape Town), Europe (Paris), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Israel (Tel Aviv), Europe (Zurich), Asia Pacific (Tokyo), AWS GovCloud (US-West), AWS GovCloud (US-East), Asia Pacific (Mumbai), Middle East (Bahrain), US West (Oregon), China (Ningxia), US West (N. California), Asia Pacific (Sydney), Europe (London), Asia Pacific (Seoul), US East (N. Virginia), Asia Pacific (Hong Kong), US East (Ohio), China (Beijing), Europe (Stockholm), Europe (Ireland), Middle East (UAE), Asia Pacific (Osaka), Asia Pacific (Melbourne), Europe (Spain), Europe (Frankfurt), Europe (Milan), and Asia Pacific (Singapore).<br /> <br /> To learn more, visit the AWS News Blog <a href="https://aws.amazon.com/blogs/machine-learning/introducing-bidirectional-streaming-for-real-time-inference-on-amazon-sagemaker-ai/">here</a> and the SageMaker AI documentation <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-test-endpoints.html#realtime-endpoints-test-endpoints-sdk">here</a>.</p>

Read article →

Amazon Kinesis Video Streams now supports a new cost effective warm storage tier

<p>AWS announces a new warm storage tier for <a href="https://aws.amazon.com/kinesis/video-streams/">Amazon Kinesis Video Streams</a> (Amazon KVS), delivering cost-effective storage for extended media retention. The standard Amazon KVS storage tier, now designated as the hot tier, remains optimized for real-time data access and short-term storage. The new warm tier enables long-term media retention with sub-second access latency at reduced storage costs.<br /> <br /> The warm storage tier enables developers of home security and enterprise video monitoring solutions to cost-effectively stream data from devices, cameras, and mobile phones while maintaining extended retention periods for video analytics and regulatory compliance. Moreover, developers now have the flexibility to configure fragment sizes based on their specific requirements — selecting smaller fragments for lower latency use cases or larger fragments to reduce ingestion costs. Both hot and warm storage tiers integrate seamlessly with Amazon Rekognition Video and Amazon SageMaker, enabling continuous data processing to support the creation of computer vision and video analytics applications.<br /> <br /> Amazon Kinesis Video Streams with the new warm storage tier is available in all regions where Amazon Kinesis Video Streams is available, except the AWS GovCloud (US) Regions.<br /> <br /> To learn more, refer to the <a href="https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/tiered-storage.html">getting started guide</a>.</p>

Read article →

Introducing AWS Glue 5.1

<p>AWS Glue 5.1 is now generally available, delivering improved performance, security updates, expanded Apache Iceberg capabilities, and AWS Lake Formation write support for data integration workloads.</p> <p><a href="https://aws.amazon.com/glue/" target="_blank">AWS Glue</a> is a serverless, scalable data integration service that simplifies discovering, preparing, moving, and integrating data from multiple sources. This release upgrades core engines to Apache Spark 3.5.6, Python 3.11, and Scala 2.12.18, bringing performance and security enhancements. It also updates support for open table format libraries, including Apache Hudi 1.0.2, Apache Iceberg 1.10.0, and Delta Lake 3.3.2.</p> <p>AWS Glue 5.1 introduces support for Apache Iceberg format version 3.0, adding default column values, deletion vectors for merge-on-read tables, multi-argument transforms, and row lineage tracking. This release also extends <a href="https://aws.amazon.com/lake-formation/">AWS Lake Formation</a> fine-grained access control to write operations (both DML and DDL) for Spark DataFrames and Spark SQL. Previously, this capability was limited to read operations only. AWS Glue 5.1 also adds full-table access control in Apache Spark for Apache Hudi and Delta Lake tables, providing more comprehensive security options for your data.</p> <p>AWS Glue 5.1 is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Stockholm), Europe (Frankfurt), Europe (Spain), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Malaysia), Asia Pacific (Thailand), Asia Pacific (Mumbai), and South America (São Paulo). Visit the AWS Glue <a href="https://docs.aws.amazon.com/glue/" target="_blank">documentation</a> for more information.</p>
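
As one way to adopt the new release, the boto3 sketch below pins a Spark job to Glue 5.1; the job name, role ARN, and script location are placeholders for your own resources:

```python
# Illustrative sketch: creating a Glue Spark job on the new release by setting
# GlueVersion to "5.1". Name, Role, and ScriptLocation are placeholders.
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="orders-etl-glue-5-1",                                   # placeholder
    Role="arn:aws:iam::123456789012:role/GlueJobRole",            # placeholder
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/orders_etl.py",  # placeholder
        "PythonVersion": "3",
    },
    GlueVersion="5.1",   # selects the Spark 3.5.6 / Python 3.11 engines in this release
    WorkerType="G.1X",
    NumberOfWorkers=10,
)
```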

Read article →

AWS Compute Optimizer now supports unused NAT Gateway recommendations

<p>Today, AWS announces that AWS Compute Optimizer now supports idle resource recommendations for NAT Gateways. With this new recommendation type, you can identify unused NAT Gateways and reduce costs by removing them.<br /> <br /> The unused NAT Gateway recommendation identifies NAT Gateways that show no traffic activity over a 32-day analysis period. Compute Optimizer analyzes CloudWatch metrics, including active connection count, incoming packets from source, and incoming packets from destination, to validate that NAT Gateways are truly unused. To avoid flagging NAT Gateways that serve as critical backups, Compute Optimizer also checks whether the NAT Gateway is referenced in any route tables. You can view the total savings potential of these unused NAT Gateways and review detailed utilization metrics to confirm they are unused before taking action.<br /> <br /> This new feature is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a> where AWS Compute Optimizer is available, except the AWS GovCloud (US) and China Regions. To learn more about the new feature updates, please visit the Compute Optimizer <a href="https://aws.amazon.com/compute-optimizer/" target="_blank">product page</a> and <a href="https://docs.aws.amazon.com/compute-optimizer/latest/ug/what-is-compute-optimizer.html" target="_blank">user guide</a>.</p>
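
A hedged boto3 sketch for pulling these findings programmatically is shown below; it relies on the existing idle-recommendations API, and the resource-type filter value used for NAT Gateways is an assumption to confirm in the user guide:

```python
# Hedged sketch: listing idle (unused) NAT Gateway recommendations with boto3.
# The filter value "NatGateway" and the response field names are assumptions --
# verify them against the Compute Optimizer API reference before relying on them.
import boto3

co = boto3.client("compute-optimizer")

resp = co.get_idle_recommendations(
    filters=[{"name": "ResourceType", "values": ["NatGateway"]}],  # assumed value
)

for rec in resp.get("idleRecommendations", []):
    print(rec.get("resourceArn"), rec.get("finding"), rec.get("savingsOpportunity"))
```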

Read article →

Amazon EMR and AWS Glue now support write operations with AWS Lake Formation fine-grained access controls

<p>Amazon EMR and AWS Glue now enable you to enforce fine-grained access control (FGAC) on both read and write operations for AWS Lake Formation registered tables in your Apache Spark jobs. Previously, you could only apply Lake Formation's table, column, and row-level permissions for read operations (SELECT, DESCRIBE). This simplifies data workflows by allowing both read and write tasks in a single Spark job, eliminating the need for separate clusters or applications. Organizations can now execute end-to-end data workflows with consistent security controls, streamlining operations and reducing infrastructure costs.<br /> <br /> With this launch, administrators can control who is authorized to insert new data, update specific records, or merge changes through DML and DDL operations (INSERT, UPDATE, DELETE, MERGE INTO, CREATE, ALTER, DROP), ensuring that all data modifications adhere to specified security policies and mitigating the risk of unauthorized data modification or misuse. This launch simplifies data governance and security frameworks by providing a single point for defining access rules in AWS Lake Formation and enforcing these rules in Spark for both read and write operations.<br /> <br /> This feature is available in all AWS Regions where Amazon EMR (EC2, EKS, and Serverless), AWS Glue, and AWS Lake Formation are available. To learn more, visit the <a href="https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/emr-serverless-lf-enable.html#emr-serverless-lf-enable-open-table-format-support">open table format support</a> documentation.</p>
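
As a minimal illustration, the PySpark sketch below reads from and then merges into Lake Formation registered tables within a single job; the database and table names are placeholders, and the job's role is assumed to hold the corresponding Lake Formation read and write permissions:

```python
# Minimal PySpark sketch of one job that both reads and writes Lake Formation
# governed tables under fine-grained access control. Names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lf-fgac-read-write").getOrCreate()

# The read is filtered to the columns and rows this job's role may access.
recent = spark.sql("""
    SELECT order_id, status, updated_at
    FROM   lakehouse.orders_staging
    WHERE  updated_at >= date_sub(current_date(), 1)
""")
recent.createOrReplaceTempView("recent_orders")

# The write (MERGE) runs in the same job, subject to the same Lake Formation policies.
spark.sql("""
    MERGE INTO lakehouse.orders AS t
    USING recent_orders AS s
    ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET t.status = s.status, t.updated_at = s.updated_at
    WHEN NOT MATCHED THEN INSERT (order_id, status, updated_at)
                          VALUES (s.order_id, s.status, s.updated_at)
""")
```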

Read article →

Amazon EMR and AWS Glue now support audit context support with Lake Formation

<p>Amazon EMR and AWS Glue now provide comprehensive audit context support for AWS Lake Formation credential vending APIs and AWS Glue Data Catalog GetTable and GetTables API calls. This auditing capability helps you maintain compliance with regulatory frameworks, including the Digital Markets Act (DMA) and data protection regulations. The feature is enabled by default, offering seamless integration into existing workflows while strengthening security and compliance monitoring across your data lake infrastructure.<br /> <br /> You can view this audit context information in AWS CloudTrail logs, enabling enhanced security auditing, regulatory compliance, and improved troubleshooting for Amazon EMR Apache Spark native fine-grained access control (FGAC) and full-table access jobs. The audit logging feature automatically records the platform type (EMR-EC2, EMR on EKS, EMR Serverless, or AWS Glue) and its corresponding identifiers such as Cluster ID, Step ID, Job Run ID, and Virtual Cluster ID. This enables security teams to track and correlate API calls from individual Spark jobs, streamline compliance reporting, and analyze historical data access patterns. Additionally, data engineers can quickly troubleshoot access-related issues by connecting them to specific job executions, resolve FGAC permission challenges, and monitor access patterns across different compute platforms.<br /> <br /> This feature is available in all AWS Regions that support Amazon EMR, AWS Glue, and AWS Lake Formation, requiring EMR version 7.12+ or AWS Glue version 5.1+.</p>

Read article →

Amazon SageMaker HyperPod now supports custom Kubernetes labels and taints

<p>Amazon SageMaker HyperPod now supports custom Kubernetes labels and taints, enabling customers to control pod scheduling and integrate seamlessly with existing Kubernetes infrastructure. Customers deploying AI workloads on HyperPod clusters orchestrated with EKS need precise control over workload placement to prevent expensive GPU resources from being consumed by system pods and non-AI workloads, while ensuring compatibility with custom device plugins such as EFA and NVIDIA GPU operators. Previously, customers had to manually apply labels and taints using kubectl and reapply them after every node replacement, scaling, or patching operation, creating significant operational overhead.<br /> <br /> This capability allows you to configure labels and taints at the instance group level through the CreateCluster and UpdateCluster APIs, providing a managed approach to defining and maintaining scheduling policies across the entire node lifecycle. Using the new KubernetesConfig parameter, you can specify up to 50 labels and 50 taints per instance group. Labels enable resource organization and pod targeting through node selectors, while taints repel pods without matching tolerations to protect specialized nodes. For example, you can apply NoSchedule taints to GPU instance groups to ensure only AI training jobs with explicit tolerations consume high-cost compute resources, or add custom labels that enable device plugin pods to schedule correctly. HyperPod automatically applies these configurations during node creation and maintains them across replacement, scaling, and patching operations, eliminating manual intervention and reducing operational overhead.<br /> <br /> This feature is available in all AWS Regions where Amazon SageMaker HyperPod is available. To learn more about custom labels and taints, see the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-eks-custom-labels-and-taints.html">user guide</a>.</p>
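
A hedged boto3 sketch of attaching labels and taints to an instance group follows; the KubernetesConfig parameter name comes from this announcement, while the nested field names (Labels, Taints, Key/Value/Effect) are assumptions modeled on Kubernetes conventions, and all names and ARNs are placeholders to verify against the linked user guide:

```python
# Hedged sketch: configuring labels and taints on a HyperPod instance group.
# The nested KubernetesConfig structure shown here is an assumption -- verify it
# against the CreateCluster/UpdateCluster API reference before use.
import boto3

sm = boto3.client("sagemaker")

sm.update_cluster(
    ClusterName="my-hyperpod-cluster",                            # placeholder
    InstanceGroups=[
        {
            "InstanceGroupName": "gpu-workers",
            "InstanceType": "ml.p5.48xlarge",
            "InstanceCount": 4,
            "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodRole",  # placeholder
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://my-bucket/lifecycle/",        # placeholder
                "OnCreate": "on_create.sh",
            },
            # Assumed shape of the new KubernetesConfig parameter.
            "KubernetesConfig": {
                "Labels": {"workload-class": "training"},
                "Taints": [
                    {"Key": "nvidia.com/gpu", "Value": "true", "Effect": "NoSchedule"}
                ],
            },
        }
    ],
)
```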

Read article →

Amazon SageMaker HyperPod now supports Managed Tiered KV Cache and Intelligent Routing

<p>Amazon SageMaker HyperPod now supports Managed Tiered KV Cache and Intelligent Routing for large language model (LLM) inference, enabling customers to optimize inference performance for long-context prompts and multi-turn conversations. Customers deploying production LLM applications need fast response times while processing lengthy documents or maintaining conversation context, but traditional inference approaches require recalculating attention mechanisms for all previous tokens with each new token generation, creating computational overhead and escalating costs. Managed Tiered KV Cache addresses this challenge by intelligently caching and reusing computed values, while Intelligent Routing directs requests to optimal instances.<br /> <br /> These capabilities deliver up to 40% latency reduction, 25% throughput improvement, and 25% cost savings compared to baseline configurations. The Managed Tiered KV Cache feature uses a two-tier architecture combining local CPU memory (L1) with disaggregated cluster-wide storage (L2). AWS-native disaggregated tiered storage is the recommended backend, providing scalable terabyte-scale capacity and automatic tiering from CPU memory to local SSD for optimal memory and storage utilization. We also offer Redis as an alternative L2 cache option. The architecture enables efficient reuse of previously computed key-value pairs across requests. The newly introduced Intelligent Routing maximizes cache utilization through three configurable strategies: prefix-aware routing for common prompt patterns, KV-aware routing for maximum cache efficiency with real-time cache tracking, and round-robin for stateless workloads. These features work seamlessly together. Intelligent routing directs requests to instances with relevant cached data, reducing time to first token in document analysis and maintaining natural conversation flow in multi-turn dialogues. Built-in observability integration with Amazon Managed Grafana provides metrics for monitoring performance. You can enable these features through InferenceEndpointConfig or SageMaker JumpStart when deploying models via the HyperPod Inference Operator on EKS-orchestrated clusters.<br /> <br /> These features are available in all regions where SageMaker HyperPod is available. To learn more, see the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-model-deployment-deploy.html">user guide</a>.</p>

Read article →

Announcing AWS Glue zero-ETL for self-managed Database Sources

<p><a href="https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html">AWS Glue</a> now supports zero-ETL for self-managed database sources. Using Glue zero-ETL, you can now setup an integration to replicate data from Oracle, SQL Server, MySQL or PostgreSQL databases which are located on-premises or on AWS EC2 to Redshift with a simple experience that eliminates configuration complexity.<br /> <br /> AWS zero-ETL for self-managed database sources will automatically create an integration for an on-going replication of data from your on-premises or EC2 databases through a simple, no-code interface. You can now replicate data from Oracle, SQL Server, MySQL and PostgreSQL databases into Redshift. This feature further reduces users' operational burden and saves weeks of engineering effort needed to design, build, and test data pipelines to ingest data from self-managed databases to Redshift.&nbsp; &nbsp;<br /> <br /> AWS Glue zero-ETL for self-managed database sources are available in the following AWS Regions: US East (Ohio), Europe (Stockholm), Europe (Ireland), Europe (Frankfurt),&nbsp; Canada West (Calgary), US West (Oregon), and Asia Pacific (Seoul) regions. To get started, sign into the&nbsp;<a href="https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#LaunchInstances:instanceType=r8a.large">AWS Management Console</a>.&nbsp; For more information visit the&nbsp;<a href="https://aws.amazon.com/glue/">AWS Glue page</a>&nbsp;or review the <a href="https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html">AWS Glue zero-ETL</a> documentation.</p>

Read article →

Amazon Lex now supports LLMs as the primary option for natural language understanding

<p>Amazon Lex now allows you to use Large Language Models (LLMs) as the primary option to understand customer intent across voice and chat interactions. With this capability, your voice and chat bots can better understand customer requests, handle complex utterances, maintain accuracy despite spelling errors, and extract key information from verbose inputs. When customer intent is unclear, bots can intelligently ask follow-up questions to fulfill requests accurately. For example, when a customer says “I need help with my flight,” the LLM automatically clarifies whether the customer wants to check their flight status, upgrade their flight, or change their flight.<br /> <br /> This feature is available in all AWS commercial regions where Amazon Connect and Lex operate. To learn more, visit the <a href="https://docs.aws.amazon.com/lexv2/latest/dg/generative-intent-disambiguation.html">Amazon Lex documentation</a> or explore the Amazon Connect <a href="https://aws.amazon.com/connect/self-service/">website</a> to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.&nbsp;</p>

Read article →

Amazon CloudWatch now supports deletion protection for logs

<p>Amazon CloudWatch now supports deletion protection for your CloudWatch log groups, helping customers safeguard critical logging data from accidental or unintended deletion. This feature provides an additional layer of protection for log groups that maintain audit trails, compliance records, and operational logs that must be preserved.<br /> <br /> With deletion protection enabled, administrators can prevent unintended deletions of their most important log groups. Once enabled, log groups cannot be deleted until the protection is explicitly turned off, helping safeguard critical operational, security, and compliance data. This protection is particularly valuable for preserving audit logs and production application logs needed for troubleshooting and analysis.<br /> <br /> Log group deletion protection is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/" target="_blank">AWS commercial Regions</a>.<br /> <br /> You can enable deletion protection during log group creation or on existing log groups using the Amazon CloudWatch console, AWS Command Line Interface (AWS CLI), AWS Cloud Development Kit (AWS CDK), and AWS SDKs. For more information, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html" target="_blank">Amazon CloudWatch Logs User Guide</a>.</p>

Read article →

Improved AWS Health event triage

<p>AWS Health now includes two new properties in its event schema - actionability and persona - enabling customers to identify the most relevant events. These properties allow organizations to programmatically identify events requiring customer action and direct them to relevant teams. The enhanced event schema is accessible through both the AWS Health API and Health EventBridge communication channels, improving operational efficiency and team coordination.<br /> <br /> AWS customers receive various operational notifications and scheduled changes, including Planned Lifecycle Events. With the new actionability property, teams can quickly distinguish between events requiring action and those shared for awareness. The persona property streamlines event routing and visibility to specific teams like security and billing, ensuring critical information reaches appropriate stakeholders. These structured properties streamline integration with existing operational tools, allowing teams to effectively identify and remediate affected resources while maintaining appropriate visibility across the organization.<br /> <br /> This enhancement is available across all AWS Commercial and AWS GovCloud (US) Regions. To learn more about implementing these new properties, see the AWS Health <a href="https://docs.aws.amazon.com/health/latest/ug/aws-health-concepts-and-terms.html" target="_blank">User Guide</a> and the <a href="https://docs.aws.amazon.com/health/latest/APIReference/Welcome.html" target="_blank">API</a> and <a href="https://docs.aws.amazon.com/health/latest/ug/aws-health-events-eventbridge-schema.html" target="_blank">EventBridge</a> schema documentation.</p>
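For teams that route Health events with Amazon EventBridge, the sketch below shows a rule that matches only events flagged as requiring customer action. The `put_rule` call is the standard EventBridge API; the `actionability` key and its `ACTIONABLE` value inside `detail` are assumptions based on the property names in this announcement, so confirm the exact keys in the Health EventBridge schema documentation.

```python
import json
import boto3

events = boto3.client("events")

# Match only AWS Health events that require customer action. The "actionability"
# field name and the "ACTIONABLE" value are assumptions taken from this
# announcement's property names; confirm them in the EventBridge schema docs.
pattern = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {"actionability": ["ACTIONABLE"]},
}

events.put_rule(
    Name="health-actionable-events",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)
```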

Read article →

Amazon Quick Research now includes trusted third-party industry intelligence

<p>Amazon Quick Suite, the AI-powered workspace helping organizations get answers from their enterprise data and move swiftly from insights to action, enhances Quick Research with access to specialized third-party datasets.<br /> <br /> Quick Research transforms how business professionals tackle complex business problems by completing weeks of data discovery, analysis, and insight generation in minutes. Today, Quick Research launches its partner ecosystem with industry intelligence providers S&amp;P Global, FactSet, and IDC, with more to come. Users with existing subscriptions can combine these authoritative datasets with all of their business data and real-time web search, accelerating their path to deeper insights and strategic decision-making. Additionally, all users have access to decades of US Patent and Trademark Office data along with millions of PubMed citations and abstracts in biomedical and life sciences literature.<br /> <br /> Business professionals from any industry can now access and analyze multiple data sources in one unified workspace, eliminating the need to switch between platforms. For example, a financial analyst can evaluate investment opportunities using FactSet's financial data alongside real-time web search and internal market reports, while energy teams can optimize trading strategies using S&amp;P Global's commodity data combined with insights from their strategy teams. Similarly, sales and product teams can spot emerging trends faster by leveraging IDC's industry intelligence with their customer data. By bringing critical data sources together in one place, organizations can move from insight to action with greater speed and confidence.<br /> <br /> Quick Research's third-party data integration is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). To learn more, read our <a href="https://docs.aws.amazon.com/quicksuite/latest/userguide/third-party-data">User Guide</a>.</p>

Read article →

Amazon S3 Block Public Access now supports organization-level enforcement

<p>Amazon S3 Block Public Access (BPA) now supports organization-level control through AWS Organizations, allowing you to standardize and enforce S3 public access settings across all accounts in your AWS organization through a single policy configuration.<br /> <br /> S3 Block Public Access at the organization level uses a single configuration that controls all public access settings across accounts within your organization. When you attach the policy at the root or organizational unit (OU) level of your organization, it propagates to all member accounts within that scope, and new member accounts automatically inherit the policy. Alternatively, you can choose to apply the policy to specific accounts for more granular control. To get started, navigate to the AWS Organizations console and use the "Block all public access" checkbox or JSON editor. Additionally, you can use AWS CloudTrail to audit or keep track of policy attachment as well as enforcement for member accounts.<br /> <br /> This feature is available in the AWS Organizations console as well as the AWS CLI/SDK, in all AWS Regions where AWS Organizations and Amazon S3 are supported, with no additional charges. For more information, visit the <a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_s3.html" target="_blank">AWS Organizations User Guide</a> and <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html" target="_blank">Amazon S3 Block Public Access documentation</a>.</p>
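If you manage policies programmatically, a sketch like the one below could create and attach the organization-level configuration with boto3. The `create_policy` and `attach_policy` calls are standard AWS Organizations APIs, but the policy `Type` string and the JSON shape of the policy content are assumptions for illustration only; take the exact values from the AWS Organizations User Guide linked above.

```python
import json
import boto3

orgs = boto3.client("organizations")

# The policy type name and the content shape below are placeholders; the real
# values are documented in the AWS Organizations User Guide.
bpa_policy = {
    "s3_block_public_access": {
        "block_public_acls": {"@@assign": "true"},
        "ignore_public_acls": {"@@assign": "true"},
        "block_public_policy": {"@@assign": "true"},
        "restrict_public_buckets": {"@@assign": "true"},
    }
}

policy = orgs.create_policy(
    Name="enforce-s3-block-public-access",
    Description="Enforce S3 Block Public Access across the organization",
    Type="S3_BLOCK_PUBLIC_ACCESS_POLICY",  # hypothetical type name
    Content=json.dumps(bpa_policy),
)

# Attaching at the organization root propagates the policy to every member account.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # your organization root or OU ID
)
```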

Read article →

Amazon Route 53 announces accelerated recovery for managing public DNS records

<p>Amazon Route 53 now offers an accelerated recovery option for managing DNS records in public hosted zones. Accelerated recovery targets a 60-minute recovery time objective (RTO) for regaining the ability to make DNS changes to your DNS records in Route 53 public hosted zones, if AWS services in US East (N. Virginia) become temporarily unavailable.<br /> <br /> The Route 53 public DNS service API is used by customers today to make changes to DNS records in order to facilitate software deployments, run infrastructure operations, and onboard new users. Customers in banking, financial technology (FinTech), and software-as-a-service (SaaS) in particular need a predictable and short RTO to meet business continuity and disaster recovery objectives. In the past, if AWS services in US East (N. Virginia) became unavailable, customers would not be able to modify or recreate DNS records to point users and internal services to updated endpoints. Now, when you enable the accelerated recovery option on your Route 53 public hosted zone, you can make changes to Route 53 public DNS records (Resource Record Sets) in that hosted zone soon after such an interruption, most often in less than one hour.<br /> <br /> Accelerated recovery for managing public DNS records is available globally, except in AWS GovCloud and Amazon Web Services in China. There is no additional charge for using this feature. To learn more about the accelerated recovery option, visit our <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/accelerated-recovery.html" target="_blank">documentation</a>.</p>

Read article →

Amazon SageMaker AI now supports EAGLE speculative decoding

<p>Amazon SageMaker AI now supports EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) speculative decoding to improve large language model inference throughput by up to 2.5x. This capability enables models to predict and validate multiple tokens simultaneously rather than one at a time, improving response times for AI applications.<br /> <br /> As customers deploy AI applications to production, they need capabilities to serve models with low latency and high throughput to deliver responsive user experiences. Data scientists and ML engineers lack efficient methods to accelerate token generation without sacrificing output quality or requiring complex model re-architecture, making it hard to meet performance expectations under real-world traffic. Teams spend significant time optimizing infrastructure rather than improving their AI applications. With EAGLE speculative decoding, SageMaker AI enables customers to accelerate inference throughput by allowing models to generate and verify multiple tokens in parallel rather than one at a time, maintaining the same output quality while dramatically increasing throughput. SageMaker AI automatically selects between EAGLE 2 and EAGLE 3 based on your model architecture, and provides built-in optimization jobs that use either curated datasets or your own application data to train specialized prediction heads. You can then deploy optimized models through your existing SageMaker AI inference workflow without infrastructure changes, enabling you to deliver faster AI applications with predictable performance.<br /> <br /> You can use EAGLE speculative decoding in the following AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Europe (Ireland), Asia Pacific (Singapore), and Europe (Frankfurt).<br /> <br /> To learn more about EAGLE speculative decoding, visit the AWS News Blog <a href="https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-ai-introduces-eagle-based-adaptive-speculative-decoding-to-accelerate-generative-ai-inference/">here</a> and the SageMaker AI documentation <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-optimize-create-job.html">here</a>.</p>
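The snippet below sketches how an EAGLE optimization job might be submitted with boto3. `create_optimization_job` and its required arguments are the existing SageMaker AI API; the `ModelSpeculativeDecodingConfig` block is a placeholder name for the EAGLE-specific configuration, and all ARNs and S3 paths are examples, so consult the optimization job documentation linked above for the exact shape.

```python
import boto3

sm = boto3.client("sagemaker")

# Skeleton of an optimization job that trains EAGLE draft heads for an existing
# model artifact. The "ModelSpeculativeDecodingConfig" key is a placeholder --
# the real EAGLE config block is defined in the SageMaker AI documentation.
sm.create_optimization_job(
    OptimizationJobName="llama-eagle-optimization",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",   # example ARN
    ModelSource={"S3": {"S3Uri": "s3://amzn-s3-demo-bucket/models/llama/"}},
    DeploymentInstanceType="ml.g5.12xlarge",
    OptimizationConfigs=[
        {"ModelSpeculativeDecodingConfig": {"Technique": "EAGLE"}}  # hypothetical shape
    ],
    OutputConfig={"S3OutputLocation": "s3://amzn-s3-demo-bucket/optimized-models/"},
    StoppingCondition={"MaxRuntimeInSeconds": 36000},
)
```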

Read article →

Amazon SageMaker AI Inference now supports bidirectional streaming

<p>Amazon SageMaker AI Inference now supports bidirectional streaming for real-time speech-to-text transcription, enabling continuous speech processing instead of batch input. Models can now receive audio streams and return partial transcripts simultaneously as users speak, enabling you to build voice agents that process speech with minimal latency.<br /> <br /> As customers build AI voice agents, they need real-time speech transcription to minimize delays between user speech and agent responses. Data scientists and ML engineers lack managed infrastructure for bidirectional streaming, making it necessary to build custom WebSocket implementations and manage streaming protocols. Teams spend weeks developing and maintaining this infrastructure rather than focusing on model accuracy and agent capabilities. With bidirectional streaming on Amazon SageMaker AI Inference, you can deploy speech-to-text models by invoking your endpoint with the new Bidirectional Stream API. The client opens an HTTP/2 connection to the SageMaker AI runtime, and SageMaker AI automatically creates a WebSocket connection to your container. Your container can then process streaming audio frames and return partial transcripts as they are produced. Any container implementing a WebSocket handler following the SageMaker AI contract works automatically, with real-time speech models such as Deepgram running without modifications. This eliminates months of infrastructure development, enabling you to deploy voice agents with continuous transcription while focusing your time on improving model performance.<br /> <br /> Bidirectional streaming is available in the following AWS Regions: Canada (Central), South America (São Paulo), Africa (Cape Town), Europe (Paris), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Israel (Tel Aviv), Europe (Zurich), Asia Pacific (Tokyo), AWS GovCloud (US-West), AWS GovCloud (US-East), Asia Pacific (Mumbai), Middle East (Bahrain), US West (Oregon), China (Ningxia), US West (Northern California), Asia Pacific (Sydney), Europe (London), Asia Pacific (Seoul), US East (N. Virginia), Asia Pacific (Hong Kong), US East (Ohio), China (Beijing), Europe (Stockholm), Europe (Ireland), Middle East (UAE), Asia Pacific (Osaka), Asia Pacific (Melbourne), Europe (Spain), Europe (Frankfurt), Europe (Milan), and Asia Pacific (Singapore).<br /> <br /> To learn more, visit the AWS News Blog <a href="https://aws.amazon.com/blogs/machine-learning/introducing-bidirectional-streaming-for-real-time-inference-on-amazon-sagemaker-ai/">here</a> and the SageMaker AI documentation <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-test-endpoints.html#realtime-endpoints-test-endpoints-sdk">here</a>.</p>

Read article →

AWS Lambda adds support for Node.js 24

<p>AWS Lambda now supports creating serverless applications using Node.js 24. Developers can use Node.js 24 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.<br /> <br /> Node.js 24 is the latest long-term support release of Node.js and is expected to be supported for security and bug fixes until April 2028. With this release, Lambda has simplified the developer experience by focusing on the modern async/await programming pattern; callback-based function handlers are no longer supported. You can use Node.js 24 with Lambda@Edge (in supported Regions), allowing you to customize low-latency content delivered through Amazon CloudFront. <a href="https://docs.powertools.aws.dev/lambda/typescript/latest/" target="_blank">Powertools for AWS Lambda (TypeScript)</a>, a developer toolkit to implement serverless best practices and increase developer velocity, also supports Node.js 24. You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Node.js 24.<br /> <br /> The Node.js 24 runtime is available in all Regions, including the AWS GovCloud (US) Regions and China Regions.<br /> <br /> For more information, including guidance on upgrading existing Lambda functions, see our <a href="https://aws.amazon.com/blogs/compute/node-js-24-runtime-now-available-in-aws-lambda/" target="_blank">blog post</a>. For more information about AWS Lambda, visit our <a href="https://aws.amazon.com/lambda/" target="_blank">product page</a>.</p>
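Moving an existing function to the new runtime is a one-line configuration change. The sketch below assumes the runtime identifier is `nodejs24.x`, following Lambda's established `nodejs<major>.x` naming; confirm the identifier in the Lambda runtimes table before rolling it out, and make sure the function's handler uses async/await rather than callbacks.

```python
import boto3

lambda_client = boto3.client("lambda")

# Switch an existing function to the Node.js 24 managed runtime.
lambda_client.update_function_configuration(
    FunctionName="my-service-handler",   # example function name
    Runtime="nodejs24.x",                # assumed identifier per Lambda naming convention
)

# Wait until the configuration update has propagated before publishing a version.
waiter = lambda_client.get_waiter("function_updated_v2")
waiter.wait(FunctionName="my-service-handler")
```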

Read article →

AWS Service Quotas now supports automatic quota management

<p>Today, we’re excited to announce the general availability of automatic quota management, a new capability in AWS Service Quotas. Automatic quota management already lets customers receive notifications when their quota usage approaches allocated limits and configure a preferred notification channel, such as email, SMS, or Slack, through the Service Quotas console or API. With this launch, the feature also adjusts AWS service quota values automatically and safely based on your usage, removing the operational burden of constantly monitoring quota usage and requesting quota increases across multiple AWS services, accounts, and Regions. Customers can now confidently scale their applications on AWS to meet growing demand without the risk of unexpected service interruptions due to quota exhaustion.<br /> <br /> This new capability is available at no additional cost in all AWS commercial Regions. To explore this feature, visit the Service Quotas console and the <a href="https://docs.aws.amazon.com/servicequotas/latest/userguide/automatic-management.html">AWS Service Quotas documentation</a>.</p>

Read article →

Introducing AWS Network Firewall Proxy in preview

<p>AWS introduces Network Firewall Proxy in public preview. You can use it to exert centralized controls against data exfiltration and malware injection. You can set up your Network Firewall Proxy in explicit mode in just a few clicks and filter the traffic going out from your applications and the responses that these applications receive.<br /> <br /> Network Firewall Proxy enables customers to efficiently manage and secure web and inter-network traffic. It protects your organization against attempts to spoof the domain name or the server name indication (SNI) and offers flexibility to set fine-grained access controls. You can use Network Firewall Proxy to restrict access from your applications to trusted domains or IP addresses, or block unintended responses from external servers. You can also turn on TLS inspection and set granular filtering controls on HTTP header attributes. Your Network Firewall Proxy offers comprehensive logs for monitoring your applications. You can enable logging and send logs to Amazon S3 and Amazon CloudWatch for detailed analysis and auditing.<br /> <br /> Try out AWS Network Firewall Proxy in your test environment today in the US East (Ohio) Region. The proxy is available at no charge during the public preview. For more information, see the <a href="https://aws.amazon.com/network-firewall/">AWS Network Firewall Proxy documentation</a>.</p>

Read article →

Manage Amazon SageMaker HyperPod clusters with the new Amazon SageMaker AI MCP Server

<p>The Amazon SageMaker AI MCP Server now supports tools that help you set up and manage HyperPod clusters. Amazon SageMaker HyperPod removes the undifferentiated heavy lifting involved in building generative AI models by quickly scaling model development tasks such as training, fine-tuning, or deployment across a cluster of AI accelerators. The SageMaker AI MCP Server now empowers AI coding assistants to provision and operate AI/ML clusters for model training and deployment.<br /> <br /> MCP servers in AWS provide a standard interface to enhance AI-assisted application development by equipping AI code assistants with real-time, contextual understanding of various AWS services. The SageMaker AI MCP server comes with tools that streamline end-to-end AI/ML cluster operations using the AI assistant of your choice—from initial setup through ongoing management. It enables AI agents to reliably set up HyperPod clusters orchestrated by Amazon EKS or Slurm, complete with prerequisites, powered by CloudFormation templates that optimize networking, storage, and compute resources. Clusters created via this MCP server are fully optimized for high-performance distributed training and inference workloads, leveraging best practice architectures to maximize throughput and minimize latency at scale. Additionally, it provides comprehensive tools for cluster and node management—including scaling operations, applying software patches, and performing various maintenance tasks. When used in conjunction with the AWS API MCP Server, AWS Knowledge MCP Server, and Amazon EKS MCP Server, you gain complete coverage of all SageMaker HyperPod APIs and can effectively troubleshoot common issues, such as diagnosing why a cluster node became inaccessible. For cluster administrators, these tools streamline daily operations. For data scientists, they enable you to set up AI/ML clusters at scale without requiring infrastructure expertise, allowing you to focus on what matters most—training and deploying models.<br /> <br /> You can manage your AI/ML clusters through the SageMaker AI MCP server in all AWS Regions where SageMaker HyperPod is available. To get started, visit the <a href="https://awslabs.github.io/mcp/servers/sagemaker-ai-mcp-server">AWS MCP Servers documentation</a>.</p>

Read article →

Amazon Quick Suite introduces scheduling for Quick Flows

<p>Amazon Quick Flows now supports scheduling, enabling you to automate repetitive workflows without requiring manual intervention. You can now configure Quick Flows to run automatically at specified times or intervals, improving operational efficiency and ensuring critical tasks execute consistently.<br /> <br /> You can schedule Quick Flows to run daily, weekly, monthly, or on custom intervals. This capability is great for automating routine and administrative tasks such as generating recurring reports from dashboards, summarizing open items assigned to you in external services, or generating daily meeting briefings before you head out to work.<br /> <br /> You can schedule any flow you have access to—whether you created it or it was shared with you. To schedule a flow, click the scheduling icon and configure your desired date, time, and frequency.<br /> <br /> Scheduling in Quick Flows is now available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions. There are no additional charges for using scheduled execution beyond standard Quick Flows usage.<br /> <br /> To learn more about configuring scheduled Quick Flows, please visit our <a href="https://docs.aws.amazon.com/quicksuite/latest/userguide/schedules-in-quick-flows.html">documentation</a>.</p>

Read article →

Amazon OpenSearch Service introduces Agentic Search

<p>Amazon OpenSearch Service launches Agentic Search, transforming how users interact with their data through intelligent, agent-driven search. Agentic Search introduces an intelligent agent-driven system that understands user intent, orchestrates the right set of tools, generates OpenSearch DSL (domain-specific language) queries, and provides transparent summaries of its decision-making process through a simple 'agentic' query clause and natural language search terms.<br /> <br /> Agentic Search automates OpenSearch query planning and execution, eliminating the need for complex search syntax. Users can ask questions in natural language like "Find red cars under $30,000" or "Show last quarter's sales trends." The agent interprets intent, applies optimal search strategies, and delivers results while explaining its reasoning process. The feature provides two agent types: conversational agents, which handle complex interactions with the ability to store conversations in memory, and flow agents for efficient query processing. The built-in QueryPlanningTool uses large language models (LLMs) to create DSL queries, making search accessible regardless of technical expertise. Users can manage Agentic Search through APIs or OpenSearch Dashboards to configure and modify agents. Agentic Search’s advanced settings allow you to connect with external MCP servers and use custom search templates.<br /> <br /> Support for agentic search is available for OpenSearch Service version 3.3 and later in all AWS Commercial and AWS GovCloud (US) Regions where OpenSearch Service is available. See <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">here</a> for a full listing of our Regions.<br /> <br /> Build agents and run agentic searches using the new Agentic Search use case available in the <a href="https://docs.opensearch.org/latest/vector-search/ai-search/building-agentic-search-flows/">AI Search Flows plugin</a>. To learn more about Agentic Search, visit the OpenSearch <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/agentic-search.html">technical documentation</a>.</p>
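A natural-language query reaches the agent through the new 'agentic' query clause. The sketch below uses the opensearch-py client against an example index; the clause's field names (`query_text`, `agent_id`) and the agent identifier are assumptions for illustration, so take the precise request shape from the Agentic Search documentation linked above.

```python
from opensearchpy import OpenSearch

# Connection details are placeholders; add authentication as appropriate for your domain.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

# The 'agentic' clause carries plain natural-language search terms. The field
# names "query_text" and "agent_id" are assumptions -- check the Agentic Search
# documentation for the exact syntax.
response = client.search(
    index="vehicles",
    body={
        "query": {
            "agentic": {
                "query_text": "Find red cars under $30,000",
                "agent_id": "my-search-agent",
            }
        }
    },
)
print(response["hits"]["hits"])
```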

Read article →

AWS Glue Data Quality now supports rule labeling for enhanced reporting

<p>Today, AWS announces the general availability of rule labels, a feature of AWS Glue Data Quality that lets you apply custom key-value pair labels to your data quality rules for improved organization, filtering, and targeted reporting. This enhancement allows you to categorize data quality rules by business context, team ownership, compliance requirements, or any custom taxonomy that fits your data quality and governance needs.<br /> <br /> Rule labels provide an effective way to organize and analyze data quality results. You can query results by specific labels to identify failing rules within particular categories, count rule outcomes by team or domain, and create focused reports for different stakeholders. For example, you can apply a "team=finance" label to all rules that pertain to the finance team and generate a customized report that shows quality metrics specific to that team. You can label high-priority rules with "criticality=high" to prioritize remediation efforts. Labels are authored as part of DQDL. You can query the labels as part of rule outcomes, row-level results, and API responses, making it easy to integrate with your existing monitoring and reporting workflows.<br /> <br /> AWS Glue Data Quality rule labeling is available in all commercial AWS Regions where <a href="https://docs.aws.amazon.com/glue/latest/dg/glue-data-quality.html">AWS Glue Data Quality</a> is available. See the AWS Region Table for more details. To learn more about rule labeling, see the AWS Glue Data Quality <a href="https://docs.aws.amazon.com/glue/latest/dg/dqdl.html#dqdl-labels">documentation</a>.</p>
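The sketch below creates a ruleset with boto3 and illustrates where labels would sit. The `create_data_quality_ruleset` call and the two rules are standard; the trailing `with labels {...}` clause is a hypothetical rendering of the label syntax, shown only to convey the key-value idea, so copy the real grammar from the DQDL label documentation linked above.

```python
import boto3

glue = boto3.client("glue")

# The two rules are standard DQDL. The `with labels {...}` clause is a
# hypothetical rendering of the new label syntax -- use the grammar from the
# DQDL documentation before running this.
ruleset = """
Rules = [
    IsComplete "invoice_id" with labels {"team": "finance", "criticality": "high"},
    ColumnValues "amount" > 0 with labels {"team": "finance"}
]
"""

glue.create_data_quality_ruleset(
    Name="finance-invoice-quality",
    Ruleset=ruleset,
    TargetTable={"DatabaseName": "finance_db", "TableName": "invoices"},
)
```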

Read article →

AWS Glue Data Quality now supports pre-processing queries

<p>Today, AWS announces the general availability of preprocessing queries for AWS Glue Data Quality, enabling you to transform your data before running data quality checks through AWS Glue Data Catalog APIs. This feature allows you to create derived columns, filter data based on specific conditions, perform calculations, and validate relationships between columns directly within your data quality evaluation process.</p> <p>Preprocessing queries provide enhanced flexibility for complex data quality scenarios that require data transformation before validation. You can create derived metrics such as total fees calculated from tax and shipping columns, limit the number of columns considered for data quality recommendations, or filter datasets to focus quality checks on specific data subsets. This capability eliminates the need for separate data preprocessing steps, streamlining your data quality workflows.</p> <p>AWS Glue Data Quality preprocessing queries are available through the AWS Glue Data Catalog APIs start-data-quality-rule-recommendation-run and start-data-quality-ruleset-evaluation-run, in all commercial AWS Regions where AWS Glue Data Quality is available. To learn more about preprocessing queries, see the <a href="https://docs.aws.amazon.com/glue/latest/dg/glue-data-quality.html">Glue Data Quality documentation</a>.</p>
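The sketch below shows where a preprocessing query would plug into an evaluation run. The `start_data_quality_ruleset_evaluation_run` arguments shown are the existing API; the commented-out keyword carrying the SQL is a placeholder name, since the real parameter name and shape are defined in the Glue Data Quality documentation linked above.

```python
import boto3

glue = boto3.client("glue")

glue.start_data_quality_ruleset_evaluation_run(
    DataSource={"GlueTable": {"DatabaseName": "sales_db", "TableName": "orders"}},
    Role="arn:aws:iam::111122223333:role/GlueDataQualityRole",   # example ARN
    RulesetNames=["orders-quality-ruleset"],
    # Placeholder parameter name: a preprocessing query could derive a
    # total_fees column and restrict checks to recent rows, for example:
    # PreprocessingQuery="SELECT *, tax + shipping AS total_fees FROM primary WHERE order_date > current_date - 30",
)
```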

Read article →

AWS Device Farm supports Managed Appium Endpoint and Environment Features

<p>AWS Device Farm enables web and mobile developers to test their applications using real mobile devices and desktop browsers. Today, we are announcing three new capabilities that make it easier to build better web and mobile experiences: a fully-managed <a href="https://appium.io/docs/en/latest/">Appium</a> endpoint, support for environment variables, and IAM role integration.</p> <p>With the new Appium endpoint, you can connect using just a few lines of code and run interactive tests on multiple physical devices directly from your IDE or local host. This feature works seamlessly with <a href="https://github.com/appium/appium-inspector">Appium Inspector</a> —both hosted and local versions—for all actions, including element inspection. Support for live video and log streaming enables faster feedback within your local workflow.<br /> <br /> Environment variables enable test filtering, test sharding, dynamic software version selection, and granular configuration of your test environment. You can pass simple key-value pairs to our test scheduling APIs, which are then configured as environment variables on the test host during runtime. This eliminates the need to maintain multiple test specification yaml files for different test scenarios and simplifies CI/CD pipelines by enabling dynamic test environment configuration.<br /> <br /> Additionally, Device Farm test hosts can now assume IAM roles to connect with other AWS services, enabling workflows such as uploading artifacts to Amazon S3 and logging test output to Amazon CloudWatch. Both environment variables and IAM roles can be persisted at the project level, reducing the maintenance overhead of passing them to each run.<br /> <br /> These features complement our existing <a href="https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-types.html">server-side execution</a> capabilities, giving you the scale, customizability and controls needed to run secure enterprise-grade workloads. Together, they help you author, debug, and test your mobile apps faster, whether working from your IDE, AWS Console, or other environments.</p> <p>To learn more, see <a href="https://docs.aws.amazon.com/devicefarm/latest/developerguide/appium-endpoint.html">Appium Testing</a>, <a href="https://docs.aws.amazon.com/devicefarm/latest/developerguide/custom-test-environments-iam-roles.html">Accessing other AWS resources</a>, and <a href="https://docs.aws.amazon.com/devicefarm/latest/developerguide/custom-test-environment-variables.html">Environment variables</a> in the <i>AWS Device Farm Developer Guide</i>. </p>
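Connecting to the managed endpoint from a local test script looks much like talking to a local Appium server. The sketch below uses the Appium Python client; the endpoint URL is a placeholder, and any credentials or extra capabilities Device Farm requires come from the Appium Testing guide linked above.

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options

# Placeholder endpoint: Device Farm provides the actual managed Appium URL and
# any required capabilities when you set up the endpoint.
options = UiAutomator2Options()
options.platform_name = "Android"

driver = webdriver.Remote(
    command_executor="https://<your-device-farm-appium-endpoint>/wd/hub",
    options=options,
)

print(driver.session_id)   # the session now runs on a real device in Device Farm
driver.quit()
```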

Read article →

Amazon SageMaker HyperPod now supports Spot Instances

<p>Amazon SageMaker HyperPod now supports Spot Instances, enabling customers to reduce GPU compute costs by up to 90% compared to On-Demand Instances on HyperPod. As AI workloads scale, optimizing infrastructure costs becomes increasingly critical. SageMaker HyperPod's Spot integration addresses this by allowing customers to automatically leverage spare EC2 capacity at significant discounts, while providing the managed AI experience customers enjoy on HyperPod.</p> <p>With Spot Instances, organizations can run fault-tolerant workloads cost-effectively at scale. You can combine Spot with On-Demand Instances to balance cost optimization with guaranteed availability. The feature is available on HyperPod EKS clusters and integrates with Karpenter for intelligent auto-scaling, automatically discovering available Spot capacity and handling instance interruptions.</p> <p>You can enable Spot Instances when creating instance groups through the CreateCluster API or the AWS Console. The feature supports all instance types available on HyperPod, including CPUs and GPUs. Capacity availability depends on supply from EC2 and varies by Region and instance type. Spot Instance support is available in all Regions where SageMaker HyperPod is currently available. To learn more, please refer to the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-spot.html">documentation</a>.</p>

Read article →

AWS IoT Core now supports IoT thing registry data retrieval from IoT rules

<p>AWS IoT Core announces a new capability to dynamically retrieve IoT thing registry data using an IoT rule, enhancing your ability to filter, enrich, and route IoT messages. Using the new <a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html#iot-sql-function-get-registry-data">get_registry_data()</a> inline rule function, you can access IoT thing registry data, such as device attributes, device type, and group membership, and leverage this information directly in IoT rules.<br /> <br /> For example, your rule can filter AWS IoT Core connectivity lifecycle events and then retrieve thing attributes (such as whether a device is marked "test" or "production") to inform routing of lifecycle events to different endpoints for downstream processing. You can also use this feature to enrich or route IoT messages with registry data from other devices. For instance, you can add a sensor’s threshold temperature from the IoT thing registry to the messages relayed by its gateway.<br /> <br /> To get started, connect your devices to AWS IoT Core and store your IoT device data in the IoT thing registry. You can then use IoT rules to retrieve your registry data. This capability is available in all AWS Regions where <a href="https://aws.amazon.com/iot-core/?nc=sn&amp;loc=2&amp;dn=3">AWS IoT Core</a> is available. For more information, refer to the <a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-thing-management.html">developer guide</a> and <a href="https://docs.aws.amazon.com/iot/latest/apireference/Welcome.html">API documentation</a>.</p>
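The rule below sketches the pattern: match connectivity lifecycle events, pull registry data for the connecting thing, and republish the enriched message. `create_topic_rule` and the republish action are the existing IoT APIs; the argument passed to `get_registry_data()` is an assumption for illustration, so check the IoT SQL reference linked above for the exact signature.

```python
import boto3

iot = boto3.client("iot")

# The argument to get_registry_data() is illustrative; see the IoT SQL
# reference for the function's exact signature.
sql = (
    "SELECT *, get_registry_data(clientId) AS thing_data "
    "FROM '$aws/events/presence/connected/+'"
)

iot.create_topic_rule(
    ruleName="enrich_lifecycle_events",
    topicRulePayload={
        "sql": sql,
        "awsIotSqlVersion": "2016-03-23",
        "actions": [
            {
                "republish": {
                    "roleArn": "arn:aws:iam::111122223333:role/iot-republish-role",  # example ARN
                    "topic": "devices/lifecycle/enriched",
                }
            }
        ],
    },
)
```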

Read article →

Amazon EC2 announces interruptible Capacity Reservations

<p>Today, Amazon EC2 announces interruptible Capacity Reservations to help you better utilize your reserved capacity and save costs. On-Demand Capacity Reservations (ODCRs) help you reserve compute capacity in a specific Availability Zone for any duration. When ODCRs are not in use, you can now make them temporarily available as interruptible ODCRs, enabling other workloads within your organization to utilize them while preserving your ability to reclaim the capacity for critical operations.<br /> <br /> By repurposing unused capacity as interruptible ODCRs, workloads suitable for flexible, fault-tolerant operations, such as batch processing, data analysis, and machine learning training, can benefit from temporarily available capacity. Reservation owners can reclaim their capacity at any time, while consumers of interruptible ODCRs receive an interruption notice before termination to allow for graceful shutdown or checkpointing.<br /> <br /> Interruptible ODCRs are now available at no additional cost to all Capacity Reservations customers. Refer to the <a href="https://builder.aws.com/build/capabilities/explore?f=eJxtksFuhCAQhl_FzFkPdbdt4m3TpA-w18UDa6cuCRUCaGs2vnsRFwXcG__3_84wMnfQyLEx-HXGlolOQwUXAr0ukGpTvBDIN1V69Yuh59Ti0e9Ci97cHiaVYRmvSq86ocwtDmzoSergUdJjkWUk47IbepI67NFxj1736G2P3h3CvmiwM4ryxxUCUHrg5tr8cKZVruHgj3sVe8sQDU0aWxA_ViuG5GlnEmQYT0r8_KUA9yC8vaZrhxrydcFOA2WcXhlnZrRbdurGLEI5SCUkKjN-Mm5Q2cydgCFQXWpbVdoDAaFci3am1hUOVo4N7sA6-6nqpWFXjtkHlbSxxbMzalQDNfOKE5jqCaZ_jYf47g&amp;tab=service-feature" target="_blank">AWS Capabilities by Region</a> website for the feature's regional availability. CloudFormation support will be coming soon. For more details, please refer to the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/interruptible-capacity-reservations.html" target="_blank">Capacity Reservations user guide</a>.</p>

Read article →

Amazon Quick Suite Embedded Chat is now available

<p>Today, AWS announces the general availability of <a href="https://aws.amazon.com/quicksuite/" target="_blank">Amazon Quick Suite</a> Embedded Chat, enabling you to embed Quick Suite's conversational AI, which combines structured data and unstructured knowledge in a single conversation - directly into your applications, eliminating the need to build conversational interfaces, orchestration logic, or data access layers from scratch.</p> <p>Quick Suite Embedded Chat solves a fundamental problem: users want answers where they work, not in another tool. Whether in a CRM, support console, or analytics portal, they need instant, contextual responses. Most conversational tools excel at either structured data or documents, analytics or knowledge bases, answering questions or performing actions—rarely all of the above. Quick Suite closes this gap. Now, users can reference a KPI, pull details from a file, check customer feedback, and trigger actions in one continuous conversation without leaving the embedded chat.<br /> <br /> Embedded Chat brings this unified experience into your applications with simple integration, either through 1-click embedding or through API-based iframes for registered users with your existing authentication. You can connect your Agentic Chat to your data through connectors to search SharePoint, websites, send Slack messages, or create Jira tasks and customize the Agent with your brand colors, communication style, and personalized greetings. Security always stays under your control as you choose what the agent accesses and explicitly scope all actions.<br /> <br /> Quick Suite Embedded Chat is available the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a>: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland), and we'll expand availability to additional AWS Regions over the coming months. There is no additional cost for Quick Suite Embedded Chat. Existing Quick Suite pricing is available <a href="https://aws.amazon.com/quicksuite/pricing/" target="_blank">here</a>.<br /> <br /> To learn more, see Embedding Amazon Quick Suite <a href="https://aws.amazon.com/blogs/business-intelligence/announcing-embedded-chat-in-amazon-quick-suite/" target="_blank">launch blog</a>. To get started with Amazon Quick Suite, visit the Amazon Quick Suite <a href="https://aws.amazon.com/quicksuite/" target="_blank">product page</a>.</p>

Read article →

Amazon CloudFront announces support for mutual TLS authentication

<p>Amazon CloudFront announces support for mutual TLS Authentication (mTLS), a security protocol that requires both the server and client to authenticate each other using X.509 certificates, enabling customers to validate client identities at CloudFront's edge locations. Customers can now ensure only clients presenting trusted certificates can access their distributions, helping protect against unauthorized access and security threats.<br /> <br /> Previously, customers had to spend ongoing effort implementing and maintaining their own client access management solutions, leading to undifferentiated heavy lifting. Now with the support for mutual TLS, customers can easily validate client identities at the AWS edge before connections are established with their application servers or APIs. Example use cases include B2B secure API integrations for enterprises and client authentication for IoT. For B2B API security, enterprises can authenticate API requests from trusted third parties and partners using mutual TLS. For IoT use cases, enterprises can validate that devices are authorized to receive proprietary content such as firmware updates. Customers can leverage their existing third-party Certificate Authorities or <a href="https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html">AWS Private Certificate Authority</a> to sign the X.509 certificates. With Mutual TLS, customers get the performance and scale benefits of CloudFront for workloads that require client authentication.<br /> <br /> Mutual TLS authentication is available to all CloudFront customers at no additional cost. Customers can configure mutual TLS with CloudFront using the AWS Management Console, CLI, SDK, CDK, and CloudFormation. For detailed implementation guidance and best practices, visit <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/mtls-authentication.html">CloudFront Mutual TLS (viewer) documentation</a>.</p>

Read article →

Amazon Aurora PostgreSQL introduces dynamic data masking

<p><a contenteditable="false" href="https://aws.amazon.com/rds/aurora/" style="cursor: pointer;">Amazon Aurora</a> PostgreSQL-Compatible Edition now supports dynamic data masking through the new pg_columnmask extension, allowing you to simplify the protection of sensitive data in your database. pg_columnmask extends Aurora's security capabilities by enabling column-level protection that complements PostgreSQL's native row-level security and column level grants. Using pg_columnmask, you can control access to sensitive data through SQL-based masking policies and define how data appears to users at query time based on their roles, helping you comply with data privacy regulations like GDPR, HIPAA, and PCI DSS.</p> <p>With pg_columnmask, you can create flexible masking policies using built-in or user-defined functions. You can completely hide information, replace partial values with wildcards, or define custom masking approaches. Further, you can apply multiple masking policies to a single column and control their precedence using weights. pg_columnmask helps protect data in complex queries with WHERE, JOIN, ORDER BY, or GROUP BY clauses. Data is masked at the database level during query processing, leaving stored data unmodified.</p> <p>pg_columnmask is available for Aurora PostgreSQL version 16.10 and higher, and 17.6 and higher in all AWS Regions where Aurora PostgreSQL is available. To learn more, review our <a href="https://aws.amazon.com/blogs/database/protect-sensitive-data-with-dynamic-data-masking-for-amazon-aurora-postgresql/">blog post</a> and visit <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Security.DynamicMasking.html">technical documentation</a>.&nbsp;</p>

Read article →

Claude Opus 4.5 now available in Amazon Bedrock

<p>Customers can now use Claude Opus 4.5 in Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models from leading AI companies. Opus 4.5 is Anthropic's newest model, setting new standards across coding, agentic workflows, computer use, and office tasks while making Opus-level intelligence accessible at one-third the cost.<br /> <br /> Opus 4.5 excels at professional software engineering tasks, achieving state-of-the-art performance on SWE-bench. The model handles ambiguity, reasons about tradeoffs, and can figure out fixes for bugs that require reasoning across multiple systems. It can help transform multi-day team development projects into hours-long tasks with improved multilingual coding capabilities. This generation of Claude spans the full development lifecycle: Opus 4.5 for production code and lead agents, Sonnet 4.5 for rapid iteration and scaled user experiences, and Haiku 4.5 for sub-agents and free-tier products.<br /> <br /> Beyond coding, the model powers agents that produce documents, spreadsheets, and presentations with consistency, professional polish, and domain awareness, making it ideal for finance and other precision-critical verticals. As Anthropic's best vision model yet, it unlocks workflows that depend on complex visual interpretation and multi-step navigation. Through the Amazon Bedrock API, Opus 4.5 introduces two new capabilities: tool search and tool use examples. Together, these updates enable Claude to navigate large tool libraries and accurately execute complex tasks. A new effort parameter, available in beta, lets you control how much effort Claude allocates across thinking, tool calls, and responses to balance performance with latency and cost.<br /> <br /> Claude Opus 4.5 is now available in Amazon Bedrock via global cross-Region inference in multiple locations. For the full list of available Regions, refer to the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html" target="_blank">documentation</a>. To get started with the model in Amazon Bedrock, read the <a href="https://aws.amazon.com/blogs/machine-learning/claude-opus-4-5-now-in-amazon-bedrock/" target="_blank">launch blog</a> or visit the <a href="https://console.aws.amazon.com/bedrock/" target="_blank">Amazon Bedrock console</a>.</p>
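Invoking the model goes through the standard Bedrock Converse API. In the sketch below the model identifier is a placeholder for the Opus 4.5 cross-Region inference profile; look up the exact ID in the model support documentation linked above.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# The model ID below is a placeholder for the Claude Opus 4.5 global
# cross-Region inference profile; take the exact identifier from the Bedrock
# model support documentation.
response = bedrock.converse(
    modelId="global.anthropic.claude-opus-4-5-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the tradeoffs of optimistic versus pessimistic locking."}],
        }
    ],
    inferenceConfig={"maxTokens": 1024},
)
print(response["output"]["message"]["content"][0]["text"])
```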

Read article →

AWS Glue announces catalog federation for remote Apache Iceberg catalogs

<p>AWS Glue announces the general availability of catalog federation for remote Iceberg catalogs. This capability provides direct and secure access to Iceberg tables stored in Amazon S3 and cataloged in remote catalogs using AWS analytics engines.<br /> <br /> With catalog federation, you can federate to remote Iceberg catalogs and query remote Iceberg tables using your preferred AWS analytics engines, without moving or copying tables. It synchronizes metadata in real time between the AWS Glue Data Catalog and remote catalogs when data teams query remote tables, which means that query results are always completely up to date. You can now choose the best price-performance for your workloads when analyzing remote Iceberg tables using your preferred AWS analytics engines, while maintaining consistent security controls when discovering or querying data. Catalog federation is supported by a wide variety of analytics engines, including Amazon Redshift, Amazon EMR, Amazon Athena, AWS Glue, third-party engines like Apache Spark, and Amazon SageMaker with serverless notebooks.<br /> <br /> Catalog federation uses AWS Lake Formation for access controls, allowing you to use fine-grained access controls, cross-account sharing, and trusted identity propagation when sharing remote catalog tables with other data consumers. Catalog federation integrates with catalog implementations that support the Iceberg REST specification.<br /> <br /> Catalog federation is available in the Lake Formation console and through the AWS Glue and Lake Formation SDKs and APIs. This feature is generally available in all AWS commercial Regions where AWS Glue and Lake Formation are available. With just a few clicks in the console, you can federate to remote catalogs, discover their databases and tables, grant permissions to access table data, and query remote Iceberg tables using AWS analytics engines. To learn more, visit the <a href="https://docs.aws.amazon.com/lake-formation/latest/dg/catalog-federation.html" target="_blank">documentation</a>.</p>

Read article →

Amazon CloudFront integrates with VPC IPAM to support BYOIP

<p>Amazon CloudFront now supports bringing your own IP addresses (BYOIP) for Anycast Static IPs via VPC IP Address Manager (IPAM). This capability enables network administrators to use their own public IPv4 address pools with CloudFront distributions, simplifying IP address management across AWS's global infrastructure.</p> <p>CloudFront typically uses rotating IP addresses to serve traffic. CloudFront Anycast Static IPs enables customers to provide a dedicated list of IP addresses to partners and customers, enhancing security and simplifying network management. Previously, customers implementing Anycast Static IPs received AWS-provided static IP addresses for their workloads. With IPAM's unified interface, customers can now create dedicated IP address pools using BYOIP and assign them to CloudFront Anycast Static IP lists. Customers do not need to change the existing IP address space for their applications when they migrate to CloudFront, thus maintaining existing allow-lists and branding.</p> <p>The feature is available within Amazon VPC IPAM in all commercial AWS Regions, excluding the AWS GovCloud (US) Regions, China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD). To learn more about the CloudFront BYOIP feature, view the <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/bring-your-own-ip-address-using-ipam.html">BYOIP CloudFront documentation</a>. For details on pricing, refer to the IPAM tab on the <a href="https://aws.amazon.com/vpc/pricing/">Amazon VPC Pricing Page</a>.</p>

Read article →

OpenSearch Service Enhances Log Analytics with New PPL Experience

<p>Today, AWS announces enhanced log analytics capabilities in Amazon OpenSearch Service, making Piped Processing Language (PPL) and natural language the default experience in OpenSearch UI's Observability workspace. This update combines proven pipeline syntax with simplified workflows to deliver an intuitive observability experience, helping customers analyze growing data volumes while controlling costs. The new experience includes 35+ new commands for deep analysis, faceted exploration, and natural language querying to help customers gain deeper insights across infrastructure, security, and business metrics.<br /> <br /> With this enhancement, customers can streamline their log analytics workflows using familiar pipeline syntax while leveraging advanced analytics capabilities. The solution includes enterprise-grade query capabilities, supporting advanced event correlation using natural language that help teams uncover meaningful patterns faster. Users can seamlessly move from query to visualization within a single interface, reducing mean time to detect and resolve issues. Admins can quickly stand up an end-to-end OpenTelemetry solution using OpenSearch's Get Started workflow in the AWS console. The unified workflow includes out-of-the-box OpenSearch Ingestion pipelines for OpenTelemetry data, making it easier for teams to get started quickly.<br /> <br /> Amazon OpenSearch UI is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Spain), Europe (Zurich), South America (São Paulo), and Canada (Central).<br /> <br /> To learn more about the new OpenSearch log analytics experience, visit the <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/observability.html">OpenSearch Service observability documentation</a> and start using these enhanced capabilities today in OpenSearch UI.</p>
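PPL queries can also be issued programmatically against the SQL/PPL plugin endpoint, which is handy for alerting scripts and notebooks. The sketch below uses opensearch-py's low-level transport; the index and field names are examples.

```python
from opensearchpy import OpenSearch

# Connection details are placeholders; add authentication as appropriate for your domain.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

# Count 5xx responses per service and sort by the error count, descending.
ppl = (
    "source=app_logs "
    "| where status >= 500 "
    "| stats count() as errors by service "
    "| sort - errors"
)

result = client.transport.perform_request("POST", "/_plugins/_ppl", body={"query": ppl})
print(result)
```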

Read article →

Amazon OpenSearch Service now supports OpenSearch version 3.3

<p>You can now run OpenSearch version 3.3 in Amazon OpenSearch Service. OpenSearch 3.3 introduces several improvements in areas like search performance and observability, and new functionality to make agentic AI integrations simpler and more powerful.</p> <p>This launch includes several improvements in vector search capabilities. First, with <a href="https://docs.opensearch.org/latest/vector-search/ai-search/agentic-search/index/" target="_blank">agentic search</a>, you can now achieve precise search results using natural language inputs without the need to construct complex domain-specific language (DSL) queries. Second, batch processing for the <a href="https://docs.opensearch.org/latest/search-plugins/searching-data/highlight/#the-semantic-highlighter" target="_blank">semantic highlighter</a> improves performance by reducing overhead latency and improving GPU utilization. Finally, enhancements to the Neural Search plugin make semantic search more efficient and provide optimization options for your specific data, performance, and relevance needs.</p> <p>This launch also introduces support for Apache Calcite as the default query engine for PPL, which delivers optimization capabilities, improvements to query processing efficiency, and an extensive library of new <a href="https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/index.rst" target="_blank">PPL commands and functions</a>. Additionally, this launch includes enhancements to the approximation framework that improve the responsiveness of paginated search results, real-time dashboards, and applications requiring deep pagination through large time-series or numeric datasets. Finally, the workload management plugin now allows you to group search traffic and isolate network resources. This prevents specific requests from overusing network resources and offers tenant-level isolation.</p> <p>For information on upgrading to OpenSearch 3.3, please see the <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/version-migration.html" target="_blank">documentation</a>. OpenSearch 3.3 is now available in all AWS Regions where Amazon OpenSearch Service is available.</p>

Read article →

AWS Lambda announces enhanced error handling capabilities for Kafka event processing

<p>AWS Lambda launches enhanced error handling capabilities for Amazon Managed Streaming for Apache Kafka (MSK) and self-managed Apache Kafka (SMK) event sources. These capabilities allow customers to build custom retry configurations, optimize retries of failed messages, and send failed events to a Kafka topic as an on-failure destination, enabling them to build resilient Kafka workloads with robust error-handling strategies.<br /> <br /> Customers use Kafka event source mappings (ESM) with their Lambda functions to build their mission-critical Kafka applications. Kafka ESM offers robust error handling of failed events by retrying events with exponential backoff and retaining failed events in on-failure destinations like Amazon SQS, Amazon S3, and Amazon SNS. However, customers need customized error handling to meet stringent business and performance requirements. With this launch, developers can now exercise precise control over failed event processing and leverage Kafka topics as an additional on-failure destination when using Provisioned mode for Kafka ESM. Customers can now define specific retry limits and time boundaries for retries, automatically discarding failed records beyond these limits to a customer-specified destination. They can also enable automatic retries of failed records within a batch and enhance their function code to report individual failed messages, optimizing the retry process.<br /> <br /> This feature is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Commercial Regions</a> where AWS Lambda’s Provisioned mode for Kafka ESM is available.<br /> <br /> To enable these capabilities, provide configuration parameters for your Kafka ESM through the ESM API, AWS Management Console, or AWS CLI. To learn more, read the <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-kafka-esm.html" target="_blank">Lambda ESM documentation</a> and <a href="https://aws.amazon.com/lambda/pricing/" target="_blank">AWS Lambda pricing</a>.</p>
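The sketch below shows the general shape of tightening retries and adding an on-failure destination on an existing Kafka event source mapping. `update_event_source_mapping`, `MaximumRetryAttempts`, and `DestinationConfig` are existing Lambda APIs; whether these are the exact knobs for the new Kafka controls, and how a Kafka topic destination ARN is expressed, should be confirmed in the Lambda ESM documentation linked above.

```python
import boto3

lambda_client = boto3.client("lambda")

# UUID identifies your existing Kafka event source mapping. The destination ARN
# format for a Kafka topic is a placeholder; see the Lambda ESM documentation.
lambda_client.update_event_source_mapping(
    UUID="14e0db71-0000-0000-0000-example",
    MaximumRetryAttempts=5,   # stop retrying a failed batch after five attempts
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:kafka:us-east-1:111122223333:topic/my-cluster/dlq-topic"
        }
    },
)
```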

Read article →

Amazon Connect flow modules now support custom inputs, outputs, and version management

<p>Amazon Connect flow modules now support custom inputs, outputs, and branches, along with version and alias management. With this launch, you can now define flexible parameters for your reusable flow modules to match your specific business logic. For example, you can create an authentication module that accepts a phone number and PIN as inputs, then returns the customer name and authentication status as outputs with branches such as "authenticated" or "not authenticated". All parameters are customizable to meet your specific needs.<br /> <br /> Additionally, advanced versioning and aliasing capabilities allow you to manage module updates more seamlessly. You can create immutable version snapshots and map aliases to specific versions. When you update an alias to point to a new version, all flows using that module automatically reference the updated version. These new features make flow modules more powerful and reusable, allowing you to build and maintain flows more efficiently.<br /> <br /> To learn more about these features, see the <a href="https://docs.aws.amazon.com/connect/latest/adminguide/contact-flow-modules.html" target="_blank">Amazon Connect Administrator Guide</a>. This feature is available in <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">all AWS Regions</a> that offer Amazon Connect. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the <a href="https://aws.amazon.com/connect/" target="_blank">Amazon Connect website</a>.</p>

Read article →

Amazon Connect now enables agents to send follow-up replies to email contacts

<p>Amazon Connect now allows agents to send follow-up replies to email contacts, making it easier to share additional information or continue assisting customers without starting a new thread. This capability preserves the full conversation history, helping agents maintain context and deliver consistent, seamless support.<br /> <br /> Amazon Connect Email is available in the US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">regions</a>. To learn more and get started, please refer to the help <a href="https://docs.aws.amazon.com/connect/latest/adminguide/setup-email-channel.html">documentation</a>, <a href="https://aws.amazon.com/connect/pricing/">pricing page</a>, or visit the <a href="https://aws.amazon.com/connect/">Amazon Connect</a> website.</p>

Read article →

Amazon SageMaker HyperPod now supports NVIDIA Multi-Instance GPU (MIG) for generative AI tasks

<p>Amazon SageMaker HyperPod now supports NVIDIA Multi-Instance GPU (MIG) technology, enabling administrators to partition a single GPU into multiple isolated GPU instances. This capability allows administrators to maximize resource utilization by running diverse, small generative AI (GenAI) tasks simultaneously on GPU partitions while maintaining performance and task isolation.<br /> <br /> Administrators can choose either the easy-to-use configuration setup on the SageMaker HyperPod console or a custom setup approach to enable fine-grained, hardware-isolated resources for specific task requirements that don't require full GPU capacity. They can also allocate compute quota to ensure fair and efficient distribution of GPU partitions across teams. With a real-time performance metrics and resource utilization monitoring dashboard across GPU partitions, administrators gain visibility to optimize resource allocation. Data scientists can now accelerate time-to-market by scheduling lightweight inference tasks and running interactive notebooks in parallel on GPU partitions, eliminating wait times for full GPU availability.<br /> <br /> This capability is currently available for Amazon SageMaker HyperPod clusters using the EKS orchestrator across the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>: US West (Oregon), US East (N. Virginia), US East (Ohio), US West (N. California), Canada (Central), South America (Sao Paulo), Europe (Stockholm), Europe (Spain), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Singapore).<br /> <br /> To learn more, visit the <a href="https://aws.amazon.com/sagemaker-ai/hyperpod/">SageMaker HyperPod webpage</a> and the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-eks-gpu-partitioning.html">SageMaker HyperPod documentation</a>.</p>

Read article →

AWS Elemental MediaTailor now supports HLS Interstitials for live streams

<p>AWS Elemental MediaTailor now supports HTTP Live Streaming (HLS) Interstitials for live streams, enabling broadcasters and streaming service providers to deliver seamless, personalized ad experiences across a wide range of modern video players. This capability allows customers to insert interstitial advertisements and promotions directly into live streams using the HLS Interstitials specification (RFC 8216), which is natively supported by popular players including HLS.js, Shaka Player, Bitmovin Player, and Apple devices running iOS 16.4, iPadOS 16.4, tvOS 16.4, and later.<br /> <br /> With HLS Interstitials, MediaTailor automatically generates the necessary metadata tags (Interstitial class EXT-X-DATERANGE with X-ASSET-LIST attributes) that signal to client players when and how to play interstitial content. This approach eliminates the need for custom player-side stitching logic, reducing development complexity and ensuring consistent playback behavior. The feature integrates with MediaTailor's existing server-side ad insertion (SSAI) capabilities, delivering frame-accurate transitions with no buffering between content and interstitials. Server-side beaconing continues to work with HLS Interstitials, ensuring ad tracking and measurement workflows remain intact.<br /> <br /> HLS Interstitials for live streams is particularly valuable for sports broadcasts, live news, and event streaming where precise ad timing and minimal latency are critical. The feature supports pre-roll and mid-roll insertion, giving customers flexibility in how they monetize their live content. This launch complements MediaTailor's existing <a href="https://aws.amazon.com/blogs/media/support-for-hls-interstitials-in-aws-elemental-mediatailor/" target="_blank">HLS Interstitials support for VOD</a>, rounding out support across Linear, Live, FAST, and VOD workflows. MediaTailor makes it easy to test and deploy—customers can rapidly enable or disable HLS Interstitials with a simple query parameter on the multi-variant manifest request, providing per playback session control without changing the underlying MediaTailor configuration.<br /> <br /> AWS Elemental MediaTailor HLS Interstitials for live streams is available today in all AWS Regions where MediaTailor operates. You pay only for the features you use, with no upfront commitments. To learn more and get started, visit the <a href="https://docs.aws.amazon.com/mediatailor/" target="_blank">AWS Elemental MediaTailor documentation</a> and the <a href="https://docs.aws.amazon.com/mediatailor/latest/ug/server-guided.html" target="_blank">HLS Interstitials implementation guide</a>.</p>

Read article →

Amazon MSK Replicator is now available in five additional AWS Regions

<p>You can now use Amazon MSK Replicator to replicate streaming data across Amazon Managed Streaming for Apache Kafka (<a href="https://aws.amazon.com/msk/" target="_blank">Amazon MSK</a>) clusters in five additional AWS Regions: Asia Pacific (Thailand), Mexico (Central), Asia Pacific (Taipei), Canada West (Calgary), Europe (Spain).<br /> <br /> MSK Replicator is a feature of Amazon MSK that enables you to reliably replicate data across Amazon MSK clusters in the same or different AWS Regions in a few clicks. With MSK Replicator, you can easily build regionally resilient streaming applications for increased availability and business continuity. MSK Replicator provides automatic asynchronous replication across MSK clusters, eliminating the need to write custom code, manage infrastructure, or set up cross-Region networking. MSK Replicator automatically scales the underlying resources so that you can replicate data on-demand without having to monitor or scale capacity. MSK Replicator also replicates the necessary Kafka metadata, including topic configurations, Access Control Lists (ACLs), and consumer group offsets. If an unexpected event occurs in a Region, you can fail over to the other AWS Region and seamlessly resume processing.<br /> <br /> You can get started with MSK Replicator from the Amazon MSK console or the AWS CLI. To learn more, visit the MSK Replicator <a href="https://aws.amazon.com/msk/features/msk-replicator/" target="_blank">product page</a>, <a href="https://aws.amazon.com/msk/pricing/" target="_blank">pricing page</a>, and <a href="https://docs.aws.amazon.com/msk/latest/developerguide/msk-replicator.html" target="_blank">documentation</a>.</p>
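<p>A minimal boto3 sketch of setting up cross-Region replication into one of the new Regions follows. The ARNs, subnets, security groups, and topic patterns are placeholders, and the nested request shape should be checked against the MSK Replicator API reference.</p>
<pre><code>
import boto3

kafka = boto3.client("kafka", region_name="eu-south-2")  # Europe (Spain)

kafka.create_replicator(
    ReplicatorName="orders-dr-replicator",
    ServiceExecutionRoleArn="arn:aws:iam::111122223333:role/msk-replicator-role",
    KafkaClusters=[
        {   # source cluster (placeholder ARN and networking)
            "AmazonMskCluster": {"MskClusterArn": "arn:aws:kafka:eu-west-1:111122223333:cluster/source/placeholder"},
            "VpcConfig": {"SubnetIds": ["subnet-aaa", "subnet-bbb"], "SecurityGroupIds": ["sg-111"]},
        },
        {   # target cluster (placeholder ARN and networking)
            "AmazonMskCluster": {"MskClusterArn": "arn:aws:kafka:eu-south-2:111122223333:cluster/target/placeholder"},
            "VpcConfig": {"SubnetIds": ["subnet-ccc", "subnet-ddd"], "SecurityGroupIds": ["sg-222"]},
        },
    ],
    ReplicationInfoList=[
        {
            "SourceKafkaClusterArn": "arn:aws:kafka:eu-west-1:111122223333:cluster/source/placeholder",
            "TargetKafkaClusterArn": "arn:aws:kafka:eu-south-2:111122223333:cluster/target/placeholder",
            "TargetCompressionType": "NONE",
            "TopicReplication": {"TopicsToReplicate": ["orders.*"]},
            "ConsumerGroupReplication": {"ConsumerGroupsToReplicate": [".*"]},
        }
    ],
)
</code></pre>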

Read article →

Amazon U7i instances now available in Asia Pacific (Jakarta) Region

<p>Starting today, Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are available in the Asia Pacific (Jakarta) region. U7i-6tb instances are part of the 7th generation of AWS High Memory instances and are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-6tb instances offer 6TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-6tb instances offer 448 vCPUs, support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/">High Memory instances page</a>.</p>

Read article →

Amazon Redshift now supports federated permissions across multi-warehouse architectures

<p>Amazon Redshift now supports federated permissions, which simplify permissions management across multiple Redshift data warehouses. Customers are adopting multi-warehouse architectures to scale and isolate workloads and are looking for simplified, consistent permissions management across warehouses. With Redshift federated permissions, you define data permissions once from any Redshift warehouse and automatically enforce them across all warehouses in the account.<br /> <br /> Amazon Redshift warehouses with federated permissions are auto-mounted in every Redshift warehouse, and you can use existing workforce identities with AWS IAM Identity Center or use existing IAM roles to query data across warehouses. Regardless of which warehouse is used for querying, row-level, column-level, and masking controls always apply automatically, delivering fine-grained access compliance. You can get started by registering a Redshift Serverless namespace or Redshift provisioned cluster with AWS Glue Data Catalog and start querying across warehouses using Redshift Query Editor V2 or any supported SQL client. You also gain horizontal scalability across multiple warehouses: you can add new warehouses without increasing governance complexity, because new warehouses automatically enforce permission policies and analysts immediately see all databases from registered warehouses.<br /> <br /> Amazon Redshift federated permissions is available at no additional cost in supported <a href="https://docs.aws.amazon.com/redshift/latest/dg/federated-permissions-considerations.html">AWS regions</a>. To learn more, visit the <a href="http://docs.aws.amazon.com/redshift/latest/dg/federated-permissions.html">Amazon Redshift</a> documentation.</p>

Read article →

AWS Glue launches Amazon DynamoDB connector with Spark DataFrame support

<p><a contenteditable="false" href="https://aws.amazon.com/glue/" style="cursor: pointer;" target="_blank">AWS Glue</a> now supports a new Amazon DynamoDB connector that works natively with Apache Spark DataFrames. This enhancement allows Spark developers to work directly with Spark DataFrames and share code easily across AWS Glue, Amazon EMR, and other Spark environments.<br /> <br /> Previously, developers working with DynamoDB data in AWS Glue were required to use the Glue-specific DynamicFrame object. With this new connector, developers can now reuse their existing Spark DataFrame code with minimal modifications. This change streamlines the process of migrating jobs to AWS Glue and simplifies data pipeline development. Additionally, the connector unlocks access to the full range of Spark DataFrame operations and the latest performance optimizations.<br /> <br /> The new connector is available in all AWS Commercial Regions where AWS Glue is available. To get started, visit the AWS Glue <a contenteditable="false" href="https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect-dynamodb-dataframe-support.html" style="cursor: pointer;" target="_blank">documentation</a>.</p>
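<p>For illustration, a minimal PySpark sketch of the DataFrame-based usage follows. It assumes the connector is exposed as a "dynamodb" DataFrame source with a "tableName" option and uses a placeholder table name; confirm the exact format and option keys in the linked documentation.</p>
<pre><code>
from pyspark.sql import SparkSession

# In a Glue job the SparkSession is typically provided; getOrCreate() also works.
spark = SparkSession.builder.getOrCreate()

# Read a DynamoDB table as a regular Spark DataFrame (format and option names assumed).
orders = (
    spark.read.format("dynamodb")
    .option("tableName", "orders")
    .load()
)

# From here on, standard DataFrame code runs unchanged on Glue, EMR, or elsewhere.
orders.filter(orders["status"] == "SHIPPED").groupBy("customer_id").count().show()
</code></pre>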

Read article →

Amazon Athena launches auto-scaling solution for Capacity Reservations

<p><a href="https://docs.aws.amazon.com/athena/" target="_blank">Amazon Athena</a> now offers an auto-scaling solution for Capacity Reservations that dynamically adjusts your reserved capacity based on workload demand. The solution uses <a href="https://aws.amazon.com/step-functions/" target="_blank">AWS Step Functions</a> to monitor utilization metrics and scale your Data Processing Units (DPUs) up or down according to the thresholds and limits you configure, helping you optimize costs while maintaining query performance and eliminating the need for manual capacity adjustments.<br /> <br /> You can customize scaling behavior by setting utilization thresholds, measurement frequency, and capacity limits to match your workload needs. The solution uses Step Functions to add or remove DPUs to any active Capacity Reservation based on capacity utilization metrics in <a href="https://aws.amazon.com/cloudwatch/" target="_blank">Amazon CloudWatch</a>. Capacity automatically scales up when utilization exceeds your high threshold and scales down when it falls below your low threshold - all while adhering to your defined limits. You can further customize the solution by modifying the <a href="https://aws.amazon.com/cloudformation/" target="_blank">Amazon CloudFormation</a> template to fit your specific requirements.<br /> <br /> The auto-scaling solution for Athena Capacity Reservations is available in AWS Regions where Capacity Reservations is supported. To get started, see <a href="https://docs.aws.amazon.com/athena/latest/ug/capacity-management-automatically-adjust-capacity.html" target="_blank">Automatically adjust capacity</a> in the Athena user guide.</p>

Read article →

Announcing notebooks with a built-in AI agent in Amazon SageMaker

<p>Amazon SageMaker introduces a new notebook experience that provides data and AI teams a high-performance, serverless programming environment for analytics and machine learning (ML) jobs. This helps customers quickly get started working with data without pre-provisioning data processing infrastructure. The new notebook gives data engineers, analysts, and data scientists one place to perform SQL queries, execute Python code, process large-scale data jobs, run ML workloads and create visualizations. A built-in AI agent&nbsp;accelerates development by generating code and SQL statements from natural language prompts while it guides users through their tasks. The notebook is backed by Amazon Athena for Apache Spark to deliver high-performance results, scaling from interactive SQL queries to petabyte-scale data processing.&nbsp;It’s available in the new one-click onboarding experience for Amazon SageMaker Unified Studio.</p> <p>Data engineers, analysts, and data scientists can flexibly combine&nbsp;SQL, Python, and natural language within a single interactive workspace. This removes the need to switch between different tools based on your workload. For example, you can start with SQL queries to explore your data, use Python for advanced analytics or to build ML models, or use natural language prompts to generate code automatically using the built-in AI agent.&nbsp;To get started, sign in to the console, find SageMaker, open SageMaker Unified Studio, and go to "Notebooks" in the navigation.</p> <p>You can use the SageMaker notebook feature in the following Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney).</p> <p>To learn more, read the <a href="https://aws.amazon.com/blogs/aws/new-one-click-onboarding-and-notebooks-with-ai-agent-in-amazon-sagemaker-unified-studio" style="cursor: pointer;">AWS News Blog</a> or see <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/notebooks.html" style="cursor: pointer;">SageMaker documentation</a>.</p>

Read article →

Introducing Amazon SageMaker Data Agent for analytics and AI/ML development

<p>Amazon SageMaker introduces a built-in AI agent that accelerates the development of data analytics and machine learning (ML) applications. SageMaker Data Agent is available in the new notebook experience in Amazon SageMaker Unified Studio and helps data engineers, analysts, and data scientists who spend significant time on manual setup tasks and boilerplate code when building analytics and ML applications. The agent generates code and execution plans from natural language prompts and integrates with data catalogs and business metadata to streamline the development process.<br /> <br /> SageMaker Data Agent works within the new&nbsp;<a href="https://aws.amazon.com/sagemaker/unified-studio/notebooks/" target="_blank">notebook experience</a>&nbsp;to break down complex analytics and ML tasks into manageable steps. Customers can describe objectives in natural language and the agent creates a detailed execution plan and generates the required SQL and Python code. The agent maintains awareness of the notebook context, including available data sources and catalog information, accelerating common tasks including data transformation, statistical analysis, and model development.<br /> <br /> To get started, log in to Amazon SageMaker and click on “Notebooks” on the left navigation. Amazon SageMaker Data Agent is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, read the <a href="https://aws.amazon.com/blogs/aws/new-one-click-onboarding-and-notebooks-with-ai-agent-in-amazon-sagemaker-unified-studio/" target="_blank">AWS News Blog</a>&nbsp;or visit the Amazon SageMaker <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/sagemaker-data-agent.html" target="_blank">documentation</a>.</p>

Read article →

Amazon Athena adds cost and performance controls for Capacity Reservations

<p><a href="https://aws.amazon.com/athena/" target="_blank">Amazon Athena</a> now gives you control over Data Processing Unit (DPU) usage for queries running on Capacity Reservations. You can now configure DPU settings at the workgroup or query level to balance cost efficiency, concurrency, and query-level performance needs.<br /> <br /> Capacity Reservations provides dedicated serverless processing capacity for your Athena queries. Capacity is measured in DPUs, and queries consume DPUs based on their complexity. Now you can set explicit DPU values for each query—ensuring small queries use only what they need while guaranteeing critical queries get sufficient resources for fast execution. The Athena console and API now return per-query DPU usage, helping you understand DPU usage and determine your capacity needs. These updates help you control per-query capacity usage, control query concurrency, reduce costs by eliminating over-provisioning, and deliver consistent performance for business-critical workloads.<br /> <br /> Cost and performance controls are available today in AWS Regions where Capacity Reservations is supported. To learn more, see <a href="https://docs.aws.amazon.com/athena/latest/ug/capacity-management-control-capacity-usage.html" target="_blank">Control capacity usage</a> in the Athena user guide.</p>

Read article →

Introducing one-click onboarding of existing datasets to Amazon SageMaker

<p>Amazon SageMaker introduces one-click onboarding of existing AWS datasets to Amazon SageMaker Unified Studio. This helps AWS customers to start working with their data in minutes, using their existing AWS Identity and Access Management (IAM) roles and permissions. Customers can start working with any data they have access to using a new serverless notebook with a built-in AI agent. This new notebook, which supports SQL, Python, Spark or natural language, gives data engineers, analysts, and data scientists a single high-performance interface to develop and run both SQL queries and code. Customers also have access to many other existing tools such as a Query Editor for SQL analysis, JupyterLab IDE, Visual ETL and workflows, and machine learning (ML) capabilities. The ML capabilities include the ability to discover foundation models from a centralized model hub, customize them with sample notebooks, use MLflow for experimentation, publish trained models in the model hub for discovery, and deploy them as inference endpoints for prediction.<br /> <br /> Customers can start directly from Amazon SageMaker, Amazon Athena, Amazon Redshift, and Amazon S3 Tables console pages, giving them a fast path from their existing tools and data to the simple experience in SageMaker Unified Studio. After clicking ‘Get started’ and specifying an IAM role, SageMaker prompts for specific policy updates and then automatically creates a project in SageMaker Unified Studio. The project is set up with all existing data permissions from AWS Glue Data Catalog, AWS Lake Formation, and Amazon S3, and a notebook and serverless compute are pre-configured to accelerate first use.<br /> <br /> To get started, simply click "Get Started" from the SageMaker console or open SageMaker Unified Studio from Amazon Athena, Amazon Redshift, or Amazon S3 Tables. One-click onboarding of existing datasets is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney).&nbsp;To learn more read the <a href="https://aws.amazon.com/blogs/aws/new-one-click-onboarding-and-notebooks-with-ai-agent-in-amazon-sagemaker-unified-studio/" target="_blank">AWS News Blog</a> or visit the Amazon SageMaker <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/iam-based-domains.html" target="_blank">documentation</a>.&nbsp;</p>

Read article →

AWS CloudFormation StackSets now supports deployment ordering

<p>AWS CloudFormation StackSets offers deployment ordering for auto-deployment mode, enabling you to define the sequence in which your stack instances automatically deploy across accounts and regions. This capability allows you to coordinate complex multi-stack deployments where foundational infrastructure must be provisioned before dependent application components. Organizations managing large-scale deployments can now ensure proper deployment ordering without manual intervention.<br /> <br /> When creating or updating a CloudFormation StackSet, you can specify up to 10 dependencies per stack instance using the new <b>DependsOn</b> parameter in the AutoDeployment configuration, allowing StackSets to automatically orchestrate deployments based on your defined relationships. For example, you can make sure that your networking and security stack instances complete deployment before your application stack instances begin, preventing deployment failures due to missing dependencies. StackSets includes built-in cycle detection to prevent circular dependencies and provides error messages to help resolve configuration issues.<br /> <br /> This feature is available in all AWS Regions where CloudFormation StackSets is available at no additional cost.<br /> <br /> Get started by creating or updating your StackSets auto-deployment option through the CLI, SDK, or the CloudFormation console to define dependencies using stack instance ARNs. To learn more about StackSets deployment ordering, check out the detailed feature walkthrough on the <a href="https://aws.amazon.com/blogs/devops/take-fine-grained-control-of-your-aws-cloudformation-stacksets-deployment-with-stackset-dependencies/" target="_blank">AWS DevOps Blog</a> or visit the <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html#stacksets-orgs-auto-deployment-considerations" target="_blank">AWS CloudFormation User Guide</a>.</p>
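<p>A minimal boto3 sketch follows. The AutoDeployment fields shown are the existing API; how the new DependsOn dependencies are expressed is shown only as a commented placeholder, since the exact request shape should be taken from the linked CloudFormation User Guide.</p>
<pre><code>
import boto3

cfn = boto3.client("cloudformation")

cfn.update_stack_set(
    StackSetName="application-stackset",   # placeholder
    UsePreviousTemplate=True,
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={
        "Enabled": True,
        "RetainStacksOnAccountRemoval": False,
        # New with this launch (shape assumed): list up to 10 stack instance ARNs
        # that must finish deploying before this StackSet's instances begin.
        # "DependsOn": ["arn:aws:cloudformation:us-east-1:111122223333:stackset/networking-stackset:placeholder"],
    },
)
</code></pre>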

Read article →

Amazon Athena for Apache Spark is now available in Amazon SageMaker notebooks

<p>Amazon SageMaker now supports Amazon Athena for Apache Spark, bringing the new notebook experience and a fast serverless Spark experience together within a unified workspace. Now, data engineers, analysts, and data scientists can easily query data, run Python code, develop jobs, train models, visualize data, and work with AI from one place, with no infrastructure to manage and per-second billing.<br /> <br /> Athena for Apache Spark scales in seconds to support any workload, from interactive queries to petabyte-scale jobs. Athena for Apache Spark now runs on Spark 3.5.6, the same high-performance Spark engine available across AWS, optimized for open table formats including Apache Iceberg and Delta Lake. It brings you new debugging features, real-time monitoring in the Spark UI, and secure interactive cluster communication through Spark Connect. As you use these capabilities to work with your data, Athena for Spark now enforces table-level access controls defined in AWS Lake Formation.</p> <p>Athena for Apache Spark is now available with Amazon SageMaker notebooks in&nbsp;US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, visit <a href="https://docs.aws.amazon.com/athena/latest/ug/notebooks-spark-release-versions.html#notebooks-spark-release-versions-spark-35">Apache Spark engine version 3.5</a>, read the <u><a href="https://aws.amazon.com/blogs/aws/new-one-click-onboarding-and-notebooks-with-ai-agent-in-amazon-sagemaker-unified-studio">AWS News Blog</a></u> or visit Amazon SageMaker <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/what-is-sagemaker-unified-studio.html">documentation</a>.&nbsp;Visit the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/notebooks.html">Getting Started</a> guide to try it from Amazon SageMaker notebooks.</p>
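<p>As a simple illustration of that unified experience, the following notebook cell sketch runs Spark SQL and then continues in Python on the result; the database and table names are placeholders, and the spark session is assumed to be provided by the notebook.</p>
<pre><code>
# Query with SQL, then keep working on the result as a DataFrame in Python.
daily = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM sales_db.orders            -- e.g. an Apache Iceberg table in the Glue Data Catalog
    GROUP BY order_date
""")

top_days = daily.orderBy(daily["revenue"].desc()).limit(10)
top_days.show()
</code></pre>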

Read article →

Amazon CloudWatch Container Insights now supports Neuron UltraServers on Amazon EKS

<p>Amazon CloudWatch Container Insights now supports Neuron UltraServers on Amazon EKS, providing enhanced observability for customers running large-scale, high-performance machine learning workloads on multi-instance nodes. This new capability enables data scientists and ML engineers to efficiently monitor and troubleshoot their containerized ML applications, offering aggregated metrics and simplified management across Neuron UltraServer groups.</p> <p>Neuron UltraServers combine multiple EC2 instances into a single logical server unit, optimized for machine learning workloads using AWS Trainium and Inferentia accelerators. Container Insights, a monitoring and diagnostics feature in Amazon CloudWatch, automatically collects metrics from containerized applications. With this launch, Container Insights introduces a new filter specifically for UltraServers in EKS environments. You can now select an UltraServer ID to view new aggregate metrics across all instances within that server, replacing the need to monitor individual instances separately. In addition to per-instance metrics, you can now view consolidated performance data for the entire UltraServer group, streamlining the monitoring of ML workloads running on AWS Neuron.</p> <p>Amazon CloudWatch Container Insights is available in all commercial AWS Regions and the AWS GovCloud (US) Regions.</p> <p>To get started, see <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-enhanced-EKS.html#Container-Insights-metrics-EKS-Neuron">AWS Neuron metrics for AWS Trainium and AWS Inferentia</a> in the Amazon CloudWatch User Guide.</p>

Read article →

Amazon ECS Managed Instances now available in AWS GovCloud (US) Regions

<p>Amazon Elastic Container Service (Amazon ECS) Managed Instances is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. ECS Managed Instances is a fully managed compute option designed to eliminate infrastructure management overhead while giving you access to the full capabilities of Amazon EC2. By offloading infrastructure operations to AWS, you get the application performance you want and the simplicity you need while reducing your total cost of ownership.<br /> <br /> Managed Instances dynamically scales EC2 instances to match your workload requirements and continuously optimizes task placement to reduce infrastructure costs. It also enhances your security posture through regular security patching initiated every 14 days. You can simply define your task requirements, such as the number of vCPUs, memory size, and CPU architecture, and Amazon ECS automatically provisions, configures, and operates the optimal EC2 instances within your AWS account using AWS-controlled access. You can also specify desired instance types in the Managed Instances Capacity Provider configuration, including GPU-accelerated, network-optimized, and burstable performance, to run your workloads on the instance families you prefer.<br /> <br /> To get started with ECS Managed Instances, use the AWS Console, Amazon ECS MCP Server, or your favorite infrastructure-as-code tooling to enable it in a new or existing Amazon ECS cluster. You will be charged for the management of compute provisioned, in addition to your regular Amazon EC2 costs. To learn more about ECS Managed Instances, visit the <a href="https://aws.amazon.com/ecs/managed-instances/">feature page</a>, <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ManagedInstances.html">documentation</a>, and <a href="https://aws.amazon.com/blogs/aws/announcing-amazon-ecs-managed-instances-for-containerized-applications">AWS News launch blog</a>.</p>

Read article →

Amazon RDS for Oracle is now available with Oracle Database Standard Edition 2 (SE2) License Included instances in Asia Pacific (Taipei) region

<p><a href="https://aws.amazon.com/rds/oracle/" target="_blank">Amazon Relational Database Service (Amazon RDS) for Oracle</a> now offers Oracle Database Standard Edition 2 (SE2) License Included R7i and M7i instances in Asia Pacific (Taipei) region.<br /> <br /> With Amazon RDS for Oracle SE2 License Included instances, you do not need to purchase Oracle Database licenses. You simply launch Amazon RDS for Oracle instances through the AWS Management Console, AWS CLI, or AWS SDKs, and there are no separate license or support charges. Review the AWS blog <a href="https://aws.amazon.com/blogs/database/rethink-oracle-standard-edition-two-on-amazon-rds-for-oracle/" target="_blank">Rethink Oracle Standard Edition Two on Amazon RDS for Oracle</a> to explore how you can lower cost and simplify operations by using Amazon RDS Oracle SE2 License Included instances for your Oracle databases.<br /> <br /> To learn more about pricing and regional availability, see <a href="https://aws.amazon.com/rds/oracle/pricing/" target="_blank">Amazon RDS for Oracle pricing</a>.</p>

Read article →

Amazon WorkSpaces Applications now supports IPv6

<p>Amazon WorkSpaces Applications now supports IPv6 for <a contenteditable="false" href="https://docs.aws.amazon.com/appstream2/latest/developerguide/allowed-domains.html" style="cursor: pointer;">WorkSpaces Applications domains</a> and external endpoints, allowing end users to connect to WorkSpaces Applications over IPv6 from IPv6-compatible devices (except for SAML authentication). This helps you meet IPv6 compliance requirements and eliminates the need for expensive networking equipment to handle address translation between IPv4 and IPv6.</p> <p>The Internet's growth is quickly consuming the remaining IPv4 address space. By supporting IPv6, WorkSpaces Applications helps customers streamline their network architecture: IPv6 offers a much larger address space and removes the need to manage overlapping address spaces in their VPCs. Customers can now base their applications on IPv6, ensuring their infrastructure is future-ready and compatible with existing IPv4 systems via a fallback mechanism.</p> <p>This feature is available at no additional cost in 16 AWS Regions, including US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Paris, Frankfurt, London, Ireland), Asia Pacific (Tokyo, Mumbai, Sydney, Seoul, Singapore), and South America (Sao Paulo), as well as AWS GovCloud (US-West, US-East). WorkSpaces Applications offers pay-as-you-go <a contenteditable="false" href="https://aws.amazon.com/appstream2/pricing/" style="cursor: pointer;">pricing</a>.</p> <p>To get started with WorkSpaces Applications, see <a contenteditable="false" href="https://aws.amazon.com/appstream2/getting-started/" style="cursor: pointer;">Getting Started with Amazon WorkSpaces Applications</a>. To enable this feature for your users, you must use the latest <a contenteditable="false" href="https://clients.amazonappstream.com/" style="cursor: pointer;">WorkSpaces Applications client</a> for Windows or macOS, or connect directly through web access. To learn more about the feature, please refer to the service <a contenteditable="false" href="https://docs.aws.amazon.com/appstream2/latest/developerguide/allowed-domains.html" style="cursor: pointer;">documentation</a>.</p>

Read article →

Amazon Route 53 DNS service adds support for IPv6 API service endpoint

<p>Starting today, Amazon Route 53 supports dual stack for the Route 53 DNS service API endpoint at route53.global.api.aws, enabling you to connect from Internet Protocol Version 6 (IPv6), Internet Protocol Version 4 (IPv4), or dual stack clients. The existing Route 53 DNS service IPv4 API endpoint will remain available for backwards compatibility.<br /> <br /> Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service that allows you to register a domain, set up DNS records corresponding to your infrastructure, perform global traffic routing using Traffic Flow, and use Route 53 health checks to monitor the health and performance of your applications and resources. Due to the continued growth of the internet, IPv4 address space is being exhausted and customers are transitioning to IPv6 addresses. Now, clients can connect via IPv6 to the Route 53 DNS service API endpoint, enabling organizations to meet compliance requirements and removing the added complexity of IP address translation between IPv4 and IPv6.<br /> <br /> Support for IPv6 on the Route 53 DNS service API endpoint is available in all AWS Commercial Regions at no additional cost. You can get started with this feature through the AWS CLI or <a href="https://console.aws.amazon.com/route53/home">AWS Management Console</a>. To learn more about which Route 53 features are accessible via the route53.global.api.aws service endpoint, visit <a href="https://docs.aws.amazon.com/general/latest/gr/r53.html">this page</a> and to learn more about the Route 53 DNS service, visit our <a href="https://docs.aws.amazon.com/Route53/latest/APIReference/Welcome.html">documentation</a>.</p>
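<p>For example, an SDK client can be pointed at the dual stack endpoint explicitly. This is a minimal boto3 sketch; whether the connection then uses IPv6 or IPv4 depends on your client's network stack.</p>
<pre><code>
import boto3

# Override the default endpoint with the dual stack endpoint named in this post.
route53 = boto3.client("route53", endpoint_url="https://route53.global.api.aws")

# Ordinary Route 53 calls work unchanged.
zones = route53.list_hosted_zones()["HostedZones"]
print([zone["Name"] for zone in zones])
</code></pre>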

Read article →

Amazon Quick Sight dashboard customization now includes tables and pivot tables

<p><a href="https://aws.amazon.com/quicksuite/quicksight/">Amazon Quick Sight</a> has expanded customization capabilities to include tables and pivot tables in dashboards. This update enables readers to personalize their data views by sorting, reordering, hiding/showing, and freezing columns—all without requiring updates from dashboard authors.<br /> <br /> These capabilities are especially valuable for teams that need to tailor dashboard views for different analytical needs and collaborate across departments. For example, sales managers can quickly sort by revenue to identify top performers, while finance teams can freeze account columns to maintain context in large datasets.<br /> <br /> These new customization features are now available in Amazon Quick Sight Enterprise Edition across all <a href="https://docs.aws.amazon.com/quicksight/latest/user/regions-qs.html">supported Amazon Quick Sight regions</a>.&nbsp;Learn how to get started with these new customization features in <a href="https://aws.amazon.com/blogs/business-intelligence/empower-readers-with-customizable-tables-and-pivot-tables-in-amazon-quick-sight/">our blog post.</a></p>

Read article →

AWS Transfer Family web apps now support VPC endpoints

<p>AWS Transfer Family web apps now support Virtual Private Cloud (VPC) endpoints, enabling private access to your web app at no additional charge. This allows your users to securely access and manage files in Amazon S3 through a web browser while maintaining all traffic within your VPC.<br /> <br /> Transfer Family web apps provide a simple and secure web interface for accessing your data in Amazon S3. With this launch, your workforce users can connect through your VPC directly, AWS Direct Connect, or VPN connections. This enables you to support internal use cases requiring strict security controls, such as regulated document workflows and sensitive data sharing, while leveraging the security controls and network configurations already defined in your VPC. You can manage access using security groups based on source IP addresses, implement subnet-level filtering through NACLs, and ensure all file transfers remain within your private network boundary, maintaining full visibility and control over all network traffic.<br /> <br /> VPC endpoints for web apps are available in <a href="https://docs.aws.amazon.com/transfer/latest/userguide/web-app.html#webapp-regions" target="_blank">select AWS Regions</a> at no additional charge. To get started, visit the AWS Transfer Family console, or use the AWS CLI/SDK. To learn more, visit the <a href="https://docs.aws.amazon.com/transfer/latest/userguide/create-webapp-in-vpc.html" target="_blank">Transfer Family User Guide</a>.</p>

Read article →

Amazon Connect now supports multi-skill agent scheduling

<p>Amazon Connect now enables you to optimize scheduling based on agents’ multiple specialized skills. You can now maximize agent utilization across multiple dimensions such as departments, languages, and customer tiers by intelligently matching agents with multiple skills to forecasted demand. You can now also preserve multi-skilled agents for high-value interactions when needed most. For example, bilingual agents can now be strategically scheduled to cover peak periods for high-value French language queues that frequently experience staffing shortages, while handling general inquiries during off-peak times.<br /> <br /> This feature is available in all <a contenteditable="false" href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#optimization_region" style="cursor: pointer;" target="_blank">AWS Regions</a> where Amazon Connect agent scheduling is available. To learn more about multi-skill agent scheduling, visit the <a contenteditable="false" href="https://aws.amazon.com/blogs/contact-center/implementing-multi-skill-forecasting-and-scheduling-in-amazon-connect/" style="cursor: pointer;" target="_blank">blog</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/connect/latest/adminguide/multiskill-forecasting.html" style="cursor: pointer;" target="_blank">admin guide</a>.</p>

Read article →

AWS Lambda announces new capabilities to optimize costs up to 90% for Provisioned mode for Kafka ESM

<p>AWS Lambda announces new capabilities for Provisioned mode for Kafka event source mappings (ESMs) that allow you to group your Kafka ESMs and support a higher density of event pollers, enabling you to optimize costs up to 90% for your Kafka ESMs. With these cost optimization capabilities, you can now use Provisioned mode for all your Kafka workloads, including those with lower throughput requirements, while benefiting from features like throughput controls, schema validation, filtering of Avro/Protobuf events, low-latency invocations, and enhanced error handling.<br /> <br /> Customers use Provisioned mode for Kafka ESM to fine-tune the throughput of the ESM by provisioning and auto-scaling polling resources called event pollers. Charges are calculated using a billing unit called Event Poller Unit (EPU). Each EPU supports up to 20 MB/s of throughput capacity and, by default, 4 event pollers. With this launch, each EPU automatically supports a default of 10 event pollers for low-throughput use cases, improving utilization of your EPU capacity. Additionally, you can now group multiple Kafka ESMs within the same Amazon VPC to share EPU capacity by configuring the new PollerGroupName parameter. With these enhancements, you can reduce your EPU costs up to 90% for your low-throughput workloads. These optimizations enable you to maintain the performance benefits of Provisioned mode while significantly reducing costs for applications with varying throughput requirements.<br /> <br /> This feature is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Commercial Regions</a> where AWS Lambda’s Provisioned mode for Kafka ESM is available. <br /> <br /> Starting today, existing Provisioned mode for Kafka ESMs will automatically benefit from improved packing of low-throughput event pollers. You can implement ESM grouping through the Lambda ESM API, AWS Console, CLI, SDK, CloudFormation, and SAM by configuring the PollerGroupName parameter along with minimum and maximum event poller settings. For more information about these new capabilities and pricing details, visit the <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-kafka-esm.html">Lambda ESM documentation</a> and <a href="https://aws.amazon.com/lambda/pricing/">AWS Lambda pricing</a>.&nbsp;</p>
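<p>A minimal boto3 sketch of grouping two low-throughput Kafka ESMs follows. MinimumPollers and MaximumPollers are the existing Provisioned mode fields; placing PollerGroupName alongside them inside ProvisionedPollerConfig is an assumption based on this announcement, and the ESM UUIDs are placeholders.</p>
<pre><code>
import boto3

lambda_client = boto3.client("lambda")

# Point two Kafka ESMs in the same VPC at one poller group so they share EPU capacity.
for esm_uuid in ["esm-uuid-orders", "esm-uuid-payments"]:
    lambda_client.update_event_source_mapping(
        UUID=esm_uuid,
        ProvisionedPollerConfig={
            "MinimumPollers": 1,
            "MaximumPollers": 4,
            "PollerGroupName": "shared-low-throughput-group",  # assumed field placement
        },
    )
</code></pre>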

Read article →

Announcing AWS Lambda Kafka event source mapping integration in Amazon MSK Console

<p>AWS announces Lambda’s Kafka event source mapping (ESM) integration in the <a href="https://aws.amazon.com/msk/">Amazon MSK</a> Console, streamlining the process of connecting MSK topics to Lambda functions. This capability allows you to simply provide your topic and target function in the MSK Console while the integration handles ESM configuration automatically, enabling you to trigger Lambda functions from MSK topics without switching consoles.<br /> <br /> Customers use MSK as an event source for Lambda functions to build responsive event-driven Kafka applications. Previously, configuring MSK as an event source required navigating between MSK and Lambda consoles to provide parameters like cluster details, authentication method, and network configuration. The new integrated experience brings Lambda ESM configuration directly into the MSK Console with a simplified interface requiring only target function and topic name as mandatory fields. The integration handles ESM creation with optimized defaults for authentication and event polling configurations, and can automatically generate the <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-msk-permissions.html">required Lambda execution role permissions</a> for MSK cluster access. To optimize latency and throughput, and to remove the need for networking setup, the integration uses <a href="https://docs.aws.amazon.com/lambda/latest/dg/kafka-scaling-modes.html">Provisioned Mode</a> for ESM as the recommended default. These improvements streamline MSK integration with Lambda and reduce configuration errors, enabling you to quickly get started with your MSK and Lambda applications.<br /> <br /> This feature is generally available in all AWS Commercial Regions where both Amazon MSK and AWS Lambda are available, except Asia Pacific (Thailand), Asia Pacific (Malaysia), Israel (Tel Aviv), Asia Pacific (Taipei), and Canada West (Calgary).<br /> <br /> You can configure Lambda’s Kafka event source mapping from the MSK Console by navigating to your MSK cluster and providing the topic, Lambda function, and optional fields under the Lambda integration tab. Standard <a href="https://aws.amazon.com/lambda/pricing/">Lambda pricing</a> and <a href="https://aws.amazon.com/msk/pricing">MSK pricing</a> apply. To learn more, read the <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-msk.html">Lambda developer guide</a> and the <a href="https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html">MSK developer guide</a>.</p>

Read article →

Announcing Amazon ECS Express Mode

<p>Today, AWS announces Amazon Elastic Container Service (Amazon ECS) Express Mode, a new feature that empowers developers to rapidly launch containerized applications, including web applications and APIs. ECS Express Mode makes it easy to orchestrate and manage the cloud architecture for your application, while maintaining full control over your infrastructure resources.<br /> <br /> Amazon ECS Express Mode streamlines the deployment and management of containerized applications on AWS, allowing developers to focus on delivering business value through their containerized applications. Every Express Mode service automatically receives an AWS-provided domain name, making your application immediately accessible without additional configuration. Applications using ECS Express Mode incorporate AWS operational best practices, serve either public or private HTTPS requests, and scale in response to traffic patterns. Traffic is distributed through Application Load Balancers (ALBs), and ECS Express Mode automatically consolidates up to 25 services behind a single ALB when appropriate. ECS Express Mode uses intelligent rule-based routing to maintain isolation between services while efficiently utilizing the ALB resource. All resources provisioned by ECS Express Mode remain fully accessible in your account, ensuring you never sacrifice control or flexibility. As your application requirements evolve, you can directly access and modify any infrastructure resource, leveraging the complete feature set of Amazon ECS and related services without disruption to your running applications.<br /> <br /> To get started, just provide your container image, and ECS Express Mode handles the rest by deploying your application in Amazon ECS and auto-generating a URL. Amazon ECS Express Mode is available now in all AWS Regions at no additional charge. You pay only for the AWS resources created to run your application. To deploy a new ECS Express Mode service, use the Amazon ECS Console, SDK, CLI, CloudFormation, CDK, or Terraform. For more information, see the <a contenteditable="false" href="https://aws.amazon.com/blogs/aws/build-production-ready-applications-without-infrastructure-complexity-using-amazon-ecs-express-mode" style="cursor: pointer;">AWS News blog</a>, or the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/express-service-overview.html" style="cursor: pointer;">documentation</a>.</p>

Read article →

Amazon EKS introduces Provisioned Control Plane

<p>Today, Amazon Elastic Kubernetes Service (EKS) introduced Provisioned Control Plane, a new feature that gives you the ability to select your cluster's control plane capacity to ensure predictable, high performance for the most demanding workloads. With Provisioned Control Plane, you can pre-provision the desired control plane capacity from a set of well-defined scaling tiers, ensuring the control plane is always ready to handle traffic spikes or unpredictable bursts. These new scaling tiers unlock significantly higher cluster performance and scalability, allowing you to run ultra-scale workloads in a single cluster.<br /> <br /> Provisioned Control Plane ensures your cluster's control plane is ready to support workloads that require minimal latency and high performance during anticipated high-demand events like product launches, holiday sales, or major sporting and entertainment events. It also ensures consistent control plane performance across development, staging, production, and disaster recovery environments, so the behavior you observe during testing accurately reflects what you'll experience in production or during failover events. Finally, it enables you to run massive-scale workloads such as AI training/inference, high-performance computing, or large-scale data processing jobs that require thousands of worker nodes in a single cluster.<br /> <br /> To get started with Amazon EKS Provisioned Control Plane, use the EKS APIs, AWS Console, or infrastructure as code tooling to enable it in a new or existing EKS cluster. To learn more about EKS Provisioned Control Plane, visit the EKS Provisioned Control Plane <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-provisioned-control-plane-introduction.html" target="_blank">documentation</a> and EKS <a href="https://aws.amazon.com/eks/pricing/" target="_blank">pricing page</a>.</p>

Read article →

AWS Organizations now supports upgrade rollout policy for Amazon Aurora and Amazon RDS

<p>Today, AWS Organizations announces support for upgrade rollout policy, a new capability that helps customers stagger automatic upgrades across their Amazon Aurora (MySQL-Compatible Edition and PostgreSQL-Compatible Edition) and Amazon Relational Database Service (Amazon RDS) databases, including RDS for MySQL, RDS for PostgreSQL, RDS for MariaDB, RDS for SQL Server, RDS for Oracle, and RDS for Db2. This capability eliminates the operational overhead of coordinating automatic minor version upgrades either manually or through custom tools across hundreds of resources and accounts, while giving customers peace of mind by ensuring upgrades are first tested in less critical environments before being rolled out to production.<br /> <br /> With upgrade rollout policy, you can define upgrade sequences using simple orders (first, second, last) applied through account-level policies or resource tags. When new minor versions become eligible for automatic upgrade, the policy ensures upgrades start with development environments, allowing you to validate changes before proceeding to more critical environments. AWS Health notifications between phases and built-in validation periods help you monitor progress and ensure stability throughout the upgrade process. You can also disable automatic progression at any time if issues are detected, giving you complete control over the upgrade journey.<br /> <br /> This feature is available in all AWS commercial Regions and AWS GovCloud (US) Regions, supporting automatic minor version upgrades for Amazon Aurora and Amazon RDS database engines. You can manage upgrade policies using the AWS Management Console, AWS CLI, AWS SDKs, AWS CloudFormation, or AWS CDK. For Amazon RDS for Oracle, the upgrade rollout policy supports automatic minor version upgrades for engine versions released after January 2026.<br /> <br /> To learn more about automatic minor version upgrades, see the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/RDS.Maintenance.AMVU.UpgradeRollout.html">Amazon RDS</a> and <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Maintenance.AMVU.UpgradeRollout.html">Aurora</a> user guides. For more information about upgrade rollout policy, see <a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_upgrade_rollout.html">Managing organization policies with AWS Organizations (Upgrade rollout policy)</a>.</p>

Read article →

AWS Cost Anomaly Detection accelerates anomaly identification

<p>AWS Cost Anomaly Detection now features an improved detection algorithm that enables faster identification of unusual spending patterns. The enhanced algorithm analyzes your AWS spend using rolling 24-hour windows, comparing current costs against equivalent time periods from previous days each time AWS receives updated cost and usage data.<br /> <br /> The enhanced algorithm addresses two common challenges in cost pattern analysis. First, it removes the delay in anomaly detection caused by comparing incomplete calendar-day costs against historical daily totals. The rolling window always compares full 24-hour periods, enabling faster identification of unusual patterns. Second, it provides more accurate comparisons by evaluating costs against similar times of day, accounting for workloads that have different morning and evening usage patterns. These improvements help reduce false positives while enabling faster, more accurate anomaly detection.<br /> <br /> This enhancement to AWS Cost Anomaly Detection is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more about this new feature, AWS Cost Anomaly Detection, and how to reduce your risk of spend surprises, visit the AWS Cost Anomaly Detection <a href="https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/">product page</a> and <a href="https://docs.aws.amazon.com/cost-management/latest/userguide/billing-getting-started.html">getting started guide</a>.</p>

Read article →

AWS Transfer Family announces Terraform module to integrate with a custom identity provider

<p>The&nbsp;<a href="https://github.com/aws-ia/terraform-aws-transfer-family">AWS Transfer Family Terraform module</a>&nbsp;now supports deploying Transfer Family endpoints with a custom identity provider (IdP) for authentication and access control. This allows you to automate and streamline the deployment of Transfer Family servers integrated with your existing identity providers.<br /> <br /> AWS Transfer Family provides fully-managed file transfers over SFTP, AS2, FTPS, FTP, and web browser-based interfaces for AWS storage services. Using this new module, you can now use Terraform to provision Transfer Family server resources using your custom authentication systems, eliminating manual configurations and enabling repeatable deployments that scale with your business needs. The module is built on the open&nbsp;source&nbsp;<a href="https://github.com/aws-samples/toolkit-for-aws-transfer-family/tree/main/solutions/custom-idp">Custom IdP solution</a>&nbsp;which provides standardized integration with widely-used identity providers and includes built-in security controls such as multi-factor authentication, audit logging, and per-user IP allowlisting. To help you get started, the Terraform module includes an end-to-end example using Amazon Cognito user pools.&nbsp;<br /> <br /> Customers can get started by using the new module from the&nbsp;<a href="https://registry.terraform.io/modules/aws-ia/transfer-family/aws/latest">Terraform Registry</a>. To learn more about the Transfer Family Custom IdP solution, visit the&nbsp;<a href="https://docs.aws.amazon.com/transfer/latest/userguide/custom-idp-toolkit.html">user guide</a>. To see all the regions where Transfer Family is available, visit the&nbsp;<a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region table</a>.</p>

Read article →

Announcing AWS Compute Optimizer automation rules

<p>Today, we are introducing automation rules, a new feature in AWS Compute Optimizer that enables you to optimize Amazon Elastic Block Store (EBS) volumes at scale. With automation rules, you can streamline the process of cleaning up unattached EBS volumes and upgrading volumes to the latest-generation volume types, saving cost and improving performance across your cloud infrastructure.<br /> <br /> Automation rules let you automatically apply optimization recommendations on a recurring schedule when they match your criteria. You can set criteria like AWS Region to target specific geographies and Resource Tags to distinguish between production and development workloads. Configure rules to run daily, weekly, or monthly, and AWS Compute Optimizer will continuously evaluate new recommendations against your criteria. A new dashboard allows you to summarize automation events over time, examine detailed step history, and estimate savings achieved. If you need to reverse an action, you can do so directly from the same dashboard.<br /> <br /> AWS Compute Optimizer automation rules are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (São Paulo).<br /> <br /> To get started, navigate to the new Automation section in the AWS Compute Optimizer console, visit the AWS Compute Optimizer <a href="https://docs.aws.amazon.com/compute-optimizer/latest/ug/automation.html" target="_blank">user guide documentation</a>, or read the <a href="https://aws.amazon.com/blogs/aws-cloud-financial-management/introducing-automated-amazon-ebs-volume-optimization-in-aws-compute-optimizer/" target="_blank">announcement blog</a> to learn more.</p>

Read article →

Amazon EKS and Amazon ECS announce fully managed MCP servers in preview

<p>Today, <a href="https://aws.amazon.com/eks/">Amazon Elastic Kubernetes Service</a> (EKS) and <a href="https://aws.amazon.com/ecs/">Amazon</a><a href="https://aws.amazon.com/ecs/"> Elastic Container Service</a> (ECS) announced fully managed MCP servers enabling AI powered experiences for development and operations in preview. MCP (Model Context Protocol) provides a standardized interface that enriches AI applications with real-time, contextual knowledge of EKS and ECS clusters, enabling more accurate and tailored guidance throughout the application lifecycle, from development through operations. With this launch, EKS and ECS now offer fully managed MCP servers hosted in the AWS cloud, eliminating the need for local installation and maintenance. The fully managed MCP servers provide enterprise-grade capabilities like automatic updates and patching, centralized security through AWS IAM integration, comprehensive audit logging via AWS CloudTrail, and the proven scalability, reliability, and support of AWS.<br /> <br /> The fully managed Amazon EKS and ECS MCP servers enable developers to easily configure AI coding assistants like Kiro CLI, Cursor, or Cline for guided development workflows, optimized code generation, and context-aware debugging. Operators gain access to a knowledge base of best practices and troubleshooting guidance derived from extensive operational experience managing clusters at scale.<br /> <br /> To learn more about the Amazon EKS MCP server preview, visit EKS MCP server <a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-mcp-introduction.html">documentation</a> and launch <a href="https://aws.amazon.com/blogs/containers/introducing-the-fully-managed-amazon-eks-mcp-server-preview/">blog post</a>. To learn more about the Amazon ECS MCP server preview, visit ECS MCP server <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-mcp-introduction.html">documentation</a> and launch <a href="https://aws.amazon.com/blogs/containers/accelerate-container-troubleshooting-with-the-fully-managed-amazon-ecs-mcp-server-preview/">blog post</a>.</p>

Read article →

Amazon ECR now supports managed container image signing

<p>Amazon ECR now supports managed container image signing to enhance your security posture and eliminate the operational overhead of setting up signing. Container image signing allows you to verify that images are from trusted sources. With managed signing, ECR simplifies setting up container image signing to just a few clicks in the ECR Console or a single API call.<br /> <br /> To get started, create a signing rule with an AWS Signer signing profile that specifies parameters such as signature validity period, and which repositories ECR should sign images for. Once configured, ECR automatically signs images as they are pushed using the identity of the entity pushing the image. ECR leverages AWS Signer for signing operations, which handles key material and certificate lifecycle management including generation, secure storage, and rotation. All signing operations are logged through CloudTrail for full auditability.<br /> <br /> ECR managed signing is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a> where AWS Signer is available. To learn more, visit the <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-signing.html" target="_blank">documentation</a>.</p>
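A minimal sketch of the AWS Signer side of this setup: put_signing_profile is a real AWS Signer API and Notation-OCI-SHA384-ECDSA is its container-signing platform, but the profile name and validity period below are illustrative, and the ECR signing rule itself is then created in the ECR console or via its API as described above.

```python
import boto3

signer = boto3.client("signer")

# Create a signing profile for container images (Notation/OCI platform).
# The profile name and validity period below are illustrative values.
response = signer.put_signing_profile(
    profileName="ecr_container_signing",
    platformId="Notation-OCI-SHA384-ECDSA",
    signatureValidityPeriod={"value": 12, "type": "MONTHS"},
)
print(response["profileVersionArn"])
```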

Read article →

Amazon EMR 7.12 now supports the Apache Iceberg v3 table format

<p>Amazon EMR 7.12 is now available, featuring the new Apache Iceberg v3 table format with Apache Iceberg 1.10. This release enables you to reduce costs when deleting data, strengthen governance and compliance through better tracking of row-level changes, and enhance data security with more granular data access control.<br /> <br /> With Iceberg v3, you can delete data cost-effectively because Iceberg v3 marks deleted rows without rewriting entire files, speeding up your data pipelines while reducing storage costs. You get better governance and compliance capabilities through automatic tracking of every row’s creation and modification history, creating the audit trails needed for regulatory requirements and change data capture. You can enhance data security with table-level encryption, helping you meet privacy regulations for your most sensitive data.<br /> <br /> With Apache Spark 3.5.6 included in this release, you can leverage these Iceberg 1.10 capabilities for building robust data lakehouse architectures on Amazon S3. This release also includes support for data governance operations across your Iceberg tables using AWS Lake Formation, as well as Apache Trino 476.<br /> <br /> Amazon EMR 7.12 is available in all AWS Regions that support Amazon EMR. To learn more about the Amazon EMR 7.12 release, visit the <a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-7120-release.html">Amazon EMR 7.12 release documentation</a>.</p>
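As an illustrative PySpark sketch for an EMR 7.12 cluster with an Iceberg catalog already configured (the catalog, database, and table names are hypothetical), creating a table on the v3 format and issuing a row-level delete looks like this:

```python
from pyspark.sql import SparkSession

# Assumes the cluster was launched with the Iceberg classification enabled
# and a catalog named "glue_catalog" configured; names here are illustrative.
spark = SparkSession.builder.appName("iceberg-v3-example").getOrCreate()

spark.sql("""
    CREATE TABLE glue_catalog.sales.orders (
        order_id BIGINT,
        status   STRING,
        amount   DOUBLE
    )
    USING iceberg
    TBLPROPERTIES ('format-version' = '3')
""")

# Row-level deletes are recorded as deletes rather than full file rewrites.
spark.sql("DELETE FROM glue_catalog.sales.orders WHERE status = 'cancelled'")
```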

Read article →

Announcing a Fully Managed Appium Endpoint for AWS Device Farm

<p>AWS Device Farm enables mobile and web developers to test their apps using real mobile devices and desktop browsers. Starting today, you can connect to a fully managed <a href="https://appium.io/docs/en/latest/">Appium</a> endpoint using only a few lines of code and run interactive tests on multiple physical devices directly from your IDE or local machine. This feature also seamlessly works with third-party tools such as <a href="https://github.com/appium/appium-inspector">Appium Inspector </a>— both hosted and local versions — for all actions including element inspection.</p> <p>Support for live video and log streaming enables you to get faster test feedback within your local workflow. It complements our existing <a href="https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-types.html">server-side execution</a> which gives you the scale and control to run secure enterprise-grade workloads. Taken together, Device Farm now offers you the ability to author, inspect, debug, test, and release mobile apps faster, whether from your IDE, AWS Console, or other environments. </p> <p>To learn more, see <a href="https://docs.aws.amazon.com/devicefarm/latest/developerguide/appium-endpoint.html">Appium Testing</a> in <i>AWS Device Farm Developer Guide</i>.</p>
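A minimal sketch of driving a remote test with the Appium Python client; the endpoint URL, device name, and app location below are placeholders, and the actual managed endpoint plus authentication details come from the Device Farm developer guide linked above.

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options

# Placeholder endpoint; obtain the real managed Appium endpoint and
# credentials from AWS Device Farm as described in the developer guide.
APPIUM_ENDPOINT = "https://example-devicefarm-appium-endpoint.amazonaws.com"

options = UiAutomator2Options()
options.platform_name = "Android"
options.device_name = "example-device"          # illustrative capability
options.app = "https://example.com/my-app.apk"  # illustrative app location

driver = webdriver.Remote(command_executor=APPIUM_ENDPOINT, options=options)
try:
    # Interact with the app exactly as you would against a local Appium server.
    driver.find_element("accessibility id", "login_button").click()
finally:
    driver.quit()
```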

Read article →

AWS Payment Cryptography announces support for post-quantum cryptography to secure data in transit

<p>Today, AWS Payment Cryptography announces support for hybrid post-quantum (PQ) TLS to secure API calls. With this launch, customers can future-proof transmissions of sensitive data and commands using ML-KEM post-quantum cryptography.<br /> <br /> Enterprises operating highly regulated workloads wish to reduce post-quantum risks from “harvest now, decrypt later”. Long-lived data in transit can be recorded today, then decrypted in the future when a sufficiently capable quantum computer becomes available. With today’s launch, AWS Payment Cryptography joins data protection services such as AWS Key Management Service (KMS) in addressing this concern by supporting PQ-TLS.<br /> <br /> To get started, simply ensure that your application depends on a version of the AWS SDK or a browser that supports PQ-TLS. For detailed guidance by language and platform, visit the <a href="https://docs.aws.amazon.com/payment-cryptography/latest/userguide/pqtls-details.html" target="_blank">PQ-TLS enablement</a> documentation. Customers can also validate that ML-KEM was used to secure the TLS session for an API call by reviewing tlsDetails for the corresponding CloudTrail event in the console or a configured CloudTrail trail.<br /> <br /> These capabilities are generally available in all AWS Regions at no added cost. To get started with PQ-TLS and Payment Cryptography, see our post-quantum TLS <a href="https://docs.aws.amazon.com/payment-cryptography/latest/userguide/pqtls.html" target="_blank">guide</a>. For more information about PQC at AWS, please see <a href="https://youtu.be/SG9ndQWH8S4?t=458" target="_blank">PQC shared responsibility</a>.</p>
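One hedged way to review those CloudTrail events from code is to scan recent management events and print the tlsDetails block for Payment Cryptography calls; the lookup window and the event-source match below are illustrative, and the exact field that surfaces the negotiated ML-KEM key agreement is described in the PQ-TLS documentation linked above.

```python
import json
import boto3

cloudtrail = boto3.client("cloudtrail")

# Scan recent events and print TLS details for Payment Cryptography calls.
for event in cloudtrail.lookup_events(MaxResults=50)["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    if "payment-cryptography" in detail.get("eventSource", ""):
        print(detail["eventName"], detail.get("tlsDetails", {}))
```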

Read article →

Amazon EMR Serverless now supports Apache Spark 4.0.1 (preview)

<p>Amazon EMR Serverless now supports Apache Spark 4.0.1 (preview). With Spark 4.0.1, you can build and maintain data pipelines more easily with ANSI SQL and VARIANT data types, strengthen compliance and governance frameworks with Apache Iceberg v3 table format, and deploy new real-time applications faster with enhanced streaming capabilities. This enables your teams to reduce technical debt and iterate more quickly, while ensuring data accuracy and consistency.<br /> <br /> With Spark 4.0.1, you can build data pipelines with standard ANSI SQL, making it accessible to a larger set of users who don't know programming languages like Python or Scala. Spark 4.0.1 natively supports JSON and semi-structured data through VARIANT data types, providing flexibility for handling diverse data formats. You can strengthen compliance and governance through Apache Iceberg v3 table format, which provides transaction guarantees and tracks how your data changes over time, creating the audit trails you need for regulatory requirements. You can deploy real-time applications faster with improved streaming controls that let you manage complex stateful operations and monitor streaming jobs more easily. With this capability, you can support use cases like fraud detection and real-time personalization.<br /> <br /> Apache Spark 4.0.1 is available in preview in all regions where EMR Serverless is available, excluding China and AWS GovCloud (US) regions. To learn more about Apache Spark 4.0.1 on Amazon EMR, visit the <a href="https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/release-version-emr-spark-8.0-preview.html">Amazon EMR Serverless release notes</a>, or get started by creating an EMR application with Spark 4.0.1 from the <a href="https://console.aws.amazon.com/emr/serverless">AWS Management Console</a>.</p>
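A minimal boto3 sketch of creating an EMR Serverless application on the preview release; the release label string below is an assumption, so take the exact value from the release notes linked above before using it.

```python
import boto3

emr_serverless = boto3.client("emr-serverless")

# NOTE: the release label below is a placeholder for the Spark 4.0.1 preview;
# check the EMR Serverless release notes for the exact label to use.
response = emr_serverless.create_application(
    name="spark4-preview-app",
    releaseLabel="emr-8.0.0-preview",   # assumed value, verify before use
    type="SPARK",
)
print(response["applicationId"])
```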

Read article →

AWS Application Load Balancer now supports Health Check Logs

<p>AWS Application Load Balancer (ALB) now supports Health Check Logs, which allow you to send detailed target health check log data directly to your designated Amazon S3 bucket. This optional feature captures comprehensive target health check status, timestamp, target identification data, and failure reasons.<br /> <br /> Health Check Logs provide complete visibility into target health status with precise failure diagnostics, enabling faster troubleshooting without contacting AWS Support. You can analyze targets' health patterns over time, determine exactly why instances were marked unhealthy, and significantly reduce mean time to resolution for target health investigations. Logs are automatically delivered to your S3 bucket every 5 minutes with no additional charges beyond standard S3 storage costs.<br /> <br /> This feature is available in all AWS Commercial Regions, AWS GovCloud (US) Regions, and AWS China Regions where Application Load Balancer is offered. You can enable Health Check Logs through the AWS Management Console, AWS CLI, or programmatically using the AWS SDK. Learn more about Health Check Logs for ALBs in the <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-monitoring.html">AWS documentation</a>.</p>
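A hedged sketch of enabling this from the SDK: modify_load_balancer_attributes is the standard Elastic Load Balancing v2 call, but the health-check-log attribute keys shown below are assumptions modeled on the access/connection log attributes, so confirm the exact keys in the documentation linked above.

```python
import boto3

elbv2 = boto3.client("elbv2")

# ARN, bucket, and the attribute keys below are placeholders/assumptions;
# verify the exact health check log attribute keys in the ALB documentation.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/1234567890abcdef",
    Attributes=[
        {"Key": "health_check_logs.s3.enabled", "Value": "true"},          # assumed key
        {"Key": "health_check_logs.s3.bucket", "Value": "my-log-bucket"},  # assumed key
    ],
)
```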

Read article →

AWS Security Token Service Now Supports Internet Protocol version 6 (IPv6)

<p>AWS Security Token Service (STS) now supports Internet Protocol version 6 (IPv6) addresses via new dual-stack endpoints. You can connect to STS over the public internet using IPv6, IPv4, or dual-stack (both IPv4 and IPv6) clients. Dual-stack support is also available when you access STS endpoints privately from your Amazon Virtual Private Cloud (VPC) using AWS PrivateLink, allowing you to invoke STS APIs without traversing the public internet.<br /> <br /> Support for dual-stack STS endpoints is available in all AWS Commercial Regions, AWS GovCloud (US) Regions, and China Regions. To get started, configure your STS client to use the new dual-stack endpoints using the configuration instructions in the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_dual-stack_endpoint_support.html">IAM user guide</a>.</p>
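With the AWS SDK for Python, for example, opting into the dual-stack endpoint is a single client configuration flag; the Region shown is illustrative.

```python
import boto3
from botocore.config import Config

# Request the dual-stack (IPv4 + IPv6) STS endpoint for this client.
sts = boto3.client(
    "sts",
    region_name="us-east-1",
    config=Config(use_dualstack_endpoint=True),
)
print(sts.get_caller_identity()["Arn"])
```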

Read article →

Amazon AppStream 2.0 now supports Internet Protocol Version 6 (IPv6)

<p>Amazon WorkSpaces Applications now supports IPv6 for <a href="https://docs.aws.amazon.com/appstream2/latest/developerguide/allowed-domains.html">WorkSpaces Applications domains</a> and external endpoints, allowing end users to connect to WorkSpaces Applications over IPv6 from IPv6 compatible devices (except SAML authentication). This helps you meet IPv6 compliance requirements and eliminates the need for expensive networking equipment to handle address translation between IPv4 and IPv6.</p> <p>The Internet's growth is consuming IPv4 addresses quickly. WorkSpaces Applications, by supporting IPv6, assists customers in streamlining their network architecture. This support offers a much larger address space and removes the necessity to manage overlapping address spaces in their VPCs. Customers can now base their applications on IPv6, ensuring their infrastructure is future-ready and compatible with existing IPv4 systems via a fallback mechanism.</p> <p>This feature is available at no additional cost in 16 AWS Regions, including US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Paris, Frankfurt, London, Ireland), Asia Pacific (Tokyo, Mumbai, Sydney, Seoul, Singapore), and South America (Sao Paulo) and AWS GovCloud (US-West, US-East). WorkSpaces Applications offers pay-as-you go <a href="https://aws.amazon.com/appstream2/pricing/">pricing</a>.</p> <p>To get started with WorkSpaces Applications, see <a href="https://aws.amazon.com/appstream2/getting-started/">Getting Started with Amazon WorkSpaces Applications</a>. To enable this feature for your users, you must use the latest <a href="https://clients.amazonappstream.com/">WorkSpaces Applications client</a> for Windows, macOS or directly through web access. To learn more about the feature, please refer to the service <a href="https://docs.aws.amazon.com/appstream2/latest/developerguide/allowed-domains.html">documentation</a>.</p>

Read article →

Amazon RDS for SQL Server now supports Resource Governor

<p><a href="https://aws.amazon.com/rds/sqlserver/" target="_blank">Amazon Relational Database Service (Amazon RDS) for SQL Server</a>&nbsp;now supports resource governor, a Microsoft SQL Server feature that enables customers to optimize database performance by managing how different workloads consume compute resources. Customers can use resource governor on RDS for SQL Server Enterprise Edition database instances to prevent resource-intensive queries from impacting critical workloads, implement predictable performance in multi-tenant environments, and efficiently manage resource allocation during peak usage periods.<br /> <br /> RDS for SQL Server provides stored procedures that allow customers to implement resource governor configurations such as resource pools, workload groups, and classifier functions. Using these features, customers can allocate and control CPU, memory, and I/O resources for different database workloads within a single RDS for SQL Server instance. For more information about configuring and using resource governor, refer to the&nbsp;<a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.ResourceGovernor.html" target="_blank">Amazon RDS for SQL Server User Guide</a>. Resource governor is available with SQL Server Enterprise Edition in all AWS Regions where Amazon RDS for SQL Server is available.&nbsp;</p>

Read article →

AWS Backup now supports Amazon FSx Intelligent-Tiering

<p>AWS Backup now supports Amazon FSx Intelligent-Tiering, a storage class that delivers fully elastic file storage that automatically scales up and down with your workloads.</p> <p>The FSx Intelligent-Tiering storage class is available for Amazon FSx for Lustre and Amazon FSx for OpenZFS file systems and combines performance and pay-for-what-you-use elasticity with automated cost optimization in a single solution. With this integration, you can now protect OpenZFS and Lustre file systems using FSx Intelligent-Tiering through AWS Backup's centralized backup management capabilities. Customers with existing backup plans for Amazon FSx do not need to make any changes, as all scheduled backups will continue to work as expected.</p> <p>AWS Backup support is available in all AWS Regions where FSx Intelligent-Tiering is available. For a full list of supported Regions, see the region availability documentation for <a href="https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/available-aws-regions.html">Amazon FSx for OpenZFS</a> and <a href="https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-fsx-lustre.html#persistent-deployment-regions">Amazon FSx for Lustre</a>.</p> <p>To learn more about AWS Backup for Amazon FSx, visit the AWS Backup <a href="https://aws.amazon.com/backup/">product page</a>, <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-supported-services.html#working-with-fsx">technical documentation</a>, and <a href="https://aws.amazon.com/backup/pricing/">pricing page</a>. For more information on the AWS Backup features available across AWS Regions, see <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html#features-by-region">AWS Backup documentation</a>. To get started, visit the <a href="https://console.aws.amazon.com/backup">AWS Backup console</a>.</p>
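For teams scripting their protection, an on-demand backup of an FSx file system through AWS Backup looks like the sketch below; the vault name, file system ARN, and IAM role are placeholders.

```python
import boto3

backup = boto3.client("backup")

# All identifiers below are placeholders; substitute your own vault,
# FSx file system ARN, and IAM role.
response = backup.start_backup_job(
    BackupVaultName="Default",
    ResourceArn="arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    IamRoleArn="arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
)
print(response["BackupJobId"])
```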

Read article →

Amazon Bedrock Data Automation now supports synchronous image processing

<p>Amazon Bedrock Data Automation (BDA) now supports synchronous API processing for images, enabling you to receive structured insights from visual content with low latency. Synchronous processing for images complements the existing asynchronous API, giving you the flexibility to choose the right approach based on your application's latency requirements.<br /> <br /> BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With synchronous image processing, you can build interactive experiences—such as social media platforms that moderate user-uploaded photos, e-commerce apps that identify products from customer images, or travel applications that recognize landmarks and provide contextual information. This eliminates polling or callback handling, simplifying your application architecture and reducing development complexity. Synchronous processing supports both Standard Output for common image analysis tasks like summarization and text extraction, and Custom Output using Blueprints for industry-specific field extraction. You now get the high-quality, structured results you expect from BDA with low-latency response times that enable more responsive user experiences.<br /> <br /> Amazon Bedrock Data Automation is available in 8 AWS Regions: Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), Asia Pacific (Sydney), US West (Oregon), US East (N. Virginia), and AWS GovCloud (US-West).<br /> <br /> To learn more, see the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/bda.html">Bedrock Data Automation User Guide</a> and the <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock Pricing</a> page. To get started with using Bedrock Data Automation, visit the <a href="https://us-west-2.console.aws.amazon.com/bedrock/home?region=us-west-2#overview">Amazon Bedrock console</a>.</p>

Read article →

Amazon ECS and Amazon EKS now offer enhanced AI-powered troubleshooting in the Console

<p><a contenteditable="false" href="https://aws.amazon.com/ecs/" style="cursor: pointer;">Amazon Elastic Container Service</a> (ECS) and <a contenteditable="false" href="https://aws.amazon.com/eks/" style="cursor: pointer;">Amazon Elastic Kubernetes Service</a> (EKS) now offer enhanced AI-powered troubleshooting experiences in the AWS Management Console through Amazon Q Developer. The new AI-powered experiences appear contextually alongside error or status messages in the console, helping customers root cause issues and view mitigation suggestions with a single click.<br /> <br /> In the ECS Console, customers can use the new “Inspect with Amazon Q” button to troubleshoot issues such as failed tasks, container health check failures, or deployment rollbacks. Simply click the status reason on task details, task definition details, or deployment details page, and click “Inspect with Amazon Q” from the popover to start troubleshooting with context from the issue provided to the agent for you. Once clicked, Amazon Q automatically uses appropriate AI tools to analyze the issue, gather the relevant logs and metrics, help you understand the root cause, and recommend mitigation actions.<br /> <br /> The Amazon EKS console integrates Amazon Q throughout the observability dashboard, enabling you to inspect and troubleshoot cluster, control plane, and node health issues with contextual AI assistance. Simply click "Inspect with Amazon Q" directly from tables that outline issues, or click on an issue to view details and then select "Inspect with Amazon Q" to begin your investigation. The Q-powered experience provides deeper understanding of cluster-level insights, such as upgrade insights, helping you proactively identify and mitigate potential issues. Amazon Q also streamlines workload troubleshooting by helping you investigate Kubernetes events on pods that indicate issues, accelerating root cause identification and resolution.<br /> <br /> Amazon Q integration in the Amazon ECS and Amazon EKS consoles is now available in all AWS commercial regions. To learn more, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/troubleshooting-with-Q.html" style="cursor: pointer;">ECS developer guide</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/eks/latest/userguide/amazon-q-integration.html" style="cursor: pointer;">EKS user guide</a>.</p>

Read article →

Amazon Simple Email Service is now available in two new AWS Regions

<p><a href="https://aws.amazon.com/ses/">Amazon Simple Email Service (Amazon SES)</a>&nbsp;is now available in the Asia Pacific (Malaysia) and Canada West (Calgary) Regions. Customers can now use these new Regions to leverage Amazon SES to send emails and, if needed, to help manage data sovereignty requirements.<br /> <br /> Amazon SES is a scalable, cost-effective, and flexible cloud-based email service that allows digital marketers and application developers to send marketing, notification, and transactional emails from within any application. To learn more about Amazon SES, visit this&nbsp;<a href="https://aws.amazon.com/ses/">page</a>.<br /> <br /> With this launch, Amazon SES is available in 29 AWS Regions globally: US East (Virginia, Ohio), US West (N. California, Oregon), AWS GovCloud (US-West, US-East), Asia Pacific (Osaka, Mumbai, Hyderabad, Sydney, Singapore, Seoul, Tokyo, Jakarta, Malaysia), Canada (Central, Calgary), Europe (Ireland, Frankfurt, London, Paris, Stockholm, Milan, Zurich), Israel (Tel Aviv), Middle East (Bahrain, UAE), South America (São Paulo), and Africa (Cape Town).<br /> <br /> For a complete list of all of the regional endpoints for Amazon SES, see&nbsp;<a href="https://docs.aws.amazon.com/general/latest/gr/ses.html">AWS Service Endpoints</a>&nbsp;in the AWS General Reference.</p>

Read article →

Oracle Database@AWS now supports AWS KMS integration with Oracle Transparent Data Encryption

<p>Oracle Database@AWS is now integrated with AWS Key Management Service (KMS) to manage database encryption keys. KMS is an AWS managed service to create and control keys used to encrypt and sign data. With this integration, customers can now use KMS to encrypt Oracle Transparent Data Encryption (TDE) master keys in Oracle Database@AWS. This provides customers a consistent mechanism to create and control keys used for encrypting data in AWS, and meet security and compliance requirements.<br /> <br /> Thousands of customers use KMS to manage keys for encrypting their data in AWS. KMS provides robust key management and control through central policies and granular access, comprehensive logging and auditing via <a href="https://aws.amazon.com/cloudtrail/">AWS CloudTrail</a>, and automatic key rotation for enhanced security. By using KMS to encrypt Oracle TDE master keys, customers can get the same benefits for database encryption keys for Oracle Database@AWS, and apply consistent auditing and compliance procedures for data in AWS.<br /> <br /> AWS KMS integration with TDE is available in all AWS Regions where Oracle Database@AWS is available. Other than <a href="https://aws.amazon.com/kms/pricing/">standard AWS KMS pricing</a>, there is no additional Oracle Database@AWS charge for the feature. To get started, see <a href="https://aws.amazon.com/marketplace/featured-seller/oracle">Oracle Database@AWS</a> and <a href="https://docs.oracle.com/en-us/iaas/Content/database-at-aws-exadata-awssc/awssc-security-protect-exadata-database.html">documentation</a> to use KMS.</p>

Read article →

Amazon Lightsail expands blueprint selection with updated support for Nginx Blueprint

<p>Amazon Lightsail now offers a new Nginx blueprint. This new blueprint has Instance Metadata Service Version 2 (IMDSv2) enforced by default, and supports IPv6-only instances. With just a few clicks, you can create a Lightsail virtual private server (VPS) of your preferred size that comes with Nginx preinstalled.<br /> <br /> With Lightsail, you can easily get started on the cloud by choosing a blueprint and an instance bundle to build your web application. Lightsail instance bundles include instances preinstalled with your preferred operating system, storage, and monthly data transfer allowance, giving you everything you need to get up and running quickly.<br /> <br /> This new blueprint is now available in all <a contenteditable="false" href="https://docs.aws.amazon.com/en_us/lightsail/latest/userguide/understanding-regions-and-availability-zones-in-amazon-lightsail.html" style="cursor: pointer;">AWS Regions where Lightsail is available</a>. For more information on blueprints supported on Lightsail, see <a contenteditable="false" href="https://lightsail.aws.amazon.com/ls/docs/en_us/articles/compare-options-choose-lightsail-instance-image" style="cursor: pointer;">Lightsail documentation</a>. For more information on pricing, or to get started with your free trial, <a contenteditable="false" href="https://aws.amazon.com/lightsail/pricing/" style="cursor: pointer;">click here.</a></p>

Read article →

AWS Application and Network Load Balancers Now Support Post-Quantum Key Exchange for TLS

<p>AWS Application Load Balancers (ALB) and Network Load Balancers (NLB) now support post-quantum key exchange options for the Transport Layer Security (TLS) protocol. This opt-in feature introduces new TLS security policies with hybrid post-quantum key agreement, combining classical key exchange algorithms with post-quantum key encapsulation methods, including the standardized Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) algorithm.<br /> <br /> Post-quantum TLS (PQ-TLS) security policies protect your data in transit against potential "Harvest Now, Decrypt Later" (HNDL) attacks, where adversaries collect encrypted data today with the intention to decrypt it once quantum computing capabilities mature. This quantum-resistant encryption ensures long-term security for your applications and data transmissions, future-proofing your infrastructure against emerging quantum computing threats.<br /> <br /> This feature is available for ALB and NLB in all AWS Commercial Regions, AWS GovCloud (US) Regions and AWS China Regions at no additional cost. To use this capability, you must explicitly update your existing ALB HTTPS listeners or NLB TLS listeners to use a PQ-TLS security policy, or select a PQ-TLS policy when creating new listeners through the AWS Management Console, CLI, API or SDK. You can monitor the use of classical or quantum-safe key exchange using ALB connection logs or NLB access logs.<br /> <br /> For more information, please visit <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/describe-ssl-policies.html">ALB User Guide</a>, <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/describe-ssl-policies.html">NLB User Guide</a>, and <a href="https://aws.amazon.com/security/post-quantum-cryptography/">AWS Post-Quantum Cryptography</a> documentation.</p>
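Opting a listener into a PQ-TLS policy is a one-line listener update; the sketch below uses the standard modify_listener call, with the listener ARN and policy name as placeholders since the exact PQ-TLS policy names are listed in the user guides linked above.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Listener ARN and security policy name are placeholders; pick the actual
# PQ-TLS (hybrid ML-KEM) policy name from the ALB/NLB security policy tables.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/1234567890abcdef/abcdef1234567890",
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-PQ-EXAMPLE",  # assumed name, verify
)
```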

Read article →

Amazon API Gateway REST APIs now support private integration with Application Load Balancer

<p>Amazon API Gateway REST APIs now support direct private integration with Application Load Balancer (ALB), enabling inter-VPC connectivity to internal ALBs. This enhancement extends API Gateway's existing VPC connectivity, providing you with more flexible and efficient architecture choices for your REST API implementations.<br /> <br /> This direct ALB integration delivers multiple advantages: reduced latency by eliminating the additional network hop previously required through Network Load Balancer, lower infrastructure costs through simplified architecture, and enhanced Layer 7 capabilities including HTTP/HTTPS health checks, advanced request-based routing, and native container service integration. You can still use API Gateway's integration with Network Load Balancers for Layer 4 connectivity.<br /> <br /> Amazon API Gateway private integration with ALB is available in all AWS GovCloud (US) Regions and the following AWS commercial Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), Europe (Zurich), Israel (Tel Aviv), Middle East (Bahrain), Middle East (UAE), and South America (São Paulo).&nbsp;For more information, visit the <a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/private-integration.html">Amazon API Gateway documentation</a> and <a href="https://aws.amazon.com/blogs/compute/build-scalable-rest-apis-using-amazon-api-gateway-private-integration-with-application-load-balancer/">blog post</a>.</p>
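At the API level this follows the existing VPC link pattern; the boto3 sketch below creates a VPC link targeting an internal ALB and wires a REST API method integration to it, with all ARNs, IDs, and the DNS name as placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

# Create a VPC link whose target is an internal Application Load Balancer.
# The ALB ARN is a placeholder.
vpc_link = apigw.create_vpc_link(
    name="alb-private-link",
    targetArns=["arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/internal-alb/1234567890abcdef"],
)

# Point a method at the internal ALB through the VPC link; the REST API ID,
# resource ID, and ALB DNS name are placeholders. Wait for the VPC link to
# become AVAILABLE before wiring the integration in real use.
apigw.put_integration(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",
    httpMethod="GET",
    type="HTTP_PROXY",
    integrationHttpMethod="GET",
    connectionType="VPC_LINK",
    connectionId=vpc_link["id"],
    uri="http://internal-alb.example.internal/orders",
)
```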

Read article →

Amazon Lex extends wait & continue feature in 10 new languages

<p>Amazon Lex now supports <i>wait &amp; continue </i>functionality in 10 new languages, enabling more natural conversational experiences in Chinese, Japanese, Korean, Cantonese, Spanish, French, Italian, Portuguese, Catalan, and German. This feature allows deterministic voice and chat bots to pause while customers gather additional information, then seamlessly resume when ready. For example, when asked for payment details, customers can say "hold on a second" to retrieve their credit card, and the bot will wait before continuing.<br /> <br /> This feature is available in all AWS Regions where Amazon Lex operates. To learn more, visit the <a href="https://docs.aws.amazon.com/lexv2/latest/dg/wait-and-continue.html">Amazon Lex documentation</a> or explore the Amazon Connect <a href="https://aws.amazon.com/connect/self-service/">website</a> to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.&nbsp;</p>

Read article →

Amazon Aurora DSQL database clusters now support up to 256 TiB of storage volume

<p>Amazon Aurora DSQL now supports a maximum storage limit of 256 TiB, doubling the previous limit of 128 TiB. Now, customers can store and manage larger datasets within a single database cluster, simplifying data management for large-scale applications. With Aurora DSQL, customers only pay for the storage they use and storage automatically scales with usage, ensuring that customers do not need to provision storage upfront.<br /> <br /> All Aurora DSQL clusters by default have a storage limit of 10 TiB. Customers that desire clusters with higher storage limits can request a limit increase using either the <a href="https://console.aws.amazon.com/servicequotas/home">Service Quotas console</a> or <a href="https://docs.aws.amazon.com/cli/latest/reference/service-quotas/request-service-quota-increase.html">AWS CLI.</a> Visit the <a href="https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html">Service Quotas documentation</a> for a step-by-step guide to requesting a quota increase.<br /> <br /> The increased storage limits are available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">Regions where Aurora DSQL is available</a>. Get started with Aurora DSQL for free with the <a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;all-free-tier.sort-order=asc&amp;awsf.Free%20Tier%20Types=*all&amp;awsf.Free%20Tier%20Categories=categories%23databases">AWS Free Tier</a>. To learn more about Aurora DSQL, visit the <a href="https://aws.amazon.com/rds/aurora/dsql/">webpage</a> and <a href="https://docs.aws.amazon.com/aurora-dsql/latest/userguide/what-is-aurora-dsql.html">documentation</a>.</p>
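Requesting a higher limit from code follows the standard Service Quotas pattern; in the sketch below the service and quota codes are assumptions, so list the quotas first (or use the Service Quotas console) to find the actual Aurora DSQL storage quota before requesting an increase.

```python
import boto3

quotas = boto3.client("service-quotas")

# List quotas to find the storage quota's code and unit.
# The service code "dsql" is an assumption; confirm it via list_services().
for quota in quotas.list_service_quotas(ServiceCode="dsql")["Quotas"]:
    print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# Placeholder quota code and value; take both from the listing above.
quotas.request_service_quota_increase(
    ServiceCode="dsql",
    QuotaCode="L-EXAMPLE123",
    DesiredValue=131072.0,  # illustrative; express in the quota's own unit
)
```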

Read article →

AWS WAF announces Web Bot Auth support

<p>Today, we're excited to announce the addition of Web Bot Auth (WBA) support in AWS WAF, providing a secure and standardized way to authenticate legitimate AI agents and automated tools accessing web applications. This new capability helps distinguish trusted bot traffic from potentially harmful automated access attempts.</p> <p>Web Bot Auth is an authentication method that leverages cryptographic signatures in HTTP messages to verify that a request comes from an automated bot. Web Bot Auth is used as a verification method for verified bots and signed agents. It relies on two active IETF drafts: a directory draft allowing crawlers to share their public keys, and a protocol draft defining how these keys should be used to attach a crawler's identity to HTTP requests.</p> <p>AWS WAF now automatically allows verified AI agent traffic. Verified WBA bots are allowed by default; previously, the AI bot category blocked unverified bots, and this behavior has been refined to respect WBA verification.</p> <p>To learn more, please review the&nbsp;<a href="https://docs.aws.amazon.com/waf/latest/developerguide/waf-bot-control.html">documentation</a>.</p>

Read article →

Aurora DSQL launches new Python, Node.js, and JDBC Connectors that simplify IAM authorization

<p>Today we are announcing the release of Aurora DSQL Connectors for Python, Node.js, and JDBC that simplify IAM authorization for customers using standard PostgreSQL drivers to connect to Aurora DSQL clusters. These connectors act as transparent authentication layers that automatically handle IAM token generation, eliminating the need to write token generation code or manually supply IAM tokens. The connectors work seamlessly with popular PostgreSQL drivers including psycopg and psycopg2 for Python, node-postgres and Postgres.js for Node.js, and the standard PostgreSQL JDBC driver, while supporting existing workflows, connection pooling libraries (including HikariCP for JDBC and built-in pooling for Node.js and Python), and frameworks like Spring Boot.<br /> <br /> The Aurora DSQL Connectors streamline authentication and eliminate security risks associated with traditional user-generated passwords. By automatically generating IAM tokens for each connection using valid AWS credentials and the AWS SDK, the connectors ensure valid tokens are always used while maintaining full compatibility with existing PostgreSQL driver features.<br /> <br /> The above connectors are available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">Regions where Aurora DSQL is available</a>. To get started, visit the <a href="https://docs.aws.amazon.com/aurora-dsql/latest/userguide/SECTION_connectors.html">Connectors for Aurora DSQL documentation</a> page. For code examples, visit our Github page for <a href="https://github.com/awslabs/aurora-dsql-nodejs-connector/tree/main/packages/node-postgres/example">node-postgres</a>, <a href="https://github.com/awslabs/aurora-dsql-nodejs-connector/tree/main/packages/postgres-js/example">Postgres.js</a>, <a href="https://github.com/awslabs/aurora-dsql-python-connector/tree/main?tab=readme-ov-file#examples">psycopg and psycopg2</a>, and <a href="https://github.com/awslabs/aurora-dsql-jdbc-connector/?tab=readme-ov-file#examples">JDBC</a>. Get started with Aurora DSQL for free with the <a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;all-free-tier.sort-order=asc&amp;awsf.Free%20Tier%20Types=*all&amp;awsf.Free%20Tier%20Categories=categories%23databases">AWS Free Tier</a>. To learn more about Aurora DSQL, visit the <a href="https://aws.amazon.com/rds/aurora/dsql/">webpage</a>.</p>
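As a conceptual sketch only: the Python connector wraps a standard PostgreSQL driver connection and injects a freshly generated IAM token as the password. The import path, connect() signature, and cluster endpoint below are assumptions; use the psycopg example in the linked GitHub repository for the actual API.

```python
# Conceptual sketch; the import path and connect() signature are assumptions.
# See the aurora-dsql-python-connector GitHub examples for the real API.
from aurora_dsql_connector import connect  # hypothetical import

conn = connect(
    host="your-cluster.dsql.us-east-1.on.aws",  # placeholder cluster endpoint
    user="admin",
    database="postgres",
    region="us-east-1",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
```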

Read article →

AWS IoT Core enhances IoT rules-SQL with variable setting and error handling capabilities

<p><a href="https://aws.amazon.com/iot-core/">AWS IoT Core</a> now supports a SET clause in IoT rules-SQL, which lets you set and reuse variables across SQL statements. This new feature provides a simpler SQL experience and ensures consistent content when variables are used multiple times. Additionally, a new get_or_default() function provides improved failure handling by returning default values while encountering data encoding or external dependency issues, ensuring IoT rules continue execution successfully.<br /> <br /> AWS IoT Core is a fully managed service that securely connects millions of IoT devices to the AWS cloud. <a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html">Rules for AWS IoT</a> is a component of AWS IoT Core which enables you to filter, process, and decode IoT device data using <a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-reference.html">SQL-like statements</a>, and route the data to 20+ AWS and third-party services. As you define an IoT rule, these new capabilities help you eliminate complicated SQL statements and make it easy for you to manage IoT rules-SQL failures.</p> <p>These new features are available in all AWS Regions where AWS IoT Core is available, including AWS GovCloud (US) and Amazon China Regions. For more information and getting started experience, visit the developer guides on <a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-set.html">SET clause</a> and <a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html#iot-sql-function-get-or-default">get_or_default()</a> function.</p>
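A sketch of a rule that uses the new capabilities, created with the standard create_topic_rule API; the topic filter, variable name, action target, and the exact placement of the SET clause and get_or_default() call are illustrative, so check the developer guides linked above for the precise SQL syntax.

```python
import boto3

iot = boto3.client("iot")

# Illustrative rule SQL: SET defines a reusable variable and get_or_default()
# falls back to a default when the field is missing. Verify the exact syntax
# against the SET clause and get_or_default() developer guides.
rule_sql = (
    "SET temp_f = get_or_default(temperature, 0) * 1.8 + 32 "
    "SELECT deviceId, temp_f FROM 'sensors/+/telemetry' WHERE temp_f > 90"
)

iot.create_topic_rule(
    ruleName="high_temperature_alert",
    topicRulePayload={
        "sql": rule_sql,
        "actions": [
            {
                "sns": {
                    "targetArn": "arn:aws:sns:us-east-1:111122223333:alerts",   # placeholder
                    "roleArn": "arn:aws:iam::111122223333:role/iot-rule-role",  # placeholder
                }
            }
        ],
    },
)
```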

Read article →

Automated Reasoning checks now include natural language test Q&A generation

<p>AWS announces the launch of natural language test Q&amp;A generation for Automated Reasoning checks in Amazon Bedrock Guardrails. Automated Reasoning checks uses formal verification techniques to validate the accuracy and policy compliance of outputs from generative AI models. Automated Reasoning checks deliver up to 99% accuracy at detecting correct responses from LLMs, giving you provable assurance in detecting AI hallucinations while also assisting with ambiguity detection in model responses. <br /> <br /> To get started with Automated Reasoning checks, customers create and test Automated Reasoning policies using natural language documents and sample Q&amp;As. Automated Reasoning checks generates up to N test Q&amp;As for each policy using content from the input document, reducing the work required to go from initial policy generation to production-ready, refined policy.<br /> <br /> Test generation for Automated Reasoning checks is now available in the US (N. Virginia), US (Ohio), US (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (Paris) Regions. Customers can access the service through the Amazon Bedrock console, as well as the Amazon Bedrock Python SDK. <br /> <br /> To learn more about Automated Reasoning checks and how you can integrate it into your generative AI workflows, please read the Amazon Bedrock <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-automated-reasoning-checks.html">documentation</a>, review the <a href="https://aws.amazon.com/blogs/machine-learning/build-reliable-ai-systems-with-automated-reasoning-on-amazon-bedrock-part-1/">tutorials on the AWS AI blog</a>, and visit the <a href="https://aws.amazon.com/bedrock/guardrails/">Bedrock Guardrails webpage</a>.</p>

Read article →

EC2 Image Builder now supports auto-versioning and enhances Infrastructure as Code experience

<p>Amazon EC2 Image Builder now supports automatic versioning for recipes and automatic build version incrementing for components, reducing the overhead of managing versions manually. This enables you to increment versions automatically and dynamically reference the latest compatible versions in your pipelines without manual updates.<br /> <br /> With automatic versioning, you no longer need to manually track and increment version numbers when creating new versions of your recipes. You can simply place a single 'x' placeholder in any position of the version number, and Image Builder detects the latest existing version and automatically increments that position. For components, Image Builder automatically increments the build version when you create a component with the same name and semantic version. When referencing resources in your configurations, wildcard patterns automatically resolve to the highest available version matching the specified pattern, ensuring your pipelines always use the latest versions.<br /> <br /> Auto-versioning is available in all AWS regions including AWS China (Beijing) Region, operated by Sinnet, AWS China (Ningxia) Region, operated by NWCD, and AWS GovCloud (US) Regions. You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK. Refer to documentation to learn more about <a href="https://docs.aws.amazon.com/imagebuilder/latest/userguide/create-image-recipes.html">recipes</a>, <a href="https://docs.aws.amazon.com/imagebuilder/latest/userguide/create-component.html">components</a> and <a href="https://docs.aws.amazon.com/imagebuilder/latest/userguide/ibhow-semantic-versioning.html">semantic versioning</a>.</p>
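In practice the placeholder goes straight into the semanticVersion field; the boto3 sketch below uses illustrative names and ARNs, with the wildcard component and parent-image references resolving to the latest matching versions as described above.

```python
import boto3

imagebuilder = boto3.client("imagebuilder")

# 'x' in the semantic version tells Image Builder to auto-increment that
# position; wildcard versions in the ARNs resolve to the highest matching
# version. Names and ARNs here are illustrative.
imagebuilder.create_image_recipe(
    name="my-app-recipe",
    semanticVersion="1.0.x",
    parentImage="arn:aws:imagebuilder:us-east-1:aws:image/amazon-linux-2023-x86/x.x.x",
    components=[
        {"componentArn": "arn:aws:imagebuilder:us-east-1:111122223333:component/my-app-install/x.x.x"}
    ],
)
```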

Read article →

Amazon CloudWatch Introduces In-Console Agent Management on EC2

<p>Amazon CloudWatch now offers an in-console experience for automated installation and configuration of the Amazon CloudWatch agent on EC2 instances. Amazon CloudWatch agent is used by developers and SREs to collect infrastructure and application metrics, logs, and traces from EC2 and send them to CloudWatch and AWS X-Ray. This new experience provides visibility into agent status across your EC2 fleet, performs automatic detection of supported workloads, and leverages CloudWatch observability solutions to recommend monitoring configurations based on detected workloads.<br /> <br /> Customers can now deploy the CloudWatch agent through one-click installation to individual instances or by creating tag-based policies for automated fleet-wide management. The automated policies ensure newly launched instances, including those created through auto-scaling, are automatically configured with the appropriate monitoring settings. By simplifying agent deployment and providing intelligent configuration recommendations, customers can ensure consistent monitoring across their environment while reducing setup time from hours to minutes.<br /> <br /> Amazon CloudWatch agent is available in the following AWS regions: Europe (Stockholm), Asia Pacific (Mumbai), Europe (Paris), US East (Ohio), Europe (Ireland), Europe (Frankfurt), South America (Sao Paulo), US East (N. Virginia), Asia Pacific (Seoul), Asia Pacific (Tokyo), US West (Oregon), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Sydney), and Canada (Central).<br /> <br /> To get started with the Amazon CloudWatch agent in the CloudWatch console, see <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance.html">Installing the CloudWatch agent</a> in the Amazon CloudWatch User Guide.</p>

Read article →

AWS Glue supports AWS CloudFormation and AWS CDK for zero-ETL integrations

<p>AWS Glue zero-ETL integrations now support AWS CloudFormation and AWS Cloud Development Kit (AWS CDK), through which you can create Zero-ETL integrations using infrastructure as code. Zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines.<br /> <br /> Using AWS Glue zero-ETL, you can ingest data from AWS DynamoDB or enterprise SaaS sources, including Salesforce, ServiceNow, SAP, and Zendesk, into Amazon Redshift, Amazon S3, and Amazon S3 Tables. CloudFormation and CDK support for these Glue zero-ETL integrations simplifies the way you can create, update, and manage zero-ETL integrations using infrastructure as code. With CloudFormation and CDK support, data engineering teams can now consistently deploy any zero-ETL integration across multiple AWS accounts while maintaining version control of their configurations.<br /> <br /> This feature is available in all AWS Regions where AWS Glue zero-ETL is currently available.<br /> <br /> To get started with the new AWS Glue zero-ETL infrastructure as code capabilities, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/AWS_Glue.html" style="cursor: pointer;">CloudFormation documentation</a> for AWS Glue, <a contenteditable="false" href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_glue-readme.html" style="cursor: pointer;" target="_blank">CDK documentation</a>, or the <a contenteditable="false" href="https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html" style="cursor: pointer;">AWS Glue zero-ETL user guide.</a></p>
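A minimal CDK (Python) sketch of the idea, using the generic CfnResource escape hatch; the resource type name and property names shown are assumptions, so take the exact schema from the CloudFormation documentation linked above.

```python
from aws_cdk import App, Stack, CfnResource
from constructs import Construct


class ZeroEtlStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Resource type and property names below are assumptions; consult the
        # AWS::Glue resource reference for the actual zero-ETL schema.
        CfnResource(
            self,
            "ZeroEtlIntegration",
            type="AWS::Glue::Integration",  # assumed resource type, verify
            properties={
                "IntegrationName": "salesforce-to-redshift",
                "SourceArn": "arn:aws:glue:us-east-1:111122223333:connection/salesforce",              # placeholder
                "TargetArn": "arn:aws:redshift-serverless:us-east-1:111122223333:namespace/example",   # placeholder
            },
        )


app = App()
ZeroEtlStack(app, "ZeroEtlStack")
app.synth()
```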

Read article →

AWS Security Incident Response now provides agentic AI-powered investigation

<p><a href="https://aws.amazon.com/security-incident-response/" target="_blank">AWS Security Incident Response</a> now provides agentic AI-powered investigation capabilities to help you prepare for, respond to, and recover from security events faster and more effectively. The new investigative agent automatically gathers evidence across multiple AWS data sources, correlates the data, then presents findings for you in clear, actionable summaries. This helps you reduce the time required to investigate and respond to potential security events, thereby minimizing business disruption.<br /> <br /> When a security event case is created in the Security Incident Response console, the investigative agent immediately assesses the case details to identify missing information, such as potential indicators, resource names, and timeframes. It asks the case submitter clarifying questions to gather these details. This proactive approach helps minimize delays from back-and-forth communications that traditionally extend case resolution times. The investigative agent then collects relevant information from various data sources, such as AWS CloudTrail, AWS Identity and Access Management (IAM), Amazon EC2, and AWS Cost Explorer. It automatically correlates this data to provide you with a comprehensive analysis, reducing the need for manual evidence gathering and enabling faster investigation. Security teams can track all investigation activities directly through the AWS console and view summaries in their preferred integration tools.<br /> <br /> This feature is automatically enabled for all Security Incident Response customers at no additional cost in all AWS Regions where the service is <a href="https://docs.aws.amazon.com/security-ir/latest/userguide/supported-configs.html" target="_blank">available</a>.<br /> <br /> To learn more and get started, visit the Security Incident Response <a href="https://aws.amazon.com/security-incident-response/" target="_parent">overview page</a> and <a href="https://us-east-1.console.aws.amazon.com/security-ir" target="_blank">console</a>.</p>

Read article →

AWS introduces new VPC Encryption Controls and further raises the bar on data encryption

<p>AWS launches VPC Encryption Controls to make it easy to audit and enforce encryption in transit within and across Amazon Virtual Private Clouds (VPC), and demonstrate compliance with encryption standards. You can turn it on for your existing VPCs to monitor the encryption status of traffic flows and identify VPC resources that are unintentionally allowing plaintext traffic. This feature also makes it easy to enforce encryption across different network paths by automatically (and transparently) turning on hardware-based AES-256 encryption on traffic between multiple VPC resources including AWS Fargate, Network Load Balancers, and Application Load Balancers.<br /> <br /> To meet stringent compliance standards like HIPAA and PCI DSS, customers rely on both application layer encryption and the hardware-based encryption that AWS offers across different network paths. AWS provides hardware-based AES-256 encryption transparently between modern EC2 Nitro instances. AWS also encrypts all network traffic between AWS data centers in and across Availability Zones, and AWS Regions before the traffic leaves our secure facilities. All inter-region traffic that uses VPC Peering, Transit Gateway Peering, or AWS Cloud WAN receives an additional layer of transparent encryption before leaving AWS data centers. Prior to this release, customers had to track and confirm encryption across all network paths. With VPC Encryption Controls, customers can now monitor, enforce, and demonstrate encryption within and across Virtual Private Clouds (VPCs) in just a few clicks. Your information security team can turn it on centrally to maintain a secure and compliant environment, and generate audit logs for compliance and reporting.<br /> <br /> VPC Encryption Controls is now available in the following AWS Commercial regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Milan), Europe (Zurich), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Melbourne), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Canada West (Calgary), Canada (Central), Middle East (UAE), Middle East (Bahrain), Africa (Cape Town) and South America (São Paulo). To learn more about this feature and its use cases, please see our <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-encryption-controls.html" target="_blank">documentation</a>.</p>

Read article →

Amazon CloudWatch Application Signals adds GitHub Action and MCP server improvements

<p>AWS announces the general availability of a new <a href="https://github.com/marketplace/actions/application-observability-for-aws">GitHub Action</a> and improvements to <a href="https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server">CloudWatch Application Signals MCP server</a> that bring application observability into developer tools, making troubleshooting issues faster and more convenient. Previously, developers had to leave GitHub to triage production issues, look up trace data, and ensure observability coverage, often switching between consoles, dashboards, and source code. Starting today, Application observability for AWS GitHub Action helps you catch breaching SLOs or critical service errors, in GitHub workflows. In addition, now you can use the CloudWatch Application Signals MCP server in AI coding agents such as Kiro to identify the exact file, function, and line of code responsible for latency, errors, or SLO violations. Furthermore, you can get instrumentation guidance that ensures comprehensive observability coverage.<br /> <br /> With this new GitHub Action, developers can mention @awsapm in GitHub Issues with prompts like "Why is my checkout service experiencing high latency?" and receive intelligent, observability-based responses without switching between consoles, saving time and effort. In addition, with improvements in CloudWatch Application Signals MCP server, developers can now ask questions like "Which line of code caused the latency spike in my service?". Furthermore, when instrumentation is missing, the MCP server can modify infrastructure-as-code (e.g., CDK, Terraform) to help teams set up OTel-based application performance monitoring for ECS, EKS, Lambda, and EC2 without requiring coding effort.<br /> <br /> Together, these features bring observability into development workflows, reduce context switching, and power intelligent, agent-assisted debugging from code to production. To get started, visit <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Service-Application-Observability-for-AWS-GitHub-Action.html">Application Observability for AWS GitHub Action</a> documentation and the <a href="https://awslabs.github.io/mcp/servers/cloudwatch-applicationsignals-mcp-server">CloudWatch Application Signals MCP server documentation</a>.</p>

Read article →

AWS Security Incident Response now offers metered pricing with free tier

<p>Today, <a href="https://aws.amazon.com/security-incident-response/">AWS Security Incident Response</a> announces a new metered pricing model that charges customers based on the number of security findings ingested, making automated security incident response capabilities and expert guidance from the AWS Customer Incident Response Team (CIRT) more flexible and scalable for organizations of all sizes.<br /> <br /> The new pricing model introduces a free tier covering the first 10,000 findings per month, allowing security teams to explore and validate the service's value at no cost. Customers pay $0.000676 per finding after the free tier, with tiered discounts that reduce rates as volume increases. This consumption-based approach enables customers to scale their security incident response capabilities as their needs evolve, without upfront commitments or minimum fees. Customers can monitor the number of monthly findings through Amazon CloudWatch at no additional cost, making it easy to track usage against the free tier and any applicable charges.<br /> <br /> The new pricing model automatically applies to all AWS Regions where Security Incident Response is <a href="https://docs.aws.amazon.com/security-ir/latest/userguide/supported-configs.html">available</a> starting November 21, 2025, requiring no action from customers. <br /> <br /> To learn more, visit the Security Incident Response <a href="https://aws.amazon.com/security-incident-response/pricing/">pricing page</a>.</p>

Read article →

AWS Network Firewall now supports flexible cost allocation via Transit Gateway

<p><a contenteditable="false" href="https://aws.amazon.com/network-firewall/" style="cursor: pointer;">AWS Network Firewall </a>now supports flexible cost allocation through <a contenteditable="false" href="https://aws.amazon.com/transit-gateway/" style="cursor: pointer;">AWS Transit Gateway</a> native attachments, enabling you to automatically distribute data processing costs across different AWS accounts. Customers can create metering policies to apply data processing charges based on their organization's chargeback requirements instead of consolidating all expenses in the firewall owner account.<br /> <br /> This capability helps security and network teams better manage centralized firewall costs by distributing charges to application teams based on actual usage. Organizations can now maintain centralized security controls while automatically allocating inspection costs to the appropriate business units or application owners, eliminating the need for custom cost management solutions.<br /> <br /> Flexible cost allocation is available in all <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS Commercial Regions</a> and Amazon China Regions where both AWS Network Firewall and Transit Gateway attachments are supported. There are no additional charges for using this attachment or flexible cost allocation beyond standard pricing of <a contenteditable="false" href="https://aws.amazon.com/network-firewall/pricing/" style="cursor: pointer;">AWS Network Firewall</a> and <a contenteditable="false" href="https://aws.amazon.com/transit-gateway/pricing/" style="cursor: pointer;">AWS Transit Gateway</a>.<br /> <br /> To learn more, visit the AWS Network Firewall service <a contenteditable="false" href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-creating.html" style="cursor: pointer;">documentation</a>.</p>

Read article →

Amazon Aurora DSQL now provides an integrated query editor in the AWS Management Console

<p>Amazon Aurora DSQL now provides an integrated query editor for browser-based SQL access. With this launch, customers can securely connect to their Aurora DSQL clusters and run SQL queries directly from the AWS Management Console, without installing or configuring external clients. This capability helps developers, analysts, and data engineers start querying within seconds of cluster creation, accelerating time-to-value and simplifying database interactions.<br /> <br /> The Aurora DSQL query editor provides an intuitive workspace with built-in syntax highlighting, auto-completion, and intelligent code assistance. You can quickly explore schema objects, develop and execute SQL queries, and view results, all within a single interface. This unified experience streamlines data exploration and analysis, making it easier for users to get started with Aurora DSQL.<br /> <br /> Aurora DSQL Console Query Editor is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">Regions where Aurora DSQL is available</a>. Try it out today in the <a href="https://console.aws.amazon.com/dsql">AWS Management Console</a>, and visit the <a href="https://docs.aws.amazon.com/aurora-dsql/latest/userguide/getting-started-query-editor.html">Aurora DSQL Query Editor documentation</a> to learn more.&nbsp;</p>

Read article →

Amazon CloudWatch Container Insights adds Sub-Minute GPU Metrics for Amazon EKS

<p><a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html">Amazon CloudWatch Container Insights</a> now supports collection of GPU metrics at sub-minute frequencies for AI and ML workloads running on Amazon EKS. Customers can configure the metric sample rate in seconds, enabling more granular monitoring of GPU resource utilization.<br /> <br /> This enhancement enables customers to effectively monitor GPU-intensive workloads that run for less than 60 seconds, such as ML inference jobs that consume GPU resources for short durations. By increasing the sampling frequency, customers can maintain detailed visibility into short-lived GPU workloads. Sub-minute GPU metrics are sent to CloudWatch once per minute. This granular monitoring helps customers optimize their GPU resource utilization, troubleshoot performance issues, and ensure efficient operation of their containerized GPU applications.<br /> <br /> Sub-minute GPU metrics in Container Insights are available in all AWS Commercial Regions and the AWS GovCloud (US) Regions.<br /> <br /> To learn more about sub-minute GPU metrics in Container Insights, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-enhanced-EKS.html#Container-Insights-metrics-EKS-GPU">NVIDIA GPU metrics</a> page in the Amazon CloudWatch User Guide. Sub-minute GPU metrics in Container Insights are available at no additional cost. For Container Insights pricing, see the <a href="https://aws.amazon.com/cloudwatch/pricing/">Amazon CloudWatch Pricing Page</a>.</p>

Read article →

Amazon SageMaker HyperPod now supports running IDEs and Notebooks to accelerate AI development

<p>Amazon SageMaker HyperPod now supports IDEs and Notebooks, enabling AI developers to run JupyterLab or Code Editor, or connect local IDEs, to run their interactive AI workloads directly on HyperPod clusters.<br /> <br /> AI developers can now run IDEs and notebooks on the same persistent HyperPod EKS clusters used for training and inference. This enables developers to leverage HyperPod's scalable GPU capacity with familiar tools like the HyperPod CLI, while sharing data across IDEs and training jobs through mounted file systems such as Amazon FSx and Amazon EFS.<br /> <br /> Administrators can maximize CPU/GPU investments through unified governance across IDEs, training, and inference workloads using HyperPod Task Governance. HyperPod Observability provides usage metrics including CPU, GPU, and memory consumption, enabling cost-efficient cluster utilization.<br /> <br /> This feature is available in all AWS Regions where Amazon SageMaker HyperPod is currently available, excluding the China and GovCloud (US) Regions. To learn more, visit our <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-eks-cluster-ide.html">documentation</a>.</p>

Read article →

AWS announces Flexible Cost Allocation on AWS Transit Gateway

<p>AWS announces general availability of Flexible Cost Allocation on AWS Transit Gateway, enhancing how you can distribute Transit Gateway costs across your organization.</p> <p>Previously, Transit Gateway only used a sender-pay model, where the source attachment account owner was responsible for all data usage related costs. The new Flexible Cost Allocation (FCA) feature provides more versatile cost allocation options through a central metering policy. Using an FCA metering policy, you can choose to allocate all of your Transit Gateway data processing and data transfer usage to the source attachment account, the destination attachment account, or the central Transit Gateway account. FCA metering policies can be configured at an attachment-level or individual flow-level granularity. FCA also supports middle-box deployment models, enabling you to allocate data processing usage on middle-box appliances such as AWS Network Firewall to the original source or destination attachment owners. This flexibility allows you to implement multiple cost allocation models on a single Transit Gateway, accommodating various chargeback scenarios within your AWS network infrastructure.<br /> <br /> Flexible Cost Allocation is available in all commercial <a href="https://docs.aws.amazon.com/network-manager/latest/cloudwan/what-is-cloudwan.html#cloudwan-available-regions">AWS Regions</a> where Transit Gateway is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI), and the AWS Software Development Kit (SDK). There is no additional charge for using FCA on Transit Gateway. For more information, see the Transit Gateway <a href="https://docs.aws.amazon.com/vpc/latest/tgw/metering-policy.html">documentation pages</a>.</p>

Read article →

Announcing flexible AMI distribution capabilities for EC2 Image Builder

<p>Amazon EC2 Image Builder now allows you to distribute existing Amazon Machine Images (AMIs), retry distributions, and define custom distribution workflows. Distribution workflows are a new workflow type that complements existing build and test workflows, enabling you to define sequential distribution steps such as AMI copy operations, wait-for-action checkpoints, and AMI attribute modifications.<br /> <br /> With enhanced distribution capabilities, you can now distribute an existing image to multiple Regions and accounts without running a full Image Builder pipeline. Simply specify your AMI and distribution configuration, and Image Builder handles the copying and sharing process. Additionally, with distribution workflows, you can now customize the distribution process by defining custom steps. For example, you can distribute AMIs to a test Region first, add a wait-for-action step to pause for validation, and then continue distribution to production Regions after approval. This provides the same step-level visibility and control you have with build and test workflows.<br /> <br /> These capabilities are available to all customers at no additional cost, in all AWS Regions including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions.<br /> You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder <a href="https://docs.aws.amazon.com/imagebuilder/latest/userguide/distribution-enhanced_functionality.html">documentation</a>.</p>

Read article →

Amazon ECR dual-stack endpoints now support AWS PrivateLink

<p>Amazon Elastic Container Registry (ECR) announces AWS PrivateLink support for its dual-stack endpoints. This makes it easier to standardize on IPv6 and enhance your security posture.<br /> <br /> Previously, ECR announced IPv6 support for API and Docker/OCI requests via the new dual-stack endpoints. With these dual-stack endpoints, you can make requests from either an IPv4 or an IPv6 network. With today’s launch, you can now make requests to these dual-stack endpoints using AWS PrivateLink to limit all network traffic between your Amazon Virtual Private Cloud (VPC) and ECR to the Amazon network, thereby improving your security posture.<br /> <br /> This feature is generally available in all AWS commercial and AWS GovCloud (US) regions at no additional cost. To get started, visit <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html">ECR documentation</a>.</p>
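<p>A minimal boto3 sketch of creating an interface VPC endpoint for the ECR API with a dual-stack IP address type over AWS PrivateLink. The VPC, subnet, and security group IDs are placeholders, and the assumption that the standard ECR endpoint service name accepts the dualstack address type should be verified against the ECR documentation linked above.</p>
<pre><code>import boto3

# Hedged sketch: create an interface endpoint for the ECR API with dual-stack addressing.
# Placeholder IDs; confirm dual-stack support for this service name in the ECR docs.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.ecr.api",   # repeat for ecr.dkr as needed
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddressType="dualstack",                       # request IPv4 and IPv6 endpoint addresses
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
</code></pre>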

Read article →

Second-generation AWS Outposts racks now supported in the AWS Asia Pacific (Tokyo) Region

<p>Second-generation AWS Outposts racks are now supported in the AWS Asia Pacific (Tokyo) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience.<br /> <br /> Organizations from startups to enterprises and the public sector in and outside of Japan can now order their Outposts racks connected to this new supported region, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to.<br /> <br /> To learn more about second-generation Outposts racks, read <a href="https://aws.amazon.com/blogs/aws/announcing-second-generation-aws-outposts-racks-with-breakthrough-performance-and-scalability-on-premises/" target="_blank"><u>this blog post</u></a> and <a href="https://docs.aws.amazon.com/outposts/latest/network-userguide/what-is-outposts.html" target="_blank"><u>user guide</u></a>. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the <a href="https://aws.amazon.com/outposts/rack/faqs/" target="_blank"><u>Outposts rack FAQs page</u></a>.</p>

Read article →

Amazon Connect launches monitoring of contacts queued for callback

<p>Amazon Connect now provides you with the ability to monitor which contacts are queued for callback. This feature enables you to search for contacts queued for callback and view additional details, such as the customer’s phone number and how long the contact has been queued, within the Connect UI and APIs. You can now proactively route to agents those contacts that are at risk of exceeding the callback timelines communicated to customers. Businesses can also identify customers that have already successfully connected with agents, and clear them from the callback queue to remove duplicative work.<br /> <br /> This feature is available in all Regions where <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region">Amazon Connect</a> is offered. To learn more, please visit our <a href="https://docs.aws.amazon.com/connect/latest/adminguide/search-in-progress-contacts.html">documentation</a> and our <a href="https://aws.amazon.com/connect/contact-lens/">webpage</a>.</p>

Read article →

Amazon EC2 Fleet adds new encryption attribute for instance type selection

<p>Amazon EC2 Fleet now supports a new encryption attribute for Attribute-Based Instance Type Selection (ABIS). Customers can use the RequireEncryptionInTransit parameter to specifically launch instance types that support encryption-in-transit, in addition to specifying resource requirements like vCPU cores and memory.<br /> <br /> The new encryption attribute addresses critical compliance needs for customers who use VPC Encryption Controls in enforced mode and require all network traffic to be encrypted in transit. By combining encryption requirements with other instance attributes in ABIS, customers can achieve instance type diversification for better capacity fulfillment while meeting their security needs. Additionally, the GetInstanceTypesFromInstanceRequirements (GITFIR) API allows you to preview which instance types you might be allocated based on your specified encryption requirements.<br /> <br /> This feature is available in all AWS commercial and AWS GovCloud (US) Regions.<br /> <br /> To get started, set the RequireEncryptionInTransit parameter to true in InstanceRequirements when calling the CreateFleet or GITFIR APIs. For more information, refer to the user guides for <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-fleet-attribute-based-instance-type-selection.html">EC2 Fleet</a> and <a href="https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetInstanceTypesFromInstanceRequirements.html">GITFIR</a>.</p>
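<p>A minimal boto3 sketch of the preview step described above, assuming the new attribute is expressed as a boolean RequireEncryptionInTransit field inside InstanceRequirements as the announcement states; here it is combined with vCPU and memory requirements and previewed through GetInstanceTypesFromInstanceRequirements before being used in a CreateFleet request.</p>
<pre><code>import boto3

ec2 = boto3.client("ec2")

# Hedged sketch: preview instance types that satisfy vCPU, memory, and
# encryption-in-transit requirements. The RequireEncryptionInTransit field
# inside InstanceRequirements follows the wording of this announcement.
preview = ec2.get_instance_types_from_instance_requirements(
    ArchitectureTypes=["x86_64"],
    VirtualizationTypes=["hvm"],
    InstanceRequirements={
        "VCpuCount": {"Min": 4, "Max": 16},
        "MemoryMiB": {"Min": 8192},
        "RequireEncryptionInTransit": True,  # new attribute from this launch
    },
)
for item in preview["InstanceTypes"]:
    print(item["InstanceType"])
</code></pre>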

Read article →

AWS Control Tower introduces a controls-dedicated experience

<p>AWS Control Tower offers the easiest way to manage and govern your environment with AWS managed controls. Starting today, customers can have direct access to these AWS managed controls without requiring a full Control Tower deployment. This new experience offers over 750 managed controls that customers can deploy within minutes while maintaining their existing account structure.<br /> <br /> AWS Control Tower v4.0 introduces direct access to Control Catalog, allowing customers to review available managed controls and deploy them into their existing AWS Organization. With this release, customers now have more flexibility and autonomy over their organizational structure, as Control Tower will no longer enforce a mandatory structure. Customers also benefit from cleaner resource and permissions management and clearer cost attribution, because the S3 buckets and SNS notifications for the AWS Config and AWS CloudTrail integrations are now separated.<br /> <br /> This controls-focused experience is now available in all AWS Regions where AWS Control Tower is supported. For more information about this new capability, see the <a href="https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html"><u>AWS Control Tower User Guide</u></a> or contact your AWS account team. For a full list of Regions where AWS Control Tower is available, see the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/"><u>AWS Region Table</u></a>.</p>

Read article →

CloudWatch Database Insights adds cross-account cross-region monitoring

<p>Amazon CloudWatch Database Insights now supports cross-account and cross-region database fleet monitoring, enabling centralized observability across your entire AWS database infrastructure. This enhancement allows DevOps engineers and database administrators to monitor, troubleshoot, and optimize databases spanning multiple AWS accounts and regions from a single unified console experience.<br /> <br /> With this new capability, organizations can gain holistic visibility into their distributed database environments without account or regional boundaries. Teams can now correlate performance issues across their entire database fleet, streamline incident response workflows, and maintain consistent monitoring standards across complex multi-account architectures, significantly reducing operational overhead and improving mean time to resolution.<br /> <br /> This feature is available in all AWS commercial regions where CloudWatch Database Insights is supported.<br /> <br /> To learn more about cross-account and cross-region monitoring in CloudWatch Database Insights, as well as instructions to get started monitoring your databases across your entire organization and regions, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Database-Insights-Cross-Account-Cross-Region.html" target="_blank">CloudWatch Database Insights documentation</a>.</p>

Read article →

Amazon OpenSearch Service OR2 and OM2 now available in additional Regions

<p style="text-align: left;">Amazon OpenSearch Service, expands availability of OR2 and OM2, OpenSearch Optimized Instance family to 11 additional regions. The OR2 instance delivers up to 26% higher indexing throughput compared to previous OR1 instances and 70% over R7g instances. The OM2 instance delivers up to 15% higher indexing throughput compared to OR1 instances and 66% over M7g instances in internal benchmarks.</p> <p style="text-align: left;">The OpenSearch Optimized instances, leveraging best-in-class cloud technologies like Amazon S3, to provide high durability, and improved price-performance for higher indexing throughput better for indexing heavy workload. Each OpenSearch Optimized instance is provisioned with compute, local instance storage for caching, and remote Amazon S3-based managed storage. OR2 and OM2 offers pay-as-you-go pricing and reserved instances, with a simple hourly rate for the instance, local instance storage, as well as the managed storage provisioned. OR2 instances come in sizes ‘medium’ through ‘16xlarge’, and offer compute, memory, and storage flexibility. OM2 instances come in sizes ‘large’ through ‘16xlarge’ Please refer to the Amazon OpenSearch Service&nbsp;<a href="https://aws.amazon.com/opensearch-service/pricing/" target="_blank" title="Pricing Page">pricing page</a>&nbsp;for pricing details.<br /> <br /> OR2 instance family is now available on Amazon OpenSearch Service across 12 additional regions globally: US West (N. California), Canada (Central),&nbsp; Asia Pacific (Hong Kong, Jakarta , Malaysia, Melbourne, Osaka , Seoul, Singapore), Europe (London), and South America (Sao Paulo).&nbsp;<br /> <br /> OM2 instance family is now available on Amazon OpenSearch Service across 14 additional regions globally:&nbsp;US West (N. California), Canada (Central), Asia Pacific (Hong Kong, Hyderabad, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Europe ( Paris, Spain), Middle East (Bahrain), South America (Sao Paulo).</p>

Read article →

AWS Control Tower now supports seven new compliance frameworks and 279 additional AWS Config rules

<p>Today, AWS Control Tower announces support for an additional 279 managed Config rules in Control Catalog for various use cases such as security, cost, durability, and operations. With this launch, you can now search, discover, enable, and manage these additional rules directly from AWS Control Tower and govern more use cases for your multi-account environment. AWS Control Tower also supports seven new compliance frameworks in Control Catalog. In addition to existing frameworks, most controls are now mapped to ACSC-Essential-Eight-Nov-2022, ACSC-ISM-02-Mar-2023, AWS-WAF-v10, CCCS-Medium-Cloud-Control-May-2019, CIS-AWS-Benchmark-v1.2, CIS-AWS-Benchmark-v1.3, and CIS-v7.1.<br /> <br /> To get started, go to the Control Catalog and search for controls with the implementation filter AWS Config to view all AWS Config rules in the Catalog. You can enable relevant rules directly using the AWS Control Tower console or the ListControls, GetControl, and EnableControl APIs. We've also enhanced control relationship mapping, helping you understand how different controls work together. The updated ListControlMappings API now reveals important relationships between controls, showing which ones complement each other, are alternatives, or are mutually exclusive. For instance, you can now easily identify when a Config Rule (detection) and a Service Control Policy (prevention) can work together for comprehensive security coverage.<br /> <br /> These new features are available in <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a> where AWS Control Tower is available, including AWS GovCloud (US). Reference the list of supported Regions for each Config rule to see where it can be enabled. To learn more, visit the <a href="https://docs.aws.amazon.com/controltower/latest/controlreference/config-controls.html" target="_blank">AWS Control Tower User Guide</a>.</p>
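<p>A brief boto3 sketch of the flow described above: browse Control Catalog for available controls, then enable one on an organizational unit through AWS Control Tower. The control and OU ARNs are placeholders, and filtering for Config-implemented controls should be adapted to your environment.</p>
<pre><code>import boto3

# Hedged sketch: list controls from Control Catalog, then enable one on an OU.
catalog = boto3.client("controlcatalog")
controltower = boto3.client("controltower")

# Page through the catalog of available controls.
paginator = catalog.get_paginator("list_controls")
for page in paginator.paginate():
    for control in page["Controls"]:
        print(control["Arn"], "-", control["Name"])

# Enable a chosen control on a target organizational unit (placeholder ARNs).
controltower.enable_control(
    controlIdentifier="arn:aws:controlcatalog:::control/EXAMPLE_CONTROL_ID",
    targetIdentifier="arn:aws:organizations::111122223333:ou/o-example/ou-example",
)
</code></pre>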

Read article →

Amazon EKS add-ons now supports the AWS Secrets Store CSI Driver provider

<p>Today, AWS announces the general availability of the AWS Secrets Store CSI Driver provider EKS add-on. This new integration allows customers to retrieve secrets from AWS Secrets Manager and parameters from AWS Systems Manager Parameter Store and mount them as files on their Kubernetes clusters running on Amazon Elastic Kubernetes Service (Amazon EKS). The add-on installs and manages the <a href="https://github.com/aws/secrets-store-csi-driver-provider-aws" target="_blank">AWS provider for the Secrets Store CSI Driver</a>.<br /> <br /> Now, with the new Amazon EKS add-on, customers can quickly and easily set up new and existing clusters using automation to leverage AWS Secrets Manager and AWS Systems Manager Parameter Store, enhancing security and simplifying secrets management. Amazon EKS add-ons are curated extensions that automate the installation, configuration, and lifecycle management of operational software for Kubernetes clusters, simplifying the process of maintaining cluster functionality and security.<br /> <br /> Customers rely on AWS Secrets Manager to securely store and manage secrets such as database credentials and API keys throughout their lifecycle. To learn more about Secrets Manager, visit the <a href="https://aws.amazon.com/documentation/secretsmanager/" target="_blank">documentation</a>. For a list of regions where Secrets Manager is available, see the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Region table</a>. To get started with Secrets Manager, visit the <a href="https://aws.amazon.com/secrets-manager/" target="_blank">Secrets Manager home page</a>.<br /> <br /> This new Amazon EKS add-on is available in all AWS commercial and AWS GovCloud (US) Regions.<br /> To get started, see the following resources:<br /> </p> <ul> <li><a href="https://docs.aws.amazon.com/eks/latest/userguide/workloads-add-ons-available-eks.html#add-ons-aws-secrets-store-csi-driver-provider" target="_blank">Amazon EKS add-ons user guide</a></li> <li><a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html" target="_blank">AWS Secrets Manager user guide</a></li> </ul>
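<p>A minimal boto3 sketch of installing the add-on on an existing cluster. The exact published add-on name is an assumption here (discover it first with DescribeAddonVersions or in the console), and the cluster name is a placeholder.</p>
<pre><code>import boto3

eks = boto3.client("eks")

# Hedged sketch: find the published name of the Secrets Store CSI Driver provider
# add-on, then install it on an existing cluster. The names below are assumptions.
versions = eks.describe_addon_versions()
for addon in versions["addons"]:
    if "secrets-store" in addon["addonName"]:
        print(addon["addonName"])

eks.create_addon(
    clusterName="my-cluster",                            # placeholder cluster name
    addonName="aws-secrets-store-csi-driver-provider",   # assumed add-on name
    resolveConflicts="OVERWRITE",
)
</code></pre>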

Read article →

AWS License Manager introduces license asset groups for centralized software asset management

<p>AWS License Manager now provides centralized software asset management across AWS Regions and accounts in an organization, reducing compliance risks and streamlining license tracking through automated license asset groups. Customers can now track license expiry dates, streamline audit responses, and make data-driven renewal decisions with a product-centric view of their commercial software portfolio.<br /> <br /> With this launch, customers no longer need to manually track licenses across multiple Regions and accounts in their organization. Now with license asset groups, customers can gain organization-wide visibility of their commercial software usage with customizable grouping and automated reporting. The new feature is available in <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">all commercial Regions</a> where AWS License Manager is available.<br /> <br /> To get started, visit the Licenses section of the AWS License Manager console, and the <a href="https://docs.aws.amazon.com/license-manager/" target="_blank">AWS License Manager User Guide</a>.</p>

Read article →

Amazon Location Service introduces Address Form Solution Builder

<p>Today, AWS announced Address Form Solution Builder from Amazon Location Service, enabling developers to build a customized address form, without writing any code, that helps their users enter addresses with predictive suggestions, autofilled fields such as postal code, and a customizable layout. This guided experience allows developers to generate a ready-to-use application in minutes and download the developer package in React JavaScript, React TypeScript, or standalone HTML/JavaScript.<br /> <br /> Developers can use address forms to improve the user experience, speed, and accuracy of collecting address information from their users. Features such as predictive suggestions help end users select their complete address after just a few keystrokes, reducing data entry time and error rates. The integrated map view lets users visualize their selected address's location and adjust the placement of the pin on the map to indicate a specific entrance. By improving the speed and accuracy of address collection, enterprises can improve their customer experience, reduce fraud, and increase delivery success rates.<br /> <br /> Amazon Location Service’s Address Form Solution Builder is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). Build your first address form using the <a href="https://docs.aws.amazon.com/location/latest/developerguide/address-form-sdk.html">Amazon Location Console</a> or learn more about this feature in our <a href="https://docs.aws.amazon.com/location/latest/developerguide/address-form-sdk.html">Developer Guide</a>.</p>

Read article →

Amazon S3 now supports attribute-based access control

<p>Amazon S3 now supports attribute-based access control (ABAC) for S3 general purpose buckets. In addition to using tags on your S3 buckets for cost allocation, you can now use them for ABAC to automatically manage permissions to your data. This helps eliminate frequent AWS Identity and Access Management (IAM) or bucket policy updates as your organization grows, simplifying how you govern access at scale.<br /> <br /> With ABAC support, Amazon S3 automatically evaluates tag-based conditions in your policies before granting access to your data. For example, create an IAM policy that references tags on your buckets, then grant users and roles access simply by adding or modifying tags on new or existing buckets. To get started, enable ABAC on your bucket using the S3 PutBucketAbac API and manage tags through the S3 TagResource and UntagResource APIs. You can also require that users add specific tags at the time of bucket creation to set consistent tagging standards across your organization.<br /> <br /> ABAC support for S3 general purpose buckets is available in all AWS Regions at no additional cost via the AWS Management Console, S3 REST API, AWS CLI, AWS SDKs, and AWS CloudFormation. To learn more about using tags for access control in S3 general purpose buckets, read our <a href="https://aws.amazon.com/blogs/aws/introducing-attribute-based-access-control-for-amazon-s3-general-purpose-buckets/">blog</a> or visit the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/buckets-tagging-enable-abac.html">S3 User Guide</a>.</p>
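<p>A small boto3 sketch of the pattern described above, using an illustrative tag key and value: an identity-based policy that grants S3 access only when the bucket carries a matching resource tag. The tag key, values, and actions are assumptions, and the bucket must first have ABAC enabled (via PutBucketAbac, as noted above); consult the S3 User Guide for the exact condition keys supported.</p>
<pre><code>import json
import boto3

iam = boto3.client("iam")

# Hedged sketch: grant S3 access based on a bucket tag rather than bucket names.
# Tag key/value and actions are illustrative; verify the supported condition keys
# for S3 ABAC in the S3 User Guide before relying on this.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/team": "analytics"}
            },
        }
    ],
}

iam.create_policy(
    PolicyName="s3-abac-analytics-access",
    PolicyDocument=json.dumps(policy_document),
)
</code></pre>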

Read article →

AWS India customers can now use UPI to sign-up and automate monthly payments

<p>India customers can now sign up for AWS using <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/edit-aispl-payment-method.html#using-aispl-autopay-UPI"><u>UPI (Unified Payments Interface) AutoPay</u></a> as their default payment method, with automatic recurring payments set up from the start.</p> <p>UPI is a popular and convenient payment method in India that facilitates instant bank-to-bank transfers between two parties through internet-connected mobile phones. Customers can make payments through a UPI mobile app simply by using a Virtual Payment Address or UPI ID linked to their bank account.</p> <p>Customers now have the flexibility to sign up for AWS using UPI, where previously only card payments were accepted. The addition of UPI, India's most widely used payment method, makes it easier for customers to start their AWS journey using their preferred payment method. Customers can use UPI AutoPay to make automated recurring payments, which avoids the need to visit the console to make manual payments and reduces the risk of missed payments and any non-payment-related actions.</p> <p>Customers can set up automatic payments up to INR 15,000 using their UPI ID linked to their bank account. To enable this, customers can log in to the AWS console and add UPI AutoPay from the payment page. Customers will be required to provide their UPI ID, verify it, and confirm their billing address. Once completed, customers will receive a request in the UPI mobile app (such as Amazon Pay) associated with their UPI ID for verifying and authorizing automated payments. After verification, future bills up to INR 15,000 will be automatically charged starting from the next billing cycle.</p> <p>To learn more, see <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/edit-aispl-payment-method.html"><u>Managing Payment Methods in India</u></a>.</p>

Read article →

Amazon EC2 Auto Scaling now supports root volume replacement through instance refresh

<p>Today, Amazon EC2 Auto Scaling announced a new strategy, ReplaceRootVolume, within instance refresh. This feature allows customers to update the root volume of an EC2 instance without stopping or terminating the instance, while preserving other associated instance resources. The capability reduces operational complexity, simplifies software patching, and streamlines recovery from corrupted root volumes.<br /> <br /> Customers use instance refresh to update the instances in their Auto Scaling groups (ASGs). This feature can be useful when customers want to migrate their instances to new instance types to take advantage of the latest improvements and optimizations. Traditionally, this process involved terminating older instances and launching new ones in a controlled manner. The new ReplaceRootVolume strategy transforms how customers manage instance lifecycles and software updates in their ASGs by enabling the EC2 Auto Scaling service to replace the root Amazon EBS volume for running instances without stopping them. Organizations can now implement OS-level updates and security patches more efficiently without worrying about capacity management. This is especially valuable for workloads that use specialized instance types like Mac or GPU instances. Customers with stateful applications can now refresh their fleets with more confidence that their instances' data, metadata, and attachments (such as network interfaces and Elastic IPs) will be preserved with the new ReplaceRootVolume strategy.<br /> <br /> This feature is now available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore), at no additional cost beyond standard EC2 and EBS usage. To get started, refer to our <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/replace-root-volume.html">technical documentation</a>.</p>
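<p>A hedged boto3 sketch of starting an instance refresh with the new strategy. The announcement names the strategy ReplaceRootVolume, so that string is used for the Strategy value below, but the exact API value should be confirmed in the linked documentation; the Auto Scaling group and launch template names are placeholders.</p>
<pre><code>import boto3

autoscaling = boto3.client("autoscaling")

# Hedged sketch: start an instance refresh that replaces root volumes in place.
# "ReplaceRootVolume" follows the strategy name in this announcement; the ASG
# name and launch template details are placeholders.
response = autoscaling.start_instance_refresh(
    AutoScalingGroupName="my-asg",
    Strategy="ReplaceRootVolume",
    DesiredConfiguration={
        "LaunchTemplate": {
            "LaunchTemplateName": "my-launch-template",
            "Version": "$Latest",
        }
    },
)
print(response["InstanceRefreshId"])
</code></pre>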

Read article →

EC2 Auto Scaling introduces instance lifecycle policy

<p>Today, EC2 Auto Scaling announces a new feature called instance lifecycle policy. Customers can now configure a way to retain their instances when their termination lifecycle hooks fail or time out, providing greater confidence in managing instances for graceful shutdown.<br /> <br /> You can add <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html">lifecycle hooks</a> to an Auto Scaling group (ASG) to perform custom actions when an instance enters a wait state. You can choose a target service (e.g., Amazon EventBridge or AWS Lambda) to perform these actions depending on your preferred development approach. Customers use ASG lifecycle hooks to save application state, properly close database connections, back up important data from local storage, delete sensitive data/credentials, or deregister from service discovery before instance termination. Previously, both default results (continue and abandon) led to the ASG terminating instances when the lifecycle hook timeout elapsed or if an unexpected failure occurred. With the new instance lifecycle policy, you can now configure retention triggers to keep your instances in a retained state for manual intervention until you're ready to terminate them again. This policy provides greater confidence in graceful instance termination and is especially helpful for stateful applications running on ASGs.<br /> <br /> This feature is now available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore). To get started, visit the EC2 Auto Scaling console or refer to our <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/instance-lifecycle-policy.html">technical documentation</a>.</p>

Read article →

Amazon EC2 introduces AMI ancestry for complete AMI lineage visibility

<p>Amazon EC2 now provides Amazon Machine Image (AMI) ancestry that enables you to trace the complete lineage of any AMI, from its immediate parent through each preceding generation back to the root AMI. This capability gives you complete transparency into where your AMIs originated and how they've been propagated across regions.<br /> <br /> Previously, tracking AMI lineage required manual processes, custom tagging strategies, and complex record-keeping across regions. This approach was error-prone and difficult to maintain at scale, especially when AMIs were copied across multiple regions. Now, with AMI ancestry, you have full visibility into the entire generational chain of any AMI in your environment. AMI ancestry addresses critical use cases such as tracking AMIs for compliance with internal policies, identifying all potentially vulnerable AMIs when security issues are discovered in the ancestral chain, and maintaining complete visibility of an AMI’s origin across regions.<br /> <br /> AMI ancestry can be accessed using the AWS CLI, SDKs, or Console. This capability is available at no additional cost in all AWS Regions, including AWS China and AWS GovCloud (US) Regions. To learn more, please visit our <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-ancestry.html" target="_blank">documentation here</a>.&nbsp;</p>

Read article →

Amazon RDS supports Multi-AZ for SQL Server Web Edition

<p><a href="https://aws.amazon.com/rds/sqlserver/" target="_blank">Amazon Relational Database Service (Amazon RDS) for SQL Server</a> now supports Multi-AZ deployment for SQL Server Web Edition. SQL Server Web Edition is specifically designed to support public and internet-accessible web pages, websites, web applications, and web services, and is used by web hosters and web value-added providers (VAPs). These applications need high availability, and automated failover to recover from hardware and database failures. Now customers can use SQL Server Web Edition with Amazon RDS Multi-AZ deployment option, which provides a high availability solution. The new feature eliminates the need for customers to use more expensive options for high availability, such as using SQL Server Standard Edition or Enterprise Edition.<br /> <br /> To use the feature, customers simply configure their Amazon RDS for SQL Server Web Edition instance with Multi-AZ deployment option. Amazon RDS automatically provisions and maintains a standby replica in a different Availability Zone (AZ), and synchronously replicates data across the two AZs. In situations where your Multi-AZ primary database becomes unavailable, Amazon RDS automatically fails over to the standby replica, so customers can resume database operations quickly and without any administrative intervention.<br /> <br /> For more information about Multi-AZ deployment for RDS SQL Server Web Edition, refer to the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerMultiAZ.html" target="_blank">Amazon RDS for SQL Server User Guide</a>. See <a href="https://aws.amazon.com/rds/sqlserver/pricing/" target="_blank">Amazon RDS for SQL Server Pricing</a> for pricing details and regional availability.</p>

Read article →

AWS Site-to-Site VPN now supports BGP logging for VPN tunnels

<p>AWS Site-to-Site VPN now allows customers to publish Border Gateway Protocol (BGP) logs from VPN tunnels to AWS CloudWatch, providing enhanced visibility into VPN configurations and simplifying troubleshooting of connectivity issues.<br /> <br /> AWS Site-to-Site VPN is a fully managed service that enables secure connections between on-premises data centers or branch offices and AWS resources using IPSec tunnels. Until now, customers only had access to tunnel activity logs showing IKE/IPSec tunnel details. With this launch, customers can now access detailed BGP logs that provide visibility into BGP session status and transitions, routing updates, and detailed BGP error states. These logs help identify configuration mismatches between AWS VPN endpoints and customer gateway devices, providing granular visibility into BGP-related events. With both VPN tunnel logs and BGP logs now available in CloudWatch, customers can more easily monitor and analyze their VPN connections, enabling faster resolution of connectivity issues.<br /> <br /> This capability is available in all AWS commercial Regions and AWS GovCloud (US) Regions where AWS Site-to-Site VPN is available. To learn more and get started, visit the AWS Site-to-Site VPN <a href="https://docs.aws.amazon.com/vpn/latest/s2svpn/monitoring-logs.html">documentation</a>.</p>

Read article →

Amazon Kinesis Data Streams now supports up to 50 enhanced fan-out consumers

<p>Amazon Kinesis Data Streams now supports 50 enhanced fan-out consumers for On-demand Advantage streams. A higher fan-out limit lets customers attach many more independent, low-latency, high-throughput consumers to the same stream—unlocking parallel analytics, ML pipelines, compliance workflows, and multi-team architectures without creating extra streams or causing throughput contention. On-demand Advantage is an account-level setting that unlocks more capabilities and provides a different pricing structure for all on-demand streams in an AWS Region. On-demand Advantage offers data usage with 60% lower pricing compared to On-demand Standard, with data ingest at $0.032/GB, data retrieval at $0.016/GB, and enhanced fan-out data retrieval at $0.016/GB. High fan-out workloads are most cost effective with On-demand Advantage.<br /> <br /> Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale. Enhanced fan-out is an Amazon Kinesis Data Streams feature that enables consumers to receive records from a data stream with dedicated throughput of up to 2 MB of data per second per shard, and this throughput automatically scales with the number of shards in a stream. A consumer that uses enhanced fan-out doesn't have to contend with other consumers that are receiving data from the stream. For accounts with On-demand Advantage enabled, you can continue to use the existing Kinesis API RegisterStreamConsumer to register new consumers to use enhanced fan-out up to the new 50 limit.<br /> <br /> Support for enhanced fan-out consumers is available in the AWS Regions listed <a href="https://docs.aws.amazon.com/streams/latest/dev/enhanced-consumers.html" target="_blank">here</a>. For more information on Kinesis Data Streams quotas and limits, please see our <a href="https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html" target="_blank">documentation</a>. For more information on On-demand Advantage, please see our <a href="https://docs.aws.amazon.com/streams/latest/dev/how-do-i-size-a-stream.html#diff-modes-kds" target="_blank">documentation</a> for On-demand Advantage.&nbsp;</p>
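<p>A short boto3 sketch of registering additional enhanced fan-out consumers on an existing stream, as described above. The stream ARN and consumer names are placeholders, and the higher 50-consumer limit applies only to accounts with On-demand Advantage enabled.</p>
<pre><code>import boto3

kinesis = boto3.client("kinesis")

STREAM_ARN = "arn:aws:kinesis:us-east-1:111122223333:stream/example-stream"  # placeholder

# Hedged sketch: register several enhanced fan-out consumers on one stream.
# With On-demand Advantage enabled, up to 50 consumers can be registered.
for name in ["analytics-app", "ml-feature-pipeline", "compliance-archiver"]:
    consumer = kinesis.register_stream_consumer(
        StreamARN=STREAM_ARN,
        ConsumerName=name,
    )
    print(consumer["Consumer"]["ConsumerARN"])
</code></pre>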

Read article →

AWS Application Load Balancer launches Target Optimizer

<p>Application Load Balancer (ALB) now offers Target Optimizer, a new feature that allows you to enforce a maximum number of concurrent requests on a target.<br /> <br /> With Target Optimizer, you can fine-tune your application stack so that targets receive only the number of requests they can process, achieving a higher request success rate, better target utilization, and lower latency. This is particularly useful for compute-intensive workloads. For example, if you have applications that perform complex data processing or inference, you can configure each target to receive as few as one request at a time, ensuring the number of concurrent requests is in line with the target's processing capabilities.<br /> <br /> You can enable this feature by creating a new target group with a target control port. Once enabled, the feature works with the help of an AWS-provided agent that you run on your targets to track request concurrency. For deployments that include multiple target groups per ALB, you have the flexibility to configure this capability for each target group individually.<br /> <br /> You can enable Target Optimizer through the AWS Management Console, AWS CLI, AWS SDKs, and AWS APIs. ALB Target Optimizer is available in all AWS Commercial Regions, AWS GovCloud (US) Regions, and AWS China Regions. Traffic to target groups that enable Target Optimizer generates more LCU usage than regular target groups. For more information, see the <a href="https://aws.amazon.com/elasticloadbalancing/pricing/?nc=sn&amp;loc=3">pricing page</a>, <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/drive-application-performance-with-application-load-balancer-target-optimizer/">launch blog</a>, and ALB <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-register-targets.html#register-targets-target-optimizer">User Guide</a>.</p>

Read article →

Amazon CloudWatch application map now supports un-instrumented services discovery

<p>Application map in Amazon CloudWatch now supports un-instrumented services discovery, cross-account views, and change history, helping SRE and DevOps teams monitor and troubleshoot their large-scale distributed applications. Application map now detects and visualizes services not instrumented with Application Signals, providing out-of-the-box observability coverage in your distributed environment. In addition, it provides a single, unified view for applications, services, and infrastructure distributed across AWS accounts, enabling end-to-end visibility. Furthermore, it provides a history of recent changes, helping teams quickly correlate when a modification occurred and how it aligns with shifts in application health or performance.<br /> <br /> These enhancements help SRE and DevOps teams troubleshoot issues faster and operate with greater confidence in large-scale, distributed environments. For example, when latency or error rates spike, developers can now investigate recent configuration changes and analyze dependencies across multiple AWS accounts, all from a single map. During post-incident reviews, teams can use historical change data to understand what shifted and when, improving long-term reliability. By unifying service discovery, dependency mapping, and change history, application map reduces mean time to resolution (MTTR) and helps teams maintain application health across complex systems.<br /> <br /> Starting today, the new capabilities in application map are available at no additional cost in all AWS commercial Regions (except Taipei and New Zealand). To learn more about application map, please visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Services.html">Amazon CloudWatch Application Signals documentation</a>.</p>

Read article →

Recycle Bin adds support for Amazon EBS Volumes

<p>Recycle Bin for Amazon EBS, which helps you recover accidentally deleted snapshots and EBS-backed AMIs, now supports EBS volumes. If you accidentally delete a volume, you can now recover it directly from Recycle Bin instead of restoring from a snapshot, reducing your recovery point objective with no data loss between the last snapshot and deletion. Your recovered volume immediately delivers full performance, without waiting for data to be downloaded from snapshots.<br /> <br /> To use Recycle Bin, you can set a retention period for deleted volumes, and you can recover any volume within that period. Recovered volumes are immediately available and retain all attributes: tags, permissions, and encryption status. Volumes not recovered are deleted permanently when the retention period expires. You create retention rules to enable Recycle Bin for all volumes or specific volumes, using tags to target which volumes to protect.<br /> <br /> EBS volumes in Recycle Bin are billed at the same price as EBS volumes; read more on the <a href="https://aws.amazon.com/ebs/pricing/" target="_blank">pricing page</a>. To get started, read the <a href="https://docs.aws.amazon.com/ebs/latest/userguide/recycle-bin.html" target="_blank">documentation</a>. The feature is now available through the <a href="https://aws.amazon.com/cli/" target="_blank">AWS Command Line Interface (CLI)</a>, <a href="https://aws.amazon.com/tools/">AWS SDKs</a>, or the <a href="https://aws.amazon.com/console/" target="_blank">AWS Console</a> in all AWS commercial, China, and AWS GovCloud (US) Regions.</p>
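<p>A hedged boto3 sketch of a retention rule that keeps deleted volumes for a period before permanent deletion. The Recycle Bin API already supports retention rules for snapshots and AMIs; the resource type string for volumes below is an assumption based on this announcement, and the tag filter is illustrative.</p>
<pre><code>import boto3

rbin = boto3.client("rbin")

# Hedged sketch: retain accidentally deleted EBS volumes for 7 days.
# "EBS_VOLUME" as the ResourceType value is an assumption from this announcement;
# the tag filter limits the rule to volumes tagged env=prod.
rbin.create_rule(
    Description="Retain deleted production volumes for 7 days",
    ResourceType="EBS_VOLUME",
    RetentionPeriod={
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
    ResourceTags=[
        {"ResourceTagKey": "env", "ResourceTagValue": "prod"},
    ],
)
</code></pre>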

Read article →

AWS Site-to-Site VPN is collaborating with eero to simplify remote connectivity

<p>AWS Site-to-Site VPN is collaborating with eero to simplify how customers connect their remote sites to AWS. This collaboration will help customers establish secure connectivity between their remote sites and AWS in just a few clicks.<br /> <br /> Many AWS customers operate hundreds of remote sites, from restaurants and retail stores to gas stations and mobile offices. These sites rely on WiFi to connect employees, customers, and IoT applications like kiosks, ATMs, and vending machines, while also connecting with AWS for business operations. These customers also need a faster and more efficient way to connect hundreds of sites to AWS. For example, quick service restaurants need to connect their point of sale systems at each site to their payment gateways in AWS. AWS Site-to-Site VPN and eero are collaborating to simplify remote site connectivity by combining eero's ease of use with AWS's networking services. This solution leverages eero’s WiFi access points and network gateways to provide local connectivity. Using eero’s gateway appliances and AWS Site-to-Site VPN, customers can automatically establish VPN connectivity to access their applications hosted in AWS, such as payment gateways for point of sale systems, in just a few clicks. This makes it simpler and faster for customers to scale their remote site connectivity across hundreds of sites and eliminates the need for an onsite technician with networking expertise to set up the connectivity.<br /> <br /> Customers can use eero devices in the US geography to establish connectivity to AWS using Site-to-Site VPN. To learn more and get started, visit the AWS Site-to-Site VPN documentation and eero <a href="https://support.eero.com/hc/en-us/articles/42827838351899-AWS-Account-and-VPN-Configuration">documentation</a>.</p>

Read article →

Amazon OpenSearch Serverless adds AWS PrivateLink for management console

<p>Amazon OpenSearch Serverless now supports AWS PrivateLink for secure and private connectivity to the management console. With AWS PrivateLink, you can establish a private connection between your virtual private cloud (VPC) and Amazon OpenSearch Serverless to create, manage, and configure your OpenSearch Serverless resources without using the public internet. By enabling private network connectivity, this enhancement eliminates the need to use public IP addresses or to rely solely on firewall rules to access OpenSearch Serverless. With this release, OpenSearch Serverless management and data operations can be accessed securely through AWS PrivateLink. Data ingestion and query operations on collections still require the VPC endpoint configuration provided by OpenSearch Serverless for private connectivity, as described in the <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-vpc.html#vpc-endpoint-network">OpenSearch Serverless VPC developer guide</a>.<br /> <br /> You can use PrivateLink connections in all AWS Regions where Amazon OpenSearch Serverless is available. Creating VPC endpoints on AWS PrivateLink will incur additional charges; refer to the AWS PrivateLink pricing page for details. You can get started by creating an AWS PrivateLink interface endpoint for Amazon OpenSearch Serverless using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (CDK), or AWS CloudFormation. To learn more, refer to the documentation on creating an <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-vpc.html#serverless-vpc-privatelink">interface VPC endpoint</a> for the management console.<br /> <br /> Please refer to the <a href="https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-service-regions">AWS Regional Services List</a> for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, <a href="https://aws.amazon.com">see the documentation</a>.</p>

Read article →

AWS announces availability of Microsoft SQL Server 2025 images on Amazon EC2

<p>Amazon EC2 now supports Microsoft SQL Server 2025 with License-Included (LI) Amazon Machine Images (AMIs), providing a quick way to launch the latest version of SQL Server. By running SQL Server 2025 on Amazon EC2, customers can take advantage of the security, performance, and reliability of AWS with the latest SQL Server features.<br /> <br /> Amazon creates and manages Microsoft SQL Server 2025 AMIs to simplify the provisioning and management of SQL Server 2025 on EC2 Windows instances. These images support version 1.3 of the Transport Layer Security (TLS) protocol by default for enhanced performance and security. These images also come with pre-installed software such as AWS Tools for Windows PowerShell, AWS Systems Manager, AWS CloudFormation, and various network and storage drivers to make your management easier.<br /> <br /> SQL Server 2025 AMIs are available in all commercial AWS Regions and the AWS GovCloud (US) Regions.<br /> <br /> To learn more about the new AMIs, see SQL Server AMIs <a href="https://docs.aws.amazon.com/ec2/latest/windows-ami-reference/ami-windows-sql.html" target="_blank">User Guide</a> or read the <a href="https://aws.amazon.com/blogs/modernizing-with-aws/whats-new-in-microsoft-sql-server-2025-on-aws/" target="_blank">blog post</a>.</p>

Read article →

Validate and enforce required tags in CloudFormation, Terraform and Pulumi with Tag Policies

<p>AWS Organizations Tag Policies announces Reporting for Required Tags, a new validation check that proactively ensures your CloudFormation, Terraform, and Pulumi deployments include the required tags critical to your business. Your infrastructure-as-code (IaC) operations can now be automatically validated against tag policies to ensure tagging consistency across your AWS environments. With this, you can ensure compliance for your IaC deployments in two simple steps: 1) define your tag policy, and 2) enable validation in each IaC tool.<br /> <br /> Tag Policies enables you to enforce consistent tagging across your AWS accounts with proactive compliance, governance, and control. With this launch, you can specify mandatory tag keys in your tag policies, and enforce guardrails for your IaC deployments. For example, you can define a tag policy that all EC2 instances in your IaC templates must have “Environment”, “Owner”, and “Application” as required tag keys. You can start validation by activating AWS::TagPolicies::TaggingComplianceValidator Hook in CloudFormation, adding validation logic in your Terraform plan, or activating aws-organizations-tag-policies pre-built policy pack in Pulumi. Once configured, all CloudFormation, Terraform, and Pulumi deployments in the target account will be automatically validated and/or enforced against your tag policies, ensuring that resources like EC2 instances include the required "Environment", "Owner", and "Application" tags.<br /> <br /> You can use Reporting for Required Tags feature via AWS Management Console, AWS Command Line Interface, and AWS Software Development Kit. This feature is available with AWS Organizations Tag Policies in <a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies-supported-regions.html">AWS Regions</a> where Tag Policies is available. To learn more, visit Tag Policies <a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html">documentation</a>. To learn how to set up validation and enforcement, see the <a href="https://docs.aws.amazon.com/organizations/latest/userguide/enforce-required-tag-keys-iac.html">user guide</a> for CloudFormation, this <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/tag-policy-compliance">user guide</a> for Terraform, and this <a href="https://www.pulumi.com/blog/aws-organizations-tag-policies/">blog post</a> for Pulumi.</p>
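<p>A hedged boto3 sketch of the first step described above: defining a tag policy in AWS Organizations that covers the example keys "Environment", "Owner", and "Application". The policy body uses the existing tag policy schema; how required-key reporting is expressed in that schema is not detailed in this announcement, so treat the document below as illustrative and follow the linked user guides for the exact syntax and for enabling the CloudFormation hook, Terraform validation, or Pulumi policy pack.</p>
<pre><code>import json
import boto3

organizations = boto3.client("organizations")

# Hedged sketch: create a tag policy covering the Environment, Owner, and
# Application keys. The exact schema for required-tag reporting may differ;
# see the Tag Policies user guide for the authoritative syntax.
tag_policy = {
    "tags": {
        "Environment": {
            "tag_key": {"@@assign": "Environment"},
            "enforced_for": {"@@assign": ["ec2:instance"]},
        },
        "Owner": {"tag_key": {"@@assign": "Owner"}},
        "Application": {"tag_key": {"@@assign": "Application"}},
    }
}

organizations.create_policy(
    Name="required-tags-for-iac",
    Description="Required tag keys validated during IaC deployments",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)
</code></pre>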

Read article →

Amazon Connect now offers persistent agent connections for faster call handling

<p>Amazon Connect now offers the ability to maintain an open communication channel between your agents and Amazon Connect, helping reduce the time it takes to establish a connection with a customer. Contact center administrators can configure an agent’s user profile to maintain a persistent connection after a conversation ends, allowing for subsequent calls to connect faster. Amazon Connect persistent agent connection makes it easier to support compliance requirements with telemarketing laws such as the U.S. Telephone Consumer Protection Act (TCPA) for outbound campaigns’ calling by reducing the time it takes for a customer to connect with your agents.<br /> <br /> Amazon Connect persistent connection is now available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS regions</a> where Amazon Connect is offered, and there is no additional charge beyond standard pricing for the Amazon Connect service usage and associated telephony charges. To learn more, visit our <a href="https://aws.amazon.com/connect/features/">product page</a> or refer to our <a href="https://docs.aws.amazon.com/connect/latest/adminguide/enable-persistent-connection.html">Admin Guide</a>.</p>

Read article →

Amazon MQ now supports RabbitMQ version 4.2

<p>Amazon MQ now supports RabbitMQ version 4.2, which introduces native support for the AMQP 1.0 protocol, a new Raft-based metadata store named Khepri, local shovels, and message priorities for quorum queues. RabbitMQ 4.2 also includes various bug fixes and performance improvements for throughput and memory management.<br /> <br /> A key highlight of RabbitMQ 4.2 is support for AMQP 1.0 as a core protocol, offering enhanced features like the modified outcome, which allows consumers to modify message annotations before requeueing or dead lettering, and granular flow control, which lets a client application dynamically adjust how many messages it wants to receive from a specific queue. Amazon MQ has also introduced configurable resource limits for RabbitMQ 4.2 brokers, which you can modify based on your application requirements. Starting from RabbitMQ 4.0, mirroring of classic queues is no longer supported. Non-replicated classic queues are still supported. Quorum queues are the only replicated and durable queue type supported on RabbitMQ 4.2 brokers, and now offer message priorities in addition to consumer priorities.<br /> <br /> To start using RabbitMQ 4.2 on Amazon MQ, simply select RabbitMQ 4.2 when creating a new broker using the m7g instance type through the AWS Management Console, AWS CLI, or AWS SDKs. Amazon MQ automatically manages patch version upgrades for your RabbitMQ 4.2 brokers, so you need to specify only the major.minor version. To learn more about the changes in RabbitMQ 4.2, see the <a href="https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-release-notes.html">Amazon MQ release notes</a> and the Amazon MQ <a href="https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/working-with-rabbitmq.html">developer guide</a>. This version is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">Regions</a> where Amazon MQ m7g type instances are available today.</p>
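<p>A minimal boto3 sketch of creating a RabbitMQ 4.2 broker on an m7g instance type as described above; the broker name, instance size, and credentials are placeholders, and only the major.minor engine version is specified because Amazon MQ manages patch versions.</p>
<pre><code>import boto3

mq = boto3.client("mq")

# Hedged sketch: create a single-instance RabbitMQ 4.2 broker on Graviton (m7g).
# Names, size, and credentials are placeholders.
mq.create_broker(
    BrokerName="rabbit-42-example",
    EngineType="RABBITMQ",
    EngineVersion="4.2",                 # patch versions are managed by Amazon MQ
    HostInstanceType="mq.m7g.large",
    DeploymentMode="SINGLE_INSTANCE",
    PubliclyAccessible=False,
    AutoMinorVersionUpgrade=True,
    Users=[{"Username": "admin", "Password": "replace-with-a-strong-password"}],
)
</code></pre>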

Read article →

Amazon CloudFront now supports TLS 1.3 for origin connections

<p>Amazon CloudFront now supports TLS 1.3 when connecting to your origins, providing enhanced security and improved performance for origin communications. This upgrade offers stronger encryption algorithms, reduced handshake latency, and a better overall security posture for data transmission between CloudFront edge locations and your origin servers. TLS 1.3 support is automatically enabled for all origin types, including custom origins, Amazon S3, and Application Load Balancers, with no configuration changes required on your part.<br /> <br /> TLS 1.3 provides faster connection establishment through fewer round trips during the handshake process, delivering up to a 30% improvement in connection performance. CloudFront automatically negotiates TLS 1.3 when your origin supports it, while maintaining backward compatibility with lower TLS versions for origins that haven't yet upgraded. This enhancement benefits applications requiring high security standards, such as financial services, healthcare, and e-commerce platforms that handle sensitive data.<br /> <br /> TLS 1.3 support for origin connections is available at no additional charge in all CloudFront edge locations. To learn more about CloudFront origin TLS, see the <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/secure-connections-supported-ciphers-cloudfront-to-origin.html" target="_blank">Amazon CloudFront Developer Guide</a>.</p>

Read article →

Amazon CloudFront announces 3 new CloudFront Functions capabilities

<p>Amazon CloudFront now supports three new capabilities for CloudFront Functions: edge location and Regional Edge Cache (REC) metadata, raw query string retrieval, and advanced origin overrides. Developers can now build more sophisticated edge computing logic with greater visibility into CloudFront's infrastructure and precise, granular control over origin connections. CloudFront Functions allows you to run lightweight JavaScript code at CloudFront edge locations to customize content delivery and implement security policies with sub-millisecond execution times.<br /> <br /> Edge location metadata includes the three-letter airport code of the serving edge location and the expected REC. This enables geo-specific content routing and helps meet compliance requirements, such as directing European users to GDPR-compliant origins based on client location. The raw query string capability provides access to the complete, unprocessed query string as received from the viewer, preserving special characters and encoding that may be altered during standard parsing. Advanced origin overrides solve critical challenges for complex application infrastructures by allowing you to customize SSL/TLS handshake parameters, including Server Name Indication (SNI). For example, multi-tenant setups may override SNI where CloudFront connects through CNAME chains that resolve to servers with different certificate domains.<br /> <br /> These new CloudFront Functions capabilities are available at no additional charge in all CloudFront edge locations. To learn more about CloudFront Functions, see the <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html">Amazon CloudFront Developer Guide</a>.</p>

Read article →

Amazon Redshift Serverless now offers 4-RPU minimum capacity across more AWS Regions

<p>Amazon Redshift now allows you to get started with Amazon Redshift Serverless with a lower data warehouse <a href="https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-capacity.html">base capacity configuration</a> of 4 Redshift Processing Units (RPUs) in the AWS Asia Pacific (Thailand), Asia Pacific (Jakarta), Africa (Cape Town), Asia Pacific (Hyderabad), Asia Pacific (Osaka), Asia Pacific (Malaysia), Asia Pacific (Taipei), Mexico (Central), Israel (Tel Aviv), Europe (Spain), Europe (Milan), Europe (Frankfurt), and Middle East (UAE) Regions. Amazon Redshift Serverless measures data warehouse capacity in RPUs. 1 RPU provides 16 GB of memory. You pay only for the duration of workloads you run, in RPU-hours on a per-second basis. Previously, the minimum base capacity required to run Amazon Redshift Serverless was 8 RPUs. You can start using Amazon Redshift Serverless for as low as $1.50 per hour and pay only for the compute capacity your data warehouse consumes when it is active.<br /> <br /> Amazon Redshift Serverless enables users to run and scale analytics without managing data warehouse clusters. The new lower capacity configuration makes Amazon Redshift Serverless suitable for both production and development environments, particularly when workloads require minimal compute and memory resources. This entry-level configuration supports data warehouses with up to 32 TB of Redshift managed storage, offering a maximum of 100 columns per table and 64 GB of memory.<br /> <br /> To get started, see the Amazon Redshift Serverless <a href="https://aws.amazon.com/redshift/redshift-serverless/">feature page</a>, <a href="https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-serverless.html">user documentation</a>, and <a href="https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/Welcome.html">API Reference</a>.</p>
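<p>For example, a minimal boto3 sketch that creates a workgroup at the new 4-RPU base capacity (namespace and workgroup names are placeholders, and an existing namespace is assumed):</p>
<pre>
# Sketch: create a Redshift Serverless workgroup with the 4-RPU minimum base capacity.
import boto3

rs = boto3.client("redshift-serverless")

rs.create_workgroup(
    workgroupName="dev-workgroup",
    namespaceName="dev-namespace",   # assumes this namespace already exists
    baseCapacity=4,                  # previously the minimum was 8 RPUs
)
</pre>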

Read article →

Amazon CloudFront now supports CBOR Web Tokens and Common Access Tokens

<p>Amazon CloudFront now supports <a href="https://datatracker.ietf.org/doc/html/rfc8392" target="_blank">CBOR Web Tokens (CWT)</a> and Common Access Tokens (CAT), enabling secure token-based authentication and authorization with CloudFront Functions at CloudFront edge locations. CWT provides a compact, binary alternative to JSON Web Tokens (JWT) using <a href="https://datatracker.ietf.org/doc/html/rfc8949" target="_blank">Concise Binary Object Representation (CBOR)</a> encoding, while CAT extends CWT with additional fine-grained access controls including URL patterns, IP restrictions, and HTTP method limitations. Both token types use <a href="https://datatracker.ietf.org/doc/html/rfc8152" target="_blank">CBOR Object Signing and Encryption (COSE)</a> for enhanced security and allow developers to implement lightweight, high-performance authentication mechanisms directly at the edge with sub-millisecond execution times.<br /> <br /> CWT and CAT are ideal for performance-critical applications such as live video streaming platforms that need to validate viewer access tokens millions of times per second, or IoT applications where bandwidth efficiency is crucial. These tokens also provide a single, standardized method for content authentication across multi-CDN deployments, simplifying security management and avoiding the need for unique configurations for each CDN provider. For example, a media company can use CAT to create tokens that restrict access to specific video content based on subscription tiers, geographic location, and device types, all validated consistently across CloudFront and other CDN providers without requiring application network calls. With CWT and CAT support, you can validate incoming tokens, generate new tokens, and implement token refresh logic within CloudFront Functions. The feature integrates seamlessly with CloudFront Functions KeyValueStore for secure key management.<br /> <br /> CWT and CAT support for CloudFront Functions is available at no additional charge in all CloudFront edge locations. To learn more about CloudFront Functions CBOR Web Token support, see the <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cwt-support-cloudfront-functions.html" target="_blank">Amazon CloudFront Developer Guide</a>.</p>

Read article →

AWS CloudTrail launches Insights for data events to automatically detect anomalies in data access

<p>Today, AWS extends <a contenteditable="false" href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-cloudtrail.html" style="cursor: pointer;">AWS CloudTrail Insights</a> to data events. CloudTrail Insights helps you identify and respond to unusual activity associated with API call rates and API error rates in your AWS accounts. Previously, Insights continuously analyzed only CloudTrail management events. With today’s launch, Insights also analyzes data events, strengthening your ability to quickly investigate and respond to potential security or operational issues.<br /> <br /> Available on CloudTrail <a contenteditable="false" href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-trails" style="cursor: pointer;">trails</a>, Insights for data events automatically detects anomalies in data access activity, such as unexpected surges in Amazon S3 object deletion API calls or increased error rates for AWS Lambda function invocations, enabling you to rapidly uncover potential security and operational issues, all without requiring you to build detection systems or export data to third-party tools.<br /> <br /> CloudTrail Insights for data events works by establishing normal baselines for data access patterns in your AWS accounts and creating a CloudTrail event when it detects anomalies. When an unusual pattern is detected, CloudTrail provides the relevant data events from the anomaly period, helping you precisely investigate what led to the anomaly. You can configure alerts to be automatically notified when potential issues occur, enabling rapid response to potential threats or issues.<br /> <br /> CloudTrail Insights for data events is available in all regions where <a contenteditable="false" href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-supported-regions.html" style="cursor: pointer;">AWS CloudTrail is available</a>. To get started with CloudTrail Insights, see our <a contenteditable="false" href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-cloudtrail.html" style="cursor: pointer;">documentation</a>. Additional charges apply for Insights for data events. To learn more about pricing for this feature, visit the <a contenteditable="false" href="https://aws.amazon.com/cloudtrail/pricing/" style="cursor: pointer;">AWS CloudTrail pricing</a> page.</p>
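<p>As an illustration, a boto3 sketch that enables Insights on an existing trail; the trail name is a placeholder, and the selectors shown are the documented API call rate and API error rate insight types.</p>
<pre>
# Sketch: turn on CloudTrail Insights for a trail that already logs data events.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_insight_selectors(
    TrailName="my-data-events-trail",   # placeholder trail name
    InsightSelectors=[
        {"InsightType": "ApiCallRateInsight"},
        {"InsightType": "ApiErrorRateInsight"},
    ],
)
</pre>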

Read article →

Amazon EC2 C7i instances are now available in the Asia Pacific (Melbourne) Region

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Asia Pacific (Melbourne) Region. C7i instances are supported by custom Intel processors, available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.<br /> <br /> C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad-serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for workloads.<br /> <br /> C7i instances support new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. Customers can attach up to 128 EBS volumes to a C7i instance vs. up to 28 EBS volumes to a C6i instance. This allows you to process larger amounts of data, scale workloads, and improve performance over C6i instances.<br /> <br /> To learn more, visit <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/c7i/" style="cursor: pointer;">Amazon EC2 C7i Instances</a>. To get started, see the <a contenteditable="false" href="https://console.aws.amazon.com/" style="cursor: pointer;">AWS Management Console</a>.</p>

Read article →

Amazon MSK Serverless expands availability to South America (São Paulo) region

<p>You can now connect your Apache Kafka applications to <a href="https://aws.amazon.com/msk/features/msk-serverless/" target="_blank">Amazon MSK Serverless</a> in the South America (São Paulo) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region</a>.<br /> <br /> Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK Serverless is a cluster type for Amazon MSK that allows you to run Apache Kafka without having to manage and scale cluster capacity. MSK Serverless automatically provisions and scales compute and storage resources, so you can use Apache Kafka on demand.<br /> <br /> With this launch, Amazon MSK Serverless is now generally available in the Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Europe (Paris), Europe (London), South America (São Paulo), US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS regions. To learn more and get started, see our <a href="https://aws.amazon.com/msk/features/msk-serverless/" target="_blank">developer guide</a>.</p>

Read article →

Amazon EC2 R8i and R8i-flex instances are now available in additional AWS regions

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Sydney), Canada (Central), and US West (N. California) regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads. They are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i.<br /> <br /> R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources.<br /> <br /> R8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. <a href="https://docs.aws.amazon.com/sap/latest/general/sap-hana-aws-ec2.html">R8i instances are SAP-certified</a> and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for mission-critical SAP workloads.<br /> <br /> To get started, sign in to the <a href="https://aws.amazon.com/console/">AWS Management Console</a>. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new <a href="https://aws.amazon.com/ec2/instance-types/r8i">R8i and R8i-flex</a> instances, visit the AWS News <a href="https://aws.amazon.com/blogs/aws/best-performance-and-fastest-memory-with-the-new-amazon-ec2-r8i-and-r8i-flex-instances/">blog</a>.</p>

Read article →

AWS Cloud WAN adds Routing Policy for advanced traffic control and flexible network deployments

<p>AWS announces the general availability of Cloud WAN Routing Policy, providing customers with fine-grained controls to optimize route management, control traffic patterns, and customize network behavior across their global wide-area networks.<br /> <br /> AWS Cloud WAN allows you to build, monitor, and manage a unified global network that interconnects your resources in the AWS cloud and your on-premises environments. Using the new Routing Policy feature, customers can perform advanced routing techniques such as route filtering and summarization for better control over routes exchanged between AWS Cloud WAN and external networks. This feature enables customers to build controlled routing environments to minimize route reachability blast radius, prevent sub-optimal or asymmetric connectivity patterns, and avoid overrunning route tables due to the propagation of unnecessary routes in global networks. In addition, this feature allows customers to set advanced Border Gateway Protocol (BGP) attributes to customize network traffic behavior per their individual needs and build highly resilient hybrid-cloud network architectures. This feature also provides advanced visibility into the routing databases to allow rapid troubleshooting of network issues in complex multi-path environments.<br /> <br /> The new Routing Policy feature is available in all <a href="https://docs.aws.amazon.com/network-manager/latest/cloudwan/what-is-cloudwan.html#cloudwan-available-regions">AWS Regions</a> where AWS Cloud WAN is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI), and the AWS Software Development Kit (SDK). There is no additional charge for enabling Routing Policy on AWS Cloud WAN. For more information, see the AWS Cloud WAN <a href="https://docs.aws.amazon.com/network-manager/latest/cloudwan/what-is-cloudwan.html">documentation pages</a>.</p>

Read article →

AWS DMS Schema Conversion adds SAP (Sybase) ASE to PostgreSQL support with generative AI

<p>AWS <a href="https://aws.amazon.com/dms/">Database Migration Service</a> (DMS) <a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_SchemaConversion.html">Schema Conversion</a> is a fully managed feature of DMS that automatically assesses and converts database schemas to formats compatible with AWS target database services. Today, we're excited to announce that Schema Conversion now supports conversions from SAP Adaptive Server Enterprise (ASE) databases (formerly known as Sybase) to Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL, powered by a generative AI capability.<br /> <br /> Using Schema Conversion, you can automatically convert database objects from your SAP (Sybase) ASE source to an Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL target. The integrated generative AI capability intelligently handles complex code conversions that typically require manual effort, such as stored procedures, functions, and triggers. Schema Conversion also provides detailed assessment reports to help you plan and execute your migration effectively.<br /> <br /> To learn more about this feature, see the documentation for using SAP (Sybase) ASE as a source for <a href="https://docs.aws.amazon.com/dms/latest/userguide/dm-data-providers-source-sybase-ASE.html">AWS DMS Schema Conversion</a> and using SAP (Sybase) ASE as a source for <a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html">AWS DMS</a> for data migration. For details about the generative AI capability, please refer to the <a href="https://docs.aws.amazon.com/dms/latest/userguide/schema-conversion-convert.databaseobjects.html#schema-conversion-convert.databaseobjects.genai">User Guide</a>. For AWS DMS Schema Conversion regional availability, please refer to the <a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_SchemaConversion.html#schema-conversion-supported-regions">Supported AWS Regions</a> page.</p>

Read article →

Amazon Quick Sight expands Dashboard Theme Customization

<p><a href="https://aws.amazon.com/quicksight/" style="cursor: pointer;">Amazon Quick Sight</a> now supports comprehensive theming capabilities that enable organizations to maintain consistent brand identity across their analytics dashboards. Authors can customize interactive sheet backgrounds with gradient colors and angles, implement sophisticated card styling with configurable borders and opacity, and control typography for visual titles and subtitles at the theme level.<br /> <br /> These enhancements address critical enterprise needs including maintaining corporate visual identity and creating seamless embedded analytics experiences. With theme-level controls, organizations can ensure visual consistency across departments while enabling embedded dashboards to match host application styling. The theming capabilities are particularly valuable for embedded analytics scenarios, as the features enable dashboards to appear native within host applications, enhancing the overall professional appearance and user experience.<br /> <br /> Expanded theme capabilities are available in all <a href="https://docs.aws.amazon.com/quicksight/latest/user/regions-qs.html" style="cursor: pointer;">supported Amazon Quick Sight regions</a>.</p>

Read article →

Amazon Aurora DSQL now provides statement-level cost estimates in query plans

<p><a contenteditable="false" href="https://aws.amazon.com/rds/aurora/dsql/" style="cursor: pointer;">Amazon Aurora DSQL</a> now provides statement-level cost estimates in query plans, giving developers immediate insight into the resources consumed by individual SQL statements. This enhancement surfaces Distributed Processing Unit (DPU) usage estimates directly within the query plan output, helping developers identify workload cost drivers, tune query performance, and better forecast resource usage.<br /> <br /> With this launch, Aurora DSQL appends per-category (compute, read, write, and multi-Region write) and total estimated DPU usage at the end of the EXPLAIN ANALYZE VERBOSE plan output. The feature complements CloudWatch metrics by providing fine-grained, real-time visibility into query costs.<br /> <br /> Aurora DSQL support for DPU usage in EXPLAIN ANALYZE VERBOSE plans is available in all <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">Regions where Aurora DSQL is available</a>. To get started, visit the Aurora DSQL <a contenteditable="false" href="https://docs.aws.amazon.com/aurora-dsql/latest/userguide/understanding-dpus-explain-analyze.html" style="cursor: pointer;">Understanding DPUs in EXPLAIN ANALYZE</a> documentation.</p>
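<p>A minimal sketch of reading the new DPU estimates from Python follows; the connection details and sample table are placeholders, and Aurora DSQL authenticates with an IAM-generated token used as the password.</p>
<pre>
# Sketch: run EXPLAIN ANALYZE VERBOSE against Aurora DSQL and print the plan,
# whose final lines include the per-category and total estimated DPU usage.
import psycopg2

conn = psycopg2.connect(
    host="REPLACE_WITH_YOUR_CLUSTER_ENDPOINT",   # placeholder
    dbname="postgres",
    user="admin",
    password="REPLACE_WITH_IAM_AUTH_TOKEN",      # placeholder IAM auth token
    sslmode="require",
)
with conn.cursor() as cur:
    cur.execute("EXPLAIN ANALYZE VERBOSE SELECT * FROM orders WHERE order_id = 42")
    for (line,) in cur.fetchall():
        print(line)
conn.close()
</pre>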

Read article →

Amazon EC2 Mac instances now support Apple macOS Tahoe

<p>Starting today, customers can run Apple macOS Tahoe (version 26) as Amazon Machine Images (AMIs) on Amazon EC2 Mac instances. Apple macOS Tahoe is the latest major macOS version, and introduces multiple new features and performance improvements over prior macOS versions, including support for Xcode version 26.0 or later (which includes the latest SDKs for iOS, iPadOS, macOS, tvOS, watchOS, and visionOS).<br /> <br /> Backed by Amazon Elastic Block Store (EBS), EC2 macOS AMIs are AWS-supported images that are designed to provide a stable, secure, and high-performance environment for developer workloads running on EC2 Mac instances. EC2 macOS AMIs include the AWS Command Line Interface, Command Line Tools for Xcode, Amazon SSM Agent, and Homebrew. The <a href="https://github.com/aws/homebrew-aws">AWS Homebrew Tap</a> includes the latest versions of AWS packages included in the AMIs.<br /> <br /> Apple macOS Tahoe AMIs are available for Apple silicon EC2 Mac instances and are published to all AWS regions where Apple silicon EC2 Mac instances are available today. Customers can get started with macOS Tahoe AMIs via the AWS Console, Command Line Interface (CLI), or API. Learn more about EC2 Mac instances <a href="https://aws.amazon.com/ec2/instance-types/mac/">here</a> or get started with an EC2 Mac instance <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html">here</a>. You can also subscribe to EC2 macOS AMI release notifications <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html#subscribe-notifications">here</a>.</p>

Read article →

AWS Parallel Computing Service now supports Slurm REST API

<p>AWS Parallel Computing Service (AWS PCS) now supports the Slurm REST API. This new feature enables you to programmatically submit jobs, monitor cluster status, and manage resources using HTTP requests instead of relying on command-line tools.<br /> <br /> AWS PCS is a managed service that makes it easier for you to run and scale your high performance computing (HPC) workloads and build scientific and engineering models on AWS using Slurm. The Slurm REST API helps you automate cluster operations and integrate HPC resources into existing systems and workflows, including web portals, CI/CD pipelines, and data processing frameworks, all without the overhead of maintaining additional REST API infrastructure.<br /> <br /> This feature is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS Regions where AWS PCS is available</a>, and there's no additional cost to use the feature. To learn more about using the Slurm REST API, see the <a href="https://docs.aws.amazon.com/pcs/latest/userguide/slurm-rest-api.html" style="cursor: pointer;">AWS PCS User Guide</a>.</p>
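<p>As a rough illustration, a Python sketch that calls a Slurm REST API endpoint over HTTP; the endpoint URL is a placeholder for your cluster's slurmrestd address, the token is assumed to be a valid Slurm JWT, and the API version in the path may differ on your cluster.</p>
<pre>
# Hypothetical sketch: check slurmrestd connectivity with a ping request.
import requests

SLURMRESTD_URL = "http://REPLACE_WITH_YOUR_SLURMRESTD_ENDPOINT:6820"   # placeholder
HEADERS = {
    "X-SLURM-USER-NAME": "pcs-user",                 # placeholder user
    "X-SLURM-USER-TOKEN": "REPLACE_WITH_SLURM_JWT",  # placeholder token
}

resp = requests.get(f"{SLURMRESTD_URL}/slurm/v0.0.40/ping", headers=HEADERS, timeout=10)
resp.raise_for_status()
print(resp.json())
</pre>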

Read article →

Amazon EC2 High Memory U7i instances now available in additional regions

<p>Amazon EC2 High Memory U7i instances with 16TB of memory (u7in-16tb.224xlarge) are now available in the AWS Europe (Ireland) Region, U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in the AWS Asia Pacific (Hyderabad) Region, and U7i instances with 8TB of memory (u7i-8tb.112xlarge) are now available in the Asia Pacific (Mumbai) and AWS GovCloud (US-West) Regions. U7i instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7in-16tb instances offer 16TiB of DDR5 memory, U7i-12tb instances offer 12TiB of DDR5 memory, and U7i-8tb instances offer 8TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-8tb instances offer 448 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i-12tb instances offer 896 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7in-16tb instances offer 896 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 200Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/u7i/" style="cursor: pointer;">High Memory instances page</a>.</p>

Read article →

Amazon SageMaker Unified Studio adds EMR on EKS support with SSO capabilities

<p>Amazon SageMaker Unified Studio announces support for EMR on EKS as a compute resource for interactive Apache Spark sessions. This launch enables EMR on EKS capabilities such as large-scale distributed compute with automatic scaling, cost optimization, and containerized workload isolation directly within Amazon SageMaker Unified Studio. It allows customers to transition between interactive analysis and production-level data processing jobs without moving their workloads between platforms.<br /> <br /> Building on this capability, EMR on EKS in Amazon SageMaker Unified Studio now supports corporate identities through AWS IAM Identity Center's trusted identity propagation. This enables seamless single sign-on and end-to-end data access traceability for interactive analytics sessions on EMR on EKS clusters. Data practitioners can access Glue Data Catalog resources using their corporate credentials from SageMaker Unified Studio's JupyterLab environment, while administrators maintain fine-grained access controls and audit trails. This integration simplifies security governance and streamlines compliance for enterprise data workflows.<br /> <br /> EMR on EKS compute support in Amazon SageMaker Unified Studio is available in all existing SageMaker Unified Studio <a contenteditable="false" href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/supported-regions.html" style="cursor: pointer;">regions</a>. To learn more, visit the SageMaker Unified Studio <a contenteditable="false" href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/getting-started-with-emr-on-eks.html" style="cursor: pointer;">documentation</a>.</p>

Read article →

Amazon Braket adds new quantum processor from Alpine Quantum Technologies (AQT)

<p>Amazon Braket now offers access to IBEX Q1, a trapped-ion quantum processing unit (QPU) from Alpine Quantum Technologies (AQT), a new quantum hardware provider on Amazon Braket. IBEX Q1 is a 12-qubit system with all-to-all connectivity, enabling any qubit to directly interact with any other qubit without requiring intermediate SWAP gates.<br /> <br /> With this launch, customers now have on-demand access to AQT's trapped-ion technology for building and testing quantum programs, and priority access via Hybrid Jobs for running variational quantum algorithms - all with pay-as-you-go pricing. Customers can also reserve dedicated capacity on this QPU for time-sensitive workloads via Braket Direct with hourly pricing and no upfront commitments.<br /> <br /> At launch, IBEX Q1 is available Tuesdays and Wednesdays from 09:00 to 16:00 UTC, providing customers in European time zones convenient access during their work hours. IBEX Q1 is accessible from the Europe (Stockholm) Region.<br /> <br /> Researchers at accredited institutions can apply for credits to support experiments on Amazon Braket through the <a href="https://aws.amazon.com/government-education/research-and-technical-computing/cloud-credit-for-research/" target="_blank">AWS Cloud Credits for Research program</a>. To get started with IBEX Q1, visit the <a href="https://console.aws.amazon.com/braket/" target="_blank">Amazon Braket devices page</a> in the AWS Management Console to explore device specifications and capabilities. You can also explore our <a href="https://github.com/amazon-braket/amazon-braket-examples" target="_blank">example notebooks</a> and read our <a href="https://aws.amazon.com/blogs/quantum-computing/amazon-braket-launches-trapped-ion-quantum-computer-from-alpine-quantum-technologies/" target="_blank">launch blog post</a>.</p>
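<p>For illustration, a minimal Braket SDK sketch that runs a small circuit on the new device; the device ARN below is a placeholder, so look up the exact IBEX Q1 ARN on the Braket devices page, and note the device must be within its availability window.</p>
<pre>
# Sketch: run a two-qubit Bell circuit on a Braket QPU using the Braket Python SDK.
from braket.aws import AwsDevice
from braket.circuits import Circuit

# Placeholder ARN; replace with the IBEX Q1 ARN from the Braket console.
device = AwsDevice("arn:aws:braket:eu-north-1::device/qpu/aqt/REPLACE_WITH_DEVICE_NAME")

bell = Circuit().h(0).cnot(0, 1)
task = device.run(bell, shots=100)
print(task.result().measurement_counts)
</pre>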

Read article →

Amazon SageMaker Unified Studio now supports long-running sessions with corporate identities

<p>Amazon SageMaker Unified Studio now supports long-running sessions with corporate identities through AWS IAM Identity Center's trusted identity propagation (TIP) capability. This feature enables data scientists, data engineers, and analytics professionals to achieve uninterrupted workflow continuity and improved productivity. Users can now initiate interactive notebooks from Amazon SageMaker Unified Studio and data processing sessions on Amazon EMR (EC2, EKS, Serverless) and AWS Glue that continue running in the background using their corporate credentials, even when they log off or their session expires.<br /> <br /> With this capability, you can now launch resource-intensive, complex data processing sessions or exploratory analytics flows and step away from your workstation without interrupting progress. Sessions automatically maintain corporate identity permissions through IAM Identity Center's trusted identity propagation, ensuring consistent security and access controls throughout execution. You can start multi-hour or multi-day workflows knowing the jobs will persist through network disconnections, laptop shutdowns, or credential refresh cycles, with sessions running for up to 90 days (default 7 days). This eliminates the productivity bottleneck of monitoring long-running processes and enables more efficient resource utilization across data teams.<br /> <br /> Long-running sessions are available in Amazon SageMaker Unified Studio in all existing SageMaker Unified Studio <a contenteditable="false" href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/supported-regions.html" style="cursor: pointer;">regions</a>. To learn more about user background sessions, see the <a contenteditable="false" href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/user-background-sessions.html" style="cursor: pointer;">Amazon EMR on EC2</a>, <a contenteditable="false" href="https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/security-iam-service-trusted-prop-user-background.html" style="cursor: pointer;">Amazon EMR Serverless</a>, <a contenteditable="false" href="https://docs.aws.amazon.com/glue/latest/dg/user-background-sessions.html" style="cursor: pointer;">AWS Glue</a>, and <a contenteditable="false" href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/configuring-user-background-sessions-for-emr-on-eks.html" style="cursor: pointer;">Amazon EMR on EKS</a> documentation.</p>

Read article →

Amazon Braket introduces spending limits feature for quantum processing units

<p>Amazon Braket now supports spending limits, enabling customers to set spending caps on quantum processing units (QPUs) to manage costs. With spending limits, customers can define maximum spending thresholds on a per-device basis, and Amazon Braket automatically validates that each task submission doesn't exceed the pre-configured limits. Tasks that would exceed remaining budgets are rejected before creation. For comprehensive cost management across all of Amazon Web Services, customers should continue to use the AWS Budgets feature as part of <a href="https://docs.aws.amazon.com/cost-management/latest/userguide/what-is-costmanagement.html" target="_blank">AWS Cost Management</a>.<br /> <br /> Spending limits are particularly valuable for research institutions managing quantum computing budgets across multiple users, for educational environments preventing accidental overspending during coursework, and for development teams experimenting with quantum algorithms. Customers can update or delete spending limits at any time as their requirements change. Spending limits apply only to on-demand tasks on quantum processing units and do not include costs for simulators, notebook instances, hybrid jobs, or tasks created during Braket Direct reservations.<br /> <br /> Spending limits are available now in all AWS Regions where Amazon Braket is supported at no additional cost. Researchers at accredited institutions can apply for credits to support experiments on Amazon Braket through the <a href="https://aws.amazon.com/government-education/research-and-technical-computing/cloud-credit-for-research/" target="_blank">AWS Cloud Credits for Research program</a>. To get started, visit the Spending limits page in the <a href="https://us-east-1.console.aws.amazon.com/braket/home?region=us-east-1#/spending-limits" target="_blank">Amazon Braket console</a> and read our <a href="https://aws.amazon.com/blogs/quantum-computing/amazon-braket-introduces-spending-limits-for-quantum-processing-units/" target="_blank">launch blog post</a>.</p>

Read article →

AWS Glue supports additional SAP entities as zero-ETL integration sources

<p>AWS Glue now supports full snapshot and incremental load ingestion for new SAP entities using zero-ETL integrations. This enhancement introduces full snapshot data ingestion for SAP entities that lack complete change data capture (CDC) functionality, while also providing incremental data loading capabilities for SAP entities that don't support the Operational Data Provisioning (ODP) framework. These new features work alongside existing capabilities for ODP-supported SAP entities to give customers the flexibility to implement zero-ETL data ingestion strategies across diverse SAP environments.<br /> <br /> Fully managed AWS zero-ETL integrations eliminate the engineering overhead associated with building custom ETL data pipelines. This new zero-ETL functionality enables organizations to ingest data from multiple SAP applications into <a href="https://aws.amazon.com/redshift/" target="_blank">Amazon Redshift</a> or the <a href="https://aws.amazon.com/sagemaker/lakehouse/" target="_blank">lakehouse architecture of Amazon SageMaker</a> to address scenarios where SAP entities lack deletion tracking flags or don't support the Operational Data Provisioning (ODP) framework. Through full snapshot ingestion for entities without deletion tracking and timestamp-based incremental loading for non-ODP systems, zero-ETL integrations reduce operational complexity while saving organizations weeks of engineering effort that would otherwise be required to design, build, and test custom data pipelines across diverse SAP application environments.<br /> <br /> This feature is available in all AWS Regions where AWS Glue zero-ETL is currently available.<br /> <br /> To get started with the enhanced zero-ETL coverage for SAP sources, refer to the <a href="https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html" target="_blank">AWS Glue zero-ETL user guide</a>.</p>

Read article →

AWS Step Functions enhances Local Testing with TestState API

<p>AWS Step Functions enhances the TestState API to support local unit testing of workflows, allowing you to validate your workflow logic, including advanced patterns like Map and Parallel states, without deploying state machines to your AWS account.<br /> <br /> <a href="https://aws.amazon.com/step-functions/">AWS Step Functions</a> is a visual workflow service capable of orchestrating more than 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads. The TestState API now supports testing of complete workflows, including error handling patterns, in your local development environment. You can now mock AWS service integrations, with optional API contract validation that verifies your mocked responses match the expected responses from actual AWS services, helping ensure your workflows work correctly in production. You can integrate TestState API calls into your preferred testing frameworks, such as Jest and pytest, and into CI/CD pipelines, enabling automated workflow testing as part of your development process. These capabilities help accelerate development by providing instant feedback on workflow definitions, enabling validation of workflow behavior in your local environment, and catching potential issues earlier in the development cycle.<br /> <br /> The enhanced TestState API is available through the AWS SDK in all AWS Regions where Step Functions is available. For a complete list of regions and service offerings, see <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>.<br /> <br /> To get started, access the TestState API through the AWS SDK or AWS CLI, or check out the AWS Step Functions <a href="https://docs.aws.amazon.com/step-functions/latest/dg/test-state-isolation.html">Developer Guide</a>.</p>
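<p>A minimal boto3 sketch of the TestState API follows, exercising a single state definition with a sample input; the role ARN is a placeholder, and the parameters for mocking service integrations are covered in the Developer Guide linked above rather than shown here.</p>
<pre>
# Sketch: test a Pass state definition without deploying a state machine.
import json
import boto3

sfn = boto3.client("stepfunctions")

state_definition = {
    "Type": "Pass",
    "Parameters": {"greeting.$": "States.Format('Hello, {}!', $.name)"},
    "End": True,
}

response = sfn.test_state(
    definition=json.dumps(state_definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsTestRole",  # placeholder
    input=json.dumps({"name": "world"}),
    inspectionLevel="DEBUG",
)
print(response["status"], response.get("output"))
</pre>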

Read article →

Amazon API Gateway now supports additional TLS security policies for REST APIs

<p>Amazon API Gateway now supports enhanced TLS security policies on API endpoints and custom domain names, providing you with greater control over the security posture of your APIs. These new policies help you meet evolving security requirements, comply with stricter regulations, and enhance encryption for your API connections.<br /> <br /> When configuring REST APIs and custom domain names, you can now select from an extended list of security policies, including options that require TLS 1.3 only, implement Perfect Forward Secrecy, comply with the Federal Information Processing Standard (FIPS), or leverage Post Quantum Cryptography, all while simplifying API security management. The enhanced policies also support endpoint access control for additional governance.<br /> <br /> API Gateway enhanced TLS security policies are available in the following AWS commercial Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), Europe (Zurich), Israel (Tel Aviv), Middle East (UAE), South America (São Paulo).<br /> <br /> For more information, visit the <a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-security-policies.html">Amazon API Gateway documentation</a>.</p>
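<p>As a sketch, the security policy on a REST API custom domain is set through the securityPolicy field; the domain and certificate values below are placeholders, and the identifier for a TLS 1.3-only or FIPS policy should be taken from the documentation linked above rather than from this example.</p>
<pre>
# Hypothetical sketch: create a custom domain name with an explicit TLS security policy.
import boto3

apigw = boto3.client("apigateway")

apigw.create_domain_name(
    domainName="api.example.com",   # placeholder domain
    regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",       # replace with one of the new enhanced policy names
)
</pre>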

Read article →

Amazon API Gateway adds Developer Portal capabilities

<p>Amazon API Gateway now offers Portals, enabling businesses to create fully managed, AWS-native developer portals that serve as a central hub for discovering, documenting, governing, and monetizing AWS assets such as REST APIs across their AWS infrastructure. Portals solve the challenge of fragmented APIs by automatically discovering existing APIs across accounts and generating documentation, while also allowing custom documentation. Teams can organize APIs into logical products for different audiences, customize branding by attaching company logos, configure access controls, ensure API compliance with organizational standards, and use analytics to understand user engagement. Users benefit from API discovery and a "Try It" button for API exploration.<br /> <br /> Portals deliver three benefits that address the pressing challenges in API management today. They eliminate the security risks of third-party solutions by keeping all API configurations within AWS boundaries while providing access control for internal and external audiences. Portals also reduce developer onboarding time from weeks to minutes through automated portal generation and documentation that updates as APIs evolve, eliminating weeks of infrastructure setup and promoting reuse across developer teams. Finally, Portals provide visibility into developer portal usage and analytics through CloudWatch RUM (Real User Monitoring), making it easier to understand user engagement.<br /> <br /> To learn about pricing for this feature, please see the Amazon API Gateway pricing page. Amazon API Gateway Portals is available in all AWS Regions, excluding the AWS GovCloud (US) and China Regions. To get started, visit the Amazon <a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-portals.html" target="_blank">API Gateway documentation</a> and <a href="https://aws.amazon.com/blogs/compute/improve-api-discoverability-with-the-new-amazon-api-gateway-portal/" target="_blank">AWS blog post</a>.</p>

Read article →

Amazon API Gateway now supports response streaming for REST APIs

<p>Amazon API Gateway now progressively streams response payloads to clients as they become available. This improves REST API responsiveness by eliminating the need to buffer complete responses before transmission. This new capability works with backends that support streaming, including Lambda functions, HTTP proxy integrations, and private integrations.<br /> <br /> Response streaming delivers three key benefits: improved time-to-first-byte (TTFB) performance, extended integration timeouts up to 15 minutes, and support for payloads larger than 10 MB. Generative AI applications particularly benefit from improved TTFB as users see responses appear incrementally in real time, while complex deliberation-focused models that take longer to process can now run with extended timeouts. Additionally, large payload support enables direct streaming of media files and large datasets without requiring workarounds like pre-signed Amazon S3 URLs.<br /> <br /> To learn about pricing for this feature, please see the Amazon API Gateway pricing page. Amazon API Gateway response streaming is available in all AWS Regions, including the AWS GovCloud (US) Regions, and works with regional, private, and edge-optimized endpoints. To get started, visit the <a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/response-transfer-mode.html">Amazon API Gateway documentation</a>, this <a href="https://aws.amazon.com/blogs/compute/building-responsive-apis-with-amazon-api-gateway-response-streaming/">AWS blog post</a>, and this <a href="https://aws.amazon.com/blogs/architecture/building-an-ai-gateway-to-amazon-bedrock-with-amazon-api-gateway/">customer success blog post</a>.</p>

Read article →

AWS IAM enables identity federation to external services using JSON Web Tokens (JWTs)

<p>AWS Identity and Access Management (IAM) announces outbound identity federation, enabling customers to securely federate their AWS identities to external services using short-lived JSON Web Tokens (JWTs). This allows customers to securely authenticate their AWS workloads with third-party cloud providers, SaaS providers, and self-hosted applications without using long-term credentials or implementing complex workarounds.<br /> <br /> Customers can now exchange their AWS IAM credentials for cryptographically signed, short-lived JSON Web Tokens (JWTs), providing a simple and secure mechanism for AWS workloads to access external services. These tokens contain rich context about the AWS workloads, enabling external services to implement fine-grained access control. Administrators can control access to token generation, enforce token properties (such as lifetime, audience, and signing algorithms) using IAM policies, and audit token usage using CloudTrail logs, allowing them to meet their organization’s security and compliance requirements.<br /> <br /> This capability is available in all AWS commercial Regions, AWS GovCloud (US) Regions, and China Regions. To get started, visit the list of resources below:<br /> </p> <ul> <li>Read the <a href="https://aws.amazon.com/blogs/aws/simplify-access-to-external-services-using-aws-iam-outbound-identity-federation">AWS News Blog Post</a></li> <li>Visit <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_outbound.html">IAM Documentation</a></li> </ul>

Read article →

Amazon Connect now supports enhanced Instance-to-Instance communication

<p>Amazon Connect now routes calls between instances within the same account through the AWS global backbone, without relying on the Public Switched Telephony Network (PSTN) when both numbers are provisioned or ported into Amazon Connect.<br /> <br /> Customers calling between Amazon Connect instances - whether within the same region or across regions - now benefit from AWS's global network infrastructure. Customers will enjoy higher call quality, simplified billing, and enhanced contact sharing capabilities that preserve call context across transfers.<br /> <br /> This feature is available in all commercial regions where <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region">Amazon Connect</a> is offered except for Africa (Cape Town).<br /> <br /> To learn more about Amazon Connect, review the following resources:<br /> </p> <ul> <li><a href="https://aws.amazon.com/connect/">Amazon Connect website</a> and <a href="https://aws.amazon.com/connect/pricing/">pricing</a></li> <li><a href="https://docs.aws.amazon.com/connect/latest/adminguide/what-is-amazon-connect.html">Amazon Connect Administrator Guide</a></li> </ul>

Read article →

Amazon S3 adds new bucket-level setting to standardize encryption types used in your buckets

<p>Amazon S3 now supports a new default encryption configuration setting to enforce Amazon S3 managed server-side encryption (SSE-S3) or server-side encryption with AWS KMS keys (SSE-KMS) for all write requests to your buckets. This new bucket-level setting helps you standardize the server-side encryption types that can be used with your buckets. Using the PutBucketEncryption API, you can disable server-side encryption with customer-provided keys (SSE-C) on specific buckets or in your AWS CloudFormation templates.<br /> <br /> This enhancement to the PutBucketEncryption API is now available in all AWS Regions. You can use the AWS Management Console, SDK, API, or CLI to configure encryption controls for your buckets. To learn more, see the <a contenteditable="false" href="https://aws.amazon.com/blogs/storage/advanced-notice-amazon-s3-to-disable-the-use-of-sse-c-encryption-by-default-for-all-new-buckets-and-select-existing-buckets-in-april-2026/" style="cursor: pointer;">AWS Storage Blog post</a> or the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-s3-c-encryption-setting-faq.html">default SSE-C setting for new S3 buckets FAQ</a> in the S3 User Guide. For more information on the PutBucketEncryption API, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html" style="cursor: pointer;">S3 documentation</a>.</p>
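<p>For example, a boto3 sketch that standardizes a bucket on SSE-KMS default encryption; the bucket name and KMS key are placeholders, and the additional field introduced by this launch for blocking SSE-C writes is described in the S3 documentation linked above, so it is not shown here.</p>
<pre>
# Sketch: set SSE-KMS as the default encryption for a bucket, with an S3 Bucket Key.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-bucket",   # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
</pre>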

Read article →

Amazon FSx for Windows File Server now supports File Server Resource Manager

<p>Amazon FSx for Windows File Server, a fully-managed service that provides file storage built on Windows Server, now supports File Server Resource Manager (FSRM), a Windows Server feature that provides powerful capabilities to manage, govern, and monitor your file data. With FSRM, you can better control storage usage, strengthen compliance, and optimize costs across your FSx for Windows file systems.<br /> <br /> With this launch, you can now classify, identify, and control sensitive data using file classification and file screening, control storage usage and costs using folder-level quotas, and better understand and optimize your storage usage with storage reports. FSRM on FSx for Windows File Server is also deeply integrated with AWS observability services. You can publish FSRM events directly to <a href="https://aws.amazon.com/cloudwatch/">Amazon CloudWatch</a> Logs or stream events to <a href="https://aws.amazon.com/kinesis/">Amazon Kinesis</a> Data Firehose, allowing you to query, process, store, and archive logs, trigger <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> functions to take reactive actions based on file events, and perform advanced monitoring and analysis to automate administration of your file data.<br /> <br /> FSRM support is available today at no additional cost for new file systems in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/?refid=bc0b1108-3dde-4bd6-8c6b-5bc65141884a">AWS Regions where Amazon FSx for Windows File Server is available</a>. Existing file systems will receive FSRM support during an upcoming maintenance window. To get started, visit <a href="https://docs.aws.amazon.com/fsx/latest/WindowsGuide/managing-files-fsrm.html">File Server Resource Manager</a> in the FSx for Windows User Guide and read the blog <a href="https://aws-blogs-prod.amazon.com/storage/using-file-server-resource-manager-fsrm-for-amazon-fsx-for-windows-file-server/">Using File Server Resource Manager (FSRM) on Amazon FSx for Windows File Server</a>.</p>

Read article →

Amazon MSK Console now supports viewing Kafka topics with new public APIs

<p>Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports viewing topics directly through the Amazon MSK console, making it easier to inspect all your Kafka topics without setting up Kafka admin clients. You can browse and search topics within a cluster, quickly review replication settings and partition counts, and drill into individual topics to examine detailed configuration, partition-level information, and metrics. These console capabilities are powered by three new MSK APIs, ListTopics, DescribeTopic, and DescribeTopicPartitions, which you can also use directly for programmatic access. The ListTopics API returns the list of all topics in a cluster, while the DescribeTopic and DescribeTopicPartitions APIs provide detailed configuration and partition information for a topic. All three APIs are available through the AWS CLI and AWS SDKs.<br /> <br /> These MSK topic viewing capabilities are available for all Amazon MSK Provisioned clusters using Kafka version 3.6 and above across <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS regions</a> where Amazon MSK is offered. To start using these features, you'll need to set up the appropriate IAM permissions. To learn how to get started, see the <a contenteditable="false" href="https://docs.aws.amazon.com/msk/latest/developerguide/getting-started.html" style="cursor: pointer;">Amazon MSK Developer Guide</a>.</p>
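<p>A minimal boto3 sketch of the new APIs follows; the cluster ARN and topic name are placeholders, and the parameter and response key names are assumptions to check against the MSK API reference.</p>
<pre>
# Hypothetical sketch: list topics in an MSK Provisioned cluster and describe one.
# Parameter names (ClusterArn, TopicName) and the "Topics" response key are assumed.
import boto3

kafka = boto3.client("kafka")

cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/example/EXAMPLE-UUID"

topics = kafka.list_topics(ClusterArn=cluster_arn)
for topic in topics["Topics"]:
    print(topic)

detail = kafka.describe_topic(ClusterArn=cluster_arn, TopicName="orders")
print(detail)
</pre>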

Read article →

AWS IAM launches aws:SourceVpcArn condition key for region-based access control

<p>AWS Identity and Access Management (IAM) now supports a new global condition key, aws:SourceVpcArn, that enables customers to enforce region-based access controls for resources accessed through <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html">AWS PrivateLink</a>. This condition key returns the ARN of the VPC where the VPC endpoint is attached, allowing customers to verify whether requests travel through a specific VPC and implement controls on private access to their resources in same-region or cross-region scenarios.<br /> <br /> Customers can use aws:SourceVpcArn in policies to ensure resources are only accessible from VPC endpoints in specific regions, helping enforce data residency requirements. For example, you can attach a policy to an Amazon S3 bucket that restricts access to requests made through VPC endpoints in designated regions only.<br /> <br /> The aws:SourceVpcArn condition key is available in all commercial AWS Regions. For a complete list of supported AWS services and to learn more, please refer to the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-network-properties">IAM User Guide.</a></p>
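<p>As an illustrative sketch, the policy below denies access to a bucket unless requests arrive through a VPC endpoint attached to a VPC in an approved Region; the bucket name, account ID, and VPC ARN pattern are placeholders to adapt before use.</p>
<pre>
# Sketch: restrict an S3 bucket to PrivateLink access from VPCs in eu-central-1 only.
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegionVpcs",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            # Deny any request whose source VPC ARN is not in the approved Region.
            "Condition": {
                "ArnNotLike": {
                    "aws:SourceVpcArn": "arn:aws:ec2:eu-central-1:123456789012:vpc/*"
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
</pre>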

Read article →

Amazon Bedrock Custom Model Import now supports OpenAI GPT OSS models

<p><a contenteditable="false" href="https://aws.amazon.com/bedrock/custom-model-import/" style="cursor: pointer;">Amazon Bedrock Custom Model Import</a> now supports OpenAI GPT OSS models. You can import custom weights for gpt-oss-120b and gpt-oss-20b models. This enables you to bring your own customized GPT OSS models into Amazon Bedrock and deploy them in a fully managed, serverless environment, without having to manage infrastructure or model serving.<br /> <br /> GPT OSS models are text-to-text models designed for reasoning, agentic, and developer tasks. The larger gpt-oss-120b model is optimized for production, general-purpose, and high-reasoning use cases, while the smaller gpt-oss-20b model is best suited for lower-latency or specialized use cases such as data processing or domain-specific summarization.<br /> <br /> Amazon Bedrock Custom Model Import for GPT OSS models is generally available in the US East (N. Virginia) AWS Region. You can get started by importing your custom GPT OSS models in the custom models section of the <a contenteditable="false" href="https://console.aws.amazon.com/bedrock/" style="cursor: pointer;">Amazon Bedrock console</a>. To learn more about OpenAI models in Amazon Bedrock, visit the <a contenteditable="false" href="https://aws.amazon.com/bedrock/openai/" style="cursor: pointer;">product page</a>. To see which architectures are supported, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-import-model.html" style="cursor: pointer;">documentation</a>.</p>
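<p>A minimal boto3 sketch of importing custom GPT OSS weights from Amazon S3 follows; the job name, model name, role ARN, and S3 URI are placeholders, and the role is assumed to have access to the bucket holding your model artifacts.</p>
<pre>
# Sketch: start a Custom Model Import job for fine-tuned gpt-oss-20b weights.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_import_job(
    jobName="gpt-oss-20b-import-example",
    importedModelName="my-gpt-oss-20b",
    roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",  # placeholder
    modelDataSource={
        "s3DataSource": {"s3Uri": "s3://example-bucket/gpt-oss-20b-custom/"}
    },
)
</pre>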

Read article →

AWS enables developers to use console credentials for AWS CLI and SDK authentication

<p>Developers can now use their existing AWS Management Console sign-in credentials for programmatic access to AWS services. After a quick browser-based authentication flow, AWS automatically generates temporary credentials that work across local development tools like the AWS CLI, AWS Tools for PowerShell and AWS SDKs. To get started, simply install or upgrade to the latest version of the AWS CLI and run <i>aws login </i>in your terminal.<br /> <br /> This login for AWS local development feature makes it easier to start building with AWS services within minutes of account sign-up, eliminating the need to create and manage separate identities and access keys for programmatic access. The <i>aws login</i> CLI command generates short-lived credentials that are automatically rotated, reducing the risks associated with long-term access keys and enhancing your security posture.<br /> <br /> This feature is available in all commercial AWS regions.<br /> <br /> To get started, install or upgrade to AWS CLI version 2.32.0 and the latest versions of all AWS SDKs. For more information, please read our <a contenteditable="false" href="https://aws.amazon.com/blogs/security/simplified-developer-access-to-aws-with-aws-login/" style="cursor: pointer;">blog</a> or visit the <a contenteditable="false" href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sign-in.html" style="cursor: pointer;">AWS CLI User Guide.</a></p>

Read article →

AWS Network Load Balancer simplifies deployments with support for Weighted Target Groups

<p>Network Load Balancer now supports weighted target groups, allowing you to distribute traffic across multiple target groups with configurable weights for advanced deployment strategies.<br /> <br /> Weighted target groups enable key use cases like Blue-Green and Canary Deployments, Application Migration, and A/B Testing by allowing you to register multiple target groups with configurable weights ranging from 0 to 999, providing precise control over traffic distribution. Blue-Green and Canary Deployments allow you to gradually shift traffic between application versions, minimizing downtime during upgrades and patches; Application Migration enables seamless transitions from legacy stacks to new stacks without disrupting production traffic; and A/B Testing facilitates splitting incoming traffic across experimental environments. All target group types are supported, including instance, IP address, and Application Load Balancer (ALB) targets.<br /> <br /> Weighted target group routing is available for all existing and new Network Load Balancers across AWS commercial and AWS GovCloud (US) Regions at no additional charge. Standard Network Load Balancer Capacity Unit (LCU) pricing applies.<br /> <br /> To learn more, please refer to <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/network-load-balancers-now-support-weighted-target-groups/">this AWS blog post</a> and the <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/listener-update-rules.html">NLB User Guide</a>.</p>
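As a sketch of how a gradual, canary-style traffic shift might be configured with the AWS SDK for Python; the listener and target group ARNs are hypothetical placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical listener and target group ARNs -- replace with your own.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/net/my-nlb/abc/def"
BLUE_TG = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue/aaa"
GREEN_TG = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green/bbb"

# Shift 10% of traffic to the green target group while keeping 90% on blue.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[
        {
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": BLUE_TG, "Weight": 90},
                    {"TargetGroupArn": GREEN_TG, "Weight": 10},
                ]
            },
        }
    ],
)
```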

Read article →

AWS Directory Service now supports AWS PrivateLink for private VPC connectivity

<p><a href="https://aws.amazon.com/directoryservice/">AWS Directory Service</a> now supports <a href="https://aws.amazon.com/privatelink/">AWS PrivateLink</a>, enabling you to ensure all API calls to AWS Directory Service remain within the private networks that you specify. This new capability provides private connectivity to both the AWS Directory Service APIs and the Directory Service Data APIs, delivering faster network paths and reduced latency while eliminating public internet-based call patterns.<br /> <br /> With AWS PrivateLink support, access to the AWS Directory Service APIs can be constrained to the private network connectivity you specify, eliminating any requirement for an internet gateway or NAT device. This encompasses all essential operations such as creating directories, configuring trust relationships, managing user accounts, and adding users to groups. This capability is particularly valuable for organizations that must maintain strict isolation between their workloads and public network connectivity.<br /> <br /> To establish a private connection, you create an interface Amazon VPC endpoint powered by AWS PrivateLink, which creates requester-managed network interfaces in each enabled subnet to serve as entry points for Directory Service API traffic. This feature is available in all <a href="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/regions.html">AWS Regions</a> where AWS Directory Service is supported. To learn more, see the <a href="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/vpc-interface-endpoints.html">AWS Directory Service documentation</a>.</p>
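A minimal sketch of creating the interface endpoint with the AWS SDK for Python; the VPC, subnet, and security group IDs are hypothetical, and the endpoint service name shown is an assumption that should be confirmed with DescribeVpcEndpointServices or the documentation linked above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Create an interface endpoint for the Directory Service API in this VPC.
# "com.amazonaws.us-west-2.ds" is an assumed service name; verify the exact
# name for your Region before using it.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234def567890",             # hypothetical VPC
    ServiceName="com.amazonaws.us-west-2.ds",  # assumed service name
    SubnetIds=["subnet-0123456789abcdef0"],    # hypothetical subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```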

Read article →

Amazon S3 now supports post-quantum TLS key exchange on S3 endpoints

<p>Amazon S3 now supports post-quantum TLS key exchange on regional S3, S3 Tables, and S3 Express One Zone endpoints, providing customers with post-quantum cryptography options for encryption of their data in transit. All regional S3, S3 Tables, and S3 Express One Zone endpoints now support Module Lattice-Based Key Encapsulation Mechanisms (ML-KEM), one of the National Institute of Standards &amp; Technology (NIST) standardized post-quantum cryptographic algorithms. Combined with Amazon S3’s server-side encryption by default using AES-256, this gives customers quantum-resistant encryption both in transit and at rest.<br /> <br /> Clients configured to use the ML-KEM key exchange algorithm receive the benefits of post-quantum TLS automatically, because Amazon S3 negotiates the highest TLS protocol version that your client software supports.<br /> <br /> Post-quantum TLS key exchange for Amazon S3 is supported at no additional cost on all regional S3, S3 Tables, and S3 Express One Zone endpoints in all AWS Regions. To learn more about PQ-TLS support in Amazon S3, visit our <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryptionInTransit.PQ-TLS.html">documentation</a>.</p>

Read article →

Announcing enhanced cost management capabilities in Amazon Q Developer

<p>Amazon Q Developer now offers enhanced cost management capabilities, enabling customers to analyze costs across a wider range of Cloud Financial Management domains with more advanced analytical capabilities. Customers can now ask complex, open-ended questions about historical and forecasted costs and usage, optimization recommendations, commitment coverage and utilization, cost anomalies, budgets, free tier usage, product attributes, and cost estimation. Q can explore data, form hypotheses, and perform calculations to provide deeper insights with less time and expertise required.<br /> <br /> With these capabilities, FinOps practitioners, engineers, and Finance professionals can increase productivity by delegating more cost analysis and estimation tasks to Q. For example, customers can ask "Why did costs for this application increase last week?". Q will explore the data by retrieving costs and usage quantities by service, account, or resource, form hypotheses, gather data from multiple sources, and perform calculations ranging from simple period-over-period cost changes to unit economic metrics like effective cost per instance-hour. Q provides transparency on each API call it makes to retrieve data, including specific parameters used, and provides matching console links where customers can verify the data or dive deeper.<br /> <br /> To get started, open the Amazon Q chat panel from anywhere in the AWS Management Console and ask a question about your costs. To learn more, see <a href="https://docs.aws.amazon.com/cost-management/latest/userguide/ce-cost-analysis-q.html" style="cursor: pointer;" target="_blank">Managing your costs using generative AI with Amazon Q Developer</a> in the AWS Cost Management user guide.</p>

Read article →

Amazon EC2 M7i instances are now available in the Europe (Zurich) Region

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Europe (Zurich) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.<br /> <br /> M7i instances deliver up to 15% better price-performance compared to M6i instances. M7i instances are a great choice for workloads that need the largest instance sizes or continuous high CPU usage, such as gaming servers, CPU-based machine learning (ML), and video streaming. M7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for workloads.<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/ec2/instance-types/m7i/">Amazon EC2 M7i Instances</a>. To get started, see the <a href="https://console.aws.amazon.com/">AWS Management Console</a>.</p>

Read article →

Amazon Connect outbound campaigns supports ring time configuration for unanswered calls

<p>Amazon Connect outbound campaigns now offers campaign managers the ability to configure how long voice calls should ring, from 15 to 60 seconds, before a call is marked as “no answer” and the dialer moves to the next contact. Each contact also records when ringing began and ended for precise reporting and traceability.<br /> <br /> When ring duration is static, businesses struggle to balance calling efficiency and customer reach. Calls that ring too briefly may miss customers who take longer to answer, while excessive ring times delay overall campaign pacing. This lack of control leads to inconsistent contact rates and reduced agent productivity.<br /> <br /> With configurable ring time, campaign managers can tune dialing behavior to their audience for each campaign, use analytics to see exactly how long each call rang, and understand where connections were missed. This visibility helps identify patterns, refine calling strategies, and continuously improve campaign effectiveness.<br /> <br /> With Amazon Connect outbound campaigns, companies pay as they go for campaign processing and channel usage. This feature is available in AWS Regions including US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London).<br /> <br /> To learn more about configuring ring time for campaigns, visit our <a href="https://aws.amazon.com/connect/outbound/" target="_blank">webpage</a>.</p>

Read article →

AWS Site-to-Site VPN announces VPN Concentrator

<p>AWS Site-to-Site VPN launches VPN Concentrator, a new feature that simplifies multi-site connectivity for distributed enterprises. VPN Concentrator is suitable for customers who need to connect 25 or more remote sites to AWS, with each site needing low bandwidth (under 100 Mbps).<br /> <br /> Until now, customers who needed to connect a large number of low-bandwidth remote sites to AWS relied on solutions that were complex to use. These solutions create operational overhead because customers need to deploy and manage multiple virtual appliances in AWS; for example, customers are responsible for deploying appliances across multiple Availability Zones and configuring networking to ensure high availability. AWS Site-to-Site VPN is a fully managed service that allows you to create a secure connection between your data center or branch office and your AWS resources using IP Security (IPsec) tunnels. With this launch, customers can now connect up to 100 low-bandwidth sites using a single VPN Concentrator to access their workloads in AWS. VPN Concentrator allows multiple remote sites to connect through a single attachment to AWS Transit Gateway, simplifying multi-site connectivity. Aggregating a large number of low-bandwidth sites on a VPN Concentrator also provides efficient bandwidth utilization and, in turn, reduces VPN costs per site.<br /> <br /> This capability is available in all AWS commercial Regions and AWS GovCloud (US) Regions where AWS Site-to-Site VPN is available. To learn more and get started, visit the AWS Site-to-Site VPN <a href="https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-concentrator.html" target="_blank">documentation</a>.</p>

Read article →

Amazon ECS Managed Instances adds configurable scale-in delay

<p><a href="https://aws.amazon.com/ecs/managed-instances/" target="_blank">Amazon ECS Managed Instances </a>(ECS Managed Instances) now gives you greater control over infrastructure optimization with configurable scale-in delay. This enhancement allows you to fine-tune instance management based on your specific workload patterns and business requirements, helping you better balance cost optimization with operational needs.<br /> <br /> ECS Managed Instances is a fully managed compute option that automatically provisions right-sized Amazon EC2 instances based on your workload requirements. Over time, your compute resources may drift from workload requirements due to changing traffic patterns or dynamic scaling. ECS Managed Instances continuously monitors and proactively optimizes costs by terminating idle Amazon EC2 instances not running any tasks, and consolidating tasks from underutilized instances onto other, right-sized instances, provisioning new instances if required. ECS uses a heuristic based delay for scaling-in your instances to deliver a balance of high availability and cost optimization. However, your workloads or business may have unique requirements. For example, you might need to retain instances for a longer time period to accommodate incoming batch jobs and minimize instance churn. Starting today, you can set the scaleInAfter configuration parameter to up to 60 minutes to align with your specific infrastructure optimization needs. You can also set the scaleInAfter to -1 to disable infrastructure optimization workflows, which will allow your instances to run until they are patched after 14 days.<br /> <br /> You can use ECS API, console, SDK, CDK, CloudFormation to configure scaleInAfter parameter when creating or updating an ECS Managed Instances capacity provider. This feature is available in all commercial <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a>. To learn more, review <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ManagedInstances.html" target="_blank">documentation</a> and <a href="https://aws.amazon.com/blogs/containers/deep-dive-amazon-ecs-managed-instances-provisioning-and-optimization/" target="_blank">deep dive blog post</a>.</p>

Read article →

Amazon Route 53 DNS service now supports AWS PrivateLink

<p>Amazon Route 53 now supports <a href="https://aws.amazon.com/privatelink/">AWS PrivateLink</a> for API requests to the route53.amazonaws.com service endpoint, allowing your AWS workloads to make changes to critical DNS infrastructure, including hosted zones, records, and health checks, without using the public internet. With this release, you can set up private connectivity between your virtual private clouds (VPCs) and the Route 53 API over the AWS backbone, in any AWS Region.<br /> <br /> The Route 53 API is used by customers for domain name system (DNS) operations, which are a foundational layer of their cloud infrastructure automation, user-facing applications, and internal services. This integration simplifies cloud architecture by removing the need for customers to set up and manage complex networking services that connect resources in their VPCs privately to the Route 53 API. Now, customers can use a VPC endpoint within their VPC to establish connectivity to the Route 53 API. Customers outside the us-east-1 Region can use <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-cross-region-privatelink-support.html">cross-Region Interface VPC endpoints</a> to connect natively to Route 53 from other Regions, without sending traffic over the public internet or setting up inter-Region connectivity such as VPC peering.<br /> <br /> Route 53 support for PrivateLink is available globally, except in AWS GovCloud and Amazon Web Services in China. To learn more about this feature, or to get started, visit the <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html">AWS PrivateLink documentation</a>. To learn about pricing, visit the <a href="https://aws.amazon.com/privatelink/pricing/">PrivateLink pricing page</a>.</p>

Read article →

Amazon OpenSearch Serverless now supports backup and restore through the AWS Management Console

<p>Amazon OpenSearch Serverless now supports backup and restore through the AWS Management Console. OpenSearch Serverless automatically backs up all collections and indexes in your account every hour and retains backups for 14 days. You can restore backups using either the <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-snapshots.html#serverless-snapshots-working-with">API</a> or the AWS Console. This feature is enabled by default and requires no configuration. For more information, see <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-snapshots.html#serverless-snapshots-working-with">Working with snapshots</a> in the Amazon OpenSearch Serverless Developer Guide.<br /> <br /> Please refer to the <a href="https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-service-regions">AWS Regional Services List</a> for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless.html">documentation</a>.</p>

Read article →

Amazon CloudWatch real user monitoring (RUM) adds support for iOS and Android applications

<p>Amazon CloudWatch RUM now supports iOS and Android applications, expanding real user monitoring beyond web applications. Developers and SREs can now quickly isolate mobile application issues and improve end-user experience, with visibility into performance metrics such as screen load times, crash rates, and API latencies.<br /> <br /> CloudWatch RUM for mobile uses the OpenTelemetry (OTEL) standard to send spans and events. The service captures mobile spans such as application startup time, screen load time, and backend network calls. It also captures events including crashes and ANRs/AppHangs to provide rich troubleshooting insights on the CloudWatch console.<br /> <br /> You can perform impact analysis for specific errors or crashes, drill down to correlated telemetry, and filter by location, device type, operating system, and app versions to quickly identify root causes. Mobile telemetry integrates with application metrics, traces, logs, web RUM monitoring, and synthetic monitoring in CloudWatch Application Signals to speed up troubleshooting and reduce application disruption.<br /> <br /> CloudWatch RUM support for iOS and Android is available in all AWS Commercial Regions where web monitoring is available. To learn more, see <a href="https://aws.amazon.com/cloudwatch/pricing/">pricing</a>, getting started for <a href="https://github.com/aws-observability/aws-otel-android?tab=readme-ov-file#aws-distro-for-opentelemetry---android">Android</a> and <a href="https://github.com/aws-observability/aws-otel-swift?tab=readme-ov-file#aws-distro-for-opentelemetry-for-swift">iOS</a>, and the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-RUM-web-mobile.html#CloudWatch-RUM-mobile-monitoring">documentation</a>.</p>

Read article →

Amazon DynamoDB now supports multi-attribute composite keys in global secondary indexes

<p>Amazon DynamoDB now supports primary keys composed of up to eight attributes in global secondary indexes (GSIs). While partition and sort keys were previously limited to one attribute each, DynamoDB now supports up to four attributes each for the partition and sort keys. With multi-attribute keys, you no longer need to manually concatenate values into synthetic keys, an approach that sometimes requires backfilling data before adding new indexes. Instead, you can create primary keys using up to eight existing attributes, making it easier to model diverse access patterns and adapt to new query requirements.<br /> <br /> Multi-attribute partition keys improve data distribution and uniqueness. Multi-attribute sort keys enable flexible querying by letting you specify conditions on sort key attributes from left to right. For example, an index with partition key UserId and sort key attributes Country, State, and City lets you query all locations for a user, then narrow results by Country, State, or City.<br /> <br /> Multi-attribute partition and sort keys are available at no additional charge in all AWS Regions where DynamoDB is available. You can create them using the AWS Management Console, AWS CLI, AWS SDKs, or the DynamoDB API. To learn more, see <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html" target="_blank">Global Secondary Indexes</a> in the Amazon DynamoDB Developer Guide.</p>
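A rough sketch of defining the example index above with the AWS SDK for Python; the table name is hypothetical, and the request shape (repeating HASH and RANGE entries in the key schema) is an assumption based on the description here, so confirm the exact syntax in the Global Secondary Indexes documentation before relying on it.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table; the multi-attribute key schema shape below is assumed.
dynamodb.update_table(
    TableName="UserLocations",
    AttributeDefinitions=[
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "Country", "AttributeType": "S"},
        {"AttributeName": "State", "AttributeType": "S"},
        {"AttributeName": "City", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "UserId-Location-index",
                "KeySchema": [
                    {"AttributeName": "UserId", "KeyType": "HASH"},
                    {"AttributeName": "Country", "KeyType": "RANGE"},
                    {"AttributeName": "State", "KeyType": "RANGE"},
                    {"AttributeName": "City", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        }
    ],
)
```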

Read article →

AWS PrivateLink now supports cross-region connectivity for AWS Services

<p>AWS PrivateLink now supports native cross-Region connectivity to AWS services. Until now, Interface VPC endpoints only supported connectivity to AWS services in the same Region. This launch enables customers to connect to select AWS services hosted in other Regions of the same AWS <a href="https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/partitions.html#:~:text=AWS%20groups%20Regions%20into%20partitions,resources%20in%20a%20different%20partition." target="_blank">partition</a> over Interface endpoints.<br /> <br /> As a service consumer, you can access Amazon S3, Route 53, Amazon Elastic Container Registry (ECR), and other services privately, without the need to set up cross-Region peering or expose your data over the public internet. These services can be accessed through Interface endpoints at a private IP address in your VPC, enabling simpler and more secure inter-Region connectivity. This feature helps you build globally distributed private networks that comply with data residency requirements, while accessing supported AWS services through PrivateLink.<br /> <br /> To learn about pricing for this feature, please see the <a href="https://aws.amazon.com/privatelink/pricing/" target="_blank">AWS PrivateLink pricing page</a>. For a complete list of supported AWS services and Regions, please refer to our <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-cross-region-privatelink-support.html" target="_blank">documentation</a> and <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/aws-privatelink-extends-cross-region-connectivity-to-aws-services/" target="_blank">launch blog</a>. To learn more, visit <a href="https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html" target="_blank"><u>AWS PrivateLink</u></a> in the Amazon VPC Developer Guide.</p>
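A minimal sketch of requesting a cross-Region interface endpoint with the AWS SDK for Python, assuming the ServiceRegion parameter on CreateVpcEndpoint; the VPC, subnet, and service name values are hypothetical placeholders.

```python
import boto3

# Create an endpoint in eu-west-1 that connects to a service hosted in us-east-1.
ec2 = boto3.client("ec2", region_name="eu-west-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234def567890",                   # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.ecr.api",   # example remote service
    ServiceRegion="us-east-1",                       # Region hosting the service
    SubnetIds=["subnet-0123456789abcdef0"],          # hypothetical subnet
)
```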

Read article →

Amazon Inspector supports organization-wide management through AWS Organizations policies

<p>Amazon Inspector can now be enabled, configured, and managed across your organization using AWS Organizations policies. With this new capability, you can centrally configure and manage scan types (such as Amazon EC2 scanning, ECR scanning, Lambda standard and code scanning, and Code Security) across all the accounts in your organization, selected organizational units (OUs), or individual accounts. The new Inspector policy type within AWS Organizations simplifies service onboarding and management, and ensures consistent, organization-wide vulnerability scanning coverage.<br /> <br /> This feature helps you maintain a uniform security baseline by automating Inspector enablement through a single AWS Organizations policy. To get started, designate a delegated administrator within Amazon Inspector, enable the “Inspector policies” policy type in the AWS Organizations console, and create a policy that specifies the desired scan types and Regions. Once the policy is attached to your organization root or OUs, Inspector is automatically enabled for the specified scan types across covered accounts, and all in-scope accounts are automatically aligned with your organization-wide policy definition. New accounts that join the organization or are moved into an OU with an attached policy inherit Inspector enablement automatically, reducing operational overhead and eliminating coverage gaps.<br /> <br /> <a href="https://aws.amazon.com/inspector/" target="_blank">Amazon Inspector</a> is a vulnerability management service that continuously scans AWS workloads including Amazon EC2 instances, container images, AWS Lambda functions, and code repositories for software vulnerabilities, code vulnerabilities, and unintended network exposure across your entire AWS organization. The AWS Organizations Inspector policy for organization-wide enablement is available at no additional cost to Amazon Inspector customers in all AWS commercial, China, and AWS GovCloud (US) Regions where <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">Amazon Inspector is available</a>.<br /> <br /> To learn more about Amazon Inspector policies within AWS Organizations, visit:<br /> </p> <ul> <li><a href="https://docs.aws.amazon.com/inspector/latest/user/getting_started_tutorial.html" target="_blank">Getting started with Amazon Inspector</a></li> <li><a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html" target="_blank">Managing organization policies with AWS Organizations</a></li> </ul>

Read article →

Amazon Bedrock is now available in additional Regions

<p>Beginning today, customers can use Amazon Bedrock in the Africa (Cape Town), Canada West (Calgary), Mexico (Central), and Middle East (Bahrain) Regions to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools.<br /> <br /> Amazon Bedrock is a comprehensive and secure service for building generative AI applications and agents. Amazon Bedrock connects you to leading foundation models and services to deploy and operate agents, enabling you to quickly move from experimentation to real-world deployment.<br /> <br /> To get started, visit the <a href="https://aws.amazon.com/bedrock/">Amazon Bedrock page</a> and see the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html">Amazon Bedrock documentation</a> for more details.</p>

Read article →

Amazon Connect now provides conversational analytics for voice and chat bots

<p>Amazon Connect now provides conversational analytics for end-customer self-service interactions across voice and digital channels, helping you better understand and improve your customers' self-service experiences. This includes interactions across PSTN/telephony, in-app and web calling, web and mobile chat, SMS, WhatsApp Business messaging, and Apple Messages for Business.<br /> <br /> With this launch, Connect now provides rich conversational analytics across both human-agent interactions and end-customer self-service interactions. You can now automatically analyze the quality of automated self-service interactions, including customer sentiment, redact sensitive data, discover top contact drivers and themes, identify compliance risks, and proactively identify areas for improvement through easy-to-customize dashboards. Connect’s conversational analytics also enables you to use semantic matching rules to categorize interactions based on customer behavior, keywords, sentiment, or issue types, such as billing inquiries or agent escalation requests.<br /> <br /> Amazon Connect is an AI-powered application that provides one seamless experience for your contact center customers, agents, and supervisors. To learn more about Amazon Connect and its conversational analytics capabilities, refer to the following resources:<br /> </p> <ul> <li><a href="https://aws.amazon.com/connect">Amazon Connect website</a> and <a href="https://aws.amazon.com/connect/pricing/">pricing</a></li> <li><a href="https://docs.aws.amazon.com/connect/latest/adminguide/analyze-conversations.html">Conversational analytics</a> in the Administrator Guide</li> <li><a href="https://docs.aws.amazon.com/connect/latest/adminguide/supported-languages.html#supported-languages-contact-lens">Supported languages</a> and <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#contactlens_region">Regions</a></li> </ul>

Read article →

AWS CloudTrail adds data event aggregation to simplify security monitoring

<p>AWS announces CloudTrail aggregated events, a new feature that simplifies how enterprises monitor and analyze their CloudTrail data events at scale. Aggregations are available for CloudTrail data events, which can generate thousands of events per minute as users access resources like Amazon S3 buckets or AWS Lambda functions. With this feature, security, compliance, and operations teams can efficiently monitor high-volume data access patterns without processing massive numbers of individual events.<br /> <br /> Aggregation for data events streamlines security monitoring by consolidating high-volume AWS API activity into 5-minute summaries. These summaries highlight key trends like access frequency, error rates, and most-used actions, allowing teams to quickly identify patterns while maintaining access to detailed events when needed. Security teams can easily answer questions like "How has this user's activity changed over the past week?" or "What are the top actions being performed on this critical resource?" without having to scan through voluminous CloudTrail data events.<br /> <br /> You can enable aggregation in your trails capturing data events through the AWS console or CLI, and choose from pre-built aggregation templates for API activity, resource access, and user activity summaries. For more information, see the CloudTrail trail documentation. You are charged for aggregations based on the number of CloudTrail data events that are analyzed to create the aggregation. For more information, visit the <a href="https://aws.amazon.com/cloudtrail/pricing/">CloudTrail pricing page</a>.<br /> <br /> CloudTrail aggregations for data events are available in all commercial AWS Regions.</p>

Read article →

AWS Marketplace adds A2A server support for Amazon Bedrock AgentCore Runtime

<p>AWS Marketplace now offers Agent-to-Agent (A2A) server support and streamlined deployment for third-party AI agents and tools built for <a href="https://aws.amazon.com/bedrock/agentcore/">Amazon Bedrock AgentCore</a> Runtime. The new capabilities accelerate deployment by pre-populating required environment variables in the AgentCore console and AWS CLI instructions in AWS Marketplace. Customers can now also procure and deploy A2A servers on AgentCore Runtime through AWS Marketplace, making it easier for them to leverage AI agents from AWS Partners. The improvements reduce deployment complexity by leveraging vendor-defined launch configurations while adding protocol flexibility to meet diverse customer needs.<br /> <br /> AWS Partners can now offer A2A servers in addition to MCP servers and AI agents using AgentCore Runtime containers in the AWS Marketplace Management Portal. To accelerate customer onboarding, AWS Partners can define required environment variables for AgentCore Runtime supported products so that customers can quickly get started. AWS Partners can also enable free pricing for API-based SaaS products. These capabilities provide AWS Partners with the flexibility to bring new products to market and implement pricing strategies that align with their business models and customers’ needs.<br /> <br /> Customers can learn more in the <a href="https://docs.aws.amazon.com/marketplace/latest/buyerguide/buyer-ai-agents-products.html">buyer guide</a> and start exploring AI agent solutions in AWS Marketplace on the <a href="https://aws.amazon.com/marketplace/solutions/ai-agents-and-tools">solutions page</a>. For AWS Partners interested in implementing the capabilities, visit the <a href="https://docs.aws.amazon.com/marketplace/latest/userguide/ai-agents-tools.html">seller guide</a> and complete the AWS Marketplace listing <a href="https://catalog.workshops.aws/mpseller/en-US/use-cases/publish-agentcore-free">workshop</a>.</p>

Read article →

AWS Announces Elemental MediaConnect Router

<p>Today, AWS announces the general availability of AWS Elemental MediaConnect Router, a new capability that enables broadcasters and content providers to dynamically route live video between sources and destinations in the AWS network. This new capability transforms how you build and manage complex live video workflows in the cloud, eliminating the need to reconfigure infrastructure as routing needs change. The router enables complex scenarios like switching between primary and backup feeds, routing regional variants independently, and managing multiple feeds for comprehensive coverage.<br /> <br /> MediaConnect Router optimizes content delivery across the AWS network, reducing transport latency while improving packet delivery reliability compared to standard transport technologies. This fully managed capability supports routing between inputs and outputs in any supported Region as well as between private and public endpoints, and it eliminates operational overhead and unused capacity costs.<br /> <br /> You can start using MediaConnect Router through the MediaConnect console, the MediaConnect API, or the AWS CDK. It works independently or alongside existing MediaConnect flows. It can also be part of a larger video workflow with <a href="https://aws.amazon.com/media-services/elemental/">AWS Elemental</a>, a family of media services that help customers process, monetize, and deliver the highest quality video at global scale.<br /> <br /> MediaConnect Router is available in all standard <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>.<br /> <br /> To learn more, visit the <a href="https://aws.amazon.com/mediaconnect/">AWS Elemental MediaConnect page</a>.</p>

Read article →

AWS Organizations introduces direct account transfers between organizations

<p>AWS Organizations now provides customers the ability to directly transfer an account to a different organization without first having to remove the account from their current organization. This new capability streamlines the process of transferring accounts between organizations, whether those transfers are part of ongoing operations or an acquisition integration project.</p> <p>Allowing direct transfers of accounts between organizations eliminates the previous requirement for the account to temporarily operate as a standalone account. With the standalone step removed, customers no longer need to manually configure the account's payment method, contact information, and support plan as part of the transfer. Direct transfers also ensure the account maintains access to the governance features and consolidated billing benefits of the AWS organization it belongs to before and after the transfer process. The updated process is simpler and uses the same AWS Organizations console experience and APIs as before: an organization invites an account, and the account accepts the invite.</p> <p>Direct account transfers between organizations are now available in all commercial AWS Regions and the AWS GovCloud (US) Regions.</p> <p>To learn more about directly transferring accounts between AWS organizations, see <a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html">Managing account invitations with AWS Organizations</a> in the AWS Organizations User Guide, or review the <a href="https://docs.aws.amazon.com/organizations/latest/APIReference/">AWS Organizations API Reference</a>.</p>
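The invite-and-accept flow might look like the following sketch with the AWS SDK for Python; the account ID is hypothetical, and in practice the two calls run under different credentials (the destination organization's management account and the account being transferred, respectively).

```python
import boto3

org = boto3.client("organizations")

# From the destination organization's management account: invite the account.
handshake = org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"},  # hypothetical account ID
    Notes="Transferring this account as part of the acquisition integration.",
)["Handshake"]

# From the invited account (separate credentials): accept the invitation to
# complete the transfer into the new organization.
org.accept_handshake(HandshakeId=handshake["Id"])
```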

Read article →

Amazon GuardDuty Malware Protection for AWS Backup is now available

<p>Amazon GuardDuty Malware Protection for AWS Backup is now available, extending malware detection to your Amazon EC2, Amazon EBS, and Amazon S3 backups. This capability automates malware detection in your backups without requiring additional security software or agents. You can identify your last known clean backup to minimize business disruption during recovery.</p> <p>Malware protection scans new backups automatically, runs on-demand scans of existing backups, and verifies backups are clean before restoration. You can enable this capability even if <a href="https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_data-sources.html">GuardDuty foundational data sources</a> aren't enabled in your account. You can also use incremental scanning, which analyzes only changed data between backups, reducing costs compared to rescanning full backups.</p> <p>Amazon GuardDuty Malware Protection for AWS Backup is available in the list of <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html">supported Regions</a>. You can get started using the <a href="https://console.aws.amazon.com/backup/home">AWS Backup console</a>, API, or CLI. To learn more, read the <a href="https://aws.amazon.com/blogs/storage/scan-backups-for-malware-with-amazon-guardduty-malware-protection-for-aws-backup">launch blog</a> or visit the <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html">AWS Backup documentation</a> and <a href="https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html">Amazon GuardDuty Malware Protection documentation</a>.</p>

Read article →

New AWS Well-Architected Lenses for AI and ML workloads

<p>Today, AWS announces the release of one new Well-Architected Lens, Responsible AI, and updates to two existing lenses, Machine Learning and Generative AI. These lenses are designed to help organizations implement AI workloads that prioritize responsible AI practices, technical excellence, and specialized business use cases. They provide comprehensive guidance for organizations at any stage of their AI journey, addressing the growing need for structured approaches to building responsible, secure, reliable, and efficient AI workloads. The lenses are particularly valuable for business leaders, data scientists, ML engineers, and risk and compliance professionals working with AI technologies.<br /> <br /> The three AI lenses, Responsible AI, Generative AI, and Machine Learning, work together to provide comprehensive guidance for AI development. The Responsible AI lens guides safe, fair, and secure AI development. It helps balance business needs with technical requirements, streamlining the transition from experimentation to production. The Generative AI lens helps customers evaluate large language model (LLM) based architectures; its updates include guidance for Amazon SageMaker HyperPod users, new insights on agentic AI, and updated architectural scenarios. The Machine Learning lens guides organizations in evaluating workloads across both modern AI and traditional machine learning approaches. Recent updates focus on key areas including enhanced data and AI collaborative workflows, AI-assisted development capabilities, large-scale infrastructure provisioning, and customizable model deployment. These improvements are powered by key AWS services including Amazon SageMaker Unified Studio, Amazon Q, Amazon SageMaker HyperPod, and Amazon Bedrock.<br /> <br /> Read the <a href="https://aws.amazon.com/blogs/architecture/architecting-for-ai-excellence-aws-launches-three-well-architected-lenses-at-reinvent-2025/" target="_blank">launch blog</a> to learn more about these launches, comprehensive architectural guidance throughout your AI journey, and implementation strategies.</p>

Read article →

AWS Cost Anomaly Detection expands AWS managed monitoring

<p>AWS Cost Anomaly Detection now enables you to monitor all linked accounts, cost allocation tags, or cost categories with a single managed monitor. Previously available only for AWS services, this capability helps you identify unusual spending patterns across your entire AWS organization without manual configuration.<br /> <br /> As organizations scale, you need visibility into costs for individual accounts, teams, or business units to maintain accountability and quickly identify anomalies. For example, if you track 500 application teams using a 'team' cost allocation tag, you previously needed to create and maintain 500 individual monitors. Now, you can create a single managed monitor that automatically tracks each team's spending separately. When your organization evolves, such as 'team-mobile' splitting into 'team-ios' and 'team-android', both new teams are automatically monitored individually without any configuration changes, ensuring continuous anomaly detection as your organization grows.<br /> <br /> The extension of AWS managed monitors to linked accounts, cost allocation tags, and cost categories is available today in all commercial AWS Regions at no additional charge.<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/">AWS Cost Anomaly Detection</a>, or read our <a href="https://aws.amazon.com/blogs/aws-cloud-financial-management/extending-aws-managed-monitors-in-cost-anomaly-detection/">blog post</a>. To get started, see the <a href="https://docs.aws.amazon.com/cost-management/latest/userguide/getting-started-ad.html">user guide</a>.</p>

Read article →

AWS Lambda announces new tenant isolation mode to simplify building tenant-aware applications

<p>Today, AWS Lambda announced a new tenant isolation mode, enabling customers to isolate request processing for individual tenants or end-users invoking a Lambda function. This launch simplifies building multi-tenant applications on Lambda, such as SaaS platforms for workflow automation or code execution.<br /> <br /> Customers building multi-tenant applications have strict isolation requirements when running code or processing data for individual tenants or end-users. Previously, customers met these requirements by implementing custom solutions, such as creating dedicated Lambda functions per tenant and routing requests from individual tenants to their associated functions. Today’s launch enables you to isolate request processing for each tenant invoking a Lambda function, helping you meet strict tenant isolation requirements without the need to build and operate custom solutions. This launch extends Lambda’s isolation boundary from a single function to each tenant invoking that function.<br /> <br /> To use the new tenant isolation mode, customers specify a unique tenant identifier when invoking their Lambda function. Lambda uses this identifier to route invocation requests to a function’s underlying execution environments and ensures that execution environments associated with a particular tenant are never used to serve requests from other tenants invoking the function.<br /> <br /> The new tenant isolation mode for AWS Lambda is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">AWS Regions</a>, except Asia Pacific (New Zealand), AWS GovCloud (US), and China. To learn more, visit <a href="https://docs.aws.amazon.com/lambda/latest/dg/tenant-isolation.html">Lambda documentation</a> and the <a href="https://aws.amazon.com/blogs/aws/streamlined-multi-tenant-application-development-with-tenant-isolation-mode-in-aws-lambda">launch blog post</a>. For tenant isolation mode pricing information, visit <a href="https://aws.amazon.com/lambda/pricing/">AWS Lambda Pricing</a>.</p>
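A rough sketch of a per-tenant invocation with the AWS SDK for Python; the function name is hypothetical and the TenantId parameter name is an assumption for illustration, so check the Lambda documentation linked above for the exact invoke parameter and how to enable tenant isolation mode on a function.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Invoke a function with a per-tenant identifier so Lambda routes the request
# only to execution environments dedicated to that tenant. "TenantId" is an
# assumed parameter name used here for illustration.
response = lambda_client.invoke(
    FunctionName="saas-workflow-runner",            # hypothetical function
    TenantId="tenant-4711",                         # assumed parameter name
    Payload=json.dumps({"action": "run-workflow"}),
)
print(response["StatusCode"])
```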

Read article →

Amazon SageMaker Catalog introduces column-level metadata forms and rich descriptions

<p>Amazon SageMaker Catalog now supports custom metadata forms and rich text descriptions at the column level, extending existing curation capabilities for business names, descriptions, and glossary term classifications. Data stewards can create custom metadata forms to capture business-specific information directly on individual columns. Columns also support markdown-enabled rich text descriptions for comprehensive data documentation and business context. Custom metadata form field values and rich text content are indexed in real-time and become immediately discoverable through search.<br /> <br /> This enhancement enables organizations to curate columns with comprehensive business context using customer-defined metadata schemas and formatted documentation. Asset owners can define custom key-value metadata forms and rich text descriptions to provide detailed column documentation that improves data discovery across enterprise teams. Data analysts can search using custom form field values and rich text content alongside existing column names, descriptions, and glossary terms.<br /> <br /> This capability is available in all AWS Regions where Amazon SageMaker is supported.<br /> <br /> To learn more about Amazon SageMaker Catalog, visit the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/update-metadata.html" target="_blank">Amazon SageMaker documentation</a>.</p>

Read article →

Amazon Bedrock Guardrails adds support for coding use cases

<p>AWS announced expanded capabilities in Amazon Bedrock Guardrails for code-related use cases, enabling customers to protect against harmful content in code while building generative AI applications. This new capability allows customers to leverage existing safeguards offered by Bedrock Guardrails, including content filters, denied topics, and sensitive information filters, to detect intent to inject malicious code, detect and prevent prompt leakages, and help protect against introducing personally identifiable information (PII) within code.<br /> <br /> With expanded support for code-related use cases, Amazon Bedrock Guardrails now provides customers with safeguards against harmful content introduced within code elements, including comments, variable and function names, and string literals. Content filters (with standard tier) in Bedrock Guardrails now detect and filter such harmful content in code in the same way as text and image content. Additionally, Bedrock Guardrails offers enhanced protection with prompt leakage detection (with standard tier), helping detect and prevent unintended disclosure of information from system prompts in model responses that could compromise intellectual property. Furthermore, denied topics (with standard tier) and sensitive information filters now help safeguard against vulnerabilities that use code within topics and help prevent inclusion of PII within code structures.<br /> <br /> The expanded capabilities for code-related use cases are available in all AWS Regions where Amazon Bedrock Guardrails is supported. Customers can access the service through the Amazon Bedrock console, as well as the supported APIs.<br /> <br /> To learn more, read the launch blog, <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-model-tiers.html" target="_blank">technical documentation</a>, and the <a href="https://aws.amazon.com/bedrock/guardrails" target="_blank">Bedrock Guardrails product page</a>.</p>
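A minimal sketch of screening a code snippet with an existing guardrail through the ApplyGuardrail API, using the AWS SDK for Python; the guardrail identifier and version are hypothetical, and the guardrail is assumed to be configured with the standard-tier filters described above.

```python
import boto3

runtime = boto3.client("bedrock-runtime")

# A code snippet that embeds PII in a string literal, as an example of content
# the expanded filters are designed to catch.
code_snippet = 'user_email = "jane.doe@example.com"  # hard-coded PII in a literal'

result = runtime.apply_guardrail(
    guardrailIdentifier="gr-0123456789abcdef",  # hypothetical guardrail ID
    guardrailVersion="1",
    source="INPUT",
    content=[{"text": {"text": code_snippet}}],
)
print(result["action"])  # e.g. "GUARDRAIL_INTERVENED" when a filter matches
```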

Read article →

AWS NAT Gateway now supports regional availability

<p>Amazon Web Services (AWS) announces regional availability mode for NAT Gateways. With this launch, you can create a single NAT Gateway that automatically expands and contracts across availability zones (AZs) in your Virtual Private Cloud (VPC) based on your workload presence, to maintain high availability while offering simplified setup and management.<br /> <br /> A NAT Gateway enables instances in a private subnet to connect to services outside your VPC using the NAT Gateway's IP address. With this launch, you can create a NAT Gateway and set its availability to regional. You do not need a public subnet to host a regional NAT Gateway. You also do not have to create and delete NAT Gateways, and edit your route tables every time your workloads expand to new availability zones. You simply create a NAT Gateway with regional mode, choose your VPC, and it automatically expands and contracts across all availability zones based on your workload's presence, maintaining high availability. You can use this feature with Amazon provided IP addresses or bring your own IP addresses.<br /> <br /> This capability is available in all commercial AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more about VPC NAT Gateway and this feature, please visit our <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html" target="_blank">documentation.</a></p>

Read article →

AWS Data Exports for FOCUS 1.2 is now generally available

<p>Today, AWS announces the general availability of AWS Data Exports for FOCUS 1.2. FOCUS 1.2 is an open cloud cost and usage specification that provides standardization to simplify Cloud Financial Management across multiple sources. AWS Data Exports for FOCUS 1.2 enables customers to export their AWS cost and usage data with the FOCUS 1.2 schema to Amazon S3.<br /> <br /> With AWS Data Exports for FOCUS 1.2, customers can streamline their financial close processes with invoice reconciliation capabilities, track capacity reservation status to identify unused reservations, and leverage virtual currency support for multi-cloud and SaaS cost management scenarios. The specification maintains the standardized four-cost-column structure (ListCost, ContractedCost, BilledCost, and EffectiveCost) from FOCUS 1.0 while extending support for additional enterprise use cases. This helps organizations standardize cost reporting across cloud providers and solution providers, and improve financial operations efficiency.<br /> <br /> AWS Data Exports for FOCUS 1.2 is available in the US East (N. Virginia) Region and includes cost and usage data covering all AWS Regions, except AWS GovCloud (US) Regions and AWS China (Beijing and Ningxia) Regions.<br /> <br /> Learn more about AWS Data Exports for FOCUS 1.2 in the <a href="https://docs.aws.amazon.com/cur/latest/userguide/dataexports-table-dictionary.html">User Guide</a>, <a href="https://aws.amazon.com/aws-cost-management/aws-data-exports/">product details page</a>, and the <a href="https://aws.amazon.com/blogs/aws-cloud-financial-management/data-exports-for-focus-1-2-is-now-generally-available/">announcement blog</a>. Get started by visiting the AWS Data Exports page in the AWS Billing and Cost Management console and creating an export named "FOCUS 1.2 with AWS columns".</p>

Read article →

Accelerate infrastructure development with AWS CloudFormation intelligent authoring in IDEs

<p>Today, <a href="https://docs.aws.amazon.com/cloudformation/" target="_blank">AWS CloudFormation</a> announces the launch of the AWS CloudFormation Language Server, a new capability that brings intelligent authoring, early validation, troubleshooting, and drift management directly into integrated development environments (IDEs) through the <a href="https://aws.amazon.com/visualstudio/" target="_blank">AWS Toolkit</a>. This new feature empowers developers to build infrastructure faster and deploy safely.<br /> <br /> With this launch, developers using Visual Studio, Kiro, and other compatible IDEs can benefit from context-aware authoring powered by the Language Server. It offers built-in auto-complete, schema validation, policy checks using CloudFormation Guard, and deployment validation directly within the IDE. For example, it immediately flags invalid resource properties or missing IAM permission requirements, helping developers catch syntax errors, missing permissions, and configuration mismatches before deployment. A drift-aware deployment view highlights differences between the current template and the deployed stack configuration, helping you spot changes made outside of CloudFormation. By integrating validation and real-time feedback directly into the authoring experience, the CloudFormation Language Server keeps developers in their flow, turning infrastructure coding into a seamless experience and improving infrastructure safety. This unified experience enables developers to move from design to deployment faster while maintaining compliance and best practices, spending more time building and less time troubleshooting.<br /> <br /> The AWS CloudFormation Language Server is available in all AWS Commercial Regions where AWS CloudFormation is supported. To get started, install or <a href="https://marketplace.visualstudio.com/items?itemName=AmazonWebServices.aws-toolkit-vscode" target="_blank">upgrade</a> the AWS Toolkit. To learn more, refer to <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/ide-extension.html">AWS CloudFormation Language Server</a>.</p>

Read article →

Amazon VPC IPAM now supports policies to enforce IP allocation strategy

<p>Amazon Virtual Private Cloud (VPC) IP Address Manager (IPAM) now supports policies to centrally configure and enforce your desired IP allocation strategy. This ensures resources launch with public IPv4 addresses from specific IPAM pools, improving operational posture and simplifying network and security management.<br /> <br /> Using IPAM policies, the IP administrator can centrally define public IP allocation rules for AWS resources, such as Network Address Translation (NAT) Gateways used in regional availability mode and Elastic IP addresses. The centrally configured IP allocation policy cannot be superseded by individual application teams, ensuring compliance at all times. Before this feature, IP administrators had to educate application owners across their organization and rely on them to always comply with IP allocation best practices. IPAM policies substantially improve this operational model: you can now add IP-based filters to networking and security constructs like access control lists, route tables, security groups, and firewalls, with confidence that public IPv4 address assignments to AWS resources always come from specific IPAM pools.<br /> <br /> The feature is available in all AWS commercial Regions and the AWS GovCloud (US) Regions, in both the Free Tier and Advanced Tier of VPC IPAM. When used with the Advanced Tier of VPC IPAM, customers can set policies across AWS accounts and AWS Regions. To get started, please see the <a href="https://docs.aws.amazon.com/vpc/latest/ipam/define-public-ipv4-allocation-strategy-with-ipam-policies.html">IPAM policies documentation page</a>.<br /> <br /> To learn more about IPAM, view the <a href="https://docs.aws.amazon.com/vpc/latest/ipam/what-it-is-ipam.html">IPAM documentation</a>. For details on pricing, refer to the IPAM tab on the <a href="https://www.amazonaws.cn/en/vpc/pricing/">Amazon VPC Pricing Page</a>.</p>

Read article →

AWS Cost Explorer now provides 18-month forecasting and explainable AI-powered forecasts

<p>Today, AWS announces three key improvements to AWS Cost Explorer. Two enhancements are generally available: an 18-month forecasting horizon, extended from the previous 12-month limit, and improved machine learning models that analyze up to 36 months of historical data (previously 6 months) to identify seasonal patterns and long-term growth trends. A third feature, now in public preview, offers AI-powered explanations that provide transparency into forecast methodology. AWS Cost Explorer helps customers analyze and manage their cloud spending through detailed cost and usage reports with forecasting capabilities. These enhancements provide the extended visibility needed for annual budget planning cycles.<br /> <br /> These capabilities enable finance teams to account for seasonal patterns, holiday peaks, and business cycles with enhanced accuracy and present forecasts with greater stakeholder confidence. The AI explanations help teams understand and communicate the key drivers behind their cost projections, making it easier to identify optimization opportunities and build executive buy-in for cloud investments.<br /> <br /> You can access the 18-month forecasting horizon directly through the AWS Cost Explorer console or via the GetCostForecast API. AI-powered explanations are currently available in the console only during public preview. To learn more about this enhanced feature, see the <a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/">Cost Explorer details page</a>, user guides, and the <a href="https://aws.amazon.com/blogs/aws-cloud-financial-management/introducing-18-month-forecasting-and-explainable-ai-insights-in-aws-cost-explorer/">announcement blog</a>.</p>
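A minimal sketch of requesting a forecast near the new 18-month limit through the GetCostForecast API with the AWS SDK for Python; the dates are illustrative and the end date is exclusive.

```python
import boto3

ce = boto3.client("ce")

# Request an approximately 18-month monthly forecast of unblended cost.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2026-02-01", "End": "2027-08-01"},  # illustrative dates
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(forecast["Total"]["Amount"], forecast["Total"]["Unit"])
```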

Read article →

AWS Cost Optimization Hub introduces Cost Efficiency metric to measure and track cloud cost efficiency

<p>AWS Cost Optimization Hub, a feature within the Billing and Cost Management Console, now supports a Cost Efficiency metric that helps you measure and track cloud cost efficiency over time across your organization. This metric automatically calculates the percentage of your cloud spend that can be optimized by considering rightsizing, idle, and commitment recommendations, allowing you to establish consistent cost savings benchmarks, set performance goals, and track progress to maximize your return on cloud investments.<br /> <br /> AWS Cost Optimization Hub provides you with a measure of your cost efficiency by dividing the aggregated estimated monthly savings of your cost optimization opportunities by your optimizable spend. You can track this metric over time across your organization to understand and benchmark your cost efficiency. With daily refreshes, the metric provides up-to-date insight into your optimization progress, showing score improvements when you implement cost-saving recommendations and score decreases when inefficient resources are provisioned.<br /> <br /> Cost Efficiency is now available in AWS <a href="https://aws.amazon.com/aws-cost-management/cost-optimization-hub/" target="_blank">Cost Optimization Hub</a> across all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a> where AWS Cost Optimization Hub is supported. To get started with the Cost Efficiency metric, please visit the <a href="https://docs.aws.amazon.com/cost-management/latest/userguide/cost-optimization-hub.html" target="_blank">user guide</a> and <a href="https://aws.amazon.com/blogs/aws-cloud-financial-management/measuring-cloud-cost-efficiency-with-the-new-cost-efficiency-metric-by-aws/" target="_blank">blog</a>.</p>
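
For intuition, here is a back-of-the-envelope Python sketch of the ratio described above (aggregated estimated monthly savings divided by optimizable spend). The figures are invented for illustration; Cost Optimization Hub computes and tracks the official metric for you.

```python
# Illustrative numbers only; the console derives the official Cost Efficiency score.
estimated_monthly_savings = {
    "rightsizing": 1_200.0,   # e.g. downsizing over-provisioned instances
    "idle": 800.0,            # e.g. stopping idle resources
    "commitments": 2_000.0,   # e.g. additional Savings Plans / RI coverage
}
optimizable_spend = 40_000.0  # monthly spend that the recommendations apply to

total_savings = sum(estimated_monthly_savings.values())
savings_ratio = total_savings / optimizable_spend

print(f"Estimated monthly savings: ${total_savings:,.2f}")
print(f"Share of optimizable spend that could be saved: {savings_ratio:.1%}")
```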

Read article →

AWS Network Firewall now supports managed rules from AWS Partners, available in AWS Marketplace

<p>AWS Network Firewall now supports managed rules from AWS Partners, enabling you to deploy expert-curated, automatically updated security rules from AWS Partners directly within your network firewall policies. This new capability allows you to integrate pre-configured rule groups into your AWS Network Firewall with just a few clicks through the AWS Network Firewall console. Managed rules are maintained by AWS Partners who continuously update them to address emerging threats, providing comprehensive protection without the operational overhead of managing custom rules.<br /> <br /> You can subscribe to managed rules from AWS Partners either from the AWS Network Firewall console, or from the AWS Marketplace website. Subscriptions to these rules provide you the same benefits as any other product in AWS Marketplace, including consolidated billing and lower pricing for long-term contracts. You can simplify security operations by deploying specialized rule groups tailored to different industry needs, compliance requirements, and threat landscapes. This reduces the time your security teams spend researching, creating, and maintaining custom security rules, while ensuring your protections stay current against evolving threats.<br /> <br /> Managed rules for AWS Network Firewall are available from the AWS Marketplace sellers Check Point, Fortinet, Infoblox, Lumen, Rapid7, ThreatSTOP, and Trend Micro, in all AWS commercial Regions where AWS Network Firewall and AWS Marketplace are available.<br /> <br /> To get started, visit the AWS Network Firewall console or browse available managed rules in <a href="https://aws.amazon.com/marketplace" style="cursor: pointer;">AWS Marketplace</a>. For more information, see the AWS Network Firewall <a href="https://aws.amazon.com/network-firewall/" style="cursor: pointer;">product page</a> and the service <a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/" style="cursor: pointer;">documentation</a>.</p>
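
For illustration, a hedged boto3 sketch of referencing a partner managed rule group from a firewall policy, assuming you have already subscribed to the rule group in AWS Marketplace. The create_firewall_policy call and policy structure are standard Network Firewall APIs; the rule group ARN below is a placeholder, and you would use the ARN shown for your subscription.

```python
import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# Hypothetical ARN for illustration only; use the ARN from your Marketplace subscription.
partner_rule_group_arn = (
    "arn:aws:network-firewall:us-east-1:aws-managed:stateful-rulegroup/ExamplePartnerRules"
)

nfw.create_firewall_policy(
    FirewallPolicyName="policy-with-partner-managed-rules",
    FirewallPolicy={
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        # Reference the partner managed rule group alongside any rule groups of your own.
        "StatefulRuleGroupReferences": [{"ResourceArn": partner_rule_group_arn}],
    },
    Description="Firewall policy that includes an AWS Partner managed rule group",
)
```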

Read article →

AWS launches Billing Transfer for multi-organization billing and cost management

<p>Today, AWS announces Billing Transfer, a new feature that allows customers to centrally manage and pay bills across multiple AWS organizations.</p> <p>With Billing Transfer, customers operating in multi-organization environments can designate a single management account to centrally manage and pay bills for multiple organizations, including invoice collection, payment processing, and detailed cost analysis.</p> <p>Billing Transfer makes billing and cost management operations more efficient and scalable, while ensuring individual management accounts maintain complete security autonomy over their organizations. To protect proprietary pricing information, Billing Transfer is integrated with AWS Billing Conductor. This integration enables billing administrators to control how cost data is seen by their AWS organizations and implement advanced cost allocation strategies across multiple AWS organizations. For AWS Billing Transfer customers, there is no cost to use AWS Billing Conductor when they choose an AWS managed pricing plan. If they choose a Customer managed pricing plan, there is a fee of $50 per AWS organization. AWS offers a free trial for Billing Transfer through May 31, 2026. During this period, both AWS managed and Customer managed pricing plans in Billing Conductor are available at no charge. Starting June 1, 2026, Billing Transfer customers will be charged based on the number of AWS organizations with a Customer managed pricing plan attached.</p> <p>If you’re using Billing Conductor on its own without Billing Transfer, you will still follow the standard per-account pricing model regardless of the type of pricing plan used (see <a contenteditable="false" href="https://aws.amazon.com/aws-cost-management/aws-billing-conductor/pricing/" style="cursor: pointer;">pricing</a> details).</p> <p>Billing Transfer is available today in all public AWS Regions, excluding the AWS GovCloud (US), China (Beijing), and China (Ningxia) Regions.<br /> To learn more about using Billing Transfer to centralize billing and cost management across your multi-organization environment, visit the <a contenteditable="false" href="https://aws.amazon.com/aws-cost-management/aws-billing-transfer/" style="cursor: pointer;">Billing Transfer product page</a>, <a contenteditable="false" href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/orgs_transfer_billing.html" style="cursor: pointer;">AWS Billing documentation</a>, <a contenteditable="false" href="https://docs.aws.amazon.com/cost-management/latest/userguide/what-is-costmanagement.html" style="cursor: pointer;">AWS Cost Management documentation</a>, and <a contenteditable="false" href="https://aws.amazon.com/blogs/aws/new-aws-billing-transfer-for-centrally-managing-aws-billing-and-costs-across-multiple-organizations" style="cursor: pointer;" target="_blank">news blog</a>.</p>

Read article →

AWS introduces E-Invoice delivery for AWS customers using SAP Ariba and Coupa procurement portals

<p>Today, AWS announces the general availability of AWS E-Invoice delivery, a new capability that enables AWS customers to connect their SAP Ariba and Coupa procurement portal accounts with AWS and retrieve purchase orders (POs). AWS customers can also use AWS E-Invoice delivery to deliver PO-matched AWS invoices back to their procurement portal on the same day.<br /> <br /> AWS customers can now onboard to the AWS E-Invoice delivery feature through the AWS Billing and Cost Management console. After onboarding, AWS customers can track AWS invoice delivery status in both the AWS Billing and Cost Management console and their procurement portal. AWS E-Invoice delivery enables AWS customers to streamline their invoice processing workflow.<br /> <br /> The AWS E-Invoice delivery feature is generally available in all AWS Regions, excluding the AWS GovCloud (US), China (Beijing), and China (Ningxia) Regions.<br /> <br /> To get started with the AWS E-Invoice delivery feature, please visit the <a contenteditable="false" href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-pref.html#procurement-portal" style="cursor: pointer;">user guide</a> and blog.</p>

Read article →

Streamline integration with Amazon and AWS Partner products using AWS IAM temporary delegation

<p>AWS Identity and Access Management (IAM) is launching temporary delegation, a new capability that helps you accelerate onboarding and simplify management for products from Amazon and AWS Partners that integrate with your AWS accounts.<br /> <br /> With today’s launch, you can safely delegate limited, temporary access to these product providers to perform initial deployments, ad-hoc maintenance, or feature upgrades on your behalf. This approach provides a more secure and streamlined experience by eliminating the need for you to create persistent IAM roles for such tasks, or perform them manually. It reduces your setup time and lowers your operational burden, while giving you complete control and auditability over delegated access and actions.<br /> <br /> This feature is available in all AWS commercial Regions. Amazon products and AWS Partners such as Amazon Leo (coming soon), Archera, Aviatrix, CrowdStrike (coming soon), Databricks, HashiCorp, Qumulo, Rapid7, and SentinelOne are already implementing AWS IAM temporary delegation.<br /> <br /> To get started,<br /> </p> <ul> <li><b>Customers</b>: See the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-temporary-delegation.html">AWS IAM user guide</a> or <a href="https://aws.amazon.com/blogs/apn/streamline-customer-onboarding-and-accelerate-time-to-value-with-aws-iam-temporary-delegation/">AWS blog</a></li> <li><b>AWS Partners</b>: Refer to the <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-temporary-delegation-partner-guide.html">partner integration guide</a> for onboarding details</li> </ul>

Read article →

Savings Plans and Reserved Instances Group Sharing is now generally available

<p>AWS today announced the general availability of Reserved Instances and Savings Plans (RISP) Group Sharing, a new Billing and Cost Management feature that gives customers granular control over how AWS commitments are shared across their organization. This capability allows customers to define how Reserved Instances and Savings Plans benefits are distributed among specific groups of accounts within their AWS organization, ensuring cost savings align with their business structure and accountability requirements.<br /> <br /> RISP Group Sharing addresses a common challenge for enterprise customers managing AWS costs across multiple business units: Reserved Instances and Savings Plans don't always benefit the teams that purchased them. With this feature, customers can create groups using AWS Cost Categories that reflect their organizational hierarchy—whether by business units, projects, geographical regions, or funding sources. The feature offers two sharing options: Prioritized Group Sharing applies commitments to defined groups first, then shares unused capacity organization-wide, while Restricted Group Sharing keeps commitments exclusively within defined groups for complete isolation when strict boundaries are required.<br /> <br /> RISP Group Sharing is available now in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions.<br /> <br /> To get started with RISP Group Sharing, visit Billing preferences in the <a href="http://console.aws.amazon.com/billing/home" target="_blank">AWS Billing and Cost Management Console</a> and follow the guided setup to create your first Cost Category and configure sharing preferences. For detailed implementation guidance, see the <a href="http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-sp-group-sharing.html" target="_blank">user guide</a> and <a href="https://aws.amazon.com/blogs/aws-cloud-financial-management/control-your-aws-commitments-with-risp-group-sharing/" target="_blank">announcement blog</a>.</p>

Read article →

Amazon SageMaker Catalog now enforces glossary term metadata rules for asset publishing

<p>Amazon SageMaker Catalog now supports metadata enforcement rules for glossary terms, requiring data producers to apply approved business vocabulary when publishing assets. This helps ensure consistent data classification and improves discoverability across organizational catalogs.<br /> <br /> This new capability allows administrators to define mandatory glossary term requirements for data assets during the publishing workflow. Data producers must now classify their assets with approved business terms from organizational glossaries before publication, ensuring consistent metadata standards and improving data discoverability. The enforcement rules validate that required glossary terms are applied, preventing assets from being published without proper business context. By standardizing metadata and aligning technical data schemas with business language, this capability enhances data governance, improves search relevance, and helps business users more easily understand and trust published data assets.<br /> <br /> Metadata enforcement rules for glossary terms are available in all AWS Regions where Amazon SageMaker Catalog operates.<br /> <br /> To get started, visit the Amazon SageMaker console and navigate to the Catalog governance section to configure glossary term enforcement policies. You can also use the AWS CLI or SDKs to programmatically manage metadata rules for asset publishing.</p> <p>To learn more about Amazon SageMaker Catalog, visit the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/metadata-rules-publishing.html" target="_blank">Amazon SageMaker documentation</a>.</p>

Read article →

Amazon EKS introduces enhanced container network observability

<p>Today, we’re announcing new network observability features in <a href="https://aws.amazon.com/eks/" target="_blank">Amazon Elastic Kubernetes Service (EKS)</a> that provide deeper insights into your container networking environment. These new capabilities help you better understand, monitor, and troubleshoot your Kubernetes network landscape in AWS.<br /> <br /> Customers are increasingly deploying microservices to expand and incrementally innovate with software in the AWS cloud, while using Amazon EKS as the underlying platform to run their applications. With enhanced container network observability, customers can leverage granular, network-related metrics for better proactive anomaly detection across cluster traffic, cross-AZ flows, and AWS services. Using these metrics, customers can better measure system performance and visualize the underlying data using their preferred observability stack.<br /> <br /> Additionally, EKS now provides network monitoring visualizations in the AWS console that enable precise troubleshooting and faster root cause analysis. Customers can also leverage these visual capabilities to pinpoint top talkers and network flows causing retransmissions and retransmission timeouts, eliminating blind spots during incidents. These network monitoring features in EKS are powered by <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-NetworkFlowMonitor.html" target="_blank">Amazon CloudWatch Network Flow Monitor</a>.<br /> <br /> Enhanced container network observability for EKS is available in all commercial AWS Regions where CloudWatch Network Flow Monitor is <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-NetworkFlowMonitor-Regions.html" target="_blank">available</a>. To learn more, visit the <a href="https://docs.aws.amazon.com/eks/latest/userguide/network-observability.html" target="_blank">Amazon EKS documentation</a> and the AWS News launch blog.</p>

Read article →

AWS Secrets Manager announces managed external secrets

<p>Today, AWS Secrets Manager announces the launch of managed external secrets, a feature that offers automatic rotation, enabled by default, for your third-party Software-as-a-Service (SaaS) secrets. You can also choose from multiple rotation strategies supported by your SaaS provider, without the overhead of creating or managing rotation Lambda functions. With this launch, you can secure your SaaS secrets with AWS Secrets Manager using a pre-defined secret format, as prescribed by your SaaS provider.<br /> <br /> This launch also includes an <a href="https://docs.aws.amazon.com/secretsmanager/latest/mes-onboarding/secrets-manager-mes-onboarding.html" target="_blank">onboarding guide</a> for any SaaS provider to be listed as a partner. This allows partners to offer their customers prescriptive guidance around managing their secrets, reducing the customer overhead of managing secrets. At launch, the managed external secrets feature is available for three listed partners: Salesforce, BigID, and Snowflake.<br /> <br /> To get started with the feature, refer to the <a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/managed-external-secrets.html" target="_blank">technical documentation</a>. The feature is available in all AWS Regions where AWS Secrets Manager is available. For a list of Regions where Secrets Manager is available, see the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Region table</a>.</p>

Read article →

AWS Channel Partners can now resell using Billing Transfer

<p>AWS Channel Partners who are part of the AWS Solution Provider or Distribution Programs can now resell AWS services using <a href="https://aws.amazon.com/aws-cost-management/aws-billing-transfer/" target="_blank">Billing Transfer</a>. Billing Transfer enables Partners to assume financial responsibility for customer AWS Organizations while customers retain full control of their management accounts. Using Billing Transfer with AWS Partner Central channel management, Partners receive eligible program benefits applied to AWS bills delivered to their AWS Organization, while end customers view their costs at Partner-configured rates within their separate AWS Organization.<br /> <br /> Billing Transfer helps all AWS Channel Partners simplify operations by centrally managing billing and payments across many customer AWS Organizations from a single partner management account. Partners can also now use new APIs for Partner Central to manage channel program reporting and incentive qualification from their own systems. End customers gain autonomy to independently manage their AWS Organizations while benefiting from Partner value-added services such as cost optimization and service management.<br /> <br /> Billing Transfer is available to all AWS Channel Partners and their end customers operating in public AWS Regions, excluding the AWS GovCloud (US), China (Beijing), and China (Ningxia) Regions.<br /> <br /> Channel Partners can get started today through AWS Partner Central channel management. To learn more, see the <a href="https://docs.aws.amazon.com/partner-central/latest/getting-started/channel-management.html" target="_blank">Channel management user guide</a> in the Partner Central documentation and read the AWS Blog.</p>

Read article →

Amazon CloudWatch now supports scheduled queries in Logs Insights

<p>Amazon CloudWatch Logs now supports automatically running Logs Insights queries on a recurring schedule for your log analysis needs. With scheduled queries, you can now automate log analysis tasks and deliver query results to Amazon S3 and Amazon EventBridge.</p> <p>With today's launch, you can track trends, monitor key operational metrics, and detect anomalies without needing to manually re-run queries or maintain custom automation. This feature makes it easier to maintain continuous visibility into your applications and infrastructure, streamline operational workflows, and ensure consistent insight generation at scale. For example, you can set up scheduled queries for your weekly audit reporting. The query results can also be stored in Amazon S3 for analysis, or trigger incident response workflows through Amazon EventBridge. The feature supports all CloudWatch Logs Insights query languages and helps teams improve operational efficiency by eliminating manual query executions.</p> <p>Scheduled queries are available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (São Paulo).</p> <p>You can configure a scheduled query using the Amazon CloudWatch console, AWS Command Line Interface (AWS CLI), AWS Cloud Development Kit (AWS CDK), and AWS SDKs. For more information, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/ScheduledQueries.html" style="cursor: pointer;" target="_blank">Amazon CloudWatch documentation</a>.</p>

Read article →

Get Invoice PDF API is now generally available

<p>Today, AWS announces the general availability of the Get Invoice PDF API, enabling customers to programmatically download AWS invoices via SDK calls.<br /> <br /> Customers can retrieve an individual invoice PDF artifact by calling the API with an AWS invoice ID as input and receiving a pre-signed Amazon S3 URL for immediate download of the AWS invoice and supplemental documents in PDF format. For bulk invoice retrieval, customers can first call the List Invoice Summaries API to get invoice IDs for a specific billing period, then use those invoice IDs as input to the Get Invoice PDF API to download each invoice PDF artifact.<br /> <br /> The Get Invoice PDF API is available in the US East (N. Virginia) Region. Customers in any commercial Region (except the China Regions) can use the service. To get started with the Get Invoice PDF API, please visit the <a href="https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_invoicing_GetInvoicePDF.html">API documentation</a>.</p>
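
A rough boto3 sketch of the workflow described above. Because the API is new, the exact method and field names used here (get_invoice_pdf, InvoiceId, the pre-signed URL field) are assumptions to verify against the linked API reference and your installed boto3 version; the invoice ID is a placeholder.

```python
import boto3
import urllib.request

invoicing = boto3.client("invoicing", region_name="us-east-1")

# For bulk retrieval, first collect invoice IDs for a billing period with the
# List Invoice Summaries API (filter parameters per its documentation), then
# download each PDF as below.
invoice_id = "1234567890"  # placeholder AWS invoice ID

response = invoicing.get_invoice_pdf(InvoiceId=invoice_id)  # assumed parameter name
presigned_url = response["PresignedUrl"]                    # assumed response field name

# Download the PDF from the pre-signed S3 URL before it expires.
with urllib.request.urlopen(presigned_url) as src, open(f"{invoice_id}.pdf", "wb") as dst:
    dst.write(src.read())
```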

Read article →

Amazon OpenSearch Service launches Cluster Insights for improved operational visibility

<p>Amazon OpenSearch Service now includes Cluster Insights, a monitoring solution that provides comprehensive operational visibility of your clusters through a single dashboard. This eliminates the complexity of having to analyze and correlate various logs and metrics to identify potential risks to cluster availability or performance. The solution automates the consolidation of critical operational data across nodes, indices, and shards, transforming complex troubleshooting into a streamlined process.<br /> <br /> When investigating performance issues like slow search queries, Cluster Insights displays relevant performance metrics, affected cluster resources, top-N query analysis, and specific remediation steps in one comprehensive view. The solution operates through OpenSearch UI's resilient architecture, maintaining monitoring capabilities even during cluster unavailability. Users gain immediate access to account-level cluster summaries, enabling efficient management of multiple deployments.<br /> <br /> Cluster Insights is available at no additional cost for OpenSearch version 2.17 or later in all Regions where OpenSearch UI is available. View the complete list of supported Regions <a contenteditable="false" href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/opensearch-ui-endpoints-quotas.html" style="cursor: pointer;">here</a>.<br /> <br /> To learn more about Cluster Insights, refer to our <a contenteditable="false" href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/cluster-insights.html" style="cursor: pointer;">technical documentation</a>.</p>

Read article →

Amazon ECR introduces archive storage class for rarely accessed container images

<p>Amazon ECR now offers a new archive storage class to reduce storage costs for large volumes of rarely accessed container images. The new archive storage class helps you meet your compliance and retention requirements while optimizing storage cost. As part of this launch, ECR lifecycle policies now support archiving images based on last pull time, allowing you to use lifecycle rules to automatically archive images based on usage patterns.<br /> <br /> To get started, you can archive images by configuring lifecycle rules to automatically archive images based on criteria such as image age, count, or last pull time, or using the ECR Console or API to archive images individually. You can archive an unlimited number of images. Archived images do not count against your image per repository limit. Once the images are archived, they are no longer accessible for pulls, but can be easily restored via ECR Console, CLI, or API within 20 minutes. Once restored, images can be pulled normally. All archival and restore operations are logged through CloudTrail for auditability.<br /> <br /> The new ECR archive storage class is available in all AWS Commercial and AWS GovCloud (US) Regions. For pricing, visit the <a href="https://aws.amazon.com/ecr/pricing/">pricing page</a>. To learn more, visit the <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html">documentation</a>.</p>
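As a sketch of the lifecycle-based approach, the snippet below uses the standard put_lifecycle_policy call via boto3; the overall rule structure is real, but the specific countType and action values for archiving are assumptions for illustration, so confirm the exact syntax in the ECR lifecycle policy documentation. The repository name is a placeholder.

```python
import json
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Archive images that have not been pulled in 180 days",
            "selection": {
                "tagStatus": "any",
                "countType": "sinceImagePulled",  # assumed value for last-pull-based rules
                "countUnit": "days",
                "countNumber": 180,
            },
            "action": {"type": "archive"},  # assumed value for the archive storage class
        }
    ]
}

ecr.put_lifecycle_policy(
    repositoryName="my-app",  # placeholder repository
    lifecyclePolicyText=json.dumps(lifecycle_policy),
)
```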

Read article →

AWS Lambda adds support for Python 3.14

<p>AWS Lambda now supports creating serverless applications using Python 3.14. Developers can use Python 3.14 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.<br /> <br /> Python 3.14 is the latest long-term support release of Python. This release provides Lambda customers access to the latest Python 3.14 language features. You can use Python 3.14 with Lambda@Edge (in supported Regions), allowing you to customize low-latency content delivered through Amazon CloudFront. <a contenteditable="false" href="https://docs.powertools.aws.dev/lambda/python/latest/" style="cursor: pointer;" target="_blank">Powertools for AWS Lambda (Python)</a>, a developer toolkit to implement serverless best practices and increase developer velocity, also supports Python 3.14. You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Python 3.14.<br /> <br /> The Python 3.14 runtime is available in all Regions, including the AWS GovCloud (US) Regions and China Regions.<br /> <br /> For more information, including guidance on upgrading existing Lambda functions, read our <a contenteditable="false" href="https://aws.amazon.com/blogs/compute/python-3-14-runtime-now-available-in-aws-lambda/" style="cursor: pointer;" target="_blank">blog post</a>. For more information about AWS Lambda, visit the <a contenteditable="false" href="https://aws.amazon.com/lambda/" style="cursor: pointer;" target="_blank">product page</a>.&nbsp;</p>
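A minimal boto3 sketch of deploying a function on the new python3.14 managed runtime; the IAM role ARN and deployment package are placeholders for your own resources.

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# function.zip contains handler.py with a lambda_handler(event, context) function.
with open("function.zip", "rb") as f:
    zipped_code = f.read()

lambda_client.create_function(
    FunctionName="hello-python314",
    Runtime="python3.14",  # new managed runtime
    Role="arn:aws:iam::123456789012:role/lambda-basic-execution",  # placeholder role ARN
    Handler="handler.lambda_handler",
    Code={"ZipFile": zipped_code},
)
```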

Read article →

Amazon Redshift now supports Just-In-Time (JIT) ANALYZE for Apache Iceberg tables

<p><a contenteditable="false" href="https://aws.amazon.com/redshift/" style="cursor: pointer;">Amazon Redshift</a> today announces the general availability of Just-In-Time (JIT) ANALYZE capability for Apache Iceberg tables, enabling users to run high performance read and write analytics queries on Apache Iceberg tables within the Redshift data lake. The Apache Iceberg open table format has been used by many customers to simplify data processing on rapidly expanding and evolving tables stored in data lakes.<br /> <br /> Unlike traditional data warehouses, data lakes often lack comprehensive table-level and column-level statistics about the underlying data, making it challenging for query engines to choose the most optimal query execution plans without visibility into the table and column statistics. Sub-optimal query execution plans can lead to slower and less predictable performance.<br /> <br /> ‘JIT ANALYZE’ is a new Amazon Redshift feature that automatically collects and utilizes statistics for Iceberg tables during query execution, eliminating manual statistics collection while giving the query engine the information it needs to generate optimal query execution plans. The system uses intelligent heuristics to identify queries that will benefit from statistics, maintains lightweight sketch data structures, and builds high quality table-level and column-level statistics. JIT ANALYZE delivers out-of-the-box performance on par with queries that have pre-calculated statistics, while providing the foundation for many other performance optimizations.<br /> <br /> The Amazon Redshift JIT ANALYZE feature for Apache Iceberg tables is now available in all AWS regions where Amazon Redshift is available. Users do not need to make any changes or enable any settings to take advantage of this new data lake query optimization. To get started, visit the documentation page for Amazon Redshift <a contenteditable="false" href="https://docs.aws.amazon.com/redshift/latest/dg/iceberg-writes.html" style="cursor: pointer;">Management Guide</a>.</p>

Read article →

Safely handle configuration drift with AWS CloudFormation drift-aware change sets

<p><a href="https://docs.aws.amazon.com/cloudformation/">AWS CloudFormation</a> launches drift-aware change sets that can compare an IaC template with the actual state of infrastructure and bring drifted resources in line with their template definitions. Configuration drift occurs when infrastructure managed by IaC is modified via the AWS Management Console, SDK, or CLI. With drift-aware change sets, you can revert drift and keep infrastructure in sync with templates. Additionally, you can preview the impact of deployments on drifted resources and prevent unexpected changes.<br /> <br /> Customers can modify infrastructure outside of IaC when troubleshooting operational incidents. This creates the risk of unexpected changes in future IaC deployments, impacts the security posture of infrastructure, and hampers reproducibility for testing and disaster recovery. <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html">Standard change sets</a> can compare a template with your last-deployed template, but do not consider drift. Drift-aware change sets provide a three-way diff between a new template, last-deployed template, and actual infrastructure state. If your diff predicts unintended overwrites of drift, you can update your template values and recreate the change set. During change set execution, CloudFormation will match resource properties with template values and recreate resources deleted outside of IaC. If a provisioning error occurs, CloudFormation will restore infrastructure to its actual state before deployment.<br /> <br /> To get started, create a change set for an existing stack from the CloudFormation Console and choose “Drift-aware” as the change set type. Alternatively, pass the --deployment-mode REVERT_DRIFT parameter to the <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateChangeSet.html">CreateChangeSet API</a> from the AWS CLI or SDK. To learn more, visit the <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/drift-aware-change-sets.html">CloudFormation User Guide</a>.<br /> <br /> Drift-aware change sets are available in AWS Regions where CloudFormation is available. Refer to the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region table</a> to learn more.</p>

Read article →

AWS Backup launches a low-cost warm storage tier for Amazon S3 backups

<p>AWS Backup introduced a low-cost warm storage tier for Amazon S3 backup data that can reduce costs by up to 30%. After S3 backup data resides in a vault for 60 days (or longer based on your settings), you can move it to the new low-cost warm storage tier. The low-cost tier provides the same performance and features as the warm storage tier, including ransomware protection, recovery, and auditing.</p> <p>Use the new low-cost warm storage tier to reduce storage costs for business, compliance or regulatory data you must retain long-term. With this launch, you can now configure automatic tiering for all S3 backups for all vaults in an account, a specific vault, or a bucket within a vault by setting an age threshold of 60 days or more. When you enable tiering, existing backup data beyond the threshold automatically moves to the low-cost warm tier, delivering immediate cost savings with no action required and no performance impact.</p> <p>This low-cost storage tier is available in all AWS Regions where AWS Backup for Amazon S3 is available. There is a one-time transition fee when data moves to the low-cost warm tier. For additional pricing information, visit the <a href="https://aws.amazon.com/backup/pricing/">AWS Backup pricing page</a>.</p> <p>To learn more about AWS Backup for Amazon S3, visit the <a href="https://aws.amazon.com/backup/faqs/">product page</a> and <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html">technical documentation</a>. To get started, visit the <a href="https://console.aws.amazon.com/backup">AWS Backup console</a>.&nbsp;</p>

Read article →

Amazon RDS for MariaDB now supports community MariaDB minor versions 10.6.24, 10.11.15, and 11.4.9

<p><a href="https://aws.amazon.com/rds/mariadb/">Amazon Relational Database Service (Amazon RDS) for MariaDB</a> now supports community MariaDB minor versions 10.6.24, 10.11.15, and 11.4.9. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the bug fixes, performance improvements, and new functionality added by the MariaDB community.<br /> <br /> You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments.html">Amazon RDS Managed Blue/Green deployments</a> for safer, simpler, and faster updates to your MariaDB instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MariaDB.html">Amazon RDS User Guide.</a><br /> <br /> Amazon RDS for MariaDB makes it straightforward to set up, operate, and scale MariaDB deployments in the cloud. Learn more about pricing details and regional availability at <a href="https://aws.amazon.com/rds/mariadb/pricing/">Amazon RDS for MariaDB</a>. Create or update a fully managed Amazon RDS database in the <a href="https://console.aws.amazon.com/rds/home">Amazon RDS Management Console</a>.</p>

Read article →

Amazon FSx for Lustre improves directory listing performance by up to 5x

<p><a href="https://aws.amazon.com/fsx/lustre">Amazon FSx for Lustre</a> now delivers up to 5x faster directory listing (ls) performance, allowing you to browse and analyze the contents of your file systems more efficiently.<br /> <br /> Amazon FSx for Lustre is a high-performance, cost-effective, and scalable file storage service for compute-intensive workloads like machine learning training, financial analytics, and high-performance computing. ML researchers, data scientists, and developers who use FSx for Lustre for compute-intensive workloads commonly use their file systems to store data for interactive use cases like home directories and source code repositories. Today's performance improvement makes FSx for Lustre even faster for these interactive use cases by reducing the time it takes to list and analyze the contents of directories using "ls".<br /> <br /> The performance improvements are supported with the latest Lustre 2.15 client in all AWS regions where FSx for Lustre is available. To get started, <a href="https://docs.aws.amazon.com/fsx/latest/LustreGuide/install-lustre-client.html">upgrade to the latest 2.15 client</a> and follow the instructions in the <a href="https://docs.aws.amazon.com/fsx/latest/LustreGuide/performance-tips.html">Amazon FSx for Lustre documentation</a> to apply the recommended client tunings.</p>

Read article →

AWS CloudFormation accelerates dev-test cycle with early validation and simplified troubleshooting

<p><a contenteditable="false" href="https://docs.aws.amazon.com/cloudformation/" style="cursor: pointer;">AWS CloudFormation</a> now offers capabilities that allow customers to catch deployment errors before resource provisioning begins and resolve them more efficiently. Change set creation now provides early feedback on common deployment errors. Stack events are now grouped by an operation ID, with access through the new describe-operation API, to accelerate analysis of deployment errors. This empowers developers to reduce deployment cycle times and cut troubleshooting time from minutes to seconds.<br /> <br /> When you create a <a contenteditable="false" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html" style="cursor: pointer;">change set</a>, CloudFormation now validates your template against three common failure causes: invalid property syntax, resource name conflicts with existing resources in your account, and the S3 bucket emptiness constraint on delete operations. If validation fails, the change set status shows ‘FAILED’ with a detailed status on the validation failure. You can then view details for each failure, including the associated property path, to pinpoint exactly where issues occur in your template. When you execute a validated change set, the deployment can still fail because of resource-specific runtime errors, such as resource limits or service-specific constraints. For troubleshooting runtime errors, every stack operation now receives a unique ID. You can zoom into the stack events for an operation and filter down to the events that caused the deployment to fail. This allows you to quickly identify root causes, reducing your troubleshooting time.<br /> <br /> Get started by creating change sets through the CloudFormation Console, CLI, or SDK. View stack events by operation ID in the CloudFormation Console Events tab or via the describe-events API. To learn more, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/validate-stack-deployments.html" style="cursor: pointer;">validate deployment</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#basic-ts-guide" style="cursor: pointer;">troubleshooting</a> User Guide.</p>

Read article →

Amazon RDS for Oracle now supports October 2025 Release Update and Spatial Patch Bundle

<p><a href="https://aws.amazon.com/rds/oracle/" target="_blank">Amazon Relational Database Service (Amazon RDS) for Oracle</a> now supports the <a href="https://www.oracle.com/security-alerts/cpuoct2025.html#AppendixDB" target="_blank">Oracle October 2025 Release Update</a> (RU) for Oracle Database versions 19c, 21c and the corresponding Spatial Patch Bundle for Oracle Database version 19c. We recommend upgrading to the October 2025 RU as it includes 6 new security patches for Oracle database products. For additional details, refer to <a href="https://www.oracle.com/security-alerts/cpuoct2025.html#AppendixDB" target="_blank">Oracle release notes</a>. The Spatial Patch Bundle update delivers important fixes for Oracle Spatial and Graph functionality to provide reliable and optimal performance for spatial operations.<br /> <br /> You can apply the October 2025 Release Update with just a few clicks from the Amazon RDS Management Console, or by using the AWS SDK or CLI. To automatically apply updates to your database instance during your maintenance window, enable Automatic Minor Version Upgrade. You can apply the Spatial Patch Bundle update for new database instances, or upgrade existing instances to engine version '19.0.0.0.ru-2025-10.spb-1.r1' by selecting the "Spatial Patch Bundle Engine Versions" checkbox in the AWS Console.<br /> <br /> Learn more about upgrading your database instances from the<a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Oracle.html" target="_blank"> Amazon RDS User Guide.</a></p>

Read article →

Amazon Bedrock introduces Priority and Flex inference service tiers

<p>Today, Amazon Bedrock introduces two new inference service tiers to optimize costs and performance for different AI workloads. The new <b>Flex</b> tier offers cost-effective pricing for non-time-critical applications like model evaluations and content summarization, while the <b>Priority</b> tier provides premium performance and preferential processing for mission-critical applications. For most models that support the Priority tier, customers can realize up to 25% better output tokens per second (OTPS) latency compared to the Standard tier. These join the existing <b>Standard</b> tier for everyday AI applications with reliable performance.</p> <p>These service tiers address key challenges that organizations face when deploying AI at scale. The Flex tier is designed for non-interactive workloads that can tolerate longer latencies, making it ideal for model evaluations, content summarization, labeling and annotation, and multistep agentic workflows, and it’s priced at a discount relative to the Standard tier. During periods of high demand, Flex requests receive lower priority relative to the Standard tier. The Priority tier is an ideal fit for mission-critical applications, real-time end-user interactions, and interactive experiences where consistent, fast responses are essential. During periods of high demand, Priority requests receive processing priority, at a premium price, over other service tiers. These new service tiers are available today for a range of leading foundation models, including OpenAI (gpt-oss-20b, gpt-oss-120b), DeepSeek (DeepSeek V3.1), Qwen3 (Coder-480B-A35B-Instruct, Coder-30B-A3B-Instruct, 32B dense, Qwen3-235B-A22B-2507), and Amazon Nova (Nova Pro and Nova Premier). With these new options, Amazon Bedrock helps customers gain greater control over balancing cost efficiency with performance requirements, enabling them to scale AI workloads economically while ensuring optimal user experiences for their most critical applications.</p> <p>For more information about the AWS Regions where Amazon Bedrock Priority and Flex inference service tiers are available, see the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/service-tiers-inference.html" target="_blank">AWS Regions</a> table.</p> <p>Learn more about service tiers in our <a href="https://aws.amazon.com/blogs/aws/new-amazon-bedrock-service-tiers-help-you-match-ai-workload-performance-with-cost" target="_blank">News Blog</a> and <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/service-tiers-inference.html" target="_blank">documentation</a>.</p>

Read article →

EC2 Auto Scaling now offers a synchronous API to launch instances inside an Auto Scaling group

<p>Today, EC2 Auto Scaling is launching a new API, LaunchInstances, which gives customers more control and flexibility over how EC2 Auto Scaling provisions instances while providing instant feedback on capacity availability.<br /> <br /> Customers use EC2 Auto Scaling for automated fleet management. With scaling policies, EC2 Auto Scaling can automatically add instances when demand spikes and remove them when traffic drops, ensuring customers' applications always have the right amount of compute. EC2 Auto Scaling also offers the ability to monitor and replace unhealthy instances. In certain use cases, customers may want to specify exactly where EC2 Auto Scaling should launch additional instances and need immediate feedback on capacity availability. The new LaunchInstances API allows customers to precisely control where instances are launched by specifying an override for any Availability Zone and/or subnet in an Auto Scaling group, while providing immediate feedback on capacity availability. This synchronous operation gives customers real-time insight into scaling operations, enabling them to quickly implement alternative strategies if needed. For additional flexibility, the API includes optional asynchronous retries to help reach the desired capacity.<br /> <br /> This feature is now available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore), at no additional cost beyond standard EC2 and EBS usage. To get started, visit the <a href="https://aws.amazon.com/cli/" target="_blank">AWS Command Line Interface (CLI)</a> and the <a href="https://aws.amazon.com/tools/" target="_blank">AWS SDKs</a>. To learn more about this feature, visit the <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-instances-synchronously" target="_blank">AWS documentation</a>.&nbsp;</p>

Read article →

AWS announces Supplementary Packages for Amazon Linux

<p>Today, AWS announces the general availability of Supplementary Packages for Amazon Linux (SPAL), a dedicated repository that provides developers and system administrators with streamlined access to thousands of pre-built EPEL9 packages compatible with Amazon Linux 2023 (AL2023). Amazon Linux serves as the foundation for countless applications running on AWS, but developers often face lengthy processes when building packages from source code. SPAL addresses this challenge by offering ready-to-use packages that accelerate development workflows for teams working with AL2023 environments.<br /> <br /> With SPAL, development teams can significantly reduce deployment times and focus on core application development rather than package compilation. This solution is particularly valuable for DevOps engineers, system administrators, and developers who need reliable packages for production workloads without the overhead of building from source. SPAL packages are derived from the community-maintained EPEL9 project, with AWS providing security patches as they become available upstream. AWS will continue expanding the repository based on customer feedback through the <a contenteditable="false" href="https://github.com/amazonlinux/amazon-linux-2023/issues" style="cursor: pointer;">Amazon Linux 2023 GitHub repository</a>.<br /> <br /> Supplementary Packages for Amazon Linux (SPAL) is available in all AWS Commercial Regions, as well as the AWS GovCloud (US) and China Regions.<br /> <br /> To get started, review available packages in the SPAL repository and update your package management configuration to include the SPAL repository for Amazon Linux 2023. To learn more about this feature, consult the <a contenteditable="false" href="https://aws.amazon.com/linux/amazon-linux-2023/faqs/#supplementary-packages-for-amazon-linux-2023-spal" style="cursor: pointer;">SPAL FAQs</a> or <a contenteditable="false" href="https://docs.aws.amazon.com/linux/al2023/ug/spal.html" style="cursor: pointer;">AWS Documentation</a>.</p>

Read article →

Amazon EC2 P6-B300 instances with NVIDIA Blackwell Ultra GPUs are now available

<p>Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances, accelerated by NVIDIA Blackwell Ultra B300 GPUs. Amazon&nbsp;EC2 P6-B300 instances provide 8x NVIDIA Blackwell Ultra GPUs with 2.1 TB high bandwidth GPU memory, 6.4 Tbps EFA networking, 300 Gbps dedicated ENA throughput, and 4 TB of system memory.&nbsp;</p> <p>P6-B300 instances deliver 2x networking bandwidth, 1.5x GPU memory size, and 1.5x GPU TFLOPS (at FP4, without sparsity) compared to P6-B200 instances, making them well suited to train and deploy large trillion-parameter foundation models (FMs) and large language models (LLMs) with sophisticated techniques. The higher networking and larger memory deliver faster training times and more token throughput for AI workloads.&nbsp;</p> <p>P6-B300 instances are now available in the p6-b300.48xlarge size through <a href="https://aws.amazon.com/ec2/capacityblocks/" target="_blank">Amazon EC2 Capacity Blocks for ML</a> and Savings Plans in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Region</a>: US West (Oregon).&nbsp;For on-demand reservation of P6-B300 instances, please reach out to your account manager.</p> <p>To learn more about P6-B300 instances, visit <a href="https://aws.amazon.com/ec2/instance-types/p6/" target="_blank">Amazon EC2 P6 instances</a>.</p>

Read article →

Amazon OpenSearch Serverless now adds audit logs for data plane APIs

<p>Amazon OpenSearch Serverless now supports detailed audit logging of data plane requests via AWS CloudTrail. This feature enables customers to record user actions on their collections, helping meet compliance regulations, improve security posture, and provide evidence for security investigations. Customers can now track user activities such as authorization attempts, index modifications, and search queries.<br /> <br /> Customers can use CloudTrail to configure filters for OpenSearch Serverless collections with read-only and write-only options, or use advanced event selectors for more granular control over logged data events. All OpenSearch Serverless data events are delivered to an Amazon S3 bucket and optionally to Amazon CloudWatch Events, creating a comprehensive audit trail. This enhanced visibility into who made API calls, and when, helps security and operations teams monitor data access and respond to events in real time.<br /> <br /> Once configured with CloudTrail, audit logs are continuously streamed to CloudTrail with no additional customer action required, and can be further analyzed there.<br /> <br /> Please refer to the <a href="https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-service-regions">AWS Regional Services List</a> for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless.html">see the documentation</a>.</p>
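
A boto3 sketch of enabling these data events on an existing trail with advanced event selectors. put_event_selectors and the selector structure are standard CloudTrail APIs; the "AWS::AOSS::Collection" resources.type value is an assumption to confirm in the CloudTrail data events documentation, and the trail name is a placeholder.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.put_event_selectors(
    TrailName="my-org-trail",  # placeholder trail name
    AdvancedEventSelectors=[
        {
            "Name": "OpenSearch Serverless write-only data events",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::AOSS::Collection"]},  # assumed type
                {"Field": "readOnly", "Equals": ["false"]},  # log index/write operations only
            ],
        }
    ],
)
```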

Read article →

Amazon EC2 I7i instances now available in additional AWS Regions

<p>Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the AWS Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), and Middle East (UAE) Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances.<br /> <br /> I7i instances are ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size datasets (multi-TBs). I7i instances support the torn write prevention feature with up to 16KB block sizes, enabling customers to eliminate database performance bottlenecks.<br /> <br /> I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth.<br /> To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/i7i/">I7i instances page</a>.</p>

Read article →

AWS announces flat-rate pricing plans for website delivery and security

<p>Amazon Web Services (AWS) is launching flat-rate pricing plans with no overages for website delivery and security. The flat-rate plans, available with Amazon CloudFront, combine global content delivery with AWS WAF, DDoS protection, Amazon Route 53 DNS, Amazon CloudWatch Logs ingestion, and serverless edge compute into <b>a simple monthly price with no overage charges</b>. Each plan also includes monthly Amazon S3 storage credits to help offset your storage costs.<br /> <br /> CloudFront flat-rate plans allow you to deliver your websites and applications without calculating costs across multiple AWS services. You won’t face the risk of overage charges, even if your website or application goes viral or faces a DDoS attack. Security features like WAF and DDoS protection are enabled by default, and additional configurations are simple to set up. When you serve your AWS applications through CloudFront instead of directly to the internet, your flat-rate plan covers the data transfer costs between your applications and your viewers for a simple monthly price without the worry of overages. This simplified pricing model is available alongside pay-as-you-go pricing for each CloudFront distribution, giving you the flexibility to choose the right pricing model and feature set for each application.<br /> <br /> Plans are available in Free ($0/month), Pro ($15/month), Business ($200/month), and Premium ($1,000/month) tiers for new and existing CloudFront distributions. Select the plan tier with the features and usage allowances matching your application’s needs. To learn more, refer to the <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-flat-rate-pricing-plans-with-no-overages/" target="_blank">Launch Blog</a>, <a href="https://aws.amazon.com/cloudfront/pricing/" target="_blank">Plans and Pricing</a>, or <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/flat-rate-pricing-plan.html" target="_blank">CloudFront Developer Guide</a>. To get started, visit the <a href="https://us-east-1.console.aws.amazon.com/cloudfront/v4/home?region=us-east-1#/distributions" target="_blank">CloudFront console</a>.</p>

Read article →

Amazon RDS Optimized Reads now supports R8gd and M8gd database instances

<p>Amazon Relational Database Service (RDS) now supports R8gd and M8gd database instances for Optimized Reads on Amazon Aurora PostgreSQL and RDS for PostgreSQL, MySQL, and MariaDB. R8gd and M8gd database instances offer improved price-performance. For example, Optimized Reads on R8gd instances deliver up to 165% better throughput and up to 120% better price-performance over R6g instances for Aurora PostgreSQL.<br /> <br /> Optimized Reads uses local NVMe-based SSD block storage available on these instances to store ephemeral data, such as temporary tables, reducing data access to/from network-based storage and improving read latency and throughput. The result is improved query performance for complex queries and faster index rebuild operations. Aurora PostgreSQL Optimized Reads instances using the I/O-Optimized configuration additionally use the local storage to extend their caching capacity. Database pages that are evicted from the in-memory buffer cache are cached in local storage to speed subsequent retrieval of that data.<br /> <br /> Customers can get started with Optimized Reads through the AWS Management Console, CLI, and SDK by modifying their existing Aurora and RDS databases or creating a new database using R8gd or M8gd instances. These instances are available in the US East (N. Virginia, Ohio), US West (Oregon), Europe (Spain, Frankfurt), and Asia Pacific (Tokyo) Regions. For complete information on pricing and regional availability, please refer to the <a href="https://aws.amazon.com/rds/pricing/" target="_blank">pricing page</a>. For information on specific engine versions that support these DB instance types, please see the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.SupportAurora.html" target="_blank">Aurora</a> and <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.Support.html" target="_blank">RDS</a> documentation.</p>
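A minimal boto3 sketch of moving an existing instance onto an NVMe-backed R8gd class so Optimized Reads can use the local storage; the identifier and size are placeholders, and you should confirm engine-version support for the class in the documentation linked above.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",  # placeholder identifier
    DBInstanceClass="db.r8gd.2xlarge",            # NVMe-backed instance class for Optimized Reads
    ApplyImmediately=True,
)
```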

Read article →

Workshops now available in AWS Builder Center

<p>AWS Builder Center now provides access to the catalog of AWS Workshops, offering step-by-step instructions crafted by AWS experts that explain how to deploy and use AWS services effectively. These workshops cover a wide range of AWS services and use cases, allowing builders to follow guided tutorials within their own AWS accounts. Workshops are designed for builders of all skill levels to gain practical experience and develop solutions tailored to their specific business needs using AWS services.</p> <p>The AWS Workshops Catalog features hundreds of workshops with advanced filtering capabilities to quickly find relevant content by category (Machine Learning, Security, Serverless), AWS service (EC2, Lambda, S3), and complexity level (100-Beginner through 400-Expert). Real-time search with partial matching across workshop titles, descriptions, services, and categories helps surface the most relevant content. Catalog content is automatically localized based on your Builder Center language preference.</p> <p>Builders can navigate to the Workshops catalog at <a href="https://builder.aws.com/build/workshops?trk=dbc7efbe-28d3-4b54-a98d-2336026adfd9&amp;sc_channel=el">builder.aws.com/build/workshops</a> and filter by specific needs—whether you have 1 hour or 8 hours, are a beginner or expert, or want to focus on specific services like Amazon Bedrock and SageMaker. Seamless navigation from Builder Center discovery to the full workshops experience enables hands-on, step-by-step guided learning in your own AWS account.</p> <p>You can begin exploring Workshops in AWS Builder Center immediately with a free Builder ID. To get started with Workshops, visit <a href="https://builder.aws.com/build/workshops?trk=dbc7efbe-28d3-4b54-a98d-2336026adfd9&amp;sc_channel=el">AWS Builder Center</a>.</p>

Read article →

AWS Transfer Family announces Terraform module to automate scanning of transferred files

<p><a href="https://registry.terraform.io/modules/aws-ia/transfer-family/aws/latest" target="_blank">AWS Transfer Family Terraform module</a> now supports deployment of automated malware scanning workflows for files transferred using Transfer Family resources. This release streamlines centralized provisioning of threat detection workflows using Amazon GuardDuty S3 Protection, helping you meet data security requirements by identifying potential threats in transferred files.<br /> <br /> AWS Transfer Family provides fully managed file transfers over SFTP, AS2, FTPS, FTP, and web browser-based interfaces for AWS storage services. Using the new module, you can programmatically provision workflows to scan incoming files, dynamically route files based on scan results, and generate threat notifications, in a single deployment. You can granularly implement threat detection for specific S3 prefixes while preserving folder structures post scanning, and ensure that only verified clean files reach your business applications and data lakes. This eliminates the overhead and risks associated with manual configurations, and provides a scalable deployment option for data security compliance.<br /> <br /> Customers can get started by using the new module from the <a href="https://registry.terraform.io/modules/aws-ia/transfer-family/aws/latest/submodules/transfer-malware-protection" target="_blank">Terraform Registry</a>. To learn more about Transfer Family, visit the <a href="https://aws.amazon.com/aws-transfer-family/" target="_blank">product page</a> and <a href="https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-family.html" target="_blank">user guide</a>. To see all the regions where Transfer Family is available, visit the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region table</a>.</p>

Read article →

Amazon MSK Replicator is now available in two additional AWS Regions

<p>You can now use Amazon MSK Replicator to replicate streaming data across Amazon Managed Streaming for Apache Kafka (<a contenteditable="false" href="https://aws.amazon.com/msk/" style="cursor: pointer;">Amazon MSK</a>) clusters in the Asia Pacific (Hyderabad) and Asia Pacific (Malaysia) <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">Region</a>s.<br /> <br /> MSK Replicator is a feature of Amazon MSK that enables you to reliably replicate data across Amazon MSK clusters in different or the same AWS Region(s) in a few clicks. With MSK Replicator, you can easily build regionally resilient streaming applications for increased availability and business continuity. MSK Replicator provides automatic asynchronous replication across MSK clusters, eliminating the need to write custom code, manage infrastructure, or set up cross-Region networking. MSK Replicator automatically scales the underlying resources so that you can replicate data on demand without having to monitor or scale capacity. MSK Replicator also replicates the necessary Kafka metadata including topic configurations, Access Control Lists (ACLs), and consumer group offsets. If an unexpected event occurs in a Region, you can fail over to the other AWS Region and seamlessly resume processing.<br /> <br /> With this launch, MSK Replicator is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), China (Beijing), China (Ningxia), Asia Pacific (Hyderabad), and Asia Pacific (Malaysia). You can get started with MSK Replicator from the Amazon MSK console or the AWS CLI. To learn more, visit the MSK Replicator <a contenteditable="false" href="https://aws.amazon.com/msk/features/msk-replicator/" style="cursor: pointer;">product page</a>, <a contenteditable="false" href="https://aws.amazon.com/msk/pricing/" style="cursor: pointer;">pricing page</a>, and <a contenteditable="false" href="https://docs.aws.amazon.com/msk/latest/developerguide/msk-replicator.html" style="cursor: pointer;">documentation</a>.</p>

Read article →

Amazon Redshift announces support for the SUPER data type in Databases with Case-Insensitive Collation

<p><a contenteditable="false" href="https://aws.amazon.com/redshift/" style="cursor: pointer;">Amazon Redshift</a> announces support for the SUPER data type in databases with case insensitive collation, enabling analytics on semi-structured and nested data in these databases. Using the SUPER data type with <a contenteditable="false" href="https://docs.aws.amazon.com/redshift/latest/dg/super-partiql.html" style="cursor: pointer;">PartiQL</a> in Amazon Redshift, you can perform advanced analytics that combine structured SQL data (such as string, numeric, and timestamp) with the semi-structured SUPER data (such as JSON) with flexibility and ease-of-use.<br /> <br /> This enhancement allows you to leverage the SUPER data type for your structured and semi-structured data processing needs in databases with case-insensitive collation. Using the COLLATE function, you can now explicitly specify case sensitivity preferences for SUPER columns, providing greater flexibility in handling data with varying case patterns. This is particularly valuable when working with JSON documents, APIs, or application data where case consistency isn't guaranteed. Whether you're processing user-defined identifiers or integrating data from multiple sources, you can now perform complex queries across both case-sensitive and case-insensitive data without additional normalization overhead.<br /> <br /> Amazon Redshift support for the SUPER data type in databases with case insensitive collation is available in all AWS Regions, including the AWS GovCloud (US) Regions, where Amazon Redshift is available. See <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS Region Table</a> for more details. To learn more about the SUPER data type in databases with case insensitive collation, please visit our <a contenteditable="false" href="https://docs.aws.amazon.com/redshift/latest/dg/super-overview.html" style="cursor: pointer;">documentation</a>.</p>
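<p>For illustration only, the sketch below uses the Redshift Data API from Python (boto3) to store JSON in a SUPER column and then force a case-sensitive comparison with the COLLATE function inside a case-insensitive database; the workgroup, database, table, and attribute names are assumptions, and each Data API call is asynchronous, so a real workflow would poll for completion between statements.</p>
<pre><code># Sketch: SUPER + COLLATE in a case-insensitive database via the Redshift Data API.
# Workgroup, database, and table names are assumptions; each execute_statement
# call is asynchronous, so poll describe_statement between steps in practice.
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

def run(sql):
    return rsd.execute_statement(
        WorkgroupName="my-serverless-workgroup",  # assumed Redshift Serverless workgroup
        Database="ci_db",                         # assumed case-insensitive database
        Sql=sql,
    )

run("CREATE TABLE events (id INT, payload SUPER);")
run("""INSERT INTO events VALUES (1, JSON_PARSE('{"Source": "Mobile"}'));""")
# COLLATE makes this one comparison case-sensitive even though the database
# collation is case-insensitive.
run("""SELECT id FROM events
       WHERE COLLATE(payload."Source"::VARCHAR, 'case_sensitive') = 'Mobile';""")
</code></pre>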

Read article →

Active threat defense now enabled by default in AWS Network Firewall

<p>Starting today, AWS Network Firewall enables active threat defense by default in alert mode when you create new firewall policies in the AWS Management Console. Active threat defense provides automated, intelligence-driven protection against dynamic, ongoing threat activities observed across AWS infrastructure.<br /> <br /> With this default setting, you gain visibility into the threat activity, indicator groups, indicator types, and threat names that you are protected against. You can switch to block mode to automatically prevent suspicious traffic, such as command-and-control (C2) communication, embedded URLs, and malicious domains, or disable the feature entirely. AWS verifies threat indicators to ensure high accuracy and minimize false positives.<br /> <br /> Active threat defense is available in all AWS Regions where AWS Network Firewall is available, including the AWS GovCloud (US) and China Regions. To learn more about active threat defense and pricing, see the AWS Network Firewall <a contenteditable="false" href="https://aws.amazon.com/network-firewall/" style="cursor: pointer;">product page</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/aws-managed-rule-groups-atd.html" style="cursor: pointer;">documentation</a>.</p>

Read article →

Amazon EC2 I7ie instances now available in AWS Asia Pacific (Singapore) Region

<p>Starting today, Amazon EC2 I7ie instances are available in the Asia Pacific (Singapore) Region. Designed for large, storage I/O-intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon Processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances offer up to 120TB of local NVMe storage density for storage-optimized instances and up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.<br /> <br /> I7ie instances are high-density, storage-optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently low latency when accessing large data sets. These instances are available in 9 different virtual sizes and deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS).<br /> <br /> To learn more, visit the I7ie instances<a href="https://aws.amazon.com/ec2/instance-types/i7ie/" target="_blank"> page</a>.</p>

Read article →

Amazon Polly expands Generative TTS engine with additional languages and region support

<p>Today, we are excited to announce the general availability of five highly expressive Amazon Polly Generative voices in Austrian German (Hannah), Irish English (Niamh), Brazilian Portuguese (Camila), Belgian Dutch (Lisa), and Korean (Seoyeon). This release follows our October launch of Netherlands Dutch (Laura) Generative voice, bringing our total Generative engine offering to thirty-one voices across twenty locales. Additionally, we have expanded the Generative engine to three new Regions in Asia Pacific: Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Tokyo).<br /> <br /> <a href="https://aws.amazon.com/polly" target="_blank">Amazon Polly</a> is a fully-managed service that turns text into lifelike speech, allowing developers and builders to enable their applications for conversational AI or for speech content creation.<br /> <br /> All new and existing Generative voices are now available in the US East (N. Virginia), Europe (Frankfurt), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Tokyo) Regions.<br /> <br /> To hear how Polly voices sound, go to <a href="https://aws.amazon.com/polly/features/?nc=sn&amp;loc=3" target="_blank">Amazon Polly Features</a>. To learn more about how to use Generative engine, go to <a href="https://aws.amazon.com/blogs/aws/a-new-generative-engine-and-three-voices-are-now-generally-available-on-amazon-polly/" target="_blank">AWS Blog</a>. For more details on the Polly offerings and use, please read the <a href="https://docs.aws.amazon.com/polly/latest/dg/what-is.html" target="_blank">Amazon Polly documentation</a> and visit our <a href="https://aws.amazon.com/polly/pricing/?nc=sn&amp;loc=4" target="_blank">pricing page</a>.</p>
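<p>As a hedged example, the Python (boto3) sketch below synthesizes speech with the generative engine and the new Hannah voice in one of the supported Asia Pacific Regions; the Region and the de-AT language code are assumptions to adjust for your own setup.</p>
<pre><code># Sketch: use the generative engine with the new Austrian German voice.
# Region choice and language code are assumptions; pick any Region where
# the generative engine is available.
import boto3

polly = boto3.client("polly", region_name="ap-northeast-1")

result = polly.synthesize_speech(
    Engine="generative",
    VoiceId="Hannah",            # new Austrian German generative voice
    LanguageCode="de-AT",        # assumed locale code for Austrian German
    OutputFormat="mp3",
    Text="Grüß Gott! Willkommen bei Amazon Polly.",
)

with open("hello.mp3", "wb") as f:
    f.write(result["AudioStream"].read())
</code></pre>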

Read article →

Announcing EC2 Image Builder support for Lambda and Step functions

<p>EC2 Image Builder now supports invoking Lambda functions and executing Step Functions state machines through image workflows. These capabilities enable you to incorporate complex, multi-step workflows and custom validation logic into your image creation process, providing greater flexibility and control over how your images are built and validated.<br /> <br /> Prior to this launch, customers had to write custom code and implement multi-step workarounds to integrate Lambda or Step Functions within image workflows. This was a cumbersome process that was time-consuming to set up, difficult to maintain, and prone to errors. With these new capabilities, you can now seamlessly invoke Lambda functions to execute custom logic or orchestrate Step Functions state machines for complex, multi-step workflows. This native integration allows you to implement use cases such as custom compliance validation, sending custom notifications, and multi-stage security testing, all within your Image Builder workflow.<br /> <br /> These capabilities are available to all customers at no additional cost, in all AWS Regions including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions.<br /> <br /> You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder <a href="https://docs.aws.amazon.com/imagebuilder/latest/userguide/manage-image-workflows.html" target="_blank">documentation</a>.</p>

Read article →

AWS HealthImaging adds native JPEG 2000 Lossless support

<p>AWS HealthImaging now offers JPEG 2000 Lossless as a <a href="https://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_8.7.3.html">transfer syntax</a> for storing and retrieving lossless medical images. With this launch, it is simpler than ever to integrate HealthImaging with applications that require JPEG 2000 encoded DICOM data.<br /> <br /> Customers can now choose between JPEG 2000 Lossless and High-Throughput JPEG 2000 (HTJ2K) for storing lossless DICOM data. HealthImaging data stores enabled for JPEG 2000 Lossless compression more easily integrate with legacy applications and still deliver low latency image retrieval performance from the cloud. With this launch, customers can retrieve image frames in the JPEG 2000 Lossless transfer syntax (1.2.840.10008.1.2.4.90) without incurring the latency of transcoding at retrieval time.<br /> <br /> Support for JPEG 2000 Lossless is available in all AWS Regions where AWS HealthImaging is generally available: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).<br /> <br /> To learn more, visit the <a href="https://docs.aws.amazon.com/healthimaging/latest/devguide/what-is.html">AWS HealthImaging Developer Guide</a>.</p>

Read article →

Amazon MQ now supports LDAP authentication for RabbitMQ

<p><a href="https://aws.amazon.com/amazon-mq/">Amazon MQ</a> now supports LDAP (Lightweight Directory Access Protocol) authentication for RabbitMQ brokers in all available AWS <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">regions</a>. This feature enables RabbitMQ brokers to authenticate and authorize Amazon MQ users using identity providers which support LDAP, providing enhanced security and flexibility in access management. You can now authenticate your Amazon MQ users through the credentials stored in your LDAP server. You can also add, delete, and modify Amazon MQ users and assign permissions to topics and queues. <br /> <br /> You can configure LDAP authentication and authorization on your RabbitMQ broker on Amazon MQ using the AWS Console, AWS CloudFormation, AWS Command Line Interface (CLI), or the AWS Cloud Development Kit (CDK). To get started, create a new RabbitMQ broker with LDAP authentication or update your existing broker's configuration to enable LDAP support. This feature maintains compatibility with standard RabbitMQ LDAP implementations, ensuring seamless migration for existing LDAP enabled brokers. For detailed configuration options and steps, refer to the <a href="https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/ldap-for-amq-for-rabbitmq.html">Amazon MQ documentation page</a>.&nbsp;</p>
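<p>As a rough sketch (field names should be confirmed against the Amazon MQ documentation for RabbitMQ), the following Python (boto3) call switches an existing RabbitMQ broker to LDAP authentication; the broker ID, directory hosts, search bases, and service account are placeholders.</p>
<pre><code># Sketch: enable LDAP authentication on an existing RabbitMQ broker.
# Broker ID, hosts, and search settings are placeholders; confirm the exact
# LdapServerMetadata fields supported for RabbitMQ before using this.
import boto3

mq = boto3.client("mq", region_name="us-east-1")

mq.update_broker(
    BrokerId="b-1234abcd-0000-0000-0000-000000000000",  # placeholder broker ID
    AuthenticationStrategy="LDAP",
    LdapServerMetadata={
        "Hosts": ["ldap.example.com"],
        "ServiceAccountUsername": "cn=mq-service,dc=example,dc=com",
        "ServiceAccountPassword": "REPLACE_ME",
        "UserBase": "ou=users,dc=example,dc=com",
        "UserSearchMatching": "uid={0}",
        "RoleBase": "ou=groups,dc=example,dc=com",
        "RoleSearchMatching": "member={0}",
    },
)
# Configuration changes take effect on the next broker reboot or maintenance window.
</code></pre>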

Read article →

AWS Parallel Computing Service is now HIPAA eligible

<p>AWS Parallel Computing Service (AWS PCS) is now HIPAA (<a contenteditable="false" href="https://aws.amazon.com/compliance/hipaa-compliance/" style="cursor: pointer;">Health Insurance Portability and Accountability Act</a>) eligible. AWS PCS is a managed service that helps you build and manage High Performance Computing (HPC) clusters using the Slurm workload manager.<br /> <br /> With AWS PCS now HIPAA eligible, organizations with a Business Associate Addendum (BAA) can use AWS PCS for compute-intensive healthcare workloads such as genomic sequencing, medical imaging analysis, and clinical research simulations. AWS maintains a standards-based risk management program to ensure that HIPAA-eligible services specifically support HIPAA administrative, technical, and physical safeguards.<br /> <br /> AWS PCS is HIPAA eligible in all of the AWS Regions where AWS PCS is available. See the <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS Regional Services List</a> for the most up-to-date availability information. To learn more about HIPAA eligible services, visit the <a contenteditable="false" href="https://aws.amazon.com/compliance/hipaa-eligible-services-reference/" style="cursor: pointer;">webpage</a>. To get started with AWS PCS, visit the <a contenteditable="false" href="https://aws.amazon.com/pcs/" style="cursor: pointer;">product page</a>.</p>

Read article →

Amazon MWAA Introduces Serverless Deployment Option for Apache Airflow Workflows

<p>Amazon Managed Workflows for Apache Airflow (MWAA) now offers a serverless deployment option that eliminates the operational overhead of managing Apache Airflow environments while optimizing costs through true serverless scaling. This new offering addresses key challenges that data engineers and DevOps teams face when orchestrating workflows: operational scalability, cost optimization, and access management.<br /> <br /> Amazon MWAA Serverless provides seamless workflow orchestration with automatic resource provisioning and scaling. You can define workflows using either YAML configurations or Python-based DAGs, with support for over 80 AWS Operators from Apache Airflow v3.0. Each workflow runs in isolation with distinct AWS Identity and Access Management (IAM) permissions, ensuring secure access to AWS services and data. The service handles all infrastructure scaling automatically, with you paying only for the actual compute time used during task execution.<br /> <br /> Amazon MWAA Serverless is available in 15 AWS regions: Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Ireland), Europe (Stockholm), Europe (Frankfurt), Europe (London), Europe (Paris), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), and US West (Oregon). To learn more about Amazon MWAA Serverless and supported AWS Operators, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/mwaa/latest/mwaa-serverless-userguide/what-is-mwaa-serverless.html" style="cursor: pointer;">Amazon MWAA Serverless documentation</a> and the <a contenteditable="false" href="https://aws.amazon.com/blogs/big-data/introducing-amazon-mwaa-serverless/" style="cursor: pointer;">Amazon MWAA Serverless Launch Blog</a>.<br /> <br /> Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the <a contenteditable="false" href="http://www.apache.org/" style="cursor: pointer;">Apache Software Foundation</a> in the United States and/or other countries.</p>

Read article →

Amazon WorkSpaces Applications adds new instance types and configurable storage options

<p>Amazon WorkSpaces Applications now offers enhanced flexibility features that provide customers with more choices for compute and configurable storage volume. Customers can also import their custom EC2 AMIs to create Amazon WorkSpaces images. These new capabilities help customers better match resources to their specific application requirements while optimizing costs.<br /> <br /> The service now includes 100+ additional compute instance types and sizes across all supported instance families, including general purpose, compute optimized, memory optimized, and accelerated options. Customers can also customize storage volumes ranging from 200GB to 500GB, and utilize their own Microsoft Windows Server 2022 AMIs along with their preferred image customization tools.<br /> <br /> With these enhancements, customers can now match their computing resources to exactly what their applications need while controlling costs effectively. Organizations can choose the right instance type for each use case - whether it's basic office applications running on cost-efficient general-purpose instances, or demanding CAD software requiring powerful compute-optimized instances. The ability to select storage size options allows customers to use additional storage based on their application needs. All of this customization is available while maintaining the simplicity of a fully managed service, eliminating the complexity of infrastructure management.<br /> <br /> These new capabilities are now generally available in all AWS <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">Regions</a> where Amazon WorkSpaces Applications is supported. The enhanced features follow WorkSpaces Applications’ standard pay-as-you-go pricing model. For detailed pricing information, visit the <a href="https://aws.amazon.com/workspaces/applications/pricing/" target="_blank">Amazon WorkSpaces Applications Pricing page</a>. To learn more about these new features, visit the <a href="https://docs.aws.amazon.com/appstream2/" target="_blank">Amazon WorkSpaces Applications documentation</a>.</p>

Read article →

Amazon Bedrock Data Automation supports 10 additional languages for speech analytics

<p>Amazon Bedrock Data Automation (BDA) now supports 10 additional languages for speech analytics workloads in addition to English: Portuguese, French, Italian, Spanish, German, Chinese, Cantonese, Taiwanese, Korean, and Japanese. BDA is a feature of Amazon Bedrock that automates generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With this launch, customers can process audio in these 10 new languages to get transcriptions in the detected language and GenAI-powered insights such as summaries in either the detected language or English. Further, if the audio file contains more than one language, BDA will create a multi-lingual transcript by automatically detecting all supported languages.<br /> <br /> This launch makes it easy to extract insights from multi-lingual conversations such as customer calls, education sessions, public safety calls, clinical discussions, and meetings. For example, a sales supervisor at a global software company can identify areas of improvement for their sales agents by analyzing multi-lingual voice conversations using the insights provided by custom output.<br /> <br /> BDA support for these 10 new languages for speech analytics is available in 8 AWS Regions: US West (Oregon), US East (N. Virginia), AWS GovCloud (US-West), Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), and Asia Pacific (Sydney). To learn more, visit the <a href="https://aws.amazon.com/bedrock/bda/">Bedrock Data Automation</a> page, <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock Pricing</a> page, or view <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/bda.html">documentation</a>.</p>

Read article →

Amazon WorkSpaces Applications expands the regional availability to Italy, Spain, Malaysia, and Israel

<p>Amazon Web Services (AWS) has expanded the regional availability for Amazon WorkSpaces Applications. Starting today, AWS customers can deploy their applications and desktops in the AWS Europe (Milan), Europe (Spain), Asia Pacific (Malaysia), and Israel (Tel Aviv) Regions and stream them using WorkSpaces Applications. Deploying your applications in a Region closer to your end users helps provide a more responsive experience.&nbsp;<br /> <br /> Amazon WorkSpaces Applications is a fully managed, secure application streaming service that provides users with instant access to their desktop applications from anywhere. It allows users to stream applications and desktops from AWS to their devices, without requiring them to download, install, or manage any software locally. WorkSpaces Applications manages the AWS resources required to host and run your applications, scales automatically, and provides access to your users on demand.<br /> <br /> To get started with Amazon WorkSpaces Applications, sign in to the WorkSpaces Applications management console and select the AWS Region of your choice. For the full list of Regions where WorkSpaces Applications is available, see the&nbsp;<a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Region Table</a>. WorkSpaces Applications offers pay-as-you-go pricing. For more information, see&nbsp;<a href="https://aws.amazon.com/workspaces/applications/pricing/" target="_blank">Amazon WorkSpaces Applications Pricing</a>.</p>

Read article →

Amazon EC2 reduces costs for Microsoft SQL Server High-Availability deployments

<p>Today, AWS announced that you can now designate Amazon EC2 instances running license-included SQL Server as part of a High-Availability (HA) cluster to reduce licensing costs with just a few clicks.<br /> <br /> This enhancement is particularly valuable for mission-critical SQL Server databases with Always On Availability Groups and/or Always On failover cluster instances. For example, you can save up to 40% of the full HA costs with no performance compromises when running SQL Server HA on two m8i.4xlarge instances with license-included Windows and SQL Server.<br /> <br /> This feature is available in all commercial AWS Regions.<br /> <br /> To learn more, see Microsoft SQL Server on Amazon EC2 <a href="https://docs.aws.amazon.com/sql-server-ec2/latest/userguide/sql-high-availability.html" target="_blank">User Guide</a> or read the <a href="https://aws.amazon.com/blogs/modernizing-with-aws/amazon-ec2-reduces-costs-for-microsoft-sql-server-high-availability-deployments" target="_blank">blog post</a>.</p>

Read article →

Amazon Route 53 DNS Firewall adds protection against Dictionary-based DGA attacks

<p>Starting today, you can enable Route 53 Resolver DNS Firewall Advanced to monitor and block queries associated with Dictionary-based Domain Generation Algorithm (DGA) attacks, which generate domain names by pseudo-randomly concatenating words from a predefined dictionary, creating human-readable strings that evade detection.<br /> <br /> Route 53 DNS Firewall Advanced is an offering on Route 53 DNS Firewall that enables you to enforce protections to monitor and block your DNS traffic in real time based on anomalies identified in the domain names being queried from your VPCs. These include protections for DNS tunneling and DGA attacks. With this launch, you can also enforce protections for Dictionary-based DGA attacks, a variant of the DGA attack in which domain names are generated to mimic and blend with legitimate domain names to resist detection. To get started, you can configure one or more DNS Firewall Advanced rules, specifying Dictionary DGA as the threat to be inspected. You can add the rules to a DNS Firewall rule group and enforce them on your VPCs by associating the rule group with each desired VPC directly or by using AWS Firewall Manager, AWS Resource Access Manager (RAM), AWS CloudFormation, or Route 53 Profiles.<br /> <br /> Route 53 Resolver DNS Firewall Advanced support for Dictionary DGA is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about the new capabilities and the pricing, visit the Route 53 Resolver DNS Firewall <a href="https://aws.amazon.com/route53/resolver-dns-firewall/" target="_blank">webpage</a> and the <a href="https://aws.amazon.com/route53/pricing/" target="_blank">Route 53 pricing page</a>. To get started, visit the Route 53 <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/firewall-advanced.html" target="_blank">documentation</a>.</p>

Read article →

Amazon ECR now supports PrivateLink for FIPS Endpoints

<p>Amazon Elastic Container Registry (ECR) now supports PrivateLink for endpoints that have been validated under the Federal Information Processing Standard (FIPS) 140-3 program.</p> <p>With this release, customers with security and compliance requirements can now use FIPS-validated cryptographic modules when connecting to Amazon ECR while keeping their traffic within their Amazon Virtual Private Cloud (VPC). This enhancement enables you to meet regulatory compliance requirements while maintaining the security benefits of private connectivity.</p> <p>Support for PrivateLink for FIPS ECR endpoints is now available in the US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon) Regions, and in the AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions. To learn more about AWS PrivateLink, see <a contenteditable="false" href="https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-aws-services.html" style="cursor: pointer;">accessing AWS services through AWS PrivateLink</a>. To learn more about FIPS 140-3 at AWS, visit <a contenteditable="false" href="https://aws.amazon.com/compliance/fips/" style="cursor: pointer;">FIPS 140-3 Compliance</a>. You can learn more about storing, managing and deploying container images and artifacts with Amazon ECR, including how to get started, from our <a contenteditable="false" href="https://aws.amazon.com/ecr/" style="cursor: pointer;">product page</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html" style="cursor: pointer;">user guide</a>.</p>

Read article →

AWS Backup extends cross-account management in four new AWS Regions

<p>AWS Backup now offers cross-account management in the following AWS Regions: Asia Pacific (Taipei, Thailand, New Zealand) and Mexico (Central). This capability helps you manage and monitor backups across your AWS accounts using AWS Organizations.<br /> <br /> With cross-account management in AWS Backup, you can deploy organization-wide backup policies using your AWS Organizations management account or delegated administrator account. This helps maintain compliance across all organizational accounts while reducing management overhead. You can also monitor backup activity across all accounts in your organization from a single management account.<br /> <br /> For more information on the AWS Backup features available across AWS Regions, see <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html#features-by-region">AWS Backup documentation</a>. To get started, visit the <a href="https://console.aws.amazon.com/backup">AWS Backup console</a>.</p>

Read article →

Amazon Aurora MySQL 3.11 (compatible with MySQL 8.0.43) is now generally available

<p>Starting today, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) will support MySQL 8.0.43 through Aurora MySQL v3.11.<br /> <br /> In addition to several security enhancements and bug fixes, MySQL 8.0.43 includes additional error messages for Group Replication and introduces the mysql client “commands” option, which enables or disables most mysql client commands. For more details, refer to the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html" style="cursor: pointer;">Aurora MySQL 3.11</a> and<a contenteditable="false" href="https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-43.html" style="cursor: pointer;"> MySQL 8.0.43</a> release notes. To upgrade to Aurora MySQL 3.11, you can initiate a minor version upgrade manually by <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.Patching.ModifyEngineVersion.html" style="cursor: pointer;">modifying your DB cluster</a>, or you can enable the “<a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.AMVU.html" style="cursor: pointer;">Auto minor version upgrade</a>” option when creating or modifying a DB cluster. This release is available in all <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.RegionsAndAvailabilityZones.html#Aurora.Overview.Availability.MySQL" style="cursor: pointer;">AWS Regions</a> where Aurora MySQL is available.<br /> <br /> Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.html" style="cursor: pointer;">getting started page</a>.</p>
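<p>As a hedged example, the Python (boto3) sketch below performs a manual minor version upgrade; the cluster identifier and the exact 3.11 engine version string are assumptions, so list the valid engine versions first and use the value the API returns.</p>
<pre><code># Sketch: manually upgrade an Aurora MySQL cluster to the 3.11 minor version.
# The cluster name and the engine version string are assumptions; confirm the
# exact version string returned by describe_db_engine_versions before upgrading.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

versions = rds.describe_db_engine_versions(
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.11.0",  # assumed exact version string
)
print([v["EngineVersion"] for v in versions["DBEngineVersions"]])

rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-mysql-cluster",  # assumed cluster name
    EngineVersion="8.0.mysql_aurora.3.11.0",
    ApplyImmediately=True,  # or omit to defer to the maintenance window
)
</code></pre>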

Read article →

Amazon RDS for MySQL now supports new minor versions 8.0.44 and 8.4.7

<p><a contenteditable="false" href="https://aws.amazon.com/rds/mysql/" style="cursor: pointer;">Amazon Relational Database Service (Amazon RDS) for MySQL</a> now supports MySQL minor versions 8.0.44 and 8.4.7, the latest minors released by the MySQL community. We recommend upgrading to the newer minor versions to fix known security vulnerabilities in prior versions of MySQL and to benefit from bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.44 and 8.4.7 in the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Concepts.VersionMgmt.html" style="cursor: pointer;">Amazon RDS user guide</a>.<br /> <br /> You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also use <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments.html" style="cursor: pointer;">Amazon RDS Managed Blue/Green deployments</a> for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html" style="cursor: pointer;">Amazon RDS User Guide.</a><br /> <br /> Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at <a contenteditable="false" href="https://aws.amazon.com/rds/mysql/pricing/" style="cursor: pointer;">Amazon RDS for MySQL</a>. Create or update a fully managed Amazon RDS for MySQL database in the <a contenteditable="false" href="https://console.aws.amazon.com/rds/home" style="cursor: pointer;">Amazon RDS Management Console</a>.</p>
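<p>As a small, hedged sketch in Python (boto3), you can either opt an instance into automatic minor version upgrades or move it to the new minor directly; the instance identifier is a placeholder.</p>
<pre><code># Sketch: pick up the new RDS for MySQL minor version.
# The instance identifier is a placeholder.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Option 1: let RDS apply new minor versions during the maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",
    AutoMinorVersionUpgrade=True,
)

# Option 2: upgrade to the new minor explicitly.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",
    EngineVersion="8.0.44",
    ApplyImmediately=True,
)
</code></pre>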

Read article →

AWS Backup now supports backing up directly to a logically air-gapped vault

<p>AWS Backup now supports logically air-gapped vaults as a primary backup target. You can assign a logically air-gapped vault as the primary target in backup plans, organization-wide policies, or on-demand backups. Previously, logically air-gapped vaults could only store copies of existing backups.</p> <p>This capability reduces storage costs for customers who want the security and recoverability benefits of logically air-gapped vaults. Organizations wanting those benefits can now back up directly to a logically air-gapped vault without storing multiple backups.</p> <p>Resource types that support full AWS Backup management back up directly to the specified air-gapped vault. For resource types without full management support, AWS Backup creates a temporary snapshot in a standard vault, copies it to the air-gapped vault, then removes the snapshot.</p> <p>This feature is available in all AWS Regions that <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-region">support logically air-gapped vaults</a>. To get started, select a logically air-gapped vault as your primary backup target in the AWS Backup console, API, or CLI. For more information, visit the AWS Backup <a href="https://aws.amazon.com/backup/faqs/">product page</a> and <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/lag-vault-primary-backup.html">documentation</a>.</p>
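<p>For illustration, the Python (boto3) sketch below runs an on-demand backup straight into a logically air-gapped vault; the vault name, resource ARN, and IAM role are placeholders, and the vault must already exist as a logically air-gapped vault.</p>
<pre><code># Sketch: on-demand backup with a logically air-gapped vault as the primary target.
# Vault name, resource ARN, and IAM role are placeholders.
import boto3

backup = boto3.client("backup", region_name="us-east-1")

job = backup.start_backup_job(
    BackupVaultName="my-logically-air-gapped-vault",
    ResourceArn="arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0",
    IamRoleArn="arn:aws:iam::111122223333:role/AWSBackupDefaultServiceRole",
    StartWindowMinutes=60,
)
print(job["BackupJobId"])
</code></pre>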

Read article →

Amazon Redshift now supports writing to Apache Iceberg tables

<p><a href="https://aws.amazon.com/redshift/">Amazon Redshift</a> today announces the general availability of write capability to Apache Iceberg tables, enabling users to run analytics read and write queries for append-only workloads on Apache Iceberg tables within Amazon Redshift. Amazon Redshift is a petabyte-scale, enterprise-grade cloud data warehouse service used by tens of thousands of customers. Whether your data is stored in operational data stores, data lakes, streaming engines or within your data warehouse, Amazon Redshift helps you quickly ingest, securely share data, and achieve the best performance at the best price. The Apache Iceberg open table format has been used by many customers to simplify data processing on rapidly expanding and evolving tables stored in data lakes.<br /> <br /> Customers have been using Amazon Redshift to run queries on data lake tables in various file and table formats, achieving a wide range of scalability across data warehouse and data lake workloads. Data lake use cases continue to evolve and become increasingly sophisticated, and require capabilities like transactional consistency for record-level updates and deletes while having seamless schema and partition evolution support. With this milestone Amazon Redshift now supports SQL DDL (data definition language) operations to CREATE an Apache Iceberg table, SHOW the table definition SQL, DROP the table and perform DML (data manipulation language) operations such as INSERT. You can continue to use Amazon Redshift to read from your Apache Iceberg tables in AWS Glue Data Catalog and perform write operations on those Apache Iceberg tables while other users or applications can safely run DML operations on your tables.<br /> <br /> Apache Iceberg support in Amazon Redshift is available in all AWS regions where Amazon Redshift is available. To get started, visit the documentation page for Amazon Redshift <a href="https://docs.aws.amazon.com/redshift/latest/dg/iceberg-writes.html">Management Guide</a>.</p>
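<p>As a loose sketch only, the Python (boto3) call below appends rows to an existing Apache Iceberg table through the Redshift Data API; the workgroup, database, and the three-part table reference assume the Glue Data Catalog is mounted as awsdatacatalog, so adjust the qualification to match how the catalog is registered in your warehouse.</p>
<pre><code># Sketch: append-only write to an existing Iceberg table from Amazon Redshift.
# Workgroup, database, and table qualification are assumptions.
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

rsd.execute_statement(
    WorkgroupName="my-serverless-workgroup",  # assumed Redshift Serverless workgroup
    Database="dev",                           # assumed database
    Sql="""
        INSERT INTO "awsdatacatalog"."analytics_db"."events_iceberg"
        VALUES (1001, 'page_view', DATE '2025-01-15');
    """,
)
</code></pre>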

Read article →

AWS Marketplace now displays estimated tax and invoicing entity information

<p>AWS Marketplace now displays estimated tax information and the invoicing entity to buyers at the time of purchase. This new capability helps customers understand the total cost of their AWS Marketplace purchases before completing transactions, providing enhanced transparency for procurement approvals and budgeting.<br /> <br /> When reviewing offers in AWS Marketplace, customers can now see estimated tax amounts, tax rates, and the invoicing entity based on their current tax and address settings in the AWS Billing console. This information appears at the time of procurement and can be downloaded as a PDF, allowing buyers to request approval for the correct spend amount and issue purchase orders to the appropriate invoicing entity. The estimated tax display includes the tax type (such as Value Added Tax, Goods and Services Tax, or US sales tax), estimated tax amount for upfront charges, and tax rate information. This visibility helps finance teams accurately budget and avoid unexpected costs that can impact procurement workflows and payment processing.<br /> <br /> This capability is available today in all <a href="https://docs.aws.amazon.com/marketplace/latest/buyerguide/supported-regions.html">AWS Regions</a> where AWS Marketplace is supported.<br /> <br /> For information on managing your tax settings, refer to the <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-account-payment.html">AWS Billing Documentation</a>. To learn more about tax handling in AWS Marketplace, visit this <a href="https://aws.amazon.com/tax-help/marketplace-buyers/">page</a>.</p>

Read article →

Amazon VPC IPAM automates IP assignments from Infoblox IPAM

<p>Today, AWS launched the ability for Amazon VPC IP Address Manager (IPAM) to automatically acquire non-overlapping IP address allocations from Infoblox Universal IPAM. This feature minimizes manual processes between cloud and on-premises administrators, reducing turnaround time.<br /> <br /> With this launch, you can automatically acquire non-overlapping IP addresses from your on-premises Infoblox Universal IPAM into your top-level AWS IPAM pool and organize them into regional pools based on your business requirements. When you acquire non-overlapping IPs, you reduce the risk of service disruptions because your IPs don’t conflict with on-premises IP addresses. Previously, in hybrid cloud environments, administrators had to use offline means such as tickets or emails to request and allocate IP addresses, which was often error-prone and time-consuming. This integration automates the manual process, improving operational efficiency.<br /> <br /> This feature is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a> where Amazon VPC IPAM is supported, excluding AWS China Regions and AWS GovCloud (US) Regions.<br /> <br /> To learn more about IPAM, view the <a href="https://docs.aws.amazon.com/vpc/latest/ipam/integrate-infoblox-ipam.html">IPAM documentation</a>. For details on pricing, refer to the IPAM tab on the <a href="https://aws.amazon.com/vpc/pricing/">Amazon VPC Pricing Page</a>.<br /> </p>

Read article →

Amazon Route 53 Profiles now supports Resolver query logging configurations

<p>Today, AWS announced support for Resolver query logging configurations in Amazon Route 53 Profiles, allowing you to manage Resolver query logging configuration and apply it to multiple VPCs and AWS accounts within your organization. With this enhancement, Amazon Route 53 Profiles simplifies the management of Resolver query logging by streamlining the process of associating logging configurations with VPCs, and without requiring you to manually associate them with each VPC.<br /> <br /> Route 53 Profiles allows you to create and share Route 53 configurations (private hosted zones, DNS Firewall rule groups, Resolver rules) across multiple VPCs and AWS accounts. Previously, Resolver query logging required you to manually set it up for each VPC in every AWS account. Now, with Route 53 Profiles you can manage your Resolver query logging configurations for your VPCs and AWS accounts, using a single Profile configuration. Profiles support for Resolver query logging configurations reduces the management overhead for network security teams and simplifies compliance auditing by providing consistent DNS query logs across all accounts and VPCs.<br /> <br /> Route 53 Profiles support for Resolver query logging is now available in the AWS Regions mentioned <a href="https://docs.aws.amazon.com/general/latest/gr/r53.html" target="_blank">here</a>. To learn more about this capability and how it can benefit your organization, visit the Amazon Route 53 <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/profiles.html" target="_blank">documentation</a>. You can get started by accessing the Amazon Route 53 console in your AWS Management Console or through the AWS CLI. To learn more about Route 53 Profiles pricing, see <a href="https://aws.amazon.com/route53/pricing/" target="_blank">here</a>.&nbsp;</p>
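<p>As a hedged sketch, the Python (boto3) call below attaches an existing Resolver query logging configuration to a Route 53 Profile so that VPCs associated with the Profile inherit it; the Profile ID and configuration ARN are placeholders, and the association details should be confirmed in the Route 53 Profiles documentation.</p>
<pre><code># Sketch: associate a Resolver query logging configuration with a Route 53 Profile.
# Profile ID and query log configuration ARN are placeholders.
import boto3

profiles = boto3.client("route53profiles", region_name="us-east-1")

profiles.associate_resource_to_profile(
    ProfileId="rp-0123456789abcdef",  # placeholder Profile ID
    Name="org-wide-query-logging",
    ResourceArn=(
        "arn:aws:route53resolver:us-east-1:111122223333:"
        "resolver-query-log-config/rqlc-0123456789abcdef"
    ),
)
</code></pre>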

Read article →

Amazon U7i instances now available in US East (Ohio) Region

<p>Starting today, Amazon EC2 High Memory U7i instances with 24TB of memory (u7in-24tb.224xlarge) are now available in the US East (Ohio) Region. U7in-24tb instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7in-24tb instances offer 24TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7in-24tb instances offer 896 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 200Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/">High Memory instances page</a>.</p>

Read article →

Amazon U7i instances now available in AWS Europe (Ireland) Region

<p>Starting today, Amazon EC2 High Memory U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in the AWS Europe (Ireland) Region. U7i-12tb instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-12tb instances offer 12TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-12tb instances offer 896 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/" target="_blank">High Memory instances page</a>.</p>

Read article →

Amazon EC2 M8i and M8i-flex instances are now available in Asia Pacific (Mumbai) Region

<p>Starting today, Amazon EC2 M8i and M8i-flex instances are now available in the Asia Pacific (Mumbai) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads. The M8i and M8i-flex instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i and M7i-flex instances.<br /> <br /> M8i-flex instances are the easiest way to get price performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources.<br /> <br /> M8i instances are a great choice for all general purpose workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications.<br /> <br /> To get started, sign in to the <a href="https://aws.amazon.com/console/" target="_blank">AWS Management Console</a>. For more information about the new instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/m8i/" target="_blank">M8i and M8i-flex</a> page or visit the <a href="https://aws.amazon.com/blogs/aws/new-general-purpose-amazon-ec2-m8i-and-m8i-flex-instances-are-now-available/" target="_blank">AWS News</a> blog.</p>

Read article →

AWS Backup extends delegated administrator support to 17 additional AWS Regions

<p>You can now designate delegated administrators for AWS Backup in 17 additional AWS Regions, enabling assigned users in member accounts to perform most administrative tasks.</p> <p>Delegated administrators are now supported in Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, New Zealand, Taipei, Thailand), Canada West (Calgary), Europe (Milan, Spain, Zurich), Israel (Tel Aviv), Mexico (Central), and Middle East (Bahrain, UAE). Delegated administration enables organizations to designate a central AWS account to manage backup operations across multiple member accounts, streamlining governance and reducing administrative overhead. Additionally, you can now use AWS Backup Audit Manager cross-Region and cross-account delegated administrator functionality in these Regions, empowering delegated administrators to create audit reports for jobs and compliance for backup plans that span these Regions.</p> <p>For more information on the AWS Backup features available across AWS Regions, see&nbsp;<a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html#features-by-region">AWS Backup documentation</a>. To get started, visit the&nbsp;<a href="https://console.aws.amazon.com/backup">AWS Backup console</a>.</p>

Read article →

AWS Transform Automates Landing Zone Acceleration Network Configuration

<p>AWS Transform for VMware now allows customers to automatically generate network configurations that can be directly imported into the <a href="https://aws.amazon.com/solutions/implementations/landing-zone-accelerator-on-aws/">Landing Zone Accelerator on AWS solution (LZA)</a>. Building on AWS Transform's existing support for infrastructure-as-code generation in AWS CloudFormation, AWS CDK, and Terraform formats, this new capability specifically enables automatic transformation of VMware network environments into LZA-compatible network configuration YAML files. These YAML configurations can be directly deployed through LZA's deployment pipeline, streamlining the process of setting up your cloud infrastructure.<br /> <br /> AWS Transform for VMware is an agentic AI service that automates the discovery, planning, and migration of VMware workloads, accelerating infrastructure modernization with increased speed and confidence. Landing Zone Accelerator on AWS solution (LZA) automates the setup of a secure, multi-account AWS environment using AWS best practices. Migrating workloads to AWS traditionally requires you to manually recreate network configurations while maintaining operational and compliance consistency. The service now automates the generation of LZA network configurations, reducing manual effort, potential configuration errors, and deployment time while ensuring compliance with enterprise security standards.<br /> <br /> The LZA configuration generation capability is available in all <a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-app-vmware-acct-connections.html#transform-app-vmware-target-acct">AWS Regions where the service is offered</a>.<br /> <br /> To learn more, visit the AWS Transform for VMware <a href="https://aws.amazon.com/transform/vmware/">product page</a>, read the <a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-app-vmware.html">user guide</a>, or get started in the <a href="https://console.aws.amazon.com/transform/home">AWS Transform web experience</a>.</p>

Read article →

AWS Lambda adds support for Rust

<p>AWS Lambda now supports building serverless applications using Rust. Previously, AWS classified Rust support in Lambda as ‘experimental’ and did not recommend using Rust for production workloads. With this launch, Rust support in Lambda is now generally available, backed by AWS Support and the Lambda SLA.<br /> <br /> Rust is a popular programming language, offering high performance, memory efficiency, compile-time code safety features, and a mature package management and tooling ecosystem. This makes Rust an ideal choice for developers building performance-sensitive serverless applications. Developers can now build business-critical serverless applications in Rust and run them in Lambda, taking advantage of Lambda’s built-in event source integrations, fast scaling from zero, automatic patching, and usage-based pricing.<br /> <br /> Lambda support for Rust is available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions.<br /> <br /> For more information, see <a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-rust.html" style="cursor: pointer;" target="_blank">Building Lambda functions with Rust</a> in the Lambda documentation, or our blog post <a href="https://aws.amazon.com/blogs/compute/building-serverless-applications-with-rust-on-aws-lambda/" target="_blank">Building serverless applications with Rust on AWS Lambda</a>.</p>

Read article →

AWS Lambda adds support for Java 25

<p>AWS Lambda now supports creating serverless applications using Java 25. This runtime is based on the latest long-term support release of <a href="https://aws.amazon.com/corretto/" target="_blank">Amazon Corretto</a>, Amazon’s distribution of OpenJDK. You can use Java 25 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.<br /> <br /> This release brings the latest Java language features to Lambda developers, such as primitive types in patterns, module import declarations, and flexible constructor bodies. It also includes several performance enhancements, such as Ahead-of-Time caches, adjustments to tiered compilation defaults, and removing the patch for the Log4Shell vulnerability from 2021. You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), CDK, and AWS CloudFormation to deploy and manage serverless applications written in Java 25. The runtime supports Lambda Snap Start (in supported Regions) for fast cold starts. <a href="https://docs.powertools.aws.dev/lambda/java/" target="_blank">Powertools for AWS Lambda (Java)</a>, a developer toolkit to implement serverless best practices and increase developer velocity, also supports Java 25.<br /> <br /> The Lambda Java 25 runtime is available in all Regions, including AWS GovCloud (US) Regions and China Regions.<br /> <br /> For more information, including guidance on upgrading existing Lambda functions, read our <a href="https://aws.amazon.com/blogs/compute/aws-lambda-now-supports-java-25/" target="_blank">blog post</a>. For more information about AWS Lambda, visit our <a href="https://aws.amazon.com/lambda/" target="_blank">product page</a>.</p>

Read article →

AWS IoT Services expand support of VPC endpoints and IPv6 connectivity

<p>AWS IoT Core, AWS IoT Device Management, and AWS IoT Device Defender have expanded support for Virtual Private Cloud (VPC) endpoints and IPv6. Developers can now use AWS PrivateLink to establish VPC endpoints for all data plane operations, management APIs, and the credential provider. This enhancement allows IoT workloads to operate entirely within virtual private clouds without traversing the public internet, helping strengthen the security posture for IoT deployments.<br /> <br /> Additionally, IPv6 support for both VPC and public endpoints gives developers the flexibility to connect IoT devices and applications using either IPv6 or IPv4. This helps organizations meet local requirements for IPv6 while maintaining compatibility with existing IPv4 infrastructure.<br /> <br /> These features can be configured through the AWS Management Console, AWS CLI, and AWS CloudFormation. The functionality is now generally available in all AWS Regions where the relevant AWS IoT services are offered. For more information about the&nbsp;<a href="https://docs.aws.amazon.com/iot/latest/developerguide/connect-to-iot.html">IPv6 support</a>&nbsp;and&nbsp;<a href="https://docs.aws.amazon.com/iot/latest/developerguide/IoTCore-VPC.html">VPCe support</a>, customers can visit the AWS IoT technical documentation pages. For information about PrivateLink pricing, visit the&nbsp;<a href="https://aws.amazon.com/privatelink/pricing/">AWS PrivateLink pricing page</a>.</p>
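As a rough sketch of the setup, the interface endpoints are created like any other AWS PrivateLink endpoint. The VPC, subnet, and security group IDs below are placeholders, and the service name is an assumption for illustration; confirm the exact name for your Region before using it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface VPC endpoint for the AWS IoT Core data plane.
# The service name is assumed for illustration; list the names available in
# your Region with ec2.describe_vpc_endpoint_services() to confirm it.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.iot.data",
    SubnetIds=["subnet-0123456789abcdef0"],      # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```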

Read article →

Amazon SageMaker Catalog now supports read and write access to Amazon S3

<p>Amazon SageMaker Catalog now supports read and write access to Amazon S3 general purpose buckets. This capability helps data scientists and analysts search for unstructured data, process it alongside structured datasets, and share transformed datasets with other teams. Data publishers gain additional controls to support analytics and generative AI workflows within SageMaker Unified Studio while maintaining security and governance controls over shared data.&nbsp;</p> <p>When approving subscription requests or directly sharing S3 data within the SageMaker Catalog, data producers can choose to grant read-only or read and write access. If granted read and write access, data consumers can process datasets in SageMaker and store the results back to the S3 bucket or folder. The data can then be published and automatically discoverable by other teams.&nbsp;This capability is now available in <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/supported-regions.html" target="_blank">all AWS Regions</a>&nbsp;where Amazon SageMaker Unified Studio is supported. To get started, you can log into SageMaker Unified Studio, or you can use the Amazon DataZone API, SDK, or AWS CLI. To learn more, see the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/data-s3.html" target="_blank">SageMaker Unified Studio guide</a>.</p>

Read article →

Amazon RDS Blue/Green deployments now support Aurora Global Database

<p>Amazon RDS Blue/Green deployments now support safer, simpler, and faster updates for your Aurora Global Databases. With just a few clicks, you can create a staging (green) environment that mirrors your production (blue) Aurora Global Database, including primary and all secondary regions. When you’re ready to make your staging environment the new production environment, perform a blue/green switchover. This operation transitions your primary and all secondary regions to the green environment, which now serves as the active production environment. Your application begins accessing it immediately without any configuration changes, minimizing operational overhead.<br /> <br /> With Global Database, a single Aurora cluster can span multiple AWS Regions, providing disaster recovery for your applications in case of single Region impairment and enabling fast local reads for globally distributed applications. With this launch, you can perform critical database operations including major and minor version upgrades, OS updates, parameter modifications, instance type validations, and schema changes with minimal downtime. During blue/green switchover, Aurora automatically renames clusters, instances, and endpoints to match the original production environment, enabling applications to continue operating without any modifications. You can leverage this capability using the AWS Management console, SDK, or CLI.<br /> <br /> This capability is available in <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.BlueGreenDeployments.html#Concepts.Aurora_Fea_Regions_DB-eng.Feature.BlueGreenDeployments.ams" target="_blank">Amazon Aurora MySQL-Compatible Edition</a> and <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.BlueGreenDeployments.html#Concepts.Aurora_Fea_Regions_DB-eng.Feature.BlueGreenDeployments.apg" target="_blank">Amazon Aurora PostgreSQL-Compatible Edition</a> versions that support the Aurora Global Database configuration and in all commercial AWS Regions and AWS GovCloud (US) Regions.<br /> <br /> Start planning your next Global Database upgrade using RDS Blue/Green deployments by following the steps in the <a href="https://aws.amazon.com/blogs/database/introducing-fully-managed-blue-green-deployments-for-amazon-aurora-global-database/" target="_blank">blog</a>. For more details, refer to our <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html" target="_blank">documentation</a>.</p>
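A minimal sketch of the workflow using the AWS SDK for Python (boto3); the source ARN, target engine version, and timeout are placeholders, and the blog post describes exactly which ARN to pass for a Global Database configuration.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the green (staging) environment from the existing production environment.
# Source ARN and target engine version below are placeholders.
bg = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="global-db-upgrade",
    Source="arn:aws:rds::123456789012:global-cluster:my-global-db",
    TargetEngineVersion="16.6",
)
bg_id = bg["BlueGreenDeployment"]["BlueGreenDeploymentIdentifier"]

# After validating the green environment, switch production traffic over to it.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier=bg_id,
    SwitchoverTimeout=300,  # seconds
)
```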

Read article →

Amazon ECS improves Service Availability during Rolling deployments

<p><a href="https://aws.amazon.com/ecs/" target="_blank">Amazon Elastic Container Service</a> (Amazon ECS) now includes enhancements that improve service availability during rolling deployments. These enhancements help maintain availability when new application version tasks are failing, when current tasks are unexpectedly terminated, or when scale-out is triggered during deployments.</p> <p>Previously, when tasks in your currently running version became unhealthy or were terminated during a rolling deployment, ECS would attempt to replace them with the new version to prioritize deployment progress. If the new version could not launch successfully—such as when new tasks fail health checks or fail to start—these replacements would fail and your service availability could drop. ECS now replaces unhealthy or terminated tasks using the same service revision they belong to. Unhealthy tasks in your currently running version are replaced with healthy tasks from that same version, independent of the new version's status. Additionally, when Application Auto Scaling triggers during a rolling deployment, ECS applies scale-out to both service revisions, ensuring your currently running version can handle increased load even if the new version is failing.</p> <p>These improvements respect your service's maximumPercent and minimumHealthyPercent settings. These enhancements are enabled by default for all services using the rolling deployment strategy and are available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a>. To learn more about rolling-update deployments, refer <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html" target="_blank">Link</a>.</p>

Read article →

Announcing Amazon DocumentDB (with MongoDB compatibility) version 8.0

<p>Amazon DocumentDB (with MongoDB compatibility) announces version 8.0, which adds support for drivers using MongoDB API versions 6.0, 7.0, and 8.0. Amazon DocumentDB 8.0 also improves query latency by up to 7x and compression ratio by up to 5x, enabling you to build high-performance applications at a lower cost.&nbsp;</p> <p>The following are features and capabilities introduced in Amazon DocumentDB 8.0:<br /> </p> <ul> <li><b>Compatibility with MongoDB 8.0:</b>&nbsp;Amazon DocumentDB 8.0 provides compatibility with MongoDB 8.0 by adding support for MongoDB 8.0 API drivers. Amazon DocumentDB 8.0 also supports applications that are built using MongoDB API versions 6.0 and 7.0.</li> <li><b>Planner Version 3:</b> <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/query-planner-v3.html" style="cursor: pointer;">New query planner</a> in Amazon DocumentDB 8.0 extends performance improvements to aggregation stage operators, along with supporting aggregation pipeline optimizations and distinct commands.</li> <li><b>New aggregation stages and operators:</b> Amazon DocumentDB 8.0 offers six new aggregation stages ($replaceWith, $vectorSearch, $merge, $set, $unset, and $bucket) and three new aggregation operators ($pow, $rand, and $dateTrunc).</li> <li><b>Compression:</b> Support for <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/dict-compression.html" style="cursor: pointer;">dictionary-based compression</a> through the Zstandard compression algorithm improves compression ratio by up to 5x, thus improving storage efficiency and reducing I/O costs.</li> <li><b>New capabilities:</b> Amazon DocumentDB 8.0 supports <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/collation.html" style="cursor: pointer;">collation</a> and <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/views.html" style="cursor: pointer;">views</a>.</li> <li><b>A new version of text index:</b> <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/text-search.html" style="cursor: pointer;">Text index v2</a> in Amazon DocumentDB 8.0 introduces additional tokens, enhancing text search capabilities.</li> <li><b>Vector search improvements:</b> Through parallel vector index build, Amazon DocumentDB 8.0 reduces index build time by up to 30x.</li> </ul> <p>You can use AWS Database Migration Service (DMS) to upgrade your Amazon DocumentDB 5.0 instance-based clusters to Amazon DocumentDB 8.0 clusters. Please see <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/docdb-migration.versions.html" style="cursor: pointer;"><u>upgrading your DocumentDB cluster</u></a> to learn more. Amazon DocumentDB 8.0 is available in all AWS&nbsp;<a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/regions-and-azs.html" style="cursor: pointer;"><u>Regions</u></a> where Amazon DocumentDB is available. To learn more about Amazon DocumentDB 8.0, visit the <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/what-is.html" style="cursor: pointer;"><u>documentation</u></a>.</p>
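A brief illustration of two of the new pipeline features ($set and $dateTrunc) using PyMongo; the connection string, database, collection, and field names are placeholders.

```python
from pymongo import MongoClient

# Placeholder endpoint; use your cluster endpoint and the Amazon DocumentDB CA bundle.
client = MongoClient(
    "mongodb://user:pass@mycluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false"
)
orders = client["shop"]["orders"]

# $set (new stage) adds a truncated date via $dateTrunc (new operator),
# then $group sums order amounts per day.
pipeline = [
    {"$set": {"order_day": {"$dateTrunc": {"date": "$created_at", "unit": "day"}}}},
    {"$group": {"_id": "$order_day", "total": {"$sum": "$amount"}}},
]
for doc in orders.aggregate(pipeline):
    print(doc)
```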

Read article →

Amazon SQS expands IPv6 support to the AWS GovCloud (US) Regions

<p><a href="https://aws.amazon.com/pm/sqs/">Amazon Simple Queue Service (Amazon SQS) </a>now allows customers to make API requests over Internet Protocol version 6 (IPv6) in the AWS GovCloud (US) Regions. The new endpoints have also been validated under the Federal Information Processing Standard (FIPS) 140-3 program.<br /> <br /> Amazon SQS is a fully managed message queuing service that enables decoupling and scaling of distributed systems, microservices, and serverless applications. With this update, customers have the option of using either IPv6 or IPv4 when sending requests over dual-stack public or VPC endpoints.<br /> <br /> Amazon SQS now supports IPv6 in all Regions where the service is available, including AWS Commercial, AWS GovCloud (US) and China Regions. For more information on using IPv6 with Amazon SQS, please refer to our <a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dual-stack.html">developer guide</a>.&nbsp;</p>

Read article →

Amazon RDS for PostgreSQL now supports major version 18

<p><a href="https://aws.amazon.com/rds/postgresql/">Amazon RDS for PostgreSQL</a> now supports major version 18, starting with PostgreSQL version 18.1. PostgreSQL 18 introduces several important community updates that improve query performance and database management.<br /> <br /> PostgreSQL 18.0 includes "skip scan" support for multicolumn B-tree indexes and improved WHERE clause handling for OR and IN conditions enhance query optimization. Parallel Generalized Inverted Index (GIN) builds and updated join operations boost overall database performance. The introduction of Universally Unique Identifiers Version 7 (UUIDv7) combines timestamp-based ordering with traditional UUID uniqueness, particularly beneficial for high-throughput distributed systems. PostgreSQL 18 also improves observability by providing buffer usage counts, index lookup statistics during query execution, and per-connection I/O utilization metrics. This release also includes support for the new pgcollection extension, and updates to existing extensions such as pgaudit 18.0, pgvector 0.8.1, pg_cron 1.6.7, pg_tle 1.5.2, mysql_fdw 2.9.3, and tds_fdw 2.0.5.<br /> <br /> You can upgrade your database using several options including <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html">RDS Blue/Green deployments</a>, upgrade in-place, restore from a snapshot. Learn more about upgrading your database instances in the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html">Amazon RDS User Guide</a>.<br /> <br /> Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See <a href="https://aws.amazon.com/rds/postgresql/pricing/">Amazon RDS for PostgreSQL Pricing</a> for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the <a href="https://console.aws.amazon.com/rds/home">Amazon RDS Management Console</a>.</p>

Read article →

Amazon EventBridge introduces enhanced visual rule builder

<p>Amazon EventBridge introduces a new, intuitive console-based visual rule builder with a comprehensive event catalog for discovering and subscribing to events from custom applications and over 200 AWS services. The new rule builder integrates the EventBridge Schema Registry with an updated event catalog and an intuitive drag-and-drop canvas that simplifies building event-driven applications.<br /> <br /> With the enhanced rule builder, developers can browse and search through events with readily available sample payloads and schemas, eliminating the need to find and reference individual service documentation. The schema-aware visual builder guides developers through creating event filter patterns and rules, reducing syntax errors and development time.<br /> <br /> The EventBridge enhanced rule builder is available today in all Regions where the Schema Registry is available. Developers can get started through the Amazon EventBridge console at no additional cost beyond standard EventBridge usage charges.<br /> <br /> For more information, visit the EventBridge <a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-visual.html">documentation</a>.</p>

Read article →

AWS Network Firewall is now available in the AWS New Zealand (Auckland) region

<p>Starting today, AWS Network Firewall is available in the AWS New Zealand (Auckland) Region, enabling customers to deploy essential network protections for all their Amazon Virtual Private Clouds (VPCs).<br /> <br /> AWS Network Firewall is a managed firewall service that is easy to deploy. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up and maintain the underlying infrastructure. It is integrated with AWS Firewall Manager to provide you with central visibility and control over your firewall policies across multiple AWS accounts.<br /> <br /> To see which regions AWS Network Firewall is available in, visit the <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS Region Table</a>. For more information, please see the AWS Network Firewall <a contenteditable="false" href="https://aws.amazon.com/network-firewall/" style="cursor: pointer;">product page</a> and the service <a contenteditable="false" href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/" style="cursor: pointer;">documentation</a>.</p>

Read article →

Announcing agreement EventBridge notifications for AWS Marketplace

<p>AWS Marketplace now delivers purchase agreement events via Amazon EventBridge, transitioning from our Amazon Simple Notification Service (SNS) notifications for Software as a Service and Professional Services product types. This enhancement simplifies event-driven workflows for both sellers and buyers by enabling seamless integration of AWS Marketplace Agreements, reducing operational overhead, and improving event monitoring and automation.<br /> <br /> Marketplace sellers (Independent Software Vendors and Channel Partners) and buyers will receive notifications for all events in the lifecycle of their Marketplace Agreements, including when they are created, terminated, amended, replaced, renewed, cancelled, or expired. Additionally, ISVs receive license-specific events to manage customer entitlements. With EventBridge integration, you can route these events to various AWS services such as AWS Lambda, Amazon S3, Amazon CloudWatch, AWS Step Functions, and Amazon SNS, maintaining compatibility with existing SNS-based workflows while gaining advanced routing capabilities.<br /> <br /> EventBridge notifications are generally available and can be created in the AWS US East (N. Virginia) Region.<br /> <br /> To learn more about AWS Marketplace event notifications, see the <a href="https://docs.aws.amazon.com/marketplace/latest/userguide/saas-eventbridge-integration.html" target="_blank">AWS Marketplace documentation</a>. You can start using EventBridge notifications today by visiting the Amazon EventBridge console and enabling the 'aws.agreement-marketplace' event source.</p>
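A minimal sketch of subscribing to these events with boto3; the rule name and Lambda target ARN are placeholders, and the rule is created in US East (N. Virginia), where the events are delivered.

```python
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Match all AWS Marketplace agreement events.
events.put_rule(
    Name="marketplace-agreement-events",
    EventPattern=json.dumps({"source": ["aws.agreement-marketplace"]}),
    State="ENABLED",
)

# Route matched events to a handler (placeholder Lambda function ARN).
events.put_targets(
    Rule="marketplace-agreement-events",
    Targets=[{
        "Id": "agreement-handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:agreement-handler",
    }],
)
```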

Read article →

AWS Lambda announces Provisioned Mode for SQS event source mapping (ESM)

<p>AWS Lambda announces Provisioned Mode for event source mappings (ESMs) that subscribe to Amazon SQS, a feature that allows you to optimize the throughput of your SQS ESM by provisioning event polling resources that remain ready to handle sudden spikes in traffic. An SQS ESM configured with Provisioned Mode scales 3x faster (up to 1000 concurrent executions per minute) and supports 16x higher concurrency (up to 20,000 concurrent executions) than the default SQS ESM capability. This allows you to build highly responsive and scalable event-driven applications with stringent performance requirements.<br /> <br /> Customers use SQS as an event source for Lambda functions to build mission-critical applications using Lambda's fully-managed <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html" target="_blank">SQS ESM</a>, which automatically scales polling resources in response to events. However, for applications that need to handle unpredictable bursts of traffic, lack of control over the throughput of the ESM can lead to delays in event processing. Provisioned Mode for SQS ESM allows you to fine-tune the throughput of the ESM by provisioning a minimum and maximum number of polling resources, called event pollers, that are ready to handle sudden spikes in traffic. With this feature, you can process events with lower latency, handle sudden traffic spikes more effectively, and maintain precise control over your event processing resources.<br /> <br /> This feature is generally available in all <a href="https://aws.amazon.com/about-aws/whats-new/recent/feed/" target="_blank">AWS Commercial Regions</a>. You can activate Provisioned Mode for SQS ESM by configuring a minimum and maximum number of event pollers in the ESM API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, and AWS SAM. You pay for the usage of event pollers, billed in units called Event Poller Units (EPUs). To learn more, read the Lambda ESM <a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html#invocation-eventsourcemapping-provisioned-mode" target="_blank">documentation</a> and AWS Lambda <a href="https://aws.amazon.com/lambda/pricing/" target="_blank">pricing</a>.&nbsp;</p>
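A sketch of the configuration with boto3; the function name, queue ARN, and poller counts are placeholders, and the ProvisionedPollerConfig shape shown should be checked against the ESM API documentation.

```python
import boto3

lambda_client = boto3.client("lambda")

# SQS event source mapping with Provisioned Mode: pre-provision a floor of
# event pollers and cap the maximum to control scaling behavior and cost.
lambda_client.create_event_source_mapping(
    FunctionName="order-processor",  # placeholder function name
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-queue",  # placeholder
    BatchSize=10,
    ProvisionedPollerConfig={
        "MinimumPollers": 5,
        "MaximumPollers": 100,
    },
)
```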

Read article →

Amazon Kinesis Video Streams WebRTC Multi-Viewer

<p><a href="https://aws.amazon.com/kinesis/video-streams/">Amazon Kinesis Video Streams</a> now offers the ability to stream real-time audio and video to multiple concurrent viewers via WebRTC, while also recording video and audio from the session to the cloud for storage, playback, and analytical processing. With this update, developers can enable up to 3 concurrent viewers of real-time feeds from cameras or other video-producing devices without increasing compute or bandwidth utilization on the device. In addition, participants can engage in audio conversations with each other, enabling direct real-time communication between viewers during the session.<br /> <br /> Developers can now build real-time peer-to-peer streaming applications by installing the <a href="https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/tree/wrtc-stream-ingestion-support">Amazon Kinesis Video Streams with WebRTC SDK</a> across security cameras, IoT devices, PCs, and mobile devices. Using the APIs, developers can create applications that stream real-time media to multiple concurrent viewers. They can develop solutions for scenarios such as home security applications sharing camera feeds with family members, remote proctoring systems with multiple monitoring operators, or robot operation control centers with audit capabilities. Developers can implement both live and on-demand video playback through session recording, and build advanced applications utilizing computer vision and video analytics by integrating with Amazon Rekognition Video and Amazon SageMaker.<br /> <br /> Amazon Kinesis Video Streams WebRTC Multi-Viewer is available in all regions where Amazon Kinesis Video Streams is available, except the AWS GovCloud (US) Regions and the China (Beijing, operated by Sinnet) Region.<br /> <br /> To learn more, see our <a href="https://docs.aws.amazon.com/kinesisvideostreams-webrtc-dg/latest/devguide/webrtc-ingestion.html">Getting Started Guide</a>.</p>

Read article →

AWS Network Load Balancer now supports QUIC protocol in passthrough mode

<p>AWS Network Load Balancer (NLB) now supports the QUIC protocol in passthrough mode, enabling low-latency forwarding of QUIC traffic while preserving session stickiness through the QUIC Connection ID. This enhancement helps customers maintain consistent connections for mobile applications, even when client IP addresses change during network roaming.<br /> <br /> With QUIC support, customers can reduce application latency by up to 30% through fewer packet round trips and ensure seamless user experiences across varying network conditions. This is especially useful for mobile applications where users move between cellular towers or switch from WiFi to cellular networks without losing connection state. You can enable QUIC support on your existing or new Network Load Balancers through the AWS Management Console, CLI, or APIs. Once enabled, NLB forwards QUIC traffic to targets by using the QUIC Connection ID to maintain session stickiness even when a client roams.<br /> <br /> QUIC support is available at no additional charge in all AWS commercial and AWS GovCloud (US) regions. QUIC traffic is metered within existing UDP Load Balancer Capacity Unit (LCU) entitlements. <br /> <br /> To learn more, visit this <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-quic-protocol-support-for-network-load-balancer-accelerating-mobile-first-applications/">AWS blog</a> and the <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-listener.html#add-listener">NLB User Guide</a>.&nbsp;</p>

Read article →

Amazon Connect now provides metrics on completion of agent performance evaluations by managers

<p>Amazon Connect now provides metrics that measure completion of agent performance evaluations, improving manager productivity and evaluation consistency. Businesses can monitor if the required number of evaluations for their agents have been completed, ensuring compliance with internal policies (e.g., complete 5 evaluations per agent per month), regulatory requirements, and labor union agreements. Additionally, businesses can analyze evaluation scoring patterns across different managers, to identify opportunities to improve evaluation consistency and accuracy. These insights are available in real-time through analytics dashboards in the Connect UI, and APIs.<br /> <br /> This feature is available in all regions where <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region" target="_blank">Amazon Connect</a> is offered. To learn more, please visit our <a href="https://docs.aws.amazon.com/connect/latest/adminguide/agent-performance-evaluation-dashboard.html" target="_blank">documentation</a> and our <a href="https://aws.amazon.com/connect/contact-lens/" target="_blank">webpage</a>.&nbsp;</p>

Read article →

AWS CloudFormation Hooks adds granular invocation details for Hooks invocation summary

<p>Building on the Hooks Invocation Summary launched in September 2025, AWS CloudFormation Hooks now supports granular invocation details. Hook authors can supplement their Hook evaluation responses with detailed findings, finding severity, and remediation advice. The Hooks console now displays these details at the individual control level within each invocation, enabling developers to quickly identify and resolve specific Hook failures.<br /> <br /> Customers can easily drill down from the invocation summary to see exactly which controls passed, failed, or were skipped, along with specific remediation guidance for each failure. This granular visibility eliminates guesswork when debugging Hook failures, allowing teams to pinpoint the exact control that blocked a deployment and understand how to fix it. The detailed findings accelerate troubleshooting and streamline compliance reporting by providing actionable insights at the individual control level.<br /> <br /> The Hooks invocation summary page is available in all commercial and GovCloud (US) regions. To learn more, visit the <a href="https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/hooks-view-invocations.html" target="_blank">AWS CloudFormation Hooks View Invocations</a> documentation.</p>

Read article →

Amazon RDS for PostgreSQL supports minor versions 17.7, 16.11, 15.15, 14.20, and 13.23

<p><a href="https://aws.amazon.com/rds/postgresql/" target="_blank">Amazon Relational Database Service (RDS)</a> for PostgreSQL now supports the latest minor versions 17.7, 16.11, 15.15, 14.20, and 13.23. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community.<br /> <br /> This release includes the new pgcollection extension for RDS PostgreSQL versions 15.15 and above (16.11 and 17.7). This extension enhances database performance by providing an efficient way to store and manage key-value pairs within PostgreSQL functions. Collections maintain the order of entries and can store various types of PostgreSQL data, making them useful for applications that need fast, in-memory data processing.<br /> The release also includes updates to extensions, with pg_tle upgraded to version 1.5.2 and H3_PG upgraded to version 4.2.3.<br /> <br /> You can use automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also use <a href="https://aws.amazon.com/about-aws/whats-new/2024/11/rds-blue-green-deployments-upgrade-rds-postgresql/" target="_blank">Amazon RDS Blue/Green deployments</a> for RDS for PostgreSQL using physical replication for your minor version upgrades. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green deployments in the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-versions.html" target="_blank">Amazon RDS User Guide</a> .<br /> <br /> Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See <a href="https://aws.amazon.com/rds/postgresql/pricing/" target="_blank">Amazon RDS for PostgreSQL Pricing</a> for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.&nbsp;</p>

Read article →

Amazon U7i instances now available in Europe (Stockholm) Region

<p>Starting today, Amazon EC2 High Memory U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in the Europe (Stockholm) region. U7i-12tb instances are part of the 7th generation of Amazon EC2 instances and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-12tb instances offer 12TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-12tb instances offer 896 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/u7i/" style="cursor: pointer;">High Memory instances page</a>.</p>

Read article →

Amazon EC2 I8g instances now available in additional AWS regions

<p>AWS is announcing the general availability of Amazon EC2 Storage Optimized I8g instances in the Asia Pacific (Seoul) and South America (Sao Paulo) Regions. I8g instances offer the best performance in Amazon EC2 for storage-intensive workloads. I8g instances use the latest third generation AWS Nitro SSDs, local NVMe storage that deliver up to 65% better real-time storage performance per TB while offering up to 50% lower storage I/O latency and up to 60% lower storage I/O latency variability compared to I4g instances. These instances are built on the <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a>, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security for your workloads.<br /> <br /> Amazon EC2 I8g instances are designed for I/O intensive workloads that require rapid data access and real-time latency from storage. These instances excel at handling transactional, real-time, distributed databases, including MySQL, PostgreSQL, and HBase, and NoSQL solutions like Aerospike, MongoDB, ClickHouse, and Apache Druid. They're also optimized for real-time analytics platforms such as Apache Spark, data lakehouses, and AI LLM pre-processing for training. I8g instances are available in 10 different sizes, up to 48xlarge (including one metal size), with up to 1.5 TiB of memory and 45 TB of local instance storage. They deliver up to 100 Gbps of network bandwidth and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS).<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/ec2/instance-types/i8g/">Amazon EC2 I8g instances</a>. To begin your Graviton journey, visit the <a href="https://aws.amazon.com/ec2/graviton/level-up-with-graviton/">Level up your compute with AWS Graviton page</a>. To get started, see <a href="https://console.aws.amazon.com/">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (AWS CLI)</a>, and <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html">AWS SDKs</a>.</p>

Read article →

Amazon EC2 G6f instances are now available in additional regions

<p>Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G6f instances powered by NVIDIA L4 GPUs are now available in the Europe (Spain) and Asia Pacific (Seoul) Regions. G6f instances can be used for a wide range of graphics workloads. G6f instances offer GPU partitions as small as one-eighth of a GPU with 3 GB of GPU memory, giving customers the flexibility to right-size their instances and drive significant cost savings compared to EC2 G6 instances with a single GPU.<br /> <br /> Customers can use G6f instances to provision remote workstations for Media &amp; Entertainment, Computer-Aided Engineering, ML research, and spatial visualization. G6f instances are available in 5 instance sizes with half, quarter, and one-eighth of a GPU per instance size, paired with third generation AMD EPYC processors offering up to 12 GB of GPU memory and 16 vCPUs.<br /> <br /> Amazon EC2 G6f instances are available today in the AWS US East (N. Virginia and Ohio), US West (Oregon), Europe (Stockholm, Frankfurt, London and Spain), Asia Pacific (Mumbai, Tokyo, Seoul and Sydney), Canada (Central), and South America (Sao Paulo) regions. Customers can purchase G6f instances as On-Demand Instances, Spot Instances, or as a part of Savings Plans.<br /> <br /> To get started, visit the <a href="https://console.aws.amazon.com/">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/">AWS Command Line Interface</a> (CLI), and <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html">AWS SDKs</a>, and launch G6f instances with NVIDIA GRID driver 18.4 or later. To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/g6/">G6 instance page</a>.</p>

Read article →

AWS Transform automates Landing Zone Accelerator network configuration

<p>AWS Transform for VMware now allows customers to automatically generate network configurations that can be directly imported into the <a href="https://aws.amazon.com/solutions/implementations/landing-zone-accelerator-on-aws/">Landing Zone Accelerator on AWS solution (LZA)</a>. Building on AWS Transform's existing support for infrastructure-as-code generation in AWS CloudFormation, AWS CDK, and Terraform formats, this new capability enables automatic transformation of VMware network environments into LZA-compatible network configuration YAML files.<br /> The YAML files can be deployed through LZA's deployment pipeline, streamlining the process of setting up cloud infrastructure.<br /> <br /> AWS Transform for VMware is an agentic AI service that automates the discovery, planning, and migration of VMware workloads, accelerating infrastructure modernization with increased speed and confidence. Landing Zone Accelerator on AWS solution (LZA) automates the setup of a secure, multi-account AWS environment using AWS best practices. Migrating workloads to AWS traditionally requires you to manually recreate network configurations while maintaining operational and compliance consistency. The service now automates the generation of LZA network configurations, reducing manual effort and deployment time to better manage and govern your multi-account environment.<br /> <br /> The LZA configuration generation capability is available in all <a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-app-vmware-acct-connections.html#transform-app-vmware-target-acct">AWS Transform target Regions</a>.</p> <p>To learn more, visit the AWS Transform for VMware <a href="https://aws.amazon.com/transform/vmware/" target="_blank">product page</a>, read the <a href="https://docs.aws.amazon.com/transform/latest/userguide/transform-app-vmware.html" target="_blank">user guide</a>, or get started in the <a href="https://console.aws.amazon.com/transform/home" target="_blank">AWS Transform web experience</a>.</p>

Read article →

Amazon EventBridge now supports targeting SQS fair queues

<p>Amazon EventBridge now supports Amazon SQS fair queues as targets, enabling you to build more responsive event-driven applications. You can now leverage SQS's improved message distribution across consumer groups and mitigate the noisy neighbor impact in multi-tenant messaging systems. This enhancement allows EventBridge to send events directly to SQS fair queues. With fair queues, multiple consumers can process messages from the same tenant at the same time, while keeping message processing times consistent across all tenants.<br /> <br /> The Amazon EventBridge event bus is a serverless event broker that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. SQS fair queues automatically distribute messages fairly across consumer groups, preventing any single group from monopolizing queue resources. When combined with EventBridge's event routing capabilities, this creates powerful patterns for building scalable, multi-tenant applications where different teams or services need equitable access to event streams.<br /> <br /> To route events to an SQS fair queue, you can select the fair queue as a target when creating or updating EventBridge rules through the AWS Management Console, AWS CLI, or AWS SDKs. Be sure to include a MessageGroupId parameter, which can be specified with either a static value or a JSON path expression.<br /> <br /> Support for fair queue and FIFO targets is available in all AWS commercial and AWS GovCloud (US) Regions. For more information about EventBridge target support, see our <a contenteditable="false" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-targets.html" style="cursor: pointer;">documentation</a>. For more information about SQS fair queues, see the SQS <a contenteditable="false" href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-fair-queues.html" style="cursor: pointer;">documentation</a>.&nbsp;</p>
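A minimal boto3 sketch of routing events to a fair queue; the event source, queue ARN, and the JSON path used for the message group are placeholders.

```python
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="tenant-events",
    EventPattern=json.dumps({"source": ["my.app"]}),  # placeholder event source
    State="ENABLED",
)

# MessageGroupId can be a static value or a JSON path into the event,
# e.g. a tenant identifier, so SQS can balance delivery across tenants.
events.put_targets(
    Rule="tenant-events",
    Targets=[{
        "Id": "fair-queue-target",
        "Arn": "arn:aws:sqs:us-east-1:123456789012:tenant-fair-queue",  # placeholder
        "SqsParameters": {"MessageGroupId": "$.detail.tenantId"},
    }],
)
```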

Read article →

AWS IoT Core adds location resolution capabilities for Amazon Sidewalk enabled devices

<p><a href="https://docs.aws.amazon.com/iot/latest/developerguide/device-location.html" target="_blank">AWS IoT Core Device Location</a> announces location resolution capabilities for Internet of Things (IoT) devices connected to <a href="https://docs.sidewalk.amazon/" target="_blank">Amazon Sidewalk</a> network, enabling developers to build asset tracking and geo-fencing applications more efficiently by eliminating the need for GPS hardware in low-power devices. Amazon Sidewalk provides a secure community network through Amazon Sidewalk Gateways (compatible Amazon Echo and Ring devices) to deliver cloud connectivity for IoT devices. <a href="https://aws.amazon.com/iot-core/sidewalk/" target="_blank">AWS IoT Core for Amazon Sidewalk</a> facilitates connectivity and message transmission between Amazon Sidewalk-connected IoT devices and AWS cloud services. The integration of Amazon Sidewalk with AWS IoT Core, enables you to easily provision, onboard, and monitor your Amazon Sidewalk devices in the AWS cloud.<br /> <br /> With the new enhancement, you can now use AWS IoT Core’s Device Location feature to resolve the approximate location of your Amazon Sidewalk enabled devices, using input payloads like WiFi access point, Global Navigation Satellite System data, or Bluetooth Low Energy data. AWS IoT Core Device Location uses these inputs to resolve the geo-coordinate data, and delivers the geo-coordinate data to your desired AWS IoT rules or MQTT topics for integration with backend applications. To get started, install Sidewalk SDK v1.19 (or a later version) in your Sidewalk-enabled devices, provision the devices in AWS IoT Core for Amazon Sidewalk, and enable location during the provisioning.<br /> <br /> This new feature is available in AWS US-East (N. Virginia) Region of AWS cloud where AWS IoT Core for Amazon Sidewalk is available. Please note that Amazon Sidewalk network is available only in the United States of America. For more information, refer <a href="https://docs.aws.amazon.com/iot-wireless/latest/developerguide/sidewalk-getting-started.html#sidewalk-gs-workflow" target="_blank">AWS developer guide</a>, <a href="https://docs.sidewalk.amazon/assets/pdf/Amazon_Sidewalk_Location_Library_Developer_Guide-1.0-rev-A.pdf" target="_blank">Amazon Sidewalk developer guide</a>, and <a href="https://coverage.sidewalk.amazon/" target="_blank">Amazon Sidewalk network coverage</a>.</p>

Read article →

Amazon EC2 I8g instances now available in additional AWS regions

<p>AWS is announcing the general availability of Amazon EC2 Storage Optimized I8g instances in the Europe (Stockholm) and Asia Pacific (Osaka) Regions. I8g instances offer the best compute performance in Amazon EC2 for storage-intensive workloads. I8g instances use the latest third generation AWS Nitro SSDs, local NVMe storage that deliver up to 65% better real-time storage performance per TB while offering up to 50% lower storage I/O latency and up to 60% lower storage I/O latency variability compared to I4g instances. These instances are built on the <a href="https://aws.amazon.com/ec2/nitro/" target="_blank">AWS Nitro System</a>, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security for your workloads.<br /> <br /> Amazon EC2 I8g instances are designed for I/O intensive workloads that require rapid data access and real-time latency from storage. These instances excel at handling transactional, real-time, distributed databases, including MySQL, PostgreSQL, and HBase, and NoSQL solutions like Aerospike, MongoDB, ClickHouse, and Apache Druid. They're also optimized for real-time analytics platforms such as Apache Spark, data lakehouses, and AI LLM pre-processing for training. I8g instances are available in 10 different sizes, up to 48xlarge (including one metal size), with up to 1.5 TiB of memory and 45 TB of local instance storage. They deliver up to 100 Gbps of network bandwidth and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS).<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/ec2/instance-types/i8g/" target="_blank">Amazon EC2 I8g instances</a>. To begin your Graviton journey, visit the <a href="https://aws.amazon.com/ec2/graviton/level-up-with-graviton/" target="_blank">Level up your compute with AWS Graviton page</a>. To get started, see <a href="https://console.aws.amazon.com/" target="_blank">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/" target="_blank">AWS Command Line Interface (AWS CLI)</a>, and <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html" target="_blank">AWS SDKs</a>.</p>

Read article →

Service Connect cross-account support available in AWS GovCloud (US) Regions

<p><a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect.html">Amazon ECS Service Connect</a> now supports seamless communication between services residing in different AWS accounts through integration with <a contenteditable="false" href="https://aws.amazon.com/ram/" style="cursor: pointer;"><u>AWS Resource Access Manager</u></a> (AWS RAM). This enhancement simplifies resource sharing, reduces duplication, and promotes consistent service-to-service communication across environments for organizations with multi-account architectures.<br /> <br /> Amazon ECS Service Connect leverages AWS Cloud Map namespaces for storing information about ECS services and tasks. To enable seamless cross-account communication between Amazon ECS Service Connect services, you can now share the underlying AWS Cloud Map namespaces using AWS RAM with individual AWS accounts, specific Organizational Units (OUs), or your entire AWS Organization. To get started, create a resource share in AWS RAM, add the namespaces you want to share, and specify the principals (accounts, OUs, or the organization) that should have access. This enables platform engineers to use the same namespace to register Amazon ECS Service Connect services residing in multiple AWS accounts, simplifying service discovery and connectivity. Application developers can then build services that rely on a consistent, shared registry without worrying about availability or synchronization across accounts. Cross-account connectivity support improves operational efficiency and makes it easier to scale Amazon ECS workloads as your organization grows by reducing duplication and streamlining access to common services.<br /> <br /> This feature is available with both Fargate and EC2 launch modes in AWS GovCloud (US-West) and AWS GovCloud (US-East) regions via the AWS Management Console, API, SDK, CLI, and CloudFormation. To learn more, please refer to the Amazon ECS Service Connect <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect.html" style="cursor: pointer;"><u>documentation</u></a>.</p>

Read article →

Amazon EC2 I7i instances now available in additional AWS regions

<p>Amazon Web Services (AWS) announces the availability of high-performance Storage Optimized Amazon EC2 I7i instances in the AWS Europe (Ireland), Asia Pacific (Seoul), and Asia Pacific (Hong Kong) Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances.<br /> <br /> I7i instances are ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size datasets (multi-TBs). I7i instances support the torn write prevention feature with up to 16 KB block sizes, enabling customers to eliminate database performance bottlenecks.<br /> <br /> I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth.</p> <p><br /> To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/i7i/" target="_blank">I7i instances page</a>.</p>

Read article →

AWS Health enhances Amazon EventBridge to give more flexibility and higher resilience

<p>Customers using Amazon EventBridge can now set up rules for AWS Health events with multi-region redundancy, or choose a simplified path by creating a single rule to capture all Health events. With this enhancement, Health sends all events simultaneously to US West (Oregon) as well as the individual region of impact. For more information, customers can go to <a href="https://docs.aws.amazon.com/health/latest/ug/choosing-a-region.html" target="_blank">Creating EventBridge rules for AWS Region coverage</a>.<br /> <br /> Sending Health events to two regions gives customers an option to increase the resilience of their integration by creating a backup rule. US West (Oregon) is the backup for all regions in the commercial partition, while US East (N. Virginia) is the backup for US West (Oregon). This change also enables a simplified integration path, where customers can now set up a single rule in US West (Oregon) to capture all Health events from across the commercial partition, as opposed to needing to configure rules in individual regions. Customers now have greater flexibility in their integration approach for receiving Health events.<br /> <br /> This update is available in all AWS regions. In China, all Health events get delivered simultaneously to both China (Beijing) and China (Ningxia). In AWS GovCloud (US), all Health events get delivered to AWS GovCloud (US-West) and AWS GovCloud (US-East).</p>
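For example, a single rule in US West (Oregon) that captures Health events from across the commercial partition could look like this boto3 sketch (the SNS target ARN is a placeholder).

```python
import json
import boto3

# US West (Oregon) receives a copy of every Health event in the commercial partition.
events = boto3.client("events", region_name="us-west-2")

events.put_rule(
    Name="all-health-events",
    EventPattern=json.dumps({"source": ["aws.health"]}),
    State="ENABLED",
)
events.put_targets(
    Rule="all-health-events",
    Targets=[{
        "Id": "health-notifications",
        "Arn": "arn:aws:sns:us-west-2:123456789012:health-notifications",  # placeholder
    }],
)
```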

Read article →

Amazon EC2 F2 instances are now generally available in four additional AWS regions

<p>Starting today, the FPGA-powered Amazon EC2 F2 instances are now available in the Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), and Canada (Central) regions. F2 instances are the second generation of FPGA-powered instances and are the first to feature an FPGA with 16 GB of high bandwidth memory (HBM). Compared to F1 instances, the F2 instances have up to 3x vCPUs (192 vCPUs), 2x system memory (2 TB), 2x SSD space (7.6 TiB), and 4x networking bandwidth (100 Gbps). Amazon EC2 F2 instances are ideal for FPGA-accelerated solutions in genomics, multimedia processing, big data, network acceleration, and more.</p> <p>With these additional regions, F2 instances are now available in eight regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Seoul). These instances can be purchased as On-Demand Instances or with Savings Plans. To learn more, visit the <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/f2/" style="cursor: pointer;">Amazon EC2 F2 Instances page</a> and the <a contenteditable="false" href="https://github.com/aws/aws-fpga" style="cursor: pointer;">F2 FPGA development kit GitHub page</a>.</p>

Read article →

New AWS CUR 2.0 features: EC2 ODCR and Capacity Blocks for ML monitoring

<p>AWS announces the addition of new columns and greater granularity in CUR 2.0 that provide customers with better visibility into the cost and usage of their capacity reservations, such as EC2 On-Demand Capacity Reservations (ODCRs) and EC2 Capacity Blocks for ML. This enables customers to easily calculate the utilization and coverage of their capacity reservations, identify unused capacity reservations for cost optimization, and attribute the cost of capacity reservations to the resource owners.<br /> <br /> With this new feature, customers can easily calculate which portion of EC2 instance cost and usage is covered by which capacity reservation, down to hourly resource-level granularity. Customers can also easily calculate the coverage and utilization of each capacity reservation as CUR 2.0 labels capacity reservation-related line items as Reserved, Used, or Unused.<br /> <br /> This feature is available in all commercial AWS Regions, except the AWS GovCloud (US) Regions and the China Regions.<br /> <br /> To learn more about this feature, see <a contenteditable="false" href="https://docs.aws.amazon.com/cur/latest/userguide/what-is-data-exports.html" style="cursor: pointer;">AWS Data Exports</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/account-billing/" style="cursor: pointer;">AWS Billing and Cost Management</a> in the <i>AWS Cost Management User Guide</i>.</p>

Read article →

Amazon S3 Tables now support Amazon CloudWatch metrics

<p>Amazon CloudWatch metrics are now available for S3 Tables, helping you monitor table storage, requests, and maintenance operations. You can use CloudWatch metrics to track performance, detect anomalies, and monitor the operational health of applications that use S3 Tables.<br /> <br /> CloudWatch metrics for S3 Tables provide three types of metrics. Storage metrics track daily storage usage and count of objects. Table maintenance metrics track daily bytes and objects processed by compaction operations. Request metrics monitor table operations, data transfer volumes, error rates, and latency measurements at minute-level granularity. These metrics are available through the CloudWatch console, AWS CLI, and CloudWatch API at the table bucket, namespace, and individual table level.<br /> <br /> CloudWatch metrics for S3 Tables are now available in all <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-regions-quotas.html#s3-tables-regions" target="_blank">AWS Regions where S3 Tables are available</a>. To learn more, visit the <a href="https://aws.amazon.com/s3/features/tables/" target="_blank">S3 Tables product page</a> and <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-cloudwatch-metrics.html" target="_blank">documentation</a>.&nbsp;</p>

Read article →

Amazon Connect Cases adds conditional field visibility and dependent options

<p>Amazon Connect Cases now supports conditional field visibility and dependent field options, so you can simplify case layouts and ensure agents capture the right information faster. For example, you can show a Return Reason field only when the case involves a return, and limit Issue Type choices to hardware-related options when Issue Category is set to Hardware.<br /> <br /> Amazon Connect Cases is available in the following AWS regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases <a href="https://aws.amazon.com/connect/cases/">webpage</a> and <a href="https://docs.aws.amazon.com/connect/latest/adminguide/case-field-conditions.html">documentation</a>.</p>

Read article →

AWS Site-to-Site VPN announces 5 Gbps bandwidth tunnels

<p>AWS Site-to-Site VPN now supports VPN connections with up to 5 Gbps bandwidth per tunnel, a 4x improvement over the existing limit of 1.25 Gbps. This increased bandwidth benefits customers who require high-capacity connections for bandwidth-intensive hybrid applications, big data migrations, and disaster recovery architectures while maintaining traffic encryption between AWS and their remote sites. Customers can also use 5 Gbps VPN connections as a backup or overlay for their high-capacity AWS Direct Connect connections.<br /> <br /> AWS Site-to-Site VPN is a fully managed service that allows you to create a secure connection between your data center or branch office and your AWS resources using IP Security (IPSec) tunnels. Until now, Site-to-Site VPN supported a maximum of 1.25 Gbps of bandwidth per tunnel, and customers had to rely on ECMP (equal-cost multi-path) routing to logically bond multiple tunnels to achieve higher bandwidth. With this launch, customers can now configure their tunnel bandwidth to 5 Gbps, reducing the need to deploy complex protocols such as ECMP while ensuring consistent bandwidth performance.<br /> <br /> This capability is available in all AWS commercial Regions and AWS GovCloud (US) Regions where AWS Site-to-Site VPN is available, except Asia Pacific (Melbourne), Israel (Tel Aviv), Europe (Zurich), Canada West (Calgary), and Middle East (UAE) Regions. To learn more and get started, visit the AWS Site-to-Site VPN <a href="https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html#large-bandwidth-tunnels" target="_blank">documentation</a>.</p>

Read article →

AWS Fault Injection Service (FIS) launches new test scenarios for partial failures

<p>AWS Fault Injection Service (FIS) now offers two new scenarios that help you proactively test how your applications handle partial disruptions within and across Availability Zones (AZs). These disruptions, often called gray failures, are more common than complete outages and can be particularly challenging to detect and mitigate.<br /> <br /> The FIS scenario library provides AWS-created, pre-defined experiment templates that minimize the heavy lifting of designing tests. The new scenarios expand the testing capabilities for partial disruptions. "AZ: Application Slowdown" lets you test for increased latency and degraded performance for resources, dependencies, and connections within a single AZ. This helps validate observability setups, tune alarm thresholds, and practice critical operational decisions like AZ evacuation. The scenario works with both single and multi-AZ applications. "Cross-AZ: Traffic Slowdown" enables testing of how multi-AZ applications handle traffic disruptions between AZs.<br /> <br /> With both scenarios, you can target specific portions of your application traffic for more realistic testing of partial disruptions. These scenarios are particularly valuable for testing application sensitivity to these more subtle disruptions that often manifest as traffic and application slowdowns. For instance, you can test how your application responds to degraded network paths causing packet loss for some traffic flows, or misconfigured connection pools that slow down specific requests.<br /> <br /> To get started, access these new scenarios through the FIS scenario library in the AWS Management Console. These new scenarios are available in all AWS Regions where AWS FIS is available, including AWS GovCloud (US) Regions. To learn more, visit the FIS scenario library <a href="https://docs.aws.amazon.com/fis/latest/userguide/scenario-library.html" target="_blank">user guide</a>. For pricing information, visit the FIS <a href="https://aws.amazon.com/fis/pricing/" target="_blank">pricing</a> page.</p>

Read article →

Amazon ElastiCache supports M7g and R7g Graviton3-based nodes in AWS GovCloud (US) Regions

<p>Amazon ElastiCache now supports Graviton3-based M7g and R7g node families in the AWS GovCloud (US) Regions (US-East, US-West). ElastiCache Graviton3 nodes deliver improved price-performance compared to Graviton2. As an example, when running ElastiCache for Redis OSS on an R7g.4xlarge node, you can achieve up to 28% increased throughput (read and write operations per second) and up to 21% improved P99 latency, compared to running on R6g.4xlarge. In addition, these nodes deliver up to 25% higher networking bandwidth.<br /> <br /> For complete information on pricing and regional availability, please refer to the <a href="https://aws.amazon.com/elasticache/pricing/" target="_blank">Amazon ElastiCache pricing page</a>. To get started, create a new cluster or upgrade to Graviton3 using the <a href="https://console.aws.amazon.com/elasticache/" target="_blank">AWS Management Console</a>. For more information on supported node types, please refer to the <a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/CacheNodes.SupportedTypes.html" target="_blank">documentation</a>.&nbsp;</p>
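As a sketch of the upgrade path, an existing Redis OSS replication group can be moved to Graviton3 nodes with the standard modify_replication_group API; the identifiers below are placeholders, and you should confirm r7g availability in your target GovCloud Region first.

```python
# Sketch: upgrade an existing Redis OSS replication group to Graviton3 R7g nodes.
# modify_replication_group is a standard ElastiCache API; IDs are placeholders.
import boto3

elasticache = boto3.client("elasticache", region_name="us-gov-west-1")

response = elasticache.modify_replication_group(
    ReplicationGroupId="my-redis-cluster",   # existing replication group
    CacheNodeType="cache.r7g.4xlarge",       # upgrade from e.g. cache.r6g.4xlarge
    ApplyImmediately=True,                   # or False to wait for the maintenance window
)
print(response["ReplicationGroup"]["Status"])
```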

Read article →

Application Load Balancer now supports client credentials flow with JWT verification

<p>Amazon Web Services (AWS) announces JWT Verification for Application Load Balancer (ALB), enabling secure machine-to-machine (M2M) and service-to-service (S2S) communications. This feature allows ALB to verify JSON Web Tokens (JWTs) included in request headers, validating token signatures, expiration times, and claims without requiring modifications to application code.<br /> <br /> By offloading OAuth 2.0 token validation to ALB, customers can significantly reduce architectural complexity and streamline their security implementation. This capability is particularly valuable for microservices architectures, API security, and enterprise service integration scenarios where secure service-to-service communication is critical. The feature supports tokens issued through various OAuth 2.0 flows, including Client Credentials Flow, enabling centralized token validation with minimal operational overhead.<br /> <br /> The JWT Verification feature is now available in all AWS Regions where Application Load Balancer is supported.</p> <p>To learn more, visit the&nbsp;<a contenteditable="false" href="https://aws.amazon.com/about-aws/whats-new/recent/feed/listener-verify-jwt.html" style="cursor: pointer;">ALB&nbsp;Documentation</a>.</p>

Read article →

Amazon Managed Service for Prometheus collector integrates with Amazon Managed Streaming for Apache Kafka

<p>Amazon Managed Service for Prometheus collector, a fully managed, agentless collector for Prometheus metrics, now enables you to discover and collect Prometheus metrics from your Amazon Managed Streaming for Apache Kafka cluster while ensuring high availability and scalability.</p> <p>Until now, customers seeking to&nbsp;benefit from <a contenteditable="false" href="https://docs.aws.amazon.com/msk/latest/developerguide/open-monitoring.html" style="cursor: pointer;">open monitoring</a> in an Amazon Managed Streaming for Apache Kafka cluster had to set up dedicated infrastructure and deploy, right-size, and scale agents to discover and scrape the Prometheus metrics in the cluster. With this launch, you can configure an Amazon Managed Service for Prometheus collector to scrape metrics from the JMX exporter and the Node exporter, covering host-level, JVM-level, and broker-related metrics to implement use cases such as message queue health and partition balancing.</p> <p>Amazon Managed Service for Prometheus collector is available in all commercial AWS Regions where Amazon Managed Service for Prometheus is available. To learn more about Amazon Managed Service for Prometheus collector, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/prometheus/latest/userguide/prom-msk-integration.html" style="cursor: pointer;">user guide</a> or <a contenteditable="false" href="https://aws.amazon.com/prometheus" style="cursor: pointer;">product page</a>.</p>

Read article →

Amazon DCV now supports Amazon EC2 Mac instances

<p>AWS announces <a href="https://aws.amazon.com/hpc/dcv/" target="_blank">Amazon DCV</a> support for <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html" target="_blank">Amazon EC2 Mac instances</a> powered by Apple silicon, bringing high-performance remote desktop capabilities to macOS workloads in the cloud. You can now access your EC2 Mac instances with the same security and performance that Amazon DCV provides across other platforms. This integration is specifically designed for EC2 Mac instances running on Apple silicon processors.<br /> <br /> With Amazon DCV, you can connect to your EC2 Mac instances from Windows, Linux, macOS, or web clients with support for 4K resolution, multiple monitors, and smooth 60 FPS performance. The support includes essential productivity features like time zone redirection and audio output, making remote Mac development seamless. Amazon DCV's proven security architecture and optimized streaming protocols ensure your macOS applications run efficiently while maintaining data protection standards.<br /> <br /> Amazon DCV support for EC2 Mac instances is available in all AWS Regions where <a href="https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-instance-regions.html" target="_blank">EC2 Mac instances are offered</a>.<br /> To get started, see the <a href="https://docs.aws.amazon.com/dcv/latest/adminguide/what-is-dcv.html" target="_blank">Amazon DCV documentation</a> for installing and configuring DCV server on EC2 Mac instances.</p>

Read article →

Spaces now available in AWS Builder Center

<p>AWS Builder Center now offers Spaces, a community collaboration tool that enables builders to create and join groups around specific AWS topics, use cases, and interests. With Spaces, you can connect with peers, share knowledge, and collaborate with other builders to build applications and discuss solutions to common AWS challenges.</p> <p>Spaces provides three distinct space types to match different community needs - Public, Private and Invite-Only spaces. Public spaces allow any signed-in builder to join instantly and view all content. Private spaces require builders to request membership and receive approval from space admins or owners. Invite-only spaces remain hidden from discovery and are accessible only through direct invitation.</p> <p>Within any space, you can create posts with text and images, engage through comments and reactions, and search for relevant discussions. All spaces benefit from robust content moderation and multi-language support across 16 languages. Space owners and admins can manage membership through invites and approval workflows and self-moderate content published by other users to maintain focused discussions.</p> <p>Spaces helps you find answers faster, share best practices, and build meaningful connections within the AWS community.<br /> To get started with Spaces, visit <a href="https://builder.aws.com/spaces?trk=b598e140-8954-4520-ae18-f97b74eb4f3a&amp;sc_channel=el">AWS Builder Center</a>.</p>

Read article →

Announcing communication preferences for Security Incident Response

<p><a href="https://aws.amazon.com/security-incident-response/" target="_blank">AWS Security Incident Response</a> now provides customizable communication preferences so you can focus on the updates that matter most to your role.<br /> <br /> You can choose from various notification types including case changes, membership updates, and organizational announcements. This granular control reduces the previous one-size-fits-all approach where every team member received every update regardless of relevance. You can easily adjust these settings as your role evolves, with smart defaults that work effectively out of the box.<br /> <br /> This feature is available to all Security Incident Response customers at no additional cost.<br /> <br /> To configure your communication preferences, visit the Security Incident Response <a href="https://us-east-1.console.aws.amazon.com/security-ir" target="_blank">console</a> and select any team member to customize their notification settings.</p>

Read article →

Amazon CloudWatch Logs now supports Network Load Balancer access logs

<p>Amazon CloudWatch Logs now supports Network Load Balancer (NLB) access logs as vended logs, improving observability and simplifying debugging for network traffic patterns. You can now analyze NLB access logs directly in CloudWatch to gain insights into client connections, traffic distribution, and connection status, helping you identify and troubleshoot network issues faster.<br /> <br /> With this CloudWatch Logs integration, you can track detailed access patterns using CloudWatch Logs Insights queries, create metric filters for monitoring, and review traffic patterns in real time using Live Tail. NLB access logs can be configured through the integrations tab of your network load balancer in AWS Management Console, AWS CLI, or SDKs. You can also configure delivery of NLB access logs to Amazon Data Firehose or Amazon S3 with support for Apache Parquet format.<br /> <br /> NLB access logs delivery to CloudWatch is available in all AWS Commercial and GovCloud regions where Network Load Balancer and CloudWatch are available. NLB access logs are charged as vended logs when delivered to CloudWatch Logs and Data Firehose, while delivery to Amazon S3 is free (Parquet conversion is charged at $0.035/GB - N. Virginia).&nbsp;</p> <p>To learn more about configuring NLB access logs in CloudWatch Logs, please&nbsp;<a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-vended-logs-permissions:~:text=Network%20Load%20Balancer%20access%20logs" target="_blank">visit our documentation</a>. For pricing information, see&nbsp;<a href="https://aws.amazon.com/cloudwatch/pricing/" target="_blank">CloudWatch pricing page.</a></p>
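Beyond the console integrations tab, vended log delivery can also be wired up programmatically. A minimal sketch using the CloudWatch Logs vended-log delivery APIs (put_delivery_source, put_delivery_destination, create_delivery, all real operations); the ARNs are placeholders and the logType string for NLB access logs is an assumption to confirm against the vended logs documentation:

```python
# Sketch: deliver NLB access logs to a CloudWatch Logs log group via vended log delivery.
# The three APIs below are real CloudWatch Logs operations; the logType value for NLB
# access logs and the ARNs are assumptions/placeholders.
import boto3

logs = boto3.client("logs", region_name="us-east-1")

source = logs.put_delivery_source(
    name="nlb-access-logs-source",
    resourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/abc123",
    logType="ACCESS_LOGS",  # hypothetical log type string for NLB access logs
)

destination = logs.put_delivery_destination(
    name="nlb-access-logs-destination",
    deliveryDestinationConfiguration={
        "destinationResourceArn": "arn:aws:logs:us-east-1:111122223333:log-group:/nlb/access-logs"
    },
)

delivery = logs.create_delivery(
    deliverySourceName=source["deliverySource"]["name"],
    deliveryDestinationArn=destination["deliveryDestination"]["arn"],
)
print("Delivery ID:", delivery["delivery"]["id"])
```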

Read article →

Amazon EC2 C8gd, M8gd, and R8gd instances are now available in additional AWS Regions

<p>Amazon Elastic Compute Cloud (Amazon EC2) C8gd instances are now available in the Europe (London) and Canada (Central) AWS Regions. Additionally, M8gd instances are available in the South America (Sao Paulo) Region and R8gd instances are available in the Europe (London) Region. These instances feature up to 11.4 TB of local NVMe-based SSD block-level storage and are powered by AWS Graviton4 processors, delivering up to 30% better performance than Graviton3-based instances. They have up to 40% higher performance for I/O intensive database workloads and up to 20% faster query results for I/O intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low-latency local storage.<br /> <br /> Each instance is available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using EC2 instance bandwidth weighting configuration, providing greater flexibility with the allocation of bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes.<br /> <br /> To learn more, see <a href="https://aws.amazon.com/ec2/instance-types/c8g/" target="_blank">Amazon C8gd instances</a>, <a href="https://aws.amazon.com/ec2/instance-types/m8g/" target="_blank">M8gd instances</a>, and <a href="https://aws.amazon.com/ec2/instance-types/r8g/" target="_blank">R8gd instances</a>. To explore how to migrate your workloads to Graviton-based instances, see <a href="https://aws.amazon.com/ec2/graviton/fast-start/" target="_blank">AWS Graviton Fast Start program</a> and <a href="https://github.com/aws/porting-advisor-for-graviton" target="_blank">Porting Advisor for Graviton</a>. To get started, see the <a href="https://console.aws.amazon.com/" target="_blank">AWS Management Console</a>.</p>

Read article →

Amazon EC2 C6id and R6id instances are now available in additional regions

<p>Amazon EC2 C6id instances are now available in the Europe (Milan) Region and R6id instances are now available in the Africa (Cape Town) Region. These instances are powered by 3rd generation Intel Xeon Scalable Ice Lake processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage. C6id and R6id are built on the <a href="https://aws.amazon.com/ec2/nitro/" target="_blank">AWS Nitro System</a>, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage to scale performance of applications such as video encoding, image manipulation, other forms of media processing, data logging, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics.<br /> <br /> Customers can purchase the new instances via Savings Plans, Reserved Instances, On-Demand Instances, and Spot Instances. To get started, use the <a href="https://aws.amazon.com/cli/" target="_blank">AWS Command Line Interface (CLI)</a> or <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html" target="_blank">AWS SDKs</a>. To learn more, visit our product pages for <a href="https://aws.amazon.com/ec2/instance-types/c6i/" target="_blank">C6id</a> and <a href="https://aws.amazon.com/ec2/instance-types/r6i/" target="_blank">R6id</a>.</p>

Read article →

AWS Parallel Computing Service (PCS) now supports Slurm CLI Filter plugins

<p>AWS Parallel Computing Service (PCS) now supports Slurm CLI Filter plugins, enabling you to extend and modify how Slurm schedules and processes your high performance computing (HPC) workloads without modifying Slurm directly.<br /> <br /> Using CLI Filter plugins, you can now define custom policies for job submission to your clusters. For example, you can define policies that verify certain flags or fields of jobs when users submit them, automatically reject jobs submitted without specific attributes, or even modify job parameters.<br /> <br /> PCS is a managed service that makes it easier for you to run and scale your high performance computing (HPC) workloads and build scientific and engineering models on AWS using Slurm. You can use PCS to build complete environments that integrate compute, storage, networking, and visualization. PCS simplifies cluster operations with managed updates and built-in observability features, helping to remove the burden of maintenance. You can work in a familiar environment, focusing on your research and innovation instead of worrying about infrastructure.<br /> <br /> This feature is now available in all AWS Regions where PCS is available. To learn more about using Slurm CLI Filter plugins with PCS, see the <a href="https://docs.aws.amazon.com/pcs/latest/userguide/what-is-service.html" target="_blank">PCS User Guide</a>.</p>

Read article →

Amazon S3 now supports IPv6 for gateway and interface VPC endpoints

<p>Amazon S3 now supports Internet Protocol version 6 (IPv6) addresses for AWS PrivateLink gateway and interface Virtual Private Cloud (VPC) endpoints.&nbsp;</p> <p>The continued growth of the internet is exhausting available Internet Protocol version 4 (IPv4) addresses. IPv6 increases the number of available addresses by several orders of magnitude, and customers no longer need to manage overlapping address spaces in their VPCs. To get started with IPv6 connectivity on a new or existing S3 gateway or interface endpoint, configure IP address type for the endpoint to IPv6 or Dualstack. When enabled, Amazon S3 automatically updates the routing tables with IPv6 addresses for gateway endpoints and sets up an <a contenteditable="false" href="https://aws.amazon.com/blogs/aws/new-elastic-network-interfaces-in-the-virtual-private-cloud/" style="cursor: pointer;">Elastic network interface</a> (ENI) with IPv6 addresses for interface endpoints.<br /> <br /> IPv6 support for VPC endpoints for Amazon S3 is now available in all AWS Commercial Regions and the AWS GovCloud (US) Regions, at no additional cost. You can set up IPv6 for new and existing VPC endpoints using the AWS Management Console, AWS CLI, AWS SDK, or AWS CloudFormation. To learn more, please refer to the service <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html" style="cursor: pointer;">documentation.</a></p>
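For illustration, creating a dual-stack S3 interface endpoint is a single call with the standard EC2 create_vpc_endpoint API; the VPC, subnet, and Region values below are placeholders.

```python
# Minimal sketch: create a dual-stack interface endpoint for Amazon S3.
# create_vpc_endpoint, IpAddressType, and DnsOptions are standard EC2 API parameters.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],
    IpAddressType="dualstack",                    # or "ipv6" for IPv6-only subnets
    DnsOptions={"DnsRecordIpType": "dualstack"},
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```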

Read article →

Mountpoint for Amazon S3 is now included in Amazon Linux 2023

<p>Mountpoint for Amazon S3 is now available in Amazon Linux 2023 (AL2023), simplifying how you get started and manage updates. Previously, you had to download the Mountpoint package from GitHub, install dependencies, and manually manage updates. Now, when using AL2023, you can install or update to the latest release of Mountpoint with a single command, and mount an Amazon S3 bucket.<br /> <br /> Mountpoint for Amazon S3 is an open source project backed by AWS support, giving AWS Business and Enterprise Support customers 24/7 access to AWS cloud support engineers. To learn more and get started, visit <a href="https://github.com/awslabs/mountpoint-s3" target="_blank">GitHub</a>, the <a href="https://aws.amazon.com/s3/features/mountpoint/" target="_blank">Mountpoint overview page</a>, the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint-installation.html" target="_blank">installation guide</a> and <a href="https://aws.amazon.com/linux/amazon-linux-2023/" target="_blank">AL2023 overview page</a>.</p>

Read article →

Amazon Keyspaces (for Apache Cassandra) now supports Logged Batches

<p><a contenteditable="false" href="https://aws.amazon.com/keyspaces/" style="cursor: pointer;">Amazon Keyspaces (for Apache Cassandra)</a> now supports Logged Batches, enabling you to perform multiple write operations as a single atomic transaction. With Logged Batches, you can ensure that either all operations (INSERT, UPDATE, DELETE) within a batch succeed or none of them do, maintaining data consistency across multiple rows and tables within a keyspace. This capability is particularly valuable for applications that require strong data consistency, such as financial systems, inventory management, and user profile updates that span multiple data entities.<br /> <br /> Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. Amazon Keyspaces is serverless, so you pay for only the resources that you use and you can build applications that serve thousands of requests per second with virtually unlimited throughput and storage.<br /> <br /> Logged Batches in Amazon Keyspaces provide the same atomicity guarantees as Apache Cassandra while eliminating the operational complexity of managing transaction logs across distributed clusters. It’s designed to scale automatically with your workload and maintain consistent performance regardless of transaction volume. The feature integrates seamlessly with existing Cassandra Query Language (CQL) statements, allowing for adoption in both new and existing applications.<br /> <br /> Logged Batches are available today in all AWS Commercial and AWS GovCloud (US) Regions where Amazon Keyspaces is available. You pay only for the standard write operations processed within each batch.&nbsp;To learn more about Logged Batches, please visit our <a contenteditable="false" href="https://aws.amazon.com/blogs/database/amazon-keyspaces-now-supports-logged-batches-for-atomic-multi-statement-operations/" style="cursor: pointer;">blog post</a> or refer to our <a contenteditable="false" href="https://docs.aws.amazon.com/keyspaces/latest/devguide/working-with-overview.html" style="cursor: pointer;">Amazon Keyspaces documentation</a>.</p>

Read article →

Amazon EC2 M8a Instances now available in additional regions

<p>Starting today, the general-purpose Amazon EC2 M8a instances are available in the US East (N. Virginia) and Asia Pacific (Tokyo) Regions. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz and deliver up to 30% higher performance and up to 19% better price-performance compared to M7a instances.<br /> <br /> M8a instances deliver 45% more memory bandwidth compared to M7a instances, making them ideal even for latency-sensitive workloads. For specific workloads, the gains are even higher: M8a instances are up to 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances. M8a instances are SAP-certified and offer 12 sizes including 2 bare metal sizes. This range of instance sizes allows customers to precisely match their workload requirements.<br /> <br /> M8a instances are built using the latest sixth generation <a contenteditable="false" href="https://aws.amazon.com/ec2/nitro/" style="cursor: pointer;">AWS Nitro Cards</a> and are ideal for applications that benefit from high performance and high throughput, such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets.<br /> <br /> To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information, visit the Amazon EC2 <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/m8a" style="cursor: pointer;">M8a instance page</a>.</p>

Read article →

Amazon EC2 I7i instances now available in additional AWS regions

<p>Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the Asia Pacific (Hyderabad) and Canada (Central) AWS Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these new instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances.<br /> <br /> I7i instances offer the best compute and storage performance for x86-based storage optimized instances in Amazon EC2, ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size datasets (multi-TBs). Additionally, the torn write prevention feature supports block sizes up to 16 KB, enabling customers to eliminate database performance bottlenecks. I7i instances also support real-time, high-resolution performance statistics for the NVMe instance store volumes attached to them. To learn more, visit the detailed NVMe performance statistics <a contenteditable="false" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-detailed-performance-stats.html" style="cursor: pointer;">page</a>.<br /> <br /> I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth.<br /> To learn more, visit the I7i instances<a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/i7i/" style="cursor: pointer;"> page</a>.</p>

Read article →

Amazon U7i instances now available in Europe (Stockholm and Ireland) Regions

<p>Starting today, Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are available in the Europe (Stockholm) and Europe (Ireland) Regions. U7i-6tb instances are part of AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-6tb instances offer 6TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-6tb instances offer 448 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/" target="_blank">High Memory instances page</a>.</p>

Read article →

Amazon CloudWatch Composite Alarms adds threshold-based alerting

<p>Amazon CloudWatch now enables you to create more flexible alerting policies by triggering notifications when a specific subset of your monitored resources need attention. Using CloudWatch composite alarms, you can create a rule to take action only when a certain combination of alarms is activated. This enhancement lets you choose to receive alerts only when a certain number of resources are impacted, helping you focus on meaningful incidents.<br /> <br /> The new threshold function in composite alarms allows you to eliminate unnecessary alerts for minor issues while ensuring quick notification of significant problems. IT operations teams can configure alerts to trigger when, for instance, at least two out of four storage volumes are running low on capacity, or when 50% of hosts in a cluster show high CPU utilization. The feature supports both fixed numbers and percentages, making it easy to maintain effective monitoring even as your infrastructure grows or changes.<br /> <br /> This capability is now available in all&nbsp;<a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">commercial AWS regions</a>, the AWS GovCloud (US) Regions, and the China Regions.<br /> <br /> To create a threshold-based condition in a composite alarm, simply use the AT_LEAST function in the alarm’s condition. Composite alarms’ pricing applies, see&nbsp;<a href="https://aws.amazon.com/cloudwatch/pricing/">CloudWatch pricing</a>&nbsp;for details. To learn more about the threshold function’s parameters, visit the&nbsp;<a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html">Amazon CloudWatch documentation for composite alarms</a>.</p>
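As an illustrative sketch, the storage-volume example above can be expressed with the standard put_composite_alarm API. The AT_LEAST rule syntax shown follows the announcement but should be confirmed against the composite alarm documentation; the alarm names and SNS topic are placeholders.

```python
# Sketch: notify only when at least two of four EBS volume alarms are in ALARM state.
# put_composite_alarm is a standard CloudWatch API; the AT_LEAST syntax is assumed here.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_composite_alarm(
    AlarmName="storage-capacity-degraded",
    AlarmRule=(
        'AT_LEAST(2, '
        'ALARM("vol-a-low-space"), ALARM("vol-b-low-space"), '
        'ALARM("vol-c-low-space"), ALARM("vol-d-low-space"))'
    ),
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:oncall"],  # placeholder topic
)
```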

Read article →

Amazon EC2 C7i-flex instances are now available in the Middle East (UAE) Region

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex instances, which deliver up to 19% better price performance compared to C6i instances, are available in the Middle East (UAE) Region. C7i-flex instances provide the easiest way for you to get price performance benefits for a majority of compute intensive workloads. The new instances are powered by the 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids) that are available only on AWS, and offer 5% lower prices compared to C7i.<br /> <br /> C7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. For compute-intensive workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage, you can leverage C7i instances.<br /> <br /> To learn more, visit <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/c7i/" style="cursor: pointer;">Amazon EC2 C7i-flex instances</a>. To get started, see the <a contenteditable="false" href="https://console.aws.amazon.com/" style="cursor: pointer;">AWS Management Console</a>.</p>

Read article →

Amazon EC2 High Memory U7i instances now available in AWS GovCloud (US) Regions

<p>Amazon EC2 High Memory U7i instances with 12TB and 16TB of memory (u7i-12tb.224xlarge and u7in-16tb.224xlarge) are now available in the AWS GovCloud (US-West) Region, and instances with 24TB of memory (u7in-24tb.224xlarge) are now available in the AWS GovCloud (US-East) Region. U7i instances are part of AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-12tb instances offer 12TiB of DDR5 memory, U7in-16tb instances offer 16TiB of DDR5 memory, and U7in-24tb instances offer 24TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-12tb instances offer 896 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7in-16tb and U7in-24tb instances offer 896 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 200Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/u7i/" style="cursor: pointer;">High Memory instances page</a>.</p>

Read article →

MSK Express brokers now support Intelligent Rebalancing at no additional cost, and with no action required

<p>Effective today, all new Amazon MSK Provisioned clusters with Express brokers will support Intelligent Rebalancing at no additional cost. This new capability makes it effortless for customers to execute automatic partition balancing operations when scaling their Kafka clusters up or down. Intelligent Rebalancing maximizes the capacity utilization of MSK Express-based clusters by optimally rebalancing Kafka resources on them for better performance, eliminating the need for customers to manage partitions themselves or via third-party tools. Intelligent Rebalancing performs these operations up to 180 times faster compared to Standard brokers.<br /> <br /> MSK Express brokers are designed to deliver up to three times more throughput per broker, scale up to 20 times faster, and reduce recovery time by 90 percent as compared to Standard brokers running Apache Kafka. With Intelligent Rebalancing, MSK Express-based clusters are continuously monitored for resource imbalance or overload based on intelligent Amazon MSK defaults to maximize cluster performance. When required, brokers are efficiently scaled without affecting cluster availability for clients to produce and consume data. Customers can now take full advantage of the scaling and performance benefits of MSK Provisioned clusters for Express brokers while simplifying cluster management operations.<br /> <br /> Intelligent Rebalancing&nbsp;is being rolled out for all new MSK Provisioned clusters with Express brokers in all AWS Regions where Express brokers are available. Intelligent Rebalancing does not require any additional configuration or setup to get started. To learn more, see the <a contenteditable="false" href="https://docs.aws.amazon.com/msk/latest/developerguide/intelligent-rebalancing.html" style="cursor: pointer;">Amazon MSK Developer Guide</a>.</p>

Read article →

AWS Backup now supports Amazon EKS

<p>AWS Backup now supports Amazon Elastic Kubernetes Service (EKS), providing a fully-managed, centralized solution for backing up EKS cluster state and persistent application data. You can now use AWS Backup to help protect your entire EKS environment through a centralized, policy-driven backup service.<br /> <br /> You now get comprehensive data protection capabilities through AWS Backup across your Amazon EKS clusters, including automated scheduling, retention management, immutable vaults, and cross-Region and cross-account copies. AWS Backup delivers a new agent-free solution that works natively with AWS, replacing the custom scripts or third-party tools previously used to perform backups for each cluster. You can restore entire EKS clusters, specific namespaces, or individual persistent volumes. Use AWS Backup to protect your clusters for disaster recovery, to help meet your compliance requirements, or for additional protection before EKS cluster upgrades.<br /> <br /> AWS Backup for EKS is available in all AWS Regions where both AWS Backup and Amazon EKS are available. For the most up-to-date information on Regional availability, please refer to the <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#supported-services-by-region" target="_blank">AWS Backup Regional availability</a>.<br /> <br /> To get started with AWS Backup for Amazon EKS, visit the <a href="https://us-east-2.console.aws.amazon.com/backup/home" target="_blank">AWS Backup console</a>,&nbsp;refer to the <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-aurora.html%20and" target="_blank">AWS Backup documentation</a>,&nbsp;or read the <a href="https://aws.amazon.com/blogs/aws/secure-eks-clusters-with-the-new-support-for-amazon-eks-in-aws-backup">AWS News Blog</a>.&nbsp;</p>
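As a minimal sketch, adding an EKS cluster to an existing backup plan by ARN uses the standard create_backup_selection API; the plan ID, IAM role, and cluster ARN below are placeholders.

```python
# Sketch: protect an EKS cluster by adding it to an existing backup plan by ARN.
# create_backup_selection is a standard AWS Backup API; all identifiers are placeholders.
import boto3

backup = boto3.client("backup", region_name="us-east-1")

backup.create_backup_selection(
    BackupPlanId="0123abcd-0123-0123-0123-0123456789ab",
    BackupSelection={
        "SelectionName": "eks-prod-cluster",
        "IamRoleArn": "arn:aws:iam::111122223333:role/AWSBackupDefaultServiceRole",
        "Resources": [
            "arn:aws:eks:us-east-1:111122223333:cluster/prod"  # the EKS cluster to protect
        ],
    },
)
```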

Read article →

AWS Private CA now supports post-quantum digital certificates

<p>AWS Private Certificate Authority (AWS Private CA) now enables you to create certificate authorities (CAs) and issue certificates that use the Module-Lattice-Based Digital Signature Algorithm (ML-DSA). This feature enables you to begin transitioning your public key infrastructure (PKI) towards post-quantum cryptography, allowing you to put protections in place now to secure your data against future quantum computing threats. ML-DSA is a post-quantum digital signature algorithm standardized by the National Institute of Standards and Technology (NIST) as Federal Information Processing Standard (FIPS) 204.</p> <p>With this feature, you can now test ML-DSA in your environment for certificate issuance, identity verification, and code signing. You can create CAs, issue certificates, create certificate revocation lists (CRLs), and configure Online Certificate Status Protocol (OCSP) responders using ML-DSA. A cryptographically relevant quantum computer (CRQC) will be able to break current digital signature algorithms, such as Rivest–Shamir–Adleman (RSA) and the Elliptic Curve Digital Signature Algorithm (ECDSA), which are expected to be phased out over the next decade.</p> <p>AWS Private CA support for ML-DSA is available in all commercial AWS Regions, the AWS GovCloud (US) Regions, and the China Regions.</p> <p>To learn more about AWS Private CA ML-DSA support, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html" style="cursor: pointer;">AWS Private CA user guide</a>.</p> <p>To learn more about Post-Quantum Cryptography at AWS, visit the <a contenteditable="false" href="https://aws.amazon.com/security/post-quantum-cryptography/" style="cursor: pointer;">AWS Post-Quantum Cryptography page</a>.</p>
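For a rough idea of what testing looks like, creating a root CA that signs with ML-DSA is a single create_certificate_authority call (a real AWS Private CA API). The exact KeyAlgorithm and SigningAlgorithm enum values for ML-DSA shown below are assumptions; check the user guide for the supported parameter set names.

```python
# Sketch: create a private root CA that signs with ML-DSA.
# create_certificate_authority is a standard AWS Private CA API; the ML-DSA enum
# values are hypothetical placeholders to be confirmed against the user guide.
import boto3

pca = boto3.client("acm-pca", region_name="us-east-1")

response = pca.create_certificate_authority(
    CertificateAuthorityConfiguration={
        "KeyAlgorithm": "ML_DSA_65",    # hypothetical value, e.g. the FIPS 204 ML-DSA-65 set
        "SigningAlgorithm": "ML_DSA",   # hypothetical signing algorithm name
        "Subject": {"CommonName": "example.internal Post-Quantum Root CA"},
    },
    CertificateAuthorityType="ROOT",
)
print(response["CertificateAuthorityArn"])
```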

Read article →

Amazon S3 Express One Zone now supports Internet Protocol version 6 (IPv6)

<p>Amazon S3 Express One Zone now supports Internet Protocol version 6 (IPv6) addresses for gateway Virtual Private Cloud (VPC) endpoints. S3 Express One Zone is a high-performance storage class designed for latency-sensitive applications.<br /> <br /> Organizations are adopting IPv6 networks to mitigate IPv4 address exhaustion in their private networks or to comply with regulatory requirements. You can now access your data in S3 Express One Zone over IPv6 or DualStack VPC endpoints. You don't need additional infrastructure to handle IPv6 to IPv4 address translation.<br /> <br /> S3 Express One Zone support for IPv6 is available in all AWS Regions where <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Endpoints.html" target="_blank">the storage class is available</a> at no additional cost. You can set up IPv6 for new and existing VPC endpoints using the AWS Management Console, AWS CLI, AWS SDK, or AWS CloudFormation. To get started using IPv6 on S3 Express One Zone, visit the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-bucket-az-networking.html#s3-express-networking-vpc-gateway" target="_blank">S3 User Guide.</a></p>

Read article →

Amazon Braket notebook instances now support CUDA-Q natively

<p>Amazon Braket notebook instances now come with native support for CUDA-Q, streamlining access to NVIDIA's quantum computing platform for hybrid quantum-classical applications. This enhancement is enabled by upgrading the underlying operating system to Amazon Linux 2023, which delivers improved performance, security, and compatibility for quantum development workflows.<br /> <br /> Quantum researchers and developers can now seamlessly build and test hybrid quantum-classical algorithms using CUDA-Q's GPU-accelerated quantum circuit simulation alongside access to quantum processing units (QPUs) from IonQ, Rigetti, and IQM, all within a single managed environment. With this release, developers can now access CUDA-Q directly within the managed notebook environment, simplifying workflows that previously required local deployment or needed to be run via Hybrid Jobs.<br /> <br /> CUDA-Q support in Amazon Braket notebook instances is available in all AWS Regions where Amazon Braket is available. To get started, see the Amazon Braket <a href="https://docs.aws.amazon.com/braket/latest/developerguide/braket-using-cuda-q.html" target="_blank">Developer Guide</a> and visit the Amazon Braket <a href="https://aws.amazon.com/braket/" target="_blank">product page</a> to learn more about quantum computing on AWS.</p>

Read article →

Amazon CloudWatch agent adds Shared Memory Metrics

<p>Amazon CloudWatch agent now supports collection of shared memory utilization metrics from Linux hosts running on Amazon EC2 or on-premises environments. This new capability enables you to monitor total shared memory usage in CloudWatch, alongside existing memory metrics like free memory, used memory, and cached memory.<br /> <br /> Enterprise applications such as SAP HANA and Oracle RDBMS make extensive use of shared memory segments that were previously not captured in standard memory metrics. By enabling shared memory metric collection in your CloudWatch agent configuration file, you can now accurately assess total memory utilization across your hosts, helping you optimize host and application configurations and make informed decisions about instance sizing.<br /> <br /> Amazon CloudWatch agent is supported in all commercial AWS Regions and AWS GovCloud (US) Regions. For Amazon CloudWatch custom metrics pricing, see the <a href="https://aws.amazon.com/cloudwatch/pricing/">CloudWatch Pricing</a> page.<br /> <br /> To get started, see <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html">Configuring the CloudWatch agent</a> in the Amazon CloudWatch User Guide.&nbsp;</p>

Read article →

Amazon SageMaker Unified Studio adds support for catalog notifications

<p>Amazon SageMaker Unified Studio now provides real-time notifications for data catalog activities, enabling data teams to stay informed of subscription requests, dataset updates, and access approvals. With this launch, customers receive real-time notifications for catalog events including new dataset publications, metadata changes, and access approvals directly within the SageMaker Unified Studio notification center. This launch streamlines collaboration by keeping teams updated as datasets are published or modified.<br /> <br /> The new notification experience in SageMaker Unified Studio is accessible from a “bell” icon in the top right corner of the project home page. From here, you can access a short list of recent notifications including subscription requests, updates, comments, and system events. To see all notifications, click “notification center” to open a tabular view that can be filtered based on your preferences for data catalogs, projects, and event types.<br /> <br /> Notifications within SageMaker Unified Studio are available in all <a contenteditable="false" href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/supported-regions.html" style="cursor: pointer;">regions where SageMaker Unified Studio is supported</a>.<br /> <br /> To learn more, refer to the <a contenteditable="false" href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/sagemaker-events-and-notifications.html" style="cursor: pointer;">SageMaker Unified Studio guide.</a></p>

Read article →

Anthropic’s Claude Sonnet 4.5 is now in Amazon Bedrock in AWS GovCloud (US)

<p>Customers can now use Claude Sonnet 4.5 in <a href="https://aws.amazon.com/bedrock/" target="_blank">Amazon Bedrock</a> in <a href="https://aws.amazon.com/govcloud-us/" target="_blank">AWS GovCloud (US-West) and AWS GovCloud (US-East) via US-GOV Cross-Region Inference</a>. Claude Sonnet 4.5 is Anthropic's most intelligent model, excelling at building complex agents, coding, and long-horizon tasks while maintaining optimal speed and cost-efficiency for high-volume use cases.<br /> <br /> Claude Sonnet 4.5 currently leads the SWE-bench Verified benchmark with enhanced instruction following, better code improvement identification, stronger refactoring judgment, and more effective production-ready code generation. This model excels at powering long-running agents that tackle complex, multi-step tasks requiring peak accuracy, like autonomously managing multi-channel marketing campaigns or orchestrating cross-functional enterprise workflows. In cybersecurity, it can help teams shift from reactive detection to proactive defense by autonomously patching vulnerabilities. For financial services, it can handle everything from analysis to advanced predictive modeling.<br /> <br /> Through the Amazon Bedrock API, Claude can now automatically edit context to clear stale information from past tool calls, allowing you to maximize the model’s context. A new memory tool lets Claude store and consult information outside the context window to boost accuracy and performance.<br /> <br /> To get started with Claude Sonnet 4.5 in Amazon Bedrock, read the <a href="https://aws.amazon.com/blogs/aws/introducing-claude-sonnet-4-5-in-amazon-bedrock-anthropics-most-intelligent-model-best-for-coding-and-complex-agents/" target="_blank">News Blog</a>, visit the <a href="https://signin.amazonaws-us-gov.com/" target="_blank">AWS GovCloud (US)</a> console, Anthropic's Claude in Amazon Bedrock <a href="https://aws.amazon.com/bedrock/claude/" target="_blank">product page</a>, and the Amazon Bedrock <a href="https://aws.amazon.com/bedrock/pricing/" target="_blank">pricing page</a>.&nbsp;</p>
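As a minimal sketch, invoking the model from a GovCloud Region uses the standard Bedrock Runtime Converse API; the cross-Region inference profile ID shown below is an assumption, so copy the exact ID from the Bedrock console in AWS GovCloud (US).

```python
# Sketch: call Claude Sonnet 4.5 through the Bedrock Converse API from GovCloud.
# converse is a standard Bedrock Runtime API; the modelId is a placeholder profile ID.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-gov-west-1")

response = bedrock.converse(
    modelId="us-gov.anthropic.claude-sonnet-4-5-20250929-v1:0",  # placeholder; check the console
    messages=[{"role": "user", "content": [{"text": "Summarize our incident runbook in three bullets."}]}],
    inferenceConfig={"maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])
```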

Read article →

AWS Control Tower supports automatic enrollment of accounts

<p>AWS Control Tower customers can now simply move their accounts to an Organizational Unit (OU) to enroll them under AWS Control Tower governance. This feature helps customers maintain consistency across their AWS environment and simplifies the account creation and enrollment processes. When enrolled, member accounts receive best practice configurations, controls, and baseline resources required for AWS Control Tower governance.<br /> <br /> Customers are no longer required to manually update accounts or re-register OUs when migrating accounts or making changes to their OU structure. When an account is moved to a new OU, AWS Control Tower automatically enrolls the account, applying the baseline configurations and controls from the new OU and removing those from the original OU. With this feature, customers can further simplify their new account provisioning workflows by creating an account and then moving it into the right OU using the AWS Organizations console or the CreateAccount and MoveAccount APIs.<br /> <br /> Customers on landing zone version 3.1 and higher can opt in to this feature by toggling the automatically enroll accounts flag in their Landing Zone settings, or by using the CreateLandingZone or UpdateLandingZone APIs and setting the value of the RemediationTypes parameter to Inheritance_Drift. To learn more about this functionality, review <a href="https://docs.aws.amazon.com/controltower/latest/userguide/account-auto-enrollment.html" target="_blank">Move and enroll accounts with auto-enrollment</a>. For a list of AWS Regions where AWS Control Tower is available, see the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Region Table</a>.</p>

Read article →

AWS KMS now supports Edwards-curve Digital Signature Algorithm (EdDSA)

<p>AWS Key Management Service (KMS) announces support for the Edwards-curve Digital Signature Algorithm (EdDSA). With this new capability, you can create an elliptic curve asymmetric KMS key or data key pairs to sign and verify EdDSA signatures using the Edwards25519 curve (Ed25519). Ed25519 provides a 128-bit security level equivalent to NIST P-256, faster signing performance, and small signature (64 bytes) and public key (32 bytes) sizes.<br /> <br /> Ed25519 is ideal for situations that require small key and signature sizes, such as Internet of Things (IoT) devices and blockchain applications like cryptocurrency.<br /> <br /> This new capability is available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions. To learn more about this new capability, see the <a contenteditable="false" href="https://docs.aws.amazon.com/kms/latest/developerguide/asymmetric-key-specs.html" style="cursor: pointer;">Asymmetric key specs</a> section in the AWS KMS Developer Guide.</p>
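A rough sketch of the sign-and-verify round trip using the standard create_key, sign, and verify APIs; the exact KeySpec and SigningAlgorithm identifiers for Ed25519 are assumptions to confirm in the asymmetric key specs documentation linked above.

```python
# Sketch: create an Ed25519 signing key, sign a message, and verify the signature.
# create_key / sign / verify are standard KMS APIs; the KeySpec and SigningAlgorithm
# values for Ed25519 are hypothetical placeholders.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key = kms.create_key(
    KeySpec="ECC_ED25519",        # hypothetical spec name for the Edwards25519 curve
    KeyUsage="SIGN_VERIFY",
)
key_id = key["KeyMetadata"]["KeyId"]

message = b"device firmware v1.2.3"
signature = kms.sign(
    KeyId=key_id,
    Message=message,
    MessageType="RAW",
    SigningAlgorithm="EDDSA",     # hypothetical algorithm identifier
)["Signature"]

verified = kms.verify(
    KeyId=key_id,
    Message=message,
    MessageType="RAW",
    Signature=signature,
    SigningAlgorithm="EDDSA",
)["SignatureValid"]
print("Signature valid:", verified)
```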

Read article →

Amazon Cognito user pools now supports private connectivity with AWS PrivateLink

<p>Amazon Cognito user pools now supports AWS PrivateLink for secure and private connectivity. With AWS PrivateLink, you can establish a private connection between your virtual private cloud (VPC) and Amazon Cognito user pools to configure, manage, and authenticate against your Cognito user pools without using the public internet. By enabling private network connectivity, this enhancement eliminates the need to use public IP addresses or to rely solely on firewall rules to access Cognito. This feature supports user pool management operations (e.g., list user pools, describe user pools), administrative operations (e.g., admin-created users), and user authentication flows (signing in local users stored in Cognito). OAuth 2.0 authorization code flow (Cognito managed login, hosted UI, sign-in via social identity providers), client credentials flow (Cognito machine-to-machine authorization), and federated sign-ins via SAML and OIDC standards are not supported through VPC endpoints at this time.<br /> <br /> You can use PrivateLink connections in all AWS Regions where Amazon Cognito user pools is available, except AWS GovCloud (US) Regions. Creating VPC endpoints on AWS PrivateLink will incur additional charges; refer to <a contenteditable="false" href="https://aws.amazon.com/privatelink/pricing/" style="cursor: pointer;" target="_blank">AWS PrivateLink pricing page</a> for details. You can get started by creating an AWS PrivateLink interface endpoint for Amazon Cognito user pools using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (CDK), or AWS CloudFormation. To learn more, refer to the documentation on <a contenteditable="false" href="https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html" style="cursor: pointer;" target="_blank">creating an interface VPC endpoint</a><a contenteditable="false" href="https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html" style="cursor: pointer;"> </a>and <a contenteditable="false" href="https://docs.aws.amazon.com/cognito/latest/developerguide/vpc-interface-endpoints.html" style="cursor: pointer;" target="_blank">Amazon Cognito’s developer guide</a>.&nbsp;</p>

Read article →

AWS Advanced .NET Data Provider Driver is Generally Available

<p>The Amazon Web Services (AWS) Advanced .NET Data Provider Driver is now generally available for <a href="https://aws.amazon.com/rds/" target="_blank">Amazon RDS</a> and <a href="https://aws.amazon.com/rds/aurora/" target="_blank">Amazon Aurora</a> PostgreSQL and MySQL-compatible databases. This advanced database driver reduces RDS Blue/Green switchover and database failover times, improving application availability. Additionally, it supports multiple authentication mechanisms for your database, including Federated Authentication, AWS Secrets Manager authentication, and token-based authentication with AWS Identity and Access Management (IAM).<br /> <br /> The driver builds on top of the Npgsql PostgreSQL, native MySql.Data, and MySqlConnector drivers to further enhance functionality beyond standard database connectivity. The driver is natively integrated with Aurora and RDS databases, enabling it to monitor database cluster status and quickly connect to newly promoted writers during unexpected failures that trigger database failovers. Furthermore, the driver seamlessly works with popular frameworks like NHibernate and supports Entity Framework (EF) with MySQL databases.<br /> <br /> The driver is available as an open-source project under the Apache 2.0 license. Refer to the instructions in the <a href="https://github.com/aws/aws-advanced-dotnet-data-provider-wrapper" target="_blank">GitHub</a> repository to get started.&nbsp;</p>

Read article →

Amazon VPC Lattice now supports custom domain names for resource configurations

<p>Starting today, VPC Lattice allows you to specify a custom domain name for a resource configuration. Resource configurations enable layer-4 access to resources such as databases, clusters, and domain names across VPCs and accounts.&nbsp;With this feature, you can use resource configurations for cluster-based and TLS-based resources.<br /> <br /> Resource owners can use this feature by specifying a custom domain for a resource configuration and sharing the resource configuration with consumers. Consumers can then access the resource using the custom domain, with VPC Lattice managing a private hosted zone in the consumer’s VPC.<br /> <br /> This feature also provides resource owners and consumers control and flexibility over the domains they want to use. Resource owners can use a custom domain owned by them, by AWS, or by a third party. Consumers can use granular controls to choose which domains they want VPC Lattice to manage private hosted zones for.</p> <p>This feature is available at no additional cost in all AWS Regions where VPC Lattice resource configuration is available. For more information, please read our <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/custom-domain-names-for-vpc-lattice-resources/">blog </a>or visit the <a href="https://aws.amazon.com/vpc/lattice/">Amazon VPC Lattice product detail page</a> and<a href="https://docs.aws.amazon.com/vpc-lattice/latest/ug/resource-configuration.html"> Amazon VPC Lattice documentation</a>.</p>

Read article →

Amazon S3 now supports tags on S3 Tables

<p>Amazon S3 now supports tags on S3 Tables for attribute-based access control (ABAC) and cost allocation. You can use tags for ABAC to automatically manage permissions for users and roles accessing table buckets and tables. This helps eliminate frequent AWS Identity and Access Management (IAM) or S3 Tables resource-based policy updates, simplifying how you govern access at scale. Additionally, you can add tags to individual tables to track and organize AWS costs using AWS Billing and Cost Management.<br /> <br /> Amazon S3 supports tags on S3 Tables in all AWS Regions where S3 Tables is available. You can get started with tagging using the AWS Management Console, SDK, API, or CLI. To learn more about using tags on S3 Tables, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-tagging.html" style="cursor: pointer;">S</a><a contenteditable="false" href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html" style="cursor: pointer;">3 User Guide.</a></p>

Read article →

Deadline Cloud expands support with latest 6th, 7th, and 8th generation instances

<p>AWS announces expanded instance family support in Deadline Cloud, adding new 6th, 7th, and 8th generation EC2 instances to enhance visual effects and animation rendering workloads. This release includes support for C7i, C7a, M7i, M7a, R7a, R7i, M8a, M8i, and R8i instance families, along with additional 6th generation instance types that were previously unavailable. Deadline Cloud is a fully managed service that helps customers run visual compute workloads in the cloud without having to manage infrastructure. <br /> <br /> With this enhancement, studios can utilize a broader range of AWS compute technology to optimize their rendering workflows. The compute-optimized (C-series), general-purpose (M-series), and memory-optimized (R-series) instances provide tailored options for different rendering workloads - from compute-intensive simulations to memory-heavy scene processing. The inclusion of latest-generation instances like M8a and R8i enables customers to access improved performance and efficiency for their most demanding rendering tasks.<br /> <br /> These instance families are available in all 10 AWS Regions where Deadline Cloud is offered. The specific instance types available in each Region depend on the regional availability of the EC2 instance types themselves.<br /> <br /> To learn more about the new instance types supported in Deadline Cloud and their regional availability, see the <a href="https://aws.amazon.com/deadline-cloud/pricing/" target="_blank">AWS Deadline Cloud pricing page</a>.</p>

Read article →

AWS announces a new Regional planning tool in Builder Center

<p>Today, AWS announced a new tool called AWS Capabilities by Region in <a href="https://builder.aws.com/" target="_blank">Builder Center</a>. This tool helps you discover and compare AWS services, features, APIs, and CloudFormation resources across AWS Regions. You can explore service availability through an interactive interface, compare multiple Regions side-by-side, and view forward-looking roadmap information. This detailed visibility helps you make informed decisions about global deployments and prevent project delays due to service unavailability.<br /> <br /> In addition to this tool, AWS also enhanced the AWS Knowledge Model Context Protocol (MCP) Server to include information about Regional capabilities in an LLM-compatible format. MCP clients and agentic frameworks can connect to the AWS Knowledge MCP Server to get real-time insights into regional service availability and suggestions for alternative solutions when specific services or features are unavailable.<br /> <br /> You can begin exploring <a href="https://builder.aws.com/capabilities/?trk=769a1a2b-8c19-4976-9c45-b6b1226c7d20&amp;sc_channel=el" target="_blank">AWS Capabilities by Region in AWS Builder Center</a> today. The Knowledge MCP server is also publicly accessible at no cost and does not require an AWS account. Usage is subject to rate limits. Follow the <a href="https://awslabs.github.io/mcp/servers/aws-knowledge-mcp-server/" target="_blank">getting started guide</a> for setup instructions.&nbsp;</p>

Read article →

Amazon CloudWatch Application Signals now available in AWS GovCloud (US) Regions

<p>Amazon CloudWatch Application Signals expands its availability to AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, enabling government customers and regulated industries to automatically monitor and improve application performance in these Regions. CloudWatch Application Signals provides comprehensive application monitoring capabilities by automatically collecting telemetry data from applications running on Amazon EC2, Amazon ECS, Amazon EKS, and AWS Lambda, helping customers meet their compliance and monitoring requirements while maintaining workload visibility.<br /> <br /> With CloudWatch Application Signals, customers in AWS GovCloud (US) Regions can now monitor application health in real time, track performance against business goals, visualize service relationships and dependencies, and quickly identify and resolve performance issues. This automated observability solution eliminates the need for manual instrumentation while providing detailed insights into application behavior and performance patterns. The service automatically detects anomalies and helps correlate issues across different AWS services, enabling faster problem resolution and improved application reliability.<br /> <br /> <a href="https://aws.amazon.com/cloudwatch/features/application-observability-apm/" target="_blank">CloudWatch Application Signals</a> is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. For pricing information, visit the <a href="https://aws.amazon.com/cloudwatch/pricing/" target="_blank">Amazon CloudWatch pricing page</a>. To get started, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Monitoring-Sections.html" target="_blank">Amazon CloudWatch Application Signals documentation</a>.</p>

Read article →

AWS Backup now supports AWS KMS customer managed keys with logically air-gapped vaults

<p>AWS Backup now supports encrypting backups in logically air-gapped vaults with AWS Key Management Service (KMS) customer managed keys (CMKs). This enhancement provides additional encryption options beyond the existing AWS-owned keys, helping organizations meet their regulatory and compliance requirements.<br /> <br /> You can now create logically air-gapped vaults using your own customer managed keys (CMKs) in AWS KMS, giving you more control over your backup protection strategy. Whether you want to use keys from the same account or across accounts, you maintain centralized key management while preserving the security benefits of logically air-gapped vaults. This integration works seamlessly with your existing logically air-gapped vaults and other AWS Backup features, ensuring no disruption to your backup workflows.<br /> <br /> AWS KMS customer managed key support with logically air-gapped vaults is available in all AWS Regions where logically air-gapped vaults are <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-region" target="_blank">currently supported</a>.<br /> <br /> You can get started with logically air-gapped vault support for CMKs using the AWS Backup console, API, or CLI. When creating a new logically air-gapped vault, you can now choose between an AWS-owned key or your own CMK for encryption. For more information about implementing this feature, visit the AWS Backup <a href="https://aws.amazon.com/backup/faqs/" target="_blank">product page</a>, <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/logicallyairgappedvault.html">documentation</a>, and <a href="https://aws.amazon.com/blogs/storage/encrypt-aws-backup-logically-air-gapped-vaults-with-customer-managed-keys/" target="_blank">blog</a>.</p>
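<p>A minimal sketch of creating a logically air-gapped vault encrypted with a customer managed key. The create_logically_air_gapped_backup_vault operation and its retention parameters exist in the AWS Backup API; the EncryptionKeyArn parameter name is an assumption for the new CMK option, and the key ARN and vault name are illustrative, so check the current API reference before using this.</p>
<pre><code>import boto3

backup = boto3.client("backup")

# Minimum and maximum retention periods are required for logically air-gapped vaults.
# EncryptionKeyArn is assumed to be the parameter that selects a customer managed key;
# confirm the exact name in the AWS Backup API reference.
backup.create_logically_air_gapped_backup_vault(
    BackupVaultName="compliance-lag-vault",
    MinRetentionDays=7,
    MaxRetentionDays=365,
    EncryptionKeyArn="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
</code></pre>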

Read article →

Amazon Elastic VMware Service (Amazon EVS) is now available in additional Regions

<p>Today, we're announcing that Amazon Elastic VMware Service (Amazon EVS) is now available in all availability zones in the Asia Pacific (Mumbai), Asia Pacific (Sydney), Canada (Central) and Europe (Paris) Regions. This expansion provides more options to leverage the scale and flexibility of AWS for running your VMware workloads in the cloud.<br /> <br /> Amazon EVS lets you run VMware Cloud Foundation (VCF) directly within your Amazon Virtual Private Cloud (VPC) on EC2 bare-metal instances, powered by AWS Nitro. Using either our step-by-step configuration workflow or the AWS Command Line Interface (CLI) with automated deployment capabilities, you can set up a complete VCF environment in just a few hours. This rapid deployment enables faster workload migration to AWS, helping you eliminate aging infrastructure, reduce operational risks, and meet critical timelines for exiting your data center.<br /> <br /> The added availability in the Asia Pacific (Mumbai), Asia Pacific (Sydney), Canada (Central) and Europe (Paris) Regions gives your VMware workloads lower latency through closer proximity to your end users, compliance with data residency or sovereignty requirements, and additional high availability and resiliency options for your enhanced redundancy strategy.<br /> <br /> To get started, visit the Amazon EVS <a href="https://aws.amazon.com/evs/" target="_blank">product detail page</a> and <a href="https://docs.aws.amazon.com/evs/latest/userguide/what-is-evs.html" target="_blank">user guide</a>.&nbsp;</p>

Read article →

AWS End User Messaging SMS launches Carrier Lookup

<p>Starting today, AWS End User Messaging customers can now look up carrier information for a phone number, including the country, number type, dialing code, and mobile network and carrier codes. With Carrier Lookup, you can increase deliverability by checking important information about a phone number before you start sending messages and avoid sending messages to the wrong destination or to incorrect phone numbers. <br /> <br /> AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications. Developers can integrate messaging to support use cases such as one-time passcodes (OTP) at sign-ups, account updates, appointment reminders, delivery notifications, promotions, and more.<br /> <br /> Support for Carrier Lookup is available in all AWS Regions where End User Messaging is available; see the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region</a> table.<br /> <br /> To learn more, see <a href="https://aws.amazon.com/end-user-messaging/">AWS End User Messaging</a>.</p>

Read article →

Amazon SageMaker launches custom tags for project resources

<p>Today, Amazon SageMaker Unified Studio announced new capabilities allowing SageMaker projects to add custom tags to resources created through the project. This helps customers enforce tagging standards that conform to Service Control Policies (SCP) and helps enable cost tracking and reporting practices on resources created across the organization.<br /> <br /> As an Amazon SageMaker Unified Studio administrator, you can configure a project profile with tag configurations that will be pushed down to all projects using the project profile. Project profiles can be set up to pass Key and Value tag pairings or pass the Key of the tag with a default Value that can be modified during project creation. All tag values passed to the project will result in the resources created by that project being tagged. This provides administrators with a governance mechanism that ensures project resources have the expected tags.<br /> <br /> This first release of custom tags for project resources is supported only through the application programming interface (API).<br /> <br /> The custom tags for project resources capability is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a> where Amazon SageMaker Unified Studio is supported, including: Asia Pacific (Tokyo), Europe (Ireland), US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), South America (São Paulo), Asia Pacific (Seoul), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Asia Pacific (Mumbai), Europe (Paris), Europe (Stockholm).<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/sagemaker/" target="_blank">Amazon SageMaker</a>, then get started with the <a href="https://docs.aws.amazon.com/datazone/latest/APIReference/Welcome.html" target="_blank">custom tag API documentation</a>.</p>

Read article →

Amazon Keyspaces (for Apache Cassandra) is now available in the Middle East (UAE) Region

<p><a href="https://aws.amazon.com/keyspaces/" target="_blank">Amazon Keyspaces (for Apache Cassandra)</a> is now available in the Middle East (UAE) Region, allowing customers in the Middle East to build Cassandra-compatible applications with lower latency while keeping their data within the Region to meet data residency requirements.<br /> <br /> Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. Amazon Keyspaces is serverless, so you pay for only the resources that you use and you can build applications that serve thousands of requests per second with virtually unlimited throughput and storage.<br /> <br /> The Middle East (UAE) Region provides the same Amazon Keyspaces features available in other AWS Regions, including point-in-time recovery, Multi-Region replication, CDC streams, and IPv6 support. This regional expansion enables organizations in the Middle East to build highly scalable, low-latency applications using familiar Cassandra Query Language (CQL) without the operational burden of managing Cassandra clusters.<br /> <br /> To learn more about on Keyspaces, visit the <a href="https://docs.aws.amazon.com/keyspaces/latest/devguide/what-is-keyspaces.html" target="_blank">Amazon Keyspaces documentation</a>.</p>

Read article →

AWS IoT Greengrass v2.16 introduces system log forwarder and TPM2.0 capabilities

<p>AWS announces the release of <a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/greengrass-release-2025-11-06.html" target="_blank">AWS IoT Greengrass v2.16</a>, introducing new core components for nucleus and nucleus lite. AWS IoT Greengrass is an Internet of Things (IoT) edge runtime and cloud service that helps customers build, deploy, and manage device software at the edge. The latest version 2.16 release includes enhanced debugging capabilities through the system log forwarder component. This component uploads system log files to Amazon CloudWatch, making it easier for developers to troubleshoot IoT edge applications.<br /> <br /> The AWS IoT Greengrass v2.16 release also features a new nucleus lite version (v2.3) with TPM2.0 specification support, enabling developers to manage edge device security for their resource-constrained devices using hardware-based root-of-trust modules. The implementation helps developers scale their IoT deployments with confidence while providing secure storage for secrets and streamlined device authentication.<br /> <br /> AWS IoT Greengrass v2.16 is available in all AWS Regions where AWS IoT Greengrass is offered. To learn more about AWS IoT Greengrass v2.16 and its new features, visit the AWS IoT Greengrass <a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html" target="_blank">documentation.</a> Follow the Getting Started <a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/getting-started.html" target="_blank">guide</a> for a quick introduction to AWS IoT Greengrass.</p>

Read article →

Amazon DynamoDB Streams expands AWS PrivateLink support to FIPS endpoints

<p>Amazon DynamoDB Streams now supports AWS PrivateLink for all available Amazon DynamoDB Streams Federal Information Processing Standard (FIPS) endpoints in US and Canada commercial AWS Regions.<br /> <br /> With this launch, you can establish a private connection between your virtual private cloud (VPC) and Amazon DynamoDB Streams FIPS endpoints instead of connecting over the public internet, helping you meet your organization's business, compliance, and regulatory requirements to limit public internet connectivity.<br /> <br /> Amazon DynamoDB Streams support for AWS PrivateLink FIPS endpoints is available in the following US and Canada commercial AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), and Canada West (Calgary).<br /> <br /> To learn more about Amazon DynamoDB Streams support for AWS PrivateLink FIPS endpoints, visit the <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/privatelink-streams.html" style="cursor: pointer;" target="_blank">Amazon DynamoDB Streams documentation</a>. For more information about AWS PrivateLink and its benefits, visit the <a href="https://aws.amazon.com/privatelink/" style="cursor: pointer;" target="_blank">AWS PrivateLink product page</a>.</p>
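<p>A minimal sketch of creating an interface VPC endpoint for the DynamoDB Streams FIPS endpoint with boto3. The create_vpc_endpoint call is standard EC2 API; the service name string and all IDs below are assumptions for illustration, so list the real names in your Region with describe_vpc_endpoint_services first.</p>
<pre><code>import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint for the DynamoDB Streams FIPS endpoint. The ServiceName below
# is an assumption; enumerate the available names with describe_vpc_endpoint_services.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                       # illustrative IDs
    ServiceName="com.amazonaws.us-east-1.dynamodb-streams-fips",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
</code></pre>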

Read article →

Amazon CloudFront announces cross-account support for VPC origins

<p>Amazon CloudFront announces cross-account support for Virtual Private Cloud (VPC) origins, enabling customers to access VPC origins that reside in different AWS accounts from their CloudFront distributions. With VPC origins, customers can have their Application Load Balancers (ALB), Network Load Balancers (NLB), and EC2 instances in a private subnet that is accessible only through their CloudFront distributions. With the support for cross-account VPC origins in CloudFront, customers can now leverage the security benefits of VPC origins while maintaining their existing multi-account architecture.<br /> <br /> Customers set up multiple AWS accounts for better security isolation, cost management, and compliance. Previously, customers could access origins in private VPCs from CloudFront only if CloudFront and the origin were in the same AWS account. This meant customers who had their origins in multiple AWS accounts had to keep those origins in public subnets to get the scale and performance benefits of CloudFront. Customers then had to maintain additional security controls, such as access control lists (ACL), at both the edge and within regions, rather than benefiting from the inherent security of VPC origins. Now, customers can use <a href="https://aws.amazon.com/ram/">AWS Resource Access Manager (RAM)</a> to allow CloudFront access to origins in private VPCs in different AWS accounts, both within and outside their AWS Organizations and organizational units (OUs). This streamlines security management and reduces operational complexity, making it easy to use CloudFront as the single front door for applications.<br /> <br /> VPC origins are available in AWS Commercial Regions only, and the full list of supported AWS Regions is available <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-vpc-origins.html#vpc-origins-supported-regions">here</a>. There is no additional cost for using cross-account VPC origins with CloudFront. To learn more about implementing cross-account VPC origins and best practices for multi-account architectures, visit <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-vpc-origins.html">CloudFront VPC origins.</a></p>
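<p>A minimal sketch of the RAM side of this workflow, run from the account that owns the private load balancer. The create_resource_share call is standard RAM API; the ALB ARN, account IDs, and the assumption that the load balancer is the resource being shared (before the distribution owner creates the VPC origin that references it) are taken from the announcement's description and should be checked against the CloudFront VPC origins documentation.</p>
<pre><code>import boto3

ram = boto3.client("ram")

# Run in the account that owns the private Application Load Balancer.
# Shares the ALB with the account that owns the CloudFront distribution, which
# can then create a VPC origin referencing the shared ALB ARN.
share = ram.create_resource_share(
    name="cloudfront-vpc-origin-share",
    resourceArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/internal-alb/50dc6c495c0c9188"
    ],
    principals=["444455556666"],      # CloudFront distribution owner account (illustrative)
    allowExternalPrincipals=False,    # keep the share inside the organization
)
print(share["resourceShare"]["resourceShareArn"])
</code></pre>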

Read article →

Amazon ECS announces non-root container support for managed EBS volumes

<p><a href="https://aws.amazon.com/ecs/" target="_blank">Amazon Elastic Container Service</a> (ECS) now supports mounting Amazon Elastic Block Store (EBS) volumes to containers running as non-root users. With this launch, ECS automatically configures the EBS volume’s file system permissions to allow non-root users to read and write data securely, while preserving the root-level ownership of the volume. This enhancement simplifies security-first container deployments by removing the need for manual permission management or custom entrypoint scripts.</p> <p>This feature enhances container security by allowing tasks to run as non-root users, reducing the risk of privilege escalation and unauthorized access to data. Previously, for a container in a task to write to a mounted Amazon EBS volume, it had to run as the root user. ECS now automatically manages EBS volume permissions, simplifying workflows and ensuring that all containers within a task — regardless of user ID — can securely read and write to the mounted volume.</p> <p>This feature is now available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a> where Amazon ECS and Amazon EBS are supported, for EC2, AWS Fargate, and ECS Managed Instances launch types. To learn more, see <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ebs-volumes.html" target="_blank">Use Amazon EBS volumes with Amazon ECS</a> in the Amazon ECS Developer Guide.</p>

Read article →

AWS B2B Data Interchange is now available in AWS Europe (Ireland) Region

<p>Customers in AWS Europe (Ireland) Region can now use AWS B2B Data Interchange to build highly customizable, scalable and cost-efficient EDI workloads.<br /> <br /> AWS B2B Data Interchange automates validation, transformation, and generation of EDI files such as ANSI X12 documents to and from JSON and XML data formats. With this launch, you can use AWS B2B Data Interchange to process your EDI documents in AWS Europe (Ireland) Region, which enables you to meet your compliance and data sovereignty obligations while modernizing your B2B integration workloads. As part of this launch, the AWS B2B Data Interchange generative AI mapping capability will also become available in AWS Europe (Ireland) Region, simplifying mapping code development and ultimately expediting trading partners onboarding.<br /> <br /> To learn more about AWS B2B Data Interchange visit our <a href="https://aws.amazon.com/b2b-data-interchange/" target="_blank">product page</a>, <a href="https://docs.aws.amazon.com/b2bi/latest/userguide/what-is-b2bi.html" target="_blank">user-guide</a> or take our <a href="https://catalog.workshops.aws/getting-started-b2b-data-interchange/en-US" target="_blank">self-paced workshop</a>. See the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Region Table</a> for complete regional availability.</p>

Read article →

Microsoft SQL Server Developer Edition now available through AWS Launch Wizard

<p>AWS Launch Wizard now offers a guided approach to sizing, configuring, and deploying Windows Server EC2 instances with Microsoft SQL Server Developer Edition installed from your own media. AWS Launch Wizard for SQL Server Developer Edition allows you to simplify launching cost-effective and full-featured SQL Server instances on Amazon EC2, making it ideal for developers building non-production and test database environments.<br /> <br /> This feature is ideal for customers who also have existing non-production databases running SQL Server Enterprise Edition or SQL Server Standard Edition, as migrating the non-production databases to SQL Server Developer Edition will reduce SQL license costs while maintaining feature parity.<br /> <br /> This feature is available in all supported commercial AWS Regions and the AWS GovCloud (US) Regions.<br /> <br /> To learn more, see the AWS Launch Wizard for SQL Server <a contenteditable="false" href="https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sql.html" style="cursor: pointer;">User Guide</a> and <a contenteditable="false" href="https://aws.amazon.com/blogs/modernizing-with-aws/how-to-automate-downgrading-sql-server-to-developer-edition-on-amazon-ec2/" style="cursor: pointer;">blog post here</a>.</p>

Read article →

AWS Glue Schema Registry adds support for C#

<p><a contenteditable="false" href="https://docs.aws.amazon.com/glue/latest/dg/schema-registry.html" style="cursor: pointer;">AWS Glue Schema Registry</a>&nbsp;(GSR)&nbsp;has now expanded the programming language support for&nbsp;GSR&nbsp;Client library&nbsp;to&nbsp;include&nbsp;C# support along with existing Java support.&nbsp;C# applications integrating with Apache Kafka&nbsp;or&nbsp;<a contenteditable="false" href="https://aws.amazon.com/msk/" style="cursor: pointer;">Amazon Managed Streaming for Apache Kafka (Amazon MSK)</a>,&nbsp;<a contenteditable="false" href="https://aws.amazon.com/kinesis/data-streams/" style="cursor: pointer;">Amazon Kinesis Data Streams</a>, and Apache Flink&nbsp;or&nbsp;<a contenteditable="false" href="https://aws.amazon.com/managed-service-apache-flink/" style="cursor: pointer;">Amazon Managed Service for Apache Flink</a>&nbsp;can now&nbsp;interact with AWS Glue Schema Registry&nbsp;to&nbsp;maintain&nbsp;data quality and schema compatibility in streaming data applications.</p> <p>AWS Glue Schema Registry, a serverless feature of&nbsp;<a contenteditable="false" href="https://aws.amazon.com/glue/" style="cursor: pointer;">AWS Glue</a>, enables you to&nbsp;validate&nbsp;and control the evolution of streaming data using registered schemas at no&nbsp;additional&nbsp;charge.&nbsp;Schemas define the structure and format of data records produced by applications. Using AWS Glue Schema Registry, you can centrally manage and enforce schema definitions across your data ecosystem. This ensures consistency of schemas across applications and enables seamless data integration between producers and consumers. Through centralized schema validation, teams can&nbsp;maintain&nbsp;data quality standards and evolve their schemas in a controlled manner.&nbsp;&nbsp;<br /> </p> <p>C# support is available across all&nbsp;<a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS regions</a>&nbsp;where Glue Schema Registry&nbsp;is available. Visit the Glue Schema Registry&nbsp;<a contenteditable="false" href="https://docs.aws.amazon.com/glue/latest/dg/schema-registry-gs-serde-csharp.html" style="cursor: pointer;">developer&nbsp;guide</a>,&nbsp;and&nbsp;<a contenteditable="false" href="https://www.nuget.org/packages/AWS.Glue.SchemaRegistry" style="cursor: pointer;">SDK</a>&nbsp;to get started with C# integration.</p> <p>&nbsp;</p>

Read article →

AWS Marketplace now open for India-based sellers supporting transactions in Indian Rupees (INR)

<p>Buyers and sellers in India can now transact locally in AWS Marketplace, with invoicing in Indian Rupees (INR), and with simplified tax compliance through AWS India. With this launch, India-based sellers can now register to sell in AWS Marketplace and offer paid subscriptions to buyers in India. India-based sellers will be able to create private offers in US dollars (USD) or INR. Buyers in India purchasing paid offerings in AWS Marketplace from India-based sellers will receive invoices in INR, helping to simplify invoicing with consistency across AWS Cloud and AWS Marketplace purchases. Sellers based in India can begin selling paid offerings in AWS Marketplace and can work with India-based Channel Partners to sell to customers.<br /> <br /> AWS India will facilitate the issuance of tax-compliant invoices in INR to buyers, with the independent software vendor (ISV) or Channel Partner as the seller of record. AWS India will automate the collection and remittance of Withholding Tax (WHT) and GST-Tax Collected at Source (GST-TCS) to the relevant tax authorities, fulfilling compliance requirements for buyers. During this phase, non-India based sellers can continue to sell directly to buyers in India through AWS Inc., in USD or through AWS India by working through authorized distributors.<br /> <br /> To learn more and explore solutions available from India-based sellers, <a href="https://aws.amazon.com/marketplace/solutions/india" target="_blank">visit this page</a>. To get started as a seller, India-based ISVs and Channel Partners can register in the <a href="https://aws.amazon.com/marketplace/partners/management-tour?ref_=header_modules_sell_in_aws" target="_blank">AWS Marketplace Management Portal</a>. For more information about buying or selling using AWS Marketplace in India, visit the <a href="https://aws.amazon.com/legal/awsin/" target="_blank">India FAQs page</a> and <a href="https://external-mp-channel-partners.s3.us-west-2.amazonaws.com/AWS+India_MPO_CheatSheet_Seller.pdf" target="_blank">help guide</a>.</p>

Read article →

Amazon CloudFront adds IPv6 support for Anycast Static IPs

<p>Amazon CloudFront now supports both IPv4 and IPv6 addresses for Anycast Static IP configurations. Previously, customers could only use IPv4 addresses with CloudFront Anycast Static IPs. With this launch, customers using CloudFront Anycast Static IP addresses receive both IPv4 and IPv6 addresses for their workloads. This dual-stack support allows customers to meet IPv6 compliance requirements, future-proof their infrastructure, and serve end users on IPv6-only networks.<br /> <br /> CloudFront supports IPv6 for Anycast Static IPs from all edge locations, excluding the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. Learn more about Anycast Static IPs <a contenteditable="false" href="https://aws.amazon.com/blogs/networking-and-content-delivery/zero-rating-and-ip-address-management-made-easy-cloudfronts-new-anycast-static-ips-explained/" style="cursor: pointer;">here</a>, and for more information, refer to the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/request-static-ips.html" style="cursor: pointer;">Amazon CloudFront Developer Guide</a>. For pricing, see <a contenteditable="false" href="https://aws.amazon.com/cloudfront/pricing/" style="cursor: pointer;">CloudFront Pricing</a>.</p>

Read article →

Amazon Keyspaces (for Apache Cassandra) extends Multi-Region Replication to the Bahrain and Hong Kong Regions

<p>Amazon Keyspaces (for Apache Cassandra) now supports Multi-Region Replication in the Middle East (Bahrain) and Asia Pacific (Hong Kong) Regions. With this expansion, customers can now replicate their Amazon Keyspaces tables to and from these Regions, enabling lower latency access to data and improved regional resiliency.<br /> <br /> Amazon Keyspaces Multi-Region Replication automatically replicates data across AWS Regions with typically less than a second of replication lag, allowing applications to read and write data to the same table in multiple Regions. This capability helps customers build globally distributed applications that can serve users with low latency regardless of their location, while also providing business continuity in the event of a regional disruption.<br /> <br /> The addition of Multi-Region Replication support in Middle East (Bahrain) and Asia Pacific (Hong Kong) enables organizations operating in these regions to build highly available applications that can maintain consistent performance for users across the Middle East and Asia Pacific. Customers can now replicate their Keyspaces tables between these regions and any other supported AWS Region without managing complex replication infrastructure.<br /> <br /> You pay only for the resources you use, including data storage, read/write capacity, and writes in each Region of your multi-Region keyspace. To learn more about Amazon Keyspaces Multi-Region Replication and its regional availability, visit the Amazon Keyspaces documentation.</p>
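<p>A minimal sketch of creating a multi-Region keyspace that replicates between Middle East (Bahrain) and Asia Pacific (Hong Kong), using the CreateKeyspace API's replication specification. The keyspace name is illustrative, and the Region codes me-south-1 and ap-east-1 are assumed to correspond to the two Regions named above.</p>
<pre><code>import boto3

keyspaces = boto3.client("keyspaces", region_name="me-south-1")

# Creates a multi-Region keyspace replicated between Middle East (Bahrain) and
# Asia Pacific (Hong Kong); tables created in it are replicated to every Region
# in regionList.
keyspaces.create_keyspace(
    keyspaceName="orders_global",
    replicationSpecification={
        "replicationStrategy": "MULTI_REGION",
        "regionList": ["me-south-1", "ap-east-1"],
    },
)
</code></pre>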

Read article →

Amazon FSx now integrates with AWS Secrets Manager for enhanced management of Active Directory credentials

<p>Amazon FSx now integrates with AWS Secrets Manager, enabling enhanced protection and management of the Active Directory domain service account credentials for your FSx for Windows File Server file systems and FSx for NetApp ONTAP Storage Virtual Machines (SVMs).<br /> <br /> Previously, if you wanted to join your FSx for Windows file system or FSx for ONTAP SVM to your Active Directory domain for user authentication and access control, you needed to specify the username and password for your service account in the Amazon FSx Console, Amazon FSx API, AWS CLI, or AWS CloudFormation. With this launch, you can now specify an AWS Secrets Manager secret containing the service account credentials, enabling you to strengthen your security posture by eliminating the need to store plain text credentials in application code or configuration files, and aligning with best practices for credential management. Additionally, you can use AWS Secrets Manager to rotate your Active Directory credentials and consume them when needed in FSx workloads.<br /> <br /> You can now use AWS Secrets Manager to store your domain join service credentials for all FSx for Windows file systems and FSx for ONTAP Storage Virtual Machines in <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">all AWS Regions</a> where they are available. For more information, see <a href="https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html" target="_blank">Amazon FSx for Windows File Server documentation</a> and <a href="https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/getting-started.html" target="_blank">Amazon FSx for NetApp ONTAP documentation</a>.</p>
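<p>A minimal sketch of the Secrets Manager side of this workflow: storing the Active Directory service account credentials once, then referencing the resulting secret ARN when creating or updating the FSx file system or SVM instead of passing plain-text credentials. The secret name and JSON key names are illustrative assumptions; use the secret format the Amazon FSx documentation specifies.</p>
<pre><code>import json

import boto3

secrets = boto3.client("secretsmanager")

# Store the AD service account credentials once; reference the returned ARN in
# the FSx console, API, CLI, or CloudFormation instead of a plain-text password.
# The JSON key names below are illustrative; follow the format FSx documents.
secret = secrets.create_secret(
    Name="fsx/self-managed-ad/service-account",
    SecretString=json.dumps({
        "username": "FSxServiceAccount",
        "password": "example-password-rotate-me",
    }),
)
print(secret["ARN"])
</code></pre>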

Read article →

Amazon CloudWatch Database Insights expands anomaly detection in on-demand analysis

<p>Amazon CloudWatch Database Insights now detects anomalies on additional metrics through its on-demand analysis experience. Database Insights is a monitoring and diagnostics solution that helps database administrators and application developers optimize database performance by providing comprehensive visibility into database metrics, query performance, and resource utilization patterns. The on-demand analysis feature utilizes machine learning to help identify anomalies and performance bottlenecks during the selected time period, and gives advice on what to do next.<br /> <br /> The Database Insights on-demand analysis feature now offers enhanced anomaly detection capabilities. Previously, database administrators could analyze database performance and correlate metrics based on database load. Now, the on-demand analysis report also identifies anomalies in database-level and operating system-level counter metrics for the database instance, as well as per-SQL metrics for the top SQL statements contributing to database load. The feature automatically compares your selected time period against normal baseline performance, identifies anomalies, and provides specific remediation advice while reducing mean time to diagnosis. Through intuitive visualizations and clear explanations, you can quickly identify performance issues and receive step-by-step guidance for resolution.<br /> <br /> You can get started with on-demand analysis by enabling the Advanced mode of CloudWatch Database Insights on your Amazon Aurora or RDS databases using the AWS management console, AWS APIs, or AWS CloudFormation. Please refer to <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.Overview.Engines.html">RDS documentation</a> and <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_PerfInsights.Overview.Engines.html#USER_PerfInsights.Overview.PIfeatureEngnRegSupport">Aurora documentation</a> for information regarding the availability of Database Insights across different regions, engines, and instance classes.</p>
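<p>A minimal sketch of switching an existing RDS instance to the Advanced mode of Database Insights, which is the mode the on-demand analysis experience requires. The modify_db_instance call is standard RDS API; the DatabaseInsightsMode value and the 465-day Performance Insights retention used here are assumptions to verify against the RDS documentation linked above, and the instance identifier is illustrative.</p>
<pre><code>import boto3

rds = boto3.client("rds")

# Enable Database Insights Advanced mode on an existing instance.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    DatabaseInsightsMode="advanced",            # assumed parameter value; see RDS docs
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=465,     # retention assumed to be required by advanced mode
    ApplyImmediately=True,
)
</code></pre>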

Read article →

AWS Config now supports 49 new resource types

<p>AWS Config now supports 49 additional AWS resource types across key services including Amazon EC2, Amazon Bedrock, and Amazon SageMaker. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources.<br /> <br /> With this launch, if you have enabled recording for all resource types, then AWS Config will automatically track these new additions. The newly supported resource types are also available in Config rules and Config aggregators.<br /> <br /> You can now use AWS Config to monitor the following newly supported resource types in all <a href="https://docs.aws.amazon.com/config/latest/developerguide/what-is-resource-config-coverage.html" target="_blank">AWS Regions</a> where the supported resources are available:</p> <table> <tbody> <tr> <td>Resource Types</td> <td>&nbsp;</td> </tr> <tr> <td>AWS::ApiGateway::DomainName</td> <td>AWS::Glue::Registry</td> </tr> <tr> <td>AWS::ApiGateway::Method</td> <td>AWS::IoTCoreDeviceAdvisor::SuiteDefinition</td> </tr> <tr> <td>AWS::ApiGateway::UsagePlan</td> <td>AWS::MediaPackageV2::Channel</td> </tr> <tr> <td>AWS::AppConfig::Extension</td> <td>AWS::MediaPackageV2::ChannelGroup</td> </tr> <tr> <td>AWS::Bedrock::ApplicationInferenceProfile</td> <td>AWS::MediaTailor::LiveSource</td> </tr> <tr> <td>AWS::Bedrock::Prompt</td> <td>AWS::MSK::ServerlessCluster</td> </tr> <tr> <td>AWS::BedrockAgentCore::BrowserCustom</td> <td>AWS::PaymentCryptography::Alias</td> </tr> <tr> <td>AWS::BedrockAgentCore::CodeInterpreterCustom</td> <td>AWS::PaymentCryptography::Key</td> </tr> <tr> <td>AWS::BedrockAgentCore::Runtime</td> <td>AWS::RolesAnywhere::CRL</td> </tr> <tr> <td>AWS::CloudFormation::LambdaHook</td> <td>AWS::RolesAnywhere::Profile</td> </tr> <tr> <td>AWS::CloudFormation::StackSet</td> <td>AWS::S3::AccessGrant</td> </tr> <tr> <td>AWS::Comprehend::Flywheel</td> <td>AWS::S3::AccessGrantsInstance</td> </tr> <tr> <td>AWS::Config::AggregationAuthorization</td> <td>AWS::S3::AccessGrantsLocation</td> </tr> <tr> <td>AWS::DataSync::Agent</td> <td>AWS::SageMaker::DataQualityJobDefinition</td> </tr> <tr> <td>AWS::Deadline::Fleet</td> <td>AWS::SageMaker::MlflowTrackingServer</td> </tr> <tr> <td>AWS::Deadline::QueueFleetAssociation</td> <td>AWS::SageMaker::ModelBiasJobDefinition</td> </tr> <tr> <td>AWS::EC2::IPAMPoolCidr</td> <td>AWS::SageMaker::ModelExplainabilityJobDefinition</td> </tr> <tr> <td>AWS::EC2::SubnetNetworkAclAssociation</td> <td>AWS::SageMaker::ModelQualityJobDefinition</td> </tr> <tr> <td>AWS::EC2::VPCGatewayAttachment</td> <td>AWS::SageMaker::MonitoringSchedule</td> </tr> <tr> <td>AWS::ECR::RepositoryCreationTemplate</td> <td>AWS::SageMaker::StudioLifecycleConfig</td> </tr> <tr> <td>AWS::ElasticLoadBalancingV2::TargetGroup</td> <td>AWS::SecretsManager::RotationSchedule</td> </tr> <tr> <td>AWS::EMR::Studio</td> <td>AWS::SES::DedicatedIpPool</td> </tr> <tr> <td>AWS::EMRContainers::VirtualCluster</td> <td>AWS::SES::MailManagerTrafficPolicy</td> </tr> <tr> <td>AWS::EMRServerless::Application</td> <td>AWS::SSM::ResourceDataSync</td> </tr> <tr> <td>AWS::EntityResolution::MatchingWorkflow</td> </tr> </tbody> </table> <p>To view the complete list of AWS Config supported resource types, see the&nbsp;<a href="https://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html" target="_blank">supported resource types</a> page.</p>
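<p>As noted above, accounts that record all supported resource types pick up these additions automatically. A minimal sketch of such a recorder configuration, with the recorder name and IAM role ARN as illustrative assumptions:</p>
<pre><code>import boto3

config = boto3.client("config")

# Record all supported resource types (new types are then tracked automatically)
# plus global resource types such as IAM.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::111122223333:role/aws-config-recorder-role",  # illustrative
        "recordingGroup": {
            "allSupported": True,
            "includeGlobalResourceTypes": True,
        },
    }
)
config.start_configuration_recorder(ConfigurationRecorderName="default")
</code></pre>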

Read article →

Amazon GameLift Streams adds AWS Health notifications for aging resources

<p><a href="https://aws.amazon.com/gamelift/streams/">Amazon GameLift Streams</a> is now integrated with AWS Health and will provide automated notifications about aging stream groups. Customers are sent regular reminders via AWS Health to re-create their stream groups starting as early as the 45th day to the 335th day from the stream group creation date. Stream groups older than 180 days are restricted from adding new applications and automatically expire after the 365th day.<br /> <br /> This feature strengthens our customer’s security posture by helping customers manage the lifecycle of stream groups and prevent the use of outdated resources that might be missing updates. While the customer focuses on their game development, the service helps maintain the health of their resources.<br /> <br /> AWS Health will send a reminder to the linked account on the 45th day and on the 150th day from the stream group creation day, informing customers that the stream group will be restricted from adding new applications after the 180-day. A last reminder to re-create the stream group will be sent on 335th day informing customers that the stream group will expire on the 365th day.<br /> <br /> This feature is available in all AWS Regions where Amazon GameLift Streams is offered at no additional cost.<br /> <br /> Maintenance warnings or the expiration date of a stream group can be viewed on the Stream group details page on the service console, or by using the <i>ExpiresAt</i> field in the <i>GetStreamGroup</i> API response.<br /> <br /> To learn more about managing your stream groups and configuring notifications, visit the Amazon GameLift documentation on&nbsp;<a href="https://docs.aws.amazon.com/gameliftstreams/latest/developerguide/stream-groups.html#stream-groups-lifecycle">Stream group lifecycle</a>.</p>

Read article →

Amazon Connect now supports configuration of email address aliases

<p>Amazon Connect now lets you configure aliases for email addresses, so customers see trusted identities when sending or receiving messages, helping maintain a consistent brand experience and simplify email management. For example, when forwarding a customer-facing address such as support@company.com to an address in Amazon Connect, you can configure an alias to ensure customers continue to see support@company.com as the sender.<br /> <br /> Amazon Connect Email is available in the US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">regions</a>. To learn more and get started, please refer to the help <a contenteditable="false" href="https://docs.aws.amazon.com/connect/latest/adminguide/setup-email-channel.html" style="cursor: pointer;">documentation</a>, <a contenteditable="false" href="https://aws.amazon.com/connect/pricing/" style="cursor: pointer;">pricing page</a>, or visit the <a contenteditable="false" href="https://aws.amazon.com/connect/" style="cursor: pointer;">Amazon Connect</a> website.</p>

Read article →

Amazon CloudWatch Application Signals adds AI-powered Synthetics debugging

<p><a href="https://github.com/awslabs/mcp/tree/main/src/cloudwatch-appsignals-mcp-server">Amazon CloudWatch Application Signals Model Context Protocol or MCP Server</a> for Application Performance Monitoring (APM) now integrates CloudWatch Synthetics canary monitoring directly into its audit framework, enabling automated, AI-powered debugging of synthetic monitoring failures. DevOps teams and developers can now use natural language questions like 'Why is my checkout canary failing?' in compatible AI assistants such as Amazon Q, Claude, or other supported assistants to utilize the new AI-powered debugged capabilities and quickly distinguish between canary infrastructure issues and actual service problems, addressing the significant challenge of extensive manual analysis in maintaining reliable synthetic monitoring.<br /> <br /> The integration extends Application Signals' existing multi-signal (services, operations, SLOs, golden signals) analysis capabilities to include comprehensive canary diagnostics. The new feature automatically correlates canary failures with service health metrics, traces, and dependencies through an intelligent audit pipeline. Starting from natural language prompts from users, the system performs multi-layered diagnostic analysis across six major areas: Network Issues, Authentication Failures, Performance Problems, Script Errors, Infrastructure Issues, and Service Dependencies. This analysis includes automated comparison of HTTP Archive or HAR files, CloudWatch logs analysis, S3 artifact examination, and configuration validation, significantly reducing the time needed to identify and resolve synthetic monitoring issues.<br /> Customers can then access these insights through natural language interactions with supported AI assistants.<br /> <br /> This feature is available in all commercial AWS regions where <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html">Amazon CloudWatch Synthetics</a> is offered. Customers will need access to a compatible AI agent such as Amazon Q, Claude, or other supported AI assistants to utilize the AI-powered debugging capabilities.<br /> <br /> To learn more about implementing AI-based debugging for your synthetic monitoring, visit the <a href="https://github.com/awslabs/mcp/tree/main/src/cloudwatch-appsignals-mcp-server#readme">CloudWatch Application Signals MCP Server documentation.</a></p>

Read article →

Announcing New EC2 R8a Memory-Optimized Instances

<p>AWS is announcing the general availability of new memory-optimized Amazon EC2 R8a instances. R8a instances feature 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz and deliver up to 30% higher performance and up to 19% better price-performance compared to R7a instances.<br /> <br /> R8a instances deliver 45% more memory bandwidth compared to R7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 R7a instances, R8a instances provide up to 60% faster performance for GroovyJVM, allowing higher request throughput and better response times for business-critical applications.<br /> <br /> Built on the <a href="https://aws.amazon.com/ec2/nitro/" target="_blank">AWS Nitro System</a> using sixth generation Nitro Cards, R8a instances are ideal for high performance, memory-intensive workloads, such as SQL and NoSQL databases, distributed web scale in-memory caches, in-memory databases, real-time big data analytics, and Electronic Design Automation (EDA) applications. R8a instances offer 12 sizes, including 2 bare metal sizes. Amazon EC2 R8a instances are SAP-certified and provide 38% more SAPS compared to R7a instances.<br /> <br /> R8a instances are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). To get started, sign in to the <a href="https://console.aws.amazon.com/" target="_blank">AWS Management Console</a>. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information, visit the Amazon EC2 <a href="https://aws.amazon.com/ec2/instance-types/r8a" target="_blank">R8a instance page</a>.</p>

Read article →

AWS Cloud WAN is now available in three more AWS Regions

<p>Starting today, <a href="https://aws.amazon.com/cloud-wan">AWS Cloud WAN</a> is available in the AWS Asia Pacific (Thailand), AWS Asia Pacific (Taipei) and AWS Asia Pacific (New Zealand) Regions.<br /> <br /> With AWS Cloud WAN, you can use a central dashboard and network policies to create a global network that spans multiple locations and networks, removing the need to configure and manage different networks using different technologies. You can use network policies to specify the Amazon Virtual Private Clouds, AWS Transit Gateways, and on-premises locations you want to connect to using an AWS Site-to-Site VPN, AWS Direct Connect, or third-party software-defined WAN (SD-WAN) products. The AWS Cloud WAN central dashboard generates a comprehensive view of the network to help you monitor network health, security, and performance. In addition, AWS Cloud WAN automatically creates a global network across AWS Regions by using Border Gateway Protocol (BGP) so that you can easily exchange routes worldwide.<br /> <br /> To learn more, please visit the AWS Cloud WAN <a href="https://aws.amazon.com/cloud-wan/">product detail page</a>.</p>

Read article →

AWS Config conformance packs now available in additional AWS Regions

<p>AWS Config conformance packs and organization-level management capabilities for conformance packs are now available in additional AWS Regions. Conformance packs allow you to bundle AWS Config rules into a single package, simplifying deployment at scale. You can deploy and manage these conformance packs throughout your AWS environment.<br /> <br /> Conformance packs provide a general-purpose compliance framework designed to enable you to create security, operational, or cost-optimization governance checks using managed or custom AWS Config rules. This allows you to monitor compliance scores based on your own groupings. With this launch, you can also manage the AWS Config conformance packs and individual AWS Config rules at the organization level which simplifies the compliance management across your AWS Organization.<br /> <br /> With this expansion, AWS Config Conformance Packs are now also available in the following AWS Regions: Asia Pacific (Malaysia), Asia Pacific (New Zealand), Asia Pacific (Thailand), Asia Pacific (Taipei) and Mexico (Central).<br /> <br /> To get started, you can either use the provided <a contenteditable="false" href="https://docs.aws.amazon.com/config/latest/developerguide/conformancepack-sample-templates.html" style="cursor: pointer;">sample conformance pack</a> templates or craft a custom YAML file from scratch based on a <a contenteditable="false" href="https://docs.aws.amazon.com/config/latest/developerguide/custom-conformance-pack.html" style="cursor: pointer;">custom conformance pack</a>. Conformance pack deployment can be done through the AWS Config console, AWS CLI, or via AWS CloudFormation. You will be charged per conformance pack evaluation in your AWS account per AWS Region. Visit the AWS Config <a contenteditable="false" href="https://aws.amazon.com/config/pricing/" style="cursor: pointer;">pricing page</a> for more details. To learn more about AWS Config conformance packs, see our <a contenteditable="false" href="https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html" style="cursor: pointer;">documentation</a>.</p>
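<p>A minimal sketch of deploying a conformance pack from a template at the account level and at the organization level, using the existing PutConformancePack and PutOrganizationConformancePack APIs; the pack names and the S3 template URI are illustrative.</p>
<pre><code>import boto3

config = boto3.client("config")

# Account-level deployment from a template stored in S3 (URI is illustrative).
config.put_conformance_pack(
    ConformancePackName="operational-best-practices",
    TemplateS3Uri="s3://my-config-templates/operational-best-practices.yaml",
)

# Organization-level deployment, run from the management account or the
# delegated administrator account for AWS Config.
config.put_organization_conformance_pack(
    OrganizationConformancePackName="org-operational-best-practices",
    TemplateS3Uri="s3://my-config-templates/operational-best-practices.yaml",
)
</code></pre>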

Read article →

Amazon Bedrock AgentCore Runtime now supports direct code deployment

<p>Amazon Bedrock AgentCore Runtime now supports two deployment methods for AI agents: container-based deployment and direct code upload. Developers can now choose direct code-zip file upload for rapid prototyping and iteration, or advanced container-based options for complex use cases requiring custom configurations.<br /> <br /> AgentCore Runtime provides a serverless, framework- and model-agnostic runtime for running agents and tools at scale. This deployment option streamlines the prototyping workflow while maintaining enterprise security and scaling capabilities for production deployments. Developers can now deploy agents using direct code-zip upload with easy drag-and-drop functionality. This enables faster iteration cycles, empowering developers to prototype quickly and focus on building innovative agent capabilities.<br /> <br /> This feature is available in all nine <a contenteditable="false" href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-regions.html" style="cursor: pointer;">AWS Regions</a> where Amazon Bedrock AgentCore Runtime is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).<br /> <br /> To learn more about AgentCore Runtime deployment options, see the <a contenteditable="false" href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/runtime-get-started-code-deploy.html" style="cursor: pointer;">AgentCore documentation</a> and get started with the <a contenteditable="false" href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-get-started-toolkit.html" style="cursor: pointer;">AgentCore Starter Toolkit</a>. AgentCore offers <a contenteditable="false" href="https://aws.amazon.com/bedrock/agentcore/pricing/" style="cursor: pointer;">consumption-based pricing</a> with no upfront costs.</p>

Read article →

Amazon RDS for Oracle is now available with R7i memory-optimized instances offering up to 64:1 memory-to-vCPU ratio

<p><a contenteditable="false" href="https://aws.amazon.com/rds/oracle/" style="cursor: pointer;" target="_blank">Amazon Relational Database Service (RDS) for Oracle</a> is now available with R7i memory-optimized preconfigured instances that offer additional memory and storage I/O per vCPU. Powered by custom 4th Gen Intel Xeon Scalable processors with AWS Nitro System and DDR5 memory for high performance, these instances provide up to 64:1 memory-to-vCPU ratio. Many Oracle database workloads require high memory, but can safely reduce the number of vCPUs without impacting application performance. By running such Oracle database workloads on R7i pre-configured instances, customers can lower their Oracle database licensing and support costs while meeting high performance application requirements.<br /> <br /> Memory optimized R7i pre-configured instances are available for Amazon RDS for Oracle with Bring Your Own License (BYOL) license model supporting both Oracle Database Enterprise Edition and Oracle Database Standard Edition 2. To learn more about Amazon RDS for Oracle R7i memory-optimized preconfigured instances, read <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Concepts.InstanceClasses.html" style="cursor: pointer;" target="_blank">RDS for Oracle User Guide</a> and visit <a contenteditable="false" href="https://aws.amazon.com/rds/oracle/pricing/" style="cursor: pointer;" target="_blank">Amazon RDS for Oracle Pricing</a> for available instance configurations, pricing details, and region availability.</p>

Read article →

Amazon Route 53 Resolver now supports AWS PrivateLink

<p>Amazon Route 53 Resolver now supports <a href="https://aws.amazon.com/privatelink/" target="_blank">AWS PrivateLink</a>. Customers can now access and manage <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html" target="_blank">Route 53 Resolver</a> and all the related features (Resolver endpoints, Route 53 Resolver DNS Firewall, Resolver Query Logging, Resolver for AWS Outposts) privately, without going through the public internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely over the Amazon network. When Route 53 Resolver and its features are accessed via AWS PrivateLink, all operations, such as creating, deleting, editing, and listing, can be handled via the Amazon private network.<br /> <br /> Amazon Route 53 Resolver responds recursively to DNS queries from AWS resources for public records, Amazon VPC-specific DNS names, and Amazon Route 53 private hosted zones, and is available by default in all VPCs. Route 53 Resolver also offers features (Resolver endpoints, Route 53 Resolver DNS Firewall, Resolver Query Logging, Resolver for AWS Outposts) that you can opt into. You can use Resolver and its features with AWS PrivateLink in Regions where Route 53 Resolver and all its associated features are available today, including the AWS GovCloud (US) Regions. For more information about the AWS Regions where Resolver and its features are available, see <a href="https://docs.aws.amazon.com/general/latest/gr/r53.html" target="_blank">here</a>.<br /> <br /> To learn more about Route 53 Resolver and its features, please refer to the service <a href="https://docs.aws.amazon.com/Route53/latest/APIReference/API_Operations_Amazon_Route_53_Resolver.html" target="_blank">documentation</a>.</p>

Read article →

AWS Config launches 42 new managed rules

<p>AWS Config announces the launch of 42 additional managed Config rules for various use cases such as security, cost, durability, and operations. You can now search, discover, enable, and manage these additional rules directly from AWS Config and govern more use cases for your AWS environment.</p> <p>With this launch, you can now enable these controls across your account or across your organization. For example, you can evaluate your tagging strategies across Amazon EKS Fargate profiles, Amazon EC2 Network Insights Analyses, and AWS Glue machine learning transforms. Or you can assess your security posture across Amazon Cognito Identity pools, Amazon Lightsail buckets, AWS Amplify apps, and more. Additionally, you can leverage Conformance Packs to group these new controls and deploy them across an account or across your organization, streamlining your multi-account governance.</p> <p>For the full list of recently released rules, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/config/latest/developerguide/DocumentHistory.html" style="cursor: pointer;">AWS Config developer guide</a>. For a description of each rule and the AWS Regions in which it is available, please refer to our <a contenteditable="false" href="https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html" style="cursor: pointer;">Config managed rules documentation</a>. To start using Config rules, please refer to our <a contenteditable="false" href="https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_add-rules.html" style="cursor: pointer;">documentation</a>; a minimal example of enabling one of the new rules follows the list below.<br /> <br /> New Rules Launched:<br /> </p> <ol> <li>AMPLIFY_APP_NO_ENVIRONMENT_VARIABLES</li> <li>AMPLIFY_BRANCH_DESCRIPTION</li> <li>APIGATEWAY_STAGE_DESCRIPTION</li> <li>APIGATEWAYV2_STAGE_DESCRIPTION</li> <li>API_GWV2_STAGE_DEFAULT_ROUTE_DETAILED_METRICS_ENABLED</li> <li>APIGATEWAY_STAGE_ACCESS_LOGS_ENABLED</li> <li>APPCONFIG_DEPLOYMENT_STRATEGY_MINIMUM_FINAL_BAKE_TIME</li> <li>APPCONFIG_DEPLOYMENT_STRATEGY_TAGGED</li> <li>APPFLOW_FLOW_TRIGGER_TYPE_CHECK</li> <li>APPMESH_VIRTUAL_NODE_CLOUD_MAP_IP_PREF_CHECK</li> <li>APPMESH_VIRTUAL_NODE_DNS_IP_PREF_CHECK</li> <li>APPRUNNER_SERVICE_IP_ADDRESS_TYPE_CHECK</li> <li>APPRUNNER_SERVICE_MAX_UNHEALTHY_THRESHOLD</li> <li>APS_RULE_GROUPS_NAMESPACE_TAGGED</li> <li>AUDITMANAGER_ASSESSMENT_TAGGED</li> <li>BATCH_MANAGED_COMPUTE_ENV_ALLOCATION_STRATEGY_CHECK</li> <li>BATCH_MANAGED_SPOT_COMPUTE_ENVIRONMENT_MAX_BID</li> <li>COGNITO_IDENTITY_POOL_UNAUTHENTICATED_LOGINS</li> <li>COGNITO_USER_POOL_PASSWORD_POLICY_CHECK</li> <li>CUSTOMERPROFILES_DOMAIN_TAGGED</li> <li>DEVICEFARM_PROJECT_TAGGED</li> <li>DEVICEFARM_TEST_GRID_PROJECT_TAGGED</li> <li>DMS_REPLICATION_INSTANCE_MULTI_AZ_ENABLED</li> <li>EC2_LAUNCH_TEMPLATES_EBS_VOLUME_ENCRYPTED</li> <li>EC2_NETWORK_INSIGHTS_ANALYSIS_TAGGED</li> <li>EKS_FARGATE_PROFILE_TAGGED</li> <li>GLUE_ML_TRANSFORM_TAGGED</li> <li>IOT_SCHEDULED_AUDIT_TAGGED</li> <li>IOT_PROVISIONING_TEMPLATE_DESCRIPTION</li> <li>IOT_PROVISIONING_TEMPLATE_JITP</li> <li>IOT_PROVISIONING_TEMPLATE_TAGGED</li> <li>KINESIS_VIDEO_STREAM_MINIMUM_DATA_RETENTION</li> <li>LAMBDA_FUNCTION_DESCRIPTION</li> <li>LIGHTSAIL_BUCKET_ALLOW_PUBLIC_OVERRIDES_DISABLED</li> <li>RDS_MYSQL_CLUSTER_COPY_TAGS_TO_SNAPSHOT_CHECK</li> <li>RDS_PGSQL_CLUSTER_COPY_TAGS_TO_SNAPSHOT_CHECK</li> <li>ROUTE53_RESOLVER_FIREWALL_DOMAIN_LIST_TAGGED</li> <li>ROUTE53_RESOLVER_FIREWALL_RULE_GROUP_ASSOCIATION_TAGGED</li> <li>ROUTE53_RESOLVER_FIREWALL_RULE_GROUP_TAGGED</li> <li>ROUTE53_RESOLVER_RESOLVER_RULE_TAGGED</li>
<li>RUM_APP_MONITOR_TAGGED</li> <li>RUM_APP_MONITOR_CLOUDWATCH_LOGS_ENABLED</li> </ol>
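<p>As referenced above, a minimal sketch of enabling one of the newly launched managed rules with the PutConfigRule API; the rule name used for ConfigRuleName is illustrative, while the SourceIdentifier comes from the list above.</p>
<pre><code>import boto3

config = boto3.client("config")

# Enable one of the newly launched managed rules in the current account and Region.
# For AWS managed rules, SourceIdentifier is the rule identifier from the list above.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-launch-templates-ebs-volume-encrypted",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "EC2_LAUNCH_TEMPLATES_EBS_VOLUME_ENCRYPTED",
        },
    }
)
</code></pre>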

Read article →

AWS Service Reference Information now supports SDK Operation to Action mapping

<p>AWS is expanding service reference information to include which operations are supported by AWS services and which IAM permissions are needed to call a given operation. This will help you answer questions such as “I want to call a specific AWS service operation, which IAM permissions do I need?”<br /> <br /> You can automate the retrieval of service reference information, eliminating manual effort and ensuring your policies align with the latest service updates. You can also incorporate this service reference information directly into your policy management tools and processes for a seamless integration. This feature is offered at no additional cost. To get started, refer to the documentation on <a href="https://docs.aws.amazon.com/service-authorization/latest/reference/service-reference.html" target="_blank">programmatic service reference information</a>.</p>
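<p>A minimal sketch of retrieving the service reference information with only the Python standard library. The base URL and the shape of the returned index (a list of entries with "service" and "url" keys) are assumptions taken from the service reference documentation linked above; confirm both before relying on them.</p>
<pre><code>import json
import urllib.request

# Base URL is an assumption; confirm it against the service reference documentation.
BASE_URL = "https://servicereference.us-east-1.amazonaws.com/"

with urllib.request.urlopen(BASE_URL) as resp:
    services = json.load(resp)   # assumed: list of {"service": ..., "url": ...} entries

# Fetch one per-service document and inspect its action/operation information.
s3_entry = next(s for s in services if s.get("service") == "s3")
with urllib.request.urlopen(s3_entry["url"]) as resp:
    s3_reference = json.load(resp)

print(list(s3_reference.keys()))
</code></pre>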

Read article →

Amazon OpenSearch Serverless now supports FIPS compliant endpoints

<p>Amazon OpenSearch Serverless has added support for Federal Information Processing Standards (FIPS) compliant endpoints for Data Plane APIs in US East (N. Virginia), US East (Ohio), Canada (Central), AWS GovCloud (US-East), and AWS GovCloud (US-West). The service now meets the security requirements for cryptographic modules as outlined in <a href="https://aws.amazon.com/compliance/fips/">Federal Information Processing Standard (FIPS) 140-3</a>.<br /> <br /> Please refer to the <a href="https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-service-regions">AWS Regional Services List</a> for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless FIPS, <a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/fips-compliance-opensearch-serverless.html">see the documentation</a>.</p>

Read article →

EC2 Auto Scaling announces warm pool support for Auto Scaling groups that have mixed instances policies

<p>Starting today, you can add warm pools to Auto Scaling groups (ASGs) that have <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-mixed-instances-groups.html">mixed instances policies</a>. With warm pools, customers can improve the elasticity of their applications by creating a pool of pre-initialized EC2 instances that are ready to quickly serve application traffic. By combining warm pools with instance type flexibility, an ASG can rapidly scale out to its maximum size at any time, deploying applications across multiple instance types to enhance availability.<br /> <br /> Warm pools are particularly beneficial for applications with lengthy initialization processes, such as writing large amounts of data to disk, running complex custom scripts, or other time-consuming setup procedures that can take several minutes or longer to serve traffic. With this new release, the warm pool feature now works seamlessly with ASGs configured for multiple On-Demand instance types, whether specified through manual instance type lists or attribute-based instance type selection. The combination of instance type flexibility and warm pools provides a powerful solution that helps customers scale out efficiently while maximizing availability.<br /> <br /> The warm pool feature is available through the <a href="https://console.aws.amazon.com/console/home">AWS Management Console</a>, the <a href="https://aws.amazon.com/tools/">AWS SDKs</a>, and the <a href="https://aws.amazon.com/cli/">AWS Command Line Interface (CLI)</a>. It is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">public AWS Regions</a> and <a href="https://aws.amazon.com/govcloud-us/">AWS GovCloud (US)</a> Regions. To learn more about warm pools, visit this <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html">AWS documentation</a>.</p>
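For a concrete starting point, the following boto3 sketch attaches a warm pool to an existing group that uses a mixed instances policy; the group name and sizes are placeholders you would tune to your own scaling limits.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Attach a warm pool to an existing Auto Scaling group. With this launch, the
# same call also works for groups configured with a mixed instances policy.
autoscaling.put_warm_pool(
    AutoScalingGroupName="my-mixed-instances-asg",  # placeholder group name
    MinSize=5,                    # keep at least 5 pre-initialized instances warm
    MaxGroupPreparedCapacity=20,  # cap warm + in-service instances at 20
    PoolState="Stopped",          # warm instances wait in the Stopped state
)

# Inspect the warm pool configuration and its instances.
print(autoscaling.describe_warm_pool(AutoScalingGroupName="my-mixed-instances-asg"))
```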

Read article →

Amazon Kinesis Data Streams launches On-demand Advantage mode

<p>Amazon Kinesis Data Streams launches On-demand Advantage, so customers can warm on-demand streams to handle instant throughput increases of up to 10 GB per second or 10 million events per second, eliminating the need to over-provision or build custom scaling solutions. Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale. On-demand streams automatically scale capacity based on data usage, and now you can warm write capacity ad hoc. On-demand Advantage also provides a simpler pricing structure that removes the fixed, per-stream charge, so customers only pay for data usage at better rates.<br /> <br /> On-demand Advantage prices data usage 60% lower than On-demand Standard, with data ingest at $0.032/GB and data retrieval at $0.016/GB in the US East (N. Virginia) Region. The price of Enhanced fan-out data retrieval is the same as shared-throughput retrievals, making higher fan-out use cases more cost effective. The mode also decreases the price of extended retention by 77%, from $0.10/GB-month to $0.023/GB-month. Once you enable On-demand Advantage mode, the account will be billed for a minimum of 25 MB/s of data ingest and 25 MB/s of data retrieval at the lower rates across all on-demand streams. The new pricing means On-demand Advantage is the most cost effective way to stream with Kinesis Data Streams when you ingest at least 10 MB/s in aggregate, fan out to more than two consumer applications, or have hundreds of streams in a Region. You can check directly in the <a href="https://console.aws.amazon.com/kinesis/home" target="_blank">Kinesis console</a> and on the <a href="https://aws.amazon.com/kinesis/data-streams/pricing/" target="_blank">pricing page</a> whether On-demand Advantage is a good fit for your account.<br /> <br /> On-demand Advantage is available in all AWS Regions where Kinesis Data Streams is available, including the AWS GovCloud (US) and China Regions. To learn more, see the <a href="https://aws.amazon.com/blogs/big-data/amazon-kinesis-data-streams-launches-on-demand-advantage-for-instant-throughput-increases-and-streaming-at-scale/" target="_blank">launch blog</a> and the <a href="https://docs.aws.amazon.com/streams/latest/dev/working-with-streams.html" target="_blank">Kinesis Data Streams User Guide</a>.</p>
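As a back-of-the-envelope illustration only (not a billing calculation), the snippet below applies the US East (N. Virginia) rates quoted above to the 25 MB/s ingest and retrieval minimums to estimate the monthly floor for an account with On-demand Advantage enabled.

```python
# Rough monthly minimum under On-demand Advantage in US East (N. Virginia),
# using the rates quoted above. Illustrative only; see the pricing page for
# the authoritative calculation.
SECONDS_PER_MONTH = 30 * 24 * 3600

minimum_gb = 25 / 1000 * SECONDS_PER_MONTH   # 25 MB/s expressed as GB per month

ingest_cost = minimum_gb * 0.032             # $0.032 per GB ingested
retrieval_cost = minimum_gb * 0.016          # $0.016 per GB retrieved

print(f"minimum data volume: {minimum_gb:,.0f} GB/month")
print(f"ingest minimum:      ${ingest_cost:,.2f}")
print(f"retrieval minimum:   ${retrieval_cost:,.2f}")
```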

Read article →

AWS Config now supports 52 new resource types

<p>AWS Config now supports 52 additional AWS resource types across key services including Amazon EC2, Amazon Bedrock, and Amazon SageMaker. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources.<br /> <br /> With this launch, if you have enabled recording for all resource types, then AWS Config will automatically track these new additions. The newly supported resource types are also available in Config rules and Config aggregators.<br /> <br /> You can now use AWS Config to monitor the following newly supported resource types in all <a href="https://docs.aws.amazon.com/config/latest/developerguide/what-is-resource-config-coverage.html" target="_blank">AWS Regions</a> where the supported resources are available:<br /> </p> <table> <tbody> <tr> <td>Resource Types</td> <td>&nbsp;</td> </tr> <tr> <td>AWS::ApiGateway::DomainName</td> <td>AWS::IAM::GroupPolicy</td> </tr> <tr> <td>AWS::ApiGateway::Method</td> <td>AWS::IAM::RolePolicy</td> </tr> <tr> <td>AWS::ApiGateway::UsagePlan</td> <td>AWS::IAM::UserPolicy</td> </tr> <tr> <td>AWS::AppConfig::Extension</td> <td>AWS::IoTCoreDeviceAdvisor::SuiteDefinition</td> </tr> <tr> <td>AWS::Bedrock::ApplicationInferenceProfile</td> <td>AWS::MediaPackageV2::Channel</td> </tr> <tr> <td>AWS::Bedrock::Prompt</td> <td>AWS::MediaPackageV2::ChannelGroup</td> </tr> <tr> <td>AWS::BedrockAgentCore::BrowserCustom</td> <td>AWS::MediaTailor::LiveSource</td> </tr> <tr> <td>AWS::BedrockAgentCore::CodeInterpreterCustom&nbsp; &nbsp; &nbsp; &nbsp;&nbsp;</td> <td>AWS::MSK::ServerlessCluster</td> </tr> <tr> <td>AWS::BedrockAgentCore::Runtime</td> <td>AWS::PaymentCryptography::Alias</td> </tr> <tr> <td>AWS::CloudFormation::LambdaHook</td> <td>AWS::PaymentCryptography::Key</td> </tr> <tr> <td>AWS::CloudFormation::StackSet</td> <td>AWS::RolesAnywhere::CRL</td> </tr> <tr> <td>AWS::Comprehend::Flywheel</td> <td>AWS::RolesAnywhere::Profile</td> </tr> <tr> <td>AWS::Config::AggregationAuthorization</td> <td>AWS::S3::AccessGrant</td> </tr> <tr> <td>AWS::DataSync::Agent</td> <td>AWS::S3::AccessGrantsInstance</td> </tr> <tr> <td>AWS::Deadline::Fleet</td> <td>AWS::S3::AccessGrantsLocation</td> </tr> <tr> <td>AWS::Deadline::QueueFleetAssociation</td> <td>AWS::SageMaker::DataQualityJobDefinition</td> </tr> <tr> <td>AWS::EC2::IPAMPoolCidr</td> <td>AWS::SageMaker::MlflowTrackingServer</td> </tr> <tr> <td>AWS::EC2::SubnetNetworkAclAssociation</td> <td>AWS::SageMaker::ModelBiasJobDefinition</td> </tr> <tr> <td>AWS::EC2::VPCGatewayAttachment</td> <td>AWS::SageMaker::ModelExplainabilityJobDefinition</td> </tr> <tr> <td>AWS::ECR::RepositoryCreationTemplate</td> <td>AWS::SageMaker::ModelQualityJobDefinition</td> </tr> <tr> <td>AWS::ElasticLoadBalancingV2::TargetGroup</td> <td>AWS::SageMaker::MonitoringSchedule</td> </tr> <tr> <td>AWS::EMR::Studio</td> <td>AWS::SageMaker::StudioLifecycleConfig</td> </tr> <tr> <td>AWS::EMRContainers::VirtualCluster</td> <td>AWS::SecretsManager::RotationSchedule</td> </tr> <tr> <td>AWS::EMRServerless::Application</td> <td>AWS::SES::DedicatedIpPool</td> </tr> <tr> <td>AWS::EntityResolution::MatchingWorkflow</td> <td>AWS::SES::MailManagerTrafficPolicy</td> </tr> <tr> <td>AWS::Glue::Registry</td> <td>AWS::SSM::ResourceDataSync</td> </tr> </tbody> </table> <p>To view the complete list of AWS Config supported resource types, see the&nbsp;<a href="https://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html" 
target="_blank">supported resource types</a> page.</p>

Read article →

Amazon CloudWatch Synthetics adds multi-browser support in AWS GovCloud Regions

<p>Amazon CloudWatch Synthetics multi-browser support is now available in the AWS GovCloud (US-East, US-West) Regions. This expansion enables customers in these two regions to test and monitor their web applications using both Chrome and Firefox browsers.<br /> <br /> With this launch, you can run the same canary script across Chrome and Firefox when using Playwright-based canaries or Puppeteer-based canaries. CloudWatch Synthetics automatically collects browser-specific performance metrics, success rates, and visual monitoring results while maintaining an aggregate view of overall application health. This helps development and operations teams quickly identify and resolve browser compatibility issues that could affect application reliability.<br /> <br /> To learn more about configuring multi-browser canaries, see the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html">canary docs</a> in the Amazon CloudWatch Synthetics User Guide.&nbsp;</p>

Read article →

Mountpoint for Amazon S3 and Mountpoint for Amazon S3 CSI driver add monitoring capability

<p>You can now monitor Mountpoint operations in observability tools such as Amazon CloudWatch, Prometheus, and Grafana. With this launch, Mountpoint emits near real-time metrics such as request count or request latency using OpenTelemetry Protocol (OTLP), an open source data transmission protocol. This means you can use applications such as CloudWatch agent or the OpenTelemetry (OTel) collector to publish the metrics into observability tools and create dashboards for monitoring and troubleshooting.<br /> <br /> Previously, Mountpoint emitted operational data into log files, and you needed to create custom tools to parse the log files for insights. Now, when you mount your Amazon S3 bucket, you can configure Mountpoint to publish the metrics to an observability tool to proactively monitor issues that might impact your applications. For example, you can check if an application is unable to access S3 due to permission issues by analyzing the S3 request error metric that provides error types at an Amazon EC2 instance granularity.<br /> <br /> Follow the <a href="https://github.com/awslabs/mountpoint-s3/blob/main/doc/METRICS.md" target="_blank">step-by-step instructions</a> to set up the CloudWatch agent or the OTel collector and configure Mountpoint to publish metrics into an observability tool. For more information, visit the <a href="https://github.com/awslabs/mountpoint-s3" target="_blank">Mountpoint for Amazon S3 GitHub repository</a>, <a href="https://aws.amazon.com/s3/features/mountpoint/" target="_blank">Mountpoint product page</a>, and <a href="https://github.com/awslabs/mountpoint-s3-csi-driver" target="_blank">Mountpoint for Amazon S3 CSI driver GitHub page</a>.</p>

Read article →

Amazon CloudWatch Agent adds support for NVMe Local Volume Performance Statistics

<p>Amazon CloudWatch agent now supports the collection of detailed performance metrics for NVMe local volumes on Amazon EC2 instances. These metrics give you insights into behavior and performance characteristics of your NVMe local storage.<br /> <br /> The CloudWatch agent can now be configured to collect and send detailed NVMe metrics to CloudWatch, providing deeper visibility into storage performance. The new metrics include comprehensive performance indicators such as queue depths, I/O sizes, and device utilization. These metrics are similar to the <a href="https://docs.aws.amazon.com/ebs/latest/userguide/nvme-detailed-performance-stats.html" target="_blank">detailed performance statistics available for EBS volumes</a>, providing a consistent monitoring experience across both storage types. You can create CloudWatch dashboards, set alarms, and analyze trends for your NVMe-based instance store volumes.<br /> <br /> Detailed performance statistics for Amazon EC2 instance store volumes via Amazon CloudWatch agent are available for all local NVMe volumes attached to Nitro-based EC2 instances in all AWS Commercial and AWS GovCloud (US) Regions. See the <a href="https://aws.amazon.com/cloudwatch/pricing/" target="_blank">Amazon CloudWatch pricing page</a> for CloudWatch pricing details.<br /> <br /> To get started with detailed performance statistics for Amazon EC2 instance store volumes in CloudWatch, see <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-instance-store-Collect.html" target="_blank">Collect Amazon EC2 instance store volume NVMe driver metrics</a> in the Amazon CloudWatch User Guide. To learn more about detailed performance statistics for Amazon EC2 instance store volumes, see&nbsp;<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-detailed-performance-stats.html" target="_blank">Amazon EC2 instance store volumes</a> in the Amazon EC2 User Guide.</p>

Read article →

Amazon Cognito removes Machine-to-Machine app client price dimension

<p>We're excited to announce a simplified pricing model for Amazon Cognito's machine-to-machine (M2M) authentication. Starting today, we are removing the M2M app client pricing dimension, making it more cost-effective for customers to build and scale their M2M applications. Cognito supports applications that access API data with <i>machine identities</i>. Machine identities in user pools are clients that run on application servers and connect to remote APIs. They operate without user interaction, for example to run scheduled tasks, process data streams, or update assets. This change reduces the price of Cognito for customers using M2M authentication by removing the app client price dimension. Customers will continue to be charged based on the number of successful M2M token requests per month.<br /> <br /> Previously, customers were charged for each M2M app client registered, regardless of usage, and for each successful token request made by the app client to access a resource. With this change, customers will only pay for their successful token requests, making it more cost-effective to build and scale M2M applications using Amazon Cognito.<br /> <br /> This pricing change is automatic and requires no action from customers. It is effective in all supported Amazon Cognito Regions. To learn more about Amazon Cognito pricing, visit our <a href="https://aws.amazon.com/cognito/pricing/" target="_blank">pricing page</a>.</p>

Read article →

New SAP on AWS GROW Region Availability for SAP Cloud ERP

<p>SAP Cloud ERP on AWS (GROW) is now available in the Europe (Frankfurt) Region. As a complete offering of solutions, best practices, adoption acceleration services, community, and learning, SAP Cloud ERP on AWS helps organizations of any size adopt cloud enterprise resource planning (ERP) with speed, predictability, and continuous innovation on the world's most comprehensive and broadly adopted cloud. SAP Cloud ERP on AWS can be implemented in months instead of years compared to traditional on-premises ERP implementations.<br /> <br /> By implementing SAP Cloud ERP on AWS, you can simplify everyday work, grow your business, and secure your success. At the core of SAP Cloud ERP on AWS is SAP S/4HANA Cloud, Public edition, a full-featured SaaS ERP suite built on the learnings of SAP's 50+ years of industry best practices. SAP Cloud ERP on AWS allows your organization to gain end-to-end process visibility and control with integrated systems across HR, procurement, sales, finance, supply chain, and manufacturing. It also includes SAP Business AI-powered processes leveraging AWS to provide data-driven insights and recommendations. Customers can also innovate with generative AI using their SAP data through Amazon Bedrock models in the SAP generative AI hub. SAP Cloud ERP on AWS takes advantage of AWS Graviton processors, which use up to 60% less energy than comparable cloud instances for the same performance.<br /> <br /> To learn more about deploying SAP Cloud ERP on AWS, explore the <a href="http://aws.amazon.com/sap/grow" target="_blank">SAP on AWS product page</a>.</p>

Read article →

Amazon VPC IPAM automates prefix list updates

<p>Today, AWS announced the ability for Amazon VPC IP Address Manager (IPAM) to automate prefix list updates with the prefix list resolver (PLR). This feature allows network administrators to automatically update prefix lists based on their business logic in IPAM, improving their operational posture and reducing overhead.<br /> <br /> Using IPAM PLR, you can define business rules for synchronizing prefix lists with IP address ranges from various resources, such as VPCs, subnets, and IPAM pools. These prefix lists can then be referenced in resources such as route tables and security groups across your AWS environment, based on your connectivity requirements. Previously, you had to manually update your prefix lists to add or remove IP address ranges based on changes to your AWS environment, which was operationally complex and error prone. IPAM PLR automates prefix list updates with no manual intervention, improving your operational posture.<br /> <br /> This feature is now available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a> where Amazon VPC IPAM is supported, including the AWS China Regions and AWS GovCloud (US) Regions.<br /> <br /> To learn more about this feature, view the <a href="https://docs.aws.amazon.com/vpc/latest/ipam/automate-prefix-list-updates.html">AWS IPAM documentation</a>. For details on pricing, refer to the IPAM tab on the <a href="https://aws.amazon.com/vpc/pricing/">Amazon VPC Pricing Page</a>.</p>

Read article →

Amazon RDS extends IPv6 support for publicly accessible databases

<p>Amazon Relational Database Service (RDS) now extends the Internet Protocol Version 6 (IPv6) support to <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/gettingstartedguide/security-public-private.html#security-public" style="cursor: pointer;">publicly accessible databases</a>, in addition to the existing support for privately accessible databases within a VPC. This allows you to configure dual-stack (IPv4 and IPv6) connectivity for your publicly accessible RDS and Aurora databases.<br /> <br /> <a contenteditable="false" href="https://aws.amazon.com/vpc/ipv6/" style="cursor: pointer;">IPv6</a> provides an expanded address space, enabling you to scale your application on AWS beyond the limitations of IPv4 addresses. With IPv6, you can assign easy to manage contiguous IP ranges to micro-services and can get virtually unlimited scale for your applications. Moreover, with support for both IPv4 and IPv6, you can gradually transition applications from IPv4 to IPv6, enabling safer migration.<br /> <br /> This feature is available in all AWS regions where IPv6 support for privately accessible RDS databases within a VPC is already available. Get started with the AWS CLI or <a contenteditable="false" href="https://console.aws.amazon.com/rds/home" style="cursor: pointer;">AWS Management Console</a>.<br /> <br /> To learn more about configuring your environment for IPv6, please refer to the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html" style="cursor: pointer;">IPv6 User Guide</a>.</p>
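A minimal boto3 sketch of switching an existing publicly accessible instance to dual-stack networking; the instance identifier is a placeholder, and the instance's subnet group must already have IPv6 CIDRs associated with its subnets.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Move a publicly accessible instance to dual-stack (IPv4 and IPv6) networking.
rds.modify_db_instance(
    DBInstanceIdentifier="my-public-postgres",  # placeholder identifier
    NetworkType="DUAL",                         # accept IPv4 and IPv6 connections
    PubliclyAccessible=True,
    ApplyImmediately=True,
)
```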

Read article →

Announcing larger instances for Amazon Lightsail

<p>Amazon Lightsail now offers three larger instance bundles with up to 64 vCPUs and 256 GB of memory. The new instance bundles are available with Linux operating system (OS) and application blueprints, for both IPv6-only and dual-stack networking types. You can create instances using the new bundles with pre-configured Linux OS and application blueprints including WordPress, cPanel &amp; WHM, Plesk, Drupal, Magento, MEAN, LAMP, Node.js, Amazon Linux, Ubuntu, CentOS, Debian, AlmaLinux, and Windows.<br /> <br /> The new larger instance bundles enable you to scale your web applications and run more compute- and memory-intensive workloads in Lightsail. These higher-performance instance bundles are ideal for general purpose workloads that require the ability to handle large spikes in load. Using these new bundles, you can run web and application servers, large databases, virtual desktops, batch processing, enterprise applications, and more.<br /> <br /> These new bundles are now available in all <a href="https://docs.aws.amazon.com/lightsail/latest/userguide/understanding-regions-and-availability-zones-in-amazon-lightsail.html" target="_blank">AWS Regions where Amazon Lightsail is available</a>. For more information on pricing, or to get started with your free account, <a href="https://aws.amazon.com/lightsail/pricing/" target="_blank">click here</a>.</p>
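Since bundle IDs vary, one way to pick up the new sizes programmatically is to list active bundles and filter by memory before creating an instance; a boto3 sketch with placeholder names and a placeholder blueprint ID:

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# Find the larger bundles (for example, 128 GB of memory or more) that are active.
bundles = lightsail.get_bundles()["bundles"]
large = [b for b in bundles if b["isActive"] and b["ramSizeInGb"] >= 128]
for b in large:
    print(b["bundleId"], b["cpuCount"], "vCPU,", b["ramSizeInGb"], "GB")

# Launch an instance with one of them (blueprint ID is a placeholder; list
# available blueprints with get_blueprints()).
lightsail.create_instances(
    instanceNames=["big-app-server"],
    availabilityZone="us-east-1a",
    blueprintId="ubuntu_22_04",
    bundleId=large[0]["bundleId"],
)
```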

Read article →

Amazon Route 53 Resolver now supports AWS PrivateLink

<p>Amazon Route 53 Resolver now supports <a href="https://aws.amazon.com/privatelink/" target="_blank">AWS PrivateLink</a>. Customers can now access and manage <a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html" target="_blank">Route 53 Resolver</a> and all the related features (Resolver endpoints, Route 53 Resolver DNS Firewall, Resolver Query Logging, Resolver for AWS Outposts) privately, without going through the public internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely over the Amazon network. When Route 53 Resolver and its features are accessed via AWS PrivateLink, all operations, such as creating, deleting, editing, and listing, can be handled via the Amazon private network.<br /> <br /> Amazon Route 53 Resolver responds recursively to DNS queries from AWS resources for public records, Amazon VPC-specific DNS names, and Amazon Route 53 private hosted zones, and is available by default in all VPCs. Route 53 Resolver also offers features (Resolver endpoints, Route 53 Resolver DNS Firewall, Resolver Query Logging, Resolver for AWS Outposts) that you can opt in to. You can use Resolver and its features with AWS PrivateLink in Regions where Route 53 Resolver and all its associated features are available today, including the AWS GovCloud (US) Regions. For more information about the AWS Regions where Resolver and its features are available, see <a href="https://docs.aws.amazon.com/general/latest/gr/r53.html" target="_blank">here</a>.<br /> <br /> To learn more about Route 53 Resolver and its features, please refer to the service <a href="https://docs.aws.amazon.com/Route53/latest/APIReference/API_Operations_Amazon_Route_53_Resolver.html" target="_blank">documentation</a>.</p>
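A minimal boto3 sketch of creating the interface endpoint for the Resolver management APIs; the VPC, subnet, and security group IDs are placeholders, and the service name is assumed to follow the usual com.amazonaws.&lt;region&gt;.&lt;service&gt; pattern, so confirm it with describe_vpc_endpoint_services first.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Confirm the PrivateLink service name before creating the endpoint.
available = ec2.describe_vpc_endpoint_services(
    Filters=[{"Name": "service-name",
              "Values": ["com.amazonaws.us-east-1.route53resolver"]}]
)
print(available["ServiceNames"])

# Create an interface endpoint so Resolver management calls stay on the
# Amazon network. All resource IDs below are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.route53resolver",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```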

Read article →

Amazon GameLift Streams adds AWS Health notifications for aging resources

<p><a href="https://aws.amazon.com/gamelift/streams/">Amazon GameLift Streams</a> is now integrated with AWS Health and will provide automated notifications about aging stream groups. Customers are sent regular reminders via AWS Health to re-create their stream groups starting as early as the 45th day to the 335th day from the stream group creation date. Stream groups older than 180 days are restricted from adding new applications and automatically expire after the 365th day.<br /> <br /> This feature strengthens our customer’s security posture by helping customers manage the lifecycle of stream groups and prevent the use of outdated resources that might be missing updates. While the customer focuses on their game development, the service helps maintain the health of their resources.<br /> <br /> AWS Health will send a reminder to the linked account on the 45th day and on the 150th day from the stream group creation day, informing customers that the stream group will be restricted from adding new applications after the 180-day. A last reminder to re-create the stream group will be sent on 335th day informing customers that the stream group will expire on the 365th day.<br /> <br /> This feature is available in all AWS Regions where Amazon GameLift Streams is offered at no additional cost.<br /> <br /> Maintenance warnings or the expiration date of a stream group can be viewed on the Stream group details page on the service console, or by using the <i>ExpiresAt</i> field in the <i>GetStreamGroup</i> API response. <br /> <br /> To learn more about managing your stream groups and configuring notifications, visit the Amazon GameLift documentation on&nbsp;<b>Stream group lifecycle.</b></p>

Read article →

The Model Context Protocol (MCP) Proxy for AWS is now generally available

<p>Today, AWS announces the general availability of the Model Context Protocol (MCP) Proxy for AWS, a client-side proxy that enables MCP clients to connect to remote, AWS-hosted MCP servers using AWS SigV4 authentication. The Proxy supports popular agentic AI development tools like Amazon Q Developer CLI, Kiro, and Cursor, and popular agent frameworks like Strands Agents. Customers can connect to remote MCP servers with AWS credentials, using the Proxy to automatically handle MCP protocol communications via SigV4. The Proxy also helps customers connect to MCP servers built on Amazon Bedrock AgentCore Gateway or Runtime using SigV4 authentication.<br /> <br /> This release allows developers and agents to extend development workflows to include AWS service interactions from AWS MCP server tools. For example, you can use AWS MCP servers to work with resources like Amazon S3 buckets or Amazon RDS tables through existing MCP servers with SigV4. The MCP Proxy for AWS includes safety controls such as read-only mode to prevent unintended changes, configurable retry logic for reliability, and logging for troubleshooting. Customers can install the Proxy from source, through Python package managers, or by using a container, making it simple to configure with their preferred MCP-supported development tool.<br /> <br /> The MCP Proxy for AWS is open source and available now. Visit the <a href="https://github.com/aws/mcp-proxy-for-aws">AWS GitHub repository</a> to view the installation and configuration options and start connecting with remote AWS MCP servers today.</p>

Read article →

Amazon Aurora DSQL now supports FIPS 140-3 compliant endpoints

<p>Amazon Aurora DSQL now supports Federal Information Processing Standards (FIPS) 140-3 compliant endpoints, helping companies contracting with the US federal government meet the FIPS security requirement to encrypt sensitive data in supported Regions. With this launch, you can use Aurora DSQL for workloads that require a FIPS 140-3 validated cryptographic module when sending requests over public or VPC endpoints.<br /> <br /> Aurora DSQL is the fastest serverless, distributed SQL database with single- and multi-Region clusters providing active-active high availability and strong consistency. Aurora DSQL enables you to build applications with virtually unlimited scalability, the highest availability, and zero infrastructure management.<br /> <br /> Aurora DSQL FIPS compliant endpoints are now available in the following Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). To learn more about FIPS 140-3 at AWS, visit <a href="https://aws.amazon.com/compliance/fips/">FIPS 140-3 Compliance</a>.</p>

Read article →

Amazon DynamoDB Accelerator now supports AWS PrivateLink

<p>Amazon DynamoDB Accelerator (DAX) now supports AWS PrivateLink, enabling you to securely access DAX management APIs such as CreateCluster, DescribeClusters, and DeleteCluster over private IP addresses within your virtual private cloud (VPC). DAX clusters already run inside your VPC, and all data plane operations like GetItem and Query are handled privately within the VPC. With this launch, you can now perform cluster management operations privately, without connecting to the public regional endpoint.<br /> <br /> With AWS PrivateLink, you can simplify private network connectivity between virtual private clouds (VPCs), DAX, and your on-premises data centers using interface VPC endpoints and private IP addresses. It helps you meet compliance regulations and eliminates the need to use public IP addresses, configure firewall rules, or configure an Internet gateway to access DAX from your on-premises data centers.<br /> <br /> AWS PrivateLink for DAX is available in all Regions where DAX is available today. For information about DAX Regional availability, see the “Service endpoints” section in <a href="https://docs.aws.amazon.com/general/latest/gr/ddb.html" target="_blank">Amazon DynamoDB endpoints and quotas</a>. There is an additional cost to use the feature. Please see <a href="https://aws.amazon.com/privatelink/pricing/" target="_blank">AWS PrivateLink pricing</a> for more details. To get started with DAX and PrivateLink, see <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/dax-private-link.html" target="_blank">AWS PrivateLink for DAX</a>.</p>

Read article →

AWS Marketplace now offers pricing model flexibility and simplified deployment for AI agents and tools

<p>AWS Marketplace now offers flexible pricing models, simplified authentication, and streamlined deployment for AI agents and tools. The new capabilities include contract-based and usage-based pricing for <a contenteditable="false" href="https://aws.amazon.com/bedrock/agentcore/" style="cursor: pointer;">Amazon Bedrock AgentCore</a> Runtime containers, and simplified OAuth credential management through Quick Launch for API-based AI agents and tools. Customers can also use supported remote MCP servers procured through AWS Marketplace as MCP targets on AgentCore Gateway, making it easier for them to connect to AI agents and tools from AWS Partners at scale. The improvements reduce deployment complexity while offering pricing models that better align with diverse customer needs.<br /> <br /> For Partners, the new capabilities for AI agents and tools streamline management and provide additional pricing options through AWS Marketplace. Partners can now manage all their AI agents and tools listings from one page in the AWS Marketplace Management Portal, reducing the complexity of managing multiple listings across different interfaces. With usage-based and contract-based pricing options for AgentCore Runtime compatible products, Partners have more flexibility to implement pricing strategies that align with their business models and customers’ needs.<br /> <br /> Customers can learn more in the <a contenteditable="false" href="https://docs.aws.amazon.com/marketplace/latest/buyerguide/buyer-ai-agents-products.html" style="cursor: pointer;">buyer guide</a><b> </b>and start exploring AI agent solutions in AWS Marketplace on the <a contenteditable="false" href="https://aws.amazon.com/marketplace/solutions/ai-agents-and-tools" style="cursor: pointer;">solutions page</a>. For partners interested in implementing the capabilities, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/marketplace/latest/userguide/ai-agents-tools.html" style="cursor: pointer;">seller guide</a> and complete the <a contenteditable="false" href="https://catalog.workshops.aws/mpseller/en-US/use-cases/publish-agentcore-free" style="cursor: pointer;">workshop.</a></p>

Read article →

Amazon Connect now supports scheduling of individual agents

<p>Amazon Connect now supports scheduling of individual agents, giving you more flexibility in scheduling your workforce. For example, when onboarding 100 new agents to a business unit with schedules already published for the next two months, you can create schedules for only those new agents and automatically merge them with existing schedules. This eliminates the need for workarounds such as manually copying schedules from existing agents to new agents or regenerating schedules for the entire business unit, improving manager productivity and operational efficiency.<br /> <br /> This feature is available in all <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#optimization_region">AWS Regions</a> where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click <a href="https://docs.aws.amazon.com/connect/latest/adminguide/forecasting-capacity-planning-scheduling.html">here</a>.</p>

Read article →

AWS Step Functions announces a new metrics dashboard

<p>AWS Step Functions announces improved observability with a new metrics dashboard, giving you visibility into your workflow operations at both the account and state machine levels. <a href="https://aws.amazon.com/step-functions/" target="_blank">AWS Step Functions</a> is a visual workflow service capable of orchestrating more than 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads.<br /> <br /> With this launch, you can now view usage and billing metrics in one dashboard on the AWS Step Functions console. Metrics are available at both the account and state machine levels, for both standard and express workflows. In addition, existing metrics, such as ApproximateOpenMapRunCount, are available on the metrics dashboard.<br /> <br /> The new dashboard and metrics are available in all AWS Regions where AWS Step Functions is available. To get started, open the dashboard today in the <a href="https://console.aws.amazon.com/states/home/" target="_blank">AWS Step Functions console</a>. To learn more, visit the Step Functions <a href="https://docs.aws.amazon.com/step-functions/latest/dg/procedure-cw-metrics.html" target="_blank">developer guide</a>.</p>

Read article →

Split Cost Allocation Data for Amazon EKS supports Kubernetes labels

<p>Starting today, Split Cost Allocation Data for Amazon EKS now allows you to import up to 50 Kubernetes custom labels per pod as cost allocation tags. You can attribute costs of your Amazon EKS cluster at the pod level using custom attributes, such as cost center, application, business unit, and environment in AWS Cost and Usage Report (CUR).<br /> <br /> With this new <a href="https://docs.aws.amazon.com/cur/latest/userguide/split-cost-allocation-data-kubernetes-labels.html">capability</a>, you can better align your cost allocation with specific business requirements and organizational structure driven by your cloud financial management needs. This enables granular cost visibility of your EKS clusters running multiple application containers using shared EC2 instances, allowing you to allocate the shared costs of your EKS cluster. For new split cost allocation data customers, you can enable this feature in the AWS Billing and Cost Management console. For existing customers, EKS will automatically import the labels, but you must activate them as cost allocation tags. After activation, Kubernetes custom labels are available in your CUR within 24 hours. You can use the <a href="https://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/scad-containers-dashboard.html">Containers Cost Allocation dashboard</a> to visualize the costs in Amazon QuickSight and the <a href="https://catalog.workshops.aws/cur-query-library/en-US/queries/container">CUR query library</a> to query the costs using Amazon Athena.<br /> <br /> This feature is available in all AWS Regions where Split Cost Allocation Data for Amazon EKS is available. To get started, visit <a href="https://docs.aws.amazon.com/cur/latest/userguide/split-cost-allocation-data.html">Understanding Split Cost Allocation Data</a>.</p>
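For existing customers who need to activate the imported labels, a minimal boto3 sketch using the Cost Explorer cost allocation tag API; the tag keys below are placeholders for your own Kubernetes label keys as they appear among your cost allocation tags.

```python
import boto3

# Cost allocation tags are managed account-wide through Cost Explorer.
ce = boto3.client("ce", region_name="us-east-1")

# Activate imported Kubernetes labels as cost allocation tags so they show up
# in the CUR. The tag keys below are placeholders for your own label keys.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": "cost-center", "Status": "Active"},
        {"TagKey": "environment", "Status": "Active"},
    ]
)
```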

Read article →

AWS Clean Rooms launches advanced configurations to optimize SQL performance

<p>Today, AWS Clean Rooms announces support for advanced configurations to improve the performance of Spark SQL queries. This launch enables you to customize Spark properties and compute sizes for SQL queries at runtime, offering increased flexibility to meet your performance, scale, and cost requirements. <br /> <br /> With AWS Clean Rooms, you can configure Spark properties—such as shuffle partition settings for parallel processing and autoBroadcastJoinThreshold for optimizing join operations—to help you better control the behavior and tuning of SQL queries in a Clean Rooms collaboration. Additionally, you can choose to cache an existing table’s data containing results from a SQL query or create and cache a new table, which help improve the performance and reduce costs for complex queries using large datasets. For example, an advertiser running lift analysis on their advertising campaigns can specify a custom number of workers for an instance type and configure Spark properties—without editing their SQL query—to optimize costs.<br /> <br /> With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the <a href="https://docs.aws.amazon.com/general/latest/gr/clean-rooms.html#clean-rooms_region" target="_blank">AWS Regions</a> table. To learn more about collaborating with AWS Clean Rooms, visit <a href="https://aws.amazon.com/clean-rooms/" target="_blank">AWS Clean Rooms</a>.</p>

Read article →

Amazon Bedrock AgentCore Browser now reduces CAPTCHAs with Web Bot Auth (Preview)

<p>Amazon Bedrock AgentCore Browser provides a fast, secure, cloud-based browser for AI agents to interact with websites at scale. It now enables agents to establish trusted, accountable access quickly and reduce CAPTCHA interruptions in automated workflows through Web Bot Auth, a draft IETF protocol that cryptographically identifies AI agents to websites. Traditional security measures like CAPTCHAs, rate limits, and blocks often halt automated workflows because Web Application Firewalls (WAFs) treat all automated traffic as suspicious - meaning AI agents frequently need human intervention to complete their tasks.<br /> <br /> By enabling Web Bot Auth, AgentCore Browser streamlines bot verification across major security providers including Akamai Technologies, Cloudflare, and HUMAN Security. It automatically generates security credentials, signs HTTP requests with private keys, and registers verified identities - getting you started immediately without the need to register with multiple WAF providers or manage verification infrastructure.<br /> <br /> Web Bot Auth support for AgentCore Browser is available in preview in all nine AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).<br /> <br /> Learn more about this feature through the <a href="https://aws.amazon.com/blogs/machine-learning/reduce-captchas-for-ai-agents-browsing-the-web-with-web-bot-auth-preview-in-amazon-bedrock-agentcore-browser/">blog</a>, see the<a href="http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/browser-web-bot-auth.html"> Reduce CAPTCHAs with Web Bot Auth documentation</a> to get started with Web Bot Auth in Browser. AgentCore offers <a href="https://aws.amazon.com/bedrock/agentcore/pricing/">consumption-based pricing</a> with no upfront costs.</p>

Read article →

TwelveLabs’ Pegasus 1.2 model now available in three additional AWS regions

<p>Amazon announces the expansion of the TwelveLabs Pegasus 1.2 video understanding model to the US East (Ohio), US West (N. California), and Europe (Frankfurt) AWS Regions. This expansion makes it easier for customers to build and scale generative AI applications that can understand and interact with video content at an enterprise level.<br /> <br /> Pegasus 1.2 is a powerful video-first language model that can generate text based on the visual, audio, and textual content within videos. Specifically designed for long-form video, it excels at video-to-text generation and temporal understanding. With Pegasus 1.2's availability in these additional Regions, you can now build video-intelligence applications closer to your data and end users in key geographic locations, reducing latency and simplifying your architecture.<br /> <br /> With today's expansion, Pegasus 1.2 is now available in Amazon Bedrock across seven Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), US West (N. California), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Seoul). To get started with Pegasus 1.2, visit the <a href="https://console.aws.amazon.com/bedrock/">Amazon Bedrock console</a>. To learn more, read the <a href="https://aws.amazon.com/blogs/aws/twelvelabs-video-understanding-models-are-now-available-in-amazon-bedrock">blog</a>, <a href="https://aws.amazon.com/bedrock/twelvelabs/">product page</a>, <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock pricing</a>, and <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html">documentation</a>.</p>

Read article →

Amazon WorkSpaces announces USB redirection support for DCV WorkSpaces

<p>AWS announces USB redirection support for WorkSpaces running Amazon DCV protocol, enabling users to access locally connected USB devices from their virtual desktop environments. With this feature, customers can now connect a wide range of USB peripherals to their virtual desktops, including credit card readers, 3D mice, and other specialized devices.<br /> <br /> USB redirection addresses the need for direct access to USB devices that require specialized drivers or lack dedicated protocols. This capability is currently limited to WorkSpaces Personal with Windows desktops accessed from Windows client devices. Performance and device compatibility may vary, so testing with your specific USB peripherals is recommended before adding them to the allowlist.<br /> <br /> This feature is available in all AWS Regions where Amazon WorkSpaces is offered.<br /> <br /> For more information about USB redirection in Amazon WorkSpaces, see USB Redirection for DCV in the <a href="https://docs.aws.amazon.com/workspaces/latest/adminguide/group_policy.html">Amazon WorkSpaces Administration Guide</a>, or visit the <a href="https://aws.amazon.com/workspaces-family/workspaces/">Amazon WorkSpaces</a> page to learn more about virtual desktop solutions from AWS.</p>

Read article →

Amazon GameLift Servers adds telemetry metrics to all server SDKs and game engine plugins

<p>Today, Amazon GameLift Servers launched the addition of built-in telemetry metrics across all server SDKs and game engine plugins. Built on OpenTelemetry, an open source framework, Amazon GameLift Servers telemetry metrics enable game developers to generate, collect, and export critical client-side metrics for game-specific insights.<br /> <br /> With this release, Amazon GameLift Servers can now be configured to collect and publish telemetry metrics for game servers running on managed Amazon EC2 and container fleets. Customers can leverage both pre-defined metrics and custom metrics, publishing them to <a href="https://aws.amazon.com/prometheus/" target="_blank">Amazon Managed Service for Prometheus</a> or <a href="https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html" target="_blank">Amazon CloudWatch</a>. This data can be visualized through ready-to-use dashboards (via Amazon Managed Grafana or Amazon CloudWatch) to help game developers optimize resource utilization, improve player experience, and identify and resolve potential operational issues.<br /> <br /> Telemetry metrics are now available in all Amazon GameLift Servers <a href="https://docs.aws.amazon.com/gameliftservers/latest/developerguide/gamelift-regions.html" target="_blank">supported regions</a>, except AWS China. For more information on monitoring resources using telemetry metrics on Amazon GameLift Servers, please visit the <a href="https://docs.aws.amazon.com/gameliftservers/latest/developerguide/monitoring-gamelift-servers-metrics.html" target="_blank">Amazon GameLift Servers documentation</a>.</p>

Read article →

Amazon ECS Service Connect enhances observability with Envoy Access Logs

<p><a contenteditable="false" href="https://aws.amazon.com/ecs/" style="cursor: pointer;">Amazon Elastic Container Service</a> (Amazon ECS) Service Connect now supports Envoy access logs, providing deeper observability into request-level traffic patterns and service interactions. This new capability captures detailed per-request telemetry for end-to-end tracing, debugging, and compliance monitoring.<br /> <br /> Amazon ECS Service Connect makes it simple to build secure, resilient service-to-service communication across clusters, VPCs, and AWS accounts. It integrates service discovery and service mesh capabilities by automatically injecting AWS-managed Envoy proxies as sidecars that handle traffic routing, load balancing, and inter-service connectivity. Envoy Access logs capture detailed traffic metadata enabling request-level visibility into service communication patterns. This enables you to perform network diagnostics, troubleshoot issues efficiently, and maintain audit trails for compliance requirements.<br /> <br /> You can now configure access logs within ECS Service Connect by updating the ServiceConnectConfiguration to enable access logging. Query strings are redacted by default to protect sensitive data. Envoy access logs will output to the standard output (STDOUT) stream alongside application logs and flow through the existing ECS log pipeline without requiring additional infrastructure. This configuration supports all existing application protocols (HTTP, HTTP2, GRPC and TCP). This feature is available in all regions where Amazon ECS Service Connect is supported. To learn more, visit the <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect-envoy-access-logs.html" style="cursor: pointer;">Amazon ECS Developer Guide</a>.</p>

Read article →

Amazon S3 Access Grants are now available in additional AWS Regions

<p>You can now create Amazon S3 Access Grants in the AWS Asia Pacific (Thailand) and AWS Mexico (Central) Regions.<br /> <br /> Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity.<br /> <br /> Visit the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-limitations.html#access-grants-limitations-regions">AWS Region Table</a> for complete regional availability information. To learn more about Amazon S3 Access Grants, visit our <a href="https://aws.amazon.com/s3/features/access-grants/">product page</a>.</p>

Read article →

Introducing the Amazon OCSF Ready Specialization

<p>We are excited to announce the <a href="https://aws.amazon.com/security-lake/specialization-partners/" target="_blank">Amazon OCSF Ready Specialization</a> that recognizes AWS Partners who have technically validated their software solutions to integrate with OCSF-compatible Amazon services with proven customer success in production environments. The Open Cybersecurity Schema Framework (OCSF) is an open-source initiative that simplifies how security data is normalized and shared across your security tools. This validation ensures customers can confidently select solutions that will help them improve their security operations through standardized data formats, leading to efficient threat detection, vulnerability identification, and enhanced security analytics.<br /> <br /> The AWS Service Ready Program provides customers with AWS Partner software solutions that work with AWS Services. This specialization helps you quickly find and deploy pre-validated AWS Partner solutions that work seamlessly with OCSF-compatible Amazon services, reducing the complexity of your security operations. Partners can participate in the Amazon OCSF Ready designation by either sending logs and security events in the OCSF schema, or receiving logs or security events from OCSF-compatible Amazon services. This standardization helps customers to collect, combine, and analyze security data reducing the time and effort needed for security operations.<br /> <br /> Amazon OCSF Ready Partners receive AWS Specialization Program benefits, and have access to signature benefits, including private strategy sessions and AWS guest speaker support for virtual events. The Amazon OCSF specialization expands and replaces the Amazon Security Lake Specialization.<br /> <br /> To learn more about how to become an Amazon OCSF Ready Partner, visit the <a contenteditable="false" href="https://aws.amazon.com/partners/programs/specializations/" style="cursor: pointer;">AWS Service Ready Program</a> webpage.</p>

Read article →

AWS Cloud Map supports cross-account workloads in AWS GovCloud (US) Regions

<p>AWS Cloud Map now supports cross-account service discovery through integration with AWS Resource Access Manager (AWS RAM) in AWS GovCloud (US) Regions. This enhancement lets you seamlessly manage and discover cloud resources—such as Amazon ECS tasks, Amazon EC2 instances, and Amazon DynamoDB tables—across AWS accounts. By sharing your AWS Cloud Map namespace via AWS RAM, workloads in other accounts can discover and manage resources registered in that namespace. This enhancement simplifies resource sharing, reduces duplication, and promotes consistent service discovery across environments for organizations with multi-account architectures.<br /> <br /> You can now share your AWS Cloud Map namespaces using AWS RAM with individual AWS accounts, specific Organizational Units (OUs), or your entire AWS Organization. To get started, create a resource share in AWS RAM, add the namespaces you want to share, and specify the principals (accounts, OUs, or the organization) that should have access. This enables platform engineers to maintain a centralized service registry—or a small set of registries—and share them across multiple accounts, simplifying service discovery. Application developers can then build services that rely on a consistent, shared registry without worrying about availability or synchronization across accounts. AWS Cloud Map's cross-account service discovery support improves operational efficiency and makes it easier to scale service discovery as your organization grows by reducing duplication and streamlining access to namespaces.<br /> <br /> This feature is available now in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions via the AWS Management Console, API, SDK, CLI, and CloudFormation. To learn more, please refer to the AWS Cloud Map <a href="https://docs.aws.amazon.com/cloud-map/latest/dg/sharing-namespaces.html" style="cursor: pointer;">documentation</a>.</p>
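A minimal boto3 sketch of sharing a namespace through AWS RAM in GovCloud; the namespace ARN and account ID are placeholders.

```python
import boto3

ram = boto3.client("ram", region_name="us-gov-west-1")

# Share a Cloud Map namespace with another account; you could instead pass an
# OU ARN or the organization ARN as the principal.
share = ram.create_resource_share(
    name="shared-service-registry",
    resourceArns=[
        "arn:aws-us-gov:servicediscovery:us-gov-west-1:111122223333:"
        "namespace/ns-examplenamespace"  # placeholder namespace ARN
    ],
    principals=["444455556666"],       # placeholder account ID
    allowExternalPrincipals=False,     # keep sharing inside the organization
)
print(share["resourceShare"]["resourceShareArn"])
```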

Read article →

Amazon Managed Service for Prometheus adds anomaly detection

<p>Amazon Managed Service for Prometheus, a fully managed Prometheus-compatible monitoring service, now supports anomaly detection. Anomaly detection applies machine-learning algorithms to continuously analyze time series and surface anomalies with minimal user intervention. You can use anomaly detection to isolate and troubleshoot unexpected changes in your metric behavior.</p> <p>Amazon Managed Service for Prometheus anomaly detection currently supports Random Cut Forest (RCF), an unsupervised algorithm for detecting anomalous data points within a time series. Once you create and configure an anomaly detector in an Amazon Managed Service for Prometheus workspace, it creates four new time series that represent the resulting anomalies and their associated confidence values. Based on the resulting time series, you can create dynamic alerting rules in the Amazon Managed Service for Prometheus alert manager to notify you when anomalies occur, and you can visualize the resulting time series alongside the input time series in either self-managed Grafana or Amazon Managed Grafana dashboards.</p> <p>This feature is now available in all AWS Regions where Amazon Managed Service for Prometheus is <a href="https://docs.aws.amazon.com/prometheus/latest/userguide/what-is-Amazon-Managed-Service-Prometheus.html#AMP-supported-Regions">generally available</a>. To configure anomaly detection, use the AWS CLI, SDK, or APIs. Check out the <a href="https://docs.aws.amazon.com/prometheus/latest/userguide/prometheus-anomaly-detection.html">Amazon Managed Service for Prometheus user guide</a> for detailed documentation.</p>

Read article →

Amazon ElastiCache now supports dual-stack (IPv4 and IPv6) service endpoints

<p><a href="https://aws.amazon.com/elasticache/">Amazon ElastiCache</a> now supports <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html#dual-stack-endpoints">dual-stack service endpoints</a>. Dual-stack service endpoints allow you to manage ElastiCache resources using either Internet Protocol version 4 (IPv4) or version 6 (IPv6). ElastiCache <a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/elasticache-privatelink.html">interface VPC endpoints</a> powered by AWS PrivateLink also now support dual-stack connectivity.<br /> <br /> With this update, you can now connect to and manage your Amazon ElastiCache resources using IPv4 or IPv6. Dual-stack endpoints allow you to migrate applications using ElastiCache from IPv4 to IPv6 at your convenience, making it easier to meet IPv6 compliance requirements and modernize your application architecture.<br /> <br /> Amazon ElastiCache dual-stack service endpoints are available in all AWS commercial Regions, AWS China Regions and AWS GovCloud (US) Regions. There is no additional charge when you connect to Amazon ElastiCache service endpoints using Internet Protocol Version 6 (IPv6). To learn more, see the <a href="https://docs.aws.amazon.com/general/latest/gr/elasticache-service.html">Amazon ElastiCache service endpoint documentation</a>.</p>

Read article →

AWS Serverless MCP Server now supports tools for AWS Lambda event source mappings (ESM)

<p>The AWS Serverless Model Context Protocol (MCP) Server now supports specialized tools for <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> event source mappings (ESM), helping developers configure and manage ESMs more efficiently. These new tools combine the power of AI assistance with Lambda ESM expertise to streamline how developers set up, optimize, and troubleshoot event-driven serverless applications built on Lambda.<br /> <br /> We previously launched the open-source <a href="https://aws.amazon.com/blogs/compute/introducing-aws-serverless-mcp-server-ai-powered-development-for-modern-applications/">Serverless MCP Server</a> to enhance how developers build modern applications with AI-powered contextual guidance for architecture decisions, infrastructure provisioning, deployment automation, and troubleshooting of serverless applications. Starting today, we’re expanding the MCP server’s capabilities with new ESM tools that empower AI assistants, like Amazon Q Developer and Kiro, with proven knowledge of ESM patterns and best practices. The new ESM tools translate high-level throughput, latency, and reliability requirements into specific ESM configurations, generate complete AWS Serverless Application Model (AWS SAM) templates with optimized settings, validate network topology for Amazon Virtual Private Cloud (VPC)-based event sources, and diagnose common ESM issues. Thus, these tools enhance the event-driven application development experience, guiding developers through the entire ESM lifecycle, from initial setup to optimization and troubleshooting. <br /> <br /> The key new ESM tools being added to the Serverless MCP Server are: the ESM guidance tool for contextual guidance across all supported event sources, the ESM optimization tool for analyzing configuration tradeoffs, and the ESM Kafka troubleshooting tool for specialized diagnostics with Amazon Managed Streaming for Apache Kafka (Amazon MSK) and self-managed Apache Kafka clusters.<br /> <br /> To learn more about the Serverless MCP Server and how it can transform your AI-assisted application development, visit the <a href="https://aws.amazon.com/blogs/compute/introducing-aws-lambda-event-source-mapping-tools-in-the-aws-serverless-mcp-server/">launch blog post</a> and <a href="https://awslabs.github.io/mcp/servers/aws-serverless-mcp-server/">documentation</a>. To download and try out the open-source MCP server with your AI-enabled IDE of choice, visit the <a href="https://github.com/awslabs/mcp/tree/main/src/aws-serverless-mcp-server">GitHub repository</a>.</p>

Read article →

Announcing an AI agent context pack for AWS IoT Greengrass developers

<p>AWS announces the release of a new AI agent context package for accelerating edge device application development using AWS IoT Greengrass. AWS IoT Greengrass is an IoT edge runtime and cloud service that helps developers build, deploy, and manage device software at the edge. The context package includes ready-to-use instructions, examples, and templates - enabling developers to leverage generative AI tools and agents for faster software creation, testing and deployment.<br /> <br /> Available as an open-source <a href="https://github.com/aws-greengrass/greengrass-agent-context-pack">GitHub repository</a> under the Creative Commons Attribution Share Alike 4.0 license, the AWS IoT Greengrass AI agent context package helps streamline development workflows. Developers can boost productivity by cloning the repository and integrating it with modern generative AI tools like Amazon Q to help accelerate cloud-connected edge application development while simplifying fleet-wide deployment and management.<br /> <br /> This new capability is available in all AWS Regions where AWS IoT Greengrass is supported. To learn more about AWS IoT Greengrass and its new AI agent context pack, visit the AWS IoT Greengrass <a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html">documentation.</a> Follow the getting started <a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/getting-started.html">guide</a> for a quick introduction to AWS IoT Greengrass.</p>

Read article →

AWS Backup adds single-action database snapshot copy across AWS Regions and accounts

<p>AWS Backup now supports copying database snapshots across AWS Regions and accounts using a single copy action. This feature supports Amazon RDS, Amazon Aurora, Amazon Neptune, and Amazon DocumentDB snapshots. It eliminates the need for sequential copying steps.</p> <p>You can use cross-Region and cross-account snapshot copies to protect against incidents like ransomware attacks and Region outages that might affect your production accounts or primary Regions. Previously, you needed to perform this as a two-step process: first copying to a different Region, and then to a different account (or vice versa). Now, by completing this in one step, you can achieve faster recovery point objectives (RPOs) while eliminating costs associated with intermediate copies. This streamlined process also simplifies the workflow by removing the need for custom scripts or Lambda functions that monitor intermediate copy status.</p> <p>This feature is available for all Amazon RDS and Amazon Aurora engines, Amazon Neptune, and Amazon DocumentDB, in all Regions where AWS Backup supports cross-Region and cross-account copying of snapshots in separate steps. You can start using this feature today through the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To get started, refer to the <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource">AWS Backup documentation</a>.</p>
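To make the single-action copy concrete, here is a minimal boto3 sketch; the recovery point ARN, vault names, account IDs, and role are illustrative placeholders, and the destination vault ARN simply points at a vault in another account and Region.

```python
import boto3

backup = boto3.client("backup")

# One copy action to a vault in a different account and Region.
# All ARNs, vault names, and the role below are illustrative placeholders.
response = backup.start_copy_job(
    RecoveryPointArn="arn:aws:rds:us-east-1:111122223333:snapshot:awsbackup:job-example",
    SourceBackupVaultName="prod-vault",
    DestinationBackupVaultArn="arn:aws:backup:eu-west-1:444455556666:backup-vault:dr-vault",
    IamRoleArn="arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
)
print(response["CopyJobId"])
```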

Read article →

Amazon ECS now supports built-in Linear and Canary deployments

<p><a href="https://aws.amazon.com/ecs/" target="_blank">Amazon Elastic Container Service</a>&nbsp;(Amazon ECS) announces support for linear and canary deployment strategies, giving you more flexibility and control when deploying containerized applications. These new strategies complement ECS built-in blue/green deployments, enabling you to choose the traffic shifting approach that best matches your application's risk profile and validation requirements.</p> <p>With linear deployments, you can gradually shift traffic from your current service revision to the new revision in equal percentage increments over a specified time period. You configure the step percentage (for example, 10%) to control how much traffic shifts at each increment, and set a step bake time to wait between each traffic shift for monitoring and validation. This allows you to validate your new application version at multiple stages with increasing amounts of production traffic. With canary deployments, you can route a small percentage of production traffic to your new service revision while the majority of traffic remains on the current stable version. You set a canary bake time to monitor the new revision's performance, after which Amazon ECS shifts the remaining traffic to the new revision. Both strategies support a deployment bake time that waits after all production traffic has shifted to the new revision before terminating the old revision, enabling quick rollback without downtime if issues are detected. You can configure deployment lifecycle hooks to perform custom validation steps, and use Amazon CloudWatch alarms to automatically detect failures and trigger rollbacks.</p> <p>The feature is available in all commercial <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a> where Amazon ECS is available.&nbsp;You can use linear and canary deployment strategies for new and existing Amazon ECS services that use Application Load Balancer (ALB) or ECS Service Connect, using the Console, SDK, CLI, CloudFormation, CDK, and Terraform. To learn more, see our documentation on&nbsp;<a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-linear.html" target="_blank">Amazon ECS linear deployments</a> and <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/canary-deployment.html" target="_blank">Amazon ECS canary deployments</a>.</p>

Read article →

Introducing the Capacity Reservation Topology API for AI, ML, and HPC instance types

<p>AWS announces the general availability of the Amazon Elastic Compute Cloud (EC2) Capacity Reservation Topology API. It joins the Instance&nbsp;Topology&nbsp;API in enabling customers to efficiently manage capacity, schedule jobs, and rank nodes for Artificial Intelligence, Machine Learning, and High-Performance Computing distributed workloads. The Capacity Reservation Topology API gives customers a unique per-account hierarchical view of the relative location of their capacity reservations.</p> <p>Customers running distributed parallel workloads are managing thousands of instances across tens to hundreds of capacity reservations. With the Capacity Reservation Topology API, customers can describe the topology of their reservations as a network node set, which will show the relative proximity of their capacity without the need to launch an instance. This enables efficient capacity planning and management as customers provision workloads on tightly coupled capacity. Customers can then use the Instance Topology API, which provides consistent network nodes from the Capacity Reservation Topology API with further granularity, enabling a consistent and seamless way to schedule jobs and rank nodes for optimal performance in distributed parallel workloads.</p> <p>The Capacity Reservation Topology API is available in the following AWS regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Spain), Europe (Stockholm), Europe (Zurich), Middle East (Bahrain), Middle East (UAE), and South America (São Paulo), and it is supported on all instances available with the Instance Topology API.</p> <p>To learn more, please visit the latest <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-topology.html">EC2 user guide</a>.</p>
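A short boto3 sketch of how the two APIs fit together is below; describe_instance_topology is the existing Instance Topology operation, while the capacity reservation call uses an assumed operation name and response shape that mirror it, and the resource IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumed operation name and response keys, mirroring the Instance Topology API;
# the capacity reservation ID is a placeholder.
reservations = ec2.describe_capacity_reservation_topology(
    CapacityReservationIds=["cr-0123456789abcdef0"]
)
for item in reservations.get("CapacityReservations", []):
    print(item.get("CapacityReservationId"), item.get("NetworkNodes"))

# Existing Instance Topology API: finer-grained view with consistent network nodes.
instances = ec2.describe_instance_topology(InstanceIds=["i-0123456789abcdef0"])
for inst in instances["Instances"]:
    print(inst["InstanceId"], inst["NetworkNodes"])
```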

Read article →

AWS Elastic Beanstalk adds support for Amazon Corretto 25

<p>AWS Elastic Beanstalk now enables customers to build and deploy Java applications using Amazon Corretto 25 on the Amazon Linux 2023 (AL2023) platform. This latest platform support allows developers to leverage the newest Java 25 features while benefiting from AL2023's enhanced security and performance capabilities.<br /> <br /> AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Corretto 25 on AL2023 allows developers to take advantage of the latest Java language features including compact object headers, ahead-of-time (AOT) caching, and structured concurrency. Developers can create Elastic Beanstalk environments running Corretto 25 through the Elastic Beanstalk Console, CLI, or API.<br /> <br /> This platform is generally available in the commercial Regions where Elastic Beanstalk is available, as well as the AWS GovCloud (US) Regions. For a complete list of Regions and service offerings, see <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a>.<br /> <br /> For more information about Corretto 25 and Linux Platforms, see the Elastic Beanstalk <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-platform.html" target="_blank">developer guide</a>. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk <a href="https://aws.amazon.com/elasticbeanstalk/" target="_blank">product page</a>.</p>

Read article →

Announcing AWS for Fluent Bit 3.0.0 based on Fluent Bit 4.1.1

<p>AWS for Fluent Bit announces version <a href="https://github.com/aws/aws-for-fluent-bit/releases/tag/v3.0.0" target="_blank">3.0.0</a>, based on Fluent Bit version 4.1.1 and Amazon Linux 2023. Container logging using AWS for Fluent Bit is now more performant and more feature-rich for AWS customers, including those using <a href="https://aws.amazon.com/ecs/" target="_blank">Amazon Elastic Container Services</a> (Amazon ECS) and <a href="https://aws.amazon.com/eks/" target="_blank">Amazon Elastic Kubernetes Service</a> (Amazon EKS).<br /> <br /> AWS for Fluent Bit enables Amazon ECS and Amazon EKS customers to collect, process, and route container logs to destinations including Amazon CloudWatch Logs, Amazon Data Firehose, Amazon Kinesis Data Streams, and Amazon S3 without changing application code. AWS for Fluent Bit 3.0.0 upgrades the Fluent Bit version to 4.1.1, and upgrades the base image to Amazon Linux 2023. These updates deliver access to the latest Fluent Bit features, significant performance improvements, and enhanced security. New features include native OpenTelemetry (OTel) support for ingesting and forwarding OTLP logs, metrics, and traces with AWS SigV4 authentication—eliminating the need for additional sidecars. Performance improvements include faster JSON parsing, processing more logs per vCPU with lower latency. Security enhancements include TLS min version and cipher controls, which enforce your TLS policy on outputs from AWS for Fluent Bit for stronger protocol posture.<br /> <br /> You can use AWS for Fluent Bit 3.0.0 on both ECS and EKS. On ECS, update the FireLens log-router container image in your task definition to the 3.0.0 tag from the Amazon ECR Public Gallery. On EKS, upgrade by either updating the Helm release or setting the DaemonSet image to the 3.0.0 version.<br /> <br /> The <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/firelens-using-fluentbit.html" target="_blank">AWS for Fluent Bit image</a> is available in the <a href="https://gallery.ecr.aws/aws-observability/aws-for-fluent-bit" target="_blank">Amazon ECR Public Gallery</a> and in the Amazon ECR repository. You can also find it on <a href="https://github.com/aws/aws-for-fluent-bit" target="_blank">GitHub</a> for source code and additional guidance.</p> <p>&nbsp;</p> <p><i>10/29/2025 - This post has been updated to accurately reflect the Fluent Bit version at launch.</i></p> <p>&nbsp;</p>
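For ECS users, the FireLens update is a one-line image change in the task definition; a hedged boto3 sketch is below, where the family, application image, log group, and execution role are illustrative placeholders and the 3.0.0 image tag is assumed to match the release.

```python
import boto3

ecs = boto3.client("ecs")

# FireLens log router pinned to the 3.0.0 image from the Amazon ECR Public Gallery.
log_router = {
    "name": "log_router",
    "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:3.0.0",
    "essential": True,
    "firelensConfiguration": {"type": "fluentbit"},
}

# Application container routing its logs through Fluent Bit to CloudWatch Logs.
app_container = {
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "essential": True,
    "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
            "Name": "cloudwatch_logs",
            "region": "us-east-1",
            "log_group_name": "/ecs/my-app",
            "log_stream_prefix": "app-",
            "auto_create_group": "true",
        },
    },
}

ecs.register_task_definition(
    family="my-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[log_router, app_container],
)
```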

Read article →

Amazon EBS introduces additional performance monitoring metrics for EBS volumes

<p>Amazon EBS now provides additional visibility to monitor the average IOPS and average throughput of your Amazon EBS volumes with two new CloudWatch metrics - <b>VolumeAvgIOPS</b> and <b>VolumeAvgThroughput</b>. You can use the metrics to monitor the I/O being driven on your EBS volumes to track performance trends.<br /> <br /> With these new volume level metrics, you can troubleshoot performance bottlenecks and optimize your volume’s provisioned performance to meet your application needs. The metrics will provide per-minute visibility into the driven average IOPS and average throughput on your EBS volume. With Amazon CloudWatch, you can use the new metrics to create customized dashboards and set alarms that notify you or automatically perform actions based on the metrics.<br /> <br /> The <b>VolumeAvgIOPS</b> and <b>VolumeAvgThroughput</b> metrics are available by default at a 1-minute frequency at no additional charge and are supported for all EBS volumes attached to an EC2 Nitro instance in all Commercial AWS Regions, including the AWS GovCloud (US) Regions and AWS China Regions. To learn more about these new metrics, please visit the <a href="https://docs.aws.amazon.com/ebs/latest/userguide/using_cloudwatch_ebs.html" target="_blank">EBS CloudWatch Metrics documentation</a>.</p>
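Because the new metrics land in the standard AWS/EBS namespace per volume, they can feed alarms like any other CloudWatch metric; the sketch below creates an alarm on VolumeAvgThroughput, with the volume ID and threshold purely illustrative (size the threshold relative to the volume's provisioned throughput, in the units the metric reports).

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when sustained average throughput stays high for five minutes.
# Volume ID and threshold are illustrative placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="ebs-volume-avg-throughput-high",
    Namespace="AWS/EBS",
    MetricName="VolumeAvgThroughput",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    Statistic="Average",
    Period=60,                      # the metrics are published at 1-minute frequency
    EvaluationPeriods=5,
    Threshold=900.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```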

Read article →

4 new image editing tools added to Stability AI Image Services in Amazon Bedrock

<p>Amazon Bedrock announces the addition of four new image editing tools to Stability AI Image Services: outpaint, fast upscale, conservative upscale, and creative upscale. These tools give creators precise control over their workflows, enabling them to transform concepts into finished products efficiently. The expanded suite now offers enhanced flexibility for professional creative projects.<br /> <br /> Stability AI Image Services offers three categories of image editing capabilities: <b>Edit tools</b>: Remove Background, Erase Object, Search and Replace, Search and Recolor, Inpaint, and Outpaint (NEW) let you make targeted modifications to specific parts of your images; <b>Upscale tools</b>: Fast Upscale (NEW), Conservative Upscale (NEW), and Creative Upscale (NEW) enable you to enhance resolution while preserving quality; <b>Control tools</b>: Structure, Sketch, Style Guide, and Style Transfer give you powerful ways to generate variations based on existing images or sketches.<br /> <br /> Stability AI Image Services is available in Amazon Bedrock through the API and is supported in US West (Oregon), US East (N. Virginia), and US East (Ohio). For more information on supported Regions, visit the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html">Amazon Bedrock Model Support by Regions guide</a>. For more details about Stability AI Image Services and its capabilities, visit the <a href="https://aws.amazon.com/blogs/machine-learning/scale-visual-production-using-stability-ai-image-services-in-amazon-bedrock/">launch blog</a>, <a href="https://aws.amazon.com/bedrock/stability-ai/">Stability AI product page</a>, and <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-stability-diffusion.html">Stability AI documentation page</a>.</p>

Read article →

TwelveLabs’ Marengo Embed 3.0 for advanced video understanding now in Amazon Bedrock

<p>TwelveLabs' Marengo Embed 3.0 is now available on Amazon Bedrock, bringing advanced video-native multimodal embedding capabilities to developers and organizations working with video content. Marengo embedding models unify videos, images, audio, and text into a single representation space, enabling you to build sophisticated video search and content analysis applications for any-to-any search, recommendation systems, and other multimodal tasks with industry-leading performance.<br /> <br /> Marengo 3.0 delivers several key enhancements. Extended video processing capacity: process up to 4 hours of video and audio content and files up to 6GB—double the capacity of previous versions—making it ideal for analyzing full sporting events, extended training videos, and complete film productions. Enhanced sports analysis: the model delivers significant improvements with better understanding of gameplay dynamics, player movements, and event detection. Global multilingual support: expanded language capabilities from 12 to 36 languages, enabling global organizations to build unified search and retrieval systems that work seamlessly across diverse regions and markets. Multimodal search precision: combine images and descriptive text in a single embedding request, merging visual similarity with semantic understanding to deliver more accurate and contextually relevant search results.<br /> <br /> AWS is the first cloud provider to offer TwelveLabs' Marengo 3.0 model, now available in US East (N. Virginia), Europe (Ireland), and Asia Pacific (Seoul). The model supports synchronous inference for low-latency text and image embeddings, and asynchronous inference for processing video, audio, and large-scale image files. To get started, visit the <a href="https://console.aws.amazon.com/bedrock/">Amazon Bedrock console</a>. To learn more, read the <a href="https://aws.amazon.com/bedrock/twelvelabs/">product page</a> and <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-twelvelabs.html">documentation</a>.</p>

Read article →

Amazon S3 adds conditional write functionality to copy operations

<p>Amazon S3 expands conditional write functionality to copy operations. With conditional copy, you can now verify if the object exists or has been modified in your destination S3 bucket before copying it. This helps you coordinate simultaneous writes to the same object and prevents multiple concurrent writers from unintentionally overwriting the object.<br /> <br /> You can now perform conditional copy operations through S3 CopyObject by including either the HTTP if-none-match header to verify object existence or the HTTP if-match header with ETag to validate the object’s content. Additionally, you can use the s3:if-match and s3:if-none-match condition keys in your S3 bucket policies to enforce conditional copy operations. S3 then evaluates the condition against the specified object's key or ETag before executing the copy operation in the destination bucket. This eliminates the need for additional client-side coordination mechanisms or API validation requests.<br /> <br /> Conditional copy is available at no additional charge in all AWS Regions in both S3 general purpose and directory buckets. You can use the AWS SDK, API, or CLI to copy data conditionally to your buckets. To learn more about conditional operations, visit the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/conditional-requests.html" target="_blank">S3 User Guide</a>.</p>
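A minimal boto3 sketch of the if-none-match flavor is below; the bucket and key names are placeholders, and it assumes the SDK exposes the new IfNoneMatch parameter on CopyObject (mapping to the HTTP if-none-match header described above).

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Copy only if the destination key does not already exist. Bucket and key names
# are placeholders; IfNoneMatch="*" asserts "no object with this key exists yet".
try:
    s3.copy_object(
        Bucket="dest-bucket",
        Key="reports/2025/summary.json",
        CopySource={"Bucket": "source-bucket", "Key": "reports/2025/summary.json"},
        IfNoneMatch="*",
    )
    print("Copy succeeded")
except ClientError as err:
    if err.response["Error"]["Code"] in ("PreconditionFailed", "412"):
        print("Destination object already exists; copy skipped")
    else:
        raise
```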

Read article →

AWS Control Tower is now available in AWS Asia Pacific (New Zealand) Region

<p>Starting today, customers can use AWS Control Tower in the AWS Asia Pacific (New Zealand) Region. With this launch, AWS Control Tower is available in 34 AWS Regions and the AWS GovCloud (US) Regions. AWS Control Tower offers the easiest way to set up and govern a secure, multi-account AWS environment. It simplifies AWS experiences by orchestrating multiple AWS services on your behalf while maintaining the security and compliance needs of your organization. You can set up a multi-account AWS environment within 30 minutes or less, govern new or existing account configurations, gain visibility into compliance status, and enforce controls at scale.<br /> <br /> If you are new to AWS Control Tower, you can launch it today in any of the supported regions and you can use AWS Control Tower to govern your multi-account environment in all supported Regions. If you are already using AWS Control Tower and you want to extend its governance features to the newly supported regions in your accounts, you can go to the settings page in your AWS Control Tower dashboard, select your regions, and update your landing zone. Once you <a href="https://docs.aws.amazon.com/controltower/latest/userguide/configuration-updates.html#deploying-to-new-region">update all your governed accounts</a>, your landing zone, managed accounts, and registered OUs will be under governance in the new region(s).<br /> <br /> For a full list of Regions where AWS Control Tower is available, see the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region Table</a>. To learn more, visit the AWS Control Tower homepage or see the <a href="https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html">AWS Control Tower User Guide</a>.</p>

Read article →

AWS Elastic Beanstalk adds support for Amazon Corretto 25 and Tomcat 11

<p>AWS Elastic Beanstalk now enables customers to build and deploy Tomcat 11 applications using Amazon Corretto 25 on the Amazon Linux 2023 (AL2023) platform. This latest platform support allows developers to leverage the newest Java 25 and Jakarta EE 11 features while benefiting from AL2023's enhanced security and performance capabilities.</p> <p>AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Tomcat 11 with Corretto 25 on AL2023 allows developers to take advantage of the latest Java language features including compact object headers, ahead-of-time (AOT) caching, and structured concurrency. Developers can create Elastic Beanstalk environments running Corretto 25 with Tomcat 11 on AL2023 through the Elastic Beanstalk Console, CLI, or API.</p> <p>This platform is generally available in the commercial Regions where Elastic Beanstalk is available, as well as the AWS GovCloud (US) Regions. For a complete list of Regions and service offerings, see <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>.</p> <p>For more information about Corretto 25 with Tomcat 11 and Linux Platforms, see the Elastic Beanstalk <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-tomcat-platform.html">developer guide</a>. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk <a href="https://aws.amazon.com/elasticbeanstalk/">product page</a>.</p>

Read article →

Amazon EC2 High Memory U7i instances are now available in AWS Europe (London) Region

<p>Amazon EC2 U7i-8tb (u7i-8tb.112xlarge) instances are now available in the AWS Europe (London) Region. U7i-8tb instances are part of the AWS 7th generation of High Memory instances and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids), delivering up to 135% more compute performance than existing U-1 instances. U7i-8tb instances offer 8TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-8tb instances offer 448 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/" target="_blank">High Memory instances page</a>.</p>

Read article →

Web Grounding: Build accurate AI applications with Amazon Nova models

<p>We are excited to announce the general availability of Web Grounding, a new built-in tool for Nova models. Customers can use Web Grounding today with Nova Premier using the Amazon Bedrock tool use API. Support for additional Nova models is coming soon.<br /> <br /> Web Grounding is a built-in tool that can be used to retrieve and incorporate publicly available information with citations as context for responses. Developers can use the Web Grounding tool to implement a turnkey Retrieval Augmented Generation (RAG) solution using current, real-time information, reducing hallucinations and leading to more accurate outputs.<br /> <br /> Web Grounding is available today in US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions via cross-region inference.<br /> <br /> Learn more about using the Web Grounding tool on Nova models and steps to get started at our <a href="https://aws.amazon.com/blogs/aws/build-more-accurate-ai-applications-with-amazon-nova-web-grounding/" target="_blank">blog post</a>.</p>

Read article →

Amazon EC2 R8i and R8i-flex instances are now available in Europe (London)

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Europe (London) region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver 20% better performance than R7i instances, with even higher gains for specific workloads. They are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i.<br /> <br /> R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources.<br /> <br /> R8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes including 2 bare metal and the new 96xlarge size for the largest applications. <a href="https://docs.aws.amazon.com/sap/latest/general/sap-hana-aws-ec2.html" style="cursor: pointer;">R8i instances are SAP-certified </a>and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for mission-critical SAP workloads.<br /> <br /> To get started, sign in to the <a href="https://aws.amazon.com/console/" style="cursor: pointer;">AWS Management Console</a>. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new <a href="https://aws.amazon.com/ec2/instance-types/r8i" style="cursor: pointer;">R8i and R8i-flex </a>instances visit the AWS News <a href="https://aws.amazon.com/blogs/aws/best-performance-and-fastest-memory-with-the-new-amazon-ec2-r8i-and-r8i-flex-instances/" style="cursor: pointer;">blog</a>.</p>

Read article →

AWS Resource Explorer supports 47 additional resource types

<p>AWS Resource Explorer now supports 47 more resource types from services including Amazon Bedrock, AWS Shield, and AWS Glue.<br /> <br /> With this release, customers can now search for the following resource types in AWS Resource Explorer:<br /> </p> <table width="800"> <tbody> <tr> <td>1. amplify:apps</td> <td>26. profile:domains/object-types</td> </tr> <tr> <td>2. aoss:collection</td> <td>27. resiliencehub:app</td> </tr> <tr> <td>3. app-integrations:application</td> <td>28. route53-recovery-control:controlpanel/routingcontrol</td> </tr> <tr> <td>4. appconfig:application/environment</td> <td>29. route53-recovery-readiness:cell</td> </tr> <tr> <td>5. appconfig:extensionassociation</td> <td>30. s3:storage-lens-group</td> </tr> <tr> <td>6. bedrock:agent-alias</td> <td>31. s3express:bucket</td> </tr> <tr> <td>7. cloudtrail:dashboard</td> <td>32. sagemaker:monitoring-schedule</td> </tr> <tr> <td>8. comprehend:flywheel</td> <td>33. shield:protection</td> </tr> <tr> <td>9. devicefarm:instanceprofile</td> <td>34. shield:protection-group</td> </tr> <tr> <td>10. directconnect:dx-gateway</td> <td>35. ssm-incidents:response-plan</td> </tr> <tr> <td>11. elasticloadbalancing:listener/gwy</td> <td>36. verifiedpermissions:policy-store</td> </tr> <tr> <td>12. elasticloadbalancing:loadbalancer/gwy</td> <td>37. vpc-lattice:service</td> </tr> <tr> <td>13. fsx:backup</td> <td>38. vpc-lattice:service/listener</td> </tr> <tr> <td>14. glue:dataQualityRuleset</td> <td>39. vpc-lattice:servicenetwork</td> </tr> <tr> <td>15. glue:registry</td> <td>40. vpc-lattice:servicenetworkserviceassociation</td> </tr> <tr> <td>16. iottwinmaker:workspace/sync-job</td> <td>41. vpc-lattice:targetgroup</td> </tr> <tr> <td>17. ivs:encoder-configuration</td> <td>42. wafv2:ipset</td> </tr> <tr> <td>18. ivs:ingest-configuration</td> <td>43. wafv2:regexpatternset</td> </tr> <tr> <td>19. ivs:playback-restriction-policy</td> <td>44. wafv2:rulegroup</td> </tr> <tr> <td>20. ivs:storage-configuration</td> <td>45. wafv2:webacl</td> </tr> <tr> <td>21. lex:bot</td> <td>46. wisdom:content</td> </tr> <tr> <td>22. mediatailor:vodSource</td> <td>47. workspaces-web:portal</td> </tr> <tr> <td>23. network-firewall:stateful-rulegroup</td> <td>&nbsp;</td> </tr> <tr> <td>24. network-firewall:stateless-rulegroup</td> <td>&nbsp;</td> </tr> <tr> <td>25. profile:domains/integrations</td> <td>&nbsp;</td> </tr> </tbody> </table> <p>To view a complete list of all supported types, see the&nbsp;<a href="https://docs.aws.amazon.com/resource-explorer/latest/userguide/supported-resource-types.html">supported resource types</a>&nbsp;page.</p>
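Once indexed, the new types can be queried like any other; a minimal boto3 sketch against the Resource Explorer search API is below, filtering on one of the newly supported WAFv2 types (results depend on the resources indexed in your account and the view you search).

```python
import boto3

explorer = boto3.client("resource-explorer-2")

# Search the default view for one of the newly supported resource types.
response = explorer.search(
    QueryString="resourcetype:wafv2:webacl",
    MaxResults=50,
)
for resource in response.get("Resources", []):
    print(resource["ResourceType"], resource["Arn"])
```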

Read article →

Amazon DocumentDB (with MongoDB compatibility) announces upgraded query planner that can run queries up to 10x faster

<p>Today, Amazon DocumentDB (with MongoDB compatibility) announces a new query planner, featuring advanced query optimization capabilities and improved performance. PlannerVersion 2.0 for Amazon DocumentDB (with MongoDB compatibility) 5.0 delivers up to 10x performance improvement over the prior version when using find and update operators with indexes. Performance improvements primarily come from using more optimal index plans and enabling index scan support for operators such as negation operators ($ne, $nin) and nested $elemMatch. PlannerVersion 2.0 queries run faster through better cost estimation techniques, optimized algorithms, and enhanced stability.</p> <p>PlannerVersion 2.0 also simplifies query syntax. For example, you no longer need to provide explicit hints for $regex queries to utilize indexes.</p> <p>PlannerVersion 2.0 is available in all AWS Regions where Amazon DocumentDB 5.0 is supported. You can enable it by simply modifying the corresponding parameter in your cluster parameter group. The change does not require a cluster restart or cause any downtime. If needed, you can easily revert to using the legacy query planner. To learn more about the new query planner, see <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/query-planner.html" target="_blank">Getting Started with the New Query Planner</a>.</p>
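Enabling the planner is a parameter-group change; the boto3 sketch below illustrates the shape of that change, with the parameter group name as a placeholder and the parameter name itself an assumption (check the linked guide for the exact name and allowed values).

```python
import boto3

docdb = boto3.client("docdb")

# Sketch only: the parameter name below is assumed; ApplyMethod "immediate"
# reflects that the change takes effect without a cluster restart.
docdb.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-docdb5-parameter-group",
    Parameters=[
        {
            "ParameterName": "planner_version",   # assumed parameter name
            "ParameterValue": "2.0",
            "ApplyMethod": "immediate",
        }
    ],
)
```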

Read article →

Amazon VPC Reachability Analyzer and Amazon VPC Network Access Analyzer are now available in AWS GovCloud (US) Regions

<p>With this launch, Amazon <a href="https://docs.aws.amazon.com/vpc/latest/reachability/getting-started.html">VPC Reachability Analyzer</a> and Amazon VPC <a href="https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/what-is-network-access-analyzer.html">Network Access Analyzer</a> are now available in both the AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions.<br /> <br /> VPC Reachability Analyzer allows you to diagnose network reachability between a source resource and a destination resource in your virtual private clouds (VPCs) by analyzing your network configurations. For example, Reachability Analyzer can help you identify a missing entry in your VPC route table that is blocking network reachability between an EC2 instance in Account A and another EC2 instance in Account B in your AWS Organization.<br /> <br /> VPC Network Access Analyzer allows you to identify unintended network access to your AWS resources, helping you meet your security and compliance guidelines. For example, you can create a scope to verify that all paths from your web applications to the internet traverse the firewall, and to detect any paths that bypass it.<br /> <br /> For more information, visit the documentation for <a href="https://docs.aws.amazon.com/vpc/latest/reachability/what-is-reachability-analyzer.html">VPC Reachability Analyzer</a> and <a href="https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/what-is-network-access-analyzer.html">VPC Network Access Analyzer</a>. For pricing, refer to the Network Analysis tab on the <a href="https://aws.amazon.com/vpc/pricing/">Amazon VPC Pricing Page</a>.</p>
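For reference, a minimal boto3 sketch of defining and running a reachability analysis between two instances is below; the instance IDs and port are placeholders, and a real workflow would poll describe_network_insights_analyses until the run completes.

```python
import boto3

ec2 = boto3.client("ec2")

# Define the path to analyze; the instance IDs and port are placeholders.
path = ec2.create_network_insights_path(
    Source="i-0123456789abcdef0",
    Destination="i-0fedcba9876543210",
    Protocol="tcp",
    DestinationPort=443,
)["NetworkInsightsPath"]

# Kick off the analysis; check its status (and any blocking component) later.
analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPathId"]
)["NetworkInsightsAnalysis"]
print(analysis["NetworkInsightsAnalysisId"], analysis["Status"])
```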

Read article →

Amazon Kinesis Data Streams now supports 10x larger record sizes

<p>Amazon Kinesis Data Streams now supports record sizes up to 10MiB, a tenfold increase from the previous 1MiB limit. This launch enables customers to publish intermittent larger data payloads in their data streams while continuing to use existing Kinesis Data Streams APIs in their applications. This launch is accompanied by a 2x increase in the maximum PutRecords request size from 5MiB to 10MiB.<br /> <br /> Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store real-time data streams at any scale. With this launch, customers no longer need to maintain separate processing pipelines for handling intermittent large records, and can thus simplify their data pipelines. This reduces operational overhead for IoT analytics, change data capture, and generative AI workloads. You can update your stream's maximum record size up to 10 MiB using either the AWS Management Console or the UpdateMaxRecordSize API via the AWS SDK or CLI. Once your stream is configured, you can publish and consume larger records using existing Kinesis Data Streams APIs. You do not incur additional costs to use this capability beyond your regular Kinesis data streams charges.<br /> <br /> In conjunction with this launch, AWS Lambda now supports larger payloads up to 6MiB from Kinesis Data Streams.<br /> <br /> Amazon Kinesis Data Streams supports large records in the AWS Regions documented <a href="https://docs.aws.amazon.com/streams/latest/dev/large-records.html" target="_blank">here</a>. To learn more about using large records and how common downstream applications handle large records, please see our <a href="https://docs.aws.amazon.com/streams/latest/dev/large-records.html" target="_blank">documentation</a>.</p>
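As a sketch of the workflow, the snippet below raises a stream's maximum record size and then publishes a record above the old 1MiB limit; the stream ARN is a placeholder, and while UpdateMaxRecordSize is the operation named above, the SDK method and parameter names shown are assumptions.

```python
import boto3

kinesis = boto3.client("kinesis")

STREAM_ARN = "arn:aws:kinesis:us-east-1:123456789012:stream/clickstream"  # placeholder

# Assumed SDK method and parameter names for the UpdateMaxRecordSize API named above.
kinesis.update_max_record_size(
    StreamARN=STREAM_ARN,
    MaxRecordSizeInMiB=10,
)

# Publish a record larger than the previous 1 MiB limit using the existing API.
kinesis.put_record(
    StreamARN=STREAM_ARN,
    Data=b"x" * (4 * 1024 * 1024),   # 4 MiB payload
    PartitionKey="session-42",
)
```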

Read article →

Announcing Amazon Nova Multimodal Embeddings

<p>We are excited to announce the general availability of Amazon Nova Multimodal Embeddings, a state-of-the-art embedding model for agentic RAG and semantic search. It is the first unified embedding model that supports text, documents, images, video, and audio, enabling cross-modal retrieval with leading accuracy.<br /> <br /> Managing and searching across different content types traditionally required multiple specialized embedding models, leading to complexity, higher costs, and data silos. Amazon Nova Multimodal Embeddings maps diverse content types into a unified space with leading accuracy, helping break down these silos. Developers can build cross-modal applications that search video archives using complex queries, find relevant product images based on customer questions, or search financial documentation that contains both infographics and text explanations, all using a single embedding model.<br /> <br /> The model supports inputs of up to 8K tokens in length and video/audio segments up to 30 seconds, with the capability to segment larger files. Multiple output embedding dimensions allow organizations to balance accuracy and performance with storage and computation costs. Organizations can choose between the synchronous API for near real-time applications and the asynchronous API for efficient processing of larger files, enabling them to optimize for both latency-sensitive and high-volume workloads.<br /> <br /> Amazon Nova Multimodal Embeddings is available in US East (N. Virginia) in Amazon Bedrock.<br /> <br /> To learn more, read the <a href="https://aws.amazon.com/blogs/aws/amazon-nova-multimodal-embeddings-now-available-in-amazon-bedrock" target="_blank">AWS News blog</a> and <a href="https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html" target="_blank">user guide</a>. To get started with Nova Multimodal Embeddings in Amazon Bedrock, visit the <a href="https://console.aws.amazon.com/bedrock/" target="_blank">Amazon Bedrock console</a>.</p>

Read article →

Amazon EC2 I7i instances now available in additional AWS GovCloud (US) Regions

<p>Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the AWS GovCloud (US-East, US-West) Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these new instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances.<br /> <br /> I7i instances offer the best compute and storage performance for x86-based storage optimized instances in Amazon EC2, ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size datasets (multi-TBs). Additionally, the torn write prevention feature supports block sizes up to 16KB, enabling customers to eliminate database performance bottlenecks.<br /> <br /> I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth.<br /> To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/i7i/">I7i instances page</a>.</p>

Read article →

Amazon EC2 I7ie instances now available in AWS GovCloud (US) Region

<p>Starting today, Amazon EC2 I7ie instances are available in the AWS GovCloud (US-West) Region. Designed for large storage I/O intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon Processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances offer up to 120TB of local NVMe storage, the highest density in the cloud for storage optimized instances, and up to twice as many vCPUs and as much memory as prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.<br /> <br /> I7ie instances are high-density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently low latency when accessing large data sets. These instances are available in 9 different virtual sizes and deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS).<br /> <br /> To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/i7ie/">I7ie instances page</a>.</p>

Read article →

Amazon EC2 Im4gn instances now available in AWS Europe (Milan) Region

<p>Starting today, Amazon EC2 Im4gn instances are available in the Europe (Milan) Region. Im4gn instances are built on the AWS Nitro System and are powered by AWS Graviton2 processors. They feature up to 30TB of instance storage with the 2nd Generation AWS Nitro SSDs that are custom-designed by AWS for the storage performance of I/O intensive workloads such as SQL/NoSQL databases, search engines, distributed file systems, and data analytics. These instances help increase transactions processed per second (TPS) for I/O intensive workloads such as relational databases (e.g., MySQL, MariaDB, PostgreSQL) and NoSQL databases (KeyDB, ScyllaDB, Cassandra) that have medium to large data sets and can benefit from high compute performance and high network throughput. They are also an ideal fit for search engines and data analytics workloads requiring fast access to data sets on local storage.<br /> <br /> The Im4gn instances also feature up to 100 Gbps networking and support for Elastic Fabric Adapter (EFA) for applications requiring high levels of inter-node communication.<br /> <br /> Get started with Im4gn instances by visiting the <a href="https://console.aws.amazon.com/" target="_blank"><u>AWS Management Console</u></a>, <a href="https://aws.amazon.com/cli/" target="_blank"><u>AWS Command Line Interface (CLI)</u></a>, or <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html" target="_blank"><u>AWS SDKs</u></a>. To learn more, visit the <a href="https://aws.amazon.com/ec2/instance-types/i4g/" target="_blank"><u>Im4gn instances page</u></a>.</p>

Read article →

Amazon Redshift Serverless is now available in the AWS Asia Pacific (Osaka) and Asia Pacific (Malaysia) regions

<p><a href="https://aws.amazon.com/redshift/redshift-serverless/">Amazon Redshift Serverless</a>, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS Asia Pacific (Osaka) and Asia Pacific (Malaysia) regions. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.<br /> <br /> With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs.<br /> <br /> To get started, see the Amazon Redshift Serverless <a href="https://aws.amazon.com/redshift/redshift-serverless/">feature page</a>, <a href="https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-serverless.html">user documentation</a>, and <a href="https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/Welcome.html">API Reference</a>.</p>

Read article →

Amazon Location Service introduces new API key restrictions

<p>Today, AWS announced enhanced API key restrictions for Amazon Location Service, enabling developers to secure their location-based applications more effectively. This new capability helps organizations that need to restrict API access to specific mobile applications, providing improved security controls for location services across their application portfolio.<br /> <br /> Developers can now create granular security policies by restricting API keys to specific Android applications using package names and SHA-1 certificate fingerprints, or to iOS applications using Bundle IDs. For example, enterprises can ensure their API keys only work with their approved mobile applications, while development teams can create separate keys for testing and production environments.<br /> <br /> Amazon Location Service API key restrictions are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). To implement these restrictions, you'll need to update your API key configurations using the Amazon Location Service console or APIs. To learn more, please visit the <a href="https://docs.aws.amazon.com/location/latest/developerguide/using-apikeys.html">Developer Guide</a>.</p>

Read article →

Amazon SageMaker adds additional search context for search results

<p>Amazon SageMaker enhances search results in Amazon SageMaker Unified Studio with additional context that improves transparency and interpretability. Users can see which metadata fields matched their query and understand why each result appears, increasing clarity and trust in data discovery. The capability introduces inline highlighting for matched terms and an explanation panel that details where and how each match occurred across metadata fields such as name, description, glossary, schema, and other metadata.<br /> <br /> The enhancement reduces time spent evaluating irrelevant assets by presenting match evidence directly in search results. Users can quickly validate relevance without opening individual assets.<br /> <br /> This capability is now available in all AWS Regions where Amazon SageMaker is supported.<br /> <br /> To learn more about Amazon SageMaker, see the Amazon SageMaker <a href="https://docs.aws.amazon.com/next-generation-sagemaker/latest/userguide/what-is-sagemaker.html" target="_blank">documentation</a>.</p>

Read article →

Amazon ECS Managed Instances now available in all commercial AWS Regions

<p>Amazon Elastic Container Service (Amazon ECS) Managed Instances is now available in all commercial <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>. ECS Managed Instances is a fully managed compute option designed to eliminate infrastructure management overhead while giving you access to the full capabilities of Amazon EC2. By offloading infrastructure operations to AWS, you get the application performance you want and the simplicity you need while reducing your total cost of ownership.<br /> <br /> Managed Instances dynamically scales EC2 instances to match your workload requirements and continuously optimizes task placement to reduce infrastructure costs. It also enhances your security posture through regular security patching initiated every 14 days. You can simply define your task requirements such as the number of vCPUs, memory size, and CPU architecture, and Amazon ECS automatically provisions, configures, and operates the most suitable EC2 instances within your AWS account using AWS-controlled access. You can also specify desired instance types in the Managed Instances Capacity Provider configuration, including GPU-accelerated, network-optimized, and burstable performance instances, to run your workloads on the instance families you prefer.<br /> <br /> To get started with ECS Managed Instances, use the AWS Console, Amazon ECS MCP Server, or your favorite infrastructure-as-code tooling to enable it in a new or existing Amazon ECS cluster. You will be charged for the management of compute provisioned, in addition to your regular Amazon EC2 costs. To learn more about ECS Managed Instances, visit the <a href="https://aws.amazon.com/ecs/managed-instances/">feature page</a>, <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ManagedInstances.html">documentation</a>, and <a href="https://aws.amazon.com/blogs/aws/announcing-amazon-ecs-managed-instances-for-containerized-applications">AWS News launch blog</a>.</p>

Read article →

Amazon Cognito now supports resource indicators to simplify enhancing protection of OAuth 2.0 resources

<p>Amazon Cognito now enables app clients to specify resource indicators during access token requests as part of its OAuth 2.0 authorization code grant and implicit grant flows. The resource indicator identifies the protected resource, such as a user’s bank account record or a specific file in a file server that the user needs to access. After authenticating the client, Cognito then issues an access token for that specific resource. This ensures that access tokens can be scoped down from broad service-level access to specific individual resources.<br /> <br /> This capability makes it simpler to protect resources that a user needs to access. For example, agents (an example of app clients) acting on behalf of users can request access tokens for specific protected resources, such as a user’s banking records. After validation, Cognito issues an access token with the audience claim set to the specific resource. Previously, clients had to use non-standard claims or scopes for Cognito to infer and issue resource-specific access tokens. Now, customers can specify the target resource in a simple and consistent way using the standards-based resource parameter.<br /> <br /> This capability is available to Amazon Cognito <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-managed-login.html" target="_blank">Managed Login</a> customers using <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-sign-in-feature-plans.html" target="_blank">Essentials or Plus tiers</a> in AWS Regions where Cognito is available, including the AWS GovCloud (US) Regions. To learn more, refer to the <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/authorization-endpoint.html" target="_blank">developer guide</a>, and <a href="https://aws.amazon.com/cognito/pricing/" target="_blank">pricing</a> for the Cognito Essentials and Plus tiers.</p>
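To illustrate the flow, the sketch below builds an authorization request that carries the standards-based resource parameter (RFC 8707); the Cognito domain, client ID, redirect URI, and resource identifier are illustrative placeholders.

```python
from urllib.parse import urlencode

# Authorization request pinned to a single protected resource. All values below
# are illustrative placeholders; "resource" is the standards-based parameter.
params = {
    "response_type": "code",
    "client_id": "1example23456789",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid",
    "resource": "https://api.example.com/accounts",
}
authorize_url = "https://auth.example.com/oauth2/authorize?" + urlencode(params)
print(authorize_url)

# After the code is exchanged at the token endpoint, the access token's audience
# (aud) claim is set to the requested resource, as described above.
```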

Read article →

AWS Payment Cryptography is now available in Canada (Montreal), Africa (Cape Town), and Europe (London)

<p><a href="https://aws.amazon.com/payment-cryptography/">AWS Payment Cryptography</a>&nbsp;has expanded its global presence with availability in three new regions - Canada(Montreal), Africa (Cape Town) and Europe (London). This expansion enables customers with latency-sensitive payment applications to build, deploy or migrate into additional AWS Regions without depending on cross-region support. For customers processing payment workloads in Europe, availability in London offers additional options for multi-Region high availability.<br /> <br /> AWS Payment Cryptography is a fully managed service that simplifies payment-specific cryptographic operations and key management for cloud-hosted payment applications. The service scales elastically with your business needs and is assessed as compliant with PCI PIN and PCI P2PE requirements, eliminating the need to maintain dedicated payment HSM instances. Organizations performing payment functions - including acquirers, payment facilitators, networks, switches, processors, and banks can now position their payment cryptographic operations closer to their applications while reducing dependencies on auxiliary data centers with dedicated payment HSMs.<br /> <br /> AWS Payment Cryptography is available in the following AWS Regions: Canada(Montreal), US East (Ohio, N. Virginia), US West (Oregon), Europe (Ireland, Frankfurt, London), Africa(Cape Town) and Asia Pacific (Singapore, Tokyo, Osaka, Mumbai).<br /> <br /> To start using the service, please download the latest AWS&nbsp;<a href="https://aws.amazon.com/developer/tools/">CLI/SDK</a>&nbsp;and see the&nbsp;<a href="https://docs.aws.amazon.com/payment-cryptography/latest/userguide/what-is.html">AWS Payment Cryptography</a>&nbsp;user guide for more information.</p>

Read article →

Generative AI observability now generally available for Amazon CloudWatch

<p>Amazon CloudWatch announces the general availability of generative AI observability, helping you monitor all components of AI applications and workloads, including agents deployed and operated with Amazon Bedrock AgentCore. This release expands beyond runtime monitoring to include complete observability across AgentCore's Built-in Tools, Gateways, Memory, and Identity capabilities. DevOps teams and developers can now get an out-of-the-box view into latency, token usage, errors, and performance across all components of their AI workloads, from model invocations to agent operations. This feature is compatible with popular generative AI orchestration frameworks such as <a href="https://strandsagents.com/latest/">Strands Agents</a>, LangChain, and LangGraph, offering flexibility with your choice of framework.<br /> <br /> With this new feature, CloudWatch enables developers to analyze telemetry data across components of a generative AI application. Customers can monitor code execution patterns in Built-in Tools, track API transformation success rates through Gateways, analyze memory storage and retrieval patterns, and ensure secure agent behavior through Identity observability. The connected view helps developers quickly identify issues - from gaps in VectorDB to authentication failures - using end-to-end prompt tracing, curated metrics, and logs. Developers can monitor their entire agent fleet through the "AgentCore" section in the CloudWatch console, which integrates seamlessly with other CloudWatch capabilities including Application Signals, Alarms, Sensitive Data Protection, and Logs Insights.<br /> <br /> This feature is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney).<br /> <br /> To learn more, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/GenAI-observability.html">documentation</a>. There is no additional pricing for generative AI observability; existing CloudWatch <a href="https://aws.amazon.com/cloudwatch/pricing/">pricing</a> for the underlying telemetry data applies.</p>

Read article →

AWS Client VPN now supports macOS Tahoe

<p>AWS Client VPN now supports macOS Tahoe with client version 5.3.1. You can now run the AWS supplied VPN client on the latest macOS versions. AWS Client VPN desktop clients are available free of charge, and can be downloaded <a href="https://aws.amazon.com/vpn/client-vpn-download/">here</a>.<br /> <br /> AWS Client VPN is a managed service that securely connects your remote workforce to AWS or on-premises networks. It supports desktop clients for macOS, Windows x64, Windows Arm64, and Ubuntu Linux. With client version 5.3.1 onwards, Client VPN now supports macOS Tahoe 26.0. It already supports macOS versions 13.0, 14.0, and 15.0, Windows 10 (x64) and Windows 11 (Arm64 and x64), and Ubuntu Linux 22.04 and 24.04 LTS versions.<br /> To learn more about Client VPN:<br /> </p> <ul> <li>Visit the AWS Client VPN <a href="https://aws.amazon.com/vpn/">product page</a></li> <li>Read the AWS Client VPN <a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html">documentation</a></li> <li>Read the AWS Client VPN <a href="https://docs.aws.amazon.com/vpn/latest/clientvpn-user/client-vpn-user-what-is.html">user guide</a></li> </ul>

Read article →

Amazon Bedrock AgentCore is now generally available

<p>Amazon Bedrock AgentCore is an agentic platform to build, deploy and operate highly capable agents securely at scale using any framework, model, or protocol. AgentCore lets you build agents faster, enable agents to take actions across tools and data, run agents securely with low latency and extended runtimes, and monitor agents in production - all without any infrastructure management.</p> <p>With general availability, all AgentCore services now have support for Virtual Private Cloud (VPC), AWS PrivateLink, AWS CloudFormation, and resource tagging, enabling developers to deploy AI agents with enhanced enterprise security and infrastructure automation capabilities. AgentCore Runtime builds on its preview capabilities of industry-leading eight-hour execution windows and complete session isolation by adding support for the Agent-to-Agent (A2A) protocol, with broader A2A support coming soon across all AgentCore services. AgentCore Memory now offers a self-managed strategy that gives you complete control over your memory extraction and consolidation pipelines. AgentCore Gateway now connects to existing Model Context Protocol (MCP) servers in addition to transforming APIs and Lambda functions into agent-compatible tools. It also supports Identity and Access Management (IAM) authorization, enabling customers to leverage IAM in addition to OAuth for secure agent-to-tool interactions over MCP, and acts as a single, secure endpoint for agents to discover and use tools without the need for custom integrations. AgentCore Identity now offers identity-aware authorization, secure vault storage for refresh tokens, and native integration with additional OAuth-enabled services so agents can securely act on behalf of users or by themselves with enhanced access controls. AgentCore Observability now delivers complete visibility into end-to-end agent execution and operational metrics across all AgentCore services through dashboards powered by Amazon CloudWatch, and it is OTEL compatible, offering seamless integration with Amazon CloudWatch and external observability providers like Dynatrace, Datadog, Arize Phoenix, LangSmith, and Langfuse. AgentCore works with any open source framework (CrewAI, LangGraph, LlamaIndex, Google ADK, OpenAI Agents SDK) and any model in or outside Amazon Bedrock, giving you freedom to use your preferred frameworks and models, and innovate with confidence.</p> <p>Amazon Bedrock AgentCore is available in nine AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).</p> <p>Learn more about AgentCore through the <a href="https://aws.amazon.com/blogs/machine-learning/amazon-bedrock-agentcore-is-now-generally-available/" target="_blank">blog</a>, deep dive using the <a href="https://aws.amazon.com/bedrock/agentcore/resources/" target="_blank">AgentCore resources</a>, and get started with the <a href="https://github.com/aws/bedrock-agentcore-starter-toolkit" target="_blank">AgentCore Starter Toolkit</a>. AgentCore offers <a href="https://aws.amazon.com/bedrock/agentcore/pricing/" target="_blank">consumption-based pricing</a> with no upfront costs.</p>

Read article →

Amazon Aurora PostgreSQL now supports R8g database instances in additional AWS regions

<p>AWS Graviton4-based R8g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility in the AWS Canada (Central), AWS Asia Pacific (Singapore), and AWS Asia Pacific (Seoul) Regions. R8g instances offer larger instance sizes, up to 48xlarge, feature an 8:1 ratio of memory to vCPU, and use the latest DDR5 memory. Graviton4-based instances provide up to a 40% performance improvement and up to 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon Aurora PostgreSQL databases, depending on database engine, version, and workload.<br /> <br /> AWS Graviton4 processors are the latest generation of custom-designed AWS Graviton processors built on the AWS Nitro System. R8g DB instances are available with new 24xlarge and 48xlarge sizes. With these new sizes, R8g DB instances offer up to 192 vCPU, up to 50Gbps enhanced networking bandwidth, and up to 40Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).<br /> <br /> You can launch Graviton4 R8g database instances in the <a href="https://console.aws.amazon.com/rds/home">Amazon RDS Management Console</a> or using the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.CreateInstance.html">AWS CLI</a>. Upgrading a database instance to Graviton4 requires a <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html#USER_ModifyInstance.Settings">simple instance type modification</a>. For more details, refer to the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Modifying.html">Aurora documentation</a>.<br /> <br /> <a href="https://aws.amazon.com/rds/aurora/">Amazon Aurora</a> is designed for unparalleled high performance and availability at global scale with full PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.html">getting started page</a>.</p>
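The upgrade path mentioned above is a single instance type modification. Below is a minimal boto3 sketch of that call; the instance identifier, Region, and target size are placeholders, so substitute values appropriate to your environment.

```python
import boto3

rds = boto3.client("rds", region_name="ca-central-1")

# Move an existing Aurora PostgreSQL instance to a Graviton4-based R8g class.
# "database-1" and the target size are placeholders; choose a size that is
# supported for your engine version in your Region.
rds.modify_db_instance(
    DBInstanceIdentifier="database-1",
    DBInstanceClass="db.r8g.4xlarge",
    ApplyImmediately=True,  # or omit to apply during the next maintenance window
)
```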

Read article →

Amazon Neptune Analytics is now available in AWS Canada (Central) and Australia (Sydney) Regions

<p><a href="https://docs.aws.amazon.com/neptune-analytics/latest/userguide/what-is-neptune-analytics.html">Amazon Neptune Analytics</a> is now available in the AWS Canada (Central) and Australia (Sydney) Regions. You can now create and manage Neptune Analytics graphs in the AWS Canada (Central) and Australia (Sydney) Regions and run advanced graph analytics and vector similarity search.<br /> <br /> Neptune Analytics is a memory-optimized graph database engine for analytics. With Neptune Analytics, you can get insights and find trends by processing large amounts of graph data in seconds. To analyze graph data quickly and easily, Neptune Analytics stores large graph datasets in memory. It supports a library of optimized graph analytic algorithms, low-latency graph queries, and vector search capabilities within graph traversals. Neptune Analytics is an ideal choice for investigatory, exploratory, or data-science workloads that require fast iteration for data, analytical and algorithmic processing, or vector search on graph data. It complements <a href="https://docs.aws.amazon.com/neptune/latest/userguide/intro.html">Amazon Neptune Database</a>, a popular managed graph database. To perform intensive analysis, you can load the data from a Neptune Database graph or snapshot into Neptune Analytics. You can also load graph data that's stored in Amazon S3.<br /> <br /> To get started, you can create a new Neptune Analytics graphs using the <a href="https://ca-central-1.console.aws.amazon.com/neptune/home?region=ca-central-1#analytics-graphs:">AWS Management Console</a>, <a href="https://docs.aws.amazon.com/cli/latest/reference/neptune-graph/">or AWS CLI</a>. For more information on pricing and region availability, refer to the <a href="https://aws.amazon.com/neptune/pricing/">Neptune pricing page</a> and <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region Table</a>.&nbsp;</p>

Read article →

AWS Service Availability Updates

<p>After careful consideration, we’re announcing availability changes for a select group of AWS services and features. These changes fall into three lifecycle categories:<br /> <br /> Services and Capabilities moving to Maintenance<br /> <br /> Services moving to maintenance will no longer be accessible to new customers starting Nov 7, 2025. Current customers can continue using the service or feature while exploring alternative solutions.<br /> </p> <ul> <li><a href="https://docs.aws.amazon.com/clouddirectory/latest/developerguide/cloud-directory-availability-change.html">Amazon Cloud Directory</a></li> <li><a href="https://docs.aws.amazon.com/codecatalyst/latest/userguide/migration.html">Amazon CodeCatalyst</a></li> <li><a href="https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/codeguru-reviewer-availability-change.html">Amazon CodeGuru Reviewer</a></li> <li><a href="https://docs.aws.amazon.com/frauddetector/latest/ug/what-is-frauddetector.html">Amazon Fraud Detector</a></li> <li><a href="https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html">Amazon Glacier</a></li> <li><a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon3-ol-change.html">Amazon S3 Object Lambda</a></li> <li><a href="https://docs.aws.amazon.com/workspaces/latest/userguide/amazon-workspaces-web-access.html">Amazon Workspaces Web Access Client for PCoIP (STXHD)</a></li> <li><a href="https://docs.aws.amazon.com/application-discovery/latest/userguide/application-discovery-service-availability-change.html">AWS Application Discovery Service</a></li> <li><a href="https://docs.aws.amazon.com/omics/latest/dev/variant-store-availability-change.html">AWS HealthOmics - Variant and Annotation Store</a></li> <li><a href="https://docs.aws.amazon.com/iot-sitewise/latest/userguide/iotsitewise-dpp-availability-change.html">AWS IoT SiteWise Edge Data Processing Pack</a></li> <li><a href="https://docs.aws.amazon.com/iot-sitewise/latest/appguide/iotsitewise-monitor-availability-change.html">AWS IoT SiteWise Monitor</a></li> <li><a href="https://docs.aws.amazon.com/m2/latest/userguide/mainframe-modernization-availability-change.html">AWS Mainframe Modernization Service</a></li> <li><a href="https://docs.aws.amazon.com/migrationhub/latest/ug/migrationhub-availability-change.html">AWS Migration Hub</a></li> <li><a href="https://docs.aws.amazon.com/snowball/latest/developer-guide/snowball-edge-availability-change.html">AWS Snowball Edge Compute Optimized</a></li> <li><a href="https://docs.aws.amazon.com/snowball/latest/developer-guide/snowball-edge-availability-change.html">AWS Snowball Edge Storage Optimized</a></li> <li><a href="https://www.docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html">AWS Systems Manager - Change Manager</a></li> <li><a href="https://docs.aws.amazon.com/incident-manager/latest/userguide/incident-manager-availability-change.html">AWS Systems Manager - Incident Manager</a></li> <li><a href="https://aws.amazon.com/thinkbox-deadline/">AWS Thinkbox Deadline 10</a></li> <li><a href="https://docs.aws.amazon.com/portingassistant/latest/userguide/what-is-porting-assistant.html">.NET Modernization Tools</a></li> </ul> <p><br /> Services Entering Sunset<br /> <br /> The following services are entering sunset, and we are announcing the date upon which we will end operations and support of the service. 
Customers using these services should click on the links below to understand the sunset timeline (typically 12 months), and begin planning migration to alternatives as recommended in the updated service web pages and documentation.<br /> </p> <ul> <li><a href="https://docs.aws.amazon.com/finspace/latest/userguide/amazon-finspace-end-of-support.html">Amazon FinSpace</a></li> <li><a href="https://aws.amazon.com/blogs/machine-learning/preserve-access-and-explore-alternatives-for-amazon-lookout-for-equipment/">Amazon Lookout for Equipment</a></li> <li><a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/migrate-from-v1.html">AWS IoT Greengrass v1</a></li> <li><a href="https://docs.aws.amazon.com/proton/latest/userguide/proton-end-of-support.html">AWS Proton</a></li> </ul> <p><br /> Services Reaching End of Support<br /> <br /> The following services have reached end of support and are no longer available as of October 7, 2025.<br /> <br /> </p> <ul> <li>AWS Mainframe Modernization App Testing</li> </ul> <p><br /> For customers affected by these changes, we've prepared comprehensive migration guides, and our support teams are ready to assist with your transition. Visit the <a href="https://aws.amazon.com/products/lifecycle/">AWS Product Lifecycle Page</a> to learn more, or contact <a href="https://aws.amazon.com/contact-us/">AWS Support</a>.</p>

Read article →

Amazon Connect now supports copy and bulk edit of agent scheduling configuration

<p>Amazon Connect now supports copy and bulk edit of agent scheduling configuration, making it easier to set up and maintain agent schedules. You can create new scheduling configurations by copying existing ones. For example, copy a weekday shift profile to create a weekend variant, or copy scheduling configuration (time zone, weekly working hours, days off, and so on) from an existing agent to multiple new hires. When bulk editing, you can select specific fields to update, such as updating the time zone and start date for new hires without changing their weekly working hours. These updates reduce time spent by managers on configuration management, thus improving productivity and operational efficiency.<br /> <br /> This feature is available in all <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#optimization_region">AWS Regions</a> where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click <a href="https://docs.aws.amazon.com/connect/latest/adminguide/forecasting-capacity-planning-scheduling.html">here</a>.</p>

Read article →

Amazon EC2 High Memory U7i instances now available in Asia Pacific (Mumbai) Region

<p>Starting today, Amazon EC2 High Memory U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in the Asia Pacific (Mumbai) Region. U7i-12tb instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-12tb instances offer 12TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-12tb instances offer 896 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/" target="_blank">High Memory instances page</a>.</p>

Read article →

AWS now supports immediate resource discovery within a Region

<p>AWS now provides immediate access to resource search capabilities in all accounts through AWS Resource Explorer. With this launch, you no longer need to activate Resource Explorer to discover your resources in a Region.<br /> <br /> To start searching, you need, at minimum, the permissions in the AWS Resource Explorer Read Only Access or AWS Read Only Access managed policies. You can discover resources in the AWS Resource Explorer console, Unified Search, and the AWS CLI and SDKs. To search the full inventory of supported resources, including historical backfill and automatic updates, complete Resource Explorer setup. This requires additional permissions to create a Service-Linked Role, so that Resource Explorer can automatically complete setup in each Region where you search. You can also enable cross-Region search to discover resources across all Regions in your AWS account with one click in the console, or with a single API call using the new CreateResourceExplorerSetup API.<br /> <br /> This feature is available at no additional cost in all <a href="https://docs.aws.amazon.com/resource-explorer/latest/userguide/welcome.html#supported-regions" target="_blank">AWS Regions where Resource Explorer is supported</a>. To start searching for your resources, visit the <a href="https://resource-explorer.console.aws.amazon.com/" target="_blank">AWS Resource Explorer console</a>. Read about getting started in the <a href="https://docs.aws.amazon.com/resource-explorer/latest/userguide/welcome.html" target="_blank">AWS Resource Explorer documentation</a>, or explore the <a href="https://aws.amazon.com/resourceexplorer/" target="_blank">AWS Resource Explorer product page</a>.</p>
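As a quick illustration of the no-setup search path, the sketch below calls the Resource Explorer Search API directly from boto3. The query string is only an example filter; adjust it to the resource types you care about.

```python
import boto3

rex = boto3.client("resource-explorer-2", region_name="us-east-1")

# Search the current Region for EC2 instances without any prior Resource Explorer setup.
response = rex.search(QueryString="resourcetype:ec2:instance")
for resource in response.get("Resources", []):
    print(resource["Arn"])
```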

Read article →

Introducing Amazon Quick Suite: your agentic AI-powered workspace

<p>Today, we’re announcing the general availability of <a href="https://aws.amazon.com/quicksuite/" target="_blank">Amazon Quick Suite</a>—a&nbsp;new set of agentic teammates that helps you get the answers you need using all of your business data and move instantly from insights to action. Quick Suite retrieves insights across the public internet and&nbsp;all&nbsp;your documents, including information in popular third party applications, databases, and other places your company keeps important data. Whether you need a single data point, a PhD-level research project, an entire strategy tailored to your context, or anything in between, Quick Suite quickly gets you&nbsp;all the relevant information.</p> <p>Quick Suite helps you seamlessly transition from getting answers to taking action in popular applications (like creating or updating Jira tickets, or ServiceNow incidents). Quick Suite can also help you automate tasks—from routine, daily tasks like responding to RFPs and preparing for customer meetings to automating the most complex business processes such as invoice processing and account reconciliation.&nbsp;All of your data is safe and private. Your queries and data are never used to train models, and you can tailor the Quick Suite experience to you.&nbsp;Your AWS administrator can turn on Quick Suite in only a few steps, and your new agentic teammate will be ready to go. New Quick Suite customers receive a 30-day free trial for up to 25 users.&nbsp;</p> <p>You can experience the full breadth of Quick Suite capabilities for chat, research, business intelligence, and automation in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a>: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland), and we'll expand availability to additional AWS Regions over the coming months.</p> <p>To learn more about Quick Suite and its capabilities, read our <a href="https://aws.amazon.com/blogs/aws/reimagine-the-way-you-work-with-ai-agents-in-amazon-quick-suite/" target="_blank">deep-dive blog</a>.</p>

Read article →

Amazon EBS io2 Block Express supports China Regions

<p>Amazon EBS io2 Block Express volumes are now available in the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD.<br /> <br /> io2 Block Express volumes leverage the latest generation of EBS storage server architecture designed to deliver consistent sub-millisecond latency and 99.999% durability. With a single io2 Block Express volume, you can achieve 256,000 IOPS, 4GiB/s throughput, and 64TiB storage capacity. You can also attach an io2 Block Express volume to multiple instances in the same Availability Zone, supporting shared storage fencing through NVMe reservations for improved application availability and scalability. With the lowest p99.9 I/O latency among major cloud providers, io2 Block Express is the ideal choice for the most I/O-intensive, mission-critical deployments such as SAP HANA, Oracle, SQL Server, and IBM DB2.<br /> <br /> Customers using io1 volumes can upgrade to io2 Block Express without any downtime using the <a href="https://docs.amazonaws.cn/en_us/ebs/latest/userguide/ebs-modify-volume.html">ModifyVolume</a> API to achieve 100x higher durability, consistent sub-millisecond latency, and significantly higher performance at the same or lower cost than io1. With io2 Block Express, you can drive up to 4x IOPS and 4x throughput at the same storage price as io1, and up to 50% cheaper IOPS cost for volumes over 32,000 IOPS.<br /> <br /> io2 Block Express is now available in all Amazon Web Services Regions. You can create and manage io2 Block Express volumes using the Amazon Web Services Management Console, Amazon Command Line Interface (CLI), or Amazon SDKs. For more information on io2 Block Express, see our <a href="https://docs.amazonaws.cn/en_us/ebs/latest/userguide/provisioned-iops.html#io2-block-express">tech documentation</a>.</p>
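The in-place io1 upgrade described above is a single ModifyVolume call. Here is a minimal boto3 sketch; the volume ID and IOPS value are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="cn-north-1")  # China (Beijing) Region

# Convert an existing io1 volume to io2 Block Express with no downtime.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    VolumeType="io2",
    Iops=32000,  # keep or raise the provisioned IOPS as needed
)
```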

Read article →

Amazon Connect now supports agent schedule adherence notifications

<p>Amazon Connect now supports agent schedule adherence notifications, making it easier for you to proactively identify when agents aren't adhering to their scheduled activities. You can define rules to automatically send email or text notifications (via EventBridge) to supervisors when agents exceed adherence thresholds. For example, if agent adherence drops below 85% in a trailing 15-minute window, supervisors can receive an email alert. These automated notifications eliminate the need for continuous dashboard monitoring and enable proactive intervention before service levels decline, improving both supervisor productivity and customer satisfaction.<br /> <br /> This feature is available in all <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#optimization_region">AWS Regions</a> where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click <a href="https://docs.aws.amazon.com/connect/latest/adminguide/forecasting-capacity-planning-scheduling.html">here</a>.</p>

Read article →

Amazon Quick Sight expands font customization for visuals

<p><a href="https://aws.amazon.com/quicksight/">Amazon Quick Sight</a> now supports font customization for data labels and axes. Authors can now customize fonts for data labels and axes in supported charts, in addition to the previously supported <a href="https://aws.amazon.com/about-aws/whats-new/2024/11/amazon-quicksight-font-customization-visuals/">font customization for visual titles, subtitles, and legend, as well as tables and pivot tables headers.</a><br /> <br /> Authors can set the font size (in pixels), font family, color, and styling options like bold, italics, and underline across analysis, including dashboards, reports and embedded scenarios. With this update, you can further align your dashboard's fonts with your organization's branding guidelines, creating a more cohesive and visually appealing experience. Additionally, the expanded font customization options help improve readability, especially when viewing visualizations on large screens.<br /> <br /> This is now available in all <a href="https://docs.aws.amazon.com/quicksight/latest/user/regions-qs.html">supported Amazon Quick Suite regions</a>.<br /> <br /> To learn more about this, visit <a href="https://docs.aws.amazon.com/quicksight/latest/user/analytics-format-options.html">Amazon Quick Suite Visual formatting guide.</a></p>

Read article →

Announcing vector search for Amazon ElastiCache

<p>Vector search for <a href="https://aws.amazon.com/elasticache/" style="cursor: pointer;" target="_blank">Amazon ElastiCache</a> is now generally available. Customers can now use ElastiCache to index, search, and update billions of high-dimensional vector embeddings from popular providers like <a href="https://aws.amazon.com/bedrock/" style="cursor: pointer;" target="_blank">Amazon Bedrock</a>,&nbsp;<a href="https://aws.amazon.com/sagemaker/" style="cursor: pointer;" target="_blank">Amazon SageMaker</a>,&nbsp;<a href="https://www.anthropic.com/" style="cursor: pointer;" target="_blank">Anthropic</a>, and <a href="https://openai.com/" style="cursor: pointer;" target="_blank">OpenAI</a> with latency as low as microseconds and up to 99% recall.<br /> <br /> Key use cases include semantic caching for large language models (LLMs) and multi-turn conversational agents, which significantly reduce latency and cost by caching semantically similar queries. Vector search for ElastiCache also powers agentic AI systems with Retrieval Augmented Generation (RAG) to ensure highly relevant results and consistently low latency across multiple retrieval steps. Additional use cases include recommendation engines, anomaly detection, and other applications that require efficient search across multiple data modalities.<br /> <br /> Vector search for ElastiCache is available with Valkey version 8.2 on node-based clusters in all AWS Regions at no additional cost. To get started, create a Valkey 8.2 cluster using the <a href="https://console.aws.amazon.com/elasticache/" style="cursor: pointer;" target="_blank">AWS Management Console</a>, AWS Software Development Kit (SDK), or AWS Command Line Interface (CLI). You can also use vector search on your existing clusters by upgrading from any version of Valkey or Redis OSS to Valkey 8.2 in a <a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/VersionManagement.HowTo.html" style="cursor: pointer;" target="_blank">few clicks with no downtime</a>. To learn more about vector search for ElastiCache for Valkey read <a href="https://aws.amazon.com/blogs/database/announcing-vector-search-for-amazon-elasticache/" style="cursor: pointer;" target="_blank">this blog</a> and for a list of supported commands see the <a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/vector-search.html" style="cursor: pointer;" target="_blank">ElastiCache documentation</a>.&nbsp;</p>
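As a starting point, the sketch below creates a node-based Valkey 8.2 replication group with boto3, which is the engine version that supports vector search. The identifiers, node type, and sizing are placeholders; pick values that fit your workload.

```python
import boto3

elasticache = boto3.client("elasticache")

# Create a two-node Valkey 8.2 replication group that can use vector search.
elasticache.create_replication_group(
    ReplicationGroupId="vector-search-demo",
    ReplicationGroupDescription="Valkey 8.2 cluster for vector search",
    Engine="valkey",
    EngineVersion="8.2",
    CacheNodeType="cache.r7g.large",
    NumCacheClusters=2,
)
```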

Read article →

AWS Config now supports 3 new resource types

<p>AWS Config now supports 3 additional AWS resource types. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources.<br /> <br /> With this launch, if you have enabled recording for all resource types, then AWS Config will automatically track these new additions. The newly supported resource types are also available in Config rules and Config aggregators.<br /> <br /> You can now use AWS Config to monitor the following newly supported resource types in all <a contenteditable="false" href="https://docs.aws.amazon.com/config/latest/developerguide/what-is-resource-config-coverage.html" style="cursor: pointer;" target="_blank">AWS Regions</a> where the supported resources are available:</p> <ul> <li>AWS::ApiGatewayV2::Integration</li> <li>AWS::CloudTrail::EventDataStore</li> <li>AWS::Config::StoredQuery</li> </ul>
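Once recording picks up the new types, you can query them like any other resource with Config advanced queries. A small sketch, using one of the newly supported types as the filter:

```python
import boto3

config = boto3.client("config")

# List recorded CloudTrail event data stores across the recorder's scope.
response = config.select_resource_config(
    Expression="SELECT resourceId, awsRegion WHERE resourceType = 'AWS::CloudTrail::EventDataStore'"
)
for row in response.get("Results", []):
    print(row)  # each result is returned as a JSON string
```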

Read article →

Amazon SageMaker AI Projects now supports custom template S3 provisioning

<p>Amazon SageMaker AI Projects now supports provisioning custom machine learning (ML) project templates from Amazon S3. Administrators can now manage ML templates in SageMaker AI Studio so data scientists can create standardized ML projects that meet their organizational needs.<br /> <br /> Data scientists use Amazon SageMaker AI Projects to create standardized ML projects that meet organizational requirements and to automate ML development workflows. Administrators define the project templates, which capture end-to-end development patterns. By provisioning custom templates from Amazon S3, administrators can make these templates available to data scientists directly in SageMaker AI Studio, ensuring all ML projects follow organizational standards.<br /> <br /> SageMaker AI Projects custom template S3 provisioning is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a> where SageMaker AI Projects is available.<br /> <br /> To learn more, visit the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects-templates-custom.html" target="_blank">SageMaker AI Projects documentation</a> and <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated.html" target="_blank">SageMaker AI Studio</a>.&nbsp;</p>

Read article →

Amazon Route 53 Profiles now supports AWS PrivateLink

<p>Amazon Route 53 Profiles now supports&nbsp;<a href="https://aws.amazon.com/privatelink/" target="_blank">AWS PrivateLink</a>. Customers can now access and manage their Profiles privately, without going through the public internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely over the Amazon network. When Route 53 Profiles is accessed via AWS PrivateLink, all&nbsp;operations,&nbsp;such as creating, deleting, editing, and listing of Profiles, can be handled via the Amazon private network.&nbsp;<br /> <br /> Route 53 Profiles allows you to define a standard DNS configuration, in the form of a Profile, that may include Route 53 private hosted zone (PHZ) associations, Route 53 Resolver rules, and Route 53 Resolver DNS Firewall rule groups, and apply this configuration to multiple VPCs in your account.&nbsp;Profiles can also be used to&nbsp;enforce DNS settings for your VPCs, with configurations for&nbsp;DNSSEC validations, Resolver reverse DNS lookups, and the DNS Firewall failure mode. You can share&nbsp;Profiles with AWS accounts in your organization using AWS Resource Access Manager (RAM).&nbsp;Customers can use&nbsp;Profiles with AWS PrivateLink&nbsp;in regions where Route 53 Profiles is available today, including the AWS GovCloud (US) Regions. For more information about the AWS Regions where Profiles is available, see&nbsp;<a href="https://docs.aws.amazon.com/general/latest/gr/r53.html" target="_blank">here</a>.<br /> <br /> To learn more about configuring Route 53 Profiles, please refer to the service&nbsp;<a href="https://docs.aws.amazon.com/Route53/latest/APIReference/API_Operations_Route_53_Profiles.html" target="_blank">documentation</a>.</p>
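A hedged sketch of the PrivateLink setup follows: it creates an interface VPC endpoint for the Route 53 Profiles API with boto3. The VPC, subnet, and security group IDs are placeholders, and the endpoint service name shown is an assumption; confirm the exact name for your Region in the documentation before using it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint so Route 53 Profiles API calls stay on the Amazon network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.route53profiles",  # assumed service name; verify for your Region
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```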

Read article →

Amazon EC2 M7i instances are now available in the Europe (Milan) Region

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Europe (Milan) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.<br /> <br /> M7i instances deliver up to 15% better price-performance compared to M6i instances. M7i instances are a great choice for workloads that need the largest instance sizes or continuous high CPU usage, such as gaming servers, CPU-based machine learning (ML), and video-streaming. M7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology that are used to facilitate efficient offload and acceleration of data operations and optimize performance for workloads.<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/ec2/instance-types/m7i/">Amazon EC2 M7i Instances</a>. To get started, see the <a href="https://console.aws.amazon.com/">AWS Management Console</a>.</p>

Read article →

Amazon RDS for Oracle zero-ETL integration with Amazon Redshift is now available in 8 additional AWS regions

<p>Amazon RDS for Oracle zero-ETL integration with Amazon Redshift is now available in the Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Canada West (Calgary), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), and Middle East (UAE) Regions. Amazon RDS for Oracle zero-ETL integration with Amazon Redshift enables near real-time analytics and machine learning (ML) to analyze petabytes of transactional data in Amazon Redshift without complex data pipelines for extract-transform-load (ETL) operations. Within seconds of data being written to an Amazon RDS for Oracle database instance, the data is replicated to Amazon Redshift. Zero-ETL integrations simplify the process of analyzing data from Amazon RDS for Oracle database instances, enabling you to derive holistic insights across multiple applications with ease. <br /> <br /> You can use the AWS Management Console, API, AWS CLI, and AWS CloudFormation to create and manage zero-ETL integrations between RDS for Oracle and Amazon Redshift. If you use the Oracle multitenant architecture, you can choose specific pluggable databases (PDBs) to replicate selectively. In addition, you can choose specific tables and tailor replication to your needs.<br /> <br /> RDS for Oracle zero-ETL integration with Redshift is available with Oracle Database version 19c. To learn more, refer to the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.html">Amazon RDS</a> and <a href="https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.html">Amazon Redshift</a> documentation.&nbsp;</p>
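For the CLI/SDK path mentioned above, here is a minimal boto3 sketch of creating an integration. The ARNs are placeholders, and the data filter string is an assumed example of limiting replication to one schema in a pluggable database; check the zero-ETL documentation for the exact filter syntax.

```python
import boto3

rds = boto3.client("rds", region_name="eu-south-2")  # Europe (Spain)

rds.create_integration(
    IntegrationName="oracle-to-redshift",
    SourceArn="arn:aws:rds:eu-south-2:111122223333:db:my-oracle-instance",  # placeholder
    TargetArn="arn:aws:redshift-serverless:eu-south-2:111122223333:namespace/my-namespace",  # placeholder
    DataFilter="include: mypdb.sales.*",  # assumed filter syntax; verify in the documentation
)
```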

Read article →

Amazon Kinesis Data Streams announces new Fault Injection Service (FIS) actions for API errors

<p>Amazon Kinesis Data Streams now supports Fault Injection Service (FIS) actions for Kinesis API errors. Customers can now test their application's error handling capabilities, retry mechanisms (such as exponential backoff patterns), and CloudWatch alarms in a controlled environment. This allows customers to validate their monitoring systems and recovery processes before encountering real-world failures, ultimately improving application resilience and availability. This integration supports Kinesis Data Streams API errors including throttling, internal errors, service unavailable, and expired iterator exceptions.<br /> <br /> Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store real-time data streams at any scale. Customers can now simulate real-world Kinesis Data Streams API errors (including 500, 503, and 400 errors for GET and PUT operations) to test application resilience. This feature eliminates the need for custom fault-injection code or waiting for actual production failures to verify error-handling mechanisms. To get started, customers can create experiment templates through the FIS console to run tests directly or integrate them into their continuous integration pipeline. For additional safety, FIS experiments include automatic stop mechanisms that trigger when customer-defined thresholds are reached, ensuring controlled testing without risking application stability.<br /> <br /> These actions are generally available in all <a contenteditable="false" href="https://docs.aws.amazon.com/general/latest/gr/fis.html" style="cursor: pointer;">AWS Regions where FIS is available</a>, including the AWS GovCloud (US) Regions. To learn more about using these actions, please see the <a contenteditable="false" href="https://docs.aws.amazon.com/streams/latest/dev/working-with-streams.html" style="cursor: pointer;">Kinesis Data Streams User Guide</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html#aws-kinesis-actions" style="cursor: pointer;">FIS User Guide</a>.</p>

Read article →

Amazon AppStream 2.0 announces availability of license included Microsoft applications

<p>Amazon AppStream 2.0 now offers Microsoft applications with licenses included, providing customers with the flexibility to run these applications on AppStream 2.0 fleets. As part of this launch, AppStream 2.0 provides Microsoft Office, Visio, and Project 2021/2024 in both Standard and Professional editions. Each is available in both 32-bit and 64-bit versions for On-Demand and Always-On fleets.<br /> <br /> Administrators can dynamically control application availability by adding or removing applications from AppStream 2.0 images and fleets. End users benefit from a seamless experience, accessing Microsoft applications that are fully integrated with their business applications within their AppStream 2.0 sessions. This helps ensure that users can work efficiently with both Microsoft and business applications in a unified environment, eliminating the need to switch between different platforms or services.<br /> <br /> To get started, create an AppStream custom image by launching an image builder with a Windows Server operating system image. Select the desired set of applications to be installed. Then connect to the image builder and complete image creation by following the&nbsp;<a href="https://docs.aws.amazon.com/appstream2/latest/developerguide/tutorial-image-builder.html">Amazon AppStream 2.0 Administration Guide</a>.&nbsp;You must use an AppStream 2.0 image builder with an AppStream 2.0 agent released on or after October 2, 2025, or your image must use managed AppStream 2.0 image updates released on or after October 3, 2025.<br /> <br /> This functionality is generally available in all AWS Regions where AppStream 2.0 is offered. Customers are billed per hour for the AppStream streaming resources, and per-user per-month (non-prorated) for Microsoft applications. Please see&nbsp;<a href="https://aws.amazon.com/appstream2/pricing/">Amazon AppStream 2.0 Pricing</a>&nbsp;for more information.</p>

Read article →

AWS Backup expands information in job APIs and Backup Audit Manager reports

<p>AWS Backup now provides more details in backup job API responses and Backup Audit Manager reports to give you better visibility into backup configurations and compliance settings. You can verify your backup policies with a single API call.</p> <p>List and Describe APIs for backup, copy, and restore jobs now return fields that required multiple API calls before. Delegated administrators can now view backup job details across their organization. Backup jobs APIs include retention settings, vault lock status, encryption details, and backup plan information like plan names, rule names, and schedules. Copy job APIs return destination vault configurations, vault type, lock state, and encryption settings. Restore job APIs show source resource details and vault access policies. Backup Audit Manager reports include new columns with vault type, lock status, encryption details, archive settings, and retention periods. You can use this information to enhance audit trails and verify compliance with data protection policies.</p> <p>These expanded information fields are available today in all AWS Regions where AWS Backup and AWS Backup Audit Manager <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-region" style="cursor: pointer;">are supported</a>, with no additional charges.</p> <p>To learn more about AWS Backup Audit Manager, visit the <a href="https://aws.amazon.com/backup/" style="cursor: pointer;">product page</a> and <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/aws-backup-audit-manager.html" style="cursor: pointer;">documentation</a>. To get started, visit the <a href="https://console.aws.amazon.com/backup" style="cursor: pointer;">AWS Backup console</a>.</p>
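To see the richer responses, a single DescribeBackupJob call is enough. The sketch below prints whatever the API returns for one job; the job ID is a placeholder, and exact field names may differ from the summary above, so inspect the response for the fields you need.

```python
import boto3

backup = boto3.client("backup")

# Fetch one backup job and print every field returned, including the newly
# added configuration details such as retention and vault information.
job = backup.describe_backup_job(BackupJobId="19f07df4-aaaa-bbbb-cccc-111122223333")
for key, value in job.items():
    if key != "ResponseMetadata":
        print(f"{key}: {value}")
```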

Read article →

Amazon RDS now supports the latest CU and GDR updates for Microsoft SQL Server

<p><a href="https://aws.amazon.com/rds/sqlserver/" target="_blank">Amazon Relational Database Service (Amazon RDS) for SQL Server </a>now supports the latest General Distribution Release (GDR) updates for Microsoft SQL Server. This release includes support for Microsoft SQL Server 2016 SP3+GDR <a href="https://support.microsoft.com/help/5065226" target="_blank">KB5065226</a> (RDS version 13.00.6470.1.v1), SQL Server 2017 CU31+GDR <a href="https://support.microsoft.com/help/5065225" target="_blank">KB5065225</a> (RDS version 14.00.3505.1.v1), SQL Server 2019 CU32+GDR <a href="https://support.microsoft.com/help/5065222" target="_blank">KB5065222</a> (RDS version 15.00.4445.1.v1) and SQL Server 2022 CU21 <a href="https://learn.microsoft.com/en-us/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate21" target="_blank">KB5065865</a> (RDS version 16.00.4215.2.v1).<br /> <br /> The GDR updates address vulnerabilities described in CVE-2025-47997, CVE-2025-55227, CVE-2024-21907. For additional information on the improvements and fixes included in these updates, see Microsoft documentation for KB5065226, KB5065225, KB5065222 and KB5065865. We recommend that you upgrade your Amazon RDS for SQL Server instances to apply these updates using Amazon RDS Management Console, or by using the AWS SDK or CLI. You can learn more about upgrading your database instance in the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.SQLServer.html" target="_blank">Amazon RDS SQL Server User Guide for upgrading your RDS Microsoft SQL Server DB engine</a>.</p>

Read article →

Amazon MSK Connect is now available in ten additional AWS Regions

<p>Amazon MSK Connect is now available in ten additional AWS Regions: Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Melbourne), Europe (Milan), Europe (Zurich), Middle East (Bahrain), Middle East (UAE), Africa (Cape Town), and Israel (Tel Aviv).<br /> <br /> MSK Connect enables you to run fully managed Kafka Connect clusters with Amazon Managed Streaming for Apache Kafka (Amazon MSK). With a few clicks, MSK Connect allows you to easily deploy, monitor, and scale connectors that move data in and out of Apache Kafka and Amazon MSK clusters from external systems such as databases, file systems, and search indices. MSK Connect eliminates the need to provision and maintain cluster infrastructure. Connectors scale automatically in response to increases in usage, and you pay only for the resources you use. With full compatibility with Kafka Connect, it is easy to migrate workloads without code changes. MSK Connect supports both Amazon MSK-managed and self-managed Apache Kafka clusters.<br /> <br /> You can get started with MSK Connect from the Amazon MSK console or the AWS CLI. Visit the <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions page</a> for all the regions where Amazon MSK is available. To get started, visit the MSK Connect <a href="https://aws.amazon.com/msk/features/msk-connect/">product page</a>, <a href="https://aws.amazon.com/msk/pricing/">pricing page</a>, and the <a href="https://docs.aws.amazon.com/msk/latest/developerguide/msk-connect.html">Amazon MSK Developer Guide</a>.</p>

Read article →

AWS Transfer Family SFTP connectors now support VPC-based connectivity

<p>AWS Transfer Family <a href="https://docs.aws.amazon.com/transfer/latest/userguide/creating-connectors.html">SFTP connectors</a> can now connect to remote SFTP servers through your Amazon Virtual Private Cloud (VPC). This enables you to transfer files between Amazon S3 and any SFTP server, whether privately or publicly hosted, while leveraging the security controls and network configurations already defined in your VPC. By utilizing your NAT Gateways' bandwidth for file transfers over SFTP, you can achieve improved transfer performance and ensure compatibility with remote firewalls.<br /> <br /> <a href="https://aws.amazon.com/aws-transfer-family/">AWS Transfer Family</a> provides fully managed file transfers over SFTP, FTP, FTPS, AS2 and web-browser based interfaces. You can now use Transfer Family SFTP connectors to connect with SFTP servers that are only accessible from your VPC, including on-premises systems, external servers shared over private networks, or in-VPC servers. You can present the IP addresses from your VPC’s CIDR range for compatibility with IP controls, and achieve higher bandwidth for large-scale transfers via your NAT gateways when connecting over the internet. All connections are routed through your VPC’s existing networking and security controls, such as AWS Transit Gateway, centralized firewalls and traffic inspection points, helping you meet data security mandates. <br /> <br /> SFTP connectors support for VPC-based connectivity is available in select AWS Regions. To get started, visit the AWS Transfer Family console, or use AWS CLI/SDK. To learn more, read the <a href="https://aws.amazon.com/blogs/aws/aws-transfer-family-sftp-connectors-now-support-vpc-based-connectivity">AWS News Blog</a> or visit the <a href="https://docs.aws.amazon.com/transfer/latest/userguide/create-vpc-sftp-connector-procedure.html">Transfer Family User Guide</a>. </p>

Read article →

Amazon Connect now provides configurable thresholds for schedule adherence

<p>Amazon Connect now provides configurable thresholds for schedule adherence, giving you more flexibility in how you track agent performance. You can define thresholds for how early or late agents start or end their shifts, as well as for individual activities. For example, agents can start their shift 5 minutes early and end 10 minutes late, or end their breaks 3 minutes late, without negatively impacting their adherence scores. You can further customize these thresholds for individual teams. For example, teams that handle contacts with long handle times can be given more flexibility in when they start their breaks. This launch enables managers to focus on true adherence violations and eliminates the impact of minor schedule deviations on agent performance, thus improving manager productivity and agent satisfaction.<br /> <br /> This feature is available in all <a contenteditable="false" href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#optimization_region" style="cursor: pointer;">AWS Regions</a> where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click <a contenteditable="false" href="https://docs.aws.amazon.com/connect/latest/adminguide/forecasting-capacity-planning-scheduling.html" style="cursor: pointer;">here</a>.</p>

Read article →

Amazon EBS now supports Volume Clones for instant volume copies

<p>Today, Amazon Web Services (AWS) announces the general availability of Volume Clones for Amazon Elastic Block Store (Amazon EBS), our high-performance block storage service. This new capability allows you to instantly create and access point-in-time copies of EBS volumes within the same Availability Zone (AZ), accelerating software development workflows and enhancing operational agility.<br /> <br /> Customers use Amazon EBS volumes as durable block storage attached to Amazon EC2 instances. With Amazon EBS Volume Clones, you can instantly create copies of volumes and access the copied volumes with single-digit millisecond latency. Amazon EBS Volume Clones enables rapid creation of test and development environments from production volumes, eliminating manual copy workflows. Additionally, Volume Clones integrates with the Amazon EBS Container Storage Interface (CSI) driver, simplifying storage management for containerized applications.<br /> <br /> Amazon EBS Volume Clones is available in all AWS Commercial Regions and AWS GovCloud (US) Regions. You can access Volume Clones through the AWS Console, AWS Command Line Interface (CLI), AWS SDKs, and AWS CloudFormation. This capability supports all EBS volume types and works for volume copies within the same account and AZ.<br /> <br /> For detailed pricing information, please visit the <a href="https://aws.amazon.com/ebs/pricing/" target="_blank">EBS pricing page</a>. To explore how Volume Clones can accelerate your software development processes and improve operational efficiency, visit the <a href="https://docs.aws.amazon.com/ebs/latest/userguide/ebs-copying-volume.html" target="_blank">AWS documentation</a>.</p>

Read article →

Amazon MSK adds support for Apache Kafka version 4.1

<p>Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 4.1, introducing Queues as a preview feature, a new Streams Rebalance Protocol in early access, and Eligible Leader Replicas (ELR). Along with these features, Apache Kafka version 4.1 includes various bug fixes and improvements. For more details, please refer to the <a contenteditable="false" href="https://downloads.apache.org/kafka/4.1.0/RELEASE_NOTES.html" style="cursor: pointer;">Apache Kafka release notes for version 4.1. </a><br /> <br /> A key highlight of Kafka 4.1 is the introduction of Queues as a preview feature. Customers can use multiple consumers to process messages from the same topic partitions, improving parallelism and throughput for workloads that need point-to-point message delivery. The new Streams Rebalance Protocol builds upon Kafka 4.0's consumer rebalance protocol, extending broker coordination capabilities to Kafka Streams for optimized task assignments and rebalancing. Additionally, ELR is now enabled by default to strengthen availability.<br /> <br /> To start using Apache Kafka 4.1 on Amazon MSK, simply select version 4.1.x when creating a new cluster via the AWS Management Console, AWS CLI, or AWS SDKs. You can also upgrade existing MSK provisioned clusters with an in-place rolling update. Amazon MSK orchestrates broker restarts to maintain availability and protect your data during the upgrade. Kafka version 4.1 support is available today across all <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS regions</a> where Amazon MSK is offered. To learn how to get started, see the <a contenteditable="false" href="https://docs.aws.amazon.com/msk/latest/developerguide/getting-started.html" style="cursor: pointer;">Amazon MSK Developer Guide</a>.</p>
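The in-place rolling upgrade can also be driven from the SDK. Below is a minimal boto3 sketch; the cluster ARN is a placeholder, and the target version string should match the 4.1.x version offered for your cluster.

```python
import boto3

kafka = boto3.client("kafka")

cluster_arn = "arn:aws:kafka:us-east-1:111122223333:cluster/demo/EXAMPLE-UUID"  # placeholder

# The update call requires the cluster's current configuration revision.
current_version = kafka.describe_cluster(ClusterArn=cluster_arn)["ClusterInfo"]["CurrentVersion"]

kafka.update_cluster_kafka_version(
    ClusterArn=cluster_arn,
    CurrentVersion=current_version,
    TargetKafkaVersion="4.1.x",  # placeholder; use the exact 4.1 version listed for your cluster
)
```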

Read article →

Amazon RDS for MySQL and PostgreSQL zero-ETL integration with Amazon Redshift now available in 8 additional regions

<p>Amazon RDS for MySQL and Amazon RDS for PostgreSQL zero-ETL integration with Amazon Redshift is now available in the Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Canada West (Calgary), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), and Middle East (UAE) regions. Zero-ETL integrations enable near real-time analytics and machine learning (ML) on petabytes of transactional data using Amazon Redshift. Within seconds of data being written to Amazon RDS for MySQL or Amazon RDS for PostgreSQL, the data is replicated to Amazon Redshift. <br /> <br /> You can create multiple zero-ETL integrations from a single Amazon RDS database, and you can apply data filtering for each integration to include or exclude specific databases and tables, tailoring the zero-ETL integration to your needs. You can also use AWS CloudFormation to automate the configuration and deployment of resources needed for zero-ETL integrations. <br /> <br /> To learn more about zero-ETL and how to get started, visit the documentation for <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.html">Amazon RDS </a>and <a href="https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.html">Amazon Redshift</a>.</p>

Read article →

Amazon EC2 M8g instances now available in additional regions

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in AWS Europe (Paris), Asia Pacific (Osaka), AWS Canada (Central), and AWS Middle East (Bahrain) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the <a contenteditable="false" href="https://aws.amazon.com/ec2/nitro/" style="cursor: pointer;">AWS Nitro System</a>, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.<br /> <br /> AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).<br /> <br /> To learn more, see <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/m8g/" style="cursor: pointer;">Amazon EC2 M8g Instances</a>. To explore how to migrate your workloads to Graviton-based instances, see <a contenteditable="false" href="https://aws.amazon.com/ec2/graviton/fast-start/" style="cursor: pointer;">AWS Graviton Fast Start program</a> and <a contenteditable="false" href="https://github.com/aws/porting-advisor-for-graviton" style="cursor: pointer;">Porting Advisor for Graviton</a>. To get started, see the <a contenteditable="false" href="https://console.aws.amazon.com/" style="cursor: pointer;">AWS Management Console</a>.&nbsp;</p>
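A minimal boto3 sketch of launching an M8g instance in one of the newly supported Regions follows; the AMI ID is a placeholder and must be an arm64 image.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")  # Europe (Paris)

# Launch a single general-purpose Graviton4 instance.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI for your OS of choice
    InstanceType="m8g.xlarge",
    MinCount=1,
    MaxCount=1,
)
```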

Read article →

AWS Application Load Balancer launches URL and Host Header Rewrite

<p>Amazon Web Services (AWS) announces URL and Host Header rewrite capabilities for Application Load Balancer (ALB). This feature enables customers to modify request URLs and Host Headers using regex-based pattern matching before routing requests to targets.</p> <p>With URL and Host Header rewrites, you can transform URLs using regex patterns (e.g., rewrite "/api/v1/users" to "/users"), standardize URL patterns across different applications, modify Host Headers for internal service routing, remove or add URL path prefixes, and redirect legacy URL structures to new formats. This capability eliminates the need for additional proxy layers and simplifies application architectures. The feature is valuable for microservices deployments where maintaining a single external hostname while routing to different internal services is critical.</p> <p>You can configure URL and Host Header rewrites through the AWS Management Console, AWS CLI, AWS SDKs, and AWS APIs. There are no additional charges for using URL and Host Header rewrites. You pay only for your use of Application Load Balancer based on Application Load Balancer pricing.</p> <p>This feature is now available in all AWS commercial regions.</p> <p>To learn more, visit the <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/rule-transforms.html">ALB Documentation</a>, and the <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-url-and-host-header-rewrite-with-aws-application-load-balancers">AWS Blog post</a> on URL and Host Header rewrites with Application Load Balancer.</p>

Read article →

Announcing AWS for Fluent Bit 3.0.0 based on Fluent Bit 4.1.0

<p>AWS for Fluent Bit announces version 3.0.0, based on Fluent Bit version 4.1.0 and Amazon Linux 2023. Container logging using AWS for Fluent Bit is now more performant and more feature-rich for AWS customers, including those using <a href="https://aws.amazon.com/ecs/" target="_blank">Amazon Elastic Container Services</a> (Amazon ECS) and <a href="https://aws.amazon.com/eks/" target="_blank">Amazon Elastic Kubernetes Service</a> (Amazon EKS).<br /> <br /> AWS for Fluent Bit enables Amazon ECS and Amazon EKS customers to collect, process, and route container logs to destinations including Amazon CloudWatch Logs, Amazon Data Firehose, Amazon Kinesis Data Streams, and Amazon S3 without changing application code. AWS for Fluent Bit 3.0.0 upgrades the Fluent Bit version to 4.1.0, and upgrades the base image to Amazon Linux 2023. These updates deliver access to the latest Fluent Bit features, significant performance improvements, and enhanced security. New features include native OpenTelemetry (OTel) support for ingesting and forwarding OTLP logs, metrics, and traces with AWS SigV4 authentication—eliminating the need for additional sidecars. Performance improvements include faster JSON parsing, processing more logs per vCPU with lower latency. Security enhancements include TLS min version and cipher controls, which enforce your TLS policy on outputs from AWS for Fluent Bit for stronger protocol posture.<br /> <br /> You can use AWS for Fluent Bit 3.0.0 on both ECS and EKS. On ECS, update the FireLens log-router container image in your task definition to the 3.0.0 tag from the Amazon ECR Public Gallery. On EKS, upgrade by either updating the Helm release or setting the DaemonSet image to the 3.0.0 version.<br /> <br /> The <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/firelens-using-fluentbit.html" target="_blank">AWS for Fluent Bit image</a> is available in the <a href="https://gallery.ecr.aws/aws-observability/aws-for-fluent-bit" target="_blank">Amazon ECR Public Gallery</a> and in the Amazon ECR repository. You can also find it on <a href="https://github.com/aws/aws-for-fluent-bit" target="_blank">GitHub</a> for source code and additional guidance.</p>
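For the ECS path, the sketch below registers a task definition revision whose FireLens log router uses the 3.0.0 image tag and routes an application container's logs to CloudWatch Logs. The family name, application image, and log options are placeholders under assumed settings; adjust them to your own task.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-app",  # placeholder family name
    containerDefinitions=[
        {
            # FireLens log router pinned to the AWS for Fluent Bit 3.0.0 image.
            "name": "log_router",
            "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:3.0.0",
            "essential": True,
            "memoryReservation": 64,
            "firelensConfiguration": {"type": "fluentbit"},
        },
        {
            # Application container that ships its stdout/stderr through FireLens.
            "name": "app",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",  # placeholder
            "essential": True,
            "memoryReservation": 256,
            "logConfiguration": {
                "logDriver": "awsfirelens",
                "options": {  # assumed cloudwatch_logs output options; verify in the Fluent Bit docs
                    "Name": "cloudwatch_logs",
                    "region": "us-east-1",
                    "log_group_name": "/ecs/web-app",
                    "log_stream_prefix": "app-",
                    "auto_create_group": "true",
                },
            },
        },
    ],
)
```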

Read article →

Amazon EC2 R8g instances now available in additional regions

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in South America (Sao Paulo), Europe (London), and Asia Pacific (Melbourne) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the <a href="https://aws.amazon.com/ec2/nitro/" target="_blank">AWS Nitro System</a>, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.<br /> <br /> AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).<br /> <br /> To learn more, see <a href="https://aws.amazon.com/ec2/instance-types/r8g/" target="_blank">Amazon EC2 R8g Instances</a>. To explore how to migrate your workloads to Graviton-based instances, see <a href="https://aws.amazon.com/ec2/graviton/fast-start/" target="_blank">AWS Graviton Fast Start program</a> and <a href="https://github.com/aws/porting-advisor-for-graviton" target="_blank">Porting Advisor for Graviton</a>. To get started, see the <a href="https://console.aws.amazon.com/" target="_blank">AWS Management Console</a>.</p>

Read article →

Amazon EC2 C8gn instances are now available in additional regions

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the Asia Pacific (Malaysia), Asia Pacific (Sydney), and Asia Pacific (Thailand) Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards, and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances.<br /> <br /> Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference.<br /> <br /> For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.<br /> <br /> C8gn instances are available in the following AWS Regions: US East (N. Virginia), US West (Oregon, N. California), Europe (Frankfurt, Stockholm), and Asia Pacific (Singapore, Malaysia, Sydney, Thailand).<br /> <br /> To learn more, see <a href="https://aws.amazon.com/ec2/instance-types/c8g/" target="_blank">Amazon C8gn Instances</a>. To begin your Graviton journey, visit the <a href="https://aws.amazon.com/ec2/graviton/level-up-with-graviton/" target="_blank">Level up your compute with AWS Graviton page</a>. To get started, see <a href="https://console.aws.amazon.com/" target="_blank">AWS Management Console</a>, <a href="https://aws.amazon.com/cli/" target="_blank">AWS Command Line Interface (AWS CLI)</a>, and <a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html" target="_blank">AWS SDKs</a>.<br /> </p>

Read article →

AWS Global Accelerator now supports endpoints in two additional AWS Regions

<p>Starting today, <a href="https://aws.amazon.com/global-accelerator/">AWS Global Accelerator</a> supports application endpoints in two additional AWS Regions, Asia Pacific (Thailand) and Asia Pacific (Taipei), expanding the number of <a href="https://docs.aws.amazon.com/global-accelerator/latest/dg/preserve-client-ip-address.regions.html">supported AWS Regions</a> to thirty-three.</p> <p>AWS Global Accelerator is a service designed to improve the availability, security, and performance of your internet-facing applications. By using the congestion-free AWS network, end-user traffic to your applications benefits from increased availability, DDoS protection at the edge, and higher performance relative to the public internet. Global Accelerator provides static IP addresses that act as fixed entry points for your application resources in one or more AWS Regions, such as your Application Load Balancers, Network Load Balancers, Amazon EC2 instances, or Elastic IPs. Global Accelerator continually monitors the health of your application endpoints and offers deterministic failover for multi-Region workloads without any DNS dependencies.</p> <p>To get started, visit the AWS Global Accelerator <a href="https://aws.amazon.com/global-accelerator/">website</a> and review its <a href="https://docs.aws.amazon.com/global-accelerator/index.html">documentation</a>.</p>
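<p>As an illustrative sketch (not part of the announcement), the snippet below shows how an existing accelerator listener could be extended with an endpoint group in one of the newly supported Regions using the AWS SDK for Python (boto3). The listener and load balancer ARNs are placeholders, and the ap-southeast-7 Region code for Asia Pacific (Thailand) is an assumption to verify for your account.</p>
<pre><code>import boto3

# Hypothetical example: add an endpoint group in Asia Pacific (Thailand) to an
# existing AWS Global Accelerator listener. All ARNs below are placeholders.
# The Global Accelerator API is served from us-west-2 regardless of where your
# endpoints live.
client = boto3.client("globalaccelerator", region_name="us-west-2")

response = client.create_endpoint_group(
    ListenerArn="arn:aws:globalaccelerator::123456789012:accelerator/EXAMPLE/listener/EXAMPLE",
    EndpointGroupRegion="ap-southeast-7",  # assumed Region code for Asia Pacific (Thailand)
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:ap-southeast-7:123456789012:loadbalancer/app/my-alb/abc123",
            "Weight": 128,
            "ClientIPPreservationEnabled": True,
        }
    ],
    HealthCheckProtocol="TCP",
)
print(response["EndpointGroup"]["EndpointGroupArn"])
</code></pre>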

Read article →

DeepSeek, OpenAI, and Qwen models available in Amazon Bedrock in additional Regions

<p>Amazon Bedrock is bringing DeepSeek-V3.1, OpenAI open-weight models, and Qwen3 models to more AWS Regions worldwide, expanding access to cutting-edge AI for customers across the globe. This regional expansion enables organizations in more countries and territories to deploy these powerful foundation models locally, ensuring compliance with data residency requirements, reducing network latency, and delivering faster AI-powered experiences to their users.<br /> <br /> DeepSeek-V3.1 and Qwen3 Coder-480B are now available in the US East (Ohio) and Asia Pacific (Jakarta) AWS Regions. OpenAI open-weight models (20B, 120B) and Qwen3 models (32B, 235B, Coder-30B) are now available in the US East (Ohio), Europe (Frankfurt), and Asia Pacific (Jakarta) AWS Regions.<br /> <br /> Check out the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html?trk=ba8b32c9-8088-419f-9258-82e9375ad130&amp;sc_channel=el" target="_blank">full Region list</a> for future updates. To learn more about these models visit the <a href="https://aws.amazon.com/bedrock/model-choice/" target="_blank">Amazon Bedrock product page</a>. To get started, access the <a href="https://console.aws.amazon.com/bedrock/?trk=e61dee65-4ce8-4738-84db-75305c9cd4fe&amp;sc_channel=el" target="_blank">Amazon Bedrock console</a> and view the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html" target="_blank">documentation</a>.</p>
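<p>As a minimal sketch of what this regional expansion enables, the example below calls one of the newly available serverless models from US East (Ohio) with the Bedrock Converse API via boto3. The model identifier is a placeholder assumption; confirm the exact IDs for DeepSeek-V3.1, the OpenAI open-weight models, or Qwen3 in the Amazon Bedrock console before use.</p>
<pre><code>import boto3

# Illustrative sketch: invoke a serverless model from the newly supported
# US East (Ohio) Region. The model ID below is a placeholder to verify in
# the Bedrock console or the models-regions documentation.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-2")

reply = bedrock.converse(
    modelId="openai.gpt-oss-120b-1:0",  # placeholder model ID, verify before use
    messages=[{"role": "user", "content": [{"text": "Summarize zero-ETL in one sentence."}]}],
    inferenceConfig={"maxTokens": 256},
)
print(reply["output"]["message"]["content"][0]["text"])
</code></pre>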

Read article →

Amazon Aurora PostgreSQL zero-ETL integration with Amazon SageMaker is now available

<p>Amazon Aurora PostgreSQL-Compatible Edition now supports zero-ETL integration with <a href="https://aws.amazon.com/sagemaker/lakehouse/" target="_blank">Amazon SageMaker</a>, enabling near real-time data availability for analytics workloads. This integration automatically extracts and loads data from PostgreSQL tables into your lakehouse where it's immediately accessible through various analytics engines and machine learning tools. The data synced into the lakehouse is compatible with Apache Iceberg open standards, enabling you to use your preferred analytics tools and query engines such as SQL, Apache Spark, BI, and AI/ML tools.<br /> <br /> Through a simple no-code interface, you can create and maintain an up-to-date replica of your PostgreSQL data in your lakehouse without impacting production workloads. The integration features comprehensive, fine-grained access controls that are consistently enforced across all analytics tools and engines, ensuring secure data sharing throughout your organization. As a complement to the existing zero-ETL integrations with Amazon Redshift, this solution reduces operational complexity while enabling you to derive immediate insights from your operational data.<br /> <br /> Amazon Aurora PostgreSQL zero-ETL integration with Amazon SageMaker is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (Sao Paulo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Stockholm) AWS Regions.<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/what-is/zero-etl/" target="_blank">What is zero-ETL</a>. To begin using this new integration, visit the zero-ETL documentation for <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.html" target="_blank">Aurora PostgreSQL</a>.</p>

Read article →

Second-generation AWS Outposts racks now supported in the AWS Europe (Ireland) Region

<p>Second-generation AWS Outposts racks are now supported in the AWS Europe (Ireland) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience.<br /> <br /> Organizations from startups to enterprises and the public sector in and outside of Europe can now order their Outposts racks connected to this new supported region, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to.<br /> <br /> To learn more about second-generation Outposts racks, read <a href="https://aws.amazon.com/blogs/aws/announcing-second-generation-aws-outposts-racks-with-breakthrough-performance-and-scalability-on-premises/" target="_blank"><u>this blog post</u></a> and <a href="https://docs.aws.amazon.com/outposts/latest/network-userguide/what-is-outposts.html" target="_blank"><u>user guide</u></a>. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the <a href="https://aws.amazon.com/outposts/rack/faqs/" target="_blank"><u>Outposts rack FAQs page</u></a>.</p>

Read article →

Amazon EC2 C8g instances now available in additional regions

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g instances are available in AWS Europe (Milan), and AWS Asia Pacific (Hong Kong, Osaka, Melbourne) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 C8g instances are built for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving. These instances are built on the <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a>, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.<br /> <br /> AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).<br /> <br /> To learn more, see <a href="https://aws.amazon.com/ec2/instance-types/c8g/">Amazon EC2 C8g Instances</a>. To get started, see the <a href="https://console.aws.amazon.com/">AWS Management Console</a>.</p>

Read article →

AWS SAM CLI adds Finch support, expanding local development tool options for serverless applications

<p><a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/using-sam-cli.html" target="_blank">AWS Serverless Application Model Command Line Interface (SAM CLI)</a> now supports <a href="https://runfinch.com/" target="_blank">Finch</a> as an alternative to Docker for local development and testing of serverless applications. This gives developers greater flexibility in choosing their preferred local development environment when working with SAM CLI to build and test their serverless applications.<br /> <br /> Developers building serverless applications spend significant time in their local development environments. SAM CLI is a command-line tool for local development and testing of serverless applications. It allows you to build, test, debug, and package your serverless applications locally before deploying to AWS Cloud. To provide the local development and testing environment for your applications, SAM CLI uses a tool that can run containers on your local device. Previously, SAM CLI only supported Docker as the tool for running containers locally. Starting today, SAM CLI also supports Finch as a container development tool. Finch is an open-source tool, developed and supported by AWS, for local container development. This means you can now choose between Docker and Finch as your preferred container tool for local development when working with SAM CLI.<br /> <br /> You can use SAM CLI to invoke Lambda functions locally, test API endpoints, and debug your serverless applications with the same experience you would have in the AWS Cloud. With Finch support, SAM CLI now automatically detects and uses Finch as the container development tool when Docker is not available. You can also set Finch as your preferred container tool for SAM CLI. This new feature supports all core SAM CLI commands including sam build, sam local invoke, sam local start-api, and sam local start-lambda.<br /> <br /> To learn more about using SAM CLI with Finch, visit the <a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-finch.html" target="_blank">SAM CLI developer guide</a>.&nbsp;</p>

Read article →

Amazon ECS supports running Firelens as a non-root user

<p><a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Services</a> (Amazon ECS) now allows you to run Firelens containers as a non-root user, by specifying a User ID in your Task Definition.<br /> <br /> Specifying a non-root user with a specific user ID reduces the potential attack footprint by users who may gain access to such software, a security best practice and a compliance requirement by some industries and security services such as the <a href="https://aws.amazon.com/security-hub/">AWS Security Hub</a>. With this release, Amazon ECS allows you to specify a user ID in the "user" field of your Firelens containerDefinition element of your Task Definition, instead of only allowing "user": "0" (root user).<br /> <br /> The new capability is supported in all AWS Regions. See the <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html">documentation for using Firelens</a> for more details on how to set up your Firelens container to run as non-root.&nbsp;</p>

Read article →

Customer managed KMS keys now available for Automated Reasoning checks

<p>AWS announces support for customer managed AWS Key Management Service (KMS) keys in Automated Reasoning checks in Amazon Bedrock Guardrails. This enhancement enables you to use your own encryption keys to protect policy content and tests, giving you full control over key management. Automated Reasoning checks in Amazon Bedrock Guardrails is the first and only generative AI safeguard that helps correct factual errors from hallucinations using logically accurate and verifiable reasoning that explains why responses are correct.<br /> <br /> This feature enables organizations in regulated industries like healthcare, financial services, and government to adopt Automated Reasoning checks while meeting compliance requirements for customer-owned encryption keys. For example, a financial institution can now use Automated Reasoning checks to validate loan processing guidelines while maintaining full control over the encryption keys protecting their policy content. When creating an Automated Reasoning policy, you can now select a customer managed KMS key to encrypt your content rather than using the default key.<br /> <br /> Customer managed KMS key support for Automated Reasoning checks is available in all AWS Regions where Amazon Bedrock Guardrails is offered: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (Paris).<br /> <br /> To get started, see the following resources:<br /> </p> <ul> <li>Automated Reasoning checks <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-automated-reasoning-checks.html" style="cursor: pointer;">user guide</a></li> <li>Amazon Bedrock Guardrails <a href="https://aws.amazon.com/bedrock/guardrails/" style="cursor: pointer;">product page</a></li> <li>AWS Key Management Service <a href="https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-mgn-key" style="cursor: pointer;">developer guide</a></li> <li>Create an Automated Reasoning policy in the <a href="https://console.aws.amazon.com/bedrock/home#/automated-reasoning/policies" style="cursor: pointer;">Bedrock console</a></li> </ul>

Read article →

AWS Step Functions now supports Diagnose with Amazon Q

<p>AWS announces AI-powered troubleshooting capabilities with Amazon Q integration in AWS Step Functions console. <a href="https://aws.amazon.com/step-functions/" target="_blank">AWS Step Functions</a> is a visual workflow service that enables customers to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. This integration brings Amazon Q's intelligent error analysis directly into AWS Step Functions console, helping you quickly identify and resolve workflow issues.<br /> <br /> When errors occur in your AWS Step Functions workflows, you can now click the "Diagnose with Amazon Q" button that appears in error alerts and the console error notification area to receive AI-assisted troubleshooting guidance. This feature helps you resolve common types of issues including state machine execution failures as well as Amazon States Language (ASL) syntax errors and warnings. The troubleshooting recommendations appear in a dedicated window with remediation steps tailored to your error context, enabling faster resolution and improved operational efficiency.<br /> <br /> Diagnose with Amazon Q for AWS Step Functions is available in all commercial AWS Regions where Amazon Q is available. The feature is automatically enabled for customers who have access to Amazon Q in their region.<br /> <br /> To learn more about Diagnose with Amazon Q, see <a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/diagnose-console-errors.html" target="_blank"><u>Diagnosing and troubleshooting console errors with Amazon Q</u></a> or get started by visiting the AWS Step Functions <a href="https://aws.amazon.com/step-functions/" target="_blank">console</a>.</p>

Read article →

AWS Security Hub CSPM now supports CIS AWS Foundations Benchmark v5.0

<p><a href="https://aws.amazon.com/security-hub/cspm/features/">AWS Security Hub Cloud Security Posture Management (CSPM</a>) now supports the Center for Internet Security (CIS) AWS Foundations Benchmark v5.0. This industry-standard benchmark provides security configuration best practices for AWS with clear implementation and assessment procedures. The new standard includes 40 controls that perform automated checks against AWS resources to evaluate compliance with the latest version 5.0 requirements.<br /> <br /> The standard is now available in all AWS Regions where Security Hub CSPM is currently available, including the AWS GovCloud (US) and the China Regions. To quickly enable the standard across your AWS environment, we recommend that you use Security Hub CSPM central configuration. With this approach, you can enable the standard in all or only some of your organization's accounts and across all AWS Regions that are linked to Security Hub CSPM with a single action.<br /> <br /> To learn more, see<a href="https://docs.aws.amazon.com/securityhub/latest/userguide/cis-aws-foundations-benchmark.html"> CIS v5.0</a> in the <i>AWS Security Hub CSPM User Guide</i>. To receive notifications about new Security Hub CSPM features and controls, subscribe to the <a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-announcements.html">Security Hub CSPM SNS topic</a>. You can also try <a href="https://aws.amazon.com/security-hub/pricing/">Security Hub at no cost for 30 days</a> with the AWS Free Tier offering.</p>

Read article →

Amazon Timestream now supports InfluxDB 3

<p>Amazon Timestream for InfluxDB now offers support for InfluxDB 3. Now application developers and DevOps teams can run InfluxDB 3 databases as a managed service. InfluxDB 3 uses a new architecture for the InfluxDB database engine, built on Apache Arrow for in-memory data processing, Apache DataFusion for query execution, and the columnar Parquet storage format with data persistence in Amazon S3, delivering fast performance on high-cardinality data and large-scale analytical workloads.<br /> <br /> With Amazon Timestream for InfluxDB 3, customers can leverage improved query performance and resource utilization for data-intensive use cases while benefiting from virtually unlimited storage capacity through S3-based object storage. The service is available in two editions: Core, the open source version of InfluxDB 3, for near real-time workloads focused on recent data, and Enterprise for production workloads requiring high availability, multi-node deployments, and essential compaction capabilities for long-term storage. The Enterprise edition supports multi-node cluster configurations with up to 3 nodes initially, providing enhanced availability, improved performance for concurrent queries, and greater system resilience.<br /> <br /> Amazon Timestream for InfluxDB 3 is available in all Regions where Timestream for InfluxDB is available. See <a href="https://docs.aws.amazon.com/general/latest/gr/timestream.html" target="_blank">here</a> for a full listing of our Regions.<br /> <br /> To get started with Amazon Timestream for InfluxDB 3, visit the <a href="https://console.aws.amazon.com/timestream" target="_blank">Amazon Timestream for InfluxDB console</a>. For more information, see the <a href="https://docs.aws.amazon.com/timestream/" target="_blank">Amazon Timestream for InfluxDB documentation</a> and <a href="https://aws.amazon.com/timestream/pricing/" target="_blank">pricing page</a>.</p>

Read article →

Amazon EC2 now supports Optimize CPUs for license-included instances

<p>Amazon EC2 now allows customers to modify an instance’s CPU options to optimize the licensing costs of Microsoft Windows license-included workloads. You can now customize the number of vCPUs and/or disable hyperthreading on Windows Server and SQL Server license-included instances to save on vCPU-based licensing costs.<br /> <br /> This enhancement is particularly valuable for database workloads like Microsoft SQL Server that require high memory and IOPS but lower vCPU counts. By modifying CPU options, you can reduce vCPU-based licensing costs while maintaining memory and IOPS performance, achieve higher memory-to-vCPU ratios, and customize CPU settings to match your specific workload requirements. For example, on an r7i.8xlarge instance running Windows and SQL Server license included, you can turn off hyperthreading to reduce the default 32 vCPU count to 16, saving 50% on the licensing costs, while still getting the 256 GiB memory and 40,000 IOPS that come with the instance.<br /> <br /> This feature is available in all commercial AWS Regions and the AWS GovCloud (US) Regions.<br /> <br /> To learn more, see <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html" target="_blank">CPU options</a> in the Amazon EC2 User Guide and read this <a href="https://aws.amazon.com/blogs/modernizing-with-aws/optimize-cpus-best-practices-for-sql-server-workloads-continued/" target="_blank">blog post</a>.</p>
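<p>The r7i.8xlarge arithmetic in the example above can be expressed directly as CPU options. The sketch below is illustrative only, using launch-time CPU options with placeholder AMI and subnet IDs; the announcement itself covers modifying these settings on existing license-included instances, for which the linked CPU options documentation has the exact procedure.</p>
<pre><code>import boto3

# Illustrative sketch of the vCPU arithmetic described above:
# an r7i.8xlarge normally exposes 32 vCPUs (16 cores x 2 threads per core).
# Requesting 16 cores with 1 thread per core disables hyperthreading, halving
# the vCPU count that per-vCPU licensing is based on while keeping the
# instance's full memory and EBS performance. AMI and subnet are placeholders.
ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder Windows/SQL Server license-included AMI
    InstanceType="r7i.8xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",   # placeholder
    CpuOptions={"CoreCount": 16, "ThreadsPerCore": 1},  # 16 vCPUs instead of the default 32
)
</code></pre>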

Read article →

Amazon Bedrock simplifies access with automatic enablement of serverless foundation models

<p>Amazon Bedrock now provides immediate access to all serverless foundation models by default for users in all commercial AWS regions. This update eliminates the need for manually activating model access, allowing you to instantly start using these models through the Amazon Bedrock console playground, AWS SDK, and Amazon Bedrock features including Agents, Flows, Guardrails, Knowledge Bases, Prompt Management, and Evaluations.<br /> <br /> While you can quickly begin using serverless foundation models from most providers, Anthropic models, although enabled by default, still require you to submit a one-time usage form before first use. You can complete this form either through the API or through the Amazon Bedrock console by selecting an Anthropic model from the playground. When completed through the AWS organization management account, the form submission automatically enables Anthropic models across all member accounts in the organization.<br /> <br /> This simplified access is available across all commercial <a href="https://docs.aws.amazon.com/general/latest/gr/bedrock.html" target="_blank">AWS regions</a> where Amazon Bedrock is supported. Account administrators retain full control over model access through <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html" target="_blank">IAM policies</a> and <a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html" target="_blank">Service Control Policies (SCPs) </a>to restrict access as needed. For implementation guidance and examples on access controls, please refer to our <a href="https://aws.amazon.com/blogs/security/simplified-amazon-bedrock-model-access/" target="_blank">blog</a>.</p>
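<p>As a hedged illustration of the access controls mentioned above, the sketch below creates a Service Control Policy that denies invocation of one serverless model across an organization; the model identifier in the ARN is a placeholder, and the statement shape should be checked against the linked blog before use.</p>
<pre><code>import json
import boto3

# Hypothetical sketch: block a specific Bedrock serverless foundation model
# org-wide now that models are enabled by default. The model ID is a placeholder.
deny_model_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
            "Resource": "arn:aws:bedrock:*::foundation-model/example.model-id-v1:0",
        }
    ],
}

orgs = boto3.client("organizations")
orgs.create_policy(
    Name="DenyExampleBedrockModel",
    Description="Block a specific Bedrock serverless model org-wide (illustrative)",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_model_policy),
)
</code></pre>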

Read article →

Amazon OpenSearch Service now supports Graviton4 based (c8g,m8g,r8g and r8gd) instances

<p><span style="background-color: rgb(255,255,255);">Amazon OpenSearch Service now supports latest generation Graviton4-based Amazon EC2 instance families. These new instance types are compute optimized (C8g), general purpose (M8g), and memory optimized (R8g, R8gd) instances. </span><br /> <br /> <span style="background-color: rgb(255,255,255);">AWS Graviton4 processors provide up to 30% better performance than AWS Graviton3 processors with c8g, m8g and r8g &amp; r8gd offering the best price performance for compute-intensive, general purpose, and memory-intensive workloads respectively. To learn more about Graviton4 improvements, please see the <a></a><a href="https://aws.amazon.com/blogs/aws/aws-graviton4-based-amazon-ec2-r8g-instances-best-price-performance-in-amazon-ec2/">blog</a> on r8g instances and the <a></a><a href="https://aws.amazon.com/blogs/aws/run-your-compute-intensive-and-general-purpose-workloads-sustainably-with-the-new-amazon-ec2-c8g-m8g-instances/">blog</a> on c8g &amp; m8g instances.<br /> <br /> Amazon OpenSearch Service Graviton4 instances are supported on all OpenSearch versions, and Elasticsearch (open source) versions 7.9 and 7.10. </span><br /> <br /> <span style="background-color: rgb(255,255,255);">One or more than one Graviton4 instance types are now available on Amazon OpenSearch Service across 23 regions globally: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Mumbai), Asia Pacific (Malaysia), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Thailand), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Spain), Europe (Stockholm), South America(Sao Paulo) and AWS GovCloud (US-West).</span><br /> <br /> <span style="background-color: rgb(255,255,255);">For region specific availability &amp; pricing, visit our <a></a><a href="https://aws.amazon.com/opensearch-service/pricing/">pricing page</a>. To learn more about Amazon OpenSearch Service and its capabilities, visit our <a href="https://aws.amazon.com/opensearch-service/"></a><a></a><a href="https://aws.amazon.com/opensearch-service/">product page</a>.</span><br /> </p> <p>&nbsp;</p>

Read article →

Announcing Amazon EC2 Capacity Manager

<p>Today, AWS is announcing the general availability of Amazon EC2 Capacity Manager, a new capability that enables customers to monitor, analyze, and manage EC2 capacity across all of their accounts and regions. This new capability simplifies resource management using a single interface.<br /> <br /> EC2 Capacity Manager offers customers a comprehensive view of On-Demand, Spot, and Capacity Reservation usage across their accounts and Regions. The new service features dashboards and charts that present high-level insights while allowing customers to drill down into specific details where needed. These details include historical usage trends to help customers gain a better understanding of their capacity patterns over time, as well as optimization opportunities to guide informed capacity decisions, complete with workflows for implementing these insights. In addition to the updated user interface and APIs, EC2 Capacity Manager allows customers to export data, enabling integration with their existing systems.<br /> <br /> EC2 Capacity Manager is available in all commercial <a href="https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html" target="_blank">AWS Regions enabled by default </a>at no additional cost.<br /> <br /> To learn more, visit the<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-manager.html" target="_blank"> EC2 Capacity Manager user guide</a>, read the <a href="https://aws.amazon.com/blogs/aws/monitor-analyze-and-manage-capacity-usage-from-a-single-interface-with-amazon-ec2-capacity-manager/" target="_blank">AWS News Blog</a>, or get started using <a href="https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#CapacityManagerHome" target="_blank">EC2 Capacity Manager in the AWS console</a>.&nbsp;</p>

Read article →

Claude Haiku 4.5 by Anthropic now in Amazon Bedrock

<p>Claude Haiku 4.5 is now available in Amazon Bedrock. Claude Haiku 4.5 delivers near-frontier performance matching Claude Sonnet 4's capabilities in coding, computer use, and agent tasks at substantially lower cost and faster speeds, making state-of-the-art AI accessible for scaled deployments and budget-conscious applications.<br /> <br /> The model's enhanced speed makes it ideal for latency-sensitive applications like real-time customer service agents and chatbots where response time is critical. For computer use tasks, Haiku 4.5 delivers significant performance improvements over previous models, enabling faster and more responsive applications. This model supports vision and unlocks new use cases where customers previously had to choose between performance and cost. It enables economically viable agent experiences, supports multi-agent systems for complex coding projects, and powers large-scale financial analysis and research applications. Haiku 4.5 maintains Claude's unique character while delivering the performance and efficiency needed for production deployments.<br /> <br /> Claude Haiku 4.5 is now available in Amazon Bedrock via global cross region inference in multiple locations. To view the full list of available regions, refer to the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html" target="_blank">documentation</a>. To get started with Haiku 4.5 in Amazon Bedrock visit the <a href="https://console.aws.amazon.com/bedrock/" target="_blank">Amazon Bedrock console</a>, Anthropic's Claude in Amazon Bedrock <a href="https://aws.amazon.com/bedrock/claude/" target="_blank">product page</a>, and the Amazon Bedrock <a href="https://aws.amazon.com/bedrock/pricing/" target="_blank">pricing page</a>.</p>

Read article →

AWS Backup enhances backup plan management with schedule preview

<p>AWS Backup now provides schedule preview for backup plans, helping you validate when your backups are scheduled to run. Schedule preview shows the next ten scheduled backup runs, including when continuous backup, indexing, or copy settings take effect.</p> <p>Backup plan schedule preview consolidates all backup rules into a single timeline, showing how they work together. You can see when each backup occurs across all backup rules, along with settings like lifecycle to cold storage, point-in-time recovery, and indexing. This unified view helps you quickly identify and resolve conflicts or gaps between your backup strategy and actual configuration.</p> <p>Backup plan schedule preview is available in all AWS Regions where <a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html">AWS Backup is available</a>. You can start using this feature automatically from the <a href="http://console.aws.amazon.com/backup">AWS Backup console</a>, API, or CLI without any additional settings. For more information, visit our documentation.</p>

Read article →

AWS Marketplace now supports purchase order line numbers

<p>AWS Marketplace now supports purchase order line numbers for AWS Marketplace purchases, simplifying cost-allocation and payment processing. This launch makes it easier for customers to process and pay invoices.<br /> <br /> AWS purchase order support allows customers to provide purchase orders per transaction, which reflect on invoices related to that purchase. Now, customers can associate transaction charges not only to purchase orders, but also to a specific PO line number for AWS Marketplace purchases. This capability is supported during procurement and, for future charges, post-procurement in the AWS Marketplace console. You can also view the purchase order and purchase order line number associated to an AWS invoice in the AWS Billing and Cost Management console. Streamline your invoice processing by accurately matching AWS invoices with your purchase order and purchase order line number.<br /> <br /> This capability is available today in all <a href="https://docs.aws.amazon.com/marketplace/latest/buyerguide/supported-regions.html">AWS Regions</a>&nbsp;where AWS Marketplace is supported.<br /> <br /> To learn about transaction purchase orders for AWS Marketplace, view the <a href="https://docs.aws.amazon.com/marketplace/latest/buyerguide/buyer-purchase-orders.html">AWS Marketplace buyer guide</a>. For information on using blanket purchase orders with AWS, refer to the <a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-purchaseorders.html">AWS Billing Documentation</a>.<br /> </p>

Read article →

Amazon WorkSpaces Core Managed Instances is now available in 5 additional AWS Regions

<p>AWS today announced Amazon WorkSpaces Core Managed Instances availability in US East (Ohio), Asia Pacific (Malaysia), Asia Pacific (Hong Kong), Middle East (UAE), and Europe (Spain), bringing Amazon WorkSpaces capabilities to these AWS Regions for the first time. WorkSpaces Core Managed Instances in these Regions is supported by partners including Citrix, Workspot, Leostream, and Dizzion.<br /> <br /> Amazon WorkSpaces Core Managed Instances simplifies virtual desktop infrastructure (VDI) migrations with highly customizable instance configurations. WorkSpaces Core Managed Instances provisions resources in your AWS account, handling infrastructure lifecycle management for both persistent and non-persistent workloads. Managed Instances provide flexibility for organizations requiring specific compute, memory, or graphics configurations.<br /> <br /> With WorkSpaces Core Managed Instances, you can use existing discounts, Savings Plans, and other features like On-Demand Capacity Reservations (ODCRs), with the operational simplicity of WorkSpaces - all within the security and governance boundaries of your AWS account. This solution is ideal for organizations migrating from on-premises VDI environments or existing AWS customers seeking enhanced cost optimization without sacrificing control over their infrastructure configurations. You can use a broad selection of instance types, including accelerated graphics instances, while your Core partner solution handles desktop and application provisioning and session management through familiar administrative tools.<br /> <br /> Customers will incur standard compute costs along with an hourly fee for WorkSpaces Core. See the WorkSpaces Core <a href="https://aws.amazon.com/workspaces-family/core/pricing/" target="_blank">pricing page</a> for more information.<br /> <br /> To learn more about Amazon WorkSpaces Core Managed Instances, visit the <a href="https://aws.amazon.com/workspaces/core" target="_blank">product page</a>. For technical documentation and getting started guides, see the <a href="https://docs.aws.amazon.com/workspaces-core/" target="_blank">Amazon WorkSpaces Core Documentation</a>.</p>

Read article →

AWS Systems Manager Patch Manager launches security updates notification for Windows

<p>AWS Systems Manager announces the launch of security updates notification for Windows patching compliance, which helps customers identify security updates that are available but not approved by their patch baseline configuration. This feature introduces a new patch state called "AvailableSecurityUpdate" that reports security patches of all severity levels that are available to install on Windows instances but do not meet the approval rules in your patch baseline.<br /> <br /> As organizations grow, administrators need to maintain secure systems while controlling when patches are applied. The security updates notification helps prevent situations where customers could unintentionally leave instances unpatched when using features like ApprovalDelay with large values. By default, instances with available security updates are marked as Non-Compliant, providing a clear signal that security patches require attention. Customers can also configure this behavior through their patch baseline settings to maintain existing compliance reporting if preferred.<br /> <br /> This feature is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/" style="cursor: pointer;">AWS Regions</a> where AWS Systems Manager is available. To get started with security updates notification for Windows patching compliance, visit the AWS Systems Manager<a href="https://console.aws.amazon.com/systems-manager/patch-manager" style="cursor: pointer;"> Patch Manager console</a>. For more information about this feature, refer to our <a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-states.html" style="cursor: pointer;">user documentation</a> or update your patch baseline with the <a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-predefined-and-custom-patch-baselines.html#patch-manager-baselines-custom" style="cursor: pointer;">details here</a>. There are no additional charges for using this feature beyond standard AWS Systems Manager pricing.</p>

Read article →

Amazon DocumentDB (with MongoDB compatibility) now supports Internet Protocol Version 6 (IPv6)

<p>Amazon DocumentDB&nbsp;(with MongoDB compatibility) now offers customers the option to use Internet Protocol version 6 (IPv6) addresses on new and existing clusters. Customers moving to IPv6 can simplify their network stack by running their databases on a dual-stack network that supports both IPv4 and IPv6.<br /> <br /> IPv6 increases the number of available addresses and customers no longer need to manage overlapping IPv4 address spaces in their VPCs (Virtual Private Cloud). Customers can standardize their applications on the new version of Internet Protocol by moving to dual-stack mode (supporting both IPv4 and IPv6) with a few clicks in the&nbsp;AWS Management Console or directly using the AWS CLI.<br /> <br /> Amazon DocumentDB&nbsp;(with MongoDB compatibility) is a fully managed, native JSON database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. Amazon DocumentDB support for IPv6 is&nbsp;generally available on version 4.0 and 5.0 in AWS Regions listed in&nbsp;<a contenteditable="false" href="https://docs.aws.amazon.com/documentdb/latest/developerguide/vpc-clusters.html#dual-stack-availability" style="cursor: pointer;" target="_blank">Dual-stack mode Region and version availability</a>. To learn more about configuring your environment for IPv6, please refer to <a contenteditable="false" href="https://docs.aws.amazon.com/documentdb/latest/developerguide/vpc-docdb.html" style="cursor: pointer;" target="_blank">Amazon VPC and Amazon DocumentDB</a>.</p>

Read article →

Amazon CloudWatch Database Insights now provides on-demand analysis for RDS for SQL Server

<p>Amazon CloudWatch Database Insights expands the availability of its on-demand analysis experience to the RDS for SQL Server database engine. CloudWatch Database Insights is a monitoring and diagnostics solution that helps database administrators and developers optimize database performance by providing comprehensive visibility into database metrics, query analysis, and resource utilization patterns. This feature leverages machine learning models to help identify performance bottlenecks during the selected time period, and gives advice on what to do next.<br /> <br /> Previously, database administrators had to manually analyze performance data, correlate metrics, and investigate root cause. This process is time-consuming and requires deep database expertise. With this launch, you can now analyze database performance monitoring data for any time period with automated intelligence. The feature automatically compares your selected time period against normal baseline performance, identifies anomalies, and provides specific remediation advice. Through intuitive visualizations and clear explanations, you can quickly identify performance issues and receive step-by-step guidance for resolution. This automated analysis and recommendation system reduces mean-time-to-diagnosis from hours to minutes.<br /> <br /> You can get started with this feature by enabling the Advanced mode of CloudWatch Database Insights on your RDS for SQL Server databases using the RDS service console, AWS APIs, the AWS SDK, or AWS CloudFormation. Please refer to <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.Overview.Engines.html" style="cursor: pointer;">RDS documentation</a> and <a contenteditable="false" href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_PerfInsights.Overview.Engines.html#USER_PerfInsights.Overview.PIfeatureEngnRegSupport" style="cursor: pointer;">Aurora documentation</a> for information regarding the availability of Database Insights across different regions, engines and instance classes.&nbsp;</p>

Read article →

CloudWatch Database Insights now supports tag based access control

<p>Amazon CloudWatch Database Insights now supports tag-based access control for database and per-query metrics powered by RDS Performance Insights. You can implement access controls across a logical grouping of database resources without managing individual resource-level permissions.<br /> <br /> Previously, tags defined on RDS and Aurora instances did not apply to metrics powered by Performance Insights, creating significant overhead in manually configuring metric-related permissions at the database resource level. With this launch, those instance tags are now automatically evaluated to authorize metrics powered by Performance Insights. This allows you to define IAM policies using tag-based access conditions, resulting in improved governance and security consistency.<br /> <br /> Please refer to <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.access-control.html" target="_blank">RDS</a> and <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_PerfInsights.access-control.html" target="_blank">Aurora</a> documentation to get started with defining IAM policies with tag-based access control on database and per-query metrics. This feature is available in all AWS regions where CloudWatch Database Insights is available.<br /> <br /> CloudWatch Database Insights delivers database health monitoring aggregated at the fleet level, as well as instance-level dashboards for detailed database and SQL query analysis. It offers vCPU-based pricing – see the <a href="https://aws.amazon.com/cloudwatch/pricing/" target="_blank">pricing page</a> for details. For further information, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Database-Insights.html" target="_blank">Database Insights User Guide</a>.</p>
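<p>A minimal sketch of such a tag-scoped policy, assuming a hypothetical Team=analytics tag and the Performance Insights read actions; confirm the exact action list and condition keys against the RDS and Aurora access-control documentation linked above.</p>
<pre><code>import json

# Illustrative IAM policy: allow Performance Insights metric reads only for
# database instances tagged Team=analytics. The tag key and value are
# placeholders; verify the action list in the linked documentation.
tag_scoped_pi_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "pi:GetResourceMetrics",
                "pi:DescribeDimensionKeys",
                "pi:GetDimensionKeyDetails",
            ],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:ResourceTag/Team": "analytics"}},
        }
    ],
}
print(json.dumps(tag_scoped_pi_policy, indent=2))
</code></pre>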

Read article →

Amazon ECS now publishes AWS CloudTrail data events for insight into API activities

<p><a href="https://aws.amazon.com/ecs/" target="_blank">Amazon Elastic Container Service</a> (Amazon ECS) now supports AWS CloudTrail data events, providing detailed visibility into Amazon ECS Agent API activities. This new capability enables customers to monitor, audit, and troubleshoot container instance operations.<br /> <br /> With CloudTrail data event support, security and operations teams can now maintain comprehensive audit trails of ECS Agent API activities, detect unusual access patterns, and troubleshoot agent communication issues more effectively. Customers can opt in to receive detailed logging through the new data event resource type AWS::ECS::ContainerInstance for ECS agent activities, including when the ECS agent polls for work (ecs:Poll), starts telemetry sessions (ecs:StartTelemetrySession), and submits <a href="https://aws.amazon.com/ecs/managed-instances/" target="_blank">ECS Managed Instances</a> logs (ecs:PutSystemLogEvents). This enhanced visibility enables teams to better understand how container instance roles are utilized, meet compliance requirements for API activity monitoring, and quickly diagnose operational issues related to agent communications.<br /> <br /> This new feature is available for Amazon ECS on EC2 in all AWS Regions and ECS Managed Instances in <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ManagedInstances.html">select regions</a>. Standard CloudTrail data event charges apply. To learn more, visit the <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/logging-using-cloudtrail.html" target="_blank">Developer Guide</a>.</p>

Read article →

AWS Parallel Computing Service (PCS) now supports Slurm v25.05

<p>AWS Parallel Computing Service (PCS) now supports Slurm v25.05. You can now create AWS PCS clusters running the newer Slurm v25.05.<br /> <br /> The release of Slurm v25.05 in PCS provides new Slurm functionalities including enhanced multi-cluster sackd configuration and improved requeue behavior for instance launch failures. With this release, login nodes can now control multiple clusters without requiring sackd reconfiguration or restart. This enables administrators to pre-configure access to multiple clusters for their users. The new requeue behavior enables more resilient job scheduling by automatically retrying failed instance launches during capacity shortages, thus increasing overall cluster reliability.<br /> <br /> AWS PCS is a managed service that makes it easier for you to run and scale your high performance computing (HPC) workloads on AWS using Slurm. To learn more about PCS, refer to the <a href="https://docs.aws.amazon.com/pcs/latest/userguide/what-is-service.html">service documentation</a> and <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Region Table</a>.</p>

Read article →

Amazon Connect now supports automated follow-up evaluations triggered by initial evaluation results

<p>Amazon Connect can now automatically initiate follow-up evaluations to analyze specific situations identified during initial evaluations. For example, when an initial customer service evaluation detects customer interest in a product, Amazon Connect can automatically trigger a follow-up evaluation focused on the agent's sales performance. This enables managers to maintain consistent evaluation standards across agent cohorts and over time, while capturing deeper insights on specific scenarios such as sales opportunities, escalations, and other critical interaction moments.<br /> <br /> This feature is available in all regions where <a contenteditable="false" href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region" style="cursor: pointer;">Amazon Connect</a> is offered. To learn more, please visit our <a contenteditable="false" href="https://docs.aws.amazon.com/connect/latest/adminguide/contact-lens-rules-submit-automated-evaluation.html#auto-eval-prereq-2" style="cursor: pointer;">documentation</a> and our <a contenteditable="false" href="https://aws.amazon.com/connect/contact-lens/" style="cursor: pointer;">webpage</a>.&nbsp;</p>

Read article →

Amazon Nova now supports the customization of content moderation settings

<p>Amazon Nova models now support the customization of content moderation settings for approved business use cases that require processing or generating sensitive content.<br /> <br /> Organizations with approved business use cases can adjust content moderation settings across four domains: safety, sensitive content, fairness, and security. These controls let customers tune the specific settings relevant to their business requirements. Amazon Nova enforces essential, non-configurable controls to ensure responsible use of AI, such as controls to prevent harm to children and preserve privacy.<br /> <br /> Customization of content moderation settings is available for Amazon Nova Lite and Amazon Nova Pro in the US East (N. Virginia) Region.<br /> <br /> To learn more about Amazon Nova, visit the <a href="https://aws.amazon.com/ai/generative-ai/nova/">Amazon Nova product page</a>; to learn about Amazon Nova responsible use of AI, visit the <a href="https://aws.amazon.com/ai/responsible-ai/resources/">AWS AI Service Cards</a> or see the <a href="https://docs.aws.amazon.com/nova/latest/userguide/customizable-content-moderation.html" target="_blank">User Guide</a>. To find out whether your business use case is eligible for customized content moderation settings, contact your AWS Account Manager.</p>

Read article →

Amazon U7i instances now available in Europe (London) Region

<p>Starting today, Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are available in the Europe (London) Region. U7i-6tb instances are part of the AWS 7th generation of Amazon EC2 instances and are powered by custom fourth-generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-6tb instances offer 6TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-6tb instances offer 448 vCPUs, support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/">High Memory instances page</a>.</p>

Read article →

Amazon Corretto October 2025 Quarterly Updates

<p>On October 21, 2025 Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) versions of OpenJDK. Corretto 25.0.1, 21.0.9, 17.0.17, 11.0.29, 8u472 are now available for <a href="https://aws.amazon.com/corretto/" target="_blank">download</a>. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. <br /> <br /> This release of Corretto JDK binaries for Generic Linux, Alpine and macOS will include <a href="https://github.com/async-profiler/async-profiler" target="_blank">Async-Profiler</a>, a low overhead sampling profiler for Java supported by the Amazon Corretto team. Async-Profiler is designed to provide profiling data for CPU time, allocations in Java Heap, native memory allocations and leaks, contended locks, hardware and software performance counters like cache misses, page faults, context switches, Java method profiling, and much more.<br /> <br /> Click on the Corretto <a href="https://aws.amazon.com/corretto" target="_blank">home page</a> to download Corretto 25, Corretto 21, Corretto 17, Corretto 11, or Corretto 8. You can also get the updates on your Linux system by configuring a <a href="https://docs.aws.amazon.com/corretto/latest/corretto-21-ug/generic-linux-install.html" target="_blank">Corretto Apt, Yum, or Apk repo</a>.<br /> <br /> Feedback is <a href="https://github.com/corretto" target="_blank">welcomed</a>!</p>

Read article →

Amazon MQ is now available in AWS Asia Pacific (New Zealand) Region

<p>Amazon MQ is now available in the AWS Asia Pacific (New Zealand) Region with three Availability Zones and API name ap-southeast-6. With this launch, Amazon MQ is now available in a total of 38 regions.<br /> <br /> Amazon MQ is a managed message broker service for open-source Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite code.<br /> <br /> For more information, please visit the <a contenteditable="false" href="https://aws.amazon.com/amazon-mq/" style="cursor: pointer;">Amazon MQ product page</a>, and see the <a contenteditable="false" href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" style="cursor: pointer;">AWS Region Table</a> for complete regional availability.</p>

Read article →

Amazon Bedrock Data Automation supports additional formats for video and faster processing for images

<p>Amazon Bedrock Data Automation (BDA) now supports AVI, MKV, and WEBM file formats along with the AV1 and MPEG-4 Visual (Part 2) codecs, enabling you to generate structured insights across a broader range of video content. Additionally, BDA delivers up to 50% faster image processing.<br /> <br /> BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With support for AVI, MKV, and WEBM formats, you can now analyze content from archival footage, high-quality video archives with multiple audio tracks and subtitles, and web-based and open-source video content. This expanded video format and codec support enables you to process video content directly in the formats your organization uses, streamlining your workflows and accelerating time-to-insight. With faster image processing on BDA, you can extract insights from visual content faster than ever before. You can now analyze larger volumes of images in less time, helping you scale your AI applications and deliver value to your customers more quickly.<br /> <br /> Amazon Bedrock Data Automation is available in eight AWS Regions: Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), Asia Pacific (Sydney), US West (Oregon), US East (N. Virginia), and AWS GovCloud (US-West).<br /> <br /> To learn more, see the <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/bda.html">Bedrock Data Automation User Guide</a> and the <a href="https://aws.amazon.com/bedrock/pricing/">Amazon Bedrock Pricing</a> page. To get started with using Bedrock Data Automation, visit the <a href="https://us-west-2.console.aws.amazon.com/bedrock/home?region=us-west-2#overview">Amazon Bedrock console</a>.</p>

Read article →

Amazon SES adds IP observability for Dedicated IP addresses (managed)

<p>Today, Amazon <a href="https://aws.amazon.com/ses/" target="_blank">Simple Email Service</a> (SES) added visibility into the IP addresses used by Dedicated IP Addresses - Managed (DIP-M) pools. Customers can now find out the exact addresses in use when sending emails through DIP-M pools to mailbox providers. Customers can also see Microsoft Smart Network Data Services (SNDS) metrics for these IP addresses, giving them more insight into their sending reputation with Microsoft mailbox providers. This gives customers more transparency into the IP activities in DIP-M pools.<br /> <br /> Previously, customers could configure DIP-M pools to perform automatic IP allocation and warm-up in response to changes in email sending volumes. This reduced the operational overhead of managing dedicated sending channels, but customers could not easily see which IP addresses were in use by DIP-M pools. This also made it difficult to find SNDS feedback, which customers use to improve their reputation. Now, customers can see the IPs in DIP-M pools through the console, CLI, or SES API. SES also automatically creates CloudWatch Metrics for SNDS information on each IP address, which customers can access through the CloudWatch console or APIs. This gives customers more tools to monitor their sending reputation.<br /> <br /> SES supports DIP-M IP observability in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a> where SES is available.<br /> <br /> For more information, see the documentation for information about <a href="https://docs.aws.amazon.com/ses/latest/dg/managed-dedicated-sending.html" target="_blank">DIP-M pools</a>.</p>

Read article →

Amazon EC2 C7i-flex instances are now available in the Asia Pacific (Jakarta) Region

<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex instances that deliver up to 19% better price performance compared to C6i instances, are available in the Asia Pacific (Jakarta) Region. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price performance benefits for a majority of compute intensive workloads. The new instances are powered by the 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids) that are available only on AWS, and offer 5% lower prices compared to C7i.<br /> <br /> C7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, and Elasticsearch, and more. For compute-intensive workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage, you can leverage C7i instances.<br /> <br /> To learn more, visit <a contenteditable="false" href="https://aws.amazon.com/ec2/instance-types/c7i/" style="cursor: pointer;">Amazon EC2 C7i-flex instances</a>. To get started, see the <a contenteditable="false" href="https://console.aws.amazon.com/" style="cursor: pointer;">AWS Management Console</a>.</p>

Read article →

AWS Parallel Computing Service (PCS) now supports rotation of cluster secret keys

<p>AWS Parallel Computing Service (PCS) now supports rotation of cluster secret keys using AWS Secrets Manager, enabling you to update the secure credentials used for authentication between Slurm controller and compute nodes without creating a new cluster. Regularly rotating your Slurm cluster secret keys strengthens your security posture by reducing the risk of credential compromise and ensuring compliance with best practices. This helps keep your HPC workloads and accounting data safe from unauthorized access.<br /> <br /> PCS is a managed service that makes it easier to run and scale high performance computing (HPC) workloads on AWS using Slurm. With the support of cluster secret rotation in PCS, you can strengthen your security controls and maintain operational efficiency. You can now implement secret rotation as part of your security best practices while maintaining cluster continuity.<br /> <br /> This feature is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions where PCS is available</a>. You can rotate cluster secrets using either the AWS Secrets Manager console or API after preparing your cluster for the rotation process. Read more about PCS support for cluster secret rotation in the <a href="https://docs.aws.amazon.com/pcs/latest/userguide/cluster-secret-rotation.html" target="_blank">PCS User Guide</a>.</p>
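As a rough sketch of the API path, a rotation can be started through the Secrets Manager API once the cluster has been prepared as described in the PCS User Guide; the secret ARN below is a placeholder, and this assumes rotation has been configured for the secret per that guide:

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

# Placeholder ARN for the secret backing the PCS cluster's Slurm key.
secret_arn = "arn:aws:secretsmanager:us-east-1:111122223333:secret:pcs-cluster-secret-AbCdEf"

# Trigger an immediate rotation of the cluster secret (assumes rotation is
# already configured for this secret, as described in the PCS User Guide).
secretsmanager.rotate_secret(SecretId=secret_arn, RotateImmediately=True)
```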

Read article →

AWS announces Nitro Enclaves are now available in all AWS Regions

<p><a href="https://aws.amazon.com/ec2/nitro/nitro-enclaves/">AWS Nitro Enclaves</a> is an Amazon EC2 capability that enables customers to create isolated compute environments (enclaves) to further protect and securely process highly sensitive data within their EC2 instances. Nitro Enclaves helps customers reduce the attack surface area for their most sensitive data processing applications.<br /> <br /> There is no additional cost other than the cost for the using Amazon EC2 instances and any other AWS services that are used with Nitro Enclaves.<br /> <br /> Nitro Enclaves is now available across all AWS Regions, expanding to include new regions in Asia Pacific (New Zealand, Thailand, Jakarta, Hyderabad, Malaysia, Melbourne, and Taipei), Europe (Spain and Zurich), Middle East (UAE and Tel Aviv), and North America (Central Mexico and Calgary).<br /> <br /> To learn more about AWS Nitro Enclaves and how to get started, visit the <a href="https://aws.amazon.com/ec2/nitro/nitro-enclaves/">AWS Nitro Enclaves page</a>. </p>

Read article →

Amazon S3 Metadata is now available in three additional AWS Regions

<p>Amazon S3 Metadata is now available in three additional AWS Regions: Europe (Frankfurt), Europe (Ireland), and Asia Pacific (Tokyo).<br /> <br /> Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like size and source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating. S3 Metadata automatically populates metadata for both new and existing objects, providing you with a comprehensive, queryable view of your data.<br /> <br /> With this expansion, S3 Metadata is now generally available in six <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/metadata-tables-restrictions.html#metadata-tables-regions">AWS Regions</a>. For pricing details, visit the <a href="https://aws.amazon.com/s3/pricing/">S3 pricing page</a>. To learn more, visit the <a href="https://aws.amazon.com/s3/features/metadata/">product page</a>, <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/metadata-tables-overview.html">documentation</a>, and <a href="https://aws.amazon.com/blogs/storage/analyzing-amazon-s3-metadata-with-amazon-athena-and-amazon-quicksight/">AWS Storage Blog</a>.</p>

Read article →

AWS’ Customer Carbon Footprint Tool now includes Scope 3 emissions data

<p>Today, AWS’ Customer Carbon Footprint Tool (CCFT) has been updated to include Scope 3 emissions data as well as Scope 1 emissions from natural gas and refrigerants, providing AWS customers more complete visibility into their cloud carbon footprint. This update expands the CCFT to cover all three industry-standard emission scopes as defined by the Greenhouse Gas Protocol.</p> <p>The CCFT Scope 3 update gives AWS customers full visibility into the lifecycle carbon impact of their AWS usage, including emissions from manufacturing the servers that run their workloads, powering AWS facilities, and transporting equipment to data centers. Historical data is available back to January 2022, allowing organizations to track their progress over time and make informed decisions about their cloud strategy to meet their sustainability goals. This data is available through the CCFT dashboard and AWS Billing and Cost Management Data Exports, enabling customers to easily incorporate carbon insights into their operational workflows, sustainability planning, and reporting processes.</p> <p>To learn more about the enhanced Customer Carbon Footprint Tool, visit the <a href="https://aws.amazon.com/sustainability/tools/aws-customer-carbon-footprint-tool/">CCFT Website</a>, <a href="https://us-east-1.console.aws.amazon.com/costmanagement/home?region=us-east-1#/customer-carbon-footprint-tool">AWS Billing and Cost Management console</a>, or read the <a href="https://sustainability.aboutamazon.com/aws-customer-carbon-footprint-tool-methodology.pdf">updated methodology documentation</a> and <a href="https://docs.aws.amazon.com/ccft/latest/releasenotes/what-is-ccftrn.html">release notes</a>.</p>

Read article →

Amazon RDS for SQL Server now supports retaining CDC configurations when restoring database backups

<p><a href="https://aws.amazon.com/rds/sqlserver/" target="_blank">Amazon Relational Database Service (Amazon RDS) for SQL Server</a> now allows maintaining Change Data Capture (CDC) settings and metadata when restoring native database backups. CDC is a Microsoft SQL Server feature that customers can use to record insert, update, and delete operations occurring in a database table, and make these changes accessible to applications. When a database is restored from a backup, CDC configurations and data are not preserved by default, which can result in gaps in data capture. With this new feature, customers can preserve their database CDC settings when restoring a database backup to a new instance, or a different database name.<br /> <br /> To retain CDC configurations, customers can specify the KEEP_CDC option when restoring a database backup. This option ensures that the CDC metadata and any captured change data are kept intact. Refer to the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.BackupRestore.html" target="_blank">Amazon RDS for SQL Server User Guide</a> to learn more about KEEP_CDC.&nbsp;This feature is available in all AWS Regions where Amazon RDS for SQL Server is available.</p>

Read article →

AWS Outposts 2U server is now available in the AWS GovCloud (US) Regions

<p>AWS Outposts 2U server is now supported in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. Outposts 2U server is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to on-premises or edge locations with limited space or smaller capacity requirements for a truly consistent hybrid experience. It is delivered in an industry-standard 2U form factor and provides up to 128 vCPUs of compute. It is ideal for applications that need to run on-premises to meet low latency and stringent compliance requirements. You can also use Outposts 2U server to manage your local data processing needs.</p> <p>With the availability of Outposts 2U server in the GovCloud (US-East) and GovCloud (US-West) Regions, you can now run Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS), and AWS IoT Greengrass locally on Outposts 2U servers, and connect to a broader range of AWS services available in the parent AWS GovCloud (US) Region. For AWS GovCloud (US) Regions customers, this means you can run sensitive and controlled unclassified information (CUI) data on-premises at your facilities and connect to the parent AWS GovCloud (US) Region for management and operations.</p> <p>To learn more about Outposts 2U server, visit the <a href="https://aws.amazon.com/outposts/servers/" target="_blank">product page</a> and read the <a href="https://docs.aws.amazon.com/outposts/latest/server-userguide/what-is-outposts.html" target="_blank">user guide</a>.</p>

Read article →

Amazon Location Service Introduces New Map Styling Features for Enhanced Customization

<p>Today, AWS announced enhanced map styling features for Amazon Location Service, enabling users to further customize maps with terrain visualization, contour lines, real-time traffic data, and transportation-specific routing information. Developers can create more detailed and informative maps tailored for various use cases, such as outdoor navigation, logistics planning, and traffic management, by leveraging parameters like terrain, contour-density, traffic, and travel-mode through the GetStyleDescriptor API.<br /> <br /> With these styling capabilities, users can overlay real-time traffic conditions, visualize transportation-specific routing information such as transit and trucks, and display topographic features through elevation shading. For instance, developers can display current traffic conditions for optimized route planning, show truck-specific routing restrictions for logistics applications, or create maps that highlight physical terrain details for hiking and outdoor activities. Each feature operates seamlessly, providing enhanced map visualization and reliable performance for diverse use cases.<br /> <br /> These new map styling features are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). To learn more, please visit the <a href="https://docs.aws.amazon.com/location/latest/developerguide/what-is.html" target="_blank">Developer Guide</a>.</p>

Read article →

Amazon Quick Sight announces the general availability of a new data preparation experience

<p>Amazon Quick Sight, a capability of Amazon Quick Suite, now offers a visual data preparation experience that helps business users perform advanced data transformations without writing complex code. Users can now clean, transform, and combine data in multi-step workflows—appending tables, aggregating data, executing flexible joins, and other advanced operations that previously required custom programming or SQL commands.<br /> <br /> Users can easily track data transformations step-by-step, enhancing traceability and shareability. With the ability to use datasets as a source expanded from 3 to 10 levels, teams can build reusable transformation logic that cascades across departments. For instance, centralized data analysts can now prepare foundational data sets that can then be further customized by regional business users, applying territory-specific calculations and business logic with simple clicks. The enhanced experience now also supports 20X larger cross-source joins, moving from a previous capacity of 1GB to 20GB today.<br /> <br /> This feature is available to Quick Sight Author and Author Pro customers in the following regions: US East (N. Virginia and Ohio), US West (Oregon), Canada (Central), South America (Sao Paulo), Europe (Frankfurt, Milan, Paris, Spain, Stockholm, Ireland, London, Zurich), Africa (Cape Town), Middle East (UAE), Israel (Tel Aviv), Asia Pacific (Jakarta, Mumbai, Singapore, Tokyo, Seoul, Sydney), AWS GovCloud (US-West, US-East) and to Quick Suite Enterprise subscribers in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). For more details, read our documentation <a href="https://docs.aws.amazon.com/quicksuite/latest/userguide/data-prep-experience-new.html" target="_blank">here</a>.</p>

Read article →

Amazon DCV releases version 2025.0 with enhanced keyboard handling and WebAuthn support

<p>AWS announces Amazon DCV 2025.0, the latest version of the high-performance remote display protocol that enables customers to securely access remote desktops and application sessions. This release focuses on enhancing user productivity and security while expanding platform compatibility for diverse use cases.<br /> <br /> Amazon DCV 2025.0 includes the following key features and improvements:<br /> </p> <ul> <li>Enhanced WebAuthn redirection on Windows and standard browser-based WebAuthn support on Linux, enabling security key authentication (such as YubiKeys and Windows Hello) in native Windows and SaaS applications within virtual desktop sessions</li> <li>Linux client support for ARM architecture, further broadening compatibility and performance</li> <li>Windows Server 2025 support, delivering the latest security standards and enhanced performance on DCV hosts</li> <li>Server-side keyboard layout support and layout alignment for Windows clients, enhancing input reliability and consistency</li> <li>Scroll wheel optimizations for smoother navigation</li> </ul> <p>For more information about the new features and enhancements in Amazon DCV 2025.0, see the <a href="https://docs.aws.amazon.com/dcv/latest/adminguide/doc-history-release-notes.html" target="_blank">release notes</a> or visit the <a href="https://aws.amazon.com/hpc/dcv/" target="_blank">Amazon DCV webpage</a> to learn more and get started.</p>

Read article →

Amazon EKS Auto Mode now available in AWS GovCloud (US-East) and (US-West)

<p>Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode is now available in the AWS GovCloud (US-East) and (US-West) regions. This feature fully automates compute, storage, and networking management for Kubernetes clusters. Additionally, EKS Auto Mode now supports FIPS-validated cryptographic modules through its Amazon Machine Images (AMIs) to help customers meet FedRAMP compliance requirements.<br /> <br /> EKS Auto Mode enables organizations to get Kubernetes conformant managed compute, networking, and storage for any new or existing EKS cluster. Its AMIs include FIPS-compliant cryptographic modules to help meet federal security standards for regulated workloads. EKS Auto Mode manages OS patching and updates, and strengthens security posture through ephemeral compute, making it ideal for workloads that require high security standards. It also dynamically scales EC2 instances based on demand, helping optimize compute costs while maintaining application availability.<br /> <br /> Amazon EKS Auto Mode is now available in AWS GovCloud (US-East) and (US-West). You can enable EKS Auto Mode in any EKS cluster running Kubernetes 1.29 and above with no upfront fees or commitments—you pay for the management of the compute resources provisioned, in addition to your regular EC2 costs.<br /> <br /> To get started with EKS Auto Mode, visit the <a href="https://aws.amazon.com/eks/" target="_blank">Amazon EKS product page</a>. For additional details, see the<a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-auto-mode.html" target="_blank"> Amazon EKS User Guide</a> and <a href="https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/whatis.html" target="_blank">AWS GovCloud (US) documentation</a>.</p>

Read article →

Amazon Redshift auto-copy is now available in 4 additional AWS regions

<p>Amazon Redshift auto-copy is now available in the AWS Asia Pacific (Malaysia), Asia Pacific (Thailand), Mexico (Central), and Asia Pacific (Taipei) regions. With auto-copy, you can set up continuous file ingestion from your Amazon S3 prefix and automatically load new files to tables in your Amazon Redshift data warehouse without the need for additional tools or custom solutions.<br /> <br /> Previously, Amazon Redshift customers had to build their data pipelines using COPY commands to automate continuous loading of data from S3 to Amazon Redshift tables. With auto-copy, you can now set up an integration that automatically detects and loads new files from a specified S3 prefix into Redshift tables. The auto-copy jobs keep track of previously loaded files and exclude them from the ingestion process. You can monitor auto-copy jobs using system tables.<br /> <br /> To learn more, see the <a href="https://docs.aws.amazon.com/redshift/latest/dg/loading-data-copy-job.html">documentation</a> or check out the AWS <a href="https://aws.amazon.com/blogs/big-data/simplify-data-ingestion-from-amazon-s3-to-amazon-redshift-using-auto-copy/">Blog</a>.</p>

Read article →

Amazon RDS for SQL Server enables encrypting native backups using server-side encryption with AWS KMS keys (SSE-KMS)

<p><a href="https://aws.amazon.com/rds/sqlserver/" target="_blank">Amazon Relational Database Service (Amazon RDS) for SQL Server</a> now supports encrypting native backups in Amazon S3 using server-side encryption with AWS KMS keys (SSE-KMS). When customers create database backup files (.bak files) in their Amazon S3 buckets, the backup files are automatically encrypted using server-side encryption with Amazon S3-managed keys (SSE-S3). Now, customers have the option to additionally encrypt their native backup files in Amazon S3 using their own AWS KMS key for additional protection.<br /> <br /> To use SSE-KMS encryption for native backups, customers must update their KMS key policies to provide access to the RDS backup service, and specify the parameter @enable_bucket_default_encryption in their native backup stored procedure. For detailed instructions on how to use SSE-KMS with native backups, please refer to the <a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html" target="_blank">Amazon RDS for SQL Server User Guide</a>. This feature is available in all AWS Regions where Amazon RDS for SQL Server is available.</p>

Read article →

Amazon EC2 I8g instances now available in additional AWS regions

<p>AWS is announcing the general availability of Amazon EC2 Storage Optimized I8g instances in Europe (London), Asia Pacific (Singapore), and Asia Pacific (Tokyo) regions. I8g instances offer the best performance in Amazon EC2 for storage-intensive workloads. I8g instances are powered by AWS Graviton4 processors that deliver up to 60% better compute performance compared to previous generation I4g instances. I8g instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 65% better real-time storage performance per TB while offering up to 50% lower storage I/O latency and up to 60% lower storage I/O latency variability. These instances are built on the <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a>, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.<br /> <br /> Amazon EC2 I8g instances are designed for I/O intensive workloads that require rapid data access and real-time latency from storage. These instances excel at handling transactional, real-time, distributed databases, including MySQL, PostgreSQL, HBase, and NoSQL solutions like Aerospike, MongoDB, ClickHouse, and Apache Druid. They're also optimized for real-time analytics platforms such as Apache Spark, data lakehouses, and AI LLM pre-processing for training. I8g instances are available in 10 sizes, up to 48xlarge (including one metal size), with up to 1.5 TiB of memory and 45 TB of local instance storage. They deliver up to 100 Gbps of network performance bandwidth, and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS).<br /> <br /> To learn more, visit <a href="https://aws.amazon.com/ec2/instance-types/i8g/">EC2 I8g instances</a>.</p>

Read article →

Amazon S3 now generates AWS CloudTrail events for S3 Tables maintenance operations

<p>Amazon S3 adds AWS CloudTrail events for table maintenance activities in Amazon S3 Tables. You can now use AWS CloudTrail to track compaction and snapshot expiration operations performed by S3 Tables on your tables.<br /> <br /> S3 Tables automatically performs maintenance to optimize query performance and lower costs of your tables stored in S3 table buckets. You can monitor and audit S3 Tables maintenance activities such as compaction and snapshot expiration as management events in AWS CloudTrail. To get started with monitoring, create a trail in the AWS CloudTrail console and filter for 'AwsServiceEvents' as the eventType and 'TablesMaintenanceEvent' as the eventName.<br /> <br /> AWS CloudTrail events for S3 Tables maintenance are now available in all <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-regions-quotas.html#s3-tables-regions" target="_blank">AWS Regions where S3 Tables are available</a>. To learn more, visit Amazon S3 Tables <a href="https://aws.amazon.com/s3/features/tables/" target="_blank">product page</a> and <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-logging.html" target="_blank">documentation</a>.</p>
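A minimal sketch of querying the new management events with boto3, assuming the event name quoted above and that the events appear in CloudTrail event history for the Region:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent S3 Tables maintenance events by event name
# ("TablesMaintenanceEvent" is taken from the announcement above).
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "TablesMaintenanceEvent"}
    ],
    MaxResults=50,
)
for event in response["Events"]:
    print(event["EventTime"], event["EventName"])
```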

Read article →

Amazon CloudWatch Synthetics now supports bundled multi-check canaries

<p><a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html" target="_blank">Amazon CloudWatch Synthetics</a> introduces multi-check blueprints, enabling customers to create comprehensive synthetic tests using simple JSON configuration files. This new feature addresses the challenge many customers face when developing custom scripts for basic endpoint monitoring, which often lack the depth needed for thorough synthetic testing across various check types like HTTP endpoints with different authentication methods, DNS record validation, SSL certificate monitoring, and TCP port checks.<br /> <br /> With multi-check blueprints, customers can now bundle up to 10 different monitoring steps, one step per endpoint, in a single canary, making API monitoring more cost-effective and easier to implement. The solution provides built-in support for complex assertions on response codes, latency, headers, and body content, along with seamless integration with AWS Secrets Manager for secure credential handling. Customers benefit from detailed step-by-step results and debugging capabilities through the existing CloudWatch Synthetics console, significantly simplifying the process of implementing comprehensive API monitoring compared to writing individual custom canaries for each check. This feature streamlines monitoring workflows, reduces costs, and enhances the overall efficiency of synthetic monitoring setups.<br /> <br /> Multi-check blueprints are available in all commercial AWS regions where Amazon CloudWatch Synthetics is offered. For pricing details, see <a href="https://aws.amazon.com/cloudwatch/pricing/" target="_blank">Amazon CloudWatch pricing</a>. To learn more about multi-check blueprints and how to get started, see the <a target="_blank"></a><a target="_blank"></a><a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries_Blueprints.html" target="_blank">CloudWatch Synthetics Canaries Blueprints documentation</a>.</p>

Read article →

Amazon CloudWatch Agent adds support for Windows Event Log Filters

<p>Amazon CloudWatch agent has added support for configurable Windows Event log filters. This new feature allows customers to selectively collect and send system and application events to CloudWatch from Windows hosts running on Amazon EC2 or on-premises. The addition of customizable filters helps customers to focus on events that meet specific criteria, streamlining log management and analysis.<br /> <br /> Using this new functionality of the CloudWatch agent, you can define filter criteria for each Windows Event log stream in the agent configuration file. The filtering options include event levels, event IDs, and regular expressions to either "include" or "exclude" text within events. The agent evaluates each log event against your defined filter criteria to determine whether it should be sent to CloudWatch. Events that don't match your criteria are discarded. Windows event filters help you to manage your log ingestion by processing only the events you need, such as those containing specific error codes, while excluding verbose or unwanted log entries.<br /> <br /> Amazon CloudWatch Agent is available in all commercial AWS Regions, and the AWS GovCloud (US) Regions.<br /> <br /> To get started, see <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html" target="_blank">Create or Edit the CloudWatch Agent Configuration File</a> in the Amazon CloudWatch User Guide.</p>
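As a purely illustrative sketch of the configuration shape (generated here as JSON from Python), the base windows_events fields follow the agent's existing configuration reference, while the filters block is an assumption modeled on the include/exclude filters the agent already supports for file-based logs; confirm the exact key names in the configuration guide linked above:

```python
import json

# Hypothetical CloudWatch agent configuration fragment: collect ERROR and
# CRITICAL events from the Application log, but drop events whose message
# matches a noisy pattern. The "filters" keys are assumptions modeled on the
# agent's file-based log filters; verify them against the configuration guide.
config = {
    "logs": {
        "logs_collected": {
            "windows_events": {
                "collect_list": [
                    {
                        "event_name": "Application",
                        "event_levels": ["ERROR", "CRITICAL"],
                        "log_group_name": "windows-application-errors",
                        "log_stream_name": "{instance_id}",
                        "filters": [
                            {"type": "exclude", "expression": "HealthCheckProbe"}
                        ],
                    }
                ]
            }
        }
    }
}

print(json.dumps(config, indent=2))
```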

Read article →

Amazon DocumentDB (with MongoDB compatibility) now supports Graviton4-based R8g database instances

<p>AWS Graviton4-based R8g database instances are now generally available for Amazon DocumentDB (with MongoDB compatibility). R8g instances are powered by AWS Graviton4 processors and feature the latest DDR5 memory, making them ideal for memory-intensive workloads. These instances are built on the <a href="https://aws.amazon.com/ec2/nitro/">AWS Nitro System</a>, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.<br /> <br /> Customers can get started with R8g instances through the AWS Management Console, CLI, and SDK by modifying their existing Amazon DocumentDB database cluster or creating a new one. R8g instances are available for Amazon DocumentDB 5.0 on both Standard and IO-Optimized cluster storage configurations. For more information, including Region availability, visit our <a href="https://aws.amazon.com/documentdb/pricing/">pricing page</a> and <a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/db-instance-classes.html">documentation</a>.</p>
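A minimal sketch of moving an existing instance to an R8g class with boto3; the instance identifier and instance size below are placeholders:

```python
import boto3

docdb = boto3.client("docdb")

# Move an existing Amazon DocumentDB instance to a Graviton4-based class.
# "sample-docdb-instance" and the db.r8g.large size are placeholders.
docdb.modify_db_instance(
    DBInstanceIdentifier="sample-docdb-instance",
    DBInstanceClass="db.r8g.large",
    ApplyImmediately=True,
)
```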

Read article →

Amazon Connect now supports threaded views and includes conversation history in agent replies

<p>Amazon Connect now includes the conversation history in agent replies and introduces threaded views of email exchanges, making it easier for both agents and customers to maintain context and continuity across interactions. This enhancement provides a more natural and familiar email experience.<br /> <br /> Amazon Connect Email is available in the US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">regions</a>. To learn more and get started, please refer to the help <a href="https://docs.aws.amazon.com/connect/latest/adminguide/setup-email-channel.html">documentation</a>, <a href="https://aws.amazon.com/connect/pricing/">pricing page</a>, or visit the <a href="https://aws.amazon.com/connect/">Amazon Connect</a> website.</p>

Read article →

AWS Secret-West Region is now available

<p>Amazon Web Services (AWS) is excited to announce the launch of our second Secret Region - AWS Secret-West. With this launch, AWS now offers two regions capable of operating mission-critical workloads at the Secret U.S. security classification level. AWS Secret-West places cloud resources in closer proximity to users and mission in the western U.S., delivering enhanced performance for latency-sensitive workloads and improving operational efficiency. Additionally, the region provides customers with multi-region resiliency capabilities and geographic separation to best meet their U.S. Government mission requirements.<br /> <br /> <b>Architected for Security and Compliance</b><br /> Security is the highest priority at AWS. With the AWS Secret-West Region, customers benefit from data centers and network architecture designed, built, accredited, and operated for security compliance with Intelligence Community Directive (ICD) requirements. The region features multiple Availability Zones, providing the high availability, fault tolerance, and resilience that mission-critical workloads require.<br /> <br /> <b>Mission Workload Support</b><br /> The AWS Secret-West Region allows customers to:<br /> • Build and run mission-critical applications with enhanced geographic distribution<br /> • Process and analyze sensitive data across multiple regions<br /> • Deploy robust disaster recovery strategies<br /> • Leverage comprehensive compute and storage solutions located in the western U.S.<br /> • Utilize services authorized at ICD 503 and Department of Defense (DoD) Security Requirements Guide (SRG) Impact Level (IL6) supporting Intelligence Community and Department of Defense authority to operate mission needs.<br /> <br /> To learn more about the AWS Secret-West Region and how to get started<i>, </i><a href="https://pages.awscloud.com/AWSSecretCloudContactUs_01.LandingPage.html"><i>contact us.</i></a></p>

Read article →

Amazon Connect now provides granular permissions for conversation recordings and transcripts

<p>Amazon Connect now provides granular permissions to access conversation recordings and transcripts in the UI, giving administrators greater flexibility and security control. Contact center administrators can now separately configure access to recordings and transcripts, allowing users to listen to calls while preventing unauthorized copying of transcripts. The system also provides flexible download controls, enabling users to download redacted recordings while restricting downloads of unredacted versions. Administrators can also create sophisticated permission scenarios, providing access to redacted recordings of sensitive conversations while granting unredacted recording access for other conversations.<br /> <br /> This feature is available in all regions where <a href="https://docs.aws.amazon.com/connect/latest/adminguide/regions.html#amazonconnect_region">Amazon Connect</a> is offered. To learn more, please visit our <a href="https://docs.aws.amazon.com/connect/latest/adminguide/security-profile-list.html">documentation </a>and our <a href="https://aws.amazon.com/connect/contact-lens/">webpage</a>.&nbsp;</p>

Read article →

New Amazon CloudWatch metrics to monitor EC2 instances exceeding I/O performance

<p>Today, Amazon announced two new Amazon CloudWatch metrics that provide insight into when your application exceeds the I/O performance limits for your EC2 instance with attached EBS volumes. These two metrics, <b>Instance EBS IOPS Exceeded Check</b> and <b>Instance EBS Throughput Exceeded Check</b>, monitor whether the IOPS or throughput your application drives exceeds the maximum EBS IOPS or throughput that your instance can support.<br /> <br /> With these two new metrics at the instance level, you can quickly identify and respond to application performance issues stemming from exceeding the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html" target="_blank">EBS-Optimized limits of your instance</a>. These metrics return a value of 1 (performance exceeded) when your workload exceeds the EBS-Optimized IOPS or throughput limit of the EC2 instance, and 0 (performance not exceeded) otherwise. With Amazon CloudWatch, you can use these new metrics to create customized dashboards and set alarms that notify you or automatically perform actions based on these metrics, such as moving to a larger instance size or a different instance type that supports higher EBS-Optimized limits.<br /> <br /> The <b>Instance EBS IOPS Exceeded Check</b> and <b>Instance EBS Throughput Exceeded Check</b> metrics are available by default at a 1-minute frequency at no additional charge, for all Nitro-based Amazon EC2 instances with EBS volumes attached. You can access these metrics via the EC2 console, CLI, or CloudWatch API in all commercial AWS Regions, including the AWS GovCloud (US) Regions and China Regions. To learn more about these CloudWatch metrics, please visit the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html#ebs-metrics-nitro" target="_blank">EC2 CloudWatch Metrics documentation</a>.</p>
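A minimal boto3 sketch of alarming on the IOPS check for a single instance; the metric name is assumed from the announcement wording (confirm the exact name in the linked documentation), and the instance ID and SNS topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the instance exceeds its EBS-Optimized IOPS limit for 5 minutes.
# MetricName is assumed from the announcement; verify against the EC2
# CloudWatch metrics documentation. Instance ID and topic ARN are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="ebs-iops-exceeded-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="Instance EBS IOPS Exceeded Check",  # assumed metric name
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)
```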

Read article →

Amazon U7i instances now available in AWS US East (Ohio) Region

<p>Starting today, Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are now available in the US East (Ohio) region. U7i-6tb instances are part of the AWS 7th generation of EC2 instances and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-6tb instances offer 6TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.<br /> <br /> U7i-6tb instances offer 448 vCPUs, support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.<br /> <br /> To learn more about U7i instances, visit the <a href="https://aws.amazon.com/ec2/instance-types/u7i/">High Memory instances page</a>.</p>

Read article →

Amazon SageMaker Unified Studio supports Amazon Athena workgroups

<p>Data engineers and data analysts using Amazon SageMaker Unified Studio can now connect to and run queries with pre-existing Amazon Athena workgroups. This feature enables data teams to run SQL queries in SageMaker Unified Studio with the default settings and properties from existing Athena workgroups. Since Athena workgroups are used to manage query access and control costs, data engineers and data analysts can save time by reusing Athena workgroups as their SQL analytics compute while maintaining data usage limits and tracking query usage by team or project. <br /> <br /> When choosing a compute for SQL analytics within SageMaker Unified Studio, customers can create a new Athena compute connection or choose to connect to an existing Athena workgroup. To get started, navigate to SageMaker Unified Studio, select “Add compute” and choose “Connect to existing compute resources”. Then create a connection to your pre-existing Athena workgroups and save. This new compute is now available within the SageMaker Unified Studio query editor to run SQL queries.<br /> <br /> Connecting to Athena workgroups within SageMaker Unified Studio is available in all <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/supported-regions.html">regions where SageMaker Unified Studio is supported</a>.<br /> <br /> To learn more, refer to the <a href="https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/adding-a-existing-athena-connection.html">SageMaker Unified Studio Guide</a> and <a href="https://docs.aws.amazon.com/athena/latest/ug/creating-workgroups.html">Athena Workgroups Guide</a>.</p>

Read article →

Amazon CloudWatch introduces interactive incident reporting

<p>Amazon CloudWatch now offers interactive incident report generation, enabling customers to create comprehensive post-incident analysis reports in minutes. The new capability, available within CloudWatch investigations, automatically gathers and correlates your telemetry data, as well as your input and any actions taken during an investigation, and produces a streamlined incident report.<br /> <br /> Using the new feature, you can automatically capture critical operational telemetry, service configurations, and investigation findings to generate detailed reports. Reports include executive summaries, timeline of events, impact assessments, and actionable recommendations. These reports help you better identify patterns, implement preventive measures, and continuously improve your operational posture through structured post-incident analysis.<br /> <br /> The incident report generation feature is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Spain), and Europe (Stockholm).<br /> <br /> You can create your first incident report by first creating a <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Investigations.html">CloudWatch investigation</a> and then clicking “<i>Incident report</i>”. To learn more about this new feature, visit the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Investigations-Incident-Reports.html">CloudWatch incident reports documentation</a>.</p>

Read article →

Amazon DynamoDB zero-ETL integration with Amazon Redshift now available in the Asia Pacific (Taipei) region

<p>Amazon DynamoDB zero-ETL integration with Amazon Redshift is now supported in the Asia Pacific (Taipei) region. This expansion enables customers to run high-performance analytics on their DynamoDB data in Amazon Redshift with no impact on production workloads running on DynamoDB.&nbsp;<br /> <br /> Zero-ETL integrations help you derive holistic insights across many applications, break data silos in your organization, and gain significant cost savings and operational efficiencies. Now you can run enhanced analysis on your DynamoDB data with the rich capabilities of Amazon Redshift, such as high performance SQL, built-in ML and Spark integrations, materialized views with automatic and incremental refresh, and data sharing. Additionally, you can use history mode to easily run advanced analytics on historical data, build lookback reports, and build Type 2 Slowly Changing Dimension (SCD 2) tables on your historical data from DynamoDB, out-of-the-box in Amazon Redshift, without writing any code.<br /> <br /> The Amazon DynamoDB zero-ETL integration with Amazon Redshift is now available in Asia Pacific (Taipei), in addition to previously supported regions. For a complete list of supported regions, please refer to the AWS Region Table where Amazon Redshift is available.<br /> <br /> To learn more, visit the getting started guides for <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/RedshiftforDynamoDB-zero-etl.html" target="_blank">DynamoDB</a> and <a href="https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.html" target="_blank">Amazon Redshift</a>. For more information on using history mode, we encourage you to visit our recent blog post <a href="https://aws.amazon.com/blogs/big-data/amazon-redshift-announces-history-mode-for-zero-etl-integrations-to-simplify-historical-data-tracking-and-analysis/" target="_blank">here</a>.</p>

Read article →

Amazon Aurora DSQL is now available in Europe (Frankfurt)

<p>Starting today, Amazon Aurora DSQL is now available in Europe (Frankfurt). Aurora DSQL is the fastest serverless, distributed SQL database with active-active high availability and multi-Region strong consistency. Aurora DSQL enables you to build always available applications with virtually unlimited scalability, the highest availability, and zero infrastructure management. It is designed to make scaling and resilience effortless for your applications and offers the fastest distributed SQL reads and writes.<br /> <br /> Aurora DSQL is now available in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Osaka), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Frankfurt).<br /> <br /> Get started with Aurora DSQL for free with the <a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;all-free-tier.sort-order=asc&amp;awsf.Free%20Tier%20Types=*all&amp;awsf.Free%20Tier%20Categories=categories%23databases">AWS Free Tier</a>. To learn more, visit the Aurora DSQL <a href="https://aws.amazon.com/rds/aurora/dsql/">webpage </a>and <a href="https://docs.aws.amazon.com/aurora-dsql/latest/userguide/what-is-aurora-dsql.html">documentation</a>.</p>

Read article →

AWS Lambda increases maximum payload size from 256 KB to 1 MB for asynchronous invocations

<p>AWS Lambda increases the maximum payload size for asynchronous invocations from 256 KB to 1 MB, allowing customers to ingest richer, complex payloads for their event-driven workloads without the need to split, compress, or externalize data. Customers invoke their Lambda functions asynchronously either by using the Lambda API directly or by receiving push-based events from AWS services such as Amazon S3, Amazon CloudWatch, Amazon SNS, Amazon EventBridge, and AWS Step Functions.<br /> <br /> Modern cloud applications increasingly rely on AWS Lambda’s asynchronous invocations and its integration with various AWS serverless services to build scalable, event-driven architectures. These applications often need to process rich contextual data, including large-language model prompts, telemetry signals, and complex JSON structures for machine learning outputs. With the increase in maximum payload size to 1 MB for asynchronous invocations, developers can streamline their architectures by including comprehensive data, from detailed user profiles to complete transaction histories, in a single event, eliminating the need for complex data chunking or external storage solutions.<br /> <br /> This feature is generally available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Commercial and AWS GovCloud (US) Regions</a>. Customers can start sending asynchronous invocation payloads up to 1 MB using Lambda’s invoke API. Customers are charged 1 request per asynchronous invocation for the first 256 KB; payloads larger than 256 KB are charged 1 additional request for each additional 64 KB, up to 1 MB. To learn more, read the Lambda asynchronous invocation <a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html">documentation</a> and AWS Lambda <a href="https://aws.amazon.com/lambda/pricing/">pricing</a>.</p>
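A minimal sketch of an asynchronous invocation with a larger payload, plus the billed-request arithmetic described above (for example, a 700 KB payload is billed as 1 + ceil((700 - 256) / 64) = 8 requests); the function name is a placeholder:

```python
import json
import math

import boto3

lambda_client = boto3.client("lambda")

payload = {"document": "x" * 720_000}  # roughly 700 KB of event data

# Asynchronous invocation: InvocationType="Event" queues the event for processing.
lambda_client.invoke(
    FunctionName="my-async-function",  # placeholder function name
    InvocationType="Event",
    Payload=json.dumps(payload),
)

# Billed requests per the pricing note above: 1 request covers the first 256 KB,
# then 1 additional request per additional 64 KB chunk.
size_kb = len(json.dumps(payload).encode("utf-8")) / 1024
billed_requests = 1 + max(0, math.ceil((size_kb - 256) / 64))
print(f"{size_kb:.0f} KB payload -> {billed_requests} billed requests")
```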

Read article →

Now generally available: AWS RTB Fabric for real-time bidding workloads

<p>Today, AWS announces RTB Fabric, a fully managed service that helps you connect with your AdTech partners such as Amazon Ads, GumGum, Kargo, MobileFuse, Sovrn, TripleLift, Viant, Yieldmo, and more in three steps while delivering single-digit millisecond latency through a private, high-performance network environment. RTB Fabric reduces standard cloud networking costs by up to 80% and does not require upfront commitments.</p> <p>The service includes modules, a capability that helps you bring your own and partner applications securely into the compute environment for real-time bidding. Modules support containerized applications and foundation models (FMs) that can enhance transaction efficiency and bidding effectiveness. Today, AWS RTB Fabric launches with three built-in modules to help you optimize traffic, improve bid efficiency, and increase bid response rates—all running inline for consistent low-latency execution. AWS RTB Fabric helps you to optimize auction execution, maximize supply monetization, and increase publisher revenue. You can connect with AdTech companies faster to reach target audiences, increase campaign scale, and improve performance for higher return on ad spend.</p> <p>AWS RTB Fabric is generally available in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" target="_blank">AWS Regions</a>: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more, read the <a href="https://aws.amazon.com/blogs/aws/introducing-aws-rtb-fabric-for-real-time-advertising-technology-workloads">Blog</a>, <a href="https://docs.aws.amazon.com/rtb-fabric/latest/userguide/what-is-rtb-fabric.html">Documentation</a>, or visit the <a href="https://aws.amazon.com/rtb-fabric/">AWS RTB Fabric</a> product page.</p>

Read article →

Amazon Connect outbound campaigns supports preview dialing for greater agent control

<p>Amazon Connect outbound campaigns now offers a preview dialing mode that gives agents more context about a customer before placing a call. Agents can see key customer information—such as name, account balance, and prior interactions—and choose the right moment to call. Campaign managers can tailor preview settings and monitor performance through new dashboards that bring visibility to agent behavior, campaign outcomes, and customer engagement trends.<br /> <br /> Without proper context, agents struggle to personalize interactions, leading to low customer engagement and poor experiences. Additionally, businesses can face steep regulatory penalties under laws such as the U.S. Telephone Consumer Protection Act (TCPA) or the UK Office of Communications (OFCOM) for delays in customer-agent connection.<br /> <br /> With preview dialing, campaign managers can define review time limits and optionally enable contact removal from campaigns. During preview, agents see a countdown timer alongside customer data and can initiate calls at any moment. Analytics reveal performance patterns—such as average preview time or discard volume—giving managers data to optimize strategy and coach teams effectively. By reserving an agent prior to placing the call, companies can support compliance with regulations while bringing precision to outbound calling, improving both customer connection and operational control.<br /> <br /> With Amazon Connect outbound campaigns, companies pay-as-they-go for campaign processing and channel usage. Preview dialing is available in AWS regions, including US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London).<br /> <br /> To learn more about configuring preview dialing, visit our <a href="https://aws.amazon.com/connect/outbound/">webpage</a>.</p>

Read article →

AWS Transfer Family now supports changing identity provider type on a server

<p>AWS Transfer Family now enables you to change your server's identity provider (IdP) type without service interruption. This enhancement gives you more control and flexibility over authentication management in your file transfer workflows, enabling you to adapt quickly to changing business requirements.<br /> <br /> AWS Transfer Family provides fully managed file transfers over SFTP, FTP, FTPS, AS2, and web-browser based interfaces. With this launch, you can now dynamically switch between service-managed authentication, Active Directory, and custom IdP configurations for SFTP, FTPS, and FTP servers. This enables you to implement zero-downtime authentication migration and meet evolving compliance requirements.<br /> <br /> Changing the IdP type is available in all <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions where the service is available</a>. To learn more, visit the <a href="https://docs.aws.amazon.com/transfer/latest/userguide/configuring-servers-edit-custom-idp.html">Transfer Family User Guide</a>.</p>

Read article →

Aurora DSQL now supports resource-based policies

<p>Amazon Aurora DSQL now supports resource-based policies, enabling you to simplify access control for your Aurora DSQL resources. With resource-based policies, you can specify Identity and Access Management (IAM) principals and the specific IAM actions they can perform against your Aurora DSQL resources. Resource-based policies also enable you to implement Block Public Access (BPA), which helps to further restrict access to your Aurora DSQL public or VPC endpoints.<br /> <br /> Aurora DSQL support for resource-based policies is available in the following <a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/">AWS Regions</a>: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Osaka), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Frankfurt). To get started, visit the <a href="https://docs.aws.amazon.com/aurora-dsql/latest/userguide/resource-based-policies.html">Aurora DSQL resource-based policies documentation</a>.</p>
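An illustrative sketch of what such a policy document might look like, expressed here as a Python dict; the principal, the dsql:DbConnect action, and the cluster ARN are placeholders/assumptions, and the exact actions and attachment steps are described in the linked documentation:

```python
import json

# Illustrative resource-based policy for an Aurora DSQL cluster.
# Account ID, role name, cluster ARN, and the dsql:DbConnect action are
# placeholders/assumptions; see the linked documentation for exact actions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/analytics-app"},
            "Action": "dsql:DbConnect",
            "Resource": "arn:aws:dsql:us-east-1:111122223333:cluster/sample-cluster-id",
        }
    ],
}

print(json.dumps(policy, indent=2))
```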

Read article →

Amazon EC2 Auto Scaling now supports predictive scaling in six more regions

<p>Customers can now enable predictive scaling for their Auto Scaling groups (ASGs) in six more regions: Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Israel (Tel Aviv), Canada West (Calgary), Europe (Spain), and Europe (Zurich). Predictive scaling can proactively scale out your ASGs to be ready for upcoming demand. This allows you to avoid the need to over-provision capacity, resulting in lower EC2 costs, while ensuring your application’s responsiveness. To see the list of all supported AWS public regions and AWS GovCloud (US) regions, <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/predictive-scaling-policy-overview.html#predictive-scaling-regions">click here</a>.<br /> <br /> Predictive scaling is appropriate for applications that experience recurring patterns of steep demand changes, such as early morning spikes when business resumes. It learns from past patterns and launches instances in advance of predicted demand, giving instances time to warm up. Predictive scaling enhances existing Auto Scaling policies, such as Target Tracking or Simple Scaling, so that your applications scale based on both real-time metrics and historic patterns. You can preview how predictive scaling works with your ASG by using the “Forecast Only” mode.<br /> <br /> Predictive scaling is available as a scaling policy type through AWS Command Line Interface (CLI), EC2 Auto Scaling Management Console, AWS CloudFormation and AWS SDKs. To learn more, visit the Predictive Scaling page in the <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/predictive-scaling-policy-overview.html#predictive-scaling-regions">EC2 Auto Scaling documentation</a>.</p>
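A minimal boto3 sketch of attaching a predictive scaling policy in forecast-only mode; the Auto Scaling group name and target value are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a predictive scaling policy in forecast-only mode so forecasts can be
# reviewed before the policy is allowed to launch capacity.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="sample-asg",  # placeholder ASG name
    PolicyName="predictive-scaling-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,  # placeholder target CPU utilization
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastOnly",
    },
)
```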

Read article →