AWS re:Invent 2024: Innovating and Integrating to Meet AI’s Moment

AWS re:Invent 2024 Overview

Matt Garman kicked off his first re:Invent conference as AWS’ CEO by reinforcing a strategy that has been rooted in AWS’ DNA for over a decade: the notion of “building blocks,” or the 220-plus native services AWS offers, each catering to a specific workload and, when used together in a modular fashion, able to address specific use cases. This approach of offering the broadest set of out-of-the-box, user-friendly tools to attract new applications, spin the IaaS meter and feed the lucrative flywheel effect AWS is known for has naturally garnered a lot of interest among developer and startup communities. But Garman was quick to remind us how far AWS has come in catering to the large enterprise.

As an example, Garman welcomed JPMorgan Chase Global CIO Lori Beer to the stage to share the company’s aggressive cloud transformation, which grew from 100 applications on AWS in 2020 to over 1,000 today, powered by a range of services, from Graviton chips to SageMaker to AWS’ fastest-growing service, Aurora. If this success story is any indication, and if we factor in the feedback from our own C-Suite discussions, the building-block approach appears to be resonating, solidifying AWS’ position as the leading IaaS & PaaS provider. But with every new application poised to have some AI or generative AI (GenAI) component, this budding technology is raising the stakes, and the hybrid-multicloud reality means customers have a lot of options when it comes to crafting new workloads.

Compute is the foundational building block, with a heavy focus on AI training

Today, AWS offers over 850 Amazon Elastic Compute Cloud (EC2) instance types, and on average, 130 million new EC2 instances are launched daily. This pace of innovation and scale is largely due to AWS’ approach to the virtualization stack, dating back to 2012 with the Nitro System, which other hyperscalers have since emulated in their own ways, making compute the foundational building block and hallmark of AWS’ success. Though at the event AWS touted its commitment to NVIDIA, with support for Blackwell GPUs coming online next year, as well as to general-purpose workloads via Graviton, much of the focus was on AI training.
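
To make the building-block idea concrete, the minimal sketch below shows how one of those 850-plus instance types, in this case a Graviton3-based general-purpose instance, might be launched programmatically with boto3. The AMI ID, tag values and region are hypothetical placeholders, not anything AWS showed at the event.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder Arm64 (Graviton-compatible) AMI ID; substitute your own.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m7g.large",  # a Graviton3-based general-purpose instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "graviton-demo"}],  # illustrative tag
    }],
)

print(response["Instances"][0]["InstanceId"])
```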

Since first announcing its Trainium chip in 2020, AWS has served the needs of AI training workloads, but now AI-driven ISVs like Databricks and Adobe seem to have an appetite for these chips, hoping to deliver cost and performance efficiencies to the wide swath of their customers that also run on AWS. It is why AWS launched Trainium 2 and is making the corresponding EC2 Trn2 instances, each of which encompasses 16 Trainium 2 chips, generally available following a year in private preview. AWS also reinforced its commitment to continuing to push the compute boundaries on AI training, announcing that Trainium 3, which will be available later next year, will reportedly offer double the compute power of Trainium 2.

Rise of the distributed database

Another core building block of the cloud stack is the database. Distributed databases are nothing new but have been picking up steam as customers in certain industries, including the public sector, want data stored within country borders yet need to scale across different regions. At the event, AWS introduced Aurora DSQL, a distributed SQL database that, at its core, decouples transaction processing from the storage layer so customers can scale across multiple regions with relatively low latency.
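
For a sense of what that architecture means in practice, here is a minimal sketch assuming Aurora DSQL’s advertised PostgreSQL compatibility: an ordinary driver issues an ordinary transaction, and the decoupled layers handle the distribution. The endpoint, table and credentials are hypothetical; real connections use short-lived IAM-based auth tokens rather than a static password.

```python
import psycopg2  # standard PostgreSQL driver; no DSQL-specific library assumed

# Hypothetical cluster endpoint and credentials.
conn = psycopg2.connect(
    host="example-cluster.dsql.us-east-1.on.aws",
    port=5432,
    dbname="postgres",
    user="admin",
    password="<short-lived-iam-auth-token>",  # placeholder for an IAM auth token
    sslmode="require",
)

with conn.cursor() as cur:
    # An ordinary transactional write against a hypothetical table; the
    # separation of transaction processing from storage is what lets DSQL
    # scale this across regions behind the scenes.
    cur.execute(
        "INSERT INTO orders (order_id, status) VALUES (%s, %s)",
        (1001, "new"),
    )
conn.commit()
conn.close()
```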

This development comes at an interesting time in the cloud database market. Database giant Oracle is shaking up the market by making its services available on all the leading clouds, including AWS, with the Oracle Database@AWS service now in limited preview. But AWS is focused on choice. While the IaaS opportunity to land Oracle workloads was too good to pass up, particularly when Microsoft Azure and Google Cloud Platform (GCP) are doing the same thing, AWS wants to continue pushing the performance boundaries of its own databases. In fact, it was Google Cloud that AWS targeted at the event, boasting that Aurora DSQL handles reads and writes four times faster than Google Spanner.

Creating more unity between data and AI was somewhat inevitable

Jumping on the platform bandwagon, AWS morphs SageMaker into SageMaker AI

AWS launched SageMaker seven years ago, and the machine learning development service quickly emerged as one of AWS’ most popular and innovative offerings, adding 140 new features in the last year alone. But when GenAI and Amazon Bedrock came on the scene, SageMaker found a new home in the GenAI portfolio, acting as the primary tool customers use to fine-tune the foundation models they access through the Bedrock service. So, from a messaging perspective, it was not surprising to see AWS announce that SageMaker is becoming SageMaker AI. What is notable is how SageMaker AI is being marketed, integrated and delivered.

First, AWS VP of Data and AI Swami Sivasubramanian introduced the SageMaker AI platform as a one-stop shop for data, analytics and AI, underpinned by SageMaker Unified Studio, which consolidates several previously disparate AWS data and analytics tools, from Redshift to Glue, into a single environment. Just as importantly, Unified Studio offers a native integration with Bedrock, so customers can access Bedrock for GenAI app development within the same interface, along with Q Developer for coding recommendations.

The second important piece is how data is accessed for SageMaker AI. The foundational layer of the SageMaker AI platform is SageMaker Lakehouse, which is accessible directly through Unified Studio, so customers can work from a single copy of data regardless of whether it sits in data lakes they created on S3 or in the Redshift data warehouse. This means customers do not have to migrate any existing data to use SageMaker Lakehouse and can query data stored in Apache Iceberg format as it exists today. We have seen big leaps forward in data lake messaging from competitors and/or partners like Microsoft, Oracle and Databricks, so the SageMaker Lakehouse announcement, combined with adjacent S3 developments like S3 Tables for the automatic maintenance of Apache Iceberg tables, aligns with the market and is a big reaffirmation of the Apache Iceberg ecosystem.
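
As a rough illustration of the query-in-place idea, and not AWS’ documented Lakehouse workflow, the sketch below uses Athena, one of several Iceberg-compatible engines on AWS, via boto3 to query an Iceberg table where it already sits; the database, table and results bucket names are hypothetical.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database, table and output bucket; the underlying data stays
# in place in Apache Iceberg format and is not copied or migrated first.
query = athena.start_query_execution(
    QueryString=(
        "SELECT customer_id, SUM(amount) AS total "
        "FROM sales_iceberg GROUP BY customer_id"
    ),
    QueryExecutionContext={"Database": "lakehouse_demo"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

print(query["QueryExecutionId"])  # poll get_query_execution for completion
```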

In our view, SageMaker AI is a big development for a couple of reasons. First and foremost, it could go a long way toward addressing one of the top concerns we often hear from AWS customers: they want consistent data without having to stitch together multiple disparate services to carry out a task. SageMaker is still available as a stand-alone service for customers with specific requirements, but we suspect many customers will find value in addressing the full AI life cycle, from initial data wrangling through model development, as part of a unified experience. Since AWS launched the first EC2 instance in 2006, formalizing cloud computing as we know it today, we have watched the market gradually shift toward more complete, integrated solutions. From IBM to Microsoft, many of IT’s biggest players take a platform-first approach to ease common pain points like integration and cost in hopes of enabling true enterprise-grade digital transformation, and SageMaker AI signifies a step in this direction.

Secondly, SageMaker AI aligns AWS more closely with what competitors are doing to better integrate their services and sell data and AI as part of the same story. Considering its consolidation of services, data lake architecture and copilot (Amazon Q) integration, Microsoft Fabric is the most notable example, and while there are big technical differences between the two platforms, you can now draw parallels between how the two companies are trying to better address the data layer in a broader AI pursuit. For context, TBR’s own estimates suggest Microsoft Azure (IaaS & PaaS) will significantly narrow, if not overtake, AWS’ revenue lead by 2027, and a lot of customers we talk to today give Microsoft a leg up on data architecture. Nothing can displace Microsoft’s ties to legacy applications and the data within them, but SageMaker AI is clearly in step with the market, and if AWS can effectively engage partners on the data side, this solution could help AWS retain existing workloads and compete for new ones.

AWS’ values of breadth and accessibility extend to Bedrock

Because Bedrock and SageMaker go hand in hand, having a Bedrock IDE (integrated development environment) directly in SageMaker makes a lot of sense. This means that within SageMaker AI, customers can access all the foundation models Bedrock supports, as well as the various capabilities, like Agents and Knowledge Bases, that AWS has been rolling out to its audience of “tens of thousands” of Bedrock customers, a base that reportedly grew nearly five times in the last year alone. In true AWS fashion, offering the broadest set of foundation models is integral to the Bedrock strategy. That includes adding support for models from very early-stage AI startups like Luma and poolside, tying them to AWS infrastructure early on and growing them into competitive ISVs over time.

Another key attribute of Bedrock has always been democratization, making access to foundation models as seamless as possible through a single API. In line with this strategy, AWS launched Bedrock Marketplace to make it easier for customers to find and subscribe to the 100-plus foundation models Bedrock supports, including those from Anthropic, IBM and Meta, as well as Amazon itself. AWS is the king of marketplaces, so having a dedicated hub where AI models, from both startups and enterprise-grade providers, are part of a single experience is certainly notable and further fuels the shift in buyer behavior toward self-service.
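
To illustrate the single-API point, here is a minimal sketch using boto3’s Bedrock runtime Converse API, which addresses different providers’ models through one uniform call; the model ID and prompt are illustrative, and swapping in another supported model is largely a matter of changing the modelId string.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative model ID and prompt; any Bedrock-supported model enabled in
# your account can be addressed through this same Converse call.
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize AWS re:Invent 2024 in two sentences."}],
    }],
    inferenceConfig={"maxTokens": 256},
)

print(response["output"]["message"]["content"][0]["text"])
```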

Partners take note: Security, modernization and marketplace

Despite all the talk around AI and GenAI, security remains the No. 1 pain point when it comes to cloud adoption and was a big theme in the partner keynote. AWS VP of Global Specialists and Partners Ruba Borno reinforced the importance of AWS’ various specialization programs, which help partners demonstrate skills to clients in key areas, including security. During the keynote, AWS announced new security specializations, including one around AWS’ Security Lake service. This is a telling development for partners: Security Lake was essentially designed with partners in mind, allowing many services-led firms to build integrations and attach managed services. Now these partners can demonstrate their Security Lake skills to customers, along with capabilities in other areas of security, such as digital sovereignty, which aligns with AWS’ upcoming launch of the European Union (EU) Sovereign Cloud region.

Aside from security, AWS emphasized modernization and the need for partners to think beyond traditional cloud migration opportunities. It is why AWS launched new incentives for modernization, including removing funding caps within the Migration Acceleration Program (MAP), and rebranded the AWS Migration Competency as the AWS Migration and Modernization Competency. This is telling of where AWS wants partners to focus and, in many cases, change the conversation with buyers, emphasizing the role of modernization in the migration process. Considering how difficult it has become for services players to compete on migration alone, as well as the fact that modernization could set the stage for more GenAI usage with tools like Q Developer, we believe this is aligned with where many global systems integrators are headed anyway.

Expanding the reach of AWS Marketplace

No partner discussion would be complete without AWS Marketplace, AWS’ pervasive hub where customers can buy and provision software using their existing cloud spend commitments. Year to date, AWS reports that essentially all of its top 1,000 customers buy on AWS Marketplace, and usage spans several industries, including the public sector, which has reportedly transacted over $1 billion on AWS Marketplace in the past year. At re:Invent, AWS continued to take steps to expand the reach of AWS Marketplace and get partners to better engage customers through this channel with the availability of Buy with AWS, an option that allows customers to access AWS Marketplace directly from a partner’s website.

Final thoughts

re:Invent showcased how AWS is pushing the envelope, in both breadth and capability, on the compute, database and AI building blocks customers use to solve specific use cases in the cloud. This approach, coupled with innovations like Bedrock Marketplace and a commitment to early-stage startups, speaks to how AWS will continue to lean into the core strengths that have made the cloud provider what it is today. But just as notably, offerings like SageMaker AI and an alliance with competitor Oracle show how AWS is embracing new tactics and elevating its role within the cloud ecosystem.