AWS Aims to Reinvent GenAI Through Infrastructure Layer, Platform Tools and Applications

Amazon Web Services’ (AWS) 12th annual re:Invent conference was, unsurprisingly, all about generative AI (GenAI). The five-day event showcased all the ways AWS enables this budding technology — which Amazon CEO Andy Jassy claims will add tens of billions of dollars to AWS’ top line — not just through the infrastructure layer AWS is known for, but also through the company’s platform tools and applications. 

AWS set out to re:Invent infrastructure over a decade ago and is prepared to do the same with GenAI

Dating back to the dot-com bubble and the early days of amazon.com, Amazon gained a deep understanding of what it takes to provision infrastructure designed to scale to massive volumes. After Amazon spent years overcoming the scale challenges of bringing third-party merchants onto its e-commerce engine, AWS was born.
 
Despite all the competition it has faced over the past 10 years, AWS is still widely credited with not only pioneering cloud infrastructure but also making it accessible to anyone. As articulated by AWS CEO Adam Selipsky, this could range from a college student on a laptop in their dorm room to some of the most sophisticated enterprises in the world. But, owing largely to the pandemic, the cloud market has shifted from a data center outsourcing strategy to a strategic business driver, which means AWS has had to adapt alongside its clients, offering not just traditional hosting services but also full-stack solutions tied to specific use cases.
 
One of the most compelling customer examples of this approach is Pfizer. At the height of the COVID-19 pandemic in 2021, Pfizer pledged to expand its cloud footprint from 10% to 80%. In practice, that meant migrating 12,000 applications and 8,000 servers in 42 weeks, which resulted in $47 million in annual savings and the closure of three data centers. This seemingly successful, large-scale transformation has Pfizer now exploring AWS' GenAI technologies, including Bedrock, to automate manual processes and realize a projected $750 million to $1 billion in annual cost savings.
 
This customer example speaks to the powerful influence AWS’ infrastructure has with clients such as Pfizer — which needed to submit data to the Food and Drug Administration in a matter of days during COVID-19 — that prioritize speed, scale and agility. Holding a significant portion of the cloud infrastructure layer, AWS is looking up the stack to tackle cloud’s next big reinvention: GenAI.
 


A closer look at AWS’ GenAI stack

Selipsky’s overview of the AWS GenAI stack was consistent with the commentary Jassy has provided on Amazon earnings calls over the past couple of quarters. Here is a quick look at AWS’ GenAI capabilities and some of the new innovations:

  • Infrastructure: While a great talking point for AWS, there is no arguing that scalable compute serves as the foundation for all things GenAI. For AWS, this includes both custom chips and NVIDIA (Nasdaq: NVDA) GPUs. AWS used re:Invent to launch innovations in both areas, including Amazon Trainium2 (Trn2) instances, which promise a fourfold performance increase over Trn1 for machine learning training workloads, and NVIDIA DGX Cloud on AWS. The latter is particularly interesting, as all the other major cloud providers have already signed on as hosting partners for NVIDIA's DGX AI software. As the first company to put GPUs in the cloud, AWS has a unique relationship with NVIDIA, but one that may be growing more contentious as sales teams push AWS' own chips as part of a cost optimization play designed to maximize customers' lifetime value. Even so, NVIDIA's supplier power gives it significant leverage over the hyperscalers: they need NVIDIA to supply GPUs for their data centers, and in return they host DGX and support NVIDIA's push into the software space.
  • Platform tools and "as a Service" models: The middle layer of AWS' GenAI stack is largely synonymous with Amazon Bedrock, a managed service used by more than 10,000 customers to access and customize foundation models for their GenAI apps. Ensuring customers are not beholden to one model provider and can access an array of options through the same API interface is key to AWS' strategy (a brief sketch of this single-API pattern follows this list). It also contrasts with Microsoft's (Nasdaq: MSFT) approach and helps AWS position itself as an open and flexible alternative. New models supported via Bedrock include Anthropic's Claude 2.1, which has a context window of roughly 150,000 words, making it well suited for legal and finance use cases, in addition to internal models like Amazon Titan Multimodal Embeddings. Breadth of models is key, but improving the native functionality within Bedrock garners the majority of AWS' investment at this layer, largely in features that take customers beyond out-of-the-box models to ones that can be customized, fine-tuned and applied to business use cases. One example is Knowledge Bases for Amazon Bedrock, a Retrieval Augmented Generation (RAG) service that pulls data from multiple sources (e.g., databases, APIs) to help customers bring their data to their models and customize them.
  • GenAI applications: At the top of the stack are the actual GenAI applications built on foundation models. AWS may have a weaker association here, but this layer is important to rounding out the entire stack and keeping customers invested in AWS. Today it largely comprises Amazon CodeWhisperer, the free-to-use coding companion that also offers customization capabilities, meaning the application learns from internal code to provide personalized recommendations.
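
To make the single-API point concrete, below is a minimal sketch of calling a Bedrock-hosted model through boto3's invoke_model operation. The region, prompt and generation settings are illustrative assumptions; each model family defines its own request and response JSON, but the call itself stays the same regardless of whose model sits behind the modelId.

```python
import json
import boto3

# Bedrock exposes many foundation models behind one runtime API.
# Assumes model access has been granted in a region where Bedrock
# and Claude 2.1 are available (e.g., us-east-1).
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude-family models (as of Claude 2.1) expect a Human:/Assistant:
# prompt and a max_tokens_to_sample setting; other providers' models
# use their own request schemas.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the key terms of this contract: ...\n\nAssistant:",
    "max_tokens_to_sample": 500,
    "temperature": 0.2,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2:1",  # swap the ID to use another provider's model
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream of JSON; Claude returns its text
# under the "completion" key.
print(json.loads(response["body"].read())["completion"])
```

Swapping in a Titan or Llama model is a matter of changing the modelId and the request JSON; the client, permissions model and invocation pattern are unchanged, which is the portability Bedrock is selling.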

It is all about breadth

With over 220 native services and 600 compute instance types, portfolio breadth has always been a hallmark of AWS. For context, AWS launched 3,300 new features and services in 2022 alone. In his opening keynote, Selipsky went so far as to say that AWS offers 60% more services and 40% more features than its nearest competitor. The approach to GenAI will be no different, as AWS strives to offer the broadest set of capabilities for customers to run, build and deploy GenAI technology.
 
Even in areas where AWS lacks depth or specificity, ISV solutions prove instrumental in filling gaps and, in many cases, do more to drive up a customer's underlying IaaS consumption than AWS' out-of-the-box services. We also know AWS has a rich history of bringing very basic services to market and quickly building them into competitive products over time. Perhaps the best example is Amazon SageMaker, which has accumulated over 250 features and tens of thousands of customers in the span of six years.

 

GenAI applications: How AWS is entering the copilot race with Amazon Q

At re:Invent, AWS took a bigger leap into the GenAI applications space with the launch of Amazon Q. While incorporating natural language processing (NLP) into various services is not necessarily new to AWS, Q is a GenAI-powered assistant, trained on 17 years of AWS knowledge, designed to bridge the gap between technical and business-led functions in the enterprise. For example, Q will be integrated with the CodeWhisperer environment so developers can ask questions like, "How do I create code for this function?" while admins can use Q in virtually any environment (e.g., AWS Management Console, Slack, documentation) to ask questions as generic as, "How do I build a web app on AWS?"

 

But Q also connects to 40 external data sources for business-related tasks, such as data visualization and document summarization. The assistant integrates with Amazon Connect for contact center optimization and will soon work with the AWS Supply Chain application launched at last year's re:Invent. Integrating Q across functions like supply chain and customer service, in addition to the analytics stack with QuickSight, suggests AWS wants Q to be the expert assistant not just for building on AWS but also for running the business.

 

This approach is largely consistent with what we are seeing from competitors integrating copilots and assistants into their SaaS offerings; however, there are a couple of big contrasts between Q and Microsoft's Copilot and Google Cloud's (Nasdaq: GOOGL) Duet AI. The first is pricing: Both Copilot and Duet AI are priced at $30 per user per month, while Amazon Q, though still in preview, will come in at $20 and $25 per user per month for the Q Business and Q Builder editions, respectively.
 
AWS may be undercutting its competitors on price, but Microsoft's and Google Cloud's recognition and reach in the productivity space may prove challenging to overcome, at least for the Q Business edition. Q Builder, however, may be another story. While including all the capabilities of Q Business, Q Builder is designed for AWS-specific use cases, and in general, anything AWS can do to make developers successful will be well received by the customer base. This could include tasks like troubleshooting applications, writing SQL queries or even migrating code; a small pool of Amazon developers tested this last capability internally, upgrading 1,000 applications from Java 8 to Java 17 in two days.

 

The other big difference is that Amazon Q leverages Bedrock, which means the GenAI assistant pulls from multiple third-party models and assigns them to the right tasks. Peers have taken a different approach, basing their assistants on a sole provider: for Google Cloud, internal models like Codey, and for Microsoft, OpenAI's GPT models. While we cannot say for certain how customers will view these approaches, for AWS, having Q based on Bedrock speaks to the company's goal of offering a broad array of models in hopes of challenging Microsoft.

The zero-ETL integrations keep coming

Building on last year's commitment to a zero-ETL (extract, transform, load) future and the resulting integration between Redshift and Aurora, AWS launched three new zero-ETL relational and nonrelational database integrations with Redshift: Aurora PostgreSQL, RDS (Relational Database Service) for MySQL and DynamoDB. Just as it wants to offer the broadest set of infrastructure options, AWS wants to ensure it has the breadth of cloud data services customers need so they do not have to compromise on the right tool for the right data task. But even if customers have an array of tools accessible to them, they still need a way to break down data silos, which requires integration.

 

To automatically connect data from source to destination and ease manual ETL processes, AWS is offering more integrations between its database and data warehouse services. We do suspect "zero-ETL" has become more of a marketing term, essentially amounting to glorified data sharing, but there is undoubtedly value in simplifying how businesses connect and analyze data. Even before GenAI broke into the headlines, businesses were realizing the benefits of breaking down data silos and adopting an integrated data posture, and GenAI should only fast-track data strategies throughout the enterprise.
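
As a rough illustration of how little plumbing "zero-ETL" asks of the customer, the sketch below uses the create_integration call on boto3's RDS client, the API behind the Aurora-to-Redshift integrations. The ARNs and integration name are hypothetical, and a real setup also requires the target Redshift warehouse to be configured to authorize the integration.

```python
import boto3

# Hypothetical ARNs; a real integration needs an existing Aurora cluster
# and a Redshift warehouse (serverless namespace or provisioned cluster)
# that has authorized the integration on its side.
SOURCE_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster"
TARGET_ARN = "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/my-namespace"

rds = boto3.client("rds", region_name="us-east-1")

# One call sets up continuous replication from the source database into
# Redshift; there is no ETL pipeline to build, schedule or maintain.
integration = rds.create_integration(
    IntegrationName="orders-to-redshift",
    SourceArn=SOURCE_ARN,
    TargetArn=TARGET_ARN,
)

print(integration["Status"])  # e.g., "creating" while replication is provisioned
```

Once the integration is active, new and changed rows in the source database land in Redshift within seconds and are queryable with standard SQL, which is the "data sharing" behavior the marketing term wraps.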

 

Microsoft recognized this trend years ago and recently launched Fabric, a platform that integrates multiple data services, including Synapse (akin to Amazon Redshift), into a single offering. Fabric is a single-source-of-truth platform that addresses the entire data life cycle and charges customers based on total compute capacity consumed, rather than the compute and storage of each individual data service. AWS' approach is different: customers have a suite of data services available to them, but it can take more effort to stitch these services together into a unified environment. The new zero-ETL integrations may help rectify this, but Microsoft's single-platform approach and simplified pricing model, all integrated with Copilot, will be competitive.

At AWS, “partners are the catalysts”

In the second-to-last keynote, VP of Worldwide Channels and Alliances Ruba Borno discussed the critical role of partners as catalysts for GenAI adoption. This includes both ISVs and global systems integrators, and AWS wants to work with both parties collectively to meet customers where they are in their journeys and work backward from their needs. Delivering solutions as part of an ecosystem was a big focus of the revamped Partner Paths model two years ago, and now AWS is tasked with scaling this model to deliver the GenAI stack to customers.
 
When asked by Borno what partners can do to drive more business with AWS, Selipsky quickly called out proficiency: making sure the skills are in place to build trust with joint customers. Specializations and competencies are a big piece of proficiency and are credentials customers appear to be asking for. At the event, AWS announced the general availability of specializations in resilience and cyber insurance and said it is consolidating its Competency, Service Delivery and Service Ready designations into one program. Another piece of advice for partners was to focus on putting the necessary resources in place to go to market with AWS, which could be anything from established business units to codeveloped centers of excellence.
 

It is always a balancing act between the vendor and the partner as to who should invest what in go-to-market resources to achieve collective goals. But the message from the talk between Selipsky and Borno seemed to be that AWS has the funding, tooling and programs in place to make partners' go-to-market strategies successful, provided the partner is willing to engage. As a result, it may be difficult for some Tier 2 partners to grow their AWS business and get in on the GenAI opportunity, given the massive resource scale of some of their Tier 1 competitors.
 
As an example, Accenture pledged to train 50,000 developers and technical specialists on Amazon Q and CodeWhisperer over the next two years. Despite GenAI's potential to automate labor, the technology will only widen the vast IT skills gap, so vendors that can acquire and train the right talent will continue to outperform when it comes to doing business with AWS.
 
Lastly, Selipsky reiterated the important role partners will play in the data ecosystem. Considering it is not the foundation models themselves that will differentiate customers, but rather their data, there is an opportunity for partners here: anything they can do to help customers establish a data layer that paves the way for AWS' GenAI stack will be well received.

Conclusion

While late to the GenAI movement, AWS, with its early establishment in cloud infrastructure, has actually been involved with AI for quite some time. In many ways, the company used re:Invent to raise its voice above the din of AI chatter and showcase the long-standing innovations it aims to use to build new capabilities and play catch-up with competitors, namely Microsoft. The best example is Amazon Q, a business-focused assistant that is somewhat comparable to Microsoft Copilot, while more Redshift integrations underscore AWS' goal of better connecting customers to other AWS services, an approach Microsoft is similarly taking with Fabric. Meanwhile, custom compute offerings will continue to serve as a landing spot for net-new workloads and, in some cases, could provide cost and performance benefits that help AWS be viewed as not just a hosting provider but also a long-term digital transformation partner.

 

At the end of the day, customers' consideration of these GenAI offerings will depend heavily on their existing infrastructure footprint, the level of integration required and the business use case. Working with partners to land new business and maintain its IaaS leadership lays a foundation for AWS to build the broadest set of integrations, features and services. In doing so, AWS ensures it can meet clients anywhere in their journey, regardless of technical requirement or business need. If properly executed, this approach will help AWS continue to grow off an $88 billion run rate and maintain its lead over its very fast-following peers in IaaS and PaaS.