Gemini for Google Cloud was a major highlight at Google Cloud Next 2024, signalling a significant step towards embedding Generative AI assistants across the platform. Devoteam experts who attended the event in Las Vegas and participated in trusted tester programs observed this “AI wave” across all Google Cloud domains, particularly in IT infrastructure and application modernisation. This integration of AI into daily work processes will reshape how businesses approach IT infrastructure and application development.
Let’s talk about the future of Google Cloud, where we can see a wave coming. That wave is the AI wave, which is blending AI into our daily way of working across all main Google Cloud domains: IT infrastructure and app modernisation, data, and work transformation. This AI wave was present across the entire Google Cloud Next 2024 event in Las Vegas.
1. Gemini for Google Cloud
That brings us to the first big announcement in IT infrastructure modernisation: “Gemini for Google Cloud”. This is the first step towards embedding Generative AI assistants platform-wide: “Gemini Code Assist” for developers, “Gemini Cloud Assist” for platform engineers, “Gemini in Security Operations” for security engineers, and “Gemini in Databases” for database engineers and migrations.
Devoteam currently participates in multiple trusted tester programmes, acting as the voice of the customer and providing feedback to the product teams to evolve the offering while these products are still in preview.
Gemini Pricing Model
The only roadblock we see is the pricing model, which has been evolving into a per-user licence fee, similar to Gemini for Workspace.
I believe that moving to the cloud has always been about a pay-for-what-you-use experience, rather than campaigns that push customers towards particular product SKUs.
I would like to challenge Google to use a more innovative and fair pricing model, such as the pay-per-query model of BigQuery. If platform engineers use the Gemini assistant, they could simply pay per query.
That would seem fair: the better the assistant becomes, the more it would be used. Usage would be less predictable, but paying only for what you use is exactly how BigQuery won the hearts and minds of many of its proponents, and the same holds for Kubernetes and Cloud Run.
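To make the comparison concrete, here is a minimal sketch of how BigQuery-style on-demand billing works: each query is charged by the bytes it scans rather than by a per-seat licence. The per-TiB rate below is an illustrative assumption, not an official price.

```python
# Illustrative sketch of BigQuery-style pay-per-query billing:
# cost is driven by the bytes each query scans, not by a per-user licence.
# The rate below is an assumption for illustration, not an official price.

TIB = 2 ** 40  # bytes in one tebibyte


def query_cost(bytes_scanned: int, rate_per_tib: float = 6.25) -> float:
    """Return the on-demand cost of a single query in USD."""
    return (bytes_scanned / TIB) * rate_per_tib


# A month of assistant usage billed the same way: pay only for what you run.
queries = [50 * 2**30, 200 * 2**30, 1 * TIB]  # 50 GiB, 200 GiB, 1 TiB scanned
monthly = sum(query_cost(b) for b in queries)
print(f"Monthly on-demand cost: ${monthly:.2f}")
```

Under this model, a quiet month costs almost nothing and a heavy month scales with actual value delivered, which is precisely the dynamic the per-user licence removes.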
2. Google Axion Processor
The second big announcement was the Google Axion Processor. It was one of the first announcements in the keynote, and it is noteworthy.
Impact on other CPU-providers for Google Cloud
Devoteam is also an Intel partner, as we help customers move to Google Cloud VMware Engine, which is only available on Intel processors. This news might affect Intel and AMD, which supply the CPUs for Google Cloud today.
We think, however, that Axion will primarily enable more price-sensitive workloads on Google Cloud, rather than take market share from Intel and AMD.
Want to learn more about how to redefine your VMware Strategy? Discover how we helped WESSLING achieve a successful transformation in this article.
Security considerations for custom hardware by Google
Google typically builds security into its custom hardware, and all its products, from the ground up. A distinct architecture brings an additional benefit: attacks that target Intel and AMD x86 processors are unlikely to translate directly to Google’s Arm-based chips.
Availability of Google Axion Processor
The Google Axion Processor will be available on Google Kubernetes Engine, Dataproc, Google Compute Engine and Cloud Batch. It will become a price competitor to on-prem systems that have already been depreciated and are running on extended-life warranties.
Lower carbon footprint & sustainability targets
The processor is also more energy-efficient than current-generation standard x86 instances, which helps companies lower their carbon footprint and hit their sustainability targets.
3. AI Anywhere on Google Distributed Cloud
The third big news for us is AI Anywhere on Google Distributed Cloud. As a Google Cloud partner, Devoteam has been a strong proponent of Anthos and GKE On-Prem from the beginning in 2019, as Google invested in the development of running Kubernetes on VMware, bare metal, Azure and AWS, all while being monitored and operated from Google Cloud.
Also see this dedicated article by Matthieu Audin, our Devoteam EMEA App Modernisation Practice Lead for Google Cloud.
Deploying Gemma models on to Google Distributed Cloud
Devoteam has similarly invested in training our consultants on the product, which resulted in three Google Cloud Certified Fellows in Hybrid and Multi-Cloud (a programme that has since transitioned into the Champion Innovators programme). As a result, in 2024, besides running applications and databases on Anthos (now repackaged as Google Distributed Cloud), customers can deploy Gemma models onto Google Distributed Cloud.
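As a sketch of what such a deployment can look like, the manifest below serves a Gemma model on a Google Distributed Cloud (Kubernetes) cluster. It assumes GPU nodes and an OpenAI-compatible serving image; the image name, model ID, and resource requests are illustrative assumptions, not a prescribed setup.

```yaml
# Hypothetical sketch: serving a Gemma model on a Google Distributed Cloud
# (Kubernetes) cluster. Image, model ID, and resources are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gemma-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gemma-server
  template:
    metadata:
      labels:
        app: gemma-server
    spec:
      containers:
        - name: server
          image: vllm/vllm-openai:latest      # assumed serving image
          args: ["--model", "google/gemma-7b-it"]
          resources:
            limits:
              nvidia.com/gpu: "1"             # one GPU per replica
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: gemma-server
spec:
  selector:
    app: gemma-server
  ports:
    - port: 80
      targetPort: 8000
```

Because the model runs inside the on-prem cluster, all prompts and inference traffic stay within the customer’s own environment, which is exactly what regulated industries require.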
AI experience in highly regulated environments
Having a compliant, standardised surface to run containerised models, one that fits a strong MLOps process, is a key requirement for delivering the same AI experience to customers in highly regulated fields. Think of banking and healthcare, industries where data can’t be moved to the cloud.
This truly delivers on Google’s promise to democratise AI, even for hybrid customers.
Of course, the list of announcements at Google Cloud Next 2024 was much longer, but we were most excited about these 3 novelties in the field of IT infrastructure and application modernisation: the embedding of generative AI assistants across Google Cloud solutions; the new Axion processor becoming a price competitor for on-prem systems while improving security and lowering carbon footprints; and the AI Anywhere experience on Google Distributed Cloud.
Devoteam helps you redefine your VMware Strategy
Want to learn how to modernise your IT infrastructure and redefine your VMware strategy? Our experts helped WESSLING achieve a successful transformation.