You know that you are at the right event when you see a familiar face like Mr. Hightower :)

This year I have attended a number of tech events and in terms of size, organization, and especially the content — Next ’18 is so far my favorite.

Next ’18 was an excellent representation of Google as a company and their culture. Sessions were mostly in Moscone West, but the whole event was spread across Moscone West, the brand new South building, and six other buildings.

Next ’18 Event Map

The floor plan was fun and casual; catering was of “Google Quality” and the security was insane, with metal detectors, police, K9 search dogs, and cameras everywhere. And of course games, fun, and even “Chrome Enterprise Grab n Go” were there in case you needed a loaner laptop to work on — see some pictures at the end. :)

What I learned at the Next ’18 conference

First of all, a big shout out to everyone involved in the Istio project. It is no surprise that we saw great advocacy and support for the Istio 1.0 GA release on social media last week. Istio is a big part of Google's Cloud Services Platform (CSP) puzzle.

GCSP Dashboard — After deploying my first app in less than 30 seconds.

Later this year, Google aims to make all components of its CSP available (in some form). CSP will combine Kubernetes, GKE, GKE On-Prem, and Istio with Google's infrastructure, security, and operations to increase velocity and reliability and to manage governance at scale.

Cloud Services Platform will be extensible through an open ecosystem. Stackdriver Monitoring and Marketplace are extensions to the platform services. Marketplace already has 27 Kubernetes apps, including commonly used components of many environments such as Elasticsearch and Cassandra.

CSP Marketplace

Users will be able to deploy a unified architecture that spans from their private cloud, using Google CSP, to Google's public cloud. Again, the two most important pieces of this puzzle are the managed versions of the open-source projects Kubernetes and Istio. To me, the rest of it still feels mostly DIY-quality.

Knative, Cloud Build, and CD are other significant solutions announced at Next’18.

A new cloud availability zone, this time in your datacenter — which might be in your garage

At first, GKE On-Prem got me interested, but after talking to a few Google Cloud experts, I feel it is too early for it to be seriously considered. You can read others' thoughts here on Hacker News.

Discussions on GKE on-prem

The GKE On-Prem alpha will support vSphere 6.5 only — no bare metal for now!

Failover from on-prem to GKE is something the Google team is working on. This means a GKE On-Prem instance will look like just another availability zone (AZ) on the Google Cloud dashboard.

Other than the vSphere dependency, the idea of having an availability zone local to your own data center is really compelling. It is also a very common use case for OpenEBS, since cloud vendors don't provide a cloud-native way of spreading your cloud volumes (EBS, etc.) across AZs — we see many community users running web services today using OpenEBS to enable exactly that.

GitHub and Google partnership to provide a CI/CD platform

Cloud Build is Google's fully managed CI/CD platform that lets you build and test applications in the cloud. Cloud Build is fully integrated with the GitHub workflow and simplifies CI processes on top of your GitHub repositories.

Me deploying myself on Serverless Cloud Maker ;)

Cloud Build features:

Multiple environment support lets developers build, test, and deploy across multiple environments such as VMs, serverless, Kubernetes, or Firebase.

Native Docker support means that deployment to Kubernetes or GKE can be automated by just importing your Docker files.

Generous free tier — 20 free build-minutes per day and up to 10 concurrent builds may be good enough for many small projects.

Vulnerability identification performs built-in package vulnerability scanning for Ubuntu, Debian, and Alpine container images.

Building locally or in the cloud enables more edge use cases and GKE On-Prem.
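To give a sense of what driving Cloud Build looks like in practice, here is a minimal `cloudbuild.yaml` sketch that builds a Docker image, pushes it, and rolls it out to a GKE deployment. The app name, cluster name, and zone are hypothetical placeholders, not anything from the conference:

```yaml
# Hypothetical cloudbuild.yaml — app/cluster names are placeholders.
steps:
  # Build the container image from the Dockerfile in the repo root.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
  # Push the image to Container Registry.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
  # Point an existing GKE deployment at the new image.
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/my-app',
           'my-app=gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
images:
  - 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
```

With the GitHub integration enabled, a config like this runs on every push, which is essentially the "Docker files in, Kubernetes deployment out" flow described above.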

Serverless — here we are again

Knative is a new open-source project started by engineers from Google, Pivotal, IBM, and a few others. It’s a K8s-based platform to build, deploy, and manage serverless workloads.

“The biggest concern on Knative is the dependency on Istio.”

Traffic management is critical for serverless workloads. Knative is tied to Istio and can't take advantage of the broader ecosystem; this means existing external DNS services and cert-managers cannot be used. I believe Knative still needs some work and is not ready for prime time. If you don't believe me, read the installation YAML file — I mean the 17K-line "human-readable" configuration file (release.yaml).
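To be fair, the developer-facing side is much smaller than that 17K-line install. A minimal Knative Service, sketched against the v1alpha1 serving API that shipped with the initial release (the names and image here are made up for illustration), looks roughly like:

```yaml
# Hypothetical Knative Service manifest — image and names are placeholders.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld
  namespace: default
spec:
  # runLatest routes all traffic to the newest ready revision.
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/my-project/helloworld:latest
            env:
              - name: TARGET
                value: "Next '18"
```

From this one object, Knative stamps out the underlying Kubernetes Deployment, the Istio routing, and scale-to-zero behavior — which is also exactly why the Istio dependency is baked in so deep.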

My take on all of the above — Clash of the Cloud Vendors

If you have been in IT long enough, you can easily see the pattern and predict why some technologies will become more important and why others will be replaced.

“What is happening today in the industry is the battle to become the ‘Top-level API’ vendor.”

20–25 years ago, hardware was still the king of IT. Brand-name server, network, and storage appliance vendors ruled the datacenters. Being able to manage network routers or configure proprietary storage appliances was among the most wanted skills. We were talking to hardware…

20 years ago (in 1998), VMware was founded. VMware slowly but successfully commercialized hypervisors and virtualized IT. It became the new API to talk to, and everything under that layer became a commodity. We were suddenly writing virtualized drivers and talking about software-defined storage and networking — the term “software-defined” was born. Traditional hardware vendors lost the market and the momentum!

12 years ago, the AWS platform was launched. Cloud vendors became the new API that developers wanted to talk to, and hypervisors became a commodity. CIOs and enterprises drawn into the cloud started worrying about cloud lock-in — just like the vendor lock-in and hypervisor lock-in we had experienced before. The technology might have been new, but the concerns were almost the same.

4 years ago, Kubernetes was announced, and v1.0 was released in mid-2015. Finally, an open-source project threatening all the previous proprietary, vendor-managed “Top-level APIs” became the majorly adopted container orchestration technology. Although it came from Google, it took off after it was open-sourced, and it is probably fair to say that, financially, Red Hat has so far profited most from Kubernetes with its OpenShift platform. Now we see something of a battle over the APIs used to operate applications on Kubernetes, with the Red Hat / CoreOS Operator Framework and other projects — including one supported by Google, and others such as Rook.io — emerging to challenge or extend it.

Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Container Service for Kubernetes (EKS), IBM Cloud Kubernetes Service, and Rackspace Kubernetes-as-a-Service (KaaS) are all competing in the hosted Kubernetes space (expect new vendors here).

There is still plenty of room to grow in the self-hosted Kubernetes space — GKE On-Prem is the validation of that from Google.

Hardware > Virtualization > Cloud > Containers > Serverless???

Many of us see serverless as the next step, but it might be too granular to support broader adoption, and its current limitations validate that concern — it doesn't yet scale well for intense workloads.

One size doesn't fit all: there are still traditional use cases that run on bare metal and VMs. The same might be true for serverless — it is not for every workload. Modernizing existing workloads will take time, and we will see who becomes the leader of the next “Top-level API”.

What do you think? Who is going to win the clash of the titans? What did you think about Next’18 and Google’s strategy?

Thanks for reading and for any feedback.

Some Next’18 moments from my camera

#next18 #googlenext18 #knative #kubernetes #istiomesh

Also check out the keynotes from last week’s #GoogleNext18 here → http://g.co/nextonair