AWS Re:Invent 2019 - announcements that caught my attention. Part 1
The main AWS event of the year, Re:Invent 2019, has ended. I'd like to split everything into two parts: official statements and personal impressions on the announcements and strategy moves.
Let's start with the official part and the keynotes. For the past few years, the infrastructure keynote has been delivered by Peter DeSantis, who succeeded James Hamilton, whose sessions I strongly recommend watching. The reason for this change, and the direction of AWS's evolution, deserve a separate publication. Peter paid a lot of attention to the huge growth of east-west traffic (between applications inside the datacenter) compared to the smaller growth of north-south traffic (traffic leaving the datacenter). According to Peter, containers and ML (machine learning) workloads are the key growth factors. From my point of view, traffic between AWS services and combinations of these building blocks is also worth mentioning. And, finally, many companies are AWS customers and exchange data with each other, generating more and more AWS-internal traffic.
The second topic Peter talked about was mainframes. They are outdated, expensive, and sometimes inefficient. Still, not everything can or should be migrated from mainframes to x86, although the number of such applications is decreasing. Supercomputers, with their strict requirements for network performance, deserve a special mention as well. Their main disadvantages are exactly what supercomputers are valued for: specialized software and hardware stacks. But, according to Peter, the x86 platform is way better. To me this is a controversial statement, just as the cloud is not a magic wand.
Such a long intro was required to present Project Nitro and the advantages of a specialized hardware and software solution compared to commodity and/or bare metal. OK, noted: specialized hardware and software is bad only when it is not owned or managed by AWS :).
Of course, I have to admit that Project Nitro is an excellent solution and a necessary one for a hyperscaler like AWS. Its benefits are available to both AWS and customers: higher performance and less overhead. Users of bare-metal and dedicated instances get more resources, while AWS no longer needs to reserve server resources for virtualization and management.
The second keynote came, as always, from Andy Jassy, CEO of AWS. The first half was largely a repeat of Peter's keynote: how cool AI/ML is, the importance of innovation in software and, even more so, in the hardware platform. The culmination of this introduction was the announcement of the second generation of AWS's ARM processor: Graviton 2.
It is worth looking at this chip a little closer. AWS is taking careful but confident steps towards energy efficiency and towards closing off its hardware platform, providing it as a black box. The first generation of Graviton instances was named A1 (A for ARM). The second generation comes as C6g, M6g, and R6g: the letters stand for compute, moderate (a common assumption), and RAM, with the "g" suffix denoting Graviton. By the way, the current generation of these instance families on the x86 platform is the 5th, so perhaps with the public release of Graviton 2 instances we will also see a new generation of x86 servers that was not announced at Re:Invent.
The second interesting hardware-related announcement didn't get a lot of attention in the media: a new instance type, INF1, equipped with the Inferentia chip, again developed by AWS. This chip dramatically increases the throughput of AI/ML inference. The main application areas are chatbots, translation, voice assistants, etc. Let's see if it will be as widely adopted as F1 instances with their FPGA chips ;). Although, given the efforts that AWS in particular and the industry in general put into AI, INF1 should be fine.
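Since instance families and regions evolve quickly, here is a minimal sketch (assuming boto3 and configured AWS credentials; the 6g families were still in preview at announcement time, so results depend on region and date) that lists which instance families in a region are built on 64-bit ARM, i.e. Graviton or Graviton 2:
```python
# A sketch only: list EC2 instance families in one region that run on 64-bit ARM
# (Graviton / Graviton 2). Assumes boto3 is installed and credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

arm_types, token = [], None
while True:
    kwargs = {"Filters": [{"Name": "processor-info.supported-architecture",
                           "Values": ["arm64"]}]}
    if token:
        kwargs["NextToken"] = token
    resp = ec2.describe_instance_types(**kwargs)
    arm_types += [t["InstanceType"] for t in resp["InstanceTypes"]]
    token = resp.get("NextToken")
    if not token:
        break

# Group by family (a1, m6g, c6g, r6g, ...) to show the naming scheme described above.
print(sorted({t.split(".")[0] for t in arm_types}))
```
The manual NextToken loop is deliberate: DescribeInstanceTypes returns results in pages, and filtering server-side by architecture keeps the output short.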
AWS has infrastructure around the world: North America and Europe are almost fully covered, and new regions have been announced in the Gulf region and South Africa. The situation in Asia is similar, with the exception of the Oceania region. Each region consists of a minimum of three availability zones (more is rather the exception). Adding more regions there makes little sense because of the investment cost and the proximity of existing regions to each other. A few years ago, a local region in Osaka was introduced. It is accessible to local customers only, and only for disaster recovery in this seismically active area; it was never widely available. The peculiarity of a local region is that its availability zones are located within the same data center and share some resources, for example power.
The first publicly available setup of this kind has been announced in Los Angeles (branded as an AWS Local Zone). I believe that over time, if usage grows, such a location can be scaled into a full-fledged region.
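To make the region and availability-zone layout above less abstract, here is a small sketch (again assuming boto3 and configured credentials; local regions and Local Zones only appear for accounts that have access to them) that counts the availability zones in every region visible to the account:
```python
# A minimal sketch: enumerate regions and count their availability zones.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in sorted(regions):
    az = boto3.client("ec2", region_name=region).describe_availability_zones()
    names = [z["ZoneName"] for z in az["AvailabilityZones"]]
    print(f"{region}: {len(names)} AZs ({', '.join(names)})")
```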
The proximity of infrastructure to the end-user is an advantage. Edge computing is one of the few genuinely new topics to appear recently, and it seems to be a solution to many problems related to fog computing. Especially in combination with the next major announcement: AWS Wavelength, a joint solution with the telco providers Verizon, Vodafone, KDDI, and SK Telecom. Part of the AWS infrastructure is deployed at the telco operator's facilities, and from this edge the application or data is delivered directly over 5G to the end device.
I assume the underlying hardware is AWS Outposts, which also became generally available. As far as I remember previous keynotes, this is the first time Andy Jassy acknowledged that not every application can be migrated to the cloud. Outposts is intended for those who already actively use AWS and would like to bring these practices to their own data centers in a proper manner: a fully packed rack, managed through the AWS console but installed and operated at the customer's site. Both VMware Cloud on AWS and AWS-native services are supported. I would note that hybrid cloud is turning from an enemy into an opportunity.
AWS expands its hybrid offering with a new service called Kendra, an enterprise-grade search engine for unstructured data in SharePoint, Dropbox, and file servers. It is funny that Google stopped selling a similar solution about six or seven years ago due to low interest. Attempts to build a search engine on so-called dark data have been made by many. According to Andy Jassy, the new service will use all the benefits of ML to learn and, eventually, understand the meaning of requests in order to deliver results.
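For illustration, a hedged sketch of what querying a Kendra index looks like through boto3; Kendra was announced in preview, so the API surface may differ, and the index identifier and query text below are hypothetical placeholders:
```python
# A hedged sketch of querying a Kendra index. Assumes boto3, configured
# credentials, a region where Kendra is available, and an index that already
# has data sources connected. "my-index-id" is a hypothetical placeholder.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.query(
    IndexId="my-index-id",                      # hypothetical index identifier
    QueryText="what is the parental leave policy",
)

for item in response["ResultItems"]:
    title = item.get("DocumentTitle", {}).get("Text", "")
    print(item["Type"], "-", title)
```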
The last hardware-related announcement is AWS Nitro Enclaves: a solution for protecting critical or personal data in an isolated, dedicated environment that is not accessible from outside the enclave.