[{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/aws/","section":"Tags","summary":"","title":"AWS"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/cloud/","section":"Tags","summary":"","title":"Cloud"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/elacticsearch/","section":"Tags","summary":"","title":"ElacticSearch"},{"content":"Probably that\u0026rsquo;s how Amazon decided to make its ElasticSearch fork. Challengers of open source businesses face new challenges that their ancestors (like Red Hat) did not. Particularly with product appropriation from cloud providers.\nIn general, an interesting situation in which personal preferences remain not on the cloud side. This situation has been developing for a long time, and it produced not by Elastic. As early as 2018, MongoDB released a new license of its development - SSPL which is a modified AGPL 3.0.\nThe only and really important limitation of the license is if a consumer creates a service based on the product. In such a case, a consumer must either publish all source codes or buy a corporate license. Simply put, it does not allow cloud providers to earn money on a free (optional) open-source product without giving anything in return back to the product.\nAnd then it started\u0026hellip; Part of the community raise in arms against this, generally good decision, and launched its own MongoDB with SQL and blackjack. OSI acknowledged the SSPL as a proprietary and restrictive license. As a result, the MongoDB API-compliant service introduced by AWS supports only version 3.x released following the previous license.\nAfter that less promoted and well-known, but rather popular solutions - Graylog and CockroachDB have switched to the new license. By the way, with about the same result, in the end. Now it was Elactic\u0026rsquo;s turn to change the license.\nThe war between the search engine developer and the cloud giant has been going on for quite a long time. First AWS released a free and open-source version of the extensions for ElasticSearch. While Elastic company sells it as part of the enterprise license. Elastic did not find a better solution but to change the license for all its products. AWS announced the creation of its fork of ElastiSearch in return.\nThis is a logical decision from AWS\u0026rsquo;s point of view — managed ES service sold with an excellent added value compared to regular EC2 instances and very popular. Therefore, unlike a MongoDB-compatible service, AWS can not simply, for example, postpone the release and redo it from scratch. No one will chop their head with a chicken carrying golden eggs.\nI wonder what the consequences will be for each of the market players. I think that in fact, AWS will develop its fork to add features related to search, while Elastic will continue to develop security-related functionality. And products will not compete directly especially.\nBut in this example, it is the case, the precedent. 
And I don\u0026rsquo;t rule out that the war is not over and soon we will see new battles.\n","date":"25 March 2021","permalink":"https://reflectionson.cloud/2021/03/25/if-you-can-t-win-them-lead-them/","section":"Posts","summary":"","title":"If you can`t win them - lead them"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/oss/","section":"Tags","summary":"","title":"OSS"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/posts/","section":"Posts","summary":"","title":"Posts"},{"content":"Personal notes and opinion on cloud platforms, infrastructure, and related technology.\n","date":null,"permalink":"https://reflectionson.cloud/","section":"Reflections on cloud","summary":"","title":"Reflections on cloud"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/software/","section":"Tags","summary":"","title":"Software"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/disaster/","section":"Tags","summary":"","title":"Disaster"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/ovh/","section":"Tags","summary":"","title":"OVH"},{"content":"Last week OVH data centers in Strasbourg burned down. Some discussions were produced more heat than the fire itself. Opinions were different: some claimed that OVH lied about its reliability, others - canceled the cloud/OVH/IT as a whole.\nClear minds recalled the words of Eric Schmidt (if I dont mind) about a cloud being just someone else's computer. It doesnt matter where the cloud is hosted in private DC or providers: anything can burn and sink. Furthermore - power outage, connectivity goes down, etc.\nAs for me, two events at once happened last Wednesday: one for OVH and another for Europe. With the first all is clear, and the second a warm welcome to the club. Clouds outages happened in Australia (several times already) and in the US. It was no such kind of disaster of data centers in other regions, or such didn`t get a lot of attention. Moreover, news from far-far away is not so interesting.\nEveryone is used to the outages of AWS and Azure. Google also breaks something from time to time. And any mention about the BGP leakage is a bad manner cause since it`s daily life for a long time already.\nArchitects and IT professionals also were outrage about OVH\u0026rsquo;s design and the entire DC project. The concern is about a modular design and fire-hazardous solutions. All in all, it\u0026rsquo;s bad. Some local providers immediately declared that their data centers do not burn in the fire, and they do not sink in the water. It`s like since there were no such accidents, the opposite is not proven.\nAnd for some reason, as many as 4 availability zones or sub-data centers were placed physically on the same site! Can you imagine?! But there are two points: initially Azure regions were physically in the same data center and, optionally, shared power and network, and AWS is doing so now with its Local Region.\nI recall one situation with a customer who had to choose a cloud to run managed DB. The first CSP had a multi-AZ design, and the second - higher SLA, but without multi-AZ. It was a long discussion about which one is better and more important\u0026hellip;\nEverything falls, and no clouds will change it. 
All AWS guides and best practices say that a service has to be designed to survive the failure of the underlying cloud infrastructure. Twenty years ago, people were divided into those who make backups and those who don\u0026rsquo;t (not much has changed since then). I hope this event will demonstrate the need to store copies off-site. For example, Veeam (a backup vendor) has promoted the 3-2-1 rule almost from its very beginning: three copies of the data, on two types of media, with one copy on another platform. Modern technology makes this even easier than it was 5 or 10 years ago.\nP.S. A small prediction: some service providers will start offering services to audit or guarantee the safety of data kept at a remote site in case of such accidents.\n","date":"17 March 2021","permalink":"https://reflectionson.cloud/2021/03/17/word-in-defense-of-ovh/","section":"Posts","summary":"","title":"Word in defense of OVH"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/arm/","section":"Tags","summary":"","title":"ARM"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/microsoft/","section":"Tags","summary":"","title":"Microsoft"},{"content":"In a previous note, I mentioned HPE servers built on the ARM platform, which after a couple of years quietly moved back to x86, even though server ARM CPUs and servers were once produced by many companies. Why did the major players curtail these products, and why do cloud providers pay ARM so much attention now?\nBy the middle of the 2010s, the architecture of a cloud-native application had largely taken shape. It included solutions that legacy applications had not used before: Redis for caching, ElasticSearch for search, and message queues. Applications evolved heavily towards the web approach and horizontal scaling.\nInitially, the web and the horizontal scaling of small application or server instances were the niche where ARM looked attractive as a server CPU. At that time, ARM CPUs offered modest performance but consumed very little energy - just the thing for applications with low to medium load, such as web serving, MapReduce (a very fashionable technology at the time), or even IoT processing. None of these applications necessarily load the CPU to 100%, and quantity sometimes matters more than quality.\nBut the market, as always, decided otherwise. To begin with, very few enterprise customers needed ARM servers in their own DCs while clouds were on the rise. Then it turned out that the performance was still too low. And, finally, software did not support the platform at the required level. While you could install and run Linux, most applications either did not support the platform or did not use the capabilities and features of the CPU (it is not clear which is worse).\nAs a result, the bright and beautiful future of mass-market ARM servers was washed away by reality. But ARM Holdings, as the developer of the platform, did not worry much about such trivia and kept staring into its bright future, which can be divided into two parts: clouds and 5G.\nIt was not for nothing that Amazon acquired Annapurna Labs, a developer of ARM processors, in early 2015. At hyperscaler scale, switching to your own energy-efficient platform can save billions per year.\nThe best example of what came out of this acquisition is Project Nitro, a combined hardware and software solution that moves the virtualization and management overhead off the servers onto a dedicated PCIe board. 
Previously, about a third of a server was reserved for management purposes; now 100% of it can be sold.\nFurthermore, there are many SaaS and PaaS services - DynamoDB, S3, SQS, etc. - that can be moved to the new platform. The benefit of such a move is illustrated by Apple\u0026rsquo;s experience with its M1 and A14 CPUs: both contain units optimized for specific tasks. Essentially, each of these units is a whole co-processor, only built in. An old idea gets a new life!\nAs a result, Amazon and Microsoft (which is developing its own ARM chip), as platform owners, get a specialized solution optimized for their needs - just as IBM designs mainframe-optimized processors rather than using Intel\u0026rsquo;s general-purpose CPUs (well, almost).\nIf ARM in the cloud is already today\u0026rsquo;s reality, there is still a niche for the future: low-power and embedded servers for 5G, SmartNICs, and edge computing - areas where an undemanding platform and extensibility for special use cases will earn it success. With the spread of 5G and the gradual expansion of smart everything, applications themselves will shift closer to the data sources. The Internet of Things has not yet become a daily reality, but it has an intermediate stage - the “fog of things” - and this fog will become the computing layer for all the sensors and metering devices. There will also be smart machines: only a few models support M2M today, but the concept is entering the market.\nSo Intel is not going anywhere and will not die; it may even release its own ARM chip (again), and it will work to push x86 into the new, growing markets to keep ARM out. In servers and PCs, ARM will remain a niche solution: business laptops, ultrabooks, and so on. Microsoft will provide the platform in the form of the OS and basic software. But whether the initiative will get support and traction from vendors of highly demanding software like Adobe, Corel, and Autodesk is a separate question, and one that will also have a significant impact on ARM as a platform for personal computers. The last remaining stronghold is games, but I would not be surprised if Unreal Engine also adopts the platform within the next couple of years\u0026hellip;\nIn any case, all that remains is to wait and see whether server manufacturers support the initiative and what Intel\u0026rsquo;s “answer to Chamberlain” will be.\n","date":"8 March 2021","permalink":"https://reflectionson.cloud/2021/03/08/why-1st-appearance-of-arm-servers-failed-but-can-succeed-the-second/","section":"Posts","summary":"","title":"Why 1st appearance of ARM servers failed, but can succeed the second?"},{"content":"The release of the Apple M1 CPU attracted a lot of attention from all kinds of media and blogs - except mine. The processor was X-rayed from every side, every possible benchmark was published, and even information about the successor to this wonderful chip leaked. And, of course, everyone once again buried x86 as an architecture.\nThe previous ARM-side undertaker of x86 - I am not counting AWS a1 instances yet - was the HPE Moonshot project, which, however, quietly moved back to the traditional x86 platform.\nAs I see it, those burying x86 have skipped a short course in recent history. The fight between x86 and RISC has already happened once. 
Although RISC eventually lost because of the weaknesses of its architecture, both platforms have changed significantly over the years, absorbing the best features of their competitor.\nIt has even reached the point where x86 processors are considered RISC-like inside. Thank God it is not the other way around.\nThe trouble is that the context and the evolution of IT over those years are not taken into account, nor is the shift of profits from the PC market to servers and the cloud.\nThe PC market has lost its former influence: the direction of processor development is now set not even by servers, but by clouds. Secondly, compared to the 1980s, the CPU field has become much broader. ARM will lead in areas that did not exist before: IoT, vehicles, embedded devices, etc. - a huge space with dozens of times more devices than Intel has sold in its entire history.\nAnother important point is the other architectures and processor types that ARM will have to deal with: MIPS and RISC-V, not to mention specialized solutions such as ASICs and FPGAs, which it will also have to fight in the SmartNIC market. So the struggle will only intensify.\n","date":"21 February 2021","permalink":"https://reflectionson.cloud/2021/02/21/arm-vs-x86-round-2/","section":"Posts","summary":"","title":"RISC vs x86. Round 2"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/x86/","section":"Tags","summary":"","title":"X86"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/amazon/","section":"Tags","summary":"","title":"Amazon"},{"content":"The FTC probably read Roadside Picnic by the brilliant Soviet writers Arkady and Boris Strugatsky and used the novel\u0026rsquo;s core idea while preparing its case against the internet giants. And now it has finally reached the final stage - court.\nIn the \u0026rsquo;90s, the monster that crushed everyone was Microsoft, and it was punished for it quite heavily. Now it is a whole cluster called FAANG. So far, the complaints concern only two of the five heroes - Facebook and Google.\nStrangely, the lawsuits did not trigger much discussion or analysis in the media, even though, in retrospect, they have been in preparation for several years and clearly will continue to be. What are the claims against the titans of the industry, and why exactly did these two companies, out of the whole bunch of internet giants, come under the hammer of justice first?\nGoogle and Facebook have something in common - monopoly and aggressiveness. Google is more mature and experienced and therefore already a less aggressive and more careful market player, engaged in improving its offering rather than creating and capturing new areas. Facebook, unlike its senior fellow, is a one-man company, and it actively buys up competitors who might become its Kronos in the future.\nThere are a few dozen advertising networks and related companies on the market, and most sites earn ad revenue not only from Google but also from at least one or two of its competitors. There is, however, no equivalent replacement for Instagram and WhatsApp. Google pushes its advertising products softly and gently (for such a colossus), while Facebook, like a black hole, pulls in all available information and uses every opportunity to increase the time users spend inside its ecosystem of products. 
And this, in part, leads to the segmentation of the Internet into many separate internets, something the best minds of mankind have been warning and worrying about for many years.\nDo not forget that for FB, just as for the popular search engine, the main source of earnings is ads. And the giants entered into a secret agreement under which Zuckerberg\u0026rsquo;s company receives preferential treatment in advertising and, in exchange, does not push against Google.\nBut set advertising itself aside - after all, it has to be delivered somehow. And while everything is clear with the social network, Google has another ace up its sleeve - Android. Officially it is a free and open mobile OS (with some nuances). It is installed on billions of devices of all kinds - from phones and tablets to NAS boxes and IoT devices. And there is also Chrome, built on a free and open (see above) browser engine. Both are factories for collecting personal data, analyzing it, and improving ad targeting. A beautiful ecosystem in all its fullness!\nThis complexity, and the presence of Google and Facebook in every sphere, multiplied by the popularity of their non-core products, is the reasoning behind the possible break-up of the companies - and, through it, the return of competition to the market.\nAmong the other internet giants - Amazon, Apple, and Netflix - it is not yet clear what to do only with the last two. Splitting Amazon into “parts” was already proposed by large investment companies a few years ago. After all, the diamond in Bezos\u0026rsquo;s crown is really only Amazon Web Services; all the other businesses (except advertising) exist thanks to cloud profits, even as they grow.\nAccording to those investors, spinning Amazon\u0026rsquo;s cloud business off into a separate company would only increase its market value and would also raise the share price of the retail business. The situation with Netflix and Apple is a little more complicated.\nApple\u0026rsquo;s serious sin, at the moment, is tax avoidance. Trying to appease the US government, the company even moved some production back from China to its homeland and promised to expand it. Although I do not rule out that the story of the App Store monopoly will get a sequel in the coming years.\nSo far, Netflix seems the most harmless of the abovementioned trinity: it grows peacefully, absorbs no one, and its competitors are many and multiplying like yeast. On the other hand, those competitors may push for an antitrust investigation against the streaming giant, just as Oracle pushed the case against Google. And that was not about revenge for Java; it was about competition in the marketing and advertising market, although the companies would not seem to be direct competitors there.\nRecent history has already given us two interesting showcase lawsuits against major monopolies: AT\u0026amp;T and Microsoft. Both are interesting because they let us map the current giants onto a particular lawsuit and assess the possible consequences. Putting it in completely binary terms, Facebook, Google, and Amazon are “AT\u0026amp;T,” whereas Netflix and Apple are more like the “Microsoft” of the Gates era. In general, over the next few years it will be very interesting to watch how the situation develops, both with the existing lawsuits and with new ones, 
as well as with the legislative initiatives that could follow from the findings and court decisions.\n","date":"12 January 2021","permalink":"https://reflectionson.cloud/2021/01/12/and-let-no-one-go-offended/","section":"Posts","summary":"","title":"And let no one go offended"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/apple/","section":"Tags","summary":"","title":"Apple"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/court/","section":"Tags","summary":"","title":"Court"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/facebook/","section":"Tags","summary":"","title":"Facebook"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/google/","section":"Tags","summary":"","title":"Google"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/netflix/","section":"Tags","summary":"","title":"Netflix"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/azure/","section":"Tags","summary":"","title":"Azure"},{"content":"The end of the year is debriefing time. The other day, Maxim Ageev (CEO of De Novo, a Ukrainian cloud provider) published his view of the year\u0026rsquo;s results, with which Vladimir Pozdnyakov (CEO of DX Agent, ex-head of IDC Ukraine) did not agree and expressed his doubts. I disagree with both of them. The thoughts below apply to the CIS region rather than to the US and Europe, but they still deserve an answer.\nI don\u0026rsquo;t have exact numbers or detailed analytics, but the arguments of both authors share a couple of important nuances: first, they study the Ukrainian market in a somewhat disconnected manner; second, only well-known and obvious companies are taken into account.\nA conflict arises from these two nuances. The first thing to mention is that not all Ukrainian companies pay for the services they consume locally. Also, most of these businesses do not attract public attention at all. For example, one of my former customers had a Top 3 entertainment app in the US App Store, and possibly in Google Play as well (here I could be mistaken), while the company developing it was based in a small city 800 km from Moscow. Another example is Ring, a company originally from Ukraine, which paid Amazon directly without any local partners involved, and nobody knew about its country of origin until the e-commerce giant bought it.\nSo, to the question: who is winning the Ukrainian cloud market - global players like Azure, AWS, and GCP, or local ones like De Novo and GigaCloud? The answer depends on what you count. Indeed, Microsoft has strong sales channels, many years of experience, pricing flexibility, and so on, and an Azure subscription can be added to an Enterprise Agreement, which helps. From this point of view, it is a battle of two leaders - Azure and De Novo. Since their main business is local, their customers pay, of course, in Ukraine, or they pay cloud aggregators. These are easy to evaluate and measure, since the customer names are public and well known.\nNow let\u0026rsquo;s look at the other, dark side: outsourcers, gaming companies (especially casual gaming - a very interesting topic and market, by the way), startups, and the shadow IT companies mentioned above - the kinds of businesses staffed by young, bearded people. Most of them hate Azure for technical reasons (its API used to change as often as WinAPI did) and prefer AWS and GCP instead. They have not even heard of De Novo, GigaCloud, and other local CSPs. 
These young people consider everyone over 30 a dinosaur quietly creeping towards the nearest grave (although this may be specific to the CIS). Most of these businesses pay for the services they consume not to Ukrainian companies but directly to the provider, with US cards. It is impossible to spot them - try to uncover someone who does not want to be found. They are not referenced in public case studies either. As a result, their total IT spending is impossible to measure or analyze.\nBesides, the vendors themselves are in the game. For now, let\u0026rsquo;s stick to the big three. Microsoft focuses on enterprise customers and invests a lot in evangelism among young people — nothing has changed there. GCP is young and cocky: the best in some areas, the opposite in others, though it is aggressively closing the gaps. When you understand why you are choosing GCP, there is no better option. If someone does not know or understand, presales and sales will quickly demonstrate the quality of the communication channels even in Ukraine and the internal services that have grown into external products, and will remove any trace of doubt. And only AWS quietly, without drawing attention to itself, harvests the market and focuses only on promising and actively paying customers.\nTo complete the overview, managed cloud services from HPE and IBM, and SaaS from companies such as SAP or Salesforce, should be mentioned - something Maxim Ageev politely skipped in his review. It is difficult for me to estimate the revenues of the SaaS giants in the Ukrainian market, primarily because of its low share of their total earnings. HPE and IBM, as the central players in managed services, are doing fine - recall the move of DTEK (an energy services enterprise) to the HPE cloud, or IBM\u0026rsquo;s multi-year contract with Ukrsotsbank (owned by UniCredit Group at the time), which did not last as long as planned. SAP cloud services are a huge long-term investment and should be considered unique in Ukraine. Overall, SaaS and managed services are yet another piece of the pie worth counting in the whole picture, because the result is the same: the provider takes over a customer\u0026rsquo;s IT functions and offers an abstraction of part of the IT process, or of the entire process or application.\nOne of the authors put forward the sensible idea that AWS, Azure, and GCP are, in modern terms, cloud 2.0, while HPE and IBM are 1.0. A good and reasonable thought. But there is one nuance: from a technical point of view, clouds differ only in the management interface and in the changes required to application architecture and infrastructure (VPC, subnets, etc.), because, in the end, the point of the cloud for the customer is flexibility, and for the provider it is competent management of data center resources.\nWith this in mind, it is worth remembering VMware Cloud on AWS and its siblings running in other clouds. On the one hand, it is cloud 2.0 - flexible, fast to provision, and so on. On the other hand, it is 1.0 - what could be more familiar than the VMware stack, natively developed and supported, but running, in this case, on AWS hardware. And there is the reverse chimera as well - AWS Outposts\u0026hellip;\nThe outcome: analyzing the modern IT market of even a single country has become so complex and multifaceted that no single correct way to measure it exists. But the fact that the number of variables and moving parts in every area has multiplied sharply is no longer in question. 
It only remains to figure out how to do such an analysis with all of the above in mind.\n","date":"12 December 2020","permalink":"https://reflectionson.cloud/2020/12/12/cloud-comitio/","section":"Posts","summary":"","title":"Cloud comitio"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/gcp/","section":"Tags","summary":"","title":"GCP"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/ibm/","section":"Tags","summary":"","title":"IBM"},{"content":"In the cult game Shadow of the Colossus, the protagonist defeats enormous, majestic colossi. Seeing one for the first time, it is hard to guess what it will do in the next moment, where its weak spot is, and from which side you should approach.\nIBM is one such colossus in the IT world. It is well known for cutting off disappointing or low-profit businesses, as well as for unexpected acquisitions like Red Hat. And now IBM is separating out a part of the business that seems to make a noticeable profit, in contrast to the constant decline in sales, and does not seem very costly in, for example, R\u0026amp;D.\nSeparating the managed services business into its own company is a logical and correct step for several reasons: a different company culture, competition in the managed services market, and dropping ballast.\nThe culture of a company whose main business is operational support is fundamentally different from that of one dealing with clouds, software, and long-term R\u0026amp;D projects - not to mention the sales cycles and methods.\nThe growing popularity of clouds, broader usage areas, and the shortage of experts and engineers drive customers\u0026rsquo; interest in outsourcing or outstaffing the tasks of maintaining IT, and also drive increased competition in the managed services market. Offers exist for every pocket and need: from full coverage of any infrastructure and cloud by companies like Rackspace, to SaaS products such as EPAM Syndicate, which manages serverless applications in AWS. AWS itself offers this kind of service too, while Microsoft, as usual, relies on partners. The managed services market is a wide but crowded valley, and the squeeze there will soon become painful.\nPrinters, storage systems, laptops, and so on - all of these at some point turned from promising and profitable businesses for IBM into ballast as the technology and the market evolved and the technology became a commodity. Take servers as an example: once a high-margin business with niche, expensive solutions, today they are a commodity - manufacturers have consolidated, and dozens of offers from different companies are available. That is why the x86 server business was sold off, unlike mainframes — a narrow niche, but still an interesting one, and inaccessible to a wide range of manufacturers.\nAfter Satya Nadella took over as CEO, Microsoft quickly shifted its main focus to the modern reality - clouds. IBM, partly because of its scale, turned out to be much more inert, and it also bet on blockchain and artificial intelligence, including Watson.\nSo, in the course of its transformation, IBM has returned to the starting point. 
Shedding infrastructure solutions in favor of applications has created the need for modern methods — clouds and containers, the execution environment of modern applications.\nAt the current stage of IT evolution, this may not be the last unexpected step IBM takes, and the transformation that began a decade ago looks set to last at least as long again.\n","date":"16 November 2020","permalink":"https://reflectionson.cloud/2020/11/16/shadow-of-the-blue-colossus/","section":"Posts","summary":"","title":"Shadow of the blue colossus"},{"content":"The arrival of 5G networks, the continuous evolution of the ARM architecture, and the miniaturization of specialized solutions have given rise to a very interesting idea in edge computing - the SmartNIC.\nNone of this is new; SmartNICs existed before, simply with less functionality - plain Ethernet and TCP/IP offload, or intelligent NICs providing features such as RoCE or DPDK. But now, interest in edge computing and 5G is driving renewed interest in developing SmartNIC solutions.\nOf the three existing technologies - ASIC, FPGA, and SoC - the most flexible and “democratic” option is the latter.\nAWS has used a home-grown SmartNIC solution for several years, gradually improving and expanding its functionality. Introduced in 2013, the first-generation ASIC offloaded block storage tasks from the CPU and has since progressed into Project Nitro — a full-fledged board handling network management, block disks, security, and even the hypervisor.\nOn the other side, VMware promised for several years to bring its hypervisor to the ARM platform, and at VMworld 2020 it finally happened. The release of the leading virtualization platform on ARM opens new horizons for absolutely everyone, and the benefits are huge.\nFor example, thanks to Project Nitro, AWS is able to sell an additional ~30% of each server that was previously reserved for management purposes (essentially cloud overhead). VMware itself has NSX and vSAN — SDN and SDS solutions, respectively. Offloading their service overhead, or implementing GENEVE on a SmartNIC, would significantly reduce hardware costs thanks to smaller service workloads.\nvSphere on ARM is a very interesting release: new ground for a bright future and an enabler of the hybrid clouds promised for so many years. More importantly, it will pave the way for widespread SmartNIC adoption, not just in specialized niches. Honestly speaking, until now SmartNICs have been used either for NFV (classic ASICs) or inside black boxes (AWS Outposts) and are little understood or used by mainstream businesses.\n","date":"12 November 2020","permalink":"https://reflectionson.cloud/2020/11/12/server-s-milestone/","section":"Posts","summary":"","title":"Server's milestone"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/vmware/","section":"Tags","summary":"","title":"VMware"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/bgp/","section":"Tags","summary":"","title":"BGP"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/internet/","section":"Tags","summary":"","title":"Internet"},{"content":"The fact that the Internet is globally “broken” has been known for a long time and is not a particular secret to anyone.\nThe flexibility and independence of components built into the Internet\u0026rsquo;s architecture have, in its present form, become an anchor dragging it down. This does not much concern us end-users. 
At most, Facebook, or another vital service, may load slower than usual.\nBut at the level below - the global routing of traffic flows - there is often real hell: Tier 1 providers fight among themselves or form coalitions against a third provider; telecom operators battle internet companies and route their traffic through remote locations. Even traffic exchange points put spokes in their customers\u0026rsquo; wheels, and the cherry on the cake is the constant churn of BGP announcements. That last point deserves to be highlighted separately because of the scale and seriousness of the problem.\nWith enviable regularity, news appears that some large ASN has been sent to a black hole (YouTube and Pakistan). One African ISP shut down Google this way; in April 2017 the Visa and Mastercard networks were announced from Russia, and so on.\nIn addition to human mistakes (I want to believe they are mistakes), there are attacks based on BGP hijacking. So far there have been only a few, but their number is growing, as is the danger they carry.\nThe IETF is working on the problem as best it can: additions and extensions to BGP are being developed to eliminate route hijacking and to minimize possible accidents and their consequences.\nAnd a third party - the global cloud providers - has emerged. Internet businesses, whether Facebook, Google, or local players like Yandex in Russia, have a long history of building private fiber networks and CDNs to streamline routes and content delivery. They do not care how or where the content is delivered; the main thing is to do it quickly and efficiently.\nThe situation for global cloud providers is different: they cannot afford to lose a single data center, and the quality of network access must be as high as possible, including the connections between regions on different continents. To achieve this, additional links are built that are not publicly available or shared, but reserved for private use. And to avoid taking part in the Tier 1 wars, or being affected by them, cloud providers and the owners of such cables become a kind of Tier 1 provider themselves. In fact, a decent chunk of cloud provider traffic never leaves the cloud provider\u0026rsquo;s own network. The situation is complicated further by the SD-WAN solutions cloud providers offer, which pull traffic into their own networks and avoid routing over external ones.\nIn general, it is a logical step for a cloud provider: DCs and interconnects are present at the main traffic exchange points and in major cities, CDN PoPs are distributed across secondary exchanges and smaller cities, and a backbone connects all of these components - so why not offer clients optimization of their ingress and egress traffic?\nAs a result, from a routing point of view, the modern Internet is not a full mesh but rather several large parallel internets, and how this situation will evolve is not yet clear.\n","date":"20 March 2020","permalink":"https://reflectionson.cloud/2020/03/20/internet-is-broken-and-internet-is-broken-and-what-to-do-about-it-is-unclear/","section":"Posts","summary":"","title":"Internet is broken and what to do about it is unclear"},{"content":"Every enterprise has its own curse related to optimization, which most often spills over into big problems for users.\nThe most recent example of such optimization comes from Microsoft. 
Windows updates have long been unstable and glitchy, which is understandable given the quantity of supported hardware and software, but the latest Windows 10 releases and patches bring truly unexpected problems. Worth noting: previously, glitches and instability were mostly about security and how components work together, not about the risk of losing all your data, as happens now.\nAccording to the opinion of one Microsoft employee published in the media, the reason for this deplorable drop in quality is simple - changes in the process of testing new builds and patches. Most tests are now automated and run on virtual machines. That means a fresh deployment: no “tails” from previous installations, no third-party software, no drivers. In short, the coverage of possible conflicts and problems has fallen catastrophically, and end-users are the ones who feel it.\nThe Insider program is not a solution, since its participants are not average users, and their number does not greatly increase the coverage.\nThis reminds me of the story of when the same approach was introduced at VMware. Unlike Microsoft, the amount of supported hardware there is limited and well known: drivers are produced either by the server vendors themselves (hi, HPE) or come from typical suppliers like Intel. VMware\u0026rsquo;s task is to test and guarantee the quality of new functionality in its own product stack. And if so, why not automate everything and run it on virtual machines?\nThose releases were terrible. Not only were the new features problematic, but proven functionality broke out of the blue. Patches came out with enviable regularity, the list of Known Issues was longer than the list of Resolved Issues, and new entries kept being added as customers installed the releases. After some time, when it became clear that the new system was not working, a new idea was presented - Customer Zero. The idea is very simple, and in the mid-90s it was very popular inside Microsoft: eat your own dog food. Simply put, your own business is the first customer you sell to and the first to test any new functionality on. Results came very quickly: it turned out that the company\u0026rsquo;s own IT had not updated to the new versions because of stability problems, and that new features, products, and functions, in the form they had been built, were simply not needed.\nIn the current situation with Microsoft, all that remains is to wait for the flywheel to spin back in the opposite direction.\n","date":"28 February 2020","permalink":"https://reflectionson.cloud/2020/02/28/customer-zero/","section":"Posts","summary":"","title":"Customer zero"},{"content":"During Re:Invent, another dozen AWS services were announced; the cloud provider now has almost 200 of them. And alongside the basic, always-needed services such as databases, servers, and various storage options, there are completely outlandish ones - a time-series database, for example, or the blockchain-inspired ledger database QLDB.\nLet\u0026rsquo;s set aside whether such narrow use cases deserve a dedicated service and whether the development pays off. My personal opinion is the following: at such a scale, innovation is sometimes done for innovation\u0026rsquo;s sake, not for actual market capture or for offering a new, effective way to solve an old problem.\nImagine that you already run some kind of workload on AWS using a standard set of services (EC2, RDS, S3, etc.) and you need one of the unusual ones. 
There is a high probability that, once you deploy and start using such a service, you will find that its integration with the existing ones is minimal or absent.\nTake Athena, a kind of Hive in the browser: you store data on S3 and run SQL queries in a web console to search and analyze that data. Very convenient and fast. Yet at launch, back in 2016, even the integration with S3 - the very foundation of Athena - left much to be desired. And integration with CloudWatch, the monitoring service, did not appear until early 2019; before that, there was no real way to track query execution time and identify bottlenecks.\nAround the same time, Glue, an ETL service, was introduced - and again, no integration between the two services for about a year, as far as I can remember, even though these two analytics solutions are closely related, mutually complementary, and dependent on other, external services and data sources.\nAnd this happens with every service: AWS launches what is, in effect, an MVP, which is then developed based on popularity and customer feedback. Not that this approach is unjustified from a business perspective - if the product is not popular, why invest in it? Or the actual customer feedback may turn out to be completely different from the service team\u0026rsquo;s plans and expectations. However, integration between a new service and the existing core products is key to the attractiveness and operability of the new service, and it is needed from the start. Otherwise, it is like the cloud offering of a certain DB manufacturer that put its software and hardware into its own data center and called it a cloud, ignoring what a cloud is according to NIST.\nOn the other hand, obviously, sometimes you take a blind shot hoping for the best - hoping the new service will earn customers\u0026rsquo; love and attention. It hurts when, for lack of fast and massive growth in consumption, investment in the service quickly fades away.\n","date":"7 February 2020","permalink":"https://reflectionson.cloud/2020/02/07/quantity-does-not-always-mean-quality/","section":"Posts","summary":"","title":"Quantity does not always mean quality"},{"content":"Some articles speculate that Dell is considering selling RSA, and that Google might shut down its cloud entirely within a few years if it does not reach a certain market share.\nBeing able to shed non-core assets is a skill, and that is the idea behind such a move, given that RSA came \u0026ldquo;bundled\u0026rdquo; with EMC. Even EMC\u0026rsquo;s acquisition of RSA back in 2006 raised questions, since the company was already struggling with its broad product line, although the acquisition was meant to diversify the business even further. As history has shown, the move did not take off: the storage portfolio expanded, but broad horizontal integration between the businesses never happened (not to mention VMware and Pivotal), and neither did growth. Today\u0026rsquo;s RSA is no longer the company that was once, in effect, synonymous with security and encryption.\nIn the current situation, with the industry developing and security emerging as one of the cornerstones of IT, RSA\u0026rsquo;s technology stack has fallen well behind the leaders, and the digital transformation presented by the company\u0026rsquo;s executives has not yet yielded tangible results. It should be noted that Dell also has its own security solutions, which are not a priority compared to storage and servers. 
Under these circumstances, it is reasonable to concentrate on the main business so as not to repeat EMC\u0026rsquo;s fate.\nMoving on to Google reportedly considering closing its cloud within 3-5 years if it fails to hit the necessary market share targets: such a decision would have its own logic. The startup market is owned by AWS, and the leader makes every effort to keep it that way. Enterprises migrate to Azure, mainly thanks to Microsoft\u0026rsquo;s experience and connections. And there are still plenty of niche players: Oracle, which has finally admitted that customers want the cloud and is moving its products in that direction; IBM, earning a lot on outsourcing; Virtustream and Rackspace playing in their own sandboxes. And as a cherry on top, dozens of smaller players, from OVH down to the local partners of one of the leaders.\nFrom a technological point of view, a data center for a public cloud is not the same as a data center for private use. If one of Google\u0026rsquo;s own internal data centers dies, hardly anyone will notice; a failure in a public region causes substantial reputational damage, plus financial losses and related expenses. At the same time, IaaS and PaaS are far less profitable businesses than SaaS. To catch up with the leaders, Google needs billion-dollar investments with a not-so-clear purpose and outcome.\nGoogle built its business on the high-level management of collected information - advertising, to put it simply. IaaS is a much lower-margin business than advertising and brings its own problems on top. Dell built its business on hardware, and the EMC acquisition simply covered its weak points. Attempts to expand and diversify a business are good and useful, but sometimes you need to get rid of assets that do not bring proper profit, and to keep a constant watch on yourself.\nBut so far, all of this is rumor and an attempt to predict how events will unfold. Let\u0026rsquo;s see what happens and how the leaders of companies pursuing such ambitious goals cope with the situation.\n","date":"10 January 2020","permalink":"https://reflectionson.cloud/2020/01/10/big-doesn-t-mean-successful/","section":"Posts","summary":"","title":"Big - doesn't mean successful"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/dellemc/","section":"Tags","summary":"","title":"DellEMC"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/rsa/","section":"Tags","summary":"","title":"RSA"},{"content":"In the second part, I will focus on the general direction of AWS and various announcements.\nAmong the service updates, SageMaker - the managed service for data analysis and the deployment of machine learning models - received a significant upgrade. It is now a fully integrated solution with a web IDE, diagnostic, troubleshooting, and visualization tools, as well as automated testing and deployment of every model in use. Excellent progress for an excellent service that eases the work by integrating all the major frameworks on the market and presenting them as a ready-made solution.\nAnother promising AI-based service is CodeGuru, which analyzes source code for compliance with Amazon\u0026rsquo;s best practices and at the same time checks execution speed (thanks to a built-in profiler). 
It is not entirely clear to me whether the AI mentioned in the announcement is mostly hype or a real benefit.\nNoSQL databases certainly occupy a big niche in the DB market. A year ago a service compatible with the MongoDB API was introduced, and now it is Cassandra\u0026rsquo;s turn. The advantages of managed databases are, of course, less load on administrators and the speed of deployment. And unlike last year\u0026rsquo;s announcement, the new release is a fully serverless solution.\nFor the last several years, a quiet revolution has been under way in networking. A set of services easing VPC network management and cross-region connectivity had already been introduced, and at Re:Invent it was announced that AWS Transit Gateway would support VPC peering between regions and, more importantly, multicast. It is worth noting that AWS service teams used to fight against multicast traffic, and the official position was that it would never be supported. Unlike Flash in its battle with Apple, multicast won this fight.\nAll network traffic inside a VPC can now be sent to a special ENI attached to an instance running an IDS/IPS solution, so there is no longer any need to forward all traffic outside of AWS, which should significantly reduce traffic costs.\nAlso in the security area, two new services were introduced — Detective and Fraud Detector. The second is designed to tackle fraud: Amazon\u0026rsquo;s retail side has been fighting fraud for many years and is now ready to help its customers do the same. Detective analyzes logs and network activity for suspicious or abnormal behavior and helps you find the root cause of security concerns. With this, AWS now has a whole vertical stack of security solutions, where you can pick what you need from the set.\nThe number of services and the lack of management and control tools have been forcing AWS to develop optimization and budget control tools for the past three years, and at the same time the new services help upsell existing ones. For example, Compute Optimizer uses metrics and data from CloudWatch to help you select the right instance types and sizes. The Amazon Builders\u0026rsquo; Library is useful for architects and engineers — a collection of best practices and standardized solutions used across different markets.\nWerner Vogels, AWS CTO, who presented on Wednesday, made no special announcements and shared few new thoughts or ideas, largely repeating previous keynotes. It should be noted, however, that Mr. Vogels remains an excellent speaker who knows how to hold an audience.\nAmong the customer success stories, I would like to highlight Volkswagen\u0026rsquo;s industrial cloud. The corporation chose AWS over Microsoft as its cloud provider, and the group\u0026rsquo;s CIO presented an ambitious global vision of transforming all of the company\u0026rsquo;s processes with modern approaches and technologies, including IoT, machine learning, and more. Volkswagen could become the kind of showcase that Netflix used to be.\nThe fanciest announcement of the event is, of course, quantum computing. The announcement drew a lot of press attention but little technical coverage. At the moment, three types of computing devices are available: gate-based superconducting devices from Rigetti, superconducting devices using D-Wave quantum annealing technology, and IonQ ion-trap devices. A single API is provided for all device types, abstracting away the differences between the solutions. 
For debugging and testing, you can run simulations on EC2 instances.\nSince quantum computing is new and much remains to be learned, Amazon launched the Quantum Solutions Lab to share research data and develop new application areas.\nAnd finally, a service that seems odd to me personally, meant to attract attention and entertain. DeepRacer and DeepLens, presented last year, let you play with self-driving toy cars and analyze video, while DeepComposer lets you create music from a small set of samples using generative AI algorithms. That said, the short- and long-term goals of such services are clear.\nIn general, in my opinion, Re:Invent is slowly turning from a purely technological event with many announcements into a kind of entertainment show that attracts the attention of a wide audience. Each keynote was designed in its own style: first a match commentator, with Peter DeSantis dressed as a football player; a news anchor before Andy Jassy\u0026rsquo;s keynote; and Werner Vogels, who always appears in T-shirts, sparring with designers pitching their \u0026ldquo;fancy\u0026rdquo; T-shirts. The number of services and instance types has reached a critical limit, and it is hard to keep innovating at the same pace, so AWS has to look for new options. Once again, the analogy is with Apple events, which were considered cooler and more innovative in the past.\n","date":"27 December 2019","permalink":"https://reflectionson.cloud/2019/12/27/aws-reinvent-2019-announcements-caught-my-attention-part-2/","section":"Posts","summary":"","title":"AWS Re:Invent 2019 announcements caught my attention. Part 2"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/reinvent/","section":"Tags","summary":"","title":"ReInvent"},{"content":"The main AWS event of the year, Re:Invent 2019, has ended. I\u0026rsquo;d like to split all the announcements and strategy moves into two parts: official statements and personal impressions.\nLet\u0026rsquo;s start with the official part and the keynote. For the past few years, the main speaker at the conference has been Peter DeSantis, who succeeded James Hamilton, whose sessions I strongly recommend watching. The reason for that change, and the direction of AWS\u0026rsquo;s evolution, deserve a separate post. Peter paid a lot of attention to the huge growth of east-west traffic (between applications inside the data center) compared to the smaller growth of north-south traffic (leaving the data center). According to Peter, containers and ML (machine learning) workloads are the key growth factors. From my point of view, the traffic between AWS services and the combinations of those building blocks is also worth mentioning. And, finally, many companies are AWS customers and exchange data with each other, generating more and more AWS-internal traffic.\nThe second topic Peter talked about was mainframes: outdated, expensive, and sometimes inefficient. But not everything can or should be migrated from mainframes to x86, although the number of such applications is decreasing. Supercomputers also deserved a special mention, along with their strict network performance requirements. Their main disadvantage is the same thing they are valued for: specialized hardware and software complexes. But, according to Peter, the x86 platform is far better. 
As for me this controversial statement as well as a cloud is not a magic stick.\nSuch a long intro was required to present Project Nitro and the advantages of a specialized hardware and software solution compared to the commodity and/or bare metal. OK, noted to me that specialized hardware and software is bad if it`s not owned or managed by AWS :).\nOf course, I have to admit that Project Nitro is a perfect and required solution necessary for a hyperscaler like AWS. And the benefits of it are available to both AWS and customers - higher performance and less overhead. Users on metal and dedicated instances get more resources, while for AWS is no need to reserve server resources for virtualization and management.\nThe second keynote was on Monday night, as always from Andy Jassy, CEO of AWS. The first half of the keynote as a whole was a repeat of the Sunday keynote: how cool AI/ML is, the importance of innovation in software, and more important hardware platform. The output of entry was the announcement of the second generation of ARM processor - Graviton 2.\nIt is worth going over this chip a little closer. AWS takes careful but confident steps towards energy efficiency and closure of its hardware platform and providing it as a black box. The first generation of Graviton instances was named A1 (A as ARM). The second is C6g, M6g, R6g. Letters for compute, moderate (a common assumption), and RAM + g for Graviton as a subtype. By the way, the current generation of these instances on x86 platform is 5th, so perhaps with the public release of Graviton 2 instances, we will see a new generation of x86 servers, that were not announced on Re:Invent. The second interesting announcement related to the hardware didn`t get a lot of attention in the media. A new type - INF1 instance, equipped with a special Inferentia chip, was again developed by AWS. This chip allows dramatically increases the output of AI/ML computations. The main application areas are chatbots, translation, voice assistants, etc. Let\u0026rsquo;s see if it will be as widely adopted as F1 instances with FPGA chip ;). Although, given the efforts that AWS in particular and the industry generally put into AI, INF1 should be fine.\nAWS has global infrastructure around the world - North America and Europe are almost fully covered, new regions announced in the Gulf region and South Africa. The same situation in Asia, with the exception in Oceania region. Each region consists minimum (but more is rather an exception) of three availability zones. Adding new regions makes little sense because of the cost of investment and the proximity of the regions to each other. A few years ago, the local region in Osaka was introduced. It is accessible to local customers and for disaster recovery only in this seismic active region. This region was not widely available. The peculiarity of the local region is that availability zones are located within the same data center and some of the resources, for example, power, are shared.\nThe first publically available local region is announced in Los Angeles. I believe that with time if usage will grow, the local region can be scaled to a full-fledged region.\nThe proximity of infrastructure to the end-user is an advantage. Edge computing is one of the few relatively new topics that appeared recently and seems to be a solution to many problems related to fog computing. Especially combined with the following major announcement - AWS Wavelength. 
AWS has global infrastructure around the world - North America and Europe are almost fully covered, and new regions have been announced in the Gulf and in South Africa. The situation in Asia is similar, with the exception of the Oceania region. Each region consists of at least three availability zones (more is rather an exception). Adding new regions makes little sense because of the cost of the investment and the proximity of existing regions to each other. A few years ago, the local region in Osaka was introduced: accessible only to local customers and intended for disaster recovery in this seismically active area. This region was not widely available. The peculiarity of a local region is that its availability zones are located within the same data center and some resources, for example power, are shared.\nThe first publicly available one, branded as a Local Zone, was announced in Los Angeles. I believe that, if usage grows, it can over time be scaled into a full-fledged region.\nThe proximity of infrastructure to the end user is an advantage. Edge computing is one of the few relatively new topics to appear recently, and it seems to be a solution to many problems related to fog computing. Especially combined with the next major announcement - AWS Wavelength. A joint solution with the telco providers Verizon, Vodafone, KDDI, and SK Telecom: part of the AWS infrastructure is deployed on the telco operators\u0026rsquo; facilities, and from this edge the application or data is delivered directly over 5G to the end device.\nI assume the underlying hardware is AWS Outposts, which also became publicly available. As far as I remember previous keynotes, it is the first time Andy Jassy acknowledged that not every application can be migrated to the cloud. Outposts is intended for those who already actively use AWS and would like to bring these practices into their own data center in a proper manner: a fully packed rack managed through the AWS console but installed and operated on the customer\u0026rsquo;s site. VMware Cloud on AWS as well as AWS native services are supported. I would note that the hybrid cloud is turning from an enemy into an opportunity. AWS expands its hybrid cloud offerings with a new service called Kendra - an enterprise-grade search engine for unstructured data in SharePoint, Dropbox, and file servers. Funny that Google stopped selling a similar solution about six or seven years ago due to low interest; attempts to build a search engine over so-called dark data have been made by many. According to Andy Jassy, the new service will use all the benefits of ML - it will learn and, eventually, understand the meaning of requests in order to deliver results.\n
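Purely as an illustration of what "search as a service" means in practice - the index ID and the question below are made up - querying such an index from code could look roughly like this with boto3:

```python
import boto3

# Hypothetical sketch: ask a natural-language question against a Kendra index.
# The index ID and the question are placeholders for illustration only.
kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",  # placeholder index ID
    QueryText="What is the travel reimbursement policy?",
)

# Print the type and document title of each returned result.
for item in response["ResultItems"]:
    title = item.get("DocumentTitle", {}).get("Text", "<no title>")
    print(item["Type"], title)
```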
The last hardware-related announcement is AWS Nitro Enclaves: a solution for protecting critical or personal data in a dedicated, isolated environment that is not accessible from outside the enclave.\n","date":"12 December 2019","permalink":"https://reflectionson.cloud/2019/12/12/aws-reinvent-2019-announcements-caught-my-attention-part-1/","section":"Posts","summary":"","title":"AWS Re:Invent 2019 - announcements caught my attention. Part 1"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/kubernetes/","section":"Tags","summary":"","title":"Kubernetes"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/openstack/","section":"Tags","summary":"","title":"OpenStack"},{"content":"OpenStack is de facto dead. Loudly launched in 2010 with the support of NASA and Rackspace, by 2019 the project remained in demand only in a very narrow niche such as NFV for telecom providers.\nAt the moment, OpenStack is trying to rise from the ashes like a phoenix with new infrastructure-level projects that were initially discarded. If you look at Kubernetes and OpenStack from a high level, you will find a lot in common: both are frameworks with a scalable, plugin-based architecture rather than infrastructure services in themselves. A vanilla deployment is a Lego constructor offering a DIY approach. Internal projects with the same or similar functionality compete with each other. The more or less usable product arrives as a finished commercial offering from different vendors - from Red Hat to Dell EMC.\nRapid development, ambitious plans to take over the universe, and the traits described above eventually led OpenStack to its current state. One can object that Kubernetes is developed by such a giant as Google (a well-known graveyard of projects, he-he) and is heavily adopted in projects of every size, that all the mistakes of the past have been taken into account, and that market adoption, support, and maturity are far more advanced than they were 10 years ago.\nAll modern projects are trying to solve the same problem: the growing complexity of supporting applications. If you look back, before Kubernetes there was Docker, and before it there were CMPs - Cloud Management Platforms - abstracting clouds behind a single management portal (initially, OpenStack was also a CMP). Slightly different approaches to, in general, one task. And there is still no single successful solution or approach to managing the compute, storage, and network parts. The exception is the IaaS cloud itself, but that is a managed solution, “someone else\u0026rsquo;s computer” as Eric Schmidt put it, if I\u0026rsquo;m not mistaken.\nKubernetes is a great solution that covers the needs Docker can no longer satisfy even at moderate scale. The plugin-based, framework approach is certainly good and has its benefits, although it comes with already well-known problems and risks.\n","date":"4 December 2019","permalink":"https://reflectionson.cloud/2019/12/04/will-kubernetes-repeat-openstack-s-fate/","section":"Posts","summary":"","title":"Will Kubernetes repeat OpenStack's fate?"},{"content":"AWS has a long history as a public cloud: more than ten years. Over the years, about 100 services covering all possible use cases have been launched. The question is whether, and how much, some of the solutions offered by AWS have become obsolete. Modern technologies develop quickly, and the recent rise of the OSS business model has allowed many interesting free products to enter the market.\nSome services are certainly in good shape, simply because they are simple and irreplaceable: S3 for data storage, EC2 virtual machines, or the SQS message bus. On the other hand, some technologies, as it turns out on close examination, can be replaced by modern and cheaper solutions. Recently, I partnered with a startup to optimize the costs of its AWS infrastructure. It is a small company serving a huge entertainment website with UGC (user-generated content). Of course, they collect and store as much analytical data from the website and application as possible. The primary long-term storage is, of course, S3; analytics and processing are performed with advanced AWS services.\nAWS Redshift is a columnar database for the storage and analytics of big data. Amazon itself uses Redshift, combined with MongoDB, to migrate away from Oracle DB solutions. The underlying technology, acquired many years ago, is based on a heavily modified PostgreSQL 8 engine. It has many significant installations with more than 100 nodes and hundreds of petabytes of data.\nOn the other hand, there is a modern, free and open-source solution created from scratch: ClickHouse, by the Russian internet giant Yandex. Initially, the database was developed as the storage for the Yandex.Metrica service (a competitor to Google Analytics). As the volume of collected data grew, the cluster and the cost of the service scaled accordingly. One of the problems is that scaling out does not guarantee a proportional increase in performance, and scaling up shows even worse dynamics. As a result, the utilization of the cluster\u0026rsquo;s storage does not keep up with the amount of compute needed to process the ever-growing volume of data.\n
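To give a feel for the replacement side, here is a minimal, purely illustrative sketch (not the startup's actual schema; the host, table, and columns are invented) of storing and querying analytical events in ClickHouse from Python with the clickhouse-driver package:

```python
from clickhouse_driver import Client  # pip install clickhouse-driver

# Hypothetical connection; host and schema are made up for illustration.
client = Client(host="clickhouse.example.internal")

client.execute("""
    CREATE TABLE IF NOT EXISTS pageviews (
        event_date  Date,
        event_time  DateTime,
        user_id     UInt64,
        page        String,
        duration_ms UInt32
    )
    ENGINE = MergeTree()
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (event_date, user_id)
""")

# A typical analytical query: top pages over the last week.
top_pages = client.execute("""
    SELECT page, count() AS views, avg(duration_ms) AS avg_duration
    FROM pageviews
    WHERE event_date >= today() - 7
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
""")
print(top_pages)
```

The columnar MergeTree layout is what makes wide scans and aggregations of this kind cheap, which is broadly the workload described above.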
After a small test, it turned out that the cluster of 30 Redshift nodes could be replaced with 10 ClickHouse nodes on R4 instances, giving a several-times performance boost while reducing TCO by a factor of three.\nEncouraged by these results, the company decided to conduct a full audit, after which it replaced all I2 nodes with I3, reducing costs by another 30%.\nAfter that, management decided to re-architect as much as possible and to abandon legacy wherever possible, whether it is an AWS service or an internal application.\nDuring aggressive growth and with limited operational capacity, managed services and PaaS can be a rescue and a solution for immediate tasks, but in a few years they can become a factor limiting the growth of the company, or even an anchor pulling it down.\nA few years ago, when Netflix was substantially smaller, company speakers gave a lot of presentations at various events and talked about its strategy for using AWS: no advanced services, only basic and simple ones, essentially EC2. The reason is simple: by handing over control of the basics, you stop seeing the forest for the trees and perhaps begin to lose more than you gain.\nAbout five years ago, various vendors promoted to customers the idea of a CoE (Center of Excellence): a group of champions and deeply technical specialists who would drive IT development, migration to the cloud, and so on. This company decided to gather such a group once a year to audit the infrastructure and assess what else to “throw out”.\nIt seems that history, as it should, repeats itself: 10 years ago we learned how to keep the virtualization footprint minimal and efficient, and today the same is happening with clouds. And some companies, oops, are already in a similar situation with containerized applications, which also gradually turn into legacy and have to be retired.\n","date":"27 November 2019","permalink":"https://reflectionson.cloud/2019/11/27/aws-as-a-legacy/","section":"Posts","summary":"","title":"AWS as a legacy"},{"content":"","date":null,"permalink":"https://reflectionson.cloud/tags/legacy/","section":"Tags","summary":"","title":"Legacy"},{"content":"Reflections on cloud is a personal blog about IT technologies and my personal opinions and thoughts on current events. As most posts are thoughts about cloud technology, it is in the name of this blog.\nDuring different stages of my career I have worked for Microsoft, AWS, and VMware among well-known companies, as well as for a local cloud provider and a startup developing a Software-Defined Storage product before the term existed. So, I am experienced in different technologies and areas, from networking to software development.\nDisclaimer: All posts on this site are strictly the personal opinion of the author and do not represent the views or opinions of any current, previous, or future employer.\n","date":null,"permalink":"https://reflectionson.cloud/author/","section":"Reflections on cloud","summary":"","title":"About"}]