<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://reflectionson.cloud//feed.xml" rel="self" type="application/atom+xml" /><link href="https://reflectionson.cloud//" rel="alternate" type="text/html" /><updated>2025-12-24T00:00:25+01:00</updated><id>https://reflectionson.cloud//feed.xml</id><title type="html">Reflections on cloud</title><subtitle>Blog by Konstantin Vvedenskyi about cloud and IT technologies</subtitle><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><entry><title type="html">If you can`t win them - lead them</title><link href="https://reflectionson.cloud//aws/2021/03/25/If-you-can-t-win-them-lead-them.html" rel="alternate" type="text/html" title="If you can`t win them - lead them" /><published>2021-03-25T00:00:00+01:00</published><updated>2021-03-25T00:00:00+01:00</updated><id>https://reflectionson.cloud//aws/2021/03/25/If-you-can%60t-win-them-lead-them</id><content type="html" xml:base="https://reflectionson.cloud//aws/2021/03/25/If-you-can-t-win-them-lead-them.html"><![CDATA[<p>Probably that’s how Amazon decided to make its ElasticSearch fork. Today’s open-source businesses face challenges that their predecessors (like Red Hat) did not - particularly the appropriation of their products by cloud providers.</p>

<p>It is, in general, an interesting situation, and my personal sympathies are not on the cloud side. The conflict has been brewing for a long time, and it was not started by Elastic. As early as 2018, MongoDB released a new license for its product - the SSPL, a modified AGPL 3.0.</p>

<p>The license’s single, but really important, restriction applies when a consumer offers a service based on the product. In that case, the consumer must either publish all of its source code or buy a commercial license. Simply put, it prevents cloud providers from earning money on a free, open-source product without giving anything back.</p>

<p>And then it started… Part of the community rose up in arms against this generally sound decision and launched its own MongoDB, with SQL and blackjack. The OSI declared the SSPL a proprietary, restrictive license. As a result, the MongoDB API-compatible service introduced by AWS supports only version 3.x, the last one released under the previous license.</p>

<p>After that, less prominent but still popular products - Graylog and CockroachDB - switched to similar licenses, with roughly the same result in the end. Now it was Elastic’s turn to change its license.</p>

<p>The war between the search-engine developer and the cloud giant has been going on for quite a while. First, AWS released a free and open-source version of the extensions for ElasticSearch that Elastic sells as part of its enterprise license. Elastic found no better answer than to change the license for all its products, and AWS responded by announcing its own fork of ElasticSearch.</p>

<p>This is a logical decision from AWS’s point of view: the managed ES service is very popular and sells with excellent added value compared to regular EC2 instances. Therefore, unlike with the MongoDB-compatible service, AWS cannot simply postpone the release and redo it from scratch. No one slaughters the goose that lays the golden eggs.</p>

<p>I wonder what the consequences will be for each of the market players. My guess is that AWS will develop its fork toward search-related features, while Elastic will continue to build out security-related functionality, so the products will not compete head-on.</p>

<p>But this case sets a precedent. And I don’t rule out that the war is not over and we will soon see new battles.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="AWS" /><category term="OSS" /><category term="software" /><category term="cloud" /><category term="ElasticSearch" /><summary type="html"><![CDATA[Probably that’s how Amazon decided to make its ElasticSearch fork. Today’s open-source businesses face challenges that their predecessors (like Red Hat) did not - particularly the appropriation of their products by cloud providers.]]></summary></entry><entry><title type="html">Word in defense of OVH</title><link href="https://reflectionson.cloud//cloud/2021/03/17/Word-in-defense-of-OVH.html" rel="alternate" type="text/html" title="Word in defense of OVH" /><published>2021-03-17T00:00:00+01:00</published><updated>2021-03-17T00:00:00+01:00</updated><id>https://reflectionson.cloud//cloud/2021/03/17/Word-in-defense-of-OVH</id><content type="html" xml:base="https://reflectionson.cloud//cloud/2021/03/17/Word-in-defense-of-OVH.html"><![CDATA[<p>Last week an OVH data center in Strasbourg burned down. Some of the discussions produced more heat than the fire itself. Opinions varied: some claimed that OVH had lied about its reliability; others wrote off the cloud, OVH, or IT as a whole.</p>

<p>Clear minds recalled the words attributed to Eric Schmidt (if I’m not mistaken) about the cloud being just someone else’s computer. It doesn’t matter whether the cloud is hosted in a private DC or at a provider: anything can burn or sink. Beyond that, power can go out, connectivity can drop, and so on.</p>

<p>As I see it, two events happened last Wednesday: one for OVH and one for Europe. With the first, all is clear; as for the second - a warm welcome to the club. Cloud outages have happened in Australia (several times already) and in the US. Either other regions have not seen a data-center disaster of this kind, or such events didn’t get much attention; besides, news from far, far away is not that interesting.</p>

<p>Everyone is used to outages at AWS and Azure, and Google also breaks something from time to time. And mentioning BGP leaks is considered bad manners, since they have long been part of daily life.</p>

<p>Architects and IT professionals were also outraged by OVH’s design and the DC project as a whole; the concerns are the modular design and fire-hazardous materials. All in all, it’s bad. Some local providers immediately declared that their data centers neither burn in fire nor sink in water - as if the absence of such accidents so far proved the opposite impossible.</p>

<p>And for some reason, as many as four availability zones, or sub-data centers, were physically placed on the same site! Can you imagine?! But keep two points in mind: initially, Azure regions were physically located in the same data center and, optionally, shared power and network, and AWS is doing the same now with its Local Region.</p>

<p>I recall one situation with a customer who had to choose a cloud to run a managed DB. The first CSP had a multi-AZ design; the second had a higher SLA, but no multi-AZ. It led to a long discussion about which is better and matters more…</p>

<p>Everything falls, and no cloud will change that. All AWS guides and best practices say that a service has to be designed to handle a failure of the underlying cloud infrastructure. Twenty years ago, people were divided into those who make backups and those who don’t (not much has changed since then). I hope this event will demonstrate the need to store copies off-site. For example, Veeam (a backup vendor) has promoted the 3-2-1 rule almost from its very beginning: three copies, on two different media, with one copy on another platform. Modern technologies make this process even easier than it was 5 or 10 years ago.</p>

<p>P.S. A small prediction: some service providers will start offering a service to audit/guarantee the safety of data at a remote site in case of such accidents.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="cloud" /><category term="OVH" /><category term="disaster" /><summary type="html"><![CDATA[Last week an OVH data center in Strasbourg burned down. Some of the discussions produced more heat than the fire itself. Opinions varied: some claimed that OVH had lied about its reliability; others wrote off the cloud, OVH, or IT as a whole.]]></summary></entry><entry><title type="html">Why 1st appearance of ARM servers failed, but can succeed the second?</title><link href="https://reflectionson.cloud//arm/2021/03/08/Why-1st-appearance-of-ARM-servers-failed-but-can-succeed-the-second.html" rel="alternate" type="text/html" title="Why 1st appearance of ARM servers failed, but can succeed the second?" /><published>2021-03-08T00:00:00+01:00</published><updated>2021-03-08T00:00:00+01:00</updated><id>https://reflectionson.cloud//arm/2021/03/08/Why-1st-appearance-of-ARM-servers-failed-but-can-succeed-the-second</id><content type="html" xml:base="https://reflectionson.cloud//arm/2021/03/08/Why-1st-appearance-of-ARM-servers-failed-but-can-succeed-the-second.html"><![CDATA[<p>In a previous note, I mentioned HPE servers running the ARM platform, which after a couple of years, without much publicity, moved back to x86 - even though server ARM CPUs and ARM servers were once produced by many companies. Why did the major players curtail these products, and why do cloud providers pay so much attention to ARM now?</p>

<p>By the middle of the 2010s, the architecture of the cloud-native application had largely taken shape. It included solutions that had rarely been used with legacy applications: Redis for caching, ElasticSearch for search, and message queues. Applications had evolved strongly toward the web approach and horizontal scaling.</p>

<p>Initially, web workloads and the horizontal scaling of small application or server instances were the niche where ARM looked attractive as a server CPU. At the time, ARM CPUs were fairly low-powered but also consumed little energy - just the thing for applications with low-to-medium loads such as the web, MapReduce (a very fashionable technology in that period), or even IoT processing. In general, all those applications may never load a CPU to 100%, and quantity sometimes matters more than quality.</p>

<p>But the market, as always, decided otherwise. To begin with, very few enterprise customers needed ARM servers in their own DCs while clouds were on the rise. Then it turned out that the performance was still too low. And, finally, the software did not support the platform at the required level: while you could install and run Linux, most applications either did not support the platform or did not use the capabilities and features of the CPU (it is not clear which is worse).</p>

<p>As a result, the bright and beautiful future of mass-market ARM servers was washed away by reality. But ARM Holdings, as the developer of the platform, did not worry much about such trivia and stared into its own bright future, which can be divided into two parts: clouds and 5G.</p>

<p>It was not for nothing that Amazon acquired Annapurna Labs, a developer of ARM processors, in early 2015. At hyperscaler scale, switching to an in-house energy-efficient platform can save billions per year.</p>

<p>The best example of the result of this acquisition is Project Nitro, a joint hardware/software solution that moved the virtualization and management overhead off the servers onto a dedicated PCIe board. Previously, about a third of each server was reserved for management purposes; now 100% of it can be sold.</p>

<p>Furthermore, there are many SaaS and PaaS services - DynamoDB, S3, SQS, etc. - that can be moved to the new platform. The benefit of such a move is illustrated by Apple’s experience with its M1 and A14 CPUs: both have units optimized for specific tasks. Essentially, each of these units is a whole co-processor, but already built in. The old idea gets a new life!</p>

<p>As a result, Amazon and Microsoft (which is developing its own ARM chip), as platform owners, get specialized solutions optimized for their needs - just as IBM designs mainframe-optimized processors rather than using Intel’s general-purpose CPUs (well, almost).</p>

<p>If ARM in the cloud is already today’s reality, there is still a niche for the future: low-power and embedded servers for 5G, SmartNICs, and edge computing - areas where an undemanding platform with extensibility for special use cases can earn its success. With the spread of 5G and the gradual expansion of smart everything, applications themselves will shift closer to the data sources. The Internet of Things has not yet become a daily reality, but it has an intermediate stage - the “fog of things” - and this fog will become the computing power behind all the sensors and metering devices. There will also be smart cars: few models support M2M so far, but the concept is entering the market.</p>

<p>So Intel is not going anywhere and will not die; more likely, it will release its own ARM chip (again) and will work to ensure that x86 can push into the new, growing market and displace ARM. In servers and PCs, ARM will remain a niche solution: business laptops, ultrabooks, and so on. Microsoft will provide the platform in the form of an OS and basic software. But whether the initiative will get support and traction from vendors like Adobe, Corel, and Autodesk, which release highly demanding software, is a separate question that will also significantly influence the development of ARM as a platform for computers. The last remaining stronghold is games, but I would not be surprised if Unreal Engine also adopts this platform in the next couple of years…</p>

<p>In any case, it remains only to wait and see whether server manufacturers support the initiative and what Intel’s “answer to Chamberlain” will be.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="ARM" /><category term="AWS" /><category term="Microsoft" /><summary type="html"><![CDATA[In a previous note, I mentioned HPE servers running the ARM platform, which after a couple of years, without much publicity, moved back to x86 - even though server ARM CPUs and ARM servers were once produced by many companies. Why did the major players curtail these products, and why do cloud providers pay so much attention to ARM now?]]></summary></entry><entry><title type="html">RISC vs x86. Round 2</title><link href="https://reflectionson.cloud//arm/2021/02/21/ARM-vs-x86-Round-2.html" rel="alternate" type="text/html" title="RISC vs x86. Round 2" /><published>2021-02-21T00:00:00+01:00</published><updated>2021-02-21T00:00:00+01:00</updated><id>https://reflectionson.cloud//arm/2021/02/21/ARM-vs-x86-Round-2</id><content type="html" xml:base="https://reflectionson.cloud//arm/2021/02/21/ARM-vs-x86-Round-2.html"><![CDATA[<p>The release of the Apple M1 CPU attracted a lot of attention from all kinds of media and blogs - except this one. The processor was X-rayed from every side, and every possible benchmark has been published; even information about an update to this wonderful processor has leaked. And, of course, everyone has once again buried x86 as an architecture.</p>

<p>The most recent undertaker of x86 from the ARM side - I am not counting AWS a1 instances yet - was HPE’s Project Moonshot, which, however, moved smoothly back to the traditional x86 platform.</p>

<p>As for me, the buriers of x86 have skipped the short course of recent history. The fight between x86 and RISC has already happened once. Although RISC eventually lost because of the drawbacks of its architecture, both platforms have changed significantly over the years, integrating the best sides of their competitor.</p>

<p>It has even reached the point where x86 processors are considered RISC-like inside. Well, thank God it is not quite the other way around.</p>

<p>The thing is that the context and the evolution of IT over these years are not taken into account - nor is the shift of profits from the PC market to servers and the cloud.</p>

<p>The PC market has lost its former influence: the vector of processor development is now set not even by servers but by clouds. Secondly, compared to the 1980s, the CPU field has become much wider. ARM will lead in areas that didn’t exist before: IoT, vehicles, embedded devices, etc. - a huge space with dozens of times more devices than Intel has sold in its entire history.</p>

<p>Another important point: there are other processor architectures and types that ARM will have to contend with - MIPS and RISC-V - not to mention specialized solutions such as ASICs and FPGAs, which it will also have to fight off in the SmartNIC market. So the struggle will only intensify.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="ARM" /><category term="x86" /><summary type="html"><![CDATA[The release of the Apple M1 CPU attracted a lot of attention from all kinds of media and blogs - except this one. The processor was X-rayed from every side, and every possible benchmark has been published; even information about an update to this wonderful processor has leaked. And, of course, everyone has once again buried x86 as an architecture.]]></summary></entry><entry><title type="html">And let no one go offended</title><link href="https://reflectionson.cloud//analysis/2021/01/12/And-let-no-one-go-offended.html" rel="alternate" type="text/html" title="And let no one go offended" /><published>2021-01-12T00:00:00+01:00</published><updated>2021-01-12T00:00:00+01:00</updated><id>https://reflectionson.cloud//analysis/2021/01/12/And-let-no-one-go-offended</id><content type="html" xml:base="https://reflectionson.cloud//analysis/2021/01/12/And-let-no-one-go-offended.html"><![CDATA[<p>The FTC probably read Roadside Picnic by the brilliant Soviet writers Arkady and Boris Strugatsky and used the novel’s core idea while preparing its case against the internet giants. And now the case has finally reached its final stage - court.</p>

<p>In the ’90s, the monster that crushed everyone was Microsoft, and it was punished for that quite heavily. Now it’s a whole knot called FAANG. So far, the complaints concern only two of the five heroes - Facebook and Google.</p>

<p>Strangely, the lawsuits did not spark discussion and analysis in the media, although, in retrospect, they had been in preparation for several years and clearly still are. What are the claims against the titans of the industry, and why did exactly these two companies of the bunch of internet giants come under the hammer of justice first?</p>

<p>Google and Facebook have two things in common - monopoly and aggressiveness. Google is more mature and experienced, and thus already a less aggressive and more careful market player: it is engaged in improving its offering rather than creating and capturing new areas. Facebook, unlike its senior fellow, is a one-person company, and it actively buys up competitors that could become its Kronos in the future.</p>

<p>There are a few dozen advertising networks and related companies on the market. Most sites earn on ads not only from Google but also from at least one or two of its competitors, whereas there is no equivalent replacement for Instagram and WhatsApp. Google softly (for such a colossus) and gently pushes its advertising products, while Facebook, like a black hole, pulls in all available information and uses every opportunity to increase the time users spend in its product ecosystem. And this, in part, leads to the segmentation of the Internet into InternetS, something the best minds of mankind have warned and worried about for many years already.</p>

<p>But do not forget that for FB, just as for the popular search engine, the main source of earnings is ads. And the giants entered into a secret joint agreement under which Zuckerberg’s company receives preferences in advertising and, in return, does not push against Google.</p>

<p>But forget about advertising per se - after all, it must be delivered somehow. And if everything is clear with the social network, Google has another ace up its sleeve - Android. Officially, it is a free and open mobile OS (except for some nuances), installed on billions of devices of all varieties - from phones and tablets to NAS and IoT devices. And there is also Chrome, built on a free and open (see above) browser engine. Both are factories for collecting personal data, analyzing it, and improving ad targeting. A beautiful ecosystem in its fullness!</p>

<p>This complexity and the presence of Google and Facebook in all spheres, multiplied by the popularity of their non-core products, is the idea behind the possible break-up of the companies - and with it, bringing competition back to the market.</p>

<p>Among the other internet giants - Amazon, Apple, and Netflix - it is only with the last two that it remains unclear what to do. Dividing Amazon into “parts” was already proposed by large investment companies a few years ago. After all, the diamond in Bezos’s crown is Amazon Web Services: all the other businesses (except advertising) exist thanks to the cloud’s profits, even as they grow.</p>

<p>According to the investors, spinning Amazon’s cloud business off into a separate company would only increase its market value, and it would also raise the share price of the retail business. The situation with Netflix and Apple is a little more complicated.</p>

<p>Apple’s serious sin, at the moment, is tax avoidance. Attempting to appease the US government, the company even returned some production from China to its homeland and promised to increase it. Still, I do not rule out that the story of the AppStore monopoly will get a sequel in the coming years.</p>

<p>So far, Netflix seems the most harmless of the abovementioned trinity: it grows peacefully, absorbs no one, and its many competitors grow like yeast. On the other hand, those competitors may push for an antitrust investigation against the streaming giant, just as Oracle pushed the case against Google. And it is not about revenge for Java - it is about competition in the marketing and advertising market, although it would seem the companies are not particularly competitors there.</p>

<p>In recent history, there have already been two interesting showcase lawsuits against major monopolies: AT&amp;T and Microsoft. Both are interesting because they allow us to map the current giants onto a particular lawsuit and assess the possible consequences. Viewed completely binarily, Facebook, Google, and Amazon are “AT&amp;T,” whereas Netflix and Apple are more like the “Microsoft” of the Gates era. In general, over the next few years it will be very interesting to watch the situation develop, both with the existing lawsuits and with new ones, as well as with any legislative initiatives that could follow from the findings and court decisions.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="analysis" /><category term="Apple" /><category term="Amazon" /><category term="Google" /><category term="Facebook" /><category term="Netflix" /><category term="court" /><summary type="html"><![CDATA[The FTC probably read Roadside Picnic by the brilliant Soviet writers Arkady and Boris Strugatsky and used the novel’s core idea while preparing its case against the internet giants. And now the case has finally reached its final stage - court.]]></summary></entry><entry><title type="html">Cloud comitio</title><link href="https://reflectionson.cloud//analysis/2020/12/12/Cloud-comitio.html" rel="alternate" type="text/html" title="Cloud comitio" /><published>2020-12-12T00:00:00+01:00</published><updated>2020-12-12T00:00:00+01:00</updated><id>https://reflectionson.cloud//analysis/2020/12/12/Cloud-comitio</id><content type="html" xml:base="https://reflectionson.cloud//analysis/2020/12/12/Cloud-comitio.html"><![CDATA[<p>The end of the year is debriefing time. The other day, Maxim Ageev (CEO of De Novo, a Ukrainian cloud provider) published his vision of the year’s results, with which Vladimir Pozdnyakov (CEO of DX Agent, ex-head of IDC Ukraine) disagreed, expressing his doubts. I disagree with both of them. The thoughts below apply to the CIS region rather than to the US and Europe, but they still deserve an answer.</p>

<p>I don’t have exact numbers or detailed analytics, but the arguments of both authors share a couple of important nuances: first, they study the Ukrainian market in a somewhat disconnected manner; second, they take into account only well-known and visible companies.</p>

<p>From these two nuances a conflict arises. The first thing to mention: not all Ukrainian companies pay for consumed services locally, and most of these enterprises do not attract public attention at all. For example, one of my ex-customers was in the AppStore Top3 among entertainment apps in the United States, and perhaps in Google Play as well (here I could be mistaken), while the developing company was from a small city 800 km from Moscow. Another example is Ring, a company originally from Ukraine, which paid Amazon directly without involving any local partners; nobody knew about its country of origin until the e-commerce giant bought it.</p>

<p>So, to the question: who is the winner of the Ukrainian cloud market - global players like Azure, AWS, and GCP, or local ones like De Novo and GigaCloud? The answer depends on what you count. Indeed, Microsoft has strong sales channels, many years of experience, pricing flexibility, etc., and an Azure subscription can be added to an Enterprise Agreement, which has a positive impact. From this point of view, it is a battle of two leaders - Azure and De Novo. As their main business is local, their customers naturally pay in Ukraine, or they pay cloud aggregators. These are easy to evaluate and measure, since the customer names are public and well known.</p>

<p>Now let’s look at the other, dark side: outsourcers, gaming companies (especially casual gaming - a very interesting topic and market, by the way), startups, and the low-profile IT companies mentioned above - enterprises full of young, bearded people. Most of them hate Azure for technical reasons (its API used to change as often as WinAPI once did) and prefer AWS and GCP. They have not even heard of De Novo, GigaCloud, and other local CSPs. These young people consider everyone over 30 a dinosaur quietly creeping toward the nearest tomb (although this may be specific to the CIS). Most of such enterprises pay for consumed services not to Ukrainian companies but directly to the provider, with US cards. It’s impossible to spot them - try to uncover someone who doesn’t want to be found. They are not referenced in public use cases either. As a result, the sum of their IT spending is impossible to measure or analyze.</p>

<p>Besides, the vendors themselves are in the game. For now, let’s stop at the big three. Microsoft focuses on enterprise customers and invests a lot in evangelization among young people - everything is unchanged there. GCP is young and cocky: the best in some areas and the opposite in others, though it aggressively closes the gaps. When you understand why to choose GCP, there is no better solution; if someone doesn’t know or understand, presales and sales will quickly demonstrate the quality of its communication channels (even in Ukraine) and its internal services developed into external products, and will remove any trace of doubt. And only AWS quietly, without attracting any attention, harvests the market and pays attention only to promising and actively paying customers.</p>

<p>To complete the overview, cloud managed services from HPE and IBM, and SaaS from companies such as SAP or Salesforce, should be mentioned - which Maxim Ageev politely omitted from his review. Admittedly, it is difficult for me to estimate the revenues of the SaaS giants in the Ukrainian market, primarily because of their low share in total earnings. HPE and IBM, as the central players in managed services, feel great - it is worth remembering the move of DTEK (an energy services enterprise) to the HPE cloud, or IBM’s multi-year contract with Ukrsotsbank (owned by UniCredit Group at the time), which didn’t last as long as planned. SAP cloud services are a long-term and huge investment and should be considered unique in UA. Overall, SaaS and managed services are yet another piece of the pie worth considering within the whole picture, because the result is the same: the provider takes over a customer’s IT functions and offers an abstraction of some part of the IT processes, or of the entire process/application.</p>

<p>One of the authors put forward the wise idea that AWS, Azure, and GCP, in modern conditions, are cloud 2.0, while HPE and IBM are cloud 1.0. A very good and reasonable thought. But there is one caveat: from a technical point of view, clouds differ only in the management interface and in the changes required to application architecture and infrastructure (VPC, subnets, etc.). Because, in the end, the idea behind the cloud is flexibility for the customer and competent management of data-center resources for the provider.</p>

<p>With this idea in mind, VMware Cloud on AWS and its brothers-in-law running in other clouds should be remembered. On the one hand, it is cloud 2.0 - flexible, fast to provision, etc. On the other hand, it is 1.0 - what can be more familiar than the VMware stack, natively developed and supported, but running, in this case, on AWS hardware? And there is the reverse chimera - AWS Outposts…</p>

<p>As an outcome: the analysis of the modern IT market, even within a single country, has become so complex and multifaceted that no single correct way to measure it exists. But it is no longer in question that the number of variables and moving parts in various areas has multiplied sharply. It remains only to figure out how to perform such an analysis with all of the above in mind.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="analysis" /><category term="cloud" /><category term="AWS" /><category term="GCP" /><category term="Azure" /><summary type="html"><![CDATA[The end of the year is debriefing time. The other day, Maxim Ageev (CEO of De Novo, a Ukrainian cloud provider) published his vision of the year’s results, with which Vladimir Pozdnyakov (CEO of DX Agent, ex-head of IDC Ukraine) disagreed, expressing his doubts. I disagree with both of them. The thoughts below apply to the CIS region rather than to the US and Europe, but they still deserve an answer.]]></summary></entry><entry><title type="html">Shadow of the blue colossus</title><link href="https://reflectionson.cloud//ibm/2020/11/16/Shadow-of-the-blue-colossus.html" rel="alternate" type="text/html" title="Shadow of the blue colossus" /><published>2020-11-16T00:00:00+01:00</published><updated>2020-11-16T00:00:00+01:00</updated><id>https://reflectionson.cloud//ibm/2020/11/16/Shadow-of-the-blue-colossus</id><content type="html" xml:base="https://reflectionson.cloud//ibm/2020/11/16/Shadow-of-the-blue-colossus.html"><![CDATA[<p>In the cult game Shadow of the Colossus, the protagonist defeats tremendous, majestic colossi. Seeing one for the first time, it is difficult to predict what it will do in the next moment, where its weak spot is, and from which side you should approach.</p>

<p>IBM is one such colossus of the IT world. It is well known for cutting off discouraging or low-profit businesses, as well as for unexpected acquisitions like Red Hat. And now IBM is separating off a part of the business that seems to make a noticeable profit, in contrast to the constant decline in sales, and does not seem very costly in terms of, for example, R&amp;D.</p>

<p>The separation of the managed services business into a separate company is a logical and correct step for several reasons: a different company culture, competition in the managed services market, and dropping the ballast.</p>

<p>The culture of a company whose main business is operational support is fundamentally different from that of one dealing with clouds, software, and long-term R&amp;D projects. Not to mention sales cycles and methods.</p>

<p>The growing popularity of clouds, their broader usage areas, and the shortage of experts and engineers drive customers' interest in outsourcing or outstaffing IT maintenance, and also increase competition in the managed services market. Offers exist for any pocket or need: from full coverage of any infrastructure and cloud by companies like Rackspace to SaaS products such as EPAM Syndicate, which manages serverless applications in AWS. AWS itself offers this kind of service too, while Microsoft, as usual, relies on partners. The managed services market is a wide but crowded valley, and soon the crush will begin.</p>

<p>Printers, storage systems, laptops, and so on: all of these businesses at some point turned from promising and profitable for IBM into ballast as technology and the market evolved and the technology became a commodity. Take servers as an example: once a high-margin business with niche and expensive solutions, today servers are a commodity, manufacturers have consolidated, and dozens of offers from different companies are available. This is why the x86 server business was sold, as opposed to mainframes, a niche that is narrow but still interesting and inaccessible to a wide range of manufacturers.</p>

<p>After Satya Nadella took over as CEO, Microsoft quickly shifted its main focus to the modern reality: clouds. IBM, partly because of its scale, turned out to be much more inert, compounded by its bet on blockchain and artificial intelligence, including Watson.</p>

<p>Therefore IBM, during its transformation, has returned to the starting point. The move away from infrastructure solutions in favor of applications has led to the need for modern delivery methods: clouds and containers, the execution environment of modern applications.</p>

<p>At the current stage of IT evolution, this may not be the last time IBM makes an unexpected step, and the transformation that began a decade ago seems likely to last at least as long again.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="IBM" /><category term="IBM" /><summary type="html"><![CDATA[In the cult game Shadow of the Colossus, the protagonist defeats tremendous, majestic colossi. Seeing one for the first time, it is difficult to estimate what it will do in the next moment, where its weak spot is, and from which side you should approach.]]></summary></entry><entry><title type="html">Server`s milestone</title><link href="https://reflectionson.cloud//arm/2020/11/12/Server-s-milestone.html" rel="alternate" type="text/html" title="Server`s milestone" /><published>2020-11-12T00:00:00+01:00</published><updated>2020-11-12T00:00:00+01:00</updated><id>https://reflectionson.cloud//arm/2020/11/12/Server-s-milestone</id><content type="html" xml:base="https://reflectionson.cloud//arm/2020/11/12/Server-s-milestone.html"><![CDATA[<p>The arrival of 5G networks, the continuous evolution of the ARM architecture, and the miniaturization of specialized solutions have driven the rise of a very interesting edge-computing idea: the Smart NIC.</p>

<p>None of the above is new; even Smart NICs existed previously, simply with less functionality: plain Ethernet and TCP/IP offload, or intelligent NICs aimed at providing features such as RoCE or DPDK support. But now, interest in edge computing and 5G is driving renewed attention to Smart NIC development.</p>

<p>Of the three existing technologies - ASIC, FPGA, and SoC - the most flexible and “democratic” option is the latter.</p>

<p>AWS has used a homegrown Smart NIC solution for several years, gradually improving and expanding its functionality. Presented in 2013, the first-generation ASIC offloaded block-storage tasks from the CPU, and over time this progressed into Project Nitro: a full-fledged board handling networking, block disks, security, and even the hypervisor.</p>

<p>On the other side, VMware had promised for several years to port its hypervisor to the ARM platform, and at VMworld 2020 it finally happened. The release of the leading virtualization platform on ARM opens new horizons to absolutely everyone, and the benefits are huge.</p>

<p>For example, thanks to Project Nitro, AWS is able to sell the additional ~30% of each server previously reserved for management purposes (essentially cloud overhead). VMware itself has NSX and vSAN, its SDN and SDS solutions respectively. Offloading their service overhead, or implementing GENEVE on any Smart NIC, would significantly reduce hardware costs by taking service workloads off the CPU.</p>

<p>vSphere on ARM is a very interesting release: new ground for a bright future and an enabler of the hybrid clouds promised for so many years. More importantly, it will pave the way for widespread Smart NIC adoption, not just in specialized niches. Frankly, until now Smart NICs have been used either for NFV (classic ASICs) or inside black boxes (AWS Outposts) and are little understood or used by general business.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="ARM" /><category term="ARM" /><category term="VMware" /><summary type="html"><![CDATA[The arrival of 5G networks, the continuous evolution of the ARM architecture, and the miniaturization of specialized solutions have driven the rise of a very interesting edge-computing idea: the Smart NIC.]]></summary></entry><entry><title type="html">Internet is broken and what to do about it is unclear</title><link href="https://reflectionson.cloud//internet/2020/03/20/Internet-is-broken-and-Internet-is-broken-and-what-to-do-about-it-is-unclear.html" rel="alternate" type="text/html" title="Internet is broken and what to do about it is unclear" /><published>2020-03-20T00:00:00+01:00</published><updated>2020-03-20T00:00:00+01:00</updated><id>https://reflectionson.cloud//internet/2020/03/20/Internet-is-broken-and-Internet%20is%20broken-and-what-to-do-about-it-is-unclear</id><content type="html" xml:base="https://reflectionson.cloud//internet/2020/03/20/Internet-is-broken-and-Internet-is-broken-and-what-to-do-about-it-is-unclear.html"><![CDATA[<p>The fact that globally the Internet is “broken” has been known for a long time and is no special secret to anyone.</p>

<p>The flexibility and independence of components built into the Internet's architecture in its present form have become an anchor pulling it down. This does not much concern us end-users, except that Facebook, or another vital service, may load slower than usual.</p>

<p>But at the level below, in the global routing of traffic flows, a real hell often reigns: Tier 1 providers fight among themselves or organize coalitions against a third provider; telecom operators battle Internet companies and route their traffic through remote locations. Even traffic exchange points put sticks in their customers' wheels, and the cherry on the cake is the constant churn of BGP announcements. That last point deserves separate attention because of the scale and seriousness of the problem.</p>

<p>With enviable regularity, news appears that traffic for some large network has been sent to a black hole (YouTube and Pakistan). One African ISP shut down Google this way, in April 2017 the Visa and Mastercard networks were announced from Russia, and so on.</p>

<p>In addition to human (one wants to believe) mistakes, there are hacker attacks based on BGP hijacking. So far there are only a few such attacks, but their number is growing, as is the danger they carry.</p>
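<p>To see why a hijacked announcement wins, recall that BGP routers prefer the most specific matching prefix. The sketch below is an illustrative Python toy, not real routing code: the two-entry routing table and the helper <code>best_route</code> are made up for this example, though the prefixes mirror the well-known 2008 YouTube incident. Longest-prefix match picks the more specific, hijacked /24 over the legitimate /22:</p>

```python
import ipaddress

# Hypothetical routing table: (prefix, origin AS). Entries are illustrative;
# the prefixes echo the 2008 YouTube / Pakistan Telecom incident.
routes = [
    (ipaddress.ip_network("208.65.152.0/22"), "AS36561"),  # legitimate origin
    (ipaddress.ip_network("208.65.153.0/24"), "AS17557"),  # more-specific hijack
]

def best_route(addr):
    """Longest-prefix match: the most specific covering prefix wins."""
    ip = ipaddress.ip_address(addr)
    matches = [(net, asn) for net, asn in routes if ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)

net, asn = best_route("208.65.153.1")
print(net, asn)  # 208.65.153.0/24 AS17557 -- the hijacked /24 wins

net, asn = best_route("208.65.152.1")
print(net, asn)  # 208.65.152.0/22 AS36561 -- outside the /24, /22 still holds
```

<p>This is exactly why announcing a more specific prefix, by mistake or by malice, redirects traffic instantly and globally: in plain BGP, specificity beats legitimacy.</p>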

<p>The IETF is working to solve the problem as best it can: additions and extensions around BGP, such as RPKI route-origin validation, are being developed to counter route hijacking and to minimize possible accidents and their consequences.</p>
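<p>Origin validation can be caricatured in a few lines: a signed ROA binds a prefix (down to a maximum length) to the one AS allowed to originate it, and routers classify each announcement as valid, invalid, or not-found. The Python below is a deliberately simplified sketch, with a made-up one-entry ROA table and a hypothetical <code>validate</code> helper, not a real validator:</p>

```python
from ipaddress import ip_network

# Hypothetical ROA set (illustrative values): each entry authorizes one
# origin AS to announce a prefix, down to a maximum prefix length.
roas = [
    {"prefix": ip_network("208.65.152.0/22"), "max_len": 24, "asn": 36561},
]

def validate(prefix, origin_asn):
    """Simplified RPKI origin validation: 'valid', 'invalid' or 'not-found'."""
    prefix = ip_network(prefix)
    covering = [r for r in roas if prefix.subnet_of(r["prefix"])]
    if not covering:
        return "not-found"        # no ROA covers this prefix at all
    for r in covering:
        if r["asn"] == origin_asn and prefix.prefixlen <= r["max_len"]:
            return "valid"        # authorized origin, acceptable length
    return "invalid"              # covered, but wrong origin or too specific

print(validate("208.65.153.0/24", 36561))  # valid: the authorized origin
print(validate("208.65.153.0/24", 17557))  # invalid: a hijacked origin
```

<p>Routers configured to drop “invalid” announcements would have rejected the hijacked /24 outright, which is precisely the accident minimization this work aims at.</p>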

<p>And a third party has emerged: the global cloud providers. Internet businesses, whether Facebook, Google, or local players like Yandex in Russia, have a long history of building private fiber networks and CDNs to streamline routes and content delivery. They do not care how and where the content travels; the main thing is to deliver it quickly and efficiently.</p>

<p>The situation with global cloud providers is different: they cannot afford the fall of a single data center, and the quality of network access should be as high as possible, including the connections between regions on different continents. To achieve this, additional links are built that are not publicly available or shared but reserved for private use. And to avoid taking part in Tier 1 wars, or being affected by them, cloud providers (or the owners of such cables) become a kind of Tier 1 provider themselves. In fact, a decent chunk of cloud provider traffic never leaves the provider's own network. The situation is complicated further by the SD-WAN solutions cloud providers offer, which pull traffic into the provider's network and avoid routing over external networks.</p>

<p>In general, it's a logical step for a cloud provider: DCs and interconnects are present at the main traffic exchange points and in major cities, CDN PoPs are distributed across secondary IXs and smaller cities, and a backbone runs between all of these components, so why not offer clients optimization of their ingress/egress traffic?</p>

<p>As a result, from the point of view of routing, the modern Internet is not a full mesh but rather several large parallel internets, and how this situation will evolve is not yet clear.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="internet" /><category term="Internet" /><category term="BGP" /><category term="cloud" /><summary type="html"><![CDATA[The fact that globally the Internet is “broken” has been known for a long time and is no special secret to anyone.]]></summary></entry><entry><title type="html">Customer zero</title><link href="https://reflectionson.cloud//microsoft/2020/02/28/Customer-zero.html" rel="alternate" type="text/html" title="Customer zero" /><published>2020-02-28T00:00:00+01:00</published><updated>2020-02-28T00:00:00+01:00</updated><id>https://reflectionson.cloud//microsoft/2020/02/28/Customer-zero</id><content type="html" xml:base="https://reflectionson.cloud//microsoft/2020/02/28/Customer-zero.html"><![CDATA[<p>Every enterprise has its own optimization curse, which most often spills over into big problems for users.</p>

<p>The most recent example of such optimization comes from Microsoft. If earlier Windows updates were unstable and glitchy, which is understandable given the quantity of supported hardware and software, the latest Windows 10 releases and patches bring genuinely unexpected problems. Worth noting: previously, glitches and instability mostly concerned security and component interoperation, rather than the risk of losing all your data, as happens now.</p>

<p>According to the opinion of one Microsoft employee published in the media, the reason for such a deplorable drop in quality is simple: changes in the process of testing new builds and patches. Now most tests are automated and run on virtual machines. That means a fresh deployment every time: no “tails” from previous installations, no third-party software, no drivers. In short, the coverage of possible conflicts and problems has fallen catastrophically, which is exactly what end-users face.</p>

<p>The Insider program is not a solution, since its participants are not average users, and their number does not greatly increase the level of coverage.</p>

<p>This reminds me of a story when the same approach was implemented at VMware. Unlike in Microsoft's case, the amount of supported hardware is limited and well known. Drivers are produced either by the server vendors themselves (hi, HPE), or typical solutions, like Intel's, are used. VMware's task is to test and guarantee the quality of new functionality across its stack of products. And if so, you can automate everything and run it on virtual machines.</p>

<p>Those releases were terrible. Not only were the new features problematic, but proven ones broke out of nowhere. Patches came out with enviable regularity, the number of Known Issues exceeded the Resolved Issues, and more entries kept being added as customers installed the new releases.
After some time, when it became clear that the new system was not working, a new idea was presented: Customer Zero. The idea is very simple and was very popular inside Microsoft in the mid-90s: eat your own dog food. Simply put, your own business is the first customer you sell to, and the one on which any new functionality is tested. Results came very quickly: it turned out the company's own IT had not updated to the new versions due to stability problems, and new features, products, and functions were not needed in the form in which they were developed.</p>

<p>In the current situation with Microsoft, it remains only to wait for the flywheel to spin in the opposite direction.</p>]]></content><author><name>Konstantin Vvedenskyi</name><email>konstantin@reflectionson.cloud</email></author><category term="Microsoft" /><category term="Microsoft" /><category term="VMware" /><summary type="html"><![CDATA[Every enterprise has its own optimization curse, which most often spills over into big problems for users.]]></summary></entry></feed>