Boris Scherbakov, Dell Technologies - on the demonization of machines and the 5G transformation of communications

Boris Scherbakov is Vice President and CEO of Dell Technologies. In 1977, he graduated from the economics faculty of MGIMO. Until 1990, he worked in the system of the Ministry of Foreign Trade of the USSR, as well as the Ministry of Defense of the USSR. At the end of 1991, he began his career in commercial companies, becoming head of the personal computer department at the local HP office. In 1997-1998, he served as vice president for computing and office equipment, and later as vice president of marketing, at the distributor Party. He was then Merisel's senior vice president, from where he moved to Oracle, where he worked for 13 years. Until 2010, he was head of the representative office, general director, and vice president of Oracle for sales in Russia and the CIS. Later, after the acquisition of Sun, he served as CEO of Oracle Hardware and vice president of hardware sales in Russia and the CIS.

He joined Dell in 2012. As Dell CEO for Russia, Kazakhstan, and Central Asia, he was responsible for the business strategy of the company's corporate and consumer areas in the region. Since February 2017, Boris Scherbakov has headed the Russian office of Dell EMC, responsible for the development of all areas of the business: infrastructure solutions, services, and client systems. In 2019, he became Vice President and CEO of Dell Technologies in Russia, Kazakhstan, and Central Asia.

About Machine Learning

- Is it generally more profitable for humanity to invest in machine learning or in human learning?

- It's difficult to speak for humanity as a whole, but business is accustomed to choosing the best investment decision, because it affects profitability. And if the machine does the work faster and better, and at the same time is cheaper, doesn't make mistakes, and doesn't get sick, then the choice becomes obvious. Nothing personal, as they say, just business. And, of course, it is worth handing over to computers the functions they can handle faster and more accurately, in order to free staff from monotonous labor.

Photo: Anton Karliner / Haytek

On the other hand, for these machines to cope brilliantly with their tasks, highly skilled developers are required who are able to train and improve the algorithms. The issue of training people and machines is a strategic one. Imagine a state with no robots, no computers, no software, no developers. It automatically drops out of the competitive race and remains on the sidelines of technological progress and innovation. At the same time, one should not forget that technologies and developments are often experimental in nature and should not be absolutized. After all, the most stable thing in our lives, as you know, is change. We conduct special studies on how technologies are changing and will change our lives now and in the future, and we look for optimal ways to apply them in various fields of human activity. We want to make life on the planet better by giving people of different abilities access to modern solutions. We are not only transforming business through them, but also helping to transform the lives of people with large-scale ideas around the world through access to technology.

- At the same time, machine learning cannot be described as just another realization of the famous "quantity turns into quality." Why?

- Actually, no: machine learning is just a set of algorithms. Although the results obtained can sometimes be surprising. After all, with machine learning a computer is not only trained to compare patterns that occurred in the past; it can often predict future developments better, since it takes into account the whole spectrum of known factors and eliminates human error. True, life sometimes makes adjustments that even the most accurate computer cannot take into account.

But with quantity turning into quality, it is not so simple. A learning system, like a child, goes from the simple to the complex, and errors are not ruled out. At the same time, the speed of information processing and the accuracy of the results improve with each iteration and update. But, of course, the question of correct algorithms and debugging remains.
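To make that "improves with each iteration" point concrete, here is a minimal Python sketch added for illustration (it is not code from the interview or from Dell): a toy model fitted by gradient descent, whose error shrinks with each update, much as described above.

```python
# Minimal illustrative sketch: a toy model that "learns from simple to
# complex" -- each gradient-descent iteration reduces the error, the way
# accuracy improves with each iteration and update.
def train(xs, ys, lr=0.01, iterations=100):
    w, b = 0.0, 0.0  # start from the simplest possible model
    n = len(xs)
    for step in range(iterations):
        # predictions and mean squared error for the current model
        preds = [w * x + b for x in xs]
        error = sum((p - y) ** 2 for p, y in zip(preds, ys)) / n
        # gradients of the error with respect to w and b
        grad_w = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / n
        grad_b = sum(2 * (p - y) for p, y in zip(preds, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
        if step % 20 == 0:
            print(f"iteration {step}: error = {error:.4f}")
    return w, b

# Data the model has "seen in the past"; the fitted line then predicts
# future points, with error falling at every printed iteration.
train([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.1, 9.8])
```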

- Then what is the problem? Why, despite the development of technology, do errors in the application of such algorithms still happen?

- The question is not about machines, but about the people who develop the algorithms. After all, the word "computer" literally translates into Russian as "calculator." The quality of the final result depends on how well the algorithm is specified and how the results and the calculation process are tracked. There is the concept of a "quality functional" that determines how well a machine handles data processing.
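As a hedged illustration of what a "quality functional" might look like in practice (an editorial Python sketch, not a Dell tool), here are two common quality metrics computed over a model's outputs: accuracy for classification and mean squared error for numeric predictions.

```python
# Illustrative sketch of "quality functionals": metrics that score how
# well a machine handled the data processing task it was given.

def accuracy(predicted, actual):
    """Share of classification answers the machine got right."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

def mean_squared_error(predicted, actual):
    """Average squared deviation for numeric predictions."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

# A poorly specified algorithm shows up immediately in the functional.
print(accuracy(["cat", "dog", "cat"], ["cat", "dog", "dog"]))  # 0.666...
print(mean_squared_error([2.0, 3.5], [2.0, 3.0]))              # 0.125
```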

People do not have enough time to solve the basic problems of their own evolution, such as relatively low life expectancy, which is significantly affected by congenital or acquired diseases.

Boris Scherbakov, Dell Technologies

For example, Genome International has long been using machine learning methods, backed by high-performance data centers, to study human DNA and predict, as far as possible, the probable future problems of a particular individual. The speed and accuracy of the results depend in this case on the following factors: the quality of the primary diagnostics, the historical data, the data model algorithm, the performance of the hardware cluster, and... luck. It is worth hoping for, since we have not studied biological mechanics 100%.

- By what criteria will we determine that a computer has completely surpassed humans, as happened in chess or in the game of Go?

- I believe in the potential of people and in soft skills (cross-professional skills not tied to a person's formal job duties - "High Tech"). Technologies are good in their own fields and help carry out operations faster and more accurately, at a level potentially inaccessible to humans. At the same time, there is a flip side to the coin: the human brain and human decision making have not yet been fully studied, and all systems, including neural networks, are built on data available to science and to people, that is, initially without taking the entire array into account. Therefore, you need not worry about the future of mankind, and you need not demonize machines.

Computing machines are a derivative of the human mind; their operations are faster and more accurate, but it is a person who sets the algorithm.

Boris Scherbakov, Dell Technologies

At the same time, the qualities of the people who create the algorithms are important, and not only their level of education and genius, but also their morals. After all, robots can help, but they can also destroy. Like any technology, they have historically attracted the attention of the state and, first of all, the military. In combat conditions today, robots and computers will outperform any person, but at the same time they have no ingenuity and no intuition. Their basic quality, translated into human terms, is common sense (the obviously right decision given the available data), yet we know many historical examples where ingenuity beat common sense. Therefore, I would answer that in some areas the superiority is already obvious, but it is too early to talk about total defeat on all fronts. And too early to be scared of self-learning systems, too.

The Turing test is used to distinguish a person from a machine. It was first proposed by Alan Turing in the 1950 article "Computing Machinery and Intelligence." In the test, a judge conducts a text correspondence with one person and one computer program and must determine, based on the answers to questions, who he is talking to. If the judge cannot say definitively which of the interlocutors is a person, the machine is considered to have passed the test. To test the machine's intelligence, rather than its ability to recognize spoken language, the conversation is conducted in "text only" mode, for example, using a keyboard and screen. The correspondence must be carried out at controlled intervals, so that the judge cannot draw conclusions based on the speed of the responses. In Turing's time, computers responded more slowly than humans. Now this rule is still necessary, because they respond much faster than a person.
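A hedged sketch of that "controlled intervals" rule in Python (hypothetical mechanics added for illustration, not part of Turing's paper): every reply, human or machine, is released to the judge only after the same fixed delay, so response speed carries no signal.

```python
import time

# Illustrative sketch: a text-only exchange where every answer is
# released after the same fixed interval, so the judge cannot tell
# human from machine by response speed.
REPLY_INTERVAL = 10.0  # seconds; identical for both interlocutors

def controlled_reply(respond, question):
    start = time.monotonic()
    answer = respond(question)  # may take 0.001 s or 8 s to compose
    elapsed = time.monotonic() - start
    if elapsed < REPLY_INTERVAL:
        time.sleep(REPLY_INTERVAL - elapsed)  # pad to the fixed interval
    return answer

# A stand-in "machine" respondent for the demonstration.
machine = lambda q: "I would rather not say."
print(controlled_reply(machine, "Are you a computer?"))
```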

About 5G

- Dell has stated that a "5G digital transformation" awaits humanity. What do you mean by that?

- Just what I said above: information will now be transferred not only between people, but between machines too. With the implementation of IoT and the collection and transmission of Big Data, more bandwidth and more processing capability are required. People, fortunately or unfortunately, are only limitedly prepared for an upgrade, a mostly evolutionary one (smiles). But technologies are developing, and each new stage requires the development of the entire infrastructure, pulling up the weak links. The density of IoT sensors and devices and the volumes of information they generate require new data transfer technologies, and at this stage that new technology is 5G. If it is not implemented in time, communication will become the weak link and a factor holding back the development of the entire system. Imagine working in a modern office with Internet access through a modem that is not very fast, to put it mildly. Can you get much done? How long will you wait for a single letter to download? In any case, such a situation is unacceptable. We are already walking data centers. Those who reject this strategy will lose significantly to the "transformers" in everyday tasks. High-speed data transfer between the data center and the data collection site makes edge computing possible and, by transmitting data through the cloud, avoids loading the core of the organization's information system.
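To give a sense of the scale behind that claim, here is a back-of-envelope Python sketch with illustrative, assumed numbers (they are not Dell's figures): even modest per-sensor traffic, multiplied across a dense deployment, quickly consumes the shared capacity of a previous-generation cell.

```python
# Back-of-envelope sketch with assumed, illustrative numbers: aggregate
# uplink demand of a dense IoT deployment within a single cell.
sensors_per_cell = 50_000   # assumed device density, devices per cell
bytes_per_report = 200      # assumed payload of one sensor report
reports_per_second = 1      # assumed reporting rate per device

aggregate_bps = sensors_per_cell * bytes_per_report * reports_per_second * 8
print(f"aggregate uplink: {aggregate_bps / 1e6:.0f} Mbit/s")  # 80 Mbit/s

# A 4G-era cell offers on the order of tens of Mbit/s of shared uplink,
# so sensor traffic alone can saturate it before a single person sends
# a photo -- hence the need for a new generation of networks.
```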

- Even today, the number of mobile base stations is growing critically. What will happen with 5G?

- I find it difficult to predict the number of 5G stations; after all, for us this is adjacent, complementary knowledge. I can say that the speed of information exchange, its volume and quantity are growing exponentially, and a technical solution is naturally required. 5G stations, as far as I know, if you compare them with processors in computers, deliver a greater number of operations not only by increasing their count, but also through multi-element digital antenna arrays (similar to cores and transistors on a chip). After all, we do not see a PC with thousands of cores today to increase power, but according to Moore's law we see the number of transistors on a chip doubling every 24 months. Something similar is happening with base stations and their technologies: not only their quantity, but also their quality is changing.
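As a quick arithmetic illustration of the doubling rule cited above (a Python sketch; the starting transistor count is an assumed figure, not a statement about any specific chip):

```python
# Moore's law as cited in the interview: transistor count doubles
# every 24 months, i.e. every 2 years.
def transistors_after(years, initial=1_000_000, doubling_period_years=2):
    return initial * 2 ** (years / doubling_period_years)

# Starting from an assumed 1 million transistors, after a decade:
print(f"{transistors_after(10):,.0f}")  # 32,000,000 -- a 32x increase
```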

Photo: Anton Karliner / Haytek

- According to Dell, the 5G network will be completely independent of the equipment, and all processing will be concentrated in a distributed software center. In effect, are you proposing to completely change the architecture of both the network model and of thinking in general?

- We are talking about edge, or peripheral, computing. Today there really is a tendency toward decentralization, toward processing and storing data in the same place where it is generated and used. Computation and analytics are shifting to the edge of the network, the place where large amounts of data are collected. With the era of multimedia, the volumes and speed of data transferred by mobile operators are also increasing (all the photos and videos, the live broadcasts), while users do not want to put up with delays in transmitting or playing content. For this, you need to find a "free" node located as close to the user as possible. That is, demand and reality are shaping the transformation of the hardware, the computing models, and the data processing scenarios. You can sit in a traffic jam and wait a couple of hours to reach your destination (on the whole this is safer, at first glance, though not always in reality, as we know), or you can choose a free highway and race at the maximum permitted speed.

Peripheral (edge) computing is a principle of hierarchical IT infrastructure design in which computing resources are partially moved from the core, the central data center, to the periphery, and are located in close proximity to where the primary data is created, for initial processing before transfer to a higher computing node.

A single peripheral device can now process a volume of data that previously only a large computer could handle. Peripheral computing simplifies data management by reducing randomness in the structure, increasing usability, and reducing security risks.
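A minimal hedged sketch of that principle in Python (illustrative names and thresholds, not a Dell product): an edge node processes raw sensor readings where they are produced and forwards only a compact summary toward the central data center.

```python
# Illustrative edge-computing sketch: raw readings are processed at the
# periphery; only a small summary travels to the core data center.

def process_at_edge(raw_readings):
    """Primary processing next to the data source: filter and aggregate."""
    valid = [r for r in raw_readings if 0 <= r <= 150]  # drop sensor noise
    return {
        "count": len(valid),
        "mean": sum(valid) / len(valid),
        "max": max(valid),
    }

def send_to_core(summary):
    """Stand-in for the uplink to the central data center."""
    print(f"uplink payload: {summary}")

# Thousands of raw values can stay at the edge; a three-field summary
# is all that crosses the network.
readings = [72.5, 73.1, 999.0, 71.8, 74.2]  # 999.0 is a faulty reading
send_to_core(process_at_edge(readings))
```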

- The costs of the new infrastructure are huge, so it seems logical to create a kind of "national champion" with direct state participation. Couldn't this completely kill competition in the market?

- Honestly, the question is a large-scale one, and to answer it one would have to take into account a huge amount of input data, not all of which lies on the surface. As we discussed earlier, when the human factor intervenes, sometimes non-obvious factors affect the outcome.

Photo: Anton Karliner / Haytek

Killing competition is usually short-sighted, as it negatively affects the development of both the industry and pricing policy. Today, different operators compete not only on hardware (although signal reception and communication quality are, of course, among the most important factors for the user), but also on pricing policy and even marketing campaigns. And that is good. A lack of competition would lead to stagnation among both telecom operators and solution providers. The speed of diversification is functionally tied to market demand. The emergence of monopolies is regulated by law. A single operator simply would not be able to cope with the avalanche of demand for cloud services in 2025-2030.