Burton Rast is a writer, teacher, mentor and former art director at the design agency IDEO; today he is a lead designer at Google.
“In the US, not everyone knows about the GDPR”
- Can you talk about the projects you are working on now?
- The projects I am working on now won't launch until 2020, so I can't talk about them. But I recently joined the privacy and data protection team. We work on various things, for example, making sure our products comply with the GDPR in Europe, the General Data Protection Regulation.
- Yes, I know what GDPR is.
- Well, not everyone in the States knows about it! Besides, there are things we announced at I/O, for example, federated machine learning. It is secure, distributed machine learning: models are improved without uploading data from user devices, and the improved model is then pushed back down to those same devices.
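The mechanism described here, training on each device and aggregating only model updates rather than raw data, is commonly known as federated averaging. The following is a minimal illustrative sketch; the toy linear model, the plain mean aggregation, and all names are assumptions for illustration, not Google's actual implementation:

```python
# Minimal sketch of federated averaging for a 1-D linear model y = w * x.
# Raw (x, y) samples never leave a device; only updated weights do.

def local_update(w, data, lr=0.01):
    """One gradient-descent step computed entirely on one device's data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, device_datasets):
    """The server distributes the model, devices train locally,
    and the server averages the returned weights."""
    local = [local_update(global_w, d) for d in device_datasets]
    return sum(local) / len(local)

# Three hypothetical devices, each holding private samples from y = 2x.
devices = [[(1, 2), (2, 4)], [(3, 6)], [(4, 8), (5, 10)]]

w = 0.0
for _ in range(50):
    w = federated_round(w, devices)

print(round(w, 2))  # → 2.0
```

In a real deployment the aggregation would be weighted by the amount of data on each device and protected by secure aggregation, but the core idea is the same: only weights travel, never user data.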
Photo: Off Festival
- Google recently shut down its AI ethics board ...
- I was not involved with it.
- But can you talk generally about ethics in AI?
- On my own, yes, but not on behalf of Google.
- The ethics board was shut down a week after its creation, because Google was criticized for including certain people on it. In principle, do you consider it normal that a small group of people should decide ethical questions in AI for everyone?
- Personally, no. I do not think a small group of people can articulate this ethics. Last year we published a set of AI principles that define our work in this area. We will no longer work for the military, on anything intended for weapons, or on anything that may violate human rights law. I think this can and should inspire other companies. The document should evolve over time, because AI is evolving too. We have to start somewhere, but ultimately this should be much bigger than the attempt of one company or, as you said, one group of people.
"All our data contains human bias"
- Can we even get rid of bias in AI?
- This is an evolving and rather complicated problem. Think about how fast language evolves. It is very difficult to design systems that understand how this happens and remove new kinds of bias and discrimination as they appear. These systems must be overseen by people; we need to build in the possibility of human intervention. Can someone design a perfect system, one that removes bias before it arises? Probably not. All our data contains human bias. In effect, we need to create a system that is better than people.
- Is it possible?
- You can improve systems in this direction. Today we see many smart people thinking not only about how to build the systems themselves, but also about general design and development guidelines that others can use. I think this is a good start: we can learn from the mistakes of the past and gradually create systems that leave no room for bias in the data.
- UNESCO recently published a report on gender discrimination in voice assistants: all of them have female voices. Among the necessary measures, the report's authors name the development of a machine-like, gender-neutral voice. What do you think about that?
- It is a difficult question: should people be able to choose the voice of their assistant? Or should a single decision be made that affects billions of people? Can three companies make that decision? The problem is not only gender: there are almost no voices representing people of different cultures and ethnic groups. There is still a lot of work in this area, and I'm not sure there is a simple answer.
The UNESCO report is titled "I'd blush if I could", which is Siri's response to the numerous insults users direct at her. The authors see the problem in the fact that all voice assistants use the voices of young women and, among other things, do not push back against insults based on gender discrimination. UNESCO calls for greater gender balance in the teams that develop AI: today only 12% of AI researchers are women.
“I want more privacy”
- Returning to the issue of privacy: don't you think people have been panicking too much lately?
- No. I have long been a data-protection activist; that is partly why I joined this team. I'm not the kind of person who thinks people panic too much. I'm the kind of person whose family and friends consider him crazy, at the tinfoil-hat level. It seems to me people should definitely be concerned about their privacy. If you are not concerned, then at least ask questions. And people should expect that a company using their data will tell them about it.
- Companies are now trying to explain more specifically what happens with data. What are the next steps in this direction?
- I do not think people are obliged to understand, in all its complexity, how their data is used on the Internet. People have a general understanding; our research confirms this. For example, most services are free because data is used for advertising. But when it comes to questionable people and companies that want access to data, for example through digital fingerprinting of devices, I don't think people should have to figure that out. Protecting users is the companies' job.
- So you can't say anything specific about new projects?
- I can say in general terms: users should be able to say, in a very simple way, "I want more privacy." We are working to make this a reality across all our main products.
- How? By pressing a "make everything private" button?
- Of course, we are not going to put a giant switch labeled "Privacy" into Google search. It is quite hard to get teams working on products with audiences of over a billion users, such as Search, Maps, Android and YouTube, to make changes that will greatly affect their plans. It does not happen in one day. But we are working on it.