Aleksandra Przegalinska is a philosopher of AI at MIT in Boston and at Kozminski University in Warsaw. Her work focuses on the interface between humans and machines. In this interview, we asked her how we as humans will interact with AI in the future and what consequences follow from that. (In English)

The voice matters

  • Will there be a strong or general AI in the near future?
  • People who work more in the fields of engineering, cognitive science and artificial intelligence do not perceive general artificial intelligence, or perhaps something that we could also call singularity or strong AI, as something that could actually be deployed in the near future. I would say that most probably this is a solution that is still technically not feasible. Questions regarding what consciousness is and how to model it are very open questions that have not been addressed by biology, by cognitive science or even by philosophy. This is one of the hardest problems in philosophy, so there is no area within the technical disciplines, the life sciences or the humanities that could somehow address it. At this point, at least. And therefore I don't think that there is a way to really model anything of that kind. And I do not believe that by increasing complexity a bit, consciousness will just arise out of nowhere. I think that's a very gradual process.

    I think consciousness and a certain ability to have self-reference, as in our case, the case of humans, is a very new evolutionary product. It has taken quite a while for us to develop these sorts of high cognitive and mental abilities. It cannot just be switched on. If you add more neural networks, more layers in the neural network, you will not have consciousness or some sort of self-reference, some sort of identity or subjectivity. I wouldn't exclude that by adding complexity at some point we may reach that stage, but I don't think we're there yet. We just have quite simple mechanisms. They're quite brilliant in what they do, in their narrow specialization, when you think about the current developments in deep learning. Obviously what you can see is that for certain tasks it's really sometimes a better solution than the human brain or the human mind. But in terms of general capacities to link certain problems, to understand context and to have a broader understanding of the surroundings, this is not at all happening at this point. And when I hear people like Ray Kurzweil, I believe that's a bit of wishful thinking.
  • Why is it hard for us to imagine existing alongside intelligent machines?
  • I think we do not believe that an emotional machine can be created and there's a good reason for that – we clearly are not seeing a lot going on in this particular field.
  • Where do you see the biggest consequences, and people's biggest fears, for social life? Where is AI in our everyday life?
  • I do think that there is this rising phenomenon of algorithmic decision making. People's decisions concerning daily tasks, and also the more important things like choosing a job or a career or even sometimes a life partner, become more and more algorithmic. You trust the algorithm to support you in certain decisions, or sometimes even to take over that decision. So this process is affecting us strongly, and I'm quite ambivalent here. On the one hand, obviously, I'm very amazed how certain algorithms can personalize certain solutions for you, or how sometimes they know things that you know they shouldn't even know about.

    When you look at recommendation systems, the ones that work on Netflix or on Amazon to help people make purchasing decisions and also decisions about what movie to watch, which can shape their tastes, their mindset and their reality, you have a deep neural network that makes those decisions for these people. Very often it's the deep learning system deployed by Netflix that tells you which movie to watch next, and the same goes for Amazon for discovering products. It's usually not something that you search for; it's something that is kind of given to you. [A small illustrative sketch of this logic follows after this answer.] Maybe that's a trivial example, but I think this is something that will be more widespread in the future. You will have more of these systems that perhaps not only tackle very daily decisions, like how to shorten your way to work, how to avoid traffic jams and what to buy, but also what career path to choose and which courses to take so that you can fully develop. Perhaps even how to manage your life in the best possible way to find satisfaction, because there are some ideas regarding wearable technologies, IoT and machine learning that are trying to respond to that: how to make people feel better. So I think this impacts people's lives a lot. In terms of AI this is not necessarily a fully good process, in the sense that I think it should be guarded somehow and there should be limits to it. We should not demonize it, but on the other hand I would say that we shouldn't give away all of our control and pass it over to these algorithms. The techno optimists should know that these algorithms will not solve your problems at this point. They may solve some problems or sometimes give you an interesting solution or an interesting point of view, but in most cases you cannot expect them to make your life really better by any means. That is really impacting people's lives, their social lives, their daily lives.


    Another thing is obviously the growing process of automation, which is perhaps not so much related to AI itself. This is just related to technological systems that are more and more widespread. But obviously the fact that you can now think about automating more intellectual professions, and certain occupations we feel are creative, is also a big thing that is going to impact people's lives a lot. Although I don't know the trajectory of it. My understanding is that in the near future we should rather expect more jobs to be generated by technology than jobs to be lost because of it. So I think the point where this becomes a problem is still far ahead. All the reports that I've been reading rather show that we will be overworked because of technology, not made unnecessary by it.
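As a small illustration of the recommendation logic Przegalinska describes, here is a hedged toy sketch: items are ranked for a user from learned user and item vectors, and whatever scores highest is what is "given to you". It is a plain matrix-factorization-style example, not Netflix's or Amazon's actual system; all names and numbers in it are invented.

```python
# Toy recommender sketch: rank unseen items for a user by a dot-product
# affinity between made-up "learned" user and item vectors.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, dim = 5, 8, 3
user_vecs = rng.normal(size=(n_users, dim))   # stand-in for learned user preferences
item_vecs = rng.normal(size=(n_items, dim))   # stand-in for learned item properties

def recommend(user_id, already_seen, top_k=3):
    """Rank items the user has not seen yet by predicted affinity."""
    scores = item_vecs @ user_vecs[user_id]    # dot-product affinity per item
    scores[list(already_seen)] = -np.inf       # never re-recommend seen items
    return np.argsort(scores)[::-1][:top_k]

print(recommend(user_id=0, already_seen={1, 4}))  # indices of the 3 suggested items
```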
  • This is one of the biggest fears in Germany. Germans are always very skeptical of everything digitalization brings with it.
  • I think it's very healthy. I actually very much respect the German attitude! You've always had very strong critical thinking and critical theory, a critical media theory, and technology was also included in those reflections. And I have always admired it in a way, because I thought to myself that this is exactly what the world needs. We have plenty of techno enthusiasts and also plenty of people who are just scared of technology without any merit. And here you have some sort of rational way of looking at it in order to get the best possible outcome. So I think that a healthy critical relationship with technology is something to be nurtured, as long as it does not turn into some sort of upset.
  • What's the difference between human intelligence and AI intelligence?
  • I'm very much inspired by neurobiological or cognitive-scientific approaches to intelligence that broadly say that intelligence is the capacity to interact with constantly changing environments in a successful way. Which means it allows you to adapt. It is that ability to adapt to a noisy, changing environment that is bringing you new stimuli all the time. And the capacity to really respond to that changing context is what defines intelligence. In that sense intelligence is not necessarily consciousness, and it's something that can be ascribed to all living beings in different dimensions and in different distinct ways. But nonetheless you can say that the whole living environment, consisting of all the living systems, is intelligent in one way or another. You can also say that intelligence is a certain set of different skills, in the sense that you can see it as a spectrum. You have emotional intelligence, social intelligence, computational intelligence, body intelligence and so on. And this whole umbrella together adds up to this one big intelligence.

    On that spectrum some areas can be stronger and some can be weaker, and that defines in what way you are intelligent as a system that interacts with its environment. And here I have to say that obviously humans and machines have different kinds of intelligence. So today I think it's fair to say that machines have computational intelligence. This is obviously always a matter of debate, and many different fields can ask: how can a system that works without its own will even be intelligent? But the fact is that these systems produce quite innovative outcomes in terms of solving problems. And that means adaptation to me. So I could claim that computational intelligence is there. We as humans obviously have it too, but we also have other types of intelligence that are very well developed. I would say social intelligence, for example. In philosophy we would call it the theory of other minds. The capacity to understand that others have certain intentions and mental states when they are interacting with us is a certain level of intelligence that makes us very distinct, although, obviously, animals have it, too. Machines have a bit of a problem with it. That kind of intelligence, let's say social and emotional intelligence, is not yet there, although DeepMind, the company that has been acquired by Google, is now trying to build a network that would have a theory of other minds. We'll see how far that could go. Still, it will not have any will. It will not make decisions on its own or have goals that it could pursue, but it will nonetheless be solving some problems.

    Interestingly, when it comes to body intelligence, and I think the body is a very big gate of intelligence for us and for all other living beings, some artificial intelligences or perhaps some robots could claim to have proprioceptive intelligence. They can be aware of where they stand in comparison to other objects in the room and how to keep a certain body posture. There is this company called Boston Dynamics; they build robots like that. They are equipped with a certain degree of body intelligence. But again, when you compare that to the human kind of intelligence, or a leopard's kind of intelligence, then the body aspect is very, very narrow.

    So I would say the domain of intelligence for machines is still computation, and when it comes to other domains the machines are very weak, whereas humans are very, very strong. And they can respond to their context in a very accurate manner, whereas machines are usually good only in certain well-defined spaces. However, now you have some approaches in learning that make the systems more autonomous, but still I would argue that this is in no way comparable with the kinds of intelligence tasks that we can execute quite easily. There's still a big, big difference.

    These deep neural networks that are being deployed to solve some tasks now are really doing the job. Like AlphaGo Zero. That machine, built by Google, is quite interesting because obviously a game is a changing environment, and the game of Go has so many different options on how to play. The probability of finding the optimal one is very, very low, because when you explore it, it is a universe of possibilities. It's not like a game of chess. It is actually much, much more complex, and it turned out that a deep neural network that works on reinforcement learning can actually find ways to win this game that no human has ever found. So in that sense you could say: yes, well, it was searching for some sort of innovation to adapt to a difficulty, which is playing against the human champion, and it has won tremendously. So you can have an angle of viewing these circumstances as a capacity to adapt. The same applies to the Dota games and the systems of OpenAI, the company that was co-founded by Elon Musk, among others. Dota is the kind of game where you play with other players, and very often their behavior is not really predictable. And here you had a system relying on deep neural networks that was doing that quite well. So you would say: the system probably doesn't know that it's winning. It's not that it thoroughly enjoys the fact that it has won against you; it's not enjoying the fact that it's a champion, but nonetheless it still adapts.
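AlphaGo Zero itself combines self-play, tree search and deep networks; as a much smaller, hedged illustration of the underlying reinforcement-learning idea (behavior learned purely from a reward signal), here is a toy tabular Q-learning sketch on a five-cell corridor. The environment, parameters and reward are invented for illustration only.

```python
# Toy tabular Q-learning: start at cell 0, reward 1 for reaching cell 4.
# Only a sketch of the reinforcement-learning idea, nothing like AlphaGo Zero's
# actual self-play/search pipeline.
import random

n_states, actions = 5, [-1, +1]            # move left or right
Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                       # episodes
    s = 0
    while s != 4:
        # epsilon-greedy action choice (explore on ties as well)
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s_next = min(max(s + actions[a], 0), n_states - 1)
        reward = 1.0 if s_next == 4 else 0.0
        # Q-learning update: move Q(s, a) toward reward + discounted best future value.
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# Learned policy for states 0..3 should prefer "right" (action index 1).
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(4)])
```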
  • How can we ethically train machines? How do we get machines to make ethically based decisions for real-world problems, like in the example of the trolley dilemma?
  • We are now in a nice time, because ethics in artificial intelligence is actually becoming a very important discipline. That's a good thing. It might be slightly too late in a sense, and it should have happened earlier, but at least it's there. And obviously a moral machine is just one possible way of solving it. So in the moral machine scenario you had the theoretical situation of the trolley dilemma. The machine has to face it and choose optimally. So the question was how to choose optimally, and whether it can! To my knowledge the trolley dilemma has not been solved yet. There are no good choices; that's something that we agree on. In these circumstances, when you have to choose between people's lives, there are no good choices. Still, you have to make a decision. And the idea here was quite interesting but also extravagant, because the idea was that if you harvest information from human players, you will extract certain common features of how to behave in such dilemmas, and you will find inclinations which you can model into the machine. You can build preferences. When it comes to certain systems you can build in priorities for the system, and you can exclude certain possibilities from the start. You can include some other possibilities. You can do that manually. But what you also do is rely on the system's capacity to decide, taking into account certain weights that you put into the system to prioritize certain information. So you don't really give it an explicit recipe on how to behave in a certain situation, but you rely on the fact that you fed it with certain data that had corresponding weights, and you said that in some situations it makes more sense to save more people than fewer. [An illustrative sketch of this weighting idea follows after this answer.]

    But this turned out not to be a satisfactory approach in this case, because the study was inconclusive in the sense that it turned out that different cultures have different approaches. In Asian cultures the inclination to kill elderly people rather than young people was not as clear as in Western civilizations. Which means that we would probably have completely different cars being rolled out that would somehow correlate to certain cultures. But then again there is the question of to what extent a majority vote corresponds with individuals who would choose differently in some cases. Then you have more complexity, because you need to imagine the outlier situations. Even if 60 percent of people in a certain culture argue that it's better to kill fewer people than more, this would still not be representative of all situations, and there will be some specific circumstances where this majority would decide otherwise.


    And I think that's the main reason why this study failed: it was mainly relying on statistics, and this is not a statistical problem. First of all, I don't think that we should build machines based on a certain bias. The bias here lies in the situation itself: the idea was that the trolley dilemma is frequent. It's actually not; it's quite infrequent in our reality. So that's one thing I would bear in mind if I were constructing such cars. Second of all, we are not at the level of technological development where we can actually think realistically about the highest level of autonomous vehicles. We have not solved these problems for ourselves, and we have not solved them for the machine. So either we rely on a semi-autonomous system that will make decisions, and we agree with the fact that it made them and we respect that. Which is highly controversial, because then we also ascribe a certain responsibility to those machines; I think the automation lobby is not too happy with that. Or we just say the car follows the driver: its model of driving should rely on the person it is learning from, and that would be the driver, and the car mirrors that. Or we say we can allow a certain degree of autonomy, but the final decision in such situations always relies on human insight. And then it will be more instinct-based than rationality-based or statistics-based.
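To make the "weights and priorities" idea from this answer concrete, here is a hedged sketch: candidate actions are scored with weights notionally derived from surveyed human preferences, some options are excluded outright, and the best-scoring remaining option is chosen. The option names and weights are invented and are not the Moral Machine's actual model.

```python
# Hedged sketch of preference weighting: hard exclusions first, then a
# weighted score over the remaining options. All numbers are hypothetical.
WEIGHTS = {"lives_saved": 1.0, "rule_violation": -0.5}   # invented "learned" weights

def choose(options):
    allowed = [o for o in options if not o.get("excluded")]           # hard constraints
    score = lambda o: sum(WEIGHTS[k] * o.get(k, 0) for k in WEIGHTS)  # weighted preference
    return max(allowed, key=score)

options = [
    {"name": "swerve", "lives_saved": 3, "rule_violation": 1},
    {"name": "brake", "lives_saved": 1, "rule_violation": 0},
    {"name": "accelerate", "lives_saved": 0, "rule_violation": 1, "excluded": True},
]
print(choose(options)["name"])   # "swerve" under these made-up weights
```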
  • One question that comes with it is: How can we implement ethical decisions? How can we decide which ethics should be implemented into code?
  • That's perhaps the question that is best tackled with the example of algorithmic bias. Data is never neutral. Obviously data is never neutral, and the data that we process and generate is not neutral either. That's why Google is so full of biases: because we put them there, and then we feed our machines with that. That's why, for instance, when it comes to more autonomous systems like deep learning systems, we are always trying to clean the data and make sure that whatever you feed it with is not super biased. Also because that now has real legal implications. If you create a system that is clearly biased against someone, then the person that it is biased against can sue you! That person can take you to court. And I think that's quite an effective tool to really make sure that whatever is there works out. We had this recent case with Amazon and the recruitment tool that decided not to recruit any women for Amazon. It was a machine learning system that was clearly biased against women in some specific cases. So the good news is that you now have possibilities to address bias. There are certain protected features, and those protected features can be marked in the system, so that attributes like age, gender and so on do not matter for the system. And you have to make sure that these protected features are really protected in the data sets that you have at your disposal to solve tasks. Second of all, there is always room for testing, and when you see that a system is biased, there is room for improvement. You need a post-production phase where you can address the algorithmic bias. [An illustrative bias-check sketch follows after this answer.] And it's also a lot of manual work at the beginning, in the sense that the coders also have to have certain ethical, I would say moral, values. They should be taught ethics during their studies at university, so that they also understand that what matters is not just having a big volume of data, but that the data is actually helping to solve for fairness. So there are a few approaches to solving fairness, and I think this debate is actually happening.

    And the other thing is the fact that certain deep learning systems, or most of them, are really not fully transparent. Here we have another problem: the problem of the black box and the fact that, OK, we can try to clean the data, but we still will not always understand the way the machine processed it and how it spits out certain results. And here what matters is that we try to solve it. This is a technological problem. I've recently seen the first demonstrations of tools that dissect a deep neural network, systems that allow you to dissect the network and see how the information was processed. But that was always and only in relation to image processing, not to textual or numeric data. [An illustrative saliency sketch follows after this answer.] So I think the problem ahead here is the clearest one. And thankfully big companies want to contribute, now that everybody can get sued because of that lack of transparency, bias and so on. I think this becomes a vital problem for everyone.

    I think it's also a matter of safety. So in that sense I'm quite confident that to a certain degree we can try and solve it. I'm not sure about all deep learning transparency problems but we see the movement happening towards the solution somehow.
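A hedged sketch of the kind of post-production bias testing described above: the protected feature is kept out of the model's inputs but used afterwards to compare outcomes across groups, in the spirit of demographic parity. The data, the group labels and the 0.2 threshold are assumptions for illustration, not Amazon's or any real system's audit.

```python
# Toy bias check: compare positive-decision rates across a protected group.
from collections import defaultdict

# (protected_group, model_decision) pairs, e.g. from a hiring model's test run
results = [("f", 1), ("f", 0), ("f", 1), ("m", 1), ("m", 1), ("m", 1), ("m", 0), ("f", 0)]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in results:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # e.g. {'f': 0.5, 'm': 0.75}

# Simple demographic-parity style flag: large gaps in selection rate need review.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Warning: selection rates differ across protected groups, review the model.")
```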
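And a hedged sketch of what "dissecting" an image network can look like in practice: a simple gradient-based saliency map that asks which input pixels most affect the model's top score. The tiny model and dummy image are stand-ins for illustration, not the specific dissection tool mentioned in the interview.

```python
# Gradient saliency sketch: which pixels most influence the top class score?
import torch
import torch.nn as nn

# Hypothetical tiny classifier standing in for a real image model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy input image
score = model(image)[0].max()                          # top class score
score.backward()                                       # gradients w.r.t. the input

saliency = image.grad.abs().squeeze()                  # per-pixel importance map
print(saliency.shape)                                  # torch.Size([28, 28])
```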
  • Is this a movement about putting light into the Chinese room and seeing where the decisions are and how they were made?
  • That would be exactly the case. That's the boundary we cannot cross. Obviously you can follow the Chinese example; they don't seem to care much about training or deploying deep learning solutions in many different circumstances. But I think this is a solution, and a pathway, that is quite risky. Frankly speaking, I don't think it is in full alignment with the general values of Western civilization. So I would say that perhaps what works in China will not, and should not, be working here in the U.S. and in Europe, and I think in India to a large degree, too. Many countries feel that this lack of explainability is an issue to tackle.
  • How do you measure human-machine interaction?
  • There's a diversity of methods, but what we are interested in are methods that combine qualitative studies, surveys or questionnaires with affective measures. So first of all, we are working in this field of affective computing. We're trying to process the body signals of humans while the interaction is happening, to see what signals we can detect, whether people feel stressed, and whether we can correlate the signals that we read with any kind of more general mental state that they're in. And we have detected that for some systems, in that case it was a chatbot, when the chatbot was human-like and speaking with a particular voice, it would actually make people feel quite unhappy and cause discomfort. Whereas in some other cases, when the channel of communication with that machine was still through language but only text, it turned out that the stress level is much, much lower. So we were quite interested in certain regularities in those affective responses.

    Then we would try to combine the affective responses with more statistical considerations and also with questionnaires in which we asked people how they felt. Sometimes what we saw in the affective data and how they explained what they felt were very different. And that's interesting. That's one way of doing it. What we also do is a lot of sentiment analysis. We're working with natural language processing, and here we're trying to figure out what certain phrases could mean on the level of emotion: how people like certain conversations, how they express that and what they taught the bot. Our bot was learning from humans, and this communication is going both ways. On the one hand you have the humans that feel something, say something and so on, but you also have a system that is trying to capture some of that. It's not doing it in a conscious way, obviously; it has no intention to do so. But it has to find patterns in order to communicate with the human, because that's the goal we give to it. So in the future it will be quite interesting to think about ambient intelligence, where we could see the physiological signals, the affective signals and also everything that's contained as context in the text or the content of the phrases that we use. With that, we could have a system that is more responsive and achieves certain flows of interaction with humans that are different from the ones we observe today. So for us it's quite interesting to have holistic tools that look at many different aspects of interaction and try, first of all, to understand what happens to humans, but also to find bridges for how to channel that.
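As a hedged, minimal illustration of the sentiment-analysis side of this work: a toy lexicon-based polarity score for single chat messages. The word lists are invented and this is not the group's actual natural language processing pipeline.

```python
# Toy lexicon-based sentiment scoring for one chat message at a time.
POSITIVE = {"great", "love", "helpful", "nice", "thanks"}
NEGATIVE = {"bad", "hate", "annoying", "useless", "confusing"}

def sentiment(utterance):
    """Return a crude polarity score in [-1, 1] for one message."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment("Thanks, that was really helpful!"))   #  1.0
print(sentiment("This bot is useless and annoying."))  # -1.0
```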
  • You just mentioned that for humans there's a big difference between listening to an AI and reading text. What works well for humans?
  • We were quite shocked to see how important voice is. Especially natural voice. So even today, with the newest systems like Google Duplex, you still have certain issues with tonality. When that tonality goes wrong, people sense it immediately. And when they sense it, they go against it. Voice is a very important channel of communication, and it transmits so much in terms of emotions. Obviously machines do not express emotions, and therefore their tonality is sometimes so messed up. That's something that people are particularly vulnerable to. When they hear a false tone, they immediately get alerted. Before I did the study I had no idea how important this instrument is and how often I am actually put off by that machine-like, flat, shallow voice that speaks. Even when you deepen it and you have a better modulation, there is still very often something wrong with the utterance, with the way certain phrases are uttered. In texts you don't have that problem, because it's more up to the imagination of the one who reads it. That's why I think that so many bots that are textual work much better than those that are using voice. Voice is such a sensitive channel.

    A lot of those chatbots we had on Facebook or on Twitter were really bad because they just couldn't answer basic questions. At least that's what I saw, and I can't speak for everyone, of course. But the point is rather that some tests have shown that the Turing test can be passed by some of those systems, but probably not by those that actually talk with a voice, because we sense it.
  • How does it change our behavior towards AI when we use voice as the interface (do we start developing emotions, because conversation is more natural)?
  • Voice is a very delicate matter and a very important channel for passing on the context and the emotional layers of communication. On the one hand it is better for the machine to use a voice interface, which is more natural for humans; on the other hand there is a risk attached to it. If something goes wrong with the tonality and the voice does not correspond well with the content of our conversation, the interlocutor immediately senses it and feels a certain eeriness.
  • What does it mean for our society when AI tries to imitate human gestures and mimics?
  • We have no better reference than humans, and therefore we very often try to simulate or mimic human behavior while building our machines. This is quite okay, as long as we know exactly who it is that we are dealing with. When we are no longer sure whether we are conversing with humans or machines, it creates unnecessary confusion. Therefore I believe that we should keep it as transparent as possible, especially as the level of "mimicking" increases.
  • When you were working and interacting with an intelligent humanoid robot such as Sophia: what was your biggest expectation? What do people expect when they see something like Sophia? What might be a problem with that?
  • I think deep inside many of us in a way hope that there will be a human or something alive hidden behind the robot. And often we expect these technologies to be better, we are hopeful they will be responsive at a certain level – and they turn out not to be.