GoodAI’s Marek Rosa Part 2: An emerging artificial intelligence would have to understand our world without being motivated by human emotions

Photo: archive of GoodAI

In the last edition of Czech Life, Marek Rosa, the founder and CEO of the Prague-based company GoodAI, discussed the means by which he and colleagues are working to develop something the world has not yet seen: Strong Artificial Intelligence.

Marek Rosa, photo: archive of Marek Rosa
For some the concept is science fiction, but there is no question that the future of AI is being taken very seriously by many as something which could one day help humankind tackle seemingly impossible tasks. But what about the dangers pointed to by people like Stephen Hawking or Elon Musk?

In Part 2 of our interview, I ask Marek Rosa whether the aim is to instil an emerging intelligence with rules of behaviour or a moral sense.

“That is certainly our goal. We want to achieve this by educating the AI in the School for AI. From the very start you have to teach the AI about our world, what we are, and then to teach the AI through various methods. The point is to align the AI as much as possible with us, so that its actions are in line with ours.”

Your company had a big start: what are some of the most recent milestones?

“Some of them are up at our website, but right now I consider most important the fact that we have published our Framework document and Roadmap. The first outlines the principles and methodology behind what we do and why we are doing things a certain way, and the second, the Roadmap document, shows the steps that we need to solve to get to a human-level AGI, or an AI with a human-level skill set. These documents show the big picture.”

Some researchers believe that this kind of AI could be solved within our lifetimes. What is your view?

“Actually, I hope that it could be sooner than that.”

Well, you say that on your website too, but I imagine that there are still many, many barriers before anything like that can be achieved.

“Yes, certainly there are. The reason we want to do it quicker is that we think about how much is lost with every day that goes by without this technology. That is a negative which pushes us to work on this as quickly as possible. I don’t want to make a definite prediction, but I think the task could be solved in 10, 20 or 30 years.”

Roadmap, photo: archive of GoodAI
To come back to the aspect of danger: if AI is achieved, there will be key differences, I suppose: it won’t be humanlike in the sense of having emotions. Presumably it wouldn’t have the same kinds of motivations, not being guided by emotions, in any case. In what ways could AI, handled wrongly, be malevolent? In what ways could it be dangerous for us?

“It is a matter of the AI receiving as much input as possible and understanding it, in order to properly assess the situation. A bad decision is exactly what we don’t want to happen. If a self-driving car has input from video sensors, it has to recognize that children playing on the road are children and react accordingly. The same is true of other AI: to assess the situation and make the proper decision. If the AI or the car had never experienced that kind of situation, even in simulations, it could evaluate the situation wrongly. So the AI needs to understand the world, but it doesn’t have to be motivated by human-like emotions.”

Hypothetically, is there a danger that an improperly trained AI which truly became self-aware could come to the conclusion that humankind is no longer necessary?

“Yes, there is of course a risk, if it were not properly educated. It is the goal of most in this field, including us, to prevent anything like that from happening. We need to create an AI which is beneficial and which would never reach a conclusion like that.”

It must be exceedingly difficult to imagine what a machine consciousness would be like: we have no point of reference. How would we even know at what point the line had been crossed, that the so-called Singularity had been achieved?

“I think that it will be very hard. You basically have to have tests of whether the system has consciousness [a Turing test or variations thereof – Ed. note], ways to communicate. That is the only way. What is in our favour is that we can run the AI through many simulations, and we will be able to come up with scenarios where we will be able to test how the AI reacts. Again, there is every need to increase the likelihood that we have tested the AI properly.”

You mentioned space travel before. Scientists talk about the small window of opportunity there is, perhaps more limited than we realize; then, there are enormous problems with the environment, overpopulation, dwindling resources… Are these things that the General AI or Strong AI would be able to tackle?

“I think there will always be limited resources, certainly when it comes to both time and space. It is possible the AI will be able to gather more or new resources, but that will take time. I think there are certain limits, certainly to what the human mind and body can do. I see so many advantages if you try and move this to the next level of AI and artificial life: these can be tools for exploring the universe, since the human body isn’t really designed to travel far in space. On our own we are at such a disadvantage.”

Photo: archive of GoodAI
Much of the mindset is that the hunt for AI is a race: companies are rushing to find a solution and, as you said yourself, the sooner the better. But you also said on your website that it is not about competition. Could you explain?

“Well of course it is a race: everyone wants to come up with something new and so on. We are not alone and many people are doing research into AI. Our view is that if you get together to share ideas and solutions, you can get to a final result faster and maybe more safely, because you create a kind of trust between each other; it should not be that one group learns another is close and begins to rush ahead with something like shady tactics. At the moment the scientific community is very connected, sharing knowledge and new findings, and I think this will only increase in the coming years.

“One thing is that there are not as many companies searching for General AI as you might think, certainly not companies that would pursue that goal from the outset. Most are looking at narrow AI or treating General AI as something which will be looked at far into the future. Not us. We are looking at the problem from the view of the big picture, and so far I think that we are taking a good approach, that there is nothing we have missed. I could argue about some of the other approaches, where I see disadvantages. I am quite confident in how we are approaching the task ahead.

“On top of that, we want to launch an AI roadmap institute which would combine roadmaps from different teams from all around the world and bring different ideas together for further analysis, debate and refinement of approach. What I like is that we have this Roadmap and are looking at the big picture, and we want to launch this soon. In my opinion, I couldn’t think of anything more important or worthwhile to do or focus on (and to have put my own money into) than General AI: because when we solve General AI, we will also have solved everything else.”