Michal Pěchouček – AI specialist developing ways of improving aviation and protecting ships from pirates


Though still only in his late 30s, Professor Michal Pěchouček is an internationally recognised expert in the field of artificial intelligence. His work includes developing machines that are capable of working together without active human input, and he has also helped create sophisticated programmes aimed at improving air traffic control – and thwarting the pirates who disrupt international shipping in the Gulf of Aden.

Michal Pěchouček
When I met Professor Pěchouček at his office at the Czech Technical University in Prague, I first asked what had led him to this field.

“I operate in the field of artificial intelligence as a basic scientific field with a number of application disciplines. I was led to this field by my studies: I studied in Edinburgh at this famous AI department, where I was taught by really charismatic professors. That opened my mind to the view that you can do something mathematically challenging that also has wide application potential.”

In your younger days, when you were a kid, were you interested in sci-fi, or robots?

“Actually, I was not. My utmost interest when I was younger was mathematics: I was really attracted by the abstract way of reasoning and thinking about problems, rather than touching any hardware. It was only when I was older that I got interested in possible applications, but I was by no means a ‘robotic’ kid.”

I understand you’ve been involved in the development of multi-agent technologies. Could you explain to our listeners what they are?

“It’s a specific approach to writing software that solves complex problems, or handles complex situations, like traffic, or weather prediction, or other complicated computational tasks.

“It brings an interaction perspective into the different pieces of a programme. So it’s about designing ways and approaches to writing software in a decentralised, distributed manner, so the individual programmes can talk to one another.”
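The decentralised idea can be sketched in a few lines: agents with local state negotiate among themselves, with no central dispatcher. This is a minimal illustration of the general multi-agent principle, not code from Pěchouček's group; the `Agent` class, its `propose`/`accept` protocol, and the capacities are all invented for the example.

```python
# Toy multi-agent task delegation: each agent holds a local task list and
# a capacity, and delegates to peers when it is full. No central controller.

class Agent:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # how many tasks this agent can take
        self.tasks = []

    def propose(self, task, peers):
        """Try to handle the task locally; otherwise negotiate with peers."""
        if len(self.tasks) < self.capacity:
            self.tasks.append(task)
            return self.name
        for peer in peers:
            taker = peer.accept(task)
            if taker:
                return taker
        return None  # nobody had spare capacity

    def accept(self, task):
        if len(self.tasks) < self.capacity:
            self.tasks.append(task)
            return self.name
        return None

# Three agents spread four tasks among themselves by talking to one another.
a, b, c = Agent("a", 1), Agent("b", 1), Agent("c", 2)
assignments = [a.propose(t, peers=[b, c]) for t in range(4)]
print(assignments)  # → ['a', 'b', 'c', 'c']
```

The point of the sketch is that the allocation emerges from pairwise messages rather than from any global scheduler.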

Basically, there’s nobody running these computers, or these robots – they’re making decisions themselves.

“That’s the ultimate goal, to investigate and study the level of autonomy of reasoning and decision-making that the technology can achieve. We are obviously motivated by maximising the level of autonomy you can have in robots.”

You’ve been involved in various areas of research. What do you think is the most valuable or rewarding that you’ve been involved in to date?

“We are extremely happy about the interest of the FAA, the Federal Aviation Administration in the US. Those people are not research-oriented; they’re really solving daily problems. They kept their eye on the research we did in the past, funded by defence organisations in the US, and they felt there was application potential.

“We are really pleased that we have a long-term relationship with the people at the FAA, and we assist them in exploring what multi-agent technologies can do for them in civilian traffic, so they can change the way air traffic is organised. This can help them save fuel and time, provide higher capacity at airports, and cut CO2 emissions.”

What about your work in modelling the behaviour of pirates? I’m not talking here about software pirates, but actual sea pirates.

“This is an area where we are putting together our research in traffic modelling, the same as with airplanes, with adversarial reasoning. It’s a very specific area of artificial intelligence, related to game playing, that allows you to reason about your opponent – be it in chess, in a card game, or in an adversarial encounter on the sea.
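The game-playing flavour of adversarial reasoning he mentions can be illustrated with a textbook minimax search. The toy game below (remove 1 or 2 stones; whoever takes the last stone wins) is an invented example, not the project's pirate model; it only shows the look-ahead principle of reasoning about an opponent's best replies.

```python
# Minimax over a toy take-away game: players alternately remove 1 or 2
# stones, and the player who takes the last stone wins. The search assumes
# the opponent always answers with their own best move.

def best_move(stones, maximising=True):
    """Return (value, move): value is +1 if the player to move can force a win."""
    if stones == 0:
        # The previous player took the last stone, so the player to move lost.
        return (-1 if maximising else 1), None
    moves = [m for m in (1, 2) if m <= stones]
    outcomes = [(best_move(stones - m, not maximising)[0], m) for m in moves]
    # The maximiser picks the highest value, the opponent the lowest.
    return max(outcomes) if maximising else min(outcomes)

value, move = best_move(4)
print(value, move)  # → 1 1  (from 4 stones, taking 1 leaves the opponent a losing position)
```

Positions that are multiples of 3 are losing for the player to move, which the search rediscovers by exhaustive look-ahead.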

Air traffic simulation
“We are modelling the goals and intentions of the pirates, and we try to reconstruct their plans. The goal of our research is to be able to model the plans of the pirates and, with this information, to determine statistically the safest route for civilian transport through the Gulf.”

How do you even begin that kind of work? Do you just collate information about what pirates have done in the past?

“Exactly. We have real live data from satellites about the movement of legitimate ships, and we have reports from the International Chamber of Commerce, which describe past attacks and past hold-ups.

“We’ve taken this information and performed an instance of cognitive modelling, where we model the reasoning behind the planning of these piracy attacks; we’ve modelled these in a computational system, a multi-agent system.”
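The route-adaptation idea can be sketched very simply: given coordinates of past attacks, score each candidate route by its closest approach to any attack site and prefer the route that stays furthest away. The coordinates, route names, and plain Euclidean distance below are simplified stand-ins, not the project's data or risk model.

```python
# Choose the "safest" of several candidate routes against historical attack
# positions. Coordinates are made-up (lat, lon) pairs; real work would use
# proper geodesic distances and a statistical threat model.
import math

past_attacks = [(12.0, 45.0), (13.5, 47.0), (11.8, 44.2)]  # illustrative data

def route_risk(route, attacks):
    """Risk = negative of the route's closest distance to any attack site."""
    closest = min(
        math.dist(waypoint, attack)
        for waypoint in route
        for attack in attacks
    )
    return -closest  # a smaller closest distance means a higher risk

routes = {
    "northern": [(14.0, 44.0), (14.2, 46.0), (14.1, 48.0)],
    "southern": [(11.5, 44.0), (11.9, 46.0), (12.3, 48.0)],
}
safest = min(routes, key=lambda name: route_risk(routes[name], past_attacks))
print(safest)  # → northern
```

The southern route passes close to two of the recorded attacks, so the scoring prefers the northern one.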

Could you tell us something about your work with unmanned planes?

“Yes, unmanned planes are a hot topic these days. It’s also a topic for the US defence organisations, and I truly believe this is a topic that will be important for our future lives and daily business.

“It will be much less an issue for defence than for our daily lives. For instance, if you would like to guard your home, to use aeroplanes to monitor your flock, or to monitor the spread of a fire in a forest. I believe there is true practical potential in using UAVs, unmanned aerial vehicles.

Pirate activity simulation
“Our approach, or the scientific problem that we are trying to address, is to study how much they need to be controlled. Currently we have one pilot on the ground who is flying one drone; that’s a one-to-one mapping.

“We believe there is potential for high scalability in the use of UAVs, having them in big numbers doing different tasks and solving different problems. There can’t be a one-to-one mapping of pilot to drone any more; we need a system that runs many drones at the same time.

“So we study how much of the planning and decision-making autonomy you can put on board an individual aircraft, and how much it can be decoupled from ground control, while still maintaining safety and the purpose of the mission.”
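In its simplest form, moving beyond one-pilot-one-drone means letting each vehicle choose its own task while an operator only posts the task list. The greedy scheme, drone positions, and task coordinates below are assumptions for illustration, not the group's actual planner.

```python
# One operator posts tasks; each drone autonomously claims the nearest
# remaining one, so no human flies any individual vehicle.
import math

def assign_tasks(drones, tasks):
    """Greedy self-assignment: each drone picks the nearest unclaimed task."""
    remaining = list(tasks)
    plan = {}
    for name, pos in drones.items():
        if not remaining:
            break
        target = min(remaining, key=lambda t: math.dist(pos, t))
        remaining.remove(target)
        plan[name] = target
    return plan

drones = {"uav1": (0, 0), "uav2": (10, 10)}
tasks = [(9, 9), (1, 1)]
print(assign_tasks(drones, tasks))  # → {'uav1': (1, 1), 'uav2': (9, 9)}
```

Real systems would add conflict resolution and safety constraints, but the sketch shows why the pilot-to-drone mapping need not be one-to-one.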

Do you ever have any qualms about how your pure research can be used for military purposes?

“The whole research we’ve done with the military, which started in 1999, is public domain research. There’s nothing classified in what we’ve done so far, and we don’t intend to modify this mode of research operation. This is good, because we can write publications, we can talk about our stuff.

“The downside is we get no feedback on how the research results have been used. We haven’t a clue. We assume that we’ve helped the military researchers open new possibilities for how to do things, but we’ve got no information as to whether a system was built, or anything like that.”

But surely research you’ve done could result in people being killed?

“My take on this is that it’s just the other way around. We truly believe that through a higher level of technology in defence, or even in attack, you are making warfare more precise, more exact, more on target.

“These days nobody is interested in maximising human lives lost. Everybody is interested in maximising the inability of the enemy to do stuff, to hurt you. That’s why we believe that the more precise warfare is, the fewer lives will be lost, and especially the lives of non-military personnel.”

How far are we today down the road towards artificial intelligence?

“Artificial intelligence as a philosophical science believed that ultimately we would have an artificial human being, with all the aspects of human intelligence. But as the science and technology progressed, people quickly learned that that was a cul-de-sac.

“Today only a few people really believe that there will be true artificial intelligence, and so the field has accepted the superiority of human beings in terms of the complexity of reasoning that people can perform. That’s why people started to study individual aspects of reasoning and perception and cognition, where computers very often outperform people: mathematics, perhaps chess playing, image recognition, lots of stuff.

“But in total, the complexity of human reasoning hasn’t been surpassed by computers, and I’m a true believer that people will remain superior in their cognitive and reasoning capability until the end of our lives.”

Are there any particularly common misconceptions about artificial intelligence?

“Yes, people may see artificial intelligence as a potential threat to the way we live. However, I think that there is no way that artificial intelligence can form the intention to harm anybody. It’s the other way around: people may misuse artificial intelligence and advanced informatics and cybernetics concepts to harm other people.

“I guess that this is the most important misconception: that people believe that computers can strike and take over the planet. But this won’t happen unless people want them to do so.”

The episode featured today was first broadcast on January 24, 2011.