22 Feb 2016

Interview: Dr Benjamin Rosman, CSIR, South Africa.

South Africa's mining industry has many uses for robotics, with the primary role being dangerous work.


Dr Benjamin Rosman discusses the way robotics will develop in Africa and how some of the legal issues impinge. He is a senior researcher in the Mobile Intelligent Autonomous Systems group at the Council for Scientific and Industrial Research (CSIR), South Africa.

RLJ: How is the robotics sector developing in South Africa and the rest of Africa?

The uptake of technology seems to be slower in the developing world than in the developed world, particularly for hardware, which typically requires more infrastructure. There are also challenges to the uptake of robotics around the perceived loss of jobs that would follow.

People talk of the ‘three Ds’ as application areas in relation to use of robotics rather than humans - Dull, Dirty and Dangerous. Dull and Dirty applications tend to be avoided in the developing world because we have a substantial workforce here and employment is a major concern. So the primary roles for robotics in Africa are, typically, around Dangerous work, or providing services that can’t otherwise be provided.

One of the major application areas in Africa is mining. Conditions in mines tend to be very difficult for humans, often poor and dangerous.

There are many things that people do better than robots - tasks that require dexterity, for example, and certain kinds of reasoning. Humans are more flexible physically and intellectually. We deal better with challenges that are unanticipated. Robots are good at repeatedly performing one well-defined activity all day and all night.

There is a misconception that robots and computers are systems in which large numbers of rules have been constructed by hand. These days such tasks are instead solved using machine learning, which allows us to specify examples of desired and undesired behaviour and leaves it to the system to determine the rules that best describe them. Importantly, no human is then directly responsible for the resulting performance of the system.
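The point about learning rules from examples can be illustrated with a toy sketch. Here the "rule" (a safe-distance threshold for some hypothetical robot behaviour) is derived from labelled examples by the program itself rather than written down by a developer; all the numbers and names are invented for illustration.

```python
def learn_threshold(examples):
    """examples: list of (distance_in_metres, is_safe) pairs.
    Returns the threshold that misclassifies the fewest examples,
    treating distance >= threshold as 'safe' (desired behaviour)."""
    candidates = sorted({d for d, _ in examples})
    best_t, best_errors = None, len(examples) + 1
    for t in candidates:
        # Count how many labelled examples this candidate rule gets wrong.
        errors = sum((d >= t) != safe for d, safe in examples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Labelled behaviour examples: close approaches were unsafe, wide ones safe.
data = [(0.2, False), (0.4, False), (0.6, False),
        (1.1, True), (1.5, True), (2.0, True)]

threshold = learn_threshold(data)
print(threshold)  # the learned rule: keep at least this distance → 1.1
```

No human wrote "stay at least 1.1 metres away"; the system extracted that rule from the data, which is exactly why responsibility for its behaviour becomes hard to assign.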

Where will we be in five years?

I predict that we are going to start seeing more and more of these developments happening on a small scale. There will be more companies offering drone services, for example, and working in war- and disaster-ravaged areas. I would expect this to happen incrementally - but I could also imagine in the right circumstances something just exploding onto the scene.

The drive for automation in manufacturing is another key difference between developing and developed economies. A country such as the USA has significant motivation to streamline manufacturing processes for competitive reasons, which inevitably leads to greater automation. On the other hand, developing economies can typically rely on a larger, relatively cheap, workforce.

Regarding autonomous cars, you will see that happening everywhere, but I don't think you will soon have large fleets of autonomous vehicles in Africa.

How are the legal issues developing?

We are already seeing some of the legal issues developing around driverless cars. Similar questions are being asked about drones. The main question is: when something goes wrong, where does the responsibility lie? It is not clear in general what the answer is. It is likely that the answer will be different between the application areas. This same question applies across the board to all aspects of robotics and autonomous systems.

Concretely, with driverless cars, who's responsible if something goes wrong? You can't blame the driver, because he might be in the passenger seat reading. If you place the responsibility on the developer of the technology, you are discouraging development. If it was an obvious fault in the system you could hold the manufacturer responsible, but if there is some unusual confluence of events and the car behaves in a slightly unusual way, what do you do? This is a particularly relevant question for learning systems.

You can design a driverless car so that it drives safely, politely, and conservatively but that is not necessarily how people drive. People display a lot of complicated signals, with these differing between places and cultures - and other drivers understand the unspoken protocols of the way you drive. If driverless cars aren’t able to encapsulate these protocols, it would almost be as if they were foreigners unaware of local customs when on the road. So if something went wrong in these circumstances, who do you blame? You might have a car designed in the US which you take to other parts of the world. What happens when they misinterpret the signals that are given by other cars on the road? There is much scope in the legal world for discourse on these issues.

More discussion is needed about robots in factory and domestic settings. A current hot topic of research is 'compliant' robots. If they hit you, these robots have some 'springiness' to absorb some of the impact, and they also realise that they have hit you and they stop. These robots are designed to be safer around humans than traditional non-compliant robots [large pieces of machinery, usually cordoned off from humans in factories] but there are, inevitably, going to be issues.
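The "realise they have hit you and stop" behaviour can be sketched as a torque-discrepancy check: when a joint's measured torque departs too far from what the motion model predicts, the controller assumes an unexpected contact and halts. Everything here, including the threshold, is an illustrative assumption rather than any real controller's API.

```python
# Hypothetical contact threshold: newton-metres of torque above expectation.
CONTACT_TORQUE_LIMIT = 5.0

def should_stop(expected_torque, measured_torque):
    """Return True when the torque discrepancy suggests a collision."""
    return abs(measured_torque - expected_torque) > CONTACT_TORQUE_LIMIT

def control_step(expected_torque, measured_torque, command):
    # Replace the planned motion command with a halt on suspected contact.
    if should_stop(expected_torque, measured_torque):
        return "HALT"
    return command

print(control_step(2.0, 2.5, "MOVE"))  # small discrepancy → MOVE
print(control_step(2.0, 9.0, "MOVE"))  # unexpected load → HALT
```

Real compliant robots combine this kind of monitoring with mechanically springy joints, so the impact is softened even before the software reacts.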

For example, if a robot’s job is to put knives into a box what should you do to protect humans in case a knife slips? Many modern robots can now be trained by workers in the factory, rather than being pre-programmed by the developers. In these cases, a worker may drag the robot’s arm around the knives and show it what actions to take to pack the knives, and how to do so safely. But, if something goes wrong, is it the designer’s fault…or is it the fault of the person in the factory who trained the robot?

There is concern that programmes and other works designed by robots might not be protected by copyright. What are your concerns in this area?

I can imagine general settings where a robot is given some requirements for a problem, and the resources available, and is asked to come up with a solution. This is a standard optimisation problem, and in solving it the robot could come up with something different and new. If that output is not protected by copyright, it poses another huge question: who owns that product? Giving the rights to a piece of software doesn't make sense. Giving them to the company which asked the robot to solve the problem may not make sense either, as all it did was ask for a problem to be solved. Similarly, many researchers work on problems such as automated music composition or artwork generation. This would affect those areas too.
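A minimal sketch of "requirements plus resources in, novel design out": given a parts catalogue and a budget, an exhaustive search picks the best combination. The components, costs, and scores are all invented for illustration; the point is that the resulting design is produced by the optimiser, not authored by a person.

```python
from itertools import combinations

# Hypothetical parts catalogue: name -> (cost, performance score).
components = {"motor_a": (40, 7), "motor_b": (25, 4),
              "sensor_x": (30, 6), "frame_y": (20, 3)}
BUDGET = 70  # the "resources available"

def best_design(parts, budget):
    """Brute-force search for the highest-scoring design within budget."""
    best, best_value = (), 0
    for r in range(len(parts) + 1):
        for combo in combinations(parts, r):
            cost = sum(parts[p][0] for p in combo)
            value = sum(parts[p][1] for p in combo)
            if cost <= budget and value > best_value:
                best, best_value = combo, value
    return set(best), best_value

design, value = best_design(components, BUDGET)
print(design, value)  # the machine-generated "product"
```

The open question in the interview is exactly who, if anyone, owns the combination this search produces.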

There is a tendency for people who go into computer science to prefer the notion of open source, and to not care strongly about the ownership of these kinds of works. Many would say that what they produce should be available to all. You would not want a predatory company coming along and somehow obtaining rights to these automatically generated products, and then charging people to use them. That would be harmful to the entire industry, if not society more broadly.

How much of an issue is the fear of job loss in the development of this area?

I don’t see it as a major stumbling block at the moment - but it could be soon. Honestly, it isn’t the case that we don’t have robot butlers in our houses because of the social issue. This is still more because of the technological issues. However, in the future, these problems will relate more to the legal, financial and social issues.

On the financial side, costs of the equipment required for driverless cars, for instance, are coming down now. The legal aspects are still a major issue. On the societal side, there are other issues relating to the acceptance of new technologies. For example, Google Glass was plagued by various privacy concerns. Within a generation, I can imagine that won’t be an issue anymore: people will be more used to being constantly filmed and documented.

But now, you can have a start-up anywhere in the world which acquires funding and starts producing some new world-changing technology. Society is welcoming this kind of innovation. This is the right time for us to be asking these questions.

Dr Rosman is also a Visiting Lecturer in the School of Computer Science and Applied Mathematics, at the University of the Witwatersrand, South Africa.

