Robotisation as Rationalisation
Lambèr Royakkers* and Rinie Van Est
School of Innovation Sciences, Eindhoven University of Technology, The Netherlands
Submission: September 20, 2017; Published: November 20, 2017
*Corresponding author: Lambèr Royakkers, School of Innovation Sciences, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands, Tel: +31 40 2474693; E-mail: l.m.m.royakkers@tue.nl
How to cite this article: Royakkers L, Van Est R. Robotisation as Rationalisation. Robot Autom Eng J. 2017; 1(4): 555570.
DOI: 10.19080/RAEJ.2017.01.555570
Editorial
At the beginning of the twentieth century, the German social theorist Max Weber (1864-1920) developed a theory of rationalisation. Reflecting on industrialisation, urbanisation, scientism and capitalism, he found that the modern Western world had become dominated by a belief in rationality. Weber saw the bureaucracy as the paradigm of the rationalisation process in his day. It is well known that the belief in efficiency led to the redesign of the factory and of labour. Engineers not only mechanised separate actions but aimed to design the factory as one 'great efficient machine'. Rationalisation, however, took place in many social practices. Offices, airports and cities were defined in terms of flows that could be designed and mechanised in an integrated manner. Weber discussed rationalisation as a double-edged phenomenon. On the one hand, it can have many benefits, such as broader access to cheaper products and services of consistent quality. On the other hand, he was worried about the many irrationalities of rational systems. For example, bureaucracies can become inefficient because of too many regulations. Weber was most concerned about the so-called iron cage of rationality: the idea that an emphasis on rationalisation can reduce the freedom and choices people have and lead to dehumanisation.
Until recently, robots were mainly used in factories for automating production processes. In the seventies, the appearance of factory robots led to much debate on their influence on employment. The mass unemployment that was feared did not come to pass. Still, robots have radically changed the work in countless factories. Driven by a belief in efficiency, factories and labour have been redesigned over the last century. The first half of the twentieth century saw a far-reaching simplification and specialisation of work. This paved the way for the mechanisation and automation of the production process. As a result, robots have come to play a central role in this ongoing attempt to rationalise production. In essence, robotisation presents a way to rationalise social practices and reduce their dependence on people. As Ritzer [1] argues: "With the coming of robots we have reached the ultimate stage in the replacement of humans with nonhuman technology".
Both rationalisation and robotics no longer concern only factory applications. Currently, no aspect of people's lives is immune to rationalisation any more. It is important to realise, however, that before engineers began to rationalise it, the average factory was a chaotic place too. And nowadays we even have fully automated factories that require no human presence at all. We use these historical insights about industrial robots to reflect on the use of service and social robots outside the factory. One basic understanding is that the use of robots in messy social practices is only possible when these practices are organised around the robots' technological limitations. So before robots can be applied in a certain social practice, that practice first has to be adapted to the limited capacities of robots. In other words, a basic level of rationalisation of a social practice is required before robotisation, as the next and further step towards rationalisation, can take place.
The introduction of robotics into society is, however, paired with an enormous human challenge. Making use of its opportunities and dealing with its social, legal and ethical aspects calls for human wisdom. Trust in our technological capabilities is an important part of this. But let us hope that trust in technology will not gain the upper hand, because that would be a perfect formula for creating a technocracy, or, more aptly here, a robocracy. Trust in humans and the acceptance of human abilities, but also of human failures, ought to be the driving force behind our actions.
Like no other technology, new robotics is inspired by humans. This technology aims to copy and subsequently improve human characteristics and capacities. It appears to be a race between machines and us. Bill Joy [2] is afraid of this race, because he fears that humans will ultimately be worse off. His bleak worst-case scenario describes "a future that does not need us". In such a hyper-rationalised world, the robots and the elite who own them have come to control us or even replace us. Rationalisation through robotisation, however, should never be a goal in itself. The ultimate goal of robotics should not be to create an autonomous and socially and morally capable machine. That is an engineer's dream, and such a vision should not be the leading principle. Robotics, after all, is not about building the perfect machine, but about supporting the well-being of humans. There should be a balance in social practices between 'hard' and 'soft' tasks. The police take care of law enforcement and criminal investigation, but also offer support to citizens. The war must be won, but so must the 'hearts and minds' of the people. Care is about 'taking care' (washing and feeding) and 'caring for' through a kind word or a good conversation. We enjoy making love, but we especially want to give and receive love. Robotics can play a role in the functional side of social practices. But we must watch out that the focus on technology does not erase the 'soft' human side of the practice in question. Such a trap can easily lead to appalling practices: inhumane care, a repressive police force, a hardening of our sex culture and cruel war crimes.
Robotics does not exist for itself, but for society. Robotics ought to support humankind, not overshadow it. Our objective should not be to build a high-tech, robot-friendly world, but to create a human-friendly world. This begins with the realisation that robotics offers numerous opportunities for improving our lives, and with the insight that how we envelop the world in a robot-friendly environment will determine whether we seize those opportunities or end up with a dehumanised society, guided solely by a strong belief in efficiency. This implies that sometimes there simply is no place for robots. Even if, in the very distant future, there are robots that are better at raising our children than we are, we should still raise them ourselves. An important aspect of this is the notion of personal responsibility and the human right to make autonomous decisions and mistakes. Even if robots can carry out some tasks better than humans can, it might still make more sense for humans to carry on doing those tasks, despite doing them less well.