Robots learn very quickly, but they also learn what they shouldn't

published: 2018-10-20

Robotic systems in business, law, insurance and banking are already learning very quickly on their own. They handle not only routine activities but also the analysis of data from which they can predict human behaviour. In the American justice system they even assist judges. But sometimes they also quickly learn what they shouldn't, or what they are told not to say. There is the famous case of a chatbot, a conversational robot, that had to be shut down after a few hours because, as it absorbed the attitudes of the people it was talking to, it began posting hateful statements about feminists and drifted towards radicalism.

Something similar has happened to judicial software in the United States, according to Milan Fric of the international law firm PwC Legal. Its use had to be restricted because an analysis of police statistics and case law showed that it had begun to show signs of racism: in its proposed decisions, the system assumed in advance that black defendants were more likely to reoffend than white ones. According to Fric, there have already been successful appeals in the United States after a defendant's lawyer argued that it was not clear on what algorithm his client had been convicted. The algorithm in question cannot be disclosed publicly because it is used by the judiciary and is therefore secret. Czechs, however, need not fear similar scenarios in their courts.

"Before anything like that happened in our country, i.e. the deployment of some kind of justice software, today's teenage millennials would have grown old. In addition, Czech legal norms are so complicated, often contradictory, and procedural law so convoluted, allowing matters to drag on endlessly, that even modern robots would get stuck on it. And who knows what they would learn," Karel Havlíček, founder of the Permanent Conference of Czech Law, told Právo.

Positions at risk

But otherwise, the spread of robots and automated systems across many fields is a given. "A lot of current positions in law, healthcare, transportation and banking are at risk. Just as we now live on the internet of information, in five years we will live on the internet of things and of value," said former banker Pavel Kysilka at the recent Lenders Forum in Prague, adding Microsoft founder Bill Gates's quip that people need banking, not banks. According to Kysilka, until recently it was purely the domain of banks to verify, through calculation, a client's future ability to repay a loan, drawing on credit history, tax returns and assets. "Today, technology companies can do this much more accurately based on completely non-financial data: who you communicate with, what you talk about, where you move the most, and where, what and how much you buy," Kysilka noted.
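The kind of scoring Kysilka describes can be sketched very simply. The following toy model estimates repayment risk from behavioural signals instead of bank records; every feature name, weight and score here is a hypothetical illustration, not any real company's method (a real system would learn its weights from historical data).

```python
import math

# Hypothetical behavioural features for one applicant, scaled to 0..1.
applicant = {
    "contact_stability":   0.8,  # how stable their communication circle is
    "location_regularity": 0.7,  # regular commute vs. erratic movement
    "purchase_prudence":   0.6,  # share of essential vs. impulse purchases
}

# Invented weights; a real model would learn these from repayment history.
weights = {
    "contact_stability":   1.5,
    "location_regularity": 1.0,
    "purchase_prudence":   2.0,
}
bias = -2.0

def repayment_probability(features: dict) -> float:
    """Logistic model: squash a weighted sum of signals into 0..1."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

print(f"estimated repayment probability: {repayment_probability(applicant):.2f}")
```

The point of the sketch is only that non-financial signals can feed the same logistic scoring machinery banks have long applied to credit histories.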

The influence of emotional AI will continue to grow, he said. It can already track and assess the prevailing mood of the people it communicates with, whether by phone or in writing, from certain words and word combinations, and it then works with that data. Twelve years from now, 70 percent of all clients and employees will be young people who have grown up with modern technology.
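At its crudest, assessing mood from "certain words and word combinations" amounts to counting hits against keyword lists. The sketch below is a minimal illustration of that idea; the word lists and labels are invented, and commercial emotion-AI systems use far richer language analysis.

```python
# Hand-picked lexicons; illustrative assumptions, not any vendor's product.
NEGATIVE = {"angry", "unacceptable", "complaint", "terrible", "cancel"}
POSITIVE = {"thanks", "great", "happy", "excellent", "pleased"}

def assess_mood(message: str) -> str:
    """Return a coarse mood label from positive/negative keyword counts."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(assess_mood("This is terrible, I want to cancel my account!"))  # negative
print(assess_mood("Thanks, the service was great."))                  # positive
```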

Artificial intelligence can usually act faster than humans and handle more complex tasks. "Take IBM's Watson robot for hospitals. It listens to the patient's problems, or reads them in written form, and itself suggests a treatment based on its analysis of doctors' published articles," Fric noted.
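A crude stand-in for the retrieval step Fric describes is matching a patient's complaint against article summaries by word overlap. The articles and symptoms below are invented examples; a system like Watson uses far more sophisticated language understanding than this keyword match.

```python
# Invented mini-corpus: article title -> set of keywords from its summary.
ARTICLES = {
    "migraine management":   {"headache", "nausea", "light", "sensitivity"},
    "seasonal allergies":    {"sneezing", "itchy", "eyes", "congestion"},
    "tension headache care": {"headache", "stress", "neck", "tightness"},
}

def best_match(complaint: str) -> str:
    """Return the article whose keyword set overlaps the complaint most."""
    words = set(complaint.lower().split())
    return max(ARTICLES, key=lambda title: len(ARTICLES[title] & words))

print(best_match("persistent headache with nausea and light sensitivity"))
```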

Robots make work easier and thereby also take it away from people. There will certainly be consequences in the labour market, and it will not be the first time: the first wave of automation in the West in the 1970s also intensified redundancies. But people will not end up unemployed. Thanks to robots, they can take on more demanding tasks, value-added services and strategy.