Algorithms have an increasingly fundamental influence on our lives, and that influence is not necessarily positive. To steer their use in the right direction, I believe two characteristics are essential when developing algorithms: they should be unbiased and transparent.
AI and chatbots make our lives easier. The insurance company Lemonade, which works entirely based on AI, assesses and pays a claim within 6 seconds. Try getting that from a traditional insurance company.
This revolutionary customer experience is possible thanks to smart algorithms, and they have a growing influence on our lives: the news we are presented with, whether we are offered an insurance policy and at what premium, whether we receive a certain medical treatment, and even whether we are convicted in court. Experts are assisted, and sometimes even replaced, by algorithms.
Those algorithms are not value-free. Machines do not start learning by themselves; they learn from data. Machines are initially fed by people with data and decision-making criteria, and people determine which data and criteria are used. People tend to be biased, and this human bias is translated into the algorithms: they mirror our biases. Worse, they have a natural tendency to amplify that bias, especially when a proper feedback loop is missing, as is the case with many algorithms.
When someone is denied insurance, it is practically impossible for the insurer to determine afterwards whether that decision was “right”. The algorithm is therefore never corrected when it has wrongly excluded someone; it is only confirmed when it has correctly admitted someone, and corrected when it has wrongfully admitted someone. In this way the original bias, probably introduced unconsciously, grows larger over time.
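This one-sided feedback loop can be illustrated with a toy simulation. The sketch below is purely hypothetical (the groups, scores, and learning rule are invented for illustration, not taken from any real insurer): two groups of applicants are equally good in reality, but one group starts with a biased, lower score. Because outcomes are only observed for accepted applicants, the disadvantaged group never generates the evidence needed to correct the bias.

```python
import random

random.seed(0)

# Toy model: one acceptance score per group, updated only from observed
# outcomes. Both groups have the same true rate of "good" applicants;
# group B merely starts with a biased (lower) score.
TRUE_GOOD_RATE = 0.8            # identical for both groups in reality
score = {"A": 0.6, "B": 0.4}    # initial bias against group B
THRESHOLD = 0.5                 # accept only if score >= threshold
LEARNING_RATE = 0.01

for _ in range(10_000):
    group = random.choice(["A", "B"])
    if score[group] >= THRESHOLD:
        # Feedback exists only for accepted applicants: nudge the group's
        # score toward the observed outcome (1 = good, 0 = bad).
        outcome = 1 if random.random() < TRUE_GOOD_RATE else 0
        score[group] += LEARNING_RATE * (outcome - score[group])
    # Rejected applicants produce no outcome at all, so a group that is
    # never accepted is never corrected: the initial bias is frozen in.

print(round(score["A"], 2))  # drifts toward the true rate of 0.8
print(round(score["B"], 2))  # stuck at the biased starting value of 0.4
```

Group A's score converges toward the true rate, while group B's score never moves: without a feedback signal on rejections, the algorithm cannot discover that its exclusions were wrong.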
An excellent book on this subject is “Weapons of math destruction” by Cathy O’Neil.
Source link of image: https://www.bol.com/nl/p/weapons-of-math-destruction/9200000053376751/
Because of biased algorithms, people can end up in Kafkaesque situations: being refused somewhere while nobody can or will explain why. You cannot object to the decision, yet the refusal is registered and, as input for other algorithms, leads to refusals in other places as well. A recipe for the structural exclusion of people, with no way to escape.
To prevent this, it is essential that algorithms are unbiased. And to determine that, algorithms must be transparent. But the workings of algorithms are often so complex that even experts find them difficult to understand, let alone an average consumer. Simply providing insight into how an algorithm works internally is therefore not the solution.
Source link of image: https://openclipart.org/detail/281449/you-are-here
However, I think people should have two fundamental rights:
- The right to know the reasoning behind a decision
For any decision taken about a person, that person should have the right to an explanation, in understandable language, of the grounds on which the decision was taken. Organizations should never be allowed to hide behind “the complex operation of algorithms”.
- The right to appeal against the outcome of an algorithm
In addition to this transparency, there must be the possibility to appeal the decision before an independent legal authority. This authority can judge whether bias played a role in the algorithm in question. If so, the decision should be reversed and the algorithm adjusted.
Source link: https://s16113770.wordpress.com/2017/01/24/the-importance-of-data-structures-and-algorithms/
The intention, of course, is that this works preventively: that organizations will develop unbiased algorithms based on unbiased data, and that they will always be able to explain their decisions in detail. Unbiased and transparent by design.
The UN Universal Declaration of Human Rights dates from 1948. It needs an update.