- a beauty contest used an algorithm to judge contestants but, because it had been trained only on images of white women, it was found to discriminate against women with dark skin;
- a man had his driving licence revoked because anti-terrorism facial recognition software mistook him for someone else;
- over 1,000 people a week are mistakenly identified as terrorists at airports by the algorithms used there;
- an algorithm used to assess teacher performance gave poor scores to a number of teachers -- yet these same teachers had previously been rated highly by parents and school principals. The reason? The algorithm based its scores on a very small number of student results, and some teachers had tricked the system by suggesting to their pupils that they should cheat.
So: generalising from a very small number of instances. Mistaking someone for someone else. Discriminating because of narrow exposure. Being duped by another person. It all sounds eerily familiar, doesn't it?
So much for computer programs being more reliable than human beings. Turns out there's not so much difference between the two systems after all...