Saturday 22 September 2018

Artificial intelligence hates the poor and disenfranchised


The biggest actual threat humans face from AI has nothing to do with robots. It’s biased algorithms. And, like almost everything bad, it disproportionately affects the poor and marginalized. Machine learning algorithms, whether in the form of “AI” or simple shortcuts for sifting through data, are incapable of making rational decisions because they don’t rationalize: they find patterns. That government agencies across the US put them in charge of decisions that profoundly affect human lives seems incomprehensibly unethical. When an algorithm manages inventory for a grocery store, for example, machine learning helps humans do…
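The distinction between rationalizing and pattern-finding is easy to show concretely. Here is a minimal, entirely hypothetical sketch (the data, names, and decision rule are invented for illustration): a system that only learns statistical patterns from biased historical decisions will faithfully reproduce that bias, with no reasoning involved at all.

```python
# Hypothetical sketch: a "model" that only finds patterns in historical
# decisions reproduces whatever bias those decisions already encode.
# All data and thresholds below are invented for illustration.
from collections import defaultdict

# Biased historical loan decisions: zip code "A" (affluent) was approved
# far more often than zip code "B" (poor), regardless of income.
history = [
    ("A", 50, True), ("A", 30, True), ("A", 40, True), ("A", 35, False),
    ("B", 50, False), ("B", 30, False), ("B", 60, True), ("B", 45, False),
]

def fit_approval_rates(records):
    """Learn per-zip approval rates: pure pattern-finding, no reasoning."""
    counts = defaultdict(lambda: [0, 0])  # zip -> [approved, total]
    for zip_code, _income, approved in records:
        counts[zip_code][0] += int(approved)
        counts[zip_code][1] += 1
    return {z: a / t for z, (a, t) in counts.items()}

def predict(rates, zip_code):
    """Approve when the historical approval rate for the zip exceeds 50%."""
    return rates[zip_code] > 0.5

rates = fit_approval_rates(history)
# Two applicants with identical incomes get opposite outcomes purely
# because of where they live.
print(predict(rates, "A"))  # True  (0.75 historical approval rate)
print(predict(rates, "B"))  # False (0.25 historical approval rate)
```

The "model" never asks whether zip code is a legitimate criterion; it simply finds that zip code predicts past approvals and carries the pattern forward, which is exactly the failure mode the article describes.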

This story continues at The Next Web
https://ift.tt/2xGPlkR — Tristan Greene, September 21, 2018 at 03:00PM
