The works of Gould (date) and Benjamin (date) elaborate extensively on the presence of racism and classism in data. Gould's analysis makes clear that the scientists' prejudices biased their data and, subsequently, their intelligence tests. According to Gould, the tests created by these individuals rest on two fundamental errors: hereditarianism and reification. The three psychologists he argues against all treat the tests as measuring a real thing and believe that intelligence is inherited and remains constant throughout an individual's life. Under this view, reinforced by their biased tests, people with low IQs (by their measures) deserve to be treated harshly. In short, their tests advocate eugenic practices, which led to discrimination against immigrants and hence to classism in the data (Gould 195). At the outset of Benjamin's work, racism is likewise identified in data. Benjamin describes a 2016 beauty contest judged by AI: the system favored white contestants, as only six of the 44 winners across the various categories had dark skin. The author attributes this to bias in the algorithm, which reflected the biases of its creators (Benjamin n.p.).
Closing the racial and class divides will require using data against prevailing societal norms. For instance, to mitigate racial injustice, the norm of judicial discretion can be replaced by automated decision-makers that are consistent and free of prejudice (Eubanks 3). In this way, the technology industry could help remedy these historic forms of discrimination (racism, classism, and sexism). In the case of the robots, systems can be developed that track data and identify biased decision-making.