
Weapons of Math Destruction



We've heard the optimistic refrain "AI is the future" so many times by now that we rarely stop to think: when can its imperfections be harmful, or even ruin people's lives?


Let's take a journey through the book "Weapons of Math Destruction."

"Weapons of Math Destruction" is a non-fiction book written by Cathy O'Neil, which explores the use of algorithms and data-driven decision-making (what clueless managers like to call AI) in our society. The book discusses how these algorithms can be biased, unfair, and ultimately harmful to individuals and communities.

The book begins by introducing the concept of a "mathematical model": a set of rules and equations used to make predictions or decisions based on data. This is essentially how AI works, expressed as mathematical equations. The author then discusses the rise of "big data" and the increasing use of AI models in a wide range of areas, including finance, employment, and education.
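To make the idea concrete, here is a minimal sketch of what such a model boils down to: a formula that turns input data into a decision. Every feature name, weight, and threshold below is made up purely for illustration; it is not a real credit model.

```python
# A toy "mathematical model": a weighted formula that turns data into a decision.
# All feature names, weights, and thresholds here are hypothetical.

def toy_score(income, years_employed, missed_payments):
    # Each input is multiplied by a weight the model "learned" from past data.
    return 0.4 * (income / 1000) + 2.0 * years_employed - 15.0 * missed_payments

def toy_decision(score, threshold=50.0):
    # The decision is just the formula's output compared against a cutoff.
    return "approve" if score >= threshold else "deny"

score = toy_score(income=45_000, years_employed=3, missed_payments=1)
print(score, toy_decision(score))  # 9.0 deny (with these made-up weights)
```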

O'Neil argues that many of these models are "weapons of math destruction" because they can have negative consequences for individuals and society as a whole. She provides several examples of such models, including:

  1. Credit Scoring: The use of mathematical models to determine an individual's credit score. These models can be biased against low-income individuals and people of color, as they may not have access to the same financial resources as others.

  2. Teacher Evaluation: The use of mathematical models to evaluate the performance of teachers. These models can be based on flawed assumptions and may not take into account important factors, such as student background and socioeconomic status.

  3. Criminal Justice: The use of mathematical models to predict the likelihood of a person reoffending. These models can be biased against people of color and people with low incomes, because those groups have historically been arrested and incarcerated at higher rates, which skews the data the models learn from.

  4. Online Advertising: The use of mathematical models to target online advertising to specific groups of people. These models can be based on personal information that is collected without consent, and can be used to discriminate against certain groups of people.

A more detailed example: credit scoring.

Credit scoring models are used by banks to determine an individual's creditworthiness and likelihood of repaying a loan. These models are typically built by analyzing large amounts of data about past borrowing and repayment behavior, along with other factors such as income, employment history, and education. However, because they are built on historical data, they can reflect and perpetuate existing biases and inequalities in society.

For example, if historical data shows that people from certain demographic groups (e.g., a minority group that has been disadvantaged for decades) have a higher likelihood of defaulting on loans, a credit scoring model may be biased against individuals from those groups, even if they personally have good credit and are able to repay their debts. Similarly, if the model uses factors such as education level or occupation to determine creditworthiness, it may disadvantage individuals who come from lower-income backgrounds or who work in certain industries.
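Here is a deliberately toy sketch of that mechanism, using synthetic data and scikit-learn. The "group" feature is a hypothetical stand-in for any demographic proxy (such as a zip code), and none of this code comes from the book itself:

```python
# Toy demonstration: a model trained on biased history scores two equally
# capable applicants differently, purely because of a group proxy feature.
# Synthetic data only; requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)   # hypothetical demographic proxy
ability = rng.normal(0, 1, size=n)   # true ability to repay

# In this fabricated history, group 1 defaults more often at the SAME ability,
# reflecting decades of disadvantage rather than anything about individuals.
default = (ability - 0.8 * group + rng.normal(0, 1, size=n) < 0).astype(int)

# Train on the biased history, with the proxy as an input feature.
X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, default)

# Two applicants with identical ability get different predicted default risk.
same_ability = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_ability)[:, 1])  # group 1 is scored as riskier
```

Note that simply dropping the proxy column doesn't fully solve the problem, since other features often correlate with it.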

This can have significant consequences for individuals who are unfairly scored by the model. They may be denied credit or charged higher interest rates, making it more difficult for them to access financial opportunities and potentially perpetuating the cycle of poverty.

To address these issues, it's important for credit scoring models to be developed with fairness and equity in mind. This might involve re-evaluating the factors that are used to determine creditworthiness, or using more sophisticated modeling techniques that are able to detect and correct for bias.
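As a flavor of what "detecting bias" can look like in practice, here is one very simple check: comparing approval rates across groups, sometimes called demographic parity. This is a generic illustration with hypothetical data, not a technique the book prescribes, and a real audit would use several complementary metrics:

```python
# A minimal sketch of one common bias check: the gap in approval rates
# between two groups. Hypothetical data, illustrative only.
import numpy as np

def approval_rate_gap(approved, group):
    """Difference in approval rate between group 0 and group 1."""
    approved = np.asarray(approved, dtype=float)
    group = np.asarray(group)
    return approved[group == 0].mean() - approved[group == 1].mean()

# Hypothetical model decisions: 1 = approved, 0 = denied.
approved = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
group    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(round(approval_rate_gap(approved, group), 2))  # 0.6 - 0.4 = 0.2, worth investigating
```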


O'Neil argues that these models can have serious consequences for individuals and for society as a whole, across many areas of life. They can even have a chilling effect on free speech and political activity. She also argues that many of these models are opaque and difficult to understand, making it hard for people to challenge their decisions or hold their creators accountable.


In the final chapter, O'Neil calls for more transparency and accountability in the use of mathematical models. She suggests that we need to develop new models that are fair, transparent, and accountable, and that we need to involve a wide range of stakeholders in the development and implementation of these models.


Overall, "Weapons of Math Destruction" is a thought-provoking book that raises important questions about the role of AI and mathematical models in our society. It provides a valuable critique of the ways in which these models can be used to perpetuate inequality and harm individuals.


Don't get me wrong, my main problem is not with the mathematics itself (I love math), but with corporations deploying AI blindly, with no ethical review. For the engineer designing the model, high accuracy shouldn't be the only concern. Even if the model's predictions are correct 99% of the time (which is very good for an AI model), the remaining 1% matters enormously when those predictions affect people's lives. Contrast that with an AI model operating in a factory, where one can afford to have 1% of the products come out faulty and get thrown in the bin.
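To put a number on that 1%: even a tiny error rate translates into a lot of people at scale. A back-of-the-envelope calculation with hypothetical figures:

```python
# Hypothetical scale: one million loan applicants, 99%-accurate model.
applicants = 1_000_000
accuracy = 0.99
misjudged = applicants * (1 - accuracy)
print(f"{misjudged:,.0f} people misjudged")  # 10,000 people misjudged
```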

Engineers (and corporations in general) should constrain AI predictions as much as possible by accounting for more of the variables the model might be relying on. They should also make AI transparent, so that people can understand how it works and appeal its predictions.

