Earlier this week, Google announced that it was piloting a machine learning intensive for college students. Today, its broader Machine Learning Crash Course is adding a new training module on fairness when building AI.
As machine learning adoption grows, ethics and fairness become increasingly important considerations. While AI has the “potential to be fairer and more inclusive at a broader scale than decision-making processes based on ad hoc rules or human judgments,” underlying biases may be present in the data used to train these models. Other challenges include ensuring that AI behaves fairly across all situations, especially since, more broadly, there is “no standard definition of fairness.”
As ML practitioners build, evaluate, and deploy machine learning models, they should keep fairness considerations (such as how different demographics of people will be affected by a model’s predictions) at the forefront of their minds. Additionally, they should proactively develop strategies to identify and ameliorate the effects of algorithmic bias.
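One simple, proactive check of that kind is inspecting how demographic groups and outcomes are distributed in the training data before any model is trained. Here is a minimal sketch of such a check using pandas; the dataset and column names are hypothetical, not taken from Google’s materials:

```python
import pandas as pd

# Hypothetical training data; "group" and "label" are illustrative names.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B"],
    "label": [1, 1, 0, 1, 0, 0],
})

# Is each demographic group adequately represented in the sample?
print(df["group"].value_counts(normalize=True))

# Do positive-label rates differ sharply between groups? A large gap
# can point to sampling or labeling bias worth investigating.
print(df.groupby("group")["label"].mean())
```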
Google’s previously internal Machine Learning Crash Course has released a new self-study training module on fairness. This hour-long program was developed by Google’s engineering education and ML fairness teams, and will discuss:
- Different types of human biases that can manifest in machine learning models via data
- How to identify potential areas of human bias in data before training a model
- Methods for evaluating a model’s predictions not just for overall performance, but also for bias (see the sketch after this list)
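To make the last point concrete, evaluating for bias as well as overall performance typically means slicing metrics by demographic group. Below is a minimal sketch of that idea using scikit-learn; the data and group labels are hypothetical and not drawn from Google’s course:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical labels, predictions, and a demographic attribute
# for each example (purely illustrative data).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

# Overall accuracy can look fine while per-group metrics diverge.
print(f"overall accuracy: {accuracy_score(y_true, y_pred):.2f}")

for group in np.unique(groups):
    mask = groups == group
    acc = accuracy_score(y_true[mask], y_pred[mask])
    # False positive rate = FP / (FP + TN); a large gap between
    # groups is one common signal of algorithmic bias.
    tn, fp, fn, tp = confusion_matrix(
        y_true[mask], y_pred[mask], labels=[0, 1]
    ).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    print(f"group {group}: accuracy={acc:.2f}, FPR={fpr:.2f}")
```

In this toy data, both groups score the same accuracy (0.75), yet the false positive rate is 0.00 for group A and 0.50 for group B, illustrating why overall performance alone can mask bias.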
Google is also updating its Machine Learning Glossary with new entries related to fairness.
Google says these entries “provide clear, concise definitions of the key fairness concepts discussed in our curriculum, designed to serve as a go-to reference for both beginners and experienced practitioners,” and hopes they “will help further socialize fairness concerns within the ML community.”