
#reading #ML
Mitigating Bias in Machine Learning

Edited by @drcaberry
Brandeis Hill Marshall

"We dedicate this work to the diverse voices in #AI who tirelessly call out bias, work to mitigate it, and advocate for #EthicalAI every day.
Some of the trailblazers doing the work are
@ruha9
@timnitGebru
@cfiesler
Joy Buolamwini
@ruchowdh
@safiyanoble
We also dedicate this work to the future engineers, scientists, and sociologists who will be inspired by it to join the charge."

Book Cover:
Mitigating Bias in Machine Learning
Edited by Carlotta A. Berry, Brandeis Hill Marshall
https://www.mhprofessional.com/mitigating-bias-in-machine-learning-9781264922444-usa

This practical guide shows, step by step, how to use machine learning to carry out actionable decisions that do not discriminate based on numerous human factors, including ethnicity and gender. The authors examine the many kinds of bias that occur in the field today and provide mitigation strategies that are ready to deploy across a wide range of technologies, applications, and industries.
Edited by engineering and computing experts, Mitigating Bias in Machine Learning includes contributions from recognized scholars and professionals working across different artificial intelligence sectors. Each chapter addresses a different topic, and real-world case studies are featured throughout that highlight discriminatory machine learning practices and clearly show how they were reduced.

Mitigating Bias in Machine Learning addresses:
Ethical and Societal Implications of Machine Learning
Social Media and Health Information Dissemination
Comparative Case Study of Fairness Toolkits
Bias Mitigation in Hate Speech Detection
Unintended Systematic Biases in Natural Language Processing
Combating Bias in Large Language Models
Recognizing Bias in Medical Machine Learning and AI Models
Machine Learning Bias in Healthcare
Achieving Systemic Equity in Socioecological Systems
Community Engagement for Machine Learning
Figure 9.9 ML life cycle with bias indicators and mitigation techniques.
(inspired by Herhausen & Fahse, 2022; Huang et al., 2022; and van Giffen et al., 2022)

• Preprocessing bias mitigation techniques attempt to remove discrimination by adding more data or modifying the available training data.

• In-processing bias mitigation techniques affect the algorithm itself and the learning procedure by imposing constraints, updating the objective function, or adding regularization terms.

• Postprocessing bias mitigation techniques are applied after model training or deployment, or during the re-evaluation period, by adjusting the model's decision thresholds or its outputs, including relabeling predictions.
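The three stages above can be made concrete with a small example. The sketch below is illustrative only (not from the book): it shows one common postprocessing approach, choosing a separate decision threshold per demographic group so that selection rates are roughly equal across groups (a demographic-parity-style adjustment). All scores and group labels here are synthetic assumptions.

```python
# Minimal sketch of postprocessing bias mitigation via per-group
# decision thresholds. Illustrative only; scores below are synthetic.

def selection_rate(scores, threshold):
    """Fraction of examples whose score clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def equalize_selection_rates(scores_by_group, target_rate):
    """For each group, pick the threshold (from the observed scores)
    whose selection rate is closest to target_rate."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        # Candidate thresholds: the observed scores themselves.
        thresholds[group] = min(
            sorted(set(scores)),
            key=lambda t: abs(selection_rate(scores, t) - target_rate),
        )
    return thresholds

# Synthetic model scores for two groups; group "b" scores are shifted
# lower, so a single global threshold would select group "a" far more often.
scores_by_group = {
    "a": [0.9, 0.8, 0.7, 0.6, 0.4, 0.3],
    "b": [0.7, 0.6, 0.5, 0.4, 0.2, 0.1],
}

thresholds = equalize_selection_rates(scores_by_group, target_rate=0.5)
for group, t in thresholds.items():
    rate = selection_rate(scores_by_group[group], t)
    print(f"group {group}: threshold={t}, selection rate={rate:.2f}")
```

With these numbers, the group with lower scores receives a lower threshold, bringing both selection rates to the target. Preprocessing (reweighting or augmenting training data) and in-processing (fairness constraints in the loss) would intervene earlier in the pipeline instead.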

From the book:
Mitigating Bias in Machine Learning
Edited by Carlotta A. Berry, Brandeis Hill Marshall
https://www.mhprofessional.com/mitigating-bias-in-machine-learning-9781264922444-usa
And, following the social convention here, we will start with an #introduction

We are a Research Centre at the #UniversityOfWarwick dedicated to expanding the role of interdisciplinary methods through new lines of inquiry that cut across disciplinary boundaries.

Besides being committed to excellent research, we also offer a #PhD Programme and three #interdisciplinary Master's degrees in #bigData, #DataVisualisations, and #DigitalMedia and Culture.

You can find us at https://warwick.ac.uk/cim


Our world-class research spans disciplinary boundaries and covers a wide range of topics, such as #AI, #DigitalHealth Rights, #DataVisualisations, #DigitalGood, #CyberSecurity, #Sustainability, and #Equity, amongst others.

You can find more about our research here: https://warwick.ac.uk/fac/cross_fac/cim/research/


The problem with #AI regulation is that it is regulation about AI. Not only because it regulates something as vague as the concept of AI, but also because it again compartmentalizes, diffuses and decontextualizes systemic problems into categories.


This is not a technology problem which needs a tech solution.

News headlines are - by definition - the most succinct yet accurate summary of the story. That is what they are for. There is simply no use for a summary of news headlines. #AI is a (bad) solution in search of a problem. Again. #AppleIntelligence #AIHype #SnakeOil

https://news.sky.com/story/apple-ai-feature-must-be-revoked-over-notifications-misleading-users-say-journalists-13288716


"This article addresses the poverty of The Plan and the emptiness of its claims about #AI but, rather than a point-by-point rebuttal, it's about the underlying reasons why this #Labour government supports measures that will harm both people and the environment."

#AIActionPlan

https://www.computerweekly.com/opinion/Labours-AI-Action-Plan-a-gift-to-the-far-right


Fostering a Federated AI Commons ecosystem

A policy briefing by Joana Varon, @schock and @timnitGebru

https://codingrights.org/docs/Federated_AI_Commons_ecosystem_T20Policybriefing.pdf

> This policy paper provides actionable recommendations for the G20 to foster decentralized AI development. We urge support for an alternative AI ecosystem characterized by community and public control of consensual data; decentralized, local, and federated development of small, task-specific AI models; worker cooperatives for appropriately ...

#ai #commons #policy


[Algorithmique 4/6] "IA qu'à algorithmiser le climat ?" A podcast hosted by @mathildesaliou with @clementmarquet and Anne-Laure Ligozat

https://next.ink/podcast/algorithmique-4-6-ia-qua-algorithmiser-le-climat/

Transcript: https://www.librealire.org/ia-qu-a-algorithmiser-le-climat

#ai #environment



Join the conversation at #NISOPlus25, where we'll address #ResearchIntegrity, #OpenAccess, #AI, & more! Our program features speakers @timnitGebru (our 2025 Miles Conrad awardee) & ALA president Cindy Hohl. Register by January 10 for early bird rates!: https://niso.plus/niso-plus-2025-baltimore/
#ScholComm

I was just talking to a colleague about the AI bubble. These companies are in so deep they can't tell the truth. They are all lying about the efficacy, costs to consumers and most importantly how & when this tech works or doesn't.

Is there enough money on the line to kill over?

There's likely a trillion bucks of valuations across the industry. Billions in sunk costs, billions in C-suite remuneration, billions in VC management costs.

RIP Suchir

https://www.mercurynews.com/2024/12/13/openai-whistleblower-found-dead-in-san-francisco-apartment/

#OpenAI #AI #VC #SuchirBalaji


This whole situation is about as believable as a Russian dissident or oligarch falling out of a midrise window in Moscow. Like the Boeing whistleblower, I don't think it much matters whether "foul play" was involved, because this whole scenario is foul af.

This young man was very brave and righteous to blow the whistle and was undoubtedly under immense social and professional pressure not to. He's fucking up A LOT of people's grifty gravy train.

We should read his words

https://suchir.net/fair_use.html

#AI
