Fairness Literature
Thursday, November 12, 2020
Self-Supervised Learning
- Pretext tasks: prediction of labels, segments, distortions, colorizations.
- Applications include terrain estimation and depth completion to identify distances to objects while driving.
Fairness
- Want to guarantee stable performance even if the data distribution changes.
- Causal graphs can be used to identify transferable knowledge while maintaining fairness constraints on protected attributes.
- Datasets: group-identifier bias, African American English dialect bias, COMPAS recidivism.
- False positive bias (“I am a gay man” given high toxicity scores).
- Demographic parity: subjects in protected and unprotected groups have equal probability of being assigned to the positive predicted class.
- Equalized odds: subjects in protected and unprotected groups have equal TPR and equal FPR.
- Equality of opportunity: subjects in protected and unprotected groups have equal FNR (equivalently, equal TPR).
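These three criteria reduce to comparing per-group rates. A minimal numpy sketch (function name and metric dictionary are my own, not from the readings):

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Per-group rates behind common fairness criteria.

    y_true, y_pred: binary arrays of labels and predictions.
    group: binary protected-attribute array (0 = unprotected, 1 = protected).
    """
    out = {}
    for g in (0, 1):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        pos_rate = yp.mean()                                   # P(Yhat=1 | A=g)
        tpr = yp[yt == 1].mean() if (yt == 1).any() else np.nan
        fpr = yp[yt == 0].mean() if (yt == 0).any() else np.nan
        out[g] = {"pos_rate": pos_rate, "tpr": tpr, "fpr": fpr}
    # Demographic parity: pos_rate equal across groups.
    # Equalized odds: tpr AND fpr equal across groups.
    # Equality of opportunity: tpr equal across groups (FNR = 1 - TPR).
    return out
```

Comparing `out[0]` against `out[1]` then tells you which criteria (approximately) hold.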
- Use an adversarial network: a discriminator attempts to predict the protected attribute from the model's output, with a loss based on the above criteria.
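A toy sketch of the adversarial setup, assuming a linear predictor and a one-parameter adversary that tries to recover the protected attribute from the predictor's score (data, names, and hyperparameters are all illustrative, not from the readings):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: task label y and protected attribute a come from independent features.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)   # task label
a = (X[:, 1] > 0).astype(float)   # protected attribute (hypothetical)

w_clf = rng.normal(size=5) * 0.01   # predictor weights
w_adv = np.zeros(1)                 # adversary reads the predictor's score

lam, lr = 1.0, 0.1
for _ in range(200):
    score = X @ w_clf
    p_y = sigmoid(score)                 # predictor's task output
    p_a = sigmoid(w_adv[0] * score)      # adversary's guess of a from the score

    # Logistic-regression gradients of the two BCE losses.
    g_task = X.T @ (p_y - y) / len(y)                 # predictor: fit the task
    g_adv = np.array([np.mean((p_a - a) * score)])    # adversary: recover a
    g_leak = X.T @ ((p_a - a) * w_adv[0]) / len(y)    # how w_clf helps the adversary

    # Predictor descends task loss but ASCENDS the adversary's loss
    # (gradient reversal); adversary descends its own loss.
    w_clf -= lr * (g_task - lam * g_leak)
    w_adv -= lr * g_adv
```

The `- lam * g_leak` term is the adversarial part: the predictor is pushed toward scores from which the protected attribute cannot be recovered.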
Causal Inference
- Gold standard of causal inference is experimentation (randomized, controlled trial)
- Combine data from different contexts to infer causation: pooling datasets can reveal whether the context variable itself causes factor X in the pooled data.
- Uses directed graphs
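The "gold standard" point can be illustrated with a toy simulation (effect sizes and variable names are made up): a confounder inflates the naive observational contrast, while randomized assignment recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Confounder U drives both treatment uptake and outcome in observational data.
u = rng.normal(size=n)
t_obs = (u + rng.normal(size=n) > 0).astype(float)   # self-selected treatment
y_obs = 2.0 * t_obs + 3.0 * u + rng.normal(size=n)   # true causal effect is 2.0

# Naive contrast is biased upward because treated units have higher U.
naive = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()

# Randomized controlled trial: treatment independent of U.
t_rct = rng.integers(0, 2, size=n).astype(float)
y_rct = 2.0 * t_rct + 3.0 * u + rng.normal(size=n)
rct = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()   # close to 2.0
```

Randomization breaks the U → T edge in the causal graph, which is exactly why the RCT contrast identifies the causal effect.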
Transfer Learning
- BERT embeddings for pronoun resolution are gender-biased, performing worse on female pronouns.
- BERT hate speech classifiers correlate protected groups (e.g., Muslim, Black) with hateful language.
- De-biased models can be transferred with downstream fine-tuning and remain less biased; this also holds when transferring to a different domain.
- Fair-MAML and fairness warnings
Idea
- How does fairness interact with domain adaptation and transfer?
- Focus on self-driving cars.
Links
Unread