Abstract

Text representations learned by machine learning models often encode undesirable demographic information of the user. Predictive models based on these representations can rely on such information, resulting in biased decisions. We present a novel debiasing technique, Fairness-aware Rate Maximization (FaRM), that removes protected information by making representations of instances belonging to the same protected attribute class uncorrelated, using the rate-distortion function.
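To make the stated objective concrete, the following is a minimal sketch (not the paper's implementation) of how a rate-distortion based debiasing loss could look in PyTorch. It assumes the coding-rate form of the rate-distortion function, $R(Z;\epsilon) = \tfrac{1}{2}\log\det\!\big(I + \tfrac{d}{n\epsilon^2} Z^\top Z\big)$, and the function names (`coding_rate`, `farm_style_loss`), the `eps` value, and the per-class summation are illustrative assumptions, not definitions from the abstract.

```python
import torch

def coding_rate(Z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    """Coding-rate form of the rate-distortion function for a batch of
    representations Z with shape (n, d): 0.5 * logdet(I + d/(n*eps^2) * Z^T Z)."""
    n, d = Z.shape
    identity = torch.eye(d, device=Z.device, dtype=Z.dtype)
    scale = d / (n * eps ** 2)
    return 0.5 * torch.logdet(identity + scale * Z.T @ Z)

def farm_style_loss(Z: torch.Tensor, protected: torch.Tensor,
                    eps: float = 0.5) -> torch.Tensor:
    """Hypothetical debiasing loss: the negative sum of per-class coding rates.
    Minimizing it maximizes the rate-distortion of representations within each
    protected-attribute class, pushing same-class instances to be uncorrelated."""
    loss = torch.zeros((), device=Z.device, dtype=Z.dtype)
    for g in protected.unique():
        Zg = Z[protected == g]          # representations sharing a protected class
        if Zg.shape[0] > 1:             # need at least two instances per class
            loss = loss - coding_rate(Zg, eps)
    return loss
```

In a training loop, such a term would typically be added to (or traded off against) the task loss, so that the encoder spreads out representations within each protected class while still supporting the downstream prediction; the exact weighting and optimization scheme are beyond what the abstract specifies.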