In the scikit-learn repo, this is the conf.py that you pointed to in the description of the issue:
I'm confused about what we should put in requirements.txt for the dependencies. If you look at the scikit-learn repo's requirements - https://github.com/scikit-learn/scikit-learn/blob/309f135c3284d7db6e23ca81a87948c7066a3949/doc/binder/requirements.txt - it's confusing.
What should we put in the requirements.txt in the case of the fairlearn repository?
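For reference, a minimal doc/binder/requirements.txt sketch for a fairlearn Binder environment might look like the following. This is an assumption about what the notebooks need, not an official list; pin versions as appropriate:

```
# Hypothetical doc/binder/requirements.txt sketch for Fairlearn.
# Binder installs these on top of its base image; only the runtime
# dependencies of the example notebooks need to be listed here.
fairlearn
scikit-learn
matplotlib
pandas
```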
predict_proba. Note that our meta-estimators like ExponentiatedGradient do NOT produce probabilities the same way, so we've very consciously chosen not to name it predict_proba.
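To illustrate the distinction, here's a conceptual stdlib-only sketch (not Fairlearn's actual implementation; the class and method names are made up). ExponentiatedGradient returns a randomized classifier, i.e. a weighted mixture of base classifiers, so the "probability" one could report is the chance that the mixture predicts 1 - a property of the randomization, not a calibrated class probability from a single model:

```python
import random

# Conceptual sketch of a randomized classifier like the one
# ExponentiatedGradient produces (simplified, hypothetical form).
class RandomizedClassifier:
    def __init__(self, classifiers, weights):
        self.classifiers = classifiers  # list of callables x -> 0/1
        self.weights = weights          # mixture weights, summing to 1

    def predict(self, x):
        # Sample one base classifier according to the mixture weights,
        # so repeated calls on the same input can disagree.
        h = random.choices(self.classifiers, weights=self.weights, k=1)[0]
        return h(x)

    def mixture_prob(self, x):
        # P(prediction == 1) under the mixture -- NOT a calibrated
        # predict_proba, just the chance the randomization outputs 1.
        return sum(w for h, w in zip(self.classifiers, self.weights) if h(x) == 1)

always_one = lambda x: 1
always_zero = lambda x: 0
clf = RandomizedClassifier([always_one, always_zero], [0.7, 0.3])
print(clf.mixture_prob("any input"))  # 0.7
```

This is why reusing the name predict_proba would be misleading: callers would expect per-class probabilities from one model, not the mixing distribution of an ensemble.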
Assessment: our user guide covers both classification and regression metrics https://fairlearn.org/main/user_guide/assessment.html#metrics
Mitigation: The table shows which techniques should work for regression: https://fairlearn.org/main/user_guide/mitigation.html (everything but ThresholdOptimizer)
Admittedly, the user guide isn't super comprehensive yet and has some gaps, but there is a regression section for fairness constraints that can be plugged into our
fairlearn.reductions techniques: https://fairlearn.org/main/user_guide/mitigation.html#fairness-constraints-for-regression
We don't have examples with regression mitigation yet, but this test case code might be useful to you: https://github.com/fairlearn/fairlearn/blob/62fc80c77bcd3bef6a3d7bc44e54827ec9fb8d09/test/unit/reductions/grid_search/test_grid_search_regression.py#L64
If you're willing to explain your use case, we could try to advise you on how to use Fairlearn. Let us know!
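Conceptually, what MetricFrame does on the assessment side is disaggregate a metric by sensitive feature. A minimal hand-rolled sketch for a regression metric (mean absolute error), using only the standard library - the helper name group_mae is made up for illustration and is a stand-in for MetricFrame(...).by_group, not Fairlearn's API:

```python
from collections import defaultdict

def group_mae(y_true, y_pred, sensitive_features):
    """Mean absolute error per group -- a hand-rolled stand-in for
    what MetricFrame computes as .by_group."""
    errors = defaultdict(list)
    for yt, yp, g in zip(y_true, y_pred, sensitive_features):
        errors[g].append(abs(yt - yp))
    return {g: sum(e) / len(e) for g, e in errors.items()}

y_true = [3.0, 2.0, 5.0, 4.0]
y_pred = [2.5, 2.0, 4.0, 6.0]
groups = ["a", "a", "b", "b"]
print(group_mae(y_true, y_pred, groups))  # {'a': 0.25, 'b': 1.5}
```

The per-group values are what you'd compare (e.g. their max difference or ratio) to assess disparity for a regression model.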
MetricFrame. Is this discussion still ongoing? I see the thread is still active here: fairlearn/fairlearn#756
MetricFrame would stay as is.
I'll be making a calmcode course on fairlearn soon, so I figured I'd ask ... are there common points of confusion that would be nice to address?
Some that come to mind: