fairlearn.org - everything related to the Fairlearn package. Please feel free to ask any kind of question!
flake8 rule updating. Now that I look at the code, I think one of these is actually a legitimate complaint - why replace the actual exception with a ValueError?
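For context on that complaint: re-raising a bare ValueError drops the original traceback. A minimal sketch (hypothetical function, not Fairlearn's actual code) of the pattern such checkers flag, next to the chained alternative:

```python
def parse_bound_bad(text):
    """Replacing the caught exception hides the root cause."""
    try:
        return float(text)
    except ValueError:
        # the original exception is swallowed; __cause__ stays empty
        raise ValueError(f"could not parse bound: {text!r}")


def parse_bound_good(text):
    """'raise ... from exc' keeps the original exception as __cause__."""
    try:
        return float(text)
    except ValueError as exc:
        raise ValueError(f"could not parse bound: {text!r}") from exc
```

With explicit chaining, the traceback shows "The above exception was the direct cause of the following exception", which is usually what you want when wrapping a lower-level error.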
BCG GAMMA FACET Helps Human Operators Understand Advanced Machine Learning Models So They Can Make Better and More Ethical Decisions
They use the `load_boston` dataset to explain the tool without acknowledging the `B` column.
That's disappointing to see, but it's good that they responded quickly. I think it only highlights the need for people to have the social context before diving into solutions. This is especially true when strategy consultancies are marketing these tools, and especially at global scale.
ValueError: Phase 1 of the simplex method failed to find a feasible solution. (...) Consider increasing the tolerance to be greater than 3.9e-01. (...)
Should I increase `nu`? Or the constraint's `ratio_bound_slack`?
To anybody who is bored: I've found another global consultancy that didn't look at the contents of the `load_boston` dataset.
Upvotes appreciated. quantumblacklabs/causalnex#91
Hello, I had a question about the different types of biases in datasets. I couldn't find much about it online.
So far, my team and I explored two types of biases:
There is a sensitive feature that correlates with y, but it would be unfair to use it for making the prediction. Even with an infinite amount of data and a perfect predictor, the model would still use that feature, and that would be unfair.
The minority class does not correlate with y, but there are far fewer samples for training the model in that subspace (since the distribution is uneven). In the limit of infinite data this is no longer a problem: you converge to the optimal classifier, and the variable was never a sensitive one.
Are you aware of other types of biases in datasets?
Thanks for your help!
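Those two situations are easy to simulate, which might help pin down the distinction. A small numpy sketch (variable names and thresholds are mine) that produces one toy dataset of each kind:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Type 1: a sensitive feature `a` that genuinely correlates with the label.
# Even a perfect infinite-data model would keep using `a` - that's the problem.
a = rng.integers(0, 2, size=n)                    # sensitive attribute
y1 = (a + rng.normal(0, 0.5, size=n) > 0.5).astype(int)

# Type 2: a group `g` that is merely underrepresented. The label ignores g,
# but with so few samples where g == 1 a model fits that region poorly.
g = (rng.random(n) < 0.02).astype(int)            # ~2% minority group
x = rng.normal(0, 1, size=n)
y2 = (x > 0).astype(int)                          # y2 does not depend on g

corr_type1 = np.corrcoef(a, y1)[0, 1]             # strong by construction
corr_type2 = np.corrcoef(g, y2)[0, 1]             # ~0 by construction
minority_fraction = g.mean()                      # small by construction
```

In the first dataset the unfairness survives more data; in the second it shrinks as the minority subspace fills in, which matches the distinction above.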
Please approve my PR fairlearn/fairlearn#694 @hildeweerts @adrinjalali @riedgar-ms @MiroDudik @mmadaio
Hi all.
With my PR on the correlation remover out there, I feel like I might start working on some educational content for Fairlearn. My plan is to host it on https://calmcode.io, but I imagine the content could be re-used in the docs as well.
There's one thing though: I don't have a clear intuition for how the reductions actually work. The docs host an example that's a bit too theoretical for me to immediately get my head around ... so is there a call with a whiteboard that I could join soon-ish? I'm not sure if I'm the only one who feels this way, or if the usual meetings are the best place for this, but I'd certainly appreciate a better "paper-napkin" level of intuition.
`MetricFrame` with sample parameters
I'm wondering what the plan for `model-card-toolkit` is. Does anybody here know anything? There doesn't seem to be much of a community around it. I'm wondering if we have any contacts in the team that we could talk to, to see what their plans are. Also wondering whether it makes sense, or whether there's enough support, to actually fork it and create a community around it.