diff --git a/doc/index.rst b/doc/index.rst
index 7fa9251..bc9a32d 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -1,8 +1,73 @@
-Welcome to Orange3 Fairness documentation!
-===================================================
+Orange3 Fairness Add-On
+========================
+
+**Orange3 Fairness** is an add-on for the `Orange Data Mining `_ software. Orange3 is a data mining and machine learning software suite featuring interactive data visualization, data pre-processing and feature selection, and a variety of learning algorithms.
+
+The Fairness add-on extends Orange3 with tools for fairness-aware AI, including algorithms for detecting and mitigating different types of bias in the data and in the predictions of machine learning models.
+
+Installation
+------------
+
+1. `Download `_ the latest Orange release from our website.
+2. Install the Fairness add-on: head to
+   ``Options -> Add-ons...`` in the menu bar. From the list of add-ons, select Fairness and confirm.
+   This will download and install the add-on and its dependencies.
+
+Usage
+-----
+
+After installation, the widgets from this add-on are registered with Orange. Run Orange and the new widgets will appear in the toolbox under the Fairness section.
+
+For an introduction to Orange, see the following YouTube playlist:
+
+* `Intro to Data Science `_ - introduces data analysis with Orange
+
+For more on using widgets from the Fairness add-on, see the following:
+
+* `Orange blog series on Fairness `_
+* `Orange Fairness workflow examples `_
+* `Orange widget catalog `_ - Orange widgets documentation
+
+Examples
+--------
+
+Dataset Bias Examination
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Understanding the potential biases within datasets is crucial for fair machine-learning outcomes. This workflow detects dataset bias using a straightforward setup. After loading the dataset, we add the fairness attributes that the calculations require, compute the fairness metrics with the Dataset Bias widget, and visualize the results in a Box Plot.
+
+.. image:: static/images/dataset-bias.png
+   :alt: Dataset Bias
+   :width: 100%
+
+More information: see the blog post on `Dataset Bias `_
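+
+The fairness metrics involved can also be computed outside the widget. The sketch below is only an
+illustration with made-up data, assuming the standard definitions of disparate impact and statistical
+parity difference; it is not the widget's implementation:
+
+.. code-block:: python
+
+    def favourable_rate(labels, groups, group, favourable=1):
+        """Share of favourable outcomes within one group."""
+        selected = [y for y, g in zip(labels, groups) if g == group]
+        return sum(y == favourable for y in selected) / len(selected)
+
+    # labels: 1 = favourable outcome; groups: values of the protected attribute
+    labels = [1, 0, 1, 1, 0, 1]
+    groups = ["unprivileged", "unprivileged", "unprivileged",
+              "privileged", "privileged", "privileged"]
+
+    unprivileged_rate = favourable_rate(labels, groups, "unprivileged")
+    privileged_rate = favourable_rate(labels, groups, "privileged")
+
+    disparate_impact = unprivileged_rate / privileged_rate                # 1 means no bias
+    statistical_parity_difference = unprivileged_rate - privileged_rate  # 0 means no bias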
+
+Fairness-aware Models
+~~~~~~~~~~~~~~~~~~~~~
+
+If the data we use is biased, the models we train on it will perpetuate that bias. This is where fairness-aware models come in. In this workflow, we use four models, two of which are fairness-aware, to predict the target variable of the Adult dataset. Using the Test & Score widget, we can compare the fairness metrics of the models and see how they perform on the dataset.
+
+.. image:: static/images/models.png
+   :alt: Models
+   :width: 100%
+
+More information: see the blog posts on `Adversarial Debiasing `_, `Equal Odds Postprocessing `_ or `Reweighing `_
+
+Fairness Visualization
+~~~~~~~~~~~~~~~~~~~~~~
+
+Sometimes, it is hard to understand the bias in the data by looking at the numbers alone. This is where visualization comes in. In this workflow, we use Box Plots to visualize and compare the bias in the predictions of a fairness-aware and a regular model. The visualization shows the difference in the false-negative and true-positive rates for the two models.
+
+.. image:: static/images/visualization.png
+   :alt: Bias Visualization
+   :width: 100%
+
+More information: see the blog posts on `Equal Odds Postprocessing `_ or `Why Removing Features is Not Enough `_
 
 Widgets
--------
+-------------------
+
+The following widgets are included in the Fairness add-on:
 
 .. toctree::
    :maxdepth: 1
@@ -14,3 +79,30 @@ Widgets
    widgets/equalized-odds-postprocessing
    widgets/weighted-logistic-regression
    widgets/combine-preprocessors
+
+For developers
+--------------
+
+Installing from downloaded code
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you would like to install from a cloned git repository, run
+
+.. code-block:: bash
+
+    pip install .
+
+Installing in editable mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To register this add-on with Orange, but keep the code in the development directory (do not copy it to Python's site-packages directory), run
+
+.. code-block:: bash
+
+    pip install -e .
+
+To run Orange from the terminal, use
+
+.. code-block:: bash
+
+    orange-canvas
diff --git a/doc/static/images/dataset-bias.png b/doc/static/images/dataset-bias.png
new file mode 100644
index 0000000..1bc828e
Binary files /dev/null and b/doc/static/images/dataset-bias.png differ
diff --git a/doc/static/images/models.png b/doc/static/images/models.png
new file mode 100644
index 0000000..888afae
Binary files /dev/null and b/doc/static/images/models.png differ
diff --git a/doc/static/images/visualization.png b/doc/static/images/visualization.png
new file mode 100644
index 0000000..9ac7a13
Binary files /dev/null and b/doc/static/images/visualization.png differ
diff --git a/doc/widgets/adversarial-debiasing.md b/doc/widgets/adversarial-debiasing.md
index 8ec4c5c..971f9af 100644
--- a/doc/widgets/adversarial-debiasing.md
+++ b/doc/widgets/adversarial-debiasing.md
@@ -28,6 +28,6 @@ The **Adversarial Debiasing** widget requires TensorFlow in order to work. Becau
 Example
 -------
 
-In this example we will try to obtain bias free predictions using the `Adversarial Debiasing` widget. First we include the widget into the canvas and tune the settings to suit our needs. What is important here is to tick the `Use debiasing` box and set the `Adversary Loss Weight` to more than 0 if we want to see the effect of the debiasing. We then connect the widget along with the dataset into the `Test & Score` widget to evaluate the performance of the model. In the evaluation results we can see the performance of the model as well as the fairness metrics for its predictions.
+In this example we will try to obtain bias-free predictions using the **Adversarial Debiasing** widget. First we add the widget to the canvas and tune its settings; the important part is to tick the `Use debiasing` box and set the `Adversary Loss Weight` to more than 0, otherwise the debiasing has no effect. We then connect the widget, along with the dataset, to the `Test & Score` widget to evaluate the performance of the model. In the evaluation results we can see the performance of the model as well as the fairness metrics for its predictions.
 
 ![](images/adversarial-debiasing-example.png)
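+
+As a rough intuition for the `Adversary Loss Weight` parameter, the sketch below shows how such a
+weight typically combines the two losses during training. This is a simplified illustration, not the
+widget's actual implementation:
+
+```python
+def combined_loss(prediction_loss, adversary_loss, adversary_loss_weight):
+    """Loss minimised by the predictor in adversarial debiasing (simplified).
+
+    The adversary tries to recover the protected attribute from the model's
+    predictions; the predictor is rewarded for making that hard. A weight of 0
+    disables debiasing, while larger weights trade accuracy for fairness.
+    """
+    return prediction_loss - adversary_loss_weight * adversary_loss
+```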
diff --git a/doc/widgets/equalized-odds-postprocessing.md b/doc/widgets/equalized-odds-postprocessing.md
index ae03f6a..84b4bd2 100644
--- a/doc/widgets/equalized-odds-postprocessing.md
+++ b/doc/widgets/equalized-odds-postprocessing.md
@@ -18,6 +18,6 @@ Postprocessing Fairness algorithm
 Example
 -------
 
-In this example we will use the `Equalized Odds Postprocessing` to debias the predictions of a linear regression model. All we need to do is include the `Equalized Odds Postprocessing` widget in our workflow and connect any model and desired preprocessors to it. We then connect the `Equalized Odds Postprocessing` to the `Test & Score` widget along with the dataset to evaluate the performance of the model with postprocessing.
+In this example we will use **Equalized Odds Postprocessing** to debias the predictions of a logistic regression model. All we need to do is include the **Equalized Odds Postprocessing** widget in our workflow and connect a model and any desired preprocessors to it. We then connect **Equalized Odds Postprocessing**, along with the dataset, to the `Test & Score` widget to evaluate the performance of the postprocessed model.
 
 ![](images/equal-odds-postprocessing-example.png)
\ No newline at end of file
diff --git a/doc/widgets/reweighing.md b/doc/widgets/reweighing.md
index c8ca46b..a20557c 100644
--- a/doc/widgets/reweighing.md
+++ b/doc/widgets/reweighing.md
@@ -20,11 +20,11 @@ Applies the reweighing algorithm to the dataset.
 Example
 -------
 
-The first example shows how the `Reweighing` widget can be used to preprocess a dataset. First load a fairness dataset, in this case we will use the compas analysis dataset. We than split the dataset into a training and testing set using the `Data Sampler` widget. We connect the training set to the `Reweighing` widget which will train the algorithm and create a preprocessor. The preprocessor can be connected to the `Apply Domain` widget along with the testing set to apply the same transformation to the testing set. The preprocessed testing set can then be connected to the `Dataset Bias` widget to evaluate the bias of the dataset.
+The first example shows how the **Reweighing** widget can be used to preprocess a dataset. First we load a fairness dataset, in this case the COMPAS dataset. We then split it into a training and a testing set using the `Data Sampler` widget. We connect the training set to the **Reweighing** widget, which fits the reweighing algorithm and creates a preprocessor. The preprocessor can be connected to the `Apply Domain` widget along with the testing set to apply the same transformation to it. The preprocessed testing set can then be connected to the `Dataset Bias` widget to evaluate the bias of the dataset.
 
 ![](images/reweighing-dataset-example.png)
 
-The second example demonstrates how to use the `Reweighing` widget as a preprocessor for a learner widget. We use it by connecting it and any other preprocessors we want to use into the `Combine Preprocessors` widget which we connect into the `Weighted Logistic Regression` widget. We then connect a dataset with fairness attributes and the learner into the `Test & Score` widget to evaluate the performance of the learner. In the evaluation results we can see the performance of the learner as well as the fairness metrics for its predictions.
+The second example demonstrates how to use the **Reweighing** widget as a preprocessor for a learner widget. We connect it, together with any other preprocessors we want to use, to the `Combine Preprocessors` widget, which we then connect to the `Weighted Logistic Regression` widget. Finally, we connect a dataset with fairness attributes and the learner to the `Test & Score` widget to evaluate the performance of the learner. In the evaluation results we can see the performance of the learner as well as the fairness metrics for its predictions.
 
 ![](images/reweighing-preprocessor-example.png)
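+
+For reference, the weights that reweighing assigns can be sketched in a few lines of Python. This
+follows the usual Kamiran and Calders formulation (weight = P(group) * P(label) / P(group, label))
+and is only an illustration, not the widget's implementation:
+
+```python
+from collections import Counter
+
+def reweighing_weights(groups, labels):
+    """Per-instance weights that make the protected attribute and the label
+    look statistically independent after weighting."""
+    n = len(labels)
+    count_group = Counter(groups)
+    count_label = Counter(labels)
+    count_joint = Counter(zip(groups, labels))
+    return [
+        (count_group[g] / n) * (count_label[y] / n) / (count_joint[(g, y)] / n)
+        for g, y in zip(groups, labels)
+    ]
+
+# Under-represented (group, label) combinations receive weights above 1.
+weights = reweighing_weights(["f", "f", "m", "m"], [1, 0, 1, 1])
+```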
diff --git a/doc/widgets/weighted-logistic-regression.md b/doc/widgets/weighted-logistic-regression.md
index 43e4c0a..dd55216 100644
--- a/doc/widgets/weighted-logistic-regression.md
+++ b/doc/widgets/weighted-logistic-regression.md
@@ -20,4 +20,4 @@ Besides the support for weights it works in the exact same way as the normal [Lo
 Example
 -------
 
-The example of how to use the `Weighted Logistic Regression` can be found on the [Reweighing](reweighing.md) widget documentation. An example of how to use the `Logistic Regression` widget in general can be found on the [Logistic Regression](https://orange3.readthedocs.io/projects/orange-visual-programming/en/latest/widgets/model/logisticregression.html) widget documentation.
\ No newline at end of file
+An example of how to use the **Weighted Logistic Regression** widget can be found in the [Reweighing](reweighing.md) widget documentation. An example of how to use the `Logistic Regression` widget in general can be found in the [Logistic Regression](https://orange3.readthedocs.io/projects/orange-visual-programming/en/latest/widgets/model/logisticregression.html) widget documentation.
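+
+The difference from the standard widget is that instance weights (for example those produced by the
+Reweighing preprocessor) are presumably passed on to the underlying logistic regression. As a rough
+illustration in plain scikit-learn (not the widget's code), instance weights enter the fit like this:
+
+```python
+from sklearn.linear_model import LogisticRegression
+
+X = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
+y = [0, 1, 0, 1]
+weights = [1.5, 1.0, 0.5, 1.0]  # per-instance weights, e.g. from Reweighing
+
+model = LogisticRegression().fit(X, y, sample_weight=weights)
+```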