From f068962d74258c28e447e947e611b9f162a83410 Mon Sep 17 00:00:00 2001
From: Pat Alt <55311242+pat-alt@users.noreply.github.com>
Date: Thu, 20 Apr 2023 07:33:48 +0200
Subject: [PATCH] Rename CCE to ECCCE

---
 CITATION.bib                                  |   6 +--
 Project.toml                                  |   2 +-
 README.md                                     |   4 +-
 .../dev/proposal/execute-results/html.json    |   2 +-
 .../notebooks/intro/execute-results/html.json |   2 +-
 .../proposal/execute-results/html.json        |   2 +-
 .../synthetic/execute-results/html.json       |   2 +-
 docs/notebooks/intro.html                     |  16 +++---
 docs/notebooks/proposal.html                  |   2 +-
 docs/notebooks/synthetic.html                 |  12 ++---
 docs/search.json                              |   8 +--
 notebooks/Manifest.toml                       |  14 ++---
 notebooks/Project.toml                        |   2 +-
 notebooks/intro.qmd                           |  20 +++----
 notebooks/mnist.qmd                           |  51 +++++++++++-------
 notebooks/proposal.qmd                        |   2 +-
 notebooks/setup.jl                            |   4 +-
 notebooks/synthetic.qmd                       |  30 +++++------
 paper/paper.tex                               |   4 +-
 src/{CCE.jl => ECCCE.jl}                      |   4 +-
 src/generator.jl                              |  28 ++++++++--
 src/penalties.jl                              |   2 +-
 test/runtests.jl                              |   4 +-
 www/cce_mnist.png                             | Bin 21858 -> 22069 bytes
 24 files changed, 126 insertions(+), 97 deletions(-)
 rename src/{CCE.jl => ECCCE.jl} (67%)

diff --git a/CITATION.bib b/CITATION.bib
index d673e1fe..752944a7 100644
--- a/CITATION.bib
+++ b/CITATION.bib
@@ -1,7 +1,7 @@
-@misc{CCE.jl,
+@misc{ECCCE.jl,
 	author  = {Patrick Altmeyer},
-	title   = {CCE.jl},
-	url     = {https://github.com/pat-alt/CCE.jl},
+	title   = {ECCCE.jl},
+	url     = {https://github.com/pat-alt/ECCCE.jl},
 	version = {v0.1.0},
 	year    = {2023},
 	month   = {2}
diff --git a/Project.toml b/Project.toml
index 8783ebf8..647598d4 100644
--- a/Project.toml
+++ b/Project.toml
@@ -1,4 +1,4 @@
-name = "CCE"
+name = "ECCCE"
 uuid = "0232c203-4013-4b0d-ad96-43e3e11ac3bf"
 authors = ["Patrick Altmeyer"]
 version = "0.1.0"
diff --git a/README.md b/README.md
index a0fbd4f4..c599843e 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,3 @@
-# CCE
+# ECCCE
 
-[![Build Status](https://github.com/pat-alt/CCE.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/pat-alt/CCE.jl/actions/workflows/CI.yml?query=branch%3Amain)
+[![Build Status](https://github.com/pat-alt/ECCCE.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/pat-alt/ECCCE.jl/actions/workflows/CI.yml?query=branch%3Amain)
diff --git a/_freeze/dev/proposal/execute-results/html.json b/_freeze/dev/proposal/execute-results/html.json
index 1d98ce87..195e16d6 100644
--- a/_freeze/dev/proposal/execute-results/html.json
+++ b/_freeze/dev/proposal/execute-results/html.json
@@ -1,7 +1,7 @@
 {
   "hash": "d7b4f9bf7f4bff7ce610fc8be4dcfb8b",
   "result": {
-    "markdown": "---\ntitle: High-Fidelity Counterfactual Explanations through Conformal Prediction\nsubtitle: Research Proposal\nabstract: |\n    We propose Conformal Counterfactual Explanations: an effortless and rigorous way to produce realistic and faithful Counterfactual Explanations using Conformal Prediction. To address the need for realistic counterfactuals, existing work has primarily relied on separate generative models to learn the data-generating process. While this is an effective way to produce plausible and model-agnostic counterfactual explanations, it not only introduces a significant engineering overhead but also reallocates the task of creating realistic model explanations from the model itself to the generative model. Recent work has shown that there is no need for any of this when working with probabilistic models that explicitly quantify their own uncertainty. Unfortunately, most models used in practice still do not fulfil that basic requirement, in which case we would like to have a way to quantify predictive uncertainty in a post-hoc fashion.\n---\n\n\n\n## Motivation\n\nCounterfactual Explanations are a powerful, flexible and intuitive way to not only explain black-box models but also enable affected individuals to challenge them through the means of Algorithmic Recourse. \n\n### Counterfactual Explanations or Adversarial Examples?\n\nMost state-of-the-art approaches to generating Counterfactual Explanations (CE) rely on gradient descent in the feature space. The key idea is to perturb inputs $x\\in\\mathcal{X}$ into a black-box model $f: \\mathcal{X} \\mapsto \\mathcal{Y}$ in order to change the model output $f(x)$ to some pre-specified target value $t\\in\\mathcal{Y}$. Formally, this boils down to defining some loss function $\\ell(f(x),t)$ and taking gradient steps in the minimizing direction. The so-generated counterfactuals are considered valid as soon as the predicted label matches the target label. A stripped-down counterfactual explanation is therefore little different from an adversarial example. In @fig-adv, for example, generic counterfactual search as in @wachter2017counterfactual has been applied to MNIST data.\n\n\n\n\n\n![You may not like it, but this is what stripped-down counterfactuals look like. Here we have used @wachter2017counterfactual to generate multiple counterfactuals for turning an 8 (eight) into a 3 (three).](www/you_may_not_like_it.png){#fig-adv}\n\nThe crucial difference between adversarial examples and counterfactuals is one of intent. While adversarial examples are typically intended to go unnoticed, counterfactuals in the context of Explainable AI are generally sought to be \"plausible\", \"realistic\" or \"feasible\". To fulfil this latter goal, researchers have come up with a myriad of ways. @joshi2019realistic were among the first to suggest that instead of searching counterfactuals in the feature space, we can instead traverse a latent embedding learned by a surrogate generative model. Similarly, @poyiadzi2020face use density ... Finally, @karimi2021algorithmic argues that counterfactuals should comply with the causal model that generates them [CHECK IF WE CAN PHASE THIS LIKE THIS]. Other related approaches include ... All of these different approaches have a common goal: they aim to ensure that the generated counterfactuals comply with the (learned) data-generating process (DGB). 
\n\n::: {#def-plausible}\n\n## Plausible Counterfactuals\n\nFormally, if $x \\sim \\mathcal{X}$ and for the corresponding counterfactual we have $x^{\\prime}\\sim\\mathcal{X}^{\\prime}$, then for $x^{\\prime}$ to be considered a plausible counterfactual, we need: $\\mathcal{X} \\approxeq \\mathcal{X}^{\\prime}$.\n\n:::\n\nIn the context of Algorithmic Recourse, it makes sense to strive for plausible counterfactuals, since anything else would essentially require individuals to move to out-of-distribution states. But it is worth noting that our ambition to meet this goal, may have implications on our ability to faithfully explain the behaviour of the underlying black-box model (arguably our principal goal). By essentially decoupling the task of learning plausible representations of the data from the model itself, we open ourselves up to vulnerabilities. Using a separate generative model to learn $\\mathcal{X}$, for example, has very serious implications for the generated counterfactuals. @fig-latent compares the results of applying REVISE [@joshi2019realistic] to MNIST data using two different Variational Auto-Encoders: while the counterfactual generated using an expressive (strong) VAE is compelling, the result relying on a less expressive (weak) VAE is not even valid. In this latter case, the decoder step of the VAE fails to yield values in $\\mathcal{X}$ and hence the counterfactual search in the learned latent space is doomed. \n\n![Counterfactual explanations for MNIST using a Latent Space generator: turning a nine (9) into a four (4).](www/mnist_9to4_latent.png){#fig-latent}\n\n> Here it would be nice to have another example where we poison the data going into the generative model to hide biases present in the data (e.g. Boston housing).\n\n- Latent can be manipulated: \n    - train biased model\n    - train VAE with biased variable removed/attacked (use Boston housing dataset)\n    - hypothesis: will generate bias-free explanations\n\n### From Plausible to High-Fidelity Counterfactuals {#sec-fidelity}\n\nIn light of the findings, we propose to generally avoid using surrogate models to learn $\\mathcal{X}$ in the context of Counterfactual Explanations.\n\n::: {#prp-surrogate}\n\n## Avoid Surrogates\n\nSince we are in the business of explaining a black-box model, the task of learning realistic representations of the data should not be reallocated from the model itself to some surrogate model.\n\n:::\n\nIn cases where the use of surrogate models cannot be avoided, we propose to weigh the plausibility of counterfactuals against their fidelity to the black-box model. In the context of Explainable AI, fidelity is defined as describing how an explanation approximates the prediction of the black-box model [@molnar2020interpretable]. Fidelity has become the default metric for evaluating Local Model-Agnostic Models, since they often involve local surrogate models whose predictions need not always match those of the black-box model. \n\nIn the case of Counterfactual Explanations, the concept of fidelity has so far been ignored. This is not altogether surprising, since by construction and design, Counterfactual Explanations work with the predictions of the black-box model directly: as stated above, a counterfactual $x^{\\prime}$ is considered valid if and only if $f(x^{\\prime})=t$, where $t$ denote some target outcome. \n\nDoes fidelity even make sense in the context of CE, and if so, how can we define it? 
In light of the examples in the previous section, we think it is urgent to introduce a notion of fidelity in this context, that relates to the distributional properties of the generated counterfactuals. In particular, we propose that a high-fidelity counterfactual $x^{\\prime}$ complies with the class-conditional distribution $\\mathcal{X}_{\\theta} = p_{\\theta}(X|y)$ where $\\theta$ denote the black-box model parameters. \n\n::: {#def-fidele}\n\n## High-Fidelity Counterfactuals\n\nLet $\\mathcal{X}_{\\theta}|y = p_{\\theta}(X|y)$ denote the class-conditional distribution of $X$ defined by $\\theta$. Then for $x^{\\prime}$ to be considered a high-fidelity counterfactual, we need: $\\mathcal{X}_{\\theta}|t \\approxeq \\mathcal{X}^{\\prime}$ where $t$ denotes the target outcome.\n\n:::\n\nIn order to assess the fidelity of counterfactuals, we propose the following two-step procedure:\n\n1) Generate samples $X_{\\theta}|y$ and $X^{\\prime}$ from $\\mathcal{X}_{\\theta}|t$ and $\\mathcal{X}^{\\prime}$, respectively.\n2) Compute the Maximum Mean Discrepancy (MMD) between $X_{\\theta}|y$ and $X^{\\prime}$. \n\nIf the computed value is different from zero, we can reject the null-hypothesis of fidelity.\n\n> Two challenges here: 1) implementing the sampling procedure in @grathwohl2020your; 2) it is unclear if MMD is really the right way to measure this. \n\n## Conformal Counterfactual Explanations\n\nIn @sec-fidelity, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (CCE), that is Counterfactual Explanations that minimize the predictive uncertainty of conformal models. \n\n### Minimizing Predictive Uncertainty\n\n@schut2021generating demonstrated that the goal of generating realistic (plausible) counterfactuals can also be achieved by seeking counterfactuals that minimize the predictive uncertainty of the underlying black-box model. Similarly, @antoran2020getting ...\n\n- Problem: restricted to Bayesian models.\n- Solution: post-hoc predictive uncertainty quantification. In particular, Conformal Prediction. \n\n### Background on Conformal Prediction\n\n- Distribution-free, model-agnostic and scalable approach to predictive uncertainty quantification.\n- Conformal prediction is instance-based. So is CE. \n- Take any fitted model and turn it into a conformal model using calibration data.\n- Our approach, therefore, relaxes the restriction on the family of black-box models, at the cost of relying on a subset of the data. Arguably, data is often abundant and in most applications practitioners tend to hold out a test data set anyway. \n\n> Does the coverage guarantee carry over to counterfactuals?\n\n### Generating Conformal Counterfactuals\n\nWhile Conformal Prediction has recently grown in popularity, it does introduce a challenge in the context of classification: the predictions of Conformal Classifiers are set-valued and therefore difficult to work with, since they are, for example, non-differentiable. Fortunately, @stutz2022learning introduced carefully designed differentiable loss functions that make it possible to evaluate the performance of conformal predictions in training. We can leverage these recent advances in the context of gradient-based counterfactual search ...\n\n> Challenge: still need to implement these loss functions. 
\n\n## Experiments\n\n### Research Questions\n\n- Is CP alone enough to ensure realistic counterfactuals?\n- Do counterfactuals improve further as the models get better?\n- Do counterfactuals get more realistic as coverage\n- What happens as we vary coverage and setsize?\n- What happens as we improve the model robustness?\n- What happens as we improve the model's ability to incorporate predictive uncertainty (deep ensemble, laplace)?\n- What happens if we combine with DiCE, ClaPROAR, Gravitational?\n- What about CE robustness to endogenous shifts [@altmeyer2023endogenous]?\n\n- Benchmarking:\n    - add PROBE [@pawelczyk2022probabilistically] into the mix.\n    - compare travel costs to domain shits.\n\n> Nice to have: What about using Laplace Approximation, then Conformal Prediction? What about using Conformalised Laplace? \n\n## References\n\n",
+    "markdown": "---\ntitle: High-Fidelity Counterfactual Explanations through Conformal Prediction\nsubtitle: Research Proposal\nabstract: |\n    We propose Conformal Counterfactual Explanations: an effortless and rigorous way to produce realistic and faithful Counterfactual Explanations using Conformal Prediction. To address the need for realistic counterfactuals, existing work has primarily relied on separate generative models to learn the data-generating process. While this is an effective way to produce plausible and model-agnostic counterfactual explanations, it not only introduces a significant engineering overhead but also reallocates the task of creating realistic model explanations from the model itself to the generative model. Recent work has shown that there is no need for any of this when working with probabilistic models that explicitly quantify their own uncertainty. Unfortunately, most models used in practice still do not fulfil that basic requirement, in which case we would like to have a way to quantify predictive uncertainty in a post-hoc fashion.\n---\n\n\n\n## Motivation\n\nCounterfactual Explanations are a powerful, flexible and intuitive way to not only explain black-box models but also enable affected individuals to challenge them through the means of Algorithmic Recourse. \n\n### Counterfactual Explanations or Adversarial Examples?\n\nMost state-of-the-art approaches to generating Counterfactual Explanations (CE) rely on gradient descent in the feature space. The key idea is to perturb inputs $x\\in\\mathcal{X}$ into a black-box model $f: \\mathcal{X} \\mapsto \\mathcal{Y}$ in order to change the model output $f(x)$ to some pre-specified target value $t\\in\\mathcal{Y}$. Formally, this boils down to defining some loss function $\\ell(f(x),t)$ and taking gradient steps in the minimizing direction. The so-generated counterfactuals are considered valid as soon as the predicted label matches the target label. A stripped-down counterfactual explanation is therefore little different from an adversarial example. In @fig-adv, for example, generic counterfactual search as in @wachter2017counterfactual has been applied to MNIST data.\n\n\n\n\n\n![You may not like it, but this is what stripped-down counterfactuals look like. Here we have used @wachter2017counterfactual to generate multiple counterfactuals for turning an 8 (eight) into a 3 (three).](www/you_may_not_like_it.png){#fig-adv}\n\nThe crucial difference between adversarial examples and counterfactuals is one of intent. While adversarial examples are typically intended to go unnoticed, counterfactuals in the context of Explainable AI are generally sought to be \"plausible\", \"realistic\" or \"feasible\". To fulfil this latter goal, researchers have come up with a myriad of ways. @joshi2019realistic were among the first to suggest that instead of searching counterfactuals in the feature space, we can instead traverse a latent embedding learned by a surrogate generative model. Similarly, @poyiadzi2020face use density ... Finally, @karimi2021algorithmic argues that counterfactuals should comply with the causal model that generates them [CHECK IF WE CAN PHASE THIS LIKE THIS]. Other related approaches include ... All of these different approaches have a common goal: they aim to ensure that the generated counterfactuals comply with the (learned) data-generating process (DGB). 
\n\n::: {#def-plausible}\n\n## Plausible Counterfactuals\n\nFormally, if $x \\sim \\mathcal{X}$ and for the corresponding counterfactual we have $x^{\\prime}\\sim\\mathcal{X}^{\\prime}$, then for $x^{\\prime}$ to be considered a plausible counterfactual, we need: $\\mathcal{X} \\approxeq \\mathcal{X}^{\\prime}$.\n\n:::\n\nIn the context of Algorithmic Recourse, it makes sense to strive for plausible counterfactuals, since anything else would essentially require individuals to move to out-of-distribution states. But it is worth noting that our ambition to meet this goal may have implications for our ability to faithfully explain the behaviour of the underlying black-box model (arguably our principal goal). By essentially decoupling the task of learning plausible representations of the data from the model itself, we open ourselves up to vulnerabilities. Using a separate generative model to learn $\\mathcal{X}$, for example, has very serious implications for the generated counterfactuals. @fig-latent compares the results of applying REVISE [@joshi2019realistic] to MNIST data using two different Variational Auto-Encoders: while the counterfactual generated using an expressive (strong) VAE is compelling, the result relying on a less expressive (weak) VAE is not even valid. In this latter case, the decoder step of the VAE fails to yield values in $\\mathcal{X}$ and hence the counterfactual search in the learned latent space is doomed. \n\n![Counterfactual explanations for MNIST using a Latent Space generator: turning a nine (9) into a four (4).](www/mnist_9to4_latent.png){#fig-latent}\n\n> Here it would be nice to have another example where we poison the data going into the generative model to hide biases present in the data (e.g. Boston housing).\n\n- Latent can be manipulated: \n    - train biased model\n    - train VAE with biased variable removed/attacked (use Boston housing dataset)\n    - hypothesis: will generate bias-free explanations\n\n### From Plausible to High-Fidelity Counterfactuals {#sec-fidelity}\n\nIn light of these findings, we propose to generally avoid using surrogate models to learn $\\mathcal{X}$ in the context of Counterfactual Explanations.\n\n::: {#prp-surrogate}\n\n## Avoid Surrogates\n\nSince we are in the business of explaining a black-box model, the task of learning realistic representations of the data should not be reallocated from the model itself to some surrogate model.\n\n:::\n\nIn cases where the use of surrogate models cannot be avoided, we propose to weigh the plausibility of counterfactuals against their fidelity to the black-box model. In the context of Explainable AI, fidelity is defined as the degree to which an explanation approximates the prediction of the black-box model [@molnar2020interpretable]. Fidelity has become the default metric for evaluating Local Model-Agnostic Methods, since they often involve local surrogate models whose predictions need not always match those of the black-box model. \n\nIn the case of Counterfactual Explanations, the concept of fidelity has so far been ignored. This is not altogether surprising, since by construction and design, Counterfactual Explanations work with the predictions of the black-box model directly: as stated above, a counterfactual $x^{\\prime}$ is considered valid if and only if $f(x^{\\prime})=t$, where $t$ denotes some target outcome. \n\nDoes fidelity even make sense in the context of CE, and if so, how can we define it? 
In light of the examples in the previous section, we think it is urgent to introduce a notion of fidelity in this context that relates to the distributional properties of the generated counterfactuals. In particular, we propose that a high-fidelity counterfactual $x^{\\prime}$ complies with the class-conditional distribution $\\mathcal{X}_{\\theta}|y = p_{\\theta}(X|y)$, where $\\theta$ denotes the black-box model parameters. \n\n::: {#def-fidele}\n\n## High-Fidelity Counterfactuals\n\nLet $\\mathcal{X}_{\\theta}|y = p_{\\theta}(X|y)$ denote the class-conditional distribution of $X$ defined by $\\theta$. Then for $x^{\\prime}$ to be considered a high-fidelity counterfactual, we need: $\\mathcal{X}_{\\theta}|t \\approxeq \\mathcal{X}^{\\prime}$ where $t$ denotes the target outcome.\n\n:::\n\nIn order to assess the fidelity of counterfactuals, we propose the following two-step procedure:\n\n1) Generate samples $X_{\\theta}|t$ and $X^{\\prime}$ from $\\mathcal{X}_{\\theta}|t$ and $\\mathcal{X}^{\\prime}$, respectively.\n2) Compute the Maximum Mean Discrepancy (MMD) between $X_{\\theta}|t$ and $X^{\\prime}$. \n\nIf the computed value is significantly different from zero, we can reject the null hypothesis of fidelity.\n\n> Two challenges here: 1) implementing the sampling procedure in @grathwohl2020your; 2) it is unclear if MMD is really the right way to measure this. \n\n## Conformal Counterfactual Explanations\n\nIn @sec-fidelity, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (ECCCE), that is, Counterfactual Explanations that minimize the predictive uncertainty of conformal models. \n\n### Minimizing Predictive Uncertainty\n\n@schut2021generating demonstrated that the goal of generating realistic (plausible) counterfactuals can also be achieved by seeking counterfactuals that minimize the predictive uncertainty of the underlying black-box model. Similarly, @antoran2020getting ...\n\n- Problem: restricted to Bayesian models.\n- Solution: post-hoc predictive uncertainty quantification. In particular, Conformal Prediction. \n\n### Background on Conformal Prediction\n\n- Distribution-free, model-agnostic and scalable approach to predictive uncertainty quantification.\n- Conformal prediction is instance-based. So is CE. \n- Take any fitted model and turn it into a conformal model using calibration data.\n- Our approach, therefore, relaxes the restriction on the family of black-box models, at the cost of relying on a subset of the data. Arguably, data is often abundant and in most applications practitioners tend to hold out a test data set anyway. \n\n> Does the coverage guarantee carry over to counterfactuals?\n\n### Generating Conformal Counterfactuals\n\nWhile Conformal Prediction has recently grown in popularity, it does introduce a challenge in the context of classification: the predictions of Conformal Classifiers are set-valued and therefore difficult to work with, since they are, for example, non-differentiable. Fortunately, @stutz2022learning introduced carefully designed differentiable loss functions that make it possible to evaluate the performance of conformal predictions in training. We can leverage these recent advances in the context of gradient-based counterfactual search ...\n\n> Challenge: still need to implement these loss functions. 
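\n\nTo make this challenge concrete, below is a minimal sketch of the smooth set size penalty proposed by @stutz2022learning. The names are ours and purely illustrative: s is a vector of per-class nonconformity scores, τ is the calibrated threshold, and the sigmoid temperature T is an additional smoothing assumption.\n\n```julia\nusing Flux\n\n# Soft set membership: smooth relaxation of the indicator s(x,k) ≤ τ.\nsoft_set(s, τ; T=0.1) = Flux.σ.((τ .- s) ./ T)\n\n# Smooth set size penalty: Ω = max(0, Σₖ Cₖ - κ).\nsmooth_size_penalty(s, τ; κ=1, T=0.1) = max(0, sum(soft_set(s, τ; T=T)) - κ)\n```\n\nSince both functions are differentiable in s, the penalty can be plugged directly into gradient-based counterfactual search.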
\n\n## Experiments\n\n### Research Questions\n\n- Is CP alone enough to ensure realistic counterfactuals?\n- Do counterfactuals improve further as the models get better?\n- Do counterfactuals get more realistic as coverage increases?\n- What happens as we vary coverage and set size?\n- What happens as we improve the model's robustness?\n- What happens as we improve the model's ability to incorporate predictive uncertainty (deep ensemble, Laplace)?\n- What happens if we combine with DiCE, ClaPROAR, Gravitational?\n- What about CE robustness to endogenous shifts [@altmeyer2023endogenous]?\n\n- Benchmarking:\n    - add PROBE [@pawelczyk2022probabilistically] into the mix.\n    - compare travel costs to domain shifts.\n\n> Nice to have: What about using Laplace Approximation, then Conformal Prediction? What about using Conformalised Laplace? \n\n## References\n\n",
     "supporting": [
       "proposal_files/figure-html"
     ],
diff --git a/_freeze/notebooks/intro/execute-results/html.json b/_freeze/notebooks/intro/execute-results/html.json
index d600296a..28ce12ee 100644
--- a/_freeze/notebooks/intro/execute-results/html.json
+++ b/_freeze/notebooks/intro/execute-results/html.json
@@ -1,7 +1,7 @@
 {
   "hash": "43d5045964ca39def434cb65914681bc",
   "result": {
-    "markdown": "::: {.cell execution_count=1}\n``` {.julia .cell-code}\ninclude(\"notebooks/setup.jl\")\neval(setup_notebooks)\n```\n:::\n\n\n# `ConformalGenerator`\n\nIn this section, we will look at a simple example involving synthetic data, a black-box model and a generic Conformal Counterfactual Generator.\n\n## Black-box Model\n\nWe consider a simple binary classification problem. Let $(X_i, Y_i), \\ i=1,...,n$ denote our feature-label pairs and let $\\mu: \\mathcal{X} \\mapsto \\mathcal{Y}$ denote the mapping from features to labels. For illustration purposes, we will use linearly separable data. \n\n::: {.cell execution_count=2}\n``` {.julia .cell-code}\ncounterfactual_data = load_linearly_separable()\n```\n:::\n\n\nWhile we could use a linear classifier in this case, let's pretend we need a black-box model for this task and rely on a small Multi-Layer Perceptron (MLP):\n\n::: {.cell execution_count=3}\n``` {.julia .cell-code}\nbuilder = MLJFlux.@builder Flux.Chain(\n    Dense(n_in, 32, relu),\n    Dense(32, n_out)\n)\nclf = NeuralNetworkClassifier(builder=builder, epochs=100)\n```\n:::\n\n\nWe can fit this model to data to produce plug-in predictions. \n\n## Conformal Prediction\n\nHere we will instead use a specific case of CP called *split conformal prediction* which can then be summarized as follows:^[In other places split conformal prediction is sometimes referred to as *inductive* conformal prediction.]\n\n1. Partition the training into a proper training set and a separate calibration set: $\\mathcal{D}_n=\\mathcal{D}^{\\text{train}} \\cup \\mathcal{D}^{\\text{cali}}$.\n2. Train the machine learning model on the proper training set: $\\hat\\mu_{i \\in \\mathcal{D}^{\\text{train}}}(X_i,Y_i)$.\n\nThe model $\\hat\\mu_{i \\in \\mathcal{D}^{\\text{train}}}$ can now produce plug-in predictions. \n\n::: callout-note\n\n## Starting Point\n\nNote that this represents the starting point in applications of Algorithmic Recourse: we have some pre-trained classifier $M$ for which we would like to generate plausible Counterfactual Explanations. Next, we turn to the calibration step. \n:::\n\n3. Compute nonconformity scores, $\\mathcal{S}$, using the calibration data $\\mathcal{D}^{\\text{cali}}$ and the fitted model $\\hat\\mu_{i \\in \\mathcal{D}^{\\text{train}}}$. \n4. For a user-specified desired coverage ratio $(1-\\alpha)$ compute the corresponding quantile, $\\hat{q}$, of the empirical distribution of nonconformity scores, $\\mathcal{S}$.\n5. For the given quantile and test sample $X_{\\text{test}}$, form the corresponding conformal prediction set: \n\n$$\nC(X_{\\text{test}})=\\{y:s(X_{\\text{test}},y) \\le \\hat{q}\\}\n$$ {#eq-set}\n\nThis is the default procedure used for classification and regression in [`ConformalPrediction.jl`](https://github.com/pat-alt/ConformalPrediction.jl). \n\nUsing the package, we can apply Split Conformal Prediction as follows:\n\n::: {.cell execution_count=4}\n``` {.julia .cell-code}\nX = table(permutedims(counterfactual_data.X))\ny =  counterfactual_data.output_encoder.labels\nconf_model = conformal_model(clf; method=:simple_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach)\n```\n:::\n\n\nTo be clear, all of the calibration steps (3 to 5) are post hoc, and yet none of them involved any changes to the model parameters. These are two important characteristics of Split Conformal Prediction (SCP) that make it particularly useful in the context of Algorithmic Recourse. 
Firstly, the fact that SCP involves posthoc calibration steps that happen after training, ensures that we need not place any restrictions on the black-box model itself. This stands in contrast to the approach proposed by @schut2021generating in which they essentially restrict the class of models to Bayesian models. Secondly, the fact that the model itself is kept entirely intact ensures that the generated counterfactuals maintain fidelity to the model. Finally, note that we also have not resorted to a surrogate model to learn more about $X \\sim \\mathcal{X}$. Instead, we have used the fitted model itself and a calibration data set to learn about the model's predictive uncertainty. \n\n## Differentiable CP\n\nIn order to use CP in the context of gradient-based counterfactual search, we need it to be differentiable. @stutz2022learning introduce a framework for training differentiable conformal predictors. They introduce a configurable loss function as well as smooth set size penalty.\n\n### Smooth Set Size Penalty\n\nStarting with the former, @stutz2022learning propose the following:\n\n$$\n\\Omega(C_{\\theta}(x;\\tau)) = = \\max (0, \\sum_k C_{\\theta,k}(x;\\tau) - \\kappa)\n$$ {#eq-size-loss}\n\nHere, $C_{\\theta,k}(x;\\tau)$ is loosely defined as the probability that class $k$ is assigned to the conformal prediction set $C$. In the context of Conformal Training, this penalty reduces the **inefficiency** of the conformal predictor. \n\nIn our context, we are not interested in improving the model itself, but rather in producing **plausible** counterfactuals. Provided that our counterfactual $x^\\prime$ is already inside the target domain ($\\mathbb{I}_{y^\\prime = t}=1$), penalizing $\\Omega(C_{\\theta}(x;\\tau))$ corresponds to guiding counterfactuals into regions of the target domain that are characterized by low ambiguity: for $\\kappa=1$ the conformal prediction set includes only the target label $t$ as $\\Omega(C_{\\theta}(x;\\tau))$. Arguably, less ambiguous counterfactuals are more **plausible**. Since the search is guided purely by properties of the model itself and (exchangeable) calibration data, counterfactuals also maintain **high fidelity**.\n\nThe left panel of @fig-losses shows the smooth size penalty in the two-dimensional feature space of our synthetic data.\n\n### Configurable Classification Loss\n\nThe right panel of @fig-losses shows the configurable classification loss in the two-dimensional feature space of our synthetic data.\n\n::: {.cell execution_count=5}\n\n::: {.cell-output .cell-output-display execution_count=6}\n![Illustration of the smooth size loss and the configurable classification loss.](intro_files/figure-html/fig-losses-output-1.svg){#fig-losses}\n:::\n:::\n\n\n## Fidelity and Plausibility\n\nThe main evaluation criteria we are interested in are *fidelity* and *plausibility*. Interestingly, we could also consider using these measures as penalties in the counterfactual search.\n\n### Fidelity\n\nWe propose to define fidelity as follows:\n\n::: {#def-fidelity}\n\n## High-Fidelity Counterfactuals\n\nLet $\\mathcal{X}_{\\theta}|y = p_{\\theta}(X|y)$ denote the class-conditional distribution of $X$ defined by $\\theta$. Then for $x^{\\prime}$ to be considered a high-fidelity counterfactual, we need: $\\mathcal{X}_{\\theta}|t \\approxeq \\mathcal{X}^{\\prime}$ where $t$ denotes the target outcome.\n\n:::\n\nWe can generate samples from $p_{\\theta}(X|y)$ following @grathwohl2020your. 
In @fig-energy, I have applied the methodology to our synthetic data.\n\n::: {.cell execution_count=6}\n``` {.julia .cell-code}\nM = CCE.ConformalModel(conf_model, mach.fitresult)\n\nniter = 100\nnsamples = 100\n\nplts = []\nfor (i,target) ∈ enumerate(counterfactual_data.y_levels)\n    sampler = CCE.EnergySampler(M, counterfactual_data, target; niter=niter, nsamples=100)\n    Xgen = rand(sampler, nsamples)\n    plt = Plots.plot(M, counterfactual_data; target=target, zoom=-3,cbar=false)\n    Plots.scatter!(Xgen[1,:],Xgen[2,:],alpha=0.5,color=i,shape=:star,label=\"X|y=$target\")\n    push!(plts, plt)\nend\nPlots.plot(plts..., layout=(1,length(plts)), size=(img_height*length(plts),img_height))\n```\n\n::: {.cell-output .cell-output-display execution_count=7}\n![Energy-based conditional samples.](intro_files/figure-html/fig-energy-output-1.svg){#fig-energy}\n:::\n:::\n\n\nAs an evaluation metric and penalty, we could use the average distance of the counterfactual $x^{\\prime}$ from these generated samples, for example.\n\n### Plausibility\n\nWe propose to define plausibility as follows:\n\n::: {#def-plausible}\n\n## Plausible Counterfactuals\n\nFormally, let $\\mathcal{X}|t$ denote the conditional distribution of samples in the target class. As before, we have $x^{\\prime}\\sim\\mathcal{X}^{\\prime}$, then for $x^{\\prime}$ to be considered a plausible counterfactual, we need: $\\mathcal{X}|t \\approxeq \\mathcal{X}^{\\prime}$.\n\n:::\n\nAs an evaluation metric and penalty, we could use the average distance of the counterfactual $x^{\\prime}$ from (potentially bootstrapped) training samples in the target class, for example.\n\n## Counterfactual Explanations\n\nNext, let's generate counterfactual explanations for our synthetic data. We first wrap our model in a container that makes it compatible with `CounterfactualExplanations.jl`. Then we draw a random sample, determine its predicted label $\\hat{y}$ and choose the opposite label as our target. 
\n\n::: {.cell execution_count=7}\n``` {.julia .cell-code}\nx = select_factual(counterfactual_data,rand(1:size(counterfactual_data.X,2)))\ny_factual = predict_label(M, counterfactual_data, x)[1]\ntarget = counterfactual_data.y_levels[counterfactual_data.y_levels .!= y_factual][1]\n```\n:::\n\n\nThe generic Conformal Counterfactual Generator penalises the only the set size only:\n\n$$\nx^\\prime = \\arg \\min_{x^\\prime}  \\ell(M(x^\\prime),t) + \\lambda \\mathbb{I}_{y^\\prime = t} \\Omega(C_{\\theta}(x;\\tau)) \n$$ {#eq-solution}\n\n::: {.cell execution_count=8}\n\n::: {.cell-output .cell-output-display execution_count=9}\n![Comparison of counterfactuals produced using different generators.](intro_files/figure-html/fig-ce-output-1.svg){#fig-ce}\n:::\n:::\n\n\n## Multi-Class\n\n::: {.cell execution_count=9}\n``` {.julia .cell-code}\ncounterfactual_data = load_multi_class()\n```\n:::\n\n\n::: {.cell execution_count=10}\n``` {.julia .cell-code}\nX = table(permutedims(counterfactual_data.X))\ny =  counterfactual_data.output_encoder.labels\n```\n:::\n\n\n::: {.cell execution_count=11}\n\n::: {.cell-output .cell-output-display execution_count=12}\n![Illustration of the smooth size loss.](intro_files/figure-html/fig-pen-multi-output-1.svg){#fig-pen-multi}\n:::\n:::\n\n\n::: {.cell execution_count=12}\n\n::: {.cell-output .cell-output-display execution_count=13}\n![Illustration of the configurable classification loss.](intro_files/figure-html/fig-losses-multi-output-1.svg){#fig-losses-multi}\n:::\n:::\n\n\n::: {.cell execution_count=13}\n\n::: {.cell-output .cell-output-display execution_count=14}\n![Energy-based conditional samples.](intro_files/figure-html/fig-energy-multi-output-1.svg){#fig-energy-multi}\n:::\n:::\n\n\n::: {.cell execution_count=14}\n``` {.julia .cell-code}\nx = select_factual(counterfactual_data,rand(1:size(counterfactual_data.X,2)))\ny_factual = predict_label(M, counterfactual_data, x)[1]\ntarget = counterfactual_data.y_levels[counterfactual_data.y_levels .!= y_factual][1]\n```\n:::\n\n\n::: {.cell execution_count=15}\n\n::: {.cell-output .cell-output-display execution_count=16}\n![Comparison of counterfactuals produced using different generators.](intro_files/figure-html/fig-ce-multi-output-1.svg){#fig-ce-multi}\n:::\n:::\n\n\n## Benchmarks\n\n::: {.cell execution_count=16}\n``` {.julia .cell-code}\n# Data:\ndatasets = Dict(\n    :linearly_separable => load_linearly_separable(),\n    :overlapping => load_overlapping(),\n    :moons => load_moons(),\n    :circles => load_circles(),\n    :multi_class => load_multi_class(),\n)\n\n# Untrained Models:\nmodels = Dict(\n    :cov75 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.75)),\n    :cov80 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.80)),\n    :cov90 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.90)),\n    :cov99 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.99)),\n)\n```\n:::\n\n\nThen we can simply loop over the datasets and eventually concatenate the results like so:\n\n::: {.cell execution_count=17}\n``` {.julia .cell-code}\nusing CounterfactualExplanations.Evaluation: benchmark\nbmks = []\nmeasures = [\n    CounterfactualExplanations.distance,\n    CCE.distance_from_energy,\n    CCE.distance_from_targets\n]\nfor (dataname, dataset) in datasets\n    bmk = benchmark(\n        dataset; \n        models=deepcopy(models), \n        generators=generators, \n        measure=measures,\n        
suppress_training=false, dataname=dataname,\n        n_individuals=10\n    )\n    push!(bmks, bmk)\nend\nbmk = reduce(vcat, bmks)\n```\n:::\n\n\n::: {.cell execution_count=18}\n``` {.julia .cell-code}\nf(ce) = CounterfactualExplanations.model_evaluation(ce.M, ce.data)\n@chain bmk() begin\n    @group_by(model, generator, dataname, variable)\n    @select(model, generator, dataname, ce, value)\n    @mutate(performance = f(ce))\n    @summarize(model=unique(model), generator=unique(generator), dataname=unique(dataname), performace=unique(performance), value=mean(value))\n    @ungroup\n    @filter(dataname == :multi_class)\n    @filter(model == :cov99)\n    @filter(variable == \"distance\")\nend\n```\n:::\n\n\n::: {#fig-benchmark .cell execution_count=19}\n\n::: {.cell-output .cell-output-display}\n![Circles.](intro_files/figure-html/fig-benchmark-output-1.png){#fig-benchmark-1}\n:::\n\n::: {.cell-output .cell-output-display}\n![Linearly Separable.](intro_files/figure-html/fig-benchmark-output-2.png){#fig-benchmark-2}\n:::\n\n::: {.cell-output .cell-output-display}\n![Moons.](intro_files/figure-html/fig-benchmark-output-3.png){#fig-benchmark-3}\n:::\n\n::: {.cell-output .cell-output-display}\n![Multi-class.](intro_files/figure-html/fig-benchmark-output-4.png){#fig-benchmark-4}\n:::\n\n::: {.cell-output .cell-output-display}\n![Overlapping.](intro_files/figure-html/fig-benchmark-output-5.png){#fig-benchmark-5}\n:::\n\nBenchmark results for the different generators.\n:::\n\n\n",
+    "markdown": "::: {.cell execution_count=1}\n``` {.julia .cell-code}\ninclude(\"notebooks/setup.jl\")\neval(setup_notebooks)\n```\n:::\n\n\n# `ConformalGenerator`\n\nIn this section, we will look at a simple example involving synthetic data, a black-box model and a generic Conformal Counterfactual Generator.\n\n## Black-box Model\n\nWe consider a simple binary classification problem. Let $(X_i, Y_i), \\ i=1,...,n$ denote our feature-label pairs and let $\\mu: \\mathcal{X} \\mapsto \\mathcal{Y}$ denote the mapping from features to labels. For illustration purposes, we will use linearly separable data. \n\n::: {.cell execution_count=2}\n``` {.julia .cell-code}\ncounterfactual_data = load_linearly_separable()\n```\n:::\n\n\nWhile we could use a linear classifier in this case, let's pretend we need a black-box model for this task and rely on a small Multi-Layer Perceptron (MLP):\n\n::: {.cell execution_count=3}\n``` {.julia .cell-code}\nbuilder = MLJFlux.@builder Flux.Chain(\n    Dense(n_in, 32, relu),\n    Dense(32, n_out)\n)\nclf = NeuralNetworkClassifier(builder=builder, epochs=100)\n```\n:::\n\n\nWe can fit this model to data to produce plug-in predictions. \n\n## Conformal Prediction\n\nHere we will instead use a specific case of CP called *split conformal prediction* which can then be summarized as follows:^[In other places split conformal prediction is sometimes referred to as *inductive* conformal prediction.]\n\n1. Partition the training into a proper training set and a separate calibration set: $\\mathcal{D}_n=\\mathcal{D}^{\\text{train}} \\cup \\mathcal{D}^{\\text{cali}}$.\n2. Train the machine learning model on the proper training set: $\\hat\\mu_{i \\in \\mathcal{D}^{\\text{train}}}(X_i,Y_i)$.\n\nThe model $\\hat\\mu_{i \\in \\mathcal{D}^{\\text{train}}}$ can now produce plug-in predictions. \n\n::: callout-note\n\n## Starting Point\n\nNote that this represents the starting point in applications of Algorithmic Recourse: we have some pre-trained classifier $M$ for which we would like to generate plausible Counterfactual Explanations. Next, we turn to the calibration step. \n:::\n\n3. Compute nonconformity scores, $\\mathcal{S}$, using the calibration data $\\mathcal{D}^{\\text{cali}}$ and the fitted model $\\hat\\mu_{i \\in \\mathcal{D}^{\\text{train}}}$. \n4. For a user-specified desired coverage ratio $(1-\\alpha)$ compute the corresponding quantile, $\\hat{q}$, of the empirical distribution of nonconformity scores, $\\mathcal{S}$.\n5. For the given quantile and test sample $X_{\\text{test}}$, form the corresponding conformal prediction set: \n\n$$\nC(X_{\\text{test}})=\\{y:s(X_{\\text{test}},y) \\le \\hat{q}\\}\n$$ {#eq-set}\n\nThis is the default procedure used for classification and regression in [`ConformalPrediction.jl`](https://github.com/pat-alt/ConformalPrediction.jl). \n\nUsing the package, we can apply Split Conformal Prediction as follows:\n\n::: {.cell execution_count=4}\n``` {.julia .cell-code}\nX = table(permutedims(counterfactual_data.X))\ny =  counterfactual_data.output_encoder.labels\nconf_model = conformal_model(clf; method=:simple_inductive)\nmach = machine(conf_model, X, y)\nfit!(mach)\n```\n:::\n\n\nTo be clear, all of the calibration steps (3 to 5) are post hoc, and yet none of them involved any changes to the model parameters. These are two important characteristics of Split Conformal Prediction (SCP) that make it particularly useful in the context of Algorithmic Recourse. 
Firstly, the fact that SCP involves post-hoc calibration steps that happen after training ensures that we need not place any restrictions on the black-box model itself. This stands in contrast to the approach proposed by @schut2021generating in which they essentially restrict the class of models to Bayesian models. Secondly, the fact that the model itself is kept entirely intact ensures that the generated counterfactuals maintain fidelity to the model. Finally, note that we also have not resorted to a surrogate model to learn more about $X \\sim \\mathcal{X}$. Instead, we have used the fitted model itself and a calibration data set to learn about the model's predictive uncertainty. \n\n## Differentiable CP\n\nIn order to use CP in the context of gradient-based counterfactual search, we need it to be differentiable. @stutz2022learning introduce a framework for training differentiable conformal predictors. They introduce a configurable loss function as well as a smooth set size penalty.\n\n### Smooth Set Size Penalty\n\nStarting with the former, @stutz2022learning propose the following:\n\n$$\n\\Omega(C_{\\theta}(x;\\tau)) = \\max (0, \\sum_k C_{\\theta,k}(x;\\tau) - \\kappa)\n$$ {#eq-size-loss}\n\nHere, $C_{\\theta,k}(x;\\tau)$ is loosely defined as the probability that class $k$ is assigned to the conformal prediction set $C$. In the context of Conformal Training, this penalty reduces the **inefficiency** of the conformal predictor. \n\nIn our context, we are not interested in improving the model itself, but rather in producing **plausible** counterfactuals. Provided that our counterfactual $x^\\prime$ is already inside the target domain ($\\mathbb{I}_{y^\\prime = t}=1$), penalizing $\\Omega(C_{\\theta}(x;\\tau))$ corresponds to guiding counterfactuals into regions of the target domain that are characterized by low ambiguity: for $\\kappa=1$, the penalty $\\Omega(C_{\\theta}(x;\\tau))$ vanishes only if the conformal prediction set includes nothing but the target label $t$. Arguably, less ambiguous counterfactuals are more **plausible**. Since the search is guided purely by properties of the model itself and (exchangeable) calibration data, counterfactuals also maintain **high fidelity**.\n\nThe left panel of @fig-losses shows the smooth size penalty in the two-dimensional feature space of our synthetic data.\n\n### Configurable Classification Loss\n\nThe right panel of @fig-losses shows the configurable classification loss in the two-dimensional feature space of our synthetic data.\n\n::: {.cell execution_count=5}\n\n::: {.cell-output .cell-output-display execution_count=6}\n![Illustration of the smooth size loss and the configurable classification loss.](intro_files/figure-html/fig-losses-output-1.svg){#fig-losses}\n:::\n:::\n\n\n## Fidelity and Plausibility\n\nThe main evaluation criteria we are interested in are *fidelity* and *plausibility*. Interestingly, we could also consider using these measures as penalties in the counterfactual search.\n\n### Fidelity\n\nWe propose to define fidelity as follows:\n\n::: {#def-fidelity}\n\n## High-Fidelity Counterfactuals\n\nLet $\\mathcal{X}_{\\theta}|y = p_{\\theta}(X|y)$ denote the class-conditional distribution of $X$ defined by $\\theta$. Then for $x^{\\prime}$ to be considered a high-fidelity counterfactual, we need: $\\mathcal{X}_{\\theta}|t \\approxeq \\mathcal{X}^{\\prime}$ where $t$ denotes the target outcome.\n\n:::\n\nWe can generate samples from $p_{\\theta}(X|y)$ following @grathwohl2020your. 
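\n\nSampling in @grathwohl2020your relies on Stochastic Gradient Langevin Dynamics (SGLD). The sketch below is an illustrative stand-in for the `EnergySampler` used next: it assumes a hypothetical helper logits(x) that returns the classifier logits for a single input x.\n\n```julia\nusing Flux\n\n# SGLD sketch: since p(x|y) ∝ exp(f_y(x)), we have ∇ log p(x|y) = ∇ f_y(x),\n# where f_y denotes the logit of class y.\nfunction sgld_sample(logits, y::Int, dim::Int; niter=100, α=1e-2)\n    x = randn(dim)                                  # initialise from noise\n    for _ in 1:niter\n        g = Flux.gradient(z -> logits(z)[y], x)[1]  # gradient of the y-logit\n        x += (α / 2) .* g .+ sqrt(α) .* randn(dim)  # Langevin update\n    end\n    return x\nend\n```\n\nCollecting many such draws per class yields conditional samples like those shown in @fig-energy.\n\n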
In @fig-energy, we have applied the methodology to our synthetic data.\n\n::: {.cell execution_count=6}\n``` {.julia .cell-code}\nM = ECCCE.ConformalModel(conf_model, mach.fitresult)\n\nniter = 100\nnsamples = 100\n\nplts = []\nfor (i,target) ∈ enumerate(counterfactual_data.y_levels)\n    sampler = ECCCE.EnergySampler(M, counterfactual_data, target; niter=niter, nsamples=nsamples)\n    Xgen = rand(sampler, nsamples)\n    plt = Plots.plot(M, counterfactual_data; target=target, zoom=-3,cbar=false)\n    Plots.scatter!(Xgen[1,:],Xgen[2,:],alpha=0.5,color=i,shape=:star,label=\"X|y=$target\")\n    push!(plts, plt)\nend\nPlots.plot(plts..., layout=(1,length(plts)), size=(img_height*length(plts),img_height))\n```\n\n::: {.cell-output .cell-output-display execution_count=7}\n![Energy-based conditional samples.](intro_files/figure-html/fig-energy-output-1.svg){#fig-energy}\n:::\n:::\n\n\nAs an evaluation metric and penalty, we could use the average distance of the counterfactual $x^{\\prime}$ from these generated samples, for example.\n\n### Plausibility\n\nWe propose to define plausibility as follows:\n\n::: {#def-plausible}\n\n## Plausible Counterfactuals\n\nFormally, let $\\mathcal{X}|t$ denote the conditional distribution of samples in the target class. As before, we have $x^{\\prime}\\sim\\mathcal{X}^{\\prime}$. Then, for $x^{\\prime}$ to be considered a plausible counterfactual, we need: $\\mathcal{X}|t \\approxeq \\mathcal{X}^{\\prime}$.\n\n:::\n\nAs an evaluation metric and penalty, we could use the average distance of the counterfactual $x^{\\prime}$ from (potentially bootstrapped) training samples in the target class, for example.\n\n## Counterfactual Explanations\n\nNext, let's generate counterfactual explanations for our synthetic data. We first wrap our model in a container that makes it compatible with `CounterfactualExplanations.jl`. Then we draw a random sample, determine its predicted label $\\hat{y}$ and choose the opposite label as our target. 
\n\n::: {.cell execution_count=7}\n``` {.julia .cell-code}\nx = select_factual(counterfactual_data,rand(1:size(counterfactual_data.X,2)))\ny_factual = predict_label(M, counterfactual_data, x)[1]\ntarget = counterfactual_data.y_levels[counterfactual_data.y_levels .!= y_factual][1]\n```\n:::\n\n\nThe generic Conformal Counterfactual Generator penalises only the set size:\n\n$$\nx^\\prime = \\arg \\min_{x^\\prime}  \\ell(M(x^\\prime),t) + \\lambda \\mathbb{I}_{y^\\prime = t} \\Omega(C_{\\theta}(x^\\prime;\\tau)) \n$$ {#eq-solution}\n\n::: {.cell execution_count=8}\n\n::: {.cell-output .cell-output-display execution_count=9}\n![Comparison of counterfactuals produced using different generators.](intro_files/figure-html/fig-ce-output-1.svg){#fig-ce}\n:::\n:::\n\n\n## Multi-Class\n\n::: {.cell execution_count=9}\n``` {.julia .cell-code}\ncounterfactual_data = load_multi_class()\n```\n:::\n\n\n::: {.cell execution_count=10}\n``` {.julia .cell-code}\nX = table(permutedims(counterfactual_data.X))\ny =  counterfactual_data.output_encoder.labels\n```\n:::\n\n\n::: {.cell execution_count=11}\n\n::: {.cell-output .cell-output-display execution_count=12}\n![Illustration of the smooth size loss.](intro_files/figure-html/fig-pen-multi-output-1.svg){#fig-pen-multi}\n:::\n:::\n\n\n::: {.cell execution_count=12}\n\n::: {.cell-output .cell-output-display execution_count=13}\n![Illustration of the configurable classification loss.](intro_files/figure-html/fig-losses-multi-output-1.svg){#fig-losses-multi}\n:::\n:::\n\n\n::: {.cell execution_count=13}\n\n::: {.cell-output .cell-output-display execution_count=14}\n![Energy-based conditional samples.](intro_files/figure-html/fig-energy-multi-output-1.svg){#fig-energy-multi}\n:::\n:::\n\n\n::: {.cell execution_count=14}\n``` {.julia .cell-code}\nx = select_factual(counterfactual_data,rand(1:size(counterfactual_data.X,2)))\ny_factual = predict_label(M, counterfactual_data, x)[1]\ntarget = counterfactual_data.y_levels[counterfactual_data.y_levels .!= y_factual][1]\n```\n:::\n\n\n::: {.cell execution_count=15}\n\n::: {.cell-output .cell-output-display execution_count=16}\n![Comparison of counterfactuals produced using different generators.](intro_files/figure-html/fig-ce-multi-output-1.svg){#fig-ce-multi}\n:::\n:::\n\n\n## Benchmarks\n\n::: {.cell execution_count=16}\n``` {.julia .cell-code}\n# Data:\ndatasets = Dict(\n    :linearly_separable => load_linearly_separable(),\n    :overlapping => load_overlapping(),\n    :moons => load_moons(),\n    :circles => load_circles(),\n    :multi_class => load_multi_class(),\n)\n\n# Untrained Models:\nmodels = Dict(\n    :cov75 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.75)),\n    :cov80 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.80)),\n    :cov90 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.90)),\n    :cov99 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.99)),\n)\n```\n:::\n\n\nThen we can simply loop over the datasets and eventually concatenate the results like so:\n\n::: {.cell execution_count=17}\n``` {.julia .cell-code}\nusing CounterfactualExplanations.Evaluation: benchmark\nbmks = []\nmeasures = [\n    CounterfactualExplanations.distance,\n    ECCCE.distance_from_energy,\n    ECCCE.distance_from_targets\n]\nfor (dataname, dataset) in datasets\n    bmk = benchmark(\n        dataset; \n        models=deepcopy(models), \n        generators=generators, \n
        measure=measures,\n        suppress_training=false, dataname=dataname,\n        n_individuals=10\n    )\n    push!(bmks, bmk)\nend\nbmk = reduce(vcat, bmks)\n```\n:::\n\n\n::: {.cell execution_count=18}\n``` {.julia .cell-code}\nf(ce) = CounterfactualExplanations.model_evaluation(ce.M, ce.data)\n@chain bmk() begin\n    @group_by(model, generator, dataname, variable)\n    @select(model, generator, dataname, ce, value)\n    @mutate(performance = f(ce))\n    @summarize(model=unique(model), generator=unique(generator), dataname=unique(dataname), performance=unique(performance), value=mean(value))\n    @ungroup\n    @filter(dataname == :multi_class)\n    @filter(model == :cov99)\n    @filter(variable == \"distance\")\nend\n```\n:::\n\n\n::: {#fig-benchmark .cell execution_count=19}\n\n::: {.cell-output .cell-output-display}\n![Circles.](intro_files/figure-html/fig-benchmark-output-1.png){#fig-benchmark-1}\n:::\n\n::: {.cell-output .cell-output-display}\n![Linearly Separable.](intro_files/figure-html/fig-benchmark-output-2.png){#fig-benchmark-2}\n:::\n\n::: {.cell-output .cell-output-display}\n![Moons.](intro_files/figure-html/fig-benchmark-output-3.png){#fig-benchmark-3}\n:::\n\n::: {.cell-output .cell-output-display}\n![Multi-class.](intro_files/figure-html/fig-benchmark-output-4.png){#fig-benchmark-4}\n:::\n\n::: {.cell-output .cell-output-display}\n![Overlapping.](intro_files/figure-html/fig-benchmark-output-5.png){#fig-benchmark-5}\n:::\n\nBenchmark results for the different generators.\n:::\n\n\n",
     "supporting": [
       "intro_files/figure-html"
     ],
diff --git a/_freeze/notebooks/proposal/execute-results/html.json b/_freeze/notebooks/proposal/execute-results/html.json
index 433db5eb..e9804de6 100644
--- a/_freeze/notebooks/proposal/execute-results/html.json
+++ b/_freeze/notebooks/proposal/execute-results/html.json
@@ -1,7 +1,7 @@
 {
   "hash": "24ab407f04257b00a84f7dcaee456281",
   "result": {
-    "markdown": "---\ntitle: High-Fidelity Counterfactual Explanations through Conformal Prediction\nsubtitle: Research Proposal\nabstract: |\n    We propose Conformal Counterfactual Explanations: an effortless and rigorous way to produce realistic and faithful Counterfactual Explanations using Conformal Prediction. To address the need for realistic counterfactuals, existing work has primarily relied on separate generative models to learn the data-generating process. While this is an effective way to produce plausible and model-agnostic counterfactual explanations, it not only introduces a significant engineering overhead but also reallocates the task of creating realistic model explanations from the model itself to the generative model. Recent work has shown that there is no need for any of this when working with probabilistic models that explicitly quantify their own uncertainty. Unfortunately, most models used in practice still do not fulfil that basic requirement, in which case we would like to have a way to quantify predictive uncertainty in a post-hoc fashion.\n---\n\n\n\n## Motivation\n\nCounterfactual Explanations are a powerful, flexible and intuitive way to not only explain black-box models but also enable affected individuals to challenge them through the means of Algorithmic Recourse. \n\n### Counterfactual Explanations or Adversarial Examples?\n\nMost state-of-the-art approaches to generating Counterfactual Explanations (CE) rely on gradient descent in the feature space. The key idea is to perturb inputs $x\\in\\mathcal{X}$ into a black-box model $f: \\mathcal{X} \\mapsto \\mathcal{Y}$ in order to change the model output $f(x)$ to some pre-specified target value $t\\in\\mathcal{Y}$. Formally, this boils down to defining some loss function $\\ell(f(x),t)$ and taking gradient steps in the minimizing direction. The so-generated counterfactuals are considered valid as soon as the predicted label matches the target label. A stripped-down counterfactual explanation is therefore little different from an adversarial example. In @fig-adv, for example, generic counterfactual search as in @wachter2017counterfactual has been applied to MNIST data.\n\n\n\n\n\n\n\n![You may not like it, but this is what stripped-down counterfactuals look like. Here we have used @wachter2017counterfactual to generate multiple counterfactuals for turning an 8 (eight) into a 3 (three).](www/you_may_not_like_it.png){#fig-adv}\n\nThe crucial difference between adversarial examples and counterfactuals is one of intent. While adversarial examples are typically intended to go unnoticed, counterfactuals in the context of Explainable AI are generally sought to be \"plausible\", \"realistic\" or \"feasible\". To fulfil this latter goal, researchers have come up with a myriad of ways. @joshi2019realistic were among the first to suggest that instead of searching counterfactuals in the feature space, we can instead traverse a latent embedding learned by a surrogate generative model. Similarly, @poyiadzi2020face use density ... Finally, @karimi2021algorithmic argues that counterfactuals should comply with the causal model that generates them [CHECK IF WE CAN PHASE THIS LIKE THIS]. Other related approaches include ... All of these different approaches have a common goal: they aim to ensure that the generated counterfactuals comply with the (learned) data-generating process (DGB). 
\n\n::: {#def-plausible}\n\n## Plausible Counterfactuals\n\nFormally, if $x \\sim \\mathcal{X}$ and for the corresponding counterfactual we have $x^{\\prime}\\sim\\mathcal{X}^{\\prime}$, then for $x^{\\prime}$ to be considered a plausible counterfactual, we need: $\\mathcal{X} \\approxeq \\mathcal{X}^{\\prime}$.\n\n:::\n\nIn the context of Algorithmic Recourse, it makes sense to strive for plausible counterfactuals, since anything else would essentially require individuals to move to out-of-distribution states. But it is worth noting that our ambition to meet this goal, may have implications on our ability to faithfully explain the behaviour of the underlying black-box model (arguably our principal goal). By essentially decoupling the task of learning plausible representations of the data from the model itself, we open ourselves up to vulnerabilities. Using a separate generative model to learn $\\mathcal{X}$, for example, has very serious implications for the generated counterfactuals. @fig-latent compares the results of applying REVISE [@joshi2019realistic] to MNIST data using two different Variational Auto-Encoders: while the counterfactual generated using an expressive (strong) VAE is compelling, the result relying on a less expressive (weak) VAE is not even valid. In this latter case, the decoder step of the VAE fails to yield values in $\\mathcal{X}$ and hence the counterfactual search in the learned latent space is doomed. \n\n\n\n\n\n\n\n![Counterfactual explanations for MNIST using a Latent Space generator: turning a nine (9) into a four (4).](www/mnist_9to4_latent.png){#fig-latent}\n\n> Here it would be nice to have another example where we poison the data going into the generative model to hide biases present in the data (e.g. Boston housing).\n\n- Latent can be manipulated: \n    - train biased model\n    - train VAE with biased variable removed/attacked (use Boston housing dataset)\n    - hypothesis: will generate bias-free explanations\n\n### From Plausible to High-Fidelity Counterfactuals {#sec-fidelity}\n\nIn light of the findings, we propose to generally avoid using surrogate models to learn $\\mathcal{X}$ in the context of Counterfactual Explanations.\n\n::: {#prp-surrogate}\n\n## Avoid Surrogates\n\nSince we are in the business of explaining a black-box model, the task of learning realistic representations of the data should not be reallocated from the model itself to some surrogate model.\n\n:::\n\nIn cases where the use of surrogate models cannot be avoided, we propose to weigh the plausibility of counterfactuals against their fidelity to the black-box model. In the context of Explainable AI, fidelity is defined as describing how an explanation approximates the prediction of the black-box model [@molnar2020interpretable]. Fidelity has become the default metric for evaluating Local Model-Agnostic Models, since they often involve local surrogate models whose predictions need not always match those of the black-box model. \n\nIn the case of Counterfactual Explanations, the concept of fidelity has so far been ignored. This is not altogether surprising, since by construction and design, Counterfactual Explanations work with the predictions of the black-box model directly: as stated above, a counterfactual $x^{\\prime}$ is considered valid if and only if $f(x^{\\prime})=t$, where $t$ denote some target outcome. \n\nDoes fidelity even make sense in the context of CE, and if so, how can we define it? 
In light of the examples in the previous section, we think it is urgent to introduce a notion of fidelity in this context, that relates to the distributional properties of the generated counterfactuals. In particular, we propose that a high-fidelity counterfactual $x^{\\prime}$ complies with the class-conditional distribution $\\mathcal{X}_{\\theta} = p_{\\theta}(X|y)$ where $\\theta$ denote the black-box model parameters. \n\n::: {#def-fidele}\n\n## High-Fidelity Counterfactuals\n\nLet $\\mathcal{X}_{\\theta}|y = p_{\\theta}(X|y)$ denote the class-conditional distribution of $X$ defined by $\\theta$. Then for $x^{\\prime}$ to be considered a high-fidelity counterfactual, we need: $\\mathcal{X}_{\\theta}|t \\approxeq \\mathcal{X}^{\\prime}$ where $t$ denotes the target outcome.\n\n:::\n\nIn order to assess the fidelity of counterfactuals, we propose the following two-step procedure:\n\n1) Generate samples $X_{\\theta}|y$ and $X^{\\prime}$ from $\\mathcal{X}_{\\theta}|t$ and $\\mathcal{X}^{\\prime}$, respectively.\n2) Compute the Maximum Mean Discrepancy (MMD) between $X_{\\theta}|y$ and $X^{\\prime}$. \n\nIf the computed value is different from zero, we can reject the null-hypothesis of fidelity.\n\n> Two challenges here: 1) implementing the sampling procedure in @grathwohl2020your; 2) it is unclear if MMD is really the right way to measure this. \n\n## Conformal Counterfactual Explanations\n\nIn @sec-fidelity, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (CCE), that is Counterfactual Explanations that minimize the predictive uncertainty of conformal models. \n\n### Minimizing Predictive Uncertainty\n\n@schut2021generating demonstrated that the goal of generating realistic (plausible) counterfactuals can also be achieved by seeking counterfactuals that minimize the predictive uncertainty of the underlying black-box model. Similarly, @antoran2020getting ...\n\n- Problem: restricted to Bayesian models.\n- Solution: post-hoc predictive uncertainty quantification. In particular, Conformal Prediction. \n\n### Background on Conformal Prediction\n\n- Distribution-free, model-agnostic and scalable approach to predictive uncertainty quantification.\n- Conformal prediction is instance-based. So is CE. \n- Take any fitted model and turn it into a conformal model using calibration data.\n- Our approach, therefore, relaxes the restriction on the family of black-box models, at the cost of relying on a subset of the data. Arguably, data is often abundant and in most applications practitioners tend to hold out a test data set anyway. \n\n> Does the coverage guarantee carry over to counterfactuals?\n\n### Generating Conformal Counterfactuals\n\nWhile Conformal Prediction has recently grown in popularity, it does introduce a challenge in the context of classification: the predictions of Conformal Classifiers are set-valued and therefore difficult to work with, since they are, for example, non-differentiable. Fortunately, @stutz2022learning introduced carefully designed differentiable loss functions that make it possible to evaluate the performance of conformal predictions in training. We can leverage these recent advances in the context of gradient-based counterfactual search ...\n\n> Challenge: still need to implement these loss functions. 
\n\n## Experiments\n\n### Research Questions\n\n- Is CP alone enough to ensure realistic counterfactuals?\n- Do counterfactuals improve further as the models get better?\n- Do counterfactuals get more realistic as coverage\n- What happens as we vary coverage and setsize?\n- What happens as we improve the model robustness?\n- What happens as we improve the model's ability to incorporate predictive uncertainty (deep ensemble, laplace)?\n- What happens if we combine with DiCE, ClaPROAR, Gravitational?\n- What about CE robustness to endogenous shifts [@altmeyer2023endogenous]?\n\n- Benchmarking:\n    - add PROBE [@pawelczyk2022probabilistically] into the mix.\n    - compare travel costs to domain shits.\n\n> Nice to have: What about using Laplace Approximation, then Conformal Prediction? What about using Conformalised Laplace? \n\n## References\n\n",
+    "markdown": "---\ntitle: High-Fidelity Counterfactual Explanations through Conformal Prediction\nsubtitle: Research Proposal\nabstract: |\n    We propose Conformal Counterfactual Explanations: an effortless and rigorous way to produce realistic and faithful Counterfactual Explanations using Conformal Prediction. To address the need for realistic counterfactuals, existing work has primarily relied on separate generative models to learn the data-generating process. While this is an effective way to produce plausible and model-agnostic counterfactual explanations, it not only introduces a significant engineering overhead but also reallocates the task of creating realistic model explanations from the model itself to the generative model. Recent work has shown that there is no need for any of this when working with probabilistic models that explicitly quantify their own uncertainty. Unfortunately, most models used in practice still do not fulfil that basic requirement, in which case we would like to have a way to quantify predictive uncertainty in a post-hoc fashion.\n---\n\n\n\n## Motivation\n\nCounterfactual Explanations are a powerful, flexible and intuitive way to not only explain black-box models but also enable affected individuals to challenge them through the means of Algorithmic Recourse. \n\n### Counterfactual Explanations or Adversarial Examples?\n\nMost state-of-the-art approaches to generating Counterfactual Explanations (CE) rely on gradient descent in the feature space. The key idea is to perturb inputs $x\\in\\mathcal{X}$ into a black-box model $f: \\mathcal{X} \\mapsto \\mathcal{Y}$ in order to change the model output $f(x)$ to some pre-specified target value $t\\in\\mathcal{Y}$. Formally, this boils down to defining some loss function $\\ell(f(x),t)$ and taking gradient steps in the minimizing direction. The so-generated counterfactuals are considered valid as soon as the predicted label matches the target label. A stripped-down counterfactual explanation is therefore little different from an adversarial example. In @fig-adv, for example, generic counterfactual search as in @wachter2017counterfactual has been applied to MNIST data.\n\n\n\n\n\n\n\n![You may not like it, but this is what stripped-down counterfactuals look like. Here we have used @wachter2017counterfactual to generate multiple counterfactuals for turning an 8 (eight) into a 3 (three).](www/you_may_not_like_it.png){#fig-adv}\n\nThe crucial difference between adversarial examples and counterfactuals is one of intent. While adversarial examples are typically intended to go unnoticed, counterfactuals in the context of Explainable AI are generally sought to be \"plausible\", \"realistic\" or \"feasible\". To fulfil this latter goal, researchers have come up with a myriad of ways. @joshi2019realistic were among the first to suggest that instead of searching counterfactuals in the feature space, we can instead traverse a latent embedding learned by a surrogate generative model. Similarly, @poyiadzi2020face use density ... Finally, @karimi2021algorithmic argues that counterfactuals should comply with the causal model that generates them [CHECK IF WE CAN PHASE THIS LIKE THIS]. Other related approaches include ... All of these different approaches have a common goal: they aim to ensure that the generated counterfactuals comply with the (learned) data-generating process (DGB). 
\n\n::: {#def-plausible}\n\n## Plausible Counterfactuals\n\nFormally, if $x \\sim \\mathcal{X}$ and for the corresponding counterfactual we have $x^{\\prime}\\sim\\mathcal{X}^{\\prime}$, then for $x^{\\prime}$ to be considered a plausible counterfactual, we need: $\\mathcal{X} \\approxeq \\mathcal{X}^{\\prime}$.\n\n:::\n\nIn the context of Algorithmic Recourse, it makes sense to strive for plausible counterfactuals, since anything else would essentially require individuals to move to out-of-distribution states. But it is worth noting that our ambition to meet this goal, may have implications on our ability to faithfully explain the behaviour of the underlying black-box model (arguably our principal goal). By essentially decoupling the task of learning plausible representations of the data from the model itself, we open ourselves up to vulnerabilities. Using a separate generative model to learn $\\mathcal{X}$, for example, has very serious implications for the generated counterfactuals. @fig-latent compares the results of applying REVISE [@joshi2019realistic] to MNIST data using two different Variational Auto-Encoders: while the counterfactual generated using an expressive (strong) VAE is compelling, the result relying on a less expressive (weak) VAE is not even valid. In this latter case, the decoder step of the VAE fails to yield values in $\\mathcal{X}$ and hence the counterfactual search in the learned latent space is doomed. \n\n\n\n\n\n\n\n![Counterfactual explanations for MNIST using a Latent Space generator: turning a nine (9) into a four (4).](www/mnist_9to4_latent.png){#fig-latent}\n\n> Here it would be nice to have another example where we poison the data going into the generative model to hide biases present in the data (e.g. Boston housing).\n\n- Latent can be manipulated: \n    - train biased model\n    - train VAE with biased variable removed/attacked (use Boston housing dataset)\n    - hypothesis: will generate bias-free explanations\n\n### From Plausible to High-Fidelity Counterfactuals {#sec-fidelity}\n\nIn light of the findings, we propose to generally avoid using surrogate models to learn $\\mathcal{X}$ in the context of Counterfactual Explanations.\n\n::: {#prp-surrogate}\n\n## Avoid Surrogates\n\nSince we are in the business of explaining a black-box model, the task of learning realistic representations of the data should not be reallocated from the model itself to some surrogate model.\n\n:::\n\nIn cases where the use of surrogate models cannot be avoided, we propose to weigh the plausibility of counterfactuals against their fidelity to the black-box model. In the context of Explainable AI, fidelity is defined as describing how an explanation approximates the prediction of the black-box model [@molnar2020interpretable]. Fidelity has become the default metric for evaluating Local Model-Agnostic Models, since they often involve local surrogate models whose predictions need not always match those of the black-box model. \n\nIn the case of Counterfactual Explanations, the concept of fidelity has so far been ignored. This is not altogether surprising, since by construction and design, Counterfactual Explanations work with the predictions of the black-box model directly: as stated above, a counterfactual $x^{\\prime}$ is considered valid if and only if $f(x^{\\prime})=t$, where $t$ denote some target outcome. \n\nDoes fidelity even make sense in the context of CE, and if so, how can we define it? 
In light of the examples in the previous section, we think it is urgent to introduce a notion of fidelity in this context that relates to the distributional properties of the generated counterfactuals. In particular, we propose that a high-fidelity counterfactual $x^{\prime}$ complies with the class-conditional distribution $\mathcal{X}_{\theta} = p_{\theta}(X|y)$ where $\theta$ denotes the black-box model parameters. \n\n::: {#def-fidele}\n\n## High-Fidelity Counterfactuals\n\nLet $\mathcal{X}_{\theta}|y = p_{\theta}(X|y)$ denote the class-conditional distribution of $X$ defined by $\theta$. Then for $x^{\prime}$ to be considered a high-fidelity counterfactual, we need: $\mathcal{X}_{\theta}|t \approxeq \mathcal{X}^{\prime}$ where $t$ denotes the target outcome.\n\n:::\n\nIn order to assess the fidelity of counterfactuals, we propose the following two-step procedure:\n\n1) Generate samples $X_{\theta}|t$ and $X^{\prime}$ from $\mathcal{X}_{\theta}|t$ and $\mathcal{X}^{\prime}$, respectively.\n2) Compute the Maximum Mean Discrepancy (MMD) between $X_{\theta}|t$ and $X^{\prime}$. \n\nIf the estimated MMD is significantly different from zero, we can reject the null hypothesis of fidelity.\n\n> Two challenges here: 1) implementing the sampling procedure in @grathwohl2020your; 2) it is unclear if MMD is really the right way to measure this. \n\n## Conformal Counterfactual Explanations\n\nIn @sec-fidelity, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (ECCCE), that is, Counterfactual Explanations that minimize the predictive uncertainty of conformal models. \n\n### Minimizing Predictive Uncertainty\n\n@schut2021generating demonstrated that the goal of generating realistic (plausible) counterfactuals can also be achieved by seeking counterfactuals that minimize the predictive uncertainty of the underlying black-box model. Similarly, @antoran2020getting ...\n\n- Problem: restricted to Bayesian models.\n- Solution: post-hoc predictive uncertainty quantification. In particular, Conformal Prediction. \n\n### Background on Conformal Prediction\n\n- Distribution-free, model-agnostic and scalable approach to predictive uncertainty quantification.\n- Conformal prediction is instance-based. So is CE. \n- Take any fitted model and turn it into a conformal model using calibration data.\n- Our approach, therefore, relaxes the restriction on the family of black-box models, at the cost of relying on a subset of the data. Arguably, data is often abundant and in most applications practitioners tend to hold out a test data set anyway. \n\n> Does the coverage guarantee carry over to counterfactuals?\n\n### Generating Conformal Counterfactuals\n\nWhile Conformal Prediction has recently grown in popularity, it does introduce a challenge in the context of classification: the predictions of Conformal Classifiers are set-valued and therefore difficult to work with, since they are, for example, non-differentiable. Fortunately, @stutz2022learning introduced carefully designed differentiable loss functions that make it possible to evaluate the performance of conformal predictions during training. We can leverage these recent advances in the context of gradient-based counterfactual search ...\n\n> Challenge: still need to implement these loss functions. 
\n\n## Experiments\n\n### Research Questions\n\n- Is CP alone enough to ensure realistic counterfactuals?\n- Do counterfactuals improve further as the models get better?\n- Do counterfactuals get more realistic as coverage increases?\n- What happens as we vary coverage and set size?\n- What happens as we improve the model robustness?\n- What happens as we improve the model's ability to incorporate predictive uncertainty (deep ensembles, Laplace approximation)?\n- What happens if we combine with DiCE, ClaPROAR, Gravitational?\n- What about CE robustness to endogenous shifts [@altmeyer2023endogenous]?\n\n- Benchmarking:\n    - add PROBE [@pawelczyk2022probabilistically] into the mix.\n    - compare travel costs to domain shifts.\n\n> Nice to have: What about using Laplace Approximation, then Conformal Prediction? What about using Conformalised Laplace? \n\n## References\n\n",
     "supporting": [
       "proposal_files/figure-html"
     ],
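
The two-step fidelity check proposed above amounts to a standard kernel two-sample test. Below is a minimal sketch of an MMD² estimate, assuming a Gaussian kernel with a median-heuristic bandwidth and column-major sample matrices; the helper names are illustrative and not part of ECCCE.jl.

```julia
using Statistics

# Gaussian kernel with precision γ (illustrative choice):
gauss_kernel(x, y, γ) = exp(-γ * sum(abs2, x .- y))

# Median-heuristic precision over the pooled columns of X and Y (assumption):
function median_precision(X, Y)
    Z = hcat(X, Y)
    n = size(Z, 2)
    d = [sum(abs2, Z[:, i] .- Z[:, j]) for i in 1:n for j in (i+1):n]
    return 1 / median(d)
end

# Biased (V-statistic) estimate of MMD² between samples X and Y:
function mmd2(X, Y)
    γ = median_precision(X, Y)
    k(A, B) = mean(gauss_kernel(A[:, i], B[:, j], γ) for i in 1:size(A, 2), j in 1:size(B, 2))
    return k(X, X) + k(Y, Y) - 2 * k(X, Y)
end
```

A permutation test over the pooled samples would then turn the raw MMD² estimate into a p-value for the null hypothesis of fidelity.
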
diff --git a/_freeze/notebooks/synthetic/execute-results/html.json b/_freeze/notebooks/synthetic/execute-results/html.json
index 5770692c..682d0f96 100644
--- a/_freeze/notebooks/synthetic/execute-results/html.json
+++ b/_freeze/notebooks/synthetic/execute-results/html.json
@@ -1,7 +1,7 @@
 {
   "hash": "617bb13e20ec081d43c585fd80675156",
   "result": {
-    "markdown": "::: {.cell execution_count=1}\n``` {.julia .cell-code}\ninclude(\"notebooks/setup.jl\")\neval(setup_notebooks);\n```\n:::\n\n\n# Synthetic data\n\n::: {.cell execution_count=2}\n``` {.julia .cell-code}\n# Data:\ndatasets = Dict(\n    :linearly_separable => load_linearly_separable(),\n    :overlapping => load_overlapping(),\n    :moons => load_moons(),\n    :circles => load_circles(),\n    :multi_class => load_multi_class(),\n)\n\n# Hyperparameters:\ncvgs = [0.5, 0.75, 0.95]\ntemps = [0.01, 0.1, 1.0]\nΛ = [0.0, 0.1, 1.0, 10.0]\nl2_λ = 0.1\n\n# Classifiers:\nepochs = 250\nlink_fun = relu\nlogreg = NeuralNetworkClassifier(builder=MLJFlux.Linear(σ=link_fun), epochs=epochs)\nmlp = NeuralNetworkClassifier(builder=MLJFlux.MLP(hidden=(32,), σ=link_fun), epochs=epochs)\nensmbl = EnsembleModel(model=mlp, n=5)\nclassifiers = Dict(\n    # :logreg => logreg,\n    :mlp => mlp,\n    # :ensmbl => ensmbl,\n)\n\n# Search parameters:\ntarget = 2\nfactual = 1\nmax_iter = 50\ngradient_tol = 1e-2\nopt = Descent(0.01)\n```\n:::\n\n\n\n\n\n\n::: {.cell execution_count=5}\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-1.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-2.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-3.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-4.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-5.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-6.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-7.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-8.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-9.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-10.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-11.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-12.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-13.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-14.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-15.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-16.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-17.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-18.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-19.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-20.svg){}\n:::\n:::\n\n\n## Benchmark\n\n::: {.cell execution_count=6}\n``` {.julia .cell-code}\n# Benchmark generators:\ngenerators = Dict(\n    :wachter => GenericGenerator(opt=opt, λ=l2_λ),\n    :revise => REVISEGenerator(opt=opt, λ=l2_λ),\n    :greedy => GreedyGenerator(),\n)\n\n# Untrained Models:\nmodels = Dict(Symbol(\"cov$(Int(100*cov))\") => CCE.ConformalModel(conformal_model(mlp; 
method=:simple_inductive, coverage=cov)) for cov in cvgs)\n\n# Measures:\nmeasures = [\n    CounterfactualExplanations.distance,\n    CCE.distance_from_energy,\n    CCE.distance_from_targets,\n    CounterfactualExplanations.validity,\n]\n```\n:::\n\n\n### Single CE\n\n\n\n\n\n::: {.cell execution_count=9}\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-1.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-2.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-3.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-4.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-5.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-6.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-7.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-8.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-9.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-10.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-11.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-12.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-13.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-14.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-15.png){}\n:::\n:::\n\n\n### Full Benchmark\n\n::: {.cell execution_count=10}\n``` {.julia .cell-code}\nbmks = []\nfor (dataname, dataset) in datasets\n    for λ in Λ, temp in temps\n        _generators = deepcopy(generators)\n        _generators[:cce] = CCEGenerator(temp=temp, λ=[l2_λ,λ], opt=opt)\n        _generators[:energy] = CCE.EnergyDrivenGenerator(λ=[l2_λ,λ], opt=opt)\n        _generators[:target] = CCE.TargetDrivenGenerator(λ=[l2_λ,λ], opt=opt)\n        bmk = benchmark(\n            dataset; \n            models=deepcopy(models), \n            generators=_generators, \n            measure=measures,\n            suppress_training=false, dataname=dataname,\n            n_individuals=5,\n            initialization=:identity,\n        )\n        bmk.evaluation.λ .= λ\n        bmk.evaluation.temperature .= temp\n        push!(bmks, bmk)\n    end\nend\nbmk = reduce(vcat, bmks)\n```\n:::\n\n\n::: {.cell execution_count=11}\n``` {.julia .cell-code}\nCSV.write(joinpath(output_path, \"synthetic_benchmark.csv\"), bmk())\n```\n:::\n\n\n::: {.cell execution_count=12}\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-1.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-2.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-3.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-4.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-5.png){}\n:::\n\n::: {.cell-output 
.cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-6.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-7.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-8.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-9.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-10.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-11.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-12.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-13.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-14.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-15.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-16.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-17.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-18.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-19.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-20.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-21.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-22.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-23.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-24.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-25.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-26.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-27.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-28.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-29.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-30.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-31.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-32.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-33.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-34.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-35.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-36.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-37.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-38.png){}\n:::\n\n::: {.cell-output 
.cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-39.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-40.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-41.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-42.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-43.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-44.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-45.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-46.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-47.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-48.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-49.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-50.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-51.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-52.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-53.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-54.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-55.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-56.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-57.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-58.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-59.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-60.png){}\n:::\n:::\n\n\n",
+    "markdown": "::: {.cell execution_count=1}\n``` {.julia .cell-code}\ninclude(\"notebooks/setup.jl\")\neval(setup_notebooks);\n```\n:::\n\n\n# Synthetic data\n\n::: {.cell execution_count=2}\n``` {.julia .cell-code}\n# Data:\ndatasets = Dict(\n    :linearly_separable => load_linearly_separable(),\n    :overlapping => load_overlapping(),\n    :moons => load_moons(),\n    :circles => load_circles(),\n    :multi_class => load_multi_class(),\n)\n\n# Hyperparameters:\ncvgs = [0.5, 0.75, 0.95]\ntemps = [0.01, 0.1, 1.0]\nΛ = [0.0, 0.1, 1.0, 10.0]\nl2_λ = 0.1\n\n# Classifiers:\nepochs = 250\nlink_fun = relu\nlogreg = NeuralNetworkClassifier(builder=MLJFlux.Linear(σ=link_fun), epochs=epochs)\nmlp = NeuralNetworkClassifier(builder=MLJFlux.MLP(hidden=(32,), σ=link_fun), epochs=epochs)\nensmbl = EnsembleModel(model=mlp, n=5)\nclassifiers = Dict(\n    # :logreg => logreg,\n    :mlp => mlp,\n    # :ensmbl => ensmbl,\n)\n\n# Search parameters:\ntarget = 2\nfactual = 1\nmax_iter = 50\ngradient_tol = 1e-2\nopt = Descent(0.01)\n```\n:::\n\n\n\n\n\n\n::: {.cell execution_count=5}\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-1.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-2.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-3.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-4.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-5.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-6.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-7.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-8.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-9.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-10.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-11.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-12.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-13.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-14.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-15.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-16.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-17.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-18.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-19.svg){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-6-output-20.svg){}\n:::\n:::\n\n\n## Benchmark\n\n::: {.cell execution_count=6}\n``` {.julia .cell-code}\n# Benchmark generators:\ngenerators = Dict(\n    :wachter => GenericGenerator(opt=opt, λ=l2_λ),\n    :revise => REVISEGenerator(opt=opt, λ=l2_λ),\n    :greedy => GreedyGenerator(),\n)\n\n# Untrained Models:\nmodels = Dict(Symbol(\"cov$(Int(100*cov))\") => ECCCE.ConformalModel(conformal_model(mlp; 
method=:simple_inductive, coverage=cov)) for cov in cvgs)\n\n# Measures:\nmeasures = [\n    CounterfactualExplanations.distance,\n    ECCCE.distance_from_energy,\n    ECCCE.distance_from_targets,\n    CounterfactualExplanations.validity,\n]\n```\n:::\n\n\n### Single CE\n\n\n\n\n\n::: {.cell execution_count=9}\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-1.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-2.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-3.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-4.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-5.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-6.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-7.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-8.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-9.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-10.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-11.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-12.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-13.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-14.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-10-output-15.png){}\n:::\n:::\n\n\n### Full Benchmark\n\n::: {.cell execution_count=10}\n``` {.julia .cell-code}\nbmks = []\nfor (dataname, dataset) in datasets\n    for λ in Λ, temp in temps\n        _generators = deepcopy(generators)\n        _generators[:cce] = ECCCEGenerator(temp=temp, λ=[l2_λ,λ], opt=opt)\n        _generators[:energy] = ECCCE.EnergyDrivenGenerator(λ=[l2_λ,λ], opt=opt)\n        _generators[:target] = ECCCE.TargetDrivenGenerator(λ=[l2_λ,λ], opt=opt)\n        bmk = benchmark(\n            dataset; \n            models=deepcopy(models), \n            generators=_generators, \n            measure=measures,\n            suppress_training=false, dataname=dataname,\n            n_individuals=5,\n            initialization=:identity,\n        )\n        bmk.evaluation.λ .= λ\n        bmk.evaluation.temperature .= temp\n        push!(bmks, bmk)\n    end\nend\nbmk = reduce(vcat, bmks)\n```\n:::\n\n\n::: {.cell execution_count=11}\n``` {.julia .cell-code}\nCSV.write(joinpath(output_path, \"synthetic_benchmark.csv\"), bmk())\n```\n:::\n\n\n::: {.cell execution_count=12}\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-1.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-2.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-3.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-4.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-5.png){}\n:::\n\n::: {.cell-output 
.cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-6.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-7.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-8.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-9.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-10.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-11.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-12.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-13.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-14.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-15.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-16.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-17.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-18.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-19.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-20.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-21.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-22.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-23.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-24.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-25.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-26.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-27.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-28.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-29.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-30.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-31.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-32.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-33.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-34.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-35.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-36.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-37.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-38.png){}\n:::\n\n::: {.cell-output 
.cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-39.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-40.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-41.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-42.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-43.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-44.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-45.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-46.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-47.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-48.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-49.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-50.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-51.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-52.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-53.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-54.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-55.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-56.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-57.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-58.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-59.png){}\n:::\n\n::: {.cell-output .cell-output-display}\n![](synthetic_files/figure-html/cell-13-output-60.png){}\n:::\n:::\n\n\n",
     "supporting": [
       "synthetic_files"
     ],
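
The `temp` hyperparameter swept in the benchmark above controls how sharply the set-valued conformal prediction is smoothed during counterfactual search, in the spirit of the differentiable losses of Stutz et al. (2022). A rough sketch of such a smooth set-size penalty, assuming softmax outputs `p̂`, a calibrated threshold `q̂` and `1 .- p̂` as the non-conformity score (all names here are illustrative assumptions):

```julia
sigmoid(z) = 1 / (1 + exp(-z))

# Hard inclusion rule s(x, y) ≤ q̂ relaxed to σ((q̂ - s) / T), so that the
# size of the prediction set becomes differentiable in the input:
function smooth_set_size(p̂::AbstractVector, q̂::Real; T::Real=0.1)
    s = 1 .- p̂                        # simple non-conformity score (assumption)
    return sum(sigmoid.((q̂ .- s) ./ T))
end
```

Penalising `smooth_set_size` in the counterfactual objective then favours counterfactuals for which the conformal model predicts a small set, i.e. low predictive uncertainty.
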
diff --git a/docs/notebooks/intro.html b/docs/notebooks/intro.html
index 6ecdadd7..6a00699b 100644
--- a/docs/notebooks/intro.html
+++ b/docs/notebooks/intro.html
@@ -351,14 +351,14 @@ C(X_{\text{test}})=\{y:s(X_{\text{test}},y) \le \hat{q}\}
 </div>
 <p>We can generate samples from <span class="math inline">\(p_{\theta}(X|y)\)</span> following <span class="citation" data-cites="grathwohl2020your">Grathwohl et al. (<a href="references.html#ref-grathwohl2020your" role="doc-biblioref">2020</a>)</span>. In <a href="#fig-energy">Figure&nbsp;<span>2.2</span></a>, I have applied the methodology to our synthetic data.</p>
 <div class="cell" data-execution_count="6">
-<div class="sourceCode cell-code" id="cb5"><pre class="sourceCode julia code-with-copy"><code class="sourceCode julia"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>M <span class="op">=</span> CCE.<span class="fu">ConformalModel</span>(conf_model, mach.fitresult)</span>
+<div class="sourceCode cell-code" id="cb5"><pre class="sourceCode julia code-with-copy"><code class="sourceCode julia"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>M <span class="op">=</span> ECCCE.<span class="fu">ConformalModel</span>(conf_model, mach.fitresult)</span>
 <span id="cb5-2"><a href="#cb5-2" aria-hidden="true" tabindex="-1"></a></span>
 <span id="cb5-3"><a href="#cb5-3" aria-hidden="true" tabindex="-1"></a>niter <span class="op">=</span> <span class="fl">100</span></span>
 <span id="cb5-4"><a href="#cb5-4" aria-hidden="true" tabindex="-1"></a>nsamples <span class="op">=</span> <span class="fl">100</span></span>
 <span id="cb5-5"><a href="#cb5-5" aria-hidden="true" tabindex="-1"></a></span>
 <span id="cb5-6"><a href="#cb5-6" aria-hidden="true" tabindex="-1"></a>plts <span class="op">=</span> []</span>
 <span id="cb5-7"><a href="#cb5-7" aria-hidden="true" tabindex="-1"></a><span class="cf">for</span> (i,target) <span class="op">∈</span> <span class="fu">enumerate</span>(counterfactual_data.y_levels)</span>
-<span id="cb5-8"><a href="#cb5-8" aria-hidden="true" tabindex="-1"></a>    sampler <span class="op">=</span> CCE.<span class="fu">EnergySampler</span>(M, counterfactual_data, target; niter<span class="op">=</span>niter, nsamples<span class="op">=</span><span class="fl">100</span>)</span>
+<span id="cb5-8"><a href="#cb5-8" aria-hidden="true" tabindex="-1"></a>    sampler <span class="op">=</span> ECCCE.<span class="fu">EnergySampler</span>(M, counterfactual_data, target; niter<span class="op">=</span>niter, nsamples<span class="op">=</span><span class="fl">100</span>)</span>
 <span id="cb5-9"><a href="#cb5-9" aria-hidden="true" tabindex="-1"></a>    Xgen <span class="op">=</span> <span class="fu">rand</span>(sampler, nsamples)</span>
 <span id="cb5-10"><a href="#cb5-10" aria-hidden="true" tabindex="-1"></a>    plt <span class="op">=</span> Plots.<span class="fu">plot</span>(M, counterfactual_data; target<span class="op">=</span>target, zoom<span class="op">=-</span><span class="fl">3</span>,cbar<span class="op">=</span><span class="cn">false</span>)</span>
 <span id="cb5-11"><a href="#cb5-11" aria-hidden="true" tabindex="-1"></a>    Plots.<span class="fu">scatter!</span>(Xgen[<span class="fl">1</span>,<span class="op">:</span>],Xgen[<span class="fl">2</span>,<span class="op">:</span>],alpha<span class="op">=</span><span class="fl">0.5</span>,color<span class="op">=</span>i,shape<span class="op">=:</span>star,label<span class="op">=</span><span class="st">"X|y=</span><span class="sc">$</span>target<span class="st">"</span>)</span>
@@ -477,10 +477,10 @@ x^\prime = \arg \min_{x^\prime}  \ell(M(x^\prime),t) + \lambda \mathbb{I}_{y^\pr
 <span id="cb10-9"><a href="#cb10-9" aria-hidden="true" tabindex="-1"></a></span>
 <span id="cb10-10"><a href="#cb10-10" aria-hidden="true" tabindex="-1"></a><span class="co"># Untrained Models:</span></span>
 <span id="cb10-11"><a href="#cb10-11" aria-hidden="true" tabindex="-1"></a>models <span class="op">=</span> <span class="fu">Dict</span>(</span>
-<span id="cb10-12"><a href="#cb10-12" aria-hidden="true" tabindex="-1"></a>    <span class="op">:</span>cov75 <span class="op">=&gt;</span> CCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(clf; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span><span class="fl">0.75</span>)),</span>
-<span id="cb10-13"><a href="#cb10-13" aria-hidden="true" tabindex="-1"></a>    <span class="op">:</span>cov80 <span class="op">=&gt;</span> CCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(clf; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span><span class="fl">0.80</span>)),</span>
-<span id="cb10-14"><a href="#cb10-14" aria-hidden="true" tabindex="-1"></a>    <span class="op">:</span>cov90 <span class="op">=&gt;</span> CCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(clf; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span><span class="fl">0.90</span>)),</span>
-<span id="cb10-15"><a href="#cb10-15" aria-hidden="true" tabindex="-1"></a>    <span class="op">:</span>cov99 <span class="op">=&gt;</span> CCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(clf; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span><span class="fl">0.99</span>)),</span>
+<span id="cb10-12"><a href="#cb10-12" aria-hidden="true" tabindex="-1"></a>    <span class="op">:</span>cov75 <span class="op">=&gt;</span> ECCCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(clf; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span><span class="fl">0.75</span>)),</span>
+<span id="cb10-13"><a href="#cb10-13" aria-hidden="true" tabindex="-1"></a>    <span class="op">:</span>cov80 <span class="op">=&gt;</span> ECCCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(clf; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span><span class="fl">0.80</span>)),</span>
+<span id="cb10-14"><a href="#cb10-14" aria-hidden="true" tabindex="-1"></a>    <span class="op">:</span>cov90 <span class="op">=&gt;</span> ECCCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(clf; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span><span class="fl">0.90</span>)),</span>
+<span id="cb10-15"><a href="#cb10-15" aria-hidden="true" tabindex="-1"></a>    <span class="op">:</span>cov99 <span class="op">=&gt;</span> ECCCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(clf; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span><span class="fl">0.99</span>)),</span>
 <span id="cb10-16"><a href="#cb10-16" aria-hidden="true" tabindex="-1"></a>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
 </div>
 <p>Then we can simply loop over the datasets and eventually concatenate the results like so:</p>
@@ -489,8 +489,8 @@ x^\prime = \arg \min_{x^\prime}  \ell(M(x^\prime),t) + \lambda \mathbb{I}_{y^\pr
 <span id="cb11-2"><a href="#cb11-2" aria-hidden="true" tabindex="-1"></a>bmks <span class="op">=</span> []</span>
 <span id="cb11-3"><a href="#cb11-3" aria-hidden="true" tabindex="-1"></a>measures <span class="op">=</span> [</span>
 <span id="cb11-4"><a href="#cb11-4" aria-hidden="true" tabindex="-1"></a>    CounterfactualExplanations.distance,</span>
-<span id="cb11-5"><a href="#cb11-5" aria-hidden="true" tabindex="-1"></a>    CCE.distance_from_energy,</span>
-<span id="cb11-6"><a href="#cb11-6" aria-hidden="true" tabindex="-1"></a>    CCE.distance_from_targets</span>
+<span id="cb11-5"><a href="#cb11-5" aria-hidden="true" tabindex="-1"></a>    ECCCE.distance_from_energy,</span>
+<span id="cb11-6"><a href="#cb11-6" aria-hidden="true" tabindex="-1"></a>    ECCCE.distance_from_targets</span>
 <span id="cb11-7"><a href="#cb11-7" aria-hidden="true" tabindex="-1"></a>]</span>
 <span id="cb11-8"><a href="#cb11-8" aria-hidden="true" tabindex="-1"></a><span class="cf">for</span> (dataname, dataset) <span class="kw">in</span> datasets</span>
 <span id="cb11-9"><a href="#cb11-9" aria-hidden="true" tabindex="-1"></a>    bmk <span class="op">=</span> <span class="fu">benchmark</span>(</span>
diff --git a/docs/notebooks/proposal.html b/docs/notebooks/proposal.html
index 958533e3..4d65db90 100644
--- a/docs/notebooks/proposal.html
+++ b/docs/notebooks/proposal.html
@@ -253,7 +253,7 @@ div.csl-indent {
 </section>
 <section id="conformal-counterfactual-explanations" class="level2" data-number="1.2">
 <h2 data-number="1.2" class="anchored" data-anchor-id="conformal-counterfactual-explanations"><span class="header-section-number">1.2</span> Conformal Counterfactual Explanations</h2>
-<p>In <a href="#sec-fidelity"><span>Section&nbsp;1.1.2</span></a>, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (CCE), that is Counterfactual Explanations that minimize the predictive uncertainty of conformal models.</p>
+<p>In <a href="#sec-fidelity"><span>Section&nbsp;1.1.2</span></a>, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (ECCCE), that is Counterfactual Explanations that minimize the predictive uncertainty of conformal models.</p>
 <section id="minimizing-predictive-uncertainty" class="level3" data-number="1.2.1">
 <h3 data-number="1.2.1" class="anchored" data-anchor-id="minimizing-predictive-uncertainty"><span class="header-section-number">1.2.1</span> Minimizing Predictive Uncertainty</h3>
 <p><span class="citation" data-cites="schut2021generating">Schut et al. (<a href="references.html#ref-schut2021generating" role="doc-biblioref">2021</a>)</span> demonstrated that the goal of generating realistic (plausible) counterfactuals can also be achieved by seeking counterfactuals that minimize the predictive uncertainty of the underlying black-box model. Similarly, <span class="citation" data-cites="antoran2020getting">Antorán et al. (<a href="references.html#ref-antoran2020getting" role="doc-biblioref">2020</a>)</span> …</p>
diff --git a/docs/notebooks/synthetic.html b/docs/notebooks/synthetic.html
index db488979..ec5ffcc2 100644
--- a/docs/notebooks/synthetic.html
+++ b/docs/notebooks/synthetic.html
@@ -330,13 +330,13 @@ code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warni
 <span id="cb3-6"><a href="#cb3-6" aria-hidden="true" tabindex="-1"></a>)</span>
 <span id="cb3-7"><a href="#cb3-7" aria-hidden="true" tabindex="-1"></a></span>
 <span id="cb3-8"><a href="#cb3-8" aria-hidden="true" tabindex="-1"></a><span class="co"># Untrained Models:</span></span>
-<span id="cb3-9"><a href="#cb3-9" aria-hidden="true" tabindex="-1"></a>models <span class="op">=</span> <span class="fu">Dict</span>(<span class="fu">Symbol</span>(<span class="st">"cov</span><span class="sc">$</span>(<span class="fu">Int</span>(<span class="fl">100</span><span class="op">*</span>cov))<span class="st">"</span>) <span class="op">=&gt;</span> CCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(mlp; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span>cov)) <span class="cf">for</span> cov <span class="kw">in</span> cvgs)</span>
+<span id="cb3-9"><a href="#cb3-9" aria-hidden="true" tabindex="-1"></a>models <span class="op">=</span> <span class="fu">Dict</span>(<span class="fu">Symbol</span>(<span class="st">"cov</span><span class="sc">$</span>(<span class="fu">Int</span>(<span class="fl">100</span><span class="op">*</span>cov))<span class="st">"</span>) <span class="op">=&gt;</span> ECCCE.<span class="fu">ConformalModel</span>(<span class="fu">conformal_model</span>(mlp; method<span class="op">=:</span>simple_inductive, coverage<span class="op">=</span>cov)) <span class="cf">for</span> cov <span class="kw">in</span> cvgs)</span>
 <span id="cb3-10"><a href="#cb3-10" aria-hidden="true" tabindex="-1"></a></span>
 <span id="cb3-11"><a href="#cb3-11" aria-hidden="true" tabindex="-1"></a><span class="co"># Measures:</span></span>
 <span id="cb3-12"><a href="#cb3-12" aria-hidden="true" tabindex="-1"></a>measures <span class="op">=</span> [</span>
 <span id="cb3-13"><a href="#cb3-13" aria-hidden="true" tabindex="-1"></a>    CounterfactualExplanations.distance,</span>
-<span id="cb3-14"><a href="#cb3-14" aria-hidden="true" tabindex="-1"></a>    CCE.distance_from_energy,</span>
-<span id="cb3-15"><a href="#cb3-15" aria-hidden="true" tabindex="-1"></a>    CCE.distance_from_targets,</span>
+<span id="cb3-14"><a href="#cb3-14" aria-hidden="true" tabindex="-1"></a>    ECCCE.distance_from_energy,</span>
+<span id="cb3-15"><a href="#cb3-15" aria-hidden="true" tabindex="-1"></a>    ECCCE.distance_from_targets,</span>
 <span id="cb3-16"><a href="#cb3-16" aria-hidden="true" tabindex="-1"></a>    CounterfactualExplanations.validity,</span>
 <span id="cb3-17"><a href="#cb3-17" aria-hidden="true" tabindex="-1"></a>]</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
 </div>
@@ -397,9 +397,9 @@ code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warni
 <span id="cb4-2"><a href="#cb4-2" aria-hidden="true" tabindex="-1"></a><span class="cf">for</span> (dataname, dataset) <span class="kw">in</span> datasets</span>
 <span id="cb4-3"><a href="#cb4-3" aria-hidden="true" tabindex="-1"></a>    <span class="cf">for</span> λ <span class="kw">in</span> Λ, temp <span class="kw">in</span> temps</span>
 <span id="cb4-4"><a href="#cb4-4" aria-hidden="true" tabindex="-1"></a>        _generators <span class="op">=</span> <span class="fu">deepcopy</span>(generators)</span>
-<span id="cb4-5"><a href="#cb4-5" aria-hidden="true" tabindex="-1"></a>        _generators[<span class="op">:</span>cce] <span class="op">=</span> <span class="fu">CCEGenerator</span>(temp<span class="op">=</span>temp, λ<span class="op">=</span>[l2_λ,λ], opt<span class="op">=</span>opt)</span>
-<span id="cb4-6"><a href="#cb4-6" aria-hidden="true" tabindex="-1"></a>        _generators[<span class="op">:</span>energy] <span class="op">=</span> CCE.<span class="fu">EnergyDrivenGenerator</span>(λ<span class="op">=</span>[l2_λ,λ], opt<span class="op">=</span>opt)</span>
-<span id="cb4-7"><a href="#cb4-7" aria-hidden="true" tabindex="-1"></a>        _generators[<span class="op">:</span>target] <span class="op">=</span> CCE.<span class="fu">TargetDrivenGenerator</span>(λ<span class="op">=</span>[l2_λ,λ], opt<span class="op">=</span>opt)</span>
+<span id="cb4-5"><a href="#cb4-5" aria-hidden="true" tabindex="-1"></a>        _generators[<span class="op">:</span>cce] <span class="op">=</span> <span class="fu">ECCCEGenerator</span>(temp<span class="op">=</span>temp, λ<span class="op">=</span>[l2_λ,λ], opt<span class="op">=</span>opt)</span>
+<span id="cb4-6"><a href="#cb4-6" aria-hidden="true" tabindex="-1"></a>        _generators[<span class="op">:</span>energy] <span class="op">=</span> ECCCE.<span class="fu">EnergyDrivenGenerator</span>(λ<span class="op">=</span>[l2_λ,λ], opt<span class="op">=</span>opt)</span>
+<span id="cb4-7"><a href="#cb4-7" aria-hidden="true" tabindex="-1"></a>        _generators[<span class="op">:</span>target] <span class="op">=</span> ECCCE.<span class="fu">TargetDrivenGenerator</span>(λ<span class="op">=</span>[l2_λ,λ], opt<span class="op">=</span>opt)</span>
 <span id="cb4-8"><a href="#cb4-8" aria-hidden="true" tabindex="-1"></a>        bmk <span class="op">=</span> <span class="fu">benchmark</span>(</span>
 <span id="cb4-9"><a href="#cb4-9" aria-hidden="true" tabindex="-1"></a>            dataset; </span>
 <span id="cb4-10"><a href="#cb4-10" aria-hidden="true" tabindex="-1"></a>            models<span class="op">=</span><span class="fu">deepcopy</span>(models), </span>
diff --git a/docs/search.json b/docs/search.json
index 8413a225..c097c74a 100644
--- a/docs/search.json
+++ b/docs/search.json
@@ -18,7 +18,7 @@
     "href": "notebooks/proposal.html#conformal-counterfactual-explanations",
     "title": "1  High-Fidelity Counterfactual Explanations through Conformal Prediction",
     "section": "1.2 Conformal Counterfactual Explanations",
-    "text": "1.2 Conformal Counterfactual Explanations\nIn Section 1.1.2, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (CCE), that is Counterfactual Explanations that minimize the predictive uncertainty of conformal models.\n\n1.2.1 Minimizing Predictive Uncertainty\nSchut et al. (2021) demonstrated that the goal of generating realistic (plausible) counterfactuals can also be achieved by seeking counterfactuals that minimize the predictive uncertainty of the underlying black-box model. Similarly, Antorán et al. (2020) …\n\nProblem: restricted to Bayesian models.\nSolution: post-hoc predictive uncertainty quantification. In particular, Conformal Prediction.\n\n\n\n1.2.2 Background on Conformal Prediction\n\nDistribution-free, model-agnostic and scalable approach to predictive uncertainty quantification.\nConformal prediction is instance-based. So is CE.\nTake any fitted model and turn it into a conformal model using calibration data.\nOur approach, therefore, relaxes the restriction on the family of black-box models, at the cost of relying on a subset of the data. Arguably, data is often abundant and in most applications practitioners tend to hold out a test data set anyway.\n\n\nDoes the coverage guarantee carry over to counterfactuals?\n\n\n\n1.2.3 Generating Conformal Counterfactuals\nWhile Conformal Prediction has recently grown in popularity, it does introduce a challenge in the context of classification: the predictions of Conformal Classifiers are set-valued and therefore difficult to work with, since they are, for example, non-differentiable. Fortunately, Stutz et al. (2022) introduced carefully designed differentiable loss functions that make it possible to evaluate the performance of conformal predictions in training. We can leverage these recent advances in the context of gradient-based counterfactual search …\n\nChallenge: still need to implement these loss functions."
+    "text": "1.2 Conformal Counterfactual Explanations\nIn Section 1.1.2, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (ECCCE), that is Counterfactual Explanations that minimize the predictive uncertainty of conformal models.\n\n1.2.1 Minimizing Predictive Uncertainty\nSchut et al. (2021) demonstrated that the goal of generating realistic (plausible) counterfactuals can also be achieved by seeking counterfactuals that minimize the predictive uncertainty of the underlying black-box model. Similarly, Antorán et al. (2020) …\n\nProblem: restricted to Bayesian models.\nSolution: post-hoc predictive uncertainty quantification. In particular, Conformal Prediction.\n\n\n\n1.2.2 Background on Conformal Prediction\n\nDistribution-free, model-agnostic and scalable approach to predictive uncertainty quantification.\nConformal prediction is instance-based. So is CE.\nTake any fitted model and turn it into a conformal model using calibration data.\nOur approach, therefore, relaxes the restriction on the family of black-box models, at the cost of relying on a subset of the data. Arguably, data is often abundant and in most applications practitioners tend to hold out a test data set anyway.\n\n\nDoes the coverage guarantee carry over to counterfactuals?\n\n\n\n1.2.3 Generating Conformal Counterfactuals\nWhile Conformal Prediction has recently grown in popularity, it does introduce a challenge in the context of classification: the predictions of Conformal Classifiers are set-valued and therefore difficult to work with, since they are, for example, non-differentiable. Fortunately, Stutz et al. (2022) introduced carefully designed differentiable loss functions that make it possible to evaluate the performance of conformal predictions in training. We can leverage these recent advances in the context of gradient-based counterfactual search …\n\nChallenge: still need to implement these loss functions."
   },
   {
     "objectID": "notebooks/proposal.html#experiments",
@@ -60,7 +60,7 @@
     "href": "notebooks/intro.html#fidelity-and-plausibility",
     "title": "2  ConformalGenerator",
     "section": "2.4 Fidelity and Plausibility",
-    "text": "2.4 Fidelity and Plausibility\nThe main evaluation criteria we are interested in are fidelity and plausibility. Interestingly, we could also consider using these measures as penalties in the counterfactual search.\n\n2.4.1 Fidelity\nWe propose to define fidelity as follows:\n\nDefinition 2.1 (High-Fidelity Counterfactuals) Let \\(\\mathcal{X}_{\\theta}|y = p_{\\theta}(X|y)\\) denote the class-conditional distribution of \\(X\\) defined by \\(\\theta\\). Then for \\(x^{\\prime}\\) to be considered a high-fidelity counterfactual, we need: \\(\\mathcal{X}_{\\theta}|t \\approxeq \\mathcal{X}^{\\prime}\\) where \\(t\\) denotes the target outcome.\n\nWe can generate samples from \\(p_{\\theta}(X|y)\\) following Grathwohl et al. (2020). In Figure 2.2, I have applied the methodology to our synthetic data.\n\nM = CCE.ConformalModel(conf_model, mach.fitresult)\n\nniter = 100\nnsamples = 100\n\nplts = []\nfor (i,target) ∈ enumerate(counterfactual_data.y_levels)\n    sampler = CCE.EnergySampler(M, counterfactual_data, target; niter=niter, nsamples=100)\n    Xgen = rand(sampler, nsamples)\n    plt = Plots.plot(M, counterfactual_data; target=target, zoom=-3,cbar=false)\n    Plots.scatter!(Xgen[1,:],Xgen[2,:],alpha=0.5,color=i,shape=:star,label=\"X|y=$target\")\n    push!(plts, plt)\nend\nPlots.plot(plts..., layout=(1,length(plts)), size=(img_height*length(plts),img_height))\n\n\n\n\nFigure 2.2: Energy-based conditional samples.\n\n\n\n\nAs an evaluation metric and penalty, we could use the average distance of the counterfactual \\(x^{\\prime}\\) from these generated samples, for example.\n\n\n2.4.2 Plausibility\nWe propose to define plausibility as follows:\n\nDefinition 2.2 (Plausible Counterfactuals) Formally, let \\(\\mathcal{X}|t\\) denote the conditional distribution of samples in the target class. As before, we have \\(x^{\\prime}\\sim\\mathcal{X}^{\\prime}\\), then for \\(x^{\\prime}\\) to be considered a plausible counterfactual, we need: \\(\\mathcal{X}|t \\approxeq \\mathcal{X}^{\\prime}\\).\n\nAs an evaluation metric and penalty, we could use the average distance of the counterfactual \\(x^{\\prime}\\) from (potentially bootstrapped) training samples in the target class, for example."
+    "text": "2.4 Fidelity and Plausibility\nThe main evaluation criteria we are interested in are fidelity and plausibility. Interestingly, we could also consider using these measures as penalties in the counterfactual search.\n\n2.4.1 Fidelity\nWe propose to define fidelity as follows:\n\nDefinition 2.1 (High-Fidelity Counterfactuals) Let \\(\\mathcal{X}_{\\theta}|y = p_{\\theta}(X|y)\\) denote the class-conditional distribution of \\(X\\) defined by \\(\\theta\\). Then for \\(x^{\\prime}\\) to be considered a high-fidelity counterfactual, we need: \\(\\mathcal{X}_{\\theta}|t \\approxeq \\mathcal{X}^{\\prime}\\) where \\(t\\) denotes the target outcome.\n\nWe can generate samples from \\(p_{\\theta}(X|y)\\) following Grathwohl et al. (2020). In Figure 2.2, I have applied the methodology to our synthetic data.\n\nM = ECCCE.ConformalModel(conf_model, mach.fitresult)\n\nniter = 100\nnsamples = 100\n\nplts = []\nfor (i,target) ∈ enumerate(counterfactual_data.y_levels)\n    sampler = ECCCE.EnergySampler(M, counterfactual_data, target; niter=niter, nsamples=100)\n    Xgen = rand(sampler, nsamples)\n    plt = Plots.plot(M, counterfactual_data; target=target, zoom=-3,cbar=false)\n    Plots.scatter!(Xgen[1,:],Xgen[2,:],alpha=0.5,color=i,shape=:star,label=\"X|y=$target\")\n    push!(plts, plt)\nend\nPlots.plot(plts..., layout=(1,length(plts)), size=(img_height*length(plts),img_height))\n\n\n\n\nFigure 2.2: Energy-based conditional samples.\n\n\n\n\nAs an evaluation metric and penalty, we could use the average distance of the counterfactual \\(x^{\\prime}\\) from these generated samples, for example.\n\n\n2.4.2 Plausibility\nWe propose to define plausibility as follows:\n\nDefinition 2.2 (Plausible Counterfactuals) Formally, let \\(\\mathcal{X}|t\\) denote the conditional distribution of samples in the target class. As before, we have \\(x^{\\prime}\\sim\\mathcal{X}^{\\prime}\\), then for \\(x^{\\prime}\\) to be considered a plausible counterfactual, we need: \\(\\mathcal{X}|t \\approxeq \\mathcal{X}^{\\prime}\\).\n\nAs an evaluation metric and penalty, we could use the average distance of the counterfactual \\(x^{\\prime}\\) from (potentially bootstrapped) training samples in the target class, for example."
   },
   {
     "objectID": "notebooks/intro.html#counterfactual-explanations",
@@ -81,14 +81,14 @@
     "href": "notebooks/intro.html#benchmarks",
     "title": "2  ConformalGenerator",
     "section": "2.7 Benchmarks",
-    "text": "2.7 Benchmarks\n\n# Data:\ndatasets = Dict(\n    :linearly_separable => load_linearly_separable(),\n    :overlapping => load_overlapping(),\n    :moons => load_moons(),\n    :circles => load_circles(),\n    :multi_class => load_multi_class(),\n)\n\n# Untrained Models:\nmodels = Dict(\n    :cov75 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.75)),\n    :cov80 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.80)),\n    :cov90 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.90)),\n    :cov99 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.99)),\n)\n\nThen we can simply loop over the datasets and eventually concatenate the results like so:\n\nusing CounterfactualExplanations.Evaluation: benchmark\nbmks = []\nmeasures = [\n    CounterfactualExplanations.distance,\n    CCE.distance_from_energy,\n    CCE.distance_from_targets\n]\nfor (dataname, dataset) in datasets\n    bmk = benchmark(\n        dataset; \n        models=deepcopy(models), \n        generators=generators, \n        measure=measures,\n        suppress_training=false, dataname=dataname,\n        n_individuals=10\n    )\n    push!(bmks, bmk)\nend\nbmk = reduce(vcat, bmks)\n\n\nf(ce) = CounterfactualExplanations.model_evaluation(ce.M, ce.data)\n@chain bmk() begin\n    @group_by(model, generator, dataname, variable)\n    @select(model, generator, dataname, ce, value)\n    @mutate(performance = f(ce))\n    @summarize(model=unique(model), generator=unique(generator), dataname=unique(dataname), performace=unique(performance), value=mean(value))\n    @ungroup\n    @filter(dataname == :multi_class)\n    @filter(model == :cov99)\n    @filter(variable == \"distance\")\nend\n\n\n\n\n\n\n\n(a) Circles.\n\n\n\n\n\n\n\n(b) Linearly Separable.\n\n\n\n\n\n\n\n(c) Moons.\n\n\n\n\n\n\n\n(d) Multi-class.\n\n\n\n\n\n\n\n(e) Overlapping.\n\n\n\nFigure 2.8: Benchmark results for the different generators.\n\n\n\n\n\n\nGrathwohl, Will, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. 2020. “Your Classifier Is Secretly an Energy Based Model and You Should Treat It Like One.” In. https://openreview.net/forum?id=Hkxzx0NtDB.\n\n\nSchut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.\n\n\nStutz, David, Krishnamurthy Dj Dvijotham, Ali Taylan Cemgil, and Arnaud Doucet. 2022. “Learning Optimal Conformal Classifiers.” In. https://openreview.net/forum?id=t8O-4LKFVx."
+    "text": "2.7 Benchmarks\n\n# Data:\ndatasets = Dict(\n    :linearly_separable => load_linearly_separable(),\n    :overlapping => load_overlapping(),\n    :moons => load_moons(),\n    :circles => load_circles(),\n    :multi_class => load_multi_class(),\n)\n\n# Untrained Models:\nmodels = Dict(\n    :cov75 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.75)),\n    :cov80 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.80)),\n    :cov90 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.90)),\n    :cov99 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.99)),\n)\n\nThen we can simply loop over the datasets and eventually concatenate the results like so:\n\nusing CounterfactualExplanations.Evaluation: benchmark\nbmks = []\nmeasures = [\n    CounterfactualExplanations.distance,\n    ECCCE.distance_from_energy,\n    ECCCE.distance_from_targets\n]\nfor (dataname, dataset) in datasets\n    bmk = benchmark(\n        dataset; \n        models=deepcopy(models), \n        generators=generators, \n        measure=measures,\n        suppress_training=false, dataname=dataname,\n        n_individuals=10\n    )\n    push!(bmks, bmk)\nend\nbmk = reduce(vcat, bmks)\n\n\nf(ce) = CounterfactualExplanations.model_evaluation(ce.M, ce.data)\n@chain bmk() begin\n    @group_by(model, generator, dataname, variable)\n    @select(model, generator, dataname, ce, value)\n    @mutate(performance = f(ce))\n    @summarize(model=unique(model), generator=unique(generator), dataname=unique(dataname), performace=unique(performance), value=mean(value))\n    @ungroup\n    @filter(dataname == :multi_class)\n    @filter(model == :cov99)\n    @filter(variable == \"distance\")\nend\n\n\n\n\n\n\n\n(a) Circles.\n\n\n\n\n\n\n\n(b) Linearly Separable.\n\n\n\n\n\n\n\n(c) Moons.\n\n\n\n\n\n\n\n(d) Multi-class.\n\n\n\n\n\n\n\n(e) Overlapping.\n\n\n\nFigure 2.8: Benchmark results for the different generators.\n\n\n\n\n\n\nGrathwohl, Will, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. 2020. “Your Classifier Is Secretly an Energy Based Model and You Should Treat It Like One.” In. https://openreview.net/forum?id=Hkxzx0NtDB.\n\n\nSchut, Lisa, Oscar Key, Rory Mc Grath, Luca Costabello, Bogdan Sacaleanu, Yarin Gal, et al. 2021. “Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties.” In International Conference on Artificial Intelligence and Statistics, 1756–64. PMLR.\n\n\nStutz, David, Krishnamurthy Dj Dvijotham, Ali Taylan Cemgil, and Arnaud Doucet. 2022. “Learning Optimal Conformal Classifiers.” In. https://openreview.net/forum?id=t8O-4LKFVx."
   },
   {
     "objectID": "notebooks/synthetic.html#benchmark",
     "href": "notebooks/synthetic.html#benchmark",
     "title": "3  Synthetic data",
     "section": "3.1 Benchmark",
-    "text": "3.1 Benchmark\n\n# Benchmark generators:\ngenerators = Dict(\n    :wachter => GenericGenerator(opt=opt, λ=l2_λ),\n    :revise => REVISEGenerator(opt=opt, λ=l2_λ),\n    :greedy => GreedyGenerator(),\n)\n\n# Untrained Models:\nmodels = Dict(Symbol(\"cov$(Int(100*cov))\") => CCE.ConformalModel(conformal_model(mlp; method=:simple_inductive, coverage=cov)) for cov in cvgs)\n\n# Measures:\nmeasures = [\n    CounterfactualExplanations.distance,\n    CCE.distance_from_energy,\n    CCE.distance_from_targets,\n    CounterfactualExplanations.validity,\n]\n\n\n3.1.1 Single CE\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n3.1.2 Full Benchmark\n\nbmks = []\nfor (dataname, dataset) in datasets\n    for λ in Λ, temp in temps\n        _generators = deepcopy(generators)\n        _generators[:cce] = CCEGenerator(temp=temp, λ=[l2_λ,λ], opt=opt)\n        _generators[:energy] = CCE.EnergyDrivenGenerator(λ=[l2_λ,λ], opt=opt)\n        _generators[:target] = CCE.TargetDrivenGenerator(λ=[l2_λ,λ], opt=opt)\n        bmk = benchmark(\n            dataset; \n            models=deepcopy(models), \n            generators=_generators, \n            measure=measures,\n            suppress_training=false, dataname=dataname,\n            n_individuals=5,\n            initialization=:identity,\n        )\n        bmk.evaluation.λ .= λ\n        bmk.evaluation.temperature .= temp\n        push!(bmks, bmk)\n    end\nend\nbmk = reduce(vcat, bmks)\n\n\nCSV.write(joinpath(output_path, \"synthetic_benchmark.csv\"), bmk())"
+    "text": "3.1 Benchmark\n\n# Benchmark generators:\ngenerators = Dict(\n    :wachter => GenericGenerator(opt=opt, λ=l2_λ),\n    :revise => REVISEGenerator(opt=opt, λ=l2_λ),\n    :greedy => GreedyGenerator(),\n)\n\n# Untrained Models:\nmodels = Dict(Symbol(\"cov$(Int(100*cov))\") => ECCCE.ConformalModel(conformal_model(mlp; method=:simple_inductive, coverage=cov)) for cov in cvgs)\n\n# Measures:\nmeasures = [\n    CounterfactualExplanations.distance,\n    ECCCE.distance_from_energy,\n    ECCCE.distance_from_targets,\n    CounterfactualExplanations.validity,\n]\n\n\n3.1.1 Single CE\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n3.1.2 Full Benchmark\n\nbmks = []\nfor (dataname, dataset) in datasets\n    for λ in Λ, temp in temps\n        _generators = deepcopy(generators)\n        _generators[:cce] = ECCCEGenerator(temp=temp, λ=[l2_λ,λ], opt=opt)\n        _generators[:energy] = ECCCE.EnergyDrivenGenerator(λ=[l2_λ,λ], opt=opt)\n        _generators[:target] = ECCCE.TargetDrivenGenerator(λ=[l2_λ,λ], opt=opt)\n        bmk = benchmark(\n            dataset; \n            models=deepcopy(models), \n            generators=_generators, \n            measure=measures,\n            suppress_training=false, dataname=dataname,\n            n_individuals=5,\n            initialization=:identity,\n        )\n        bmk.evaluation.λ .= λ\n        bmk.evaluation.temperature .= temp\n        push!(bmks, bmk)\n    end\nend\nbmk = reduce(vcat, bmks)\n\n\nCSV.write(joinpath(output_path, \"synthetic_benchmark.csv\"), bmk())"
   },
   {
     "objectID": "notebooks/references.html",
diff --git a/notebooks/Manifest.toml b/notebooks/Manifest.toml
index a1c318aa..d5d8d17a 100644
--- a/notebooks/Manifest.toml
+++ b/notebooks/Manifest.toml
@@ -2,7 +2,7 @@
 
 julia_version = "1.8.5"
 manifest_format = "2.0"
-project_hash = "f47e03784c5ec1ca97608a676d4689997e56c59d"
+project_hash = "bb24fa6d048fab99674a941d85c45a034b033aae"
 
 [[deps.AbstractFFTs]]
 deps = ["ChainRulesCore", "LinearAlgebra"]
@@ -140,12 +140,6 @@ git-tree-sha1 = "19a35467a82e236ff51bc17a3a44b69ef35185a2"
 uuid = "6e34b625-4abd-537c-b88f-471c36dfa7a0"
 version = "1.0.8+0"
 
-[[deps.CCE]]
-deps = ["CategoricalArrays", "ChainRules", "ConformalPrediction", "CounterfactualExplanations", "Distances", "Distributions", "Flux", "JointEnergyModels", "LinearAlgebra", "MLJBase", "MLJEnsembles", "MLJFlux", "MLJModelInterface", "MLUtils", "Parameters", "PkgTemplates", "Plots", "Random", "SliceMap", "Statistics", "StatsBase", "StatsPlots", "Term"]
-path = ".."
-uuid = "0232c203-4013-4b0d-ad96-43e3e11ac3bf"
-version = "0.1.0"
-
 [[deps.CEnum]]
 git-tree-sha1 = "eb4cb44a499229b3b8426dcfb5dd85333951ff90"
 uuid = "fa961155-64e5-5f13-b03f-caf6b980ea82"
@@ -502,6 +496,12 @@ git-tree-sha1 = "5837a837389fccf076445fce071c8ddaea35a566"
 uuid = "fa6b7ba4-c1ee-5f82-b5fc-ecf0adba8f74"
 version = "0.6.8"
 
+[[deps.ECCCE]]
+deps = ["CategoricalArrays", "ChainRules", "ConformalPrediction", "CounterfactualExplanations", "Distances", "Distributions", "Flux", "JointEnergyModels", "LinearAlgebra", "MLJBase", "MLJEnsembles", "MLJFlux", "MLJModelInterface", "MLUtils", "Parameters", "PkgTemplates", "Plots", "Random", "SliceMap", "Statistics", "StatsBase", "StatsPlots", "Term"]
+path = ".."
+uuid = "0232c203-4013-4b0d-ad96-43e3e11ac3bf"
+version = "0.1.0"
+
 [[deps.EarCut_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
 git-tree-sha1 = "e3290f2d49e661fbd94046d7e3726ffcb2d41053"
diff --git a/notebooks/Project.toml b/notebooks/Project.toml
index c178b3ca..5e61dae4 100644
--- a/notebooks/Project.toml
+++ b/notebooks/Project.toml
@@ -1,6 +1,6 @@
 [deps]
 AlgebraOfGraphics = "cbdf2221-f076-402e-a563-3d30da359d67"
-CCE = "0232c203-4013-4b0d-ad96-43e3e11ac3bf"
+ECCCE = "0232c203-4013-4b0d-ad96-43e3e11ac3bf"
 CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
 CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
 CategoricalDistributions = "af321ab8-2d2e-40a6-b165-3d674595d28e"
diff --git a/notebooks/intro.qmd b/notebooks/intro.qmd
index 33eeecb3..1c2ad272 100644
--- a/notebooks/intro.qmd
+++ b/notebooks/intro.qmd
@@ -122,14 +122,14 @@ We can generate samples from $p_{\theta}(X|y)$ following @grathwohl2020your. In
 #| label: fig-energy
 #| output: true
 
-M = CCE.ConformalModel(conf_model, mach.fitresult)
+M = ECCCE.ConformalModel(conf_model, mach.fitresult)
 
 niter = 100
 nsamples = 100
 
 plts = []
 for (i,target) ∈ enumerate(counterfactual_data.y_levels)
-    sampler = CCE.EnergySampler(M, counterfactual_data, target; niter=niter, nsamples=100)
+    sampler = ECCCE.EnergySampler(M, counterfactual_data, target; niter=niter, nsamples=100)
     Xgen = rand(sampler, nsamples)
     plt = Plots.plot(M, counterfactual_data; target=target, zoom=-3,cbar=false)
     Plots.scatter!(Xgen[1,:],Xgen[2,:],alpha=0.5,color=i,shape=:star,label="X|y=$target")
@@ -294,11 +294,11 @@ Plots.plot(plts..., layout=(length(cvgs),length(cvgs)), size=(2img_height*length
 
 niter = 100
 nsamples = 100
-M = CCE.ConformalModel(conf_model, mach.fitresult; likelihood=:classification_multi)
+M = ECCCE.ConformalModel(conf_model, mach.fitresult; likelihood=:classification_multi)
 
 plts = []
 for target ∈ counterfactual_data.y_levels
-    sampler = CCE.EnergySampler(M, counterfactual_data, target; niter=niter, nsamples=100)
+    sampler = ECCCE.EnergySampler(M, counterfactual_data, target; niter=niter, nsamples=100)
     Xgen = rand(sampler, nsamples)
     plt = Plots.plot(M, counterfactual_data; target=target, zoom=-0.5,cbar=false)
     Plots.scatter!(Xgen[1,:],Xgen[2,:],alpha=0.5,color=target,shape=:star,label="X|y=$target")
@@ -354,10 +354,10 @@ datasets = Dict(
 
 # Untrained Models:
 models = Dict(
-    :cov75 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.75)),
-    :cov80 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.80)),
-    :cov90 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.90)),
-    :cov99 => CCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.99)),
+    :cov75 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.75)),
+    :cov80 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.80)),
+    :cov90 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.90)),
+    :cov99 => ECCCE.ConformalModel(conformal_model(clf; method=:simple_inductive, coverage=0.99)),
 )
 ```
 
@@ -369,8 +369,8 @@ using CounterfactualExplanations.Evaluation: benchmark
 bmks = []
 measures = [
     CounterfactualExplanations.distance,
-    CCE.distance_from_energy,
-    CCE.distance_from_targets
+    ECCCE.distance_from_energy,
+    ECCCE.distance_from_targets
 ]
 for (dataname, dataset) in datasets
     bmk = benchmark(
diff --git a/notebooks/mnist.qmd b/notebooks/mnist.qmd
index 31fcec4d..d26f3ca6 100644
--- a/notebooks/mnist.qmd
+++ b/notebooks/mnist.qmd
@@ -5,11 +5,20 @@ eval(setup_notebooks)
 
 # MNIST
 
+```{julia}
+# Rescale inputs from [0,1] to [-1,1] and add a small amount of Gaussian noise:
+function pre_process(x; noise::Float32=0.03f0)
+    ϵ = Float32.(randn(size(x)) * noise)
+    x = @.(2 * x - 1) .+ ϵ
+    return x
+end
+```
+
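+A quick sanity check (a hypothetical snippet, not part of the pipeline): with `noise=0`, `pre_process` maps pixel intensities from $[0,1]$ exactly onto $[-1,1]$, which is why the sampler prior `𝒟x` below becomes `Uniform(-1,1)`.
+
+```{julia}
+pre_process(Float32.([0.0, 0.5, 1.0]); noise=0.0f0)   # Float32[-1.0, 0.0, 1.0]
+```
+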
 ```{julia}
 # Data:
 n_obs = 10000
 counterfactual_data = load_mnist(n_obs)
 X, y = CounterfactualExplanations.DataPreprocessing.unpack_data(counterfactual_data)
+X = pre_process.(X)
 X = table(permutedims(X))
 labels = counterfactual_data.output_encoder.labels
 input_dim, n_obs = size(counterfactual_data.X)
@@ -23,16 +32,18 @@ First, let's create a couple of image classifier architectures:
 # Model parameters:
 epochs = 100
 batch_size = minimum([Int(round(n_obs/10)), 128])
-n_hidden = 200
+n_hidden = 32
 activation = Flux.relu
-builder = MLJFlux.@builder Flux.Chain(
-    Dense(n_in, n_hidden),
-    BatchNorm(n_hidden, activation),
-    Dense(n_hidden, n_hidden),
-    BatchNorm(n_hidden, activation),
-    Dense(n_hidden, n_out),
-)
-# builder = MLJFlux.Short(n_hidden=n_hidden, dropout=0.2, σ=activation)
+# builder = MLJFlux.@builder Flux.Chain(
+#     Dense(n_in, n_hidden, activation),
+#     Dense(n_hidden, n_hidden, activation),
+#     Dense(n_hidden, n_hidden, activation),
+#     # BatchNorm(n_hidden, activation),
+#     # Dense(n_hidden, n_hidden),
+#     # BatchNorm(n_hidden, activation),
+#     Dense(n_hidden, n_out),
+# )
+builder = MLJFlux.Short(n_hidden=n_hidden, dropout=0.2, σ=activation)
 # builder = MLJFlux.MLP(
 #     hidden=(
 #         n_hidden,
@@ -51,7 +62,7 @@ mlp = NeuralNetworkClassifier(
 )
 
 # Joint Energy Model:
-𝒟x = Uniform(0,1)
+𝒟x = Uniform(-1,1)
 𝒟y = Categorical(ones(output_dim) ./ output_dim)
 sampler = ConditionalSampler(
     𝒟x, 𝒟y, 
@@ -78,11 +89,11 @@ mlp_ens = EnsembleModel(model=mlp, n=5)
 ```
 
 ```{julia}
-cov = .90
+cov = .95
 conf_model = conformal_model(jem; method=:adaptive_inductive, coverage=cov)
 mach = machine(conf_model, X, labels)
 fit!(mach)
-M = CCE.ConformalModel(mach.model, mach.fitresult)
+M = ECCCE.ConformalModel(mach.model, mach.fitresult)
 ```
 
 ```{julia}
@@ -140,12 +151,12 @@ ce_jsma = generate_counterfactual(
     initialization=:identity,
 )
 
-# CCE:
+# ECCCE:
 λ=[0.0,1.0]
 temp=0.01
 
-# Generate counterfactual using CCE generator:
-generator = CCEGenerator(
+# Generate counterfactual using ECCCE generator:
+generator = ECCCEGenerator(
     λ=λ, 
     temp=temp, 
     opt=Flux.Optimise.Adam(),
@@ -157,8 +168,8 @@ ce_conformal = generate_counterfactual(
     converge_when=:generator_conditions,
 )
 
-# Generate counterfactual using CCE generator:
-generator = CCEGenerator(
+# Generate counterfactual using ECCCE generator:
+generator = ECCCEGenerator(
     λ=λ, 
     temp=temp, 
     opt=CounterfactualExplanations.Generators.JSMADescent(η=1.0),
@@ -180,7 +191,7 @@ p1 = Plots.plot(
 plts = [p1]
 
 ces = [ce_wachter, ce_conformal, ce_jsma, ce_conformal_jsma]
-_names = ["Wachter", "CCE", "JSMA", "CCE-JSMA"]
+_names = ["Wachter", "ECCCE", "JSMA", "ECCCE-JSMA"]
 for x in zip(ces, _names)
     ce, _name = (x[1],x[2])
     x = CounterfactualExplanations.counterfactual(ce)
@@ -215,8 +226,8 @@ generators = Dict(
 # Measures:
 measures = [
     CounterfactualExplanations.distance,
-    CCE.distance_from_energy,
-    CCE.distance_from_targets,
+    ECCCE.distance_from_energy,
+    ECCCE.distance_from_targets,
     CounterfactualExplanations.validity,
 ]
 ```
\ No newline at end of file
diff --git a/notebooks/proposal.qmd b/notebooks/proposal.qmd
index 5ee92707..bdbd45b3 100644
--- a/notebooks/proposal.qmd
+++ b/notebooks/proposal.qmd
@@ -188,7 +188,7 @@ If the computed value is different from zero, we can reject the null-hypothesis
 
 ## Conformal Counterfactual Explanations
 
-In @sec-fidelity, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Conformal Counterfactual Explanations (CCE), that is Counterfactual Explanations that minimize the predictive uncertainty of conformal models. 
+In @sec-fidelity, we have advocated for avoiding surrogate models in the context of Counterfactual Explanations. In this section, we introduce an alternative way to generate high-fidelity Counterfactual Explanations. In particular, we propose Energy-Constrained Conformal Counterfactual Explanations (ECCCE), that is, Counterfactual Explanations that minimize the predictive uncertainty of conformal models. 
 
 ### Minimizing Predictive Uncertainty
 
diff --git a/notebooks/setup.jl b/notebooks/setup.jl
index 6f9ae26e..5997cae6 100644
--- a/notebooks/setup.jl
+++ b/notebooks/setup.jl
@@ -6,8 +6,8 @@ setup_notebooks = quote
     using AlgebraOfGraphics
     using AlgebraOfGraphics: Violin, BoxPlot, BarPlot
     using CairoMakie
-    using CCE
-    using CCE: set_size_penalty, distance_from_energy, distance_from_targets
+    using ECCCE
+    using ECCCE: set_size_penalty, distance_from_energy, distance_from_targets
     using Chain: @chain
     using ConformalPrediction
     using CounterfactualExplanations
diff --git a/notebooks/synthetic.qmd b/notebooks/synthetic.qmd
index b97802c6..0ac30edb 100644
--- a/notebooks/synthetic.qmd
+++ b/notebooks/synthetic.qmd
@@ -60,16 +60,16 @@ for (dataname, data) in datasets
         conf_model = conformal_model(clf; method=:simple_inductive, coverage=cov)
         mach = machine(conf_model, X, y)
         fit!(mach)
-        M = CCE.ConformalModel(mach.model, mach.fitresult)
+        M = ECCCE.ConformalModel(mach.model, mach.fitresult)
 
-        # Set up CCE:
+        # Set up ECCCE:
         factual_label = predict_label(M, data, x)[1]
         target_label = data.y_levels[data.y_levels .!= factual_label][1]
 
         for λ in Λ, temp in temps
 
-            # CCE for given classifier, coverage, temperature and λ:
-            generator = CCEGenerator(temp=temp, λ=[l2_λ,λ], opt=opt)
+            # ECCCE for given classifier, coverage, temperature and λ:
+            generator = ECCCEGenerator(temp=temp, λ=[l2_λ,λ], opt=opt)
             @assert predict_label(M, data, x) != target_label
             ce = try
                 generate_counterfactual(
@@ -158,13 +158,13 @@ generators = Dict(
 )
 
 # Untrained Models:
-models = Dict(Symbol("cov$(Int(100*cov))") => CCE.ConformalModel(conformal_model(mlp; method=:simple_inductive, coverage=cov)) for cov in cvgs)
+models = Dict(Symbol("cov$(Int(100*cov))") => ECCCE.ConformalModel(conformal_model(mlp; method=:simple_inductive, coverage=cov)) for cov in cvgs)
 
 # Measures:
 measures = [
     CounterfactualExplanations.distance,
-    CCE.distance_from_energy,
-    CCE.distance_from_targets,
+    ECCCE.distance_from_energy,
+    ECCCE.distance_from_targets,
     CounterfactualExplanations.validity,
 ]
 ```
@@ -187,7 +187,7 @@ for (dataname, data) in datasets
 
         # Model training:
         M = train(M, data)
-        # Set up CCE:
+        # Set up ECCCE:
         factual_label = predict_label(M, data, x)[1]
         target_label = data.y_levels[data.y_levels .!= factual_label][1]
     
@@ -195,13 +195,13 @@ for (dataname, data) in datasets
 
             # Generators:
             _generators = deepcopy(generators)
-            _generators[:cce] = CCEGenerator(temp=_temp, λ=[l2_λ,λ], opt=opt)
-            _generators[:energy] = CCE.EnergyDrivenGenerator(λ=[l2_λ,λ], opt=opt)
-            _generators[:target] = CCE.TargetDrivenGenerator(λ=[l2_λ,λ], opt=opt)
+            _generators[:cce] = ECCCEGenerator(temp=_temp, λ=[l2_λ,λ], opt=opt)
+            _generators[:energy] = ECCCE.EnergyDrivenGenerator(λ=[l2_λ,λ], opt=opt)
+            _generators[:target] = ECCCE.TargetDrivenGenerator(λ=[l2_λ,λ], opt=opt)
 
             for (gen_name, gen) in _generators
 
-                # CCE for given models, λ and generator:
+                # ECCCE for given models, λ and generator:
                 @assert predict_label(M, data, x) != target_label
                 ce = try
                     generate_counterfactual(
@@ -300,9 +300,9 @@ bmks = []
 for (dataname, dataset) in datasets
     for λ in Λ, temp in temps
         _generators = deepcopy(generators)
-        _generators[:cce] = CCEGenerator(temp=temp, λ=[l2_λ,λ], opt=opt)
-        _generators[:energy] = CCE.EnergyDrivenGenerator(λ=[l2_λ,λ], opt=opt)
-        _generators[:target] = CCE.TargetDrivenGenerator(λ=[l2_λ,λ], opt=opt)
+        _generators[:cce] = ECCCEGenerator(temp=temp, λ=[l2_λ,λ], opt=opt)
+        _generators[:energy] = ECCCE.EnergyDrivenGenerator(λ=[l2_λ,λ], opt=opt)
+        _generators[:target] = ECCCE.TargetDrivenGenerator(λ=[l2_λ,λ], opt=opt)
         bmk = benchmark(
             dataset; 
             models=deepcopy(models), 
diff --git a/paper/paper.tex b/paper/paper.tex
index 50fc37d1..13799de7 100644
--- a/paper/paper.tex
+++ b/paper/paper.tex
@@ -266,7 +266,7 @@ The fact that conformal classifiers produce set-valued predictions introduces a
 
 where $\kappa \in \{0,1\}$ is a hyper-parameter and $C_{\theta,\mathbf{y}}(\mathbf{x}_i;\alpha)$ can be interpreted as the probability of label $\mathbf{y}$ being included in the prediction set. Formally, it is defined as $C_{\theta,\mathbf{y}}(\mathbf{x}_i;\alpha):=\sigma\left((s(\mathbf{x}_i,\mathbf{y})-\alpha) T^{-1}\right)$ for $\mathbf{y}\in\mathcal{Y}$ where $\sigma$ is the sigmoid function and $T$ is a hyper-parameter used for temperature scaling \citep{stutz2022learning}.
 
-Penalizing the set size in this way is in principal enough to train efficient conformal classifiers \citep{stutz2022learning}. As we explained above, the set size is also closely linked to predictive uncertainty at the local level. This makes the smooth penalty defined in Equation~\ref{eq:setsize} useful in the context of meeting our objective of generating plausible counterfactuals. In particular, we adapt Equation~\ref{eq:general} to define the baseline objective for Conformal Counterfactual Explanations (CCE):
+Penalizing the set size in this way is in principle enough to train efficient conformal classifiers \citep{stutz2022learning}. As we explained above, the set size is also closely linked to predictive uncertainty at the local level. This makes the smooth penalty defined in Equation~\ref{eq:setsize} useful in the context of meeting our objective of generating plausible counterfactuals. In particular, we adapt Equation~\ref{eq:general} to define the baseline objective for Energy-Constrained Conformal Counterfactual Explanations (ECCCE):
 
 \begin{equation}\label{eq:cce}
   \begin{aligned}
@@ -276,7 +276,7 @@ Penalizing the set size in this way is in principal enough to train efficient co
 
 Since we can still retrieve unperturbed softmax outputs from our conformal classifier $M_{\theta}$, we are free to work with any loss function of our choice. For example, we could use standard cross-entropy for $\text{yloss}$.
 
-In order to generate prediction sets $C_{\theta}(f(\mathbf{Z}^\prime);\alpha)$ for any Black Box Model we merely need to perform a single calibration pass through a holdout set $\mathcal{D}_{\text{cal}}$. Arguably, data is typically abundant and in most applications practitioners tend to hold out a test data set anyway. Our proposed approach for CCE therefore removes the restriction on the family of predictive models, at the small cost of reserving a subset of the available data for calibration. 
+In order to generate prediction sets $C_{\theta}(f(\mathbf{Z}^\prime);\alpha)$ for any Black Box Model we merely need to perform a single calibration pass through a holdout set $\mathcal{D}_{\text{cal}}$. Arguably, data is typically abundant and in most applications practitioners tend to hold out a test data set anyway. Our proposed approach for ECCCE therefore removes the restriction on the family of predictive models, at the small cost of reserving a subset of the available data for calibration. 
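To make the smooth set-size penalty concrete, here is a standalone numerical sketch (our own illustration, not the implementation in `src/penalties.jl`; the values of $T$ and $\kappa$ are assumptions):

```julia
# Each label y contributes σ((s(x, y) − α) / T) ∈ (0, 1) to the soft set
# size; the penalty is the amount by which that size exceeds κ.
sigmoid(z) = 1 / (1 + exp(-z))
soft_set_size(s, α; T=0.05) = sum(sigmoid((sy - α) / T) for sy in s)
set_size_penalty(s, α; κ=1.0, T=0.05) = max(0.0, soft_set_size(s, α; T=T) - κ)

s = [0.9, 0.6, 0.1]        # toy conformal scores for three labels
set_size_penalty(s, 0.5)   # ≈ 0.88: two labels clear the threshold α = 0.5
```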
 
 \section{Experiments}
 
diff --git a/src/CCE.jl b/src/ECCCE.jl
similarity index 67%
rename from src/CCE.jl
rename to src/ECCCE.jl
index 809408f1..8a031fff 100644
--- a/src/CCE.jl
+++ b/src/ECCCE.jl
@@ -1,4 +1,4 @@
-module CCE
+module ECCCE
 
 using CounterfactualExplanations
 import MLJModelInterface as MMI
@@ -9,6 +9,6 @@ include("losses.jl")
 include("generator.jl")
 include("sampling.jl")
 
-export CCEGenerator, EnergySampler, set_size_penalty, distance_from_energy
+export ECCCEGenerator, EnergySampler, set_size_penalty, distance_from_energy
 
 end
\ No newline at end of file
diff --git a/src/generator.jl b/src/generator.jl
index d379f751..ac598d48 100644
--- a/src/generator.jl
+++ b/src/generator.jl
@@ -1,25 +1,43 @@
 using CounterfactualExplanations.Objectives
 
-"Constructor for `CCEGenerator`."
-function CCEGenerator(; λ::Union{AbstractFloat,Vector{<:AbstractFloat}}=[0.1, 1.0], κ::Real=1.0, temp::Real=0.05, kwargs...)
+"Constructor for `ECCCEGenerator`."
+function ECCCEGenerator(; λ::Union{AbstractFloat,Vector{<:AbstractFloat}}=[0.1, 1.0], κ::Real=1.0, temp::Real=0.05, kwargs...)
     function _set_size_penalty(ce::AbstractCounterfactualExplanation)
-        return CCE.set_size_penalty(ce; κ=κ, temp=temp)
+        return ECCCE.set_size_penalty(ce; κ=κ, temp=temp)
     end
     _penalties = [Objectives.distance_l2, _set_size_penalty]
     λ = λ isa AbstractFloat ? [0.0, λ] : λ
     return Generator(; penalty=_penalties, λ=λ, kwargs...)
 end
 
+"Constructor for `ECECCCEGenerator`: Energy Constrained Conformal Counterfactual Explanation Generator."
+function ECECCCEGenerator(; 
+    λ::Union{AbstractFloat,Vector{<:AbstractFloat}}=[0.1, 1.0, 1.0], 
+    κ::Real=1.0, 
+    temp::Real=0.5, 
+    η::Union{Nothing,Real}=nothing,
+    n::Union{Nothing,Int}=nothing,
+    opt::Flux.Optimise.AbstractOptimiser=CounterfactualExplanations.Generators.JSMADescent(η=η,n=n),
+    kwargs...
+)
+    function _set_size_penalty(ce::AbstractCounterfactualExplanation)
+        return ECCCE.set_size_penalty(ce; κ=κ, temp=temp)
+    end
+    _penalties = [Objectives.distance_l2, _set_size_penalty, ECCCE.distance_from_energy]
+    λ = λ isa AbstractFloat ? [0.0, λ, λ] : λ
+    return Generator(; penalty=_penalties, λ=λ, opt=opt, kwargs...)
+end
+
 "Constructor for `EnergyDrivenGenerator`."
 function EnergyDrivenGenerator(; λ::Union{AbstractFloat,Vector{<:AbstractFloat}}=[0.1, 1.0], kwargs...)
-    _penalties = [Objectives.distance_l2, CCE.distance_from_energy]
+    _penalties = [Objectives.distance_l2, ECCCE.distance_from_energy]
     λ = λ isa AbstractFloat ? [0.0, λ] : λ
     return Generator(; penalty=_penalties, λ=λ, kwargs...)
 end
 
 "Constructor for `TargetDrivenGenerator`."
 function TargetDrivenGenerator(; λ::Union{AbstractFloat,Vector{<:AbstractFloat}}=[0.1, 1.0], kwargs...)
-    _penalties = [Objectives.distance_l2, CCE.distance_from_targets]
+    _penalties = [Objectives.distance_l2, ECCCE.distance_from_targets]
     λ = λ isa AbstractFloat ? [0.0, λ] : λ
     return Generator(; penalty=_penalties, λ=λ, kwargs...)
 end
\ No newline at end of file
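A hedged usage sketch of the constructors above: each generator pairs its penalty vector with a weight vector `λ` of matching length, and a scalar `λ` is promoted to `[0.0, λ]`, i.e. the provided weight applies only to the model-specific penalty while `distance_l2` gets weight zero. The `generate_counterfactual` call mirrors the notebooks and assumes their setup (`x`, `target`, `counterfactual_data`, fitted `M`):

```julia
using ECCCE, Flux

# Explicit weights: 0.1 on distance_l2, 1.0 on the smooth set-size penalty
# (κ and temp as in the constructor defaults).
generator = ECCCEGenerator(λ=[0.1, 1.0], κ=1.0, temp=0.05, opt=Flux.Optimise.Adam())

# ce = generate_counterfactual(x, target, counterfactual_data, M, generator)
```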
diff --git a/src/penalties.jl b/src/penalties.jl
index 30ae653c..37474c9e 100644
--- a/src/penalties.jl
+++ b/src/penalties.jl
@@ -42,7 +42,7 @@ function distance_from_energy(
     ignore_derivatives() do
         _dict = ce.params
         if !(:energy_sampler ∈ collect(keys(_dict)))
-            _dict[:energy_sampler] = CCE.EnergySampler(ce; kwargs...)
+            _dict[:energy_sampler] = ECCCE.EnergySampler(ce; kwargs...)
         end
         sampler = _dict[:energy_sampler]
         push!(conditional_samples, rand(sampler, n; from_buffer=from_buffer))
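The `ignore_derivatives` block above also memoises the sampler: the `EnergySampler` is built once per counterfactual and cached in `ce.params`, then reused on every later gradient step. The same pattern in isolation (a sketch; `get_energy_sampler!` is a hypothetical helper, not part of the package):

```julia
using ECCCE

# Build the (expensive) EnergySampler on first use and reuse it on every
# subsequent gradient step of the same counterfactual search.
function get_energy_sampler!(ce; kwargs...)
    get!(ce.params, :energy_sampler) do
        ECCCE.EnergySampler(ce; kwargs...)
    end
end
```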
diff --git a/test/runtests.jl b/test/runtests.jl
index d86e70bf..569772e4 100644
--- a/test/runtests.jl
+++ b/test/runtests.jl
@@ -1,6 +1,6 @@
-using CCE
+using ECCCE
 using Test
 
-@testset "CCE.jl" begin
+@testset "ECCCE.jl" begin
     # Write your tests here.
 end
diff --git a/www/cce_mnist.png b/www/cce_mnist.png
index 07f2ae43c15f6032db97bbfdfdf748de88051151..3db6423c613ebc10174d84b0ae8f6f3cf8601ff0 100644
Binary files a/www/cce_mnist.png and b/www/cce_mnist.png differ
zLA1F0YIhFhZ8f#8Sal=roMh`&fqpmLcM1s(#>R<VyVlnqgz+cL2vbr}D2%<NLPewH
zxpxKG5E$5mrT*kARs&K71A=gX6-`Y}_TYL2@#I&ue&S!JMBF!qYC~c<yuFy9Bjv%?
zz&!EV`uaBr@kE33!Cl8c=%End@{B9A2?*6-x%tK|AIy1)jX&MZac*%@btpU!EpOi`
z$Cn63iyvO+&!7MKtzsMVZotY&ARKpCVEzQuAZ8AZEa-8PyL-`lkTkbgsh#YOL3anM
zEi?_Ai9RCc7(23nsjO18Gn7}e{1)_iA~MviaJL!x{3I|C|KVn6IKl=A0Lj9*_dOGT
zJg?wHpd)UZ;5_oQGGZ%)7B9>KuuxwiwZ?wh1B$}APn(>U^<EC;b6HC75#rim@Pw|E
z6C=~`$Hr!;O@$0@+-nazi^N0w2L@suvALzb%lZvhP4&IS_=K8R&e-MJEO}_{!}Z}a
zeI*55hl&`q?f4u4|5xvtK!`Og80I{9f~%&fN&cIk_);j&)Pwk%cR|d=d@#uq)kkp@
zf1$CYKSNa@rI%%extzCM_n>%`^GHlG-PKF;y=H&hVY1Fym9aGG^Ic5NL%ASY>07z)
zIu>^ZLEg|AoV2YMjSuIzK5>f1*Ofy~{izuF*>mL*BzAWxX%`2xItwgy4Gj%}l!i^4
zVgw41rL-1>?{fuuhAkBPj%Qh_Ic|P;#Hr9jNdZJ%%N$QAmXI!|&(VM88J^%%5>Y?w
z`}FBs$e7s}o<(KC)FhE-H=VLF)!C+3w<e3l1uk~>Ok&3LXL1CvJrCuq5naqrUK7{u
z?#PCzh%X?*xy(Yk`X$q6AtAXyi%=$vd6U5kVuFIK;1TE=EJw>$BYkpmb7kKyo0%|r
znzgX-?Gs|^R^IpoTmp}f=iE67d)Bbq{$*RFbz7Buj=u&lBSRd3iJLA2<HZp}1krrx
zrqUNW@pBd%X{fIc5iv)6ZPBs8@8rv0gwL=ayDL0A_?(Lv^UbadbPx2&pXDx~Q#ozW
zrsz)LtRBj=9%m$%q@4Y-<=V5VE&^kW<?E?}u~n^w!IC)}h3ra3HCeJ?PjwSA=i6(n
zwwePc6!vxy)YZwJe~IWQVN1PDzAu@@e3|i(&eRUWuR*`>>g~w9MQQMl|GIHiaqsW{
zdbj8Q@mqIzOLp#H4{R`07KdTLFky8?rkKy$vdyP{c+Mhbf~4%JR>mU<)0wSzV(OE&
z`~hMBp}uyh6;cu^W%G{JlQXjub|fP&^>~#PrXEb#r5;mZS*KHR)%DG)go~6dw|L<y
z*{;I3DdYkPd0LAt>7kKBF=jvB-IBR*EUThgX-%cx-H(Kjbny4;|Neuq?jH~I;%~w~
z{jDiO-W~6T0Vs8(@MwxPhPh1CwpZx-C8*L=xg;A*RaHaU(^OrBCjuMI<dRzb*l?HU
zx#ZopZ}Efi^Uh6*8>?cge>j%8gg#KJ6J30ovGr$Jz33t}l?>B9*(ZaQid^SWXZ&f;
zmNtkkc6?v_>73veDB*DKw9wyX*O&}6@2klUx{S({98tcP7aHm8J;k>Wy7TwbUEC=_
z^N+_os_;MkR%e67j=zI3?@FZ@)nG_jS6{EpItKXR7IjdAm3oGtmbTokUu8N!KOZ1w
zUoxSPvi12S+2dmYm%MSX46nCXC=aBDt8yf7Nq$b~KTYY9rAJPg5aMa7Gy1Y4CF2Td
zO2z%felap;LWe$?4teR8*3M8Lv-A68Q&M!|aw#(j^6dxym5=iu`|zM|YVVFhL7u3a
zhjBy9H@SY8d3<C)y;q-w<DZK6TJ*9Rzoz)49wSb;GNc;IOG-Y{6D3*XC3F%7>xo?=
z{W1=P@lZ+nC&!y*^(jo{GHI%IO)sy|vXu`Gy%f7&e$p<)KUrB?baR?`jk2ISJ)Jc^
zsC9h7B1UZWJxB+wH~b~aNnB!JoBiZC&Hq`{F{bqD6NS@tZR<%L>W|bz(~?s@_&04#
zthr|H`Tw>D+PYKrmE;?!D@a4oFC}rbO>;`j^h(l)*C%Kg>FC5V5Fu!MGwd>|1I`z?
zxmAf<h9cdXvdtLPkmv*BAwH-*mANIAH?=by7g+tpCi-q<H#RpbR6R8HrHj7!*F%R%
zmC*sPq0uV-)-PWuThN)A3BGi&&?=4vFbVm(UCqw&{b1kmMNNh;S*Lk2KB9v!c_k?8
zF`Y`BHUI*pqV+H4;YH2yi4Xb7dZNVP!sFgD<6~~;SmgK}HxW{#6Q~Do`n~t?X?)3i
zZ^QgM^52C8{!0>kK<g+SUAhkj^II7gAXPpWvk`~bq*zm3UF{cy`E>PZ9=IN?GL<rQ
zxQ>+UgNq+|pS7%2-)J40eB+DX7`A32;A)oSD|oN*3TUEKlM9jj`%b%ISXcqZvdk+!
zom5<=?P!6S>0HnPUx(v4e5Pj%&f{Os&76quABO$lkj5Q@joAvJ>Kd{mORd=DZFY8F
zyN5S_YK<tQV-3vr6f{3_V%XQ0d@FN>wC!7?s+XB!rbL4Ts{YWx@V~MGzO%63oym=x
z%8wY<Fpue&-3ScCLipx7AHYzIt8MW>$3cq55bS;Jh*x+Hv)u&}Q4$Ay(=uc$5-ZeZ
zpP`0p<(u4>$*+bh6vw@p)zZiMs58_!WIsWXRTl?KGp3Zed6SzIqrJ>uS0Uju5kP9Q
z6t6$?#AuUhdM-ekuQ8;#05PKOPZAOmLZx^X0|;v39T~R}Z3hgpHDvDQpCg1?H={BW
zi1$QZr7`Y96#9#yulw2sd-@tP4G)A($p{OF9?F!c@3eEw*OY0y(45AYU;CcZ{?O1F
zp?|so{uiB-*b_S<ap}CJgaoj}UksL_B5R@ZqUp@r<X_ON!dLkH&R)9I!k7mXmvgT+
z?44n*Q(1cD_lu-m)2Z<f;^N{MQZ;x41bP^O(o;2(>W4W=`9R$sLKF5ihRvr~Gx<Bm
z7*Z9$?cx=fhVKv)I(6j~RU9p!qb4PiErWlb8!=~5k{=FcQ4GCQvara!>HBnQ#}2A5
z#Lop_NbTGQ%zl@W^V~kTH(i!(+P=BLg&%||G9_d^2enl0|9M!9I0Qu*X(KVuf3ocW
zwv?n+wCbU{1lbqm{~==dFE;Fdf6G3X)+!?f>Edv}qC2ma5s=-~GU`VHyc}i%q;GX`
z%d*9<!4^*^DiwJbBZsW6P%5Ui*UO|izegdK;JSFRnK3jX!h}+3n_h<{fKFv*-W0Cx
zFFmHkr*63YB9R9LT#_EuO+#TefKixE8PR&QAhX{NE#4t5zem+bKMQ#33}3Q>3?CX|
zyoN&d-4M^jdH($RQym0tKb?>oQLN~k^&8fff8iGVBW7N|{!^lD-1dYwW~PXR0|h#q
zVGS2a!#_!)IR}VAmSzfI(+Qzm*SJrer@`ur0srTUHQ%iqJ5FWYCg<ynmm;50!}TkI
z4=$~BGcLfja4wL@!bcr&_UlOl%=EKC>d*2?wfO~8Ns`{WSs-y}0L;=~Z0L;dPv;s@
z6qN+w))aY-4o{-@b-1Cv{xaz^&kpndpGW@0*H=lLc*Q$DsPa3b_lOUqxGu&#*H!Cy
z1r3cCdND-UXpGCwipIFL;7dZh@f5FMZ~s<LbgW35U7}4d`?xnQMNu6{U6xN<BxRm?
z?nVaT;o4!<p#ey(29!!h5nABzipE5xjRZ=>*rKGYHq*%?^?&~^ytSg<s2-TzCTIlF
zi9Z?83N>r_l6WQjiC*-+GM!&C^U&Iv2)-gAbfp$HzetPyF;auCLihiG9{*!hM#L=I
zW`>Q?DL#Am>^bg@34p&SA(AVk=VFM$cMST7Ic2I&Suy(K@BWJB@=TF9SnNvz9j?rb
zeBmBX6vKoqzwF(7vd5Jr?av=UWMwAw4-XKhpC+f^sxqLHq~)vB4Z^p*Oqzkg7OuBr
z+*KFuWNT0!a4x$LzH_l-!n&yR*0f!$CnR;0VDMRAM!saLn<HlmsfV@Nv?t=FOjsQg
z==Wt2=v1^{gbkBP%m33z^uO3YfYW!2!yu+SqyeWz`WY!tpVsAXxI{@yo906tDexVG
zi6>jkw{jYk=&)UpHd>9vi#cjPiR?S>?(VM+l`!L`#WzMS$RzR282^?@;(J7#SRyuh
z<->j1cONTWFLS{~(Bs~a)?aHhn@yBua#6D$2G%IJ`OI^|QmU*a&4*!M#o7c*yuyte
zH^#ZDn?a7LS^xkT2{0tN0C9)mz{V>e1NGkL7+H`Jt`t&Bm?)nfyW$+=sUFIr|M>O4
zkeK{#%N)dOAV1h}nYxKF&!C<<z8QBfGc|~J6|=j&1~ELYBQ-xaS9+@%BU0UO(!?<Q
z?N8)f3E5Q;)lYb1!oZU9gn)gzUJP+ECK&^|iq~UGv1^jAaQU5ydLdFT%Ld4qBtz};
z4C9MnPh2c&esr6PlFvA|OXjX0lxE3SXwEd=z9F8zDs>GFs;tRZNDT}Oh=0ovmN`Tx
ze&34C_e-L1%mX^F3ymqi^JK^QmyG|&m^-_CIse&jy@~4k+w3M3amp54&JSIoMscKS
z`Cj>$)*DP)c`)ETRii0)dS$UTd)32#;{uZOC4IlngrN9oBoR`GT>1{~{!+EzRB632
z?-n37L4JXqUG6Y0yu48~Mw<ANP_^kkChia;2Zwz29UQd7wBj37-Q+<})u|m~MfG7j
z7f&aQ*pWGJJ_!Aff^!FT>R7<xs+#vTHP@>i0>I+R>qXKOn~Rq(x0Wn1CW{oOaqWBc
zDW5S@X=47)eYtW2f4YzVQVjRM=`UF9z{EVRX)qE1<J84r%<6t=I1FP+MZ7wI)`*gn
zoSc_5;q`0Ew401Ksr{-Z4a|9>h%F`blj(2#0Kv}Jj>rWR;J#8F!2P8X-(-e;RB`X1
z;!{ojrc(T{?cx<wR6-S#aZ3>d-)U2ox%_a|w-*r+DMb7-mwCxlZL-~2O0>ngN?4as
zf(!$+C%j7=abwR_A$vezrZnD%*;*mH3JiepD-9*(YoGGfktoSm4N<(b-nf;Qu8|ap
zZ9n~vs&>+j?0Ik+@4l=<+J1wPD$%mNoRVY&{rhqxzXR)E#E<_rAzrPPAR2F6q^ju}
zirFvwGS{x-qc;pGwaT3H6&WG?WH&;4Vj)m5t6U_-Mln%;q^cjb{x>kUy0XIf4<y`f
zs?lF&J07f<4B6Y)4x?sujJ6F!cW@;tKn(gLQ4%4mD!>OB%=DR3xx@_iaD-Or%@Qsp
zDi_?Gl}ZU$9ep`yMkUsHk0(AzPnTgIhU@QA)_wl0lCHdI=NJQAM#Y!hKisD1Js54K
zAg=9a%|>eT4Ag+?VeCJhZvRE<5;O42@XU-QrSsUU$CL+fEw7ysvkO@oRxk*B$I!n`
zD6Oy1Nxs7UIhL;w#qLPt;ogDdE23A!xPT!P=C3d>QhW$`7>Sc;8@3MnWa|0wA!Uod
z|H1em@b#c^Tp-YkNeK1)9k^xjjqfq?dBPhvKWO5c;YWaF*VQSB{|-_VFl>Q9;NT8i
zCg7{$S0+-+q*fEkoUVJmBq)iLEnf`tQ-2zTDv=Qd%EM)iHox#mBue7AjCY5UICrF&
ze1&gpa?*@)jvvA8bl+OxSvPb<o!i9bM>)c<ul7C5>3=0*0N7P1CVqB9eDm2r!NER$
zSo#S*WVAe=OWK*CEiw{d?f4o)g!dPNEX_%ZG0#-SLzPrDkTIap{B;ty0Hug6uiSw)
z=F;6qipj9JeC?Q%ZLBcj*Prp#3>m8|j`<5q$_Y$tD_*}pbwmti4-)~(s&DecL6^Lq
zL-N#^{PDHhOG3g#FflQa#D`hD0_wdGB^Tq_?^Z*2)1>(qyb{S*AS+G<jVx^Y+DG3*
z)I8z6wv6+~$XKLa435$l!-UDizRJi*)&iN^m!zHVMP#HYWwEo(FPX}G#ty>8V|>@i
zbXKn<5kItS$Buu6O7|a%AF)5mT)lerZ^dLd7yTLY@F^u<!QrVZv2&h=ec?L>#7otJ
zI~>)Fq4<;-O6wwR36tYDFkwlhn5L>~qAHHcrzcu++b$hd%vW&?rS38*qO*#hijQcy
zR6R79LHH_*!#KBySAhEyAO;UcDIKHtvWHb&!zS9)W&|bSqERYlIjAo_961$F?2?vH
z6RJx{yeVLNag%yKMw_a6kK5Mq3J9|M?;t8-ufF9xBQ{{dM4xJQ6$ZVk9(H`Ud$=Y~
zcJLq<>3z1k{IPtg&FN%$cAYFW5F7DY(V5UM?mB0j|3{MNz;?dPOs1lcIo-SEk#oiu
zABwg~(o(UN+L9hQ%yGwrUc)CWKdiGyW9;-_XZM@Z8^>!fsrladi%Btb|B2G6^Eq8T
zJzI`{om#vnXBgrt9OHT!wnv1~@~=1h^<Od(8-37&M4|$Fb#FlL0gXKYl5_0vM|AEr
z!a++Q&Qz1^w(KoD{JOQZHDwGASqib?zK@0vLi*($KGjF5<--TDY(C9h`C4tV)646k
zS{Z7}Na#QIJ`D&UF;VaY1Y^CJdW3ogu;$?d%wiq*0@*%+0MByo?%l-u9A7Yr@S1?(
zm)8XH)53df>`VI29BRCp+jVg7l5^Ran}3k0mcDSjCpdlQ;>wQ?b;?SIKAkRwId|}j
zs*dOE@}2Ax2g|cqJeZ%KIhlXLpWNn*<srkvGxIAej?8oHCw{RP->4c68YY-tyuRD?
z;+xRXUXI^l0%!AYO}6IhZ+3WQ_e~-~{`(-;f8&S7LptvmRyNlin2W84WtfX`IU>70
z=E{A76+Yx3#8=p3E86$dr%w<0mS?`5e*FORVu=LFv;4=W?MphTG$}pbpSf<9pWKQM
zXi-wk*E+O!3m?03-E%kb%ASfXO2M7RcW_A`E?{IubPc#ig-b{%#&>Pg{o*K0<%e91
z)3~*iK!7oFrx6K!CMG84ro^l=d!kO(J`?GEoLMV^A6dvP_`aIvKeR>bN-=ITy_oFt
za<5C7&P5{{?n2z1xYM|De-F3DqZJl`%;pc=SL&!Dy&YQ49CAE%+xy{i=U>7a4|&^R
z_arcIC-0T=()}I1|7XlZ{%%<`#{ah;a#?fSIc!_frvgG4ufoO6P4(FcAG4xDw2CXV
zJ4FIc8g{?Kpqg?^dz!j(3+5(h{Qgo@No(g!Jb7@h=m{0g1SJ!ms1(NJ_qbcM&t@;T
zMw&$5kbTL?*ASahjNx*o7`wX*zqs5Vjq0Xii*KkbpcUZ>5AHE3wcB(Txvu<(kMG>c
zk({a?jn0b#U8ea;M3-Gz@%af{tC3}neDUJCgsiUTw+avP5mwi|6N}D4h`n7lnIBHj
ziS|qE?w4523Kq71Xws3c8RFaa@&w&s_y3$e|DCP)&-)~Lc6nD_J7Rz+#y)(2BfTjB
zzPo?#DPh+u*WpRiyk-Jz6eWJ7&6zT{b$e#hd@T~{N}2YVWzReu^*B<GdBVOVWekY=
zeKbA=N?HP!UjEn%*2Q}rmj6R8gBK@U=Z2)2Bcq~ZB>*YONA4B$h-#&sE17=(BUy#{
z=+UDo$-k;4-n^)ble_&=D}$Q$O5I*h(V$P<$LXGY;C95JsbD03{LTQ(wTBC+)uNG{
zHxA*V0T&escOhEMoyNkwp7S&v12E#1T2K&~eJ2^&pRcr0#qwmWBtCZ^?w2U@%j7Xt
zI5d~Fp+r=p{-2WVKc}gu<WCa+WQ-9}`<y9pE<5$Q&w`g~(Yfr-ooH(13^mUMd}sg(
zL55G3P5RfH?rHmr7yFApL^c>QliKI2`9-c1gJGYiC^Pf;sqlB4sqZ^+S%6eLtX<Lb
zoJqb`3S(z`p~gdIzt-Vfp7jpH+tnqEXIZxHD0=?-Czk&5naj?1dWb1vajY*iF4K{h
zc<+o|rty0+ll|In@52Tr)^fiO%f}C<w+ElxaIesvvJU$~rms_<yjS)}^z%BaeTgqP
zO)n<yRev?m6_#B)OEa&r`>jRjp?O<1*F%{v;%|KlNdLQGR`$PN!~XNO{(JR1QFLqv
zYa(%9%XB!X7S}iL+{vKkw(>L~?pdl@+WTnlDp|H`R)JwwHV+@GuJC;tOxSC5rf~3L
z)BW+&Uyt}$GItJ9DSOW5&33UHa<|=a6(qg&jQfRzbi2xw_3Wivg9%R~Ro&P9&eeBF
zIo&<BX>HIp;G44cOzZR5F9TvD`j39}-$^L0t(9X9hV$^yap~X>wcYi77>nLoZ^B)~
zDGVyq@U@@W5aQnNlp<Oqdq_-&#PkjS@q>dEM7I|qgiqCaeEVd&Fs43>G2a*zk}S%2
z1HWcUF{k7i)bXutATH{6fn(+0{w)w(-TU8J)PK(Me{Zk<_Lm#vHlIRHpC2&c!n5v>
NmXH(A6ua~Ae*p}e%R~SG

literal 21858
zcmcJ%1yogQxGroV2r8f;DF`U7NH+))(hZ7~L8mlGNhmG}=?+;)cSwV%NSAb@(%lW~
zzLS0Kf5v~u9cPSt&%SK;K$kezoZt7x6Ysa)D=SK0!6nB%bLPwynMV?;XU<@ao;h=t
z73UnhqKTT}hi@03%SlU|!65%7*QAG?IdkibjD(oFbNtenlN(Xr$sVTojxfQw3j_oz
z4_I$`s}lK2sOJ$-M1IJ)rNx%)aRIAJFVsNHTlE5~*5}8B5~vI!1@RDiT9lWUeP-N+
zP>UA&xP%Gn*~O!s6(hxOsq$U}oA^{O$6t5Ek0o##?+6<kS;VGYf*XOaUe>C<D`&il
z!$y`B6%=#}4BqrzIon}G6+9crJ08tx9EAMQgM!C`RMg{WWvnXK3dbu}Sx4vSCxx}!
z$^NV+@6?j<i?quv+-ALu!Ht~m2lm+%-ge#9E}ONOlf&QN-#IM}b}xLn5X)~jGt(Y7
z;kMb{)zvjQ=TAsC?r~^YHCAidlg5%)o|2^*8y8n)H>;T5<FejBA}uMYp;3BbYisMc
zJWPD;nl190lvHf?%l(b%&Dl;&r{_rzyi1lPmDvLsndT(VlbD2rn}mejl@5#Wi&@X@
zXm9=dJA4vi;)aHXQme6Hqhbxq7c*@!#kSKem#))GNK0c&HwKVx4x8tj#(r_#UNFzE
z`C_JJsc2+01CNSEqes42QdHYHA8!xf;o|-bx@j&mqD=A9&~SOIYG84j$7-}3ZvOZ0
z-<oA}n003%CE;!Lbj5_Xm&vYCwzs!iPShU50th;-R2#LEzx%GDu_$aa`89-2nw-m|
zD@`G;XZX$3>?~%zk^0V^J0$vUuZ|D4``E_oJTcv9>Baus*Y#K=;T4>(s>Tj?mhGy>
z?rN8q^yfTT!$WSRY((F6{xhnKpf6W5P!lD!GFrjOz`#IHpC%JVe}#f4CuUpo2V68H
zB;*;_H~KqwY7Q2%W7BfAOS4|@u5eFO8yGL+WPft~@$=`~@M&tyo-}cu;m*<!r*TKl
zH{90V3>9=nM#h33o*6|_=Rk%E9nB9|l=I@3E?pA$zo%PyfmPIY%AbIigX7oVb#)C+
zsX+2OzR{wdwFib(j?1pAHM<@M^XXaI))iN&gh<KA_}%u_IMC(;d8H%E)SgG{S9nKl
zrX#d&kdSD31X`87-dp?q?AbHh)tfhO9)H7QQ%{}oJYLi{T*0I(B~h)lR99DjdiZW3
zJ3}qKLyoORT~#$D<ET$dKZ4EUaJlR_idxW-m4m}}ZL&TcBlYaF<;a&Sr1!sKCflsa
z^zGWY;T>O@nr32HgsD28TGu$te~vn^R)=MBTPkdX8_cN7aaim(40=mLL*q2z?o_eN
z|LTu~ynLJ2+4C&{CkKo9@E7n*<I80e<6kUCzJ2>9TYG$+L9u$XLl9qht1CiFZ{A+O
zj{te2h7WKzOC5r%y`DKd7K1hmy_sk8W7uE3CHS+LUzhh%O{F!El3x{v#<O!yGhffW
zoqJ%<DdywHk5t00Z84nEp6z859xM;9kaIEG-LLr@Bshg96-=c&leJpCsYoNCuCA`A
z*kM}A&&{oYQ)t$g#es%6x9-mw8ZI?o?9JqS_2)-V)=*P^-3h#)b=+gplh(d5-O8eR
zpO<%c0&`L(qRmekRJBsMIM<Cftlj&-R$D$5Dz{bY`}Xa)%WQ({R&S<Se$CFXhL-L$
zrbg#g1Mvjp>I31PG==Np>IHroR^0sBV|Jaw(_dygMKMI;Tvnrlp1(8I(oa?~Cv3MS
zx{C~3X&%1aqpmA8?U8zG)E-N$W9RBxSzE<u*GEQ1wt*E!ufSZEaME(~*~5nqC){_(
z7D%5x6c<-?@)h<ta>kM#Z1_N;Rg}pcoGR}SVq8$=v`U4k3y@N_gWORrvl?U0DUiQy
z*zsiJ>xvJ)$eRA#y7Sc1rSu*$o39MByyNyg3L1lNap?_>Iuq~ul5!f!6X$6!kH22M
za^;G!{hYE5#P(cK>pcnq`;0WxM#!gMsm*)qe_1L7k~*WJqZxhYx;}kRl?hW-Q+rx!
zcE^{L$Kr8?p`mJ;TDl_dQNEX0>+I(7Uuq12xc$=LtBtAVzm36lb15Hu34(=pqE*^Q
z%B=D6@uSYae4r}TQv&%svb@tC#~b&$HNz#zeJAE9#11N2Fe&?=?E!ryMa7kgx;o14
zap%8Rjw51XVm4BrI6Lp%bQ%o_4c&m~6Fu4c8>Q<QLy+Qed~m?>w8VtQckb6`Ik%6P
zqgl~up7{`Is=|?F2s@ms5#ixF?%RD%V-8Virb9zRBsw<rcx=_m%}NIAe;PJSI*p2(
zgH!&z3)tV^zbRl(Ab$JyZTUE!yLawL&Xghr+RLcStZyMy?%uj<O@zf=v*O9rHrUXq
z)sIjCg2_0Iex=GNiR`~z9k020wecaoaB7`Az2Ox95f<K!3+juu72*#c8kV#@Y-wwg
z7uF!W@0MMmlNTLLine<Pd#{+jdAM3Pz_kqf<{uDnS>%~rrQ_=OO$rM0;vp*M$u~~p
zF2++$A)h{df}DeYu;48zDaldR2NkpMMPu#yPl^qlZo??u8}aXCWn@BLXPrHN@iqfP
zFeN{-Qgv(J@U}@TM$1)j2yfhojE>%eY9e&FR6JJc5LUM~+mS#~Fb<u_X|Ah&c2i@c
z1|2GLyrtG)GF>;YO`UtNyJ~G!lGXIv&_*>aBjXYM;mdO-e}24gZf@Qkb4sElAGInw
zk&Ov29I{wnU$5uJrxJSX=z>mdo2>USjItic)0GqvPI&IS?OLOy?_Mge-$J;vKbs_)
z+v20qj6*?9tsg?D=`LP)gO*86cS`a1VBw3t${V2^8JgBL+mQr+-e6O$>6hD1%eu%j
z`Qq>ebGhx=(D+t|Cuj9`q_)-jV8e<-Sg06nt^NLPH`~$P*%{?|vNK{GA<A+x9i`v$
zTQi@Ble73qu@O(4B{Y<UA`Yg_@A#q_`t++U-B3flz0Xxl{dj+UHo+;e-Dsr5RBdfh
zPEPLi)?9arBwkE4o?^?=aH$*)yGd6vZq38T3MX^6e3WWw@|XD9=WHw9UZL1&Vaab2
zt<kfF0PrJ75Ofl;nSAg-_yY->RZm*rNxD$m@$v420vD5@pdbea2Mx`qwz>D$8D=^X
zUN7`A?av0?6s(12BzyAs0Lm%7eDuf2NJl7=Rjaj!C0EJxUoUghNYr^8Zz)+rW<moa
zBqV%?Prc(g&5~bD>-)IGM8z-`rG1Z&sV7fYkmd($I&_G-lY>^#D>Pvox>b%>E?ufP
z-WfTsC*-p6lIDlQfUa?PcnI6xtSE-bw|r;VoMqgg;;iR(pO&E%Y*}1lVjgB65)lIn
zNt|CTUpD3#M?lNZ|3sg|X+<aIsY*I4G<IhF$hbI`s6Dq@F`tW<XwG@?LLXGec?CFV
zb+n?#)5X#8%L!(AqzrQ8C&qtlcC!Y8QQRL|1Y7bQw%WZEgQ9@GUUXmAr-$1f#?MQJ
zzgWgqXksxKLSYiF_>?H*@(U9IXa$et-c}MeLl}p!>sG;##p|7=b<bCNdV2QuB}*HI
z^oF)}cIcE8a$c*^Y%Mk#iH6yLR>CiBTpw?6eLDd(p&|Hf9Ex)~_qQMl-jx!mK|eea
z&9r}NDHAV~9V8dLQoG&%6sj;_3Kc!Q#BLU+l~KTGZN0s{5IdHTKRfTB)>#ghFwxV~
zGc%j@Ol--|LgIJ=9P)BiOcbgB@Z)v;5Rwy;6kr1-K*Xh?V$9KI=j6qY;rzyyW0e{>
z00r$ZejDQL)2&f>G>PszPcR?jsj3~9wfEB-|30a(oz8hutOpl`cV+6xnx2`t2e$+T
z72f<9@_G9^>#}DUM;wo3tboHpIFs7BUjzU!LesxddF(x_5d=84Uk6oQ*j6wJ?!VUk
z@Ep0#-rBR8Cbqhx4cJemcs`YG7R+W6=0ZB^B}tgtGtA!K;N9`p)=<VokJcW3CG_EU
zKiDD^M*!^!-UQU({k%Rymgw*h?0cIt?VI|0a36RyE!DD%?YtA=qO8<zYxNvq9I<H*
zux6~ROOQ<q8sBY2l9Up8C<PsJy3IK_Ij235y0UL4ssDUSHrG~$tisQ8@AtbCyv>ab
z9?Rj|uh++GUQc*pjv+N5j!Ie>6638@r8-y-p<W!5T)iovjuR3ayHIC?IXQN$I-ZHM
z&MW$ovUBNC%S&V9R<bnL=VSizEu*%rcL6R`^~_)93$5O0VWHqMx#gJ@9uq#_^~vW*
z*9n3g?p+9*s%>@?V6XrW&zFphCeQsu;n#2*;YmrW+pbY}pYphDOab!Yos>rNL)I0K
zEN?IL;q5)htKN{M`Oy-=LTor4+&NTie8?tbGwDS@d+%Nc=5j?f#TbH@D%)A^-HY5k
zxU_c$;)uz2ZlVq|uj!v<tBvksNOGg~H5o23{mTAqjiuD=QH#*p*Yo23fWgC`YSJ5u
z>`nT3aeYoRrS{nSM2Q_3%CTC9sU>(05OiZJ^|_<n8UAk<+tRZO33u$nIBZ>=zv2j5
z4Bkf*elfEqRXzWxMV-V!HBBi=<ScKLSy!@{jqECP@(ph+Pbm34T$atDcW56$mT#Bx
zn*O?-{AGW076Q@Zz|_4NLhoFjZOO<oKse>E+-(S=IIuK43jXT4yP^vv=-01b{kcyU
z<Kh7%1XGs{S#)=IZ!ix*_e)hw5IA`n)oujqt^D<DQkp50$oKExuX`FgJC`R8g~-Ho
ziX1NGRjq7zwgYg?W)QS0|NVAyC3%)@V`JmuRm!=Ezm!)S0cRg!le$dQ?oN2>*J<kG
z6nbJj&`N25V6VQTyJcQ%(oIYA<8QF2i0hUKrq5w@d~Dh_X79)WI0FD?$Eq>RRAQPb
zP?hWAB^E=fy4AEYWWkNsuU-3N!*_1R0|T)u?hg-vUF`w&8t(r*_SIf4g^+L=Ma3Ip
zqFE|<&2s%OUcA_LQN@LNLqIIp)yo_e9>N=kQ?BG2pTld?b<3qkp$6@+&^r&9lB4hj
zU3x?~L_#o5Wt>-<>Bhb}@3`}euNyndBZ8DcUQ~g_G7nVgd3gtniiehWY*#l^7!0R=
z{`Cy=2N<b<v;O0{Qd7LMjp|H#lE*dit%W|h!`FF>P_eW^`t$Vb^l%8>;uwbTZrs(L
z^2~RC_Z={hYKR#ztR^ST56iN#8wE#@A|&D&`IbUZZ6v!oITd=Q3V-|zq?k@RATV9o
z@&KgTI_gwv*2m1n)r%Rv-kaJ(o;y`yg)z4#&Jtpx3$8t9#i~IcgFHs3B#ACjL4zIu
zCRiN?r9`5OGGs3A4(~qwQa-i<MW;7gE1xzgU}qQS^;8of#xGklH<WIvm90)Emu|`Q
zj^WPA*wY!DDx2TpXhNR7=!!_Yi<nTcD@~o92(W`JM~cYCMkJfQ2vA1Plg*^3Uo5(<
zC1)I~V4J>JPk8hw@CK4|XDP(-K$K0Czj~pq9ZQe`WQj#HrySZxjb8o{6QeKAp5fEH
z7mYz<jw6=Ghr0;g1R??#IDwAr2E|1omMcW9&L*gM$U^w_&e29I8z%?Hl+5IxKYxJU
zY|r;JT9xfC4Q-->MOs>0!<tF>4=aHUZp7{beBE6fSOwBz(U+wmA@Kv6v>u?yV3C82
zSATzQXnOyGq)K|dQUOE{?sFN!YIola$nDXe4{RtD>geeP{N}B%uiMU$ag>g2GeYuu
z)s)<!s1E3R9s>knW!Z-x0YnY??Q#5V7~%zwMpjM^kWc^C)|QC-fzbX`7%clJz<nTj
zTtH-+nwp~YJ%&8*LN&L2^r%U2wc4(MSc_7`ZE%k2L^_yitio;<$k&sZgoWA8B>l&a
z`Cj}CfHkKU^$@yq#}JYqsZX$;iJ4EztaPD^RwQ}uu8bMFI!)BNQw663pe`&dJYoAd
zd*v!6-^ZjR{gtq22&LUN&d%`g@Dh{m6#Y66A8h;}fB!Bt^kOC^CR3<QU#lXs2g|JW
zRa8Pw{0S6#-d?>)PEIZWC1zW8I%1ziE03#k=)Tz{m$<KRJBQZ9hRlqNg9Xog2>wJq
zwaV<LuNYeaMo?xkR0NE@z8e<hBs;&>73i#W?fy(o4r|5r^5rih{rUQA)?F;Vd-Lf@
zQ*Uy#p^_;%ZC(EIqVX!t4?hw;(gK~GTVz)r&qIz6<UCOuqoy-^4^dlUH3mg&3~n57
z#Re9Mj<QwAj~_^qbVjrPeb12iQEqBz0}y5F!+HZ~-$dd~crH+u_O^PII%ByP0ZjG_
z`Vm%Oe|)`U5G=IO1Zh$NOb*f#(Df?-5i&9|5D5_v3kYc0*sP)0PS>IV<{y8lmC}`>
zvbf+x=DD%#EJTIzi%laYA;F^o;@n?hXH4U3q5AIXO`zR;&%XIWVD7usg31GIT;3>>
z7~%7D(m|k0=((*ooc&}qR=EsW3YRxbBoKGV-ZP0vN@BAnC31UPWz=M<4;aP_b`pNI
zx*No_8&!es1P!bTa-aXE1-xwDpF_s+@=*+SytqGL!`!E3dGKw+g9TD33DV?G!-r|C
z09;$&nn`awNPz0e49aJqI*gZDPvqp}03ZTfG3vUQ3o;lVozx)Q5Fi@|Xpd#%uI6{N
zbnRFbaT98$@Kd2@>ulErggO=*wO<n#f<kw2&E@TW|5IxXokIhAr5N0f@Jo>q5q;&a
z{(|1LuXOS<t*t^{uOfHi#!I-KxPNmPgLy1H{>j#pdL;7*p2JE@X(R|ZWC7TM>U5x<
z12~t3i;D{o8K^>1pQq$wU`r46Ls*n<A1EnsRi+16ynOyV1!MXSG!xh_zjyBzuRn$#
zgH+<M9;J9dKXcxy6n!kLCUvyRy$@{!60CyYPn>maH%W^iUbr0QF{n&*zH4<GPZ%2K
zCs{rpH~ad3|IG*T!&W1<p5u_=$>%ARj?Yng<T^T3yL<XYf}s2T-VHd@AWydcAP=^g
zIriP~<jLVG$#g<OByd-J8sDDzoXKWmmzBbY@2-w{Ul1Mrku*wgT`?W$HEo*~K=2Lo
z+voG=&l^9Jk~|JFYYRpvbVEhh*uI*!hleP+%?;%1f3+RLf^v5*vh~U3h&oPov)swo
zEp7mB4yg8i$+6w!c-aK~+<R7KfzyynA#6Tc-(z30hYNZk3CTdM`|IV^Z0YkxEkrkN
z{QUV-N<w0;>`gh41U~|~NLHq@(v5zD`ZC#&nq<hvlj{$sN@D31V&{T`hXBAz9t?xl
zl&eK+El%Qwg^H)X&2UIsu|E^%M{rij>(@x~#1nlzlTip7h<^|;L0)QUX5)!d5bo~$
zyWVtI9O?3=4eF;K!GZ7><?mA_8X>Q7S|%2_?`zA{a~c93c$p`GLwNmqPriN~kX$Q3
z-Q~izX>CwBc9%zti-$l|q>XAt_e8L0MTUp_7a6r6oc7zQQ5lEs2)wWDQx@q#Nbi3Z
z^ZCyJvV(8GeyQ?P&xR?9z`l-*MpHdgGEWN-ipHmU7EC3qJ%hvISsnj`j+N6eYXErR
zxqE8NckkZajM%U;=J>?Y{5~PSdIN=-DqPS6AhZZT20Gqosd-fO^Xs$adwb3_z9(EF
z`*s<|orwo^Q0~HmmqHGm_76B3;654~J)sYP9`O;~3o<S0!v|{DxldlRL>EDFF{yFg
z-V%kzV_7x=kEdmI2>QkgnjauU%}rbf;T0RSFKCrpTlOwLm=J&og_5mIk^kb4hK2|-
z$i7mctC8`FvrQp%P)I5b>b(K?xA?JDuZbHO8C9_SV!(lI#-(YAuTaa^6D}KBcB~@a
zzD-Ymm5gI5);#YOA0@;Zv~?=>=Rcrsu2wGQLHTJ?k}5<+M@N7AChks*jZfvaQZak&
z+O?_X$E84fhb?&a-+$t1ZEhwMzo28A;cN&>+}F9Zo!KPMi@}CP%ybJu{gBy*D^*IE
z9;-4@Xl9_|J}oe)|N7?q^f!h?XK|{XOspg~&iV}OW=>zK;Ks%6gHFwltBW85MCfyO
z5IK#$Y6OlI78)7}oB&$=O+m-Dc>bQ>Hb+2U;zismF{J5-19d0d0N0@^>bp!wK={6@
z(AkhGHq?f<f(n9tl`<nKiV8&LZ+>{j9q~i?`l1day<?W3v7UsnBl7H+!$7WHjVqL~
zjYpF}*rHsdN}-G_Vxn~E_lPQWLX(p6GiGZKSKwh*e+E-~ZnpEbUYg<O=Ra^u0R>?{
zX>?SF<_D-k!)4ZW08xi}Kv_BbdLADMODL*KAWp?>2a|Iz!0N0Fmm=_vg!1#IcJ_@R
zO{o+#n%81*4NQbKpdrwu5ZQ@$!j03YEm|{Iy9OW+0x4}mBJVy0P1+8`Z2$?HFJDec
zQ@h*$hOP%-&G5uy%e4kzMq$r<uXN4c-X5r2HVr6b=fe5v>867UIQH)}6GCe##k;}U
zB|Q#1@C`#Sj6-LE6%wAQ2O8Zj^&5MT+cGmV0cr8M?G?06|B5m;G;GEY5fOn50+qHz
zo0Bd!%^k$-jjK5TI3k`5dPPoBI_*zK@tXC5O!yG$a1{`JtvnsF>(>!<f<tq!axP`F
zQ&fNExz4=-*hF~X>E=5$zG<dFv16d0Knn_@5@u!~mU9&mAJ3OiTvf-57NrCPK|%2}
zq-Gi54y|vrpi^G_c*;nT;Rk}}zyeS3SO&+Y>=n>M(V#Nmlb%ddgp-uXAzW5cGc^qV
zD(mJg4J!LAhV_F9#>U?^kcH{#>1CE9JAkx*r8jB15r2Psh2`#Dcg*oFi)}cd&uhfQ
z5pi)(RaI&8%^xn`PS=?iH6og_E=E=Vnt+&}3NJ*dIhC~Dt70|yfV%>`+_U}ug4l+*
zBqFsQ`yXNHg-o$_&_fH^hyUl-odK@YJ^TI^q?~(nbbpIGMZ!p!E?|g|Mhwx6=$Pw&
z8v$Ri+`cV=*(Pqo4ByEFd<EHe1dvkqUQ~BJo6PvT!MPOOV~aWj?LwM@<Q^yA!UZ7%
z@`hH;=B=XE3rQ*;b)|$e#$K=1fFGm}p${cG;p0aFVq(DkS<tzGp;?YsKdI~bj_IB}
zVEYc>)fm8SPr4GY?%h`TmH^N)54Pq(<9JeT^DO53jf4U1K4EP{{CLH+2inwu=QhW?
zl#IK;@f*NP0maq{I20hpfPesyP*#SD)2%0J=jZ1Q6=y*cz=jau_dGd*u6t9+*=A|5
zQ0m=Nxu|=>wo{Mb`M2$yL6e*O`ewuNtZigqNK0@5KoG57{n8FU!pPJBEKm&)x5yK7
z0=F&<BpHCIZIvffp=!N+x4_XRa&9yAmF;&C>}I_gkm!vK4R`42d925ER;p(@lQ2C>
zqVsp_`8zOQIG|waS2<dWi+k5TL6QZL3IXxs%a()ezm>2D=@=CoJ6!PWJ3LK-=gGb;
zfo{3Y?}P2d?OeH&VYW@%{xhJwot0a1tV%OoP>oHSo|+o8;Bj#=@ERN(^dct*LEK92
z$Hc60K*!(ar*znC>viwjPW#APwjO|+w?sSPY9H$GV~<^G^q%Cpi8XMWbWOgcu-XF_
z0OlEE7>88$nzIlV78Z2M@s1gUQB~T{Ls7TniK||XJ?V!9f#N^`eKA?=d`)@hEDIx}
zzrX)2I<?kj{^tiySzS3#IIG6=R8?)~x@cUg+Raq=gB(HqadB11b=jObI1ZKPToALB
zvB8h1lVtJyEZiXhq#4@JLSOa(XQxo{%jVD#>pFKRgsqWx!?dO^kP11kT~`uf2h{jk
zE()YC@@Y*#s(Zj+_!2t6W_*(9@G%zDsF;|2K;#S8Wm-~Ktm;`J<hH_PU%grfKux?Y
z2d3bGy*hsysY8qxMFv*E$TDz1(6ENV--%+=U$Nn<bXrAN+wD7dF40VJ%l=vQAh~}1
z`bW3<p7hSG`JQ9ZJ7dsG^gQ;bp{m`#dpEbO>d7U6{Jtsr+pD>G0MIz&+BHkjebMYM
zJWlqFk2@i{r4z!t!EgbgoCjZM?SYw@8FX%Npw<?$^F2VmuFtQAdM+J6$_@n@eyXLa
zjkJ$xvp_LjO;44fnTiV@+DqooS<+|b6rmh9qTzl|jt`;c-KCbKM>>&b=u22BOP&$P
zVbF{frVJrGI%n}7lXk5miv&nl*ZH>}Jm|(m?DQDR8C^HGBLrT8<qD{9W_Gr9Voepq
zI#K8nrRD=It+88OV2&W=AK?|lCCwOv4$L{>btn3#<943}n$pF9CP9`^B%I+At3^*1
ziV1)}6eD*;(FO1v<g=PlVlY~D^#&ocSwJg@h|qZac+T@gdHhlx5Lfz(RQ&IiChnG)
z_WZOetJ1j_+=x!nB@(9;uz!R@V}0?q@5fHc1P;R%L|J5}L>dtL3#lMVeh`k`5l72S
zB<bMtT{9o>=H#RA(+CUKjEbd0>x8BPL}fxV=dnueu63T9G{t>`2a-}H$WtA5tG|tJ
z-v!!>#Tq4?0aDJ{*6G6(^IPofg-}3Rqu4MYU_b#H1!&VLwAls$r_xm<^!R_y0)V?Y
z(JV3ua2DPkqJHxr4PW#~IUzhGE~VR25Ro)zt{q<sg)4(RA)``f4=}iO_Tl?F!3MCL
z&+EPDH9W^h83bN)3&rlE*tF(18X|Z)I5hbdxF_}WQ5P_j30$VXAhNnrBtMjta5a>+
z+3T5WxRsy+11$&g#sIbe9X$jOPrZ3KT|UMZg!^M9;q#AQ0W#YI?>#I!dM;^n9o%&X
zFcLZvg#}qzS!rnM=Qfg{@&bXC!YOnHyXgZ7wge~+XeGaUYSYq#P^`7aO;iI3zj!}t
zdHg`Hh~41p8I0fA-A5n%{Sgz=5sIe6qGm(GO)!59>6zlVOp9R2ycQSFttcErssdMx
zv#M<NEC;X}(SF{3gNNPJ)YLYg<YPFYc#cj5$K+d%r)5@vWWl8`!(8p2-NcHI4h<gU
zB9B!35bsYY^gSDz3W8Ua0<w;@Z}01*109SY16f<&T_g#^YClwUY`r1~R{bPhw}I=#
zWqf>)_cx)KY2|89W2B~Tjxjq&(w8UvS>;p|_Xk>qG#b!1+he%_uJdRE0?=W%pX>6<
z0`~*ZM;}@`__Tx0fmo5wXucq576)Pnm*wylpa+ERbdPy&u^Tphqyb?O@n%j1lAoUq
zYl}Fh3`3=E!aiGqFK}XxzhZ*eW0GHLQ1bmO19zfyu91+Gfnw<LIlXo?BdjHlW`eHc
z6$if6)~2c=9}IMep``-1D8jJ1PHIPLgsP|rZUW>*v<84DivuLs<}jsz=*J*7abSl-
z8*jjHJYL`Zn(I_nUS8fv^|~#?HUYz#KBX%Tk59bTqwN@jJN~d&UrOpHgc@)J*<;sP
zFa`w^IuL3)`SLsb!H)nIW0UO!C?If+O3?OUlR()~&++dt&+WOH3n~&f$U9Ju@@1u1
z_5+q>Lpi01b}zd_7d38=)lgSYZMx&W)h!L9C_#cNA3KNEq9P-akqhYAjG-I=?#|ZG
zJNqtA)Okv}ibVtzAZyf{(fvf^o3`VJ-^B^lM00?u>uL5c)Pv%?D2KFADFe{%w*0``
zjwSd5g8Ggpf$AClTHp*9S?lK0;(lPpttX~!o6T{z(%G8hW|6vdf;HhXd$ZzM>gUh%
zcrFLO_&a7do1c7$(sMNhUVg;;h^!Lg)`BO_^U+EbVA*B;V7JJ0g6H$UKHD|}m;3hp
zI~^@8s2aTJ($;jg{aq3!L^|(&VVkA6c0FXuPYy~d3oGko9AY9O=Ni3zPnT&+EbDV@
z6Amw5zI-szn^emMCY1Z9#=jxOx&VkZLs|lxu&KtJ{{|i?9DexlVH&?f^23dJHbL8|
zCMaW|r=NERaaU1HOzeTBzWa7WyAj@{OZ?7j$+gOg9_yZI5pb#V1sF8in<=sOc2dkS
z0P*;a*M8hn!IeN@_nlvR0zI;>JN|MaTsdatg6j&P1$>^fY)I5tm;b(atmKkk57l=}
zB22r?YT1SlDC`l*f|9-~AggUZy?iabPip{~q93co5Gm=mw6w%RjYawzNlVxcB5l;(
z2A5I=E#+9{4jXrtyNT`)0Ye(*DkB$!+%uDtClTU`eRB+=X}a0iJz&lAAk-f2gP27V
zBr{3<YinC~_FGfi8fQ&occ8)H(onr#VwW_Zmc<;edSbIi)g;5^D=I1u#QD$WLIk4D
z_o6;3O`go5gSYNb@LGZF4|?f3by3f+Uup6&MKGizseyR!BDYRmpL?rSqM+uvzaLnP
zP%}NE9^b0RrXIBT0-?*4h?NPP4c%_UvWY{@{g7STrDhyD_o5QrFX?O~sJ+n|Yb%xB
z-buAvP*)B`D^wY*S?4gUG{^ID8%jx?yWo)KKQ0s(^}zEu2o*Ya^!pvP_hM&9$5lMM
z4=7YibF&IpIY?uURl<|p2R%_IhZC3uylq0qj(C0)uF{7;^Kynk1-5xf7#tRAvK-Q9
z2#?mt%{_bDFzW*?(<VTZmii^wEeu!((4<~{zrw3y-z|AoCI;vmzjPX~cCZ<b;5K@C
zdeqd^jy*XL0S>1d;5IO^O|UG0xxUf4m()H4Km@W5y3q{$cpa!@Q<>+5nZiJZ>)}G$
z*MGQzsi~=5UCJ?Dx0%j+9vt@_^NPyUMHnH#*AH><gE?VFtiquNWNBp5M{ppFV|e1d
z7u3}{@bm#Q96(c?ZHuuRu`J7lrKwH!mWhF8vUd6fd#R8latyb12v5bW04ydV{YHNI
z9~8dGo&nGr*wJ`E&o6?7VWvj-n0*9Hif%52#Hu~u3=npy@*i&8UFgp}1W#}kD#LV3
z1jj>HP~J7x-Ump5f<1LP0WcJUu(113k>N(|G1T&%{3lc_ht$_0_xrN7tU<j+v}!Q+
zw}BRqgRQ*5@z-mXn&iff7N21lLx6;3efmXSF{T2FNw1~aZ9}W7WuCZp5Nm^4s-1eH
zfeh>6JS!UkDXjyR6n@wOTz62EISS`6ID^3ITIIIj#hB30A`qSpTO->6k9_qy3*8Xl
z_yNoGvlV0s9)Fygdg<^w(QTu7GCo<pB_0qw{5=dLje-&qEPBG2WH1Oy=OlGadc*AI
zmE*Wk{X4gBH;JYz#HH2A>)jz-Bkh2;1O;`_s5p?6{W;WYh=*_=&<MVt?+=ZOqkvgD
zV5u!!#HBhg#G<#+1v(A@3J+jGz<6LWVg4*?>hTa<29&1#mdRB40<Y9IkN|G^Ixm+F
zw79JUdbH$u3UJ%|`j$Z1$V~w9Fq;P&)s{6pcGNwjoP7am0tMlb-tj_`Ji9R<b9h#8
z{1Fixei#55jd>!sc_gNFs&GY3NqD-h02D+d?7(a$VSW5DG?cdUr|4GK!?W37?25t`
zFWL(rP6c=hFp&b&EjTpbv}cP>9X3*XhQD|Yn6&E_V1(`b`<$FT^Y)G)FNCh%jJUnJ
zKPjqxSEpQ=I`@cL(z6oX2f~d)fTs2nnHa$J?_fL%WMD9+b%CaUUx$H$&@{vXlPV{j
zsH=}!F5o?CX{tI4jlI{TsS5n{^NP9`!sv1^Om54osR5MFAg>fKgkPZ7_VIoL)S%%R
zQ=MRAVv^xG1QEh*`s;^Lu`p0%Ti=MD<v^)HU_{j%z-0bO>c5-&Z}F*DJXi0yd{#Z3
zD*}_?cs8uVbEh6m<b$2%><O`AxVWv0sp<SfeBmkm1(+v6#yCBZDI}<zdlRxiW&vlL
z+S207knKU%#g0Dm4&L8-{ULx_bllSgrnE3u^kFCqj4;;O#|I3cr5>(&Vkm5yonRgg
z^6X`aw+xv=4~%mxigF)7GDfvR#e%9xY!k;0;wJ#9SpfP2h?}j*wW%~{?#W;%6+pTU
zBJIA@2uPZXLrE=s&T9r=8^qiXciKBTc03hqRE_@r3_L(H+5^Ij<}y{aNi+e;!?r(h
z1?&>0vURD`Vo*>}078#!wP|k&baiaelv?1gQ03gd?w3FyiX5$dKYRTf1YN#f%~7Ww
zb7G52w<Jj>MKRbV`>dBRejH4kFxb-DtB%R0%7Ym<Fut@lmZSCuRchGg*jPB3(5Ay6
zzWl*t@Wr+7bUnm(5lZxmYOdluFm|!o3DI&@jJ_{+@<d(T9y%TN#NS(7Tu5KonrKh7
zQl*CpOb!m!aksI#?TzZf1J<(Y6B8K1Il(hj1-MbWPE9T@E}ms}^7m+e^B7;^`}gn1
z#-)#7q~qMm{xvePeduEg@9%m754%)OnF%%eq*&rSRR`&<TTHCRxgKEgX>9DMkQoMP
zo=yjrfTsG+TRI!m*!_6iOb&EJnkfV}u{cr>4nh$$nU$S~TRXCX%uC(>J`DysSULV{
za;+B+fra}L4CuYiO{oW~?j1NMAS5A$vMpK62c~kA!{{u)NH{4v9d)oADr&&&urCny
zi?-TH6_`#7*It-`3~Zd;)P=qxMRExN$-wnsTou6Zd^h^t_uqq6PS%eeJz86?-(8)k
z1LI_LboAmCavfl`^@6vOI$=1tFeN3Wpx{16?j<tS$KdTa|7#k2%g)YD7$<Rt(sAT@
zW_D^S=V{prBqKaETn{$zP3tr`m|)gyZXLc~87b?8;RaBBa79SsV{q~C0CY0@nv9kg
z%JVJY;^I!BGkqQc6tG<cexm>vfJzRa&&nzf)Bjm(1F8mGf{m3m2XGX!A|5p6nwpvb
zp=_o>Ozq;jyZ3DsX`2>YX1z_KWXQZ9@H{F3dqxh9A|Twrjkh|<42_MG1RR)?{Sl82
zNRQpxq}Xw@GR#HQf{zCozz#GM-1i)q!G-Zd6iXg5!Y7$Xr7H&3(jaJD-+v>Rb#1x`
z`Xvb4wgL$!1Ea1r<4`4+#;SN>o(l9xKm|~L*el6u(@brNVVn-Oc%#l^(0|kv6ter1
zRxO|aygpiQg!z}&ST_##tREc-g5WdS!fit@1(kj8^q+hgO&VE728LjC?$5e2GEW4O
zhnFDt(nOs;mBK~5D_6FaQX?~Av;qQhAI7Y<^sP2#{CqB6i3N!l&_B4#8=ISKRZB%M
zJFWTZy+;F?us1VgfG4>eXd!RTV=p<rUTb=+tz8M5BZ@iV6cVZiJ;87s%`PHx0t9)i
z&Jz=}ZGjUXIbv0134{@>vyD2z{La}CDE@E(P)W_oMquEv8;oABo@=)1+Y{s7AV$I9
zPjmVhF){H?e%l|vZ5Um{C}qwk29X$m9IFL0t034ML6=^@TdkN)1TE+od`$rC2k^Ww
zqC!SRl?Ym4S33(DpyoSj4?fW7AX5Rr-*nU0i)sa4ojEx87{hQoGBOfq$2t@lRz^mX
z@oE=0H@B~1-lF*f;4x;wLiqan>VOV^cGs*I|3yo0s?2rFtSH2>AwVAp9~=+H+pN#5
zO3Yy12t@L1$C3k}Mb_2pFvcx-ag<>Tj3I!)hp@g7s)#TJEgUMpQ1TMUX|l4iFc0<2
z-iJb=2Mlwtf~KNvLE!=&gU>9YGya-*(<LS;Z|Ok*C=heSJi!HsU|RFK%++@c^m$mq
zjsn6jqb2R2vjADh=H<TdmDIh!G4y)kapw*2%2+KlmVd81Yk#pAf}obwE7PFp!oFK#
z*m|RCY{fG(K}7B6H%*PV@f~SO$qyvsU%B5G6nuK}7`J@ybL4BNNZ-GGJ2DPwis2AF
z-U{=}%@PLdg1MuQ-^y5rI>*I!rv-Mx!r`IfcWCyMSKnc9vOB{I3p65Vqz9o4XH(b?
zw<Iyy4d^-v!yph$&}?*aP!C`~IDpN|s_yU-8Lad4Z1~XXMjn;dCfF$lJa{HH{l<N|
zpGlROGYdIsW`PT%l@8_}9v+Z((-P4Df-J_X_r3A%ttEA&>FbiV&H2h#5KW+!JtF$n
zyUA+mq1S?ZvxrQ%CdKM$5T%<>U8l{~d_aBK*-_6cKAzIfTt#$PjeTBA;Z<6by;`4f
zQg0vg7d`Z{9cb7a-vssvA=j;;ya27(xF<KAwSSx_4m1vQWv8-{G039Hc0L$@kw{pO
zes@7MEUz*<_sy#U9_kN9hR7Gx3;9IisNEuAHD{s3Knh+^+SEnGY5?!Sz3IZhy73_Y
z?7}*P#0_F%jpe0*1)HGVdh(fdXUGRv2u=90A&W17&-!7v1vFDnv=uz*z;0{m=ve8`
zjiY>~1Os}Y!=r)=N&}z!=E~3?80^c$2(FaBfEoFXm1%K*?OKX+IT}{)IB1gb`!m&~
z*PoY-<oc8A7Sj4MGBa}w2yr*`Jy=MK%`T43;?_;yJA9IExZx}`v!L!4EWWRy*(jav
z?2~eBRYf7B?*8$0b+}9mg&jF~Q8WFj3Zl6_tIs}hj;Tu~OlxKjx)KZW_9zhdg^}~L
zV-7wT0MDXdP=Dzmc_*|k4OyIzFH|195pmX3^~`mnD@DB*F)_UQ&mon`cuCcZr>9lE
zp1W;$`kOPX5B?8dQp~0}Gr3hhW+lnR$|_Ijz(+ZwNq+ODWu6Q$-I@LjWbR|y0oPEi
z&sskh=Hu*#nDf<Y)}z59N%40TN@UhYXCy9X{V6IgChms1AlW<5Md?^vR78aJ_s<_9
zm8e!jq7By?ls7JFOw#ZmIn7jxrlCK@<IoIchKGlzpk)rnN{k`~o&YzIR4=s5fGO>0
z7MwE~wxE3zY6Rq_Wory%S}9!-^ocd|jp6v`F>l;rI)C~*gR`Ok?U!t{$IhJ5X#Tq%
z9i%BOmOo&gm+YNps-c%o@NIUJ%nt@}h_RqR5O>ebnV>(8Ec2*j>7dL&yCA~4dG+0U
zfxcdAiS_!dy6Kb@TD#@UYYWT2r<%iNvPy?6P$tiweX1~1D3dM}2IDL`TaEEhPDbXv
z%A?~QVg+JxI8TsHdj*exgv2-{e9E>WKzw^~K#NGdnou$U<sQYj8VCnrDEvSwA;v0(
z6;?TOUrcx_TUti)i9>zyLx<`{&N;h(ePz<&=#z~rEgc<k_n$w{C3_dQ$^F2nBgPuD
za;mt(q6QC8!EF|6MAXvR$vZ&Goz!Veq*{$fBLRTDg8QwXADLe-b@>4r&@Trr@r@hI
zxNqORi+hSTb*%dR{_J*{j>2+gGQb_il!e?oPz@M+3!Xic6pL!ru==j3rK1y18ygm;
zHf>JSlBWycXTc=kc_6AI`{ZjMbpmZxo@Swva(*{g!%7NA(ixi_l_ePqo~H_5rdTy&
zlr|_aH;6{k`JRfz7gynPsQtI|<A0M_|D+6zKryC(o<ry$aHXocb5@NgASg&v0-iZQ
z9EcADJn&nc7l65lyZ!xfkif|c;%3g$fP2?Tn1-3B`zo?y>d>zInR$%lMBd=4)-gGZ
z5UtAfa7osem-E>v;9k0<LZt5h{yn=^mp!$8g^tuGpc%{yZEbDp5|Cm6;;2rY<2Nj^
z+GMCzc4Q&dVUi0X=eQV|JnCe8+01aJR~z;HTJOaL_THu^o27}|))K#$Jc8T8&z|N_
z!5JB{e^LtX<A3`lubzdA$wi@IVeQ0Aec4eej3Od>D);H=9ufkyygxvtt_+bgU!<2m
zAn^!?keocrAL4?-50FxzIAps@dgm-BuB5ay3ocBdFs4|QjYyQDRK@+F8%maxmk%=b
zewHIae@uCeO9L~5dH&-t;wwe7y_;c@<$ds7#q$X&umVby#0#^x2Y2J{W#{oQGs_YN
z1_W^7G94s0{#EIHqOEO41Tz<EMC!W`Z-yz_l1wx|09q2OG#K@#D^a7BVG4lAA?t8b
zcy|=-AB)~A>ZJ}kXa!8o`5&R|=f{-7H(<kx+nO27-07NjXrQVZTH)Qd^~O9_^Se2h
zLbv+$=zqC99f`s@3#m^aB#~P@+W+(SFL5`_fS{FuH1(BTOs5kP8fBca&)T^NQ%k4}
zq(tSWu#5Xc_Fm6sDbx0orgHhg7#<s&#rRlRIoV%IN{ZYM9Fz(Hu-1sXp}Ug#f!~W}
z%pf2nd`1M#4CQ@V{^xA?h*Xe@sNLG|^C!9Al4GCgwT6CLUp|+ORFy~F*oy=C)_ICi
zt)CemBcZ1*){U*Es!D{#$;nCF4Z#{~5^x|q4#${Kb1-ueD2^!V?|<S<&wLu^=@nKN
z6K2fV{m%FB6)W!J#mfsmv1v>xFf=bk<3L?9WT7!mYUdCsr;=MKZ1_Cw@@brUVLfX3
zCtx3YrT$;O^wreid<PG#D9S`tRaJpIRTcaQqHY)=VopKrl2TJ!yMFEhroof~CxK8V
zFvzIFC?KG%(q&W}F;c(*gJTqa06&SaKn(~K2i%472Biu$2fiuVpB#7|%H-wC)c0`e
z4NNM=Er6_yxPX*Z2yY6zmZ;2uov%ftQ)(8Wl2K65#h5zILxj~Auh_RXweH-e@$?R0
zL0WBX?JR%f+yf&K2}wF*hG{JA!nSLmIH*9JxS66qq0^$3jf~P#Cc&^y_ZNY2UsMJm
z?1O*oiVsYqNPPmMn*|q)cxgh!EeJ$V7(~6jTSF8vA>pSCPF`LuLJj5B6qUGFtSyC%
zIT|Ae$$yj;UfI98xGmOGr^fuy-N3Q(-wfpc?Ag#@GCo6{2o#4=N$$aqEiFw=_i;a`
zrbaVrkiqdGlnHn4))MHb`TkN+4v<PPDx07fc8J?!4pwA<${b9(+}BDs1v?p)F_zb2
z#rE_|8zYy)f;zejL|Sx)+VNd;Xiy7ab)^`<kSnd(bBBme;~lvg$in_?XkbdY7u7l{
z3k|%&8!*=lZY>;Ul=@_x<le87kd}F9pspTHTZxP=cfpK^8PNgk8nFY|e_DjxW7sft
zm+w!`1ZYr27Aib!$zgKJtt{Nel)i4#Aj!MwgAa!-zQkI{RCL6OL?<R%5y?UY#7z)$
zFW_iQLFFpy=FWTnt1<nb$q=u}n;Z$@SyXumQx_giB+13kUy(+xN>iFEJWGy1Sy)po
z=~Bh?--$mEfCEK@$RUW7{Trh?EDobd@eq+p)tEf}0gMl`-wgEkzh5Y|&HMR8K+kpl
z4*Fx`U-oWN8sG1~@8f3HT`g^DdW8P?yp=G*nNEXNNJxj!p+W~`0!_RkA}d=%*+Oms
ze_tPS5LLmg`3^=iSW*^fssMk}_y1EcjvbCBFvm&nxwHB*)fySYqoN?%$90I^l?ktD
z?$+B~XiQ1#`?B*j>3}a+^AkQ)*ku)K_1Lt&EVI@h^8aq{ihl}qAOJbZ3o%(N?dDcZ
z8w)3({L@lXN&S$v_ND1L%$lASSgqXezm+Ax>kkwMO;(B~xba0EI~;T&yvxL-A_2}x
zfcWmHT`-d3$|~tSrnb^qdjh=^=vV{*$F&>f89aU*{ZB3E<%csul3Os&$Biqz-K&Q3
zM(y-^0hpBJx_3_zSx=3zcw0u1Y*01r9A>7b{=pp@TlJtcQ1}7zX8Bdn^ZE4h+7PxX
zk7^E8m&}M>e*wPCybuo>F|C00xDdJv?DS8J8P35O(|k8_Z7DzeDe1z_j<3YeViJh0
z#Am*mMJ(Mb_5Up83*<o{a8Qp64NdU}K~Sqvwv%X~Q-@GNRyJ$nXGV5=Q<E~0x^$;9
zvc@I=6=)ai$ya1f;Q?l^QB=N6+SFhW`^*_44Dz`E&p<Xn&A}iQbSEr;pn1vjyQB!H
zfTPnCe%GL}VS%5ijqXCMIig=&fG^sO)FeP<u_RL6ErR|j*{cm$L<a#|lD%+vge9BN
z8GH|>g&ei?AQeUkW|g(vvT;=j=^)BXe{vP;M_?t;FAz<;&)_a}WDl;ww4g4^8#&LY
zG8bc<nEB|t;s6{WFe4JcZmf@OEHr8V`?%K@_5I>@UadeEHZilESGvu`?fTfGw10Dz
z5dzZ+7E8QXUq7h8&OO}#6tXPkw^iJpl<vR2jEAQt0nptchwJw3)D|t!=vDv0Q&Zcn
z#8|WKamFd^5rkA!dHx8fN02G5u0x*&N&x#ZX|n%S3Z6DbBIp^6fCdoASzS`1u9EI*
zr(s`mM=p{PcX(MdQAOD^0EF?JposrjTO|@;oI^`O!E+u?g}r(IGb6L^)wWS&RFp9h
zGt^Do)A@{9U8k|n{-DN&Nzo*ArZbX~k(mOO_yBg9WWoYI!`-{>jOv?*zw8JaQ*bFi
zB;K5OaUYV$;rfYZ-xf`F+1IJ5sfEX;SJdi06r=I`ZyWu$vB7`V(f<!;AaKDf6Her!
zm7yf07=RN%8~gF2!Sld~PbRsvKm^op24=dT@O9KjjP=HZsDf!>TrtZZ=3-CjqE;7I
zx>_klMox|scYCh;p^EYt_Tl!TF_AW4fRto!q<=ZVGz6nP%z9Gz!I5^$Ja%y$fL%mz
zz!c>Tm`;jDp(j9mVO8bW5X>t@LJb^zd|s^rSs3QklrFs&l`W9PZ3z>efQAPS5WJBB
zdXdy8&|Ap;dSCNwza6W2?Sxha-yl#NF_}{oOF_0pD+5ePF;G+tQh8K(9Sl~rUY9-%
zEv6K}z^W3&y@$f;mx3g-(d^u-IYKvtrK4AT%uq$8S@>BAz(ND&d;&86fnxs#9Y4hc
zFn1Xs4kjAP8$kgXDLKyr%@(-fLmFbTzi0whV8^|~z%XF&u;UC+tQlOh*fb>D<Fa`K
z1cn*a0opP;!#Ed(U+wYEQ^|Ue5t&n9zC!8~Olfy8o_;oh9e9Q)aJnH$QU3l=Zy4|1
z)sO&eA-DswC%OyF2vZ_gn4~3gaIMjk<N_TV2s)fdYA4P)`|8!J8Qeqx2U)`HeK%Cb
z^DyO-#%Wuog`?d%j3v>*RG1PG7EF+3`XfVS^teXG#w-i!TO2e$K%qayZh*i*LQ<7b
z*sX_o0(2M5?I6Xd9;7u8KU^|)IsN?nP>fm7^-2LuXh^>LYFa#G)<-_?Q!2|1g=6B@
zzuNWwll}4KT78XSprH{XR8UmZlz>6&(~U@rvrmIT9%YF>YF6f^Wyd%c(wjGPkws@o
z0XveuA2K%6pLzTd*h)lLP+N2T!3U~m+yZTr)Q=W45{E3NaerC})U=GAQS5uZd!X?W
zcmMbyaot?qnnE$%ES$ZjqTdKi2vi0@3tWKHOt@L1EDf_cd)~d~Z5<svxMMJGg7QW_
zK!6qoin!2Spx*|FN6h=c;bRWm3Y~lCkCpg<%?5N{M6y0f@rRMSLB=VhS^*SB3G^vw
zxSh*r=exlj_@bNn6JTkx;vsqI?1v6HAtEN)mWw{zbz+h4|6hqqUd0NwwnenDP$$%`
zWJ&?QL7B+PqAI*I+QH)?>OMMhLnF-iACh4VIjNG7ocus#4G@8b1hKF>2m)eQ2yhdi
zyoy$qknpYGMh*j^=D_B8L8M*u@_mZIqeuQwW9y`$Y6!T#1qRm#6i^z6rl_b$zLyL>
zL*O0^W@a+-!NiQDI{Q^1-YmGx<vRBydyR_8{NNm1z$uXt0Pi`=zayIcJ0llB2FcSf
z{rFrdWyMF4LE3eeGDwpDMy?g=nE8VHM-k_Pf4PmpQbBd?Q^?3TEw}%y(pf|a?Bh}(
z3_R77AoOq#+(aaMt$5{EnQ>tbM>1hXlficn#`S^5&;GTK9i;|{8^@9{s9O!^=~3R`
zA}LG2S;2}OUqqZD(vLh?h6%kGb9u|OIcaf!*a68B0Hsnipb69}q{RdA(h=KcxDUs7
z?E2omeH+U{3KCu3B+yrLB3DRmVysgN5PQN*oaB|FxILo9E7iRyOck7R%ZxKf@~nBn
zP51k!@&}yq2le+Jd=}8Z#$W%lM*-HKY`YGGRX{r%X8e9JhKGcx_*|)<@8)W^A36;z
zI2)g0@ak3JKl~9uzoL|iY%WY^qfEfWq}5hdHb@Z$l?4a-uRn9n<HB#J$qQ}Wf=9A<
z>xSy%$0kIoZovo#Taa*hs+;?HPI?+Sa<0M%@KmJ>3SbIKAq7lKsZU59OTb+aabC<V
z11^09IqZ6h9WvAk=d(akAoByBWp3wyO9QMUKpc@PyiwK_VENO_M#vSqB=h`zOjMw0
z81$2v2<>9npOL}C&p*T%4(--8&=`B$I8TIJRQKOhub}WfxEm24Z$Si{DJzKE=;cdN
zKg~RyOh%JRVX#>_v<OO{*xTDXs8gIu-ZgsHX-YwnLU+Mbxn-Ura&|$ArpMjpoA0F=
z@*ePH&|M(wf@F+B5vep7TLGQI0-ivuVh4w`h`Qknja^`7+H$pbuXPqI8Y<(@-E97Z
ztaa_~)KiBgp2HBOf*7$g<aE9^Rb!_Yhfmr-nSi{iVgkdEprDVCh)YOGF{hlCO@0>_
zml@owGD?WvykSH-#8P=h`Qco@>Q;Shx`?kD<wO=5rH8+5tXKMPChY%8=tD!Z(8Itu
z(aKQoDg^qlxD_~Y8yXwqp3<3dmW^2EAytZ&99jXQKxw+XMh=_XVfC5ID=2W|BJ)pD
zG=P+{<GD;jX~E24LU+N@1<O-CN&wQA7BMD>R6Y4-aHT+<Szx_)PZJpx2o#6NLV5SA
zdI|!m*C}<ZTHungu*d=7-7}O<D&W4Ba$!3j4ZTvL1(7R!wi1BYKs6k8NN&k37%G+e
zgv|Rgrmh&mI-XKjU{Eg6EE$Y<S{8j!x%Wgx^;Om=Q~8pwiNly1vp!v8idUT-kkK@c
zL+%M|V)L9J>#UiwK<37Oosa_S%(W8t%NTJ<tNu`|0m)3e09YhKgM<~~nefR#S#E*2
zX%lFIKX5LUM@9K~2Z|6aIWh5Pe{$rw9Z@$7do!f~tf22_Z1bQyRkVP5(rWAISioA&
zELc}mB0U$;Lv)@vi1o<l7nYRtGKME4SXu>&`)ldBG$1MbjHqa&J?WupS>q!p%d}a8
zVg9mi5h!l}Vj8sqG!mdM1&V{-WnXH1=M<u6{fx0C0uE%E1`StRA)&#(6xFZ&E59>5
zcNe5S<NA}B(~i4`8Lo&*NUIgO{I#k?`Rkuci+4zk>4R>5^)ji(ey8X6-XLXQ@($_W
zrf6qMuj_1I3!RkZ6||GFlQHvAZcjvQ#Gc&h<mAjZUoOSZ9&@q0a0XZTlm!qap9<QH
z4^FjMWv^3XEcwi5qz*qgD~m`6UgyG3-6Jf<Q~M0pBwX(+zeaYSYWGWy=0wLAFX)IZ
zEiFH%Y;SKHbVNi%EN$BmZXX_-1|-D1XC8(%y)`yArXcxuosYOwJJ+CtX}O++O)oVC
zpIRg}1sVjl9r(?>kDMGGRaI=TynFJqvbz06Dt)*-oV}~6%gRVDU%veA`>iW^58Q$n
zhOih0=Izti9X&imNMXd7G}Wj6)?fc+N5N>$ax8`+A2$73c}eT>Y8gqW-dAR--47i<
zUeua|A;ZkMwYB@v$RxHH`O?JRZ&RMQOvXXk`V40SAxB3?9#ZE}-LLE}B-5j)3DvaQ
z9!oxV4D)xEb`1zDt*kEEC6)<4p1Rb0eYB-m2(Rw{c-8)6xw{x^ygnE^IB-AW<>mFU
zcXt=2^ZsjIsx3CWy1SV+HZg(!3x>;Iw((P5-Mq00-u|uQz4aF#MLbW0=wd_Ho#Rn}
zDt+Dz?Xn-PXY`q6nsCItq-MX*dHp!99*X98iE;9n6}D-5kBsEIckk-IT9r|xf;%KD
zS-(RWC{-W-2GA$gJe+m+*@6RrWZgqg_4Ka(T3%kh1*c{*84uTSek6TfCp^DSN3yD%
z)?URJrM$*$yR%f!bZMNqnA&SkVLM((t5yv$M*e#t)KFYnN}O7%eHU2?B^Pdu(xRek
zsr6<s(vgVb;o;dJkx)7I5Vf9!vw|Oa$kQ=T2(2SEvm;P2{2tC*kd*~HoeH97910kO
z4?net8)+(xOvp)!dY;Hjstx<B|0Nuqj){5cBT61v6ik{iuqftgWAl-BnXE}<8t_~y
z82+tsJWs2}6ea6LP}rIp+u$pzZyoNgs&mE2d-D(U_j_-_SaBwDemHXuMkMOv>w_|~
z%=jqNQWVp#U%uSd*_obFZ$NOlO@wwf#z}?me7%-Zb?Z{S#rkJKQM*Ke|LZvUkF-oy
zaXlN=y0Bn2f5+zc-Fx>g&bP@M^;A|>Q6O>u=xMGdvj87o)6>qA6OW(2gSOyvqykL;
zez0?PCQSt^G}A<g3j2I<-{Q79(L~T>tg)!pkauxLP~}X|m+ETD)b-NXq$DNDF)Lou
z!NqOwk}<3Fl!Sx?bt2I>urYsaq$}ZYz|WF(s>>aPxJoZMVlJHTjM#ff8q`_ENU65A
z{py;_>-#6)Vt=W2Moi)!oljql`u~-R|AQJs{+Z;=Q9?{ihTP5IMlrzfn^%X;O1(=a
zR+GZ`v$L}L%v{bF2b1EhJaWT2x2_69vm0scB7eBr+S{?_O}GS#BaZ57YHDuf!Zr^t
zZmX)e{r!{vxo7@@X$wq9c*D51QJc#(7voo{%`Zby!^@68GqSS2l$H5#Me~e=-&4FL
z@}XEC|Epo-vBzQOURl1E{{L-#{Lf53Gr78Tc>GIl4{zI5nE1*?9|I(q#|fp;D&t=*
zay413C&R6#UvkJu!fKN`=TYT4vkwg%sV}XpD83j*o~B)78oIVRn9CbM<#lz`mjlWi
z6Oq`}%GYa;+v!XaMYh;DpP^hNP-<(uZ`skup4_|B7l}@bvJth%SJzfsJ>)8;ycacZ
zI?U^An)<IcO>D^h@9O%0wOL+2Lp;#~i%2aJqDrban@mB@-C|CVPa#4+`ssK4&MnF$
zVI0FDALU5+X1G*8{F386=4Tj#5-c(tzEG?bD0l1c?@ttQcl{=_B-UhvFaBibR=6R2
vYaH^C{wHwa1u*IVMeF&mZt;KVStsX9CS)?TgW|>!Mj<1qD3K>_;Qc=UF>J1Q

-- 
GitLab