Explore projects

  • The purpose of the model is to explore the influence of design interventions, combined with planned behaviour, on supplier and processor cashflows in Industrial Symbiosis Networks, and on network robustness.

  • Code for the manuscript entitled "Re-oxidation of cytosolic NADH is a major contributor to the high oxygen requirements of the thermotolerant yeast Ogataea parapolymorpha in oxygen-limited cultures"

  • A package for Counterfactual Explanations and Algorithmic Recourse in Julia.

  • This project queries the DataCite API for metadata on datasets related to concrete, then examines which data repositories the datasets have been published in.
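    A minimal sketch of this kind of query in Python, assuming the public DataCite REST API's /dois endpoint; the search term "concrete" follows the project description, while the helper names are illustrative:

```python
import json
from collections import Counter
from urllib.request import urlopen

DATACITE_DOIS = "https://api.datacite.org/dois"

def fetch_page(query: str, page_size: int = 100) -> dict:
    """Fetch one page of DOI metadata matching the query (requires network)."""
    url = f"{DATACITE_DOIS}?query={query}&page%5Bsize%5D={page_size}"
    with urlopen(url) as resp:
        return json.load(resp)

def count_repositories(response: dict) -> Counter:
    """Tally which DataCite client (data repository) each DOI record belongs to."""
    counts: Counter = Counter()
    for record in response.get("data", []):
        counts[record["relationships"]["client"]["data"]["id"]] += 1
    return counts

# Usage (network): count_repositories(fetch_page("concrete")).most_common(10)
```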

  • WORK IN PROGRESS - Samples from the posterior distribution when a sigma-stable process is used as the prior. If you interrupt a run, don't forget to call stopCluster(cl)!

  • Simulates missing data and performs Bayesian sensitivity analysis on the MCAR assumption. Based on the work of Scharfstein et al. (2003).

    See our paper for more details on the simulations and the theory.

    Simulations.R generates the posterior distributions of the functional of interest for both parametrisations. By default it uses all CPU cores minus one (you can change this at the start of the script). Because runtimes are long, it writes the data to a single .csv so that runs need not be repeated, and it saves five images in the working directory: (1) histograms of the posteriors, (2-3) trajectories of the posteriors of eta and alpha, and (4-5) priors and posteriors of eta and alpha. It also reports the acceptance rate of the random-walk Metropolis-Hastings step in the Gibbs sampling scheme. If you interrupt a run, don't forget to call stopCluster(cl)!

    Coverage.R calculates the coverage of the above method. It is set up for ten sessions running at the same time, each on multiple cores of a single node, for use on a cluster: run it ten times simultaneously on different nodes. Each session outputs one tenth of the credible sets and their lengths into a CSV file; for the full coverage and lengths, add up the values from the CSVs of the ten runs. The implementation is simple, with no dependencies other than the parallel package. Make sure to leave some cores free on each node so that others can use them at the same time!

    Coverage_local.R does the same as Coverage.R but is made for local runs. It is not recommended for large n because of the long runtimes; for n < 1000 it is doable.

    The directories contain output images for different priors on the sensitivity parameter. The .csv files contain the results of the coverage runs for different values of alpha and for runs with no prior on alpha.

    If you have any questions or remarks, please let me know!
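    The accept rate mentioned above is that of the random-walk Metropolis-Hastings step inside the Gibbs sampler. A minimal sketch of such a step, in Python rather than the repository's R, with an illustrative standard-normal target and step size (not the paper's posterior):

```python
import math
import random

def rw_metropolis(log_target, x0, step, n_iter, seed=0):
    """One-dimensional random-walk Metropolis-Hastings; returns the chain
    and the acceptance rate (the quantity Simulations.R reports)."""
    rng = random.Random(seed)
    x, accepted, chain = x0, 0, []
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        log_ratio = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x, accepted = proposal, accepted + 1
        chain.append(x)
    return chain, accepted / n_iter

# Illustrative target: a standard normal (log-density up to a constant).
chain, accept_rate = rw_metropolis(lambda x: -0.5 * x * x,
                                   x0=0.0, step=2.4, n_iter=5000)
```

    Within a Gibbs scheme, a step like this replaces the draw for any coordinate whose full conditional cannot be sampled directly, and the acceptance rate is the usual diagnostic for tuning the proposal scale.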
