Explore projects
A very small bash script that checks a list of websites and sends a notification if any of them is down.
DEMOSES / DEMOSES-co-optimization
MIT License
NOMAD / NOMAD
GNU General Public License v3.0 only
Stef Lhermitte / ctb3311-pangeo
MIT License
DEMOSES / DEMOSES-coupling
MIT License
DEMOSES / DEMOSES-ADMM
MIT License
aj-lab / roodmus
GNU General Public License v3.0 or later
Extracts documentation from Ansible/shell/SQL/INI configs to Markdown files.
Ido Akkerman / mfem
BSD 3-Clause "New" or "Revised" License
Lightweight, general, scalable C++ library for finite element methods
Yidong Zhao / Hmc Uncertainty
Apache License 2.0
Webapp to create an inventory of rock samples collected from a borehole. It uses the RockIn Data Model to establish the relationships between samples.
ACS2023 / Lab1-Template
Apache License 2.0
Rene Paassen / CSE-vehicle-sailer
GNU General Public License v3.0 or later
Simulates missing data and performs a Bayesian sensitivity analysis of the MCAR assumption. Based on the work of Scharfstein et al. (2003).
See our paper for more details on the simulations and the theory.
Simulations.R generates the posterior distributions of the functional of interest for both parametrisations. It uses all CPU cores minus one (you can change this at the start of the script). Because the runtime is long, it outputs a single .csv with the data, to minimize the number of runs needed, along with five images in the working directory: (1) histograms of the posteriors, (2-3) trajectories of the posteriors of eta and alpha, and (4-5) priors and posteriors of eta and alpha. It also reports the acceptance rate of the random walk Metropolis-Hastings step in the Gibbs sampling scheme. If you interrupt a run, don't forget to call stopCluster(cl)!
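For orientation, the parallel set-up described above reduces to something like the sketch below; only the parallel package, the cluster object cl, and stopCluster(cl) come from this README, the rest is illustrative and not the actual contents of Simulations.R.

```r
library(parallel)

# All CPU cores minus one, as described above; change n_cores to use fewer.
n_cores <- detectCores() - 1
cl <- makeCluster(n_cores)

# ... run the Gibbs sampler with the random walk Metropolis-Hastings step on cl
# (e.g. via parLapply(cl, ...)) and write the posterior draws to the .csv ...

# Always release the cluster, also after interrupting a run.
stopCluster(cl)
```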
Coverage.R calculates the coverage of the method above. It is set up to run 10 sessions at the same time, each on multiple cores, for use on a cluster. Each session outputs its credible sets and their lengths to a CSV file (only 1/10 of the total); for the full coverage and lengths, combine the values from the CSVs of the 10 runs, as sketched below. The implementation is straightforward, with no dependencies other than parallel, and is made to run on a single node across multiple CPU cores; run it 10 times simultaneously on different nodes. Make sure to leave some cores free on each node so others can use them at the same time!
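A minimal sketch of combining the 10 partial CSVs, assuming hypothetical file names coverage_run_01.csv through coverage_run_10.csv and hypothetical columns covered and length (the actual output format of Coverage.R may differ):

```r
# Stack the 10 partial results and summarise them; file and column names
# are assumptions, not the actual output format of Coverage.R.
files    <- sprintf("coverage_run_%02d.csv", 1:10)
runs     <- lapply(files, read.csv)
all_runs <- do.call(rbind, runs)

mean(all_runs$covered)  # full coverage over all 10 runs combined
mean(all_runs$length)   # average credible-set length over all 10 runs
```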
Coverage_local.R does the same as Coverage.R but is made for local runs. Not recommended due to long runtimes for high values of n; for n < 1000 it is doable.
Directories contain output images for different priors on the sensitivity parameter. The .csv files contain the results of the coverage runs for different values of alpha and no prior on alpha.
If you have any questions or remarks, please let me know!