Individual: Normalize weights to always sum to 1
This MR builds on !1 (merged). Please review and merge that MR first.
Both estimatedFitness and realisedFitness defaulted to -1 in IndividualLocationPerception. However, if an agent doesn't know a fitness layer, that doesn't mean it perceives the layer negatively; it just doesn't perceive it at all.
This commit fixes the issue, which negatively biased the fitness estimation, by not including the unknown value at all. It also makes the code more readable and robust by using "unknown" instead of -1: a string can't be accidentally added into a numeric sum. When an agent doesn't perceive a layer, the remaining weights are rescaled so they sum to 1 again.
Changes
- Modified the getFitness method to build a list of fitness values and corresponding weights, handling 'unknown' cases.
- Introduced rescaling of the weights so they sum to 1 before calculating the weighted average (see the sketch below).
- Improved code readability and performance by eliminating redundant checks and ensuring consistent data structures.
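To make the reweighting concrete, here is a minimal sketch of the approach. The standalone function, its name, and the dict-based layer/weight representation are assumptions for illustration; the actual change lives in the getFitness method of the perception class.

```python
def weighted_fitness(perceived: dict, weights: dict) -> float:
    """Weighted average over perceived fitness layers.

    Layers whose value is "unknown" are skipped entirely instead of being
    counted as -1, and the remaining weights are rescaled to sum to 1.
    """
    values = []
    kept_weights = []
    for layer, weight in weights.items():
        value = perceived.get(layer, "unknown")
        if value == "unknown":
            continue  # unknown layers must not bias the estimate
        values.append(value)
        kept_weights.append(weight)

    if not kept_weights:
        return 0.0  # edge case: no layer perceived at all (assumption)

    # Rescale the remaining weights so they sum to 1 again.
    total = sum(kept_weights)
    normalized_weights = [w / total for w in kept_weights]

    return sum(v * w for v, w in zip(values, normalized_weights))
```

For example, with weights 0.5, 0.3, 0.2 over three layers and the third layer unknown, the remaining weights become 0.5/0.8 = 0.625 and 0.3/0.8 = 0.375, which again sum to 1.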
Two checks were performed to validate the refactored method. Both were run on 12 different experiments using the main.py module.
- The normalized weights were checked to always sum to 1:
```python
if abs(sum(normalized_weights) - 1.0) > 0.0001:
    raise ValueError(f"Normalized weights do not sum to 1, but to {sum(normalized_weights)}")
```
- The final result was checked to be between 0 and 1:
```python
# Check if result is between 0 and 1
if not -0.000001 <= result <= 1.000001:
    raise ValueError(f"Fitness value is not between 0 and 1, but {result}")
```
Due to rounding errors in the weights, the fitness value might come out as something like 1.0000000000000002. If that's a problem, let me know and I will cap it as well.
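For reference, such a cap would just be a one-line clamp after the weighted average; this is not part of this MR, and result here stands for the value returned by getFitness:

```python
# Clamp the weighted average into [0, 1] to absorb floating-point drift.
result = min(max(result, 0.0), 1.0)
```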
The results returned by getFitness look like this:
@wkolk If you want, I can do a full statistical verification that the solutions fall within the expected ranges and distributions, but that will take time.