Machine learning (ML) is increasingly used in clinical care to improve diagnosis, therapy selection and the effectiveness of health systems. However, ML models are trained on historically gathered data that underrepresent parts of the population experiencing social, racial or gender discrimination, undermining the fairness of predictive models and raising ethical concerns. The EU-funded FPH project will map ethical theories concerning the distribution of resources in healthcare and link them to fair ML. It will examine how common moral concepts can be expressed in probabilistic terms, assess whether existing claims of fairness for AI models hold up under different philosophical interpretations of probability, causality and counterfactuals, and demonstrate the relevance of these interpretations.
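To make the fairness concerns above concrete: one widely used statistical criterion is demographic parity, which compares the rate of positive predictions across groups. The sketch below is purely illustrative, with hypothetical toy data and group labels that are not drawn from the FPH project itself.

```python
# Illustrative sketch: demographic parity, a common statistical
# fairness criterion. All data here are hypothetical toy values.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in set(groups):
        preds_g = [p for p, gr in zip(predictions, groups) if gr == g]
        rate[g] = sum(preds_g) / len(preds_g)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

# Toy model outputs (1 = recommended for treatment) for two groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(gap)  # 0.5: group A receives positive predictions far more often
```

A gap of zero would mean both groups receive positive predictions at the same rate; whether that is the right notion of fairness, and how it relates to causal and counterfactual readings, is precisely the kind of question the project interrogates.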