Machine learning is essentially concerned with extracting models from data, often (though not exclusively) using them for the purpose of prediction. As such, it is inseparably connected with uncertainty. Indeed, learning in the sense of generalizing beyond the data seen so far is necessarily based on a process of induction, i.e., replacing specific observations by general models of the data-generating process. Such models are never provably correct but only hypothetical and therefore uncertain, and the same holds true for the predictions produced by a model. In addition to the uncertainty inherent in inductive inference, other sources of uncertainty exist, including incorrect model assumptions and noisy or imprecise data. Needless to say, a trustworthy representation of uncertainty is desirable and should be considered as a key feature of any machine learning method, all the more in safety-critical application domains such as medicine (Yang et al. 2011) or socio-technical systems (Varshney 2016; Varshney and Alemzadeh 2016). Besides, uncertainty is also a major concept within machine learning methodology itself; for example, the principle of uncertainty reduction plays a key role in settings such as active learning (Aggarwal et al. 2019), or in concrete learning algorithms such as decision tree induction (Mitchell 1980). Traditionally, uncertainty is modeled in a probabilistic way, and indeed, in fields like statistics and machine learning, probability theory has always been perceived as the ultimate tool for uncertainty handling.

Without questioning the probabilistic approach in general, one may argue that conventional approaches to probabilistic modeling, which are essentially based on capturing knowledge in terms of a single probability distribution, fail to distinguish two inherently different sources of uncertainty, which are often referred to as aleatoric and epistemic uncertainty (Hora 1996; Der Kiureghian and Ditlevsen 2009). Roughly speaking, aleatoric (aka statistical) uncertainty refers to the notion of randomness, that is, the variability in the outcome of an experiment which is due to inherently random effects. The prototypical example of aleatoric uncertainty is coin flipping: the data-generating process in this type of experiment has a stochastic component that cannot be reduced by any additional source of information (except Laplace’s demon). Consequently, even the best model of this process will only be able to provide probabilities for the two possible outcomes, heads and tails, but no definite answer. As opposed to this, epistemic (aka systematic) uncertainty refers to uncertainty caused by a lack of knowledge (about the best model). In other words, it refers to the ignorance (cf. Sect. 3.3 and Appendix A.2) of the agent or decision maker, and hence to the epistemic state of the agent instead of any underlying random phenomenon. As opposed to uncertainty caused by randomness, uncertainty caused by ignorance can in principle be reduced on the basis of additional information. For example, what does the word “kichwa” mean in the Swahili language, head or tail? The possible answers are the same as in coin flipping, and one might be equally uncertain about which one is correct. Yet, the nature of the uncertainty is different, as one could easily get rid of it. In other words, epistemic uncertainty refers to the reducible part of the (total) uncertainty, whereas aleatoric uncertainty refers to the irreducible part.
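The coin-flipping contrast can be made concrete with a small Bayesian sketch. The following illustration is ours, not part of the original text: a Beta-Bernoulli model of a fair coin, in which the posterior variance over the coin's bias (a proxy for epistemic uncertainty) shrinks as more flips are observed, while the Bernoulli variance of the next flip (the aleatoric part) stays near 0.25 no matter how much data arrives. The helper name `beta_variance` is hypothetical.

```python
# Sketch (assumption, not from the paper): epistemic vs. aleatoric
# uncertainty in a Beta-Bernoulli model of coin flipping.
import random

random.seed(42)


def beta_variance(a: float, b: float) -> float:
    """Variance of a Beta(a, b) distribution."""
    return (a * b) / ((a + b) ** 2 * (a + b + 1))


for n_flips in (0, 10, 100, 10000):
    a, b = 1.0, 1.0  # Beta(1, 1) prior: total ignorance about the bias
    for _ in range(n_flips):
        if random.random() < 0.5:  # simulate flips of a fair coin
            a += 1  # heads
        else:
            b += 1  # tails
    p_mean = a / (a + b)               # posterior mean of the bias
    epistemic = beta_variance(a, b)    # uncertainty about the bias: shrinks with data
    aleatoric = p_mean * (1 - p_mean)  # variance of the next flip: stays near 0.25
    print(f"{n_flips:>6} flips  epistemic={epistemic:.6f}  aleatoric={aleatoric:.3f}")
```

However many flips are observed, the aleatoric term never drops below the irreducible randomness of the coin itself, while the epistemic term can be driven arbitrarily close to zero by collecting more data.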