I am about to board the flight back to Amsterdam. The news about recent aircraft crashes, however, prompted me to think more carefully about the decision of whether to take the flight. Let’s consider the following simple model. Suppose the status quo utility (i.e. not taking the flight) is normalized to \(0\). Taking the flight and making it to the destination, safe and sound, gives a normalized utility of \(1\). Undergoing a crash gives a normalized utility of \(-x\). Denote the probability of a crash by \(p\). Therefore, the decision is to take the flight if and only if
\[(1-p)\times 1 + p\times(-x) > 0,\]
that is, if and only if \(p\times(1+x)<1\).
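The decision rule above is simple enough to sketch in a few lines of code. This is just an illustration of the inequality \(p(1+x)<1\); the numbers plugged in are made up, not estimates of real crash probabilities.

```python
def take_flight(p: float, x: float) -> bool:
    """Take the flight iff the expected utility of flying,
    (1 - p)*1 + p*(-x), exceeds the status-quo utility of 0,
    which rearranges to p*(1 + x) < 1."""
    return p * (1 + x) < 1

# With a tiny p, even a very large x leaves the inequality intact:
print(take_flight(p=1e-7, x=1e5))  # True:  1e-7 * (1 + 1e5) ≈ 0.01 < 1
print(take_flight(p=1e-3, x=2e3))  # False: 1e-3 * 2001 ≈ 2.0 > 1
```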
Obviously, if the crash probability \(p\) is small enough, i.e. “negligible” as we usually call it, I should just go and take the flight. What the recent news changes, as many Bayesian agents are updating, is probably this probability \(p\): seeing so many crashes in so few days raises the posterior probability of crashing. If the increased \(p\) flips the above inequality, then fewer flights will be taken.
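One way to make this updating concrete is the conjugate Gamma–Poisson model (consistent with the Poisson view mentioned below): crashes arrive as a Poisson process with unknown intensity, and a Gamma prior on that intensity is updated by observed crash counts. The prior parameters and observations here are invented purely for illustration.

```python
def posterior_mean_intensity(alpha: float, beta: float, k: int, t: float) -> float:
    """Gamma-Poisson conjugacy: observing k crashes over exposure t
    updates a Gamma(alpha, beta) prior on the crash intensity to
    Gamma(alpha + k, beta + t), whose mean is (alpha + k) / (beta + t)."""
    return (alpha + k) / (beta + t)

prior_mean = posterior_mean_intensity(2.0, 1000.0, k=0, t=0.0)  # 2/1000 = 0.002
after_news = posterior_mean_intensity(2.0, 1000.0, k=3, t=7.0)  # 5/1007 ≈ 0.005
print(prior_mean, after_news)
```

Several crashes over a short exposure window raise the posterior mean intensity, and with it the implied \(p\), which is exactly the mechanism by which the news can flip the inequality.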
For me, however, the crash probability does not change. (This is because I have a Poisson prior over crashes, and I am not updating the intensity.) What really changes is the size of \(x\). In view of the recent changes in my life, I feel a larger and larger \(x\), the cost of losing my life. It’s future, it’s responsibility, and it’s promise.