In Anjana Ahuja’s article on artificial intelligence, “Could an AI death calculator be a good thing?” (Opinion, January 10), a number of problems go unaddressed.
First, the deep learning models being used to predict life outcomes are driven by so-called AI “scientists”. And what was their hypothesis? That more money in general predicts a better life outcome. Second, the predictive output of such models has several serious implications for society. With vast amounts of data, there is little oversight of predictive accuracy, even if AI programmers use “training sets” on a small scale.
Once the models are left free to operate, there will be little or no oversight. The models will be used to exclude selected people from access to the scarce resources of the NHS, for no good reason. They already do this based purely on age. Hip replacement? “Sorry, too old” will become: “Sorry, you lived in the wrong part of London for too long.” At least a GP can make a decision based on knowing the person. Meanwhile, in the US, insurance premiums will increase for those least able to afford them. Thus we end up with a self-fulfilling prediction — that more people will die earlier than they otherwise would have — which at least means the “scientist” will claim excellent accuracy.
Once the public catches on, the data collected will become tarnished by liars who want to increase their chances of receiving life-saving operations and medication. Once the AI programmers catch on to this, they will need to verify such measures (wealth, marital status, sexual preferences, height, weight and so on).
Is this the world we want? It sounds too much like the American science fiction film Gattaca.
Stelios Charalambides
Meredith, NH, US