In Anjana Ahuja’s article on artificial intelligence, “Could an AI death calculator be a good thing?” (Opinion, January 10), there are a number of problems that are not addressed.

First, the deep learning models being used to predict life outcomes are driven by so-called AI “scientists”. And what was their hypothesis? That more money in general predicts a better life outcome. Second, the predictive output of such models has several serious implications for society. With vast amounts of data there is little oversight of predictive accuracy, even if AI programmers use “training sets” on a small scale.

Once the models are left free to operate there will be little or no oversight. They will be used to exclude selected people from access to the scarce resources of the NHS, for no good reason. The NHS already does this based purely on age. Hip replacement? “Sorry, too old” will become: “Sorry, you lived in the wrong part of London for too long.” At least a GP can make a decision based on knowing the person. Meanwhile, in the US, insurance premiums will increase for those least able to afford them. Thus we end up with a self-fulfilling prediction (more people will die earlier than they would have done), which at least means the “scientist” will claim excellent accuracy.

Once the public catches on, the data collected will become tarnished by liars who want to increase their chances of receiving life-saving operations and medication. Once the AI programmers catch on to this, they will need verification of such measures (wealth, marital status, sexual preferences, height, weight and so on).

Is this the world we want? It sounds too much like the American science fiction film Gattaca.

Stelios Charalambides
Meredith, NH, US
