New Mathematical Formula Unveiled to Prevent AI From Making Unethical Decisions

Researchers from the UK and Switzerland have found a mathematical means of helping regulators and businesses police Artificial Intelligence systems' bias towards making unethical, and potentially very costly and damaging, choices.

The collaborators from the University of Warwick, Imperial College London, and EPFL (Lausanne), along with the strategy firm Sciteb Ltd, believe that in an environment in which decisions are increasingly made without human intervention, there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to find and reduce that risk or, where possible, eliminate it entirely.

Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider, for example, using AI to set the prices of insurance products sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be more profitable to make certain decisions that end up hurting the company.

The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant potential penalty if regulators levy hefty fines or customers boycott the company, or both.

That is why these mathematicians and statisticians came together: to help businesses and regulators by creating a new "Unethical Optimization Principle" that provides a simple formula to estimate the impact of AI decisions.

As it stands, "Optimization can be expected to choose disproportionately many unethical strategies," said Professor Robert MacKay of the Mathematics Institute of the University of Warwick. "The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process."

The full details are laid out in a paper titled "An unethical optimization principle", published in Royal Society Open Science on Wednesday, 1 July 2020.

"Our suggested 'Unethical Optimization Principle' can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden," said MacKay, "(the) inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future."

Reprinted from the University of Warwick.
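MacKay's observation that optimization over a very large strategy space tends to over-select unethical strategies can be illustrated with a toy simulation. This is a minimal sketch, not the formula from the paper: the strategy count, the return distributions, the 1% unethical base rate, and the extra return of an unethical strategy are all assumptions made here for illustration.

```python
import random

random.seed(0)

# Toy model (assumed numbers): a large strategy space in which only 1%
# of strategies are unethical, but unethical strategies carry a modest
# extra return (e.g. from cutting corners). A naive optimizer that
# shortlists the highest-return strategies will then over-represent
# the unethical ones relative to their base rate.
N = 100_000            # size of the strategy space (assumed)
UNETHICAL_RATE = 0.01  # assumed base rate of unethical strategies
EDGE = 1.0             # assumed extra return of an unethical strategy

strategies = []
for _ in range(N):
    unethical = random.random() < UNETHICAL_RATE
    ret = random.gauss(1.0, 1.0) + (EDGE if unethical else 0.0)
    strategies.append((ret, unethical))

# The optimizer's shortlist: the 100 highest-return strategies.
top = sorted(strategies, reverse=True)[:100]
share = sum(1 for _, u in top if u) / len(top)
print(f"Unethical share of the top 100: {share:.0%} "
      f"(base rate {UNETHICAL_RATE:.0%})")
```

With these assumed numbers, the unethical share of the shortlist comes out many times larger than the 1% base rate, which is the disproportion the Principle warns about. Explicitly rejecting unethical strategies inside the optimization loop, as the authors suggest, would remove them before they can dominate the shortlist.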
