Published Date : Jun 11, 2018
The wider deployment of artificial intelligence (AI) across end-use sectors increasingly hinges on the standardization of its technologies and products. This effort is underpinned by the creation of industry standards and protocols, which help end users better leverage the potential of AI. To this end, several global technology providers, such as Google, along with industry players in developed nations, are pushing for ethical guidelines. As part of this movement, Accenture, the Ireland-headquartered management consulting and professional services provider, announced on June 10, 2018, that within the next week it will launch a tool for ensuring fairness in AI deployment and correcting unfair bias in AI models. The tool will enable companies to assess the unfair bias that may arise when data on sensitive variables are fed to AI models.
Tool Corrects Algorithmic Bias Caused by Correlation between Sensitive and Other Variables in AI Models
The tool is based on statistical methods and takes a close look at bias caused by the influence that sensitive variables, such as gender and race, have on other variables of the AI model. Though developing a perfectly fair AI algorithm is technically infeasible, tools can remove biases, particularly those that are quantifiable and measurable and that can have a devastating effect on decision-making if left uncorrected. The core benefit Accenture expects its fairness tool to deliver is a way to integrate such checks into the fast-paced innovation lifecycles of companies.
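As a rough illustration of the kind of check such a tool might perform (a hypothetical sketch, not Accenture's actual method), the snippet below flags "proxy" features whose correlation with a sensitive variable is high enough that a model could absorb the bias even if the sensitive column itself were dropped. All feature names and data here are invented for the example:

```python
# Hypothetical sketch of a proxy-variable check: flag features whose
# correlation with a sensitive variable exceeds a threshold. Not the
# Accenture tool's implementation; names and data are illustrative.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_proxies(features, sensitive, threshold=0.5):
    """Return names of features whose |correlation| with the sensitive
    variable exceeds the threshold, i.e. likely stand-ins for it."""
    return [name for name, values in features.items()
            if abs(pearson(values, sensitive)) > threshold]

# Invented data: 'gender' is the sensitive variable (0/1);
# 'part_time_years' tracks it closely, 'experience' does not.
gender = [0, 0, 1, 1, 0, 1, 1, 0]
features = {
    "experience":      [5, 3, 4, 6, 2, 5, 3, 4],
    "part_time_years": [0, 1, 4, 5, 0, 4, 5, 1],
}
print(flag_proxies(features, gender))  # flags only 'part_time_years'
```

A real tool would go further (e.g. handling categorical variables and non-linear dependence), but the core idea of measuring how strongly other variables carry information about a sensitive one is the same.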
Building Trust Crucial for Removing Biases That Impact AI-based Human Outcomes
The potential of AI technology is significantly hindered by biases in algorithms, and a substantial number of companies lack the resources to course-correct them. Accenture believes that the development of such AI fairness tools will pave the way for building the trust and communication needed to do so. It further opines that, among the various end-use industries, the first to benefit will be those in highly regulated domains, notably the financial services and healthcare sectors.
However, there is always a trade-off between fairness and accuracy, and the tool is expected to give users the freedom to decide how to balance the two. The tool has yet to be validated against varied types of AI models to establish its efficacy.
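The fairness-accuracy trade-off can be made concrete with a small invented example (again, a sketch under assumed data, not the tool's actual mechanism): equalizing positive-prediction rates across two groups by shifting one group's decision threshold typically costs some accuracy.

```python
# Hypothetical illustration of the fairness/accuracy trade-off:
# equalize positive-prediction rates across two groups by lowering
# one group's decision threshold, and measure the accuracy cost.
# Scores, labels, and groups are invented for the example.

def evaluate(scores, labels, groups, thresholds):
    """Overall accuracy plus per-group positive-prediction rates,
    using a group-specific decision threshold."""
    preds = [1 if s >= thresholds[g] else 0
             for s, g in zip(scores, groups)]
    acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    rates = {}
    for g in thresholds:
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return acc, rates

# Invented model scores; group "b" is scored lower overall.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.55, 0.45, 0.35, 0.25, 0.2, 0.1]
labels = [1,   1,   1,   0,   0,   0,   1,    1,    0,    0,    0,   0]
groups = ["a"] * 6 + ["b"] * 6

# One shared threshold: higher accuracy, but group "b" rarely gets
# a positive outcome.
acc_one, rates_one = evaluate(scores, labels, groups, {"a": 0.5, "b": 0.5})

# Lower threshold for "b": positive rates equalize, accuracy drops.
acc_fair, rates_fair = evaluate(scores, labels, groups, {"a": 0.5, "b": 0.25})
```

With these numbers the shared threshold yields unequal positive rates between the groups, while the group-adjusted thresholds equalize the rates at the price of a lower overall accuracy, which is exactly the kind of dial such a tool could expose to its users.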