Cathy O’Neil to the Ivory Tower: Stop ignoring the ethics risks in tech

The use of artificial intelligence (AI) is now ubiquitous across nearly every aspect of modern business and social life. From wealth management to hiring, teacher evaluation, and our news feeds, AI shapes our day-to-day lives, whether or not we are aware of it.

As the use of AI has grown, so has a corresponding gap in the ability of many companies and people to explain what, how, and most importantly, why these systems make certain decisions. In a NYT op-ed this week, Cathy O’Neil gives academics a call to action: the time has come to provide a deeper, unbiased view of the risks of AI and the ethical challenges we face as its power and influence grow.

O’Neil is a data scientist and author of the book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, in which she describes how the big data and algorithms that were once engines of ingenuity are now raising critical questions about fairness, access, and equality.

O’Neil’s NYT piece argues that only academics can do the kind of research and analysis necessary for understanding the ethical impact of AI. She writes: “We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders, sentencing algorithms that fail twice as often for black defendants as for white defendants, statistically flawed public teacher assessments or oppressive scheduling algorithms. And we need research to ensure that the same mistakes aren’t made again and again.”

On the flip side, the tech industry needs to make data more available to academics. O’Neil identifies a gap in research that desperately needs addressing: “Academics largely don’t have access to the mostly private, sensitive personal data that tech companies collect; indeed even when they study data-driven subjects, they work with data and methods that typically predict much more abstract things like disease or economics than human behavior, so they’re naïve about the effects such choices can have.” In addition, tech companies can poach academic talent from campuses, leaving universities less able to conduct conflict-of-interest-free research.

O’Neil’s insightful essay includes an appeal that universities create “an academic institute focused on algorithmic accountability.” Only by doing so, she argues, will we begin to truly confront this intelligence gap for the long term.

As AI expands, it will raise ethical issues not previously encountered. Developing more sustainable partnerships between business and the academy can help address these emerging risks.

We urge you to read the entire piece online.
