Lauren M.E. Goodlad, chair of Critical AI, is featured in an article on the Ethics of Algorithms in Rutgers Magazine.
The Ethics of Algorithms
By Paula Derrow
Artificial intelligence (AI) has made aspects of life more convenient and even safer, courtesy of services such as Siri and Alexa. “There are tremendous benefits to AI,” says Fred S. Roberts, Distinguished Professor of Mathematics at the School of Arts and Sciences (SAS) and director of the Command, Control, and Interoperability Center for Advanced Data Analysis (CCICADA). The center, a university consortium in which Rutgers is the lead partner, is part of the U.S. Department of Homeland Security. “We can use facial recognition technology to identify missing children, for instance, or diagnose rare diseases. But you have to keep the trade-offs in mind.”
One of those trade-offs is that the powerful, predictive algorithms fueling everything from facial recognition technology to decisions about who gets a bank loan or a traffic ticket can adversely affect your privacy, health, well-being, and personal finances—and are deepening inequities in American society.
“With AI, people tend to worry about things like super-intelligent computers turning evil, as in The Terminator,” says Lauren M.E. Goodlad, a professor in the Department of English at SAS and chair of Critical AI, a new interdisciplinary initiative examining the ethics of artificial intelligence. What is truly worrisome, she says, “is how this technology can be used in an opaque way to manipulate our behavior, as we’ve seen with Facebook, along with other problems that are making our country more unequal than it has been since the Gilded Age.”