5 simple rules to make AI a force for good


Consumers and activists are rebelling against Silicon Valley titans, and all levels of government are probing how they operate. Much of the concern is over vast quantities of data that tech companies gather—with and without our consent—to fuel artificial intelligence models that increasingly shape what we see and influence how we act.

If “data is the new oil,” as boosters of the AI industry like to say, then scandal-challenged data companies like Amazon, Facebook, and Google may face the same mistrust as oil companies like BP and Chevron. Vast computing facilities refine crude data into valuable distillates like targeted advertising and product recommendations. But burning data pollutes as well, with faulty algorithms that make judgments on who can get a loan, who gets hired and fired, even who goes to jail.

The extraction of crude data can be equally devastating, with poor communities paying a high price. Sociologist and researcher Mutale Nkonde fears that the poor will sell for cheap the rights to biometric data, like scans of their faces and bodies, to feed algorithms for identifying and surveilling people. “The capturing and encoding of our biometric data is going to probably be the new frontier in creating value for companies in terms of AI,” she says.

The further expansion of AI is inevitable, and it could be used for good, like helping take violent images off the internet or speeding up the drug discovery process. The question is whether we can steer its growth to realize its potential benefits while guarding against its potential harms. Activists will have different notions of how to achieve that than politicians or heads of industry do. But we’ve sought to cut across these divides, distilling the best ideas from elected officials, business experts, academics, and activists into five principles for tackling the challenges AI poses to society.

1. CREATE AN FDA FOR ALGORITHMS

Algorithms are impacting our world in powerful but not easily discernible ways. Robotic systems aren’t yet replacing soldiers as in The Terminator, but they are slowly supplanting the accountants, bureaucrats, lawyers, and judges who decide benefits, rewards, and punishments. Despite the grown-up jobs AI is taking on, algorithms continue to use childish logic drawn from biased or incomplete data.

Cautionary tales abound, such as a seminal 2016 ProPublica investigation that found law enforcement software was overestimating the chance that black defendants would re-offend, leading to harsher sentences. In August, the ACLU of Northern California tested Rekognition, Amazon’s facial-recognition software, on images of California legislators. It matched 26 of 120 state lawmakers to images from a set of 25,000 public arrest photos, echoing a test the ACLU did of national legislators last year. (Amazon disputes the ACLU’s methodology.)

Faulty algorithms charged with major responsibilities like these pose the greatest threat to society—and need the greatest oversight. “I advocate having an FDA-type board where, before an algorithm is even released into usage, tests have been run to look at impact,” says Nkonde, a fellow at Harvard University’s Berkman Klein Center for Internet & Society. “If the impact is in violation of existing laws, whether it be civil rights, human rights, or voting rights, then that algorithm cannot be released.”

Nkonde is putting that idea into practice by helping write the Algorithmic Accountability Act of 2019, a bill introduced by U.S. Representative Yvette Clarke and Senators Ron Wyden and Cory Booker, all of whom are Democrats. It would require companies that use AI to conduct “automated decision system impact assessments and data protection impact assessments” to look for issues of “accuracy, fairness, bias, discrimination, privacy, and security.”

These would need to be in plain language, not techno-babble. “Artificial intelligence is . . . a very simple concept, but people often explain it in very convoluted ways,” says Representative Ro Khanna, whose Congressional district contains much of Silicon Valley. Khanna has signed on to support the Algorithmic Accountability Act and is a co-sponsor of a resolution calling for national guidelines on ethical AI development.

Chances are slim that any of this legislation will pass in a divided government during an election year, but it will likely influence the discussion in the future (for instance, Khanna co-chairs Bernie Sanders’s presidential campaign).

2. OPEN UP THE BLACK BOX OF AI FOR ALL TO SEE

Plain-language explanations aren’t just wishful thinking by politicians who don’t understand AI, according to someone who certainly does: data scientist and human rights activist Jack Poulson. “Qualitatively speaking, you don’t need deep domain expertise to understand many of these issues,” says Poulson, who resigned his position at Google to protest its development of a censored, snooping search engine for the Chinese market.
