
IBM proposes AI rules to ease bias concerns

Bloomberg—IBM called for rules aimed at eliminating bias in artificial intelligence to ease concerns that the technology relies on data that bakes in past discriminatory practices and could harm women, minorities, the disabled, older Americans and others.

As it seeks to shape a growing debate in the U.S. and Europe over how to regulate the burgeoning industry, IBM urged industry and governments to jointly develop standards to measure and combat potential discrimination.

The Armonk, N.Y.-based company issued policy proposals Tuesday ahead of a Wednesday panel on AI to be led by Chief Executive Officer Ginni Rometty on the sidelines of the World Economic Forum in Davos. The initiative is designed to find a consensus on rules that may be stricter than what industry alone might produce, but less stringent than what governments might impose on their own.

“It seems pretty clear to us that government regulation of artificial intelligence is the next frontier in tech policy regulation,” said Chris Padilla, vice president of government and regulatory affairs at IBM.

The 108-year-old company, once a world technology leader, has lagged behind the sector for years. In its fight to remain relevant, IBM has pegged its future on newer technologies like artificial intelligence and cloud services. But it has yet to show significant revenue growth from those areas.

The IBM recommendations call for companies to work with governments to develop standards to ensure, for instance, that African-Americans are guaranteed fair access to housing even when algorithms rely on historical data, such as ZIP codes or mortgage rates, that may have been skewed by discrimination. In the U.S., that work would likely occur through the National Institute of Standards and Technology within the Department of Commerce.
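
Such standards would need concrete metrics behind them. Purely as an illustration, and not something drawn from IBM's proposal, one widely used check is the "disparate impact" ratio, which compares the rate of favorable outcomes for a protected group with the rate for a reference group. The short Python sketch below uses hypothetical loan-approval data to show the idea.

```python
# A minimal sketch (not IBM's methodology) of one common fairness check:
# the "disparate impact" ratio, i.e. the rate of favorable outcomes for a
# protected group divided by the rate for a reference group. Regulators
# have long cited 0.8 (the "four-fifths rule") as a rough threshold of concern.

def favorable_rate(decisions):
    """Fraction of decisions that were favorable (e.g., loan approved)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact(protected, reference):
    """Ratio of favorable-outcome rates; values well below 1.0 suggest bias."""
    ref_rate = favorable_rate(reference)
    return favorable_rate(protected) / ref_rate if ref_rate else float("nan")

# Hypothetical approval outcomes (1 = approved, 0 = denied) by group.
protected_group = [1, 0, 0, 1, 0, 0, 0, 1]   # 37.5% approved
reference_group = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved

ratio = disparate_impact(protected_group, reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, well below the 0.8 rule of thumb
```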

Rometty is hosting the panel, which includes a top White House aide, Chris Liddell, OECD Secretary-General José Ángel Gurría and Siemens AG CEO Joe Kaeser.

IBM also suggests that companies appoint chief AI ethics executives, carry out assessments to determine how much harm an AI system may pose and maintain documentation about the data used when “making determinations or recommendations with potentially significant implications for individuals” so that the decisions can be explained.
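
Of those suggestions, the documentation requirement is the most concrete. Purely as an illustration of the general idea, and not IBM's actual framework, a company might log each automated decision along with the model version, the inputs it saw and the main factors behind the outcome; the hypothetical Python sketch below shows one way such a record could look.

```python
# A minimal sketch, not IBM's framework, of the kind of record a company
# might keep so an automated decision can later be explained.
# All field names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str    # which model produced the decision
    input_features: dict  # data the model actually saw
    decision: str         # outcome, e.g. "approved" / "denied"
    top_factors: list     # human-readable reasons behind the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical usage: capture enough context to reconstruct the decision later.
record = DecisionRecord(
    model_version="credit-risk-2020.01",
    input_features={"income": 52000, "debt_ratio": 0.31},
    decision="denied",
    top_factors=["debt_ratio above threshold", "short credit history"],
)
print(record)
```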

Spearheading the AI regulatory debate gives IBM a chance to come back into the spotlight as a leader in cutting-edge technology, a position it hasn’t held for years.

The AI proposals are intended to stave off potential crises that could enrage customers, lawmakers and regulators worldwide—similar to what happened with Facebook in the Cambridge Analytica data scandal, when the personal data of millions of Americans was transferred to the political consulting firm without their knowledge.

“I don’t think we’re yet in the same place on AI,” Padilla said. “So I don’t think it’s too late to try this approach.”

Concerns about AI and machine learning—software tools that use existing data to automate future analysis and decision-making—range from identifying faces in security-camera footage to making determinations about mortgage rates. AI is central to the future of many technology companies, including IBM, but has spurred worries that it could kill jobs and entrench existing disparities in areas such as law enforcement, access to credit and hiring.

IBM has been working with the Trump administration since last summer on its approach to AI regulation. Earlier this month, the White House issued guidelines for use of the technology by federal agencies, which emphasized a desire not to impose burdensome controls. Last week, a bipartisan group of U.S. senators unveiled a bill designed to boost private and public funding for AI and other industries of the future.

The European Union is considering new, legally binding requirements for developers of artificial intelligence to ensure the technology is developed and used in an ethical way. IBM advised a European committee of academics, experts and executives that recommended avoiding unnecessarily prescriptive rules.

The EU’s executive arm is set to propose that the new rules apply to “high-risk sectors,” such as healthcare and transportation. It also may suggest that the bloc update safety and liability laws, according to a draft of a so-called white paper on artificial intelligence obtained by Bloomberg. The European Commission is expected to unveil the paper in mid-February, and the final version is likely to change.

Padilla said compliance with standards could become a selling point for companies and perhaps help lower their legal liability.

“If we take a just-say-no approach, or we just wait, the chances are higher that governments will react to something that happens,” he said. “Then you will get more of a prescriptive, top-down regulation.”
