No facial recognition. (Image credit: Denisismagilov/Dreamstime)

IBM Nixes General Purpose Facial Recognition Technology

June 9, 2020
CEO Arvind Krishna calls on Congress to denounce irresponsible technology policies that promote racial injustice.

IBM has made the bold decision to no longer offer general-purpose facial recognition or analysis software. The decision was announced in a letter penned by CEO Arvind Krishna on June 8 and is part of a pledge to work with Congress “in pursuit of justice and racial equity.”

According to Krishna, IBM initially plans to focus its efforts on three policy areas: police reform, responsible use of technology, and broadening skills and educational opportunities. The letter addressed to Congress outlines policy proposals with a view to addressing responsible use of technology in the context of law enforcement.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” noted the letter. 

Krishna further articulated IBM’s stance on artificial intelligence, noting that vendors and users of AI systems “have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

Facial recognition software is an ongoing subject of scrutiny and has been largely unregulated. Krishna’s letter called for national policy to “encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.”

In a 2018 study, MIT researchers showed that machine learning algorithms can discriminate based on classes such as race and gender. The paper notes that many AI systems, including face recognition tools, rely on machine learning algorithms trained on biased data, which has resulted in algorithmic discrimination.
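The kind of disaggregated bias testing the letter calls for can be illustrated with a short sketch. The function, data and group labels below are hypothetical, shown only to indicate how per-group error rates might be compared; they do not represent how IBM or the MIT researchers actually audit their systems.

# Illustrative sketch: per-group error-rate audit for a binary classifier.
# All data and group labels are hypothetical.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group][0] += int(truth != pred)
        counts[group][1] += 1
    return {g: errors / total for g, (errors, total) in counts.items()}

# Hypothetical predictions from a face-analysis model on two groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0, 1, 1]
groups = ["A", "A", "B", "B", "B", "B", "A", "A"]

for group, rate in sorted(error_rates_by_group(y_true, y_pred, groups).items()):
    print(f"group {group}: error rate {rate:.2%}")

A large gap between the groups’ error rates is the signature of the algorithmic discrimination the researchers documented.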

In a Microsoft blog published Dec. 6, 2018, the company noted that despite many benefits, certain uses of facial recognition technology increase the risk of biased decisions, outcomes and experiences, with higher error rates when attempting to determine the gender of women and people of color.

To combat bias, intrusion on privacy and potential encroachment on democratic freedoms, Microsoft called for these problems to be addressed through legislation. The company has also taken steps, through research and policy updates, to help its engineers identify blind spots.
