A new report from the Government Accountability Office (GAO) has revealed a near-total lack of accountability among federal agencies using facial recognition built by private companies like Clearview AI.
Of the 14 federal agencies that said they used privately built facial recognition for criminal investigations, only Immigration and Customs Enforcement was in the process of implementing a list of approved facial recognition vendors and a log sheet for the technology’s use.
The rest of the agencies, including Customs and Border Protection, the Federal Bureau of Investigation, and the Drug Enforcement Administration, had no process in place to track the use of private facial recognition.
This GAO report greatly expands the public’s knowledge of how the federal government uses facial recognition, detailing which agencies use systems built by the government, which rely on third-party vendors, and how large those datasets are in each case. Of 42 federal agencies surveyed, 20 told the oversight agency they used facial recognition in some form, most relying on federal systems maintained by the Department of Defense and Department of Homeland Security.
These federal systems can hold a staggering amount of identities: the Department of Homeland Security’s Automated Biometric Identification System holds more than 835 million identities, according to the GAO report.
Federal agencies were also asked how they used this technology during racial justice protests in the wake of George Floyd’s murder, as well as the Capitol Hill riot on January 6th.
Six agencies, including the FBI, US Marshals Service, and Postal Inspection Service, used facial recognition on “individuals suspected of violating the law” in protests last summer. Three agencies used the technology to investigate the January 6th riot: Capitol Police, Customs and Border Protection, and the Bureau of Diplomatic Security. However, some information was withheld from GAO investigators because it pertained to active investigations.
The use of this technology on protesters and rioters shows how critical it is to have accountability mechanisms in place. The GAO explains that if these agencies don’t know which facial recognition services they’re using, they have no way to mitigate the enormous privacy, security, or accuracy risks inherent in the technology.
“When agencies use facial recognition technology without first assessing the privacy implications and applicability of privacy requirements, there is a risk that they will not adhere to privacy-related laws, regulations, and policies,” the report says.
In one case, GAO investigators asked a federal agency if it was using facial recognition built by private companies, and the agency said it was not. But after an internal poll, the unnamed agency learned that employees had run such facial recognition searches more than 1,000 times.
Going forward, the GAO has issued 26 recommendations to federal agencies on the continued use of facial recognition: two identical recommendations for each of the 13 agencies without an accountability mechanism in place. First, figure out which facial recognition systems you’re using; then, study the risks of each.