Introduction:
Recently, Banjo CEO Damien Patton, once the subject of profiles by business journalists, was revealed to have participated in crimes committed with a white supremacist group. OneZero's analysis of grand jury testimony and hate crime prosecution documents showed that Patton pled guilty to involvement in a 1990 shooting attack on a synagogue in Tennessee. Amid growing awareness of algorithmic bias, the state of Utah halted a $20.7 million contract with Banjo, and the Utah attorney general's office opened an investigation into matters of privacy, algorithmic bias, and discrimination. But in a surprising twist, an audit and report released last week found no bias in the algorithm, because there was no algorithm to assess in the first place.
“Banjo expressly represented to the Commission that Banjo does not use techniques that meet the industry definition of artificial intelligence. Banjo indicated they had an agreement to gather data from Twitter, but there was no evidence of any Twitter data incorporated into Live Time.”
The incident has been called a fight for the soul of machine learning. It shows why government officials should evaluate claims made by companies seeking contracts, and how failure to do so can cost taxpayers millions of dollars. As the incident underscores, companies selling surveillance software can make false claims about their technology's capabilities, or may turn out to be charlatans or white supremacists, constituting a public nuisance or worse.
Help from the Commission on Protecting Privacy and Preventing Discrimination:
The audit result suggests a lack of scrutiny that can undermine public trust in AI and the governments that use it. Utah state auditor John Dougall conducted the audit with the help of the Commission on Protecting Privacy and Preventing Discrimination, a group his office formed after news broke of the company's white supremacist associations and its Utah state contract. Banjo had previously claimed that its Live Time technology could detect active shooter incidents, child abductions, and traffic accidents from video footage or social media activity. Amid the controversy, Banjo appointed a new CEO and rebranded under the name safeXai.
“The touted example of the system assisting in ‘solving’ a simulated child abduction was not validated by the AGO and was simply accepted based on Banjo’s representation. In other words, it would appear that the result could have been that of a skilled operator, as Live Time lacked the advertised AI technology,” Dougall states in a seven-page letter sharing the audit results.
Vice had reported that Banjo used a secret company and fake apps to scrape data from social media. Banjo and Patton had received support from politicians like US Senator Mike Lee (R-UT) and Utah State Attorney General Sean Reyes. In a letter accompanying the audit, Reyes commended the results of the investigation and noted that the finding of no discrimination was consistent with the conclusion his office had reached, since there was no AI to evaluate.
Statement from Sean Reyes:
“The subsequent negative information that came out about Mr. Patton was contained in records that were sealed and/or would not have been available in a robust criminal background check,” Reyes said in a letter accompanying the audit findings. “Based on our first-hand experience and close observation, we are convinced the horrible mistakes of the founder’s youth never carried over in any malevolent way to Banjo, his other initiatives, attitudes, or character.”
The audit's recommendations for anyone considering an AI contract include questions to ask third-party vendors and the need to conduct an in-depth review of vendors' claims and of the algorithms themselves.
“The government entity must have a plan to oversee the vendor and vendor’s solution to ensure the protection of privacy and the prevention of discrimination, especially as new features/capabilities are included,” reads one of the listed recommendations. Among other recommendations are the creation of a vulnerability reporting process and evaluation procedures, but no specifics were provided.
Some cities have surveillance technology review processes, but local and state adoption of private vendors' surveillance technology is happening in many places with little scrutiny. The lack of oversight could also become an issue for the federal government: the Government by Algorithm report that Stanford University and New York University jointly published last year concluded that about half of the algorithms used by federal government agencies come from third-party vendors.
The federal government is currently funding an initiative to develop public safety technology similar to the kind Banjo claimed to have built. The National Institute of Standards and Technology (NIST) regularly assesses the quality of facial recognition systems, and it also assesses the role the federal government should play in developing industry standards.
Introduction of ASAPS:
Last year, it introduced ASAPS, a competition in which the government encourages AI startups and researchers to develop systems that can determine whether an injured person needs an ambulance and whether police should be alerted to an altercation. These determinations would be based on a dataset incorporating everything from social media posts to 911 calls and camera footage. Such technology could save lives, but it could also lead to higher rates of contact with police, which can cost lives as well. It could even fuel repressive surveillance states like the kind used in Xinjiang to identify and control Muslim minority groups such as the Uyghurs.
Best practices for government procurement officers seeking contracts with third parties selling AI were introduced in 2018 by UK government officials working with the World Economic Forum and companies like Salesforce. The document recommends defining public benefit and risk, and it encourages open practices as a way to gain public trust.
“Without clear guidance on how to ensure accountability, transparency, and explainability, governments may fail in their responsibility to meet public expectations of both expert and democratic oversight of algorithmic decision-making and may inadvertently create new risks or harms,” the British-led report reads. The U.K. released official procurement guidelines in June 2020, but weeks later a grading algorithm scandal sparked widespread protests.
People concerned about the potential for things to go wrong have called on policymakers to implement additional legal safeguards. Last month, a group of current and former Google employees urged the government to adopt strengthened whistleblower protections to give tech workers a way to speak out when artificial intelligence poses a risk of public harm.
Democratization of AI:
A week before that, the National Security Commission on Artificial Intelligence called on Congress to give federal government employees a way to report misuse or inappropriate deployment of artificial intelligence. The group also recommended tens of billions of dollars in investment to democratize AI and suggested creating an accredited university to train AI talent for government agencies. Last year, the cities of Amsterdam and Helsinki created public algorithm registries so that citizens know which government agency is responsible for deploying an algorithm and have a mechanism for accountability if necessary.