Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies


The use of private service providers raises sensitive control issues

A study conducted by researchers at Stanford University and New York University examined how US government agencies use artificial intelligence (AI) in their activities. Contrary to the received idea that government agencies operate with outdated systems and procedures, it turns out that many of them have already experimented with AI and machine learning programs (see pp. 30-31 and 75).

The Securities and Exchange Commission (SEC), Customs and Border Protection (CBP), the Social Security Administration (SSA) and the US Patent and Trademark Office (PTO) are among the 142 federal agencies studied by the academics. Nearly half of these agencies (64, or 45%) have shown a particular interest in AI and machine learning “by planning, piloting, or implementing such technologies”. This is notably the case of the National Oceanic and Atmospheric Administration (NOAA), which is using artificial intelligence to improve high-impact weather tracking systems and thereby real-time decision-making. The Transportation Security Administration (TSA) is considering using image recognition to screen passenger baggage for explosive devices. The Centers for Medicare and Medicaid Services (CMS) is developing artificial intelligence-based tools to predict healthcare fraud. Meanwhile, the Department of Housing and Urban Development has released a prototype chatbot that informs citizens about housing benefits, various agency programs, and civil rights complaint procedures. Another received idea is that the majority of these algorithms are produced by private contractors; in fact, “more than half of the applications (84 use cases, or 53%) were developed in-house”, indicating real interest and strong ownership within government agencies.

However, this is not the case for Customs and Border Protection (CBP), a 60,000-employee agency of the Department of Homeland Security that screens for the entry of potential terrorists onto US soil as well as of weapons, drugs, contraband and unauthorized products, and that also fights illegal immigration. CBP uses two of the most controversial AI tools at the border: facial recognition technologies and risk prediction techniques, often relying on private companies. In 2018 alone, the agency received $196 million to acquire and deploy biometric identification systems. In 2017, CBP launched its Traveler Verification Service, facial recognition software deployed as part of “Biometric Entry/Exit” and now in service at several airports: when passengers board a plane, the system takes their photos, which are then processed by an algorithm to verify that their faces at boarding match the photo on record. CBP would like to create a central database that private operators could also use to deploy “self-service baggage drop-off kiosks, self-boarding gates using facial recognition, and other amenities”. Photos and videos taken by these private providers are submitted to CBP’s Traveler Verification Service. As for the recorded data, “the agency retains photographs of U.S. citizens only until their identity is confirmed, but may retain photographs of non-U.S. citizens for up to fifteen years”.
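To make the verification step concrete, here is a minimal sketch of one-to-one face matching of the kind described above: a photo taken at the gate is compared with the photo on record by measuring the similarity of embedding vectors produced by a face-recognition model. Everything in the sketch is an assumption for illustration, including the function names, the toy vectors, and the 0.6 threshold; it is not a description of CBP’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_match(boarding_embedding: np.ndarray,
                 gallery_embedding: np.ndarray,
                 threshold: float = 0.6) -> bool:
    """Return True if the boarding photo is judged to match the photo
    on record. The threshold is illustrative; a real system tunes it
    against false-accept and false-reject targets."""
    return cosine_similarity(boarding_embedding, gallery_embedding) >= threshold

# Toy vectors standing in for the output of a face-embedding model.
on_record = np.array([0.12, 0.85, -0.33, 0.41])
at_gate = np.array([0.10, 0.80, -0.30, 0.45])

print(verify_match(at_gate, on_record))  # True for these toy vectors
```

The point worth noting is that the whole decision reduces to a single threshold: where it is set determines the trade-off between falsely rejecting legitimate travelers and falsely accepting mismatches, which is precisely why failure rates matter in the debate described here.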

CBP also deploys artificial intelligence and machine learning algorithms in the context of “risk prediction” techniques, in particular through the “Automated Targeting System”, first publicly disclosed in 2006. According to the study, “the system creates and assigns a score to each entity that crosses the US border, determining the potential threat that the specific entity poses and the level and priority of screening and inspection that will be applied to it. One subsystem screens passengers, including flagging airline passengers for further screening, while another screens cargo.” Techniques for forecasting the potential risks posed by travelers rely on “non-invasive monitoring techniques and on the development of a formal scoring system allowing the officer to ground suspicion in behavioral and nonverbal indicators”. While artificial intelligence and machine learning tools significantly expand the agency’s reach and scope by making its operations more efficient and accurate, these programs also increase privacy and security risks and “reveal the underlying tensions between law enforcement objectives and agency transparency”. And because the use of private providers raises sensitive questions about control, CBP has been unable to explain the failure rate of one biometric application, in this case iris scanning, because it relies on “proprietary technology”.
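The scoring logic the study describes can be illustrated with a toy example: each indicator that fires for a traveler contributes a weight, the weights are summed into a score, and the score maps to a screening priority. The indicator names, weights, and thresholds below are invented for the sketch; the actual rules of the Automated Targeting System are not public.

```python
from dataclasses import dataclass

# Invented indicator weights; the real Automated Targeting System
# rules and weights are not public.
WEIGHTS = {
    "watchlist_partial_match": 40,
    "unusual_itinerary": 15,
    "cash_ticket_purchase": 10,
    "prior_inspection_flag": 20,
}

@dataclass
class Entity:
    """An entity crossing the border, with the indicators that fired."""
    entity_id: str
    indicators: set[str]

def risk_score(entity: Entity) -> int:
    """Sum the weights of the indicators observed for this entity."""
    return sum(WEIGHTS.get(i, 0) for i in entity.indicators)

def screening_tier(score: int) -> str:
    """Map a score to a screening priority (thresholds are illustrative)."""
    if score >= 50:
        return "secondary inspection"
    if score >= 20:
        return "enhanced screening"
    return "routine screening"

traveler = Entity("traveler-001", {"unusual_itinerary", "cash_ticket_purchase"})
score = risk_score(traveler)
print(score, screening_tier(score))  # 25 enhanced screening
```

Even this toy version shows why transparency is contested: the tier a traveler is assigned depends entirely on weights and thresholds that, in the real system, are neither published nor explained.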

While CBP’s use of artificial intelligence and machine learning algorithms is among the most controversial, the academic study also analyzes 157 use cases implemented by government agencies that are far less contentious.

Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, David Freeman Engstrom (Stanford University), Daniel E. Ho (Stanford University), Catherine M. Sharkey (New York University), and Mariano-Florentino Cuéllar (Stanford University and the California Supreme Court), February 2020.
