Singapore must take caution with AI use, review approach to public trust

In its quest to drive the adoption of artificial intelligence (AI) across the country, multi-ethnic Singapore needs to take special care navigating its use in some areas, specifically law enforcement and crime prevention. It also must recognise that trust is crucial for citizens to be comfortable with AI, and that nurturing such trust will require attention across many different aspects of its society.

It must have been at least two decades ago when I attended a media briefing during which an executive demonstrated the company’s latest speech recognition software. As with most demos, no matter how much you prepare, things can go desperately wrong.

Her voice-directed commands often were wrongly executed, and several spoken words in every sentence were inaccurately transcribed into text. The harder she tried, the more things went wrong, and by the end of the demo, she looked visibly flustered.

She had a relatively strong accent and I’d assumed that was likely the main issue, but she had spent hours training the software. The company was known at the time specifically for its speech recognition products, so it wouldn’t be wrong to assume its technology was then the most advanced in the market.

I walked away from that demo thinking it would be near impossible for speech recognition technology to be sufficiently accurate, given the vast difference in accents within Asia alone, even amongst those who spoke the same language.


Some 20 years later, speech-to-text and translation tools clearly have come a long way, but they’re still not always accurate. An individual’s accent and speech patterns remain key variables that determine how well spoken words are transcribed.

However, wrongly converted words are unlikely to cause much damage, save for a potentially embarrassing moment on the speaker’s part. The same is far from true where facial recognition technology is concerned.

In January, police in Detroit, USA, admitted their facial recognition software had falsely identified a shoplifter, leading to his wrongful arrest.

Vendors such as IBM, Microsoft, and Amazon have maintained bans on the sale of facial recognition technology to police and law enforcement, citing human rights concerns and racial discrimination. Most have urged governments to establish stronger regulations to govern and ensure the ethical use of facial recognition tools.

Amazon had said its ban would remain until regulators addressed issues around the use of its Rekognition technology to identify potential criminal suspects, while Microsoft said it would not sell facial recognition software to police until federal laws were in place to regulate the technology.

IBM chose to exit the market completely over concerns facial recognition technology could instigate racial discrimination and injustice. Its CEO Arvind Krishna wrote in a June 2020 letter to the US Congress: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.

“AI is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” Krishna penned.

I recently spoke with Ieva Martinkenaite, who chairs the AI task force at GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe. Martinkenaite’s day job sees her as head of analytics and AI for Telenor Research.

In our discussion on how Singapore could best approach the issue of AI ethics and use of the technology, Martinkenaite said every country would have to decide what it felt was acceptable, especially when AI was used in high-risk areas such as detecting criminals. Here, she noted, challenges remained, with evidence of discriminatory results against certain ethnic groups and genders.

In deciding what was acceptable, she urged governments to have an active dialogue with citizens. She added that until veracity issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, and quality assurance in place.

How should AI be best trained for multi-ethnic Singapore?

Facial recognition software has come under fire for its inaccuracy, in particular in identifying people with darker skin tones. A 2017 MIT study, which found that darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males, pointed to the need for more phenotypically diverse datasets to improve the accuracy of facial recognition systems.

Presumably, AI and machine learning models trained with less data on one ethnic group would exhibit a lower degree of accuracy in identifying individuals in that group.

Singapore’s population comprises 74.3% Chinese, 13.5% Malays, and 9% Indians, with the remaining 3.2% made up of other ethnic groups such as Eurasians.

Should the country decide to tap facial recognition systems to identify individuals, should the data used to train the AI model consist of more Chinese faces since the ethnic group forms the population’s majority? If so, will that lead to a lower accuracy rate when the system is used to identify a Malay or Indian, since fewer data samples of these ethnic groups were used to train the AI model? 

Will using an equal proportion of data for each ethnic group then necessarily lead to a more accurate score across the board? Since there are more Chinese residents in the country, should the facial recognition technology be better trained to more accurately identify this ethnic group because the system will likely be used more often to recognise these individuals? 
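To make the trade-off behind those questions concrete, here is a minimal sketch of the two sampling strategies. All numbers, group labels, and function names are hypothetical, invented purely for illustration; they describe no real system or dataset.

```python
def plan_training_counts(pools: dict[str, int], total: int,
                         proportional: bool) -> dict[str, int]:
    """Decide how many training images to draw from each group.

    `pools` maps each group to the number of images available;
    `total` is the desired size of the training set.
    """
    if proportional:
        # Mirror each group's share of the overall population.
        population = sum(pools.values())
        counts = {g: round(total * n / population) for g, n in pools.items()}
    else:
        # Give every group an equal share, regardless of population size.
        counts = {g: total // len(pools) for g in pools}
    # A plan can ask for more images than a minority group actually has.
    return {g: min(counts[g], pools[g]) for g in pools}

# Hypothetical image pools mirroring Singapore's population shares.
pools = {"Chinese": 74_300, "Malay": 13_500, "Indian": 9_000, "Others": 3_200}

print(plan_training_counts(pools, 40_000, proportional=True))
# {'Chinese': 29720, 'Malay': 5400, 'Indian': 3600, 'Others': 1280}
print(plan_training_counts(pools, 40_000, proportional=False))
# {'Chinese': 10000, 'Malay': 10000, 'Indian': 9000, 'Others': 3200}
```

Note how the balanced plan immediately hits a practical constraint: minority groups have fewer images available, so “equal proportions” may mean shrinking the entire training set or collecting far more minority-group data.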

These questions touch only on the “right” volume of data that should be used to train facial recognition systems. There still are many others concerning data alone, such as where training data should be sourced, how the data should be categorised, and how much training data is deemed sufficient before the system is considered “operationally ready”. 
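One way vendors and auditors probe such questions in practice is to measure accuracy per group rather than in aggregate, since a single headline figure can hide exactly the disparities the MIT study surfaced. The sketch below, again with entirely hypothetical results, shows how such a disaggregated audit might be computed.

```python
from collections import defaultdict

def per_group_accuracy(results: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute identification accuracy separately for each group.

    `results` holds one (group, correctly_identified) pair per test image.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical audit results, skewed the way a model trained mostly on
# the majority group might behave.
results = ([("Chinese", True)] * 950 + [("Chinese", False)] * 50 +
           [("Malay", True)] * 880 + [("Malay", False)] * 120 +
           [("Indian", True)] * 860 + [("Indian", False)] * 140)

for group, accuracy in sorted(per_group_accuracy(results).items()):
    print(f"{group}: {accuracy:.1%}")
# Chinese: 95.0%
# Indian: 86.0%
# Malay: 88.0%
```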

Singapore will have to navigate these carefully should it decide to tap AI in law enforcement and crime prevention, especially as it regards racial and ethnic relations as important, but sensitive, to manage.

Beyond data, discussions and decisions will need to be made on, amongst others, when AI-powered facial recognition systems should be used, how autonomously they should be allowed to operate, and when human intervention would be required.

The European Parliament just last week voted in support of a resolution banning law enforcement from using facial recognition systems, citing various risks including discrimination, opaque decision-making, privacy intrusion, and challenges in protecting personal data. 

“These potential risks are aggravated in the sector of law enforcement and criminal justice, as they may affect the presumption of innocence, the fundamental rights to liberty and security of the individual and to an effective remedy and fair trial,” the European Parliament said. 

Specifically, it pointed to facial recognition services such as Clearview AI, which had built a database of more than three billion pictures that were illegally collected from social networks and other online platforms. 

The European Parliament further called for a ban on law enforcement using automated analysis of other human features, such as fingerprints, voice, gait, and other biometric and behavioural traits. The resolution passed, though, isn’t legally binding.

Because data plays an integral role in feeding and training AI models, what constitutes such data inevitably has been at the crux of key challenges and concerns behind the technology.

The World Health Organisation (WHO) in June issued guidance cautioning that AI-powered healthcare systems trained primarily on data from individuals in high-income countries might not perform well for individuals in low- and middle-income environments. It also cited other risks such as unethical collection and use of healthcare data, cybersecurity, and bias being encoded in algorithms.

“AI systems must be carefully designed to reflect the diversity of socioeconomic and healthcare settings and be accompanied by training in digital skills, community engagement, and awareness-raising,” it noted. “Country investments in AI and the supporting infrastructure should help to build effective healthcare systems by avoiding AI that encodes biases that are detrimental to equitable provision of and access to healthcare services.”

Fostering trust goes beyond AI

Singapore’s former Minister for Communications and Information and Minister-in-charge of Trade Relations, S. Iswaran, previously acknowledged the tensions around AI and the use of data, and noted the need for tools and safeguards to better assure people with privacy concerns.

In particular, Iswaran stressed the importance of establishing trust, which he said underpinned everything, whether it was data or AI. “Ultimately, citizens must feel these initiatives are focused on delivering welfare benefits for them and ensure their data will be protected and afforded due confidentiality,” he said.

Singapore has been a strong advocate for the adoption of AI, introducing in 2019 a national strategy to leverage the technology to create economic value, enhance citizens’ lives, and arm its workforce with the necessary skillsets. The government believes AI is integral to its smart nation efforts and that a nationwide roadmap is necessary to allocate resources to key focus areas. The strategy also outlines how government agencies, organisations, and researchers can collaborate to ensure a positive impact from AI, as well as directs attention to areas where change or potential new risks must be addressed as AI becomes more pervasive.

The key goal here is to pave the way for Singapore, by 2030, to be a leader in developing and deploying “scalable, impactful AI solutions” in key verticals. The strategy also envisions that Singaporeans will trust the use of AI in their lives, a trust nurtured through a clear awareness of the benefits and implications of the technology.

Building trust, however, will need to go beyond simply demonstrating the benefits of AI. People need to trust the authorities across various aspects of their lives, and to trust that any use of technology will safeguard their welfare and data. A lack of trust in one aspect can spill over and impact trust in others, including the use of AI-powered technologies.

Singapore in February urgently pushed through new legislation detailing the scope of local law enforcement’s access to COVID-19 contact tracing data. The move came weeks after it was revealed the police could access the country’s TraceTogether contact tracing data for criminal investigations, contradicting previous assertions that this information would only be used when the individual tested positive for the coronavirus. It sparked a public outcry and prompted the government to announce plans for the new bill limiting police access to seven categories of “serious offences”, including terrorism and kidnapping.

Early this month, Singapore also passed the Foreign Interference (Countermeasures) Bill amidst heated debate and less than a month after it was first proposed in parliament. Pitched as necessary to combat threats from foreign interference in local politics, the Bill has been criticised for being overly broad in scope and for restricting judicial review.

Will citizens trust their government’s use of AI-powered technologies in “delivering welfare benefits”, especially in law enforcement, when they have doubts, justified or otherwise, that their personal data in other areas is properly policed?

Doubt in one policy can metastasise and drive further doubt in other policies. With trust, as Iswaran rightly pointed out, an integral part of driving the adoption of AI in Singapore, the government may need to review its approach to fostering this trust amongst its population. 

According to Deloitte, cities looking to use technology for surveillance and policing should balance security interests with the protection of civil liberties, including privacy and freedom.

“Any experimentation with surveillance and AI technologies needs to be accompanied by proper regulation to protect privacy and civil liberties. Policymakers and security forces need to introduce regulations and accountability mechanisms that create a trustful environment for experimentation of the new applications,” the consulting firm noted. “Trust is a key requirement for the application of AI for security and policing. To get the most out of technology, there must be community engagement.”

Singapore must assess whether it has indeed nurtured a trustful environment, with the right legislation and accountability, in which citizens are properly engaged in dialogue, so they can collectively decide what constitutes the country’s acceptable use of AI in high-risk areas.
