A Chinese academic at the centre of concerns about the use of Australian research has returned to his homeland, amid findings that he failed to obtain ethical approval for research into facial recognition of Chinese minorities.
Liu Wan Quan, an artificial intelligence expert who taught at Perth’s Curtin University for more than two decades, featured in a 2019 ABC Four Corners exposé of Australian research contributions to China’s surveillance of Uighurs.
Dr Liu co-authored a 2018 paper on “Facial feature discovery for ethnicity recognition”, published in the journal Data Mining and Knowledge Discovery. The paper suggests that analyses of T-shaped regions of people’s faces – the eyebrows, eyes and nose, for example – are “quite effective” for distinguishing ethnicity but unsuitable for general face recognition.
The study was based on facial images of 300 Uighur, Tibetan and Korean students at Dalian Minzu University in northern China. The paper does not explain the purpose of the research, but says that racial analysis based on facial images is a “popular topic” with potential applications in border control and public security.
Curtin reviewed the approval procedures for the study and said Dr Liu had not responded to some of its questions. It has now found that he breached the Australian Code for the Responsible Conduct of Research by failing to provide evidence that he had obtained ethical approval or the students’ informed consent for the use of the images.
In a letter to University of Leuven bioinformatics professor Yves Moreau, who has expressed concerns about the study, Curtin said that Dr Liu had also breached the code by claiming co-authorship of the paper even though he “was only involved in the research technically”.
The letter says that Dr Liu has “resigned” from Curtin and is now a professor at Sun Yat-sen University in Guangzhou. Dr Liu’s LinkedIn page says he left Curtin in May and joined Sun Yat-sen the same month.
The university said that its concerns were not limited to approval procedures. “Curtin University unequivocally condemns the use of artificial intelligence, including facial recognition technology, for any form of ethnic profiling to negatively impact or persecute any person or group,” the institution said.
IPVM, a Pennsylvania-based organisation that reports on video surveillance, said that “Uighur-recognition technology” was widely used in China. “The People’s Republic of China also harshly represses Tibetans and regularly tracks and deports North Korean refugees,” it added.
Professor Moreau, who campaigns against the “creeping development of mass surveillance technology”, said that computer scientists should think about the parameters of “acceptable” research. “Do we really need…models that [track] groups like Tibetans and Uighurs?” he said.
But the journal’s publisher, Wiley, defended the paper after concerns were raised in 2019. “This article is about a specific technology and not an application of that technology. It bridges artificial intelligence and physical anthropology, and contributes to this specific body of scientific literature,” Wiley said.
Curtin said that it had asked Wiley to retract the 2018 paper “multiple times”. A different paper based on the same data set has been retracted by the journal IEEE Access.
The episode could add to Canberra’s jitters about Australian research contributing to offshore repression. New guidelines on overseas partnerships are expected to be released this month, while the sector is also awaiting advice on agreements to be vetoed under the Foreign Relations Act.