2 Sources
[1]
UK firm not racist for rejecting Chinese applicant over security concerns, tribunal rules
Judge says not discrimination to refuse to hire people from 'hostile' states like China and Russia if it may pose risk to British security

Refusing to give a job to Chinese and Russian people in companies that deal with issues of national security and require security clearance is not racist, an employment tribunal has ruled. It is not discriminatory to stop people from "hostile" states taking up certain jobs in the defence sector because of the risk to British security, the judgment says.

The ruling relates to the case of a Chinese scientist who accused a British AI company with ties to the UK and US defence departments of racism after she was not given a job due to security concerns. Tianlin Xu applied for a role at Binary AI Ltd but the founder of the software company, James Patrick-Evans, turned her down and employed a British man instead.

He emailed her: "Disappointingly I've come to the decision not to proceed with your application on the sole basis of your nationality.

"As a company, we work closely in sensitive areas with western governments and wish to continue to do so. We're simply not big enough of a company to ensure the separation and security controls needed to hire someone of your nationality at this stage."

Judge Richard Baty, sitting in London, described the email as clumsy and said: "In complete isolation, it looks like an admission of direct race discrimination on the basis of nationality." But he said in fact Xu had been turned down as she would not get security clearance because of her nationality.

The judge said: "That reason would apply to people of any nationality where it was not possible to get security clearance (including Russian, North Korean and Iranian nationality as well as Chinese nationality). The reason is not nationality per se."

Patrick-Evans was "strongly advised against hiring a Chinese national" by defence officials that he worked with, the tribunal heard. Binary AI had had a contract with the Defence Science and Technology Laboratory - the secret site based at Porton Down in Wiltshire - and the Ministry of Defence to develop AI that could identify hidden "back doors" inside software.

Baty said in his judgment: "It is obvious that software drives the modern world. It underpins our everyday lives and runs every sector of our state.

"Therefore, it is paramount that the security and operational capability of the software that drives our everyday lives should remain intact and free from malicious hackers and state actors wanting to persuade political outcomes or obtain sensitive information."

Xu's complaints of direct and indirect race discrimination both failed.
[2]
Banning Chinese and Russians from security jobs 'not racist', tribunal rules
Banning Chinese and Russians from working in sensitive national security areas in the UK is not racist because they might be spies, a tribunal has ruled. It is not discriminatory to stop people from nations that pose a threat to Britain taking up certain jobs in the defence sector due to the possibility of espionage, the judgment suggested. The precautionary measure applies to potential job candidates from China, Russia, North Korea and Iran.

The ruling comes after a Chinese scientist sued a British artificial intelligence (AI) company with ties to the UK and US defence departments when she was not given a job due to security concerns. Tianlin Xu applied for a £220,000 lead AI role at Binary AI, but the software company's technology boss James Patrick-Evans had to reject her. There is no suggestion that Ms Xu is a spy.

Mr Patrick-Evans' start-up uses AI to identify flaws in software used by Western governments to prevent state-backed hackers from the likes of China and Russia targeting them. The 32-year-old was "strongly advised against hiring a Chinese national" by top defence officials that he worked with, it was heard. Chinese people - like Ms Xu - would not get security clearance from governments in order to carry out the work, it was said.

Ms Xu tried to sue Binary AI on grounds of race discrimination, claiming it was "racial stigma" and "stereotyping". But the tribunal dismissed her claims after hearing evidence of the security concerns.
A UK employment tribunal has ruled that rejecting job applicants from "hostile" states like China for sensitive AI security roles is not discriminatory, highlighting the intersection of national security and AI development.
A recent UK employment tribunal ruling has sparked discussion about the intersection of national security, artificial intelligence (AI), and employment practices. The case centered on Tianlin Xu, a Chinese scientist who applied for a high-level AI position at Binary AI Ltd, a British company with connections to the UK and US defense departments [1][2].
James Patrick-Evans, the founder of Binary AI Ltd, rejected Xu's application for a £220,000 lead AI role, citing her nationality as the sole basis for the decision. In an email to Xu, Patrick-Evans explained that the company works closely with Western governments in sensitive areas and lacks the resources to implement the separation and security controls needed to hire someone of her nationality [1].
Xu subsequently brought a claim against Binary AI Ltd, alleging racial discrimination. She argued that the decision was based on "racial stigma" and "stereotyping" [2].
Judge Richard Baty, presiding over the London tribunal, acknowledged that Patrick-Evans' email could be interpreted as an admission of direct race discrimination when viewed in isolation. However, the tribunal ultimately ruled in favor of Binary AI Ltd, dismissing Xu's claims of both direct and indirect race discrimination [1].
The judgment emphasized that the rejection was not based on nationality per se, but on the inability to obtain necessary security clearance. This reasoning would apply to individuals from any country where security clearance is not possible, including Russia, North Korea, and Iran, in addition to China [1][2].
The case highlights the critical role of AI in modern society and national security. Judge Baty noted in his judgment:
"It is obvious that software drives the modern world. It underpins our everyday lives and runs every sector of our state. Therefore, it is paramount that the security and operational capability of the software that drives our everyday lives should remain intact and free from malicious hackers and state actors wanting to persuade political outcomes or obtain sensitive information." 1
Binary AI Ltd's work involves developing AI capable of identifying hidden "back doors" in software, a crucial aspect of cybersecurity. The company had previously contracted with the Defence Science and Technology Laboratory at Porton Down and the Ministry of Defence 1.
Patrick-Evans revealed that he had been "strongly advised against hiring a Chinese national" by defense officials he worked with. This advice underscores the perceived security risks associated with employing individuals from certain countries in sensitive technological roles 2.
This ruling sets a precedent for how companies working in sensitive areas of technology and national security may approach hiring practices. It suggests that in certain contexts, refusing to hire individuals from countries deemed "hostile" to British interests may not be considered discriminatory if there are legitimate security concerns 12.
The case also points to the growing importance of AI in national security and to the challenges companies face in balancing international talent acquisition with security requirements in an increasingly globalized tech industry.