
The Battle Over Which AI Applications Europe Should Prohibit


Guards on the borders of Greece, Hungary, and Latvia started experimenting with a lie detector driven by artificial intelligence in 2019. The iBorderCtrl system examined travelers' facial expressions in an effort to identify telltale signs of deception during questioning by a virtual border agent. The trial was built on over $5 million in research money from the European Union and nearly 20 years of study at Manchester Metropolitan University in the UK.

The trial sparked controversy. Psychologists have generally found that polygraphs and other technologies designed to identify falsehoods from bodily signals are unreliable. Problems with iBorderCtrl soon surfaced as well: media reports said the lie-prediction algorithm did not work, and the project's own website admitted that the technology "may imply implications for fundamental human rights."

This month, Silent Talker, the Manchester Met spinoff that created the technology underlying iBorderCtrl, went out of business. But the story is not over. With iBorderCtrl serving as an illustration of the potential pitfalls, lawyers, activists, and lawmakers are pushing for a European Union law to govern AI that would ban systems claiming to detect human dishonesty in migration contexts. Former Silent Talker executives could not be reached for comment.

Officials from EU countries and members of the European Parliament are debating thousands of amendments to the AI Act, including a ban on AI lie detectors at borders. The Act aims to safeguard the fundamental rights of EU citizens, such as freedom from discrimination and the right to seek asylum, and it sorts AI use cases into "high-risk," "low-risk," and outright banned categories. Human rights organizations, labour unions, and businesses including Google and Microsoft are among those pushing for changes to the AI Act. They want the law to distinguish between those who create general-purpose AI systems and those who deploy them for specific purposes.


About author
Andrew Sabastian is a tech whiz who is obsessed with everything technology. A software and tech mastermind, he likes to feed readers gritty tech news to keep their techie intellects nourished.