Hollywood has ‘helped to fan flames of fear about AI’, peers hear

Hollywood has helped to fan flames about the dangers of artificial intelligence (AI) in the minds of a generation of “engineers, computer scientists and super-geeks”, peers have heard.

The House of Lords was told movie depictions of AI, such as The Terminator, have helped to cement “hopes and fears of what AI could do to us”, as it considered plans to regulate the emerging technology.

The upper chamber of Parliament was urged to back proposals by Lord Holmes of Richmond which would create a new watchdog, known as the AI Authority.

The Conservative peer’s Artificial Intelligence (Regulation) Bill, which began its passage through Parliament on Friday, would require the authority to push forward AI regulation in the UK and assess and monitor potential risks to the economy.

But Conservative peer Lord Ranger of Northwood suggested the technology’s proponents currently needed room to innovate.

He told the Lords: “If, like me, you are from a certain generation, these seeds of fear and fascination of the power of artificial intelligence have long been planted by numerous Hollywood movies playing on our hopes and fears of what AI could do to us.”

He cited the “unnerving subservience” of HAL 9000 in 2001: A Space Odyssey, and “the ultimate hellish future of machine intelligence taking over the world in the form of Skynet” from the Terminator movies.

Lord Ranger added: “These and many other futuristic interpretations of AI helped fan the flames in the minds of engineers, computer scientists and super-geeks, many of whom created the biggest tech firms in the world.”

While he said he was supportive of the aims of the Bill and that there may be a long-term need for regulatory guidance, Lord Ranger said he did not believe it was possible to regulate AI through a single authority.

He was also critical of a labelling system it would introduce, which seeks to ensure any person involved in training AI would have to supply to the authority a record of all third-party data and intellectual property (IP) they used and offer assurances that informed consent was secured for its use.

The Tory peer said: “This will not … help us work hand-in-hand with industry and trade bodies to build trust and confidence in the technology.”

Other peers gave their backing to the Bill, with crossbench peer Lord Freyberg telling the upper chamber: “It stands to reason that if artists’ IP (intellectual property) is being used to train these models, it is only fair that they be compensated, credited and given the option to opt out.”

Fellow crossbencher Baroness Kidron, meanwhile, said she wanted to see “more clarity that material that is an offence such as creating viruses, CSAM (child sexual abuse material), or inciting violence are offences whether they are created by AI or not.”

The filmmaker and children’s rights campaigner cited a report by the Stanford Internet Observatory, which identified “hundreds of known images of child sexual abuse material in an open data set used to train popular AI text-to-text models”.

She added: “The report illustrates that it is very possible to remove such images, but they did not bother. Now those images are proliferating at scale. We need to have some rules upon which AI is developed.”

Lord Holmes, the Bill’s sponsor, compared the onset of AI to the advent of steam power during the industrial revolution as he urged peers to back his proposals.

He said: “If AI is to human intellect what steam was to human strength, you get the picture. Steam literally changed time. It is our time to act and it is why I bring this Bill to your Lordships’ House today.”

The Government believes a non-statutory approach to AI regulation provides “critical adaptability” but has pledged to keep it under review.

A Government spokesman said: “As is standard process, the Government’s position on this Bill will be confirmed during the debate.”
