Developers working on artificial intelligence should be licensed and regulated similarly to the pharmaceutical, medical, or nuclear industries, according to a representative of Britain’s opposition party.
Lucy Powell, a politician and digital spokesperson for the United Kingdom’s Labour Party, told The Guardian on June 5 that firms such as OpenAI and Google that have created AI models should “have to have a license in order to build these models,” adding:
“My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed or how they are controlled.”
Powell argued that regulating the development of certain technologies is a better option than banning them, similar to how the European Union banned facial recognition tools.
She added that AI “can have a lot of unintended consequences,” but if developers were forced to be open about their AI training models and datasets, then the government could mitigate some of those risks.
“This technology is moving so fast that it needs an active, interventionist government approach, rather than a laissez-faire one,” she said.
Ahead of speaking at the TechUk conference tomorrow, I spoke to the Guardian about Labour’s approach to digital tech and AI https://t.co/qzypKE5uJU
— Lucy Powell MP (@LucyMPowell) June 5, 2023
Powell also believes such advanced technology could greatly impact the U.K. economy, and the Labour Party is purportedly finishing up its own policies on AI and related technologies.
Next week, Labour leader Keir Starmer is planning to hold a meeting with the party’s shadow cabinet at Google’s U.K. offices so the party can speak with the company’s AI-focused executives.
Related: EU officials want all AI-generated content to be labeled
Meanwhile, on June 5, Matt Clifford, the chair of the Advanced Research and Invention Agency — the government’s research agency set up last February — appeared on TalkTV to warn that AI could threaten humans in as little as two years.
EXCLUSIVE: The PM’s AI Task Force adviser Matt Clifford says the world may only have two years left to tame Artificial Intelligence before computers become too powerful for humans to control.
— TalkTV (@TalkTV) June 5, 2023
“If we don’t start to think about now how to regulate and think about safety, then in two years’ time we’ll be finding that we have systems that are very powerful indeed,” he said. Clifford clarified, however, that a two-year timeline is the “bullish end of the spectrum.”
Clifford highlighted that today’s AI tools could be used to help “launch large-scale cyber attacks.” OpenAI has put forward $1 million to support AI-aided cybersecurity tech aimed at thwarting such uses.
“I think there’s [sic] lots of different scenarios to worry about,” he said. “I certainly think it’s right that it should be very high on the policymakers’ agendas.”