UK Online Safety Act Fails Advanced Generative AI
Generative AI is difficult to regulate, and the UK's new online safety rules may not be up to the task, according to the paper.
Terrorist generative AI chatbots, whether created for shock value, experimentation, or even parody, could pose a significant danger, according to a new analysis. “Sophisticated generative AI is unsuited to the new Online Safety Act,” the paper said.
The rising danger of chatbots exploited by terrorists is a major worry, the analysis warns. The UK’s newly passed Online Safety Act came into force just six months ago, yet supporters are already working to address the danger by proposing new legislation.
“Only humans are capable of committing terrorist offenses, so it is difficult to imagine a person being held legally responsible for chatbot statements that encourage terrorism or that solicit support for a group proscribed under the Terrorism Act 2000.”
At the same time, chatbots are expected only to grow more capable, underscoring the need for governments to step in. “New legislation will be necessary if criminals or the ignorant continue to train terrorist chatbots.”
According to a recent post from BeInCrypto, Tesla CEO Elon Musk issued a strong warning about the risks of artificial intelligence at a Senate hearing:
“At least one out of every ten people will perish because of artificial intelligence. Although I believe it’s low, there is a possibility.”
Meanwhile, only 33% of respondents expressed complete or high trust in governments to regulate and oversee AI systems and tools, according to a Statista survey of 17,000 people across 17 countries.
Many consider this historic statute to be the strongest legislation aimed at protecting children in the last hundred years.