Maker of ChatGPT Forms Group to Study Artificial Intelligence’s Threat to Humanity

Aleksander Madry manages the Preparedness team that OpenAI, ChatGPT’s parent company, formed to evaluate the dangers posed by AI models.


AI models have the potential to improve people’s lives, but they also pose real dangers. Governments are therefore weighing how to control the risks associated with the technology.

OpenAI revealed the formation of the Preparedness team in a blog post. The team will be led by Aleksander Madry, the head of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology.

Artificial intelligence also poses the threat of spreading misinformation and disinformation. The screenshot below shows the percentage of experts who believe generative AI is a threat to global brand safety through disinformation.

In the blog post, OpenAI describes the systems the team will focus on as “extremely powerful, general-purpose AI models that can compete with or outperform the state-of-the-art models in a wide range of tasks.”

OpenAI’s Preparedness team will be responsible for mitigating the damage such models could cause. The company behind ChatGPT says the team is its “contribution” to the upcoming global AI summit in the UK.

In July, OpenAI, Meta, Google, and other companies regarded as leaders in the AI space committed to ethical practices and openness at the White House.

UK Prime Minister Rishi Sunak is reportedly in no rush to regulate AI, as BeInCrypto reported yesterday. His concerns center on the possibility that humans could eventually lose control of artificial intelligence.
