Apple has barred its employees from using ChatGPT over security concerns
While the company develops its own AI technology, the use of third-party AI chatbots by employees will be limited, according to a company document.
Apple has reportedly restricted the use of ChatGPT within the company out of concern that confidential information entered into the chatbot could leak.
The Wall Street Journal reported that an internal Apple document discouraged the use of ChatGPT and similar artificial intelligence (AI) tools.
The document indicates that the iPhone maker is worried about employees revealing sensitive corporate data while using the apps.
A restriction on GitHub Copilot, the Microsoft-owned tool that automates the writing of software code, was also reportedly cited.
OpenAI’s new ChatGPT app, meanwhile, is currently only available for iOS devices in the United States, with plans to roll it out internationally “in the coming weeks” and to release an Android version “soon.”
Apple isn’t the only major company to limit ChatGPT’s internal use. On May 2nd, Samsung sent out a note telling workers they were no longer allowed to utilise ChatGPT and other forms of generative AI.
Samsung implemented the restriction after an internal incident in which “sensitive code” was submitted to the service.
Samsung has warned its staff who use such apps on their own smartphones not to share any business data for fear of “disciplinary action up to and including termination of employment.”
Companies such as JPMorgan, Bank of America, Goldman Sachs, and Citigroup have joined Samsung and Apple in prohibiting the use of ChatGPT and similar generative AI technologies inside their own walls.
Companies that have banned their employees from using AI chatbots are often in the midst of developing their own bots.