Microsoft temporarily prohibited its employees from using ChatGPT “due to security and data concerns,” according to CNBC. The company announced the rule on an internal website and even blocked corporate devices from accessing the AI chatbot. While several tech companies had prohibited — or at least discouraged — internal use of ChatGPT in the past, Microsoft doing the same was certainly curious, seeing as it’s OpenAI’s biggest and most prominent investor.
In January, Microsoft pledged to invest $10 billion in ChatGPT’s developer over the next few years, after pouring $3 billion into the company previously. The AI-powered tools it has rolled out for its products, such as Bing’s chatbot, also use OpenAI’s large language models. But Microsoft reportedly said in its note that “[w]hile it is true that [the company] has invested in OpenAI, and that ChatGPT has built-in safeguards to prevent improper use, the website is nevertheless a third-party external service.” It advised its employees to “exercise caution,” adding that the same goes for other external services, including the AI image generator Midjourney.
ChatGPT’s Microsoft ban was unexpected, but it was also short-lived. CNBC says that after it published its story, Microsoft quickly restored access to the chatbot. The company also reportedly removed the language in its advisory saying that it was blocking the chat app and the design software Canva. A company spokesperson told the news organization that the ban was a mistake, despite the advisory explicitly mentioning ChatGPT, and that Microsoft restored access as soon as it realized its error. “We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees,” the spokesperson said. They added: “As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”