Microsoft banned employees from using ChatGPT, allegedly for security reasons
The artificial intelligence (AI) chatbot ChatGPT is once again making headlines, but this time because Microsoft, one of the biggest investors in its creator, temporarily banned employees from using it, citing security reasons. The move came just days after OpenAI, the company behind ChatGPT, held its first developer conference dedicated to AI products, an event in which Microsoft itself participated.
Microsoft reportedly announced the temporary ban on its internal website and also appears to have blocked corporate devices from accessing the chatbot, CNBC reports. The move comes amid data-security concerns of the kind many companies around the world have raised, but it is especially surprising here because Microsoft is OpenAI's largest and most prominent investor.
In January of this year, the technology giant committed to investing another 10 billion dollars in the company behind ChatGPT, on top of the roughly three billion it had already invested. In addition, the AI-based tools Microsoft has built into its products, such as the Bing chatbot, also rely on OpenAI's large language models.
Despite all this, Microsoft reportedly said in an internal explanation that although it has invested in OpenAI, and although ChatGPT has built-in safeguards against improper use, the chatbot is "still an external service of an independent company". It therefore advised its employees to "be careful", adding that the same applies to other external services, including the AI image generator Midjourney.
The temporary ban on ChatGPT inside the company is certainly unexpected, given that Microsoft representatives also took part in the OpenAI conference, but it appears to have already ended. CNBC reports that shortly after the story was published, Microsoft restored access to the chatbot. A company spokesperson said the ban was a mistake, even though the internal guidance to employees about using such apps explicitly mentions ChatGPT.
The company said it had been testing endpoint control systems for large language models and "inadvertently turned them on for all employees." To calm the public reaction, Microsoft representatives stressed that, on the contrary, they encourage their employees and customers to use services such as Bing Chat and ChatGPT. Still, it is clear that concerns exist even among OpenAI's largest investors, and it would be wise to approach these services with a measure of caution where data security is concerned.