Microsoft, which has invested heavily in OpenAI, briefly blocked its own employees from accessing the popular ChatGPT on Thursday. In an internal update, the company cited security and data concerns, stating that several AI tools were temporarily unavailable.
According to Microsoft, despite its investment and the safeguards built into ChatGPT to prevent misuse, the platform remains a third-party external service, which means caution is advised due to potential privacy and security risks. The same precaution extends to other external AI services such as Midjourney and Replika.
The advisory initially banned both ChatGPT and the design software Canva, though the line mentioning Canva was later removed. After this story was first published, Microsoft swiftly restored access to ChatGPT.
Microsoft clarified that the temporary block was a mistake resulting from a test of systems for large language models. The company encourages employees to use services such as Bing Chat Enterprise and ChatGPT Enterprise, which offer enhanced privacy and security protections.
Large corporations commonly restrict ChatGPT use to prevent the inadvertent sharing of confidential data. ChatGPT, which has more than 100 million users and is trained on vast amounts of internet data, generates human-like responses to chat messages.
Microsoft recommends its own Bing Chat tool, which is closely tied to OpenAI's artificial intelligence models and underscores the two companies' collaboration. Meanwhile, Microsoft continues working to integrate OpenAI's services into its Windows operating system and Office applications, all of which run on the Microsoft Azure cloud infrastructure.