NextGen

Google Warns Employees About Using Chatbots, Even Its Own

Alphabet, Google’s parent company, has issued warnings and guidelines to its employees on how to use chatbots, Reuters reported.

The warning covers the company's own chatbot, Bard, which it markets worldwide.

According to Reuters's sources, Alphabet has told employees not to enter confidential company materials into AI chatbots, citing its long-standing policy on safeguarding information. It has also instructed its engineers to avoid directly using computer code that chatbots generate.

The company told Reuters that while Bard can make undesired code suggestions, it still helps programmers, adding that it aimed to be transparent about the limitations of its technology. As early as February, Google had told staff testing Bard ahead of its launch not to give it internal information, Insider reported.

Companies such as Samsung, Amazon.com, Deutsche Bank and Apple have also warned their employees about using these publicly available programs.

A January Fishbowl survey of some 12,000 respondents found that 43 percent of professionals were using ChatGPT or other AI tools, and that 70 percent of workers using ChatGPT at work were not telling their bosses.

It "makes sense" that companies would not want their staff to use public chatbots for work,  Yusuf Mehdi, Microsoft's consumer chief marketing officer, told Reuters.

Matthew Prince, CEO of Cloudflare, a software company that markets a service for tagging data and restricting it from flowing externally, told Reuters that typing confidential matters into chatbots was like "turning a bunch of Ph.D. students loose in all of your private records."