Public officers can use ChatGPT and similar AI, but must take responsibility for their work: MCI
Public officers have been told not to feed sensitive information to AI tools, and to vet all AI-generated work.
PHOTO: REUTERS
SINGAPORE - Public officers using ChatGPT and similar artificial intelligence (AI) tools for research and writing have been told not to feed sensitive information into these applications, under new guidelines issued in May.
Officers should also vet all AI-generated work to ensure that what they submit is accurate and complies with copyright laws, according to the guidelines on AI usage in the civil service issued by the Ministry of Communications and Information (MCI), which were e-mailed to all civil servants at the start of May.
The guidelines on the use of tools powered by large language models like ChatGPT and Microsoft Bing are aimed at general users of these apps and those developing apps for the Government, said MCI and the Smart Nation and Digital Government Group (SNDGG) in a joint reply to queries from The Straits Times.
Developers should implement measures to test the accuracy and robustness of their AI apps, and build tools into the user interface to educate users on how to use them properly, MCI and SNDGG added.
Concerns about the ethical use of AI
More organisations here, including government agencies, are developing new apps that integrate their services with large language models, raising questions over whether such data is safe in the hands of AI.
Responding in Parliament on May 9 to MP Tan Wu Meng’s question on what is being done to ensure the ethical development of AI, Senior Minister of State for Communications and Information Janil Puthucheary said the guidelines make it clear that public officers are accountable for their work and responsible for fact-checking AI-generated content.
“The guidelines also aim to safeguard data security, by reminding officers not to input sensitive information into these applications,” he said, adding that the authorities will also expand advisory guidelines under the Personal Data Protection Act later in 2023 to address the use of AI.
One AI app in development for government staff is Pair, a writing and research tool that taps the technology behind ChatGPT.
Pair is being tested in selected agencies, with a few hundred officers involved, said MCI and SNDGG.
Pair will enable civil servants to use commercial AI tools with improved data security, thanks to legal and hosting agreements made with its providers, they said.
This will allow officers to include sensitive data while using AI tools, unlike publicly available versions of the chatbot.
The project’s developers previously told ST that an agreement had been struck with Azure OpenAI to ensure that data handled by the Government is kept confidential and out of sight of Microsoft and OpenAI.
The risk of data leaks has been a rising concern since the mainstream launch of ChatGPT last November. Bad coding can potentially open pathways for hackers to steal sensitive data uploaded to AI chatbots, experts previously told ST.
These concerns have prompted some companies that have integrated ChatGPT into their services to keep the chatbot away from client information.