Governments can use “guidance” instead of regulation to encourage responsible AI



Governments are trying to strike a tricky balance with generative AI. Regulate too stringently, and you risk stifling innovation. Regulate too lightly, and you open the door to disruptive threats like deepfakes and disinformation. Generative AI can enhance the capabilities of both nefarious actors and those trying to defend against them.

During a breakout session on responsible AI innovation last week, speakers at Fortune Brainstorm AI Singapore acknowledged that a global, one-size-fits-all set of AI rules would be difficult to achieve.

Governments already differ in the extent to which they want to regulate. The European Union, for example, has an extensive set of rules that govern how companies develop and deploy AI applications.

Other governments, such as the US, are developing what Sheena Jacob, head of intellectual property at CMS Holborn Asia, calls a ‘framework directive’: not hard laws, but instead nudges in a preferred direction.

“Over-regulation will stifle AI innovation,” Jacob warned.

She cited Singapore as an example of where innovation is happening beyond the US and China. While Singapore has a national AI strategy, the city-state has no laws directly regulating AI. Instead, the general framework counts on stakeholders such as policymakers and the research community to “do their part collectively” to facilitate innovation in a “systemic and balanced approach.”

Like many others at Brainstorm AI Singapore, speakers at last week’s breakout recognized that smaller countries can still compete with larger ones in AI development.

“The whole point of AI is to create a level playing field,” said Phoram Mehta, APAC Chief Information Security Officer at PayPal. (PayPal was a sponsor of last week’s breakout session.)


But experts also warned of the dangers of ignoring the risks of AI.

“What people are really missing is that AI cyber hacking is a board-level cybersecurity risk bigger than anything,” said Ayesha Khanna, co-founder of Addo AI and co-chair of Fortune Brainstorm AI Singapore. “If you were to do a prompt attack and provide hundreds of prompts that… poisoned the data of the foundation model, it could completely change the way an AI works.”

Microsoft announced at the end of June that it had discovered a way to jailbreak a generative AI model, causing it to ignore guardrails against generating harmful content related to topics like explosives, drugs, and racism.

But when asked how companies can block malicious actors from their systems, Mehta suggested that AI can help the “good guys” too.

AI “helps the good guys create a level playing field… it’s better to be prepared and use AI in that defense, rather than wait for it and see what kinds of responses we can get.”

