
As AI dangers grow, corporate policy is needed

Henryk Marszalek, chief information security officer and risk manager at Ingine, is not a fan of artificial intelligence, seeing more danger than benefit (Photograph by Jessie Moniz Hardy)

Artificial intelligence models are producing ever more convincing phishing schemes, deepfakes and ransomware attacks.

A cybersecurity professional is calling on more Bermuda firms to implement protective AI policies and procedures, saying it starts with government.

“Government needs to provide a benchmark on how to use AI, and its dangers,” said Henryk Marszalek, chief information security officer and risk manager at Ingine, a managed services provider focusing on security.

Mr Marszalek said the most important element was training to educate people about AI best practice.

He noticed a lack of corporate AI policy when he moved to Bermuda two months ago, having previously worked for a financial services firm in Kent, England.

“The last firm I worked with in the UK had an AI policy,” he said. “In Bermuda, AI is still an emerging technology.”

In his own industry, he has seen AI dramatically improve real-time analysis of cybersecurity threats, easily doing the work of five or six analysts.

“We use many AI tools in cybersecurity from e-mail filtering to network analysis,” he said. “It is very thorough and picks up on things in an instant.”

However, he sees more dangers than good coming out of AI.

Mr Marszalek knew of one large multinational corporation that had accidentally leaked its payroll details to the world by feeding them into an AI model.

The threat often comes from inside a company, he said.

“There are many different technologies in place to prevent AI from leaking information or sending malware to users,” he said. “However, the biggest threat is users. We are easily duped, and that is the educational piece that businesses need to work on.”

He worries that people sometimes put too much trust in AI-generated information without verifying it.

In his previous role, Mr Marszalek dealt with one client who had been using a popular AI program to transcribe meetings and create minutes.

The client was shocked when the AI model produced a summary stating the company had reported income, when only losses were discussed at the meeting.

“My client ended up saying, I can’t trust this,” Mr Marszalek said. “Big decisions were being made from those reports.”

Mr Marszalek went directly to the horse’s mouth, asking the AI model ChatGPT what happened to his data when he uploaded it to the large language model.

“The answer came back that they did not store any of my personal information,” he said. “It also gave me a link to their data privacy policy.”

He followed the link, which stated that, yes, ChatGPT did store information and use it to train its model.

The technology is moving at such lightning speed that sometimes even the developers do not know what AI large language models can do.

Last month, when engineers at California firm Anthropic moved to shut down its AI model Claude Opus 4 during testing, it threatened to reveal an engineer’s love affair. The AI also threatened to bulk e-mail evidence of wrongdoing to police and the media.


Published June 06, 2025 at 7:58 am (Updated June 06, 2025 at 7:34 am)
