
Work still to do after AI safety deal signing


The historic signing of a new artificial intelligence safety agreement has been hailed as a move in the right direction, but just the first step of a long journey.

29th May 2024

The latest chapter in the ongoing artificial intelligence (AI) story has been written, with 16 AI tech companies from across the globe agreeing to commitments to develop the technology safely.

The world-first agreement – reached on the opening day of the AI Seoul Summit on 21 May – will see these businesses, where they have not already done so, publish safety frameworks setting out how they will measure the risks of their frontier AI models.

The frameworks have also been designed to outline when severe risks, unless adequately mitigated, would be “deemed intolerable” and what companies will do to “ensure thresholds are not surpassed”.

Those involved include Amazon, Anthropic, Cohere, Google/Google DeepMind, G42, IBM, Inflection AI, Meta, Microsoft, Mistral AI, Naver, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI and Zhipu.ai.

First step on a long road

Audit, tax and consulting advisory firm RSM backed the agreement but also noted that it is only the “first step on a long road ahead, as clear legal and regulatory frameworks will need to evolve globally to give this pledge more teeth”.

Speaking to AccountingWEB, Stuart Leach, partner at RSM, said the AI safety agreement will be “hugely welcomed by UK businesses”, with the firm’s The Real Economy report finding that 44% of middle market businesses are concerned about AI-enabled cyber attacks in 2024.

However, Leach added: “While the agreement has secured commitments from leading AI and technology companies, it is important to emphasise that these commitments are voluntary. There is as yet no regulatory oversight to ensure AI systems are securely developed and safely used.

“Guidelines have been published by authorities such as the UK National Cyber Security Centre (NCSC), and an AI Cyber Security Code of Practice was published for consultation by the UK government in May 2024. However, like the AI safety agreement, these guidelines and the proposed code of practice are voluntary and lack the regulatory requirements needed to enforce secure development of AI.”

Understanding the use

Leach believes that to build confidence in the technology, businesses “need to understand their individual use case for AI”.

He said: “It is important to ensure its use has appropriate governance structures, including policies, so that the introduction of the technology is controlled. Businesses should seek independent validation to provide assurance that selected AI technologies and data models are securely designed, developed, deployed and maintained.

“We are seeing frameworks and standards such as ISO 42001:2023 become more mainstream. For businesses embedding AI and becoming more dependent on it for their operations, working in alignment with these types of standards should be considered.”

Keeping humans in the loop

For those still concerned about AI, Sheila Pancholi, partner at RSM UK, told AccountingWEB that effective AI “starts with data, gaining a clear understanding of data flows when using AI and ensuring robust controls are in place to prevent data being compromised”.

She said: “For mission-critical activity, carefully consider the technologies used and the data sources. Using public versions of generative AI tools may not be the best option, as it is not possible to control the data, which may be collected for further training of the public AI model, potentially resulting in business data leakage.”

She also stressed the importance of “keeping humans in the loop when it comes to reviewing output from AI and for making final decisions”.

Pancholi continued: “Currently, AI should be considered a supportive tool to augment resource capability and undertake specific tasks. AI systems are subject to vulnerabilities such as adversarial machine learning and can be manipulated to cause unintended behaviours through corruption of the training data or user feedback (also known as data poisoning).

“As with most technology, businesses should understand the risks associated with AI so that it is introduced in a controlled manner. From this, it can be designed and implemented to support business strategy and to achieve well-considered objectives.”

Replies (1)


By FactChecker
30th May 2024 14:03

Laudable conceptually ... but doomed to 'fail' (as in not achieve what people actually want in terms of safety and reliability).
Partly of course because the incentive to do so runs counter to the financial imperatives of those who've signed ... but mostly due to the laws of unintended consequences (or the failure to address unknown unknowns).

An example from the current issue of P Eye:
".. the algorithms that determines what a user likes from their browsing habits works in the same way for less salubrious interests."
Investigation has found that innocent Instagram ads featuring child models "got direct responses from dozens of Instagram users, including phone calls from two accused sex offenders, offers to pay the child for sexual acts and professions of love."

The solution is simple but will never be taken by those unwilling to hit their revenue streams ... so the hot air will increase at the same rate that new harms are uncovered but not prevented.

Thanks (2)