Australian businesses are being warned by the nation's leading cybersecurity organisation about threats to privacy and property, and attacks on their operations, arising from the use of artificial intelligence technology.
The Australian Signals Directorate released the AI guidelines on Wednesday in collaboration with foreign security agencies, including the US Federal Bureau of Investigation, the UK's National Cyber Security Centre and Israel's National Cyber Directorate.
The 15-page report notes that AI "presents both opportunities and threats" to Australian businesses and outlines five concerns about the technology that could put businesses at risk.
The guidelines arrive one week after the federal government released its Safe and Responsible AI interim report that outlined mandatory and voluntary regulations planned for using the technology.
The ASD's Engaging with Artificial Intelligence report, which was designed for small, medium and large organisations as well as government agencies, detailed a series of AI risks.
They included "data poisoning", or manipulating training data to produce incorrect results; "input manipulation attacks", involving hidden commands to access more of an AI model than allowed; and generative AI "hallucinations", in which the technology delivered incorrect data.
The report gave the example of a New York lawyer who created a legal brief using ChatGPT, only to find that six cases cited in the document had been "hallucinated" by the program.
"To take advantage of the benefits of AI securely, all stakeholders involved with these systems … should take some time to understand what threats apply to them and how those threats can be mitigated," the report said.
The guidelines recommended businesses using AI hire qualified staff, conduct regular "health checks", maintain data backups and question how its use will affect privacy obligations.
Australian Institute for Machine Learning director Simon Lucey welcomed the guidelines, saying the risks were real but, if they could be overcome, the technology could unlock significant economic benefits.
Professor Lucey said data poisoning and hallucinations could prove to be a significant threat and anyone using the technology should take care to choose a transparent AI model.
"One of the challenges that the technology has at the moment is that it has so much potential but it's such an alien technology in the sense that previous technologies have given us a sense of how they operate, how they work," he said.
"When AI makes a mistake, it's often very difficult to trace back to find why that happened."
University of the Sunshine Coast computer science lecturer Erica Mealy called the guidelines a "great first step" in helping businesses to understand generative AI technology, particularly as it was being adopted faster than expected.
"There's definitely security risks involved in AI for businesses in terms of trademarks and intellectual property," Dr Mealy said.
"We need to develop a global understanding of what it is good for and what it isn't good for and we need to keep an eye on data ownership and privacy."
Jennifer Dudley-Nicholson
(Australian Associated Press)