
Government takes its first steps in regulating the development and deployment of AI

Artificial intelligence technologies are rapidly proliferating in our society, and the federal government is moving to introduce rules and regulations on how AI is developed and managed.

The federal government has released a paper on the safe and responsible use of AI in Australia, which acknowledges AI's positive potential to improve our quality of life and grow the economy.

But it also addresses the need for AI systems to be created, developed and deployed safely and responsibly.

There are numerous ways AI can be deployed, from simple tasks that streamline our workflows through to enabling self-driving vehicles, but all come with a level of risk.

The government proposes a risk-based approach with guardrails, both mandatory and voluntary, to reduce the risk of harm, particularly in high-risk situations.

There are also concerns that AI could affect jobs, our privacy and safety, and that it could be misused by some businesses.


The government paper also recognises that AI services are being created and used at a speed and scale that could outpace our legal frameworks, which were designed to be technology neutral.

Compared with other countries, Australia's approach to regulating AI is a little on the softer side, more in line with the US and the UK, while the European Union's AI Act is more aggressive and could potentially ban some high-risk uses of the technology.

The government has also called for transparency with AI, including labelling content and images generated by artificial intelligence.

“The Australian government appears to be taking a proportional approach to potential risks of generative AI by focusing, at least initially, on application of AI technologies in high-risk settings (such as healthcare, employment, and law enforcement),” says Professor Lisa Given, RMIT Director of the Social Change Enabling Impact Platform and Professor of Information Sciences.

“This approach may be quite different to what other countries are considering; for example, the European Union is planning to ban AI tools that pose ‘unacceptable risk,’ while the United States has issued an executive order to introduce wide-ranging controls, such as requirements for transparency in the use of AI generally.

“However, the Australian government will also aim to align its regulatory decisions with those of other countries, given the global reach and application of AI technologies that could affect Australians directly.

“Taking a proportional approach enables the government to address areas where the potential harms of AI technologies are already known (e.g. potential gender discrimination when used in hiring practices to assess candidates’ resumes), as well as those that may pose significant risks to people’s lives (e.g. when used to inform medical diagnoses and treatments). Focusing on workplaces and contexts where AI tools pose the greatest risk is an important place to start.”

Professor Mark Sanderson, RMIT Dean of Research and Innovation, Schools of Engineering and Computing Technologies, says that as smart as AI has become, it still needs to be controlled by something smarter: human beings.

“As important as it is to be concerned about AI algorithms, it is also critically important to monitor how people interact with AI systems and observe how those systems react,” he said.

“Across a population as diverse as Australia’s, the way people request AI systems to take on tasks will differ widely in terms of both expression and language.

“Understanding how AI reacts to that diversity of interaction needs to be a critical component of the planned legislation.”