Australia has a problem with slowing productivity growth. As the body tasked with addressing it, the Australian Government’s Productivity Commission is looking at AI as a potential part of the solution. Recently, the Commission released a three-paper report, Making the Most of the AI Opportunity: Productivity, Regulation and Data Access, to analyse this opportunity further.

To maximise productivity gains from AI, the report advocates a soft-touch approach to regulation. Additionally, the Commission recommends that governments at all levels (federal, state and local) “lead by example” and contribute their own data and resources to further the development of quality AI models.

Breaking the research down

The research report is broken down into three separate papers.

Paper 1: AI uptake, productivity and the role of government

The first paper notes that because AI is already becoming ubiquitous in some areas online and is being baked into everyday tools, it is already delivering productivity benefits to businesses and individuals. Those benefits are small for now but will grow over time.

While those productivity gains are promising, the report also acknowledges that AI poses risks, particularly around consumer trust. The Commission suggests that governments can be part of the solution to this trust challenge by contributing their own high-quality data to support the development of quality models. “The Government’s interim data and digital strategy notes that the Australian Public Service manages a vast amount of data that is not used to its full extent, and access remains restricted despite the clear benefits derived from safely sharing data across public and private sectors,” the report notes.

Paper 2: The challenges of regulating AI

The second paper discusses the benefits and risks of AI and how the Australian government should regulate it. It cites the Australian Government’s interim response to the Safe and Responsible AI in Australia consultation as a useful starting point and presents the Commission’s own framework as a systematic, implementable approach to AI regulation (Figure A).

Figure A: Regulating AI use. Image: Productivity Commission

It is worth noting that Australia currently has very light AI regulation, and industry and the public are, for the most part, looking to the European Union’s incoming AI laws for guidance on the matter.

Rather than risk regulation undermining productivity, however, the paper argues that safe, ethical AI use comes down to a range of factors such as social norms, market pressures, coding architecture and public trust. In other words, the report argues that a rigid approach to AI regulation is unlikely to address the risks, and regulators need a more holistic approach.

Paper 3: AI raises the stakes for data policy

The third paper points out that data has been a resource for both the private and public sectors for decades. AI has accelerated the potential gains while also elevating the risk.

Australians know it, too. Commission research shows that privacy, along with quality and price, is among the top three data-related concerns for Australians (Figure B). To allay these concerns, the Commission recommends a national data strategy as preferable to heavy-handed regulation.

Figure B: Australians rank data privacy highly but behind quality and price. Image: Productivity Commission

“Once developed, all future regulations and guidelines around data use and data analytics could refer to the agreed principles of the national data strategy,” the report notes. “In this way, the data strategy could provide a secure and consistent basis for the development and use of AI and other data‑intensive technologies.”

Collaboration with government on the cards

The overall thrust of the research is that the government should be an active participant in shaping AI. The researchers argue that the government should resist the fear-mongering in some corners and instead embrace the opportunity to play an active part in developing best-practice AI.

For the industry, this may mean a proliferation of opportunities for the private and public sectors to collaborate. Some potential opportunities include:

Industry self-regulation initiatives

Data professionals and the private sector can voluntarily adopt ethical principles, best practices and guidelines for responsible AI development and use, demonstrating their commitment to social values and trustworthiness. This can reduce the need for government intervention and help preserve the soft-touch regulatory approach the Commission recommends.

Co-design of AI policies with stakeholders

Data professionals and the private sector can actively participate in the development of AI policies and regulations and provide their expertise, insights and feedback to government agencies. This can help ensure that the policies are informed by the latest technological developments, reflect the needs and interests of various stakeholders and strike a balance between innovation and regulation.

AI ethics advisory boards

The government can be guided by AI ethics advisory boards, whose advice can then frame the development of any regulations. These boards can inform the government of risks and harms, propose mitigation strategies and promote public awareness and engagement on AI ethics.

Public sector adoption of AI technologies

Building AI solutions that enhance public service delivery, efficiency and transparency can result in a government that is more familiar with the capabilities and challenges of AI. Organisations that work extensively with government agencies should consider adding AI-related accreditations to strengthen their tendering and the strategic support they can then provide to government.
