
An AI ‘pause’ will hand advantage to bad actors, warns professional body for IT

A global pause in AI development will not work and would play into the hands of rogue regimes and organisations, IT professionals have warned.

Attempts at a global consensus on holding back AI would produce an ‘asymmetrical pause’, in which bad actors seize the advantage, said BCS, the Chartered Institute for IT.

An open letter published last month by the Future of Life Institute called for an immediate halt, lasting at least six months, to the training of systems ‘more powerful than GPT-4’; the pause, it said, must be public, verifiable and include all key actors.

Even if a go-slow on AI could be achieved, it would still be harmful to humanity by delaying advances in medical diagnosis, climate science, and productivity, BCS said.

In a policy position paper, BCS, the professional body for computing, argued that putting ethical guardrails around AI ‘as it grows up’ will be far better than a dangerously uneven pause.

To make sure humanity gets the benefits of AI as early and safely as possible, it should be clearly labelled, backed by public education, supported by professional standards, and developed within ‘AI sandboxes’ – safe spaces for early testing.

In the paper, entitled ‘Helping AI grow up – without pressing pause’, BCS said a halt in AI development would:

  • Delay AI research in areas that are fundamental to society’s survival, such as climate change and health.
  • Be ‘asymmetric’: it is not possible to ensure that every government and organisation would respect such an agreement, so bad actors would gain an advantage in the AI race.

The paper concluded that, instead, AI can continue to ‘grow up’ safely if:

  • Organisations are more transparent about their development and deployment of AI, comply with data privacy regulations, and submit to audits of their processes and systems.
  • AI systems are developed by communities of competent, ethical and inclusive information technology professionals, supported by professional registration.
  • There are clear health warnings, labelling and opportunities to give informed consent around AI products and services.
  • It is supported by a programme, driven by government and industry, that places greater emphasis on computing education and adult digital skills to help the general public understand and use AI.
  • It is tested safely within established regulatory ‘sandboxes’ as proposed in the white paper on AI regulation published by government in March this year.

Rashik Parmar MBE, Chief Executive of BCS, The Chartered Institute for IT, said: “We can’t be certain every country and company with the power to develop AI would obey a pause, when the rewards for breaking an embargo are so rich.

“So, instead of trying to smother AI, only to see it revived in secret by bad actors, we need to help it grow up in a responsible way.

“That means working hard together to agree standards of transparency and ethical guardrails designed and deployed by AI professionals who all share the same values.

“We’ve got a generational opportunity to make something that, pretty soon, can solve a huge number of the world’s problems and be a trusted partner in our life and work; let’s take it.”


Original article link: https://www.bcs.org/about-us/press-office/press-releases/an-ai-pause-will-hand-advantage-to-bad-actors-warns-professional-body-for-it/
