03 Nov 2023
by Spencer Newsham

Advancing AI ethics beyond compliance (Guest blog by IBM)

As AI adoption rapidly increases, it’s critical that AI ethics progress from abstract theories to concrete practices.

AI adoption is rapidly growing and is widely recognised across the sector as "an opportunity and an inevitability".

AI is embedded in everyday life, business, government, medicine and more. It presents an enormous opportunity to turn data into insights and spark better decision-making. Customers, consumers of services and citizens are expecting to use and engage with generative AI. 

With heightened AI use comes heightened risk, in areas ranging from data responsibility to inclusion and algorithmic accountability. Today, many organisations struggle to operationalise AI with confidence and respond to changing regulations. Only by embedding ethical principles into AI applications and processes can we build systems based on trust.

At a recent joint techUK Justice and Emergency Services and Industry event, Megan Lee Devlin, co-hosting the event from the Cabinet Office, engaged law enforcement and central government agencies in a roundtable discussion. Participants shared their perspectives on the significance of ethics in AI and how they are integrating it into their respective organisations.

The participants in the room were encouraged to continue their discussions at their respective tables.

Some of the discussion points were:

  • The EU AI Act makes it important for organisations to put their own governance structures and interventions in place, so they are ready for regulations that come down the line.
  • People are more afraid of the liability of using AI than of the technology itself.
  • We need to put guardrails around the data, which requires education for developers.
  • There is a huge need for data infrastructure and data curation to be in place before AI projects begin.
  • We need consumer education, so consumers understand how AI reached this point and how to control it.
  • AI is moving into tools such as SharePoint, where everyone can experiment and build a basic understanding.

Some of the questions raised during the session:

  • What are our greatest hopes for Gen AI?
    • It can mitigate risks.
    • It can improve the customer experience.
    • It can benefit the public sector.
  • How can we use AI in the development of products?
  • How to deploy AI safely across an organisation?
  • How to experiment and test AI while moving forward?
  • Do we think we’re ready for it?
  • How to adapt to a growing UK economy while remaining safe?
  • How do you monitor on an ongoing basis?
  • How do you ensure you are capturing quality data?
  • Where does liability sit?

Technology must be transparent and explainable. Organisations must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms’ recommendations.

The programme also recently hosted a roundtable with the Office of the Police Chief Scientific Adviser on the use of generative AI and large language models in policing. This gave suppliers a fantastic opportunity to share their knowledge and expertise in this space, including the challenges policing must consider.

Conclusion

We have now entered the world of AI-augmented work and decision-making across all the functional areas of a business, typically starting with front-office customer engagement and extending to the back office.

AI, machine learning, and natural language processing are changing organisations, and threat actors, around the globe and across multiple industry sectors. AI disrupters will scale AI initiatives, drive better customer engagement, and achieve faster innovation, higher competitiveness, higher margins, and superior employee experiences.

Organisations across Justice and Emergency Services must evaluate their vision and transform their people, processes, technology, business models, and data readiness to unleash the power of AI and thrive in the digital era. 

While many organisations have taken the first step and defined AI principles, translating these into practice is far from easy, especially with few standards or regulations to guide them. 

Responsible AI implementation continues to be a major challenge. Taking a systematic approach from the start while addressing the associated challenges in parallel can be beneficial. A systematic approach requires proven tools, frameworks, and methodologies, enabling organisations to move from principles to practice with confidence and supporting the professionalisation of AI. 

Most techUK members are on the same journey, whether shaping their own products and services or helping customers start theirs. Close collaboration between industry and the sector can be supported through the next term of the Justice and Emergency Services Committee: aligning with emerging public sector strategy, shaping good practice, sharing resources, and ensuring responsible AI implementations deliver efficiencies while maintaining trust and confidence in their application.

Spencer Newsham - IBM


For more on AI, including upcoming events, please visit our AI Safety Hub.

How do we ensure the responsible and safe use of powerful new technologies?

Join techUK for our 7th annual Digital Ethics Summit on 6 December. Given the ongoing concerns about the impact of emerging tech, the Digital Ethics Summit will explore AI regulation, preparing people for the future of work, the potential impact of misinformation and deepfakes on elections, and the ethical implications of tech on the climate emergency.

Book your free ticket


Authors

Spencer Newsham

Technology Sales, IBM