Preparing for AI in the public service sector (Guest blog from Snowflake)
Author: James Hall, VP & Country Manager UK&I, Snowflake
If you’ve ever tried to teach yourself how to bake bread or learn a new skill, you might have heard the saying: ‘you don’t know what you don’t know.’ If you don’t know that poppy seeds nicely finish off a loaf of bread, for example, how would you know to sprinkle them on before putting it in the oven?
As someone who has worked with data-driven organisations for years, I naturally see examples of this on a daily basis. It’s only when you have all the information at hand that you can then make better, more data-informed decisions.
To do more of this, public-sector organisations should take a closer look at what they can do when it comes to data acquisition and data management. But with mounting pressure to keep up with people’s ever-changing expectations, while also balancing budgets, that’s easier said than done.
One way the sector is trying to do this is with AI. But before that can happen, organisations need to better organise the data that would feed into AI: everything from clean, structured data sets (like spreadsheets and databases) to messy, unstructured ones (like raw IoT device data streams) and everything in between. Once that’s sorted, AI can come in and start working its magic.
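To make that concrete, here’s a minimal sketch of what ‘organising’ mixed data can look like in practice: a tidy CSV record and a raw IoT-style JSON event flattened into one common shape before anything AI-related touches them. All of the field names here are invented for the example.

```python
import csv
import io
import json

# Hypothetical inputs: a structured CSV export and a raw IoT event.
CSV_DATA = "sensor_id,temp_c\ns-01,21.5\n"
IOT_EVENT = '{"device": "s-02", "readings": {"temp_c": 19.8}}'

def from_csv(text):
    # Structured data: already has named columns, just map them.
    return [
        {"source": "csv", "id": row["sensor_id"], "temp_c": float(row["temp_c"])}
        for row in csv.DictReader(io.StringIO(text))
    ]

def from_iot(raw):
    # Unstructured/semi-structured data: dig the values out of a nested event.
    event = json.loads(raw)
    return [{"source": "iot", "id": event["device"], "temp_c": event["readings"]["temp_c"]}]

# Both sources end up in one consistent list of records.
records = from_csv(CSV_DATA) + from_iot(IOT_EVENT)
print(records)
```

The point isn’t the code itself, but the shape of the work: every source, however messy, has to be mapped into something consistent before it’s useful downstream.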
Simple, right?
The true value of data
While governments around the world are the largest producers of data, they’re also among its biggest consumers.
This makes it all the more important to ensure that all that data is better organised so it can be used more effectively. Giving the right people more access to accurate, well-organised data helps them make better, more timely decisions. Managing it all in a single location helps encourage collaboration between departments.
Doing this means that no one has to spend their day scaling mountains of information to help inform new policy decisions. It’s all there in one place for the right people to see.
Making data AI ready
We all know by now that AI is trained on the data that’s fed into it. But as the old saying goes: ‘garbage in, garbage out.’ Making sure the data used to train AI is high quality helps ensure that anything it then produces is too.
The big issue though is that many public-sector organisations struggle to get a handle on their data. It’s often tied up in bureaucratic red tape or hidden away where no one knows it’s there. And even when they can get it, so much time is spent untangling the mess just to put it into something that resembles a semi-coherent structure.
Getting to the core of this issue – and tidying up data – is the first step towards creating an AI tool that can then increase efficiency, help reduce costs and improve citizen outcomes.
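As a simple illustration of that tidy-up step, here’s a hypothetical sketch of the kind of clean-up that keeps ‘garbage’ out of a training set: dropping exact duplicates and records with missing values. The field names are invented for the example.

```python
# Hypothetical raw records, as they might arrive from different systems.
raw_rows = [
    {"record_id": "A1", "postcode": "SW1A 1AA"},
    {"record_id": "A1", "postcode": "SW1A 1AA"},  # exact duplicate
    {"record_id": "A2", "postcode": None},        # missing value
    {"record_id": "A3", "postcode": "E1 6AN"},
]

def clean(rows):
    seen, result = set(), []
    for row in rows:
        key = tuple(sorted(row.items(), key=lambda kv: kv[0]))
        # Skip anything we've already seen, or anything with a gap in it.
        if key in seen or any(value is None for value in row.values()):
            continue
        seen.add(key)
        result.append(row)
    return result

cleaned = clean(raw_rows)
print(cleaned)  # only the A1 and A3 records survive
```

Real public-sector clean-up is far more involved than this, of course, but the principle is the same: decide what ‘good enough’ means, then enforce it before the data goes anywhere near a model.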
The possibilities for the public sector
So, what can happen once organisations have sorted out their data? Well, the possibilities are almost endless.
Many organisations find themselves buried under paperwork: both digital files and literal sheets of A4. And while scanning physical copies to store them digitally is the way forward, it’s a slow, manual process. The NHS, for example, is currently working its way through decades of patient files to store them better and make them easier to access. But it’s such a huge task that it’s estimated only around 25% of all records have been processed so far.
AI tools like Document AI can help organisations quickly and easily extract information from documents. Not only can those documents then be stored digitally, but they can also be edited and worked on using a visual interface and natural language, freeing up more people to do other vital work around the organisation.
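To give a feel for what field extraction means (without reproducing any particular product’s API), here’s a toy sketch that pulls a few fields out of plain OCR-style text with regular expressions. The document, its layout, and all the field names are invented for the example; production tools handle this with trained models rather than hand-written patterns.

```python
import re

# Hypothetical OCR output from a scanned record.
OCR_TEXT = """
Patient name: Jane Doe
Date of birth: 1984-03-12
Reference number: 943 476 5919
"""

# Hand-written patterns standing in for what a trained model would learn.
FIELDS = {
    "name": r"Patient name:\s*(.+)",
    "dob": r"Date of birth:\s*([\d-]+)",
    "reference": r"Reference number:\s*([\d ]*\d)",
}

def extract(text):
    record = {}
    for field, pattern in FIELDS.items():
        match = re.search(pattern, text)
        if match:
            record[field] = match.group(1).strip()
    return record

record = extract(OCR_TEXT)
print(record)
```

Once documents are reduced to structured records like this, they can be searched, edited, and fed into the rest of an organisation’s data estate.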
Getting started
Understanding the role of data, and modernising the infrastructure it’s stored on, will make everything easier over the next few years. So if you’re in the public sector, start looking at what you have and ask yourself: ‘what don’t I know?’
#DataFoundation #DataManagement #AIPublicSector