
Adult Literacy and AI

We were funded by Nesta Scotland as part of their AI for Good programme to explore how we might make use of AI for social benefit. This text features in a report that Nesta is putting together on the experience of the projects that were funded. We used the opportunity to explore and devise a solution to the problems we had encountered with regional accents when using the voice recognition in our app. All in all, it has been very useful and very timely for us.

Citizen Literacy is a Community Interest Company, a non-profit organisation based and registered in Scotland (SC671958). We are developing an adult literacy education programme to support teachers who are helping adults to improve their English reading and writing skills. Our work consists of developing printed and digital learning resources in cooperation with the adult literacy education community in the UK and beyond. In this project we have been using AI to solve a real-world problem we faced: making the voice recognition in our adult literacy app more accurate.

The Citizen Literacy Programme – Reading and Writing for Everyone

Imagine living in a world of written words you do not understand. That is the daily reality for many people in the UK who can speak and understand English but cannot read or write the language. It often comes as a shock to people when they first discover the scale of adult low literacy in the UK. It is quite easy to go through life unaware of the number of our fellow citizens who have low or very low levels of literacy. According to the UK government’s own figures, about 15% of the working-age population has very low levels of literacy; this equates to about 6 million people. We think every adult should have the right to learn to read and write, as a matter of social justice. You can find out more about our work and the ideas and values behind our approach in a ‘White Paper’ that we have written. The app we are developing will be free to use, with no registration required, no adverts and no personal data recorded – it is being made available as a common good.

The problem we faced was that the voice recognition systems in our mobile phones tend to break down when we use just a single short word. If you add to this the fact that the UK (like many countries) has a rich diversity of strong regional accents, then things get tricky. The internet is full of funny videos of people trying to make themselves understood by the voice recognition systems on a range of devices. The BBC comedy sketch from the Burnistoun series, with a voice-operated elevator that cannot understand Scots saying the number ‘eleven’, is a classic example of this problem – the video clip on YouTube has been viewed millions of times. Using AI, we have found a way of overcoming this problem in our app. By using the free, open source AI software TensorFlow we have been able to build machine learning into our app, with the ability for the user to train it to recognise their accent for selected words. You can find a short video on YouTube illustrating this in action: https://www.youtube.com/watch?reload=9&v=WvL-2S5z3JE. It shows a user training the system to distinguish between two very similar-sounding words, ‘pin’ and ‘pen’.
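
For readers who are curious about the mechanics, the sketch below shows how this kind of in-browser transfer learning can be done with the open source TensorFlow.js speech-commands model. It is a simplified illustration rather than our production code – the word list, number of examples and training settings are purely illustrative.

  import * as speechCommands from '@tensorflow-models/speech-commands';

  async function trainAccentModel() {
    // Load the pre-trained speech commands model (runs entirely in the browser).
    const base = speechCommands.create('BROWSER_FFT');
    await base.ensureModelLoaded();

    // Add a small transfer-learning model on top of the base model.
    const accentModel = base.createTransfer('pin-vs-pen');

    // The learner records a few examples of each word in their own accent.
    for (let i = 0; i < 8; i++) {
      await accentModel.collectExample('pin');
      await accentModel.collectExample('pen');
    }

    // Retrain only the small transfer layers on the learner's own examples.
    await accentModel.train({ epochs: 25 });

    // Listen to the microphone and report which word was most likely spoken.
    await accentModel.listen(result => {
      const words = accentModel.wordLabels();
      const scores = Array.from(result.scores);
      console.log('Heard:', words[scores.indexOf(Math.max(...scores))]);
    }, { probabilityThreshold: 0.75 });
  }

Because only a small extra layer is trained on the learner’s own examples, this kind of training can run quickly on the learner’s own device rather than on a server.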

Moving from Consuming AI to Producing AI Services

When we were originally designing our app we took some unusual approaches, which have put us in a good position to continue using AI technologies. First of all, we decided to avoid using a web framework like ‘React’ or ‘Angular’. These types of software tools are very powerful, but for us there is too much of a ‘black box’ element to them. In addition, the fact that parts of them become redundant after a while (deprecated, in techy speak) builds in major maintenance headaches. Instead we went ‘retro’, using well-established and more transparent technologies as much as possible: JavaScript and open JavaScript libraries, JSON, HTML5, CSS, PHP and so on. The other thing we did was do away with the use of a database. All our data logic for the learning design is written in JSON, a data format very well suited to sharing between systems, which gives us flexibility for the future.
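
To give a flavour of what this looks like, here is a simplified, made-up fragment of the kind of JSON learning design data we mean – the field names and values are illustrative rather than copied from the actual app:

  {
    "unit": "short-vowel-sounds",
    "activities": [
      {
        "type": "listen-and-select",
        "prompt": "pin",
        "choices": ["pin", "pen", "pan"],
        "audio": "prompts/pin.mp3"
      },
      {
        "type": "say-the-word",
        "targetWord": "pen",
        "input": "voice-recognition"
      }
    ]
  }

Because the learning design is plain data rather than rows in a database, the same files can drive the app, be shared with other systems, or be analysed later on.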

When we were creating our learning design for the app, we took good care to make it highly structured and ‘clean’, knowing that it had to clearly represent the underlying educational logic that had been developed with our adult literacy subject experts. From previous experience we knew this was a good place to start. We also built flexibility into the data structures, knowing that they would evolve over time.

At this point we did not really think much about using AI. This might sound odd, as we were building an app that uses a range of AI technology services, such as voice and handwriting recognition for user input and text-to-speech for our virtual teachers. But we were ‘consumers’ of these AI services. Our involvement with the Nesta AI for Good programme has changed our perspective to now include being ‘producers’. We are using transfer learning to extend an existing TensorFlow AI model so it can be used for any regional accent. We have been able to solve a real problem for our users and make our app a better learning resource. This change in perspective has had some profound effects on our plans for the future.
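
As a small illustration of the difference, ‘consuming’ an AI service from the app’s point of view can be as simple as asking a built-in service, such as the browser’s Web Speech API, to read a word aloud. The snippet below is illustrative only and is not necessarily how our virtual teachers are implemented:

  // Ask the browser's built-in text-to-speech service to say a word aloud.
  function sayWord(word) {
    const utterance = new SpeechSynthesisUtterance(word);
    utterance.lang = 'en-GB'; // prefer a British English voice where available
    utterance.rate = 0.8;     // slightly slower speech for learners
    window.speechSynthesis.speak(utterance);
  }

  sayWord('pen');

Being a ‘producer’ means going a step further: training and shipping a model of our own, as in the transfer learning sketch above.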

Next Steps

When we joined the Nesta AI for Good programme we were fortunate to attend a workshop with the charity DataKind UK about using data and AI tools such as machine learning. This was a pivotal moment for us. As the workshop progressed, we recognised we had the opportunity to build into our JSON structure the ability to capture even more meaningful data about what our learners were doing and the results of their activities. Now we are thinking about how we might use that data with AI tools to make the app more responsive to individual learners’ needs and offer them personalised learning activities.
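
Purely by way of illustration (the exact fields are still being worked out), a learner activity record captured in this way might look something like this:

  {
    "activityId": "short-vowel-sounds/say-the-word",
    "targetWord": "pen",
    "attempts": 3,
    "recognised": true,
    "confidence": 0.82,
    "secondsSpent": 21
  }

Data of this kind could then feed AI tools that suggest which activities a learner might benefit from practising next.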