In this version we start to introduce audio instructions and experiment with visual cues that sync the audio to the interface elements.
Animations are run in CSS rather than JavaScript, a choice made to save the user's battery. We do, however, use a small JS function to control the synchronisation of the CSS animations with the voiceover.
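The approach can be sketched roughly as follows: CSS owns the animation itself, while JS only decides *when* an element should animate, by watching the voiceover audio's playback position and toggling a class. All names here (`cues`, `activeCue`, `is-animating`) are hypothetical, a minimal sketch rather than the actual implementation.

```javascript
// Hypothetical cue sheet: each entry maps a point in the voiceover
// to the interface element that should animate at that moment.
const cues = [
  { time: 0.0, selector: '#greeting' },
  { time: 2.5, selector: '#word-card' },
  { time: 5.0, selector: '#next-button' },
];

// Pure helper: given the cue list and the audio's currentTime (seconds),
// return the cue that should currently be active, or null before the first.
function activeCue(cueList, currentTime) {
  let active = null;
  for (const cue of cueList) {
    if (cue.time <= currentTime) active = cue;
  }
  return active;
}

// Browser wiring: on each timeupdate, toggle an `is-animating` class so
// the CSS animation runs only on the element the voiceover is describing.
function syncToVoiceover(audioEl, cueList, doc = document) {
  audioEl.addEventListener('timeupdate', () => {
    const current = activeCue(cueList, audioEl.currentTime);
    for (const cue of cueList) {
      const el = doc.querySelector(cue.selector);
      if (el) el.classList.toggle('is-animating', cue === current);
    }
  });
}
```

The pay-off of this split is that the browser's compositor can run the CSS animation off the main thread; JS wakes up only on `timeupdate` events (a few times per second) rather than every frame.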
This really is a discovery phase, and I suspect it will continue for the entire lifespan of the project. Our main focus is on identifying the challenges specific to this particular app.
The first round of learning design materials shared amongst the team, Lesson 1, allows us to anticipate the majority of interactions, including handwriting and speech recognition. Technologically, we consider these the most challenging features of the app.
During the discovery phase we have found only a single reason to date that would require us to deliver the learning as a native iOS app: speech recognition. Unfortunately, Apple does not currently support the webkitSpeechRecognition API in any of its mobile browsers, nor does it offer an alternative, although the feature does work in its desktop browsers. Perhaps this will be a nice surprise in a future iOS update, but in the meantime we will have to create an iOS version of the app, which will allow us to use Siri and Apple's server-based speech recognition on that platform. For the main development phase, however, we will focus on browser-based delivery, as this simplifies development, testing and sharing; when we reach the latter stages of development we will adapt the speech recognition functionality for Apple mobile devices.
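For the browser build, this means feature-detecting the Web Speech API (which ships unprefixed in some browsers and as `webkitSpeechRecognition` in others) and degrading gracefully where neither exists, as on iOS. A minimal sketch, with `getSpeechRecognition` and the `win` parameter being our own hypothetical names (passing the global in makes the check testable outside a browser):

```javascript
// Return the SpeechRecognition constructor if the environment provides one
// (unprefixed or webkit-prefixed), otherwise null. iOS mobile browsers
// currently return null here, which is what forces the native fallback.
function getSpeechRecognition(win) {
  return win.SpeechRecognition || win.webkitSpeechRecognition || null;
}

// Usage in the browser:
// const SpeechRecognition = getSpeechRecognition(window);
// if (SpeechRecognition) {
//   const recogniser = new SpeechRecognition();
//   recogniser.lang = 'en-GB'; // assumed target language
//   recogniser.onresult = (e) => {
//     console.log(e.results[0][0].transcript);
//   };
//   recogniser.start();
// } else {
//   // iOS Safari lands here: hide the speech exercises, or hand off
//   // to the native app's Siri-backed speech recognition instead.
// }
```

Keeping the detection in one small function means the latter-stage iOS work is confined to the `else` branch rather than scattered through the exercise code.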