Edwin was a voice assistant like Google Now, Siri, or Cortana, built before such assistants shipped on nearly every phone.
What I built
An end-to-end speech recognition utility, including:
- System integration with the dedicated hardware search button (see the sketch after this list)
- On-device command and question processing of voice input
- Multiple third-party API integrations
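As a rough sketch of how the button integration can work on Android devices of that era (my reconstruction, not Edwin's actual source): an activity can register for the long-press of the hardware search key via the `ACTION_SEARCH_LONG_PRESS` intent filter, then immediately hand off to the platform speech recognizer.

```java
// Hypothetical reconstruction -- not Edwin's actual source.
// Manifest entry that makes the activity respond to a long-press
// of the hardware search key (Android 2.x era):
//
//   <activity android:name=".AssistActivity">
//     <intent-filter>
//       <action android:name="android.intent.action.SEARCH_LONG_PRESS" />
//       <category android:name="android.intent.category.DEFAULT" />
//     </intent-filter>
//   </activity>

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;

import java.util.ArrayList;

public class AssistActivity extends Activity {
    private static final int REQUEST_SPEECH = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Launch the platform speech recognizer as soon as the button fires us.
        Intent recognize = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        recognize.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(recognize, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                handleUtterance(results.get(0)); // best-guess transcription
            }
        }
        finish();
    }

    private void handleUtterance(String utterance) {
        // Hand off to intent categorization (sketched further below).
    }
}
```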
The story
I had a simple goal: use my hands less and get more from my mobile devices. By hooking into the dedicated hardware search button, I made voice assistance available with a single press, no matter which app was open. Press it and say "Where is the closest coffee shop?" or "What is 30 km in miles?" and Edwin would find the answer and speak it back. It could translate, spell, and even had a bit of a personality. The intent processing was original and ran entirely on device: it categorized the spoken words to narrow down the topic, then ran further analysis to extract the specific request within that topic. Finally, I leveraged web APIs for weather, directions, and similar services to provide real-time information.
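That description maps to a two-stage pipeline: score the words against per-topic keyword sets to pick a topic, then extract the specific request within it. A minimal sketch of that idea follows; the topic names and keyword lists are illustrative, not Edwin's actual vocabulary.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IntentCategorizer {

    // Stage 1: keyword sets narrow the utterance down to a topic.
    // Illustrative vocabulary only -- not the app's real keyword lists.
    private static final Map<String, List<String>> TOPIC_KEYWORDS = new HashMap<>();
    static {
        TOPIC_KEYWORDS.put("directions", Arrays.asList("where", "closest", "nearest", "directions"));
        TOPIC_KEYWORDS.put("conversion", Arrays.asList("convert", "miles", "km", "celsius"));
        TOPIC_KEYWORDS.put("weather",    Arrays.asList("weather", "rain", "temperature", "forecast"));
    }

    public static String categorize(String utterance) {
        String[] words = utterance.toLowerCase().split("\\s+");
        String bestTopic = "chat"; // fallback: small talk / personality
        int bestScore = 0;
        for (Map.Entry<String, List<String>> topic : TOPIC_KEYWORDS.entrySet()) {
            int score = 0;
            for (String word : words) {
                if (topic.getValue().contains(word)) score++;
            }
            if (score > bestScore) {
                bestScore = score;
                bestTopic = topic.getKey();
            }
        }
        return bestTopic;
    }

    // Stage 2: within a topic, pull out the specific request, e.g. the
    // place name after "closest"/"nearest" for a directions query.
    public static String extractSubject(String topic, String utterance) {
        String lower = utterance.toLowerCase();
        if (topic.equals("directions")) {
            for (String marker : new String[]{"closest ", "nearest "}) {
                int i = lower.indexOf(marker);
                if (i >= 0) {
                    return lower.substring(i + marker.length()).replace("?", "").trim();
                }
            }
        }
        return lower;
    }

    public static void main(String[] args) {
        String utterance = "Where is the closest coffee shop?";
        String topic = categorize(utterance);              // "directions"
        String subject = extractSubject(topic, utterance); // "coffee shop"
        System.out.println(topic + " -> " + subject);
    }
}
```

The answer from stage 2 would then feed a web API lookup, and the result would be read aloud through Android's TextToSpeech engine.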
This app was an experiment to learn all the pieces of Android, everything from networking, Intents, Services, and UI customization to simply taking user feedback and fixing crashes (there were no drop-in solutions back then). It garnered 250k organic downloads and was loved by many for its accessibility and ease of use.
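"Networking" in that era meant hand-rolled HTTP on a background thread rather than a drop-in client library. Something along these lines, where the endpoint URL is a placeholder rather than one of the APIs the app actually called:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class WeatherClient {

    // Placeholder endpoint -- the real app used third-party weather APIs of the era.
    private static final String BASE_URL = "https://api.example.com/weather?q=";

    /** Blocking call: run off the UI thread (e.g. in a Thread or AsyncTask). */
    public static String fetchWeather(String city) throws IOException {
        URL url = new URL(BASE_URL + URLEncoder.encode(city, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            return body.toString(); // raw response; parsed, then spoken back via TTS
        } finally {
            conn.disconnect();
        }
    }
}
```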
Edwin was featured on Lifehacker and Android blogs [1] [2].
What I built it with
- Java
- Services & Intents
- Web APIs