[CLOSED] [APP][4.1+][V1.3.7] Saiy | Your Open Source Virtual Assistant

Not open for further replies.


Jan 5, 2020
Battery usage for long-term hotword detection actually isn't very well optimised, but it's still better than the Google Assistant.


Senior Member
Jan 8, 2013
Is anyone still using it? I think I found this app too late, but is there any way to ensure that hotword detection is active all day? Sometimes when I long-press an item in the app it will default back to SelfAware.

Also, what about the battery usage for 24-hour hotword detection?

I still use Saiy every day: "Call Mum", "Read my last message", etc. I was hoping to see it get some development though!


Senior Member
Sep 11, 2006
Latest version?

I see from the 1st post that we have V2.2.5A RELEASED as the latest version of Saiy. But I am unable to find it. Could someone point me to the latest version?


Senior Member
Jan 8, 2013


Senior Member
Jul 31, 2012
Hello.. they removed Saiy from the Play Store.. is it safe to keep it, or should we uninstall?

This does not mean it is unsafe. It means that the developer stopped working on the project and unpublished it.

The .apk can be found on the internet, but only in places that are hard to trust. If you are able to post your Play Store-provided .apk here, I would like to try it out.



Senior Member
Jan 8, 2013
Just search the thread! See my post #8227

This does not mean it is unsafe. It means that the developer stopped working on the project and unpublished it.

The .apk can be found on the internet, but only in places that are hard to trust. If you are able to post your Play Store-provided .apk here, I would like to try it out.



Senior Member
May 31, 2006


Aug 7, 2020
If it had VOSK integrated, it could also work independently of Google apps. That would be a more than sufficient solution for a degoogled Android.


Aug 10, 2018
If it had VOSK integrated, it could also work independently of Google apps. That would be a more than sufficient solution for a degoogled Android.

I am working on an assistant app that uses VOSK. It's not quite ready yet, but hopefully it will be a suitable replacement for Saiy.



New member
Jun 6, 2014
I am working on an assistant app that uses VOSK. It's not quite ready yet, but hopefully it will be a suitable replacement for Saiy.

That's great news... I can test if needed.

Top Liked Posts

    Welcome to Saiy... Install - Mod edit: Broken DL link removed.

    For those of you visiting this thread as subscribers to utter! you'll know the history well. A promising release, active development and then silence..... Please accept my apologies - if you don't know that 'sometimes life gets in the way' then you are the envy of most.

    A bit of history for you...

    To cut a half-decade-long story short, the Fragmentation of Android ground this project to a halt. When I first demoed my creation to the world, I had visions of knocking it up on Android and then focusing on how it functioned in the background. I was about to drown myself in machine learning to bring my vision to life. Job done? Well no...

    It turned out that developing an app that covered almost every function available on an Android device (I refuse to say 'phone' - it's not 1983!) was a job for 1,000+ developers, not just a lonesome one such as I, on a 10-year-old Dell laptop - and each time a new update to Android was released, I huddled in a corner and wept, as I waited for the crash reports and 1* ratings to roll in.

    It turns out that bugs aren't only specific to Android versions. Multiply that by manufacturers messing with builds, devices running multiple versions, and even the locale of the device causing crashes, and you end up with 12,000+ supported devices, exponentially multiplied by all other eventualities, as your user base.

    I drowned... And my (in hindsight) naive plans of master AI'ery, whilst users enjoyed playing with it on Android, dropped down to the bottom of my to-do list. Things had to change.

    I decided to shut myself away in a dark room to completely rewrite the Android code, so that it was both readable and scalable; despite its complexity. Rumours flew that I had died - and in some ways, I did....

    Not really ^ that just felt justifiably dramatic! :cyclops:

    So, utter! is reborn as Saiy® and Open Sourced, so it may have a chance of competing with the big boys, before they run away with all of our private data and souls, in order to use their services...

    Install Saiy from the Play Store - here

    Note - a direct download link will appear here shortly!

    You can get involved by checking out the Development Section in the app, or alternatively, if you're a curious Android Developer, check out the base code published on GitHub here

    The code base is pretty large, so briefly, there are two major classes in the app that direct and distribute work elsewhere:

    SelfAware is the main Foreground Service, responsible for managing the application state and channelling voice recognition, text to speech and other API requests.

    Quantum is the main processing class, where commands are locally resolved (if required), sensibility checked and actioned.

    Understanding the above two classes is essential to following the flow of the full application logic.

    MyLog is a global verbose logging toggle. When enabled, the output will flow class to class, as well as display durations for time sensitive functions.
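    To illustrate the idea, here is a rough sketch of what a global verbose-logging toggle in the spirit of MyLog might look like. The class and method names below are my own for illustration, not the actual Saiy source:

```java
// Illustrative sketch only: a single global switch controlling verbose
// output, plus a helper for timing sensitive functions. Names are
// assumptions, not Saiy's real MyLog implementation.
public final class MyLogSketch {

    // One switch silences or enables verbose output across all classes
    public static boolean DEBUG = false;

    public static void i(String clazz, String message) {
        if (DEBUG) {
            System.out.println(clazz + ": " + message);
        }
    }

    // Report the duration of a time-sensitive function in milliseconds
    public static long elapsedMillis(long startNanos) {
        return (System.nanoTime() - startNanos) / 1_000_000L;
    }
}
```

    The advantage of a single static toggle is that the logging output can follow the flow class to class without each class holding its own debug flag.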

    The following remains for the sake of me needing some sleep and posterity :cowboy:

    About this thread

    Firstly, I’d like to thank everyone for the positive feedback and encouragement and the huge amount of messages I’ve received on YouTube, email, twitter, G+ and here on XDA volunteering to be involved in beta testing. It’s very much appreciated and I apologise that I cannot respond to them all. Please take this intro as a thank you.

    This thread is for your open discussion about utter! and the features you'd like to see included, so please feel free to throw your ideas back and forth (be nice to each other) and I’ll do my best to keep up with them when I have time.

    About utter!

    Unlike other voice applications, utter! will be configurable to the user, enabling you to assign spoken keywords to use the functionality of your favourite applications or make system related changes to your device. There's no cumbersome overlay or launching of another application to use the features, utter! sits in the background ready to be activated, whenever you want, without interrupting whatever you are doing.

    Which applications will initially be compatible?

    The more interest I can demonstrate in utter! the more likely your favourite application developer will want to make their functionality available to use. The purpose of the YouTube video and this thread is to get their attention and create a more functional first beta release.

    I’m a developer and I'd like features of my application to be utter! compatible.

    Please contact me to discuss how our applications can work together.

    Q) Will an offline speech engine be built in?
    A) I hope so. I'm waiting to see what features of Google Now are made available to developers

    Q) What languages will it be available in?
    A) At first, English only. Once I have the framework functioning correctly, I can turn my attention to translations (thank you for the messages I’ve received offering translation help).

    Q) Will it use natural speech recognition?
    A) Over time it will, but in the testing stages commands will be more structured. As my algorithms develop, so will the application's ability to recognise exactly what you want.

    For the conversation mode I’m really up against it. I’m almost trying to reinvent the wheel, knowing that Google are no doubt sitting on a very advanced algorithm built purposely for this… They are more than welcome to allow me to use it…

    Q) How much is utter! going to cost when it’s out of beta?
    A) I don’t know as yet. Not more than a couple of dollars... I just need to make sure that whatever the price, it's more than worth it.

    Q) Which speech engine does it use?
    A) In the video I used IVONA beta (available on the Market here). This option is configurable so you can use a free or premium engine of your choice.

    Q) Google’s Project Majel will no doubt surpass this application. Why are you bothering?
    A) Perhaps.. It remains to be seen the direction Google take and whether their focus will be too much in the interest of nudging you towards Google services, rather than providing an open and configurable voice integrated assistant.

    For example: If you assign ‘Save Battery’ to a command, on detection utter! could go ahead and minimise your brightness level and screen time-out, turn off (or restrict) all data connections, set your device to GSM only, turn off vibrate functions and screen animations, underclock and undervolt your CPU (requires root) etc etc.. Is that what you expect from Majel? Personally, I don’t… [Update - I think I was right about this!]

    Q) How do I register to beta test!?
    A) Hang around this thread – thank you.

    Q) The icon you used in the video for utter! was lame!
    A) Yes! I just borrowed the inbuilt icon for now. If you think you can design a better one, please feel free! Maximum respect (at the very least) from the first post is offered in return!

    Q) Can you adapt Siri to do these things please?
    A) I honestly have had these requests – I’m afraid that’s not going to be possible now… or in the near/far future /ever…


    By genisis7

    By goander

    By joshaw

    By usaff22
    utter! release progress

    • TASKER
    • WIFI


    pingpongboss - amazing StandOut library!
    usaff22 - amazing icon and artists impression work
    meadowsjared - Sharing his coding skills
    nobnut - previously unknown generosity
    waydownsouth - previously unknown patience and sharing of knowledge
    fahadayaz - Bug solving GEEK


    All permissions are for device based command purposes. NONE of your personal data is uploaded or shared to any external server of any kind

    Change Log

    V2.2.5A RELEASED
    Directory Searches
    Car Locator
    Play Music
    Visual Results
    + many more features added! Please see the command list in the app for details.
    Changed to foreground application with permanent notification to stop Android killing it!
    V2.2.4A RELEASED
    Skype fixed
    FC's fixed on some commands
    Speed increases
    Added troubleshooting menu
    Changed icon display
    Enabled background test code (hidden).
    V2.2.1A RELEASED
    Code and UI revamp.
    Converted to pre-beta background app
    Usage details in the application.
    V2.1.9.1A RELEASED
    Simply too many to list... 
    All details in the app
    V2.1.0A RELEASED
    Mobile data
    Contacts (algorithm test)
    Dropped 2.1 compatibility
    Fahrenheit added to weather
    Initialisation tweaked
    Custom listener tweaked
    Button labels and Loquendo sample now family friendly :eek:
    HUGE code rebuild
    V2.0.1A RELEASED
    Tasker integration!
    World Weather
    Custom Listener test
    Long-press-search integration
    Loads of bug fixes and code improvements.
    Fixed Weather and Time force closes on 2.1 & 2.2 devices
    Root-functions fixed
    Tablet compatibility fixed
    Errors when no recogniser fixed
    Loads of bug fixes and code improvements.
    Root-functions included!
    FIXED - Recogniser button errors
    Loads and loads of bug fixes and code improvements.
    World-Time included
    FIXED 'unknown' Bluetooth state message.
    Loads and loads of bug fixes and code improvements.
    Bluetooth voice control test included
    FIXED the V1.4A 4.0.3 ICS crash
    FIXED FC on back button from config tab
    FIXED FC when closing app
    FIXED FC for Galaxy Nexus TTS settings
    FIXED leaked Receiver
    Loads and loads of bug fixes and code improvements.
    WiFi voice control test included
    Loads of bug fixes and code improvements.
    V1.3A - RELEASED
    Fixed FC on Config Tab
    V1.2A - RELEASED
    Release version 'jumped' to match Play Store
    Totally rewritten UI code
    Totally rewritten engine logic
    Prevented override of localised English voice
    Added test contact loader
    Intro changed to audio file
    Option to record output to sdcard for translation help
    So much else that I've forgotten...
    V0.0.1A - RELEASED
    Long presses for association are not functional yet

    IVONA registers itself in error, even if it may actually work. A full uninstall and reinstall of the IVONA files is required.
    Buttons occasionally don't reactivate after utterance - 'utterance' code deprecated.
    Weather and Time APIs are useless for USA state searches. Need to change provider.

    utter! stable version is available from Google Play here.

    Saiy stable version is available from Google Play here.

    Latest test releases can be found in my more recent posts
    Please release an Alpha version!!! PLEASE!!!

    Thanks for all of the comments and support folks. For all of those eager to test something I'm going to knock up a configuration apk over the weekend. Nothing too exciting I'm afraid, but digesting the comments I've realised I really need to initially focus on the following:

    1) Getting the very basic framework out so if you'd be so kind to test on your various devices I can catch early device, Android version, custom ROM/kernel bugs and get them right from the start. I need to make sure that if the correct voice data etc isn't already installed on your device I handle it correctly, depending on your location and other things.

    2) Configure the app to be multilingual from the start - If I leave this until a later date, I know I'll end up putting it off with it coming second to bugs and fixes and enhancements etc. Thank you for all of the messages offering translation support, it's really appreciated.

    3) A few of you have mentioned your accents not working well with other speech apps. This got me thinking that I need to be able to allow you to configure the raw voice data, rather than just keywords. So, for example, if you say 'load app cut the rope' and the voice data returns 'low dap cudder hope' (or something else just as random), then I need to be able to allow you to view the raw voice data and if necessary assign 'low dap cudder hope' to open cut the rope! That way it will be accent issue free for many of the functions.

    So, I'll work on a simple test apk over the weekend then you can all have a mess with your raw voice data and let me know your findings depending on your language etc. Thanks in advance.

    4) Build in a simple (at first) offline speech engine, that can be improved version by version. I'm looking at integrating Sphinx, but it's not going to be easy as there is limited documentation.

    5) Create an option that can immediately change utter! to text interaction only, rather than having to abort what you are doing and adjust this in the preferences. So, a 'mute' button of sorts.
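    The raw-voice-data assignment described in point 3 could be sketched roughly like this. The class and method names are my own for illustration, not Saiy's actual API:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Illustrative sketch only: remapping raw recogniser output (e.g.
// 'low dap cudder hope') to the command the user actually meant.
public final class AccentRemap {

    private final Map<String, String> remap = new HashMap<>();

    // The user assigns the raw voice data to their intended command
    public void assign(String rawVoiceData, String command) {
        remap.put(rawVoiceData.toLowerCase(Locale.ROOT), command);
    }

    // Resolve incoming voice data, falling back to the raw string
    public String resolve(String rawVoiceData) {
        return remap.getOrDefault(rawVoiceData.toLowerCase(Locale.ROOT),
                rawVoiceData);
    }
}
```

    With that in place, `resolve("low dap cudder hope")` would return `"load app cut the rope"` once the user had made the assignment, sidestepping the accent problem for that command.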


    Again, thank you for all of the suggestions, I do scan through them when I have time and it's really helped me prioritise what I should be doing to make utter! more useful to you.

    Please keep the suggestions coming and perhaps let me know your thoughts on the following:

    a) What voice engines do you use? I want to be able to refer users to a link on the market (or direct download) for a suggested engine. I know this is a personal preference, but if for a certain language there seems to be majority agreement, then I'll go with that.

    b) Do I need to allow the default search engine to be configurable? Does anyone not use Google...?!

    c) What are your experiences of other voice applications? What are your personal essential features that they include and what are they missing?


    Funding Update

    I've spoken to many potential partners and a very long story short, they each have their own commercial agenda, all of which I believe detract from the biggest appeal of utter!

    I want to be able to focus on utter! allowing you to use your own personal favourite applications, because that is the experience you have chosen, one which I want to enhance. I do not want to 'team up' with a limited number of selected commercial partners to provide your results in the hope you will then use their services.

    So, how can I achieve this?

    I'm going to go down the kickstarter route to raise as much funding as I can for the application. Briefly, every time I think of a function I wish to include, I think of a hundred eventualities and adjoining functions; I simply cannot do this to its full potential alone, so the funding will go towards the cost of outsourcing the Java development, to assist me in building the application to the highest standards and quickly (without cutting corners)!

    Most importantly, I can focus on building the application based on its appeal, rather than a steered commercial angle.

    At the point the money I raise runs out, I will have the amount of downloads and popularity (hopefully) of the application to entice commercial partners and go from there.

    Having assessed all of my options, I think this is the best way forward all-round.

    Once I've written up the kickstarter brief (so much to do!), I'll post the link here. Just to clarify, you are not in any way expected to donate and never would be. If you do donate, you will of course have my gratitude and that of others. When the link appears, you all have the power of social media at your fingertips, so if you could share it on twitter, G+ etc that would be absolutely fantastic...

    Right, I best stop waffling and get coding!

    Tasker Plugin

    Please note - This thread will be updated specifically for Saiy very soon - I've been busy! I hope you can follow along, replacing utter! with Saiy in the meantime..... Do make sure you download the correct XML files at the bottom of this thread.

    Tasker Plugin Tutorial

    At present, the plugin has three main features:

    1) Speak
    2) Notify
    3) Pass variable values.

    Please download and import the attached utter! example project, to make it easier to follow the processes explained below. As with all tasks you haven’t created yourself, it’s a good idea to review the content within the XML on the sdcard, prior to importing them, as they could contain rogue actions. On this occasion, it might be ok to trust me though :)

    From the Tasker preferences, make sure beginner mode is not ticked and under the miscellaneous tab, tick ‘Allow External Access’ so the two applications can communicate.

    If you don’t see the imported utter! project, on the main Tasker screen, slide your finger downwards from the top of the screen (there is a faint white downwards pointing arrow), which will reveal the submenus.

    Make sure Tasker is turned on!

    Speak & Pass Variable Values

    The Profile smsReceivedU is activated when a text is received from anyone and triggers the task smsU. Click smsU to open it.

    You’ll firstly see a variable set action that is creating the voice content you want to pass to utter! The content consists of the message details. The second action is the Plugin configuration – tap on the action and then click edit.

    In the text box, you’ll see the content that you are going to ask utter! to speak. It can consist of just plain text, just variables, or a combination of both. For the sake of example, I used both. Save out of the plugin and go back to the task. Press the ‘play button’ at the bottom right of the task to ‘test’ it.

    At this point, Tasker may alert you that it will need to start monitoring your messages in order for the task to work correctly, accept the confirmation. When you press play, you hopefully will hear the information, including the populated variable content announced by utter! If you don’t, you may need to send yourself a quick text to populate the variables.

    * Note: occasionally, the activation of monitors (such as incoming texts) can be delayed. Saving all the way out of Tasker and opening it again seems to resolve this. Send yourself another text afterwards to confirm.

    When the profile smsReceivedU is ticked as active, your text messages will be read aloud when they are received. However, there may be situations when you want to choose whether you hear them or not.

    Head to the profile smsRecievedU – Interactive and open the triggered task smsU – Interactive. In action 1, you’ll see the content you are going to request utter! says and it contains a question about whether or not you want to hear the received message (the trigger for this task). Action #2 sets the message data that utter! will announce, if you confirm that you’d like to hear it. Click on the plugin action (#3) and press edit – You’ll see that the box ‘Send Value Only’ is ticked and the variable %smsdata (the message data) is entered to be sent to utter!

    * Note: There is currently no way for an external application to access the values of your Tasker variables (without a hack) due to privacy issues, as some of the data within the variables may be personal. As this is the case, you’ll need to select ‘Send Value Only’ in any tasks where you would like utter! to be made aware of a change in a variable value. These values are stored separately and securely by utter!

    Heading back to the task, action #4 is another plugin action, this time requesting that utter! immediately speaks the content of %tosay (configured in action #1). You’ll see that the ‘Start Listening’ box is ticked as you’ll need to answer the question!

    Now, when utter! announces ‘You’ve received a text message. Would you like me to read it?’ you’ll be aware that replying ‘yes, please read the message that is contained in the tasker variable I sent to you’ will result in utter! not knowing what on Earth you’re talking about! We need to configure a custom phrase within utter! to handle this.

    Heading into utter! select the customisation tab and click ‘Custom Phrase’. This is going to be handling our reply when utter! asks the question above. Enter ‘please read it’ in the phrase field (top box) and in the response box (bottom box!) %smsdata (case sensitive) and press ‘Create’.

    The result is, that when utter! asks ‘You’ve received a text message. Would you like me to read it?’, you can respond with ‘please read it’ and the custom phrase handler will detect the response is a variable name. The variable name will be substituted in the response for the current variable value (you sent it to utter! in action #3 of the task smsU – Interactive) and hey presto, your text message will be read out!

    For completeness, you may wish to add another custom phrase, where the phrase is ‘not right now thanks’ and the response is ‘okay, well let me know when you do’ – just to give it a little more personalisation… a bit nicer than just saying ‘cancel’.
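    The substitution behaviour of the custom phrase handler described above could be sketched like this. Class and method names are illustrative assumptions, not the actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: custom phrases map to a response, and if
// that response is a known variable name (e.g. %smsdata), the current
// variable value is substituted before speaking.
public final class CustomPhrases {

    private final Map<String, String> phrases = new HashMap<>();   // phrase -> response
    private final Map<String, String> variables = new HashMap<>(); // %name -> value

    public void create(String phrase, String response) {
        phrases.put(phrase, response);
    }

    // Values arrive via 'Send Value Only' plugin actions
    public void updateVariable(String name, String value) {
        variables.put(name, value);
    }

    public String respond(String phrase) {
        String response = phrases.get(phrase);
        if (response == null) {
            return null; // no custom phrase matched
        }
        // Substitute when the stored response is a variable name
        return variables.getOrDefault(response, response);
    }
}
```

    So after `create("please read it", "%smsdata")` and a `Send Value Only` update of `%smsdata`, replying 'please read it' produces the current message text rather than the literal variable name.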

    There are two more tasks in the project - drivingOnU and drivingOffU. These tasks turn the profile smsRecievedU – Interactive on and off. You may wish to activate the Tasker profile 'hands-free' whilst driving, so from the Customisation Tab, click on 'Create Commands' and select 'Run Tasker Task'. Click on 'drivingOnU' and when prompted by utter! set the command phrase to something like 'I'm driving', the success words to 'drive safely' and the fail words to 'something went wrong!'

    Whenever you wish to activate the profile and have your messages announced, you can say 'I'm driving' and the profile will become active. Do the same again for the 'drivingOffU' task and you'll be able to toggle them by voice whenever you need to.

    To summarise, any variable values you want to keep utter! updated on so you can request the current value by voice, pass them in a ‘Send Value Only’ plugin action. Don’t forget, in the ‘Speak’ plugin action, utter! accepts plain text and variables in any order or combination.


    Stipulating the interaction level of utter! (speak uninvited whenever / speak uninvited if I’m not at work etc) is coming soon, but in the meantime there may be variable content within your tasks that you’d like to be made aware of, but only when you’re good and ready. For this reason, you can connect the speech data to a notification that, when clicked, will begin to announce the content. This is handled by utter! rather than the Tasker ‘notification click’ action.

    The profile unlockedU is a quick example of how this works. The trigger is the device being unlocked (so you can easily test it out) and it fires the task infoU. The task infoU places some random device info into the variable %anyname in action #1. The utter! plugin action #2 sets the content for the notification (%anyname) and the box is ticked to confirm the Notification action.

    Press the ‘play’ button and you should see a notification appear. Clicking on it will begin the speech of the content placed in %anyname.

    To avoid cluttering the notification bar, currently only one notification is available. If another is activated, it will replace the content of the first one. This will eventually be configurable.

    Tasker, Llama & External App Support

    Users have requested to be able to switch the application off and on under certain circumstances and so I've exposed the ability to do this and other functions using intents from external applications. The applications must allow you to add 'data extras' to intents and you can now do the following:

    1 - Start the voice recognition
    2 - Start the permanent voice recognition
    3 - Stop the permanent voice recognition
    4 - Turn utter! off completely
    5 - Flush the memory utter! is using
    6 - Toggle driving mode
    7 - Toggle caller announcement
    8 - Toggle notification announcement

    I've created a separate Activity to do this. Normally you have to list the package name, followed by the class name.

    Package name - com.brandall.nutter
    Class name - com.brandall.nutter.EIH

    To stop any rogue applications firing off commands to utter! you need to go to the Power User Settings and create an alphanumeric password. That password will need to be entered in the data extras of the intent, to confirm you have 'authorised' it.

    Here is a screenshot of how it would look in Tasker. The command:2 refers to the permanent recognition above; command:4 would turn utter! off. The password:mypassword123 entry supplies the password (mypassword123) you set in the Power User Settings.


    Each action must be labelled, immediately followed by a colon, then the content.
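    A parser for that 'label:content' format could be sketched as follows. This is a rough illustration of the format described above, not utter!'s actual parsing code:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: split each data extra at the first colon,
// producing labels like 'command' and 'password' with their content.
public final class ExtraParser {

    public static Map<String, String> parse(List<String> extras) {
        Map<String, String> out = new HashMap<>();
        for (String extra : extras) {
            int colon = extra.indexOf(':');
            // A valid entry is label, colon, then non-empty content
            if (colon > 0 && colon < extra.length() - 1) {
                out.put(extra.substring(0, colon), extra.substring(colon + 1));
            }
        }
        return out;
    }
}
```

    Given `command:2` and `password:mypassword123`, the parsed map would hold the command number and the password for authorisation, and anything without a colon would simply be ignored.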


    I hope the above made sense?!


    Note: I've been informed that this project does not import correctly on the Gingerbread version of Tasker. I'm investigating.

    Before you read any further, this is for the more advanced Tasker users, who want to handle voice commands with their own custom Tasks. If you're new to Tasker, perhaps return to this post at a later date and get started with the project detailed above!

    For those that are still with me, import the Tasker Receiver Project below and then you'll need to tick the Send to Tasker box in the Try Again options (in the utter! Advanced Settings). Once you've done that, all unknown commands will be broadcast to Tasker to handle and the project you've imported will resolve them, or not....

    First thing to remember is that if utter! knows to send the voice data to Tasker, then it will not speak a response, such as 'I didn't understand that request' or something equally as irritating! It'll be up to you to handle any responses in Tasker - including when the voice data doesn't trigger a Task you've created.

    In the project there is a Master Command List, which at present contains the three commands, sausages, beans and chips. These are populated into an Array, which for all intents and purposes, is a list..... Every time Tasker receives voice data from utter! it will refresh this list, to make sure it picks up on any new commands you've created.

    It then gets a bit tricky - The voice data is also in the form of an Array, so the Task 'Voice Data Receiver' loops through each item in the voice data Array, whilst also looping (in a nested loop) through each item in your command Array. Basic pattern matching takes place on each command, so that if the voice data in any place contains the word 'sausages', it will trigger a match. This is using the Tasker regex of *sausages*. You can of course use more advanced regex should you wish.

    If a match is made, the Task looks for the index of the command number and that will then trigger the corresponding Task called 'CommandAction#', where the number at the end is populated by the index variable in the loop. The voice data is handed to this Task as a parameter, so you can do further analysis to look for more matching words, should you wish. An example would be a trigger command of 'display' and a further check for 'off' or 'on' in the voice data. The CommandAction# Tasks finish by performing an action (of course!) and providing the vocal response to utter!

    If no match is made, the negative voice response is set at the bottom of the Voice Data Receiver Task.
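    The nested-loop matching described above could be sketched in plain Java like this. It is only an illustration of the logic (Tasker's *sausages* style 'contains' match), with names of my own choosing:

```java
import java.util.List;
import java.util.Locale;

// Illustrative sketch only: loop each item of voice data, and within
// it each command in the master list, returning the 1-based index of
// the first command contained in the voice data.
public final class VoiceMatcher {

    public static int match(List<String> voiceData, List<String> commands) {
        for (String heard : voiceData) {
            String lower = heard.toLowerCase(Locale.ROOT);
            for (int i = 0; i < commands.size(); i++) {
                // Equivalent in spirit to the Tasker regex *command*
                if (lower.contains(commands.get(i).toLowerCase(Locale.ROOT))) {
                    return i + 1; // would trigger 'CommandAction' + (i + 1)
                }
            }
        }
        return -1; // no match: the caller sets the negative voice response
    }
}
```

    The returned index is what selects the corresponding CommandAction# task; a return of -1 corresponds to the negative response at the bottom of the Voice Data Receiver Task.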

    There are notes placed inside the Tasks, which I hope will help to make the process a little more clear for you, as I confused myself writing the above :cyclops:

    If you get stuck, there are plenty of users and threads here on XDA that will come to your rescue. Check my signature below for a couple of links.