Hey guys,
I'm new to the forum and was following this thread about VoiceGuide.
I am also visually impaired and have been using Yamaha keyboards for a long time.
I currently have a PSR-S910 and a recently purchased Tyros5.
Here in Brazil it is increasingly difficult to buy new top-of-the-line keyboards, as they are very expensive.
The Genos is a keyboard for the few, and even the SX900 is not very affordable. And that's exactly the point I wanted to get to:
Yamaha, if it wanted to, could already implement accessibility in previous models, as well as many other keyboard brands that have not done so until now.
Touch is not a new technology. It is much more prominent in today's electronic devices, but it already existed in older keyboards.
A keyboard full of buttons is not necessarily a fully accessible instrument to operate, especially considering the high price.
Blind users are forced to memorize the screens, the tabs, and how many buttons they need to press to perform each function.
I, for example, don't use 100% of the Style Creator's functions, because there are too many details and no feedback to help.
Generally speaking, there has long been enough keyboard technology to provide needed accessibility and feedback.
Unfortunately, accessibility doesn't keep up with this technological evolution.
Regarding VoiceGuide, I found out about this feature by listening to the demo videos on YouTube, and I tried to research more about it.
Technically speaking, they started this project in a very amateur way.
To begin with, you have to download a set of wav files containing all the possible spoken phrases, which the keyboard plays in sequence as information appears on the screen.
Anyone who knows how a screen reader works on other platforms agrees that this is not ideal.
For example, when saving a file, VoiceGuide will certainly not read the name of that file.
Why? Because the keyboard doesn't have a "voice synthesis engine" that is capable of reading anything.
In fact, it is very curious that they used synthesized speech in the recordings, one phrase per wav file.
Not to mention that, to translate into other languages, it is necessary to re-record all the lines.
With a software-implemented speech synthesis engine, this task would be much easier.
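To make the difference concrete, here is a tiny illustrative sketch of my own (not Yamaha's actual implementation, and the phrase names are made up): a playback system built on prerecorded wav files can only speak words that exist in its lookup table, so anything outside that table, like a user's file name, simply has nothing to play.

```python
# Illustrative sketch: prerecorded-phrase playback vs. real text-to-speech.
# The table below stands in for VoiceGuide's downloadable wav pack; the
# entries are hypothetical, just to show the limitation.

PRERECORDED_PHRASES = {
    "save": "save.wav",
    "file": "file.wav",
    "ok": "ok.wav",
}

def speak_with_prerecorded(words):
    """Return the wav file for each word, or None if no recording exists."""
    return [PRERECORDED_PHRASES.get(word.lower()) for word in words]

# A fixed menu prompt works fine:
print(speak_with_prerecorded(["Save", "file", "OK"]))
# → ['save.wav', 'file.wav', 'ok.wav']

# But an arbitrary file name has no recording, so nothing can be spoken:
print(speak_with_prerecorded(["MySong01.sty"]))
# → [None]
```

A true speech synthesis engine would instead generate audio from any text at runtime, which is why it could read file names and would only need a language model swap, not a full re-recording, to be translated.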
The experience a blind user will have with VoiceGuide is far from what an iPhone offers with VoiceOver, or Android with TalkBack and other alternative screen readers.
In short: It's still not quite a "keyboard screen reader".
I sincerely hope that Yamaha continues to think about these users with great care and that it improves the project, making the feature available in all its products as Apple does.
I also hope that, like Yamaha, many other keyboard brands can be open to this potential audience, so these users can participate in development by giving suggestions and collaborating in some way.
Sorry for the long post.
Best regards