Looking for a KDE related job? We are hiring!

Simon Listens - Fri, 11/04/2011 - 16:55
We, the non-profit research organization simon listens e.V., are looking for qualified C++ / Qt / KDE hackers to join our team!

Initially, we are looking to fill part-time positions, but they can be extended to full-time positions afterwards.

While our projects mostly focus on speech recognition using our own KDE-based solution, simon, you do not need to know anything about speech recognition to join!

Interested? Contact me for more information or send me your resume right away: grasch at simon-listens dot org
Categories: Developer Blogs

simon meets MeeGo

Simon Listens - Fri, 10/14/2011 - 09:05
I'm happy to report that since August, I can now officially call myself a Qt Ambassador!

As an Ambassador, I had the opportunity to apply for a loaned Nokia N950 to develop and port applications for MeeGo/Harmattan. I took Nokia up on their offer and the result is simone - a trimmed-down, mobile version of simon. In other words: "simon embedded" or "simone".

The client features push-to-talk or automatic voice activity detection (configurable) and, thanks to simon's client/server architecture, uses little power on the device itself. Even with voice activity detection running, you should get many hours of continuous speech recognition out of a single charge.

simone can be used to replace the headset of a "full" simon installation, but it also includes a couple of default actions on the device itself. For example, you can use a voice-controlled quick dial feature or start and stop turn-by-turn navigation.


For more information and a live demo, have a look at the YouTube demonstration:

If you can't see the embedded video, try this direct link.
Categories: Developer Blogs

simon meets AT-SPI-2

Simon Listens - Tue, 09/06/2011 - 16:16
Over the last couple of days I have again been working on what I started during this year's Desktop Summit: simon's AT-SPI 2 integration.
What started as a GSoC project idea back in April is now beginning to take shape.

The basic idea is still the same: first, integrate sequitur into simon to be able to transcribe arbitrary words automatically. To do this, sequitur first needs to learn the transcription rules from a large dictionary, so I added a feature that lets users turn their shadow dictionary (which already supports many different formats) into a regular sequitur model.
Once this sequitur model has been generated, the system uses it to transcribe words for the AT-SPI plugin, but also when adding new words manually.

Thanks to sequitur, simon can now automatically transcribe words that are definitely not in the shadow dictionary:
With this as the foundation, and with some help from Frederik and Joanie, I created a plugin that analyzes the UI of the currently active window, creates vocabulary and grammar for it, and associates commands with the user interface elements.

It's still at an early stage of development (as is AT-SPI-2 support in GTK and Qt), but the basics already work. To check it out, either build and install the current development version of simon from Git (atspi branch) or have a look at the demonstration video below.

For RSS readers: AT-SPI demonstration on YouTube

Categories: Developer Blogs

Desktop Summit 2011

Simon Listens - Tue, 08/09/2011 - 16:39
I just arrived back home from this year's Desktop Summit (I flew home right after the talks), and it was awesome! In retrospect, I kind of regret not staying the whole week... Next year... :)

Anyway, I met tons of interesting people and had a lot of productive meetings and discussions. It's amazing what can get done in just a few minutes when the right people are sitting together.

If we (the KDE accessibility team) can implement even half of what was discussed in the last couple of days, I'm sure we're looking at a big step towards a truly accessible free desktop.

Oh and Martin: I'm looking forward to all those KWin effects for simon :P
Categories: Developer Blogs

Benefit Project Completed

Simon Listens - Tue, 08/09/2011 - 13:56
After more than a year of hard work, we, the simon listens team, are proud to announce that the Benefit project has been completed. It used simon alongside other open source technologies (XBMC, Ubuntu, ...) to create an affordable, self-contained, voice-controlled multimedia solution especially suited for elderly people.
The resulting solution, including the speech model and scenarios, will be released under a free license very soon.

But in the meantime, you can already have a look at a short demo video on YouTube:

(Planet readers, click here)
Categories: Developer Blogs

GSoC Guest Post: Context Detection

Simon Listens - Wed, 06/08/2011 - 07:59
This year, we have been given the opportunity to work with two students as part of Google's annual Summer of Code. Adam is working on context-dependent speech recognition (see below) and Alessandro is working on the Voxforge integration. Moreover, another student, Saurabh, is working on the Workspace integration as part of the Season of KDE.

So, as the start of what will hopefully become a series of blog posts from our new contributors, I asked Adam to write a bit about his progress and future plans for context-dependent speech recognition. This is what he wrote:

As part of the Google Summer of Code, I have been working to add context-based activation and deactivation of scenarios in the KDE speech recognition program simon. simon allows users to create or download scenarios which, when activated, allow them to control other programs such as web browsers, text editors, and games with speech.

When the number of commands that must be considered for speech recognition in simon becomes too large (for example, if the active scenarios have a large number of possible commands), the speed and accuracy of the recognition can suffer to the point of unusability. Context-based activation and deactivation of scenarios will allow scenarios to be deactivated when they are not needed (for example, when the program they control is not open, or is not the active window), so that the number of commands being considered by the speech recognition stays low enough to ensure accuracy and speed.

The context-gathering system has been developed so that each scenario has a "compound condition": a group of conditions under which the scenario should activate. The compound condition becomes satisfied when all of its conditions (which gather context) are satisfied. When the compound condition becomes satisfied or unsatisfied, it communicates this to its scenario, which then indicates to the scenario manager whether or not it should be activated.

Compound conditions will be created with a user interface similar to simon's command adding and editing interface. A scenario with no conditions in its compound condition will always be active. This means that any scenario made before this feature was added will keep its former behavior, but can easily be changed to (de)activate under certain conditions.

The conditions that make up the compound condition are developed as plugins (similar to the command managers in simon), so it will be easy to add new types of conditions. For example, one of the plugins developed so far gathers information about running processes, so a scenario can be activated on the condition that some process is or is not running (for example, a Rekonq scenario could have the condition "'rekonq' is running"). The extensibility provided by this plugin system means that conditions such as "'Firefox' is the active window", "The user is connected to the internet", "Fewer than 3 scenarios are currently active in simon", or any other condition that simon can determine, can easily be developed and used to guide scenario activation and deactivation.

The next steps of my project include making scenarios actually activate and deactivate in response to their conditions, adding a parent/child scenario relationship so that a single scenario can have child scenarios with independent grammars and conditions (allowing parts of a scenario to be activated and deactivated independently), building more condition plugins, and exploring what else simon could do with the contexts it gathers (for example, switching speech models based on the microphone being used).
Categories: Developer Blogs