You can contact me at pollock.nageoire@wanadoo.fr, visit my home page at http://www.pollock-nageoire.net/pierre-l/ (in French), or visit the E.F.M. home page.
For the author, who is blind, an entry into the Linux world was made possible by Emacspeak around 1998. Emacspeak was able to drive an Apolo hardware speech synthesizer produced by Dolphin System. I do not really remember whether it was necessary to write a small server for this device, nor whether I kept that Tcl code somewhere.
The Apolo system was multilingual, but language switching had to be done manually, with no autodetection based on the text content. In fact Emacspeak was designed by an English-speaking user for English-speaking users. Even though it was and remains a truly marvelous tool, it has always lacked the ability to manage several languages simultaneously.
Around the years 2000 – 2003 I discovered Festival and started considering it as a replacement for the old Apolo hardware synthesizer. CPUs had become powerful enough to let the system do the voice synthesis itself, without the need for an external hardware device.
At this point it was natural to try to write an Emacspeak client for the Festival server. Since the author is not only blind but also French, he had to find a way to give Festival the multilingual features that the Apolo device had provided. The so-called FranFest project was that solution: it uses the Mbrola software synthesizer to produce French speech. Indeed, Festival can natively generate only English speech, but it has the ability to integrate almost any synthesis system. This feature is probably unknown to the many people who consider Festival merely a big, incomprehensible system.
From this point the goal was to integrate the components mentioned above, Emacspeak, Festival and Mbrola, which respectively gave the E, F and M of the project name. The F may also come from FranFest, but this is a very insignificant detail; in fact FranFest has never been part of the E.F.M. system and has always been maintained as a separate project. One can also debate the meaning of the E in the project name, since Emacspeak was abandoned when I discovered Speechd-El and SpeechDispatcher (cf. 6.8). Anyway, Emacs remains the client-side integrating system, and nowadays the E in E.F.M. stands for Emacs and no longer for Emacspeak.
There are technical reasons why it was necessary to make a release of E.F.M., but I will not explain them here, since this section must not contain any technical aspects.
However, even if you are not a developer, you will immediately understand if you know that the previous version was essentially an experimental version of this system. Thus it did not allow the user to use all the Emacspeak features.
I only mention the following points which are accessible to any user:
The finer aspects of voice configuration in Emacspeak were not implemented at all.
Hence this release must fix all these bugs and provide a fully functional speech server for Emacspeak.
See the FAQ just below (cf. 6.5), where people who already use E.F.M. mention the problems they have encountered.
2003/06/06 Well, the “voice philosophy” changed between versions 17.0 and 18.0 of Emacspeak. First you must know (or you already know) that the general principle of Emacspeak is to associate voices and faces.
In versions before 17.0, different faces were associated with different voices (different female and male voices, when such changes were allowed by the speech synthesizer).
Starting from version 18.0, the differences between faces are rendered partly as differences of intonation within the same voice and partly as differences of voice.
Finally you have a mixed system using both intonation changes and voice switching.
For the moment E.F.M. only implements the voice switch. This means that face changes can only be rendered by switching between different male or female voices. The intonation modifications required by the new features of Emacspeak are not yet implemented by E.F.M.; a piece of Scheme code for Festival must be written for this purpose, among others.
You will also notice that the selection of punctuation (all, some or none), which is one of the most useful features of Emacspeak, is not implemented either. It could be done by the same piece of Scheme code:
AND I HOPE TO HAVE ENOUGH TIME TO DO IT AS SOON AS POSSIBLE!
For the moment you can replace certain voice changes that render face modifications by voice switches. You can do so by customizing the association between voices and faces with the command Ctrl-E C in Emacspeak.
It will add a few lines like the following to your .emacs file:
(custom-set-variables
 ;; custom-set-variables was added by Custom -- don't edit or cut/paste it!
 ;; Your init file should contain only one such instance.
 '(voice-bolden-medium-settings (quote (betty 3 6 6 nil nil)))
 '(voice-bolden-settings (quote (betty 1 6 6 nil nil)))
 '(voice-lock-function-name-personality (quote acss-betty-a3-p6-s6)))
(custom-set-faces
 ;; custom-set-faces was added by Custom -- don't edit or cut/paste it!
 ;; Your init file should contain only one such instance.
 )
In Emacspeak, the same “philosophy” is kept for the voices, and the present release of E.F.M. should implement all the features required by the system. Therefore a fully functional festival-voices module must be implemented.
I hope that this problem will be solved in the present release.
According to emacspeak-18.0-festival.patch, I know that the speech rate is set to 0.6:
+(defcustom festival-default-speech-rate 0.6
+  "Rate for festival P.L. Nageoire 2003/04/30"
+  :group 'tts
+  :type 'integer )
I've also found the command dtk-set-predefined-speech-rate, which does not seem to work. Is it a bug, or is it just not implemented for Emacspeak and Festival?
It was simply not implemented, but it will be in this release.
How can I turn up the volume in the E.F.M. system? More generally, how can I control the volume?
Festival does not provide volume control, so it must be done by the general sound system. Volume control could be implemented in E.F.M., but the fact that various sound systems exist makes this not easy. I cannot promise that this feature will appear in E.F.M. soon.
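In the meantime a user can work around this from Emacs by driving the system mixer directly. Here is a minimal sketch, assuming ALSA and its amixer command-line tool; the function name is hypothetical and not part of E.F.M.:

```elisp
;; Hypothetical workaround, not part of E.F.M.: raise the master
;; volume by shelling out to the ALSA mixer.
(defun my-volume-up ()
  "Raise the ALSA Master volume by 5%."
  (interactive)
  (call-process "amixer" nil nil nil "set" "Master" "5%+"))
```

Binding such a command to a key gives a crude but sound-system-agnostic volume control, as long as the mixer tool for your system is adjusted accordingly (amixer for ALSA, pactl for PulseAudio, and so on).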
As you probably already noticed, E.F.M. is a system built on three components:
The role played by each part of the system will be detailed in a later section. For the moment you just need to know that the Mbrola voice synthesizer is needed for the multilingual aspect (cf. 5.2), and especially for French speech.
Hence there are two interfaces between these three components:
The interface between Festival and Mbrola is inherited from the FranFest project. It is possible that I will make a new implementation of this interface for technical reasons.
My own contribution to this project consists essentially of the interface between Emacspeak and Festival, which is a client/server application.
I also tried to make the installation of the whole system more convenient.
tts-with-punctuations
Method name | Module | At line |
tts-with-punctuations | dtk-interp.el | 62 |
emacspeak-advice.el | 563 | |
emacspeak-advice.el | 571 | |
emacspeak-advice.el | 579 | |
emacspeak-advice.el | 587 | |
emacspeak-advice.el | 595 | |
emacspeak-advice.el | 647 | |
emacspeak-advice.el | 693 | |
emacspeak-advice.el | 700 | |
emacspeak-advice.el | 713 | |
emacspeak-advice.el | 724 | |
emacspeak-advice.el | 732 | |
emacspeak-advice.el | 740 | |
emacspeak-advice.el | 748 | |
emacspeak-advice.el | 756 | |
emacspeak-advice.el | 766 | |
emacspeak-advice.el | 783 | |
emacspeak-advice.el | 800 | |
emacspeak-advice.el | 803 | |
emacspeak-advice.el | 819 | |
emacspeak-advice.el | 827 | |
emacspeak-advice.el | 833 | |
emacspeak-advice.el | 839 | |
emacspeak-advice.el | 847 | |
emacspeak-advice.el | 854 | |
emacspeak-advice.el | 857 | |
emacspeak-advice.el | 867 | |
emacspeak-advice.el | 873 | |
emacspeak-advice.el | 881 | |
emacspeak-advice.el | 884 | |
emacspeak-advice.el | 894 | |
emacspeak-advice.el | 903 | |
emacspeak-advice.el | 920 | |
emacspeak-advice.el | 933 | |
emacspeak-advice.el | 951 | |
emacspeak-advice.el | 969 | |
emacspeak-advice.el | 976 | |
emacspeak-advice.el | 988 | |
emacspeak-advice.el | 1009 | |
emacspeak-advice.el | 1016 | |
emacspeak-advice.el | 1039 | |
emacspeak-advice.el | 1071 | |
emacspeak-advice.el | 1295 | |
emacspeak-advice.el | 1307 | |
emacspeak-advice.el | 1314 | |
emacspeak-advice.el | 1321 | |
emacspeak-advice.el | 1329 | |
emacspeak-advice.el | 1945 | |
emacspeak-advice.el | 1952 | |
emacspeak-advice.el | 2717 | |
emacspeak-calc.el | 76 | |
emacspeak-calc.el | 86 | |
emacspeak-calendar.el | 132 | |
emacspeak.el | 185 | |
emacspeak-erc.el | 286 | |
emacspeak-erc.el | 306 | |
emacspeak-eshell.el | 98 | |
emacspeak-fix-interactive.el | 128 | |
emacspeak-metapost.el | 65 | |
emacspeak-speak.el | 1538 | |
emacspeak-speak.el | 1560 | |
emacspeak-speak.el | 1629 | |
emacspeak-speak.el | 1770 | |
emacspeak-speak.el | 1802 | |
emacspeak-tapestry.el | 101 | |
emacspeak-wizards.el | 2646 |
dtk-interp-silence
Method name | Module | At line |
dtk-interp-silence | dtk-interp.el | 87 |
dtk-speak.el | 231 |
dtk-interp-tone
Method name | Module | At line |
dtk-interp-tone | dtk-interp.el | 97 |
dtk-speak.el | 278 |
dtk-interp-notes-initialize
Method name | Module | At line |
dtk-interp-notes-initialize | dtk-interp.el | 104 |
dtk-speak.el | 236 |
dtk-interp-notes-shutdown
Method name | Module | At line |
dtk-interp-notes-shutdown | dtk-interp.el | 108 |
dtk-speak.el | 241 |
dtk-interp-note
Method name | Module | At line |
dtk-interp-note | dtk-interp.el | 112 |
dtk-speak.el | 254 | |
dtk-speak.el | 260 |
dtk-interp-queue
Method name | Module | At line |
dtk-interp-queue | dtk-interp.el | 125 |
dtk-speak.el | 491 | |
dtk-speak.el | 534 | |
dtk-speak.el | 546 | |
dtk-speak.el | 551 |
dtk-interp-queue-set-rate
Method name | Module | At line |
dtk-interp-queue-set-rate | dtk-interp.el | 131 |
dtk-interp-speak
Method name | Module | At line |
dtk-interp-speak | dtk-interp.el | 139 |
dtk-speak.el | 557 |
dtk-interp-say
Method name | Module | At line |
dtk-interp-say | dtk-interp.el | 147 |
dtk-speak.el | 1655 |
dtk-interp-dispatch
Method name | Module | At line |
dtk-interp-dispatch | dtk-interp.el | 157 |
dtk-speak.el | 568 |
dtk-interp-stop
Method name | Module | At line |
dtk-interp-stop | dtk-interp.el | 166 |
dtk-speak.el | 574 |
dtk-interp-sync
Method name | Module | At line |
dtk-interp-sync | dtk-interp.el | 173 |
dtk-speak.el | 1546 | |
emacspeak-setup.el | 110 | |
emacspeak-speak.el | 224 |
dtk-interp-letter
Method name | Module | At line |
dtk-interp-letter | dtk-interp.el | 189 |
dtk-speak.el | 1642 |
dtk-interp-say-version
Method name | Module | At line |
dtk-interp-say-version | dtk-interp.el | 197 |
dtk-speak.el | 853 |
dtk-interp-set-rate
Method name | Module | At line |
dtk-interp-set-rate | dtk-interp.el | 202 |
dtk-speak.el | 673 |
dtk-interp-set-character-scale
Method name | Module | At line |
dtk-interp-set-character-scale | dtk-interp.el | 211 |
dtk-speak.el | 728 |
dtk-interp-toggle-split-caps
Method name | Module | At line |
dtk-interp-toggle-split-caps | dtk-interp.el | 220 |
dtk-interp-toggle-capitalization
Method name | Module | At line |
dtk-interp-toggle-capitalization | dtk-interp.el | 229 |
dtk-interp-toggle-allcaps-beep
Method name | Module | At line |
dtk-interp-toggle-allcaps-beep | dtk-interp.el | 238 |
dtk-interp-set-punctuations
Method name | Module | At line |
dtk-interp-set-punctuations | dtk-interp.el | 248 |
dtk-speak.el | 807 |
dtk-interp-reset-state
Method name | Module | At line |
dtk-interp-reset-state | dtk-interp.el | 257 |
dtk-speak.el | 848 |
dtk-interp-pause
Method name | Module | At line |
dtk-interp-pause | dtk-interp.el | 264 |
dtk-speak.el | 873 | |
dtk-speak.el | 879 |
dtk-interp-resume
Method name | Module | At line |
dtk-interp-resume | dtk-interp.el | 272 |
dtk-speak.el | 906 |
The dtk-interp methods are not used outside the modules listed above (essentially dtk-speak, emacspeak-setup and emacspeak-speak). Hence these three modules must be modified to be made server independent.
However, the macro tts-with-punctuations is used in many higher-level modules, so it must be modified to be made server independent. Hence the module dtk-interp must be slightly modified as well.
The method tts-with-punctuations in module dtk-interp at line 62 must be modified since, even though its name is tts-something, it is clearly not server independent.
Every call to a dtk-interp-xxx method must be replaced by the corresponding tts-interp-xxx method. The binding between these aliases and the suitable method is done by the tts-setup module (cf. 6.7.3.4).
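This binding can be pictured as a set of aliases installed at startup. The following is only a sketch: the helper name and the exact list of operations are hypothetical, but the tts-interp-xxx naming comes from the text and fst-interp is the Festival client module named below.

```elisp
;; Sketch: point each generic tts-interp-OP alias at the
;; server-specific implementation, e.g. fst-interp-OP for Festival.
(defun my-tts-bind-interp (backend)
  "Alias tts-interp-OP to BACKEND-interp-OP for a few operations."
  (dolist (op '("speak" "queue" "stop" "sync" "letter"))
    (defalias (intern (format "tts-interp-%s" op))
      (intern (format "%s-interp-%s" backend op)))))
```

Calling, say, (my-tts-bind-interp "fst") would then route every generic call to the Festival client, provided fst-interp-speak and the others are defined; binding to another backend only changes the prefix.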
The method dtk-speak in module dtk-speak at line 1517 should integrate the language configuration, since in E.F.M. the language is a speech parameter just like the speech rate, the punctuation mode, etc. There is no need for a separate method for the E.F.M. situation, since this language variable will simply be ignored in the non-multilingual cases.
The fst-interp module must be a fully functional Festival client and implement all Emacspeak requirements. Therefore some server-side customizations are needed and must be implemented in efm.scm (in particular the queuing mechanism).
sh at line 90, arguments duration
t at line 100, arguments pitch,duration
notes_initialize at line 106, arguments noarg
notes_shutdown at line 110, arguments noarg
n at line 116, arguments instrument,pitch,duration,target,step
q at line 128, arguments string
r at line 134, arguments rate
d at line 142, arguments noarg
tts_say at line 160, arguments string
s at line 168, arguments noarg
tts_sync_state at line 179, arguments punctuation-mode,capitalize,allcaps-beep,split-caps,speech-rate
l at line 192, arguments string
version at line 200, arguments noarg
tts_set_speech_rate at line 205, arguments string
tts_set_character_scale at line 214, arguments string
tts_split_caps at line 223, arguments string
tts_capitalize at line 232, arguments string
tts_allcaps_beep at line 241, arguments string
tts_set_punctuations at line 251, arguments string
tts_reset at line 259, arguments noarg
tts_pause at line 267, arguments noarg
tts_resume at line 275, arguments noarg
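For illustration, this is how an Emacs client might drive such a server, writing the single-letter commands listed above to the server subprocess. The brace-and-newline wire syntax shown here is an assumption modeled on the usual Emacspeak speech-server conventions, and the function name is hypothetical:

```elisp
;; Sketch: queue a string with `q', then dispatch the queue with `d',
;; matching the "q ... arguments string" and "d ... noarg" entries
;; in the command list above.
(defun my-efm-say (proc text)
  "Queue TEXT on the speech server process PROC and speak it."
  (process-send-string proc (format "q {%s}\nd\n" text)))
```

The stop command would be sent the same way (for instance "s\n"), which is what makes interrupting speech from the client side immediate.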
There are two speech-server-independent methods, tts-define-voice-from-speech-style and tts-voice-defined-p, which are bound to the dectalk-specific methods dectalk-define-voice-from-speech-style and dectalk-voice-defined-p.
dectalk-define-voice-from-speech-style
Method name | Module | At line |
dectalk-define-voice-from-speech-style | acss-structure.el | 104 |
dectalk-voices.el | 571 | |
dectalk-voices.el | 606 |
dectalk-voice-defined-p
Method name | Module | At line |
dectalk-voice-defined-p | acss-structure.el | 102 |
dectalk-voices.el | 83 | |
dectalk-voices.el | 603 | |
dectalk-voices.el | 605 |
So the two methods festival-define-voice-from-speech-style and festival-voice-defined-p must be implemented.
The method tts-define-voice-from-speech-style is used at the following places:
tts-define-voice-from-speech-style
Method name | Module | At line |
tts-define-voice-from-speech-style | acss-structure.el | 103 |
acss-structure.el | 144 | |
dectalk-voices.el | 606 | |
emacspeak-ansi-color.el | 91 | |
outloud-voices.el | 462 |
Indeed this method seems to be superseded by the following one, but that one does not seem to be used ...
acss-personality-from-speech-style
Method name | Module | At line |
acss-personality-from-speech-style | acss-structure.el | 106 |
emacspeak-w3.el | 1587 | |
voice-setup.el | 197 |
Indeed the last method is itself superseded by the following:
voice-setup-personality-from-style
Method name | Module | At line |
voice-setup-personality-from-style | emacspeak-wizards.el | 2591 |
voice-setup.el | 193 | |
voice-setup.el | 259 |
dtk-speak-using-voice
Method name | Module | At line |
dtk-speak-using-voice | dtk-speak.el | 485 |
dtk-speak.el | 544 | |
emacspeak-advice.el | 1919 | |
emacspeak-calendar.el | 95 | |
emacspeak-cperl.el | 103 | |
emacspeak-python.el | 129 |
Indeed this method is called by dtk-format-text-and-speak in module dtk-speak at line 524.
dtk-format-text-and-speak in module dtk-speak at line 524 :
(let ((last nil)
      (personality (get-text-property start 'personality)))
  (while (and (< start end)
              (setq last (next-single-property-change
                          start 'personality (current-buffer) end)))
    (if personality
        (dtk-speak-using-voice personality (buffer-substring start last))
      (dtk-interp-queue (buffer-substring start last)))
    (setq start last
          personality (get-text-property last 'personality))) ; end while
  )) ; end clause
dtk-speak-using-voice in module dtk-speak at line 485 :
(dtk-interp-queue
 (format "%s%s %s \n"
         (tts-get-voice-command voice) text tts-voice-reset-code))
The mechanism implemented in voice-setup and performed by the method def-voice-font in module voice-setup at line 161 associates a voice and a personality with any face.
Indeed the voice mechanism consists of two dual methods: tts-define-voice-from-speech-style (which has two parameters, a name and a style) and tts-get-voice-command, which must produce the command from the name. The command has been recorded by the former. The process in between can be whatever!
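That contract can be made concrete with a small sketch. Only the two roles come from the text; the table variable, the function names and the shape of the recorded command are assumptions:

```elisp
;; Sketch of the dual-method contract: whatever happens in between,
;; the command recorded for a voice name must come back out for
;; that same name.
(defvar my-voice-table (make-hash-table :test #'eq)
  "Maps voice symbols to server command strings.")

(defun my-define-voice-from-speech-style (name style)
  "Record under NAME a command string derived from STYLE.
STYLE stands for the acss speech-style object; deriving a real
Festival command from it is the interesting part left open here."
  (puthash name (format "(voice_%s)" style) my-voice-table))

(defun my-get-voice-command (name)
  "Return the command previously recorded for NAME, or nil."
  (gethash name my-voice-table))
```

A festival-voice-defined-p, as required above, then reduces to checking whether the lookup returns non-nil.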
To realize the integration sketched in 6.3, it was necessary to write an intermediate layer between Emacspeak and Festival. In particular it was necessary to implement features like punctuation handling on the Festival side. Instead of reinventing the wheel I looked for existing packages that might do the job and found Festival Freebsoft Utils, a Festival-side Scheme layer that did precisely what I wished. But Festival Freebsoft Utils was designed to work with SpeechDispatcher and its Emacs client Speechd-El. At this point I had not yet decided to stop using Emacspeak, but simply tried to adapt Speechd-El to emulate the lowest Emacspeak layer, interfacing it with Festival through SpeechDispatcher. The planned architecture was: Emacspeak –> Speechd-El –> SpeechDispatcher –> FranFest Festival –> Mbrola, temporarily giving the project the name E.S.D.F.F.M. around the years 2004 – 2005.
Anyway this architecture was never completely carried out.
Realizing the powerful features provided by Speechd-El, I left Emacspeak around the years 2005 – 2006. It was not straightforward to make Speechd-El implement a lower Emacspeak layer, since these two systems do not exactly share the same philosophy, and Speechd-El was suitable enough for my needs even if it lacked certain subtle features provided by Emacspeak. It might be appropriate to turn back to an architecture where the upper Emacspeak modules would be supported by the lower Speechd-El layers. The structure 6.8 simplified into: Speechd-El –> SpeechDispatcher –> FranFest Festival –> Mbrola. With this simplification the system could still be called E.S.D.F.F.M., where Emacspeak was simply replaced by Emacs.
At the beginning, around the years 2003 – 2004, the goal was to develop an Emacs client for Festival; indeed Festival has very powerful server features. With Speechd-El and SpeechDispatcher there was no need to write such a client/server application, since Speechd-El connects to SpeechDispatcher, which has the ability to connect to Festival.
However, this protocol involves many parsing/serializing/reparsing steps that I personally hate! By forcing the connection to be established via the SSIP protocol, many interesting Festival features are lost.
Anyway it was not directly possible to obtain what SpeechDispatcher provides with Festival alone, since the latter lacks the ability to schedule the speech flow. Indeed Festival implements an elementary mechanism that allows one to stop and resume the flow, but has no queue for dispatching messages according to a priority schedule as SpeechDispatcher does.
That is why, since 2012, I have been developing the so-called E.F.M._Client (cf. ??) / E.F.M._Server (cf. ??) system, which implements a message-scheduling mechanism that avoids the use of SpeechDispatcher. I believe this direct Emacs/Festival connection may allow very interesting developments, and that it is the most suitable tool for implementing a really powerful LaTeX audio reader for blind people. It certainly requires a good knowledge of Festival features that people who have tried to carry out such projects probably do not have. Nowadays the E.F.M. structure described in 6.9 became: Speechd-El –> FranFest Festival –> Mbrola ⇔ E.F.M._Server E.F.M._Client, allowing a return to the name E.F.M. for the project.
In fact Speechd-El does not really implement an Emacs Festival client, but its lower layer, implemented via eieio, is very modular and flexible, making it easy to plug such a client between the Speechd-El upper layers and Festival. This driver (as it is called in Speechd-El dialect) should be part of E.F.M. but at the moment is not. The communication between Emacs and Festival implemented by this driver is based on the ScmEl protocol, which I developed for this purpose but which also has a really nice application in EcaScheme, more or less based on the same ideas.