

E.F.M.

P.L. Nageoire

2022/09/29

Introduction

Nowadays E.F.M. is a large collection of tools, not all of which are contained in the so-called E.F.M. package. The package organization should probably be improved. But since many parts of the system are in a transitional state, the global architecture is probably not completely clear, even to the author. In fact, the ideas behind this system changed whenever new tools that could be integrated into the project were discovered. The history of the project, given below, may let the reader understand the current state of the project.

Requirements

There are global requirements, but in fact each component has its own requirements:

speech_tools requirements

If you configured with est_enable=yes, which is the default option, you should specify where to find the speech_tools sources: est_sources=<somewhere>.

plnutils requirements

In fact the speech_tools headers are needed by plnutils. The latter is needed by E.F.M._Client and E.F.M._Server. The plnutils_sources=<somewhere> option tells the system where to find the sources, but if you unpack or download plnutils at the same place where you unpack/download the rest of the system, it will be found automatically.

festival requirements

The festival_sources=<somewhere> option tells the system where to find the Festival sources, which are needed if festival_enable=yes. This is the default behavior, and more or less required if you want E.F.M. to be available.

mbrola requirements

You can indicate where the Mbrola voices can be found: mbrola_voice_sources=<somewhere>.

Installation

Installation runs according to the following schedule:

speech_tools installation

Can be controlled with speechtools_enable=yes|no if you want to inhibit automatic installation of this package. Beware: by default the system will try to build in /usr/share, and if the user who performs the build does not have permissions on this directory, it will fail. You can control this behavior with the est_path=<PATH> option, setting it to a path where you have write permissions.

plnutils installation

Can be controlled with plnutils_enable=yes|no if you want to inhibit automatic installation of this package.

festival installation

Can be controlled with festival_enable=yes|no if you want to inhibit automatic installation of this package. The same write restrictions apply here as for speech_tools. Moreover, the building process assumes the sources are all available under the same root.

EfmScheduler installation

EfmClient installation

Shell scripts installation

Startup scripts installation

PlnScmEl installation

Emacs interface installation

This interface installation requires … to be available.

Freebsoft installation

Mbrola installation

Can be controlled with mbrola_enable=yes|no if you want to inhibit automatic installation of this package.

Notes on the project structure (PLN, Sun Jun 2 06:30:17 2019)

speech_tools: should be installed before anything else, since it is totally independent but needed, in particular, even by plnutils.

plnutils: is the second thing that should be installed, since it is needed by almost everything after that. In particular it requires the speech_tools headers, which should therefore be installed first.

What is E.F.M.?

I would like the reader to be able to determine, just by reading this first paragraph, whether he needs to read this documentation completely, and even whether he needs to install E.F.M. at all.

First of all, E.F.M. stands for

Emacspeak Festival Mbrola

and the most important word is Emacspeak. I cannot explain here in detail what Emacspeak is; I will simply remind you that Emacspeak is a complete audio environment allowing blind people to use the powerful Emacs text editor. In a certain sense, the E.F.M. project is an extension providing further facilities for blind people to use Emacs.

Thus you can use E.F.M. or parts of the system for other purposes, but you must immediately notice that it was not designed for them, and the author cannot assure you that it will be appropriate to your needs.

However, if you have begun to read this document without knowing what Emacspeak is, I would refer you to the Emacspeak home page.

If you now know better what Emacspeak is, simply notice, without going into any technical aspect of the system, that E.F.M. provides extended features for Emacspeak in two directions:

A free speech synthesizer

Festival is a free software speech synthesizer. This means in particular that you can use E.F.M. without supplementary hardware and that you do not need to pay anything to use this system. It will be especially convenient with a laptop, since you need only the laptop and no other box. Festival also has many other interesting features, which I will not describe here, that make it one of the most powerful speech systems you can find.

A multilingual speech system

Moreover, E.F.M. allows the user to use several languages simultaneously. This means that you can switch between the various available languages with a very simple command.

For instance, I am French, and I would be very pleased if Emacspeak could read French documents for me in my own native language.

Some people have asked me whether E.F.M. can speak Greek or Polish. But you must have a certain knowledge of the language for which you want to realize the customization. Since I am not a specialist in these languages, I would be very interested in collaborating with experts. If you are interested too, please contact me!

I will briefly explain the E.F.M. structure in the next section, without too many technical details, in order to better explain what my own contribution to this project was.

History

The author

You can contact me at pollock.nageoire@wanadoo.fr, visit my home page (in French), or visit the E.F.M. home page. For the author, who is blind, an entry into the Emacs world was made possible by Emacspeak around 1998. Emacspeak was able to drive an Apollo hardware voice synthesizer produced by Dolphin Systems. I do not really remember whether it was necessary to write a small server for this device, and I do not know whether I kept this code somewhere.

The Apollo system was multilingual, but language switching had to be done manually, with no autodetection based on text content. In fact Emacspeak was designed by an English-speaking user for English-speaking users. Even if it was and remains a really marvelous tool, it has always lacked the ability to manage several languages simultaneously. Around the years 2000 – 2003 I discovered Festival. I started considering it as a replacement for the old Apollo hardware voice synthesizer. In fact CPUs were becoming more and more powerful, allowing the system itself to do the voice synthesis without needing an external hardware device.

At this point it was natural to try to write an Emacspeak client for the Festival server. Since the author is not only blind but also French, he had to find a solution to enable the multilingual features that were provided by the Apollo device. The so-called Mbrola project was the solution: it uses the Mbrola software voice synthesis system to produce French speech. Indeed, Festival can natively only generate English speech, but it has the ability to integrate almost any voice synthesis system. This feature is probably not known by many people, who consider Festival to be only a big and incomprehensible system.

From this point the goal was to integrate the above-mentioned components Emacspeak, Festival and Mbrola, which respectively gave the E, F and M of the project name. The F may also come from Freebsoft, but this is a very insignificant detail. In fact that code has never been part of the system and has always been maintained as a separate project. One can also discuss the meaning of the E in the project name, since Emacspeak was abandoned when I discovered Speech Dispatcher and speechd-el (see the history below). Anyway, Emacs remains the client side integrating the system, and nowadays the E in E.F.M. stands for Emacs and no longer for Emacspeak.

What is (will be) new?

There are technical reasons why it was necessary to make a new release of E.F.M., but I will not explain them here, since this section must not contain any technical aspects.

However, even if you are not a developer, you will immediately understand if you know that the previous version was something like an experimental version of this system. Thus it did not allow the user to use all the features.

I will only mention the following points, which are accessible to any user:

The very sharp aspects of the voice configuration in Emacspeak were absolutely not implemented.

Hence this release must fix all these bugs and provide a fully functional speech server for Emacspeak.

See the FAQ just below, where people who already use E.F.M. mention the problems they encountered.

Frequently asked questions

How can I activate changes in voice while running E.F.M.?

2003/06/06. Well, the “voice philosophy” changed between versions 17.0 and 18.0 of Emacspeak. First you must know (or you already know) that the general principle of Emacspeak is to associate voices with faces.

In versions before 17.0, different faces were associated with different voices (different female and male voices, when such changes were allowed by the speech synthesizer).

Starting from version 18.0, the differences between faces were partially translated into differences of intonation within the same voice, as well as differences of voices.

Finally you have a mixed system using both voice changes and voice switching.

For the moment E.F.M. only implements voice switching. This means that face changes can only be rendered by switching between different male or female voices. The intonation modifications required by the new features of Emacspeak are not yet implemented by E.F.M. A piece of Scheme code for Festival must be written for this purpose and others.

You will notice as well that the selection of punctuation (all, some or none), which is one of the most useful features of Emacspeak, is not implemented either. It could be done by the same piece of Scheme code:

AND I HOPE TO HAVE ENOUGH TIME TO MAKE IT AS SOON AS POSSIBLE !

For the moment you can replace certain voice changes rendering face modifications by voice switches. You can do that by customizing the association between voices and faces with the command Ctrl-E C in Emacspeak.

It will add a few lines like the following to your .emacs file:

(custom-set-variables
  ;; custom-set-variables was added by Custom -- don't edit or cut/paste it!
  ;; Your init file should contain only one such instance.
 '(voice-bolden-medium-settings (quote (betty 3 6 6 nil nil)))
 '(voice-bolden-settings (quote (betty 1 6 6 nil nil)))
 '(voice-lock-function-name-personality (quote acss-betty-a3-p6-s6)))
(custom-set-faces
  ;; custom-set-faces was added by Custom -- don't edit or cut/paste it!
  ;; Your init file should contain only one such instance.
 )

In E.F.M., the same “philosophy” is kept for the voices, and the present release should implement all the features required by the Emacspeak system. Therefore a fully functional festival-voices module must be implemented.

What about the possibility to stop the TTS when desired by the user?

I hope that this problem will be solved in the present release.

Setting the speech rate

Question

According to emacspeak-18.0-festival.patch, I know that the speech rate is set to 0.6:

+(defcustom festival-default-speech-rate 0.6
+  "Rate for festival P.L. Nageoire 2003/04/30"
+  :group 'tts
+  :type 'integer )

I have also found the command dtk-set-predefined-speech-rate, which does not seem to work. Is it a bug, or is it just not implemented for emacspeak and festival?

Answer

It was simply not implemented, but it will be in this release. A sketch of what the implementation could look like is given below.
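Purely as an illustration of the direction this could take, and not the actual Emacspeak or E.F.M. code: the sketch reuses festival-default-speech-rate from the patch quoted above, and assumes the tts-interp-queue-set-rate alias introduced by the tts-setup module described later in this document.

;; A hypothetical sketch only; the real command may compute rates differently.
(defun dtk-set-predefined-speech-rate (factor)
  "Set the speech rate to FACTOR times the default Festival rate."
  (interactive "p")
  (tts-interp-queue-set-rate
   (* factor festival-default-speech-rate)))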

Volume control

Question

How can I turn up the volume in the e/f/m system? Generally, how can I control the volume?

Answer

Festival does not allow volume control, so it must be done by the general sound system. It could be implemented in E.F.M., but the fact that there exist various sound systems makes it not easy. I cannot promise that this feature will appear in E.F.M. soon; in the meantime, a workaround along the following lines is possible.
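This sketch assumes an ALSA system where the amixer command line tool is available; other sound systems would need a different command, and the efm-volume-up name is purely hypothetical, not part of E.F.M.

;; Raise the master volume by 5% through ALSA's amixer tool.
(defun efm-volume-up ()
  "Raise the master volume by 5 percent using amixer."
  (interactive)
  (call-process "amixer" nil nil nil "set" "Master" "5%+"))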

Short “technical” survey

As you have probably already noticed, E.F.M. is a system built on three components: Emacspeak, Festival and Mbrola.

The role played by each part of the system will be detailed in a later section. You just need to know for the moment that the Mbrola voice synthesizer is needed for the multilingual aspect, and especially for French speech.

Hence there are two interfaces between these three components:

The Festival/Mbrola interface

The interface between Festival and Mbrola is inherited from the Festival project. It is possible that I will make a new implementation of this interface for technical reasons.

The Emacspeak/Festival interface

My own contribution to this project consists essentially of the interface between Emacspeak and Festival, which is a client/server application.

I also tried to make the installation of the whole system more convenient.

The project structure

Remarks on the structure

tts-with-punctuations

Method name              Module                         At line
tts-with-punctuations    dtk-interp.el                  62
                         emacspeak-advice.el            563
                         emacspeak-advice.el            571
                         emacspeak-advice.el            579
                         emacspeak-advice.el            587
                         emacspeak-advice.el            595
                         emacspeak-advice.el            647
                         emacspeak-advice.el            693
                         emacspeak-advice.el            700
                         emacspeak-advice.el            713
                         emacspeak-advice.el            724
                         emacspeak-advice.el            732
                         emacspeak-advice.el            740
                         emacspeak-advice.el            748
                         emacspeak-advice.el            756
                         emacspeak-advice.el            766
                         emacspeak-advice.el            783
                         emacspeak-advice.el            800
                         emacspeak-advice.el            803
                         emacspeak-advice.el            819
                         emacspeak-advice.el            827
                         emacspeak-advice.el            833
                         emacspeak-advice.el            839
                         emacspeak-advice.el            847
                         emacspeak-advice.el            854
                         emacspeak-advice.el            857
                         emacspeak-advice.el            867
                         emacspeak-advice.el            873
                         emacspeak-advice.el            881
                         emacspeak-advice.el            884
                         emacspeak-advice.el            894
                         emacspeak-advice.el            903
                         emacspeak-advice.el            920
                         emacspeak-advice.el            933
                         emacspeak-advice.el            951
                         emacspeak-advice.el            969
                         emacspeak-advice.el            976
                         emacspeak-advice.el            988
                         emacspeak-advice.el            1009
                         emacspeak-advice.el            1016
                         emacspeak-advice.el            1039
                         emacspeak-advice.el            1071
                         emacspeak-advice.el            1295
                         emacspeak-advice.el            1307
                         emacspeak-advice.el            1314
                         emacspeak-advice.el            1321
                         emacspeak-advice.el            1329
                         emacspeak-advice.el            1945
                         emacspeak-advice.el            1952
                         emacspeak-advice.el            2717
                         emacspeak-calc.el              76
                         emacspeak-calc.el              86
                         emacspeak-calendar.el          132
                         emacspeak.el                   185
                         emacspeak-erc.el               286
                         emacspeak-erc.el               306
                         emacspeak-eshell.el            98
                         emacspeak-fix-interactive.el   128
                         emacspeak-metapost.el          65
                         emacspeak-speak.el             1538
                         emacspeak-speak.el             1560
                         emacspeak-speak.el             1629
                         emacspeak-speak.el             1770
                         emacspeak-speak.el             1802
                         emacspeak-tapestry.el          101
                         emacspeak-wizards.el           2646

dtk-interp-silence

Method name         Module         At line
dtk-interp-silence  dtk-interp.el  87
                    dtk-speak.el   231

dtk-interp-tone

Method name      Module         At line
dtk-interp-tone  dtk-interp.el  97
                 dtk-speak.el   278

dtk-interp-notes-initialize

Method name                  Module         At line
dtk-interp-notes-initialize  dtk-interp.el  104
                             dtk-speak.el   236

dtk-interp-notes-shutdown

Method name                Module         At line
dtk-interp-notes-shutdown  dtk-interp.el  108
                           dtk-speak.el   241

dtk-interp-note

Method name      Module         At line
dtk-interp-note  dtk-interp.el  112
                 dtk-speak.el   254
                 dtk-speak.el   260

dtk-interp-queue

Method name       Module         At line
dtk-interp-queue  dtk-interp.el  125
                  dtk-speak.el   491
                  dtk-speak.el   534
                  dtk-speak.el   546
                  dtk-speak.el   551

dtk-interp-queue-set-rate

Method name                Module         At line
dtk-interp-queue-set-rate  dtk-interp.el  131

dtk-interp-speak

Method name       Module         At line
dtk-interp-speak  dtk-interp.el  139
                  dtk-speak.el   557

dtk-interp-say

Method name     Module         At line
dtk-interp-say  dtk-interp.el  147
                dtk-speak.el   1655

dtk-interp-dispatch

Method name          Module         At line
dtk-interp-dispatch  dtk-interp.el  157
                     dtk-speak.el   568

dtk-interp-stop

Method name      Module         At line
dtk-interp-stop  dtk-interp.el  166
                 dtk-speak.el   574

dtk-interp-sync

Method name      Module              At line
dtk-interp-sync  dtk-interp.el       173
                 dtk-speak.el        1546
                 emacspeak-setup.el  110
                 emacspeak-speak.el  224

dtk-interp-letter

Method name        Module         At line
dtk-interp-letter  dtk-interp.el  189
                   dtk-speak.el   1642

dtk-interp-say-version

Method name             Module         At line
dtk-interp-say-version  dtk-interp.el  197
                        dtk-speak.el   853

dtk-interp-set-rate

Method name          Module         At line
dtk-interp-set-rate  dtk-interp.el  202
                     dtk-speak.el   673

dtk-interp-set-character-scale

Method name                     Module         At line
dtk-interp-set-character-scale  dtk-interp.el  211
                                dtk-speak.el   728

dtk-interp-toggle-split-caps

Method name                   Module         At line
dtk-interp-toggle-split-caps  dtk-interp.el  220

dtk-interp-toggle-capitalization

Method name                       Module         At line
dtk-interp-toggle-capitalization  dtk-interp.el  229

dtk-interp-toggle-allcaps-beep

Method name                     Module         At line
dtk-interp-toggle-allcaps-beep  dtk-interp.el  238

dtk-interp-set-punctuations

Method name                  Module         At line
dtk-interp-set-punctuations  dtk-interp.el  248
                             dtk-speak.el   807

dtk-interp-reset-state

Method name             Module         At line
dtk-interp-reset-state  dtk-interp.el  257
                        dtk-speak.el   848

dtk-interp-pause

Method name       Module         At line
dtk-interp-pause  dtk-interp.el  264
                  dtk-speak.el   873
                  dtk-speak.el   879

dtk-interp-resume

Method name        Module         At line
dtk-interp-resume  dtk-interp.el  272
                   dtk-speak.el   906

The dtk-interp module

The dtk-interp methods that are not used above:

Hence these three modules must be modified to be made server independent.

However, the macro tts-with-punctuations is used in many higher-level modules, so it must be modified to be made server independent. Hence the module dtk-interp must be slightly modified as well.

Modified modules

dtk-interp

The method tts-with-punctuations in module dtk-interp at line 62 must be modified: even though its name starts with tts-, it is clearly not server independent. A possible server-independent form is sketched below.
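This is a minimal sketch of such a rewrite, not the actual E.F.M. code; it assumes the tts-interp-set-punctuations alias introduced by the aliasing scheme described next, and uses the standard Emacspeak variable dtk-punctuation-mode.

(defmacro tts-with-punctuations (setting &rest body)
  "Execute BODY with punctuation mode SETTING, then restore the old mode."
  `(let ((tts-saved-punctuations dtk-punctuation-mode))
     (unwind-protect
         (progn
           (tts-interp-set-punctuations ,setting)
           ,@body)
       ;; Always restore the previous punctuation mode, even on error.
       (tts-interp-set-punctuations tts-saved-punctuations))))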

dtk-speak

Aliasing dtk-interp-xxx

Every call to a dtk-interp-xxx method must be replaced by a call to the corresponding tts-interp-xxx method. The binding between these aliases and the suitable engine-specific method is done by the tts-setup module (described below); a minimal sketch of this binding follows.
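This is only a sketch of the idea, assuming a tts-engine variable that tts-setup could define; the fst-interp-xxx names follow the added-modules naming given later in this section.

;; Hypothetical engine selector; the real tts-setup module may differ.
(defvar tts-engine 'festival
  "Speech engine targeted by the tts-interp-xxx aliases.")

;; Bind each server-independent name to the engine-specific method.
(defalias 'tts-interp-queue
  (if (eq tts-engine 'festival) 'fst-interp-queue 'dtk-interp-queue))
(defalias 'tts-interp-speak
  (if (eq tts-engine 'festival) 'fst-interp-speak 'dtk-interp-speak))
(defalias 'tts-interp-set-punctuations
  (if (eq tts-engine 'festival)
      'fst-interp-set-punctuations
    'dtk-interp-set-punctuations))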

dtk-speak-using-voice

dtk-speak

The method dtk-speak in module dtk-speak at line 1517 should integrate the language configuration, since in E.F.M. the language is a speech parameter just like the speech rate, the punctuation mode, etc. There is no need to have a separate method for the multilingual situation and the other ones, since this language variable will simply be ignored in the non-multilingual cases. A small sketch of this idea follows.
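As a minimal sketch of that idea; the dtk-language variable and the tts-interp-set-language alias are assumed names, not existing E.F.M. code.

;; Treat the language as one more speech parameter, ignored by
;; servers that are not multilingual.
(defvar dtk-language "en"
  "Current speech language code, for instance \"en\" or \"fr\".")

(defun dtk-set-language (lang)
  "Select speech language LANG for subsequent speech."
  (interactive "sLanguage code: ")
  (setq dtk-language lang)
  (tts-interp-set-language lang))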

emacspeak-speak

emacspeak-setup

emacspeak

emacspeak-wizards

emacspeak-sounds

Added modules

festival-voices

fst-interp

fst-speak

tts-setup

The client server API

The fst-interp module must be a fully functional client and implement all the requirements. Therefore some server-side customizations are needed and must be implemented in efm.scm (in particular the queuing mechanism).

Server features

Silence: command sh at line 90; arguments: duration
Tone: command t at line 100; arguments: pitch, duration
notes_initialize: command notes_initialize at line 106; no arguments
notes_shutdown: command notes_shutdown at line 110; no arguments
Note: command n at line 116; arguments: instrument, pitch, duration, target, step
Queue: command q at line 128; arguments: string
Queue set rate: command r at line 134; arguments: rate
Speak: command d at line 142; no arguments
tts_say: command tts_say at line 160; arguments: string
stop: command s at line 168; no arguments
tts_sync_state: command tts_sync_state at line 179; arguments: punctuation-mode, capitalize, allcaps-beep, split-caps, speech-rate
Letter: command l at line 192; arguments: string
Version: command version at line 200; no arguments
tts_set_speech_rate: command tts_set_speech_rate at line 205; arguments: string
tts_set_character_scale: command tts_set_character_scale at line 214; arguments: string
tts_split_caps: command tts_split_caps at line 223; arguments: string
tts_capitalize: command tts_capitalize at line 232; arguments: string
tts_allcaps_beep: command tts_allcaps_beep at line 241; arguments: string
tts_set_punctuations: command tts_set_punctuations at line 251; arguments: string
tts_reset: command tts_reset at line 259; no arguments
tts_pause: command tts_pause at line 267; no arguments
tts_resume: command tts_resume at line 275; no arguments
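To make the command set concrete, here is a short sketch of driving such a server from Emacs Lisp. The host, the port number, and the exact wire framing (space-separated arguments, newline-terminated commands) are assumptions for illustration, not the documented E.F.M. protocol.

;; Connect to a hypothetical E.F.M. server and speak one queued string.
(let ((proc (open-network-stream "efm" nil "localhost" 6560)))
  (process-send-string proc "r 1.2\n")          ; Queue set rate
  (process-send-string proc "q Hello world\n")  ; Queue a string
  (process-send-string proc "d\n"))             ; Speak: dispatch the queue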

Voices control

Remarks

There are two speech-server-independent methods, tts-define-voice-from-speech-style and tts-voice-defined-p, which are bound to the dectalk-specific methods dectalk-define-voice-from-speech-style and dectalk-voice-defined-p.

dectalk-define-voice-from-speech-style

Method name                             Module             At line
dectalk-define-voice-from-speech-style  acss-structure.el  104
                                        dectalk-voices.el  571
                                        dectalk-voices.el  606

dectalk-voice-defined-p

Method name              Module             At line
dectalk-voice-defined-p  acss-structure.el  102
                         dectalk-voices.el  83
                         dectalk-voices.el  603
                         dectalk-voices.el  605

So the two methods festival-define-voice-from-speech-style and festival-voice-defined-p must be implemented.
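As a rough sketch of what these two methods, together with their dual festival-get-voice-command (the tts-get-voice-command counterpart discussed below), could look like. The hash-table storage and the generated command strings are assumptions for illustration, not the actual E.F.M. implementation.

;; Hypothetical registry mapping voice names to Festival selection commands.
(defvar festival-voice-table (make-hash-table :test #'equal)
  "Maps a voice name to the Festival command that selects it.")

(defun festival-voice-defined-p (name)
  "Return non-nil if voice NAME has already been defined."
  (gethash name festival-voice-table))

(defun festival-define-voice-from-speech-style (name style)
  "Define voice NAME from acss STYLE; STYLE is ignored in this sketch."
  (puthash name (format "(voice_%s)" name) festival-voice-table))

(defun festival-get-voice-command (name)
  "Return the command string recorded for voice NAME."
  (gethash name festival-voice-table))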

The method tts-define-voice-from-speech-style is used at the following places:

tts-define-voice-from-speech-style

Method name                         Module                   At line
tts-define-voice-from-speech-style  acss-structure.el        103
                                    acss-structure.el        144
                                    dectalk-voices.el        606
                                    emacspeak-ansi-color.el  91
                                    outloud-voices.el        462

Indeed this method seems to be wrapped by the following one, but that one does not seem to be used...

acss-personality-from-speech-style

Method name                         Module             At line
acss-personality-from-speech-style  acss-structure.el  106
                                    emacspeak-w3.el    1587
                                    voice-setup.el     197

Indeed the last method is itself wrapped by the following:

voice-setup-personality-from-style

Method name                         Module                At line
voice-setup-personality-from-style  emacspeak-wizards.el  2591
                                    voice-setup.el        193
                                    voice-setup.el        259

dtk-speak-using-voice

Method name            Module                 At line
dtk-speak-using-voice  dtk-speak.el           485
                       dtk-speak.el           544
                       emacspeak-advice.el    1919
                       emacspeak-calendar.el  95
                       emacspeak-cperl.el     103
                       emacspeak-python.el    129

Indeed this method is called from dtk-format-text-and-speak in module dtk-speak at line 524.

Voices mechanism

dtk-format-text-and-speak in module dtk-speak at line 524:

(let ((last nil)
      (personality (get-text-property start 'personality)))
  (while (and (< start end)
              (setq last
                    (next-single-property-change start 'personality
                                                 (current-buffer) end)))
    (if personality
        (dtk-speak-using-voice personality
                               (buffer-substring start last))
      (dtk-interp-queue (buffer-substring start last)))
    (setq start last
          personality
          (get-text-property last 'personality))) ; end while
  ))                                              ; end clause

dtk-speak-using-voice in module dtk-speak at line 485:

(dtk-interp-queue
 (format "%s%s %s \n"
         (tts-get-voice-command voice)
         text
         tts-voice-reset-code))))

The mechanism implemented in voice-setup, performed by the method def-voice-font in module voice-setup at line 161, associates a voice and a personality with any face.

Indeed the voices mechanism consists of two dual methods: tts-define-voice-from-speech-style (which takes two parameters, name and style) and tts-get-voice-command, which must produce the command from the name; the command has been recorded by the former. The process in between can be whatever you like!

speechd-el and Speech Dispatcher

To realize the integration sketched above, it was necessary to write an intermediate layer between Emacspeak and Festival. In particular, it was necessary to implement features like punctuation handling on the client side. Instead of reinventing the wheel, I looked for existing packages that might do the job and found that speechd-el was a side layer that did precisely what I wished. But speechd-el was designed to work with Speech Dispatcher and its client protocol. At this point I had not yet decided to stop using Emacspeak, but simply tried to adapt speechd-el to emulate the lowest Emacspeak layer, in order to interface Emacspeak with Festival through Speech Dispatcher. This planned architecture temporarily gave the name E.S.D.F.F.M. to the project, around the years 2004 – 2005.

Anyway, this architecture was never completely carried out.

speechd-el only

Realizing the powerful features provided by speechd-el, I left Emacspeak around the years 2005 – 2006. It had not been straightforward to make speechd-el emulate the lower Emacspeak layer, since these two systems do not exactly share the same philosophy. speechd-el was suitable enough for my needs, even if it lacked certain subtle features provided by Emacspeak. It seemed more accurate to turn to an architecture where the upper modules would be supported by the lower layers. The structure simplified accordingly; following this simplification, the system could still have been called E.S.D.F.F.M., where Emacspeak was simply replaced by speechd-el.

Without Speech Dispatcher

At the beginning, around the years 2003 – 2004, the goal was to develop an Emacspeak client for Festival. Indeed, Festival has very powerful server features. With Speech Dispatcher and speechd-el there was no need to write such a client/server application, since speechd-el connects to Speech Dispatcher, which has the ability to connect to Festival.

Anyway, this protocol involves many parsing/serializing/reparsing steps that I personally hate! By forcing the connection to be established via the SSIP protocol, many interesting Festival features are lost.

Anyway, it was not directly possible to obtain what Speech Dispatcher provides using Festival alone, since the latter lacks the ability to schedule the speech flow. Indeed, Festival implements an elementary mechanism that allows one to stop and resume the flow, but no queue allowing messages to be dispatched according to a priority schedule, as Speech Dispatcher does.

That is why, since 2012, I have been developing the so-called E.F.M._Client / E.F.M._Server system, which implements a message scheduling mechanism that avoids the use of Speech Dispatcher. I guess that this direct connection may allow very interesting developments and that it is the most suitable tool for implementing a really powerful audio reader for blind people. It certainly requires a good knowledge of Festival features that people who have tried to carry out such projects probably do not have. Nowadays the structure has become the direct E.F.M._Client ⇔ E.F.M._Server connection, allowing a return to the name E.F.M. for the project.

In fact E.F.M. does not really implement an Emacspeak client: its lower layer is implemented via speechd-el, which is very modular and flexible, allowing such a client to be easily plugged between the upper layers and E.F.M. This so-called driver should be part of speechd-el, but at the moment it is not. The communication implemented by this driver is based on the protocol that I developed for this purpose, but which also has a really nice application in another project that is more or less based on the same ideas.







Copyright © 2009 – 2022 Pierre L. Nageoire

