Synthetic Voiceovers
Our AI synthetic voices, provided by Eleven Labs, have a high degree of customizability. Read below to understand more about how to change the pausing, pacing, emotion, pronunciation, and more.
Generating high-quality voiceovers can be a difficult and time-consuming task. However, with Arcade, you can effortlessly produce synthetic voiceovers that sound both professional and natural simply by inputting your script.
You can add voiceovers to chapters and image steps to provide clear and concise explanations, improving the user experience.
Arcade's synthetic voiceovers support 29 languages, with several accents available for languages like Portuguese, Spanish, and others.
Arabic
Bulgarian
Chinese
Croatian
Czech
Danish
Dutch
English
Filipino
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Korean
Malay
Polish
Portuguese
Romanian
Russian
Slovak
Spanish
Swedish
Tamil
Turkish
Ukrainian
We also provide closed captioning (subtitles) for Arcades with voiceovers.
Please reach out to us! We can add more languages and accents as requested.
You can use our camera and microphone recording settings to record your own voice. If you're interested in cloning your own speaking voice, reach out to us on our support channels.
Effective techniques to guide ElevenLabs AI in adding pauses, conveying emotions, and pacing the speech.
There are a few ways to introduce a pause or break and influence the rhythm and cadence of the speaker. The most consistent way is programmatically, using the syntax `<break time="1.5s" />`. This creates an exact, natural pause in the speech. It is not just added silence between words: the AI has an actual understanding of this syntax and will add a natural pause.
However, since this is more than just inserted silence, how the AI handles these pauses can vary. As usual, the voice used plays a pivotal role in the output. Some voices, those trained with a few "uh"s and "ah"s in them, have been shown to sometimes insert those vocal mannerisms during the pauses, just as a real speaker might.
An example could look like this:
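A sketch of a break tag in a script (the surrounding sentences are our own illustrative wording):

```xml
"Give me one second to think about it." <break time="1.0s" /> "Yes, that would work."
```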
Break time should be specified in seconds; the AI can handle pauses of up to 3 seconds in length. Break tags can be used in Speech Synthesis and via the API, but are not yet available for Projects.
Please avoid using an excessive number of break tags, as that has been shown to cause some instability in the AI. The AI's speech might start speeding up and become very fast, or it might introduce more noise in the audio and a few other strange artifacts. We are working on resolving this.
These options are inconsistent and might not always work. We recommend using the syntax above for consistency.
One trick that seems to provide the most consistent output (apart from the break syntax above) is a simple dash `-` or em-dash `—`. You can even add multiple dashes, such as `-- --`, for a longer pause. An ellipsis `...` can sometimes also add a pause between words, but it usually also adds some "hesitation" or "nervousness" to the voice that might not always fit.
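For instance, dashes could be used in a script like this (the wording is our own illustration):

```
"It - is - getting late."
"Let me think -- -- alright, I have an idea."
```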
This feature is currently only supported in English.
In certain instances, you may want the model to pronounce a word, name, or phrase in a specific way. Pronunciation can be specified using standardised pronunciation alphabets. Currently we support the International Phonetic Alphabet (IPA) and the CMU Arpabet. Pronunciations are specified by wrapping words using the Speech Synthesis Markup Language (SSML) phoneme tag.
To use this feature, wrap the desired word or phrase in a `<phoneme alphabet="ipa" ph="your-IPA-Pronunciation-here">word</phoneme>` tag for IPA, or a `<phoneme alphabet="cmu-arpabet" ph="your-CMU-pronunciation-here">word</phoneme>` tag for CMU Arpabet. Replace `"your-IPA-Pronunciation-here"` or `"your-CMU-pronunciation-here"` with the desired IPA or CMU Arpabet pronunciation.
An example for IPA:
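A sketch of the IPA form, using the word "actually" as our own illustrative example:

```xml
<phoneme alphabet="ipa" ph="ˈæktʃuəli">actually</phoneme>
```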
An example for CMU Arpabet:
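The same word sketched in CMU Arpabet (again an illustrative example of the syntax):

```xml
<phoneme alphabet="cmu-arpabet" ph="AE K CH UW AH L IY">actually</phoneme>
```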
It is important to note that this only works per word. This means that if, for example, you have a first and last name that you want pronounced a certain way, you will have to specify the pronunciation for each word individually.
English is a lexical stress language, which means that within multi-syllable words, some syllables are emphasized more than others. The relative salience of each syllable is crucial for proper pronunciation and for distinguishing meaning, so it is very important to include lexical stress when writing both IPA and CMU Arpabet; otherwise, the outcome might not be optimal.
Take the word “talon”, for example.
Incorrect:
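A transcription without stress markers might look like this:

```xml
<phoneme alphabet="cmu-arpabet" ph="T AE L AH N">talon</phoneme>
```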
Correct:
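With stress markers added (`1` for primary stress, `0` for no stress), it might look like this:

```xml
<phoneme alphabet="cmu-arpabet" ph="T AE1 L AH0 N">talon</phoneme>
```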
The first example might switch between putting the primary emphasis on AE and AH, while the second example will always be pronounced reliably with the emphasis on AE and no stress on AH.
If you write it as:
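For example, with the stress markers reversed:

```xml
<phoneme alphabet="cmu-arpabet" ph="T AE0 L AH1 N">talon</phoneme>
```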
It will always put emphasis on AH instead of AE.
If you want the AI to express a specific emotion, the best approach is to write in a style similar to that of a book. To find good prompts to use, you can flip through some books and identify words and phrases that convey the desired emotion.
For instance, you can use dialogue tags to express emotions, such as `he said, confused` or `he shouted angrily`. These types of prompts help the AI understand the desired emotional tone and try to generate a voiceover that accurately reflects it. With this approach, you can create highly customized voiceovers that are perfect for a variety of applications.
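A short illustrative script using dialogue tags (the wording is our own example):

```
"That can't be right," he said, confused.
"Get out of here!" he shouted angrily.
```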
Keep in mind that the AI will read exactly what you give it, so you will need to edit the dialogue tags out of the script or the resulting audio afterwards. The AI can also sometimes infer the intended emotion from the text's context, even without the use of tags.
This is not always perfect, since you are relying on the AI's discretion to work out whether something is sarcastic, funny, or something else entirely from the context of the text.
Based on varying user feedback and test results, it has been theorized that using a single long sample for voice cloning brings more success for some users than using multiple smaller samples. The current theory is that the AI stitches the smaller samples together without any separation, causing pacing issues and faster speech. This is likely why some people have reported fast-talking clones.
To control the pacing of the speaker, you can use the same approach as in emotion, where you write in a style similar to that of a book. While it’s not a perfect solution, it can help improve the pacing and ensure that the AI generates a voiceover at the right speed. With this technique, you can create high-quality voiceovers that are both customized and easy to listen to.