Vocaloid Wiki
! The following is a tutorial made for VOCALOID fans by fellow VOCALOID fans. !

The phonetic system forms the basis of speech playback in the VOCALOID software. Symbols used in the phoneme system are based on X-SAMPA.

Using the Phonetic System[]

Note: The following applies to the VOCALOID2 system onwards. While the programs work in a similar fashion, some things may not apply to the original VOCALOID or may work differently there than in VOCALOID2.

The Recording Process[]

The samples are gathered by having the provider read out a script in various keys while being recorded. The recording is then transferred into a library from which the VOCALOIDs pull their results. The libraries consist of the various recorded sounds, separated for use with the software.

For Japanese, the script is much simpler, and each phonetic sample can be divided across the notes with little trouble. This renders each note fairly precisely.

However, for English VOCALOIDs, the phonetic data has to be separated by cutting sections out of the recorded samples, because some sounds simply cannot be captured unless they are spoken as part of a word. This makes separating sounds for the English VOCALOIDs much harder to do. As a result, Japanese VOCALOIDs are often more precise than English ones in their diphone sounds.

Constructing Words[]

VOCALOID uses a method called Frequency-domain Singing Articulation Splicing and Shaping, a kind of concatenative synthesis. It takes a series of sustained sounds, plus diphone and triphone samples, from a sample library, as specified by the phonetic system, and reassembles them according to how the word is phonetically pronounced. For example, the word "sing" (IPA: sɪŋ, written as [s I N] in the VOCALOID Phonetic System) can be synthesized by concatenating the sequence of diphones "#-s, s~ɪ, ɪ~ŋ, ŋ-#".[1] Using the phonetic system, the user inputs the phonemes that make up the word, allowing the synthesizer to pick the correct sequence of diphones to reconstruct it. Because the vowel [ɪ] (VOCALOID: [I]) sounds different in the diphones s~ɪ and ɪ~ŋ, the software applies a "smoothing process" in the frequency domain, which blends both diphone samples into a coherent syllable fragment (without it, the results would be unnatural and glitchy).[2]
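The diphone sequence described above can be sketched in code. This is purely an illustrative model of the splicing idea, not part of any real VOCALOID API:

```python
def to_diphones(phonemes):
    """Turn a word's phoneme list into the diphone transitions the
    engine would concatenate; "#" marks silence at the word boundary."""
    units = ["#"] + list(phonemes) + ["#"]
    return [f"{a}-{b}" for a, b in zip(units, units[1:])]

# "sing" = [s I N] yields the four transitions listed above
print(to_diphones(["s", "I", "N"]))  # → ['#-s', 's-I', 'I-N', 'N-#']
```

The smoothing step that blends adjacent samples in the frequency domain is not modeled here; this only shows which units the engine must fetch.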

This way of reconstructing words is the same for all languages in which VOCALOID is available, and the phonetic libraries are arranged by the same method. The fundamental difference between them is the number of samples required to reconstruct each language, which is determined by its complexity. For example, English, a language with numerous consonant clusters, many vowels (including diphthongs), and a complex syllable structure, requires more diphone and triphone samples than Japanese, which has a simple syllable structure, practically no consonant clusters, and a five-vowel system.

In addition, the user cannot utilize phonemes that don't exist in the current voicebank. If the user manually enters a phoneme that the voicebank does not contain, no sound at all is produced when the VOCALOID is played back.

Due to the way the sound is recorded and articulated by the synthesizer's engine (the recordings comprise full words and syllables), phonetic and phonological phenomena such as coarticulation and assimilation, where phoneme sounds are affected by the surrounding sounds, are also reflected in the synthesized words. For that reason, the phonemes do not always produce the same results; they may sound different, or weaker or stronger, according to the preceding or following phoneme.[3] In short, a phoneme's context may affect its sound.

To make a consonant sound stronger than the following vowel, it may be necessary to edit the parameters. Adjusting the Velocity, editing the Brightness or the Breathiness of the consonant sound, or raising the Dynamics will often work to some degree.[4][5] Another alternative is to replace the affected phoneme (or one adjacent to it) with an allophone or simply a similar-sounding phoneme.


The user word registration interface

The VOCALOID's dictionary will attempt to match the correct phonemes to the word the user enters, saving the user from having to input them manually. If the user lets the program auto-find phonemes and it encounters a word it cannot identify or that is not registered in the dictionary, it will automatically fall back to a default phoneme ([u:] for English and [a] for other languages). In that case the user will need to input the phonemes manually or add the word to the dictionary; both options require knowing how the word is written phonetically.
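The fallback behaviour can be modeled as a plain dictionary lookup. The entries and the function below are illustrative, not the software's actual data or API:

```python
# Tiny illustrative user dictionary mapping spellings to phoneme lists.
DICTIONARY = {
    "sing": ["s", "I", "N"],
    "wind": ["w", "I", "n", "d"],
}

def lookup(word, default="u:"):
    """Return the registered phonemes, or a single default phoneme
    ([u:] for English) when the word is unknown."""
    return DICTIONARY.get(word.lower(), [default])

print(lookup("sing"))   # → ['s', 'I', 'N']
print(lookup("xyzzy"))  # → ['u:'] (unknown word falls back to the default)
```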

If a user knows how words are articulated, they can infer how to write a word that isn't in the dictionary (e.g. knowing that "bung" is represented as [bh V N] and "bangle" is written as [bh { N g U l], one can infer that "bungle" has to be written as [bh V N g U l]).
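That inference can be shown mechanically, using the phoneme strings quoted above:

```python
bung = ["bh", "V", "N"]                   # "bung"  = [bh V N]
bangle = ["bh", "{", "N", "g", "U", "l"]  # "bangle" = [bh { N g U l]

# "bungle" keeps the [bh V N] of "bung" and borrows the "-gle"
# tail [g U l] of "bangle":
bungle = bung + bangle[3:]
print(" ".join(bungle))  # → bh V N g U l
```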

Users can make their own custom dictionary this way and even share it with others. Custom dictionaries can also play a key role in writing a dictionary for an entirely new language, such as English - Japanese. However, bugs commonly occur when a large number of non-native words (words not native to that language) are entered. For example, in Japanese, [N'] followed by a vowel other than [i] may produce odd results; within the Japanese language there is no actual call for this phonetic to be followed by any other vowel, so when using it for language creation outside of Japanese, users are limited in how they can use it.

The resulting glitch is a sound that often appears broken or "choppy", often combined with an overall lack of smoothness between sounds. Alternatively, the sounds encounter timing issues, perhaps being skipped altogether; in many cases the sounds were not needed for the original language in the first place, so they were never sampled. This is easily demonstrated by users who attempt the "*_0" exploit to force a VOCALOID to more accurately mimic languages such as English.

The impact of custom dictionaries varies greatly between VOCALOIDs, even those of the same language. Megurine Luka's English VOCALOID2 vocal encounters issues due to its mix of well and badly recorded sounds; the lack of good sounds limits Luka's workaround sounds, which English VOCALOIDs in particular usually offer. Oliver has a habit of cutting off due to incorrect sample length assignment. Others, such as Avanna, do not suffer so harshly thanks to having many workaround sounds.[6]

Users can often counter dictionary issues such as this in the editor by adjusting note lengths and range.

Also note that when VSQ/VSQX files are imported into VOCALOID, it will default to the standard dictionary, and users will have to re-enter the lyrics/phonetics.[7]

Editing the phonemes[]

The "Note Properties" window allows you to manually pick the suitable phonetics for a word

To create and edit phonemes, a user must right-click on a note and press "Note Properties". Here they can edit a phoneme and add additional effects through the "Note Expression Property" and "Vibrato Property" windows. As a shortcut, the user can double-click a note to edit its lyric; pressing Alt + Down Arrow then allows the user to edit the phonetic data directly. The Tab key skips to the next note, and Shift + Tab skips back to the previous one. Since VOCALOID3 version 3.030, it has been possible to swap the phoneme input with the lyric input through the "Phoneme Preferred Display" (shortcut: Ctrl + R), allowing direct editing with a simple double click.[8]

Because some phonemes are written with more than one character, such as [u:] (for English) or [ts] (for Japanese), phonemes need to be written separated by a space. If the user does not take care of this, the synthesizer will interpret all the characters as a single symbol, which will go unrecognized and produce no sound. Capitalization also matters, because some symbols are differentiated only by case (for example, [Z] and [z] are different phonemes, hence they don't produce the same sound).
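The spacing and case rules can be checked mechanically. A sketch, with a deliberately tiny symbol set (the real inventory depends on the voicebank's language):

```python
# A small subset of symbols, enough to illustrate the point.
VALID = {"u:", "ts", "Z", "z", "s", "I", "N"}

def parse_entry(entry):
    """Split a space-separated phoneme entry and report any symbols the
    engine would not recognize (and therefore render as silence)."""
    symbols = entry.split()
    unknown = [s for s in symbols if s not in VALID]
    return symbols, unknown

print(parse_entry("u: ts"))  # two valid multi-character phonemes
print(parse_entry("u:ts"))   # read as one symbol, which goes unrecognized
print(parse_entry("Z z"))    # case matters: [Z] and [z] are distinct
```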

Additional notes[]

Due to the software's musical nature, monophonetic and polyphonetic samples may also need to be considered where closer vocal pitching pronunciation is wanted.[9] The user, however, has access only to pronunciation at the phonetic level; the finer levels of vocal speech adjustment cannot currently be accessed (earlier VOCALOID versions allowed fine tuning of the sound samples through parameters for things like harmonics and formants, but these options were removed in favor of a simpler, more user-friendly interface). This is also reflected in genre-specialized vocals, whose pronunciation is adjusted to the genre in question and which can show issues when used outside their comfort zone. Examples: LEON and LOLA (soul), Prima and Tonio (opera).

Please note that not all VOCALOIDs have the same phonemes available, such as the breathing phonemes [br1]-[br5].[10][11] There are also some phonemes found in only one language, so not all Japanese and English VOCALOIDs will share the same phonemes.[12] Also, while a VOCALOID's help guide will list the alphabet of the language, it may not include additional notes.

Using One Language To Create Another[]

A user can use the phoneme system to create languages from scratch, so long as they are within the VOCALOID's capabilities. Due to the differences between the phonetic systems and between individual voicebanks, there are some details the user must take into consideration when attempting to make a VOCALOID sing a language it isn't intended for.

Regardless of this, if the user is aware of the phonology of both languages, the original one for that voice and the target language, the task becomes easier. A user may even get creative, going so far as to invent languages of their own if they desire.[13] Essentially, the more time a user spends getting familiar with the phoneme system, the more they can get out of the VOCALOID program.

However, some voicebanks are easier to work with than others, presenting advantages others may not have. A clear example is Sonika, regarded as one of the VOCALOIDs with the most potential to "sing in any language" due to her unique setup, or Luka, who allows switching between her English and Japanese voicebanks according to the needs of the user. Results, however, are greatly influenced both by the user's technique and by how much a VOCALOID's phonetic system has phonologically in common with that of the target language, without the aid of other music/audio software. For example, due to phonetic similarities, Japanese VOCALOIDs can achieve a good level of Spanish. In the introduction of SeeU it was confirmed that Korean is capable of mimicking a decent amount of English due to phonetic similarities between the two languages.

Differences and Considerations[]

It's very important for the user to take note of the properties they may or may not be looking for in a voicebank. Certain advantages or disadvantages can make or break the song they're working to create, as well as details regarding the available phonemes or the voice clarity of a particular VOCALOID.

  • The VOCALOID or voicebank utilized: Each VOCALOID has its own characteristics, advantages and flaws, requiring its own tricks and considerations. Among the things the user must be aware of is how the VOCALOID pronounces phonemes; some voicebanks have a more marked pronunciation of consonants, or pronounce consonant clusters differently from others, which may make it difficult to get close to the intended language's pronunciation.
  • The tempo utilized in the song: Important when short notes are used for certain tricks or techniques. The tempo can affect them, requiring readjustment of the length or duration of those notes.
  • The pitch range of the current song: Voicebanks are recorded using at least two registers, one for the higher pitches and one for the lower ones, and the software generates the transitions between them across the scale of notes. Depending on how they're recorded, the pronunciation or quality of some phonemes will vary from voicebank to voicebank as the pitch changes.
  • The influence of adjacent phonemes, assimilation and coarticulation: Assimilation and coarticulation are present in the synthesizer, so a phoneme can affect its neighbors.

Due to the individual differences between voicebanks, taking a different approach may obtain more desirable results. Non-equivalent phonemes may work better than equivalent ones in the target language; for example, when MIRIAM sings in Japanese, [v V] /vʌ/ sounds closer to the actual pronunciation of the Japanese particle は [w a] /wa/ than [w V] /wʌ/ does.[14][15][16]
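A user might keep such substitutions as a per-voicebank override table layered on top of a generic mapping. All the names below are illustrative (there is no such table in the actual software); the [v V] entry reflects the MIRIAM example above:

```python
# Generic syllable-to-phoneme mapping a user might start from.
GENERIC_JA = {"wa": ["w", "V"]}

# Per-voicebank overrides where a non-equivalent phoneme sounds closer.
MIRIAM_JA = {"wa": ["v", "V"]}

def phonemes_for(syllable, overrides=None):
    """Look up a syllable, letting voicebank overrides win."""
    table = dict(GENERIC_JA)
    table.update(overrides or {})
    return table[syllable]

print(phonemes_for("wa"))             # → ['w', 'V']
print(phonemes_for("wa", MIRIAM_JA))  # → ['v', 'V']
```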

For more explanations on the differences and comparisons between English and Japanese VOCALOIDs see the conversion list: English - Japanese


Due to the way sounds are articulated by the synthesizer to simulate human speech, some phonological phenomena (like coarticulation) also appear in the software. The user can exploit them to increase the capabilities of the voicebanks.

Auxiliary Phonemes[]

An array of auxiliary phonemes exists within the voicebanks; these phonemes are used to achieve certain effects (like breaths) or to alter the default pronunciation (like [Sil], which is used to break the diphone transition between two phonemes). It's important to consider that different auxiliary phonemes are present in the different versions of the software, and not all are available for every voicebank. As such, their effect or function may differ between voicebanks and versions of VOCALOID.

Coarticulation, Assimilation and Phoneme Combinations[]

An application of coarticulation is combining phonemes to achieve new articulations closer to the desired ones, for example to:

  • Induce palatalization in an English VOCALOID singing in another language, such as Japanese or Korean (for the palatalized consonants) or a Romance language (for the palatal nasal).
  • Generate an approximation of the TH sound (voiceless dental fricative).

Glides or Semivowels[]

The glides or semivowels are sounds that share traits with vowels, being produced with little or no obstruction of the airstream, but that are non-syllabic; in other words, they are not the main element of a syllable. Users who are aware of a glide and its respective vowel counterpart can use it in place of, or alongside, that vowel, producing interesting results.

Some possible uses of the glides are:

  • Fixing choppy vowel combinations
  • Facilitating some diphthongs or diphones
  • Replacing the vowels when required.

Use of short notes[]

An additional technique is the use of short notes (around 1/64 or 1/32 length). When a note is too short, its articulation is incomplete and its sound blends with the next note. This technique is heavily affected by the tempo, however, and at lower tempos it may not work as well as at higher tempos.

This technique can be utilized for:

  • Improving the pronunciation of some consonant clusters.
  • Generating colored consonants.
  • Blending some phonemes.
  • Achieving new articulations.
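The tempo dependence of the short-note trick is easy to quantify: how much real time a 1/64 or 1/32 note occupies depends directly on the tempo. A quick calculation, assuming the usual convention that one beat is a quarter note:

```python
def note_ms(tempo_bpm, fraction):
    """Duration in milliseconds of a note that is `fraction` of a whole
    note (four quarter-note beats) at the given tempo."""
    quarter_ms = 60000 / tempo_bpm
    return quarter_ms * 4 * fraction

print(note_ms(120, 1 / 32))  # → 62.5 ms
print(note_ms(60, 1 / 32))   # → 125.0 ms: the same note is twice as long
```

This is why a note length that blends nicely at one tempo may need readjusting when the song's tempo changes.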

Second Voice Support[]

It is also possible to use a VOCALOID with a similar voice type to hide the flaws of the phonetic mispronunciations of another by having the two VOCALOIDs sing in a duet. One such example, and one greatly acknowledged by fans, is that of Sonika and Luka.

Another use of this technique is when a VOCALOID sings in a language other than the one it is intended for. If it sings in a duet or chorus with a VOCALOID of the intended language, the latter will complement the pronunciation of the first.[17]

A third technique, albeit somewhat pointless considering the nature of VOCALOID and what it's generally intended for, is to use a human singer to take the place of the second VOCALOID.

Post-Editing and Phoneme Slicing[]

Besides all the tricks available in the editor, it's possible to improve the pronunciation further during post-editing. After rendering and exporting the WAV file, the user can edit it in any DAW or sound editor. If the pronunciation of a consonant is too soft or too strong, the user can correct its volume.

Another technique is phoneme slicing. This can be used on Japanese phonemes for Japanese VOCALOIDs, either in the VOCALOID software itself or in the user's DAW. The length of the note is decreased, or the audio cut down, until only half of the pronunciation needed for spoken Japanese is heard (for example, "su" becomes "s"). However, this will affect the singing capabilities of the VOCALOID, and the notes being cut have to be much longer than normal. Although this technique may be hard for new users and results in a loss of singing smoothness, it increases the chances of getting a closer match to the intended sound. It can also be applied to English-capable VOCALOIDs. Additionally, vocoder software can be used to artificially create or transform Japanese or English phonetics into those of another language.

Flaws in the Phonetic System[]

There are some flaws that can limit a VOCALOID's ability at language recreation, and many of these issues are found in all languages rather than being limited to one specifically.

The VOCALOID Engine's Habits[]

The VOCALOID system will attempt to sound out all data assigned to the phonemes used, even if that particular sound is not needed.

Yet a natural speaker may not sound out all the expected sounds when they sing, for various reasons: naturally slurred vocals, a localized accent, or vocal disorders and speech impediments such as a stutter or lisp. This restriction may limit a VOCALOID's ability to mimic even the language it is intended for. For example, American English accents often drop the schwa vowel entirely from words where it is featured, even though this sound is normally a prominent feature of the English language and present in British English accents.

The hidden phonetic [Sil] will prevent this occurring and can be used with any VOCALOID language, but even so it does not resolve all issues or scenarios.

Language Structuring[]

Languages themselves have their own sets of rules that are difficult to break.

For example, in English and Japanese VOCALOIDs:

  • Japanese: Since Japanese VOCALOIDs do not have to blend their words like English ones, and have only around 500 diphone sounds to use, they can produce choppier results than English VOCALOIDs when used for non-Japanese words, especially for very different languages such as English. Often when slicing, a small fragment of the missing phonetic sound remains ("su" becoming "s" may leave a trace of the missing "u"), leaving behind awkward vocal sounds that lower the quality of the results. As of VOCALOID3, voiceless sounds make this much easier to attempt, but it is still not a perfect solution to the problem. [N'] followed by a vowel may produce odd results; within the Japanese language there is no actual call for this phonetic to be followed by a vowel, so VOCALOID possesses very limited data for it. Japanese VOCALOIDs also have a very limited set of vowels and in many cases entirely lack the vowel sound needed for many non-Japanese words.
  • English: English VOCALOIDs attempt to always blend their letters and have 2,500+ diphone sounds; depending on where the stress accent falls, the result will be closer to or more distant from the intended target language. This can often make constructing non-English words complex. The resulting reliance on [Sil] to prevent unwanted combinations can leave behind choppy, robotic results, mixing smooth passages with sudden stops. VOCALOID3 has the capability to make this easier to resolve and will soften such hard pronunciations anyway, but the sounds remain even though they are less apparent. Even with their large selection of diphone data, English VOCALOIDs cannot be relied on to produce the right data when needed, and basic control of the diphones may yield random incorrectness.

In both cases, the language's construction is the reason for the issue, and when used for their own languages the results sound much more natural and flow much more easily. As noted in this section, due to the sheer number of things to take into account, English-capable VOCALOIDs can often be far more complex to use (because of the problems presented by the English language) than Japanese VOCALOIDs. Liberally interpreted, English VOCALOIDs have a greater language capacity than their Japanese cousins, having more vowels and clearly separated consonant sounds, and are therefore easier to make sing in other languages, although either will only be using the equivalent or quasi-equivalent phonemes offered by its own phonetic system. Japanese VOCALOIDs, despite their more limited array of phonemes, can often be far simpler to use.

Sample Database[]

Despite all VOCALOIDs being made to produce a certain language, there are differences between them that affect performance. For example, SONiKA has every vowel combination needed for English, while Megurine Luka has missing diphone data. Both will still sound out the words "I love you", but the missing data can affect the smoothness of the transitions; bad transitions give a fairly broken or robotic result that does not sound as natural as it could.

Another issue is the clarity of some VOCALOIDs. This is a common problem when VOCALOIDs sing high notes (or, in some cases, low ones): the natural softness of the vocal dampens its strength on high notes. This normally occurs when a VOCALOID sings out of its optimum range (see Optimum Recommendations), but some VOCALOIDs have overall softness in their vocals. Certain VOCALOIDs, such as the original Kagamine Rin/Len package or SONiKA, are said to be very difficult to get clear results from. Do note that equalizing the singing results in a DAW or sound-editing package can improve VOCALOIDs that lack clarity.

The VOCALOID Dictionary[]

There are also a number of known words used by English-capable VOCALOIDs that have more than one pronunciation due to stress accents. However, the user often cannot get the correct result from the software, since VOCALOID can currently store only one pronunciation of a word in its dictionary. Without knowing how to sound out the alternative pronunciation, these words can be a problem for non-native English speakers:

  • Wind
    • The wind blew (IPA: [ˈwɪnd]; Vocaloid: [w I n d])
    • You wind me up (IPA: [waɪnd]; Vocaloid: [w aI n d])
  • Read
    • I will read the book (IPA: [riːd]; Vocaloid: [r i: d])
    • I read the book (IPA : [rɛd]; Vocaloid: [r e d] )
  • Tear
    • You have a tear in your eye (IPA: [tɪə]; Vocaloid: [t I@])
    • The paper has a tear in it (IPA: [tɛə] ; Vocaloid: [t E@])
  • Bow
    • You must bow before royalty (IPA: [baʊ]; Vocaloid: [b aU])
    • I tie a bow in my hair. (IPA: [bəʊ] or [boʊ]; Vocaloid: [b @U])
  • Live
    • The show was broadcast on TV live (IPA: [laɪv]; Vocaloid: [l0 aI v])
    • I know where you live (IPA: [lɪv]; Vocaloid: [l0 I v])

Spanish VOCALOIDs also use stress for some of their data, so this feature is not unique to English VOCALOIDs, but it is absent from many Asian languages, including Japanese.
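The one-pronunciation-per-spelling limit behaves like a plain key/value store: registering a second pronunciation silently replaces the first. An illustrative sketch, using the "wind" entries from the list above:

```python
user_dictionary = {}
user_dictionary["wind"] = ["w", "I", "n", "d"]   # "the wind blew"
user_dictionary["wind"] = ["w", "aI", "n", "d"]  # "you wind me up" overwrites it

# Only the last registration survives; the other reading would have to
# be entered manually, note by note.
print(user_dictionary["wind"])  # → ['w', 'aI', 'n', 'd']
```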

Note Length[]

VOCALOIDs sometimes have difficulty pronouncing words. For example, Prima and Tonio struggle with the middle section of the word "together" if it is too short when the word is spread out over several notes ("to-geth-er" becomes "to-g'-er" if "geth" has no room). Some VOCALOIDs' singing results may be impacted if a user does not consider this. When a VOCALOID fails to pronounce a phonetic that it should be able to, there are ways around this: move the phonetic data onto another track, increase the "Accent" (attack) in Note Properties, or lengthen the note to give the vocal room to pronounce the word. VY2 has a similar weakness: the phonetic "a" followed by "re" becomes a "ge" sound, but this can be fixed by dividing the tracks, breaking the transition with [Sil], or modifying the tone of the voice.

Optimum Recommendations[]

Many VOCALOIDs also come with an optimum range. These recommendations help direct producers to the best range for the VOCALOID, as well as describe its vocal range (soprano, mezzo-soprano, tenor, alto, etc.). When hitting high notes above the VOCALOID's capabilities, the vocal may become muffled and lack clarity, while many low notes can be soft and quiet. Working within the optimum range increases the chance of clearer and more stable language skills from the VOCALOID.

Likewise, the optimum tempo lets the producer know what range leaves the VOCALOID sounding most natural: too fast may not give the VOCALOID time to sound everything out correctly, resulting in digital noise in place of natural, smooth pronunciation, or in missing sounds. In the opposite direction, too slow can make any digital defects more apparent by allowing them to be heard much more clearly. The engine version will also affect the results in different ways, with the original VOCALOID engine more criticized for its heavy digital sounds than VOCALOID3.

User related concerns[]

One of the issues related specifically to the user is that they may not be able to judge a VOCALOID singing a language they don't know particularly well. What sounds flawless and realistic to a person with little knowledge of a language may actually be full of bugs and glitches. A speaker of that language can hear the VOCALOID's flaws far better than someone who knows little of it. This issue can occur in even the most well-tuned VOCALOID songs and can often add a kink to an otherwise perfect example of a VOCALOID's best singing.

Even if one were to take a VSQ or VSQX file that had been tweaked by another user, even a native speaker, not all VOCALOIDs have the same strengths and flaws. It is therefore vital that users take time to study at least the basics of the language structure they are working with, and furthermore spend time comparing results for every song they produce, even when there is already pre-tweaking on the VSQ or VSQX file.

Additional Help[]

Also note that both Zero-G and PowerFX have tutorials of their own.

  • How To Make a VOCALOID Breathe Using VOCALOID: Explanation on how some of the Japanese VOCALOIDs sound when you use the breathing effects
  • Comparative Table of English and Japanese: the phonetic systems of Japanese and English VOCALOIDs, with notes on whether each VOCALOID has a given phoneme. The list also includes information on how to transform the quasi-equivalent phonemes of either language into the other effectively.
  • Vocaphonetic: A Japanese community site for creating and distributing Japanese dictionary data for English VOCALOIDs to sing better in Japanese. The dictionary data for VOCALOID and VOCALOID2 are respectively available.
  • VOCALOID Phonetic Library - a quick look up guide for Phonetics of all VOCALOIDs.
  • From English to Japanese - Using Tonio, instructions for how Japanese users can make Tonio sing in Japanese. Also shown is how close to, and how much of, the Japanese language Tonio can reproduce.
  • Tutorial - a tutorial showing a user making Miku sing in "English" Japanese phonemes.
  • Making Big-Al sing Japanese


  • One of the reasons for the long gaps between English VOCALOID releases is the time consumed in recording the phonetic samples (an estimated 2,500 samples needed for English vs. 500 for Japanese, per pitch). It took 25 hours (4 hours a day) to record all the Kagamine "Appends".[18] Camui Gackpo's VOCALOID2 voicebank was confirmed to have been completed within 4 hours, plus a later additional voicebank recorded for alternative samples.[19] In contrast, according to Anders, it takes anything from 1-3 weeks onwards to record a single English voicebank.[20]
  • The more samples involved in making a synthesized voice, the harder it is to maintain quality, and the lack of smoothness of older synthesizing software voicebanks often reflects that difficulty.
    • More complex languages such as English struggle much more to maintain quality while singing due to the sheer number of samples involved.
    • This is also why older voicebanks, such as those for the original VOCALOID engine, may be harder to use. For instance, "now" is often pronounced as "no-ow" by the English VOCALOID voicebanks; in contrast, VOCALOID2 voicebanks have no problems with this word.
  • Some fans struggle to understand how synthesized vocals have developed over a single decade and do not understand why VOCALOID results are as they are. The Microsoft Mike, Mary, Sam and Ann voices, speaking (mature content), show the various stages of that particular software and the progression of the vocals for Microsoft's text-to-speech voices. VOCALOID was released soon after this software was developed and is a much more advanced package, but there are common problems shared between all synthesizing software packages.
  • Studies of the brain show that if words sound close enough to the intended words, the mind is capable of working out, or attempting to work out, what they actually are, even if the actual words spoken are gibberish. This plays a role in matching phonetics from one language to another, and it can make the mind believe that a word sounds closer to the intended word than it really is.


  1. link
  2. [1] - Vocaloid.com - VOCALOID1#Characteristics of VOCALOID (2004)
  3. http://www29.atwiki.jp/vocalo-gojokai/pages/105.html VOCALOID Gojokai
  4. http://doku.bimyo.jp/miku/page03/index.html VOCALOID Introductory: Control Track
  5. http://www39.atwiki.jp/vocaloid/pages/32.html VOCALOID@wiki How to Edit Rin/Len Kagamine
  6. link
  7. link
  8. [2] Vocaloidism - Vocaloid 3 Update, v3.030
  9. VOCALOID document
  10. [3] VOCALOID Non Sense - How To Make a VOCALOID Breathe Using VOCALOID
  11. [4] Nicovideo - Big Al’s breathing phonemes
  12. Wikipedia: Phoneme
  13. NND: nm7051391 - Jutenija sung by Kagamine Rin / Len
  14. http://ww3.enjoy.ne.jp/~koti/kaito/miriam.html Making Miriam Sing in Japanese
  15. NND: sm10379602 - Lost Sheep sung by Miriam
  16. NND: sm4916135 - Lost Sheep sung by KAITO
  17. NND: sm10037931 - Unbalance sung by Kagamine Rin
  18. link
  19. link
  20. [5] VocaloidOtaku - Somebodyrandom's Questions