Board Thread:General Discussion/@comment-10966617-20150906101845/@comment-53539-20150906201357

Some producers have gotten good at hiding the noise, but at its core UTAU sounds like autotuned samples, and if you're not careful it really spoils the average result. Then again, bear in mind that despite being the HQ product, in the wrong hands Vocaloid can sound sucky too.

I find the comparison of vocal synths kind of crazy in the long term, as they all have merits. None of them can fully replace a singer yet, and as software they are limited in capability; a real singer can learn new languages and tones, whereas with a synth, well, there's a reason appends and language banks exist to get a HQ result... Until the day comes that a synth can sound as realistic and be as expansive as a real person, all of them are limited. I know there are UTAU out there that know 15 or so languages, but I doubt they've been checked thoroughly enough to make them as HQ as possible, and they're likely full of accent issues... since for best results you have to go through training. One could argue this doesn't have to happen with Vocaloid, and yeah, that's true, but again... you'd have to edit the voice, and bear in mind that the further you're forced to edit away from the raw sound, the more LQ the result often gets. A lot of pre-Miku-English results are quite strange, with Miku snapping words into the air as if she's hurrying because she lacks smoothness, and many words often sound like they could be one of two or three possible words. :-/