On Tue, 10 Jan 2006, Alex Shinn wrote:
> UTF-8 as an encoding is simpler and the processing of each byte should
> be faster, however the 2-byte characters of the Japanese encodings
> take 3 bytes in UTF-8 so you would expect a performance (and
> bandwidth) hit in handsets there.
Actually, I think it would be quite reasonable to come up with another
encoding for Unicode specifically designed to reduce the size of texts
that use the common kanji pages extensively. It would be pretty easy to
reduce most kanji sequences to two bytes while leaving romaji as single
bytes, at the cost of making some of the rare kanji and other characters
longer than they would be in UTF-8.
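To make the size trade-off concrete, here's a quick check (Python sketch; codec names are those in Python's standard library, and Shift_JIS stands in for "a common 2-byte Japanese encoding"):

```python
# Compare encoded sizes of the same text in UTF-8 vs. Shift_JIS.
# A typical kanji is 3 bytes in UTF-8 but 2 bytes in Shift_JIS,
# while plain ASCII/romaji is 1 byte in both.

kanji = "\u6f22\u5b57"   # the word "kanji" itself, two kanji characters
romaji = "kanji"          # the same word in romaji (ASCII)

print(len(kanji.encode("utf-8")))      # 3 bytes per character -> 6
print(len(kanji.encode("shift_jis")))  # 2 bytes per character -> 4
print(len(romaji.encode("utf-8")))     # 1 byte per character  -> 5
print(len(romaji.encode("shift_jis"))) # identical for ASCII   -> 5
```

So a kanji-heavy text costs roughly 50% more bytes in UTF-8 than in Shift_JIS, which is exactly the handset bandwidth hit mentioned above; a Unicode encoding tuned for the common kanji pages would aim to get that back down to 2 bytes per kanji.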
> I don't know anything about the TRON architecture, but the article
> itself is overly biased and subjective....
I found it fascinating, actually. The tone of it, along with the
specious argument about missing kanji (which can easily be solved--and
for the most part has been--by adding the characters to Unicode) shows
the real problem these people have with Unicode: NIH (not invented here).
Curt Sampson <cjs_at_cynic.net> +81 90 7737 2974
*** Contribute to the Keitai Developers' Wiki! ***
*** http://www.keitai-dev.net/ ***
Received on Wed Jan 11 05:07:51 2006