Hacker News | new | past | comments | ask | show | jobs | submit

The OP suggests that U+0243 B with stroke is a "capitalized alternate symbol for the voiced bilabial fricative in Americanist phonetic notation"

I would suspect that the stroke capitals that are present were used in specific actual alphabets (and were probably represented in some existing pre-Unicode font or code block), whether the Americanist phonetic alphabet or others -- and that the stroke capitals that are not present were not.

In general, Unicode codepoints are justified by actually existing use.

(There is an interesting metaphysical problem that arises as our lives become increasingly digitized -- things that are not present in Unicode _won't_ be used, because they won't be _able_ to be used online... so how will new things be added justified by use, once all the existing pre-digital glyphs have been added? I dunno.)



For what it's worth, the voiced bilabial fricative is a sound similar to English "b" and "v". Languages that use that sound normally write it with "b", "v" or "w". It's common in Spanish but also crops up in certain words in Portuguese, Japanese and others. [1]

In the IPA, the standard phonetic alphabet, that sound is instead notated "β". The b-with-a-stroke, as well as the barred b (ᴃ), were used by some linguists before the IPA became widespread, but they kept appearing afterwards, since they were easier to type on most typewriters (B, backspace, hyphen) and apparently also because some linguists didn't like the IPA.
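The codepoints being discussed are easy to inspect with Python's standard `unicodedata` module (a quick illustration, not part of the original thread):

```python
import unicodedata

# U+0243: the capital B with stroke from the OP
b_stroke = "\u0243"
print(hex(ord(b_stroke)), unicodedata.name(b_stroke))
# -> 0x243 LATIN CAPITAL LETTER B WITH STROKE

# U+03B2: Greek beta, the IPA symbol for the voiced bilabial fricative
print(unicodedata.name("\u03b2"))
# -> GREEK SMALL LETTER BETA

# U+1D03: the small capital barred b mentioned above
print(unicodedata.name("\u1d03"))
# -> LATIN LETTER SMALL CAPITAL BARRED B
```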

In addition, b-with-a-stroke is part of the Jarai alphabet, a variant of the Vietnamese alphabet used to write the Jarai language. [2]

[1] http://en.wikipedia.org/wiki/Voiced_bilabial_fricative

[2] http://en.wikipedia.org/wiki/Jarai_language


Anyone can add new things to the PUA (Private Use Area); in fact, that's where the Emoji were before they were encoded. There is the ConScript Unicode Registry [1], an unofficial effort to map various constructed scripts onto the PUA without them clashing. There are lots of pages in Klingon out there that make use of that encoding, even though it's not in Unicode. Some of those scripts have since been encoded formally, so there is a way of creating new characters. It just takes a little longer.

Also, with webfonts you can now easily create your own font that includes the PUA characters you designed, so you can use them online.
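The BMP Private Use Area runs from U+E000 to U+F8FF, so the ConScript Klingon block (U+F8D0 to U+F8FF) sits right at its top end. A quick sketch verifying that in Python:

```python
import unicodedata

PUA_START, PUA_END = 0xE000, 0xF8FF  # BMP Private Use Area

# ConScript's Klingon pIqaD mapping spans U+F8D0..U+F8FF,
# which lies entirely inside the PUA.
for cp in (0xF8D0, 0xF8FF):
    assert PUA_START <= cp <= PUA_END
    # Private-use codepoints all carry the general category "Co".
    print(hex(cp), unicodedata.category(chr(cp)))
```

Since the codepoints are formally "private use" ("Co"), any font is free to assign glyphs to them, which is what the webfont trick relies on.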

[1] http://www.evertype.com/standards/csur/


> In general, unicode codepoints are justified by actually existing use.

I vaguely remember something about Klingon and Unicode related to that... ah, there we go:

"In September 1997, Michael Everson made a proposal for encoding this in Unicode.[2] The Unicode Technical Committee rejected the Klingon proposal in May 2001 on the grounds that research showed almost no use of the script for communication, and the vast majority of the people who did use Klingon employed the Latin alphabet by preference.[3] Everson created a mapping of pIqaD into the Private Use Area of Unicode, which he listed in the ConScript Unicode Registry (U+F8D0 to U+F8FF[4][5])"

http://en.wikipedia.org/wiki/Klingon_alphabets


Everson did manage to get J.R.R. Tolkien's Cirth and Tengwar tentatively included; see http://www.unicode.org/roadmaps/smp/


No. The roadmap indicates that they have been formally proposed for inclusion and are yet to be accepted (and not rejected either, as the scripts in [1] were). From the bottom of the roadmap:

> (Text between parentheses) indicates scripts for which proposals have been formally submitted to the UTC or to WG2. There is generally a link to the formal proposal.

[1] http://www.unicode.org/roadmaps/not-the-roadmap/


According to the phrasing on the bottom of the page I linked to, the blocks are "tentatively allocated." I'm not sure how this is substantially different from the phrasing I used, which was "tentatively included."


Interesting! (The proposals themselves, for anyone else interested: http://std.dkuug.dk/JTC1/SC2/WG2/docs/n1641/n1641.htm, http://std.dkuug.dk/JTC1/SC2/WG2/docs/n1642/n1642.htm)


In a related vein, I suspect ligatures, small caps, and non-lining figures already look weird to "digital natives" because they were dropped during the age of DTP and generally have only been reintroduced as typographic novelties.


It would be interesting to do a study; I would suspect those forms -- non-lining figures and the like -- are still seen as 'classy', 'elegant' and 'better'. But maybe that's just my own preference/hope, and in fact they are just seen as 'weird'.

But hey, everyone prefers an actual em or en dash to two hyphens or one, don't they? No? Say it ain't so! (Not that anyone, including me, bothers to enter them from the keyboard; see above.)


They are very easy to type on a Mac — shift-option-minus.
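For anyone entering them programmatically rather than from the keyboard, the codepoints themselves are easy to confirm (Python, just for illustration):

```python
import unicodedata

em_dash, en_dash = "\u2014", "\u2013"
print(unicodedata.name(em_dash))  # EM DASH
print(unicodedata.name(en_dash))  # EN DASH
```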


> There is an interesting metaphysical problem that arises as our lives become increasingly digitized -- things that are not present in unicode _won't_ be used, because they won't be _able_ to be used online...

I remember reading about exactly this problem recently in China: http://www.dnaindia.com/world/report_chinese-villagers-force...


> so how will new things be added justified by use, once all the existing pre-digital glyphs have been added?

I see two possible scenarios:

1. They won't be. Instead, existing glyphs will become increasingly overloaded with new meanings.

2. New glyphs will start their life as icons. Once a certain icon becomes widespread, it may be made into a codepoint with a glyph.

In practice, it would probably be some combination of the two.



