Early computers and programming languages were created mainly by English-speaking programmers in countries where English was the native language. They developed a standard mapping between code points 0 through 127 and the 128 commonly used characters in the English language (such as A–Z). The resulting character set/encoding was named the American Standard Code for Information Interchange (ASCII).

Because ASCII is a seven-bit character set, each ASCII character can easily be represented as a single byte, signed or unsigned. Thus, it's natural for ASCII-based programming languages to equate the character data type with the byte data type. In these languages, such as C, the same operations that read and write bytes also read and write characters.

Unfortunately, ASCII is inadequate for almost all non-English languages: it does not contain any of the thousands of other characters that are used to read and write text around the world. Because a byte can represent a maximum of 256 different characters, developers around the world started creating different character sets/encodings that kept the 128 ASCII characters but added extra characters to meet the needs of languages such as French, Greek, and Russian. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) worked to standardize these 8-bit character sets/encodings under a joint umbrella standard called ISO/IEC 8859. The result is a series of substandards named ISO/IEC 8859-1, ISO/IEC 8859-2, ISO/IEC 8859-3, and so on. Many of these character sets are still used today, and much existing data is encoded in them.
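A minimal C sketch of that byte/character equivalence: a char is a one-byte integer, so the same value is both the byte 65 and the letter 'A', and the byte-oriented I/O functions double as character-oriented ones for ASCII text.

    #include <stdio.h>

    int main(void)
    {
        char letter = 'A';           /* ASCII code points 0-127 fit in a signed char... */
        unsigned char same = 'A';    /* ...and just as easily in an unsigned one        */

        printf("'%c' is code point %d\n", letter, letter);   /* 'A' is code point 65 */
        printf("signed %d == unsigned %d\n", letter, same);  /* both print 65        */

        /* The byte-oriented I/O call is also the character-oriented one:
           fgetc() returns the next byte of the stream, and for ASCII text
           that byte *is* the character. */
        int c = fgetc(stdin);
        if (c != EOF)
            printf("first byte/character read: '%c' (%d)\n", c, c);

        return 0;
    }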
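The flip side of the 256-character limit is that a byte value above 127 means different things in different ISO/IEC 8859 parts. The sketch below assumes a POSIX system whose iconv implementation accepts the encoding names "ISO-8859-1", "ISO-8859-5", and "ISO-8859-7" (glibc does; some platforms need -liconv at link time): it decodes the single byte 0xE9 under each of the three and prints the result as UTF-8.

    #include <stdio.h>
    #include <iconv.h>

    /* Decode the single byte 0xE9 as `charset` and print the UTF-8 result. */
    static void decode(const char *charset)
    {
        char inbuf[1]  = { (char)0xE9 };
        char outbuf[8] = { 0 };
        char *in = inbuf, *out = outbuf;
        size_t inleft = sizeof inbuf, outleft = sizeof outbuf - 1;

        iconv_t cd = iconv_open("UTF-8", charset);   /* to UTF-8, from `charset` */
        if (cd == (iconv_t)-1) { perror(charset); return; }

        if (iconv(cd, &in, &inleft, &out, &outleft) != (size_t)-1)
            printf("0xE9 in %-10s -> %s\n", charset, outbuf);

        iconv_close(cd);
    }

    int main(void)
    {
        decode("ISO-8859-1");   /* Latin-1: an accented e (é)               */
        decode("ISO-8859-5");   /* Cyrillic part: a Cyrillic letter instead */
        decode("ISO-8859-7");   /* Greek part: a Greek letter instead       */
        return 0;
    }

One byte, three encodings, three different characters: that is why text labeled with the wrong ISO/IEC 8859 part displays as the wrong letters rather than failing outright.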