The Windows console subsystem has a host of Unicode-related bugs. And standard Windows programs such as more (not to mention the C# 4.0 compiler csc) just crash when they’re run from a console window with UTF-8 as active codepage, perplexingly claiming that they’re out of memory. On top of that the C++ runtime libraries of various compilers differ in how they behave. Doing C++ Unicode i/o in Windows consoles is therefore problematic. In this series I show how to work around limitations of the Visual C++ _O_U8TEXT file mode, with the Visual C++ and g++ compilers. This yields an automatic translation between external UTF-8 and internal UTF-16, enabling Windows console i/o of characters in the Basic Multilingual Plane.
- Introduction
- How the Linux “all UTF-8” approach does not work in Windows
- About direct console i/o
- Portable source code should be UTF-8 with BOM
- The Visual C++ UTF-8 stream mode
- Summary so far
- Cheers!
Introduction
In both Windows and Linux, properly internationalized applications use either UTF-16 or UTF-32 for their internal string handling. For example, the popular cross-platform ICU library (International Components for Unicode) is based on UTF-16 encoded strings. For this kind of application Windows seems to be the better fit, since the Windows API is UTF-16 based while the Linux API is, on a modern installation, effectively UTF-8 based.
Still, a simple console program does not typically take on the quite steep overhead of full-fledged Unicode handling.
Instead of using a full-fledged Unicode library like ICU, one then relies on just the standard C and C++ libraries, and the Unicode handling reduces to what can be expressed directly with the C++ language and standard library support.
How the Linux “all UTF-8” approach does not work in Windows
In Linux the typical small Unicode console program has everything char-based and UTF-8 encoded. The external data, the internal strings, the string literals, and of course the C or C++ source code, are all UTF-8 encoded. The total unification allows simple programs like this:
#include <stdexcept>        // std::runtime_error, std::exception
#include <stdlib.h>         // EXIT_SUCCESS, EXIT_FAILURE
#include <iostream>         // std::cout, std::cerr, std::endl
#include <string>           // std::string
using namespace std;

bool throwX( string const& s ) { throw runtime_error( s ); }
bool hopefully( bool v ) { return v; }

string lineFrom( istream& stream )
{
    string result;
    getline( stream, result );
    hopefully( !stream.fail() )
        || throwX( "lineFrom: failed to read line" );
    return result;
}

int main()
{
    try
    {
        static char const narrowText[] = "Blåbærsyltetøy! 日本国 кошка!";

        cout << "Narrow text: " << narrowText << endl;
        cout << endl;
        cout << "What's your name? ";
        string const name = lineFrom( cin );
        cout << "Glad to meet you, " << name << "!" << endl;
        return EXIT_SUCCESS;
    }
    catch( exception const& x )
    {
        cerr << "!" << x.what() << endl;
    }
    return EXIT_FAILURE;
}
Testing this in Ubuntu 11.10:
[~/blog/examples] $ g++ utf8_sans_bom.all_utf8.cpp
[~/blog/examples] $ ./a.out
Narrow text: Blåbærsyltetøy! 日本国 кошка!

What's your name? Bjørn Bråten Sæter
Glad to meet you, Bjørn Bråten Sæter!
[~/blog/examples] $ _
Yay, it worked OK in Linux!
Testing the very same source code file in Windows, using the same Linux-origin compiler (namely g++), and intentionally not specifying any codepage for the console window:
W:\examples> g++ -pedantic -Wall utf8_sans_bom.all_utf8.cpp

W:\examples> a
Narrow text: Bl├Ñb├ªrsyltet├╕y! µùѵ£¼σ¢╜ ╨║╨╛╤ê╨║╨░!

What's your name? Bjørn Bråten Sæter
Glad to meet you, Bjorn Bråten Sæter!

W:\examples> _
One reason for the gobbledygook here is that the Windows console by default assumes that the program produces OEM encoded text. That means, it assumes that the text is encoded using the original IBM PC character set, or a variation of that old character set. This encoding assumption is called the console window’s active codepage, and it can be inspected and changed via the chcp command, e.g. from codepage 437 (original IBM PC character set) to 65001 (UTF-8):
W:\examples> chcp
Active code page: 437

W:\examples> chcp 65001
Active code page: 65001

W:\examples> a
Narrow text: Blåbærsyltetøy! 日本国 кошка!huh?

W:\examples> _
Positive: the initial UTF-8 text output appeared to work. The Chinese characters 日本国 displayed as just empty rectangles, but they copied OK. Both the Norwegian and Russian copied OK and also displayed OK.
Negative: input apparently did not work, and it apparently caused some of the program’s output (including the prompt before the input operation) to disappear!
Exactly what went wrong above is difficult to say for sure. It might be the input operation, or it might be something else. However, the exact cause is irrelevant because input fails outright, not just producing weird side effects, if the user types in some non-ASCII characters such as Norwegian æ, ø and å:
W:\examples> a
Narrow text: Blåbærsyltetøy! 日本国 кошка!Bjørn Bråten Sæter
!lineFrom: failed to read line

W:\examples> _
About direct console i/o
Given that the total failure of the “all UTF-8” approach has been established, it may seem overkill to also show the unintelligible output produced by the Windows platform’s major compiler, Visual C++ (here version 10.0), but as you’ll see it’s relevant:
W:\examples> cl utf8_sans_bom.all_utf8.cpp /Fe"b"
utf8_sans_bom.all_utf8.cpp

W:\examples> chcp
Active code page: 65001

W:\examples> b
Narrow text: Bl��b��rsyltet��y! ��������� ����������!

What's your name? Bjørn Bråten Sæter
!lineFrom: failed to read line

W:\examples> _
Here the Visual C++ runtime detects that the standard output is connected to a console window. Instead of sending the text via the ordinary standard output stream, it then attempts to place the correct Unicode UCS-2 encoded characters directly in the console window’s text buffer. However, since the C++ source code was encoded as UTF-8 without a BOM (as is usual in Linux), the Visual C++ compiler erroneously assumed that the source code was encoded as Windows ANSI. And since Visual C++ has Windows ANSI more or less hardwired as its narrow execution character set, it blindly copied the string literal’s bytes into the executable’s string values. The runtime is therefore handed UTF-8 bytes instead of the Windows ANSI bytes that it expects for its direct console i/o, so that its helpful translation to UCS-2 fails…
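To illustrate the kind of check involved (the exact test the runtime performs is an internal detail, so this is just a sketch under that assumption), one can ask Windows whether the standard output handle refers to a real console screen buffer:

#include <windows.h>    // GetStdHandle, GetConsoleMode

// Returns true if standard output is an actual console screen buffer,
// as opposed to a redirected file or pipe.
bool stdoutIsConsole()
{
    DWORD mode = 0;
    HANDLE const output = GetStdHandle( STD_OUTPUT_HANDLE );
    return GetConsoleMode( output, &mode ) != 0;    // Succeeds only for a console.
}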
At the Windows API level the runtime implements direct console output by calling the WriteConsole function instead of the WriteFile function. And similarly, if the console input had worked, then it would probably have been via a call to the ReadConsole function instead of the ReadFile function. The WriteConsole function accesses the console window’s text buffer directly and takes an UTF-16 wchar_t based argument, and ditto for ReadConsole.
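Here is a minimal sketch (not the runtime’s actual implementation) of what such direct console output looks like at the API level; the wide literal assumes that the compiler interprets the source encoding correctly, which is the topic of the next section:

#include <windows.h>    // GetStdHandle, WriteConsoleW

int main()
{
    // Assumes a source encoding that the compiler understands, e.g. UTF-8 with BOM.
    wchar_t const text[] = L"Blåbærsyltetøy! 日本国 кошка!\r\n";

    HANDLE const console = GetStdHandle( STD_OUTPUT_HANDLE );
    DWORD nCharsWritten = 0;

    // Bypasses the byte-oriented stream and places UTF-16 code units
    // directly in the console window's text buffer.
    WriteConsoleW(
        console, text, sizeof( text )/sizeof( wchar_t ) - 1, &nCharsWritten, 0 );
}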
Portable source code should be UTF-8 with BOM
One can avoid the direct console i/o by redirecting the output.
Such redirection establishes that the byte level output data is good: all would have been well for this particular program’s output, were it not for the interference from the probably well-intentioned direct console i/o help attempt:
W:\examples> echo Bjørn Bråten Sæter | b >result

W:\examples> type result
Narrow text: Blåbærsyltetøy! 日本国 кошка!

What's your name? Glad to meet you, Bjørn Bråten Sæter !

W:\examples> _
And because the data is correct, one can be sure that the Visual C++ compiler was indeed tricked into assuming that the source code was ANSI Western. This in turn means that any wide string literal, which a Windows compiler has to translate to UTF-16, will be incorrectly translated if it contains any non-ASCII characters. Hence, for portable source code it is not a good idea to encode the source code as UTF-8 without a BOM, for that is effectively to lie to the compiler.
Now that g++ too accepts a BOM at the start of the source code, portable source code should therefore be encoded as UTF-8 with BOM.
With the BOM in place Visual C++ correctly determines that the source code is UTF-8 encoded, although as of late 2011 this appears to still be undocumented. And with a correct assumption about the source code’s encoding, narrow string literals are correctly translated to Windows ANSI encoded string values in the executable. For Unicode literals in Windows one should therefore use wide string literals, e.g. L"Blåbærsyltetøy! 日本国 кошка!", which in Windows ends up as an UTF-16 encoded string value in the executable.
The Visual C++ UTF-8 stream mode
Use source code encoded as UTF-8 with BOM, and use wide string literals, OK (or rather, one just has to accept that complication!), but how does one then output one of these literals?
After all, std::wcout in Windows has a rather strong tendency to translate down to Windows ANSI, not to UTF-8.
Well, in his 2008 blog posting Conventional wisdom is retarded, aka What the @#%&* is _O_U16TEXT? Michael Kaplan explained that
“the [Visual C++] CRT? Starting in 2005/8.0, it knows more about Unicode than any of us have been giving it credit for…”
The Visual C++ runtime library can convert automatically between internal UTF-16 and external UTF-8, if you just ask it to do so by calling the _setmode function with the appropriate file descriptor number and mode flag. E.g., mode _O_U8TEXT causes conversion to/from UTF-8.
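As a first approximation (a minimal sketch only; as noted below and detailed in part 2, this bare usage needs some scaffolding before it works reliably), switching standard output to this mode and writing a wide string literal looks like this:

#include <fcntl.h>      // _O_U8TEXT
#include <io.h>         // _setmode
#include <stdio.h>      // _fileno, stdout
#include <iostream>     // std::wcout, std::endl

int main()
{
    // Ask the Visual C++ runtime to translate between internal UTF-16
    // and external UTF-8 on the standard output stream.
    _setmode( _fileno( stdout ), _O_U8TEXT );

    // Assumes the source file is saved as UTF-8 with BOM, so that the
    // wide literal ends up correctly UTF-16 encoded in the executable.
    std::wcout << L"Blåbærsyltetøy! 日本国 кошка!" << std::endl;
}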
One reason that many people have not known about the Unicode support that he discusses there, a Visual C++ Unicode stream mode, is that it’s mostly undocumented. Kaplan gives a link to the documentation of the deprecated _wsopen function, as one place where the mode flags have been (inadvertently?) documented. However, the main usage is through the _setmode function, where, on the contrary, the official documentation goes on about how _setmode will invoke the “invalid parameter handler” unless the mode argument is either _O_TEXT or _O_BINARY. So, by using this functionality one is not just in ordinary Microsoft undocumented land. One is wholly over in explicitly-documented-as-not-working land.
On the other hand, the official documentation is plain wrong about many things (e.g., for Visual C++ 10 it maintains that the source code encoding is limited to ASCII), the _setmode documentation is incorrect about the argument checking, and the g++ compiler provides C level support for the _O_U8TEXT mode feature. Considering all that, one may choose to ignore the will-not-work statements of the documentation and just treat them as a documentation defect, for what good is a feature that can’t be used?
Since there is not really any alternative for getting UTF-8 translation also down at the C library level, this is the approach that I’m going to discuss in more detail in part 2.
It might seem from Kaplan’s blog posting that you don’t have to do more than just set the mode, and go! But as you can expect from something in explicitly-documented-as-not-working land, it’s not fully implemented even in Visual C++. And even less fully implemented in g++…
Summary so far
Above I introduced two approaches to Unicode handling in small Windows console programs:
- The all UTF-8 approach where everything is encoded as UTF-8, and where there are no BOM encoding markers.
- The wide string approach where all external text (including the C++ source code) is encoded as UTF-8, and all internal text is encoded as UTF-16.
The all UTF-8 approach is the approach used in a typical Linux installation. With this approach a novice can remain unaware that he is writing code that handles Unicode: it Just Works™ – in Linux. However, we saw that it failed massively in Windows:
- Input with active codepage 65001 (UTF-8) failed due to various bugs.
- Console output with Visual C++ produced gibberish due to the runtime library’s attempt to help by using direct console output.
- I mentioned how wide string literals with non-ASCII characters are incorrectly translated to UTF-16 by Visual C++ due to the necessary lying to Visual C++ about the source code encoding (which is accomplished by not having a BOM at the start of the source code file).
The wide string approach, on the other hand, was shown to have special support in Visual C++, via the _O_U8TEXT file mode, which I called an UTF-8 stream mode. But I mentioned that as of Visual C++ 10 this special file mode is not fully implemented and/or it has some bugs: it cannot be used directly but needs some scaffolding and fixing. That’s what part 2 is about.
Cheers!
Cheers, & enjoy!
Really Interesting article 🙂
I have always found C/C++ Unicode support broken and not portable. And most programmers don’t even know the difference between the string encoding, the source encoding and the possible i/o encodings.
Three things that are usually solved in different ways depending on the OS, compiler and standard library implementation.
I usually follow these rules for new programs:
- select an internal encoding for your strings: i.e. choose between char or wchar_t or any other (e.g. QString)
- use appropriate functions to correctly handle these strings: I have always found the standard library ones very unreliable and non-portable
- make really sure you accept any input stream encoding: try to detect a BOM if available. Again, I usually avoid standard library functions because they are VERY unreliable and unportable at this.
PLEASE DON’T REQUIRE THE USER TO HAVE A SPECIFIC TEXT FILE ENCODING 🙂 Many programs still can’t read UTF-16 BOM text files. Why?
- when you output text, make sure you choose an encoding (maybe UTF-8 with BOM is the most user-friendly and portable), or let the user explicitly choose one.
Many times it is not clear to people that each of these steps can require an appropriate conversion. And at the moment the C++ standard library doesn’t help much with this. Qt, for instance, is much better. Probably C++11 and/or Boost do this much better too… never tried!
Finally, I’m OK with the UNIX all-UTF-8 approach, but I don’t really like the fact that most OS tools don’t understand UTF-16 file i/o. Diffing a UTF-16 file with diff, svn or git is still a nightmare! (see the point above) That approach is simple, but it lets the programmer ignore all these aspects, and problems arise when using different platforms.
Qb
(1) use locales (2) use wstrings (3) don’t ever assume what the runtime environment locale is
The reason you run into trouble on Windows is that the Windows console is actually UTF-16, not UTF-8. Windows uses UTF-16 everywhere (unfortunately even for the wchar_t encoding). The 65001 code page is actually officially unsupported on Windows.
Thank you for your comment. The goal is however a bit more lofty than merely being able to use text in the user’s locale. Namely, the goal is to provide a framework that allows the same source code to work portably on Windows and Linux, and for Windows regardless of locale. I’ve created most of that already. I’m just writing up the basics.
Technically, regarding Windows console windows: yes, in a sense they’re UTF-16, but they’re UTF-16 restricted to the BMP, i.e. they’re UCS-2. You can see this most easily in the buffer data format for e.g. WriteConsoleOutput.
Also technically, the 65001 codepage exists, and works to some limited degree. I am not aware that it is now officially unsupported. For example, it is certainly (and necessarily!) supported by the WideCharToMultiByte API function. One might argue that the chcp and mode command documentation leaves out codepage 65001, but then it also leaves out codepage 1252, Windows ANSI Western (and Korean etc., but 1252 is AFAIK available everywhere). I.e. the command documentation is incorrect, so that the facts are in a somewhat blurry quantum-mechanical Schrödinger’s cat like state… Anyway, the most important observation about codepage 65001 is that setting it as the active codepage in a console window does not work for input with the Visual C++ runtime.
The _O_U8TEXT mode provides a partial solution for code using the standard C++ i/o facilities — AFAIK it’s the best compromise that anyone (including Microsoft’s Unicode guru Michael Kaplan) is aware of. One can do better by using the Windows API level directly, but that loses portability :-(.
Pingback: Unicode part 2: UTF-8 stream mode | Alf on programming (mostly C++)
Pingback: String-to-byte sequence translation using a fixed encoding, preferably UTF-8