The Windows console subsystem has a host of Unicode-related bugs. And standard Windows programs such as more (not to mention the C# 4.0 compiler csc) just crash when they’re run from a console window with UTF-8 as active codepage, perplexingly claiming that they’re out of memory. On top of that the C++ runtime libraries of various compilers differ in how they behave. Doing C++ Unicode i/o in Windows consoles is therefore problematic. In this series I show how to work around limitations of the Visual C++ _O_U8TEXT file mode, with the Visual C++ and g++ compilers. This yields an automatic translation between external UTF-8 and internal UTF-16, enabling Windows console i/o of characters in the Basic Multilingual Plane.
In both Windows and Linux, properly internationalized applications use either UTF-16 or UTF-32 for their internal string handling. For example, the popular cross-platform ICU library (International Components for Unicode) is based on UTF-16 encoded strings. For this kind of application Windows seems to be the better fit, since Windows’ API is UTF-16 based while Linux’s API is effectively, on a modern installation, UTF-8 based.
Still, in a simple console program one does not typically take on the quite steep overhead of full-fledged Unicode handling.
Instead of using a full-fledged Unicode handling library like ICU, one then relies on just the standard C and C++ libraries, and the Unicode handling reduces to what can be expressed directly with the C++ language and standard library support.
In Linux the typical small Unicode console program has everything char-based and UTF-8 encoded. The external data, the internal strings, the string literals, and of course the C or C++ source code, are all UTF-8 encoded. The total unification allows simple programs like this:
[utf8_sans_bom.all_utf8.cpp]
#include <stdexcept>        // std::runtime_error, std::exception
#include <stdlib.h>         // EXIT_SUCCESS, EXIT_FAILURE
#include <iostream>         // std::cout, std::cerr, std::endl
#include <string>           // std::string
using namespace std;

bool throwX( string const& s ) { throw runtime_error( s ); }
bool hopefully( bool v ) { return v; }

string lineFrom( istream& stream )
{
    string result;
    getline( stream, result );
    hopefully( !stream.fail() )
        || throwX( "lineFrom: failed to read line" );
    return result;
}

int main()
{
    try
    {
        static char const narrowText[] = "Blåbærsyltetøy! 日本国 кошка!";

        cout << "Narrow text: " << narrowText << endl;
        cout << endl;
        cout << "What's your name? ";
        string const name = lineFrom( cin );
        cout << "Glad to meet you, " << name << "!" << endl;
        return EXIT_SUCCESS;
    }
    catch( exception const& x )
    {
        cerr << "!" << x.what() << endl;
    }
    return EXIT_FAILURE;
}
Testing this in Ubuntu 11.10:
[~/blog/examples]
$ g++ utf8_sans_bom.all_utf8.cpp
[~/blog/examples]
$ ./a.out
Narrow text: Blåbærsyltetøy! 日本国 кошка!
What's your name? Bjørn Bråten Sæter
Glad to meet you, Bjørn Bråten Sæter!
[~/blog/examples]
$ _
Yay, it worked OK in Linux!
Testing the very same source code file in Windows using the same Linux-origins compiler (namely g++), and intentionally not specifying any codepage for the console window:
W:\examples> g++ -pedantic -Wall utf8_sans_bom.all_utf8.cpp
W:\examples> a
Narrow text: Blåbærsyltetøy! 日本国 кошка!
What's your name? Bjørn Bråten Sæter
Glad to meet you, Bjorn Bråten Sæter!
W:\examples> _
One reason for the gobbledygook here is that the Windows console by default assumes that the program produces OEM encoded text. That means, it assumes that the text is encoded using the original IBM PC character set, or a variation of that old character set. This encoding assumption is called the console window’s active codepage, and it can be inspected and changed via the chcp command, e.g. from codepage 437 (original IBM PC character set) to 65001 (UTF-8):
W:\examples> chcp
Active code page: 437
W:\examples> chcp 65001
Active code page: 65001
W:\examples> a
Narrow text: Blåbærsyltetøy! 日本国 кошка!huh?
W:\examples> _
Positive: the initial UTF-8 text output appeared to work. The Chinese characters 日本国 displayed as just empty rectangles, but they copied OK. Both the Norwegian and Russian copied OK and also displayed OK.
Negative: input apparently did not work, and it apparently caused some of the program’s output (including the prompt before the input operation) to disappear!
Exactly what went wrong above is difficult to say for sure. It might be the input operation, or it might be something else. However, the exact cause is irrelevant because input fails outright, not just producing weird side effects, if the user types in some non-ASCII characters such as Norwegian æ, ø and å:
W:\examples> a
Narrow text: Blåbærsyltetøy! 日本国 кошка!Bjørn Bråten Sæter
!lineFrom: failed to read line
W:\examples> _
Given that total failure for the “all UTF-8” approach has been established, it may seem overkill to also show the unintelligible output effect with the Windows platform’s major compiler, Visual C++ (here version 10.0), but as you’ll see it’s relevant:
W:\examples> cl utf8_sans_bom.all_utf8.cpp /Fe"b"
utf8_sans_bom.all_utf8.cpp
W:\examples> chcp
Active code page: 65001
W:\examples> b
Narrow text: Bl��b��rsyltet��y! ��������� ����������!
What's your name? Bjørn Bråten Sæter
!lineFrom: failed to read line
W:\examples> _
Here the Visual C++ runtime detects that the standard output is connected to a console window. Instead of sending the text via the ordinary standard output stream, it then attempts to place the correct Unicode UCS2-encoded characters directly in the console window’s text buffer. However, since the C++ source code was encoded as UTF-8 without BOM (as is usual in Linux), the Visual C++ compiler erroneously assumed that the source code was encoded as Windows ANSI. And since Visual C++ has Windows ANSI more or less hardwired as its C++ narrow character execution character set, it blindly copied the string literal’s bytes to the executable’s string values. The runtime is therefore handed UTF-8 bytes for its direct console i/o instead of the Windows ANSI bytes that it expects – so its helpful translation to UCS2 fails…
At the Windows API level the runtime implements direct console output by calling the WriteConsole function instead of the WriteFile function. And similarly, if the console input had worked, then it would probably have been via a call to the ReadConsole function instead of the ReadFile function. The WriteConsole function accesses the console window’s text buffer directly and takes an UTF-16 wchar_t based argument, and ditto for ReadConsole.
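For illustration only – this is not the article’s code, and not necessarily what the runtime does internally – here is a minimal sketch of the common idiom for distinguishing a real console from a redirected stream, writing UTF-16 text via WriteConsoleW in the first case and raw bytes via WriteFile in the second. The writeWide helper name is invented for this sketch:
#include <windows.h>    // GetStdHandle, GetConsoleMode, WriteConsoleW, WriteFile

// Writes wide (UTF-16) text to standard output: directly to the console
// buffer when stdout is a real console, as raw bytes when it is redirected.
void writeWide( wchar_t const text[], DWORD const length )
{
    HANDLE const output = GetStdHandle( STD_OUTPUT_HANDLE );
    DWORD mode = 0;
    DWORD nWritten = 0;

    if( GetConsoleMode( output, &mode ) )       // Succeeds only for a console.
    {
        WriteConsoleW( output, text, length, &nWritten, 0 );
    }
    else
    {
        // Redirected stream: the UTF-16LE code units go out as-is.
        WriteFile( output, text, length*sizeof( wchar_t ), &nWritten, 0 );
    }
}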
One can avoid the direct console i/o by redirecting the output.
Such redirection establishes that the output’s byte level data is good, i.e. that all would have been well for this particular program’s output, had it not been for the interference from the (probably well-intentioned) direct console i/o help attempt:
W:\examples> echo Bjørn Bråten Sæter | b >result
W:\examples> type result
Narrow text: Blåbærsyltetøy! 日本国 кошка!
What's your name? Glad to meet you, Bjørn Bråten Sæter !
W:\examples> _
And because the data is correct, one can be sure that the Visual C++ compiler was tricked into assuming that the source code was ANSI Western. This means that any wide string literal, which a Windows compiler has to translate to UTF-16, will be incorrectly translated if it contains any non-ASCII characters. Hence, for portable source code it is not a good idea to encode the source code as UTF-8 without BOM – for that is effectively to lie to the compiler.
Now that g++ also accepts a BOM at the start of the source code, portable source code should therefore be encoded as UTF-8 with BOM.
With the BOM in place Visual C++ correctly determines that the source code is UTF-8 encoded, although as of late 2011 this appears to still be undocumented. And with a correct assumption about the source code’s encoding, narrow string literals are correctly translated to Windows ANSI encoded string values in the executable. For Unicode literals in Windows one should therefore use wide string literals, e.g. L"Blåbærsyltetøy! 日本国 кошка!", which in Windows ends up as an UTF-16 encoded string value in the executable.
Use source code encoded as UTF-8 with BOM, and use wide string literals, OK (or rather, one just has to accept that complication!) – but how does one then output one of these literals? For example, std::wcout in Windows has a rather strong tendency to translate down to Windows ANSI, not to UTF-8.
Well, in his 2008 blog posting Conventional wisdom is retarded, aka What the @#%&* is _O_U16TEXT? Michael Kaplan explained that
“the [Visual C++] CRT? Starting in 2005/8.0, it knows more about Unicode than any of us have been giving it credit for…”
The Visual C++ runtime library can convert automatically between internal UTF-16 and external UTF-8, if you just ask it to do so by calling the _setmode function with the appropriate file descriptor number and mode flag. E.g., mode _O_U8TEXT causes conversion to/from UTF-8.
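As a taste of what part 2 elaborates on, here is a minimal sketch, at the C library level, of asking for that translation (ignoring for now the bugs and missing pieces that part 2 deals with); it assumes the source file is saved as UTF-8 with BOM so that the wide literal is translated correctly to UTF-16:
#include <fcntl.h>      // _O_U8TEXT
#include <io.h>         // _setmode, _fileno
#include <stdio.h>      // wprintf, stdout

int main()
{
    // Ask the Visual C++ runtime for automatic translation between
    // internal UTF-16 and external UTF-8 on the standard output stream.
    _setmode( _fileno( stdout ), _O_U8TEXT );
    wprintf( L"Blåbærsyltetøy! 日本国 кошка!\n" );
}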
One reason that many people have not known about the Unicode support that he discusses there, a Visual C++ Unicode stream mode, is that it’s mostly undocumented. Kaplan gives a link to documentation of the deprecated _wsopen function, as one place where the mode flags have been (inadvertently?) documented. However, the main usage is through the _setmode function, where, on the contrary, the official documentation goes on about how _setmode will invoke the “invalid parameter handler” unless the mode argument is either _O_TEXT or _O_BINARY. So, by using this functionality one is not just in ordinary Microsoft undocumented land. One is wholly over in explicitly-documented-as-not-working land.
On the other hand, the official documentation is plain wrong about many things (e.g., for Visual C++ 10 it maintains that the source code encoding is limited to ASCII), the _setmode documentation is incorrect about the argument checking, and the g++ compiler provides C level support for the _O_U8TEXT mode feature. Considering all that, one may choose to ignore the will-not-work statements of the documentation and just treat them as a documentation defect – for what good is a feature that can’t be used?
Since there is no real alternative for getting UTF-8 translation also down at the C library level, this is the approach that I’m going to discuss in more detail in part 2.
It might seem from Kaplan’s blog posting that you don’t have to do more than just set the mode, and go! But as you can expect from something in explicitly-documented-as-not-working land, it’s not fully implemented even in Visual C++. And even less fully implemented in g++…
Above I introduced two approaches to Unicode handling in small Windows console programs:
- The all UTF-8 approach where everything is encoded as UTF-8, and where there are no BOM encoding markers.
- The wide string approach where all external text (including the C++ source code) is encoded as UTF-8, and all internal text is encoded as UTF-16.
The all UTF-8 approach is the approach used in a typical Linux installation. With this approach a novice can remain unaware that he is writing code that handles Unicode: it Just Works™ – in Linux. However, we saw that it mass-failed in Windows:
- Input with active codepage 65001 (UTF-8) failed due to various bugs.
- Console output with Visual C++ produced gibberish due to the runtime library’s attempt to help by using direct console output.
- I mentioned how wide string literals with non-ASCII characters are incorrectly translated to UTF-16 by Visual C++ due to the necessary lying to Visual C++ about the source code encoding (which is accomplished by not having a BOM at the start of the source code file).
The wide string approach, on the other hand, was shown to have special support in Visual C++, via the _O_U8TEXT file mode, which I called an UTF-8 stream mode. But I mentioned that as of Visual C++ 10 this special file mode is not fully implemented and/or it has some bugs: it cannot be used directly but needs some scaffolding and fixing. That’s what part 2 is about.
Cheers, & enjoy!