A wrapper for UTF-8 Windows apps.

The Windows API is based on UTF-16 encoded wide text. For example, the API function CommandLineToArgvW, which parses a command line, exists only in a wide text version. But the introduction of support for UTF-8 as the process code page in the May 2019 update of Windows 10 greatly increases the incentive to use UTF-8 encoded narrow text internally in Windows GUI applications, i.e. to use UTF-8 as the C++ execution character set also in Windows.
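
For example, until the UTF-8 process code page arrived, getting char based arguments meant converting manually. A minimal sketch of such a conversion (my own illustration; the function name is invented here and error handling is omitted):

#include <windows.h>
#include <shellapi.h>       // CommandLineToArgvW; link with shell32.
#include <string>
#include <vector>

auto utf8_arguments()
    -> std::vector<std::string>
{
    int n_args = 0;
    wchar_t** const wide_args = CommandLineToArgvW( GetCommandLineW(), &n_args );
    std::vector<std::string> result;
    for( int i = 0; i < n_args; ++i ) {
        const int n_bytes = WideCharToMultiByte(
            CP_UTF8, 0, wide_args[i], -1, nullptr, 0, nullptr, nullptr );
        std::string arg( n_bytes, '\0' );
        WideCharToMultiByte( CP_UTF8, 0, wide_args[i], -1, &arg[0], n_bytes, nullptr, nullptr );
        arg.resize( n_bytes - 1 );      // Drop the terminating zero byte.
        result.push_back( arg );
    }
    LocalFree( wide_args );
    return result;
}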

This article presents a minimal example of that (a message box with international text, using wrappers over Windows’ char based API); shows one way to configure Windows with UTF-8 as the ANSI code page default; and shows how to build such a program with the MinGW g++ and Visual C++ toolchains.

This is discussed in that order in the following sections:

  1. A minimal example.
  2. The header wrappers.
  3. Configuring Windows with UTF-8 as the ANSI code page default.
  4. An application manifest resource specifying UTF-8.
  5. Building with MinGW g++ and with Visual C++.

I apologize for the less than perfect formatting and possible oddities. Every time I edit, WordPress removes all instances of the text <windows.h> and wreaks havoc on the rest. This article was originally written as a GitHub-compatible markdown file, but it turned out that markdown syntax highlighting, and a lot more, didn’t work in WordPress, so the text had to be very manually re-expressed as a sequence of WordPress “blocks”.

1. A minimal example.

With a suitable wrapper for <windows.h> the C++ code of a program that displays a Windows message box with international text can now be as simple as this:

minimal.cpp

#include <header-wrappers/winapi/windows-h.utf8.hpp>

auto main()
    -> int
{
    const auto& text    = "Every 日本国 кошка loves Norwegian blåbærsyltetøy, nom nom!";
    const auto& title   = "Some Norwegian, Russian & Chinese text in there:";
    MessageBox( 0, text, title, MB_ICONINFORMATION | MB_SETFOREGROUND );
}

Result when the program is built with a specification of UTF-8 as process code page, or alternatively is run in a Windows installation configured with UTF-8 as the ANSI code page default:

[Image: the resulting message box, with the international text displayed correctly]

In contrast, here is what it looks like when a corresponding program using <windows.h> directly is built without a specification of UTF-8 as process code page and the Windows ANSI default is codepage 1252, Windows ANSI Western, as in the old days:

[Image: the ungood result, with the non-ASCII text garbled by a Windows ANSI Western interpretation]

2. The header wrappers.

The wrapper <header-wrappers/winapi/windows-h.utf8.hpp> supports this new “like ordinary C++” kind of Windows application:

  • it increases the C++-compatibility of <windows.h> by suppressing the min and max macro definitions via option NOMINMAX and by asking for more strictly typed declarations via option STRICT, plus it reduces the size of this gargantuan include (e.g. just now from 80 287 lines to 54 426 lines, with MinGW g++), via option WIN32_LEAN_AND_MEAN,
  • it makes the char based …A-functions such as MessageBoxA available without suffix, i.e. for that example as just MessageBox, by ensuring that option UNICODE is not defined, and
  • it asserts that the effective process codepage is UTF-8, which it might or might not be.

header-wrappers/winapi/windows-h.utf8.hpp

#pragma once
#undef UTF8_WINAPI
#define UTF8_WINAPI
#include "windows-h.hpp"

namespace uuid_0985060C_1AAD_453C_B3F9_A2E543F4CF1E {
    struct Winapi_envelope
    {
        Winapi_envelope()
        {
            static const bool dummy = winapi_h_assert_utf8_codepage();
        }
    };
    
    static const Winapi_envelope ensured_globally_single_utf8_assertion{};
}  // namespace uuid_0985060C_1AAD_453C_B3F9_A2E543F4CF1E

The little complexity above could be avoided by using a C++17 inline variable. That would be more in the C++ spirit of coding for maximum performance and minimum verbosity, when there is a choice. However, many people are still stuck with earlier C++ standards, and though a fallback using static instead could be provided automatically, the header would then require Visual C++ 2019 users to add option /Zc:__cplusplus, which is not presently supported by the Visual Studio GUI.
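
For reference, a sketch of what such a C++17 version of the wrapper could look like (the variable name is my own invention):

#pragma once
#undef UTF8_WINAPI
#define UTF8_WINAPI
#include "windows-h.hpp"

// C++17: an inline variable yields a single instance across the whole program.
inline const bool utf8_codepage_is_asserted = winapi_h_assert_utf8_codepage();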

Except for that issue the wrapper is designed to be a trivial top-level wrapper, so that one can replace it with one’s own equally trivial top-level wrapper, for example in order to communicate a “not UTF-8 process code page” failure to the user in a manner of one’s own choosing.

To wit, the wrapper delegates the first two points to a more basic wrapper <header-wrappers/winapi/windows-h.hpp>, which goes like this:

header-wrappers/winapi/windows-h.hpp

#pragma once
#ifdef MessageBox
#   error "<windows.h> has already been included, possibly with undesired options."
#endif

#include <assert.h>
#ifdef _MSC_VER
#   include <iso646.h>                  // Standard `and` etc. also with MSVC.
#endif

#ifndef _WIN32_WINNT
#   define _WIN32_WINNT     0x0600      // Windows Vista as earliest supported OS.
#endif
#undef WINVER
#define WINVER _WIN32_WINNT

#define IS_NARROW_WINAPI() \
    ("Define UTF8_WINAPI please.", sizeof(*GetCommandLine()) == 1)

#define IS_WIDE_WINAPI() \
    ("Define UNICODE please.", sizeof(*GetCommandLine()) > 1)

// UTF8_WINAPI is a custom macro for this file. UNICODE, _UNICODE and _MBCS are MS macros.
#if defined( UTF8_WINAPI ) and defined( UNICODE )
#   error "Inconsistent encoding options, both UNICODE (UTF-16) and UTF8_WINAPI (UTF-8)."
#endif

#undef UNICODE
#undef _UNICODE
#ifdef UTF8_WINAPI
#   define _MBCS        // Mainly for 3rd party code that uses it for platform detection.
#else
#   define UNICODE
#   define _UNICODE     // Mainly for 3rd party code that uses it for platform detection.
#endif
#undef NOMINMAX
#define NOMINMAX
#undef STRICT
#define STRICT
#undef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN
// After this an `#include <winsock2.h>` will actually include that header.

#include <windows.h>

inline auto winapi_h_assert_utf8_codepage()
    -> bool
{
    #ifdef __GNUC__
        #pragma GCC diagnostic push
        #pragma GCC diagnostic ignored "-Wunused-value"
    #endif
    assert(( "The process codepage isn't UTF-8 (old Windows?).", GetACP() == 65001 ));
    #ifdef __GNUC__
        #pragma GCC diagnostic pop
    #endif
    return true;
}
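
The IS_NARROW_WINAPI and IS_WIDE_WINAPI macros let code that requires one particular choice check it at compile time. A usage sketch, reflecting my reading of the intended use:

#include <header-wrappers/winapi/windows-h.utf8.hpp>

// Fails to compile if this translation unit somehow got the wide API:
static_assert( IS_NARROW_WINAPI(), "This code requires the char based API." );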

3. Configuring Windows with UTF-8 as the ANSI code page default.

For portability the program is best built with the UTF-8 process code page specified in an application manifest resource. Alternatively it will work to configure Windows with UTF-8 as the Windows ANSI default, provided it’s Windows 10 with the May 2019 update, or later. But probably few if any ordinary users will want to reconfigure their Windows, or to let a program do that, just in order to run that program.

You as a developer may however find the convenience of UTF-8 as the Windows ANSI default highly desirable.

It worked for me to change the item ACP to value 65001, in the semi-documented registry key

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage

You can change the value e.g. via the “regedit” GUI utility, or the reg command. You must then reboot Windows for the change to take effect.
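
For example, with the reg command from an elevated command prompt (a sketch; note that ACP is a REG_SZ string value, and that this changes a system-wide setting):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Nls\CodePage" /v ACP /t REG_SZ /d 65001 /f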

And voilà! 🙂

But wait! …

Before doing that let’s build the program with a manifest that specifies UTF-8 as process code page. That way the program will work on any post-May-2019 Windows 10 installation, not just “it works on my computer!”. The <windows.h> wrapper shown above ensures that it will not mistakenly run and present gibberish on an earlier Windows version.

4. An application manifest resource specifying UTF-8.

An application manifest is an UTF-8 encoded XML file that, if given the proper magic name (the executable’s name plus “.manifest”), can simply be shipped along with the application, but that’s best embedded as a resource in the executable.

The following manifest specifies both UTF-8 as process code page, and that the app uses version 6.0 or later of the Common Controls DLL, as opposed to an earlier version that has the same DLL name. Version 6 of the Common Controls DLL gives a modern look and feel to buttons, menus, lists, edit fields etc. Why that’s not the default, or why it requires this heavy machinery to specify, would be a mystery if Microsoft were an ordinary company.

Anyway, the text:

resources/application-manifest.xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
    <assemblyIdentity type="win32" name="UTF-8 message box" version="1.0.0.0"/>
    <application>
        <windowsSettings>
            <activeCodePage xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings"
                >UTF-8</activeCodePage>
        </windowsSettings>
    </application>
    <dependency>
        <dependentAssembly>
            <assemblyIdentity
                type="win32"
                name="Microsoft.Windows.Common-Controls"
                version="6.0.0.0"
                processorArchitecture="*"
                publicKeyToken="6595b64144ccf1df"
                language="*"
                />
        </dependentAssembly>
    </dependency>
</assembly>

Beware: at the time of writing there could be no whitespace such as space or newline on either side of the “UTF-8” activeCodePage value, and it had to be all uppercase.

5. Building with MinGW g++ and with Visual C++.

One way to include that text as a resource in the executable is to use a general resource script:

resources.rc

#include <windows.h>
1 RT_MANIFEST "resources/application-manifest.xml"

Here the 1 is the resource id (there’s a standard macro CREATEPROCESS_MANIFEST_RESOURCE_ID for it), and RT_MANIFEST is the resource type, defined (via MAKEINTRESOURCE) as the small integer 24.
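
In <winuser.h> the relevant definitions are essentially these (so an explicit CREATEPROCESS_MANIFEST_RESOURCE_ID would be more self-describing than the 1):

#define CREATEPROCESS_MANIFEST_RESOURCE_ID  MAKEINTRESOURCE( 1)
#define RT_MANIFEST                         MAKEINTRESOURCE(24)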

With the MinGW GNU tools this script is compiled by windres into an apparently ordinary object file, which is just linked with the main program object file:

[G:\code\minimal_gui\binaries]
> set CFG=-std=c++17 -Wall -pedantic-errors

[G:\code\minimal_gui\binaries]
> g++ %CFG% ..\minimal.cpp -c -o minimal.o

[G:\code\minimal_gui\binaries]
> windres ..\resources.rc -o resources.o

[G:\code\minimal_gui\binaries]
> g++ minimal.o resources.o -mwindows

Here the -mwindows option specifies the GUI subsystem for the executable, so that Windows doesn’t pop up a console window when one runs the program from Windows Explorer.

With Microsoft’s tools the script is compiled by rc into a special binary resource format in a .res file, which is just linked with the main program object file. Options can be passed to the compiler cl.exe via the environment variable CL, and to the linker link.exe via the environment variable LINK. Using an obscure linker option is unfortunately necessary with this toolchain for building a GUI subsystem executable that has a standard C++ main function:

[G:\code\minimal_gui\binaries]
> set CL=^
More? /nologo ^
More? /utf-8 /EHsc /GR /FI"iso646.h" /std:c++17 /Zc:__cplusplus /W4 ^
More? /wd4459 /D _CRT_SECURE_NO_WARNINGS /D _STL_SECURE_NO_WARNINGS

[G:\code\minimal_gui\binaries]
> cl ..\minimal.cpp /c
minimal.cpp

[G:\code\minimal_gui\binaries]
> rc /nologo /fo resources.res ..\resources.rc

[G:\code\minimal_gui\binaries]
> set LINK=/entry:mainCRTStartup

[G:\code\minimal_gui\binaries]
> link /nologo minimal.obj resources.res user32.lib /subsystem:windows /out:b.exe

Microsoft also has a special tool, mt.exe, to handle application manifests, but I haven’t used it.
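
From the documentation as I read it, embedding with that tool would go roughly like this (untested sketch):

mt -nologo -manifest ..\resources\application-manifest.xml -outputresource:b.exe;#1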


UTF-8 in the Windows API

The May 2019 update of Windows 10 introduced the possibility of setting the ActiveCodePage property of an executable to UTF-8. This is done via the application manifest. The documentation is super-vague on the technical details and history, and in usual Microsoft fashion the functionality is obscured and the little desirable kill-a-gnat can only be done by costly nuclear bombing, so to speak — why let something simple be simple if it can be wrapped in military standard complexity?

But it means that with Visual C++ 2019 one can now use UTF-8 encoding for GUI applications, and for the output of console programs, without any encoding conversions in the code.

In particular, with UTF-8 active process codepage the arguments of main now come handily UTF-8 encoded, which means that they can now represent general filenames also in Windows. Hurray! Yippi!
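
For instance, a sketch like the following (my illustration) can now open any file the user names, regardless of exotic characters in the path, using only the standard narrow functions:

#include <stdio.h>

int main( int argc, char* argv[] )
{
    if( argc < 2 ) { return 1; }
    // With UTF-8 as the active process codepage argv[1] is UTF-8 encoded,
    // so this works also for paths like "blåbær-日本国.txt".
    FILE* const f = fopen( argv[1], "r" );
    if( !f ) { return 1; }
    fclose( f );
    return 0;
}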

However, interactive console input of UTF-8 is still limited to ASCII at the API level. And the MinGW g++ 9.2 compiler’s default standard library implementation doesn’t support UTF-8 in the C and C++ locale machinery, e.g. in setlocale, probably because it employs an old version of Microsoft’s runtime library. That means that FILE* or iostreams UTF-8 console output with MinGW g++ 9.2 only works for the default “C” locale.

I experimented by setting the ANSI codepage default in the registry to 65001, the UTF-8 codepage number. After rebooting the console windows came up with active codepage 65001, even though the OEM codepage default was the same old one (850 in my case). That indicates an effort on Microsoft’s part to support UTF-8 all the way in Windows, which if so is fantastically good.


An example application manifest.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
    <assemblyIdentity type="win32" name="UTF-8 app example" version="6.0.0.0"/>
    <application>
        <windowsSettings>
            <activeCodePage xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings"
                >UTF-8</activeCodePage>
        </windowsSettings>
    </application>
    <dependency>
        <dependentAssembly>
            <assemblyIdentity
                type="win32"
                name="Microsoft.Windows.Common-Controls"
                version="6.0.0.0"
                processorArchitecture="*"
                publicKeyToken="6595b64144ccf1df"
                language="*"
                />
        </dependentAssembly>
    </dependency>
</assembly>

The second assemblyIdentity part has nothing to do with the UTF-8 support; it just corrects a practically unusable default for the look and feel of buttons etc. Essentially this manifest corrects the two “wrong” defaults: the narrow text encoding, and the look ’n feel. From within an application with this manifest it looks as if both CP_ACP (the global default) and CP_THREAD_ACP (the mysterious thread codepage) are UTF-8, codepage 65001.
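
A sketch of one way to check that, via the documented GetACP and GetCPInfoEx functions:

#include <windows.h>
#include <stdio.h>

int main()
{
    CPINFOEX info = {};
    GetCPInfoEx( CP_THREAD_ACP, 0, &info );     // Resolves the pseudo-codepage.
    printf( "GetACP() = %u, thread ACP = %u\n", GetACP(), info.CodePage );
}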

In my experimentation UTF-8 had to be specified in uppercase, and it did not work with whitespace such as space or newline at either side.


An example resource script:

#include <windows.h>
1 RT_MANIFEST "resources/application-manifest.xml"

Non-crashing Python 3.x output in Windows

Problem

The following little Python 3.x program just crashes with CPython 3.3.4 in a default-configured English Windows:

crash.py3
#encoding=utf-8
print( "Blåbærsyltetøy!")
H:\personal\web\blog alf on programming at wordpress\001\test>chcp 437
Active code page: 437

H:\personal\web\blog alf on programming at wordpress\001\test>type crash.py3 | display_utf8
#encoding=utf-8
print( "Blåbærsyltetøy!")

H:\personal\web\blog alf on programming at wordpress\001\test>crash.py3
Traceback (most recent call last):
  File "H:\personal\web\blog alf on programming at wordpress\001\test\crash.py3", line 2, in 
    print( "Blåbærsyltet\xf8y!")
  File "C:\Program Files\CPython 3_3_4\lib\encodings\cp437.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\xf8' in position 12: character maps to <undefined>

H:\personal\web\blog alf on programming at wordpress\001\test>_

Here codepage 437 is the original IBM PC character set, which is the default narrow text interpretation in an English Windows console window.

A partial solution is to change the default console codepage to Windows ANSI, which at least for CPython then matches the encoding used for output to a pipe or file, and it’s nice with consistency. But this too has a severely limited character set, with possible crash behavior for any unsupported characters.

Direct console output

Unicode text limited to the Basic Multilingual Plane (essentially original 16-bits Unicode) can be output to a Windows console via the WriteConsoleW Windows API function.

The standard Python ctypes module provides access to the API:

Direct_console_io.py
import ctypes
class Object: pass

winapi = Object()
winapi.STD_INPUT_HANDLE     = -10
winapi.STD_OUTPUT_HANDLE    = -11
winapi.STD_ERROR_HANDLE     = -12
winapi.GetStdHandle         = ctypes.windll.kernel32.GetStdHandle
winapi.CloseHandle          = ctypes.windll.kernel32.CloseHandle
winapi.WriteConsoleW        = ctypes.windll.kernel32.WriteConsoleW

class Direct_console_io:
    def write( self, s ) -> int:
        n_written = ctypes.c_ulong()
        ret = winapi.WriteConsoleW(
            self.std_output_handle, s, len( s ), ctypes.byref( n_written ), 0
            )
        return n_written.value

    def __del__( self ):
        if not winapi: return       # Module globals may already be torn down at interpreter exit.
        winapi.CloseHandle( self.std_error_handle )
        winapi.CloseHandle( self.std_output_handle )
        winapi.CloseHandle( self.std_input_handle )

    def __init__( self ):
        self.dependency = winapi
        self.std_input_handle   = winapi.GetStdHandle( winapi.STD_INPUT_HANDLE )
        self.std_output_handle  = winapi.GetStdHandle( winapi.STD_OUTPUT_HANDLE )
        self.std_error_handle   = winapi.GetStdHandle( winapi.STD_ERROR_HANDLE )

Implementing input is left as an exercise for the reader.

Overriding the standard streams to use direct i/o and UTF-8.

In addition to the silly crashing behavior, the standard streams in CPython 3.x, like sys.stdout, default to Windows ANSI for output to a file or pipe. In Python 2.7 the default encoding could be reset to more useful UTF-8 by reloading the sys module, which brought back a dynamically removed method for setting it. That trick no longer works in Python 3.x, so this code just creates new stream objects:

Utf8_standard_streams.py
import io
import sys
from Direct_console_io import Direct_console_io

class Dcio_raw_iobase( io.RawIOBase ):
    def writable( self ) -> bool:
        return True

    def write( self, seq_of_bytes ) -> int:
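        # Note: WriteConsoleW reports the number of UTF-16 units written, not bytes.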
        b = bytes( seq_of_bytes )
        return self.dcio.write( b.decode( 'utf-8' ) )

    def __init__( self ):
        self.dcio = Direct_console_io()

class Dcio_buffered_writer( io.BufferedWriter ):
    def write( self, seq_of_bytes ) -> int:
        return self.raw_stream.write( seq_of_bytes )

    def flush( self ):
        pass

    def __init__( self, raw_iobase ):
        super().__init__( raw_iobase )
        self.raw_stream = raw_iobase

# Module initialization:
def __init__():
    using_console_input = sys.stdin.isatty()
    using_console_output = sys.stdout.isatty()
    using_console_error = sys.stderr.isatty()

    if using_console_output:
        raw_io = Dcio_raw_iobase()
        buf_io = Dcio_buffered_writer( raw_io )
        sys.stdout = io.TextIOWrapper( buf_io, encoding = 'utf-8' )
        sys.stdout.isatty = lambda: True
    else:
        sys.stdout = io.TextIOWrapper( sys.stdout.detach(), encoding = 'utf-8-sig' )

    if using_console_error:
        raw_io = Dcio_raw_iobase()
        buf_io = Dcio_buffered_writer( raw_io )
        sys.stderr = io.TextIOWrapper( buf_io, encoding = 'utf-8' )
        sys.stderr.isatty = lambda: True
    else:
        sys.stderr = io.TextIOWrapper( sys.stderr.detach(), encoding = 'utf-8-sig' )
    return

__init__()

Disclaimer: It’s been a long time since I fiddled with Python, so possibly I’m breaking a number of conventions plus doing things in some less than optimal way. But this was the first path I found through the jungle of apparently arbitrary io class derivations etc. It worked well enough for my purposes (in a little script to convert NRK’s HTML-format subtitles to SubRip format), so, I gather it can be useful also for you – at least as a basis for more robust and/or more general code.

Unicode part 1: Windows console i/o approaches



The Windows console subsystem has a host of Unicode-related bugs. And standard Windows programs such as more (not to mention the C# 4.0 compiler csc) just crash when they’re run from a console window with UTF-8 as active codepage, perplexingly claiming that they’re out of memory. On top of that the C++ runtime libraries of various compilers differ in how they behave. Doing C++ Unicode i/o in Windows consoles is therefore problematic. In this series I show how to work around limitations of the Visual C++ _O_U8TEXT file mode, with the Visual C++ and g++ compilers. This yields an automatic translation between external UTF-8 and internal UTF-16, enabling Windows console i/o of characters in the Basic Multilingual Plane.

Introduction

In both Windows and Linux properly internationalized applications use either UTF-16 or UTF-32 for their internal string handling. For example, the popular cross platform ICU library (International Components for Unicode) is based on UTF-16 encoded strings. For this kind of application Windows seems to be the better fit, since Windows’ API is UTF-16 based while Linux’ API is effectively, on a modern installation, UTF-8 based.

Still, in a simple console program one does not typically take on the quite steep overhead of full-fledged Unicode handling.

Instead of using a full-fledged Unicode handling library like ICU, one then relies on just the standard C and C++ libraries, and the Unicode handling reduces to what can easily be expressed using just the direct C++ language and standard library support.

How the Linux “all UTF-8” approach does not work in Windows

In Linux the typical small Unicode console program has everything char-based and UTF-8 encoded. The external data, the internal strings, the string literals, and of course the C or C++ source code, are all UTF-8 encoded. The total unification allows simple programs like this:

[utf8_sans_bom.all_utf8.cpp]
#include <stdexcept>        // std::runtime_error, std::exception
#include <stdlib.h>         // EXIT_SUCCESS, EXIT_FAILURE
#include <iostream>         // std::cout, std::cerr, std::endl
#include <string>           // std::string
using namespace std;

bool throwX( string const& s ) { throw runtime_error( s ); }
bool hopefully( bool v ) { return v; }

string lineFrom( istream& stream )
{
    string result;
    getline( stream, result );
    hopefully( !stream.fail() )
        || throwX( "lineFrom: failed to read line" );
    return result;
}

int main()
{
    try
    {
        static char const       narrowText[]    = "Blåbærsyltetøy! 日本国 кошка!";
        
        cout << "Narrow text: " << narrowText << endl;
        cout << endl;
        cout << "What's your name? ";
        string const name = lineFrom( cin );
        cout << "Glad to meet you, " << name << "!" << endl;
        return EXIT_SUCCESS;
    }
    catch( exception const& x )
    {
        cerr << "!" << x.what() << endl;
    }
    return EXIT_FAILURE;
}

Testing this in Ubuntu 11.10:

[~/blog/examples]
$ g++ utf8_sans_bom.all_utf8.cpp
[~/blog/examples]
$ ./a.out
Narrow text: Blåbærsyltetøy! 日本国 кошка!

What's your name? Bjørn Bråten Sæter
Glad to meet you, Bjørn Bråten Sæter!
[~/blog/examples]
$  _

Yay, it worked OK in Linux!

Testing the very same source code file in Windows using the same Linux-origins compiler (namely g++), and intentionally not specifying any codepage for the console window:

W:\examples> g++ -pedantic -Wall utf8_sans_bom.all_utf8.cpp

W:\examples> a
Narrow text: Blåbærsyltetøy! 日本国 кошка!

What's your name? Bjørn Bråten Sæter
Glad to meet you, Bjorn Bråten Sæter!

W:\examples> _

One reason for the gobbledygook here is that the Windows console by default assumes that the program produces OEM encoded text. That means, it assumes that the text is encoded using the original IBM PC character set, or a variation of that old character set. This encoding assumption is called the console window’s active codepage, and it can be inspected and changed via the chcp command, e.g. from codepage 437 (original IBM PC character set) to 65001 (UTF-8):

W:\examples> chcp
Active code page: 437

W:\examples> chcp 65001
Active code page: 65001

W:\examples> a
Narrow text: Blåbærsyltetøy! 日本国 кошка!huh?

W:\examples> _

Positive: the initial UTF-8 text output appeared to work. The Chinese characters 日本国 displayed as just empty rectangles, but they copied OK. Both the Norwegian and Russian copied OK and also displayed OK.

Negative: input apparently did not work, and it apparently caused some of the program’s output (including the prompt before the input operation) to disappear!

Exactly what went wrong above is difficult to say for sure. It might be the input operation, or it might be something else. However, the exact cause is irrelevant because input fails outright, not just producing weird side effects, if the user types in some non-ASCII characters such as Norwegian æ, ø and å:

W:\examples> a
Narrow text: Blåbærsyltetøy! 日本国 кошка!Bjørn Bråten Sæter
!lineFrom: failed to read line

W:\examples> _

About direct console i/o

Given that total failure for the “all UTF-8” approach has been established, it may perhaps appear to be overkill to also show the unintelligible output effect with the Windows platform’s major compiler, Visual C++ (here version 10.0), but as you’ll see it’s relevant:

W:\examples> cl utf8_sans_bom.all_utf8.cpp /Fe"b"
utf8_sans_bom.all_utf8.cpp

W:\examples> chcp
Active code page: 65001

W:\examples> b
Narrow text: Bl��b��rsyltet��y! ��������� ����������!

What's your name? Bjørn Bråten Sæter
!lineFrom: failed to read line

W:\examples> _

Here the Visual C++ runtime detects that the standard output is connected to a console window. Instead of sending the text via the ordinary standard output stream, it then attempts to place the correct Unicode UCS2-encoded characters directly in the console window’s text buffer. However, since the C++ source code was encoded as UTF-8 without BOM (as is usual in Linux), the Visual C++ compiler erroneously assumed that the source code was encoded as Windows ANSI. And since Visual C++ has Windows ANSI more or less hardwired as its C++ narrow execution character set, it blindly copied the string literal’s bytes into the executable’s string values. The runtime’s direct console i/o is thereby handed UTF-8 bytes instead of the Windows ANSI bytes that it expects, so that its helpful translation to UCS2 fails…

At the Windows API level the runtime implements direct console output by calling the WriteConsole function instead of the WriteFile function. And similarly, if the console input had worked, then it would probably have been via a call to the ReadConsole function instead of the ReadFile function. The WriteConsole function accesses the console window’s text buffer directly and takes an UTF-16 wchar_t based argument, and ditto for ReadConsole.
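
A sketch of what such direct console output amounts to at the API level (assuming a console is attached and the text stays within the Basic Multilingual Plane):

#include <windows.h>

int main()
{
    const wchar_t text[] = L"Blåbærsyltetøy! 日本国 кошка!\r\n";
    const HANDLE output = GetStdHandle( STD_OUTPUT_HANDLE );
    DWORD n_written = 0;
    // Places the UTF-16 units directly in the console window's text buffer:
    WriteConsoleW( output, text, sizeof( text )/sizeof( wchar_t ) - 1, &n_written, nullptr );
}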

Portable source code should be UTF-8 with BOM

One can avoid the direct console i/o by redirecting the output. Such redirection establishes that the output’s byte level data is good: all would have been well for this particular program’s output, if not for the interference from the probably well-intentioned direct console i/o help attempt:

W:\examples> echo Bjørn Bråten Sæter | b >result

W:\examples> type result
Narrow text: Blåbærsyltetøy! 日本国 кошка!

What's your name? Glad to meet you, Bjørn Bråten Sæter !

W:\examples> _

And because the data is correct, one can be sure that the Visual C++ compiler was tricked into assuming that the source code was ANSI Western. And this then means that any wide string literal, which a Windows compiler has to translate to UTF-16, will be incorrectly translated if it contains any non-ASCII characters. Hence, for portable source code it is not a good idea to encode the source code as UTF-8 without BOM – for that is effectively to lie to the compiler.

Now that g++ also accepts a BOM at the start of the source code, portable source code should therefore be encoded as UTF-8 with BOM.

With the BOM in place Visual C++ correctly determines that the source code is UTF-8 encoded, although as of late 2011 this appears to still be undocumented. And with a correct assumption about the source code’s encoding, narrow string literals are correctly translated to Windows ANSI encoded string values in the executable. For Unicode literals in Windows one should therefore use wide string literals, e.g. L"Blåbærsyltetøy! 日本国 кошка!", which in Windows ends up as an UTF-16 encoded string value in the executable.

The Visual C++ UTF-8 stream mode

Use source code encoded as UTF-8 with BOM, and use wide string literals, OK (or rather, one just has to accept that complication!), but how does one then output one of these literals?

E.g., std::wcout in Windows has a rather strong tendency to translate down to Windows ANSI, not to UTF-8?

Well, in his 2008 blog posting Conventional wisdom is retarded, aka What the @#%&* is _O_U16TEXT? Michael Kaplan explained that

“the [Visual C++] CRT? Starting in 2005/8.0, it knows more about Unicode than any of us have been giving it credit for…”

The Visual C++ runtime library can convert automatically between internal UTF-16 and external UTF-8, if you just ask it to do so by calling the _setmode function with the appropriate file descriptor number and mode flag. E.g., mode _O_U8TEXT causes conversion to/from UTF-8.
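
A sketch of the basic usage with Visual C++ (with the caveat, discussed below, that once a stream is in an UTF-8 mode only wide output functions may be used on it):

#include <fcntl.h>      // _O_U8TEXT
#include <io.h>         // _setmode, _fileno
#include <stdio.h>

int main()
{
    _setmode( _fileno( stdout ), _O_U8TEXT );
    wprintf( L"Blåbærsyltetøy! 日本国 кошка!\n" );      // Converted to UTF-8 on output.
}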

One reason that many people have not known about the Unicode support that he discusses there, a Visual C++ Unicode stream mode, is that it’s mostly undocumented. Kaplan gives a link to documentation of the deprecated _wsopen function, as one place where the mode flags have been (inadvertently?) documented. However, the main usage is through the _setmode function, where, on the contrary, the official documentation goes on about how _setmode will invoke the “invalid parameter handler” unless the mode argument is either _O_TEXT or _O_BINARY. So, by using this functionality one is not just in ordinary Microsoft undocumented land. One is wholly over in explicitly-documented-as-not-working land.

On the other hand, considering that the official documentation is plain wrong about many things (e.g., for Visual C++ 10 it maintains that the source code encoding is limited to ASCII), and that the _setmode documentation is incorrect about the argument checking, and that the g++ compiler provides C level support for the _O_U8TEXT mode feature, considering all that one may choose to ignore the will-not-work statements of the documentation and just treat it as a documentation defect, for what good is a feature that can’t be used?

Since there is not really any alternative for getting UTF-8 translation also down at the C library level, this is the approach that I’m going to discuss in more detail in part 2.

It might seem from Kaplan’s blog posting that you don’t have to do more than just set the mode, and go! But as you can expect from something in explicitly-documented-as-not-working land, it’s not fully implemented even in Visual C++. And even less fully implemented in g++…

Summary so far

Above I introduced two approaches to Unicode handling in small Windows console programs:

  • The all UTF-8 approach where everything is encoded as UTF-8, and where there are no BOM encoding markers.
     
  • The wide string approach where all external text (including the C++ source code) is encoded as UTF-8, and all internal text is encoded as UTF-16.

The all UTF-8 approach is the approach used in a typical Linux installation. With this approach a novice can remain unaware that he is writing code that handles Unicode: it Just Works™ – in Linux. However, we saw that it failed across the board in Windows:

  • Input with active codepage 65001 (UTF-8) failed due to various bugs.
     
  • Console output with Visual C++ produced gibberish due to the runtime library’s attempt to help by using direct console output.
     
  • I mentioned how wide string literals with non-ASCII characters are incorrectly translated to UTF-16 by Visual C++ due to the necessary lying to Visual C++ about the source code encoding (which is accomplished by not having a BOM at the start of the source code file).

The wide string approach, on the other hand, was shown to have special support in Visual C++, via the _O_U8TEXT file mode, which I called an UTF-8 stream mode. But I mentioned that as of Visual C++ 10 this special file mode is not fully implemented and/or it has some bugs: it cannot be used directly but needs some scaffolding and fixing. That’s what part 2 is about.



Cheers, & enjoy!

[cppx] Xerces strings simplified by ownership, part II.

In part I of this series I discussed Xerces’ UTF16-based string representation, a common deallocation pitfall for such strings, and how to convert to and from such strings correctly by using C++ RAII techniques. For the RAII techniques I presented one solution using boost::shared_ptr, and one more efficient solution using a home-brewed ownership transfer class, cppx::Ownership. And I promised to next (i.e. in this installment) discuss how to do it even more efficiently by using wchar_t strings as the program’s native strings, and detecting the minimal amount of work needed for each conversion.

[… More] Read all of this posting →