> Rather than saying the article is wrong, can you demonstrate /why/ it is wrong?
I think you might want to re-read my entire comment: note that I'm not arguing that the technical details are wrong, only that they're insufficient to support the huge “APFS is unusable” conclusion.
As previously noted, Windows and Linux work the same way and they are used by more people in individual non-English locales than the total number of Mac users. Would you say “NTFS is unusable by non-English users” is a useful statement?
There's plenty of room to say that a particular tool needs improvement, or that people making systems which copy or archive files should check for pathological cases, but it doesn't help anything to overstate the case so broadly.
The issue isn't a "bag-of-bytes" filename model. The issue is a "bag-of-bytes" filename model combined with an inconsistent normalization scheme.
It's not a problem on Windows or Linux filesystems because Windows and Linux don't provide a half-assed normalization scheme that lets me fairly easily create files that can't be accessed. If the Cocoa libraries did no normalization, then the resulting behavior might be obnoxious from a human-interface perspective, but I don't think the article would describe it as "little short of catastrophic".
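To make the mismatch concrete, here's a minimal Python sketch (not from the original thread) showing why a bag-of-bytes filesystem can end up with two distinct files that display identically: NFC and NFD produce different code point sequences, and therefore different byte strings, for the same visible name.

```python
import unicodedata

# Two spellings of the same visible name "Café.txt":
nfc = unicodedata.normalize("NFC", "Caf\u00e9.txt")   # é as one code point, U+00E9
nfd = unicodedata.normalize("NFD", "Caf\u00e9.txt")   # e followed by U+0301 combining acute

print(nfc == nfd)            # False: different code point sequences
print(nfc.encode("utf-8"))   # b'Caf\xc3\xa9.txt'
print(nfd.encode("utf-8"))   # b'Cafe\xcc\x81.txt'
```

On a filesystem that stores names as raw bytes, those two encodings are two different files; a layer that normalizes only sometimes (e.g. in some APIs but not others) is what makes one of them unreachable.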
I'm sitting here on my US English keyboard typing scancodes that look just like they did in 1990, so I'm not the best authority on how big of a problem it really is, but I'd guess it's going to result in a lot of bugs. Anyone who's ever tried to use a Mac with a case-sensitive HFS+ partition should be able to tell you that programmers can't even "normalize" their filenames consistently strictly within their native language.
> It's not a problem on Windows or Linux filesystems because Windows and Linux don't provide a half-assed normalization scheme that lets me fairly easily create files that can't be accessed
This is only true if you're talking about the kernel APIs. Unfortunately, filenames come from a variety of sources and it's easy to find tools which inconsistently normalize them – e.g. simply copying and pasting a name from a Word doc, web page, etc. which has different normalization than whatever originally created the file – or which produce either duplicate error messages or confusing error messages because the normalization form used in a file doesn't match the normalization form written on disk.
I've encountered variations of this problem on all three systems. No approach is going to handle 100% of the filenames in the wild and all of them will require extra care in the user-interface which may or may not have been done – e.g. the Windows Explorer still provides no way to tell why Café.txt and Café.txt are not the same file – and fixing the cases where programs are internally inconsistent. APFS switching will expose some programs which were unsafe before but since it's consistent with the other common filesystems it'll remove the need for every archive, version control, etc. system to either special-case or break.
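As a sketch of the "extra care" mentioned above, a copy or archive tool on a bag-of-bytes filesystem might do a normalization-insensitive lookup instead of a raw byte comparison. This is a hypothetical helper, not code from any real tool:

```python
import os
import unicodedata

def find_file(dirpath, wanted):
    """Return the path of an entry whose name matches `wanted` after
    NFC normalization, so an NFD name pasted from a web page or Word
    doc still matches an NFC name on disk (and vice versa)."""
    target = unicodedata.normalize("NFC", wanted)
    for name in os.listdir(dirpath):
        if unicodedata.normalize("NFC", name) == target:
            return os.path.join(dirpath, name)
    return None
```

The design trade-off is the same one discussed in the thread: the lookup is forgiving for humans, but the tool must still preserve the original byte sequence when copying, or round-trips through archives and version control will silently rename files.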