Software Accessibility Architecture for Blind Users: A Comparison of Windows, macOS, and Linux
The comfort and convenience of digital technologies have transformed the lives of people across the planet: real-time communication, relief from routine work for everyone whose activities revolve around text, science included, the free choice of information sources, and much more.

First Steps
Until the mid-1980s, most digital technologies were inaccessible to people with visual impairments. It is difficult for many of us to imagine that the operating systems popular at the time—Microsoft’s MS-DOS and Apple’s Macintosh System—had no screen readers or support for Braille input. All the more surprising is the fact that in just over forty years, the accessibility of digital technologies has changed dramatically.
Microsoft
The predecessor of Windows, MS-DOS, had no built-in accessibility tools for blind users, so screen access depended on commercial third-party screen readers. (OutSpoken, often mentioned in this context, was in fact Berkeley Systems' screen reader for the Macintosh's graphical interface, released in 1989.) By the end of the 1980s, the screen reader that would become arguably the most well-known and widely used appeared: Job Access With Speech, or JAWS.
The history of the JAWS screen reader deserves special mention. Its creation is owed to 27-year-old motorcycle racer Ted Henter, who lost his sight completely as a result of an accident. Henter did not give up: he learned programming, met businessman Bill Joyce, and together they began developing a fundamentally new screen reader.
Early versions of Windows, like MS-DOS, lacked a mature accessibility architecture for blind and low-vision users. However, following the release of Windows 3.1, Microsoft began introducing APIs necessary to support screen readers. With Windows 95, built-in Accessibility Options appeared, including a screen magnifier, keyboard filters, and visual indicators for sound.
From that point on, the development of accessibility in Windows and its integral component, Microsoft Office, accelerated significantly. Beginning with the introduction of Microsoft Active Accessibility (MSAA), the company’s investments in accessibility technologies became substantial. These financial resources made it possible to develop a new API, UI Automation (UIA). By the early 2010s, Microsoft’s approach to software accessibility had become universal, and by the 2020s the accessibility architecture of both Windows and Microsoft 365 had come to be regarded as classical, much like the Ionic or Corinthian orders in architecture.
It is important to note that in the 1990s, JAWS began to be used with Microsoft Office, making Word one of the first mainstream tools that allowed people with visual impairments to work with documents. In 2000, Windows 2000 introduced the built-in screen reader Narrator, later inherited by Windows XP. This was Microsoft’s first in-house accessibility solution for blind users. While the step may have been largely symbolic, it was critically important for the development of the company’s own accessibility architecture.
Today, Narrator has evolved into a full-fledged, out-of-the-box component of Windows. Microsoft 365 applications are fully compatible with the major Windows screen readers: JAWS, NVDA, and the built-in Narrator. Word, Excel, and PowerPoint include accessibility checkers, automatic image descriptions, and improved keyboard navigation. In addition, Microsoft 365 supports Braille displays and web-based interfaces; NVDA, for example, can be used with Office Online in a browser.
Historically, Windows has developed several parallel accessibility APIs—from Microsoft Active Accessibility (MSAA) to the more modern UI Automation (UIA). This has created an ecosystem in which different screen readers can coexist: the commercial JAWS, the free NVDA, and the built-in Narrator. Each uses the available APIs in its own way, resulting in differences in how interfaces are interpreted.
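The shared idea behind MSAA and UIA is that an application exposes a tree of elements with standardized properties (name, role, state), which any screen reader can then walk and voice in its own way. The sketch below is a toy model of that pattern in Python; the class and property names are illustrative, not the real Windows API.

```python
# Toy model of the idea behind accessibility APIs such as MSAA/UIA:
# an application exposes a tree of elements with standard properties,
# and a screen reader traverses that tree instead of scraping pixels.
# All names here are illustrative, not the actual Windows interfaces.

from dataclasses import dataclass, field

@dataclass
class Element:
    name: str          # accessible name the screen reader announces
    role: str          # standardized role: "button", "dialog", ...
    state: str = ""    # e.g. "focused", "checked"
    children: list = field(default_factory=list)

def announce(element):
    """Build the phrase a screen reader might speak for one element."""
    parts = [element.name, element.role]
    if element.state:
        parts.append(element.state)
    return ", ".join(parts)

def walk(root):
    """Depth-first traversal of the accessibility tree."""
    yield announce(root)
    for child in root.children:
        yield from walk(child)

dialog = Element("Save changes?", "dialog", children=[
    Element("Save", "button", "focused"),
    Element("Cancel", "button"),
])

for phrase in walk(dialog):
    print(phrase)
```

Because every screen reader applies its own rules to the same tree (which properties to speak, in what order, with what verbosity), JAWS, NVDA, and Narrator can interpret the same interface differently, exactly as the paragraph above describes.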
Apple
Apple built its own accessibility architecture but chose a different strategy and technologies. In 2005, the company announced the release of its first built-in screen reader, VoiceOver, for Mac OS X Tiger (version 10.4). In subsequent versions, Mac OS X Leopard (2007) and Snow Leopard (2009), VoiceOver’s functionality was expanded.
Starting in 2010, VoiceOver, now an integral part of subsequent macOS versions, received additional features such as Screen Curtain for privacy, image descriptions, and haptic feedback. From that year to the present, VoiceOver has been a standard feature across all of Apple’s platforms: macOS, iOS, iPadOS, watchOS, and tvOS, alongside other accessibility tools.
Apple’s chosen path of deeply integrating the VoiceOver screen reader into the architecture of its products, embedding accessibility support at the level of the operating system’s core frameworks, made it possible to unify VoiceOver’s functionality across the entire product line. Despite limited user customization, the NSAccessibility protocol on macOS and the UIAccessibility APIs on iOS significantly expanded accessibility for blind users.
Linux and Open Source Communities
The GNOME Project, whose Linux desktop environment first appeared in 1999, declared accessibility a priority for UNIX-like operating systems. In the following years, an open-source counterpart to Microsoft’s MSAA/UIA technologies was developed: the Assistive Technology Service Provider Interface (AT-SPI).
The Orca screen reader, developed at Sun Microsystems for GNOME in the mid-2000s, supports both Braille displays and speech synthesis and provides navigation in applications such as Firefox, LibreOffice, and others.
Modern Linux systems and other open-source projects continue to develop accessibility technologies for blind users, including:
- the Orca screen reader, screen magnifiers, and high-contrast themes;
- the BRLTTY service for working with Braille displays in the text console;
- the modular speech synthesis system Speech Dispatcher.
The advantage of Linux and the open-source ecosystem lies in an independent accessibility platform. For graphical environments such as GNOME, the Assistive Technology Service Provider Interface (AT-SPI) framework is used, enabling bidirectional communication between assistive technologies—such as the Orca screen reader—and applications. Originally based on CORBA (Common Object Request Broker Architecture), AT-SPI transitioned to Desktop Bus (D-Bus) in 2011, improving performance, stability, and support for various toolkits, including the cross-platform Qt framework.
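At its core, AT-SPI follows a registry pattern: applications report events (such as focus changes) to a central registry over the bus, and assistive technologies subscribe to the event types they care about. The snippet below is a pure-Python sketch of that shape only; real AT-SPI rides on D-Bus, though the event name used here is modeled on actual AT-SPI event naming.

```python
# Pure-Python sketch of the publish/subscribe pattern underlying the
# AT-SPI registry. Applications emit events to the registry; assistive
# technologies (an Orca-like client here) subscribe to them. This is an
# illustrative model, not the real AT-SPI/D-Bus interface.

class Registry:
    def __init__(self):
        self._listeners = {}

    def subscribe(self, event_type, callback):
        """An assistive technology registers interest in an event type."""
        self._listeners.setdefault(event_type, []).append(callback)

    def emit(self, event_type, detail):
        """An application reports an event; all subscribers are notified."""
        for callback in self._listeners.get(event_type, []):
            callback(detail)

spoken = []
registry = Registry()

# The screen-reader side subscribes to focus changes.
registry.subscribe("object:state-changed:focused",
                   lambda detail: spoken.append(f"focus: {detail}"))

# The application side reports that focus moved to a button.
registry.emit("object:state-changed:focused", "Save button")

print(spoken)  # ["focus: Save button"]
```

The communication is bidirectional in the real framework: besides receiving events, the assistive technology can query objects back through the same bus for their names, roles, and text content.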
However, the need for manual installation and configuration in Linux often complicates accessibility and requires additional effort from users, unlike out-of-the-box solutions.
Accessibility in Numbers: What Do Users Choose?
According to the WebAIM screen reader user survey of 2023-2024, 1,522 of the 1,539 respondents answered the question “Which operating system do you use with your primary screen reader on your desktop or laptop computer?” The picture looks like this:
| Response | Number of respondents | % of respondents |
| --- | --- | --- |
| Windows | 1311 | 86.1% |
| Mac | 146 | 9.6% |
| Linux | 44 | 2.9% |
| Other | 21 | 1.4% |
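The figures in the table are internally consistent: each percentage is the response count divided by the 1,522 respondents who answered the question. A quick check in Python:

```python
# Verify the OS table: counts sum to the number of answers (1,522),
# and each percentage is the count over that total, rounded to one
# decimal place.

counts = {"Windows": 1311, "Mac": 146, "Linux": 44, "Other": 21}
total = sum(counts.values())
print(total)  # 1522

for name, n in counts.items():
    print(f"{name}: {round(100 * n / total, 1)}%")
# Windows: 86.1%, Mac: 9.6%, Linux: 2.9%, Other: 1.4%
```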
Primary Desktop/Laptop Screen Reader
If the majority of blind users prefer Windows, what do they use inside this system and why?
“Which of the following is your primary desktop/laptop screen reader?”:
| Response | Number of respondents | % of respondents |
| --- | --- | --- |
| JAWS | 619 | 40.5% |
| NVDA | 577 | 37.7% |
| VoiceOver | 148 | 9.7% |
| Dolphin SuperNova | 57 | 3.7% |
| ZoomText/Fusion | 41 | 2.7% |
| Orca | 36 | 2.4% |
| Narrator | 10 | 0.7% |
| Other | 41 | 2.7% |
Screen Readers Commonly Used
“Which of the following desktop/laptop screen readers do you commonly use?”
| Response | Number of respondents | % of respondents |
| --- | --- | --- |
| NVDA | 1009 | 65.6% |
| JAWS | 931 | 60.5% |
| VoiceOver | 675 | 43.9% |
| Narrator | 574 | 37.3% |
| Orca | 127 | 8.3% |
| ZoomText/Fusion | 115 | 7.5% |
| Dolphin SuperNova | 83 | 5.4% |
| Other | 184 | 11.9% |
71.6% of respondents use more than one desktop/laptop screen reader, 43% use three or more, and 17.4% use four or more. VoiceOver users are the most likely to also use additional screen readers.
The Practical Picture of Usage
When examining what blind users actually use on their computers, it becomes clear that Windows holds a dominant position among users with visual impairments. This is due to a combination of its widespread market adoption and a rich, flexible ecosystem of both built-in and third-party screen readers.
The primary tools remain JAWS (40.5%) and NVDA (37.7%). While Narrator, built into Windows, is used as a primary screen reader by only 0.7% of users, it serves as a secondary tool for 37.3%.
Users actively combine screen readers: 71.6% use more than one tool, 43% use three or more, and 17.4% use four or more.
NVDA, which is free and accessible to everyone, surpasses JAWS in the number of users who use it at least occasionally (65.6% versus 60.5%).
VoiceOver, traditionally associated with macOS, is the primary screen reader for 9.7% of respondents, closely mirroring the 9.6% who use a Mac. At the same time, it is widely used beyond a primary role: 43.9% of all respondents listed it among the screen readers they commonly use, reflecting its reach across Apple’s devices.
Less popular solutions, such as Orca for Linux or Dolphin SuperNova, remain niche tools.
All of this confirms one key trend: users choose solutions that work out of the box, are easy to install, and are compatible with their primary working environment. Flexibility and the ability to combine screen readers are what make the digital environment truly accessible.