If you think about it, we interact with our desktop computers mostly with our hands and our eyes. Our hands are on the keyboard or mouse; our eyes are looking at the screen. However, some users cannot make use of a screen or a mouse. They may be blind, or have low vision, or may lack fine motor control in their wrists, hands, and fingers.
If we don't make our websites universally accessible, we risk losing out on the contributions these users have to make: their ideas, donations, bug fixes, purchases, stories, and more.
Users who cannot see a screen, or who cannot see it well, may interact with their computer by hearing: aural interaction. Users who cannot use a mouse might rely on keyboard interaction, but a range of other assistive technologies also exists:
- screen magnifiers, which are used to enlarge and improve the visual readability of rendered text and images
- screen readers, which are most-often used to convey information through synthesized speech or a refreshable Braille display
- text-to-speech software, which is used to convert text into synthetic speech
- speech recognition software, which is used to allow spoken control and dictation
- alternate input technologies (including head pointers, on-screen keyboards, single switches, and sip/puff devices), which are used to simulate the keyboard
- alternate pointing devices, which are used to simulate mouse pointing and clicking.
However, accessibility is not just an issue for users with impaired vision or motor skills; other groups that need to be considered include deaf users and users with learning disabilities.