
Audio Programming Unseen: an introduction 


This article is the introduction to a series in which I aim to introduce audio programming using a screen reader. Typical readers of the series will be blind or visually impaired (BVI) musicians, audio engineers or hobbyists with little or no previous programming experience, or simply BVI people who are interested in the possibilities for programming or synthesising music or other audio in an accessible way.

There exist several excellent options for a BVI person to learn and develop their audio programming skills. These systems have a few accessibility issues, but for the most part are very accessible, and they will be the focus of the articles in this series. Unfortunately, two of the most commonly used systems for teaching audio programming are not particularly accessible with a screen reader. These two systems, Max MSP and its open source equivalent, Pure Data, both rely on a visual programming paradigm to create sounds and music. In these systems, objects are connected together to form what are called “patches”. You might create a patch to play a piece of music or audio where the pitch, tempo or other features can be controlled interactively. Dials and sliders can be connected into the patch to apply controls or modulate playback in some way. Pre-recorded audio clips can be dragged into the mix, and other controls can be applied to manipulate the sound and/or introduce effects such as reverb or distortion. In Max MSP and Pure Data, all of this is achieved by using the computer mouse to connect objects, typically routing the output of one or more objects into the inputs of further objects, until the final output reaches the output device, typically the sound card of a computer.

Systems like Max MSP and Pure Data are widely used, particularly in introductory audio programming courses, because for sighted people it is visually very clear how objects are connected together to form a patch. The objects and connections making up a patch are presented on screen, and it is relatively straightforward and intuitive to connect them using drag-and-drop mouse operations. Such operations are, of course, very difficult or impossible to perform with a screen reader, so sadly the initial experience of some BVI people is that audio programming is very inaccessible. The good news is that there exist accessible systems every bit as good, if not better, than Max MSP and Pure Data, which can be used to achieve all of the things described above, and much more. These systems involve learning a programming language and writing code much as one might write general purpose programs in a language such as Python, Java or C++. The style, power and difficulty of the language each system supports vary considerably, and these are likely to be major factors when you come to choose which particular system is right for you. It is interesting to note, however, that many experienced sighted audio programmers consider that a text-based programming language, as used in these systems, ultimately provides much more power and flexibility than a visual programming paradigm.

Accessible Audio Programming Systems

The articles in this series will each look at an accessible audio programming system. We will describe what each system is capable of, the characteristics of the audio programming language used, describe the programming process from the point of view of someone using a screen reader with examples and mention any known accessibility issues with the system. We will also provide links and/or references sufficient to enable you to get started using the system.

 The remainder of this article will briefly introduce each system, in order to give a flavour of what will be covered in future articles.

Sonic Pi – “The Live Coding Music Synth for Everyone” 

Sonic Pi is described as:

  • “Powerful for professional musicians and DJs. 
  • Expressive for composition and performance.
  • Accessible for blind and partially sighted people.
  • Simple for computing and music lessons.

Learn to code creatively by composing or performing music in an incredible range of styles from Classical & Jazz to Hip hop & EDM. Free for everyone with a friendly tutorial.

Brought to you by Sam Aaron and the Sonic Pi Core Team.”

Sonic Pi is available on Windows, Mac and Raspberry Pi OS. The article on Sonic Pi in this series will focus on the Windows version.

The easiest way I have found to code in Sonic Pi is to write a text file and run it using the Sonic Pi system. At its simplest, the text file can consist of a series of notes with specified durations, but it is easy to add sound effects and create much more sophisticated musical scores.
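As a minimal sketch of what such a text file might look like (the particular notes and durations here are arbitrary examples):

```ruby
# A minimal Sonic Pi sketch: three notes with explicit timing.
# Note numbers are MIDI values; 60 is middle C.
play 60
sleep 0.5             # wait half a beat before the next note
play 64
sleep 0.5
play 67, release: 1   # let the final note ring a little longer
```

Running these lines in Sonic Pi plays a simple ascending phrase; wrapping code like this in a live_loop is the usual next step towards live coding.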

The reference to “live coding” refers to the fact that it is easy to alter the code interactively while it is running, and so change the music or sounds being played. Sonic Pi, like all of the systems covered in this series, supports Open Sound Control (OSC), a protocol for sending control messages between computers and audio applications, and so all of these systems can be networked across a number of computers.


ChucK
ChucK was originally developed by Ge Wang and Perry Cook, and is now supported by the ChucK team.

ChucK is described as “a programming language for real-time sound synthesis and music creation. ChucK offers a unique time-based, concurrent programming model that is precise and expressive (we call this strongly-timed), dynamic control rates, and the ability to add and modify code on-the-fly. In addition, ChucK supports MIDI, OpenSoundControl, HID device, and multi-channel audio. It is open-source and freely available on macOS, Windows, and Linux. It’s fun and easy to learn, and offers composers, researchers, and performers a powerful programming tool for building and experimenting with complex audio synthesis/analysis programs, and real-time interactive music.”

The statement in the quote above about being “easy to learn” has to be understood as aimed at an audience who are not new to audio programming. That said, the language is in fact fairly straightforward, the code is easy to read and certainly easy to change “on the fly”, as is the case with Sonic Pi. A “HID device” is a Human Interface Device such as a keyboard, mouse, joystick or game controller; ChucK programs can respond to these devices while they are running, so, just as with Sonic Pi, the system supports live coding.

ChucK has a particularly small footprint, that is, it doesn’t demand much computing power to run, unless, of course, you are using it to control a large number of possibly networked instruments. Again, the easiest way I have found to program ChucK is to write a ChucK program in a text file and simply run it using ChucK. It has to be said that the documentation is sometimes quite concise, and often the easiest way to learn is to study the examples in the manual. While the programming functionality of ChucK is, in general, very impressive, one thing it does not support, in contrast to systems such as Csound (to be covered in this series), is spatial sound: other than simple panning, you can’t move sounds around in space in ChucK.
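To give a flavour of the “strongly-timed” model, here is a minimal ChucK sketch; the => operator connects units together, and advancing the special variable now is what makes time, and therefore sound, pass:

```chuck
// A minimal ChucK sketch: a sine oscillator routed to the sound card.
SinOsc s => dac;     // connect oscillator to the digital-to-analogue output
440 => s.freq;       // set frequency to A4 (440 Hz)
0.5 => s.gain;       // set amplitude
1::second => now;    // advance time by one second; the tone sounds while time passes
```

Because time only advances when the program explicitly moves now forward, timing in ChucK is sample-accurate rather than dependent on how fast the code happens to execute.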


Csound
Csound is an extremely powerful, open source audio programming language which runs on Linux, Mac, Windows, iOS and Android.

Csound is described as “a sound and music computing system which was originally developed by Barry Vercoe in 1985 at MIT Media Lab. Since the 90s, it has been developed by a group of core developers. A wider community of volunteers contribute examples, documentation, articles, and takes part in the Csound development”.

Csound is a hugely powerful environment with almost limitless capabilities for sound and music production, synthesis and analysis. As with Sonic Pi and ChucK, it is programmed by putting together a text file containing Csound program code and running it in one of the numerous Csound environments. Under Windows, there are various environments that can be used to run Csound, but in the article dealing with Csound in this series, we shall focus on the command-line option.

Text files of program code for Csound consist of two major parts. The first part defines the instruments to be used in the program. The second part specifies the notes to be played by those instruments. The mechanisms Csound provides for defining instruments and specifying the notes and events in the score can be extremely sophisticated, but the tutorial provided with the download, the examples in the “Floss” manual, and hopefully the article on Csound in this series, are simple enough to enable those new to audio programming to grasp the basic ideas before moving on to more sophisticated use cases.
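As a minimal sketch of this two-part structure (the instrument design and the score values here are arbitrary examples):

```csound
<CsoundSynthesizer>
<CsInstruments>
sr = 44100        ; sample rate
ksmps = 32
nchnls = 2        ; stereo output
0dbfs = 1

instr 1                  ; a simple sine-wave instrument
  aSig oscili p4, p5     ; amplitude from score field p4, frequency from p5
  outs aSig, aSig
endin
</CsInstruments>
<CsScore>
; p1=instr  p2=start  p3=duration  p4=amplitude  p5=frequency
i 1         0         1            0.5           440
i 1         1         1            0.5           660
</CsScore>
</CsoundSynthesizer>
```

Each “i” line in the score triggers one note on the named instrument, with the remaining fields (p4, p5 and so on) passed to the instrument as parameters.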


While there certainly exist other options that BVI people might explore for accessible audio programming, I believe the three systems covered in this series are among the most usable and powerful available. While the focus of the series will be on using these systems under Windows, I will aim to keep the Windows-specific information to a minimum, so that the vast majority of the material will be relevant no matter which operating system you are using.

The next article in the series will focus on Sonic Pi, as I believe this is the system with the lowest technical barriers to entry and the most intuitive for people with no previous programming experience. The system does assume some basic knowledge of music, and the greater your familiarity with music, sound effects and mixing, the quicker you will be able to take advantage of Sonic Pi’s wide range of controls and language constructs.
