Oscilloscope Video Render

Analogue oscilloscopes have a beautiful fluorescent display. The oscilloscope’s electron emitter fires electrons at the fluorescent material, briefly illuminating it at a single point. By sweeping the electron beam’s path across the fluorescent screen, images can be drawn. The trace of electrons needs to periodically pass through the parts of the screen that need to be illuminated.

Article summary: I write a Python program that turns an animation into an audio file which encodes position data as left- and right-channel voltages. When an analogue oscilloscope receives the audio input it will render the animation on the fluorescent display. Videos of the result can be found at the end of the article.

In what is called the oscilloscope’s X-Y mode the electron emitter can be controlled using two voltage inputs. The voltage level at the X-input determines the horizontal position the emitter is firing at at a given time, and the voltage level at the Y-input determines the vertical position. A voltage of 0 V at both inputs renders a single dot in the middle of the display. By changing the input voltages over time the emitter can trace images on the display.

An example of drawing a triangle using X-Y mode is shown in the image below. The point at the top of the triangle corresponds to the X-input being at 0 V and the Y-input at 3 V. By alternating between the points (0, 3), (-3, -2), (3, -2) the triangle will be rendered. Because the emitter is always turned on, even when it is moving between points, it will illuminate the parts of the screen between the points as well. If the voltages are cycled fast enough the triangle will have a nice uniform brightness. A small code sketch of generating such a signal follows after the figure.

Grid.svg

Alternate the X- and Y-channel voltages between the three points shown to draw a triangle.
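
As a minimal sketch of what this looks like as audio data, here is one way to generate the triangle signal as a stereo WAV. An assumption in this sketch: the corner coordinates are normalised to the ±1.0 full-scale range of the audio output, so the actual voltage at the oscilloscope depends on the sound card and volume setting.

import numpy as np
from scipy.io.wavfile import write

# Triangle corners from the figure, treated as (X, Y) "voltages"
points = np.array([(0, 3), (-3, -2), (3, -2)], dtype=np.float32)

sample_rate = 44100
hold = 10  # Samples to hold each corner; the DAC's smoothing draws the lines in between

# Cycle through the corners for roughly one second of audio
cycle = np.repeat(points, hold, axis=0)
trace = np.tile(cycle, (sample_rate // len(cycle), 1))

# Normalise to the [-1, 1] full-scale range of the audio output
trace /= np.abs(trace).max()

write('triangle.wav', sample_rate, trace)  # Column 0 -> X (left), column 1 -> Y (right)

Played back in X-Y mode this cycles the beam between the three corners about 1470 times per second, which should be fast enough for a uniformly bright triangle.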

To create something more interesting to render on the oscilloscope screen I used Blender to create a simple animation of a wireframe sphere and exported it as a sequence of 64×64 8-bit PNG images. A single frame of the animation is shown below. The resolution shown here is a bit higher.

osc_animation.png

The animation target. (You'll have to pretend it's rotating. I haven't added support for animated images in this blog framework.)

To draw something on the oscilloscope the animation needs to be converted into voltages which map to the coordinates in the image. This is easily done in a stereo audio format, where for example the left audio channel corresponds to the X-channel and the right audio channel to the Y-channel, and it’s easy to hook up the oscilloscope to an audio source. The actual problem is figuring out in which order to visit all the points in the image within a single frame, so the trace doesn’t cut unnecessarily across empty parts of the image. If we naively visited each coordinate in the frame in no particular order the result would be a mess on the oscilloscope screen. This is similar to a problem from graph theory called the traveling salesman problem: we have a list of coordinates in the image which the trace needs to visit, and we want to find the order in which to visit them. If we minimize the total distance traveled, the trace will mostly track the actual parts of the image.

To create the animation for the oscilloscope I used Python. First I had to write a function which turns images into a coordinate list. It accepts an RGBA array, i.e. a 3D array whose last axis holds the brightness values for red, green and blue plus the alpha value. Since we don’t care about different brightness levels, only whether a pixel is “turned on”, I convert the array into a boolean array using a threshold: if the average of the three color channels is above the threshold the pixel is considered to be in the on state. I then isolate the pixels in the on state and return their coordinates.

import numpy as np
import imageio

def convert_RGBA_to_coords(RGBA_arr, threshold=127):
    """
    Convert an RGBA array to graph coordinates, i.e. a list of coordinates
    for the pixels that are turned ON. All pixels whose average RGB value
    is above the threshold are considered ON.
    """
    # Discard the alpha channel
    data = np.delete(RGBA_arr, -1, axis=-1)
    # Collapse the RGB values to a single brightness value by averaging
    data = np.mean(data, axis=-1)
    # Apply the threshold to mark pixels ON or OFF and return the ON coordinates
    on_coordinates = np.transpose((data > threshold).nonzero())
    return on_coordinates
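
A quick way to sanity-check the function is a tiny synthetic frame with two bright pixels. Note that the returned coordinates are (row, column) pairs:

# 4x4 RGBA test frame with two pixels above the threshold
frame = np.zeros((4, 4, 4), dtype=np.uint8)
frame[1, 2] = (255, 255, 255, 255)
frame[3, 0] = (200, 200, 200, 255)

print(convert_RGBA_to_coords(frame))
# [[1 2]
#  [3 0]]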

The next step is to solve the traveling salesman problem for the coordinates. Conveniently it’s a famous problem and implementations of solutions are easy to find. The solver I use takes a distance matrix, so we need to calculate one for the coordinates of the pixels in the “on” state. The distance matrix simply describes the distance between each pair of positions: for example, if there were 32 coordinates to visit the distance matrix would be a 32×32 matrix. This is easily done using scipy. We then call the traveling salesman problem solver with this matrix.

from scipy.spatial import distance
from tsp_solver.greedy import solve_tsp # https://github.com/dmishin/tsp-solver

def _get_distance_matrix_(positions):
    """
    Get a matrix of Euclidean distances between each pair of coordinates
    in a 2D array of coordinates.
    """
    dist_matrix = distance.cdist(positions, positions)
    return dist_matrix

def get_path(positions):
    """
    Convert a list of 2D points to an efficient path that passes through
    all points.
    """
    D = _get_distance_matrix_(positions)
    path = solve_tsp(D)
    return path

All that’s left is to iterate through each of the frames in the animation, solve the traveling salesman problem for each of them, and encode the path in a stereo audio format. There are simple-to-use WAV functions included in scipy: to save a WAV file you only need the channel data and the sampling rate. For stereo audio we call the library with a 2D array of channel data.

import os
import glob
from scipy.io.wavfile import write

if __name__ == '__main__':
    # Get a sorted list of paths to each of the frames
    animation_src_dir = '/mnt/c/tmp/'
    animation_src_paths = sorted(glob.glob(os.path.join(animation_src_dir, '*.png')))

    # Audio wave setup
    sampleRate = 44100 # Standard audio sampling rate, higher could give smoother results
    maxVal = 64 # Max coordinate value
    frameLength = 3/60 # Length of each frame in seconds (3 frames at 60 fps)
    pointRepeat = 1 # Number of consecutive samples spent on each point

    ch1, ch2 = [], [] # Store channel voltages in these lists
    # Iterate through each image and encode the paths as audio
    for im_file in animation_src_paths:
        print(f"On image {im_file}")
        # Get the image for the current frame
        im = imageio.imread(im_file)

        # Solve the traveling salesman problem for the image
        on_coordinates = convert_RGBA_to_coords(im)
        path = get_path(on_coordinates)

        # Encode the path as audio, cycling through it for the whole frame
        for t in range(int((sampleRate * frameLength) / pointRepeat)):
            if len(path) > 0:
                x, y = on_coordinates[path[t % len(path)]]
            else:
                x, y = 32, 32 # If the frame is empty, just stay at the middle of the screen
            for _ in range(pointRepeat):
                ch1.append(x / maxVal)
                ch2.append(y / maxVal)

    # Save as a WAV file (32-bit float stereo)
    write('oscilloscope_animation.wav', sampleRate, np.array([ch1, ch2], dtype=np.float32).transpose())

All that’s left is to play the audio file while the oscilloscope is hooked up to the audio output.

Slow-motion capture of the trace moving across the sphere’s outline.

The video below shows the full animation sequence. I’m slowly adjusting the oscilloscope intensity throughout the video. Because of the camera’s sampling rate the result appears more flickery than it does in person.

Because I used low-resolution images (64 by 64) the result is quite jagged, and it’s possible to see traces passing through empty parts of the image as well. The jaggedness could be addressed in multiple ways: increasing the resolution (which would also greatly increase the computational demands of the traveling salesman problem), adding some random noise to the coordinates to smooth out the image, or interpolating between coordinates (sketched below). I’m happy with the result as it is, though.
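
As a sketch of the interpolation idea, intermediate samples can be placed on the straight line between consecutive path points before they are encoded as audio. This is a hypothetical helper, not part of the script above:

def interpolate_path(coords, steps=4):
    """Insert evenly spaced samples between consecutive path points."""
    coords = np.asarray(coords, dtype=float)
    smooth = []
    for start, end in zip(coords[:-1], coords[1:]):
        for i in range(steps):
            t = i / steps
            smooth.append((1 - t) * start + t * end)
    smooth.append(coords[-1]) # Include the final point
    return np.array(smooth)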

Other algorithms than the traveling salesman problem could be used to achieve a better result. For example, vector graphics could be used for the animation instead of PNG images; they are more suitable for this implementation since they already describe and encode paths. It’s also possible to write a simple path solver which works better for cases where the graph is already mostly connected. In the case shown here there is never an actual need to move across empty parts of the image, and writing such a solver is not that difficult; a sketch follows below.
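
One possible shape for such a solver is a greedy nearest-neighbour walk: since the drawing is mostly connected, the closest unvisited pixel is almost always an adjacent one. This is a sketch of the idea, not the solver used above:

def nearest_neighbour_path(coords):
    """Greedy path: always step to the closest unvisited point."""
    coords = np.asarray(coords, dtype=float)
    unvisited = list(range(len(coords)))
    path = [unvisited.pop(0)] # Start at the first point
    while unvisited:
        dists = np.linalg.norm(coords[unvisited] - coords[path[-1]], axis=1)
        path.append(unvisited.pop(int(np.argmin(dists))))
    return path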

This has been a fun project to make use of my old electronics equipment. Watching these old fluorescent displays in action in person is always cool.

Source and files at GitHub.