video-to-oscilloscope
a set of simple tools to turn a video into 2-channel audio ready to be "displayed" on an oscilloscope in xy mode
(Yes I know it's not an AI model, I don't like github anymore and I don't want to put public projects on my private gitea server)
Requirements:
Note: There are only two Python packages you need to install with pip; if you're on a Mac, creating an env is optional. Also: it might work on Windows, but I have not tested it there.
ffmpeg 7.x or newer (use your Linux package manager, or brew on Mac)
potrace (use your Linux package manager, or brew on Mac)
vpype (use pip)
svgpathtools (use pip)
Setup:
- create a conda env
conda create -n v2o python=3.12 (you can call it 'v2o' or whatever you prefer)
- activate the env
conda activate v2o
- install the requirements
pip3 install vpype svgpathtools
Usage:
video prep:
- If you have just a single file you can skip this. If you have multiple files and don't want to use a video editor, you can quickly slam them together into one file.
put the videos into a directory somewhere and cd to it
make a list of the files
ls -1 > list.txt
use vi to put 'file ' at the start of each line
:%s/^/file /
use ffmpeg to concatenate the files into one file
ffmpeg -f concat -safe 0 -i list.txt -c copy my_video.mp4
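If you'd rather skip the ls/vi steps, the same list.txt can be generated with a few lines of Python. This is a sketch, not part of the repo; it assumes your clips are .mp4 files, and `make_concat_list` is a hypothetical helper name:

```python
from pathlib import Path

def make_concat_list(directory="."):
    # Build ffmpeg's concat list directly, skipping the vi edit.
    # Quoting each name keeps the concat demuxer happy with spaces.
    clips = sorted(Path(directory).glob("*.mp4"))
    text = "".join(f"file '{p.name}'\n" for p in clips)
    Path(directory, "list.txt").write_text(text)
    return text
```

sorted() gives a stable, alphabetical play order, which is usually what you want if the clips are numbered.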
- If you want the video to be in a certain ratio e.g. 1:1 you can quickly fix it
ffmpeg -i ~/Movies/my_orig_video.m4v -filter_complex "[0:v]crop=w='min(iw\,ih)':h='min(iw\,ih)',scale=720:720,setsar=1" my_square_video.mp4
Note: Whatever you choose to do, make sure you know the dimensions of your video. The example provided is 1280x720; you will need to edit vectorize.sh if yours is something else. Most oscilloscope tubes have a 4:3 ratio, but you can always play with the gain/voltage controls for each channel to make the image fit at the right ratio.
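If you want a ratio other than 1:1, a small helper can compute the crop values to plug into the ffmpeg command above. `crop_for_ratio` is a hypothetical function, not part of the repo:

```python
def crop_for_ratio(iw, ih, rw, rh):
    # Largest centered crop of an iw x ih frame with aspect ratio rw:rh.
    # One of the two min() terms always picks the full dimension.
    w = min(iw, ih * rw // rh)
    h = min(ih, iw * rh // rw)
    return w, h
```

For the 1280x720 example, a 1:1 crop gives 720x720 and a 4:3 crop gives 960x720; pass those as crop=w=...:h=... to ffmpeg.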
video processing:
- Run the extraction script. It uses ffmpeg's edgedetect filter to reduce the video to outlines/edges, and then breaks it up into individual .png files for further processing.
./extract.sh my_video.mp4 etc...
- Run the vectorizing script.
./vectorize.sh
It takes a while. It doesn't display anything while running.
- Convert the svg files into an audio file.
python svg_to_audio.py "frames/*_opt.svg" --sample-rate 192000 --fps 30
Use the highest sample rate you can. There are cheap $20 USB-C adapters that can do 192 kHz; Cubilux, as an example, makes one.
You can edit the script to change the output resolution. 720x720 is fine; I think going beyond 1024x1024 is pointless, given the limited sample rate you can use.
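The internals of svg_to_audio.py aren't shown here, but the XY encoding it has to perform can be sketched with Python's stdlib wave module. The input format below is an assumption: each video frame is a list of (x, y) points in [-1, 1], already sampled from the SVG paths:

```python
import struct
import wave

def frame_to_stereo(points, samples):
    # Evenly resample a frame's (x, y) point list to `samples` pairs
    return [points[i * len(points) // samples] for i in range(samples)]

def write_xy_wav(path, frames, sample_rate=192000, fps=30):
    spf = sample_rate // fps  # beam positions available per video frame
    with wave.open(path, "wb") as w:
        w.setnchannels(2)   # left channel = X, right channel = Y
        w.setsampwidth(2)   # 16-bit PCM
        w.setframerate(sample_rate)
        for pts in frames:
            for x, y in frame_to_stereo(pts, spf):
                w.writeframes(struct.pack(
                    "<hh",
                    int(max(-1.0, min(1.0, x)) * 32767),
                    int(max(-1.0, min(1.0, y)) * 32767)))
```

In XY mode the scope just plots left vs. right, so each stereo sample is one beam position; holding each frame's points for 1/fps seconds is what makes the image appear to animate.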
- Combine the wav files into a single file
./cat-wavs.sh
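cat-wavs.sh's contents aren't shown here, but wav concatenation is simple enough to sketch with the stdlib wave module; `cat_wavs` is a hypothetical stand-in for the script:

```python
import wave

def cat_wavs(inputs, output):
    # Append the audio frames of each input wav in order; all inputs
    # must share the same channel count, sample width, and sample rate.
    with wave.open(output, "wb") as out:
        for i, name in enumerate(inputs):
            with wave.open(name, "rb") as src:
                if i == 0:
                    out.setparams(src.getparams())
                out.writeframes(src.readframes(src.getnframes()))
```

Note this only works because all the per-frame wavs come from the same svg_to_audio.py run and therefore share a format.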
- Watching the result: On Mac there's a free program called "Oscilloscope" which lets you see the file the way it would look on an oscilloscope. You can use your computer's sound card, headphone output, or a USB dongle to play the audio into an oscilloscope. You can buy a 3.5mm stereo to 2x BNC breakout cable, which makes things much easier.
tips for creating better output
- choose video files with an uncluttered background
- if you use AI i2v to make your video, avoid black outlines - edgedetect will create double lines if the outlines aren't really thin, and this wastes drawing bandwidth
- gen your AI videos with a black background
- you can play with the edgedetect filter parameters used by ffmpeg in extract.sh
edgedetect=low=0.5:high=0.8
- keep it simple - even at 192 kHz the beam is moving pretty slowly compared to arcade vector hardware, where it might be in the MHz range
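To put rough numbers on that last tip:

```python
# Rough beam budget: positions the beam can visit per video frame
sample_rate = 192_000          # best case for a cheap USB audio adapter
fps = 30
print(sample_rate // fps)      # 6400 points per frame
# A vector display driven in the MHz range gets far more to work with:
print(1_000_000 // fps)        # 33333 points per frame
```

So a busy frame has to spread only a few thousand points over all its outlines, which is why uncluttered scenes look so much better.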
finally
I appreciate your feedback. I notice there's some 'ringing' and 'squiggliness' that I think needs fine-tuning, since it wastes time versus drawing straight lines. I imagine it's an artifact of how svgpathtools creates the Fourier series needed to draw the lines.