Why This Exists
Governments send ground troops into environments where sub-$500 commercial drones deliver munitions. Established countermeasures cost $30,000 to $500,000. That gap is not an engineering problem — it is a market exclusion problem that costs lives. SoulMusic is a non-lethal, non-RF acoustic disruption platform designed to close that gap, grounded in published physics, and built with proportionate, accountable use as a hard architectural constraint from the first line of code.
“If governments are going to send ground troops into areas against current U.A.V. threats — and I have the ability to do something about it — then I should.”
This reasoning is sound. Non-kinetic countermeasures to drone threats represent one of the most ethically defensible positions in modern defense technology. Ground patrols operating under sub-$500 drone munition threats need accessible protection — not solutions priced for defense contractors. An acoustic system that grounds a hostile drone before it reaches personnel, with zero casualties and zero collateral damage, is not a weapon. It is a force multiplier for human survival. The motivation is legitimate. The engineering obligation follows from it.
“Can people use this same technology against other people? Yes. But resistant countermeasures will not be cost-effective for some time. It will save more lives overall than it costs — the defense balance shifts far in favor of protection.”
This conclusion holds up. Hardening a drone against MEMS resonance disruption requires shielding, mass, and enclosure changes that directly conflict with the lightweight, cheap, attritable design that makes drone swarms dangerous — the cost asymmetry is real and durable. Marginal misuse risk is narrow and low-severity, and the counterfactual harm landscape already contains bullets, bombs, and readily available kinetic means. The technology has a favorable ethics profile, and the net-lives calculus firmly favors deployment at the consumer level.
Hostile UAS platforms are commodity hardware available for under $300. Established countermeasures start at $30,000 and scale to half a million. That asymmetry costs lives. This project documents a non-lethal, non-RF acoustic disruption platform built on published physics and completed engineering — designed from the first line with proportionate, accountable use as a hard constraint, not an afterthought.
Point Defense
Against Hostile UAS
A non-lethal, non-RF acoustic disruption platform that exploits MEMS gyroscope resonance to neutralize hostile drones — self-tuning to any sensor, saving lives without firing a single round.
Why This Matters
The drone threat is no longer theoretical. First responders, critical infrastructure, and civilians need affordable protection today — not in five years behind a military contract.
"A $300 acoustic device that causes a hostile drone to safely land itself — with zero casualties, zero collateral damage, and zero regulatory barriers — is not a weapon. It is a fire extinguisher for the modern threat landscape."
Who This Protects
Critical Infrastructure
- Power substations and grid infrastructure
- Water treatment facilities
- Communication towers
- Airports and transportation hubs
Public Safety
- Outdoor events and stadiums
- Schools and hospitals
- Emergency response perimeters
- Government buildings
Military & Defense
- Forward operating bases
- Convoy protection
- Personnel defense against FPV threats
- Border and perimeter security
Civilian Privacy
- Residential property defense
- Agricultural protection
- Private facilities and compounds
- Anti-surveillance countermeasure
Where It Stands Today
Not a concept. Not a whitepaper. A working system with measured results and a transparent engineering trail.
Software Core
167 tests passing. Adaptive convergence scan, SubharmonicLadder engine, and full sensor abstraction layer — all validated in CI.
MEMS Profiles
18 sensor profiles across InvenSense, Bosch, STMicro, and TDK families. Resonance band from 21–35 kHz. Blind-discovery mode confirmed.
Hardware Bench
Physical rig assembled. Arduino Nano + GY-521 + PAM8610 + horn tweeter. SNR measurements and convergence verification underway.
Beamforming Array
16×16 transducer phased array for directional emission, targeting a 120 m range. Vortex beam mode in simulation.
The 100× gap between this platform's cost and entry-level commercial systems is not engineering overhead — it's market exclusion. Every hospital, school, and farm that can't afford $30K is unprotected today.
The Cost of Inaction Is Measured in Lives
Every week, hostile UAS incidents increase. Every existing solution costs more than most communities can afford. This platform proves that effective defense doesn't require a defense contractor's budget — it requires understanding physics and the will to build it.
Working software. 167 tests passing. Three sensor families validated. Hardware bench testing underway. This isn't a pitch deck — it's an engineering project with a clear path to deployment.
Fundamentals
Four stages — passive detection to controlled landing. No RF, no projectiles, no hacking.
Listen
Microphone array passively detects propeller blade-pass frequencies and harmonics — identifying drone type by acoustic signature alone, up to 200m away. Doppler shift reveals approach velocity in real time.
Identify & Probe
Harmonic fingerprinting cross-references known platforms. A low-power probe chirp reflects off the airframe — shell material, thickness, and structural resonances are characterized in under 20ms. The system adapts its attack parameters in real time.
Converge & Disrupt
Adaptive convergence scan — no assumptions, no fixed grid. The system emits, reads gyro feedback, narrows the search window, and converges on the actual resonant frequency of the specific sensor. Works blind on unknown drone models. Operator authorizes engagement.
Neutralize
The drone's autopilot triggers its own fail-safe — emergency landing or return-to-home. The drone lands intact and undamaged. Zero collateral. Zero casualties. Full engagement log recorded.
Subharmonic Resonance Cascade
A graduated frequency ladder that defeats atmospheric absorption — the key breakthrough that makes long-range acoustic defense viable.
Live visualization — multi-tone subharmonic waveform driving parametric instability in MEMS silicon
Graduated Engagement Zones
Frequency-stepped subharmonics exploit atmospheric physics — lower frequencies travel farther, then mix nonlinearly at the MEMS target to produce the resonant frequency without it ever traversing the atmosphere. (The table below uses an example f₀ of 25 kHz.)
| Zone | Range | Frequencies | Absorption | Mode | Mechanism |
|---|---|---|---|---|---|
| Priming | 200 – 120m | f₀/5 = 5.0 kHz | 0.02 dB/m | CW (Continuous) | Seeds parametric instability in the MEMS proof mass |
| Charging | 120 – 60m | f₀/5 + f₀/2 = 5.0 + 12.5 kHz | 0.06 – 0.15 dB/m | CW (Dual-tone) | Widens Mathieu instability boundary via multi-frequency drive |
| Disruption | 60 – 25m | f₀/5 + f₀/3 + f₀/2 = 5.0 + 8.3 + 12.5 kHz | 0.02 – 0.15 dB/m | Burst (3-tone) | Nonlinear Duffing intermodulation generates f₀ at the target |
| Kill | 25 – 0m | f₀/5 + f₀/3 + f₀/2 + f₀ = 5.0 + 8.3 + 12.5 + 25.0 kHz | Full stack | Burst (4-tone + direct) | Cascaded energy pumping + direct resonant excitation |
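The ladder above can be sketched as a small lookup. This is a hypothetical helper (names and structure are illustrative, not the project's actual API) that derives each zone's tone stack from a target resonance f₀:

```python
# Illustrative sketch of the zone ladder from the table above.
# Zone names and the helper itself are assumptions, not the project's API.
SUBHARMONIC_ZONES = {
    "priming":    [1/5],                 # single low-frequency carrier
    "charging":   [1/5, 1/2],            # add the f/2 subharmonic
    "disruption": [1/5, 1/3, 1/2],       # add f/3, switch to burst mode
    "kill":       [1/5, 1/3, 1/2, 1.0],  # full stack plus direct f0
}

def zone_tones(f0_hz, zone):
    """Tone stack (Hz) emitted in a given engagement zone for resonance f0."""
    return [round(f0_hz * r, 1) for r in SUBHARMONIC_ZONES[zone]]
```

For the table's example f₀ of 25 kHz, `zone_tones(25000, "kill")` yields the 5.0 / 8.3 / 12.5 / 25.0 kHz stack shown in the Kill row.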
The Economics of Saving Lives
Current counter-UAS systems cost tens of thousands to hundreds of thousands of dollars. Most require RF licenses, military contracts, or kinetic interceptors. We offer a different path.
RF Jammer Systems
- Requires federal authorization for use
- Interferes with all wireless in area
- Can disrupt manned aircraft navigation
- Heavy, high-power, low mobility
Kinetic Interceptors
- Falling debris creates casualty risk
- Single-use ammunition / drones
- Requires trained military operators
- Restricted to military deployment
Net / Capture Systems
- Effective range: 10–30m only
- Single-target, single-use nets
- Requires interceptor drone or launcher
- Cannot engage fast-moving targets
Acoustic Resonance Platform
- No FCC license — acoustic, not RF
- No kinetic risk — sound waves only
- Reusable indefinitely — no ammunition
- Portable — laptop-sized ground station
- Subharmonic range: 120m+
- Software-upgradeable — new profiles via update + blind discovery
Platform Specifications
Research prototype parameters validated against published academic benchmarks. Full test suite passing.
Operating Band
Subharmonic f₀/5 through direct f₀
Source SPL
At 1m — below OSHA limits at 3m+
Beam Width
16×16 phased array, +24 dB gain
Response Time
Passive detection → beam on target
Effective Range
With subharmonic cascade + 60cm dish
MEMS Coverage
3 families — InvenSense · Bosch · STMicro (21–27 kHz)
Power Draw
Battery-portable field operation
Prototype Cost
Ground station — horn cluster + dish
Adaptive Convergence
No fixed frequency tables. No assumptions. The system discovers the exact resonant frequency of any MEMS gyroscope — even one it has never encountered before.
Coarse Sweep
Broad 1 kHz steps across the full 18–35 kHz band. Each step emits, reads the gyroscope response, and scores the displacement. Top candidates pass forward.
Narrow Focus
250 Hz steps around each candidate peak. The search window tightens. False positives from harmonics and environmental noise are eliminated.
Fine Convergence
100 Hz precision steps. The algorithm converges on the exact frequency that maximizes gyro displacement — the physical resonance of the silicon proof mass.
Sustained Verification
3-second sustained tone at the discovered frequency confirms repeatable destabilization. The result pipes forward to SubharmonicLadder engagement automatically.
Blind Discovery Mode: When operating against a completely unknown drone, the system runs a full-band sweep with zero prior knowledge. The gyroscope itself becomes the feedback signal — if the sensor resonates, the algorithm finds it. Hinted mode narrows the initial window using known sensor profiles for faster convergence. Both modes produce identical precision: ±50 Hz of the true mechanical resonance.
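The narrowing loop described above can be sketched in a few lines. This minimal illustration assumes a `measure(freq_hz)` callable that emits a tone and returns peak gyro displacement; the real scanner also keeps multiple candidate peaks, adds margins, and runs a sustained verification pass:

```python
# Minimal sketch of the coarse-to-fine convergence idea, not the project's code.
# `measure` stands in for "emit tone, read gyro, return peak displacement".
def adaptive_scan(measure, lo=18_000.0, hi=35_000.0, steps=(1000.0, 250.0, 100.0)):
    best = lo
    for step in steps:
        n = int((hi - lo) / step) + 1
        freqs = [lo + i * step for i in range(n)]
        best = max(freqs, key=measure)      # frequency with strongest response
        lo, hi = best - step, best + step   # tighten window around the peak
    return best

# Synthetic stand-in for the gyro: a resonance peak at 26,731 Hz.
true_resonance = 26_731.0
response = lambda f: 1.0 / (1.0 + abs(f - true_resonance))
# adaptive_scan(response) converges to within the finest step of the true peak.
```

In blind-discovery mode the same loop runs over the full band; in hinted mode `lo`/`hi` start from a known sensor profile.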
Development Status
Working software. Validated architecture. Hardware bench testing phase.
Core Modules
Resonance, Emitter, Beam, Probe, Detection, Host, Bridge, Telemetry
Test Suite
All passing — unit, integration, SubharmonicLadder, Doppler pre-comp
Sensor Families
InvenSense · Bosch · STMicroelectronics
Bench Validation
Recommended test rig — horn tweeter + dual-sensor + Arduino
Engagement Modes
Adaptive Scan · Blind Discovery · Hinted Convergence · Full Suite
Shell Probe
Real-time airframe characterization — material, thickness, resonance
NEXT MILESTONES
Phased Array
Delay Channels
How per-element timing delays create a directional acoustic beam — visualized in 3D. Two straight-channel examples show coherent frequency delivery and noise escape probabilities, then the vortex spiral shows helical phase for tighter focus. All three compensate for Doppler shift.
1 — Coherent Channel Beam
Each transducer fires with a calculated delay so the wavefronts arrive in-phase at the target. As the target moves (Doppler), the delays shift in real-time to keep the beam locked — the purple core stays tight while out-of-phase energy dissipates.
2 — Channel Containment & Noise Escape
Same beam channel — but now visualizing the probability of energy escaping the main lobe. The tight core carries the signal; stochastic noise particles scatter at angles proportional to the sidelobe pattern. More elements = tighter channel = less escape.
3 — Vortex Spiral Beam
Adding a small extra angular delay per element creates a helical phase front — acoustic rifling. The spiral wavefront carries orbital angular momentum that resists spreading, producing a tighter beam at longer range. The Doppler pre-compensation rotates with the helix.
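The per-element delay idea behind all three visualizations can be sketched for a single 16-element row. This is an illustrative calculation, not the project's beamforming code; the element pitch and speed of sound are assumed values:

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in air, m/s (assumed room temperature)

def row_delays(pitch_m, n=16, steer_deg=0.0):
    """Per-element firing delays (seconds) for one row of the array so that
    wavefronts arrive in phase along the steering direction.
    Illustrative sketch; a vortex mode would add an extra angular phase term."""
    x = np.arange(n) * pitch_m                    # element positions along the row
    d = x * np.sin(np.radians(steer_deg)) / C_SOUND  # path-difference delays
    return d - d.min()                            # shift so delays start at zero
```

At 0° steering all delays are zero (broadside beam); steering off-axis adds a linear delay ramp across the row, which Doppler pre-compensation would update as the target moves.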
Build Your First
Resonance Test Rig
Everything you need to validate acoustic MEMS disruption across all three sensor families. No soldering. One breadboard. About an hour.
What You're Building
A bench-top test rig that plays ultrasonic frequencies at MEMS gyroscope chips and measures their reaction. If the gyro readings jump when you emit — the physics works.
The analogy: Imagine you have a tuning fork and a crystal wine glass. Tap the fork, hold it near the glass, and the glass vibrates — that's resonance. Now imagine the glass is a microscopic silicon structure inside a drone's navigation chip, and the tuning fork is a speaker playing precisely the right ultrasonic frequency. This rig is your tuning fork + wine glass setup. The glass (MEMS sensor) is 1 cm away, the fork (tweeter) is connected to your computer, and the Arduino measures how much the glass shakes.
Signal Flow
bench_test.py + sounddevice → 50W Class-D amplifier → 2–40 kHz piezo tweeter → MEMS sensor under test (27 kHz / 23 kHz / 21 kHz resonance) → Arduino reads sensor → serial → bench_test.py reads gyro
The PC sends sound out and reads gyro data back — a closed feedback loop
Shopping List
Every component, with exact search terms. All available on Amazon Prime.
MEMS Sensor #1
InvenSense family. 27 kHz resonance. Search: "GY-521 MPU6050"
MEMS Sensor #2
Bosch family. 23 kHz resonance. SparkFun SEN-22398
MEMS Sensor #3
STMicro family. 21 kHz resonance. Adafruit #4438
Microcontroller
CH340 USB. Reads gyro over I²C. Search: "Arduino Nano V3 CH340"
Amplifier
Powers the tweeter with 3.3× headroom. Search: "TPA3116D2 amplifier board"
Power Supply
Powers the TPA3116D2. Search: "12V 3A DC power adapter barrel"
Transducer ×2
2–40 kHz rated. Buy 2 from different sellers for redundancy. Search: "piezo super tweeter"
DAC (Optional)
Only if your PC audio doesn't support 96 kHz. Search: "PCM5102A I2S DAC"
Breadboard ×2
Full-size. Everything plugs in here. Search: "breadboard 830"
Jumper Wires
Assorted lengths. Search: "breadboard jumper wire kit"
Isolation Pads
Prevents vibration through the table. Rubber feet or packing foam work.
Why Three Sensor Families?
Each manufacturer designs the gyroscope differently. Three different architectures, three different frequencies — if all three respond, the physics is universal.
You only plug in one sensor at a time — same breadboard wires, same Arduino code. Swap the chip, run the test.
Step-by-Step Assembly
Follow in order. Each step builds on the last and includes a verification checkpoint before moving on.
Set Up the Breadboard Power Rails
Plug the Arduino Nano into the center of your breadboard — it straddles the center gutter with pins on both sides. Connect a USB cable from the Nano to your PC.
Run a jumper wire from the Nano's 5V pin to the red (+) power rail
on the breadboard edge. Run another from the Nano's GND pin to the
blue (−) ground rail.
Create the I²C Data Bus
I²C is a 2-wire communication protocol — two wires that all sensors share.
Run a jumper from the Nano's A4 pin (this is SDA — data)
to an empty row on the breadboard. Run another from A5
(SCL — clock) to the next empty row.
Label these rows mentally (or with tape): "SDA row" and "SCL row". Every sensor will plug into these same two rows.
Plug In Your First Sensor (MPU-6050)
The GY-521 module has pins printed on its board. Insert it into the breadboard and connect:
VCC → red (+) rail (5V)
GND → blue (−) rail
SDA → your SDA row
SCL → your SCL row
AD0 → blue (−) rail (sets I²C address to 0x68)
That's 5 wires. The sensor is now powered and talking to the Arduino.
Note: the BMI270 and LSM6DSO are 3.3V parts. When you swap them in later, connect their VCC to the Nano's 3V3 pin instead of the red rail.
Upload the Arduino Firmware
Open Arduino IDE on your PC. Select board: Arduino Nano,
processor: ATmega328P (Old Bootloader) for clones, and the correct COM port.
Paste the gyro-reader sketch (it streams the three gyro axes as comma-separated values over serial at 115200 baud) and click Upload:
Check Your PC Audio Output
Before buying a DAC, check whether your PC's built-in audio supports 96 kHz. In Python:
python -c "import sounddevice; print(sounddevice.query_devices())"
Look for your output device and check its max sample rate. If it shows 96000 or higher — you're set, skip the PCM5102A DAC. If it maxes out at 48000, you'll need the DAC.
Also check Windows: Settings → Sound → Output → Device properties → Additional device properties → Advanced tab. Set the default format to 24-bit, 96000 Hz.
Wire the Amplifier + Speaker
Connect a 3.5mm audio cable from your PC's headphone jack (or UMC404HD output if you have one) to the TPA3116D2 amplifier input. The cable splits into Tip (signal) and Sleeve (ground).
From the TPA3116D2's speaker output terminals, run two wires to the horn tweeter's + and − terminals. Power the TPA3116D2 from the 12V 3A DC adapter (barrel jack on the amp board).
Test with an Audible Tone First
Before going ultrasonic, verify the audio chain works with a frequency you can hear:
You should hear a clear 1 kHz tone from the tweeter when you slowly increase volume.
If you hear nothing, check your cable connections and that the correct output device is
selected in sd.default.device.
Then change the frequency from 1000 to 25000 and play again. You should hear nothing (it's above human hearing), but the amp should remain stable (no pops, no weird noises). That confirms ultrasonic output.
Position the Test Jig
Place the horn tweeter on a foam pad, horn pointing up (or sideways — whichever is stable). Place the MEMS sensor breakout on another foam pad, 1–3 cm directly in front of the horn mouth.
Run the Adaptive Scan
This is the moment of truth. With the sensor positioned in front of the tweeter and the Arduino streaming gyro data:
The scan emits frequencies across the band, reads gyro response at each step, and converges on the peak. For the MPU-6050, it should converge near 27 kHz.
Swap Sensors and Repeat
Unplug the GY-521 from the breadboard. Plug in the BMI270 breakout to the
same SDA/SCL rows. Important: connect its VCC to the Nano's 3V3
pin (not the 5V rail) — it's a 3.3V part.
Run the scan again. The BMI270 should converge near 23 kHz. Then swap to the LSM6DSO (also 3.3V) — should converge near 21 kHz.
Three sensors, three families, three frequencies. If all three show clear resonance peaks at their documented frequencies — you've just validated that acoustic MEMS disruption works across the entire flight controller sensor market. That's the core hypothesis proven.
The three sensors use different I²C register maps. Use the bench_test.py --gyro-port configuration to select the correct sensor protocol, or update the sketch's register addresses per the sensor datasheet.
Complete Wire Reference
Every single wire in the build, numbered. Cross each off as you connect it.
Arduino Nano → Breadboard (Power + Data)
| # | From | To | Color |
|---|---|---|---|
| 1 | Nano 5V | Red (+) power rail | Red |
| 2 | Nano GND | Blue (−) ground rail | Black |
| 3 | Nano A4 (SDA) | SDA bus row | Blue |
| 4 | Nano A5 (SCL) | SCL bus row | Yellow |
GY-521 (MPU-6050) — Swap-in Sensor #1
| # | From | To | Note |
|---|---|---|---|
| 5 | GY-521 VCC | Red (+) rail (5V OK) | Only sensor that tolerates 5V |
| 6 | GY-521 GND | Blue (−) rail | |
| 7 | GY-521 SDA | SDA bus row | |
| 8 | GY-521 SCL | SCL bus row | |
| 9 | GY-521 AD0 | Blue (−) rail | Sets address to 0x68 |
Audio Chain (TPA3116D2 + Horn Tweeter)
| # | From | To | Note |
|---|---|---|---|
| 10 | PC 3.5mm jack (Tip) | TPA3116D2 L input (+) | Via 3.5mm cable — center wire |
| 11 | PC 3.5mm jack (Sleeve) | TPA3116D2 L input (−) | Via 3.5mm cable — shield wire |
| 12 | 12V DC adapter | TPA3116D2 12V / GND | Barrel jack or screw terminal |
| 13 | TPA3116D2 GND | Breadboard blue (−) rail | Shared ground with Arduino |
| 14 | TPA3116D2 L out (+) | Horn tweeter + | Speaker wire |
| 15 | TPA3116D2 L out (−) | Horn tweeter − | Speaker wire |
Verification Checklist
Test each sub-system before combining them. Click items to check them off as you go.
- sounddevice.query_devices() shows output device with ≥96 kHz
- bench_test.py --scan runs without errors
Desk Layout
A suggested arrangement. Keep the sensor isolated from the speaker — foam pads prevent vibration coupling through the desk surface.
Troubleshooting
Common issues and how to fix them. If you're stuck, work backwards from the last thing that worked.
Safety Notes
This is a low-power bench rig, but good habits start now.
🎧 Hearing
The horn tweeter at 15W produces real SPL. Ultrasonic frequencies (above 20 kHz) are inaudible but can cause headaches at extreme levels. Wear foam earplugs when the amp is on. You're working at 1-3 cm, not 120m.
⚡ Voltage
Everything in this build is 5V or 3.3V — no dangerous voltages. But still don't short the power rails — it can fry your Arduino or sensor.
🧲 Static
Touch a grounded metal surface before handling the MEMS sensor breakout boards. They're sensitive to static discharge. This is the cheapest possible mistake to prevent.
☕ Liquids
No coffee on the test bench. Breadboards + bare PCBs + powered circuits = bad combination with any liquid.
What's Next?
After Build 2 confirms the physics, Build 4 adds everything else.
Build 2 — ~$132
"Does acoustic resonance disrupt MEMS gyroscopes across all 3 families?"
- ✅ 3 sensor families
- ✅ Adaptive scan convergence
- ✅ SubharmonicLadder zone walk
- ✅ Single vs. multi-tone comparison
- ❌ No beamforming validation
- ❌ No acoustic feedback mic
- ❌ No shell characterization
Build 4 — ~$260
"Can we steer a beam and characterize shell armor?"
- ✅ Everything in Build 2
- ✅ 4-element phased array
- ✅ Beam steering validation
- ✅ INMP441 ultrasonic mic
- ✅ Shell characterization (probe.py)
- ✅ Behringer UMC404HD 4-ch DAC
- ✅ 14/16 software modules validated
Build 2 reuses in Build 4 — the sensors, Arduino, tweeter, amp, and breadboard all carry over. You're not throwing anything away.
15 Wires. 3 Sensor Families. One Answer.
When the adaptive scan converges and the gyro readings spike — that moment when silicon resonates because you told it to — that's the moment everything becomes real. Go build it.
Run the Tests.
Read the Numbers.
Prove the Physics.
Your hardware is assembled. Now install the software, configure the audio chain, run the adaptive scan, and interpret what the numbers mean. This guide covers everything from first boot to validated results.
Prerequisites
What you need installed and ready before running any tests.
🐍 Python 3.10+
Download from python.org. Check "Add to PATH" during install.
Verify: open a terminal and run python --version.
📟 Arduino IDE
Download from arduino.cc/en/software. Used to upload firmware to your Nano. Verify: plug in Nano → IDE shows a COM port.
🔧 Assembled Rig
Build 2 fully wired per the Build Guide. Arduino Nano + sensor + amp + tweeter on breadboard. USB connected to PC.
🎵 96 kHz Audio Output
PC soundcard or external DAC configured for 96 kHz sample rate. Needed because we're generating frequencies up to 35 kHz — Nyquist requires at least 70 kHz sample rate.
Software Installation
Install Python dependencies, configure audio, and verify the serial connection — all in one terminal.
Install Python Packages
Open a terminal in your project directory and run:
numpy generates waveforms. sounddevice plays them through your audio card at 96 kHz. pyserial reads the Arduino gyro data. matplotlib plots results.
python -c "import sounddevice; print(sounddevice.query_devices())"
— you should see a device list with your audio output.
Configure Audio to 96 kHz
The waveforms we generate go up to 35 kHz. At the default 48 kHz sample rate, anything above 24 kHz is silently clipped. You need 96 kHz.
PipeWire (Ubuntu 22.04+ / Fedora 34+):
PulseAudio (older distros):
ALSA only (no PulseAudio/PipeWire):
python3 -c "import sounddevice as sd; print(sd.query_devices())"
— look for your device showing 96000 in the sample rate column.
Find Your Arduino Serial Port
With the Nano plugged in via USB:
Windows: look for "CH340" or "USB-SERIAL" in Device Manager — that's your Nano. Note the COM number (e.g., COM3). You'll pass this to bench_test.py.
Linux: look for /dev/ttyUSB0 (CH340 clones) or /dev/ttyACM0 (genuine Nano). Pass this path to bench_test.py.
Linux permissions: if opening the port fails, add yourself to the dialout group and re-log in:
sudo usermod -aG dialout $USER. Takes effect on next login (or run
newgrp dialout to apply immediately in the current shell).
Set the Audio Output Device
Tell sounddevice which output to use:
sd.query_devices() each session. bench_test.py has a --audio-device
flag to set this at launch.
Upload Arduino Firmware
If you haven't already uploaded the gyro reader sketch from the
Build Guide, do it now. Open Arduino IDE, select
Arduino Nano + ATmega328P (Old Bootloader) for clones,
paste the sketch, and Upload.
Open Serial Monitor at 115200 baud. You should see three comma-separated numbers streaming. If yes — you're ready.
Windows: pass --gyro-port COM3 (your port number) to bench_test.py.
Linux: pass --gyro-port /dev/ttyUSB0 (your device path) to bench_test.py.
The Test Suite
bench_test.py runs an interactive menu with 8 tests, from baseline noise measurement to the full automated suite. Here's what each test does.
Baseline Noise Floor
Records 5 seconds of gyro data with no sound playing. Establishes how much the sensor drifts on its own — the noise floor. Every other test compares its results against this number.
Broadband Sweep
Plays rapid 200ms chirps tiling the 18–35 kHz band. 2s silence → 3s attack → 2s recovery. Shows whether the sensor responds to any frequency in the band.
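A sketch of how such tiling chirps could be generated with numpy (the 1 kHz tile width, amplitude, and function names are assumptions, not the project's code):

```python
import numpy as np

def chirp_tile(f_start, f_stop, dur_s=0.2, sr=96000):
    """One 200 ms linear chirp covering [f_start, f_stop]. Illustrative sketch."""
    t = np.arange(int(dur_s * sr)) / sr
    # Linear-chirp phase: instantaneous frequency ramps f_start -> f_stop.
    phase = 2 * np.pi * (f_start * t + (f_stop - f_start) * t**2 / (2 * dur_s))
    return (0.2 * np.sin(phase)).astype(np.float32)

def band_tiles(lo=18000, hi=35000, tile_hz=1000):
    """Chirp tiles covering the full scan band, played back-to-back."""
    return [chirp_tile(f, min(f + tile_hz, hi)) for f in np.arange(lo, hi, tile_hz)]
```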
Targeted Burst
Fires impulse bursts at the sensor's known resonance frequency. Same 2-3-2 timing. Tests whether a single precise frequency causes more displacement than a sweep.
Adaptive Convergence Scan
The most important test. Starts with coarse 1 kHz steps across the full band, measures gyro response at each step, keeps the top peaks, narrows the search window, repeats with finer steps down to a 5 Hz super-fine pass for sub-integer resonance precision. True MEMS resonance is determined by physical geometry and manufacturing tolerances — almost certainly not a round number.
SubharmonicLadder Walk
Walks the graduated engagement zones: Priming (f/5) → Charging (f/5+f/2) → Disruption (+f/3 burst) → Kill (full stack, 100Hz pulse). 3 seconds per zone. Uses the frequency discovered in test 4.
Single vs. Multi-Tone
A/B comparison: 3 seconds of pure resonance burst vs 3 seconds of the full SubharmonicLadder kill-zone stack. Answers: does the multi-frequency approach cause more disruption than the fundamental alone?
Plot Last Session
Re-opens matplotlib plots from the most recent test run. Waveform time+FFT, gyro magnitude timeline, and convergence overlay.
Run Everything
Automated full sequence: Baseline → Adaptive Scan → Targeted Burst →
Ladder Walk → Single vs Multi. Saves all results to bench_results/.
Running Your First Test
The exact commands, in order, to go from zero to first result.
Launch bench_test.py
Open a terminal in the SoulMusic directory. Run:
Replace the port with your actual serial device. The --sensor flag
tells the system which MEMS profile to load (published resonance frequency, scale factors).
For your first test, start with the GY-521 (MPU-6050).
Available sensor names:
Select Test 1: Baseline
The interactive menu appears. Type 1 and press Enter.
This records 5 seconds of gyro data with no sound. The result gives you your noise floor — the standard deviation of the sensor when nothing is happening. Typically 0.5–2.0 °/s for the MPU-6050.
Select Test 4: Adaptive Convergence Scan
This is the core test. The system will automatically:
- Start a coarse sweep with ~1 kHz steps across the sensor's expected band
- Play each frequency for 800ms, then measure gyro response
- Keep the top 5 frequencies by peak gyro displacement
- Narrow the scanning window to just those peaks ± margin
- Re-scan with ~250 Hz steps (medium pass)
- Narrow again, re-scan with ~100 Hz steps (fine pass)
- Super-fine pass — 5 Hz steps with 2s tones for ±2.5 Hz precision
- Verify the discovered peak with a 3-second sustained emission
Anatomy of a Measurement
What happens inside every single frequency step during the adaptive scan.
Generate Waveform ~1 ms
resonance.generate_burst(freq, duration_ms) creates a numpy array — a burst at the
target frequency with exponential attack envelope. Computed once, played once.
Play + Record 800–1400 ms
sounddevice.play(waveform, samplerate=96000) sends the burst to your amp+tweeter.
Simultaneously, the GyroReader thread is collecting serial data from the Arduino
into a timestamped buffer.
Settle Period 300–500 ms
Silence after the tone. The sensor's proof mass is still ringing from the acoustic excitation. This is where we capture the actual response — the gyro readings during and shortly after the tone.
Extract Metrics ~1 ms
From the collected gyro samples, we compute:
gyro_mag = sqrt(gx² + gy² + gz²) for each sample, then take the
mean, max, and standard deviation
of the magnitude over the attack window.
Return Result
A dict: {freq_hz, gyro_mag_mean, gyro_mag_max, gyro_mag_std, samples}.
The gyro_mag_max is the primary ranking metric for peak selection.
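The extraction step can be sketched as follows; this is an illustrative reimplementation of the formulas in the text, not verified against bench_test.py itself:

```python
import numpy as np

def extract_metrics(freq_hz, gx, gy, gz):
    """Collapse per-axis gyro samples into the ranking metrics described above.
    Field names follow the text; the helper itself is illustrative."""
    mag = np.sqrt(np.asarray(gx)**2 + np.asarray(gy)**2 + np.asarray(gz)**2)
    return {
        "freq_hz": freq_hz,
        "gyro_mag_mean": float(mag.mean()),
        "gyro_mag_max": float(mag.max()),   # primary peak-ranking metric
        "gyro_mag_std": float(mag.std()),
        "samples": len(mag),
    }
```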
Adaptive Convergence Algorithm
The 4-pass narrowing strategy that finds the exact resonance frequency to ±2.5 Hz without prior knowledge.
Each pass increases tone duration (longer exposure = stronger response) while decreasing step size (higher precision). The super-fine 4th pass narrows from ±100 Hz to ±2.5 Hz — finding the true resonance that manufacturing tolerances placed at an irrational frequency, not a round datasheet number.
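The schedule might look like this sketch. The coarse 800 ms tone and the super-fine 5 Hz / 2 s pass come from the text; the middle-pass durations are assumptions:

```python
# Illustrative 4-pass schedule: step size shrinks while tone duration grows.
# Middle-pass durations are assumed values, not taken from the project.
PASSES = [
    # (step_hz, tone_ms)
    (1000, 800),    # coarse
    (250, 1000),    # medium (duration assumed)
    (100, 1400),    # fine (duration assumed)
    (5, 2000),      # super-fine: 5 Hz steps, 2 s tones
]

def final_precision_hz(passes=PASSES):
    """Worst-case distance from the true peak after the last pass: half a step."""
    return passes[-1][0] / 2
```

Half of the final 5 Hz step gives the ±2.5 Hz precision quoted above.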
Key Calculations
The numbers you're looking for, what they mean, and what counts as a successful result.
📏 Gyro Magnitude
The combined rotational energy across all three axes. Collapses 3D gyro data into a single metric that represents total displacement.
Units: degrees per second (°/s)
Baseline (no sound): ~0.5–2.0 °/s
At resonance: 5–100+ °/s depending on SPL and distance
📊 Signal-to-Noise Ratio
How much louder the resonance response is compared to the noise floor. The single most important number — tells you if you're seeing real physics or random noise.
SNR < 2× — probably noise
SNR 2–5× — possible signal, increase power
SNR 5×+ — confirmed resonance
SNR 10×+ — strong disruption
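The bands above as a small helper (thresholds copied from the list; the function name is illustrative):

```python
# SNR thresholds copied from the bands listed above; helper name is illustrative.
def classify_snr(peak_mag, noise_floor):
    """Ratio of resonance peak to baseline noise floor, plus its verdict."""
    snr = peak_mag / noise_floor
    if snr < 2:
        return snr, "probably noise"
    if snr < 5:
        return snr, "possible signal, increase power"
    if snr < 10:
        return snr, "confirmed resonance"
    return snr, "strong disruption"
```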
🎯 Confidence Score
Computed during the verification phase. Compares the sustained peak against the 25th percentile floor of all measurements, normalized to 0–1.
conf < 0.3 — low, re-run with more power
conf 0.3–0.6 — moderate, result is plausible
conf 0.6+ — high confidence — publishable result
📡 Frequency Drift
How far the discovered resonance is from the published datasheet value. Validates that the sensor matches spec and the scan found the right peak.
Expected: ±200 Hz is normal (manufacturing tolerance)
Suspicious: >500 Hz drift — check if you found a harmonic
Good sign: <100 Hz means your scan precision is excellent
🌊 Nyquist Requirement
Why you need 96 kHz sample rate. The Nyquist theorem requires sampling at 2× the highest frequency you want to produce.
At 48 kHz: max output = 24 kHz — too low for the MPU-6050 (27 kHz) and only marginal for the BMI270 (23 kHz)
At 96 kHz: max output = 48 kHz — covers all sensors with margin
Our highest target is 27 kHz (MPU-6050). A 96 kHz sample rate is roughly 3.5× that target — comfortably above the 2× Nyquist minimum.
🎶 Subharmonic Ratios
The frequency fractions used in the SubharmonicLadder graduated engagement zones. Each zone stacks more harmonics for increasing disruption.
For MPU-6050 (f = 27 kHz):
f/5 = 5400 Hz — atmospheric carrier (long range)
f/2 = 13500 Hz — second subharmonic
f/3 = 9000 Hz — third subharmonic
f = 27000 Hz — direct resonance (short range)
SubharmonicLadder Engagement Zones
The graduated approach tested in Test 5. Each zone adds more frequency components and increases pulse rate as range decreases.
PRIMING
Continuous low-frequency carrier. Lowest atmospheric absorption — travels the farthest.
= 5400 Hz
CHARGING
Adds the f/2 subharmonic. Still continuous mode. Intermodulation products begin stressing the sensor.
5400 + 13500 Hz
DISRUPTION
Switches to burst mode (50 Hz pulse rate). Impulse shock content excites wider bandwidth than continuous.
burst @ 50 Hz
KILL
Full stack: all subharmonics + direct fundamental. 100 Hz pulse for maximum shock-per-cycle. Highest SPL at range.
burst @ 100 Hz
On the bench, all zones are tested at 1–3 cm. In the field, range determines which zone is active. The bench test validates that zone stacking increases gyro displacement compared to single-frequency emission.
What to Expect
Realistic expectations for each sensor at bench distance (1–3 cm, 15W PAM8610, horn tweeter).
MPU-6050
Published: 27 kHz
Expected drift: ±100 Hz
Expected SNR: 10–20×
Notes: Most studied. Strongest response. Your best first result.
BMI270
Published: 23 kHz
Expected drift: ±200 Hz
Expected SNR: 5–15×
Notes: Bosch comb-drive. May need closer distance. VCC → 3V3!
LSM6DSO
Published: 21 kHz
Expected drift: ±200 Hz
Expected SNR: 5–15×
Notes: STMicro ring gyro. Lowest frequency — closest to audible range. VCC → 3V3!
✅ A Successful Test Looks Like
- Baseline noise: ~1 °/s magnitude
- Adaptive scan converges to within ±100 Hz of published
- Peak gyro magnitude: 10+ °/s at resonance
- SNR ≥ 5× above noise floor
- Confidence score ≥ 0.6
- Ladder walk shows increasing response per zone
- Multi-tone beats single-tone in magnitude
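The two headline criteria — baseline near 1 °/s and SNR ≥ 5× — reduce to a few lines. A hypothetical check, not the scoring code in bench_test.py:

```python
from statistics import mean

def snr_check(baseline_dps, peak_dps, snr_min=5.0):
    """SNR = mean |peak gyro| / mean |baseline noise|; success at ≥ 5×."""
    noise = mean(abs(x) for x in baseline_dps)
    peak = mean(abs(x) for x in peak_dps)
    snr = peak / noise
    return snr, snr >= snr_min

snr, ok = snr_check([1.0] * 100, [12.0] * 100)  # ~1 °/s floor, 12 °/s at resonance
print(round(snr, 1), ok)  # 12.0 True
```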
❌ A Failed Test Looks Like
- No frequency shows >2× above baseline
- Peak response is scattered (no single dominant frequency)
- Confidence score < 0.3
- Discovered frequency >1 kHz from published
- Response doesn't change with distance (= table vibration)
- Response doesn't disappear when foam isolation is added
⚠️ If this happens: move sensor closer, increase volume, verify foam isolation, check wiring, try a different sensor.
Understanding the Output
bench_test.py saves everything to the bench_results/ directory. Here's what each file contains.
Plot Outputs (matplotlib)
Sensor Swap Procedure
How to change between the three sensor families. Same wires, different chip.
Stop Any Running Test
Press q in bench_test.py or Ctrl+C. Close the Serial Monitor if open.
Do not unplug the sensor while the Arduino is reading.
Unplug the Current Sensor
Remove the 5 jumper wires connecting the sensor breakout to the breadboard. Set the sensor aside — don't lose it.
Insert the New Sensor
Plug the new breakout board into different breadboard rows. Connect:
- SDA → same SDA bus row
- SCL → same SCL bus row
- GND → blue rail
- VCC:
  - MPU-6050 → red rail (5V) — this sensor tolerates 5V
  - BMI270 → Nano's 3V3 pin — 3.3V only!
  - LSM6DSO → Nano's 3V3 pin — 3.3V only!
Update the Arduino Sketch (if needed)
Each sensor has a different I²C address and register map. The Arduino sketch needs to match the sensor you're using:

| SENSOR | I²C ADDRESS | GYRO DATA REGISTER |
|---|---|---|
| MPU-6050 | 0x68 | 0x43 |
| BMI270 | 0x68 | 0x0C |
| LSM6DSO | 0x6A | 0x22 |

Relaunch bench_test.py with the New Sensor
Run the full suite (test 8) for each sensor. Results save to separate timestamped
files in bench_results/ — nothing gets overwritten.
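The address/register values from the swap step can live in one lookup so the host side always knows which chip it is talking to. A sketch — `SENSOR_REGMAP` is a hypothetical structure, not an existing module:

```python
# I²C address and gyro data start register per sensor family.
SENSOR_REGMAP = {
    "MPU-6050": {"i2c_addr": 0x68, "gyro_reg": 0x43},
    "BMI270":   {"i2c_addr": 0x68, "gyro_reg": 0x0C},
    "LSM6DSO":  {"i2c_addr": 0x6A, "gyro_reg": 0x22},
}

def regmap(sensor: str) -> tuple[int, int]:
    """Return (I²C address, gyro register) for a supported sensor."""
    m = SENSOR_REGMAP[sensor]
    return m["i2c_addr"], m["gyro_reg"]

print([hex(v) for v in regmap("LSM6DSO")])  # ['0x6a', '0x22']
```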
Recording Your Results
The numbers that matter for each sensor. Fill these in after running the full suite.
| METRIC | MPU-6050 | BMI270 | LSM6DSO |
|---|---|---|---|
| Published resonance | 27,000 Hz | 23,000 Hz | 21,000 Hz |
| Discovered resonance | _____ Hz | _____ Hz | _____ Hz |
| Drift from published | ±___ Hz | ±___ Hz | ±___ Hz |
| Baseline noise (°/s) | _____ | _____ | _____ |
| Peak gyro magnitude (°/s) | _____ | _____ | _____ |
| SNR (peak / baseline) | ___× | ___× | ___× |
| Confidence score | _____ | _____ | _____ |
| Multi vs. Single winner | _____ | _____ | _____ |
The validation threshold: If all three sensors show SNR ≥ 5× and their discovered frequencies are within ±500 Hz of published values — you have cross-family validation of acoustic MEMS resonance disruption. That's the core hypothesis confirmed. Everything else (beamforming, ranging, shell characterization) is engineering on top of validated physics.
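The validation threshold is mechanical enough to automate. A hypothetical helper over per-sensor results — the field names and the sample numbers below are illustrative, not measurements:

```python
def cross_family_validated(results: dict[str, dict]) -> bool:
    """True when every sensor shows SNR ≥ 5× and ≤ 500 Hz drift from published."""
    return all(
        r["snr"] >= 5.0 and abs(r["discovered_hz"] - r["published_hz"]) <= 500
        for r in results.values()
    )

results = {  # illustrative values only
    "MPU-6050": {"published_hz": 27_000, "discovered_hz": 27_120, "snr": 14.0},
    "BMI270":   {"published_hz": 23_000, "discovered_hz": 22_830, "snr": 7.5},
    "LSM6DSO":  {"published_hz": 21_000, "discovered_hz": 21_210, "snr": 6.1},
}
print(cross_family_validated(results))  # True
```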
Advanced: Blind Discovery Mode
Run the scan without telling it which sensor you're using — prove the algorithm can find resonance on its own.
What Changes in Blind Mode
Instead of scanning ±5 kHz around the published resonance, the system sweeps the entire 18–35 kHz band — a 17 kHz range. The coarse pass takes longer (17+ frequency steps instead of ~10), but the narrowing process is identical.
This is the scientifically rigorous approach: the algorithm has no prior knowledge of the sensor type. If it still converges on the correct frequency, it proves the adaptive scan works as a general-purpose MEMS resonance discovery tool — not just a validation of already-known frequencies.
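The coarse pass over the blind band is simple to picture. A sketch assuming a 1 kHz coarse step — the step size is an assumption; the text only says 17+ steps:

```python
def coarse_sweep_steps(start_hz=18_000, stop_hz=35_000, step_hz=1_000):
    """Frequency steps for the coarse pass over the full blind band."""
    return list(range(start_hz, stop_hz + 1, step_hz))

steps = coarse_sweep_steps()
print(len(steps), steps[0], steps[-1])  # 18 18000 35000
```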
Known MEMS Profiles
All 11 sensor profiles built into resonance.py. You have 3 on hand. The rest are future targets.
| SENSOR | FAMILY | RESONANCE | STATUS |
|---|---|---|---|
| MPU-6050 | InvenSense | 27 kHz | On Hand |
| MPU-6500 | InvenSense | 26 kHz | Future |
| MPU-9250 | InvenSense | 27 kHz | Future |
| ICM-20689 | InvenSense | 24 kHz | Future |
| ICM-42688-P | InvenSense | 25 kHz | Future |
| IIM-42652 | InvenSense | 25 kHz | Future |
| BMI055 | Bosch | 22 kHz | Future |
| BMI088 | Bosch | 23 kHz | Future |
| BMI270 | Bosch | 23 kHz | On Hand |
| LSM6DS3 | STMicro | 20 kHz | Future |
| LSM6DSO | STMicro | 21 kHz | On Hand |
The 3 "On Hand" sensors cover all 3 silicon architectures across a 6 kHz frequency spread (21–27 kHz).
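The profile table as data, plus a sanity check that the blind-discovery band covers every profile. The dict name is illustrative, not the actual structure in resonance.py:

```python
MEMS_PROFILES = {  # sensor → published resonance, Hz (per the table above)
    "MPU-6050": 27_000, "MPU-6500": 26_000, "MPU-9250": 27_000,
    "ICM-20689": 24_000, "ICM-42688-P": 25_000, "IIM-42652": 25_000,
    "BMI055": 22_000, "BMI088": 23_000, "BMI270": 23_000,
    "LSM6DS3": 20_000, "LSM6DSO": 21_000,
}

# Every profile sits inside the 18–35 kHz blind-discovery band.
assert all(18_000 <= f <= 35_000 for f in MEMS_PROFILES.values())

on_hand = {"MPU-6050", "BMI270", "LSM6DSO"}
spread = max(MEMS_PROFILES[s] for s in on_hand) - min(MEMS_PROFILES[s] for s in on_hand)
print(spread)  # 6000
```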
Quick Reference
Cheat sheet for common operations.
COMMANDS
SUCCESS CRITERIA
You Have Everything You Need.
The hardware is assembled. The software is installed. The three sensor families are waiting. Run the adaptive scan, read the SNR, and prove that silicon resonates when you tell it to.
Legal Framework
Eight legal dimensions examined without overclaiming. This is sound — not RF, not kinetic, not cyber. Where the law is unsettled, we say so.
✓ No FCC Jurisdiction
The FCC's authority under 47 U.S.C. § 302a is limited to radio frequency devices. This system generates no electromagnetic signal, RF pulse, or wireless transmission of any kind. Sound waves propagated through air are outside FCC regulatory authority by statutory text. No FCC license or waiver applies.
✓ Not an RF Jammer or Kinetic Intercept
Federal law (47 U.S.C. § 333; FCC 47 C.F.R. § 15.5) prohibits RF jamming — this system emits no RF. It involves no projectile, explosive, or physical contact with the airframe. Counter-UAS law is actively developing in the U.S.; federal agencies hold explicit authority under the FAA Reauthorization Act (2018). Civilian deployment requires state-specific legal review before use.
✓ No CFAA Violation
The CFAA (18 U.S.C. § 1030) criminalizes unauthorized access to protected computer systems. Acoustic resonance mechanically stresses a silicon MEMS structure through physical vibration — no data packet, no network interface, no software call occurs. This is materials physics, not computer intrusion. No CFAA element is satisfied.
✓ Property Defense — Jurisdiction-Dependent
Property defense and self-help doctrines exist across U.S. jurisdictions. Several states (Texas, Florida, North Dakota, Oklahoma) have enacted statutes explicitly addressing counter-UAS rights for property owners. Proportionality is the operative standard — acoustic disruption satisfies it more defensibly than kinetic alternatives. Deployment legality is state-specific; verify applicable statute before use.
✓ SPL Below Harm Threshold at Standoff
Direct f₀ (25 kHz) is ultrasonic and inaudible. Subharmonic tones (5–12.5 kHz) attenuate below OSHA 8-hour TWA (90 dBA) at ≥3 m operator standoff. The beam axis is directed at the target — away from the operator. No bodily harm capability exists within defined deployment parameters. Risk requires deliberate, sustained, unattenuated self-exposure at close range.
✓ Published Academic Basis
MEMS gyroscope acoustic vulnerability has been independently demonstrated at USENIX Security (2015), IEEE EuroS&P (2017), and ACM CCS (2018). This is established peer-reviewed science — not a novel theory. Prior publication is legally significant: the vulnerability is public domain, not proprietary weapon development. No trade secret or restricted technical knowledge is involved.
✓ Not Prohibited by Any Weapons Convention
The UN CCW and its five protocols cover incendiary weapons, landmines, blinding lasers, explosive remnants, and LAWS. Acoustic non-lethal countermeasures fall under none. The Chemical and Biological Weapons Conventions do not apply. No international treaty prohibits acoustic disruption of unmanned systems. This is a legal gap — civilian use is governed by domestic law, not international humanitarian law.
✓ NLW Classification — Self-Defense Doctrine
Acoustic directed-energy devices are recognized as non-lethal under IACP and PERF use-of-force guidelines. U.S. federal courts (3rd, 9th Circuits) have analyzed acoustic NLW and confirmed they do not constitute deadly force. Under proportional self-defense doctrine, non-lethal acoustic means satisfy necessity and proportionality requirements more readily than any kinetic or RF alternative.
Scientific Foundation
Peer-reviewed research from leading security conferences validates every technical claim.
Rocking Drones with Intentional Sound Noise on Gyroscopic Sensors ↗ USENIX Security '15
WALNUT: Waging Doubt on the Integrity of MEMS Accelerometers with Acoustic Injection Attacks ↗ IEEE EuroS&P '17
Manipulating Critical Sensor Readings Using Acoustic Injection Attacks ↗ ACM CCS '18
Un-Rocking Drones: Foundations of Acoustic Injection Attacks and Recovery Thereof ↗ NDSS '23
Exploring Practical Acoustic Transduction Attacks on Inertial Sensors in MDOF Systems ↗ IEEE TDSC '23
Built For The World
A $300 counter-UAS platform. Published physics. Verified code. No export restrictions. No NDAs. No defense contracts. Made available because the threat doesn't wait for a budget cycle.
Why This Belongs to Everyone
Defense technology that only governments can afford doesn't protect the people who need it most. This was built to change that equation.
"The asymmetry is the problem. A $300 hostile drone is available to anyone. A $300 countermeasure should be too.
Publishing this is not naivety — it is the only proportionate response to the threat landscape as it actually exists."
How It Should Be Used
This is a defense tool. The architecture enforces that. Understanding the intended use cases is the first step before deployment.
Property Defense
Protecting a fixed location — home, farm, facility — from overflying hostile or surveillance drones. Operator is on-site, in the loop, and authorizing each engagement. This is the primary design case.
Critical Infrastructure
Power stations, water facilities, communication nodes — these face regular threat with no proportionate defense. Deploy as a monitored perimeter layer. Requires dedicated operator station and engagement log review.
Crowd Protection
Events, stadiums, public gatherings. Deploy as a ring of directed emitters. Non-lethal by physics — no stray projectile risk. Engagement requires security coordinator authorization. Check local regulations before deployment.
Military & FOB Defense
Forward operating bases and convoy protection against FPV and commercial drone threats. Integration with existing ISR feeds for pre-authorization. This use case requires command authority chain and rules of engagement documentation.
The Importance of Balance
Dual-use technology demands dual-use responsibility. Open publication means accepting that some will misuse it — and choosing to publish anyway because the defense value outweighs the misuse risk.
▶ WHY OPEN WINS
- The vulnerability is already documented in academia. Defense against a known attack is not new exposure.
- Bad actors with resources already have this capability. This equalizes access for defenders.
- Open code can be audited. Closed code can be back-doored. Transparency reduces systemic risk.
- The physics require precision hardware. This is not a device that scales into mass harm.
- Communities that cannot afford $30K solutions are unprotected today. This changes that.
▶ REAL RISKS TO ACKNOWLEDGE
- Regulations vary by jurisdiction. Deploying without legal review risks criminal liability.
- Commercial drones used legitimately could be affected in misuse scenarios.
- Any sufficiently capable tool can be misused. The architecture minimizes this; it cannot eliminate it.
- Operator error is possible without proper training. Documentation and test procedures must be followed.
- Proliferation of any counter-UAS technology shifts adversary investment to hardened platforms.
This platform was engineered to a hard constraint: the defense value must exceed the misuse potential. MEMS resonance is precise, non-lethal, non-ionizing, and operator-gated. That calculus was evaluated at every design stage. It is why this is published.
- This software is provided as-is, with no warranty of any kind, express or implied. It has not been validated on all hardware configurations.
- Untested in the field. All bench tests are synthetic. Real-world performance depends on your hardware, environment, and acoustic setup.
- You assume full legal and operational responsibility for your use of this software and any hardware built with it.
- Counter-UAS deployment is regulated by law in most jurisdictions. Verify your legal authority with a qualified attorney before any operational deployment.
- The authors accept no liability for damages, legal consequences, or loss of any kind arising from use or misuse of this software.
SoulMusic v1.0
Acoustic Counter-UAS Platform
Full source code, hardware schematics, sensor profiles, test harness, and documentation. SEUL v2.0 Licensed. No registration. No gating. Read the deployment rules and disclaimer above first.
v1.0.0 installers available for Windows and Linux. Source (.zip) includes all core modules and docs. Run from source — see the Setup tab.