Before I sink a bunch of time into a proof of concept for something that was already considered and rejected for some reason not yet obvious to me, I figured I'd ask here first.
We all know that these radar detectors (I have a Uniden R3 at the moment) have various sounds to alert us of danger. You can set separate sounds per band, some models can even speak the frequencies aloud, and the stronger the signal, the more frequently the alert sounds.
So the idea is to activate the phone microphone to listen in on these alerts and record them in an app. After some research, an FFT-based approach looks relatively easy to implement for telling the different alert types apart, and timing the beeps would allow deducing the signal level. Some speech recognition (either local, which should be doable with some struggle since the variability is low, or "in the cloud") would even allow recording the exact frequencies hit, at least for the Ka band.
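As a rough sketch of the FFT-based classification idea: take a short frame of microphone audio, find the dominant frequency, and match it against a table of per-band tone frequencies. Note the frequencies in BAND_TONES below are placeholders I made up for illustration, not the actual R3 tones; those would have to be measured per detector and settings.

```python
import numpy as np

# Placeholder tone table (Hz) mapping alert tones to bands.
# Real values would need to be measured from the actual detector.
BAND_TONES = {
    "X": 800.0,
    "K": 1200.0,
    "Ka": 1600.0,
    "Laser": 2000.0,
}

def dominant_frequency(samples, sample_rate):
    """Return the strongest frequency (Hz) in a mono audio frame."""
    windowed = samples * np.hanning(len(samples))  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def classify_alert(samples, sample_rate, tolerance=50.0):
    """Match the frame's dominant frequency against the tone table."""
    f = dominant_frequency(samples, sample_rate)
    best = min(BAND_TONES, key=lambda band: abs(BAND_TONES[band] - f))
    if abs(BAND_TONES[best] - f) <= tolerance:
        return best
    return None  # not a recognized alert tone

# Example: synthesize a 0.1 s 1200 Hz beep and classify it.
sr = 44100
t = np.arange(0, 0.1, 1.0 / sr)
beep = np.sin(2 * np.pi * 1200.0 * t)
print(classify_alert(beep, sr))  # prints K with this placeholder table
```

In a real app the frame length trades off latency against frequency resolution (a 0.1 s frame gives 10 Hz bins, plenty for distinguishing tones spaced hundreds of Hz apart).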
Obvious upsides include:
- crowdsourcing false-alert maps
- automatic crowdsourcing of LEO presence without relying on V1 units, which have the necessary machinery to report it (and cost $$$)
- (if using an external audio cable mutes the built-in RD speaker, to be tested) sound alerting could then be offloaded to the phone, letting you implement various automatic muting schemes there and bringing this and other functionality to detectors that lack it natively. This also removes the concern about loud music drowning out the alerts.
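The signal-level deduction mentioned earlier could work by detecting beep onsets in the audio envelope and mapping the beep rate to a rough level. A minimal sketch, assuming the detector beeps faster as the signal strengthens up to some maximum rate (max_rate here is a made-up calibration constant):

```python
import numpy as np

def beep_times(envelope, sample_rate, threshold=0.2):
    """Return onset times (s) where the envelope rises above a threshold."""
    above = envelope > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    return onsets / sample_rate

def signal_level(onsets, max_rate=10.0):
    """Map beep rate (beeps/s) to a rough 0..1 signal level.

    max_rate is a placeholder: the fastest beep rate the detector
    produces at full signal strength, to be calibrated per model.
    """
    if len(onsets) < 2:
        return 0.0
    rate = 1.0 / np.mean(np.diff(onsets))
    return min(rate / max_rate, 1.0)

# Example: a synthetic envelope with a 50 ms beep every 0.25 s.
sr = 1000
env = np.zeros(2 * sr)
for start in range(0, 2 * sr, sr // 4):
    env[start:start + 50] = 1.0
onsets = beep_times(env, sr)
print(signal_level(onsets))  # prints 0.4 (4 beeps/s over a 10/s max)
```

In practice the envelope would come from rectifying and low-pass filtering the classified alert tone, so music and speech in the cabin don't trigger false onsets.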
Additionally, after reading about project SEAL and its potential patent trouble, I read the patent in question, and I think this approach works around it nicely:
- The patent centers on a radar detector communicating with computers and "upgrade devices" by radio waves, so audio should be fine ("the invention features a radar detector having a wireless device interface comprising a radio compliant with one or more of Bluetooth, Zigbee, 802.11, and wireless personal area network communication protocols")
- The patent describes how the upgrade device can silence alerts from the radar detector based on various criteria. With the RD always alerting (into the audio cable and on its onboard display) and never being silenced, this also seems OK when the actual sounds are produced by the phone based on both RD input and other sources.
- IANAL, but the phone being totally separate from the RD, with a one-way, input-only link, seems to run contrary to the tightly integrated system described in the patent.
So am I out of my mind?