V1 on 34.7 with Custom Sweeps vs. Factory Defaults

Vortex

Last time I went out to test some detectors, I also did a set of runs with the V1 on 34.7 to see what difference custom sweeps make vs. factory defaults.

While custom sweeps *should* improve performance with the V1, there was a test (sorry, I forget whose it was, but @Tallyho just reminded me about it) that found the V1 had better range with factory defaults on 34.7. Because of this, I went out to repeat the test myself and see what I found.

This was with a V1 running 3.8945 with very standard 2,5,8 custom sweeps and Ka guard off vs. factory defaults. In this case, the custom-swept configuration definitely did better. (I moved the radar gun between the main test runs and these runs, which made the radar source even weaker than before, so the detection distances in these passes don't line up with the distances I got earlier in the day; they were even shorter than in the main test.)

Anyways, here's a quick test showing the V1 consistently doing better with custom sweeps.

The custom swept setup had one monster detection way back in the S-curves but otherwise consistently alerted just before the 2nd gravel pullout. When set to factory defaults, the detector alerted when I was practically on top of the radar gun. (It was sitting in a box in the grass in the gravel pullout I turn around in.)

https://www.youtube.com/watch?v=CqbdefiHcmk
 

Tallyho

Thanks, Vortex. It's another piece of information to throw into the data hopper.

I'm very much interested to know what causes these wild variances. At the last meet, TWO separate V1C's gave identically poor results. We were so perplexed that we didn't think to default them both back to factory settings. However, the one we did default to factory settings inexplicably gave double the detection range.

I'm really looking forward to the next meet to continue our testing.
 

milkman

Thanks Vortex!
 

jdong

In both of my previous 34.7 tests, the V1C performed better than the V1 on 34.7 with custom sweeps, but the superior performance was kind of inconsistent. In fact, in one of my tests it beat out the Redline: https://www.rdforum.org/showthread.php?t=32241

In subsequent tests, even with 6 loaded 34.7 sweeps, I failed to achieve the same result again.


I think it's just the variance of high-end detectors and fringe detections falsely producing "patterns" in our observations. If we did 10 or 20 runs with each detector, or interleaved the runs, we would have a better picture of how much variance there is and whether any of the differences between detectors are statistically significant.

(and hiddencam would be really tired too :D)
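For anyone curious what "statistically significant" would look like in practice, here's a minimal sketch. The run distances below are made up for illustration, and a permutation test is just one simple way to check whether the gap between two detectors' runs could plausibly be chance:

```python
import random

def perm_test(a, b, trials=10_000, seed=42):
    """Two-sample permutation test: how often a random relabeling of the
    pooled runs produces a mean gap at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials

# Made-up detection distances in feet over 10 interleaved runs each:
custom_sweeps = [5200, 4900, 5400, 5100, 5000, 5300, 4800, 5250, 5150, 5050]
factory       = [4700, 5200, 4600, 5000, 4500, 4900, 4650, 5100, 4550, 4800]
p_value = perm_test(custom_sweeps, factory)
```

A small p-value says the observed gap is unlikely to be run-to-run noise alone; with only 3 or 4 runs per detector, almost no realistic gap clears that bar, which is exactly the 10-or-20-runs point.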
 

Tallyho

Sure, but not a 100% range improvement and 2,500-foot variances!

There's standard deviation and then there's variance. One requires further explanation and the other can be dismissed.
 

jdong

Tallyho said: "Sure but not 100% range improvement, 2500 foot variances! There's standard deviation and then there's variance. One requires further explanation and the other can be dismissed."
I honestly would not be surprised if the variance is 100%. That's the fun of terrain. Take this contrived course as an example:

[Diagram: elevation profile of a contrived course with detection points A, B, and C]

Due to the terrain, the only points of detection in reality are A, B, and C. If those points are 1 mile apart, then with the right gun placement and the right hills you can basically separate radar detectors into 3 groups of results: A, B, and C. Now if we say Redlines pick this up reliably at 3 miles (point A), while the V1 only sometimes picks it up there but always picks it up at point B, you would come to the same kind of conclusion, where the variance from run to run with the V1 might be an entire mile.


Of course, nobody picks a test course that looks exactly like this from an altitude perspective. However, a lot of courses in the real world look like this from an RF perspective — foliage or terrain or road curvature results in a few BRIEF windows of opportunity to detect the signal from afar, but in between those brief opportunities, no detector can see the signal at all. Then once you get to a certain point (like the 2500ft point in HC's course), basically you're at point "C" in my diagram — any detector will see it. So in the real world, missing "point A" (lack of sensitivity, or a car was blocking the radar gun at the exact point in time where you were cresting that "hill") might cost you several thousand feet in the results.
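To make the point concrete, here's a toy simulation of that course. All of the numbers (window distances and catch probabilities) are invented for illustration; each pass simply reports the farthest window the detector happens to catch:

```python
import random

# Hypothetical course: brief detection windows (feet from the gun) and the
# chance a given detector catches the signal in each window.
WINDOWS = [(15840, 0.50),  # point A: caught only about half the time
           (10560, 0.95),  # point B: nearly always caught
           (2500,  1.00)]  # point C: every detector sees it here

def one_pass(rng):
    """Distance reported on a single run: the farthest window caught."""
    for distance, p in WINDOWS:
        if rng.random() < p:
            return distance
    return 0  # unreachable here, since point C is a certain catch

def result_spread(runs=1000, seed=1):
    rng = random.Random(seed)
    results = [one_pass(rng) for _ in range(runs)]
    return min(results), max(results)

shortest, longest = result_spread()
```

With a 50% chance at point A and a small chance of sliding all the way to point C, identical hardware can post results anywhere from 2,500 ft to 3 miles across repeated passes.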
 

Tallyho

Agreed.

And in keeping with this example, a V1C should be able to sometimes pick up point A and reliably pick up point B.

So why was it that the V1 was reliably picking up A AND B and the V1C couldn't pick up any A at all?
 

jdong

Tallyho said: "... So why was it that the V1 was reliably picking up A AND B and the V1C couldn't pick up any A at all?"
That result is definitely very curious and warrants additional investigation. My general experience has been in line with what others have reported — that the V1C gets the same or better (usually better) range compared to the stock V1.

I think we'd need more information about the testing circumstances: what happened in the time between the V1 runs and the V1C runs, and how many runs were done with each. I would also be concerned if the V1/V1C were each mounted once and then all the runs were done, because then mount bias becomes an additional variable affecting every run of that detector. Ideally, to account for that, you take each detector off the windshield (suction cups and all) and put it back on according to your favorite mounting algorithm.
 

GTO_04

jdong said: "That result is definitely very curious and warrants additional investigation. ... I think we'd need more information about the testing circumstances."
The only thing that happened between runs was that Tallyho changed his V1 settings, going from custom sweeps to stock mode. No mounting locations were changed. So the mystery remains...

GTO_04
 

hiddencam

jdong said: "I think it's just the variance of high end detectors and fringe detections falsely producing 'patterns' in our observation. If we did 10 or 20 runs with each detector or interleaved the runs, we would have a better picture of how much variance there is, and whether or not any of the differences between detectors are statistically significant."
This is an excellent post! I couldn't agree more with your last paragraph.
 
