TXCTG – Testing the new V1 Gen 2 – Ka 34.7, 35.5, 33.8 and Low-Powered K - Waco Pioneer Pkwy 03-08-2020

jdong

Premium Plus
Lifetime Premium
Advanced User
Joined
Jun 5, 2013
Messages
7,470
Reaction score
11,673
Nope. Not waiting for Theia. Just waiting for the cash... lol

New A/C for my RV / current residence must come first... as multimeter continuity tests revealed a bad compressor the other day... as if all the oil everywhere wasn't a clue... lol

And I can't sleep at night if I'm hot... 🥵

Figures, because the A/C unit is gonna cost about what that V1 would... 🙄

Ah yes that's absolutely more important than CMs!

But really, no exaggeration, the V1G2 made my jaw drop the first time I used it in terms of how quiet it was on K band, especially around BSMs and when driving past shopping centers. I had to check many times whether I had disabled K band!

Short of Escort pulling something out of a hat (unlikely IMO), Theia is probably going to be the only thing to top the V1G2.
 

hammerdown

The North remembers....
Advanced User
Joined
Apr 8, 2012
Messages
9,889
Reaction score
10,659
Ah yes that's absolutely more important than CMs!

But really, no exaggeration, the V1G2 made my jaw drop the first time I used it in terms of how quiet it was on K band, especially around BSMs and when driving past shopping centers. I had to check many times whether I had disabled K band!

Short of Escort pulling something out of a hat (unlikely IMO), Theia is probably going to be the only thing to top the V1G2.

I like hearing that it's that quiet. That's definitely important to me. There are almost as many real K-band threats around me nowadays as there are falses... ☹
 

cihkal

🕊
Lifetime Premium
Corgi Lovers
Advanced User
Joined
Apr 21, 2014
Messages
4,417
Reaction score
8,797
Thank you so much for the testing!!!

I ask this with true sincerity (some will flame me), as an engineer... did the course runs with 30-degree angled guns have a test car run through to verify speed against the gun for cosine error? If not, it creates this dilemma:

If each course run isn't validated ahead of time with a test car's speed (GPS) against the gun (outside shooter), what happens?

We become subjective. If the course isn't verified, we cannot objectively argue the legitimacy of someone shooting at 30 degrees, as in some of these cases, versus someone shooting at a mirror trying to get your speed.
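For readers unfamiliar with the term, here is a minimal sketch of the cosine error being discussed (Python, with a hypothetical helper name; not from the thread): a Doppler gun measures only the velocity component along its beam, so an angled shot under-reads true speed by a factor of cos(angle).

```python
import math

def radar_reading(true_speed_mph: float, angle_deg: float) -> float:
    """Displayed speed from a Doppler gun aimed angle_deg off the
    target's direction of travel: true speed scaled by cos(angle).
    The error always works in the driver's favor (reads low)."""
    return true_speed_mph * math.cos(math.radians(angle_deg))

# A car doing 70 mph shot by a gun angled 30 degrees off the road:
print(round(radar_reading(70, 30), 1))  # about 60.6 mph
```

This is why the 30-degree gun angle is debated above: an operator shooting at that angle gives up roughly 13% of the reading, so it stresses detector sensitivity rather than modeling typical enforcement geometry.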

I know cops don't always play fair, but having some type of standards we could all adopt would be good. Just like going back to our roots of outside I/O testing. First figure out how fast you can get someone's speed legitimately... then test each RD one by one!!! The good ones will hardly miss!!

Although we all can't repeat test conditions, we can repeat test standards!

Tl;dont read lol: some of my more academic peers would be asking what you're truly testing for at 30 deg other than that itself? What if one horn geometry was focused more on angles of incidence(?) which would likely equate to a ticket? Just a hypothesis.

Hint I'm working off of: why does the V1 seem to have an atypical horn geometry (still) that might be biased towards on-axis. I'm not doubting he dramatically improved off axis either. I'm just wondering if anyone's tried really long on-axis with slight dips and rises?

And I get the off-axis benefit, but I would refer to Dukes' testing course!

I'm going back to testing against real [staged] scenarios in preference.

But I'm not making fun, I can't do what these guys do. It's a lot of hard work, and a thankless job. But I'm wondering if we're limiting ourselves in the extreme cases. I'm not talking about low-powered K, but the concepts could be applied.

But what if, in the scenarios where the Ka measurements become a dick measuring contest... you get into slick outside I/O testing from a distance in a very, very tricky real-world setting. And you spend most of your time measuring how far and how well each RD does. You can still post the 85-degree extreme long-range off-axis posts, but I think these would be a lot of fun and of value to all!
 
Last edited:

Elcid2015

Late for my own funeral
Lifetime Premium
Intermediate User
Joined
Nov 20, 2017
Messages
738
Reaction score
1,459
Location
North/South Carolina
@Brainstorm69 sorry if I missed this, but how did you test the rear antenna? Basically, I'm interested in whether the latch time affected the rear antenna distance.
 

Deacon

TXCTG
VIP
Lifetime Premium
Advanced User
Joined
Nov 13, 2016
Messages
14,794
Reaction score
21,165
Location
Hill Country, TX
We become subjective.
Certainly not. You can try to argue the real-world applicability of various courses and setups, but the “kill zone” isn’t the point of this testing. It’s the relative performance. It’s not practical or even particularly useful to try to find the furthest possible range under ideal conditions and measure against that.

It’s not the ideal detecting conditions that cause us the most concern in the real world. Blister pack Cobras can pick up radar fired straight down the barrel on a flat straight. Even a Batman detector can pick up radar shot at it from across the street. It’s the pain in the butt where a Tahoe comes rolling from behind the dense trees around a sweeping curve to meet you face to face. It’s the I/O fired in the distance for just a couple seconds with no thoughts of accommodating your detector.

That’s why tests under less-than-ideal circumstances are important and useful. Like anything else, there’s a bell curve in effectiveness of testing. The ideal test is tough enough to tease out repeatable differentiation, but not so tough as to render each detector equally useless. The “kill zone” or cosine errors and such are beside the point unless you’re a cop running that radar to collect revenue.
 
Last edited:

Jag42

USA TMG a-15 Dealer - PM me
Lifetime Premium
Advanced User
Manufacturer
Joined
Jun 13, 2011
Messages
10,250
Reaction score
22,094
@Brainstorm69 sorry if I missed this, but how did you test the rear antenna? Basically, I'm interested in whether the latch time affected the rear antenna distance.

We drove past the radar source, turned around and drove away from the source with the rear antenna.
 

Brainstorm69

TXCTG - 2016 MOTY
Premium Plus
Lifetime Premium
Advanced User
Joined
May 23, 2015
Messages
12,042
Reaction score
31,750
Location
Lone Star State
@Brainstorm69 sorry if I missed this, but how did you test the rear antenna? Basically, I'm interested in whether the latch time affected the rear antenna distance.
We tested the rear antenna by driving away from the radar, so latch time (to the extent it is different) did affect the results.

[Edit: BTW, will respond to @cihkal's post when I get the time. Just don't have it right now.]
 
Last edited:

cihkal

🕊
Lifetime Premium
Corgi Lovers
Advanced User
Joined
Apr 21, 2014
Messages
4,417
Reaction score
8,797
Certainly not. You can try to argue the real-world applicability of various courses and setups, but the “kill zone” isn’t the point of this testing. It’s the relative performance. It’s not practical or even particularly useful to try to find the furthest possible range under ideal conditions and measure against that.

It’s not the ideal detecting conditions that cause us the most concern in the real world. Blister pack Cobras can pick up radar fired straight down the barrel on a flat straight. Even a Batman detector can pick up radar shot at it from across the street. It’s the pain in the butt where a Tahoe comes rolling from behind the dense trees around a sweeping curve to meet you face to face. It’s the I/O fired in the distance for just a couple seconds with no thoughts of accommodating your detector.

That’s why tests under less-than-ideal circumstances are important and useful. Like anything else, there’s a bell curve in effectiveness of testing. The ideal test is tough enough to tease out repeatable differentiation, but not so tough as to render each detector equally useless. The “kill zone” or cosine errors and such are beside the point unless you’re a cop running that radar to collect revenue.
Simply put, how do you guys reconcile cosine error in the extreme cases?

Do you ever validate the test courses in a manner like I mentioned?

I understand how you may have your reasoning for disagreeing with why these questions are relevant; they just recently dawned on me. It would be nice to know plain and simple for my own perspective, whether this is actually done by the Texas group.

I mean you can argue over subjectivity but it's relative to what? If you're testing how the detectors simply pick up the signal when a gun's angled at 30 degrees, well, we can conclude how well they do at that task.

If you're trying to draw parallel conclusions about how a LEO might be working that section of road in a tricky manner, well I'm not so sold on the fact that the testing is representative of that. I think that's a completely fair stance if you're someone that works in the engineering or scientific community.

If the answer is we don't do these things or consider these things cuz we simply don't find them important, then simply tell me that and we'll move on.

At the end of the day it becomes, to me, a question of: what did the testing really show us, and what reasonable conclusions can we put together from this? I guess I look at these tests as a snapshot of a real-world scenario, and that's something us users can fairly easily compare against.

If it's just a sensitivity measurement that can't be truly paralleled to a real scenario, well, it just gets confusing. Should we just pool together and get the lab sensitivity numbers verified by some RDF-selected group? Just the questions that come to mind.
Post automatically merged:

We tested the rear antenna by driving away from the radar, so latch time (to the extent it is different) did affect the results.

[Edit: BTW, will respond to @cihkal's post when I get the time. Just don't have it right now.]
Thank you, it really is just this:

I am not saying the testing has no value or information... because obviously it tells us something. For some reason I think MV's interview had me go down memory lane. I don't think that what I'm proposing is new.

I started thinking about cosine error and how it plays into everything. How would an actual shooter be working the street, and if he/she were a clever LEO, what techniques would they use to circumvent that type of terrain and RD users?

For me, I would conclude that in extreme off-axis cases, maybe the R7 is the stronger of the two when compared to the V1G2. It's one test among many, of course. The devil is in the details; I started thinking about horn geometry and why certain companies might intentionally choose one design over another (not just due to engineering constraints).

Understanding more around the test just helps me take more from the results. Please do not take questions as a bad thing - I'm not angry. I guess I just realized that because these modern RDs are quite good, it really has changed our way of testing due to course limitations. With that, how do you also change your thinking about results that might not be directly representative of a real-world scenario all that often? Certainly not saying things are invalid, just wondering how I should apply the info to my surroundings and threat environment.

Edit: and yes, it becomes subjective because in my eyes the results are left up to you to apply to your threat scenarios. It was not a test of a real-world scenario; it was essentially a sensitivity measurement. I am left to superimpose the results over what I see to make an assessment. There's a gap between the results and how they are applicable to each viewer. If you test in a verified manner using a real-world scenario (even if super tricky), it at least does this: users can take the real-world scenario and superimpose it onto their threat environment. That seems much more valuable to me. Yes, we're getting into semantics, but I'm trying to be practical here. Just the fact that most don't know what cosine error is should prove my point.

Remember guys, even within this extreme (enthusiast) community we are very stratified by experience level and time with the community. I just want to bring up cosine error because keeping it out does add a "*" to the results in my book. Just like when foam was used as well.

For the record: if testing was done with cosine error accounted for and the Max 360 obliterated the V1G2 I would not cry about it. I am not trying to change results. I am just openly talking about an important "tool" aka cosine error.

ANYONE READING THIS: THESE GUYS ARE GREAT DO NOT THINK THIS IS SOME ATTACK BANDWAGON. If you're in the scientific or engineering community, these types of "teardown" questions are common.
 
Last edited:

Deacon

TXCTG
VIP
Lifetime Premium
Advanced User
Joined
Nov 13, 2016
Messages
14,794
Reaction score
21,165
Location
Hill Country, TX
Simply put, how do you guys reconcile cosine error in the extreme cases? ...If you're trying to draw parallel conclusions about how a LEO might be working that section of road in a tricky manner, well I'm not so sold on the fact that the testing is representative of that.
Cosine doesn’t come into it. We’re testing for detector performance, not ideal ways for cops to set up the toughest traps while minimizing cosine errors. The goal isn’t to set the gun up to get maximum reads at minimum distances. Whether the gun reads anything at all isn’t even important. Some testing setups involve antennas that fit the connector and emit their signal when powered on but aren’t actually compatible with the dash mounted counting unit to get speed readings. It doesn’t matter, as long as the antenna is emitting its CW at the frequency we’re looking to test.

If you're testing how the detectors simply pick up the signal when a gun's angled at 30 degrees well we can conclude how well they do at that task.
Yes, that’s what we’re testing. Or rather whatever combination of angles and heights and foliage and whatever other factors are required on a given course to challenge the detectors enough to show separation. As I mentioned before, that’s more representative of the real-world encounter scenarios that can cause concern and separate out the performers from the also-rans, for better or for worse. Code brown moments are not caused by radar operators minimizing cosine error when set up to maximize reading range. Why do you think @Vortex runs the red barn curve, or @Dukes his curve of death (IIRC), or @Jag42 and I the Equalizer Curve we found, which allows great separation options in relatively short driving distances—and is a course that actually gets messed up by real cops setting up shop to make their quotas? This testing they did follows the same concept, not testing the capabilities of radar systems but rather the ability of detectors to sniff them out. @Brainstorm69 said as much in the OP. I’ll let him elaborate further, as I believe he might intend to do.
Realize that these tests are not necessarily representative of the real world in terms of detection distances you are likely to see from any of these detectors from LEOs shooting straight down a road. But it does highlight sensitivity differences between detectors for the purpose of showing those that have a better chance of catching weak signals, whatever the cause (distance, foliage, hills, curves, etc.).


If the answer is we don't do these things or consider these things cuz we simply don't find them important, then simply tell me that and we'll move on.
Personally I don’t mind the discussion or the opportunity to hone testing approaches. And you’re not the first to ask the question. So often people demand to know what the “kill zone” was, ostensibly seeking to learn whether their favorite detector would’ve alerted in time to “save” them. But in this case it’s not clear to me what relevance cosine has when that’s not what we’re testing for.
 
Last edited:

cihkal

🕊
Lifetime Premium
Corgi Lovers
Advanced User
Joined
Apr 21, 2014
Messages
4,417
Reaction score
8,797
Cosine doesn’t come into it. We’re testing for detector performance, not ideal ways for cops to set up the toughest traps while minimizing cosine errors. The goal isn’t to set the gun up to get maximum reads at minimum distances. Whether the gun reads anything at all isn’t even important. Some testing setups involve antennas that fit the connector and emit their signal when powered on but aren’t actually compatible with the dash mounted counting unit to get speed readings. It doesn’t matter, as long as the antenna is emitting its CW at the frequency we’re looking to test.


Yes, that’s what we’re testing. Or rather whatever combination of angles and heights and foliage and whatever other factors are required on a given course to challenge the detectors enough to show separation. As I mentioned before, that’s more representative of the real-world encounter scenarios that can cause concern and separate out the performers from the also-rans, for better or for worse. Code brown moments are not caused by radar operators minimizing cosine error when set up to maximize reading range. Why do you think @Vortex runs the red barn curve, or @Dukes his curve of death (IIRC), or @Jag42 and I the Equalizer Curve we found, which allows great separation options in relatively short driving distances—and is a course that actually gets messed up by real cops setting up shop to make their quotas? This testing they did follows the same concept, not testing the capabilities of radar systems but rather the ability of detectors to sniff them out. @Brainstorm69 said as much in the OP. I’ll let him elaborate further, as I believe he might intend to do.



Personally I don’t mind the discussion or the opportunity to hone testing approaches. And you’re not the first to ask the question. So often people demand to know what the “kill zone” was, ostensibly seeking to learn whether their favorite detector would’ve alerted in time to “save” them. But in this case it’s not clear to me what relevance cosine has when that’s not what we’re testing for.
Thank you for the informative responses, I think that covers it for me!

And thank you for quoting a section of the test to help answer questions... there's a lot of excitement, and I will fully admit I might miss a detail or two.

And as you know from another post, I was looking at horn geometries, which could loosely validate why I was thinking about cosine error and how that would apply. An honest thought that came to mind, and I realized it would apply to a lot of our testing! Not just in Texas, but I believe they were the first to post major, well-documented results.
 
Last edited:

Brainstorm69

TXCTG - 2016 MOTY
Premium Plus
Lifetime Premium
Advanced User
Joined
May 23, 2015
Messages
12,042
Reaction score
31,750
Location
Lone Star State
My responses are below in red

Thank you so much for the testing!!!

I ask this with true sincerity (some will flame me), as an engineer... did the course runs with 30-degree angled guns have a test car run through to verify speed against the gun for cosine error? If not, it creates this dilemma:

If each course run isn't validated ahead of time with a test car's speed (GPS) against the gun (outside shooter), what happens?

We become subjective. If the course isn't verified, we cannot objectively argue the legitimacy of someone shooting at 30 degrees, as in some of these cases, versus someone shooting at a mirror trying to get your speed.

I have to disagree with your position here as far as the testing goes. The test is objective. It is measurable and repeatable. The point of the test, as mentioned by @Deacon above, is not to measure a "kill zone" and determine whether a detector is good enough to provide a save. It is to measure relative sensitivity. As long as all the detectors are being measured in the same manner, I think it's valid.

I know cops don't always play fair, but having some type of standards we could all adopt would be good. Just like going back to our roots of outside I/O testing. First figure out how fast you can get someone's speed legitimately... then test each RD one by one!!! The good ones will hardly miss!!

I see that you (and others apparently) are still concerned about the bench reactivity tests of the V1G2, and now it's bleeding over to your thoughts about other testing, which is ok. It's good to think about these things. I agree that it is possible that there is something going on with the V1G2 and its K-band filtering that may make that test not representative of what happens in the real world (and I'm glad @DrHow and @GTO_04 did some additional testing on that point). I still plan to do some I/O testing at distance to see if and how the results vary. I suspect that they will, but rather than speculate about it, I plan to test it. @Jag42 and I were going to do that when we tested in Waco. Unfortunately, we ran out of time, so that test will have to wait until next time.

Although we all can't repeat test conditions, we can repeat test standards!

As far as having testing standards, feel free to suggest what those would be and how they would be implemented. We had this discussion some time back without any consensus on the topic as I recall.


Tl;dont read lol: some of my more academic peers would be asking what you're truly testing for at 30 deg other than that itself? What if one horn geometry was focused more on angles of incidence(?) which would likely equate to a ticket? Just a hypothesis.

Hint I'm working off of: why does the V1 seem to have an atypical horn geometry (still) that might be biased towards on-axis. I'm not doubting he dramatically improved off axis either. I'm just wondering if anyone's tried really long on-axis with slight dips and rises?

And I get the off-axis benefit, but I would refer to Dukes' testing course!

Turning the gun 30 deg. on a straight road does not make the test "off-axis" if the gun is still in the same place that would be considered on-axis were it pointing straight down the road. What it should hopefully be doing (and it seems to be borne out by the results) is presenting a different (and weaker) lobe of the radar signal to the detector, at the same angle at which the detector was receiving the signal before. Again, to help highlight sensitivity differences.

I'm going going back to testing against real [staged] scenarios in preference.

I don't have any issue with members testing real [staged] scenarios. I'd be happy to see folks do testing they prefer, be that what they consider real [staged] scenarios testing or other testing. The more tests we have, the better it is for all of us.

But I'm not making fun, I can't do what these guys do. It's a lot of hard work, and a thankless job. But I'm wondering if we're limiting ourselves in the extreme cases. I'm not talking about low-powered K, but the concepts could be applied.

But what if, in the scenarios where the Ka measurements become a dick measuring contest... you get into slick outside I/O testing from a distance in a very, very tricky real-world setting. And you spend most of your time measuring how far and how well each RD does. You can still post the 85-degree extreme long-range off-axis posts, but I think these would be a lot of fun and of value to all!

Again, I'd love to see more and different testing of all varieties by various members. It's better for all of us. @Jag42 and I are trying to do our part. But neither of us have the time to test these days that we once had. So we are doing what we can.

As far as a dick measuring contest, if you are trying to say we have reached a point where sensitivity doesn't matter any more, in many cases it doesn't. But I think most here that drive in difficult terrain and/or vegetation (or face I/O) daily would still disagree with a statement that it doesn't ever matter anymore. But maybe I'm misunderstanding your point.
 
Last edited:

OBeerWANKenobi

This is not the car you're looking for......
ModSec
VIP
Premium Plus
Lifetime Premium
Corgi Lovers
Advanced User
Joined
Mar 20, 2018
Messages
8,192
Reaction score
25,738
Location
Outer Rim - Hiding from 35.5 I/O
@cihkal

If you haven't yet, it would be great for you to get out with a testing group one of these days and experience it for yourself. I know you were originally going to come to one of our testing events but it fell through for you. You're welcome to come again or set one up yourself in the future. Anyway, it's a little hard to imagine what you have to do to find a good course and get an existing course to work with all guns and detectors.

You need to make sure you aren't terrain limited. You need to make sure you don't max out the course. You need to make sure you have separation. You could test 3 detectors and find out on the 4th that one of the factors I just mentioned screws up your whole deal and you have to set up again. You can't bump the gun, you can't leave and go to lunch and come back, etc.

These are tests comparing different detectors in the same scenario, that's it. Range numbers aren't really important, it's more the difference in those numbers. So we can't say how they will directly affect real-world detections of actual threats. What we can say is that in most cases, the detector with the longer range in testing will have the greater sensitivity to those threats.
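The "relative, not absolute" point has a rough physical footnote. Under an idealized free-space, one-way reception model (an assumption; real terrain, curves, and foliage will differ), received power falls off as 1/r^2, so a sensitivity difference in dB maps to a range multiplier. A sketch, with a hypothetical function name:

```python
def range_ratio_from_db(delta_db: float) -> float:
    # One-way reception in free space: received power ~ 1/r^2, so a
    # detector that is delta_db more sensitive can hear the same source
    # at 10 ** (delta_db / 20) times the distance.
    return 10 ** (delta_db / 20)

# A ~6 dB sensitivity edge roughly doubles detection range:
print(round(range_ratio_from_db(6.0), 2))  # roughly 2.0
```

Read the other way: a detector that alerts at half the distance of another on the same course is down roughly 6 dB in that scenario, which is a real gap even though neither absolute distance transfers directly to other roads.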

If you wanted to test different detectors for their off-axis sensitivity, that could definitely be set up as well. That would compare the various detectors in that scenario. Once again, though, the numbers would be correlative rather than exact.
 

cihkal

🕊
Lifetime Premium
Corgi Lovers
Advanced User
Joined
Apr 21, 2014
Messages
4,417
Reaction score
8,797
I have to disagree with your position here as far as the testing goes. The test is objective. It is measurable and repeatable. The point of the test, as mentioned by @Deacon above, is not to measure a "kill zone" and determine whether a detector is good enough to provide a save. It is to measure relative performance. As long as all the detectors are being measured in the same manner, I think it's valid.

As far as a dick measuring contest, if you are trying to say we have reached a point where sensitivity doesn't matter any more, in many cases it doesn't. But I think most here that drive in difficult terrain and/or vegetation daily would still disagree with a statement that it doesn't ever matter anymore. But maybe I'm misunderstanding your point.

My point is, we can agree on the above in Blue while agreeing to disagree on minor details, but the big gap for me becomes how you connect Blue ---------> Green. That's where the gap lies and where users are left to superimpose the data onto their threat environment and assess. If we simply consider cosine error and frame tricky tests in a real-world manner, well, going from Blue ---> Green is much easier for not only the advanced guys, but up-and-coming members.

That's really what is going on on my end. And when you consider horn geometries and cosine error, well, you have reason to consider it. Shoot, even the Max 360 users would love this; we have pretty well documented information that the designers specifically played to cosine error at the neck of the horn (asymmetrical).

Relative performance to what? It's all relative I would say. Not trying to be tricky, really trying to drive in my point about going from Blue to Green. These things become talking points when you try to connect the two, you see?

And if we want to get into off-axis, I would just say that putting the gun at an extreme angle certainly is going to make for interesting reflections. It's kinda hard to say what's going on at that point, I would think. Maybe no reflections in Texas, glancing at the course lol... not like Chicago (JK'ing of course).

I agree that it's objective*, but this objectivity degrades more severely than in other testing scenarios because of the "gap" I am referring to when connecting BLUE to GREEN. As in, results versus what users are actually seeing in their everyday lives. Practically speaking. Very old, old topics... I get it; I guess this testing is really only to separate RDs using extreme cases. That's just widely different from the median I/O and C/O traps many see, or the rolling I/O K I see o_O. That last bit was selfish and maybe I'm stupid, but I would have a hard time applying these results to much of what I see in a meaningful manner. Playing within cosine error likely keeps everyone within the field that all decent RD companies focus on. Not that extremes should be ignored; I don't mean that at all. So I guess I'm going back to old-school techniques, yes, and I feel others should too. Then once those results are wrapped up, move on to different tests for the ones who really want to get into the weeds. Maybe going to the extremes first is backwards and confusing for many. IDK, I'm just going through this all in my head because testing results are coming out faster than I expected lol - more to come by others.

Edit: I just ask that anyone reading this not get bent out of shape by the questions, even if you didn't test. Depending on your profession, this kind of thing is really common and not meant with a bad heart.
 
Last edited:

Deacon

TXCTG
VIP
Lifetime Premium
Advanced User
Joined
Nov 13, 2016
Messages
14,794
Reaction score
21,165
Location
Hill Country, TX
EDIT: interrupted while writing this with phone calls and such, hopefully still applicable to the conversation.

I was looking at horn geometries which could loosely validate why I was thinking about cosine error and how that would apply.
I don’t follow, other than maybe generally speaking some antenna designs might be extremely directional (“wearing blinders”) while others may be better at collecting off-axis signals to give better alerts to threats about to come around the corner and give you time to avoid locking them up and getting your upholstery cleaned. Which I would say is a critical function—maybe the critical function—of the way I use a detector.

PS I just saw you replied again to your own post. I’ll take the opportunity to address what I think is the heart of the question.

First, you ask, “I mean you can argue over subjectivity but it's relative to what?” It’s difficult for my mind to comprehend the nature of that question, but we’re testing the detectors relative to each other. Absolute distances aren’t relevant or important, only how they compare to each other. There’s no specific passing grade; it’s all graded on a curve. A detector that picks something up at half the range of another, whether that range is measured in miles or fractions, will alert you to threats with far less heads-up than the one it’s being measured against. When people say things like “X detector gives you 1.2 miles of range” it makes me scratch my head. Which leads to point number two.

Second, real-world scenarios are incredibly variable. If a bored or gung-ho LEO aiming to get promoted wants to set up a trap with the best of them, he can try to find ways along roads he knows well to do so. An old friend of my wife’s is a Texas DPS trooper stationed out in the middle of nowhere on I-10, between San Antonio and El Paso. The one time I met him was on the side of the road, after she called to let him know we’d be passing through on a road trip and he said he’d be on the lookout and pull us over so we could say hi. I took the opportunity to tease out what he thought about CMs without giving away my own perspectives.

In Texas, troopers live off traffic stops for the most part. They write tickets, but really their equipment is used to give them an excuse to pull you over and demand to see your papers, in hopes that you’ve got some weed on you or whatever. And they’re generally known to be better educated on CMs than most in the state and are far more likely to incorporate I/O as their SOP (stuff I know, not gleaned from his conversation).

He told me he knows what the Escort remote installed controller looks like and looks out for it, and he said detectors (he could see my R3) didn’t bother him much because he has a couple of little hollows behind a hill off the road he can hide in and blast people as they blow by without even knowing he’s there, and even if detectors were good they won’t help if he’s not running radar until it’s too late. Yup.

But are those scenarios really the point? Yes and no. No, they’re not feasibly testable in general, and if you’re the target, you’re hosed anyway. But yes, detectors that show the greatest sensitivity in the toughest conditions will be the ones most likely to give you that little blip that lets you know someone got targeted up ahead, and watch out lest you be next. Some people joke about detections “over the river and through the woods” but if that’s where you live that can represent some tough scenarios, and it’s best to test against them so you get an idea of their limits.

Anyway, the point is that testing is done to separate detectors into strata based on performance against tough scenarios. Radar testing can be interesting and fun, too, playing around with them on the road to see what it’s like on the other end of the revenuer’s watchful eye. But that’s not the subject of the tests.
 
Last edited:

cihkal

🕊
Lifetime Premium
Corgi Lovers
Advanced User
Joined
Apr 21, 2014
Messages
4,417
Reaction score
8,797
@Deacon

First, you ask, “I mean you can argue over subjectivity but it's relative to what?” It’s difficult for my mind to comprehend the nature of that question, but we’re testing the detectors relative to each other. Absolute distances aren’t relevant or important, only how they compare to each other. There’s no specific passing grade; it’s all graded on a curve. A detector that picks something up at half the range of another, whether that range is measured in miles or fractions of one, will alert you to threats with far less heads-up than the one it’s being measured against. When people say things like “X detector gives you 1.2 miles of range” it makes me scratch my head. Which leads to point number two.

That gets into a very important point. You answer with: we are testing the detectors relative to each other.

Relative to each other. Well I understand that, but relative to or against what? How well one can pick up faint sources from a gun angled on the side of the road in a somewhat extreme manner?

Which goes into your point #2. We talk about the unknowns of the world. Well of course, that's why we use known phenomena (cosine error) when testing and also conduct experiments in real-world settings. It removes further red tape that clouds you up when trying to connect "BLUE to GREEN" in my comment to BS69. BLUE to GREEN = data/results to what people actually see. I would argue that more synthetic testing pushes you further and further away from real-world examples, and that makes things tricky.

I think this is good discussion and what a forum is for, so I hope the mods aren't mad that the comments are being used how they should be. That said, these notions about testing have essentially existed since the dawn of RDs. I just happen to realize we have moved away from that, and these things are becoming forgotten and, apparently, to some, irrelevant.
 
Last edited:

Deacon

TXCTG
VIP
Lifetime Premium
Advanced User
Joined
Nov 13, 2016
Messages
14,794
Reaction score
21,165
Location
Hill Country, TX
Shoot even the Max 360 users would love this, we have pretty good documented information that the designers specifically played to cosine error at the neck of the horn (asymmetrical).
I don’t know what you mean, but I know that in testing on one of the Waco courses, the Max 360 gave results so awful (both absolute and relative) that, while I realize it could be just fine in real-world scenarios, it would not even be considered for my own personal protection. This was the same testing session where the V1 (Gen1) did OK at best but paled in comparison to the competition, and I know the V1 only rarely caused a picker event, so it’s not the end of the world. But it’s that kind of relative performance that’s valuable to me.

I would just say that putting the gun at an extreme angle certainly is going to make for interesting reflections.
Depends on the course, doesn’t it? Just like the real world? Go back to the OP and watch the video and tell me what you think those “interesting reflections” are going to be reflecting off of? On a different course with more traffic, especially 18 wheelers, there’s a much better possibility for interesting reflections, but that’s in no way limited to off-axis concepts, and anyway that’s part of the game to some extent, isn’t it? As detector users, reflections are our friends. But on the test course in the OP there’s not much opportunity for them, which makes sensitivity levels, off-axis or otherwise, all the more crucial to us as users.
Post automatically merged:

Relative to each other. Well I understand that, but relative to or against what? How well one can pick up faint sources from a gun angled on the side of the road in a somewhat extreme manner?
Yes.

Well of course, that's why we use known phenomena (cosine error) when testing
No we don’t. What are you talking about? Cosine error refers exclusively to a radar gun reading a speed lower than the target’s actual speed when the gun is aimed off-axis, because the gun measures only the radial component of velocity. It has nothing whatsoever to do with a detector detecting.
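For anyone following along, the relationship itself is just trigonometry: the gun sees only the component of the target’s velocity along its line of sight, so the displayed speed is the true speed times the cosine of the off-axis angle. A minimal sketch (function name is just illustrative):

```python
import math

def measured_speed(actual_speed, angle_deg):
    """Speed a radar gun displays when its line of sight is angle_deg
    off the target's direction of travel (cosine error)."""
    return actual_speed * math.cos(math.radians(angle_deg))

# A car doing 70 mph past a gun angled 30 degrees off the road
# reads roughly 60.6 mph -- the error always favors the driver.
print(round(measured_speed(70, 30), 1))
```

At the 30-degree angles used on these courses the gun would under-read by about 13%, which is exactly the constraint a properly trained operator has to account for.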
 
Last edited:

cihkal

🕊
Lifetime Premium
Corgi Lovers
Advanced User
Joined
Apr 21, 2014
Messages
4,417
Reaction score
8,797
@Deacon

It's easy to fall away from the topic at hand though which is this:

1.) Was cosine error accounted for? Answer: No
2.) Was a test car driven through the course to see if a speed could be measured accurately? Answer: No

Is this easily transferable to real world scenarios? Not really. Does this tell us something about RDs relative to each other in this very specific and objective test? Absolutely.

Would I personally like to see more of points 1 and 2? Absolutely, because in my personal belief I think it removes confusion and provides more meaning to users. I'm not saying it's easy and I'm not trashing the first major results to be posted since the V1G2 came out.

Nothing new with what I'm saying.

cihkal said:
Well of course, that's why we use known phenomena (cosine error) when testing
No we don’t. What are you talking about? Cosine error refers exclusively to a radar gun reading a speed lower than the target’s actual speed when the gun is aimed off-axis, because the gun measures only the radial component of velocity. It has nothing whatsoever to do with a detector detecting.

Well of course you guys don't, you've acknowledged that. Here's what that sentence means if you understand cosine error and keep it in your thoughts: "Well of course, that's why we use known phenomena [keep such phenomena in mind when creating our test courses] when testing [as we know this can be advantageous to RD users and is a constraint dealt with by a properly trained radar gun operator]."

Please ask for a clarification if you think I'm so 180 on a notion, I think I've been around here long enough to deserve that.
 
Last edited:

Brainstorm69

TXCTG - 2016 MOTY
Premium Plus
Lifetime Premium
Advanced User
Joined
May 23, 2015
Messages
12,042
Reaction score
31,750
Location
Lone Star State
At this point, can we take this discussion elsewhere? While related to testing results, testing methods are not the topic of this thread. Thanks.
 

OBeerWANKenobi

This is not the car you're looking for......
ModSec
VIP
Premium Plus
Lifetime Premium
Corgi Lovers
Advanced User
Joined
Mar 20, 2018
Messages
8,192
Reaction score
25,738
Location
Outer Rim - Hiding from 35.5 I/O
At this point, can we take this discussion elsewhere? While related to testing results, testing methods are not the topic of this thread. Thanks.
I agree.
I feel like this thread of conversation is actually obfuscating the results here instead of removing confusion or providing meaningful info to other users.
@Deacon

It's easy to fall away from the topic at hand though which is this:

1.) Was cosine error accounted for? Answer: No
2.) Was a test car driven through the course to see if a speed could be measured accurately? Answer: No

Is this easily transferable to real world scenarios? Not really. Does this tell us something about RDs relative to each other in this very specific and objective test? Absolutely.

Would I personally like to see more of points 1 and 2? Absolutely, because in my personal belief I think it removes confusion and provides more meaning to users. I'm not saying it's easy and I'm not trashing the first major results to be posted since the V1G2 came out.

Nothing new with what I'm saying.

The testing wasn't designed to do either of those things but you are welcome to test them yourself and post the results.
 

DC Fluid

RDF Addicts Anonymous Member
Corgi Lovers
Advanced User
Joined
Jun 7, 2019
Messages
5,441
Reaction score
18,936
Age
55
Location
Prince George, B.C. Canada
IMO running testing at 30 degrees is one of many useful tests.
Reason:
I live in hilly, twisty, mountainous, treed terrain.
Many times the LEO coming the opposite direction on a remote two-lane highway has curves and hills between us before I enter a line-of-sight kill zone.
It would be foolish for me to favor a detector with great straight-on sensitivity that falls off when an angle is introduced.
I need that off-angle early warning before the LEO comes around the corner and has a kill shot.
 
