On Wednesday 19 Apr 2006 10:45, Reuben Farrelly wrote:
I'd say with a lot of confidence that I've had more false positives from dynamic blocklists tagging email than from HELO checking (perhaps not surprising).
The Spamhaus list is amazingly good -- and it also provides genuine senders with an automatic way out.
But as regards HELO, I know what the rules for HELO state, so I know why checking it should work (even though rejecting mail on a failed HELO is itself considered RFC-infringing). What I've never seen is a detailed analysis of HELO strings in the same way that I've seen studies of the effectiveness of different RBLs.
What I want to see is someone saying "X% of the genuine email servers we normally deal with were (or would have been) caught" and "it stopped (or would have stopped) Y% of spam".
Lots of anecdotes don't cut it, I'm afraid. It's not that I don't believe the people reporting them; it's just that things are different when it's someone else's email you are filtering, especially if you are making a yes/no decision on accepting email on the basis of the test. I need to know whether the false positive rate is <1%, <0.1%, or <0.001%, as that will determine how many people get angry. Obviously email sources vary, but I need to know where to concentrate the effort.
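To be concrete about the sort of first-cut number I mean: something like the rough Python sketch below would do, assuming you can pull one HELO string per message out of your logs into two files, one for mail you know to be genuine and one for known spam. The file names, and the particular "strict" HELO test it applies (FQDN or bracketed address literal), are only placeholders for whatever check you actually intend to run.

    # Rough sketch only. Assumes two files, helo.ham and helo.spam, each with
    # one HELO/EHLO argument per line, extracted from your own logs; these
    # names and the exact "strict" test below are illustrative, not a
    # recommendation of a particular policy.
    import re

    def passes_strict_helo(helo):
        # Accept an address literal like [192.0.2.1] ...
        if re.fullmatch(r'\[\d{1,3}(\.\d{1,3}){3}\]', helo):
            return True
        # ... or something shaped like a fully qualified hostname
        # (at least two dot-separated labels).
        label = r'[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?'
        return re.fullmatch(r'%s(\.%s)+' % (label, label), helo) is not None

    def reject_rate(path):
        rejected = total = 0
        for line in open(path):
            helo = line.strip()
            if helo:
                total += 1
                rejected += not passes_strict_helo(helo)
        return rejected, total

    for kind, path in (('genuine', 'helo.ham'), ('spam', 'helo.spam')):
        rej, tot = reject_rate(path)
        if tot:
            print('%s: %d of %d HELOs (%.3f%%) would have been rejected'
                  % (kind, rej, tot, 100.0 * rej / tot))

Run over a decent-sized sample, that would at least tell you whether your own mail mix puts you in <1% or <0.1% false-positive territory before you turn the check into a hard reject.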