[Dovecot] Firewalls are [essentially] free - WAS: Re: Source patches from Apple

Dave McGuire mcguire at neurotica.com
Sat Dec 13 20:22:16 EET 2008


On Dec 13, 2008, at 12:57 PM, Charles Marcus wrote:
>> My network security is handled elsewhere.  I too believe in layered
>> security, but my desire to use the right tool for the job is much
>> stronger.  My mail server is busy serving mail; my network security
>> is handled by equipment built and optimized for that job.
>
> Firewalls don't add any (perceptible) extra work or overhead for most
> any system, even old systems with old processors and not much RAM...

   Perhaps not immediately perceptible in themselves on most systems,  
but certainly calculable.  And it all adds up.  I use modern  
processors and gobs of RAM; that's not really relevant... My user- 
visible performance is effectively instantaneous, and I want to keep  
it that way.  I admit that I may be taking a stand as a purist here,  
but it really does add up; I've seen it (and corrected it) myself.
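   To make "it all adds up" concrete, here is a back-of-envelope  
sketch.  Every number in it is hypothetical (rule count, cycles per  
rule, packet rate, clock speed); none of them come from this thread,  
and real filters vary widely.  The point is only that per-packet cost  
is calculable, not negligible by definition:

```shell
# HYPOTHETICAL figures: ~20 filter rules, ~100 cycles per rule check,
# 50,000 packets/sec, on a 2 GHz CPU.  Adjust to taste.
RULES=20
CYCLES_PER_RULE=100
PPS=50000
CPU_HZ=2000000000

# Cycles the filter consumes every second
CYCLES_PER_SEC=$(( RULES * CYCLES_PER_RULE * PPS ))

# Share of one core, in hundredths of a percent (integer math)
PCT_X100=$(( CYCLES_PER_SEC * 10000 / CPU_HZ ))

echo "filter burns ${CYCLES_PER_SEC} cycles/sec"
echo "that is ${PCT_X100}/100 percent of one core"
```

With those made-up numbers the filter eats 100 million cycles a  
second -- about 5% of one core.  Small, but not zero, and it scales  
linearly with rules and packet rate.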

>>> It's not like it costs anything extra.... :)
>
>> Well...that's the attitude that got us operating systems that need a
>> gigabyte of memory just to boot, and processors clocked at 3GHz that
>> give me the same useful performance as my 4MHz Z80 twenty years ago.
>> ;) Nothing is free.
>
> Your argument is bogus - see above... again, a basic, properly
> configured firewall has negligible impact on pretty much any system's
> resources, even ancient ones...
>
> So, yeah, enabling a firewall on a mail server is essentially free,
> whether talking impact on system resources, or dollar cost.

   I am an embedded systems designer as well as a network  
administrator.  I know very well what each and every instruction a  
CPU executes costs.  In my embedded design work, I often spend hours  
optimizing out a single instruction.  This can mean the difference  
between needing a $2 CPU vs. a $4 CPU in a high-volume product, or  
even, in extreme cases, the success or failure of a product.  The  
decisions of 80% of network designers today (the clueless ones)  
notwithstanding, things are no different in the context of this  
discussion.  Wasting resources leads to poor performance, reliability  
problems, and increased operating costs.

   Why would I threaten the much-loved near-instantaneous response of  
my mail servers by spending resources there that are better spent on  
my border routers, whose CPUs sit at 90% idle time unless they're  
doing a BGP update?

   By way of example, Windows became the bloated, dog-slow pile of  
crap that it is today because some idiot said something like "oh,  
let's throw this at the CPU, it's free!"  Before long, the CPU was  
running half of the graphics operations, doing most of the work of  
the NIC, rasterizing for dumb printers ("WinPrinters"), doing the DSP  
that the modem should be doing ("WinModems"), etc., etc.  Look at the  
resource hog it has become because of this lack of knowledge,  
discipline, and good engineering practice.  Even the clueless Windows  
world is moving to distributed processing (in the form of multi-core  
CPUs) to get back some of the performance they've wasted.   
Distributed processing within GPUs started even earlier.

   Anyone claiming that any of this stuff is free should consider  
looking at the assembler output of the compiler when building a  
kernel.  I have.  Trust me, my friend, it's not free.

             -Dave

-- 
Dave McGuire
Port Charlotte, FL



