Yapadu's Feature Wish List
Printed From: LogSat Software
Category: Spam Filter ISP
Forum Name: Spam Filter ISP Support
Forum Description: General support for Spam Filter ISP
URL: https://www.logsat.com/spamfilter/forums/forum_posts.asp?TID=6831
Printed Date: 05 February 2025 at 8:03am
Topic: Yapadu's Feature Wish List
Posted By: yapadu
Subject: Yapadu's Feature Wish List
Date Posted: 05 May 2010 at 4:18am
Spamfilter is excellent, no question. But of course everyone wants more, more, more.
Here are some things I would like to see added (in no particular order).
- Ability to query the status of a server remotely. It would be really cool if, from a remote source, we could see things like:
- How many inbound connections
- How many outbound connections
- How many messages in queue
- How many messages in quarantine queue (this one is basically an indication of DB connectivity issues)
Right now there is no way to monitor the health of the server, or at least I don't know how it can be done. We can monitor port 25, but we can't tell whether access to the database is down, or whether the rest of the system is healthy or being overrun with too many connections. Ideas?
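In the meantime, a crude external probe can at least confirm the SMTP listener is answering with a 220 greeting. This is only a sketch of such a probe; as noted above, it cannot see database connectivity, queue depth, or connection counts:

```python
import socket

def smtp_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
    """Connect and return the SMTP greeting line, or raise on failure.

    A '220' greeting only proves the listener is up; it says nothing
    about database health or how full the queues are.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(512).decode("ascii", errors="replace").strip()
    if not banner.startswith("220"):
        raise RuntimeError(f"unexpected greeting: {banner!r}")
    return banner
```

Pointing a monitoring system at this (or an equivalent check) catches a dead listener, but a real health API exposing the counters above would still be needed for the rest.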
- Ability to see how many messages are in queue for a specific domain. If email is backing up for abc.com it would be nice to have a way of seeing that. I suppose we could do this ourselves by writing something to scan all the files in the queue and figure it out, but I hate messing with the files when SpamFilter should have priority over them.
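The do-it-yourself queue scan described above could look something like the following. This assumes each queued item is a plain RFC 822 file with a To: header; SpamFilter's actual queue file format may differ, and (as noted) the server may be holding locks on these files, so treat this strictly as a sketch:

```python
import os
from collections import Counter
from email.parser import HeaderParser

def queue_by_domain(queue_dir: str) -> Counter:
    """Count queued messages per recipient domain (assumed file format)."""
    counts = Counter()
    parser = HeaderParser()
    for name in os.listdir(queue_dir):
        path = os.path.join(queue_dir, name)
        try:
            with open(path, "r", errors="replace") as f:
                # headersonly=True stops parsing at the blank line,
                # so large bodies are never read
                to = parser.parse(f, headersonly=True).get("To", "")
        except OSError:
            continue  # file vanished or is locked by the server
        domain = to.rpartition("@")[2].strip(" <>").lower()
        counts[domain or "(unknown)"] += 1
    return counts
```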
- Ability to bounce back emails if they are larger than a specific size, which can be set per domain. Currently we allow 20 meg message sizes, which seems OK.
I know that when the server answers it informs the sender of the maximum message size, so it cannot be done per domain at connection time.
However, once we accept a message we forward it to the client's email server. We have seen a few occasions where the messages were simply too large for their server, or the connection was closed without completing the transmission. If you want to see a bandwidth bill, try having a number of 10 meg email messages in the queue that you cannot successfully deliver. We are talking about gigs and gigs of transmission that never completes, and it also holds the outbound sessions open. Ideally we could accept all messages up to 20 megs (or whatever is set) and then reject based on individual domain settings after the message is received.
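The accept-then-reject behavior being requested could be modeled like this. The domain table and limits here are invented for illustration; this is a sketch of the wished-for feature, not anything SpamFilter currently does:

```python
# Hypothetical per-domain caps in KB; the server-wide cap (advertised
# at connection time) stays at 20 MB as in the post above.
DOMAIN_MAX_KB = {"abc.com": 5_000, "xyz.com": 20_000}
DEFAULT_MAX_KB = 20_000

def accept_after_data(recipient_domain: str, size_kb: int) -> bool:
    """Once the full message is received, keep it only if it fits the
    recipient domain's limit; otherwise it would be bounced instead of
    being retried forever against a server that can't take it."""
    limit = DOMAIN_MAX_KB.get(recipient_domain.lower(), DEFAULT_MAX_KB)
    return size_kb <= limit
```

The point of the design is that the bounce happens once, at acceptance time, instead of the queue burning bandwidth on repeated doomed deliveries.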
I always think of things that would be nice to have, that is all I can think of right now. Will update this thread when I think of something else.
Replies:
Posted By: yapadu
Date Posted: 07 May 2010 at 11:46pm
Thought of another one that I would like: the ability to query the configured DNS servers on non-standard ports. I don't know if this is possible now or not, but it would be nice to have.
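For what it's worth, nothing about DNS itself depends on port 53; the port is purely a transport detail, so the same query bytes work against any port. A minimal sketch of a hand-built A-record query (RFC 1035 wire layout) makes that concrete:

```python
import struct

def dns_query_packet(name: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query (RFC 1035 wire format).

    Sending it to a resolver on a non-standard port is then just
    sock.sendto(packet, (server_ip, custom_port)) -- the query itself
    is identical regardless of port.
    """
    # header: id, flags (RD=1), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode("ascii")
                     for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question
```

So the feature request is really just asking SpamFilter to accept a port number alongside each configured DNS server address.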
Posted By: yapadu
Date Posted: 21 May 2010 at 8:55pm
Today's feature request
Do not stop all checks just because a message is over a certain size. I understand the logic: the larger the message, the longer it takes to check the message body, which could slow down the server.
However, regardless of the message size it takes the same amount of time to scan the:
- IP address
- Sender info
- Subject
- Etc.
None of these things have anything to do with the message size.
If someone blocks an IP range or a sender, they expect that messages from those senders will no longer be received.
But they slip past if the message is larger than the max scan size.
I doubt this suggestion would ever be considered, so let me ask this in a different way.
If my max message size is 20 megs, what would the impact be if I set the max scan size to 20 megs also? Will I bring the server down? Dunno, but I am off to try it right now, as messages that should be blocked but slip through because of automatic whitelisting are a problem for us.
------------- --------------------------------------------------------------
I am a user of SF, not an employee. Use any advice offered at your own risk.
Posted By: yapadu
Date Posted: 21 May 2010 at 8:59pm
Hmmm, just checking the ini and it looks like what I want may already exist:
;Any emails whose text portion exceeds this number of KB will not be scanned for keywords and Bayes. Higher values *may* catch more spam but will cause higher load on processor
MaxMsgSizeForKeywordScan=500

;Any emails whose text portion exceeds this number of KB will be whitelisted. Most spam emails are small in size, lowering this value may help in reducing the chances of incorrectly blocking valid emails with large attachments
MaxMsgSizeForSpamFiltering=800
So if I crank up MaxMsgSizeForSpamFiltering, do I get what I want? A message over 500 KB is not scanned for keywords, but only if it is over 800 KB are the remaining filters skipped too?
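That reading of the two thresholds can be modeled as a tiny decision function. This only illustrates the interpretation in the question, assuming the ini comments mean exactly what they say; the threshold values come from the ini above, while the check-group names are invented for the sketch:

```python
# Values in KB, taken from the ini excerpt above.
MAX_KEYWORD_SCAN_KB = 500   # MaxMsgSizeForKeywordScan
MAX_FILTERING_KB = 800      # MaxMsgSizeForSpamFiltering

def checks_for(size_kb: int) -> set:
    """Which check groups still run for a message of this size,
    if the two settings behave as their ini comments describe."""
    if size_kb > MAX_FILTERING_KB:
        return set()  # whitelisted outright: every check skipped
    if size_kb > MAX_KEYWORD_SCAN_KB:
        # body scans skipped, cheap envelope checks still run
        return {"ip", "sender", "subject"}
    return {"ip", "sender", "subject", "keywords", "bayes"}
```

Under this reading, raising MaxMsgSizeForSpamFiltering widens the middle band where IP/sender/subject checks still apply even though keyword scanning is skipped, which is exactly the behavior the earlier post asked for.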
Posted By: yapadu
Date Posted: 25 May 2010 at 8:37pm
Another item for my wish list, I mentioned it once before. It still bothers me.
I hate seeing thousands of URLsInMAPS checks for invalid URLs; I see stuff like this all the time:
Resolving for URLsInMAPS: www..ups.com
Error occurred during URLsInMAPS: DNS Server Reports Query Not Implemented Error
Resolving for URLsInMAPS: www.fbi_
Error occurred during URLsInMAPS: DNS Server Reports Query Name Error
Resolving for URLsInMAPS: www.fbi
Error occurred during URLsInMAPS: DNS Server Reports Query Name Error
Resolving for URLsInMAPS: www.fbi.
Error occurred during URLsInMAPS: DNS Server Reports Query Name Error
Resolving for URLsInMAPS: www.snopes_
Error occurred during URLsInMAPS: DNS Server Reports Query Name Error
Resolving for URLsInMAPS: www.snopes.
Error occurred during URLsInMAPS: DNS Server Reports Query Name Error
Resolving for URLsInMAPS: www=2edishnetwork=2ecom
Error occurred during URLsInMAPS: DNS Server Reports Query Name Error
Resolving for URLsInMAPS: sns1=2ersys4=2enet
Error occurred during URLsInMAPS: DNS Server Reports Query Name Error
It is really wasteful. I don't know how many times a day SF ends up checking URLs that can't possibly be valid, but there is extra load on SF to check all these things (especially when going to a remote DNS), extra load on the DNS servers, and it pollutes the DNS cache with all these invalid domains.
Can't SF do a simple sanity check before going out and asking someone else to confirm whether the URL is blacklisted? If the URL can't be valid, it does not need to ask anyone else.
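A rough pre-flight filter along these lines would already reject every hostname in the log excerpt above. This is a sketch of the idea, not SpamFilter's implementation, and the pattern is deliberately loose (it is a cheap gate, not a full hostname validator):

```python
import re

# A label is 1-63 alphanumerics/hyphens, not starting or ending with
# '-', and the final label (TLD) must be alphabetic. Strings like
# 'www..ups.com', 'www.fbi_' or 'www=2edishnetwork=2ecom' all fail.
_HOSTNAME = re.compile(
    r"^(?:[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?\.)+[a-z]{2,}$",
    re.IGNORECASE)

def worth_dnsbl_lookup(host: str) -> bool:
    """Cheap local sanity check before spending a remote DNSBL query."""
    return bool(_HOSTNAME.match(host))
```

Note this would not catch syntactically valid names that simply don't resolve (e.g. `www.fbi` parses fine), so some Query Name Errors would remain; it only avoids the queries that could never succeed.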