Subject: The mailing list for listmasters using Sympa
List archive
- From: Serge Aumont <address@concealed>
- To: address@concealed
- Subject: Re: [sympa-users] Redundancy and Failover
- Date: Mon, 25 Oct 2010 10:54:15 +0200
On 10/20/10 9:01 PM, Dan Pritts wrote:
If a load-balancing system distributes messages into different spools, then message validation by a list editor or authentication by reply may not work properly. That's why we are working on using the database for all spools.
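To make that concrete, here is a minimal sketch (not Sympa's actual code or schema) of what a database-backed spool buys you: the held message and its confirmation ticket live in a database shared by every node, so the node that receives the editor's or sender's reply does not have to be the node that originally spooled the message. The table, column and function names below are invented for illustration, and SQLite stands in for the shared server database.

import sqlite3
import secrets

def init_db(conn):
    # Hypothetical table; Sympa's real spool tables differ.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS held_messages (
            ticket   TEXT PRIMARY KEY,   -- token mailed to the editor or sender
            list     TEXT NOT NULL,
            raw_mail BLOB NOT NULL
        )""")

def spool_message(conn, list_name, raw_mail):
    # Any front-end node can hold a message and get back the ticket to send out.
    ticket = secrets.token_hex(16)
    conn.execute("INSERT INTO held_messages VALUES (?, ?, ?)",
                 (ticket, list_name, raw_mail))
    conn.commit()
    return ticket

def confirm(conn, ticket):
    # Any other node can validate the reply, because the spool is shared.
    row = conn.execute("SELECT list, raw_mail FROM held_messages WHERE ticket = ?",
                       (ticket,)).fetchone()
    if row is None:
        return None                      # unknown or already-processed ticket
    conn.execute("DELETE FROM held_messages WHERE ticket = ?", (ticket,))
    conn.commit()
    return row                           # caller would now distribute the message

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")   # in reality this would be the shared Sympa database
    init_db(conn)
    t = spool_message(conn, "sympa-users", b"From: ...\n\nheld message body")
    print(confirm(conn, t))              # works no matter which node handles the reply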
> We're looking to improve the availability of our Sympa system
> here at Duke, hoping to eliminate the single point of failure we
> have now: one machine on one network in one data center running
> the service. It looks like running a hot/cold failover pair would
> be simple enough, with a load balancer routing mail appropriately
> and each system being aware of all the lists. I don't know how
> well archives would work with such a setup, though, unless we were
> doing some sort of shared storage (e.g. NFS), which introduces its
> own set of problems.
Sympa archives are not static HTML. They are TT2 templates that must be delivered by wwsympa.fcgi in order to be rendered as intended. This is mainly because of access control, which is performed by Sympa scenarios, but also because of the inclusion of some TT2 variables, for example for email address encoding to protect against spam.
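For illustration only, here is a rough Python sketch of why a static mirror of the rendered pages would not be equivalent: each page view passes through an access check and some rewriting of the message before anything reaches the browser. The function names and the obfuscation rule below are invented; in Sympa the check is done by a scenario and the page is a TT2 template rendered by wwsympa.fcgi.

import re

def may_read_archive(user, list_name):
    # Stand-in for evaluating the list's archive access scenario.
    return user is not None and list_name in user.get("subscriptions", [])

def obfuscate_email(addr):
    # Rewrite the address so harvesters cannot scrape it from the page.
    return re.sub(r"@", " (at) ", addr)

def render_archive_page(user, list_name, message):
    if not may_read_archive(user, list_name):
        raise PermissionError("archive access denied")
    # A static mirror would skip both the check above and the rewriting below.
    return "<p>From: {}</p>\n<pre>{}</pre>".format(
        obfuscate_email(message["from"]), message["body"])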
> checking through old mail ...
> Archives are just static HTML, so I would suggest you just periodically rsync them from one server to the other.
In addition, users may remove messages from the archive, so if you are using mirroring you must organize it so that all copies of a removed message are deleted as well.
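If you do mirror the underlying archive files, something along these lines could run from cron on the primary node; the archive path and the standby hostname below are placeholders, not Sympa defaults. The rsync --delete option is what handles the point above: a message removed from the primary's archive also disappears from the standby's copy on the next run.

import subprocess

ARCHIVE_DIR = "/home/sympa/arc/"   # hypothetical archive location
STANDBY = "standby.example.edu"    # hypothetical failover host

def mirror_archives():
    # -a preserves ownership and timestamps; --delete propagates removals.
    subprocess.run(
        ["rsync", "-a", "--delete",
         ARCHIVE_DIR,
         "{}:{}".format(STANDBY, ARCHIVE_DIR)],
        check=True)

if __name__ == "__main__":
    mirror_archives()   # typically driven from cron on the primary node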
Serge Aumont
- Re: [sympa-users] Redundancy and Failover, Dan Pritts, 10/20/2010
- Re: [sympa-users] Redundancy and Failover, Serge Aumont, 10/25/2010
- Re: [sympa-users] Redundancy and Failover, Miles Fidelman, 10/25/2010
- Re: [sympa-users] Redundancy and Failover, Dan Pritts, 10/26/2010