--- /dev/null
+
+-- Version 0.9 released
+
+1999-07-07 Linux FTP-Administrator <ftplinux@ftp.rrze.uni-erlangen.de>
+
+ * debianqueued: Implemented new upload methods "copy" and "ftp" as
+ alternatives to "ssh". "copy" simply copies files to another
+ directory on the queue host, "ftp" uses FTP to upload files. Both
+ of course need no ssh-agent.
+ New config vars:
+ $upload_method, $ftptimeout, $ftpdebug, $ls, $cp, $chmod
+ Renamed config vars:
+ $master -> $target
+ $masterlogin -> $targetlogin
+ $masterdir -> $targetdir
+ $chmod_on_master -> $chmod_on_target
+
+ Note that the FTP method has some limitations: If no SITE MD5SUM
+ command is supported by the server, uploaded files can be verified
+ by their size only. And if removing of files in the target dir
+ isn't allowed, upload errors can't be handled gracefully.
+
+ * debianqueued: .changes files can now also be signed by GnuPG.
+
+ * dqueued-watcher: Also updates debian-keyring.gpg.
+
+Tue Dec 8 14:09:44 1998 Linux FTP-Administrator <ftplinux@ftp.rrze.uni-erlangen.de>
+
+ * debianqueued (process_changes): After an upload, do not remove
+ files with the same name stem if a .changes file is among them.
+ Then there is probably a second upload for a different
+ version/architecture.
+
+-- Version 0.8 released
+
+Thu May 14 16:17:48 1998 Linux FTP-Administrator <ftplinux@ftp.rrze.uni-erlangen.de>
+
+ * debianqueued (process_changes): When --after a successful
+ upload-- deleting files that seem to belong to the same job, check
+ for equal revision number on files that have one. It has happened
+ that the daemon deleted files that belonged to another job with
+ different revision, which shouldn't happen. The current algorithm
+ is more conservative, i.e. it tends not to delete such files. They
+ will be removed as stray files anyway after some time.
+
+Tue Apr 21 10:29:01 1998 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (check_incoming_writable): Also recognize
+ "read-only filesystem" as an error message that makes the daemon
+ think the incoming is unwritable.
+
+ * debianqueued (check_dir): Break from the .changes loop if
+ $incoming_writable has become cleared.
+
+ * debianqueued (process_changes): Don't increment failure count if
+ upload failed due to incoming dir being unwritable.
+
+ * debianqueued (check_dir): Don't use return value of
+ debian_file_stem as regexp, it's a shell pattern.
+
+Tue Mar 31 11:06:11 1998 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (process_changes, process_commands): Check for
+ improper mail addresses from Maintainer: fields and try to handle
+ them by looking up the string in the Debian keyring. New function
+ try_to_get_mail_addr for the latter.
+
+ * debianqueued (fatal_signal): Kill status daemon only if it has
+ been started.
+
+ * debianqueued (copy_to_master): Change mode of files uploaded to
+ master explicitly to 644. scp uses the permission from the
+ original files, and those could be restricted due to local upload
+ policies.
+
+Mon Mar 30 13:24:51 1998 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * dqueued-watcher (main): If called with arguments, only make
+ summaries for the log files given. With this, you can also view
+ the summaries between normal watcher runs.
+
+ * dqueued-watcher (make_summary): New arg $to_stdout, to print
+ report directly to stdout instead of sending via mail.
+
+Tue Mar 24 14:18:18 1998 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (check_incoming_writable): New function that checks
+ whether the incoming dir on master is writable (it isn't while a
+ freeze is in progress). The check is triggered if an upload fails
+ due to "permission denied" errors. Until the incoming dir is
+ writable again, the queue is held and no uploads are tried (so
+ that the max. number of tries isn't exceeded).
+
+-- Version 0.7 released
+
+Mon Mar 23 13:23:20 1998 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (process_changes): In an upload failure message,
+ say explicitly that the job will be retried, to avoid confusion of
+ users.
+
+ * debianqueued (process_changes): $failure_file was put on
+ @keep_list only for the first retry.
+
+ * debianqueued (process_changes): If the daemon removes a
+ .changes, set SGID bit on all files associated with it, so that
+ the test for Debian files without a .changes doesn't find them.
+
+ * debianqueued (check_dir): Don't send reports for files without a
+ .changes if the files look like a recompilation for another
+ architecture. Then the maintainer extracted from the files isn't
+ the uploader. A job is treated like that if it includes neither a
+ .dsc file nor any *_{i386,all}.deb files.
+
+ * debianqueued (check_dir): Also don't send such a report if the
+ list of files with the same stem contains a .changes. This can be
+ the case if an upload failed and the .changes is still around, and
+ there's some file with the same name stem but which isn't in the
+ .changes (e.g. .orig.tar.gz).
+
+ * debianqueued (process_changes): Set @keep_list earlier, before
+ PGP and non-US checks.
+
+ * debianqueued (main): Fix recognition of -k argument.
+
+Tue Feb 17 11:54:33 1998 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (check_dir): Added test for binaries that could
+ reside on slow NFS filesystems. It is especially annoying if pgp
+ isn't found, because then the .changes is deleted. If one of the
+ files listed in @conf::test_binaries isn't present immediately
+ before a queue run, that run is delayed.
+
+-- Version 0.6 released
+
+Tue Dec 9 14:53:23 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (process_changes): Reject jobs whose package name
+ is in @nonus_packages (new config var). These must be uploaded to
+ nonus.debian.org instead of master itself.
+
+Tue Nov 25 11:02:38 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (main): Implemented -k and -r arguments (kill or
+ restart daemon, resp.)
+
+ * debianqueued (is_debian_file): Exclude orig.tar.gz files from
+ that class, so that the maintainer address isn't searched in them
+ if they happen to come first in the dir.
+
+ * debianqueued (END): Fix kill call (pid and signo were swapped)
+
+ * debianqueued (process_changes): Moved check if job is already on
+ master to a later stage, to avoid connecting to master as long as
+ there are still errors with the job (missing files or the like).
+
+ * debianqueued (check_alive): Lookup master's IP address before
+ every ping, it could change while the daemon is running...
+
+-- Version 0.5 released
+
+Mon Nov 11 14:37:52 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (process_commands): rm command now can process more
+ than one argument and knows about wildcards
+
+Mon Nov 6 15:09:53 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (process_commands): Recognize commands on the same
+ line as the Commands: keyword, not only on continuation lines.
+
+Mon Nov 3 16:49:57 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (close_log): After reopening the log file, write
+ one message to it. This keeps dqueued-watcher's rotation algorithm
+ from being delayed by anywhere from several minutes to a few hours
+ on every rotation, since it looks at the time of the first entry.
+
+Thu Oct 30 13:56:35 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * dqueued-watcher (make_summary): Added some new summary counters
+ for command files.
+
+ * debianqueued (process_changes): Added check for files that seem
+ to belong to an upload (match debian_file_stem($changes)), but
+ aren't listed in the .changes. Most probably these are unneeded
+ .orig.tar.gz files. They are deleted.
+
+ * debianqueued (print_status): Print revision and version number
+ of debianqueued in status file.
+
+ * debianqueued (process_commands): New function, for processing
+ the new feature of .command files. These enable uploaders to
+ correct mistakes in the queue dir (corrupted/misnamed files)
+
+Wed Oct 29 15:35:03 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (check_dir): Extra check for files that look like
+ an upload, but miss a .changes file. A problem report is sent to
+ the probable uploader after $no_changes_timeout seconds (new
+ config var). The maintainer email can be extracted from .dsc,
+ .deb, .diff.gz and .tar.gz files (though the maintainer need not
+ necessarily be the uploader...) New utility functions
+ is_debian_file, get_maintainer, debian_file_stem.
+
+ * debianqueued (pgp_check, get_maintainer): Quote filenames used
+ on sh command lines, so metacharacters in the names can't do bad
+ things. (Though wu-ftpd generally shouldn't allow uploading files
+ with such names.)
+
+ * debianqueued (print_time): Print times always as
+ hour:minute:second, i.e. don't omit the hour if it's 0. This could
+ confuse users, because they don't know if the hour or the seconds
+ are missing.
+
+-- Version 0.4 released
+
+Thu Sep 25 13:18:57 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (process_changes): Forgot to remove a bad .changes
+ file in some cases (no mail address, not PGP signed at all, no
+ files mentioned). Also initialize some variables to avoid Perl
+ warnings.
+
+Wed Sep 17 14:15:21 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * dqueued-watcher (make_summary): Add feature of writing summaries
+ also to a file. Config var do_summary renamed to mail_summary,
+ additional var summary_file.
+
+Mon Sep 15 11:56:59 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * dqueued-watcher: Log several activities of the watcher to the log
+ file; new function logger() for this.
+
+ * debianqueued (process_changes, check_alive): Make some things more
+ verbose in non-debug mode.
+
+Mon Aug 18 13:25:04 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * dqueued-watcher (rotate_log): Using the log file's ctime for
+ calculating its age was a rather bad idea -- starting the daemon
+ updates that time stamp. Now the first date found in the log file
+ is used as basis for age calculation.
+
+ * dqueued-watcher (make_summary): New function to build a summary
+ of daemon actions when rotating logs. Controlled by config
+ variable $do_summary.
+
+Tue Aug 12 13:26:52 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * Makefile: new file with targets for automating various
+ administrative tasks
+
+-- Version 0.3 released
+
+Mon Aug 11 10:48:31 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (is_on_master, copy_to_master): Oops, forgot
+ alarm(0)'s to turn off timeouts again.
+
+ * debianqueued: Revised the startup scheme so that it also works
+ with the socket-based ssh-agent. That agent periodically checks
+ whether the process it started is still alive and otherwise exits.
+ For that, the go-into-background fork must be done before
+ ssh-agent is started.
+
+ * debianqueued: Implemented close_log and SIGHUP handling for
+ logfile rotating.
+
+ * dqueued-watcher: Implemented log file rotating.
+
+Thu Aug 07 11:25:22 1997 Linux FTP-Administrator <ftplinux@arachnia.rrze.uni-erlangen.de>
+
+ * debianqueued (is_on_master, copy_to_master): added timeouts to
+ all ssh/scp operations, because I've seen one once hanging...
+
+-- Started ChangeLog
+-- Version 0.2 released
+
+$Id: ChangeLog,v 1.36 1999/07/08 09:43:24 ftplinux Exp $
+
--- /dev/null
+#
+# Makefile for debianqueued -- only targets for package maintenance
+#
+# $Id: Makefile,v 1.10 1998/03/25 09:21:01 ftplinux Exp $
+#
+# $Log: Makefile,v $
+# Revision 1.10 1998/03/25 09:21:01 ftplinux
+# Implemented snapshot target
+#
+# Revision 1.9 1998/03/23 14:10:28 ftplinux
+# $$num in make upload needs braces because _ follows
+#
+# Revision 1.8 1997/12/16 13:20:57 ftplinux
+# add _all to changes name in upload target
+#
+# Revision 1.7 1997/11/20 15:34:11 ftplinux
+# upload target should copy only current release to queue dir
+#
+# Revision 1.6 1997/09/29 14:28:38 ftplinux
+# Also fill in Version: for .changes file
+#
+# Revision 1.5 1997/09/25 11:33:48 ftplinux
+# Added automatic adding of release number to ChangeLog
+#
+# Revision 1.4 1997/08/18 11:29:11 ftplinux
+# Include new release number in message of cvs commits
+#
+# Revision 1.3 1997/08/12 10:39:08 ftplinux
+# Added generation of .changes file in 'dist' target; added 'upload'
+# target (using the queue :-)
+#
+# Revision 1.2 1997/08/12 10:01:32 ftplinux
+# Fixed dist target to work (last checkin was needed to test it at all)
+#
+#
+
+CVS = cvs
+RELNUMFILE = release-num
+# files that contain the release number
+FILES_WITH_NUM = debianqueued dqueued-watcher
+# name of cvs module
+MODULE = debianqueued
+
+.PHONY: default release dist upload snapshot
+
+default:
+ @echo "Nothing to make -- the Makefile is only for maintainance purposes"
+ @exit 1
+
+# Usage:
+# make release (use number from file release-num)
+# or
+# make release RELNUM=x.y (writes new number to release-num)
+
+release:
+ if cvs status $(RELNUMFILE) | grep -q Up-to-date; then true; else \
+ echo "$(RELNUMFILE) needs commit first"; exit 1; \
+ fi
+ifdef RELNUM
+ echo $(RELNUM) >$(RELNUMFILE)
+ cvs commit -m "Bumped release number to `cat $(RELNUMFILE)`" $(RELNUMFILE)
+endif
+ perl -pi -e "s/Release: \S+/Release: `cat $(RELNUMFILE)`/;" \
+ $(FILES_WITH_NUM)
+ cvs commit -m "Bumped release number to `cat $(RELNUMFILE)`" $(FILES_WITH_NUM)
+ if grep -q "Version `cat release-num` released" ChangeLog; then true; else \
+ mv ChangeLog ChangeLog.orig; \
+ echo "" >ChangeLog; \
+ echo "-- Version `cat $(RELNUMFILE)` released" >>ChangeLog; \
+ echo "" >>ChangeLog; \
+ cat ChangeLog.orig >>ChangeLog; \
+ rm ChangeLog.orig; \
+ cvs commit -m "Bumped release number to `cat $(RELNUMFILE)`" ChangeLog; \
+ fi
+ cvs tag release-`cat $(RELNUMFILE) | sed 's/\./-/'`
+
+dist:
+ set -e; \
+ num=`cat $(RELNUMFILE)`; name=debianqueued-$$num; \
+ mkdir tmp; \
+ (cd tmp; cvs export -r release-`echo $$num | sed 's/\./-/'` $(MODULE); \
+ mv $(MODULE) $$name; \
+ tar cvf ../../$$name.tar $$name); \
+ gzip -9f ../$$name.tar; \
+ rm -rf tmp; \
+ file=../$$name.tar.gz; \
+ md5=`md5sum $$file | awk -e '{print $$1}'`; \
+ size=`ls -l $$file | awk -e '{print $$4}'`; \
+ chfile=../debianqueued_`cat $(RELNUMFILE)`_all.changes; \
+ sed -e "s/^Date: .*/Date: `822-date`/" -e "s/Version: .*/Version: `cat $(RELNUMFILE)`/" <changes-template >$$chfile; \
+ echo " $$md5 $$size byhand - $$name.tar.gz" >>$$chfile; \
+ pgp -u 'Roman Hodek' +clearsig=on -fast <$$chfile >$$chfile.asc; \
+ mv $$chfile.asc $$chfile
+
+# can only be used on ftp.uni-erlangen.de :-)
+upload:
+ set -e; \
+ num=`cat $(RELNUMFILE)`; \
+ cp ../debianqueued-$$num.tar.gz ../debianqueued_$${num}_all.changes $$HOME/Linux/debian/UploadQueue
+
+# make snapshot from current sources
+snapshot:
+ set -e; \
+ modified=`cvs status 2>/dev/null | awk '/Status:/ { if ($$4 != "Up-to-date") print $$2 }'`; \
+ if [ "x$$modified" != "x" ]; then \
+ echo "There are modified files: $$modified"; \
+ echo "Commit first"; \
+ exit 1; \
+ fi; \
+ name=debianqueued-snapshot-`date +%y%m%d`; \
+ rm -rf tmp; \
+ mkdir tmp; \
+ (cd tmp; cvs export -D now $(MODULE); \
+ mv $(MODULE) $$name; \
+ tar cvf ../../$$name.tar $$name); \
+ gzip -9f ../$$name.tar; \
+ rm -rf tmp
--- /dev/null
+
+This is a list of problems that I have seen:
+
+ - Once an upload failed with the following error:
+
+ Jul 8 12:13:53 Upload to master.debian.org failed, last exit status 1
+ Jul 8 12:13:53 Error messages from scp:
+ bind: Permission denied
+ lost connection
+
+ I had never seen such an error from ssh/scp before... But since it
+ didn't happen again, I suspect a problem with master and/or the net.
+
+ - There are some protocol problems between certain ssh versions (on
+ the client/server side). The effect is that scp either hangs
+ (times out after $remote_timeout), or leaves ssh processes hanging
+ around. I've noticed that with ssh 1.2.19 on the server. I have a
+ prototype for a workaround, but haven't included it in
+ debianqueued, because master has been updated to 1.2.20 now and the
+ problem disappeared.
+
+ - The "ftp" method has some limitiations:
+ 1) Files in the target dir can't be deleted.
+ 2) Uploaded files can't be verified as well as with the other methods.
+ 3) $chmod_on_target often doesn't work.
+ 4) The check for a writable incoming directory leaves temporary files
+ behind.
+
+$Id: PROBLEMS,v 1.4 1999/07/08 09:34:52 ftplinux Exp $
--- /dev/null
+
+This directory is the Debian upload queue of ftp.uni-erlangen.de. All
+files uploaded here will be moved into the project incoming dir on
+master.debian.org.
+
+Only known Debian developers can upload here. All uploads must be in
+the same format as they would go to master, i.e. with a PGP-signed
+.changes file that lists all files that belong to the upload. Files
+not meeting this condition will be removed automatically after some
+time.
+
+The queue daemon will notify you by mail of success or any problems
+with your upload. For this, the Maintainer: field in the .changes must
+contain your (the uploader's) correct e-mail address, not the address
+of the real maintainer (if different). The same convention applies to
+master itself, which sends installation acknowledgements to the
+address in Maintainer:.
+
+
+*.commands Files
+----------------
+
+Besides *.changes files, you can also upload *.commands files for the
+daemon to process. With *.commands files, you can instruct the daemon
+to remove or rename files in the queue directory that, for example,
+resulted from failed or interrupted uploads. A *.commands file looks
+much like a *.changes, but contains only two fields: Uploader: and
+ Commands:. It must be PGP-signed by a known Debian developer, so
+ that E.V.L. Hacker can't remove/rename files in the queue. The
+ basename (the part before the .commands extension) doesn't matter,
+ but it's best to make it somehow unique.
+
+The Uploader: field should contain the mail address to which the reply
+should go, just like Maintainer: in a *.changes. Commands: is a
+multi-line field like e.g. Description:, so each continuation line
+should start with a space. Each line in Commands: can contain a
+standard 'rm' or 'mv' command, but no options are allowed, and
+filenames may not contain slashes (so that they're restricted to the
+ queue directory). 'rm' can process as many arguments as you give it
+ (not only one), and also knows about the shell wildcards *, ?, and [].
+
+Example of a *.commands file:
+
+-----BEGIN PGP SIGNED MESSAGE-----
+
+Uploader: Roman Hodek <Roman.Hodek@informatik.uni-erlangen.de>
+Commands:
+ rm hello_1.0-1_i386.deb
+ mv hello_1.0-1.dsx hello_1.0-1.dsc
+
+-----BEGIN PGP SIGNATURE-----
+Version: 2.6.3ia
+
+iQCVAwUBNFiQSXVhJ0HiWnvJAQG58AP+IDJVeSWmDvzMUphScg1EK0mvChgnuD7h
+BRiVQubXkB2DphLJW5UUSRnjw1iuFcYwH/lFpNpl7XP95LkLX3iFza9qItw4k2/q
+tvylZkmIA9jxCyv/YB6zZCbHmbvUnL473eLRoxlnYZd3JFaCZMJ86B0Ph4GFNPAf
+Z4jxNrgh7Bc=
+=pH94
+-----END PGP SIGNATURE-----
--- /dev/null
+
+This directory is the Debian upload queue of ftp.uni-erlangen.de. Only
+known Debian developers can upload here.
--- /dev/null
+ debianqueued -- daemon for managing Debian upload queues
+ ========================================================
+
+Copyright (C) 1997 Roman Hodek <Roman.Hodek@informatik.uni-erlangen.de>
+$Id: README,v 1.20 1999/07/08 09:35:37 ftplinux Exp $
+
+
+Copyright and Disclaimer
+------------------------
+
+This program is free software. You can redistribute it and/or
+modify it under the terms of the GNU General Public License as
+published by the Free Software Foundation: either version 2 or
+(at your option) any later version.
+
+This program comes with ABSOLUTELY NO WARRANTY!
+
+You're free to modify this program as you like, according to the GPL.
+But it would be nice if you could send such changes back to me if
+they could be of public interest; I will then try to integrate them
+into the mainstream version.
+
+
+Installation
+------------
+
+debianqueued has been written for running a new Debian upload queue at
+ftp.uni-erlangen.de, but I tried to keep it as general as possible and
+it should be usable for other sites, too. If you run into
+non-portabilities, tell me about them and we'll try to get them fixed!
+
+Before installing debianqueued, you should have the following
+utilities installed:
+
+ - pgp (needed for checking signatures)
+
+ - ssh & Co. (but not necessarily sshd, only client programs used)
+
+ - md5sum (for checking file integrity)
+
+ - mkfifo (for creating the status FIFO)
+
+ - GNU tar
+
+ - gzip
+
+ - ar (for analyzing .deb files)
+
+The daemon needs a directory of its own where the scripts reside and
+where it can put certain files. This directory is called $queued_dir
+in the Perl scripts and below. There are no special requirements where
+in the filesystem hierarchy this directory should be.
+
+All configuration is done in the file 'config' in $queued_dir. For
+security reasons, $queued_dir should not be in a public FTP area,
+and it (as well as the files in it) should be writable only by the
+user maintaining the local debianqueued.
+
+The files Queue.README and Queue.message in the distribution archive
+are examples of README and .message files to put into the queue
+directory. Modify them as you like, or don't install them if you
+don't like them...
+
+
+Running debianqueued
+--------------------
+
+debianqueued is intended to run all the time, not as a cron job.
+Unfortunately, you can't start it automatically at system boot time,
+because a human has to type in the passphrase for the ssh key. So you
+have to start the daemon manually.
+
+The daemon can be stopped by simply killing it (preferably with
+SIGTERM). SIGTERM and SIGINT are blocked during some operations
+where they could leave files in an inconsistent state, so it may
+take some time until the daemon really dies. If you urgently need it
+to go away immediately, use SIGQUIT. Please don't use SIGKILL unless
+unavoidable, because the daemon can't clean up after this signal.
+
+For your convenience, the daemon can kill and restart itself. If you
+start debianqueued with a "-k" argument, it tries to kill a running
+daemon (and it complains if none is running.) If "-r" is on the
+command line, it tries to kill a running daemon first if there is one.
+(If not, it starts anyway, but prints a little warning.) If a daemon
+is running and a new one is started without "-r", you get an error
+message about this. This is to protect you from restarting the daemon
+unintentionally.
+
+The other script, dqueued-watcher, is intended to run as a cron job;
+it checks that the daemon is running, in case it should crash at some
+point. It also takes care of updating the Debian keyring files if
+necessary. You should enter it e.g. like
+
+ 0,30 * * * * .../dqueued-watcher
+
+into your crontab. (Assuming you want to run it every 30 minutes,
+which seems a good compromise.)
+
+Both scripts (debianqueued and dqueued-watcher) need no special
+privileges and thus can be run as an ordinary user (not root). You
+can create a dedicated user for debianqueued (e.g. "dqueue"), but you
+don't have to. The only difference this makes is which ssh key is
+used by default for connects to the target host; the file to take the
+ssh key from can be set in the config file anyway.
+
+
+The Config File
+---------------
+
+The config file, $queued_dir/config, is plain Perl code and is
+included by debianqueued and dqueued-watcher. You can set the
+following variables there:
+
+ - $debug:
+ Non-zero values enable debugging output (to log file).
+
+The following are all programs that debianqueued calls. You should
+always use absolute pathnames!
+
+ - $pgp, $ssh, $scp, $ssh_agent, $ssh_add, $md5sum, $mail, $mkfifo,
+ $tar, $gzip, $ar
+
+ Notes:
+
+ o $mail should support the -s option for supplying a subject.
+ Choose mailx instead if your mail doesn't know -s.
+
+ o $tar should be GNU tar; several GNU features are used (e.g.
+ --use-compress-program).
+
+ o $ar must be able to unpack *.deb files and must understand the
+ 'p' command. Better check this first... If you don't define $ar
+ (or define it to be empty), debianqueued won't be able to
+ extract a maintainer address from .deb files. (Which isn't that
+ disturbing...)
+
+ - @test_binaries:
+
+ All binaries listed in this variable are tested for presence
+ before each queue run. If any is not available, the queue run is
+ delayed. This test can be useful if those binaries reside on NFS
+ filesystems which may be (auto-)mounted only slowly. It is
+ especially annoying for users if pgp can't be found and a .changes
+ is deleted.
+
+ - $ssh_options:
+ Options passed to ssh and scp on every call. General ssh
+ configuration should be done here and not in ~/.ssh/config, to
+ avoid dependency on the user's settings. A good idea for
+ $ssh_options seems to be
+
+ -o'BatchMode yes' -o'FallBackToRsh no' -o'ForwardAgent no'
+ -o'ForwardX11 no' -o'PasswordAuthentication no'
+ -o'StrictHostKeyChecking yes'
+
+ - $ssh_key_file:
+ The file containing the ssh key you want the daemon to use for
+ connects to the target host. If you leave this empty, the default
+ ~/.ssh/identity is used, which may or may not be what you want.
+
+ - $incoming:
+ This names the queue directory itself. Probably it will be inside
+ the public FTP area. Don't forget to allow uploads to it in
+ ftpaccess if you're using wu-ftpd.
+
+ Maybe you should also allow anonymous users to rename files in that
+ directory, to fix upload problems (they can't delete files, so they
+ have to move the erroneous file out of the way). But this
+ introduces a denial-of-service hole: an attacker could rename other
+ people's files so that a job won't be processed. At least the data
+ aren't lost in that case, and the rename command was probably
+ logged by ftpd. Nevertheless, there's no urgent need to allow
+ renaming, because the queue daemon deletes all bad files
+ automatically, so they can be reuploaded under the same name.
+ Decide on your own...
+
+ - $keep_files:
+ This is a regular expression for files that never should be deleted
+ in the queue directory. The status file must be included here,
+ other probable candidates are .message and/or README files.
+
+ - $chmod_on_target:
+ If this variable is true (i.e., not 0 or ""), all files belonging
+ to a job are changed to mode 644 only on the target host. The
+ alternative (if the variable is false, i.e. 0) is to change the
+ mode already locally, after the sizes and md5 sums have been
+ verified. The latter is the default.
+
+ The background for this is the following: The files must be
+ world-readable on master for dinstall to work, so they must be at
+ least mode 444, but 644 seems more useful. If the upload policy of
+ your site says that uploaded files shouldn't be readable by the
+ world, the queue daemon has to change the permissions at some
+ point. (scp copies a file's permissions along with its contents,
+ so after scp, the files on the target have the same mode as in the
+ queue directory.) If the files in the queue are mode 644 anyway,
+ you don't need to care about this option. The default --to give
+ world read permission already in the queue after some checks-- is
+ obviously less restrictive, but might be against the policy of your
+ site. The alternative keeps the files unreadable in the queue in
+ any case, and they'll be readable only on the target host.
+
+ - $statusfile:
+ This is the name of the status file or FIFO, through which users
+ can ask the daemon what it's currently doing. It should normally be
+ in the queue directory. If you change the name, please don't forget
+ to check $keep_files. See also the separate section on the status
+ file.
+
+ If you leave $statusfile empty, the daemon doesn't create and
+ manage a status file at all. Unfortunately, dqueued-watcher's
+ algorithm for determining whether it has already reported a missing
+ daemon depends on the status file, so in this case that doesn't
+ work anymore: you'll get dead-daemon mails on every run of
+ dqueued-watcher.
+
+ - $statusdelay:
+ If this number is greater than 0, the status file is implemented as
+ a regular file, and updated at least every $statusdelay seconds. If
+ $statusdelay is 0, the FIFO implementation is used (see status file
+ section).
+
+ - $keyring:
+ The name of the PGP keyring the daemon uses to check PGP signatures
+ of .changes files. This is usually $queued_dir/debian-keyring.pgp.
+ It should contain exactly the keys of all Debian developers (i.e.
+ those and no other keys).
+
+ - $gpg_keyring:
+ The name of the GnuPG keyring. The daemon now alternatively accepts
+ GnuPG signatures on .changes and .commands files. The value here is
+ usually $queued_dir/debian-keyring.gpg. It should contain only keys
+ of Debian developers (but not all developers have a GPG key
+ yet...).
+
+ - $keyring_archive:
+ Path of the debian-keyring.tar.gz file inside a Debian mirror. The
+ file is "/debian/doc/debian-keyring.tar.gz" on ftp.debian.org;
+ where you mirror it to is up to you... Leave it empty if you don't
+ have that file on your local machine. But then you'll have to
+ update the keyring manually from time to time.
+
+ - $keyring_archive_name:
+ Name of the PGP keyring file in the archive $keyring_archive. Currently
+ "debian-keyring*/debian-keyring.pgp".
+
+ - $gpg_keyring_archive_name:
+ Name of the GnuPG keyring file in the archive $keyring_archive. Currently
+ "debian-keyring*/debian-keyring.gpg".
+
+ - $logfile:
+ The file debianqueued writes its logging data to. Usually "log" in
+ $queued_dir.
+
+ - $pidfile:
+ The file debianqueued writes its pid to. Usually "pid" in
+ $queued_dir.
+
+ - $target:
+ Name of the target host, i.e. the host where the queue uploads to.
+ Usually "master.debian.org". (Ignored with "copy" upload method.)
+
+ - $targetlogin:
+ The login on the target to use for uploads. (Ignored with "copy"
+ and "ftp" upload methods; "ftp" always does anonymous logins.)
+
+ - $targetdir:
+ The directory on the target to where files should be uploaded. On
+ master.debian.org this currently is
+ "/home/Debian/ftp/private/project/Incoming".
+
+ - $max_upload_retries:
+ This is how often the daemon tries to upload a job (a .changes
+ plus the files belonging to it). After that number of tries is
+ exhausted, all these files are deleted.
+
+ - $log_age:
+ This is how many days to wait before log files are rotated. (The
+ age of the current log file is derived from the first date found
+ in it.)
+
+ - $log_keep:
+ How many old log files to keep. The current logfile is what you
+ configured as $logfile above, older versions have ".0", ".1.gz",
+ ".2.gz", ... appended. I.e., all old versions except the first are
+ additionally gzipped. $log_keep is one higher than the max.
+ appended number that should exist.
+
+ - $mail_summary:
+ If this is set to a true value (not 0 and not ""), dqueued-watcher
+ will send a mail with a summary of the daemon's activities whenever
+ logfiles are rotated.
+
+ - $summary_file:
+ If that value is a file name (and not an empty string),
+ dqueued-watcher will write the same summary of daemon activities as
+ above to the named file. This can be in addition to sending a mail.
+
+ - @nonus_packages:
+ This is a (Perl) list of names of packages that must be uploaded to
+ nonus.debian.org and not to master. Since the queue daemon can only
+ deal with one target, it can't do that upload and thus must reject
+ the job. Generally you can treat this variable as a list of any
+ packages that should be rejected.
+
+All the following timing variables are in seconds:
+
+ - $upload_delay_1:
+ The time between the first (failed) upload try and the next one.
+ Usually shorter than $upload_delay_2, for a quick retry after
+ transient errors.
+
+ - $upload_delay_2:
+ The time between the following (except the first) upload retries.
+
+ - $queue_delay:
+ The time between two queue runs. (May not be obeyed too exactly;
+ a few seconds' deviation is normal.)
+
+ - $stray_remove_timeout:
+ If a file not associated with any .changes file is found in the
+ queue directory, it is removed after this many seconds.
+
+ - $problem_report_timeout:
+ If there are problems with a job that could also be the result of a
+ not-yet-complete upload (missing or too-small files), the daemon
+ waits this long before reporting the problem to the uploader. This
+ avoids warning mails for slow but ongoing uploads.
+
+ - $no_changes_timeout:
+
+ If files are found in the queue directory that look like a Debian
+ upload (*.tar.gz, *.diff.gz, *.deb, or *.dsc files), but aren't
+ accompanied by a .changes file, then debianqueued tries to notify
+ the uploader after $no_changes_timeout seconds about this. This
+ value is somewhat similar to $problem_report_timeout, and the
+ values can be equal.
+
+ Since there's no .changes, the daemon can never be sure who
+ really uploaded the files, but it tries to extract the maintainer
+ address from all of the files mentioned above. If they're real
+ Debian files (except a .orig.tar.gz), this works in most cases.
+
+ - $bad_changes_timeout:
+ After this time, a job with persisting problems (missing files,
+ wrong size or md5 checksum) is removed.
+
+ - $remote_timeout:
+ This is the maximum time a remote command (ssh/scp) may take. It's
+ there to protect against network unreliability and the like. Choose
+ the number sufficiently high, so that the timeout doesn't
+ inadvertently kill a longish upload. A few hours seems ok.
+
+Contents of $queued_dir
+-----------------------
+
+$queued_dir usually contains the following files:
+
+ - config:
+ The configuration file, described above.
+
+ - log:
+ Log file of debianqueued. All interesting actions and errors are
+ logged there, in a format similar to syslog.
+
+ - pid:
+ This file contains the pid of debianqueued, to detect double
+ daemons and for killing a running daemon.
+
+ - debian-keyring.pgp, debian-keyring.gpg:
+ These are the PGP and GnuPG keyrings used by debianqueued to
+ verify the signatures of .changes files. They should contain the
+ keys of all Debian developers and no other keys. The current
+ Debian keyring can be obtained from
+ ftp.debian.org:/debian/doc/debian-keyring.tar.gz. dqueued-watcher
+ can update these files automatically if you also run a Debian
+ mirror.
+
+ - debianqueued, dqueued-watcher:
+ The Perl scripts.
+
+All filenames except "config" can be changed in the config file. The
+files are not really required to reside in $queued_dir, but it seems
+practical to have them all together...
+
+
+Details of Queue Processing
+---------------------------
+
+The details of how the files in the queue are processed may be a bit
+complicated. You can skip this section if you're not interested in
+those details and everything is running fine... :-)
+
+The first thing the daemon does on every queue run is to determine
+all the *.changes files present. All of them are subsequently read
+and analyzed. The .changes MUST contain a Maintainer: field, and the
+contents of that field should be the mail address of the uploader. The
+address is used for sending back acknowledgements and error messages.
+(dinstall on master uses the same convention.)
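+
+As a rough illustration (this is not the daemon's actual code, and the
+directory name is just the one from the example config), the start of
+a queue run amounts to something like:
+
+  #!/usr/bin/perl -w
+  # sketch only: find the .changes files and the uploader addresses
+  use strict;
+  my $incoming = "/srv/queued/UploadQueue";      # $incoming from the config
+  opendir my $dh, $incoming or die "opendir $incoming: $!";
+  my @changes = grep { /\.changes$/ } readdir $dh;
+  closedir $dh;
+  for my $c (@changes) {
+      open my $fh, '<', "$incoming/$c" or next;
+      my ($addr) = map { /^Maintainer:\s*(.+)/ ? $1 : () } <$fh>;
+      close $fh;
+      print "$c: would reply to ", $addr || "(no Maintainer: field!)", "\n";
+      # ... signature check, file checks and the upload would follow here
+  }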
+
+Next, the PGP or GnuPG signature of the .changes is checked. The
+signature must be valid and must belong to one of the keys in the
+Debian keyring (see config variables $keyring and $gpg_keyring). This
+ensures that only registered Debian developers can use the upload
+queue to transfer files to master.
+
+Then all files mentioned in the Files: field of the .changes are
+checked. All of them must be present, and must have the correct size
+and md5 checksum. If any of these conditions is violated, the upload
+doesn't happen and an error message is sent to the uploader. If the
+error is an incorrect size/md5sum, the file is also deleted, because
+it has to be reuploaded anyway, and it could be the case that the
+uploader cannot easily overwrite a file in the queue dir (due to
+upload permission restrictions). If the error is a missing or
+too-small file, the error message is held back for some time
+($problem_report_timeout), because these can also be the result of a
+not-yet-complete upload.
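+
+In code, the per-file check boils down to roughly the following sketch
+(not the real implementation; the md5sum path is the one from the
+example config, and the helper name is made up):
+
+  # sketch: check one file listed in the Files: field of a .changes
+  sub check_listed_file {
+      my ($path, $want_size, $want_md5) = @_;
+      return "missing"   unless -f $path;
+      my $size = -s _;
+      return "too small" if $size < $want_size;    # may still be uploading
+      return "bad size"  if $size > $want_size;
+      my ($md5) = split ' ', `/usr/bin/md5sum '$path'`;
+      return "bad md5"   if !$md5 || $md5 ne $want_md5;
+      return "ok";
+  }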
+
+The time baseline for when to send such a problem report is the
+maximum modification time of the .changes itself and all files
+mentioned in it. When such a report is sent, the setgid bit (shown as
+'S' in an ls -l listing, in the group x position) is set on the
+.changes to note that fact, and to avoid the report being sent on
+every following queue run. If any modification time becomes greater
+than the time the setgid bit was set, a new problem report is sent,
+because something about the files has obviously changed.
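+
+That bookkeeping might look roughly like this (a sketch only, assuming
+that the ctime of the .changes reflects when the setgid bit was last
+changed; the function name is made up):
+
+  # sketch: decide whether a (new) problem report is due
+  sub report_due {
+      my ($changes, $max_mtime) = @_;     # $max_mtime: newest mtime of the job
+      my @st = stat($changes) or return 0;
+      my $already_sent = ($st[2] & 02000) != 0;   # setgid bit = report sent
+      return 1 if !$already_sent;
+      return 1 if $max_mtime > $st[10];           # something changed since then
+      return 0;
+  }
+  # after sending a report, remember that fact:
+  # chmod((stat($changes))[2] & 07777 | 02000, $changes);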
+
+If a job is hanging around for too long with errors
+($bad_changes_timeout), the .changes and all its files are deleted.
+The base for that timeout is again the maximum modification time as
+explained above.
+
+If the .changes itself and all its files are ok, an upload is
+tried. The upload itself is done with scp. In that stage, various
+errors from the net and/or ssh can occur. All these simply count as
+upload failures, since it's not easy to distinguish transient and
+permanent failures :-( If the scp goes ok, the md5sums of the files on
+the target are compared with the local ones. This is to ensure that
+the transfer didn't corrupt anything. On any error in the upload or in
+the md5 check, the files written to the target host are deleted again
+(they may be broken), and an error message is sent to the uploader.
+
+The upload is retried $upload_delay_1 seconds later. If it fails
+again, the next retries have a (longer) delay $upload_delay_2 between
+them. At most $max_upload_retries retries are done. After all these
+have failed, all the files are deleted, since it seems we can't move
+them... For remembering how many tries have already been done (and
+when), debianqueued uses a separate file. Its name is the .changes'
+filename with ".failures" appended. It simply contains two integers,
+the retry count and the last upload time (in Unix time format).
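+
+For illustration, reading and writing such a ".failures" file could
+look like the following sketch (whether the two numbers share one line
+is an assumption; the function names are made up):
+
+  # sketch: the ".failures" bookkeeping -- retry count and last upload time
+  sub read_failures {
+      my ($changes) = @_;
+      open my $fh, '<', "$changes.failures" or return (0, 0);
+      my ($count, $last_time) = split ' ', scalar <$fh>;
+      close $fh;
+      return ($count || 0, $last_time || 0);
+  }
+  sub write_failures {
+      my ($changes, $count, $last_time) = @_;
+      open my $fh, '>', "$changes.failures" or return;
+      print $fh "$count $last_time\n";
+      close $fh;
+  }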
+
+After a successful upload, the daemon also checks for files that look
+like they belonged to the same job, but weren't listed in the
+.changes. Experience shows this happens rather often with
+.orig.tar.gz files, which people upload even though they aren't
+needed or mentioned in the .changes. The daemon uses the filename
+pattern <pkg-name>_<version>* to find such unneeded files, where the
+Debian revision is stripped from <version>. The latter is needed to
+include .orig.tar.gz files, which don't have the Debian revision
+part. But this also introduces the possibility that files of another
+upload for the same package but with another revision are deleted
+though they shouldn't be. However, this case seems rather unlikely,
+so I didn't care about it. If such files are deleted, that fact is
+mentioned in the reply mail to the uploader.
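+
+A sketch of how such a pattern can be derived from a .changes name
+(the real daemon does this in its debian_file_stem function; the
+details here are simplified):
+
+  # sketch: "hello_1.0-1_i386.changes" -> glob pattern "hello_1.0*"
+  sub job_pattern {
+      my ($changes) = @_;
+      my ($pkg, $version) = $changes =~ /^([^_]+)_([^_]+)/ or return;
+      $version =~ s/-[^-]*$//;        # strip the Debian revision, if any
+      return "${pkg}_${version}*";    # also matches the .orig.tar.gz
+  }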
+
+If any files are found in the queue dir that don't belong to any
+.changes, they are considered "stray". Such files are removed after
+$stray_remove_timeout. This should be around 1 day or so, to avoid
+removing files that belong to a job whose .changes is still to come.
+The daemon also tries to find out whether such stray files could be
+part of an incomplete upload, where the .changes file is still
+missing or has been forgotten. Files that match the patterns *.deb,
+*.dsc, *.diff.gz, or *.tar.gz are analyzed to see whether a
+maintainer address can be extracted from them. If yes, the maintainer
+is notified about the incomplete upload after $no_changes_timeout
+seconds. However, the maintainer need not really be the uploader...
+It could be a binary-only upload for another architecture, or a
+non-maintainer upload. In these cases, the mail goes to the wrong
+person :-( But better than not writing at all, IMHO...
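+
+As an illustration of that extraction (a sketch only; it assumes the
+.deb still carries a control.tar.gz member and relies on ar and GNU
+tar as described under Installation; in a .diff.gz the Maintainer:
+line may carry a leading "+"):
+
+  # sketch: try to find a Maintainer: address in a stray file
+  sub guess_maintainer {
+      my ($file) = @_;
+      my $text = "";
+      if ($file =~ /\.deb$/) {
+          $text = `ar p '$file' control.tar.gz | tar xzOf - ./control control 2>/dev/null`;
+      } elsif ($file =~ /\.(diff|tar)\.gz$/) {
+          $text = `gzip -dc '$file' 2>/dev/null`;
+      } elsif ($file =~ /\.dsc$/) {
+          open my $fh, '<', $file or return;
+          $text = do { local $/; <$fh> };
+      }
+      return $text =~ /^\+?Maintainer:\s*(.+)/m ? $1 : undef;
+  }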
+
+
+The status file
+---------------
+
+debianqueued provides a status file for the user in the queue
+directory. By reading this file, the user can get an idea what the
+daemon is currently doing.
+
+There are two possible implementations of the status file: as a plain
+file, or as a named pipe (FIFO). Both have their advantages and
+disadvantages.
+
+If using the FIFO, the data printed (last ping time, next queue run)
+are always up to date, because they're interrogated (by a signal) just
+at the time the FIFO is opened for reading. Also, the daemon doesn't
+have to care about the status file if nobody accesses it. The bad
+things about the FIFO: It is a potential portability problem, because
+not all systems have FIFOs, or they behave differently than I
+expect... But the more severe problem: wu-ftpd refuses to send the
+contents of a FIFO on an FTP GET request :-(( It does an explicit
+check whether a file to be retrieved is a regular file. This can be
+easily patched [1], but not everybody wants to or can do that (I did
+it for ftp.uni-erlangen.de). (BTW, there could still be problems
+(races) if more than one process tries to read the status file at the
+same time...)
+
+The alternative is using a plain file, which is updated regularly by
+the daemon. This works on every system, but causes more overhead (the
+daemon has to wake up every $statusdelay seconds and write a file),
+and the time figures in the file can't be exact. $statusdelay should
+be a compromise between CPU wastage and the desired accuracy of the
+times found in the status file. I think 15 or 30 seconds should be
+ok, but your mileage may vary.
+
+If the status file is a FIFO, the queue daemon forks a second process
+for watching the FIFO (so don't wonder if debianqueued shows up twice
+in ps output :-), to avoid blocking a reading process too long until
+the main daemon has time to watch the pipe. The status daemon requests
+data from the main daemon by sending a signal (SIGUSR1). Nevertheless
+it can happen that a process that opens the status file (for reading)
+is blocked, because the daemon has crashed (or has never been started,
+e.g. after a reboot). To minimize chances for that situation, dqueued-watcher
+replaces the FIFO by a plain file (telling that the daemon is down) if
+it sees that no queue daemon is running.
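+
+For the curious, the FIFO variant boils down to something like this
+sketch (not the real status daemon; the literal path is the one from
+the example config, and using getppid() as the main daemon's pid is an
+assumption):
+
+  # sketch: the status child in the FIFO case
+  use POSIX qw(mkfifo);
+  my $statusfile = "/srv/queued/UploadQueue/status";   # $statusfile
+  mkfifo($statusfile, 0644) unless -p $statusfile;
+  while (1) {
+      # open() blocks here until somebody starts reading the file
+      open my $fifo, '>', $statusfile or die "open $statusfile: $!";
+      kill 'USR1', getppid();       # ask the main daemon for fresh data
+      print $fifo "debianqueued: last ping and next queue run would go here\n";
+      close $fifo;
+      sleep 1;                      # don't spin if readers come in bursts
+  }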
+
+
+ [1]: This is such a patch, for wu-ftpd-2.4.2-BETA-13:
+
+--- wu-ftpd/src/ftpd.c~ Wed Jul 9 13:18:44 1997
++++ wu-ftpd/src/ftpd.c Wed Jul 9 13:19:15 1997
+@@ -1857,7 +1857,9 @@
+ return;
+ }
+ if (cmd == NULL &&
+- (fstat(fileno(fin), &st) < 0 || (st.st_mode & S_IFMT) != S_IFREG)) {
++ (fstat(fileno(fin), &st) < 0 ||
++ ((st.st_mode & S_IFMT) != S_IFREG &&
++ (st.st_mode & S_IFMT) != S_IFIFO))) {
+ reply(550, "%s: not a plain file.", name);
+ goto done;
+ }
+
+
+Command Files
+-------------
+
+The practical experiences with debianqueued showed that users
+sometimes make errors with their uploads, resulting in misnamed or
+corrupted files... Formerly they didn't have any chance to fix such
+errors, because the ftpd usually doesn't allow deleting or renaming
+files in the queue directory. (If you allowed this, *anybody* could
+remove/rename files, which isn't desirable.) So users had to wait
+until the daemon deleted the bad files (usually ~ 24 hours), before
+they could start the next try.
+
+To overcome this, I invented the *.commands files. The daemon looks
+for such files just as it tests for *.changes files on every queue run,
+and processes them before the usual jobs. *.commands files must be PGP
+or GnuPG signed by a known Debian developer (same test as for
+*.changes), so only these people can give the daemon commands. Since
+Debian developers can also delete files in master's incoming, the
+*.commands feature doesn't give up any security.
+
+The syntax of a *.commands file is much like that of a *.changes, but
+it contains only two (mandatory) fields: Uploader: and Commands:.
+Uploader: contains the e-mail address of the uploader for reply mails,
+and should have the same contents as Maintainer: in a .changes. Commands:
+is a multi-line field like e.g. Description: or Changes:. Every
+continuation line must start with a space. Each line in Commands:
+contains a command for the daemon that looks like a shell command (but
+it isn't one, the daemon parses and executes it itself and doesn't use
+sh or the respective binaries).
+
+Example:
+-----BEGIN PGP SIGNED MESSAGE-----
+
+Uploader: Roman Hodek <Roman.Hodek@informatik.uni-erlangen.de>
+Commands:
+ rm hello_1.0-1_i386.deb
+ mv hello_1.0-1.dsx hello_1.0-1.dsc
+
+-----BEGIN PGP SIGNATURE-----
+Version: 2.6.3ia
+
+iQCVAwUBNFiQSXVhJ0HiWnvJAQG58AP+IDJVeSWmDvzMUphScg1EK0mvChgnuD7h
+BRiVQubXkB2DphLJW5UUSRnjw1iuFcYwH/lFpNpl7XP95LkLX3iFza9qItw4k2/q
+tvylZkmIA9jxCyv/YB6zZCbHmbvUnL473eLRoxlnYZd3JFaCZMJ86B0Ph4GFNPAf
+Z4jxNrgh7Bc=
+=pH94
+-----END PGP SIGNATURE-----
+
+The only commands implemented at this time are 'rm' and 'mv'. No
+options are implemented, and filenames may not contain slashes and are
+interpreted relative to the queue directory. This ensures that only
+files there can be modified. 'mv' always takes two arguments. 'rm' can
+take any number of args. It also knows about the following shell
+wildcard chars: *, ?, and [...]. {..,..} constructs are *not*
+supported. The daemon expands these patterns itself and doesn't use sh
+for that (for security reasons).
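+
+A sketch of such a shell-free expansion (simplified; the real daemon
+also handles the [...] classes, and the function name is made up):
+
+  # sketch: expand an rm argument against the files in the queue directory
+  sub expand_rm_arg {
+      my ($arg, @queue_files) = @_;
+      return () if $arg =~ m{/};      # slashes are refused outright
+      my $re = quotemeta $arg;
+      $re =~ s/\\\*/.*/g;             # *  ->  .*
+      $re =~ s/\\\?/./g;              # ?  ->  .
+      # ([...] would be translated similarly; left out here for brevity)
+      return grep { /^$re$/ } @queue_files;
+  }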
+
+*.commands files are processed before the usual *.changes jobs, so if
+a commands file fixes a job so that it can be processed, that
+processing happens in the same queue run and no unnecessary delay is
+introduced.
+
+The uploader of a *.commands file will receive a reply mail with a
+comment (OK or an error message) for each of the commands given. The
+daemon not only logs the contents of the Uploader: field, but also the
+owner of the PGP/GnuPG key that was used to sign the file. In case you
+want to find out who issued some commands, the Uploader: field alone
+is not reliable, since its contents can't be verified.
+
+
+Security Considerations
+-----------------------
+
+You already know that debianqueued uses ssh & Co. to get access to
+master, or in general any target host. You also probably know that you
+need to unlock your ssh secret key with a passphrase before it can be
+used. For the daemon this creates a problem: It needs the passphrase
+to be able to use ssh/scp, but obviously you can't type in the phrase
+every time the daemon needs it... It would also be very ugly and
+insecure to write the passphrase into some config file of the daemon!
+
+The solution is using ssh-agent, which comes with the ssh package.
+This agent's purpose is to store passphrases and hand them to
+ssh/scp/... when they need them. ssh-agent has two ways in which it
+can be accessed: through a Unix domain socket, or through an inherited
+file descriptor (ssh-agent then is the parent of your login shell).
+The second method is much more secure than the first, because the
+socket can be easily exploited by root. On the other hand, an
+inherited file descriptor can be accessed *only* from a child process,
+so even root has little chance of getting its hands on it.
+Unfortunately, the fd method has been removed in ssh-1.2.17, so I
+STRONGLY recommend using ssh-1.2.16. (You can still have a newer
+version for normal use, but separate binaries for debianqueued.) Also,
+using debianqueued with Unix domain sockets is basically untested,
+though I've heard that it doesn't work...
+
+debianqueued starts the ssh-agent automatically and runs ssh-add. This
+will ask you for your passphrase. The phrase is stored in the agent
+and available only to child processes of the agent. The agent also
+starts up a second instance of the queue daemon, which notices that
+the agent is already running.
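+
+Very roughly, that startup scheme corresponds to the following sketch
+(error handling omitted; the real daemon also supports the fd-based
+agent and takes the program paths and the key file from the config):
+
+  my $ssh_key_file = "";                    # empty: use ~/.ssh/identity
+  unless ($ENV{SSH_AUTH_SOCK} || $ENV{SSH_AGENT_PID}) {
+      exit 0 if fork;                       # go into the background first ...
+      exec '/usr/bin/ssh-agent', $0, @ARGV; # ... then let the agent re-run us
+  }
+  # now running as a child of the agent: load the key (asks for the passphrase)
+  system '/usr/bin/ssh-add', ($ssh_key_file ? ($ssh_key_file) : ());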
+
+Currently, there's no method to store the passphrase in a file, due to
+all the security disadvantages of this. If you don't mind this and
+would like to have some opportunity to do it nevertheless, please ask
+me. If there's enough demand, I'll do it.
+
+
+New Upload Methods
+------------------
+
+Since release 0.9, debianqueued has two new upload methods as
+alternatives to ssh: copy and ftp.
+
+The copy method simply moves the files to another directory on the
+same host. This seems a bit silly, but is for a special purpose: The
+admins of master intend to run an upload queue there, too, in the
+future to avoid non-anonymous FTP connections, which transmit the
+password in cleartext. And, in addition to simply moving the files,
+the queue daemon also checks the signature and integrity of uploads
+and can reject non-US packages.
+
+The ftp method uploads to a standard anon-FTP incoming directory. The
+intention here is that you could create second-level queue daemons.
+I.e., those daemons would upload into the queue of another daemon
+(and, for example, this could be the queue of the daemon on master).
+
+However, the ftp method still has some limitations:
+
+ 1) Files in the target dir can't be deleted.
+ 2) Uploaded files can't be verified as well as with the other methods.
+ 3) $chmod_on_target often doesn't work.
+ 4) The check for a writable incoming directory leaves temporary files
+ behind.
+
+Ad 1): In anon-FTP incoming directories, removing files usually
+isn't allowed (this would open the doors wide to denial-of-service
+attacks). But debianqueued has to remove files on the target as part
+of handling upload errors. So if a transmission error happens during
+a job, the bad file can't be deleted. On the next try, the file is
+already present on the target and can't be overwritten, so all the
+following tries will fail, too, unless the upstream queue daemon has
+already deleted them. And if the .changes was among the files already
+(at least partially) uploaded, the daemon will even think that the
+whole job is already present on the target and will delete the job in
+its queue.
+
+Ad 2): Uploaded files are usually verified with md5sum to check that
+they're really the same as the originals. But getting the md5sum of a
+file on an FTP server usually isn't possible. It's currently handled
+as follows: If the server supports a SITE MD5SUM command
+(non-standard!), then this is used and you have the same checking
+quality. Otherwise, debianqueued falls back to only comparing the
+file sizes. This is better than nothing, but doesn't detect changed
+contents that don't result in size changes.
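+
+With Net::FTP, that verification might look like the following sketch
+(SITE MD5SUM being non-standard, the exact reply format is an
+assumption; the function name is made up):
+
+  # sketch: verify a file after an FTP upload
+  use Net::FTP;
+  sub remote_file_ok {
+      my ($ftp, $file, $local_md5, $local_size) = @_;
+      $ftp->quot('SITE', 'MD5SUM', $file);
+      if ($ftp->ok) {
+          my ($remote_md5) = $ftp->message =~ /([0-9a-fA-F]{32})/;
+          return defined $remote_md5 && lc($remote_md5) eq lc($local_md5);
+      }
+      my $remote_size = $ftp->size($file);  # fallback: size comparison only
+      return defined $remote_size && $remote_size == $local_size;
+  }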
+
+Ad 3): Often SITE CHMOD (standard) isn't allowed in incoming
+directories. If this is the case, $chmod_on_target must be off,
+otherwise all uploads will fail. The mode of uploaded files is forced
+by the FTP server in most cases anyway.
+
+Ad 4): As you know, the queue daemon has a special check whether the
+target directory is writable at all (it isn't during a freeze) to
+protect against repeated upload errors. (Jobs would otherwise even be
+deleted if the target dir is inaccessible for too long.) This check is
+performed by creating a test file and deleting it again immediately.
+But since deletion isn't permitted in FTP incoming dirs, the temporary
+file ("junk-for-writable-test-DATE") will remain there. As a partial
+fix, the daemon deletes such files immediately, without even waiting
+for $stray_remove_timeout. So if the upload goes to the queue dir of
+an upstream debianqueued, those temporary files won't be there for
+long.
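+
+A sketch of that probe for the ftp method (the file name scheme is the
+one described above; the helper shown is not the daemon's actual
+check_incoming_writable, and the local temporary path is just for
+illustration):
+
+  # sketch: probe whether the remote incoming directory is writable
+  use Net::FTP;
+  use POSIX qw(strftime);
+  sub incoming_writable {
+      my ($ftp) = @_;
+      my $probe = "junk-for-writable-test-" . strftime("%Y%m%d%H%M%S", localtime);
+      my $local = "/tmp/$probe";
+      open my $fh, '>', $local or return 0;
+      close $fh;
+      my $ok = defined $ftp->put($local, $probe);
+      $ftp->delete($probe) if $ok;  # usually refused in incoming dirs
+      unlink $local;
+      return $ok ? 1 : 0;
+  }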
+
+These problems of the FTP method might be removed in the future, if I
+have better ideas how to bypass the limitations of anon-FTP incoming
+directories. Hints welcome :-)
+
+
+# Local Variables:
+# mode: indented-text
+# End:
--- /dev/null
+$Header: /allftp/CVS/debianqueued/TODO,v 1.8 1998/04/01 15:27:39 ftplinux Exp $
+
+ - There are numerous potential portability problems... They'll show
+ up as this script is used on more and different machines.
+
+ - There was a suggestion that bad files in uploads could be handled
+ more easily than with command files: Give them some known extension
+ (e.g. .<digits>), and the daemon could look for those files if the
+ main file has a bad size or md5 sum.
+
+ - Make provisions for the (rare) case that the daemon looks at a
+ yet-incomplete .changes file.
+
--- /dev/null
+Format: 1.5
+Date:
+Source: debianqueued
+Binary: debianqueued
+Architecture: source all
+Version:
+Distribution: unstable
+Urgency: low
+Maintainer: Roman Hodek <Roman.Hodek@informatik.uni-erlangen.de>
+Description:
+ Debian Upload Queue Daemon
+Files:
--- /dev/null
+#
+# example configuration file for debianqueued
+#
+# $Id: config,v 1.15 1999/07/07 16:19:32 ftplinux Exp $
+#
+# $Log: config,v $
+# Revision 1.15 1999/07/07 16:19:32 ftplinux
+# New variables for upload methods: $upload_method, $ftptimeout,
+# $ftpdebug, $ls, $cp, $chmod.
+# New variables for GnuPG checking: $gpg, $gpg_keyring,
+# $gpg_keyring_archive_name.
+# Renamed "master" in vars to "target".
+# Updated list of non-US packages.
+#
+# Revision 1.14 1998/07/06 14:25:46 ftplinux
+# Make $keyring_archive_name use a wildcard, newer debian keyring tarball
+# contain a dir with a date.
+#
+# Revision 1.13 1998/04/23 10:56:53 ftplinux
+# Added new config var $chmod_on_master.
+#
+# Revision 1.12 1998/02/17 10:57:21 ftplinux
+# Added @test_binaries
+#
+# Revision 1.11 1997/12/09 13:51:46 ftplinux
+# Implemented rejecting of nonus packages (new config var @nonus_packages)
+#
+# Revision 1.10 1997/10/30 11:32:39 ftplinux
+# Implemented warning mails for incomplete uploads that miss a .changes
+# file. Maintainer address can be extracted from *.deb, *.diff.gz,
+# *.dsc, or *.tar.gz files with help of new utility functions
+# is_debian_file, get_maintainer, and debian_file_stem.
+#
+# Revision 1.9 1997/09/17 12:16:33 ftplinux
+# Added writing summaries to a file
+#
+# Revision 1.8 1997/08/18 13:07:14 ftplinux
+# Implemented summary mails
+#
+# Revision 1.7 1997/08/11 12:49:09 ftplinux
+# Implemented logfile rotating
+#
+# Revision 1.6 1997/08/07 09:25:21 ftplinux
+# Added timeout for remote operations
+#
+# Revision 1.5 1997/07/09 10:14:58 ftplinux
+# Change RCS Header: to Id:
+#
+# Revision 1.4 1997/07/09 10:13:51 ftplinux
+# Alternative implementation of status file as plain file (not FIFO), because
+# standard wu-ftpd doesn't allow retrieval of non-regular files. New config
+# option $statusdelay for this.
+#
+# Revision 1.3 1997/07/08 08:34:14 ftplinux
+# If dqueued-watcher runs as cron job, $PATH might not contain gzip. Use extra
+# --use-compress-program option to tar, and new config var $gzip.
+#
+# Revision 1.2 1997/07/03 13:06:48 ftplinux
+# Little last changes before beta release
+#
+# Revision 1.1.1.1 1997/07/03 12:54:59 ftplinux
+# Import initial sources
+#
+
+# set to != 0 for debugging output (to log file)
+$debug = 0;
+
+# various programs:
+# -----------------
+$gpg = "/usr/bin/gpg";
+$ssh = "/usr/bin/ssh";
+$scp = "/usr/bin/scp";
+$ssh_agent = "/usr/bin/ssh-agent";
+$ssh_add = "/usr/bin/ssh-add";
+$md5sum = "/usr/bin/md5sum";
+$mail = "/usr/bin/mail";
+$mkfifo = "/usr/bin/mkfifo";
+$tar = "/bin/tar"; # must be GNU tar!
+$gzip = "/bin/gzip";
+$ar = "/usr/bin/ar"; # must support p option, optional
+$ls = "/bin/ls";
+$cp = "/bin/cp";
+$chmod = "/bin/chmod";
+
+# binaries whose existence should be tested before each queue run
+#@test_binaries = ();
+
+# general options to ssh/scp
+$ssh_options = "-o'BatchMode yes' -o'FallBackToRsh no' ".
+ "-o'ForwardAgent no' -o'ForwardX11 no' ".
+ "-o'PasswordAuthentication no' -o'StrictHostKeyChecking yes'";
+
+# ssh key file to use for connects to master (empty: default ~/.ssh/identity)
+$ssh_key_file = "";
+
+# the incoming dir we live in
+$incoming = "/srv/queued/UploadQueue";
+
+# files not to delete in $incoming (regexp)
+$keep_files = '(status|\.message|README)$';
+
+# file patterns that aren't deleted right away
+$valid_files = '(\.changes|\.tar\.gz|\.dsc|\.u?deb|diff\.gz|\.sh)$';
+
+# Change files to mode 644 locally (after md5 check) or only on the target?
+$chmod_on_target = 0;
+
+# name of the status file or named pipe in the incoming dir
+$statusfile = "$incoming/status";
+
+# if 0, status file implemented as FIFO; if > 0, status file is plain
+# file and updated with a delay of this many seconds
+$statusdelay = 30;
+
+# names of the keyring files
+@keyrings = ( "/srv/keyring.debian.org/keyrings/debian-keyring.gpg",
+ "/srv/keyring.debian.org/keyrings/debian-keyring.pgp",
+ "/srv/ftp.debian.org/keyrings/debian-maintainers.gpg" );
+
+# our log file
+$logfile = "$queued_dir/log";
+
+# our pid file
+$pidfile = "$queued_dir/pid";
+
+# upload method (ssh, copy, ftp)
+$upload_method = "copy";
+
+# name of target host (ignored on copy method)
+$target = "localhost";
+
+# login name on target host (for ssh, always 'ftp' for ftp, ignored for copy)
+$targetlogin = "queue";
+
+# incoming on target host
+$targetdir = "/srv/ftp.debian.org/queue/unchecked/";
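+# A hypothetical example for the "ssh" method (host name, login, and
+# directory are placeholders, adjust them to your site):
+#$upload_method = "ssh";
+#$target = "ftp-master.example.org";
+#$targetlogin = "queue";
+#$targetdir = "/srv/upload/queue/";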
+
+# select FTP debugging
+#$ftpdebug = 0;
+
+# FTP timeout
+$ftptimeout = 900;
+
+# max. number of tries to upload
+$max_upload_retries = 8;
+
+# delay after first failed upload
+$upload_delay_1 = 30*60; # 30 min.
+
+# delay between successive failed uploads
+$upload_delay_2 = 4*60*60; # 4 hours
+
+# packages that must go to nonus.debian.org and thus are rejected here
+#@nonus_packages = qw(gpg-rsaidea);
+
+# timings:
+# --------
+# time between two queue checks
+$queue_delay = 5*60; # 5 min.
+# when are stray files deleted?
+$stray_remove_timeout = 24*60*60; # 1 day
+# delay before reporting problems with a .changes file (not
+# immediately for to-be-continued uploads)
+$problem_report_timeout = 30*60; # 30 min.
+# delay before reporting that a .changes file is missing (not
+# immediately for to-be-continued uploads)
+$no_changes_timeout = 30*60; # 30 min.
+# when are .changes with persistent problems removed?
+$bad_changes_timeout = 2*24*60*60; # 2 days
+# how long may a remote operation (ssh/scp) take?
+$remote_timeout = 3*60*60; # 3 hours
+
+# mail address of maintainer
+$maintainer_mail = "james\@nocrew.org";
+
+
+# logfile rotating:
+# -----------------
+# how often to rotate (in days)
+$log_age = 7;
+# how many old logs to keep
+$log_keep = 4;
+# send summary mail when rotating logs?
+$mail_summary = 1;
+# write summary to file when rotating logs? (no if name empty)
+$summary_file = "$queued_dir/summary";
+
+# don't remove this, Perl needs it!
+1;
--- /dev/null
+#!/usr/bin/perl -w
+#
+# debianqueued -- daemon for managing Debian upload queues
+#
+# Copyright (C) 1997 Roman Hodek <Roman.Hodek@informatik.uni-erlangen.de>
+# Copyright (C) 2001-2007 Ryan Murray <rmurray@debian.org>
+#
+# This program is free software. You can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation: either version 2 or
+# (at your option) any later version.
+# This program comes with ABSOLUTELY NO WARRANTY!
+#
+# $Id: debianqueued,v 1.51 1999/07/08 09:43:21 ftplinux Exp $
+#
+# $Log: debianqueued,v $
+# Revision 1.51 1999/07/08 09:43:21 ftplinux
+# Bumped release number to 0.9
+#
+# Revision 1.50 1999/07/07 16:17:30 ftplinux
+# Signatures can now also be created by GnuPG; in pgp_check, also try
+# gpg for checking.
+# In several messages, also mention GnuPG.
+#
+# Revision 1.49 1999/07/07 16:14:43 ftplinux
+# Implemented new upload methods "copy" and "ftp" as alternatives to "ssh".
+# Replaced "master" in many function and variable names by "target".
+# New functions ssh_cmd, ftp_cmd, and local_cmd for more abstraction and
+# better readable code.
+#
+# Revision 1.48 1998/12/08 13:09:39 ftplinux
+# At the end of process_changes, do not remove the @other_files with the same
+# stem if a .changes file is in that list; then there is probably another
+# upload for a different version or another architecture.
+#
+# Revision 1.47 1998/05/14 14:21:44 ftplinux
+# Bumped release number to 0.8
+#
+# Revision 1.46 1998/05/14 14:17:00 ftplinux
+# When --after a successful upload-- deleting files for the same job, check
+# for equal revision number on files that have one. It has happened that the
+# daemon deleted files that belonged to another job with different revision.
+#
+# Revision 1.45 1998/04/23 11:05:47 ftplinux
+# Implemented $conf::chmod_on_master. If 0, new part to change mode locally in
+# process_changes.
+#
+# Revision 1.44 1998/04/21 08:44:44 ftplinux
+# Don't use return value of debian_file_stem as regexp, it's a shell pattern.
+#
+# Revision 1.43 1998/04/21 08:22:21 ftplinux
+# Also recognize "read-only filesystem" as an error message so it triggers
+# assuming that incoming is unwritable.
+# Don't increment failure count after an upload try that did clear
+# $incoming_writable.
+# Fill in forgotten pattern for mail addr in process_commands.
+#
+# Revision 1.42 1998/03/31 13:27:32 ftplinux
+# In fatal_signal, kill status daemon only if it has been started (otherwise
+# warning about uninitialized variable).
+# Change mode of files uploaded to master explicitly to 644 there; scp copies
+# the permissions they had in the queue.
+#
+# Revision 1.41 1998/03/31 09:06:00 ftplinux
+# Implemented handling of improper mail addresses in Maintainer: field.
+#
+# Revision 1.40 1998/03/24 13:17:33 ftplinux
+# Added new check if incoming dir on master is writable. This check is triggered
+# if an upload returns "permission denied" errors. If the dir is unwritable, the
+# queue is held (no upload tries) until it's writable again.
+#
+# Revision 1.39 1998/03/23 14:05:14 ftplinux
+# Bumped release number to 0.7
+#
+# Revision 1.38 1998/03/23 14:03:55 ftplinux
+# In an upload failure message, say explicitly that the job will be
+# retried, to avoid confusion of users.
+# $failure_file was put on @keep_list only for first retry.
+# If the daemon removes a .changes, set SGID bit on all files associated
+# with it, so that the test for Debian files without a .changes doesn't
+# find them.
+# Don't send reports for files without a .changes if the files look like
+# a recompilation for another architecture.
+# Also don't send such a report if the list of files with the same stem
+# contains a .changes.
+# Set @keep_list earlier, before PGP and non-US checks.
+# Fix recognition of -k argument.
+#
+# Revision 1.37 1998/02/17 12:29:58 ftplinux
+# Removed @conf::test_binaries used only once warning
+# Try to kill old daemon for 20secs instead of 10
+#
+# Revision 1.36 1998/02/17 10:53:47 ftplinux
+# Added test for binaries on maybe-slow NFS filesystems (@conf::test_binaries)
+#
+# Revision 1.35 1997/12/16 13:19:28 ftplinux
+# Bumped release number to 0.6
+#
+# Revision 1.34 1997/12/09 13:51:24 ftplinux
+# Implemented rejecting of nonus packages (new config var @nonus_packages)
+#
+# Revision 1.33 1997/11/25 10:40:53 ftplinux
+# In check_alive, look up the IP address every time, since it can change
+# while the daemon is running.
+# process_changes: Check presence of .changes on master at a later
+# point, to avoid bothering master as long as there are errors in a
+# .changes.
+# Don't view .orig.tar.gz files as is_debian_file, so that they aren't
+# picked for extracting the maintainer address in the
+# job-without-changes processing.
+# END statement: Fix swapped arguments to kill
+# Program startup: Implemented -r and -k arguments.
+#
+# Revision 1.32 1997/11/20 15:18:47 ftplinux
+# Bumped release number to 0.5
+#
+# Revision 1.31 1997/11/11 13:37:52 ftplinux
+# Replaced <./$pattern> construct by a cleaner glob() call
+# Avoid potentially uninitialized $_ in process_commands file read loop
+# Implemented rm command with more than 1 arg and wildcards in rm args
+#
+# Revision 1.30 1997/11/06 14:09:53 ftplinux
+# In process_commands, also recognize commands given on the same line as
+# the Commands: keyword, not only the continuation lines.
+#
+# Revision 1.29 1997/11/03 15:52:20 ftplinux
+# After reopening the log file write one line to it for dqueued-watcher.
+#
+# Revision 1.28 1997/10/30 15:37:23 ftplinux
+# Removed some leftover comments in process_commands.
+# Changed pgp_check so that it returns the address of the signator.
+# process_commands now also logs PGP signator, since Uploader: address
+# can be chosen freely by the uploader.
+#
+# Revision 1.27 1997/10/30 14:05:37 ftplinux
+# Added "command" to log string for command file uploader, to make it
+# unique for dqueued-watcher.
+#
+# Revision 1.26 1997/10/30 14:01:05 ftplinux
+# Implemented .commands files
+#
+# Revision 1.25 1997/10/30 13:05:29 ftplinux
+# Removed date from status version info (too long)
+#
+# Revision 1.24 1997/10/30 13:04:02 ftplinux
+# Print revision, version, and date in status data
+#
+# Revision 1.23 1997/10/30 12:56:01 ftplinux
+# Implemented deletion of files that (probably) belong to an upload, but
+# weren't listed in the .changes.
+#
+# Revision 1.22 1997/10/30 12:22:32 ftplinux
+# When setting sgid bit for stray files without a .changes, check for
+# files deleted in the meantime.
+#
+# Revision 1.21 1997/10/30 11:32:19 ftplinux
+# Added quotes where filenames are used on sh command lines, in case
+# they contain metacharacters.
+# print_time now always prints three-field times, as omitting the hour if it
+# is 0 could cause confusion (hour or seconds missing?).
+# Implemented warning mails for incomplete uploads that miss a .changes
+# file. Maintainer address can be extracted from *.deb, *.diff.gz,
+# *.dsc, or *.tar.gz files with help of new utility functions
+# is_debian_file, get_maintainer, and debian_file_stem.
+#
+# Revision 1.20 1997/10/13 09:12:21 ftplinux
+# On some .changes errors (missing/bad PGP signature, no files) also log the
+# uploader
+#
+# Revision 1.19 1997/09/25 11:20:42 ftplinux
+# Bumped release number to 0.4
+#
+# Revision 1.18 1997/09/25 08:15:02 ftplinux
+# In process_changes, initialize some vars to avoid warnings
+# If first consistency checks failed, don't forget to delete .changes file
+#
+# Revision 1.17 1997/09/16 10:53:35 ftplinux
+# Made logging more verbose in queued and dqueued-watcher
+#
+# Revision 1.16 1997/08/12 09:54:39 ftplinux
+# Bumped release number
+#
+# Revision 1.15 1997/08/11 12:49:09 ftplinux
+# Implemented logfile rotating
+#
+# Revision 1.14 1997/08/11 11:35:05 ftplinux
+# Revised startup scheme so it works with the socket-based ssh-agent, too.
+# That watches whether its child still exists, so the go-to-background fork must be done before the ssh-agent.
+#
+# Revision 1.13 1997/08/11 08:48:31 ftplinux
+# Aaarg... forgot the alarm(0)'s
+#
+# Revision 1.12 1997/08/07 09:25:22 ftplinux
+# Added timeout for remote operations
+#
+# Revision 1.11 1997/07/28 13:20:38 ftplinux
+# Added release number to startup message
+#
+# Revision 1.10 1997/07/28 11:23:39 ftplinux
+# $main::statusd_pid not necessarily defined in status daemon -- rewrite check
+# whether to delete pid file in signal handler.
+#
+# Revision 1.9 1997/07/28 08:12:16 ftplinux
+# Again revised SIGCHLD handling.
+# Set $SHELL to /bin/sh explicitly before starting ssh-agent.
+# Again raise ping timeout.
+#
+# Revision 1.8 1997/07/25 10:23:03 ftplinux
+# Made SIGCHLD handling more portable between perl versions
+#
+# Revision 1.7 1997/07/09 10:15:16 ftplinux
+# Change RCS Header: to Id:
+#
+# Revision 1.6 1997/07/09 10:13:53 ftplinux
+# Alternative implementation of status file as plain file (not FIFO), because
+# standard wu-ftpd doesn't allow retrieval of non-regular files. New config
+# option $statusdelay for this.
+#
+# Revision 1.5 1997/07/09 09:21:22 ftplinux
+# Little revisions to signal handling; status daemon should ignore SIGPIPE,
+# in case someone closes the FIFO before completely reading it; in fatal_signal,
+# only the main daemon should remove the pid file.
+#
+# Revision 1.4 1997/07/08 11:31:51 ftplinux
+# Print messages of ssh call in is_on_master to debug log.
+# In ssh call to remove bad files on master, the split() doesn't work
+# anymore, now that I use -o'xxx y'. Use string interpolation and let
+# the shell parse the stuff.
+#
+# Revision 1.3 1997/07/07 09:29:30 ftplinux
+# Call check_alive also if master hasn't been pinged for 8 hours.
+#
+# Revision 1.2 1997/07/03 13:06:49 ftplinux
+# Little last changes before beta release
+#
+# Revision 1.1.1.1 1997/07/03 12:54:59 ftplinux
+# Import initial sources
+#
+#
+
+require 5.002;
+use strict;
+use POSIX;
+use POSIX qw( sys_stat_h sys_wait_h signal_h );
+use Net::Ping;
+use Net::FTP;
+use Socket qw( PF_INET AF_INET SOCK_STREAM );
+use Config;
+
+# ---------------------------------------------------------------------------
+# configuration
+# ---------------------------------------------------------------------------
+
+package conf;
+($conf::queued_dir = (($0 !~ m,^/,) ? POSIX::getcwd()."/" : "") . $0)
+ =~ s,/[^/]+$,,;
+require "$conf::queued_dir/config";
+my $junk = $conf::debug; # avoid spurious warnings about unused vars
+$junk = $conf::ssh_key_file;
+$junk = $conf::stray_remove_timeout;
+$junk = $conf::problem_report_timeout;
+$junk = $conf::queue_delay;
+$junk = $conf::keep_files;
+$junk = $conf::valid_files;
+$junk = $conf::max_upload_retries;
+$junk = $conf::upload_delay_1;
+$junk = $conf::upload_delay_2;
+$junk = $conf::ar;
+$junk = $conf::gzip;
+$junk = $conf::cp;
+$junk = $conf::ls;
+$junk = $conf::chmod;
+$junk = $conf::ftpdebug;
+$junk = $conf::ftptimeout;
+$junk = $conf::no_changes_timeout;
+$junk = @conf::nonus_packages;
+$junk = @conf::test_binaries;
+$junk = $conf::maintainer_mail;
+$conf::target = "localhost" if $conf::upload_method eq "copy";
+package main;
+
+($main::progname = $0) =~ s,.*/,,;
+
+# extract -r and -k args
+$main::arg = "";
+if (@ARGV == 1 && $ARGV[0] =~ /^-[rk]$/) {
+ $main::arg = ($ARGV[0] eq '-k') ? "kill" : "restart";
+ shift @ARGV;
+}
+
+# test for another instance of the queued already running
+my $pid;
+if (open( PIDFILE, "<$conf::pidfile" )) {
+ chomp( $pid = <PIDFILE> );
+ close( PIDFILE );
+ if (!$pid) {
+ # remove stale pid file
+ unlink( $conf::pidfile );
+ }
+ elsif ($main::arg) {
+ local($|) = 1;
+ print "Killing running daemon (pid $pid) ...";
+ kill( 15, $pid );
+ my $cnt = 20;
+ while( kill( 0, $pid ) && $cnt-- > 0 ) {
+ sleep 1;
+ print ".";
+ }
+ if (kill( 0, $pid )) {
+ print " failed!\nProcess $pid still running.\n";
+ exit 1;
+ }
+ print "ok\n";
+ if (-e "$conf::incoming/core") {
+ unlink( "$conf::incoming/core" );
+ print "(Removed core file)\n";
+ }
+ exit 0 if $main::arg eq "kill";
+ }
+ else {
+ die "Another $main::progname is already running (pid $pid)\n"
+ if $pid && kill( 0, $pid );
+ }
+}
+elsif ($main::arg eq "kill") {
+ die "No daemon running\n";
+}
+elsif ($main::arg eq "restart") {
+ print "(No daemon running; starting anyway)\n";
+}
+
+# if started without arguments (initial invocation), then fork
+if (!@ARGV) {
+ # now go to background
+ die "$main::progname: fork failed: $!\n" unless defined( $pid = fork );
+ if ($pid) {
+ # parent: wait for signal from child (SIGCHLD or SIGUSR1) and exit
+ my $sigset = POSIX::SigSet->new();
+ $sigset->emptyset();
+ $SIG{"CHLD"} = sub { };
+ $SIG{"USR1"} = sub { };
+ POSIX::sigsuspend( $sigset );
+ waitpid( $pid, WNOHANG );
+ if (kill( 0, $pid )) {
+ print "Daemon started in background (pid $pid)\n";
+ exit 0;
+ }
+ else {
+ exit 1;
+ }
+ }
+ else {
+ # child
+ setsid;
+ if ($conf::upload_method eq "ssh") {
+ # exec an ssh-agent that starts us again
+ # force shell to be /bin/sh, ssh-agent may base its decision
+ # whether to use a fd or a Unix socket on the shell...
+ $ENV{"SHELL"} = "/bin/sh";
+ exec $conf::ssh_agent, $0, "startup", getppid();
+ die "$main::progname: Could not exec $conf::ssh_agent: $!\n";
+ }
+ else {
+ # no need to exec, just set up @ARGV as expected below
+ @ARGV = ("startup", getppid());
+ }
+ }
+}
+die "Please start without any arguments.\n"
+ if @ARGV != 2 || $ARGV[0] ne "startup";
+my $parent_pid = $ARGV[1];
+
+do {
+ my $version;
+ ($version = 'Release: 0.9 $Revision: 1.51 $ $Date: 1999/07/08 09:43:21 $ $Author: ftplinux $') =~ s/\$ ?//g;
+ print "debianqueued $version\n";
+};
+
+# check if all programs exist
+my $prg;
+foreach $prg ( $conf::gpg, $conf::ssh, $conf::scp, $conf::ssh_agent,
+ $conf::ssh_add, $conf::md5sum, $conf::mail, $conf::mkfifo ) {
+ die "Required program $prg doesn't exist or isn't executable\n"
+ if ! -x $prg;
+}
+
+# check for correct upload method
+die "Bad upload method '$conf::upload_method'.\n"
+ if $conf::upload_method ne "ssh" &&
+ $conf::upload_method ne "ftp" &&
+ $conf::upload_method ne "copy";
+die "No keyrings\n" if ! @conf::keyrings;
+
+# ---------------------------------------------------------------------------
+# initializations
+# ---------------------------------------------------------------------------
+
+# prototypes
+sub calc_delta();
+sub check_dir();
+sub process_changes($\@);
+sub process_commands($);
+sub is_on_target($);
+sub copy_to_target(@);
+sub pgp_check($);
+sub check_alive(;$);
+sub check_incoming_writable();
+sub fork_statusd();
+sub write_status_file();
+sub print_status($$$$$$);
+sub format_status_num(\$$);
+sub format_status_str(\$$);
+sub send_status();
+sub ftp_open();
+sub ftp_cmd($@);
+sub ftp_close();
+sub ftp_response();
+sub ftp_code();
+sub ftp_error();
+sub ssh_cmd($);
+sub scp_cmd(@);
+sub local_cmd($;$);
+sub check_alive(;$);
+sub check_incoming_writable();
+sub rm(@);
+sub md5sum($);
+sub is_debian_file($);
+sub get_maintainer($);
+sub debian_file_stem($);
+sub msg($@);
+sub debug(@);
+sub init_mail(;$);
+sub finish_mail();
+sub send_mail($$$);
+sub try_to_get_mail_addr($$);
+sub format_time();
+sub print_time($);
+sub block_signals();
+sub unblock_signals();
+sub close_log($);
+sub kid_died($);
+sub restart_statusd();
+sub fatal_signal($);
+
+$ENV{"PATH"} = "/bin:/usr/bin";
+$ENV{"IFS"} = "" if defined($ENV{"IFS"} && $ENV{"IFS"} ne "");
+
+# constants for stat
+sub ST_DEV() { 0 }
+sub ST_INO() { 1 }
+sub ST_MODE() { 2 }
+sub ST_NLINK() { 3 }
+sub ST_UID() { 4 }
+sub ST_GID() { 5 }
+sub ST_RDEV() { 6 }
+sub ST_SIZE() { 7 }
+sub ST_ATIME() { 8 }
+sub ST_MTIME() { 9 }
+sub ST_CTIME() { 10 }
+# fixed lengths of data items passed over status pipe
+sub STATNUM_LEN() { 30 }
+sub STATSTR_LEN() { 128 }
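+# The status record passed from the main daemon to the status daemon is a
+# fixed-length byte string (see send_status and fork_statusd below):
+#   1 byte              target up? ("0"/"1")
+#   1 byte              incoming writable? ("0"/"1")
+#   1 byte              daemon state ("i"/"c"/"u")
+#   STATNUM_LEN bytes   time of next queue run (space-padded decimal)
+#   STATNUM_LEN bytes   time of last ping (space-padded decimal)
+#   STATSTR_LEN bytes   current .changes name (newline-padded)
+# i.e. 3 + 2*STATNUM_LEN + STATSTR_LEN = 191 bytes in total.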
+
+# init list of signals
+defined $Config{sig_name} or die "$main::progname: No signal list defined!\n";
+my $i = 0;
+my $name;
+foreach $name (split( ' ', $Config{sig_name} )) {
+ $main::signo{$name} = $i++;
+}
+
+@main::fatal_signals = qw( INT QUIT ILL TRAP ABRT BUS FPE USR2 SEGV PIPE
+ TERM XCPU XFSZ PWR );
+
+$main::block_sigset = POSIX::SigSet->new;
+$main::block_sigset->addset( $main::signo{"INT"} );
+$main::block_sigset->addset( $main::signo{"TERM"} );
+
+# some constant net stuff
+$main::tcp_proto = (getprotobyname('tcp'))[2]
+ or die "Cannot get protocol number for 'tcp'\n";
+my $used_service = ($conf::upload_method eq "ssh") ? "ssh" : "ftp";
+$main::echo_port = (getservbyname($used_service, 'tcp'))[2]
+ or die "Cannot get port number for service '$used_service'\n";
+
+# clear queue of stored mails
+@main::stored_mails = ();
+
+# run ssh-add to bring the key into the agent (will use stdin/stdout)
+if ($conf::upload_method eq "ssh") {
+ system "$conf::ssh_add $conf::ssh_key_file"
+ and die "$main::progname: Running $conf::ssh_add failed ".
+ "(exit status ", $? >> 8, ")\n";
+}
+
+# change to queue dir
+chdir( $conf::incoming )
+ or die "$main::progname: cannot cd to $conf::incoming: $!\n";
+
+# needed before /dev/null redirects, some systems send a SIGHUP when losing
+# the controlling tty
+$SIG{"HUP"} = "IGNORE";
+
+# open logfile, make it unbuffered
+open( LOG, ">>$conf::logfile" )
+ or die "Cannot open my logfile $conf::logfile: $!\n";
+chmod( 0644, $conf::logfile )
+ or die "Cannot set modes of $conf::logfile: $!\n";
+select( (select(LOG), $| = 1)[0] );
+
+sleep( 1 );
+$SIG{"HUP"} = \&close_log;
+
+# redirect stdin, ... to /dev/null
+open( STDIN, "</dev/null" )
+ or die "$main::progname: Can't redirect stdin to /dev/null: $!\n";
+open( STDOUT, ">&LOG" )
+ or die "$main::progname: Can't redirect stdout to $conf::logfile: $!\n";
+open( STDERR, ">&LOG" )
+ or die "$main::progname: Can't redirect stderr to $conf::logfile: $!\n";
+# ok, from this point usually no "die" anymore, stderr is gone!
+msg( "log", "daemon (pid $$) started\n" );
+
+# initialize variables used by send_status before launching the status daemon
+$main::dstat = "i";
+format_status_num( $main::next_run, time+10 );
+format_status_str( $main::current_changes, "" );
+check_alive();
+$main::incoming_writable = 1; # assume this for now
+
+# start the daemon watching the 'status' FIFO
+if ($conf::statusfile && $conf::statusdelay == 0) {
+ $main::statusd_pid = fork_statusd();
+ $SIG{"CHLD"} = \&kid_died; # watch out for dead status daemon
+ # SIGUSR1 triggers status info
+ $SIG{"USR1"} = \&send_status;
+}
+$main::maind_pid = $$;
+
+END { kill( $main::signo{"ABRT"}, $$ ) if defined $main::signo{"ABRT"}; }
+
+# write the pid file
+open( PIDFILE, ">$conf::pidfile" )
+ or msg( "log", "Can't open $conf::pidfile: $!\n" );
+printf PIDFILE "%5d\n", $$;
+close( PIDFILE );
+chmod( 0644, $conf::pidfile )
+ or die "Cannot set modes of $conf::pidfile: $!\n";
+
+# other signals will just log an error and exit
+foreach ( @main::fatal_signals ) {
+ $SIG{$_} = \&fatal_signal;
+}
+
+# send signal to user-started process that we're ready and it can exit
+kill( $main::signo{"USR1"}, $parent_pid );
+
+# ---------------------------------------------------------------------------
+# the mainloop
+# ---------------------------------------------------------------------------
+
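+# ($main::dstat codes used below and reported by print_status: "i" = idle,
+# waiting for the next queue run; "c" = checking the queue directory;
+# "u" = uploading to the target.)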
+$main::dstat = "i";
+write_status_file() if $conf::statusdelay;
+while( 1 ) {
+
+ # ping target only if there is the possibility that we'll contact it (but
+ # also don't wait too long).
+ my @have_changes = <*.changes *.commands>;
+ check_alive() if @have_changes || (time - $main::last_ping_time) > 8*60*60;
+
+ if (@have_changes && $main::target_up) {
+ check_incoming_writable if !$main::incoming_writable;
+ check_dir() if $main::incoming_writable;
+ }
+ $main::dstat = "i";
+ write_status_file() if $conf::statusdelay;
+
+ # sleep() returns if we received a signal (SIGUSR1 for status FIFO), so
+ # calculate the end time once and wait until it is reached.
+ format_status_num( $main::next_run, time + $conf::queue_delay );
+ my $delta;
+ while( ($delta = calc_delta()) > 0 ) {
+ debug( "mainloop sleeping $delta secs" );
+ sleep( $delta );
+ # check if statusd died, if using status FIFO, or update status file
+ if ($conf::statusdelay) {
+ write_status_file();
+ }
+ else {
+ restart_statusd();
+ }
+ }
+}
+
+sub calc_delta() {
+ my $delta;
+
+ $delta = $main::next_run - time;
+ $delta = $conf::statusdelay
+ if $conf::statusdelay && $conf::statusdelay < $delta;
+ return $delta;
+}
+
+
+# ---------------------------------------------------------------------------
+# main working functions
+# ---------------------------------------------------------------------------
+
+
+#
+# main function for checking the incoming dir
+#
+sub check_dir() {
+ my( @files, @changes, @keep_files, @this_keep_files, @stats, $file );
+
+ debug( "starting checkdir" );
+ $main::dstat = "c";
+ write_status_file() if $conf::statusdelay;
+
+ # test if needed binaries are available; useful if they're on possibly
+ # slow-mounted NFS filesystems
+ foreach (@conf::test_binaries) {
+ next if -f $_;
+ # maybe the mount succeeds now
+ sleep 5;
+ next if -f $_;
+ msg( "log", "binary test failed for $_; delaying queue run\n");
+ goto end_run;
+ }
+
+ # look for *.commands files
+ foreach $file ( <*.commands> ) {
+ init_mail( $file );
+ block_signals();
+ process_commands( $file );
+ unblock_signals();
+ $main::dstat = "c";
+ write_status_file() if $conf::statusdelay;
+ finish_mail();
+ }
+
+ opendir( INC, "." )
+ or (msg( "log", "Cannot open incoming dir $conf::incoming: $!\n" ),
+ return);
+ @files = readdir( INC );
+ closedir( INC );
+
+ # process all .changes files found
+ @changes = grep /\.changes$/, @files;
+ push( @keep_files, @changes ); # .changes files aren't stray
+ foreach $file ( @changes ) {
+ init_mail( $file );
+ # wrap in an eval to allow jumpbacks to here with die in case
+ # of errors
+ block_signals();
+ eval { process_changes( $file, @this_keep_files ); };
+ unblock_signals();
+ msg( "log,mail", $@ ) if $@;
+ $main::dstat = "c";
+ write_status_file() if $conf::statusdelay;
+
+ # files which are ok in conjunction with this .changes
+ debug( "$file tells to keep @this_keep_files" );
+ push( @keep_files, @this_keep_files );
+ finish_mail();
+
+ # break out of this loop if the incoming dir has become unwritable
+ goto end_run if !$main::incoming_writable;
+ }
+ ftp_close() if $conf::upload_method eq "ftp";
+
+ # find files which aren't related to any .changes
+ foreach $file ( @files ) {
+ # filter out files we never want to delete
+ next if ! -f $file || # may have disappeared in the meantime
+ $file eq "." || $file eq ".." ||
+ (grep { $_ eq $file } @keep_files) ||
+ $file =~ /$conf::keep_files/;
+ # Delete such files if they're older than
+ # $stray_remove_timeout; they could be part of a
+ # yet-incomplete upload, with the .changes still missing.
+ # Cannot send any notification, since owner unknown.
+ next if !(@stats = stat( $file ));
+ my $age = time - $stats[ST_MTIME];
+ my( $maint, $pattern, @job_files );
+ if ($file =~ /^junk-for-writable-test/ ||
+ $file !~ m,$conf::valid_files, ||
+ $age >= $conf::stray_remove_timeout) {
+ msg( "log", "Deleted stray file $file\n" ) if rm( $file );
+ }
+ elsif ($age > $conf::no_changes_timeout &&
+ is_debian_file( $file ) &&
+ # not already reported
+ !($stats[ST_MODE] & S_ISGID) &&
+ ($pattern = debian_file_stem( $file )) &&
+ (@job_files = glob($pattern)) &&
+ # If a .changes is in the list, it has the same stem as the
+ # found file (probably a .orig.tar.gz). Don't report in this
+ # case.
+ !(grep( /\.changes$/, @job_files ))) {
+ $maint = get_maintainer( $file );
+ # Don't send a mail if this looks like the recompilation of a
+ # package for a non-i386 arch. For those, the maintainer field is
+ # useless :-(
+ if (!grep( /(\.dsc|_(i386|all)\.deb)$/, @job_files )) {
+ msg( "log", "Found an upload without .changes and with no ",
+ ".dsc file\n" );
+ msg( "log", "Not sending a report, because probably ",
+ "recompilation job\n" );
+ }
+ elsif ($maint) {
+ init_mail();
+ $main::mail_addr = $maint;
+ $main::mail_addr = $1 if $main::mail_addr =~ /<([^>]*)>/;
+ $main::mail_subject = "Incomplete upload found in ".
+ "Debian upload queue";
+ msg( "mail", "Probably you are the uploader of the following ".
+ "file(s) in\n" );
+ msg( "mail", "the Debian upload queue directory:\n " );
+ msg( "mail", join( "\n ", @job_files ), "\n" );
+ msg( "mail", "This looks like an upload, but a .changes file ".
+ "is missing, so the job\n" );
+ msg( "mail", "cannot be processed.\n\n" );
+ msg( "mail", "If no .changes file arrives within ",
+ print_time( $conf::stray_remove_timeout - $age ),
+ ", the files will be deleted.\n\n" );
+ msg( "mail", "If you didn't upload those files, please just ".
+ "ignore this message.\n" );
+ finish_mail();
+ msg( "log", "Sending problem report for an upload without a ".
+ ".changes\n" );
+ msg( "log", "Maintainer: $maint\n" );
+ }
+ else {
+ msg( "log", "Found an upload without .changes, but can't ".
+ "find a maintainer address\n" );
+ }
+ msg( "log", "Files: @job_files\n" );
+ # remember we already have sent a mail regarding this file
+ foreach ( @job_files ) {
+ my @st = stat($_);
+ next if !@st; # file may have disappeared in the meantime
+ chmod +($st[ST_MODE] |= S_ISGID), $_;
+ }
+ }
+ else {
+ debug( "found stray file $file, deleting in ",
+ print_time($conf::stray_remove_timeout - $age) );
+ }
+ }
+
+ end_run:
+ $main::dstat = "i";
+ write_status_file() if $conf::statusdelay;
+}
+
+#
+# process one .changes file
+#
+sub process_changes($\@) {
+ my $changes = shift;
+ my $keep_list = shift;
+ my( $pgplines, @files, @filenames, @changes_stats, $failure_file,
+ $retries, $last_retry, $upload_time, $file, $do_report, $ls_l,
+ $problems_reported, $errs, $pkgname, $signator );
+ local( *CHANGES );
+ local( *FAILS );
+
+ format_status_str( $main::current_changes, $changes );
+ $main::dstat = "c";
+ write_status_file() if $conf::statusdelay;
+
+ @$keep_list = ();
+ msg( "log", "processing $changes\n" );
+
+ # parse the .changes file
+ open( CHANGES, "<$changes" )
+ or die "Cannot open $changes: $!\n";
+ $pgplines = 0;
+ $main::mail_addr = "";
+ @files = ();
+ outer_loop: while( <CHANGES> ) {
+ if (/^---+(BEGIN|END) PGP .*---+$/) {
+ ++$pgplines;
+ }
+ elsif (/^Maintainer:\s*/i) {
+ chomp( $main::mail_addr = $' );
+ $main::mail_addr = $1 if $main::mail_addr =~ /<([^>]*)>/;
+ }
+ elsif (/^Source:\s*/i) {
+ chomp( $pkgname = $' );
+ $pkgname =~ s/\s+$//;
+ }
+ elsif (/^Files:/i) {
+ while( <CHANGES> ) {
+ redo outer_loop if !/^\s/;
+ my @field = split( /\s+/ );
+ next if @field != 6;
+ # forbid shell meta chars in the name, we pass it to a
+ # subshell several times...
+ $field[5] =~ /^([a-zA-Z0-9.+_:@=%-][~a-zA-Z0-9.+_:@=%-]*)/;
+ if ($1 ne $field[5]) {
+ msg( "log", "found suspicious filename $field[5]\n" );
+ msg( "mail", "File '$field[5]' mentioned in $changes\n",
+ "has bad characters in its name. Removed.\n" );
+ rm( $field[5] );
+ next;
+ }
+ push( @files, { md5 => $field[1],
+ size => $field[2],
+ name => $field[5] } );
+ push( @filenames, $field[5] );
+ debug( "includes file $field[5], size $field[2], ",
+ "md5 $field[1]" );
+ }
+ }
+ }
+ close( CHANGES );
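+ # An illustrative Files: stanza as parsed above (fields per line:
+ # md5sum, size, section, priority, filename):
+ #   Files:
+ #    0123456789abcdef0123456789abcdef 12345 admin optional foo_1.0-1_i386.deb
+ #    fedcba9876543210fedcba9876543210 2345 admin optional foo_1.0-1.dsc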
+
+ # tell check_dir that the files mentioned in this .changes aren't stray,
+ # we know about them somehow
+ @$keep_list = @filenames;
+
+ # some consistency checks
+ if (!$main::mail_addr) {
+ msg( "log,mail", "$changes doesn't contain a Maintainer: field; ".
+ "cannot process\n" );
+ goto remove_only_changes;
+ }
+ if ($main::mail_addr !~ /^(buildd_\S+-\S+|\S+\@\S+\.\S+)/) {
+ # doesn't look like a mail address, maybe only the name
+ my( $new_addr, @addr_list );
+ if ($new_addr = try_to_get_mail_addr( $main::mail_addr, \@addr_list )){
+ # substitute (unique) found addr, but give a warning
+ msg( "mail", "(The Maintainer: field didn't contain a proper ".
+ "mail address.\n" );
+ msg( "mail", "Looking for `$main::mail_addr' in the Debian ".
+ "keyring gave your address\n" );
+ msg( "mail", "as unique result, so I used this.)\n" );
+ msg( "log", "Substituted $new_addr for malformed ".
+ "$main::mail_addr\n" );
+ $main::mail_addr = $new_addr;
+ }
+ else {
+ # not found or not unique: hold the job and inform queue maintainer
+ my $old_addr = $main::mail_addr;
+ $main::mail_addr = $conf::maintainer_mail;
+ msg( "mail", "The job $changes doesn't have a correct email\n" );
+ msg( "mail", "address in the Maintainer: field:\n" );
+ msg( "mail", " $old_addr\n" );
+ msg( "mail", "A check for this in the Debian keyring gave:\n" );
+ msg( "mail", @addr_list ?
+ " " . join( ", ", @addr_list ) . "\n" :
+ " nothing\n" );
+ msg( "mail", "Please fix this manually\n" );
+ msg( "log", "Bad Maintainer: field in $changes: $old_addr\n" );
+ goto remove_only_changes;
+ }
+ }
+ if ($pgplines < 3) {
+ msg( "log,mail", "$changes isn't signed with PGP/GnuPG\n" );
+ msg( "log", "(uploader $main::mail_addr)\n" );
+ goto remove_only_changes;
+ }
+ if (!@files) {
+ msg( "log,mail", "$changes doesn't mention any files\n" );
+ msg( "log", "(uploader $main::mail_addr)\n" );
+ goto remove_only_changes;
+ }
+
+ # check for packages that shouldn't be processed
+ if (grep( $_ eq $pkgname, @conf::nonus_packages )) {
+ msg( "log,mail", "$pkgname is a package that must be uploaded ".
+ "to nonus.debian.org\n" );
+ msg( "log,mail", "instead of target.\n" );
+ msg( "log,mail", "Job rejected and removed all files belonging ".
+ "to it:\n" );
+ msg( "log,mail", " ", join( ", ", @filenames ), "\n" );
+ rm( $changes, @filenames );
+ return;
+ }
+
+ $failure_file = $changes . ".failures";
+ $retries = $last_retry = 0;
+ if (-f $failure_file) {
+ open( FAILS, "<$failure_file" )
+ or die "Cannot open $failure_file: $!\n";
+ my $line = <FAILS>;
+ close( FAILS );
+ ( $retries, $last_retry ) = ( $1, $2 ) if $line =~ /^(\d+)\s+(\d+)$/;
+ push( @$keep_list, $failure_file );
+ }
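+ # (The .failures file consists of a single line "<retry count> <unix time
+ # of last try>"; it is (re)written further below after each failed upload
+ # attempt.)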
+
+ # run PGP on the file to check the signature
+ if (!($signator = pgp_check( $changes ))) {
+ msg( "log,mail", "$changes has bad PGP/GnuPG signature!\n" );
+ msg( "log", "(uploader $main::mail_addr)\n" );
+ remove_only_changes:
+ msg( "log,mail", "Removing $changes, but keeping its associated ",
+ "files for now.\n" );
+ rm( $changes );
+ # Set SGID bit on associated files, so that the test for Debian files
+ # without a .changes doesn't consider them.
+ foreach ( @filenames ) {
+ my @st = stat($_);
+ next if !@st; # file may have disappeared in the meantime
+ chmod +($st[ST_MODE] |= S_ISGID), $_;
+ }
+ return;
+ }
+ elsif ($signator eq "LOCAL ERROR") {
+ # An error happened when starting pgp... Don't process the file,
+ # but also don't delete it
+ debug( "Can't PGP/GnuPG check $changes -- don't process it for now" );
+ return;
+ }
+
+ die "Cannot stat $changes (??): $!\n"
+ if !(@changes_stats = stat( $changes ));
+ # Make $upload_time the maximum of all modification times of files
+ # related to this .changes (and the .changes itself). This is the
+ # last time something changed in these files.
+ $upload_time = $changes_stats[ST_MTIME];
+ for $file ( @files ) {
+ my @stats;
+ next if !(@stats = stat( $file->{"name"} ));
+ $file->{"stats"} = \@stats;
+ $upload_time = $stats[ST_MTIME] if $stats[ST_MTIME] > $upload_time;
+ }
+
+ $do_report = (time - $upload_time) > $conf::problem_report_timeout;
+ $problems_reported = $changes_stats[ST_MODE] & S_ISGID;
+ # if any of the files is newer than the .changes' ctime (the time
+ # we sent a report and set the SGID bit), send new problem reports
+ if ($problems_reported && $changes_stats[ST_CTIME] < $upload_time) {
+ $problems_reported = 0;
+ chmod +($changes_stats[ST_MODE] &= ~S_ISGID), $changes;
+ debug( "upload_time>changes-ctime => resetting problems reported" );
+ }
+ debug( "do_report=$do_report problems_reported=$problems_reported" );
+
+ # now check all files for correct size and md5 sum
+ for $file ( @files ) {
+ my $filename = $file->{"name"};
+ if (!defined( $file->{"stats"} )) {
+ # could be an upload that isn't complete yet, be quiet,
+ # but don't process the file;
+ msg( "log,mail", "$filename doesn't exist\n" )
+ if $do_report && !$problems_reported;
+ msg( "log", "$filename doesn't exist (ignored for now)\n" )
+ if !$do_report;
+ msg( "log", "$filename doesn't exist (already reported)\n" )
+ if $problems_reported;
+ ++$errs;
+ }
+ elsif ($file->{"stats"}->[ST_SIZE] < $file->{"size"} && !$do_report) {
+ # could be an upload that isn't complete yet, be quiet,
+ # but don't process the file
+ msg( "log", "$filename is too small (ignored for now)\n" );
+ ++$errs;
+ }
+ elsif ($file->{"stats"}->[ST_SIZE] != $file->{"size"}) {
+ msg( "log,mail", "$filename has incorrect size; deleting it\n" );
+ rm( $filename );
+ ++$errs;
+ }
+ elsif (md5sum( $filename ) ne $file->{"md5"}) {
+ msg( "log,mail", "$filename has incorrect md5 checksum; ",
+ "deleting it\n" );
+ rm( $filename );
+ ++$errs;
+ }
+ }
+
+ if ($errs) {
+ if ((time - $upload_time) > $conf::bad_changes_timeout) {
+ # if a .changes fails for a really long time (several days
+ # or so), remove it and all associated files
+ msg( "log,mail",
+ "$changes couldn't be processed for ",
+ int($conf::bad_changes_timeout/(60*60)),
+ " hours and is now deleted\n" );
+ msg( "log,mail",
+ "All files it mentions are also removed:\n" );
+ msg( "log,mail", " ", join( ", ", @filenames ), "\n" );
+ rm( $changes, @filenames, $failure_file );
+ }
+ elsif ($do_report && !$problems_reported) {
+ # otherwise, send a problem report, if not done already
+ msg( "mail",
+ "Due to the errors above, the .changes file couldn't ",
+ "be processed.\n",
+ "Please fix the problems for the upload to happen.\n" );
+ # remember we already have sent a mail regarding this file
+ debug( "Sending problem report mail and setting SGID bit" );
+ my $mode = $changes_stats[ST_MODE] |= S_ISGID;
+ msg( "log", "chmod failed: $!" ) if (chmod ($mode, $changes) != 1);
+ }
+ # else: be quiet
+
+ return;
+ }
+
+ # if this upload already failed earlier, wait until the delay requirement
+ # is fulfilled
+ if ($retries > 0 && (time - $last_retry) <
+ ($retries == 1 ? $conf::upload_delay_1 : $conf::upload_delay_2)) {
+ msg( "log", "delaying retry of upload\n" );
+ return;
+ }
+
+ if ($conf::upload_method eq "ftp") {
+ return if !ftp_open();
+ }
+
+ # check if the job is already present on target
+ # (moved to here, to avoid bothering target as long as there are errors in
+ # the job)
+ if ($ls_l = is_on_target( $changes )) {
+ msg( "log,mail", "$changes is already present on target host:\n" );
+ msg( "log,mail", "$ls_l\n" );
+ msg( "mail", "Either you already uploaded it, or someone else ",
+ "came first.\n" );
+ msg( "log,mail", "Job $changes removed.\n" );
+ rm( $changes, @filenames, $failure_file );
+ return;
+ }
+
+ # clear sgid bit before upload, scp would copy it to target. We don't need
+ # it anymore, we know there are no problems if we come here. Also change
+ # mode of files to 644 if this should be done locally.
+ $changes_stats[ST_MODE] &= ~S_ISGID;
+ if (!$conf::chmod_on_target) {
+ $changes_stats[ST_MODE] &= ~0777;
+ $changes_stats[ST_MODE] |= 0644;
+ }
+ chmod +($changes_stats[ST_MODE]), $changes;
+
+ # try uploading to target
+ if (!copy_to_target( $changes, @filenames )) {
+ # if the upload failed, increment the retry counter and remember the
+ # current time; both things are written to the .failures file. Don't
+ # increment the fail counter if the error was due to incoming
+ # unwritable.
+ return if !$main::incoming_writable;
+ if (++$retries >= $conf::max_upload_retries) {
+ msg( "log,mail",
+ "$changes couldn't be uploaded for $retries times now.\n" );
+ msg( "log,mail",
+ "Giving up and removing it and its associated files:\n" );
+ msg( "log,mail", " ", join( ", ", @filenames ), "\n" );
+ rm( $changes, @filenames, $failure_file );
+ }
+ else {
+ $last_retry = time;
+ if (open( FAILS, ">$failure_file" )) {
+ print FAILS "$retries $last_retry\n";
+ close( FAILS );
+ chmod( 0600, $failure_file )
+ or die "Cannot set modes of $failure_file: $!\n";
+ }
+ push( @$keep_list, $failure_file );
+ debug( "now $retries failed uploads" );
+ msg( "mail",
+ "The upload will be retried in ",
+ print_time( $retries == 1 ? $conf::upload_delay_1 :
+ $conf::upload_delay_2 ), "\n" );
+ }
+ return;
+ }
+
+ # If the files were uploaded ok, remove them
+ rm( $changes, @filenames, $failure_file );
+
+ msg( "mail", "$changes uploaded successfully to $conf::target\n" );
+ msg( "mail", "along with the files:\n ",
+ join( "\n ", @filenames ), "\n" );
+ msg( "log", "$changes processed successfully (uploader $main::mail_addr)\n" );
+
+ # Check for files that have the same stem as the .changes (and weren't
+ # mentioned there) and delete them. It happens often enough that people
+ # upload a .orig.tar.gz where it isn't needed and also not in the
+ # .changes. Explicitly deleting it (and not waiting for the
+ # $stray_remove_timeout) reduces clutter in the queue dir and maybe also
+ # educates uploaders :-)
+
+# my $pattern = debian_file_stem( $changes );
+# my $spattern = substr( $pattern, 0, -1 ); # strip off '*' at end
+# my @other_files = glob($pattern);
+ # filter out files that have a Debian revision at all and a different
+ # revision. Those belong to a different upload.
+# if ($changes =~ /^\Q$spattern\E-([\d.+-]+)/) {
+# my $this_rev = $1;
+# @other_files = grep( !/^\Q$spattern\E-([\d.+-]+)/ || $1 eq $this_rev,
+# @other_files);
+ #}
+ # Also do not remove those files if a .changes is among them. Then there
+ # is probably a second upload for another version or another architecture.
+# if (@other_files && !grep( /\.changes$/, @other_files )) {
+# rm( @other_files );
+# msg( "mail", "\nThe following file(s) seemed to belong to the same ".
+# "upload, but weren't listed\n" );
+# msg( "mail", "in the .changes file:\n " );
+# msg( "mail", join( "\n ", @other_files ), "\n" );
+# msg( "mail", "They have been deleted.\n" );
+# msg( "log", "Deleted files in upload not in $changes: @other_files\n" );
+ #}
+}
+
+#
+# process one .commands file
+#
+sub process_commands($) {
+ my $commands = shift;
+ my( @cmds, $cmd, $pgplines, $signator );
+ local( *COMMANDS );
+
+ format_status_str( $main::current_changes, $commands );
+ $main::dstat = "c";
+ write_status_file() if $conf::statusdelay;
+
+ msg( "log", "processing $commands\n" );
+
+ # parse the .commands file
+ if (!open( COMMANDS, "<$commands" )) {
+ msg( "log", "Cannot open $commands: $!\n" );
+ return;
+ }
+ $pgplines = 0;
+ $main::mail_addr = "";
+ @cmds = ();
+ outer_loop: while( <COMMANDS> ) {
+ if (/^---+(BEGIN|END) PGP .*---+$/) {
+ ++$pgplines;
+ }
+ elsif (/^Uploader:\s*/i) {
+ chomp( $main::mail_addr = $' );
+ $main::mail_addr = $1 if $main::mail_addr =~ /<([^>]*)>/;
+ }
+ elsif (/^Commands:/i) {
+ $_ = $';
+ for(;;) {
+ s/^\s*(.*)\s*$/$1/; # delete whitespace at both ends
+ if (!/^\s*$/) {
+ push( @cmds, $_ );
+ debug( "includes cmd $_" );
+ }
+ last outer_loop if !defined( $_ = scalar(<COMMANDS>) );
+ chomp;
+ redo outer_loop if !/^\s/ || /^$/;
+ }
+ }
+ }
+ close( COMMANDS );
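+ # An illustrative .commands file body (signed with PGP/GnuPG just like a
+ # .changes; uploader address and file names are made up):
+ #   Uploader: Jane Maintainer <jane@example.org>
+ #   Commands:
+ #    rm foo_1.0-1_i386.deb
+ #    mv foo_1.0-1.dsc.old foo_1.0-1.dsc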
+
+ # some consistency checks
+ if (!$main::mail_addr || $main::mail_addr !~ /^\S+\@\S+\.\S+/) {
+ msg( "log,mail", "$commands contains no or bad Uploader: field: ".
+ "$main::mail_addr\n" );
+ msg( "log,mail", "cannot process $commands\n" );
+ $main::mail_addr = "";
+ goto remove;
+ }
+ msg( "log", "(command uploader $main::mail_addr)\n" );
+
+ if ($pgplines < 3) {
+ msg( "log,mail", "$commands isn't signed with PGP/GnuPG\n" );
+ goto remove;
+ }
+
+ # run PGP on the file to check the signature
+ if (!($signator = pgp_check( $commands ))) {
+ msg( "log,mail", "$commands has bad PGP/GnuPG signature!\n" );
+ remove:
+ msg( "log,mail", "Removing $commands\n" );
+ rm( $commands );
+ return;
+ }
+ elsif ($signator eq "LOCAL ERROR") {
+ # An error happened when starting pgp... Don't process the file,
+ # but also don't delete it
+ debug( "Can't PGP/GnuPG check $commands -- don't process it for now" );
+ return;
+ }
+ msg( "log", "(PGP/GnuPG signature by $signator)\n" );
+
+ # now process commands
+ msg( "mail", "Log of processing your commands file $commands:\n\n" );
+ foreach $cmd ( @cmds ) {
+ my @word = split( /\s+/, $cmd );
+ msg( "mail,log", "> @word\n" );
+ next if @word < 1;
+
+ if ($word[0] eq "rm") {
+ my( @files, $file, @removed );
+ foreach ( @word[1..$#word] ) {
+ if (m,/,) {
+ msg( "mail,log", "$_: filename may not contain slashes\n" );
+ }
+ elsif (/[*?[]/) {
+ # process wildcards
+ my $pat = quotemeta($_);
+ $pat =~ s/\\\*/.*/g;
+ $pat =~ s/\\\?/.?/g;
+ $pat =~ s/\\([][])/$1/g;
+ opendir( DIR, "." );
+ push( @files, grep /^$pat$/, readdir(DIR) );
+ closedir( DIR );
+ }
+ else {
+ push( @files, $_ );
+ }
+ }
+ if (!@files) {
+ msg( "mail,log", "No files to delete\n" );
+ }
+ else {
+ @removed = ();
+ foreach $file ( @files ) {
+ if (!-f $file) {
+ msg( "mail,log", "$file: no such file\n" );
+ }
+ elsif ($file =~ /$conf::keep_files/) {
+ msg( "mail,log", "$file is protected, cannot ".
+ "remove\n" );
+ }
+ elsif (!unlink( $file )) {
+ msg( "mail,log", "$file: rm: $!\n" );
+ }
+ else {
+ push( @removed, $file );
+ }
+ }
+ msg( "mail,log", "Files removed: @removed\n" ) if @removed;
+ }
+ }
+ elsif ($word[0] eq "mv") {
+ if (@word != 3) {
+ msg( "mail,log", "Wrong number of arguments\n" );
+ }
+ elsif ($word[1] =~ m,/,) {
+ msg( "mail,log", "$word[1]: filename may not contain slashes\n" );
+ }
+ elsif ($word[2] =~ m,/,) {
+ msg( "mail,log", "$word[2]: filename may not contain slashes\n" );
+ }
+ elsif (!-f $word[1]) {
+ msg( "mail,log", "$word[1]: no such file\n" );
+ }
+ elsif (-e $word[2]) {
+ msg( "mail,log", "$word[2]: file exists\n" );
+ }
+ elsif ($word[1] =~ /$conf::keep_files/) {
+ msg( "mail,log", "$word[1] is protected, cannot rename\n" );
+ }
+ else {
+ if (!rename( $word[1], $word[2] )) {
+ msg( "mail,log", "rename: $!\n" );
+ }
+ else {
+ msg( "mail,log", "OK\n" );
+ }
+ }
+ }
+ else {
+ msg( "mail,log", "unknown command $word[0]\n" );
+ }
+ }
+ rm( $commands );
+ msg( "log", "-- End of $commands processing\n" );
+}
+
+#
+# check if a file is already on target
+#
+sub is_on_target($) {
+ my $file = shift;
+ my $msg;
+ my $stat;
+
+ if ($conf::upload_method eq "ssh") {
+ ($msg, $stat) = ssh_cmd( "ls -l $file" );
+ }
+ elsif ($conf::upload_method eq "ftp") {
+ my $err;
+ ($msg, $err) = ftp_cmd( "dir", $file );
+ if ($err) {
+ $stat = 1;
+ $msg = $err;
+ }
+ elsif (!$msg) {
+ $stat = 1;
+ $msg = "ls: no such file\n";
+ }
+ else {
+ $stat = 0;
+ $msg = join( "\n", @$msg );
+ }
+ }
+ else {
+ ($msg, $stat) = local_cmd( "$conf::ls -l $file" );
+ }
+ chomp( $msg );
+ debug( "exit status: $stat, output was: $msg" );
+
+ return "" if $stat && $msg =~ /no such file/i; # file not present
+ msg( "log", "strange ls -l output on target:\n", $msg ), return ""
+ if $stat || $@; # some other error, but still try to upload
+
+ # ls -l returned 0 -> file already there
+ $msg =~ s/\s\s+/ /g; # make multiple spaces into one, to save space
+ return $msg;
+}
+
+#
+# copy a list of files to target
+#
+sub copy_to_target(@) {
+ my @files = @_;
+ my( @md5sum, @expected_files, $sum, $name, $msgs, $stat );
+
+ $main::dstat = "u";
+ write_status_file() if $conf::statusdelay;
+
+ # copy the files
+ if ($conf::upload_method eq "ssh") {
+ ($msgs, $stat) = scp_cmd( @files );
+ goto err if $stat;
+ }
+ elsif ($conf::upload_method eq "ftp") {
+ my($rv, $file);
+ foreach $file (@files) {
+ ($rv, $msgs) = ftp_cmd( "put", $file );
+ goto err if !$rv;
+ }
+ }
+ else {
+ ($msgs, $stat) = local_cmd( "$conf::cp @files $conf::targetdir", 'NOCD' );
+ goto err if $stat;
+ }
+
+ # check md5sums or sizes on target against our own
+ my $have_md5sums = 1;
+ if ($conf::upload_method eq "ssh") {
+ ($msgs, $stat) = ssh_cmd( "md5sum @files" );
+ goto err if $stat;
+ @md5sum = split( "\n", $msgs );
+ }
+ elsif ($conf::upload_method eq "ftp") {
+ my ($rv, $err, $file);
+ foreach $file (@files) {
+ ($rv, $err) = ftp_cmd( "quot", "site", "md5sum", $file );
+ if ($err) {
+ next if ftp_code() == 550; # file not found
+ if (ftp_code() == 500) { # unimplemented
+ $have_md5sums = 0;
+ goto get_sizes_instead;
+ }
+ $msgs = $err;
+ goto err;
+ }
+ chomp( my $t = ftp_response() );
+ push( @md5sum, $t );
+ }
+ if (!$have_md5sums) {
+ get_sizes_instead:
+ foreach $file (@files) {
+ ($rv, $err) = ftp_cmd( "size", $file );
+ if ($err) {
+ next if ftp_code() == 550; # file not found
+ $msgs = $err;
+ goto err;
+ }
+ push( @md5sum, "$rv $file" );
+ }
+ }
+ }
+ else {
+ ($msgs, $stat) = local_cmd( "$conf::md5sum @files" );
+ goto err if $stat;
+ @md5sum = split( "\n", $msgs );
+ }
+
+ @expected_files = @files;
+ foreach (@md5sum) {
+ chomp;
+ ($sum,$name) = split;
+ next if !grep { $_ eq $name } @files; # a file we didn't upload??
+ next if $sum eq "md5sum:"; # looks like an error message
+ if (($have_md5sums && $sum ne md5sum( $name )) ||
+ (!$have_md5sums && $sum != (-s $name))) {
+ msg( "log,mail", "Upload of $name to $conf::target failed ",
+ "(".($have_md5sums ? "md5sum" : "size")." mismatch)\n" );
+ goto err;
+ }
+ # seen that file, remove it from expect list
+ @expected_files = map { $_ eq $name ? () : $_ } @expected_files;
+ }
+ if (@expected_files) {
+ msg( "log,mail", "Failed to upload the files\n" );
+ msg( "log,mail", " ", join( ", ", @expected_files ), "\n" );
+ msg( "log,mail", "(Not present on target after upload)\n" );
+ goto err;
+ }
+
+ if ($conf::chmod_on_target) {
+ # change file's mode explicitly to 644 on target
+ if ($conf::upload_method eq "ssh") {
+ ($msgs, $stat) = ssh_cmd( "chmod 644 @files" );
+ goto err if $stat;
+ }
+ elsif ($conf::upload_method eq "ftp") {
+ my ($rv, $file);
+ foreach $file (@files) {
+ ($rv, $msgs) = ftp_cmd( "quot", "site", "chmod", "644", $file );
+ msg( "log", "Can't chmod $file on target:\n$msgs" )
+ if $msgs;
+ goto err if !$rv;
+ }
+ }
+ else {
+ ($msgs, $stat) = local_cmd( "$conf::chmod 644 @files" );
+ goto err if $stat;
+ }
+ }
+
+ $main::dstat = "c";
+ write_status_file() if $conf::statusdelay;
+ return 1;
+
+ err:
+ msg( "log,mail", "Upload to $conf::target failed",
+ $? ? ", last exit status ".sprintf( "%s", $?>>8 ) : "", "\n" );
+ msg( "log,mail", "Error messages:\n", $msgs )
+ if $msgs;
+
+ # If "permission denied" was among the errors, test if the incoming is
+ # writable at all.
+ if ($msgs =~ /(permission denied|read-?only file)/i) {
+ if (!check_incoming_writable()) {
+ msg( "log,mail", "(The incoming directory seems to be ",
+ "unwritable.)\n" );
+ }
+ }
+
+ # remove bad files or an incomplete upload on target
+ if ($conf::upload_method eq "ssh") {
+ ssh_cmd( "rm -f @files" );
+ }
+ elsif ($conf::upload_method eq "ftp") {
+ my $file;
+ foreach $file (@files) {
+ my ($rv, $err);
+ ($rv, $err) = ftp_cmd( "delete", $file );
+ msg( "log", "Can't delete $file on target:\n$err" )
+ if $err;
+ }
+ }
+ else {
+ my @tfiles = map { "$conf::targetdir/$_" } @files;
+ debug( "executing unlink(@tfiles)" );
+ rm( @tfiles );
+ }
+ $main::dstat = "c";
+ write_status_file() if $conf::statusdelay;
+ return 0;
+}
+
+#
+# check if a file is correctly signed with PGP
+#
+sub pgp_check($) {
+ my $file = shift;
+ my $output = "";
+ my $signator;
+ my $found = 0;
+ my $stat;
+ local( *PIPE );
+
+ $stat = 1;
+ if (-x $conf::gpg) {
+ debug( "executing $conf::gpg --no-options --batch ".
+ "--no-default-keyring --always-trust ".
+ "--keyring ". join (" --keyring ",@conf::keyrings).
+ " --verify '$file'" );
+ if (!open( PIPE, "$conf::gpg --no-options --batch ".
+ "--no-default-keyring --always-trust ".
+ "--keyring " . join (" --keyring ",@conf::keyrings).
+ " --verify '$file'".
+ " 2>&1 |" )) {
+ msg( "log", "Can't open pipe to $conf::gpg: $!\n" );
+ return "LOCAL ERROR";
+ }
+ $output .= $_ while( <PIPE> );
+ close( PIPE );
+ $stat = $?;
+ }
+
+ if ($stat) {
+ msg( "log,mail", "GnuPG signature check failed on $file\n" );
+ msg( "mail", $output );
+ msg( "log,mail", "(Exit status ", $stat >> 8, ")\n" );
+ return "";
+ }
+
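+ # A matching "good signature" line in the gpg output typically looks like
+ #   gpg: Good signature from "Jane Maintainer <jane@example.org>"
+ # (the quoted user ID is what gets captured as $signator below).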
+ $output =~ /^(gpg: )?good signature from (user )?"(.*)"\.?$/im;
+ ($signator = $3) ||= "unknown signator";
+ if ($conf::debug) {
+ debug( "GnuPG signature ok (by $signator)" );
+ }
+ return $signator;
+}
+
+
+# ---------------------------------------------------------------------------
+# the status daemon
+# ---------------------------------------------------------------------------
+
+#
+# fork a subprocess that watches the 'status' FIFO
+#
+# that process blocks until someone opens the FIFO, then sends a
+# signal (SIGUSR1) to the main process, expects the status data on its
+# stdin, and writes them to the FIFO
+#
+sub fork_statusd() {
+ my $statusd_pid;
+ my $main_pid = $$;
+ my $errs;
+ local( *STATFIFO );
+
+ $statusd_pid = open( STATUSD, "|-" );
+ die "cannot fork: $!\n" if !defined( $statusd_pid );
+ # parent just returns
+ if ($statusd_pid) {
+ msg( "log", "forked status daemon (pid $statusd_pid)\n" );
+ return $statusd_pid;
+ }
+ # child: the status FIFO daemon
+
+ # ignore SIGPIPE here, in case someone closes the FIFO without completely
+ # reading it
+ $SIG{"PIPE"} = "IGNORE";
+ # also ignore SIGCLD, we don't want to inherit the restart-statusd handler
+ # from our parent
+ $SIG{"CHLD"} = "DEFAULT";
+
+ rm( $conf::statusfile );
+ $errs = `$conf::mkfifo $conf::statusfile`;
+ die "$main::progname: cannot create named pipe $conf::statusfile: $errs"
+ if $?;
+ chmod( 0644, $conf::statusfile )
+ or die "Cannot set modes of $conf::statusfile: $!\n";
+
+ # close log file, so that log rotating works
+ close( LOG );
+ close( STDOUT );
+ close( STDERR );
+
+ while( 1 ) {
+ my( $status, $mup, $incw, $ds, $next_run, $last_ping, $currch, $l );
+
+ # open the FIFO for writing; this blocks until someone (probably ftpd)
+ # opens it for reading
+ open( STATFIFO, ">$conf::statusfile" )
+ or die "Cannot open $conf::statusfile\n";
+ select( STATFIFO );
+ # tell main daemon to send us status infos
+ kill( $main::signo{"USR1"}, $main_pid );
+
+ # get the infos from stdin; must loop until enough bytes received!
+ my $expect_len = 3 + 2*STATNUM_LEN + STATSTR_LEN;
+ for( $status = ""; ($l = length($status)) < $expect_len; ) {
+ sysread( STDIN, $status, $expect_len-$l, $l );
+ }
+
+ # disassemble the status byte stream
+ my $pos = 0;
+ foreach ( [ mup => 1 ], [ incw => 1 ], [ ds => 1 ],
+ [ next_run => STATNUM_LEN ], [ last_ping => STATNUM_LEN ],
+ [ currch => STATSTR_LEN ] ) {
+ eval "\$$_->[0] = substr( \$status, $pos, $_->[1] );";
+ $pos += $_->[1];
+ }
+ $currch =~ s/\n+//g;
+
+ print_status( $mup, $incw, $ds, $next_run, $last_ping, $currch );
+ close( STATFIFO );
+
+ # This sleep is necessary so that we don't reopen the FIFO
+ # immediately, in case the reader hasn't closed it yet by the time we get
+ # to the open again. Is there a better solution for this??
+ sleep 1;
+ }
+}
+
+#
+# update the status file, in case we use a plain file and not a FIFO
+#
+sub write_status_file() {
+
+ return if !$conf::statusfile;
+
+ open( STATFILE, ">$conf::statusfile" ) or
+ (msg( "log", "Could not open $conf::statusfile: $!\n" ), return);
+ my $oldsel = select( STATFILE );
+
+ print_status( $main::target_up, $main::incoming_writable, $main::dstat,
+ $main::next_run, $main::last_ping_time,
+ $main::current_changes );
+
+ select( $oldsel );
+ close( STATFILE );
+}
+
+sub print_status($$$$$$) {
+ my $mup = shift;
+ my $incw = shift;
+ my $ds = shift;
+ my $next_run = shift;
+ my $last_ping = shift;
+ my $currch = shift;
+ my $approx;
+ my $version;
+
+ ($version = 'Release: 0.9 $Revision: 1.51 $') =~ s/\$ ?//g;
+ print "debianqueued $version\n";
+
+ $approx = $conf::statusdelay ? "approx. " : "";
+
+ if ($mup eq "0") {
+ print "$conf::target is down, queue pausing\n";
+ return;
+ }
+ elsif ($conf::upload_method ne "copy") {
+ print "$conf::target seems to be up, last ping $approx",
+ print_time(time-$last_ping), " ago\n";
+ }
+
+ if ($incw eq "0") {
+ print "The incoming directory is not writable, queue pausing\n";
+ return;
+ }
+
+ if ($ds eq "i") {
+ print "Next queue check in $approx",print_time($next_run-time),"\n";
+ return;
+ }
+ elsif ($ds eq "c") {
+ print "Checking queue directory\n";
+ }
+ elsif ($ds eq "u") {
+ print "Uploading to $conf::target\n";
+ }
+ else {
+ print "Bad status data from daemon: \"$mup$incw$ds\"\n";
+ return;
+ }
+
+ print "Current job is $currch\n" if $currch;
+}
+
+#
+# format a number for sending to statusd (fixed length STATNUM_LEN)
+#
+sub format_status_num(\$$) {
+ my $varref = shift;
+ my $num = shift;
+
+ $$varref = sprintf "%".STATNUM_LEN."d", $num;
+}
+
+#
+# format a string for sending to statusd (fixed length STATSTR_LEN)
+#
+sub format_status_str(\$$) {
+ my $varref = shift;
+ my $str = shift;
+
+ $$varref = substr( $str, 0, STATSTR_LEN );
+ $$varref .= "\n" x (STATSTR_LEN - length($$varref));
+}
+
+#
+# send a status string to the status daemon
+#
+# Avoid all operations that could call malloc() here! Most libc
+# implementations aren't reentrant, so we may not call it from a
+# signal handler. So use only already-defined variables.
+#
+sub send_status() {
+ local $! = 0; # preserve errno
+
+ # re-setup handler, in case we have broken SysV signals
+ $SIG{"USR1"} = \&send_status;
+
+ syswrite( STATUSD, $main::target_up, 1 );
+ syswrite( STATUSD, $main::incoming_writable, 1 );
+ syswrite( STATUSD, $main::dstat, 1 );
+ syswrite( STATUSD, $main::next_run, STATNUM_LEN );
+ syswrite( STATUSD, $main::last_ping_time, STATNUM_LEN );
+ syswrite( STATUSD, $main::current_changes, STATSTR_LEN );
+}
+
+
+# ---------------------------------------------------------------------------
+# FTP functions
+# ---------------------------------------------------------------------------
+
+#
+# open FTP connection to target host if not already open
+#
+sub ftp_open() {
+
+ if ($main::FTP_chan) {
+ # is already open, but might have timed out; test with a cwd
+ return $main::FTP_chan if $main::FTP_chan->cwd( $conf::targetdir );
+ # cwd didn't work, channel is closed, try to reopen it
+ $main::FTP_chan = undef;
+ }
+
+ if (!($main::FTP_chan = Net::FTP->new( $conf::target,
+ Debug => $conf::ftpdebug,
+ Timeout => $conf::ftptimeout ))) {
+ msg( "log,mail", "Cannot open FTP server $conf::target\n" );
+ goto err;
+ }
+ if (!$main::FTP_chan->login()) {
+ msg( "log,mail", "Anonymous login on FTP server $conf::target failed\n" );
+ goto err;
+ }
+ if (!$main::FTP_chan->binary()) {
+ msg( "log,mail", "Can't set binary FTP mode on $conf::target\n" );
+ goto err;
+ }
+ if (!$main::FTP_chan->cwd( $conf::targetdir )) {
+ msg( "log,mail", "Can't cd to $conf::targetdir on $conf::target\n" );
+ goto err;
+ }
+ debug( "opened FTP channel to $conf::target" );
+ return 1;
+
+ err:
+ $main::FTP_chan = undef;
+ return 0;
+}
+
+sub ftp_cmd($@) {
+ my $cmd = shift;
+ my ($rv, $err);
+ my $direct_resp_cmd = ($cmd eq "quot");
+
+ debug( "executing FTP::$cmd(".join(", ",@_).")" );
+ $SIG{"ALRM"} = sub { die "timeout in FTP::$cmd\n" } ;
+ alarm( $conf::remote_timeout );
+ eval { $rv = $main::FTP_chan->$cmd( @_ ); };
+ alarm( 0 );
+ $err = "";
+ $rv = (ftp_code() =~ /^2/) ? 1 : 0 if $direct_resp_cmd;
+ if ($@) {
+ $err = $@;
+ undef $rv;
+ }
+ elsif (!$rv) {
+ $err = ftp_response();
+ }
+ return ($rv, $err);
+}
+
+sub ftp_close() {
+ if ($main::FTP_chan) {
+ $main::FTP_chan->quit();
+ $main::FTP_chan = undef;
+ }
+ return 1;
+}
+
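+# Note: ftp_response() and ftp_code() below reach into Net::Cmd's internal
+# fields ('net_cmd_resp'/'net_cmd_code') stored in the Net::FTP object's
+# glob, rather than going through accessor methods; if Net::Cmd's internals
+# change, these two helpers need to be adjusted.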
+sub ftp_response() {
+ return join( '', @{${*$main::FTP_chan}{'net_cmd_resp'}} );
+}
+
+sub ftp_code() {
+ return ${*$main::FTP_chan}{'net_cmd_code'};
+}
+
+sub ftp_error() {
+ my $code = ftp_code();
+ return ($code =~ /^[45]/) ? 1 : 0;
+}
+
+# ---------------------------------------------------------------------------
+# utility functions
+# ---------------------------------------------------------------------------
+
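+#
+# run a command in $conf::targetdir on the target host via ssh; returns
+# ($msg, $stat), i.e. the combined output and the exit status
+#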
+sub ssh_cmd($) {
+ my $cmd = shift;
+ my ($msg, $stat);
+
+ my $ecmd = "$conf::ssh $conf::ssh_options $conf::target ".
+ "-l $conf::targetlogin \'cd $conf::targetdir; $cmd\'";
+ debug( "executing $ecmd" );
+ $SIG{"ALRM"} = sub { die "timeout in ssh command\n" } ;
+ alarm( $conf::remote_timeout );
+ eval { $msg = `$ecmd 2>&1`; };
+ alarm( 0 );
+ if ($@) {
+ $msg = $@;
+ $stat = 1;
+ }
+ else {
+ $stat = $?;
+ }
+ return ($msg, $stat);
+}
+
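+#
+# copy the given files to $conf::targetdir on the target host via scp;
+# returns ($msg, $stat) like ssh_cmd
+#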
+sub scp_cmd(@) {
+ my ($msg, $stat);
+
+ my $ecmd = "$conf::scp $conf::ssh_options @_ ".
+ "$conf::targetlogin\@$conf::target:$conf::targetdir";
+ debug( "executing $ecmd" );
+ $SIG{"ALRM"} = sub { die "timeout in scp\n" } ;
+ alarm( $conf::remote_timeout );
+ eval { $msg = `$ecmd 2>&1`; };
+ alarm( 0 );
+ if ($@) {
+ $msg = $@;
+ $stat = 1;
+ }
+ else {
+ $stat = $?;
+ }
+ return ($msg, $stat);
+}
+
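+#
+# run a command locally (for the "copy" upload method), after cd-ing to
+# $conf::targetdir unless $nocd is set; returns ($msg, $stat)
+#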
+sub local_cmd($;$) {
+ my $cmd = shift;
+ my $nocd = shift;
+ my ($msg, $stat);
+
+ my $ecmd = ($nocd ? "" : "cd $conf::targetdir; ") . $cmd;
+ debug( "executing $ecmd" );
+ $msg = `($ecmd) 2>&1`;
+ $stat = $?;
+ return ($msg, $stat);
+
+}
+
+#
+# check if target is alive (code stolen from Net::Ping.pm)
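+# (with the "copy" upload method no ping is done; the target is local and
+# always considered up)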
+#
+sub check_alive(;$) {
+ my $timeout = shift;
+ my( $saddr, $ret, $target_ip );
+ local( *PINGSOCK );
+
+ if ($conf::upload_method eq "copy") {
+ format_status_num( $main::last_ping_time, time );
+ $main::target_up = 1;
+ return;
+ }
+
+ $timeout ||= 30;
+
+ if (!($target_ip = (gethostbyname($conf::target))[4])) {
+ msg( "log", "Cannot get IP address of $conf::target\n" );
+ $ret = 0;
+ goto out;
+ }
+ $saddr = pack( 'S n a4 x8', AF_INET, $main::echo_port, $target_ip );
+ $SIG{'ALRM'} = sub { die } ;
+ alarm( $timeout );
+
+ $ret = $main::tcp_proto; # avoid warnings about unused variable
+ $ret = 0;
+ eval <<'EOM' ;
+ return unless socket( PINGSOCK, PF_INET, SOCK_STREAM, $main::tcp_proto );
+ return unless connect( PINGSOCK, $saddr );
+ $ret = 1;
+EOM
+ alarm( 0 );
+ close( PINGSOCK );
+ msg( "log", "pinging $conf::target: " . ($ret ? "ok" : "down") . "\n" );
+ out:
+ $main::target_up = $ret ? "1" : "0";
+ format_status_num( $main::last_ping_time, time );
+ write_status_file() if $conf::statusdelay;
+}
+
+#
+# check if incoming dir on target is writable
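+# (by creating and removing a test file there with whatever upload method
+# is configured)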
+#
+sub check_incoming_writable() {
+ my $testfile = ".debianqueued-testfile";
+ my ($msg, $stat);
+
+ if ($conf::upload_method eq "ssh") {
+ ($msg, $stat) = ssh_cmd( "rm -f $testfile; touch $testfile; ".
+ "rm -f $testfile" );
+ }
+ elsif ($conf::upload_method eq "ftp") {
+ my $file = "junk-for-writable-test-".format_time();
+ $file =~ s/[ :.]/-/g;
+ local( *F );
+ open( F, ">$file" ); close( F );
+ my $rv;
+ ($rv, $msg) = ftp_cmd( "put", $file );
+ $stat = 0;
+ $msg = "" if !defined $msg;
+ unlink $file;
+ ftp_cmd( "delete", $file );
+ }
+ elsif ($conf::upload_method eq "copy") {
+ ($msg, $stat) = local_cmd( "rm -f $testfile; touch $testfile; ".
+ "rm -f $testfile" );
+ }
+ chomp( $msg );
+ debug( "exit status: $stat, output was: $msg" );
+
+ if (!$stat) {
+    # change incoming_writable only if the test command itself didn't fail
+ $main::incoming_writable =
+ ($msg =~ /(permission denied|read-?only file|cannot create)/i) ? "0":"1";
+ }
+ else {
+ debug( "local error, keeping old status" );
+ }
+ debug( "incoming_writable = $main::incoming_writable" );
+ write_status_file() if $conf::statusdelay;
+ return $main::incoming_writable;
+}
+
+#
+# remove a list of files, log failing ones
+#
+sub rm(@) {
+ my $done = 0;
+
+ foreach ( @_ ) {
+ (unlink $_ and ++$done)
+ or $! == ENOENT or msg( "log", "Could not delete $_: $!\n" );
+ }
+ return $done;
+}
+
+#
+# get md5 checksum of a file
+#
+sub md5sum($) {
+ my $file = shift;
+ my $line;
+
+ chomp( $line = `$conf::md5sum $file` );
+ debug( "md5sum($file): ", $? ? "exit status $?" :
+ $line =~ /^(\S+)/ ? $1 : "match failed" );
+ return $? ? "" : $line =~ /^(\S+)/ ? $1 : "";
+}
+
+#
+# check if a file probably belongs to a Debian upload
+#
+sub is_debian_file($) {
+ my $file = shift;
+ return $file =~ /\.(deb|dsc|(diff|tar)\.gz)$/ &&
+ $file !~ /\.orig\.tar\.gz/;
+}
+
+#
+# try to extract the maintainer email address from a non-.changes file
+# return "" if not possible
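+# (.diff.gz: grep the debian/control patch; .deb: ar + tar; .dsc: plain
+#  grep; .tar.gz: extract */debian/control with tar)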
+#
+sub get_maintainer($) {
+ my $file = shift;
+ my $maintainer = "";
+ local( *F );
+
+ if ($file =~ /\.diff\.gz$/) {
+ # parse a diff
+ open( F, "$conf::gzip -dc '$file' 2>/dev/null |" ) or return "";
+ while( <F> ) {
+ # look for header line of a file */debian/control
+ last if m,^\+\+\+\s+[^/]+/debian/control(\s+|$),;
+ }
+ while( <F> ) {
+ last if /^---/; # end of control file patch, no Maintainer: found
+ # inside control file patch look for Maintainer: field
+ $maintainer = $1, last if /^\+Maintainer:\s*(.*)$/i;
+ }
+ while( <F> ) { } # read to end of file to avoid broken pipe
+ close( F ) or return "";
+ }
+ elsif ($file =~ /\.(deb|dsc|tar\.gz)$/) {
+ if ($file =~ /\.deb$/ && $conf::ar) {
+ # extract control.tar.gz from .deb with ar, then let tar extract
+ # the control file itself
+ open( F, "($conf::ar p '$file' control.tar.gz | ".
+ "$conf::tar -xOf - ".
+ "--use-compress-program $conf::gzip ".
+ "control) 2>/dev/null |" )
+ or return "";
+ }
+ elsif ($file =~ /\.dsc$/) {
+ # just do a plain grep
+ debug( "get_maint: .dsc, no cmd" );
+ open( F, "<$file" ) or return "";
+ }
+ elsif ($file =~ /\.tar\.gz$/) {
+ # let tar extract a file */debian/control
+ open(F, "$conf::tar -xOf '$file' ".
+ "--use-compress-program $conf::gzip ".
+ "\\*/debian/control 2>&1 |")
+ or return "";
+ }
+ else {
+ return "";
+ }
+ while( <F> ) {
+ $maintainer = $1, last if /^Maintainer:\s*(.*)$/i;
+ }
+ close( F ) or return "";
+ }
+
+ return $maintainer;
+}
+
+#
+# return a pattern that matches all files that probably belong to one job
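+# (e.g. "foo_1.2-3.diff.gz" yields "foo_1.2*")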
+#
+sub debian_file_stem($) {
+ my $file = shift;
+ my( $pkg, $version );
+
+ # strip file suffix
+ $file =~ s,\.(deb|dsc|changes|(orig\.)?tar\.gz|diff\.gz)$,,;
+  # if the name isn't of the form name_version (*_*), we can't derive a
+  # stem; just return the file's name
+ return $file if !($file =~ /^([^_]+)_([^_]+)/);
+ ($pkg, $version) = ($1, $2);
+ # strip Debian revision from version
+ $version =~ s/^(.*)-[\d.+-]+$/$1/;
+
+ return "${pkg}_${version}*";
+}
+
+#
+# output a message to several destinations
+#
+# first arg is a comma-separated list of destinations; valid are "log"
+# and "mail"; rest is stuff to be printed, just as with print
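+# e.g.: msg( "log,mail", "Cannot open FTP server $conf::target\n" );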
+#
+sub msg($@) {
+ my @dest = split( ',', shift );
+
+ if (grep /log/, @dest ) {
+ my $now = format_time();
+ print LOG "$now ", @_;
+ }
+
+ if (grep /mail/, @dest ) {
+ $main::mail_text .= join( '', @_ );
+ }
+}
+
+#
+# print a debug message, if $conf::debug is true
+#
+sub debug(@) {
+ return if !$conf::debug;
+ my $now = format_time();
+ print LOG "$now DEBUG ", @_, "\n";
+}
+
+#
+# initialize the "mail" destination of msg() (this clears text,
+# address, subject, ...)
+#
+sub init_mail(;$) {
+ my $file = shift;
+
+ $main::mail_addr = "";
+ $main::mail_text = "";
+ $main::mail_subject = $file ? "Processing of $file" : "";
+}
+
+#
+# finalize mail to be sent from msg(): check if something is present, and
+# then send it out
+#
+sub finish_mail() {
+ local( *MAIL );
+
+ debug( "No mail for $main::mail_addr" )
+ if $main::mail_addr && !$main::mail_text;
+ return unless $main::mail_addr && $main::mail_text;
+
+ if (!send_mail($main::mail_addr, $main::mail_subject, $main::mail_text)) {
+ # store this mail in memory so it isn't lost if executing sendmail
+ # failed.
+ push( @main::stored_mails, { addr => $main::mail_addr,
+ subject => $main::mail_subject,
+ text => $main::mail_text } );
+ }
+ init_mail();
+
+ # try to send out stored mails
+ my $mailref;
+ while( $mailref = shift(@main::stored_mails) ) {
+ if (!send_mail( $mailref->{'addr'}, $mailref->{'subject'},
+ $mailref->{'text'} )) {
+ unshift( @main::stored_mails, $mailref );
+ last;
+ }
+ }
+}
+
+#
+# send one mail
+#
+sub send_mail($$$) {
+ my $addr = shift;
+ my $subject = shift;
+ my $text = shift;
+
+ debug( "Sending mail to $addr" );
+ debug( "executing $conf::mail -s '$subject' '$addr'" );
+ if (!open( MAIL, "|$conf::mail -s '$subject' '$addr'" )) {
+ msg( "log", "Could not open pipe to $conf::mail: $!\n" );
+ return 0;
+ }
+ print MAIL $text;
+ print MAIL "\nGreetings,\n\n\tYour Debian queue daemon\n";
+ if (!close( MAIL )) {
+ msg( "log", "$conf::mail failed (exit status ", $? >> 8, ")\n" );
+ return 0;
+ }
+ return 1;
+}
+
+#
+# try to find a mail address for a name in the keyrings
+#
+sub try_to_get_mail_addr($$) {
+ my $name = shift;
+ my $listref = shift;
+
+ @$listref = ();
+ open( F, "$conf::gpg --no-options --batch --no-default-keyring ".
+ "--always-trust --keyring ".
+ join (" --keyring ",@conf::keyrings).
+ " --list-keys |" )
+ or return "";
+ while( <F> ) {
+ if (/^pub / && / $name /) {
+ /<([^>]*)>/;
+ push( @$listref, $1 );
+ }
+ }
+ close( F );
+
+ return (@$listref >= 1) ? $listref->[0] : "";
+}
+
+#
+# return current time as string
+#
+sub format_time() {
+ my $t;
+
+ # omit weekday and year for brevity
+ ($t = localtime) =~ /^\w+\s(.*)\s\d+$/;
+ return $1;
+}
+
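+#
+# format a number of seconds as H:MM:SS, e.g. print_time(3661) gives "1:01:01"
+#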
+sub print_time($) {
+ my $secs = shift;
+ my $hours = int($secs/(60*60));
+
+ $secs -= $hours*60*60;
+ return sprintf "%d:%02d:%02d", $hours, int($secs/60), $secs % 60;
+}
+
+#
+# block some signals during queue processing
+#
+# This is just to avoid data inconsistencies and uploads aborted in the
+# middle. Only "soft" signals (SIGINT and SIGTERM) are blocked; use harder
+# ones if you really want to kill the daemon at once.
+#
+sub block_signals() {
+ POSIX::sigprocmask( SIG_BLOCK, $main::block_sigset );
+}
+
+sub unblock_signals() {
+ POSIX::sigprocmask( SIG_UNBLOCK, $main::block_sigset );
+}
+
+#
+# process SIGHUP: close log file and reopen it (for logfile cycling)
+#
+sub close_log($) {
+ close( LOG );
+ close( STDOUT );
+ close( STDERR );
+
+ open( LOG, ">>$conf::logfile" )
+ or die "Cannot open my logfile $conf::logfile: $!\n";
+ chmod( 0644, $conf::logfile )
+ or msg( "log", "Cannot set modes of $conf::logfile: $!\n" );
+ select( (select(LOG), $| = 1)[0] );
+
+ open( STDOUT, ">&LOG" )
+ or msg( "log", "$main::progname: Can't redirect stdout to ".
+ "$conf::logfile: $!\n" );
+ open( STDERR, ">&LOG" )
+ or msg( "log", "$main::progname: Can't redirect stderr to ".
+ "$conf::logfile: $!\n" );
+ msg( "log", "Restart after SIGHUP\n" );
+}
+
+#
+# process SIGCHLD: check if it was our statusd process
+#
+sub kid_died($) {
+ my $pid;
+
+  # reap statusd, so that it isn't a zombie when we later kill(0) it
+ waitpid( $main::statusd_pid, WNOHANG );
+
+# Uncomment the following line if your Perl uses unreliable System V signals
+# (i.e. if handlers are reset to default when a signal is delivered).
+# (Unfortunately, the re-setup can't be done unconditionally, since on some
+# systems this will cause the SIGCHLD to be delivered again if there are
+# still unreaped children :-(( )
+
+# $SIG{"CHLD"} = \&kid_died; # resetup handler for SysV
+}
+
+sub restart_statusd() {
+ # restart statusd if it died
+ if (!kill( 0, $main::statusd_pid)) {
+ close( STATUSD ); # close out pipe end
+ $main::statusd_pid = fork_statusd();
+ }
+}
+
+#
+# process a fatal signal: cleanup and exit
+#
+sub fatal_signal($) {
+ my $signame = shift;
+ my $sig;
+
+ # avoid recursions of fatal_signal in case of BSD signals
+ foreach $sig ( qw( ILL ABRT BUS FPE SEGV PIPE ) ) {
+ $SIG{$sig} = "DEFAULT";
+ }
+
+ if ($$ == $main::maind_pid) {
+ # only the main daemon should do this
+ kill( $main::signo{"TERM"}, $main::statusd_pid )
+ if defined $main::statusd_pid;
+ unlink( $conf::statusfile, $conf::pidfile );
+ }
+ msg( "log", "Caught SIG$signame -- exiting (pid $$)\n" );
+ exit 1;
+}
+
+
+# Local Variables:
+# tab-width: 4
+# fill-column: 78
+# End:
--- /dev/null
+#!/usr/bin/perl -w
+#
+# dqueued-watcher -- for regularly watching the queue daemon
+#
+# This script is intended to be run periodically (e.g. from cron) to check
+# that everything is ok with debianqueued. If the daemon isn't running, it
+# notifies the maintainer. It also checks whether a new Debian keyring is
+# available (in a Debian mirror area, for instance) and then updates the
+# keyring used by debianqueued.
+#
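+# A typical crontab entry (path and schedule are only an example) could be:
+#   17 * * * *   /path/to/dqueued-watcher
+#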
+# Copyright (C) 1997 Roman Hodek <Roman.Hodek@informatik.uni-erlangen.de>
+#
+# This program is free software. You can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation: either version 2 or
+# (at your option) any later version.
+# This program comes with ABSOLUTELY NO WARRANTY!
+#
+# $Id: dqueued-watcher,v 1.28 1999/07/08 09:43:22 ftplinux Exp $
+#
+# $Log: dqueued-watcher,v $
+# Revision 1.28 1999/07/08 09:43:22 ftplinux
+# Bumped release number to 0.9
+#
+# Revision 1.27 1999/07/07 11:58:22 ftplinux
+# Also update gpg keyring if $conf::gpg_keyring is set.
+#
+# Revision 1.26 1998/07/06 14:24:36 ftplinux
+# Some changes to handle debian-keyring.tar.gz files which expand to a
+# directory including a date.
+#
+# Revision 1.25 1998/05/14 14:21:45 ftplinux
+# Bumped release number to 0.8
+#
+# Revision 1.24 1998/03/30 12:31:05 ftplinux
+# Don't count "already reported" or "ignored for now" errors as .changes errors.
+# Also list files for several error types.
+# Also print out names of processed jobs.
+#
+# Revision 1.23 1998/03/30 11:27:37 ftplinux
+# If called with args, make summaries for the log files given.
+# make_summary: New arg $to_stdout, for printing report directly.
+#
+# Revision 1.22 1998/03/23 14:05:15 ftplinux
+# Bumped release number to 0.7
+#
+# Revision 1.21 1997/12/16 13:19:29 ftplinux
+# Bumped release number to 0.6
+#
+# Revision 1.20 1997/11/20 15:18:48 ftplinux
+# Bumped release number to 0.5
+#
+# Revision 1.19 1997/10/31 12:26:31 ftplinux
+# Again added new counters in make_summary: suspicious_files,
+# transient_changes_errs.
+# Extended tests for bad_changes.
+# Quotes in pattern seem not to work, replaced by '.'.
+#
+# Revision 1.18 1997/10/30 14:17:32 ftplinux
+# In make_summary, implemented some new counters for command files.
+#
+# Revision 1.17 1997/10/17 09:39:09 ftplinux
+# Fixed wrong args to plural_s
+#
+# Revision 1.16 1997/09/25 11:20:42 ftplinux
+# Bumped release number to 0.4
+#
+# Revision 1.15 1997/09/17 12:16:33 ftplinux
+# Added writing summaries to a file
+#
+# Revision 1.14 1997/09/16 11:39:29 ftplinux
+# In make_summary, initialize all counters to avoid warnings about uninited
+# values.
+#
+# Revision 1.13 1997/09/16 10:53:36 ftplinux
+# Made logging more verbose in queued and dqueued-watcher
+#
+# Revision 1.12 1997/08/18 13:07:15 ftplinux
+# Implemented summary mails
+#
+# Revision 1.11 1997/08/18 12:11:44 ftplinux
+# Replaced timegm by timelocal in parse_date; times in log file are
+# local times...
+#
+# Revision 1.10 1997/08/18 11:27:20 ftplinux
+# Revised age calculation of log file for rotating
+#
+# Revision 1.9 1997/08/12 09:54:40 ftplinux
+# Bumped release number
+#
+# Revision 1.8 1997/08/11 12:49:10 ftplinux
+# Implemented logfile rotating
+#
+# Revision 1.7 1997/07/28 13:20:38 ftplinux
+# Added release numner to startup message
+#
+# Revision 1.6 1997/07/25 10:23:04 ftplinux
+# Made SIGCHLD handling more portable between perl versions
+#
+# Revision 1.5 1997/07/09 10:13:55 ftplinux
+# Alternative implementation of status file as plain file (not FIFO), because
+# standard wu-ftpd doesn't allow retrieval of non-regular files. New config
+# option $statusdelay for this.
+#
+# Revision 1.4 1997/07/08 08:39:56 ftplinux
+# Need to remove -z from tar options if --use-compress-program
+#
+# Revision 1.3 1997/07/08 08:34:15 ftplinux
+# If dqueued-watcher runs as cron job, $PATH might not contain gzip. Use extra
+# --use-compress-program option to tar, and new config var $gzip.
+#
+# Revision 1.2 1997/07/03 13:05:57 ftplinux
+# Added some verbosity if stdin is a terminal
+#
+# Revision 1.1.1.1 1997/07/03 12:54:59 ftplinux
+# Import initial sources
+#
+#
+
+require 5.002;
+use strict;
+use POSIX;
+require "timelocal.pl";
+
+sub LINEWIDTH { 79 }
+my $batchmode = !(-t STDIN);
+$main::curr_year = (localtime)[5];
+
+do {
+ my $version;
+ ($version = 'Release: 0.9 $Revision: 1.28 $ $Date: 1999/07/08 09:43:22 $ $Author: ftplinux $') =~ s/\$ ?//g;
+ print "dqueued-watcher $version\n" if !$batchmode;
+};
+
+package conf;
+($conf::queued_dir = (($0 !~ m,^/,) ? POSIX::getcwd()."/" : "") . $0)
+ =~ s,/[^/]+$,,;
+require "$conf::queued_dir/config";
+my # avoid spurious warnings about unused vars
+$junk = $conf::gzip;
+$junk = $conf::maintainer_mail;
+$junk = $conf::log_age;
+package main;
+
+# prototypes
+sub check_daemon();
+sub daemon_running();
+sub rotate_log();
+sub logf($);
+sub parse_date($);
+sub make_summary($$$);
+sub stimes($);
+sub plural_s($);
+sub format_list($@);
+sub mail($@);
+sub logger(@);
+sub format_time();
+
+# the main program:
+if (@ARGV) {
+ # with arguments, make summaries (to stdout) for the logfiles given
+ foreach (@ARGV) {
+ make_summary( 1, undef, $_ );
+ }
+}
+else {
+ # without args, just do normal maintenance actions
+ check_daemon();
+ rotate_log();
+}
+exit 0;
+
+
+#
+# check if the daemon is running, notify maintainer if not
+#
+sub check_daemon() {
+ my $daemon_down_text = "Daemon is not running\n";
+ my( $line, $reported );
+
+ if (daemon_running()) {
+ print "Daemon is running\n" if !$batchmode;
+ return;
+ }
+ print "Daemon is NOT running!\n" if !$batchmode;
+
+ $reported = 0;
+ if ($conf::statusfile && -f $conf::statusfile && ! -p _ &&
+ open( STATUSFILE, "<$conf::statusfile" )) {
+ $line = <STATUSFILE>;
+ close( STATUSFILE );
+ $reported = $line eq $daemon_down_text;
+ }
+ if (!$reported) {
+ mail( "debianqueued down",
+ "The Debian queue daemon isn't running!\n",
+ "Please start it up again.\n" );
+ logger( "Found that daemon is not running\n" );
+ }
+
+ # remove unnecessary pid file
+ # also remove status FIFO, so opening it for reading won't block
+ # forever
+ unlink( $conf::pidfile, $conf::statusfile );
+
+ # replace status FIFO by a file that tells the user the daemon is down
+ if ($conf::statusfile) {
+ open( STATUSFILE, ">$conf::statusfile" )
+ or die "Can't open $conf::statusfile: $!\n";
+ print STATUSFILE $daemon_down_text;
+ close( STATUSFILE );
+ }
+}
+
+#
+# check if daemon is running
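+# (reads the pid from $conf::pidfile and probes the process with kill 0)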
+#
+sub daemon_running() {
+ my $pid;
+ local( *PIDFILE );
+
+ if (open( PIDFILE, "<$conf::pidfile" )) {
+ chomp( $pid = <PIDFILE> );
+ close( PIDFILE );
+ $main::daemon_pid = $pid, return 1 if $pid && kill( 0, $pid );
+ }
+ return 0;
+}
+
+#
+# rotate the logfile if its oldest entry is older than $conf::log_age days,
+# then signal the daemon (SIGHUP) to reopen the new logfile
+#
+
+sub rotate_log() {
+ my( $first_date, $f1, $f2, $i );
+ local( *F );
+
+ return if !defined $main::daemon_pid || !-f $conf::logfile;
+
+ open( F, "<$conf::logfile" ) or die "Can't open $conf::logfile: $!\n";
+ while( <F> ) {
+ last if $first_date = parse_date( $_ );
+ }
+ close( F );
+ # Simply don't rotate if nothing could be parsed as a date -- probably
+ # the file is empty.
+ return if !$first_date;
+ # assume year-wrap if $first_date is in the future
+ $first_date -= 365*24*60*60 if $first_date > time;
+ # don't rotate if the first date is too recent
+ return if time - $first_date < $conf::log_age*24*60*60;
+ logger( "Logfile older than $conf::log_age days, rotating\n" );
+
+ # remove oldest log
+ $f1 = logf($conf::log_keep-1);
+ if (-f $f1) {
+ unlink( $f1 ) or warn "Can't remove $f1: $!\n";
+ }
+
+ # rename other logs
+ for( $i = $conf::log_keep-2; $i > 0; --$i ) {
+ $f1 = logf($i);
+ $f2 = logf($i+1);
+ if (-f $f1) {
+ rename( $f1, $f2 ) or warn "Can't rename $f1 to $f2: $!\n";
+ }
+ }
+
+ # compress newest log
+ $f1 = "$conf::logfile.0";
+ $f2 = "$conf::logfile.1.gz";
+ if (-f $f1) {
+ system $conf::gzip, "-9f", $f1
+ and die "gzip failed on $f1 (status $?)\n";
+ rename( "$f1.gz", $f2 ) or warn "Can't rename $f1.gz to $f2: $!\n";
+ }
+
+ # rename current log and signal the daemon to open a new logfile
+ rename( $conf::logfile, $f1 );
+ kill( 1, $main::daemon_pid );
+
+ print "Rotated log files\n" if !$batchmode;
+ make_summary( 0, $first_date, $f1 )
+ if $conf::mail_summary || $conf::summary_file;
+}
+
+sub logf($) {
+ my $num = shift;
+ return sprintf( "$conf::logfile.%d.gz", $num );
+}
+
+sub parse_date($) {
+ my $date = shift;
+ my( $mon, $day, $hours, $mins, $month, $year, $secs );
+ my %month_num = ( "jan", 0, "feb", 1, "mar", 2, "apr", 3, "may", 4,
+ "jun", 5, "jul", 6, "aug", 7, "sep", 8, "oct", 9,
+ "nov", 10, "dec", 11 );
+
+ warn "Invalid date: $date\n", return 0
+ unless $date =~ /^(\w\w\w)\s+(\d+)\s+(\d+):(\d+):(\d+)\s/;
+ ($mon, $day, $hours, $mins, $secs) = ($1, $2, $3, $4, $5);
+
+ $mon =~ tr/A-Z/a-z/;
+ return 0 if !exists $month_num{$mon};
+ $month = $month_num{$mon};
+ return timelocal( $secs, $mins, $hours, $day, $month, $main::curr_year );
+}
+
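+#
+# scan one logfile and build a summary report; if $to_stdout is set, print
+# it directly, otherwise mail it and/or append it to $conf::summary_file
+#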
+sub make_summary($$$) {
+ my $to_stdout = shift;
+ my $startdate = shift;
+ my $file = shift;
+ my( $starts, $statusd_starts, $suspicious_files, $transient_errs,
+ $upl_failed, $success, $commands, $rm_cmds, $mv_cmds, $msg,
+ $uploader );
+ my( @pgp_fail, %transient_errs, @changes_errs, @removed_changes,
+ @already_present, @del_stray, %uploaders, %cmd_uploaders );
+ local( *F );
+
+ if (!open( F, "<$file" )) {
+ mail( "debianqueued summary failed",
+ "Couldn't open $file to make summary of events." );
+ return;
+ }
+
+ $starts = $statusd_starts = $suspicious_files = $transient_errs =
+ $upl_failed = $success = $commands = $rm_cmds = $mv_cmds = 0;
+ while( <F> ) {
+ $startdate = parse_date( $_ ) if !$startdate;
+ ++$starts if /daemon \(pid \d+\) started$/;
+ ++$statusd_starts if /forked status daemon/;
+ push( @pgp_fail, $1 )
+ if /PGP signature check failed on (\S+)/;
+ ++$suspicious_files if /found suspicious filename/;
+ ++$transient_errs, ++$transient_errs{$1}
+ if /(\S+) (doesn.t exist|is too small) \(ignored for now\)/;
+ push( @changes_errs, $1 )
+ if (!/\((already reported|ignored for now)\)/ &&
+ (/(\S+) doesn.t exist/ || /(\S+) has incorrect (size|md5)/)) ||
+ /(\S+) doesn.t contain a Maintainer: field/ ||
+ /(\S+) isn.t signed with PGP/ ||
+ /(\S+) doesn.t mention any files/;
+ push( @removed_changes, $1 )
+ if /(\S+) couldn.t be processed for \d+ hours and is now del/ ||
+ /(\S+) couldn.t be uploaded for \d+ times/;
+ push( @already_present, $1 )
+ if /(\S+) is already present on master/;
+ ++$upl_failed if /Upload to \S+ failed/;
+ ++$success, push( @{$uploaders{$2}}, $1 )
+ if /(\S+) processed successfully \(uploader (\S*)\)$/;
+ push( @del_stray, $1 ) if /Deleted stray file (\S+)/;
+ ++$commands if /processing .*\.commands$/;
+ ++$rm_cmds if / > rm /;
+ ++$mv_cmds if / > mv /;
+ ++$cmd_uploaders{$1}
+ if /\(command uploader (\S*)\)$/;
+ }
+ close( F );
+
+  $msg = "Queue Daemon Summary from " . localtime($startdate) . " to " .
+ localtime(time) . ":\n\n";
+
+ $msg .= "Daemon started ".stimes($starts)."\n"
+ if $starts;
+ $msg .= "Status daemon restarted ".stimes($statusd_starts-$starts)."\n"
+ if $statusd_starts > $starts;
+ $msg .= @pgp_fail." job".plural_s(@pgp_fail)." failed PGP check:\n" .
+ format_list(2,@pgp_fail)
+ if @pgp_fail;
+ $msg .= "$suspicious_files file".plural_s($suspicious_files)." with ".
+ "suspicious names found\n"
+ if $suspicious_files;
+ $msg .= "Detected ".$transient_errs." transient error".
+ plural_s($transient_errs)." in .changes files:\n".
+ format_list(2,keys %transient_errs)
+ if $transient_errs;
+ $msg .= "Detected ".@changes_errs." error".plural_s(@changes_errs).
+ " in .changes files:\n".format_list(2,@changes_errs)
+ if @changes_errs;
+ $msg .= @removed_changes." job".plural_s(@removed_changes).
+ " removed due to persistent errors:\n".
+ format_list(2,@removed_changes)
+ if @removed_changes;
+ $msg .= @already_present." job".plural_s(@already_present).
+ " were already present on master:\n".format_list(2,@already_present)
+ if @already_present;
+ $msg .= @del_stray." stray file".plural_s(@del_stray)." deleted:\n".
+ format_list(2,@del_stray)
+ if @del_stray;
+ $msg .= "$commands command file".plural_s($commands)." processed\n"
+ if $commands;
+ $msg .= " ($rm_cmds rm, $mv_cmds mv commands)\n"
+ if $rm_cmds || $mv_cmds;
+ $msg .= "$success job".plural_s($success)." processed successfully\n";
+
+ if ($success) {
+ $msg .= "\nPeople who used the queue:\n";
+ foreach $uploader ( keys %uploaders ) {
+ $msg .= " $uploader (".@{$uploaders{$uploader}}."):\n".
+ format_list(4,@{$uploaders{$uploader}});
+ }
+ }
+
+ if (%cmd_uploaders) {
+ $msg .= "\nPeople who used command files:\n";
+ foreach $uploader ( keys %cmd_uploaders ) {
+ $msg .= " $uploader ($cmd_uploaders{$uploader})\n";
+ }
+ }
+
+ if ($to_stdout) {
+ print $msg;
+ }
+ else {
+ if ($conf::mail_summary) {
+ mail( "debianqueued summary", $msg );
+ }
+
+ if ($conf::summary_file) {
+ local( *F );
+ open( F, ">>$conf::summary_file" ) or
+ die "Cannot open $conf::summary_file for appending: $!\n";
+ print F "\n", "-"x78, "\n", $msg;
+ close( F );
+ }
+ }
+}
+
+sub stimes($) {
+ my $num = shift;
+ return $num == 1 ? "once" : "$num times";
+}
+
+sub plural_s($) {
+ my $num = shift;
+ return $num == 1 ? "" : "s";
+}
+
+sub format_list($@) {
+ my $indent = shift;
+ my( $i, $pos, $ret, $item, $len );
+
+  $ret = " " x $indent; $pos = $indent;
+ while( $item = shift ) {
+ $len = length($item);
+ $item .= ", ", $len += 2 if @_;
+ if ($pos+$len > LINEWIDTH) {
+ $ret .= "\n" . " "x$indent;
+ $pos = $indent;
+ }
+ $ret .= $item;
+ $pos += $len;
+ }
+ $ret .= "\n";
+ return $ret;
+}
+
+#
+# send mail to maintainer
+#
+sub mail($@) {
+ my $subject = shift;
+ local( *MAIL );
+
+ open( MAIL, "|$conf::mail -s '$subject' '$conf::maintainer_mail'" )
+ or (warn( "Could not open pipe to $conf::mail: $!\n" ), return);
+ print MAIL @_;
+ print MAIL "\nGreetings,\n\n\tYour Debian queue daemon watcher\n";
+ close( MAIL )
+ or warn( "$conf::mail failed (exit status $?)\n" );
+}
+
+#
+# log something to logfile
+#
+sub logger(@) {
+ my $now = format_time();
+ local( *LOG );
+
+ if (!open( LOG, ">>$conf::logfile" )) {
+ warn( "Can't open $conf::logfile\n" );
+ return;
+ }
+ print LOG "$now dqueued-watcher: ", @_;
+ close( LOG );
+}
+
+#
+# return current time as string
+#
+sub format_time() {
+ my $t;
+
+ # omit weekday and year for brevity
+ ($t = localtime) =~ /^\w+\s(.*)\s\d+$/;
+ return $1;
+}
+
+
+# Local Variables:
+# tab-width: 4
+# fill-column: 78
+# End: