git.decadent.org.uk Git - dak.git/commitdiff
Merge branch 'master' into bpo
author Joerg Jaspert <joerg@debian.org>
Sun, 15 Feb 2009 15:28:11 +0000 (16:28 +0100)
committer Joerg Jaspert <joerg@debian.org>
Sun, 15 Feb 2009 15:28:11 +0000 (16:28 +0100)
* master:
  one more squeeze
  its filenames, not filename
  Squeeze is now testing
  security stuff
  Also modified ddtp-i18n-check for squeeze
  i18n for squeeze
  Remove lenny-r0 specials
  Add security support for the lenny release
  First pass at lenny release branch
  url
  remove backwards-compatibility stuff which is no longer needed

22 files changed:
config/backports.org/Contents.top [new file with mode: 0644]
config/backports.org/apt.conf [new file with mode: 0644]
config/backports.org/bpo-copy-packages [new file with mode: 0755]
config/backports.org/cron.buildd [new file with mode: 0755]
config/backports.org/cron.daily [new file with mode: 0755]
config/backports.org/cron.hourly [new file with mode: 0755]
config/backports.org/cron.monthly [new file with mode: 0755]
config/backports.org/cron.unchecked [new file with mode: 0755]
config/backports.org/cron.weekly [new file with mode: 0755]
config/backports.org/dak.conf [new file with mode: 0644]
config/backports.org/mail-whitelist [new file with mode: 0644]
config/backports.org/vars [new file with mode: 0644]
config/debian/dak.conf
dak/add_user.py [new file with mode: 0755]
dak/dak.py
daklib/regexes.py
daklib/utils.py
scripts/backports.org/copyoverrides [new file with mode: 0755]
scripts/backports.org/mkchecksums [new file with mode: 0755]
scripts/backports.org/mklslar [new file with mode: 0755]
scripts/backports.org/mkmaintainers [new file with mode: 0755]
templates/add-user.added [new file with mode: 0644]

diff --git a/config/backports.org/Contents.top b/config/backports.org/Contents.top
new file mode 100644 (file)
index 0000000..ee791eb
--- /dev/null
@@ -0,0 +1,32 @@
+This file maps each file available in the backports.org archive system to
+the package from which it originates.  It includes packages from the
+DIST distribution for the ARCH architecture.
+
+You can use this list to determine which package contains a specific
+file, or whether or not a specific file is available.  The list is
+updated weekly, each architecture on a different day.
+
+When a file is contained in more than one package, all packages are
+listed.  When a directory is contained in more than one package, only
+the first is listed.
+
+The best way to search quickly for a file is with the Unix `grep'
+utility, as in `grep <regular expression> CONTENTS':
+
+ $ grep nose Contents
+ etc/nosendfile                                          net/sendfile
+ usr/X11R6/bin/noseguy                                   x11/xscreensaver
+ usr/X11R6/man/man1/noseguy.1x.gz                        x11/xscreensaver
+ usr/doc/examples/ucbmpeg/mpeg_encode/nosearch.param     graphics/ucbmpeg
+ usr/lib/cfengine/bin/noseyparker                        admin/cfengine
+
+This list contains files in all packages, even though not all of the
+packages are installed on an actual system at once.  If you want to
+find out which packages on an installed Debian system provide a
+particular file, you can use `dpkg --search <filename>':
+
+ $ dpkg --search /usr/bin/dselect
+ dpkg: /usr/bin/dselect
+
+
+FILE                                                    LOCATION
diff --git a/config/backports.org/apt.conf b/config/backports.org/apt.conf
new file mode 100644 (file)
index 0000000..1fe25ae
--- /dev/null
@@ -0,0 +1,71 @@
+Dir
+{
+   ArchiveDir "/org/backports.org/ftp/";
+   OverrideDir "/org/backports.org/scripts/override/";
+   CacheDir "/org/backports.org/database/";
+};
+
+Default
+{
+   Packages::Compress ". gzip bzip2";
+   Sources::Compress ". gzip bzip2";
+   DeLinkLimit 0;
+   FileMode 0664;
+   Contents::Compress "gzip";
+   MaxContentsChange 12000;
+};
+
+TreeDefault
+{
+   Contents::Header "/org/backports.org/dak-config/Contents.top";
+};
+
+tree "dists/lenny-backports"
+{
+   FileList "/org/backports.org/database/dists/lenny-backports_$(SECTION)_binary-$(ARCH).list";
+   SourceFileList "/org/backports.org/database/dists/lenny-backports_$(SECTION)_source.list";
+   Sections "main contrib non-free";
+   Architectures "alpha amd64 arm armel hppa hurd-i386 i386 ia64 mips mipsel powerpc s390 sparc source";
+   BinOverride "override.lenny-backports.$(SECTION)";
+   ExtraOverride "override.lenny-backports.extra.$(SECTION)";
+   SrcOverride "override.lenny-backports.$(SECTION).src";
+   Packages::Compress ". gzip bzip2";
+   Sources::Compress ". gzip bzip2";
+};
+
+tree "dists/lenny-backports/main"
+{
+   FileList "/org/backports.org/database/dists/lenny-backports_main_$(SECTION)_binary-$(ARCH).list";
+   Sections "debian-installer";
+   Architectures "alpha amd64 arm armel hppa hurd-i386 i386 ia64 mips mipsel powerpc s390 sparc source";
+   BinOverride "override.lenny-backports.main.$(SECTION)";
+   SrcOverride "override.lenny-backports.main.src";
+   BinCacheDB "packages-debian-installer-$(ARCH).db";
+   Packages::Extensions ".udeb";
+   Contents "$(DIST)/../Contents-udeb";
+};
+
+tree "dists/etch-backports"
+{
+   FileList "/org/backports.org/database/dists/etch-backports_$(SECTION)_binary-$(ARCH).list";
+   SourceFileList "/org/backports.org/database/dists/etch-backports_$(SECTION)_source.list";
+   Sections "main contrib non-free";
+   Architectures "alpha amd64 arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc source";
+   BinOverride "override.etch-backports.$(SECTION)";
+   ExtraOverride "override.etch-backports.extra.$(SECTION)";
+   SrcOverride "override.etch-backports.$(SECTION).src";
+   Packages::Compress ". gzip bzip2";
+   Sources::Compress ". gzip bzip2";
+};
+
+tree "dists/etch-backports/main"
+{
+   FileList "/org/backports.org/database/dists/etch-backports_main_$(SECTION)_binary-$(ARCH).list";
+   Sections "debian-installer";
+   Architectures "alpha amd64 arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc source";
+   BinOverride "override.etch-backports.main.$(SECTION)";
+   SrcOverride "override.etch-backports.main.src";
+   BinCacheDB "packages-debian-installer-$(ARCH).db";
+   Packages::Extensions ".udeb";
+   Contents "$(DIST)/../Contents-udeb";
+};
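For orientation: this apt.conf is the input for apt-ftparchive. The cron scripts added
below drive it roughly like this (a minimal sketch, using the paths defined in the vars
file of this commit):

 $ cd /org/backports.org/dak/config/backports.org
 $ apt-ftparchive generate apt.conf      # cron.hourly: builds Packages/Sources/Contents
 $ apt-ftparchive -q clean apt.conf      # cron.weekly: prunes the cache databases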
diff --git a/config/backports.org/bpo-copy-packages b/config/backports.org/bpo-copy-packages
new file mode 100755 (executable)
index 0000000..6d98a9e
--- /dev/null
@@ -0,0 +1,72 @@
+#!/bin/bash
+
+# Copyright (c) 2005 Peter Palfrader <peter@palfrader.org>
+
+# WARNING: spaces in filenames/paths considered harmful.
+
+export SCRIPTVARS=/org/backports.org/dak/config/backports.org/vars
+. $SCRIPTVARS
+
+cd ${configdir}
+
+for suite in etch lenny; do
+    source="${ftpdir}/dists/${suite}-backports"
+    target="${basedir}/buildd/dists/${suite}-backports"
+
+    if ! [ -d "$source" ]; then
+           echo "Source '$source' does not exist or is not a directory or we can't access it." >&2
+           exit 1;
+    fi
+    if ! [ -d "$target" ]; then
+           echo "Target '$target' does not exist or is not a directory or we can't access it." >&2
+           exit 1;
+    fi
+
+    for file in $( cd "$source" && find . -name 'Packages.gz' -o -name 'Packages' -o -name 'Sources.gz' -o -name 'Sources' -o -name 'Release' ); do
+           filedir=$(dirname "$file")
+           basename=$(basename "$file")
+           targetdir="$target/$filedir"
+           [ -d "$targetdir" ] || mkdir -p "$targetdir"
+           if [ "$basename" = "Release" ]; then
+                   cp -a "$source/$file" "$target/$file"
+                   echo 'NotAutomatic: yes' >> "$target/$file"
+           else
+                   cp -a "$source/$file" "$target/$file"
+           fi
+    done
+
+# postprocess top level Release file
+    if ! [ -e "$target/Release" ]; then
+           echo "Did not find $target/Release after copying stuff. Something's fishy." >&2
+           exit 1;
+    fi
+
+    cd "$target"
+
+    perl -a -p -i -e '
+       if (substr($_,0,1) eq " ") {
+               if ($in_sha1 || $in_md5) {
+                       ($hash, $size, $file) = @F;
+                       $_="",next unless -f $file;
+
+                       (undef,undef,undef,undef,undef,undef,undef,$filesize,
+                        undef,undef,undef,undef,undef) = stat($file);
+                       if ($size != $filesize) {
+                               if ($in_sha1) {
+                                       $hash = `sha1sum "$file" | cut -d " " -f 1`
+                               } else {
+                                       $hash = `md5sum "$file" | cut -d " " -f 1`
+                               };
+                               chomp $hash;
+                               $_ = sprintf(" %s %16d %s\n", $hash, $filesize, $file);
+                       }
+               }
+       } else {
+               $in_sha1 = ($F[0] eq "SHA1:") ? 1 : 0;
+               $in_md5  = ($F[0] eq "MD5Sum:") ? 1 : 0;
+       }
+' Release
+
+    rm -f ${basedir}/buildd/dists/${suite}-backports/Release.gpg
+    gpg --no-options --batch --no-tty --secret-keyring ${basedir}/s3kr1t/dot-gnupg/secring.gpg --output "Release.gpg" --armor --detach-sign "Release"
+done
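The Perl one-liner above fixes up the copied top-level Release file: the per-component
Release files grow once "NotAutomatic: yes" is appended, so every MD5Sum/SHA1 entry whose
recorded size no longer matches the file on disk gets its size and hash recomputed before
the file is re-signed. A quick manual check of the result might look like this (a sketch;
it assumes ${basedir} resolves to /org/backports.org and uses the pubring path from the
dak.conf added in this commit):

 $ cd /org/backports.org/buildd/dists/lenny-backports
 $ gpg --no-options --no-default-keyring \
       --keyring /org/backports.org/s3kr1t/dot-gnupg/pubring.gpg \
       --verify Release.gpg Release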
diff --git a/config/backports.org/cron.buildd b/config/backports.org/cron.buildd
new file mode 100755 (executable)
index 0000000..f48f98f
--- /dev/null
@@ -0,0 +1,10 @@
+#! /bin/bash -e
+
+# Executed hourly via cron, out of katie's crontab.
+# stolen from newraff and adjusted by aba on 2005-04-30
+#exit 0
+
+export SCRIPTVARS=/org/backports.org/dak-config/vars
+. $SCRIPTVARS
+ssh -i $base/s3kr1t/dot-ssh/id_rsa wanna-build@wanna-build.farm.ftbfs.de echo broken
+exit 0
diff --git a/config/backports.org/cron.daily b/config/backports.org/cron.daily
new file mode 100755 (executable)
index 0000000..b84d801
--- /dev/null
@@ -0,0 +1,19 @@
+#! /bin/sh
+#
+# Executed daily via cron, out of katie's crontab.
+
+set -e
+export SCRIPTVARS=/org/backports.org/dak-config/vars
+. $SCRIPTVARS
+
+################################################################################
+# Clean out old packages
+dak clean-suites
+dak clean-queues
+
+# Send a report on NEW/BYHAND packages
+dak queue-report | mail -e -s "NEW and BYHAND on $(date +%D)" ftpmaster@backports.org
+# and one on crufty packages
+dak cruft-report | tee $webdir/cruft-report-daily.txt | mail -e -s "Debian archive cruft report for $(date +%D)" ftpmaster@backports.org
+
+echo Daily cron scripts successful.
diff --git a/config/backports.org/cron.hourly b/config/backports.org/cron.hourly
new file mode 100755 (executable)
index 0000000..4598006
--- /dev/null
@@ -0,0 +1,116 @@
+#! /bin/sh
+#
+# Executed hourly via cron, out of katie's crontab.
+set -e
+export SCRIPTVARS=/org/backports.org/dak-config/vars
+. $SCRIPTVARS
+
+################################################################################
+cd $accepted
+
+changes=$(find . -maxdepth 1 -mindepth 1 -type f -name \*.changes | sed -e "s,./,," | xargs)
+
+if [ -z "$changes" ]; then
+ exit 0;
+fi
+
+echo Archive maintenance started at $(date +%X)
+
+NOTICE="$ftpdir/Archive_Maintenance_In_Progress"
+LOCKCU="$lockdir/daily.lock"
+LOCKAC="$lockdir/unchecked.lock"
+
+cleanup() {
+  rm -f "$NOTICE"
+  rm -f "$LOCKCU"
+}
+trap cleanup 0
+
+rm -f "$NOTICE"
+lockfile -l 3600 $LOCKCU
+cat > "$NOTICE" <<EOF
+Packages are currently being installed and indices rebuilt.
+Maintenance is automatic, starting hourly at 5 minutes past the hour.
+Most of the time it finishes within about 10 to 15 minutes.
+
+You should not mirror the archive during this period.
+EOF
+
+################################################################################
+
+cd $accepted
+rm -f REPORT
+dak process-accepted -pa *.changes | tee REPORT | \
+     mail -s "Install for $(date +%d.%m.%Y)" ftpmaster@backports.org
+chgrp debadmin REPORT
+chmod 664 REPORT
+
+cd $masterdir
+
+rm -f $LOCKAC
+
+symlinks -d -r $ftpdir
+
+cd $masterdir
+dak make-suite-file-list
+
+# Generate override files
+cd $overridedir
+dak make-overrides
+
+# Generate Packages and Sources files
+cd $configdir
+apt-ftparchive generate apt.conf
+# Generate *.diff/ incremental updates
+dak generate-index-diffs
+# Generate Release files
+dak generate-releases
+
+# Clean out old packages
+# Now in cron.daily. JJ[03.05.2005.]
+#rhona
+#shania
+
+cd $scriptsdir
+./mkmaintainers
+./copyoverrides
+./mklslar
+./mkchecksums
+
+rm -f $NOTICE
+rm -f $LOCKCU
+echo Archive maintenance finished at $(date +%X)
+
+################################################################################
+
+echo "Creating post-hourly-cron-job backup of projectb database..."
+POSTDUMP=/org/backports.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
+pg_dump projectb > $POSTDUMP
+(cd /org/backports.org/backup; ln -sf $POSTDUMP current)
+
+################################################################################
+
+# Vacuum the database
+echo "VACUUM; VACUUM ANALYZE;" | psql projectb 2>&1 | grep -v "^NOTICE:  Skipping.*only table owner can VACUUM it$"
+
+################################################################################
+
+# Now in cron.daily JJ[03.05.2005]
+# Send a report on NEW/BYHAND packages
+#helena | mail -e -s "NEW and BYHAND on $(date +%D)" ftpmaster@amd64.debian.net
+# and one on crufty package
+#rene | mail -e -s "rene run for $(date +%D)" ftpmaster@amd64.debian.net
+
+################################################################################
+
+(cd /org/backports.org/stats; rm -f master.list; ./dmc.pl get >/dev/null 2>&1; \
+./mirror.pl>$ftpdir/README.mirrors.html; cd $ftpdir; /usr/bin/links -dump README.mirrors.html >README.mirrors.txt)
+
+
+################################################################################
+
+ulimit -m 90000 -d 90000 -s 10000 -v 90000
+
+run-parts --report /org/backports.org/scripts/distmnt
+
+echo Hourly cron scripts successful.
diff --git a/config/backports.org/cron.monthly b/config/backports.org/cron.monthly
new file mode 100755 (executable)
index 0000000..f604936
--- /dev/null
@@ -0,0 +1,33 @@
+#!/bin/sh
+#
+# Run at the beginning of the month via cron, out of katie's crontab.
+
+set -e
+export SCRIPTVARS=/org/backports.org/dak-config/vars
+. $SCRIPTVARS
+
+################################################################################
+
+DATE=`date -d yesterday +%y%m`
+
+cd ${basedir}/mail/archive
+for m in mail import; do
+    if [ -f $m ]; then
+        mv $m ${m}-$DATE
+        sleep 20
+        gzip -9 ${m}-$DATE
+        chgrp debadmin ${m}-$DATE.gz
+        chmod 660 ${m}-$DATE.gz
+    fi;
+done
+
+DATE=`date +%Y-%m`
+cd ${basedir}/log
+touch $DATE
+rm current
+ln -s $DATE current
+chmod g+w $DATE
+chown dak:debadmin $DATE
+
+dak split-done
+################################################################################
diff --git a/config/backports.org/cron.unchecked b/config/backports.org/cron.unchecked
new file mode 100755 (executable)
index 0000000..bb2337e
--- /dev/null
@@ -0,0 +1,34 @@
+#! /bin/sh
+set -e
+export SCRIPTVARS=/org/backports.org/dak-config/vars
+. $SCRIPTVARS
+
+LOCKFILE="$lockdir/unchecked.lock"
+NOTICE="$lockdir/daily.lock"
+
+cleanup() {
+  rm -f "$LOCKFILE"
+  if [ ! -z "$LOCKDAILY" ]; then
+         rm -f "$NOTICE"
+  fi
+}
+trap cleanup 0
+
+# only run one cron.unchecked
+if lockfile -r3 $LOCKFILE; then
+       cd $unchecked
+
+       changes=$(find . -maxdepth 1 -mindepth 1 -type f -name \*.changes | sed -e "s,./,," | xargs)
+       report=$queuedir/REPORT
+       timestamp=$(date "+%Y-%m-%d %H:%M")
+
+       if [ ! -z "$changes" ]; then
+               echo "$timestamp": "$changes"  >> $report
+               dak process-unchecked -a $changes >> $report
+               echo "--" >> $report
+       else
+               echo "$timestamp": Nothing to do >> $report
+       fi
+fi
+
+rm -f "$LOCKFILE"
diff --git a/config/backports.org/cron.weekly b/config/backports.org/cron.weekly
new file mode 100755 (executable)
index 0000000..0ab9afd
--- /dev/null
@@ -0,0 +1,22 @@
+#!/bin/sh
+#
+# Run once a week via cron, out of katie's crontab.
+
+set -e
+export SCRIPTVARS=/org/backports.org/dak-config/vars
+. $SCRIPTVARS
+
+################################################################################
+
+# Purge empty directories
+
+if [ ! -z "$(find $ftpdir/pool/ -type d -empty)" ]; then
+   find $ftpdir/pool/ -type d -empty | xargs rmdir;
+fi
+
+# Clean up apt-ftparchive's databases
+
+cd $configdir
+apt-ftparchive -q clean apt.conf
+
+################################################################################
diff --git a/config/backports.org/dak.conf b/config/backports.org/dak.conf
new file mode 100644 (file)
index 0000000..84f33d6
--- /dev/null
@@ -0,0 +1,425 @@
+Dinstall
+{
+   // Both need to be defined at the moment, but they can point to the
+   // same file.
+   GPGKeyring {
+      "/org/backports.org/keyrings/keyring.gpg";
+   };
+   // To sign the release files. Adjust the keyid!
+   // Note: Key must be without a passphrase or it won't work automagically!
+   SigningKeyring "/org/backports.org/s3kr1t/dot-gnupg/secring.gpg";
+   SigningPubKeyring "/org/backports.org/s3kr1t/dot-gnupg/pubring.gpg";
+   SigningKeyIds "16BA136C";
+   SendmailCommand "/usr/sbin/sendmail -odq -oi -t";
+   MyEmailAddress "Backports.org archive Installer <installer@backports.org>";
+   MyAdminAddress "ftpmaster@backports.org";
+   MyHost "backports.org";  // used for generating user@my_host addresses in e.g. manual_reject()
+   MyDistribution "Backports.org archive"; // Used in emails
+   // Alicia and melanie can use it
+   BugServer "bugs.backports.org";
+   // melanie uses the packages server.
+   // PackagesServer "packages.test.backports.org";
+   // If defined then the package@this.server gets a copy of most of the
+   // actions related to the package. For an example look at
+   // packages.qa.debian.org
+   // TrackingServer "packages.qa.test.backports.org";
+   LockFile "/org/backports.org/lock/dinstall.lock";
+   // If defined this address gets a bcc of all mails.
+   // FIXME: Set this up when this goes into production!
+   Bcc "backports-archive@lists.backports.org";
+   GroupOverrideFilename "override.group-maint";
+   FutureTimeTravelGrace 28800; // 8 hours
+   PastCutoffYear "1984";
+   SkipTime 300;
+   // If defined then mails to close bugs are sent to the bugserver.
+   CloseBugs "false";
+   OverrideDisparityCheck "true";
+   DefaultSuite "etch-backports";
+   Reject
+   {
+     NoSourceOnly "true";
+     ReleaseTransitions "/org/backports.org/hints/transitions.yaml";
+   };
+   // If set, only send mails to addresses listed there.
+   MailWhiteList "/org/backports.org/dak/config/backports.org/mail-whitelist";
+};
+
+Generate-Index-Diffs
+{
+   Options
+   {
+     TempDir "/org/backports.org/tiffani";
+     MaxDiffs { Default 50; };
+   };
+};
+
+Override
+{
+   MyEmailAddress "Backports.org archive FTP Masters <ftpmaster@backports.org>";
+};
+
+Add-User
+{
+// Should we send a mail to newly added users?
+  SendEmail "true";
+
+// Should we create an account so they can log in?
+// The account will be created with the defaults from adduser, so adjust
+// its configuration to fit your needs.
+// NOTE: This requires that your dak user has a sudo entry allowing it
+// to run /usr/sbin/useradd!
+  CreateAccount "false";
+
+// Note: This is a comma-separated list of additional group names to
+// which uma should add the user. NO spaces between the group names or
+// useradd will die.
+// Disable it if you don't want or need that feature.
+  GID "debuser";
+
+};
+
+Check-Overrides
+{
+  OverrideSuites
+  {
+    lenny-backports
+    {
+      Process "1";
+//      OriginSuite "Unstable";
+    };
+
+    etch-backports
+    {
+      Process "1";
+//      OriginSuite "Unstable";
+    };
+
+//    Unstable
+//    {
+//    Process "0";
+//  };
+  };
+};
+
+
+Import-Users-From-Passwd
+{
+  // The Primary GID of your users. Using uma it is the gid from group users.
+  ValidGID "1001";
+  // Comma separated list of users who are in Postgres but not the passwd file
+  KnownPostgres "postgres,katie";
+};
+
+Clean-Queues
+{
+  Options
+  {
+    Days 14;
+   };
+ MorgueSubDir "queues";
+};
+
+Control-Overrides
+{
+  Options
+  {
+    Component "main";
+    Suite "etch-backports";
+    Type "deb";
+   };
+
+ ComponentPosition "prefix"; // Whether the component is prepended or appended to the section name
+};
+
+Rm
+{
+  Options
+  {
+    Suite "etch-backports";
+   };
+
+   MyEmailAddress "Backports.org archive Maintenance <ftpmaster@backports.org>";
+   LogFile "/org/backports.org/ftp/removals.txt";
+};
+
+Import-Archive
+{
+  ExportDir "/org/backports.org/dak/import-archive-files/";
+};
+
+Clean-Suites
+{
+  // How long (in seconds) dead packages are left before being killed
+  StayOfExecution 1209600; // 14 days
+  AcceptedAutoBuildStayOfExecution 86400; // 24 hours
+  MorgueSubDir "pool";
+};
+
+Process-New
+{
+  AcceptedLockFile "/org/backports.org/lock/unchecked.lock";
+};
+
+Suite
+{
+  lenny-backports
+  {
+       Components
+       {
+         main;
+         contrib;
+         non-free;
+       };
+       Architectures
+       {
+         source;
+         all;
+         alpha;
+         amd64;
+         arm;
+         armel;
+         hppa;
+         i386;
+         ia64;
+         mips;
+         mipsel;
+         powerpc;
+         s390;
+         sparc;
+       };
+
+       Announce "backports-changes@lists.backports.org";
+       Origin "Backports.org archive";
+       Description "Backports for the Lenny Distribution";
+       CodeName "lenny-backports";
+       OverrideCodeName "lenny-backports";
+       Priority "7";
+       NotAutomatic "yes";
+  };
+
+  etch-backports
+  {
+       Components
+       {
+         main;
+         contrib;
+         non-free;
+       };
+       Architectures
+       {
+         source;
+         all;
+         alpha;
+         amd64;
+         arm;
+         hppa;
+         hurd-i386;
+         i386;
+         ia64;
+         m68k;
+         mips;
+         mipsel;
+         powerpc;
+         s390;
+         sh;
+         sparc;
+       };
+       Announce "backports-changes@lists.backports.org";
+       Origin "Backports.org archive";
+       Description "Backports for the Etch Distribution";
+       CodeName "etch-backports";
+       OverrideCodeName "etch-backports";
+       Priority "7";
+       NotAutomatic "yes";
+  };
+
+};
+
+Dir
+{
+  Root "/org/backports.org/ftp/";
+  Pool "/org/backports.org/ftp/pool/";
+  Templates "/org/backports.org/dak/templates/";
+  PoolRoot "pool/";
+  Lists "/org/backports.org/database/dists/";
+  Log "/org/backports.org/log/";
+  Morgue "/org/backports.org/morgue/";
+  MorgueReject "reject";
+  Lock "/org/backports.org/lock";
+  Override "/org/backports.org/scripts/override/";
+  UrgencyLog "/org/backports.org/testing/urgencies/";
+  Queue
+  {
+    Accepted "/org/backports.org/queue/accepted/";
+    Byhand "/org/backports.org/queue/byhand/";
+    Done "/org/backports.org/queue/done/";
+    Holding "/org/backports.org/queue/holding/";
+    New "/org/backports.org/queue/new/";
+    ProposedUpdates "/org/backports.org/queue/p-u-new/";
+    Reject "/org/backports.org/queue/reject/";
+    Unchecked "/org/backports.org/queue/unchecked/";
+    BTSVersionTrack "/org/backports.org/queue/bts_version_track/";
+    Embargoed "/org/backports.org/queue/Embargoed/";
+    Unembargoed "/org/backports.org/queue/Unembargoed/";
+    OldProposedUpdates "/org/backports.org/queue/Unembargoed/";
+  };
+};
+
+DB
+{
+  Name "projectb";
+  Host "";
+  Port -1;
+};
+
+SuiteMappings
+{
+ "propup-version stable-security testing";
+ "propup-version testing-security unstable";
+// "map stable proposed-updates";
+ "map lenny lenny-backports";
+ "map lenny-bpo lenny-backports";
+ "map etch etch-backports";
+// formi doesn't like this
+// "map stable etch-backports";
+ "map etch-bpo etch-backports";
+// "map stable-security proposed-updates";
+// "map-unreleased stable unstable";
+// "map-unreleased proposed-updates unstable";
+// "map testing testing-proposed-updates";
+// "map testing-security testing-proposed-updates";
+// "map-unreleased testing unstable";
+// "map-unreleased testing-proposed-updates unstable";
+};
+
+Architectures
+{
+  source "Source";
+  all "Architecture Independent";
+  alpha "DEC Alpha";
+  amd64 "AMD x86_64 (AMD64)";
+  hurd-i386 "Intel ia32 running the HURD";
+  hppa "HP PA RISC";
+  arm "ARM";
+  armel "ARM EABI";
+  i386 "Intel ia32";
+  ia64 "Intel ia64";
+  m68k "Motorola Mc680x0";
+  mips "MIPS (Big Endian)";
+  mipsel "MIPS (Little Endian)";
+  powerpc "PowerPC";
+  s390 "IBM S/390";
+  sh "Hitachi SuperH";
+  sparc "Sun SPARC/UltraSPARC";
+};
+
+Archive
+{
+  backports
+  {
+    OriginServer "backports.org";
+    PrimaryMirror "backports.org";
+    Description "Master Archive for Backports.org archive";
+  };
+};
+
+Component
+{
+  main
+  {
+       Description "Main";
+       MeetsDFSG "true";
+  };
+
+  contrib
+  {
+       Description "Contrib";
+       MeetsDFSG "true";
+  };
+
+  non-free
+  {
+        Description "Software that fails to meet the DFSG";
+        MeetsDFSG "false";
+  };
+
+};
+
+Section
+{
+  admin;
+  base;
+  comm;
+  debian-installer;
+  devel;
+  doc;
+  editors;
+  embedded;
+  electronics;
+  games;
+  gnome;
+  graphics;
+  hamradio;
+  interpreters;
+  kde;
+  libdevel;
+  libs;
+  mail;
+  math;
+  misc;
+  net;
+  news;
+  oldlibs;
+  otherosfs;
+  perl;
+  python;
+  science;
+  shells;
+  sound;
+  tex;
+  text;
+  utils;
+  web;
+  x11;
+};
+
+Priority
+{
+  required 1;
+  important 2;
+  standard 3;
+  optional 4;
+  extra 5;
+  source 0; // i.e. unused
+};
+
+OverrideType
+{
+  deb;
+  udeb;
+  dsc;
+};
+
+Location
+{
+  // Pool locations on backports.org
+  /org/backports.org/ftp/pool/
+    {
+      Archive "backports";
+      Type "pool";
+    };
+
+};
+
+Urgency
+{
+  Default "low";
+  Valid
+  {
+    low;
+    medium;
+    high;
+    emergency;
+    critical;
+  };
+};
diff --git a/config/backports.org/mail-whitelist b/config/backports.org/mail-whitelist
new file mode 100644 (file)
index 0000000..e69de29
diff --git a/config/backports.org/vars b/config/backports.org/vars
new file mode 100644 (file)
index 0000000..e61a11b
--- /dev/null
@@ -0,0 +1,45 @@
+# locations used by many scripts
+
+base=/org/backports.org
+ftpdir=$base/ftp/
+webdir=$base/web
+
+archs="alpha amd64 arm armel hppa hurd-i386 i386 ia64 m68k mips mipsel powerpc s390 sh sparc"
+
+masterdir=$base/dak/
+overridedir=$base/scripts/override
+extoverridedir=$scriptdir/external-overrides
+configdir=$base/dak/config/backports.org/
+scriptsdir=$base/dak/scripts/backports.org/
+
+queuedir=$base/queue
+unchecked=$queuedir/unchecked/
+accepted=$queuedir/accepted/
+done=$queuedir/done/
+over=$base/over/
+lockdir=$base/lock/
+incoming=$base/incoming
+
+dbdir=$base/database/
+indices=$ftpdir/indices
+
+ftpgroup=debadmin
+
+copyoverrides="lenny-backports.contrib lenny-backports.contrib.src lenny-backports.main lenny-backports.main.debian-installer lenny-backports.main.src lenny-backports.extra.contrib lenny-backports.extra.main"
+
+# Change this to your hostname
+uploadhost=localhost
+uploaddir=/pub/UploadQueue/
+
+# What components to support
+components="main contrib non-free"
+suites="lenny-backports"
+override_types="deb dsc udeb"
+
+# temporary fix only!
+# export TMP=/org/backports.org/tmp
+# export TEMP=/org/backports.org/tmp
+# export TMPDIR=/org/backports.org/tmp
+
+PATH=$masterdir:$PATH
+umask 022
diff --git a/config/debian/dak.conf b/config/debian/dak.conf
index 765e6e83417d3056f615f244c70cc3374f33d89c..dae2c9d987bba732c21e265b56d40cac9c89e5a8 100644 (file)
@@ -35,6 +35,14 @@ Dinstall
      NoSourceOnly "true";
      ReleaseTransitions "/srv/ftp.debian.org/web/transitions.yaml";
    };
+   // If you set up your own dak repository and want to upload Debian packages, you most likely want
+   // to set the following option to a real path/filename and then list the mail addresses that
+   // should receive mails generated by your dak installation. This avoids spamming the real
+   // maintainers of a package you upload with mail.
+   // Format of entries: one entry per line, either an email address directly or a regular expression
+   // prefixed by "RE:". Examples: "jane.doe@domain.com" or "RE:jane[^@]@domain.com", where the first will
+   // only allow mail to jane.doe@domain.com while the second will mail all of jane*@domain.com.
+   //  MailWhiteList "/some/path/to/a/file";
 };
 
 Transitions
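To illustrate the format described in the comment above, a (hypothetical) mail-whitelist
file could simply contain, one entry per line:

 jane.doe@domain.com
 RE:jane[^@]@domain.com

Plain entries are matched literally, RE: entries as regular expressions; the matching
itself is implemented in the send_mail() changes to daklib/utils.py further down.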
diff --git a/dak/add_user.py b/dak/add_user.py
new file mode 100755 (executable)
index 0000000..fd06a72
--- /dev/null
@@ -0,0 +1,251 @@
+#!/usr/bin/env python
+
+"""
+Add a user to the uid/maintainer/fingerprint table and
+add his key to the GPGKeyring
+
+@contact: Debian FTP Master <ftpmaster@debian.org>
+@copyright: 2004, 2009  Joerg Jaspert <joerg@ganneff.de>
+@license: GNU General Public License version 2 or later
+"""
+
+################################################################################
+# <elmo> wow, sounds like it'll be a big step up.. configuring dak on a
+#        new machine even scares me :)
+################################################################################
+
+# You don't want to read this script if you know python.
+# I know what I'm saying. I don't know python and I wrote it. So go and read some other stuff.
+
+import commands
+import pg
+import re
+import sys
+import time
+import os
+import apt_pkg
+import daklib.database
+import daklib.logging
+import daklib.queue
+import daklib.utils
+from daklib.regexes import re_gpg_fingerprint, re_user_address, re_user_mails, re_user_name
+
+################################################################################
+
+Cnf = None
+projectB = None
+Logger = None
+Upload = None
+Subst = None
+
+################################################################################
+
+def usage(exit_code=0):
+    print """Usage: add-user [OPTION]...
+Adds a new user to the dak databases and keyrings
+
+    -k, --key                keyid of the User
+    -u, --user               userid of the User
+    -c, --create             create a system account for the user
+    -h, --help               show this help and exit."""
+    sys.exit(exit_code)
+
+################################################################################
+# Stolen from userdir-ldap
+# Compute a random password using /dev/urandom.
+def GenPass():
+   # Generate a 15 character random string
+   SaltVals = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ/."
+   Rand = open("/dev/urandom")
+   Password = ""
+   for i in range(0,15):
+      Password = Password + SaltVals[ord(Rand.read(1)[0]) % len(SaltVals)]
+   return Password
+
+# Compute the MD5 crypted version of the given password
+def HashPass(Password):
+   import crypt
+   # Hash it telling glibc to use the MD5 algorithm - if you don't have
+   # glibc then just change Salt = "$1$" to Salt = ""
+   SaltVals = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789/."
+   Salt  = "$1$"
+   Rand = open("/dev/urandom")
+   for x in range(0,10):
+      Salt = Salt + SaltVals[ord(Rand.read(1)[0]) % len(SaltVals)]
+   Pass = crypt.crypt(Password,Salt)
+   if len(Pass) < 14:
+      raise "Password Error", "MD5 password hashing failed, not changing the password!"
+   return Pass
+
+################################################################################
+
+def createMail(login, passwd, keyid, keyring):
+    import GnuPGInterface
+
+    message= """
+
+Additionally there is now an account created for you.
+
+"""
+    message+= "\nYour password for the login %s is: %s\n" % (login, passwd)
+
+    gnupg = GnuPGInterface.GnuPG()
+    gnupg.options.armor = 1
+    gnupg.options.meta_interactive = 0
+    gnupg.options.extra_args.append("--no-default-keyring")
+    gnupg.options.extra_args.append("--always-trust")
+    gnupg.options.extra_args.append("--no-secmem-warning")
+    gnupg.options.extra_args.append("--keyring=%s" % keyring)
+    gnupg.options.recipients = [keyid]
+    proc = gnupg.run(['--encrypt'], create_fhs=['stdin', 'stdout'])
+    proc.handles['stdin'].write(message)
+    proc.handles['stdin'].close()
+    output = proc.handles['stdout'].read()
+    proc.handles['stdout'].close()
+    proc.wait()
+    return output
+
+################################################################################
+
+def main():
+    global Cnf, projectB
+    keyrings = None
+
+    Cnf = daklib.utils.get_conf()
+
+    Arguments = [('h',"help","Add-User::Options::Help"),
+                 ('c',"create","Add-User::Options::Create"),
+                 ('k',"key","Add-User::Options::Key", "HasArg"),
+                 ('u',"user","Add-User::Options::User", "HasArg"),
+                 ]
+
+    for i in [ "help", "create" ]:
+       if not Cnf.has_key("Add-User::Options::%s" % (i)):
+           Cnf["Add-User::Options::%s" % (i)] = ""
+
+    apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv)
+
+    Options = Cnf.SubTree("Add-User::Options")
+    if Options["help"]:
+        usage()
+
+    projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]))
+    daklib.database.init(Cnf, projectB)
+
+    if not keyrings:
+        keyrings = Cnf.ValueList("Dinstall::GPGKeyring")
+
+# Ignore the PGP keyring for download of new keys. Ignore errors; if the key is missing it will
+# barf with the next commands.
+    cmd = "gpg --no-secmem-warning --no-default-keyring %s --recv-keys %s" \
+           % (daklib.utils.gpg_keyring_args(keyrings), Cnf["Add-User::Options::Key"])
+    (result, output) = commands.getstatusoutput(cmd)
+
+    cmd = "gpg --with-colons --no-secmem-warning --no-auto-check-trustdb --no-default-keyring %s --with-fingerprint --list-key %s" \
+           % (daklib.utils.gpg_keyring_args(keyrings),
+              Cnf["Add-User::Options::Key"])
+    (result, output) = commands.getstatusoutput(cmd)
+    m = re_gpg_fingerprint.search(output)
+    if not m:
+        print output
+        daklib.utils.fubar("0x%s: (1) No fingerprint found in gpg output but it returned 0?\n%s" \
+                           % (Cnf["Add-User::Options::Key"],
+                              daklib.utils.prefix_multi_line_string(output, " [GPG output:] ")))
+    primary_key = m.group(1)
+    primary_key = primary_key.replace(" ","")
+
+    uid = ""
+    if Cnf.has_key("Add-User::Options::User") and Cnf["Add-User::Options::User"]:
+        uid = Cnf["Add-User::Options::User"]
+        name = Cnf["Add-User::Options::User"]
+    else:
+        u = re_user_address.search(output)
+        if not u:
+            print output
+            daklib.utils.fubar("0x%s: (2) No userid found in gpg output but it returned 0?\n%s" \
+                        % (Cnf["Add-User::Options::Key"], daklib.utils.prefix_multi_line_string(output, " [GPG output:] ")))
+        uid = u.group(1)
+        n = re_user_name.search(output)
+        name = n.group(1)
+
+# Look for all email addresses on the key.
+    emails=[]
+    for line in output.split('\n'):
+        e = re_user_mails.search(line)
+        if not e:
+            continue
+        emails.append(e.group(2))
+
+
+    print "0x%s -> %s <%s> -> %s -> %s" % (Cnf["Add-User::Options::Key"], name, emails[0], uid, primary_key)
+
+    prompt = "Add user %s with above data (y/N) ? " % (uid)
+    yn = daklib.utils.our_raw_input(prompt).lower()
+
+    if yn == "y":
+# Create an account for the user?
+          summary = ""
+          if Cnf.FindB("Add-User::CreateAccount") or Cnf["Add-User::Options::Create"]:
+              password = GenPass()
+              pwcrypt = HashPass(password)
+              if Cnf.has_key("Add-User::GID"):
+                  cmd = "sudo /usr/sbin/useradd -g users -m -p '%s' -c '%s' -G %s %s" \
+                         % (pwcrypt, name, Cnf["Add-User::GID"], uid)
+              else:
+                  cmd = "sudo /usr/sbin/useradd -g users -m -p '%s' -c '%s' %s" \
+                         % (pwcrypt, name, uid)
+              (result, output) = commands.getstatusoutput(cmd)
+              if (result != 0):
+                   daklib.utils.fubar("Invocation of '%s' failed:\n%s\n" % (cmd, output), result)
+              try:
+                  summary+=createMail(uid, password, Cnf["Add-User::Options::Key"], Cnf["Dinstall::GPGKeyring"])
+              except:
+                  summary=""
+                  daklib.utils.warn("Could not prepare password information for mail, not sending password.")
+
+# Now add user to the database.
+          projectB.query("BEGIN WORK")
+          uid_id = daklib.database.get_or_set_uid_id(uid)
+          projectB.query('CREATE USER "%s"' % (uid))
+          projectB.query("COMMIT WORK")
+# The following two are kicked out in rhona, so we don't set them. kelly adds
+# them as soon as she installs a package with unknown ones, so no problems to expect here.
+# Just leave the comment in, to not think about "Why the hell aren't they added" in
+# a year, if we ever touch uma again.
+#          maint_id = daklib.database.get_or_set_maintainer_id(name)
+#          projectB.query("INSERT INTO fingerprint (fingerprint, uid) VALUES ('%s', '%s')" % (primary_key, uid_id))
+
+# Let's add the user to the email whitelist file if it's configured.
+          if Cnf.has_key("Dinstall::MailWhiteList") and Cnf["Dinstall::MailWhiteList"] != "":
+              file = daklib.utils.open_file(Cnf["Dinstall::MailWhiteList"], "a")
+              for mail in emails:
+                  file.write(mail+'\n')
+              file.close()
+
+          print "Added:\nUid:\t %s (ID: %s)\nMaint:\t %s\nFP:\t %s" % (uid, uid_id, \
+                    name, primary_key)
+
+# Should we send mail to the newly added user?
+          if Cnf.FindB("Add-User::SendEmail"):
+              mail = name + "<" + emails[0] +">"
+              Upload = daklib.queue.Upload(Cnf)
+              Subst = Upload.Subst
+              Subst["__NEW_MAINTAINER__"] = mail
+              Subst["__UID__"] = uid
+              Subst["__KEYID__"] = Cnf["Add-User::Options::Key"]
+              Subst["__PRIMARY_KEY__"] = primary_key
+              Subst["__FROM_ADDRESS__"] = Cnf["Dinstall::MyEmailAddress"]
+              Subst["__HOSTNAME__"] = Cnf["Dinstall::MyHost"]
+              Subst["__SUMMARY__"] = summary
+              new_add_message = daklib.utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/add-user.added")
+              daklib.utils.send_mail(new_add_message)
+
+    else:
+          uid = None
+
+
+#######################################################################################
+
+if __name__ == '__main__':
+    main()
+
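With the command registered in dak/dak.py (next hunk), the tool would be invoked along
these lines (a sketch; the key id is made up):

 $ dak add-user -k 0xDEADBEEF -u jane
 $ dak add-user --key 0xDEADBEEF --user jane --create

The second form additionally creates a system account via sudo/useradd, as controlled by
the Add-User section of the backports.org dak.conf above.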
diff --git a/dak/dak.py b/dak/dak.py
index a08f20e0eecf5e87d65e6483a75bc4c18cab7492..981a31a95d0d52e6408c458218c012726a8229bd 100755 (executable)
@@ -171,6 +171,8 @@ def init():
          "Generate statistics"),
         ("bts-categorize",
          "Categorize uncategorized bugs filed against ftp.debian.org"),
+        ("add-user",
+         "Add a user to the archive"),
         ]
     return functionality
 
diff --git a/daklib/regexes.py b/daklib/regexes.py
index b94c53b314ce74378e62e1899f5202e531a63a5c..ee26430ce8673b1908b5a948cce877898328e4f7 100755 (executable)
@@ -7,6 +7,7 @@ Central repository of regexes for dak
 @contact: Debian FTP Master <ftpmaster@debian.org>
 @copyright: 2001, 2002, 2003, 2004, 2005, 2006  James Troup <james@nocrew.org>
 @copyright: 2009  Mark Hymers <mhy@debian.org>
+@copyright: 2009  Joerg Jaspert <joerg@debian.org>
 @license: GNU General Public License version 2 or later
 """
 
@@ -97,3 +98,12 @@ re_build_dep_arch = re.compile(r"\[[^]]+\]")
 
 # From dak/transitions.py
 re_broken_package = re.compile(r"[a-zA-Z]\w+\s+\-.*")
+
+# From dak/add_user.py
+re_gpg_fingerprint = re.compile(r"^fpr:+(.*):$", re.MULTILINE)
+# The next one is dirty
+re_user_address = re.compile(r"^pub:.*<(.*)@.*>.*$", re.MULTILINE)
+re_user_mails = re.compile(r"^(pub|uid):[^rdin].*<(.*@.*)>.*$", re.MULTILINE)
+re_user_name = re.compile(r"^pub:.*:(.*)<.*$", re.MULTILINE)
+re_re_mark = re.compile(r'^RE:')
+
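These patterns are run against gpg --with-colons output in dak/add_user.py above. A rough,
self-contained sketch of what they extract (the key data below is invented):

 import re

 re_gpg_fingerprint = re.compile(r"^fpr:+(.*):$", re.MULTILINE)
 re_user_mails = re.compile(r"^(pub|uid):[^rdin].*<(.*@.*)>.*$", re.MULTILINE)

 output = "\n".join([
     "pub:u:1024:17:0123456789ABCDEF:2004-01-01:::u:Jane Doe <jane@example.org>::scESC:",
     "fpr:::::::::0123456789ABCDEF0123456789ABCDEF01234567:",
     "uid:u::::1234567890::0123::Jane Doe <jane@work.example>::::::::::0:",
 ])

 # The fingerprint of the primary key, taken from the fpr: line.
 print(re_gpg_fingerprint.search(output).group(1))
 # 0123456789ABCDEF0123456789ABCDEF01234567

 # All mail addresses on non-revoked/non-disabled pub/uid lines.
 print([m.group(2) for m in re_user_mails.finditer(output)])
 # ['jane@example.org', 'jane@work.example']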
diff --git a/daklib/utils.py b/daklib/utils.py
index 7b822b9db312f3adb965bf3db4e586925c141db6..cb5df31ce30fd77e2c086c6266209720fbffec88 100755 (executable)
@@ -37,10 +37,13 @@ import stat
 import apt_pkg
 import database
 import time
+import re
+import string
+import email as modemail
 from dak_exceptions import *
 from regexes import re_html_escaping, html_escaping, re_single_line_field, \
                     re_multi_line_field, re_srchasver, re_verwithext, \
-                    re_parse_maintainer, re_taint_free, re_gpg_uid
+                    re_parse_maintainer, re_taint_free, re_gpg_uid, re_re_mark
 
 ################################################################################
 
@@ -598,6 +601,67 @@ def send_mail (message, filename=""):
         os.write (fd, message)
         os.close (fd)
 
+    if Cnf.has_key("Dinstall::MailWhiteList") and \
+           Cnf["Dinstall::MailWhiteList"] != "":
+        message_in = open_file(filename)
+        message_raw = modemail.message_from_file(message_in)
+        message_in.close();
+
+        whitelist = [];
+        whitelist_in = open_file(Cnf["Dinstall::MailWhiteList"])
+        try:
+            for line in whitelist_in:
+                if re_re_mark.match(line):
+                    whitelist.append(re.compile(re_re_mark.sub("", line.strip(), 1)))
+                else:
+                    whitelist.append(re.compile(re.escape(line.strip())))
+        finally:
+            whitelist_in.close()
+
+        # Fields to check.
+        fields = ["To", "Bcc", "Cc"]
+        for field in fields:
+            # Check each field
+            value = message_raw.get(field, None)
+            if value != None:
+                match = [];
+                for item in value.split(","):
+                    (rfc822_maint, rfc2047_maint, name, email) = fix_maintainer(item.strip())
+                    mail_whitelisted = 0
+                    for wr in whitelist:
+                        if wr.match(email):
+                            mail_whitelisted = 1
+                            break
+                    if not mail_whitelisted:
+                        print "Skipping %s since it's not in %s" % (item, Cnf["Dinstall::MailWhiteList"])
+                        continue
+                    match.append(item)
+
+                # Doesn't have any mail in whitelist so remove the header
+                if len(match) == 0:
+                    del message_raw[field]
+                else:
+                    message_raw.replace_header(field, string.join(match, ", "))
+
+        # Change message fields in order if we don't have a To header
+        if not message_raw.has_key("To"):
+            fields.reverse()
+            for field in fields:
+                if message_raw.has_key(field):
+                    message_raw[fields[-1]] = message_raw[field]
+                    del message_raw[field]
+                    break
+            else:
+                # Clean up any temporary files
+                # and return, as we removed all recipients.
+                if message:
+                    os.unlink (filename);
+                return;
+
+        fd = os.open(filename, os.O_RDWR|os.O_EXCL, 0700);
+        os.write (fd, message_raw.as_string(True));
+        os.close (fd);
+
     # Invoke sendmail
     (result, output) = commands.getstatusoutput("%s < %s" % (Cnf["Dinstall::SendmailCommand"], filename))
     if (result != 0):
diff --git a/scripts/backports.org/copyoverrides b/scripts/backports.org/copyoverrides
new file mode 100755 (executable)
index 0000000..a90db62
--- /dev/null
@@ -0,0 +1,29 @@
+#! /bin/sh
+
+set -e
+. $SCRIPTVARS
+echo 'Copying override files into public view ...'
+
+for f in $copyoverrides ; do
+       cd $overridedir
+       chmod g+w override.$f
+
+       cd $indices
+       rm -f .newover-$f.gz
+       pc="`gzip 2>&1 -9nv <$overridedir/override.$f >.newover-$f.gz`"
+       set +e
+       nf=override.$f.gz
+       cmp -s .newover-$f.gz $nf
+       rc=$?
+       set -e
+        if [ $rc = 0 ]; then
+               rm -f .newover-$f.gz
+       elif [ $rc = 1 -o ! -f $nf ]; then
+               echo "   installing new $nf $pc"
+               mv -f .newover-$f.gz $nf
+               chmod g+w $nf
+       else
+               echo $? $pc
+               exit 1
+       fi
+done
diff --git a/scripts/backports.org/mkchecksums b/scripts/backports.org/mkchecksums
new file mode 100755 (executable)
index 0000000..575d55c
--- /dev/null
@@ -0,0 +1,15 @@
+#!/bin/sh
+# Update the md5sums file
+
+set -e
+. $SCRIPTVARS
+
+dsynclist=$dbdir/dsync.list
+md5list=$indices/md5sums
+
+echo -n "Creating md5 / dsync index file ... "
+
+cd "$ftpdir"
+dsync-flist -q generate $dsynclist --exclude $dsynclist --md5
+dsync-flist -q md5sums $dsynclist | gzip -9n > ${md5list}.gz
+dsync-flist -q link-dups $dsynclist || true
diff --git a/scripts/backports.org/mklslar b/scripts/backports.org/mklslar
new file mode 100755 (executable)
index 0000000..19363f1
--- /dev/null
@@ -0,0 +1,36 @@
+#!/bin/sh
+# Update the ls-lR.
+
+set -e
+. $SCRIPTVARS
+
+cd $ftpdir
+
+filename=ls-lR
+
+echo "Removing any core files ..."
+find -type f -name core -print0 | xargs -0r rm -v
+
+echo "Checking permissions on files in the FTP tree ..."
+find -type f \( \! -perm -444 -o -perm +002 \) -ls
+find -type d \( \! -perm -555 -o -perm +002 \) -ls
+
+echo "Checking symlinks ..."
+symlinks -rd .
+
+echo "Creating recursive directory listing ... "
+rm -f .$filename.new
+TZ=UTC ls -lR | grep -v Archive_Maintenance_In_Progress > .$filename.new
+
+if [ -r ${filename}.gz ] ; then
+  mv -f ${filename}.gz $filename.old.gz
+  mv -f .$filename.new $filename
+  rm -f $filename.patch.gz
+  zcat $filename.old.gz | diff -u - $filename | gzip -9cfn - >$filename.patch.gz
+  rm -f $filename.old.gz
+else
+  mv -f .$filename.new $filename
+fi
+
+gzip -9cfN $filename >$filename.gz
+rm -f $filename
diff --git a/scripts/backports.org/mkmaintainers b/scripts/backports.org/mkmaintainers
new file mode 100755 (executable)
index 0000000..edb0f20
--- /dev/null
@@ -0,0 +1,31 @@
+#! /bin/sh
+
+echo
+echo -n 'Creating Maintainers index ... '
+
+set -e
+. $SCRIPTVARS
+cd $base/misc/
+
+nonusmaint="$base/misc/Maintainers_Versions-non-US"
+
+
+cd $indices
+dak make-maintainers | sed -e "s/~[^  ]*\([   ]\)/\1/"  | awk '{printf "%-20s ", $1; for (i=2; i<=NF; i++) printf "%s ", $i; printf "\n";}' > .new-maintainers
+
+set +e
+cmp .new-maintainers Maintainers >/dev/null
+rc=$?
+set -e
+if [ $rc = 1 ] || [ ! -f Maintainers ] ; then
+       echo -n "installing Maintainers ... "
+       mv -f .new-maintainers Maintainers
+       gzip -9v <Maintainers >.new-maintainers.gz
+       mv -f .new-maintainers.gz Maintainers.gz
+elif [ $rc = 0 ] ; then
+       echo '(same as before)'
+       rm -f .new-maintainers
+else
+       echo cmp returned $rc
+       false
+fi
diff --git a/templates/add-user.added b/templates/add-user.added
new file mode 100644 (file)
index 0000000..1a93491
--- /dev/null
@@ -0,0 +1,18 @@
+From: __FROM_ADDRESS__
+To: __NEW_MAINTAINER__
+Subject: Account on __HOSTNAME__ activated
+
+Hi __UID__,
+
+Your account on __HOSTNAME__ has just been activated. You are now able
+to upload packages there, using dput or dupload as you wish.
+The GPG key you need to sign your packages with is key 0x__KEYID__
+with the fingerprint __PRIMARY_KEY__.
+
+__SUMMARY__
+
+This message was generated automatically; if you believe that there is
+a problem with it, please contact the archive administrators by mailing
+__ADMIN_ADDRESS__.
+
+__DISTRO__ distribution maintenance software