+++ /dev/null
-2005-12-16 Ryan Murray <rmurray@debian.org>
-
- * halle: add support for udebs
- * kelly: stable_install: add support for binNMU versions
-
-2005-12-05 Anthony Towns <aj@erisian.com.au>
-
- * katie.py: Move accept() autobuilding support into separate function
- (queue_build), and generalise to build different queues
-
- * db_access.py: Add get_or_set_queue_id instead of hardcoding accepted=0
-
- * jennifer: Initial support for enabling embargo handling with the
- Dinstall::SecurityQueueHandling option.
- * jennifer: Shift common code into remove_from_unchecked and move_to_dir
- functions.
-
- * katie.conf-security: Include embargo options
- * katie.conf-security: Add Lock dir
- * init_pool.sql-security: Create disembargo table
- * init_pool.sql-security: Add constraints for disembargo table
-
-2005-11-26 Anthony Towns <aj@erisian.com.au>
-
- * Merge of changes from klecker, by various people
-
- * amber: special casing for not passing on amd64 and oldstable updates
- * amber: security mirror triggering
- * templates/amber.advisory: updated advisory structure
- * apt.conf.buildd-security: update for sarge's release
- * apt.conf-security: update for sarge's release
- * cron.buildd-security: generalise suite support, update for sarge's release
- * cron.daily-security: update for sarge's release, add udeb support
- * vars-security: update for sarge's release
- * katie.conf-security: update for sarge's release, add amd64 support,
- update signing key
-
- * docs/README.names, docs/README.quotes: include the additions
-
-2005-11-25 Anthony Towns <aj@erisian.com.au>
-
- * Changed accepted_autobuild to queue_build everywhere.
- * Add a queue table.
- * Add a "queue" field in the queue_build table (currently always 0)
-
- * jennifer: Restructure to make it easier to support special
- purpose queues between unchecked and accepted.
-
-2005-11-25 Anthony Towns <aj@erisian.com.au>
-
- * Finishing merge of changes from spohr, by various people still
-
- * jennifer: If changed-by parsing fails, set variables to "" so REJECT
- works
- * jennifer: Re-enable .deb ar format checking
- * katie.py: Convert to +bX binNMU special casing
- * rhona: Add some debug output when deleting binaries
- * cron.daily: Add emilie
- * cron.unchecked: Add lock files
-
-2005-11-15 Anthony Towns <aj@erisian.com.au>
-
- * Merge of changes from spohr, by various people.
-
- * tiffani: new script to do patches to Packages, Sources and Contents
- files for quicker downloads.
- * ziyi: update to authenticate tiffani generated files
-
- * dak: new script to provide a single binary with less arbitrary names
- for access to dak functionality.
-
- * cindy: script implemented
-
- * saffron: cope with suites that don't have a Priority specified
- * heidi: use get_suite_id()
- * denise: don't hardcode stable and unstable, or limit udebs to unstable
- * denise: remove override munging for testing (now done by cindy)
- * helena: expanded help, added new, sort and age options, and fancy headers
- * jennifer: require description, add a reject for missing dsc file
- * jennifer: change lock file
- 	* kelly: propagation support
- * lisa: honour accepted lock, use mtime not ctime, add override type_id
- * madison: don't say "dep-retry"
- * melanie: bug fix in output (missing %)
- * natalie: cope with maintainer_override == None; add type_id for overrides
- * nina: use mtime, not ctime
-
- 	* katie.py: propagation bug fixes
- * logging.py: add debugging support, use | as the logfile separator
-
- * katie.conf: updated signing key (4F368D5D)
- * katie.conf: changed lockfile to dinstall.lock
- * katie.conf: added Lisa::AcceptedLockFile, Dir::Lock
- * katie.conf: added tiffani, cindy support
- * katie.conf: updated to match 3.0r6 release
- * katie.conf: updated to match sarge's release
-
- * apt.conf: update for sarge's release
- * apt.conf.stable: update for sarge's release
- * apt.conf: bump daily max Contents change to 25MB from 12MB
-
- * cron.daily: add accepted lock and invoke cindy
- * cron.daily: add daily.lock
- * cron.daily: invoke tiffani
- * cron.daily: rebuild accepted buildd stuff
- * cron.daily: save rene-daily output on the web site
- * cron.daily: disable billie
- * cron.daily: add stats pr0n
-
- * cron.hourly: invoke helena
-
- * pseudo-packages.maintainers,.descriptions: miscellaneous updates
- * vars: add lockdir, add etch to copyoverrides
- * Makefile: add -Ipostgresql/server to CXXFLAGS
-
- * docs/: added README.quotes
- * docs/: added manpages for alicia, catherine, charisma, cindy, heidi,
- julia, katie, kelly, lisa, madison, melanie, natalie, rhona.
-
- * TODO: correct spelling of "conflicts"
-
-2005-05-28 James Troup <james@nocrew.org>
-
- * helena (process_changes_files): use MTIME rather than CTIME (the
- C's not for 'creation', stupid).
- * lisa (sort_changes): likewise.
-
- * jennifer (check_distributions): use has_key rather than an 'in'
- test which doesn't work with python2.1. [Probably by AJ]
-
-2005-03-19 James Troup <james@nocrew.org>
-
- * rene (main): use Suite::<suite>::UdebComponents to determine
- what components have udebs rather than assuming only 'main' does.
-
-2005-03-18 James Troup <james@nocrew.org>
-
- * utils.py (rfc2047_encode): use codecs.lookup() rather than
- encodings.<encoding>.Codec().decode() as encodings.utf_8 no longer
- has a Codec() module in python2.4. Thanks to Andrew Bennetts
- <andrew@ubuntu.com>.
-
-2005-03-06 Joerg Jaspert <ganneff@debian.org>
-
- * helena: add -n/--new HTML output option and improved sorting
- options.
-
-2005-03-06 Ryan Murray <rmurray@debian.org>
-
- * shania(main): use Cnf::Dir::Reject instead of REJECT
-
-2005-02-08 James Troup <james@nocrew.org>
-
- * rene (main): add partial NBS support by checking that binary
- packages are built by their real parent and not some random
- stranger.
- (do_partial_nbs): likewise.
-
-2005-01-18 James Troup <james@nocrew.org>
-
- * katie.py (Katie.build_summaries): avoid leaking file handle when
- extracting package description.
- (Katie.force_reject): remember and close each file descriptor we
- use.
- (Katie.do_reject): s/file/temp_fh/ to avoid pychecker warning.
- s/reason_file/reason_fd/ because it's a file descriptor.
- (Katie.check_dsc_against_db): avoid leaking file handle whenever
- invoking apt_pkg.md5sum().
-
- * jennifer (check_deb_ar): new function: sanity check the ar
- contents of a .deb.
- (check_files): use it.
- (check_timestamps): check for data.tar.bz2 if data.tar.gz can't be
- found.
- (check_files): accept 'raw-installer' as an alias for 'byhand'.
-
-2005-01-14 Anthony Towns <ajt@debian.org>
-
- * kelly: when UNACCEPTing, don't double up the "Rejecting:"
-
- * propup stuff (thanks to Andreas Barth)
- * katie.conf: add stable MustBeOlderThan testing, add -security
- propup
- * jennifer: set distribution-version in .katie if propup may be needed
- 	* katie.py: add propagation to cross_suite_version_check
-
-2004-11-27 James Troup <james@nocrew.org>
-
- * nina: new script to split monolithic queue/done into date-based
- hierarchy.
-
- * rene (usage): document -s/--suite.
- (add_nbs): use .setdefault().
- (do_anais): likewise.
- (do_nbs): don't set a string to "" and then += it.
- (do_obsolete_source): new function - looks for obsolete source
- packages (i.e source packages whose binary packages are ALL a)
- claimed by someone else and b) newer when built from the other
- source package).
- (main): support -s/--suite. Add 'obsolete source' to both 'daily'
- and 'full' check modes. Check for obsolete source packages.
- linux-wlan-ng has been fixed - remove hideous bodge.
-
- * jennifer (check_distributions): support 'reject' suite map type.
-
- * utils.py (validate_changes_file_arg): s/file/filename/.
- s/fatal/require_changes/. If require_changes is -1, ignore errors
- and return the .changes filename regardless.
- (re_no_epoch): s/\*/+/ as there must be a digit in an epoch.
- (re_no_revision): don't escape '-', it's not a special character.
- s/\*/+/ as there must be at least one non-dash character after the
- dash in a revision. Thanks to Christian Reis for noticing both of
- these.
-
- * ashley (main): pass require_changes=-1 to
- utils.validate_changes_file_arg().
-
- * pseudo-packages.maintainers (kernel): switch to 'Debian Kernel
- Team <debian-kernel@lists.debian.org>'.
-
- * katie.py (Katie.in_override_p): fix .startswith() usage.
-
- * katie.conf (Dinstall::DefaultSuite): add as 'unstable'.
- (Lauren::MoreInfoURL): update to 3.0r3.
- (Suite::Stable::Version): likewise.
- (Suite::Stable::Description): likewise.
-
- * cron.daily: disable automatic task override generation.
-
- * cindy (process): restrict "find all packages" queries by
- component. Respect Options["No-Action"].
- (main): add -n/--no-action support. Only run on unstable. Rename
- type to otype (pychecker).
-
-2004-11-27 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * katie.conf (Billie::BasicTrees): add all architectures.
- (Billie::CombinationTrees): remove 'welovehp' and 'embedded', add
- 'everything'.
-
- * cron.daily: Update a 'current' symlink when creating the
- post-daily-cron-job database backup to aid mirroring to merkel.
- Run billie.
-
- * billie (BillieTarget.poolish_match): handle .udeb too.
-
-2004-10-13 Ryan Murray <rmurray@debian.org>
-
- * amber (do_upload): Sort changes files in "katie" order so that
- source always arrives before binary-only rebuilds
-
-2004-10-05 James Troup <james@nocrew.org>
-
- * jennifer (check_dsc): correct reject message on invalid
- Maintainer field.
-
-2004-09-20 James Troup <james@nocrew.org>
-
- * alicia: remove unused 'pwd' import.
-
- * tea (check_override): underline suite name in output properly.
-
- * rene (main): read a compressed Packages file.
- * tea (validate_packages): likewise.
-
- * katie.py (re_fdnic): add 'r' prefix.
- (re_bin_only_nmu_of_mu): likewise.
- (re_bin_only_nmu_of_nmu): likewise.
-
- * madison (main): retrieve component information too and display
- it if it's not 'main'.
- * melanie (reverse_depends_check): likewise.
-
- * utils.py (pp_dep): renamed...
- (pp_deps): ... to this.
- * jeri (check_dep): update calls to utils.pp_deps().
- * melanie (reverse_depends_check): likewise.
-
- 	* jennifer (check_changes): move initialization of email variables
- from here...
- (process_it): ...to here as we no longer always run
- check_changes(). Don't bother to initialize
- changes["architecture"].
-
- * denise (list): renamed to...
- (do_list): ...this to avoid name clash with builtin 'list'.
- Similarly, s/file/output_file/, s/type/otype/. Use .setdefault()
- for dictionaries.
- (main): Likewise for name clash avoidance and also
- s/override_type/suffix/. Adjust call to do_list().
-
-2004-09-01 Ryan Murray <rmurray@debian.org>
-
- * tea (check_files): check the pool/ directory instead of dists/
-
-2004-08-04 James Troup <james@nocrew.org>
-
- * jenna (cleanup): use .setdefault() for dictionaries.
- (write_filelists): likewise.
-
- (write_filelists): Use utils.split_args() not split() to split
- command line arguments.
- (stable_dislocation_p): likewise.
-
- (write_filelists): Add support for mapping side of suite-based
- "Arch: all mapping".
- 	(do_da_do_da): ensure that if we're not doing all suites we still
- 	process enough to be able to correctly map arch: all packages.
-
- * utils.py (cant_open_exc): correct exception string,
- s/read/open/, s/.$//.
-
- * templates/amber.advisory: update to match reality a little
- better.
-
- * melanie (reverse_depends_check): read Packages.gz rather than
- Packages.
-
- * jennifer (check_files): check for unknown component before
- checking for NEWness.
-
- * katie.py (Katie.in_override_p): use .startswith in favour of a
- slice.
-
- * docs/melanie.1.sgml: document -R/--rdep-check.
-
-2004-07-12 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * billie (main): Make the verbatim lists include all the README
- elements.
- * docs/README.names: Add billie in (correcting oversight)
-
-2004-07-01 James Troup <james@nocrew.org>
-
- * emilie (main): handle woody's case-sensitive python-ldap,
- s/keyfingerprint/keyFingerPrint/.
-
-2004-06-25 James Troup <james@nocrew.org>
-
- * debian/control (Depends): add dpkg-dev since jennifer uses
- dpkg-source.
-
-2004-06-24 James Troup <james@nocrew.org>
-
- * melanie (main): s/file/temp_file/ and close file handle before
- removing the temporary file.
- (main): don't warn about needing a --carbon-copy if in no-action
- mode.
-
- * rene (do_nbs): pcmcia-cs has been fixed - remove hideous bodge.
- (main): likewise.
-
- * test/006/test.py (main): check bracketed email-only form.
-
- * utils.py (fix_maintainer): if the Maintainer string is bracketed
- email-only, strip the brackets so we don't end up with
- <<james@nocrew.org>>.
-
-2004-06-20 James Troup <james@nocrew.org>
-
- * jennifer (process_it): only run check_changes() if
- check_signature() returns something. (Likewise)
-
- * utils.py (changes_compare): if there's no changes["version"] use
- "0" rather than None. (Avoids a crash on unsigned changes file.)
-
-2004-06-17 Martin Michlmayr <tbm@cyrius.com>
-
- * jeri (pp_dep): moved from here to ...
- * utils.py (pp_dep): here.
-
- * melanie (main): add reverse dependency checking.
-
-2004-06-17 James Troup <james@nocrew.org>
-
- * jennifer (check_dsc): s/dsc_whitespace_rules/signing_rules/.
- * tea (check_dscs): likewise.
-
- * utils.py (parse_changes): s/dsc_whitespace_rules/signing_rules/,
- change from boolean to a variable with 3 possible values, 0 and 1
- as before, -1 means don't require a signature. Makes
- 	parse_changes() useful for parsing arbitrary RFC822-style files,
- e.g. 'Release' files.
- (check_signature): add support for detached signatures by passing
- the files the signature is for as an optional third argument.
- s/filename/sig_filename/g. Add a fourth optional argument to
- choose the keyring(s) to use. Don't os.path.basename() the
- sig_filename before checking it for taint.
- (re_taint_free): allow '/'.
-
-2004-06-11 James Troup <james@nocrew.org>
-
- * tea (check_files): make override.unreadable optional.
- (validate_sources): close the Sources file handle.
-
- * docs/README.first: clarify that 'alyson' and running
- add_constraints.sql by hand is something you only want to do if
- you're not running 'neve'.
-
- * docs/README.config (Location::$LOCATION::Suites): document.
-
- * db_access.py (do_query): also print out the result of the query.
-
-2004-06-10 James Troup <james@nocrew.org>
-
- * katie.py (Katie.cross_suite_version_check): post-woody versions
- of python-apt's apt_pkg.VersionCompare() function apparently
- 	return variable integers for less than or greater than results -
- update our result checking to match.
- * jenna (resolve_arch_all_vs_any): likewise.
- * charisma (main): likewise.
-
-2004-06-09 James Troup <james@nocrew.org>
-
- * jennifer (process_it): s/changes_valid/valid_changes_p/. Add
- valid_dsc_p and don't run check_source() if check_dsc() failed.
- (check_dsc): on fatal failures return 0 so check_source() isn't
- run (since it makes fatal assumptions about the presence of
- mandatory .dsc fields).
- Remove unused and obsolete re_bad_diff and re_is_changes regexps.
-
-2004-05-07 James Troup <james@nocrew.org>
-
- * katie.conf (Rhona::OverrideFilename): unused and obsolete, remove.
- * katie.conf-non-US (Rhona::OverrideFilename): likewise.
-
- * katie.conf (Dir::Override): remove duplicate definition.
-
- * neve (get_or_set_files_id): add an always-NULL last_used column
- to output.
-
-2004-04-27 James Troup <james@nocrew.org>
-
- * apt.conf-security (tree "dists/stable/updates"): add
- ExtraOverride - noticed by Joey Hess (#246050).
- (tree "dists/testing/updates"): likewise.
-
-2004-04-20 James Troup <james@nocrew.org>
-
- * jennifer (check_files): check for existing .changes or .katie
- files of the same name in the Suite::<suite>::Copy{Changes,Katie}
- directories.
-
-2004-04-19 James Troup <james@nocrew.org>
-
- * jennifer (check_source): handle failure to remove the temporary
- directory (used for source tree extraction) better, specifically:
- if we fail with -EACCES, chmod -R u+rwx the temporary directory
- and try again and if that works, REJECT the package.
-
-2004-04-17 James Troup <james@nocrew.org>
-
- * docs/madison.1.sgml: document -b/--binary-type,
- -g/--greaterorequal and -G/--greaterthan.
-
- * madison (usage): -b/--binary-type only takes a single argument.
- Document -g/--greaterorequal and -G/--greaterthan.
- (main): add support for -g/--greaterorequal and -G/--greaterthan.
-
-2004-04-12 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * billie: Cleaned up a load of comments, added /README.non-US to
- the verbatim matches list.
-
-2004-04-07 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * utils.py (size_type): Make it use real binary megabytes and
- kilobytes, instead of the marketing terms used before.
-
-2004-04-07 James Troup <james@nocrew.org>
-
- * katie.py (Katie.check_dsc_against_db): in the case we're
- ignoring an identical-to-existing orig.tar.gz remember the path to
- the existent version in pkg.orig_tar_gz. Adjust query to grab
- location.path too to be able to do so.
-
-2004-04-03 James Troup <james@nocrew.org>
-
- * debian/control (Depends): add python2.1-email | python (>= 2.2)
- needed for new utils.rfc2047_encode() function.
-
- * utils.py (re_parse_maintainer): allow whitespace inside the
- email address.
- (Error): new exception base class.
- (ParseMaintError): new exception class.
- (force_to_utf8): new function.
- (rfc2047_encode): likewise.
- (fix_maintainer): rework. use force_to_utf8() to force name and
- rfc822 return values to always use UTF-8. use rfc2047_encode() to
- return an rfc2047 value. Validate the address to catch missing
- email addresses and (some) broken ones.
-
- * katie.py (nmu_p.is_an_nmu): adapt for new utils.fix_maintainer()
- by adopting foo2047 return value.
- (Katie.dump_vars): add changedby2047 and maintainer2047 as
- mandatory changes fields. Promote changes and maintainer822 to
- mandatory fields.
- (Katie.update_subst): default maintainer2047 rather than
- 	maintainer822.  Use foo2047 rather than foo822 when setting
- __MAINTAINER_TO__ or __MAINTAINER_FROM__.
-
- * jennifer (check_changes): set default changes["maintainer2047"]
- and changes["changedby2047"] values rather than their 822
- equivalents. Makes changes["changes"] a mandatory field. Adapt
- to new utils.fix_maintainer() - reject on exception and adopt
- foo2047 return value.
- (check_dsc): if a mandatory field is missing don't do any further
- checks and as a result reduce paranoia about dsc[var] existence.
- Validate the maintainer field by calling new
- utils.fix_maintainer().
-
- * ashley (main): add changedby2047 and maintainer2047 to mandatory
- changes fields. Promote maintainer822 to a mandatory changes
- field. add "pool name" to files fields.
-
- * test/006/test.py: new file - tests for new
- utils.fix_maintainer().
-
-2004-04-01 James Troup <james@nocrew.org>
-
- * templates/lisa.prod (To): use __MAINTAINER_TO__ not __MAINTAINER__.
-
- * jennifer (get_changelog_versions): create a symlink mirror of
- the source files in the temporary directory.
- (check_source): if check_dsc_against_db() couldn't find the
- orig.tar.gz bail out.
-
- * katie.py (Katie.check_dsc_against_db): if the orig.tar.gz is not
- part of the upload store the path to it in pkg.orig_tar_gz and if
- it can't be found set pkg.orig_tar_gz to -1.
-
- Explicitly return the second value as None in the (usual) case
- where we don't have to reprocess. Remove obsolete diagnostic
- logs.
-
- * lisa (prod_maintainer): don't return anything, no one cares. (pychecker)
-
- * utils.py (temp_filename): new helper function that wraps around
- tempfile.mktemp().
-
- * katie.py (Katie.do_reject): use it and don't import tempfile.
- * lisa (prod_maintainer): likewise.
- (edit_note): likewise.
- (edit_new): likewise.
- * lauren (reject): likewise.
- * melanie (main): likewise.
- * neve (do_sources): likewise.
- * rene (main): likewise.
- * tea (validate_sources): likewise.
-
-2004-03-31 James Troup <james@nocrew.org>
-
- * tea (validate_sources): remove unused 's' temporary variable.
-
-2004-03-15 James Troup <james@nocrew.org>
-
- * jennifer (check_dsc): check changes["architecture"] for
- source before we do anything else.
-
-2004-03-21 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * billie: Added
- * katie.conf (Billie): Added sample Billie stanza to katie.conf
-
-2004-03-12 James Troup <james@nocrew.org>
-
- * docs/README.config (Dir::Queue::BTSVersionTrack): document.
-
- * katie.conf (Dir::Queue::BTSVersionTrack): define.
-
- * katie.py (Katie.accept): add support for DebBugs Version
- Tracking by writing out .versions (generated in jennifer's
- get_changelog_versions()) and .debinfo (mapping of binary ->
- source) files.
-
- * ashley (main): add dsc["bts changelog"].
-
- * katie.py (Katie.dump_vars): store dsc["bts changelog"] too.
-
- * jennifer (check_diff): obsoleted by check_source(), removed.
- (check_source): new function: create a temporary directory and
- move into it and call get_changelog_versions().
- (get_changelog_versions): new function: extract the source package
- and optionally parse debian/changelog to obtain the version
- history for the BTS.
- (process_it): call check_source() rather than check_diff().
-
-2004-03-08 James Troup <james@nocrew.org>
-
- 	* lisa (edit_index): Fix logic inversion from the 'use "if varfoo in
- listbar" rather than "if listbar.count(varfoo)"' change on
- 2004-02-24.
-
-2004-03-05 James Troup <james@nocrew.org>
-
- * alicia (main): don't warn about not closing bugs - we don't
- manage overrides through the BTS.
-
-2004-02-27 Martin Michlmayr <tbm@cyrius.com>
-
- * docs/README.config: lots of updates and corrections.
- * docs/README.first: likewise.
-
- * docs/README.config: drop unused Dir::Queue::Root.
- * katie.conf-non-US: likewise.
- * katie.conf: likewise.
- * katie.conf-security: likewise.
-
-2004-02-27 James Troup <james@nocrew.org>
-
- * rose (process_tree): use 'if var in [ list ]' rather than long
- 'if var == foo or var == bar or var == baz'. Suggested by Martin
- Michlmayr.
-
- * jennifer (check_files): reduce 'if var != None' to 'if var' as
- suggested by Martin Michlmayr.
- * catherine (poolize): likewise.
- * charisma (main): likewise.
- * halle (check_changes): likewise.
- * heidi (main): likewise.
- (process_file): likewise.
- * kelly (install): likewise.
- (stable_install): likewise.
- * utils.py (fix_maintainer): likewise.
-
- * apt.conf: add support for debian-installer in testing-proposed-updates.
- * katie.conf (Suite::Testing-Proposed-Updates::UdebComponents):
- add - set to main.
-
- * mkmaintainers: add "-T15" option to wget of non-US packages file
- so that we don't hang cron.daily if non-US is down.
-
- * templates/lisa.prod (Subject): Prefix with "Comments regarding".
-
- * templates/jennifer.bug-close: add Source and Source-Version
- pseudo-headers that may be used for BTS Version Tracking someday
- [ajt@].
-
- * rene (do_nbs): special case linux-wlan-ng like we do for pcmcia.
- (main): likewise.
-
- * cron.unchecked: it's /org/ftp.debian.org not ftp-master.
-
-2004-02-25 James Troup <james@nocrew.org>
-
- * katie.conf (SuiteMappings): don't map testing-security to
- proposed-updates.
-
-2004-02-24 James Troup <james@nocrew.org>
-
- * katie.py (Katie.__init__): remove unused 'values' field.
-
- * utils.py (extract_component_from_section): use 's.find(c) != -1'
- rather than 's.count(c) > 0'.
-
- * katie.py (Katie.source_exists): use "if varfoo in listbar"
- rather than "if listbar.count(varfoo)".
- * halle (check_joey): likewise.
- * jeri (check_joey): likewise.
- * lisa (edit_index): likewise.
- * jenna (stable_dislocation_p): likewise.
-
- * jennifer (main): remove unused global 'nmu'.
-
-2004-02-03 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * pseudo-packages.maintainers (ftp.debian.org): Changed the maintainer
- to be ftpmaster@ftp-master.debian.org to bring it into line with how
- the dak tools close bugs.
-
-2004-02-02 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * katie.conf (Alicia): Added an Alicia section with email address
- * templates/alicia.bug-close: Added
- * docs/alicia.1.sgml: Added the docs for the -d/--done argument
- * alicia (main): Added a -d/--done argument
-
-2004-02-02 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * templates/lisa.prod: Oops, missed a BITCH->PROD conversion
-
-2004-01-29 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * lisa (prod_maintainer): Added function to prod the maintainer without
- accepting or rejecting the package
- * templates/lisa.prod: Added this template for the prodding mail
-
- * .cvsignore: Added neve-files which turns up in new installations
-
-2004-01-30 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * alicia (usage): Fixed usage message to offer section and priority
- 	as separately optional arguments.
- * alicia (main): Added a % (arg) interpolation needed when only
- one of section or priority is provided and it cannot be found.
-
-2004-01-29 Daniel Silverstone <dsilvers@digital-scurf.org>
-
- * alicia: Added
- * docs/alicia.1.sgml: Added
- * docs/Makefile: Added alicia to the list of manpages to build
- * docs/README.names: Noted what alicia does
- * docs/README.first: Noted where alicia is useful
-
-2004-01-21 James Troup <james@nocrew.org>
-
- * madison (main): add -b/--binary-type.
- (usage): likewise.
-
- * denise (main): generate debian-installer overrides for testing
- too.
- * apt.conf: add support for debian-installer in testing.
- * katie.conf (Suite::Testing::UdebComponents): set to main.
-
- * katie.conf (Dinstall::SigningKeyIds): 2004 key.
- * katie.conf-non-US (Dinstall::SigningKeyIds): likewise.
- * katie.conf-security (Dinstall::SigningKeyIds): likewise.
-
- * utils.py (parse_changes): don't process data not inside the
- signed data. Thanks to Andrew Suffield <asuffield@debian.org> for
- pointing this out.
- * test/005/test.py (main): new test to test for above.
-
-2004-01-04 James Troup <james@nocrew.org>
-
- * jenna (write_filelists): correct typo, s/Components/Component/
- for Options.
-
-2004-01-04 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd: move update of overrides and Packages file...
- * cron.unchecked: to here.
- 	* katie.conf-non-US (Dinstall::SigningKeyIds): update for 2003v2 key
- * katie.conf-security: likewise
-
-2003-11-20 James Troup <james@nocrew.org>
-
- * jenna (main): don't use utils.try_with_debug(), it produces way
- too much output.
-
- * halle (check_changes): don't error out if a .changes refers to a
- non-existent package, just warn and skip the file.
-
- * docs/README.stable-point-release: mention halle and .changes
- obsoleted by removal through melanie. Update for 3.0r2.
-
- * katie.conf (Suite::Stable::Version): bump to 3.0r2.
- (Suite::Stable::Description): update for 3.0r2.
- (Lauren::MoreInfoURL): likewise.
- * katie.conf-non-US (Suite::Stable::Version): likewise.
- (Suite::Stable::Description): likewise.
- (Lauren::MoreInfoURL): likewise.
-
- * apt.conf.stable (Default): don't define MaxContentsChange.
- * apt.conf.stable-non-US (Default): likewise.
-
- * lauren (reject): hack to work around partial replacement of an
- upload, i.e. one or more binaries superseded by another source
- package.
-
-2003-11-17 James Troup <james@nocrew.org>
-
- * pseudo-packages.maintainers: point installation-reports at
- debian-boot@l.d.o rather than debian-testing@l.d.o at jello@d.o's
- request.
-
- * utils.py (parse_changes): calculate the number of lines once
- with len() rather than max().
-
- * jennifer (check_dsc): handle the .orig.tar.gz disappearing from
- files, since check_dsc_against_db() deletes the .orig.tar.gz
- entry.
-
-2003-11-13 Ryan Murray <rmurray@debian.org>
-
- * apt.conf: specify a src override file for debian-installer
-
-2003-11-10 James Troup <james@nocrew.org>
-
- * fernanda.py (strip_pgp_signature): new function - strips PGP
- signature from a file and returns the modified contents of the
- file in a string.
- (display_changes): use it.
- (read_dsc): likewise.
-
-2003-11-09 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd: export accepted_autobuild table for unstable, and use
- it to generate the incoming Packages/Sources rather than having apt
- walk the directory.
- * apt.conf.buildd: use exported table from cron.buildd to generate
- Packages/Sources
-
-2003-11-07 James Troup <james@nocrew.org>
-
- * kelly: import errno.
-
- * katie.py (Katie.build_summaries): sort override disparities.
-
- * kelly (install): set dsc_component based on the .dsc's component
- 	not that of a random binary.
-
-2003-10-29 James Troup <james@nocrew.org>
-
- * katie.py (Katie.build_summaries): don't assume changes["source"]
- exists since it might not.
-
-2003-10-20 James Troup <james@nocrew.org>
-
- * pseudo-packages.maintainers: update security.d.o to use
- team@s.d.o at joy@'s request.
-
-2003-10-17 James Troup <james@nocrew.org>
-
- * jennifer (check_dsc): use .startswith rather than .find() == 0.
-
-2003-10-17 Martin Michlmayr <tbm@cyrius.com>
-
- * tea (chk_bd_process_dir): use .endswith rather than slice.
-
-2003-10-14 James Troup <james@nocrew.org>
-
- * tea (check_build_depends): new function.
- (chk_bd_process_dir): likewise. Validates build-depends in .dsc's
- in the archive.
- (main): update for new function.
- (usage): likewise.
-
- * katie.py (Katie.do_reject): sanitize variable names,
- s/reject_filename/reason_filename/, s/fd/reason_fd/. Move shared
- 	os.close() outside the if clause.
-
- * jennifer (check_dsc): check build-depends and
- build-depends-indep by running them past apt_pkg.ParseSrcDepends.
- 	Fold the ARRAY check into the same code block and tidy up its
- rejection message.
- (check_changes): ensure that the Files field is non-empty.
- Suggested by Santiago Vila <sanvila@unex.es>
- (check_changes): normalize reject messages.
- (check_dsc): instead of doing most of the checks inside a for loop
- and an if, find the dsc_filename in a short loop over files first
- and then do all the checks. Add check for more than one .dsc in a
- .changes which we can't handle. Normalize reject messages.
-
-2003-10-13 James Troup <james@nocrew.org>
-
- * katie.conf (Dinstall::Reject::NoSourceOnly): set to true.
- * katie.conf-non-US (Dinstall::Reject::NoSourceOnly): likewise.
-
- * jennifer (check_files): Set 'has_binaries' and 'has_source'
- variables while iterating over 'files'. Don't regenerate it when
- checking for source if source is mentioned.
-
- Reject source only uploads if the config variable
- Dinstall::Reject::NoSourceOnly is set.
-
-2003-10-03 James Troup <james@nocrew.org>
-
- * rene (main): add nasty hardcoded reference to debian-installer
- so we detect NBS .udebs.
-
-2003-09-29 James Troup <james@nocrew.org>
-
- * apt.conf (old-proposed-updates): remove.
- * apt.conf-non-US (old-proposed-updates): likewise.
-
-2003-09-24 James Troup <james@nocrew.org>
-
- * tea (check_files_not_symlinks): new function, ensure files
- mentioned in the database aren't symlinks. Includes code to
- update any files that are like this to their real filenames +
- 	location; commented out for now, though.
- (usage): update for new function.
- (main): likewise.
-
-2003-09-24 Anthony Towns <ajt@debian.org>
-
- * vars: external-overrides variable added
- * cron.daily: Update testing/unstable Task: overrides from joeyh
- managed external source.
-
-2003-09-22 James Troup <james@nocrew.org>
-
- * kelly (install): if we can't move the .changes into queue/done,
- fail don't warn and carry on. The old behaviour pre-dates NI and
- doesn't make much sense now since jennifer checks both
- queue/accepted and queue/done for any .changes files she's
- processing.
-
- * utils.py (move): don't throw exceptions on existing files or
- can't overwrite, instead just fubar out.
-
- * jennifer (check_dsc): also check Build-Depends-Indep for
- ARRAY-lossage. Noticed by Matt Zimmerman <mdz@debian.org>.
-
-2003-09-18 James Troup <james@nocrew.org>
-
- * katie.py (Katie.close_bugs): only log the bugs we've closed
- once.
-
- * kelly (main): log as 'kelly', not 'katie'.
-
-2003-09-16 James Troup <james@nocrew.org>
-
- * katie.py (Katie.check_binary_against_db): likewise normalize.
-
- * jennifer (check_changes): normalize reject message for "changes
- file already exists" to be %s: <foo>.
- (check_dsc): add a check for 'Build-Depends: ARRAY(<hex>)'
- produced by broken dpkg-source in 1.10.11. Tone down and
- normalize rejection message for incompatible 'Format' version
- numbers.
- (check_diff): likewise tone down and normalize.
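The 'Build-Depends: ARRAY(&lt;hex&gt;)' check above guards against broken dpkg-source 1.10.11 emitting a stringified Perl array reference instead of the field's real value. A minimal sketch of such a check (the helper name is hypothetical, not the actual jennifer code):

```python
import re

# Broken dpkg-source 1.10.11 could leak a stringified Perl array
# reference like "ARRAY(0x8134712)" into Build-Depends[-Indep].
RE_PERL_ARRAY = re.compile(r"^ARRAY\(0x[0-9a-f]+\)$", re.IGNORECASE)

def is_perl_array_lossage(field_value):
    """Return True if a field value looks like Perl ARRAY lossage."""
    return bool(RE_PERL_ARRAY.match(field_value.strip()))
```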
-
-2003-09-07 James Troup <james@nocrew.org>
-
- * utils.py (parse_changes): if dsc_whitespace_rules is false,
- don't bomb out on bogus empty lines.
- (build_file_list): check for changes["files"] earlier. use Dict
- to create files[name] dictionary.
- (send_mail): don't bother validating arguments.
- (check_signature): minor improvements to some of the rejection
- messages including listing the key id of the key that wasn't found
- in the keyring.
- (wrap): new function.
-
- * tea: add new check 'validate-indices' that ensures all files
- mentioned in indices (Packages, Sources) files do in fact exist.
-
- * catherine (poolize): use a local re_isadeb which handles legacy
- (i.e. no architecture) style .deb filenames.
-
- * rosamund: new script.
-
- * rhona (check_binaries): when checking for binary packages not in
- a suite, don't bother selecting files that already have a
- last_used date.
- (check_sources): likewise.
-
- * rhona: change all SQL EXISTS sub-query clauses to use the
- postgres suggested convention of "SELECT 1 FROM".
- * andrea (main): likewise.
- * tea (check_override): likewise.
- * catherine (main): likewise.
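The "SELECT 1 FROM" convention mentioned above makes explicit that EXISTS only tests for row presence; the sub-query's column list is never used. An illustrative sketch with sqlite3 and a made-up two-table schema (not the real projectb schema):

```python
import sqlite3

# Files not referenced by any binary are cleanup candidates; the
# EXISTS sub-select returns a constant 1 since only row existence
# matters, per the postgres-suggested convention.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE binaries (id INTEGER, file INTEGER);
    CREATE TABLE files (id INTEGER, last_used TEXT);
    INSERT INTO files VALUES (1, NULL), (2, NULL);
    INSERT INTO binaries VALUES (10, 1);
""")

orphans = [row[0] for row in conn.execute(
    "SELECT id FROM files f "
    "WHERE NOT EXISTS (SELECT 1 FROM binaries b WHERE b.file = f.id)")]
```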
-
- * katie.conf (Suite): remove OldStable and Old-Proposed-Updates
- entries and in other suites MustBeNewerThan's.
- (SuiteMappings): likewise
- * katie.conf-non-US: likewise.
- * katie.conf-security: likewise.
-
- * apt.conf-security: remove oldstable.
- * apt.conf.stable: likewise.
- * apt.conf.stable-non-US: likewise.
- * cron.buildd-security: likewise.
- * cron.daily-security: likewise.
- * vars-security (suites): likewise.
- * wanna-build/trigger.daily: likewise.
-
- * claire.py (clean_symlink): move...
- * utils.py (clean_symlink): here.
-
- * claire.py (find_dislocated_stable): update accordingly.
-
-2003-08-16 Anthony Towns <ajt@debian.org>
-
- * katie.py (source_exists): expand the list of distributions
- the source may exist in to include any suite that's mapped to
- the destination suite (even transitively (a->b->c)). This should
- unbreak binary uploads to *-proposed-updates.
-
-2003-08-09 Randall Donald <rdonald@debian.org>
-
- * lisa (recheck): change changes["distribution"].keys() to
- Katie.pkg.changes...
-
-2003-08-08 Randall Donald <rdonald@debian.org>
-
- * katie.py: only tag bugs as fixed-in-experimental for
- experimental uploads
-
-2003-07-26 Anthony Towns <ajt@debian.org>
-
- * katie.py (source_exists): add an extra parameter to limit the
- distribution(s) the source must exist in.
- * kelly, lisa, jennifer: update to use the new source_exists
-
-2003-07-15 Anthony Towns <ajt@debian.org>
-
- * ziyi: quick hack to support a FakeDI line in apt.conf to generate
- checksums for debian-installer stuff even when it's just a symlink to
- another suite
-
- * apt.conf: add the FakeDI line
-
-2003-06-09 James Troup <james@nocrew.org>
-
- * kelly (check): make sure the 'file' we're looking for in 'files'
- hasn't been deleted by katie.check_dsc_against_db().
-
-2003-05-07 James Troup <james@nocrew.org>
-
- * helena (time_pp): fix s/years/year/ typo.
-
-2003-04-29 James Troup <james@nocrew.org>
-
- * madison (usage): document -c/--component.
-
- * madison (usage): Fix s/seperated/separated/.
- * melanie (usage): likewise.
- * jenna (usage): likewise.
-
-2003-04-24 James Troup <james@nocrew.org>
-
- * cron.daily-non-US: if there's nothing for kelly to install, say
- so.
-
- * jennifer (check_timestamps): print sys.exc_value as well as
- sys.exc_type when capturing exceptions. Prefix 'timestamp check
- failed' with 'deb contents' to make it clearer what timestamp(s)
- are being checked.
-
-2003-04-15 James Troup <james@nocrew.org>
-
- * cron.daily-non-US: only run kelly if there are some .changes
- files in accepted.
-
- * rene: add -m/--mode argument which can be either daily (default)
- or full. In daily mode only 'nviu' and 'nbs' checks are run.
- Various changes to make this possible including a poor attempt at
- splitting main() up a little. De-hardcode suite numbers from SQL
- queries and return quietly from do_nviu() if experimental doesn't
- exist (i.e. non-US). Hardcode pcmcia-cs as dubious NBS since it
- is.
-
- * debian/control (Depends): remove python-zlib as it's obsolete.
-
- * charisma (main): don't slice the \n off strings when we're
- strip()-ing them anyway.
- * heidi (set_suite): likewise.
- (process_file): likewise.
- * natalie (process_file): likewise.
-
-2003-04-08 James Troup <james@nocrew.org>
-
- * katie.py (Katie.check_dsc_against_db): improve the speed of two
- slow queries by using one LIKE '%foo%' and then matching against
- '%s' or '/%s$' in python. Also only join location when we need it
- (i.e. the .orig.tar.gz query). On auric, this knocks ~3s off each
- query, so 6s for each sourceful package.
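The speed-up described above runs one broad LIKE '%foo%' query and then does the exact matching in Python. A sketch of the Python-side narrowing (function name and rows are illustrative, not the actual katie.py code):

```python
import re

def narrow_matches(candidates, filename):
    """Given rows from a single broad LIKE '%<filename>%' query,
    keep only exact hits: the bare filename, or any path that
    ends in /<filename> (the '%s' and '/%s$' cases)."""
    exact = re.compile(r"(^|/)" + re.escape(filename) + r"$")
    return [c for c in candidates if exact.search(c)]

rows = ["pool/main/f/foo/foo_1.0.orig.tar.gz",
        "foo_1.0.orig.tar.gz",
        "pool/main/f/foo/notfoo_1.0.orig.tar.gz"]
hits = narrow_matches(rows, "foo_1.0.orig.tar.gz")
```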
-
- * cron.daily: invoke rene and send the report to ftpmaster.
- * cron.daily-non-US: likewise.
-
-2003-03-14 James Troup <james@nocrew.org>
-
- * utils.py (send_mail): default filename to blank.
- * amber (make_advisory): adapt.
- * jennifer (acknowledge_new): likewise.
- * katie.py (Katie.close_bugs): likewise.
- (Katie.announce): likewise.
- (Katie.accept): likewise.
- (Katie.check_override): likewise.
- (Katie.do_reject): likewise.
- * kelly (do_reject): likewise.
- (stable_install): likewise.
- * lisa (do_bxa_notification): likewise.
- * lauren (reject): likewise.
- * melanie (main): likewise.
-
- * rene (add_nbs): check any NBS packages against unstable to see
- if they haven't been removed already.
-
- * templates/katie.rejected: remove paragraph about rejected files
- since they're o-rwx due to c-i-m and the uploader can't do
- anything about them and shania will take care of them anyway.
-
- * madison (usage): update usage example to use comma separation.
- * melanie (usage): likewise.
-
- * utils.py (split_args): new function; splits command line
- arguments either by space or comma (whichever is used). Also has
- optional-but-default DWIM spurious space detection to avoid
- 'command -a i386, m68k' problems.
- (parse_args): use it.
- * melanie (main): likewise.
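The split_args() behaviour described above can be sketched as follows; this is a reconstruction of the idea (commas win over spaces, with spurious-space DWIM), not the actual utils.py implementation:

```python
def split_args(s):
    """Split command-line arguments on comma if one is present,
    else on whitespace.  Stripping each comma-separated part is
    the DWIM that keeps 'melanie -a i386, m68k' from producing a
    bogus ' m68k' element."""
    if "," in s:
        return [part.strip() for part in s.split(",") if part.strip()]
    return s.split()
```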
-
- * melanie (main): force the admin to tell someone if we're not
- doing a rene-led removal (or closing a bug, which counts as
- telling someone).
-
-2003-03-05 James Troup <james@nocrew.org>
-
- * katie.conf (Section): add embedded, gnome, kde, libdevel, perl
- and python sections.
- * katie.conf-security (Section): likewise.
-
- * add_constraints.sql: add uid and uid_id_seq to grants.
-
- * lisa (determine_new): also warn about adding overrides to
- oldstable.
-
- * madison (main): make the -S/--source-and-binary query obey
- -s/--suite restrictions.
-
-2003-03-03 James Troup <james@nocrew.org>
-
- * madison (main): if the Archive_Maintenance_In_Progress lockfile
- exists, warn the user that our output might seem strange. (People
- get confused by multiple versions in a suite which happens
- post-kelly but pre-jenna.)
-
-2003-02-21 James Troup <james@nocrew.org>
-
- * kelly (main): we don't need to worry about StableRejector.
-
- * melanie (main): sort versions with apt_pkg.VersionCompare()
- prior to output.
-
- * lauren: new script to manually reject packages from
- proposed-updates. Updated code from pre-NI kelly (nee katie).
-
-2003-02-20 James Troup <james@nocrew.org>
-
- * kelly (init): remove unused -m/--manual-reject argument.
-
- * katie.py (Katie.force_reject): renamed from force_move to make
- it more explicit what this function does.
- (Katie.do_reject): update to match.
-
- * utils.py (prefix_multi_line_string): add an optional argument
- include_blank_lines which defaults to 0. If non-zero, blank lines
- will be included in the output.
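A minimal sketch of the prefix_multi_line_string() behaviour described above (a reconstruction, not the historical utils.py code; 0/1 stands in for the booleans of that Python era):

```python
def prefix_multi_line_string(text, prefix, include_blank_lines=0):
    """Prefix every line of 'text'; blank lines are dropped
    unless include_blank_lines is non-zero."""
    out = []
    for line in text.split("\n"):
        line = line.strip()
        if line or include_blank_lines:
            out.append(prefix + line)
    return "\n".join(out)
```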
-
- * katie.py (Katie.do_reject): don't add leading space to each line
- of the reject message. Include blank lines when showing the
- message to the user.
-
-2003-02-19 Martin Michlmayr <tbm@cyrius.com>
-
- * utils.py (fix_maintainer): replace pointless re.sub() with
- simple string format.
-
-2003-02-11 James Troup <james@nocrew.org>
-
- * lisa (edit_overrides): only strip-to-one-char and upper-case
- non-numeric answers. Fixes editing of items with indices >= 10;
- noticed by Randall.
- (edit_overrides): correct order of arguments to "not a valid
- index" error message.
-
- * jenna (cleanup): call resolve_arch_all_vs_any() rather than
- remove_duplicate_versions(); thanks to aj for the initial
- diagnosis.
- (remove_duplicate_versions): correct how we return
- dominant_versions.
- (resolve_arch_all_vs_any): arch_all_versions needs to be a list of
- a tuple rather than just a tuple.
-
-2003-02-10 James Troup <james@nocrew.org>
-
- * emilie: new script - sync fingerprint and uid tables with a
- debian.org LDAP DB.
-
- * init_pool.sql: new table 'uid'; contains user ids. Reference it
- in 'fingerprint'.
-
- * db_access.py (get_or_set_uid_id): new function.
-
- * jennifer (main): update locking to a) not use FCNTL (deprecated
- in python >= 2.2) and b) acknowledge upstream's broken
- implementation of lockf (see Debian bug #74777), c) try to acquire
- the lock non-blocking.
- * kelly (main): likewise.
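The locking scheme described above (fcntl.lockf instead of the deprecated FCNTL module, acquired non-blocking) might look like this sketch on Unix; the function name and lock path handling are assumptions, not the real jennifer/kelly code:

```python
import fcntl
import os

def acquire_lock(path):
    """Take an exclusive, non-blocking lock on 'path'.  Returns
    the open fd on success (keep it open for the process
    lifetime), or None if another instance holds the lock."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        os.close(fd)
        return None  # somebody else is already running
    return fd
```

Note that POSIX record locks do not conflict within a single process, so a second acquire from the same process would still succeed; the guard is against concurrent processes.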
-
- * contrib/python_1.5.2-fcntl_lockf.diff: obsolete, removed.
-
- * madison (main): only append the package to new_packages if it's
- not already in there; fixes -S/--source-and-binary for cases where
- the source builds a binary package of the same name.
-
-2003-02-10 Anthony Towns <ajt@debian.org>
-
- * madison (main): use explicit JOIN syntax for
- -S/--source-and-binary queries to reduce the query's runtime from
- >10 seconds to negligible.
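An explicit JOIN as in the entry above returns the same rows as a comma-separated FROM list, but in PostgreSQL of that era it also constrained the planner's join order, which is where the speed-up came from. An illustrative sketch with sqlite3 and a made-up schema (not the real projectb tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source (id INTEGER, name TEXT);
    CREATE TABLE binaries (id INTEGER, source INTEGER, package TEXT);
    INSERT INTO source VALUES (1, 'hello');
    INSERT INTO binaries VALUES (1, 1, 'hello'), (2, 1, 'hello-dbg');
""")

# Explicit JOIN ... ON syntax instead of "FROM source s, binaries b
# WHERE b.source = s.id".
rows = conn.execute(
    "SELECT b.package FROM source s "
    "JOIN binaries b ON b.source = s.id "
    "WHERE s.name = 'hello' ORDER BY b.package").fetchall()
```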
-
-2003-02-08 James Troup <james@nocrew.org>
-
- * rene (main): in the NVIU output, append items to lists, not
- extend them; fixes amusing suggestion that "g n u m e r i c" (sic)
- should be removed.
-
-2003-02-07 James Troup <james@nocrew.org>
-
- * apt.conf (tree "dists/unstable"): Add bzip2-ed Packages and
- Sources [aj].
-
- * pseudo-packages.maintainers (bugs.debian.org): s/Darren
- O. Benham/Adam Heath/.
-
- * katie.conf (Suite::Stable::Version): bump to 3.0r1a.
- (Suite::Stable::Description): update for 3.0r1a.
- (Dinstall::SigningKeyIds): update for 2003 key [aj].
-
- * utils.py (gpgv_get_status_output): rename from
- get_status_output().
-
- * neve (check_signature): use gpgv_get_status_output and Dict from
- utils(). Add missing newline to error message about duplicate tokens.
-
- * saffron (per_arch_space_use): also print space used by source.
- (output_format): correct string.join() invocation.
-
- * jennifer (check_signature): ignore duplicate EXPIRED tokens.
-
-2003-02-04 James Troup <james@nocrew.org>
-
- * cron.buildd: correct generation of Packages/Sources and grep out
- non-US/non-free as well as non-free.
-
-2003-02-03 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd: generate quinn-diff output with full Packages/Sources
- files to get out-of-date vs. uncompiled right.
- * apt.conf.buildd: no longer generate uncompressed files, as they
- are generated in cron.buildd instead
- * cron.buildd: add -i option to quinn-diff to ignore binary-all packages
- * apt.conf.buildd: remove and readd udeb to extensions. If the udebs
- aren't in the packages file, the arch that uploaded them will build
- them anyways...
-
-2003-01-30 James Troup <james@nocrew.org>
-
- * rene (main): only print suggested melanie command when there's
- some NBS to remove.
-
-2003-01-30 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd: fix incorrectly inverted lockfile check
-
-2003-01-29 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd: generate override.sid.all3.src
- * apt.conf.buildd: use generated override.sid.all3.src
-
-2003-01-27 Martin Michlmayr <tbm@cyrius.com>
-
- * utils.py (get_status_output): moved from jennifer.
- (Dict): likewise.
- (check_signature): likewise.
-
- * jennifer (get_status_output): moved to utils.py.
- (Dict): likewise.
- (check_signature): likewise.
-
- * utils.py (check_signature): add an argument to specify which
- function to call when an error was found.
- (check_signature): document this function better.
-
- * jennifer (check_files): pass the reject function as an argument
- to utils.check_signature.
- (process_it): likewise.
-
-2003-01-20 James Troup <james@nocrew.org>
-
- * rene (main): lots of changes to improve the output and make it
- more useful.
-
- * katie.py (Katie.check_override): make the override messages
- clearer (hopefully).
-
-2002-12-26 James Troup <james@nocrew.org>
-
- * ziyi (usage): document the ability to pass suite(s) as
- argument(s).
- (main): read apt.conf after checking for -h/--help.
-
- * tea (main): take the check to run as an argument.
-
- * saffron.R: R script to graph daily install runs.
-
- * saffron: new script; various stats functions.
-
- * rhona (main): connect to the database after checking for -h/--help.
-
- * neve (do_da_do_da): if no -a/--action option is given, bail out.
-
- * melanie (main): sort versions with utils.arch_compare_sw().
-
- * madison (usage): alphabetize order of options.
- * melanie (usage): likewise.
-
- * kelly (usage): fix usage short description (we aren't dinstall).
-
- * julia (usage): fix usage description and alphabetize order of
- options.
-
- * jeri (usage): fix usage short description.
-
- * jennifer (main): move --help and --version checks from here...
- (init): to here so that they work with an empty katie.conf.
- * kelly: likewise.
-
- * alyson (usage): new function.
- (main): use it.
- * andrea: likewise.
- * ashley: likewise.
- * cindy: likewise.
- * denise: likewise.
- * helena: likewise.
- * neve: likewise.
- * rene: likewise.
- * rose: likewise.
- * tea: likewise.
-
- * apt.conf.stable (tree "dists/stable"): add missing ExtraOverride
- entry that caused tasks to be omitted from 3.0r1.
-
-2002-12-10 James Troup <james@nocrew.org>
-
- * jennifer (check_files): sanity check the Depends field to ensure
- it's non-empty if present since apt chokes on an empty one.
- Thanks to Ryan Murray for the idea.
-
-2002-12-08 James Troup <james@nocrew.org>
-
- * katie.conf-security (Helena::Directories): new; include accepted
- in addition to byhand and new.
-
- * helena (process_changes_files): use utils.arch_compare_sw().
- Justify things based on the longest [package, version,
- architecture]. Reduce '[note]' to '[N]' to save space, and remove
- the commas in architecture and version lists for the same reason.
- (main): make directories we process configurable through
- Helena::Directories in the config file; if that doesn't exist
- default to the old hardcoded values (byhand & new).
-
- * utils.py (arch_compare_sw): moved here from madison.
- * madison (main): adjust to compensate.
-
-2002-12-06 James Troup <james@nocrew.org>
-
- * ziyi (main): fix "suite foo not in apt.conf" msg to use the
- right filename.
-
-2002-12-05 James Troup <james@nocrew.org>
-
- * katie.conf-non-US (Julia::KnownPostgres): add 'udmsearch'.
-
-2002-11-28 Randall Donald <rdonald@debian.org>
-
- * fernanda.py (read_control): fix typo of 'Architecture'.
-
-2002-11-26 James Troup <james@nocrew.org>
-
- * lisa (check_pkg): call less with '-R' so we see the colour from
- Randall's fernanda changes.
-
- * neve (process_sources): if Directory points to a legacy location
- but the .dsc isn't there; assume it's broken and look in the pool.
- (update_section): new, borrowed from alyson.
- (do_da_do_da): use it.
- (process_packages): add suite_id to the cache key used for
- arch_all_cache since otherwise we only add a package to the first
- suite it's in and ignore any subsequent ones.
-
- * katie.conf-non-US (Location): fixed to reflect reality (all
- suites, except old-proposed-updates (which is legacy-mixed)) are
- pool.
-
- * utils.py (try_with_debug): wrapper for print_exc().
- * jenna (main): use it.
- * neve (main): likewise.
-
-2002-11-25 Randall Donald <rdonald@debian.org>
-
- * fernanda.py (main): added -R to less command line for raw control
- character support to print colours
- (check_deb): Instead of running dpkg -I on deb file, call
- output_deb_info, the new colourized control reporter.
- (check_dsc): add call to colourized dsc info reader, read_dsc, instead
- of printing out each .dsc line.
- (output_deb_info): new function. Outputs each key/value pair from
- read_control except in special cases where we highlight section,
- maintainer, architecture, depends and recommends.
- (create_depends_string): new function. Takes Depends tree and looks
- up its component via the projectb db, colourizes and constructs a
- depends string in original order.
- (read_dsc): new function. reads and parses .dsc info via
- utils.parse_changes. Build-Depends and Build-Depends-Indep are
- colourized.
- (read_control): new function. reads and parses control info via
- apt_pkg. Depends and Recommends are split in to list structures,
- Section and Architecture are colourized. Maintainer is colourized
- if it has a localhost.localdomain address.
- (split_depends): new function. Creates a list of lists of
- dictionaries of depends (package,version relation). Top list is
- collected from comma delimited items. Sub lists are | delimited.
- (get_comma_list): new function. splits string input among commas
- (get_or_list): new function. splits string input among | delimiters
- (get_depends_parts): new function. Creates dictionary of package name
- and version relation from dependency string.
- Colours for section and depends are per component. Unfound depends
- are in bold. Lookups using version info are not supported yet.
-
-2002-11-22 James Troup <james@nocrew.org>
-
- * katie.conf-security (Julia::KnownPostgres): add 'www-data' and
- 'udmsearch'.
-
- * amber (make_advisory): string.atol() is deprecated and hasn't
- been ported to string methods. Use long() instead.
-
- * init_pool.sql: explicitly specify the encoding (SQL_ASCII) when
- creating the database since we'll fail badly if it's created with
- e.g. UNICODE encoding.
-
- * rose (main): AptCnf is a global.
-
- * neve (get_location_path): new function determines the location
- from the first (left-most) directory of a Filename/Directory.
- (process_sources): don't need 'location' anymore. Use
- utils.warn(). Use the Directory: field for each package to find
- the .dsc. Use get_location_path() to determine the location for
- each .dsc.
- (process_packages): don't need 'location' anymore. Use
- utils.warn(). Use get_location_path().
- (do_sources): don't need 'location', drop 'prefix' in favour of
- being told the full path to the Sources file, like
- process_packages().
- (do_da_do_da): main() renamed, so that main can call us in a
- try/except. Adapt for the changes in do_sources() and
- process_packages() above. Assume Sources and Packages file are in
- <root>/dists/<etc.>. Treat pool locations like we do legacy ones.
-
- * katie.conf-security (Location): fixed to reflect reality (all
- suites are pool, not legacy).
-
- * utils.py (print_exc): more useful (i.e. much more verbose)
- traceback; a recipe from the Python cookbook.
- * jenna (main): use it.
- * neve (main): likewise.
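The "much more verbose" print_exc() above refers to the Python Cookbook recipe that dumps each frame's locals alongside the normal traceback. A sketch of that idea (a reconstruction returning a string, not the historical utils.py code):

```python
import sys
import traceback

def print_exc():
    """Verbose traceback: the standard format_exc() output plus
    the local variable names of each frame, after the Python
    Cookbook recipe.  Call from inside an 'except' block."""
    tb = sys.exc_info()[2]
    out = [traceback.format_exc()]
    while tb is not None:
        frame = tb.tb_frame
        out.append("frame %s, locals: %r"
                   % (frame.f_code.co_name, sorted(frame.f_locals)))
        tb = tb.tb_next
    return "\n".join(out)
```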
-
-2002-11-19 James Troup <james@nocrew.org>
-
- * kelly (install): fix brain-damaged CopyChanges/CopyKatie
- handling which was FUBAR for multi-suite uploads. Now we just
- make a dictionary of destinations to copy to and iterate over
- those.
-
- * fernanda.py (check_deb): run linda as well as lintian.
-
-2002-10-21 James Troup <james@nocrew.org>
-
- * melanie (main): change X-Melanie to X-Katie and prefix it with
- 'melanie '.
-
- * lisa (main): prefix X-Katie with 'lisa '.
-
- * jennifer (clean_holding): fix typo in string method changes;
- s/file.find(file/file.find(/.
-
- * cron.daily: invoke helena and send the report to ftpmaster.
- * cron.daily-non-US: likewise.
-
-2002-10-16 James Troup <james@nocrew.org>
-
- * kelly (check): call reject() with a blank prefix when parsing
- the return of check_dsc_against_db() since it does its own
- prefix-ing.
-
- * rose: new script; only handles directory creation initially.
-
- * katie.conf (Dinstall::NewAckList): obsolete, removed.
- * katie.conf-non-US (Dinstall::NewAckList): likewise.
-
-2002-10-06 James Troup <james@nocrew.org>
-
- * rene (main): remove bogus argument handling.
-
- * kelly: katie, renamed.
- * cron.daily: adapt for katie being renamed to kelly.
- * cron.daily-non-US: likewise.
- * amber (main): likewise.
-
- * Changes for python 2.1.
-
- * kelly: time.strftime no longer requires a second argument of
- "time.localtime(time.time())".
- * logging.py: likewise.
- * rhona: likewise.
- * shania (init): likewise.
-
- * amber: use augmented assignment.
- * catherine (poolize): likewise.
- * claire.py (fix_component_section): likewise.
- * halle (check_changes): likewise.
- * helena: likewise.
- * jenna: likewise.
- * jennifer: likewise.
- * jeri: likewise.
- * katie.py: likewise.
- * kelly: likewise.
- * lisa: likewise.
- * madison (main): likewise.
- * melanie: likewise.
- * natalie: likewise.
- * neve: likewise.
- * rhona: likewise.
- * tea: likewise.
- * utils.py: likewise.
- * ziyi: likewise.
-
- * amber: use .endswith.
- * fernanda.py: likewise.
- * halle (main): likewise.
- * jennifer: likewise.
- * jeri: likewise.
- * katie.py: likewise.
- * kelly: likewise.
- * lisa: likewise.
- * neve: likewise.
- * shania (main): likewise.
- * utils.py: likewise.
-
- * alyson: use string methods.
- * amber: likewise.
- * andrea: likewise.
- * ashley: likewise.
- * catherine: likewise.
- * charisma: likewise.
- * claire.py: likewise.
- * db_access.py: likewise.
- * denise: likewise.
- * halle: likewise.
- * heidi: likewise.
- * helena: likewise.
- * jenna: likewise.
- * jennifer: likewise.
- * jeri: likewise.
- * julia: likewise.
- * katie.py: likewise.
- * kelly: likewise.
- * lisa: likewise.
- * logging.py: likewise.
- * madison: likewise.
- * melanie: likewise.
- * natalie: likewise.
- * neve: likewise.
- * rene: likewise.
- * tea: likewise.
- * utils.py: likewise.
- * ziyi: likewise.
-
-2002-09-20 Martin Michlmayr <tbm@cyrius.com>
-
- * utils.py (parse_changes): use <string>.startswith() rather than
- string.find().
-
-2002-08-27 Anthony Towns <ajt@debian.org>
-
- * katie.py (in_override_p): when searching for a source override,
- and the dsc query misses, search for both udeb and deb overrides
- as well. Should fix the UNACCEPT issues with udebs.
-
-2002-08-24 James Troup <james@nocrew.org>
-
- * melanie (main): remove gratuitous WHERE EXISTS sub-select from
- source+binary package finding code which was causing severe
- performance degradation with postgres 7.2.
-
-2002-08-14 James Troup <james@nocrew.org>
-
- * julia (main): use the pwd.getpwall() to get system user info
- rather than trying to read a password file. Add a -n/--no-action
- option.
-
- * cron.hourly: julia no longer takes any arguments.
- * cron.hourly-non-US: likewise.
-
-2002-08-07 James Troup <james@nocrew.org>
-
- * katie (install): handle multi-suite uploads when CopyChanges
- and/or CopyKatie are in use, ensuring we only copy stuff once.
-
-2002-08-01 Ryan Murray <rmurray@debian.org>
-
- * wanna-build/trigger.daily: initial commit, with locking
- * cron.buildd: add locking against daily run
-
-2002-07-30 James Troup <james@nocrew.org>
-
- * melanie (main): readd creation of suite_ids_list so melanie is
- remotely useful again.
-
- * katie.conf: adapt for woody release; disable
- StableDislocationSupport, add oldstable, adjust other suites and
- mappings, fix up location.
- * katie.conf-non-US: likewise.
- * katie.conf-security: likewise.
-
- * apt.conf.stable: adapt for woody release; add oldstable, adjust
- stable.
- * apt.conf.stable-non-US: likewise.
-
- * apt.conf-security: adapt for woody release; add oldstable,
- adjust stable and testing.
- * cron.daily-security: likewise.
- * cron.buildd-security: likewise.
-
- * apt.conf: adapt for woody release; rename woody-proposed-updates
- to testing-proposed-updates and proposed-updates to
- old-proposed-updates.
- * apt.conf-non-US: likewise.
-
- * vars-non-US (copyoverrides): add sarge.
- * vars (copyoverrides): likewise.
-
- * vars-security (suites): add oldstable.
-
-2002-07-22 Ryan Murray <rmurray@debian.org>
-
- * apt.conf.security-buildd: use suite codenames instead of
- distnames.
-
-2002-07-16 James Troup <james@nocrew.org>
-
- * denise (main): fix filenames for testing override files.
-
-2002-07-14 James Troup <james@nocrew.org>
-
- * jennifer (process_it): call check_md5sums later so we can check
- files in the .dsc too
- (check_md5sums): check files in the .dsc too. Check both md5sum
- and size.
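Checking both md5sum and size, as in the check_md5sums entry above, can be sketched like this (hypothetical helper; hashlib stands in for the md5 module of the era):

```python
import hashlib

def check_file_entry(data, expected_md5, expected_size):
    """Return a list of reject reasons for one file listed in a
    .changes/.dsc: both the size and the md5sum must match."""
    rejects = []
    if len(data) != expected_size:
        rejects.append("size mismatch")
    if hashlib.md5(data).hexdigest() != expected_md5:
        rejects.append("md5sum mismatch")
    return rejects
```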
-
- * melanie (main): use parse_args() and join_with_commas_and() from
- utils. If there's nothing to do, say so and exit, don't ask for
- confirmation etc.
-
- * amber (join_with_commas_and): moved from here to ...
- * utils.py (join_with_commas_and): here.
-
-2002-07-13 James Troup <james@nocrew.org>
-
- * madison (main): use parse_args() from utils. Support
- -c/--component.
-
- * jenna (parse_args): moved from here to ...
- * utils.py (parse_args): here.
-
- * katie.conf (Architectures): minor corrections to the description
- for arm, mips and mipsel.
- * katie.conf-non-US (Architectures): likewise.
- * katie.conf-security (Architectures): likewise.
-
- * cron.daily-security: use natalie's new -a/--add functionality to
- flesh out the security overrides.
-
-2002-07-12 James Troup <james@nocrew.org>
-
- * cron.buildd (ARCHS): add arm.
-
- * katie.conf: 2.2r7 was released.
- * katie.conf-non-US: likewise.
-
- * utils.py (parse_changes): handle a multi-line field with no
- starting line.
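A "multi-line field with no starting line" is an RFC822-style field whose value sits entirely on indented continuation lines, e.g. `Description:` followed by indented text. A toy parser in the spirit of parse_changes() (a sketch, not the real utils.py code):

```python
def parse_fields(text):
    """Minimal RFC822-ish parser: continuation lines (leading
    whitespace) extend the current field, which works even when
    the 'Field:' line itself carries no value."""
    fields = {}
    current = None
    for line in text.splitlines():
        if line[:1] in (" ", "\t") and current:
            cont = line.strip()
            fields[current] += ("\n" + cont) if fields[current] else cont
        elif ":" in line:
            current, _, value = line.partition(":")
            fields[current.strip()] = value.strip()
    return fields
```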
-
-2002-06-25 James Troup <james@nocrew.org>
-
- * templates/amber.advisory (To): add missing email address since
- __WHOAMI__ is only a name.
-
- * katie.conf-security (Melane::LogFile): correct to go somewhere
- katie has write access to.
- (Location::/org/security.debian.org/ftp/dists/::Suites): add
- Testing.
-
- * natalie: add support for -a/--add which adds packages only
- (ignoring changes and deletions).
-
- * katie.py (Katie.announce): Dinstall::CloseBugs is a boolean so
- use FindB, not get.
-
-2002-06-22 James Troup <james@nocrew.org>
-
- * jennifer (check_files): validate the package name and version
- field. If 'Package', 'Version' or 'Architecture' are missing,
- don't try any further checks.
- (check_dsc): likewise.
-
- * utils.py (re_taint_free): add '~' as a valid character.
-
-2002-06-20 Anthony Towns <ajt@debian.org>
-
- * katie.conf-non-US: add OverrideSuite for w-p-u to allow uploads
-
-2002-06-09 James Troup <james@nocrew.org>
-
- * jennifer (check_files): reduce useless code.
-
- * cron.daily-security: run symlinks -dr on $ftpdir.
-
- * vars-security (ftpdir): add.
-
-2002-06-08 James Troup <james@nocrew.org>
-
- * neve (update_override_type): transaction is handled higher up in
- main().
- (update_priority): likewise.
- (process_sources): remove code that makes testing a duplicate of
- stable.
- (process_packages): likewise.
-
- * templates/amber.advisory: add missing mail headers.
-
- * cron.daily-security: also call apt-ftparchive clean for
- apt.conf.buildd-security.
- * cron.weekly: likewise.
-
- * amber (do_upload): write out a list of source packages (and
- their version) uploaded for testing.
- (make_advisory): add more Subst mappings for the mail headers.
- (spawn): check for suspicious characters in the command and abort
- if they're found.
-
-2002-06-07 James Troup <james@nocrew.org>
-
- * ziyi (main): remove the 'nonus'/'security' hacks and use
- Dinstall::SuiteSuffix (if it exists) instead. Always try to write
- the lower level Release files, even if they don't exist. fubar
- out if we can't open a lower level Release file for writing.
-
- * katie.conf-non-US (Dinstall): add SuiteSuffix, used to simplify
- ziyi.
- * katie.conf-security (Dinstall): likewise.
-
- * amber (do_upload): renamed from get_file_list(). Determine the
- upload host from the original component.
- (init): Add a -n/--no-action option. Fix how we get changes_files
- (i.e. from the return value of apt_pkg.ParseCommandLine(), not
- sys.argv). Add an Options global.
- (make_advisory): support -n/--no-action.
- (spawn): likewise.
- (main): likewise.
- (usage): document -n/--no-action.
-
- * cron.buildd-security: fix path to Packages-arch-specific in
- quinn-diff invocation.
-
- * katie.conf-security (Dinstall::AcceptedAutoBuildSuites): change
- to proper suite names (i.e. stable, testing) rather than codenames
- (potato, woody).
- (Dinstall::DefaultSuite): likewise.
- (Suite): likewise.
- (Location::/org/security.debian.org/ftp/dists/::Suites): likewise.
- * vars-security (suites): likewise.
- * apt.conf-security: likewise.
-
- * katie.conf-security (Component): add "updates/" prefix.
- (Suite::*::Components): likewise.
- (ComponentMappings): new; map all {ftp-master,non-US} components
- -> updates/<foo>.
-
- * katie.conf-security (Natalie): removed; the options have
- defaults and ComponentPosition is used by alyson which doesn't
- work on security.d.o.
- (Amber): replace UploadHost and UploadDir with ComponentMappings
- which is a mapping of components -> URI.
- (Suite::*::CodeName): strip bogus "/updates" suffix hack.
- (SuiteMappings): use "silent-map" in preference to "map".
-
- * cron.unchecked-security: fix call to cron.buildd-security.
-
- * cron.daily-security: map local suite (stable) -> override suite
- (potato) when fixing overrides. Correct component in natalie call
- to take into account "updates/" prefix. Fix cut'n'waste in
- override.$dist.all3 generation, the old files weren't being
- removed, so they were endlessly growing.
-
- * neve (main): don't use .Find for the CodeName since we require
- it. Location::*::Suites is a ValueList.
- (check_signature): ignore duplicate SIGEXPIRED tokens. Don't bomb
- out on expired keys, just warn.
- (update_override_type): new function; lifted from alyson.
- (update_priority): likewise.
- (main): use update_{override_type,priority}().
-
- * jennifer (check_distributions): remove redundant test for
- SuiteMappings; ValueList("does-not-exist") returns [] which is
- fine. Add support for a "silent-map" type which doesn't warn
- about the mapping to the user.
- (check_files): add support for ComponentMappings, similar to
- SuiteMappings, but there's no type, just a source and a
- destination and the original component is stored in "original
- component".
- * katie.py (Katie.dump_vars): add "original component" as an
- optional files[file] dump variable.
-
- * claire.py (find_dislocated_stable): dehardcode 'potato' in SQL
- query. Add support for section-less legacy locations like current
- security.debian.org through YetAnotherConfigBoolean
- 'LegacyStableHasNoSections'.
- * katie.conf-security (Dinstall): LegacyStableHasNoSections is true.
-
- * utils.py (real_arch): moved here from ziyi.
- * ziyi (real_arch): moved to utils.py.
- * ziyi (main): likewise.
-
- * claire.py (find_dislocated_stable): use real_arch() with
- filter() to strip out source and all.
- * neve (main): likewise.
- * rene (main): likewise.
- * jeri (parse_packages): likewise.
-
-2002-06-06 James Troup <james@nocrew.org>
-
- * tea (check_missing_tar_gz_in_dsc): modified patch from Martin
- Michlmayr <tbm@cyrius.com> to be more verbose about what we're
- doing.
-
-2002-05-23 Martin Michlmayr <tbm@cyrius.com>
-
- * jeri (check_joey): check if the line contains two elements
- before accessing the second. Also, strip trailing spaces as well
- as the newline.
- * halle (check_joey): likewise.
-
-2002-06-05 James Troup <james@nocrew.org>
-
- * cron.unchecked-security: new file; like cron.unchecked but if
- there's nothing to do exit so we don't call cron.buildd-security.
-
- * apt.conf.buildd-security: new file.
-
- * vars (archs): alphabetize.
- * vars-non-US (archs): likewise.
-
- * vars-security: add unchecked.
-
- * madison (main): reduce rather bizarrely verbose and purposeless
- code to print arches to a simple string.join.
-
- * katie.conf (Suites::Unstable): add UdebComponents, a new
- valuelist of suites, used by jenna to flesh out the list of
- <suite>_main-debian-installer-binary-<arch>.list files generated.
-
- * katie.conf (Dinstall): add StableDislocationSupport, a new
- boolean used by jenna to enable or disable stable dislocation
- support (i.e. claire), as true.
- * katie.conf-non-US: likewise.
- * katie.conf-security: likewise.
-
- * cron.daily-security: generate .all3 overrides for the buildd
- support. Freshen a local copy of Packages-arch-specific from
- buildd.debian.org.
-
- * claire.py (find_dislocated_stable): disable the support for
- files in legacy-mixed locations since none of the Debian archives
- have any anymore.
-
- * helena: new script; produces a report on NEW and BYHAND
- packages.
-
- * jenna: rewritten from scratch to fix speed problems. Run time
- on auric goes down from 31.5 minutes to 3.5 minutes. Of that 3.5
- minutes, 105 seconds are the monster query and 70 odd seconds is
- claire.
-
- * apt.conf.buildd (Default): remove MaxContentsChange as it's
- irrelevant.
-
-2002-06-05 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd-security: new file.
-
-2002-06-05 Matt Kraai <kraai@alumni.cmu.edu>
-
- * denise (list): take a file argument and use it.
- (main): don't abuse sys.stdout, just write to the file.
-
- * claire.py (usage): Fix misspelling.
- (clean_symlink): Simplify.
- (find_dislocated_stable): Avoid unnecessary work.
-
-2002-05-29 James Troup <james@nocrew.org>
-
- * cameron: removed; apt-ftparchive can simply walk the directory.
-
-2002-05-26 Anthony Towns <ajt@debian.org>
-
- * katie.conf{,-non-US}: Map testing to testing-proposed-updates
- for the autobuilders.
-
-2002-05-24 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd: update override files before running apt-ftparchive
-
-2002-05-23 Martin Michlmayr <tbm@cyrius.com>
-
- * amber (main): remove extra space in prompt.
-
- * utils.py (validate_changes_file_arg): use original filename in
- error messages.
-
- * jeri (check_joey): close file after use.
- (parse_packages): likewise.
- (main): setup debug option properly.
-
- * melanie (main): remove unused packages variable and simplify the
- code to build up con_packages by using repr().
-
-2002-05-23 James Troup <james@nocrew.org>
-
- * lisa (recheck): when we reject, also return 0 so the package is
- skipped.
- (sg_compare): fix note sorting.
- (recheck): remove the .katie file after rejection.
-
- * katie.py (Katie.accept): accepted auto-build support take 3;
- this time adding support for security. Security needs a) non-pool
- files copied rather than symlinked since accepted is readable only
- by katie/security and www-data needs to be able to see the files,
- b) needs per-suite directories. SpecialAcceptedAutoBuild becomes
- AcceptedAutoBuildSuites and is a ValueList containing the suites.
- SecurityAcceptedAutoBuild is a new boolean which controls whether
- or not normal or security style is used. The unstable_accepted
- table was renamed to accepted_autobuild and a suite column added.
- Also fix a bug noticed by Ryan where an existent orig.tar.gz
- didn't have its last_used/in_accepted flags correctly updated.
- * katie (install): likewise.
- * rhona (clean_accepted_autobuild): likewise.
-
-2002-05-22 James Troup <james@nocrew.org>
-
- * lisa (sort_changes): new function; sorts changes properly.
- Finally.
- (sg_compare): new function; helper for sort_changes(). Sorts by
- have note and time of oldest upload.
- (indiv_sg_compare): new function; helper for sort_changes().
- Sorts by source version, have source and filename.
- (main): use sort_changes().
- (changes_compare): obsoleted; removed.
-
-2002-05-20 James Troup <james@nocrew.org>
-
- * rhona (clean_accepted_autobuild): don't die if a file we're
- trying to remove doesn't exist. Makes rhona more friendly to
- katie/katie.py crashes/bugs without any undue cost.
-
-2002-05-19 James Troup <james@nocrew.org>
-
- * lisa (main): if sorting a large number of changes give some
- feedback.
- (recheck): new function, run the same checks (modulo NEW,
- obviously) as katie does, if they fail do the standard
- reject/skip/quit dance.
- (do_pkg): use it.
-
- * katie (install): don't try to unlink the symlink in the
- AcceptedAutoBuild support if the destination is not a symlink (or
- doesn't exist). Avoids unnecessary bombs on previous partial
- accepts and will still bomb hard if the file exists and isn't a
- symlink.
-
- * utils.py: blah, commands _is_ used when the mail stuff isn't
- commented out like it is in my test environment.
-
- * lisa (changes_compare): "has note" overrides everything else.
- Use .katie files rather than running parse_changes, faster and
- allows "has note" to work. Correct version compare, it was
- reversed. Ctime check should only kick in if the source packages
- are not the same.
- (print_new): print out and return any note. Rename 'ret_code' to
- 'broken'.
- (edit_new): renamed from spawn_editor. Don't leak file
- descriptors. Clean up error message if editor fails.
- (edit_note): new function, allows one to edit notes.
- (do_new): add note support, editing and removing.
- (init): kill -s/--sort; with notes we always want to use our
- sorting method.
- (usage): likewise.
-
- * katie.py (Katie.dump_vars): add "lisa note" as an optional
- changes field.
-
- * utils.py (build_file_list): rename 'dsc' to 'is_a_dsc' and have
- it default to 0. Adapt tests to assume it's boolean.
- * fernanda.py (check_changes): adjust call appropriately.
- * halle (check_changes): likewise.
- * jennifer (check_changes): likewise.
- * jeri (check_changes): likewise.
- * shania (flush_orphans): likewise.
-
- * jennifer (check_dsc): pass is_a_dsc by name when calling
- build_file_list() for clarity.
- * shania (flush_orphans): likewise.
- * tea (check_missing_tar_gz_in_dsc): likewise.
-
- * jennifer (check_dsc): pass dsc_whitespace_rules by name when
- calling parse_changes() for clarity.
- * tea (check_dscs): likewise.
-
- * utils.py (parse_changes): make dsc_whitespace_rules default to
- false.
- * halle (check_changes): adjust call appropriately.
- * jennifer (check_changes): likewise.
- * jeri (check_changes): likewise.
- * lisa (changes_compare): likewise.
- * utils.py (changes_compare): likewise.
- * melanie (main): likewise.
- * shania (flush_orphans): likewise.
- * fernanda.py (check_changes): likewise.
-
-2002-05-18 James Troup <james@nocrew.org>
-
- * katie.py (Katie.dump_vars): make the .katie file unreadable,
- it's not useful and by and large a duplication of information
- available in readable format in other files.
-
-2002-05-16 Ryan Murray <rmurray@debian.org>
-
- * melanie: Dir::TemplatesDir -> Dir::Templates
-
-2002-05-15 Ryan Murray <rmurray@debian.org>
-
- * cameron: correct the use of os.path.join
-
-2002-05-15 Anthony Towns <ajt@debian.org>
-
- * ziyi: Update to match the new format for Architectures/Components
- in katie.conf.
-
-2002-05-14 James Troup <james@nocrew.org>
-
- * amber: new script; 'installer' wrapper script for the security
- team.
-
- * katie.py (Katie.announce): remove unused 'dsc' local
- variable. (pychecker)
-
- * ziyi: pre-define AptCnf and out globals to None. (pychecker)
-
- * neve: don't import sys, we don't use it. (pychecker)
- (check_signature): fix return type mismatch. (pychecker)
-
- * utils.py: don't import commands, we don't use it. (pychecker)
-
- * katie (install): SpecialAcceptedAutoBuild is a boolean.
-
- * katie.py (Katie.dump_vars): don't store "oldfiles", it's
- obsoleted by the change to "othercomponents" handling in jennifer
- detailed below.
- (Katie.cross_suite_version_check): new function; implements
- cross-suite version checking rules specified in the conf file
- while also enforcing the standard "must be newer than target
- suite" rule.
- (Katie.check_binary_against_db): renamed, since it's invoked once
- per-binary, "binaries" was inaccurate. Use
- cross_suite_version_check() and don't bother with the "oldfiles"
- rubbish as jennifer works out "othercomponents" herself now.
- (Katie.check_source_against_db): use cross_suite_version_check().
-
- * katie (check): the version and file overwrite checks
- (check_{binary,source,dsc}_against_db) are not per-suite.
-
- * jennifer (check_files): less duplication of
- 'control.Find("Architecture", "")' by putting it in a local
- variable.
- (check_files): call check_binary_against_db higher up since it's
- not a per-suite check.
- (check_files): get "othercomponents" directly rather than having
- check_binary_against_db do it for us.
-
- * heidi (main): 'if x:', not 'if x != []:'.
- * katie.py (Katie.in_override_p): likewise.
- (Katie.check_dsc_against_db): likewise.
- * natalie (main): likewise.
- * rene (main): likewise.
- * ziyi (real_arch): likewise.
-
- * alyson (main): Suite::%s::Architectures, Suite::%s::Components
- and OverrideType are now value lists, not lists.
- * andrea (main): likewise.
- * cindy (main): likewise.
- * claire.py (find_dislocated_stable): likewise.
- * denise (main): likewise.
- * jenna (main): likewise.
- * jennifer (check_distributions): likewise.
- (check_files): likewise.
- (check_urgency): likewise (Urgency::Valid).
- * jeri (parse_packages): likewise.
- * neve (main): likewise (and Location::%s::Suites).
- * rene (main): likewise.
-
-2002-05-13 James Troup <james@nocrew.org>
-
- * katie.py (Katie.check_source_against_db): correct case of reject
- message to be consistent with binary checks.
-
- * jennifer (get_status_output): don't leak 2 file descriptors per
- invocation.
- (check_signature): add missing '\n' to "duplicate status token"
- error message.
-
-2002-05-09 James Troup <james@nocrew.org>
-
- * utils.py (validate_changes_file_arg): new function; validates an
- argument which should be a .changes file.
- * ashley (main): use it.
- * lisa (main): likewise.
-
- * katie.py (Katie.check_dsc_against_db): since there can be more
- than one .orig.tar.gz make sure we don't assume the .orig.tar.gz
- entry still exists in files.
-
- * jennifer (check_dsc): handle the .orig.tar.gz disappearing from
- files, since check_dsc_against_db() deletes the .orig.tar.gz
- entry.
-
- * cameron: cleanups.
-
- * utils.py (changes_compare): change sort order so that source
- name and source version trump 'have source'; this should fix
- UNACCEPT problems in katie where -1 hppa+source & i386, -2
- i386&source & hppa lead to -1 i386 unaccept. Problem worked out
- by Ryan.
-
- * lisa (main): allow the arguments to be .katie files too.
-
-2002-05-07 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd: add s390 to arch list again
-
-2002-05-05 Ryan Murray <rmurray@debian.org>
-
- * cron.buildd: new script, update w-b database from unstable_accepted
- table
- * cameron: new script, take list in unstable_accepted and write out
- a file list for apt-ftparchive
- * apt.conf.buildd: new apt configuration for Packages/Sources for
- unstable_accepted
- * vars: add s390 to arch list.
-
-2002-05-03 James Troup <james@nocrew.org>
-
- * neve (main): don't hard code the calling user as that doesn't
- work with modern postgres installs. Fix psql invocation for
- init_pool.sql (database name required). Don't hard code the
- database name.
- (process_sources): add support for fingerprint and install_date.
- (process_packages): add support for fingerprint.
- (do_sources): pass in the directory, fingerprint support needs it.
- (get_status_output): borrowed from jennifer.
- (reject): likewise.
- (check_signature): likewise.
-
- * katie (install): only try to log urgencies if Urgency_Logger is
- defined.
- (main): only initialize Urgency_Logger if Dir::UrgencyLog is
- defined; only close Urgency_Logger if it's defined.
-
- * catherine (poolize): adapt for Dir rationalization.
- * claire.py (find_dislocated_stable): likewise.
- * denise (main): likewise.
- * halle (check_joey): likewise.
- * jenna: likewise.
- * jennifer: likewise.
- * jeri: likewise.
- * katie.py: likewise.
- * katie: likewise.
- * lisa (do_bxa_notification): likewise.
- * logging.py (Logger.__init__): likewise
- * rene (main): likewise.
- * rhona (clean): likewise.
- * shania (init): likewise.
- * tea: likewise.
- * ziyi: likewise.
-
- * lisa (add_overrides): Dinstall::BXANotify is a boolean, use
- FindB, not FindI.
-
- * rhona (clean_accepted_autobuild): SpecialAcceptedAutoBuild is a
- boolean, use FindB, not get.
-
- * katie.py (Katie.check_dsc_against_db): ignore duplicate
- .orig.tar.gz's which are an exact (size/md5sum) match.
-
- * ashley (main): really allow *.katie files as arguments too;
- noticed by aj.
-
- * sql-aptvc.cpp: postgres.h moved to a "server" subdirectory.
-
-2002-05-03 Anthony Towns <ajt@debian.org>
-
- * ziyi: support for security.
-
-2002-05-02 James Troup <james@nocrew.org>
-
- * jennifer (accept): call Katie.check_override() unconditional as
- no-mail check moved into that function.
- (do_byhand): likewise.
-
- * katie.py (Katie.check_override): don't do anything if we're a)
- not sending mail or b) the override disparity checks have been
- disabled via Dinstall::OverrideDisparityCheck.
-
- * jennifer (check_files): don't hard code Unstable as the suite
- used to check for architecture validity; use
- Dinstall::DefaultSuite instead, if it exists.
- (accept): conditionalize
-
- * katie.py (Katie.update_subst): support global maintainer
- override with Dinstall::OverrideMaintainer.
-
- * jennifer (check_distributions): new function, Distribution
- validation and mapping. Uses new SuiteMappings variable from
- config file to abstract suite mappings.
- (check_changes): call it.
-
- * natalie: renamed; nothing imports or likely will for some time.
-
- * denise (main): remove unused natalie import and init().
-
- * natalie.py (init): removed.
- (main): initialize here instead and don't hardcode the database
- name.
-
-2002-04-30 James Troup <james@nocrew.org>
-
- * katie.py (Katie.close_bugs): new function, split out from
- announce().
- (Katie.announce): only call close_bugs() if Dinstall::CloseBugs is
- true.
- (Katie.close_bugs): return immediately if there are no bugs to
- close.
-
- * jennifer (acknowledge_new): adapt for new utils.TemplateSubst().
- * katie (do_reject): likewise.
- (stable_install): likewise.
- * katie.py (Katie.announce): likewise.
- (Katie.accept): likewise.
- (Katie.check_override): likewise.
- (Katie.do_reject): likewise.
- * lisa (do_bxa_notification): likewise.
- * melanie (main): likewise.
-
- * utils.py (TemplateSubst): change second argument to be a
- filename rather than a template since every caller opened a file
- on the fly, which was ugly and leaked a file descriptor.
-
-2002-04-29 James Troup <james@nocrew.org>
-
- * katie.py (Katie.announce): (modified) patch from Raphael Hertzog
- <hertzog@debian.org> to send 'accepted' announce mails to the
- PTS. [#128140]
-
-2002-04-24 James Troup <james@nocrew.org>
-
- * init_pool.sql (unstable_accepted): add two new fields to
- unstable_accepted; in_accepted is a boolean indicating whether or
- not the file is in accepted and last_used is a timestamp used by
- rhona to determine when to remove symlinks for installed packages.
-
- * katie.py (Katie.accept): auto-build support take 2. Create
- symlinks for all files into a separate directory. Add files to
- unstable_accepted as paths to the new directory; mark them as
- being in accepted for cameron. Properly conditionalize it on a
- configuration variable.
-
- * katie (install): likewise. Update symlinks to point into the
- pool; mark the files for later deletion by rhona and mark them as
- not being in accepted for cameron.
-
- * rhona (clean_accepted_autobuild): new function.
-
-2002-04-22 James Troup <james@nocrew.org>
-
- * jennifer (check_files): handle db_access.get_location_id()
- returning -1 properly/better.
-
- * rhona (clean_fingerprints): new function.
-
-2002-04-21 James Troup <james@nocrew.org>
-
- * utils.py (touch_file): unused; remove.
- (plural): likewise.
-
- * jennifer (check_files): close file descriptor used to get the
- control file.
- (check_md5sums): likewise.
- (callback): likewise.
-
- * katie.py (Katie.do_reject): handle manual rejects much better;
- call the editor first and get confirmation from the user before
- proceeding.
-
- * jennifer (check_signature): prefix_multi_line_string() moved to
- utils.
-
- * utils.py (prefix_multi_line_string): moved here from jennifer.
-
-2002-04-20 James Troup <james@nocrew.org>
-
- * lisa (main): handle non-existent files.
-
- * ashley (main): allow *.katie files as arguments too.
-
-2002-04-19 James Troup <james@nocrew.org>
-
- * katie.py (Katie.accept): add stuff to help auto-building from
- accepted; if the .orig.tar.gz is not part of the upload (i.e. it's
- in the pool), create a symlink to it in the accepted directory and
- add the .dsc and .{u,}deb(s) to a new 'unstable_accepted' table.
-
- * katie (install): undo the "auto-building from accepted" stuff
- (i.e. remove the .orig.tar.gz symlink and remove the files from
- unstable_accepted table).
-
-2002-04-16 James Troup <james@nocrew.org>
-
- * jennifer (upload_too_new): fix typo which was causing all
- timestamp comparisons to be against the .changes file. Also move
- back to the original directory so we do the comparisons against
- accurate timestamps.
-
- * tea (check_missing_tar_gz_in_dsc): new function.
-
- * jennifer (check_dsc): add a check to ensure there is a .tar.gz
- file mentioned in the .dsc.
-
- * lisa (main): use X-Katie in the mail headers, not X-Lisa; that
- way mails reach debian-{devel-,}changes@l.d.o.
-
-2002-04-02 Ryan Murray <rmurray@debian.org>
-
- * cron.daily: run shania after rhona
- * cron.daily-non-US: likewise.
-
-2002-04-01 James Troup <james@nocrew.org>
-
- * katie: re-add proposed-updates/stable install support.
-
- * katie.py (Katie.dump_vars): add changes["changes"] as an
- optional field; should be mandatory later.
-
-2002-03-31 James Troup <james@nocrew.org>
-
- * katie (install): support a Suite::<foo>::CopyKatie similar to
- CopyChanges. Done separately because .katie files don't need to
- be mirrored and will probably be copied to another directory as a
- result.
-
- * halle (main): add missing debug to options.
-
-2002-03-29 James Troup <james@nocrew.org>
-
- * madison (main): add a -r/--regex option.
-
-2002-03-26 James Troup <james@nocrew.org>
-
- * lisa: don't trample on changes["distribution"]; make a copy of
- it as changes["suite"] instead and use that.
-
-2002-03-16 Anthony Towns <ajt@debian.org>
-
- * templates/lisa.bxa_notification: Fix some grammatical errors.
- Encourage contact via bxa@ftp-master email address.
-
-2002-03-15 James Troup <james@nocrew.org>
-
- * jennifer (check_timestamps): remove bogus raise in except.
-
-2002-03-15 Anthony Towns <ajt@debian.org>
-
- * cron.monthly: rotate mail/archive/bxamail as well as
- mail/archive/mail. This is for a complete archive of
- correspondence with the BXA.
-
-2002-03-14 Anthony Towns <ajt@debian.org>
-
- * crypto-in-main changes.
-
- * utils.py (move, copy): add an optional perms= parameter to let you
- set the resulting permissions of the moved/copied file
- * katie.py (force_move): rejected/morgued files should be unreadable
- * jennifer (do_byhand, acknowledge_new): pending new and byhand files
- should be unreadable.
-
-2002-03-07 Ryan Murray <rmurray@debian.org>
-
- * katie (install): check for existence of "files id" key as well as
- it being set to a valid value.
- * katie (install): check for existence and valid value for location
- id as well.
-
-2002-03-05 Ryan Murray <rmurray@debian.org>
-
- * katie.py (do_reject): reread the reason file after editing it.
-
-2002-02-25 James Troup <james@nocrew.org>
-
- * jennifer (check_changes): don't enforce sanity in .changes file
- names since it doesn't seem to be possible; pcmcia-cs and similar
- freak show packages in particular cause real problems.
-
- * katie.py (Katie.check_dsc_against_db): initialize 'found' for
- each dsc_file since the .orig.tar.gz checking code now uses it as
- a boolean. Fixes bizarro rejections which bogusly claimed
- .diff.gz md5sum/size was incorrect.
-
-2002-02-24 James Troup <james@nocrew.org>
-
- * katie (process_it): reset reject_message.
-
-2002-02-22 James Troup <james@nocrew.org>
-
- * db_access.py (set_files_id): disable use of
- currval('files_id_seq') because it was taking 3 seconds on auric
- which is insane (most calls take < 0.01) and simply call
- get_files_id() for the return value instead.
-
- * katie.py (Katie.do_query): convenience function; unused by
- default, useful for profiling.
- * db_access.py (do_query): likewise.
-
- * katie (install): fix insert SQL call when binary has no source.
-
- * lisa (determine_new): auto-correct non-US/main to non-US.
- (determine_new): add a warning when adding things to stable.
- (edit_index): use our_raw_input().
- (edit_overrides): likewise.
- (do_new): likewise. Use re.search() not re.match() since the
- default answer may not be the first one.
- (do_byhand): likewise.
- (do_new): Default to 'S'kip and never 'A'dd.
-
- * jennifer (action): pass prompt to our_raw_input().
- * melanie (game_over): likewise.
- * katie (action): likewise.
-
- * utils.py (our_raw_input): add an optional prompt argument to
- make the function more usable as a drop in replacement for
- raw_input().
-
- * jennifer (check_files): correct reject() to not double prefix
- when using katie.py based functions.
- (check_dsc): likewise.
-
- * katie.py (Katie.reject): prepend a new line if appropriate
- rather than appending one to avoid double new lines when caller
- adds one of his own.
-
- * lisa (determine_new): warn if the package is also in other
- components.
-
-2002-02-20 James Troup <james@nocrew.org>
-
- * jennifer (check_files): if .changes file lists "source" in
- Architecture field, there must be a .dsc.
-
-2002-02-15 James Troup <james@nocrew.org>
-
- * ashley (main): add some missing fields.
-
- * katie.py (Katie.check_dsc_against_db): fix to take into account
- the fact that the .orig.tar.gz might be in byhand, accepted or
- new. Also fix calling of reject().
- (Katie.check_binaries_against_db): fix calling of reject().
- (Katie.check_source_against_db): likewise.
- (Katie.dump_vars): add missing variables used for bug closures.
-
- * lisa (changes_compare_by_time): sort by reverse time.
-
- * katie.py (Katie.accept): log.
- (Katie.dump_vars): missing has_key test for optional changes fields.
-
- * jennifer (main): print "Accepted blah blah" to stdout, not stderr.
- (process_it): traceback goes to stderr, not stdout.
- (acknowledge_new): log.
- (do_byhand): likewise.
-
- * katie.py (Katie.update_subst): fix typo (Cnf vs. self.Cnf).
-
- * add_constraints.sql: add grants for the new fingerprint table.
-
-2002-02-13 James Troup <james@nocrew.org>
-
- * katie (do_reject): basename the .changes filename before trying
- to use it to construct the .reason filename.
- (process_it): call Katie.update_subst() so do_reject() DTRT with
- the mail template.
- (do_reject): setup the mail template correctly.
-
-2002-02-12 James Troup <james@nocrew.org>
-
- * tea (process_dir): renamed 'arg' to 'unused' for clarity.
- (check_files): don't abuse global dictionaries.
- (Ent): use all variables.
- (check_timestamps): don't abuse global dictionaries.
-
- * fernanda.py: renamed to .py so lisa can import it.
- (check_dsc): remove unused local variables (pychecker).
- (display_changes): split off from check_changes.
-
- * katie: rewritten; most of the functionality moves to jennifer;
- what's left is the code to install packages once a day from the
- 'accepted' directory.
-
- * jennifer: new program, processes packages in 'unchecked'
- (i.e. most of the non-install functionality of old katie).
-
- * katie.py: common functions shared between the clique of
- jennifer, lisa and katie.
-
- * lisa: new program; handles NEW and BYHAND packages.
-
- * jeri (usage): new function.
- (main): use it.
- (check_package): remove unused local variable (pychecker).
-
- * init_pool.sql: new table fingerprint. Add fingerprint colums to
- binaries and source. Add install_date to source.
-
- * halle (usage): new function.
- (main): use it. Remove unused options.
- (check_changes): remove unused local variable (pychecker).
-
- * add_constraints.sql: add fingerprint references.
-
- * db_access.py (get_or_set_fingerprint_id): new function.
-
- * ashley (main): new program; dumps the contents of a .katie file
- to stdout.
-
- * alyson (main): remove option handling since we don't actually
- support any.
- * cindy (main): likewise.
-
- * remove unnecessary imports and pre-define globals (pychecker).
-
-2002-02-11 Anthony Towns <ajt@debian.org>
-
- * added installation-report and upgrade-report pseudo-packages
-
-2002-01-28 Martin Michlmayr <tbm@cyrius.com>
-
- * katie (update_subst): use Dinstall::TrackingServer.
- * melanie (main): likewise.
-
-2002-01-27 James Troup <james@nocrew.org>
-
- * shania (main): it's IntLevel not IntVal; thanks to tbm@ for
- noticing, jgg@ for fix.
-
-2002-01-19 James Troup <james@nocrew.org>
-
- * utils.py (extract_component_from_section): handle non-US
- non-main properly.
-
-2002-01-12 James Troup <james@nocrew.org>
-
- * madison: add support for -S/--source-and-binary which displays
- information for the source package and all its binary children.
-
-2002-01-13 Anthony Towns <ajt@debian.org>
-
- * katie.conf: Remove Catherine Limit and bump stable to 2.2r5
- * katie.conf: Add Dinstall::SigningKeyIds option, set to the 2001
- and 2002 key ids.
- * katie.conf-non-US: Likewise.
- * ziyi: Support Dinstall::SigningKeyIds to sign a Release file with
- multiple keys automatically. This is probably only useful for
- transitioning from an expired (or revoked?) key.
-
-2002-01-08 Ryan Murray <rmurray@debian.org>
-
- * debian/python-dep: new file that prints out python:Depends for
- substvars
- * debian/control: use python:Depends, build-depend on python
- lower Depends: on postgresql to Suggests:
- * debian/rules: determine python version, install to the correct
- versioned dir
-
-2001-12-18 Anthony Towns <ajt@debian.org>
-
- * ziyi: unlink Release files before overwriting them (in case they've
- been merged)
- * ziyi: always include checksums/sizes for the uncompressed versions
- of Sources and Packages, even if they're not present on disk
-
-2001-11-26 Ryan Murray <rmurray@debian.org>
-
- * ziyi (main): add SigningPubKey config option
- * katie.conf: use SigningPubKey config option
- * katie.conf-non-US: likewise
-
-2001-11-24 James Troup <james@nocrew.org>
-
- * katie (acknowledge_new): log newness.
-
-2001-11-24 Anthony Towns <ajt@debian.org>
-
- * ziyi (real_arch): bail out if some moron forgot to reset
- untouchable on stable.
- (real_arch): source Release files.
-
-2001-11-19 James Troup <james@nocrew.org>
-
- * claire.py (main): don't use apt_pkg.ReadConfigFileISC and
- utils.get_conf().
- * shania (main): likewise.
-
- * rhona (main): add default options.
-
- * db_access.py (get_archive_id): case independent.
-
- * katie (action): sort files so that ordering is consistent
- between mails; noticed/requested by Joey.
-
-2001-11-17 Ryan Murray <rmurray@debian.org>
-
- * utils.py: add get_conf function, change startup code to read all
- config files to the Cnf that get_conf returns
- use the component list from the katie conf rather than the hardcoded
- list.
- * all scripts: use new get_conf function
- * shania: fix try/except around changes files
- * jenna: only do debian-installer if it is a section in Cnf
-
-2001-11-16 Ryan Murray <rmurray@debian.org>
-
- * shania (main): Initialize days to a string of a number.
- (main): Initialize Cnf vars before reading in Cnf
-
-2001-11-14 Ryan Murray <rmurray@debian.org>
-
- * shania (main): Initialize days to a number.
-
-2001-11-04 James Troup <james@nocrew.org>
-
- * docs/Makefile: use docbook-utils' docbook2man binary.
-
- * Change all "if foo == []" constructs into "if not foo".
-
- * katie (check_changes): when installing into stable from
- proposed-updates, remove all non-stable target distributions.
- (check_override): don't check for override disparities on stable
- installs.
- (stable_install): update install_bytes appropriately.
- (reject): stable rejection support; i.e. don't remove files when
- rejecting files in the pool, rather remove them from the
- proposed-update suite instead, rhona will do the rest.
- (manual_reject): support for a stable specific template.
- (main): setup up stable rejector in subst.
-
-2001-11-04 Martin Michlmayr <tbm@cyrius.com>
-
- * debian/control (Build-Depends): docbook2man has been superseded
- by docbook-utils.
-
- * neve (main): exit with a more useful error message.
- (update_suites): Suite::<suite>::Version, Origin and Description
- are not required, so don't fail if they don't exist.
-
- * db_access.py (get_archive_id): return -1 on error rather than
- raise an exception.
- (get_location_id): likewise.
-
- * madison (main): don't exit on the first not-found package,
- rather exit with an appropriate return code after processing all
- packages.
-
-2001-11-03 James Troup <james@nocrew.org>
-
- * claire.py (find_dislocated_stable): add per-architecture
- symlinks for dislocated architecture: all debs.
-
-2001-10-19 Anthony Towns <ajt@debian.org>
-
- * apt.conf*, katie.conf*: add mips, mipsel, s390 to testing.
-
-2001-10-10 Anthony Towns <ajt@debian.org>
-
- * claire.py (fix_component_section): do _not_ assign to None under
- any circumstances
-
-2001-10-07 Martin Michlmayr <tbm@cyrius.com>
-
- * melanie (main): don't duplicate architectures when removing from
- more than one suite.
-
- * heidi (main, process_file, get_list): report suite name not ID.
-
- * naima (nmu_p): be case insensitive.
-
- * naima (action): more list handling clean ups.
-
- * melanie (main): clean up lists handling to use string.join and
- IN's.
-
- * madison (main): clean up suite and architecture argument parsing
- to use slices less and string.join more.
-
- * utils.py (parse_changes): Use string.find() instead of slices for
- string comparisons, thereby avoiding hardcoding the length of strings.
- * ziyi (main): likewise.
-
-2001-10-07 James Troup <james@nocrew.org>
-
- * Remove mode argument from utils.open_files() calls if it's the
- default, i.e. 'r'.
-
-2001-09-27 James Troup <james@nocrew.org>
-
- * katie (init): new function; options clean up.
- (usage): add missing options, remove obsolete ones.
- (main): adapt for the two changes above. If the lock file or
- new-ack log file don't exist, create them. Don't try to open the
- new-ack log file except running in new-ack mode.
-
- * alyson (main): initialize all the tables that are based on the
- conf file.
-
- * utils.py (touch_file): like touch(1).
- (where_am_i): typo.
-
- * catherine (usage): new.
- (main): use it. options cleanup.
- * claire.py: likewise.
- * fernanda: likewise.
- * heidi: likewise.
- * jenna: likewise.
- * shania: likewise.
- * ziyi: likewise.
-
- * andrea: options cleanup.
- * charisma: likewise.
- * julia: likewise.
- * madison: likewise.
- * melanie: likewise.
- * natalie: likewise.
- * rhona: likewise.
- * tea: likewise.
-
-2001-09-26 James Troup <james@nocrew.org>
-
- * utils.py: default to sane config file locations
- (/etc/katie/{apt,katie}.conf). They can be the actual config files
- or they can point to the real ones through use of a new Config
- section. Based on an old patch by Adam Heath.
- (where_am_i): use the new default config stuff.
- (which_conf_file): likewise.
- (which_apt_conf_file): likewise.
-
- * charisma (main): output defaults to
- `Package~Version\tMaintainer'; input can be of either form. When
- parsing the new format do version comparisons, when parsing the
- old format assume anything in the extra file wins. This fixes the
- problem of newer non-US packages being overwhelmed by older
- versions still in stable on main.
-
-2001-09-17 James Troup <james@nocrew.org>
-
- * natalie.py (list): use result_join().
-
- * denise (main): result_join() moved to utils.
-
- * utils.py (result_join): move to utils; add an optional separator
- argument.
-
-2001-09-14 James Troup <james@nocrew.org>
-
- * heidi (set_suite): new function; does --set like natalie does,
- i.e. turns it into a sequence of --add's and --remove's
- internally. This is a major win (~20 minute run time -> ~20
- seconds) in the common, everyday (i.e. testing) case.
- (get_id): common code used by set_suite() and process_file().
- (process_file): call set_suite() and get_id().
- (main): add logging support.
-
- * julia: new script; syncs PostgreSQL with (LDAP-generated) passwd
- files.
-
- * utils.py (parse_changes): use slices or simple string comparison
- in favour of regexes where possible.
-
- * sql-aptvc.cpp (versioncmp): rewritten to take into account the
- fact that the strings VARDATA() points to are not null-terminated.
-
- * denise (result_join): new function; like string.join() but
- handles None's.
- (list): use it.
- (main): likewise.
-
- * charisma (main): python-pygresql 7.1 returns None not "".
-
-2001-09-14 Ryan Murray <rmurray@debian.org>
-
- * natalie.py (list): handle python-pygresql 7.1 returning None.
-
-2001-09-10 Martin Michlmayr <tbm@cyrius.com>
-
- * madison (main): return 1 if no package is found.
-
-2001-09-08 Martin Michlmayr <tbm@cyrius.com>
-
- * madison (main): better error handling for incorrect
- -a/--architecture or -s/--suite arguments.
- (usage): new.
- (main): use it.
-
-2001-09-05 Ryan Murray <rmurray@debian.org>
-
- * charisma, madison, katie: remove use of ROUser
- * katie.conf,katie.conf-non-US: remove definition of ROUser
-
-2001-08-26 James Troup <james@nocrew.org>
-
- * katie (nmu_p.is_an_nmu): use maintaineremail to check for group
- maintained packages at cjwatson@'s request.
-
-2001-08-21 James Troup <james@nocrew.org>
-
- * madison (main): add -a/--architecture support.
-
- * jenna: use logging instead of being overly verbose on stdout.
-
-2001-08-11 Ryan Murray <rmurray@debian.org>
-
- * melanie: add functional help option
-
-2001-08-07 Anthony Towns <ajt@debian.org>
-
- * apt.conf, katie.conf: Add ia64 and hppa to testing.
-
-2001-07-28 James Troup <james@nocrew.org>
-
- * katie (check_dsc): ensure source version is >> than existing
- source in target suite.
-
-2001-07-25 James Troup <james@nocrew.org>
-
- * natalie.py: add logging support.
-
- * utils.py (open_file): make the second argument optional and
- default to read-only.
-
- * rene (main): work around broken source packages that duplicate
- arch: all packages with arch: !all packages (no longer allowed
- into the archive by katie).
-
-2001-07-13 James Troup <james@nocrew.org>
-
- * katie (action): don't assume distribution is a dictionary.
- (update_subst): don't assume architecture is a dictionary and that
- maintainer822 is defined.
- (check_changes): recognise nk_format exceptions.
- (check_changes): reject on 'testing' only uploads.
- (check_files): when checking to ensure all packages are newer
- versions check against arch-all packages too.
- (check_dsc): enforce the existence of a sane set of mandatory
- fields. Ensure the version number in the .dsc (modulo epoch)
- matches the version number in the .changes file.
-
- * utils.py (changes_compare): ignore all exceptions when parsing
- the changes files.
- (build_file_list): don't UNDEF on a changes file with no format
- field.
-
-2001-07-07 James Troup <james@nocrew.org>
-
- * katie (nmu_p.is_an_nmu): check 'changedby822' for emptiness
- rather than 'changedbyname' to avoid false negatives on uploads
- with an email-address-only Changed-By field.
- (check_dsc): don't overwrite reject_message; append to it.
- (check_signature): likewise.
- (check_changes): likewise.
- (announce): condition logging on 'action'.
-
- * logging.py: new logging module.
-
- * katie: Cleaned up code by putting Cnf["Dinstall::Options"]
- sub-tree into a separate (global) variable.
- (check_dsc): ensure format is 1.0 to retain backwards
- compatibility with dpkg-source in potato.
- (main): only try to obtain the lock when not running in no-action
- mode.
- Use the new logging module.
-
- * christina: initial version; only partially usable.
-
-2001-06-28 Anthony Towns <ajt@debian.org>
-
- * apt.conf: Add ExtraOverrides to auric.
-
-2001-06-25 James Troup <james@nocrew.org>
-
- * katie (nmu_p.is_an_nmu): the wonderful dpkg developers decided
- they preferred the name 'Uploaders'.
-
-2001-06-23 James Troup <james@nocrew.org>
-
- * katie (check_files): fix typo in uncommon rejection message,
- s/sourceversion/source version/.
-
- * denise (main): we can't use print because stdout has been
- redirected.
-
- * katie (source_exists): new function; moved out of check_files()
- and added support for binary-only NMUs of earlier sourceful NMUs.
-
- * rhona (clean): find_next_free has moved.
-
- * utils.py (find_next_free): new function; moved here from rhona.
- Change too_many to be an argument with a default value, rather
- than a hardcoded variable.
-
- * shania: rewritten to work better; REJECTion reminder mail
- handling got lost though.
-
-2001-06-22 James Troup <james@nocrew.org>
-
- * rhona (main): remove unused override code.
-
- * fernanda (main): remove extraneous \n's from utils.warn calls.
- * natalie.py (list): likewise.
-
- * catherine, cindy, denise, heidi, jenna, katie, neve, rhona, tea:
- use utils.{warn,fubar} where appropriate.
-
-2001-06-21 James Troup <james@nocrew.org>
-
- * katie (nmu_p): new class that encapsulates the "is a nmu?"
- functionality.
- (nmu_p.is_an_nmu): add support for multiple maintainers specified
- by the "Maintainers" field in the .dsc file and maintainer groups.
- (nmu_p.__init__): read in the list of group maintainer names.
- (announce): use nmu_p.
-
-2001-06-20 James Troup <james@nocrew.org>
-
- * rene (main): hardcode the suite experimental is compared to by
- name rather than number.
-
- * katie (check_files): differentiate between doesn't-exist and
- permission-denied in "can not read" rejections; requested by edd@.
- (check_dsc): use os.path.exists rather than os.access to allow the
- above check to kick in.
-
- * heidi (process_file): read all input before doing anything and
- use transactions.
-
-2001-06-15 James Troup <james@nocrew.org>
-
- * fernanda: new script; replaces old 'check' shell script
- nastiness.
-
-2001-06-14 James Troup <james@nocrew.org>
-
- * katie: actually import traceback module to avoid amusing
- infinite loop.
-
-2001-06-10 James Troup <james@nocrew.org>
-
- * utils.py (extract_component_from_section): fix to handle just
- 'non-free' and 'contrib'. Also fix to handle non-US in a
- completely case insensitive manner as a component.
-
-2001-06-08 James Troup <james@nocrew.org>
-
- * madison (arch_compare): sort function that sorts 'source' first
- then alphabetically.
- (main): use it.
-
-2001-06-05 Jeff Licquia <jlicquia@progeny.com>
-
- * catherine (poolize): explicitly make poolized_size a long so it
- doesn't overflow when poolizing e.g. entire archives.
-
-2001-06-01 James Troup <james@nocrew.org>
-
- * utils.py (send_mail): throw exceptions rather than exiting.
-
- * katie (process_it): catch exceptions and ignore them.
-
-2001-06-01 Michael Beattie <mjb@debian.org>
-
- * added update-mailingliststxt and update-readmenonus to update
- those files, respectively. modified cron.daily{,-non-US} to
- use them.
-
-2001-05-31 Anthony Towns <ajt@debian.org>
-
- * rhona: make StayOfExecution work.
-
-2001-05-31 James Troup <james@nocrew.org>
-
- * rhona (find_next_free): fixes to not overwrite files but rename
- them by appending .<n> instead.
- (clean): use find_next_free and use dated sub-directories in the
- morgue.
-
- * utils.py (move): don't overwrite files unless forced to.
- (copy): likewise.
-
-2001-05-24 James Troup <james@nocrew.org>
-
- * katie (check_files): determine the source version here instead
- of during install().
- (check_files): check for existent source with bin-only NMU
- support.
- (main): sort the list of changes so that the source-must-exist
- check Does The Right Thing(tm).
-
- * utils.py (changes_compare): new function; sorts a list of
- changes files by 'have-source', source, version.
- (cc_fix_changes): helper function.
- (parse_changes): use compiled regexes.
- (fix_maintainer): likewise.
-
- * rene (main): warn about packages in experimental that are
- superseded by newer versions in unstable.
-
-2001-05-21 James Troup <james@nocrew.org>
-
- * rene (main): add support for checking for ANAIS (Architecture
- Not Allowed In Source) packages.
-
-2001-05-17 James Troup <james@nocrew.org>
-
- * katie (check_changes): initialize `architecture' dictionary in
- changes global so that if we can't parse the changes file for
- whatever reason we don't undef later on.
-
- * utils.py (parse_changes): fix handling of multi-line fields
- where the first line did have data.
-
-2001-05-05 Anthony Towns <ajt@debian.org>
-
- * ziyi: Add "NotAutomatic: yes" to experimental Release files.
- (It should always have been there. Ooopsy.)
-
-2001-05-03 Anthony Towns <ajt@debian.org>
-
- * jenna: Cleanup packages that move from arch:any to arch:all or
- vice-versa.
-
-2001-04-24 Anthony Towns <ajt@debian.org>
-
- * ziyi: add ``SHA1:'' info to Release files. Also hack them up to
- cope with debian-installer and boot-floppies' md5sum.txt.
-
-2001-04-16 James Troup <james@nocrew.org>
-
- * katie (check_changes): add missing %s format string argument.
- (stable_install): temporary work around for morgue location to
- move installed changes files into.
- (stable_install): helps if you actually read in the template.
- (manual_reject): fix for editing of reject messages which was
- using the wrong variable name.
-
- * jenna (generate_src_list): typo; s/package/source/; fixes undef crash.
-
-2001-04-13 James Troup <james@nocrew.org>
-
- * katie (manual_reject): Cc the installer.
- (reject): don't.
- (check_changes): remove unused maintainer-determination code.
- (update_subst): add support for Changed-By when setting the
- *MAINTAINER* variables.
-
- * rene (bar): new function to check for packages on architectures
- when they shouldn't be.
-
- * natalie.py (main): use fubar() and warn() from utils.
-
- * utils.py (whoami): new mini-function().
- * melanie (main): use it.
- * katie (manual_reject): likewise.
-
-2001-04-03 James Troup <james@nocrew.org>
-
- * katie (action): ignore exceptions from os.path.getmtime() so we
- don't crash on non-existent changes files (e.g. when they are
- moved between the start of the install run in cron.daily and the
- time we get round to processing them).
-
- * madison (main): also list source and accept -s/--suite.
-
- * jenna (generate_src_list): missing \n in error message.
-
- * katie (update_subst): add sane defaults for when changes is
- skeletal.
-
-2001-03-29 James Troup <james@nocrew.org>
-
- * melanie (main): use fubar() and warn() from utils. Remember who
- the maintainers of the removed packages are and display that info
- to the user. Readd support for melanie-specific Bcc-ing that got
- lost in the TemplateSubst transition.
-
- * utils.py (fubar): new function.
- (warn): likewise.
-
- * db_access.py (get_maintainer): as below.
-
- * charisma (get_maintainer): moved the bulk of this function to
- db_access so that melanie can use it too.
-
- * claire.py (find_dislocated_stable): restrict the override join
- to those entries where the suite is stable; this avoids problems
- with packages which have moved to new sections (e.g. science)
- between stable and unstable.
-
-2001-03-24 James Troup <james@nocrew.org>
-
- * catherine (poolize): new function; not really independent of
- main() fully, yet.
- (main): use it.
-
- * katie (stable_install): __SUITE__ needs to be space prefixed
- because buildd's check for 'INSTALLED$'.
-
-2001-03-22 James Troup <james@nocrew.org>
-
- * utils.py (regex_safe): also need to escape '.'; noticed by ajt@.
-
- * jenna: rewritten; now does deletions on a per-suite level
- instead of a per-suite-component-architecture-type level. This
- allows multi-component packages to be auto-cleaned (and as a
- bonus, reduces the code size and duplication).
-
-2001-03-22 Anthony Towns <ajt@debian.org>
-
- * ziyi (main): fix ziyi to overwrite existing Release.gpg files
- instead of just giving a gpg error.
-
-2001-03-21 James Troup <james@nocrew.org>
-
- * madison (main): use apt_pkg.VersionCompare to sort versions so
- that output is correctly sorted for packages like debhelper.
- Noticed by ajt@.
-
- * tea (check_source_in_one_dir): actually find problematic source
- packages.
-
- * katie (check_dsc): remember the orig.tar.gz's location ID if
- it's not in a legacy suite.
- (check_diff): we don't use orig_tar_id.
- (install): add code to handle sourceful diff-only upload of
- packages which change components by copying the .orig.tar.gz into
- the new component, if it doesn't already exist there.
- (process_it): reset orig_tar_location (as above).
-
- * melanie (main): use template substitution for the bug closing
- emails.
- (main): don't hardcode bugs.debian.org or packages.debian.org
- either; use configuration items.
-
- * katie: likewise.
-
- * natalie.py (init): use None rather than 'localhost' for the
- hostname given to pg.connect.
-
- * utils.py (TemplateSubst): new function; lifted from
- userdir-ldap.
-
-2001-03-21 Ryan Murray <rmurray@debian.org>
-
- * katie (announce): fix the case of non-existent
- Suite::$SUITE::Announce.
-
-2001-03-20 Ryan Murray <rmurray@debian.org>
-
- * debian/rules (binary-indep): install melanie into /usr/bin/ not
- /usr/.
-
- * alyson (main): use config variable for database name.
- * andrea (main): likewise.
- * catherine (main): likewise.
- * charisma (main): likewise.
- * cindy (main): likewise.
- * claire.py (main): likewise.
- * denise (main): likewise.
- * heidi (main): likewise.
- * jenna (main): likewise.
- * katie (main): likewise.
- * madison (main): likewise.
- * melanie (main): likewise.
- * neve (main): likewise.
- * rhona (main): likewise.
- * tea (main): likewise.
-
-2001-03-15 James Troup <james@nocrew.org>
-
- * rhona (check_sources): fixed evil off by one (letter) error
- which was causing only .dsc files to be deleted when cleaning
- source packages.
-
- * charisma (get_maintainer_from_source): remove really stupid and
- gratuitous IN sub-query and replace with normal inner join.
- (main): connect as read-only user nobody.
-
- * rhona (clean_maintainers): rewritten to use SELECT and sub-query
- with EXISTS.
- (check_files): likewise; still disabled by default though.
- (clean_binaries): add ' seconds' to the mysterious number in the
- output.
- (clean): likewise.
-
- * tea (check_files): add missing global declaration on db_files.
-
-2001-03-14 James Troup <james@nocrew.org>
-
- * rhona: rewritten large chunks. Removed a lot of the silly
- selecting into dictionaries and replaced it with 'where exists'
- based sub queries. Added support for StayOfExecution. Fix the
- problem with deleting dsc_files too early and disable cleaning of
- unattached files.
-
-2001-03-14 Anthony Towns <ajt@debian.org>
-
- * katie (announce): also check Changed-By when trying to detect
- NMUs.
-
-2001-03-06 Anthony Towns <ajt@debian.org>
-
- * ziyi (main): Generate Release.gpg files as well, using the key from
- Dinstall::SigningKey in katie.conf, if present. That key has to be
- passwordless, and hence kept fairly secretly.
-
-2001-03-02 James Troup <james@nocrew.org>
-
- * utils.py (str_isnum): new function; checks to see if the string
- is a number.
-
- * shania (main): fix _hideous_ bug which was causing all files > 2
- weeks old to be deleted from incoming, even if they were part of a
- valid upload.
-
-2001-02-27 James Troup <james@nocrew.org>
-
- * melanie (main): accept new argument -C/--carbon-copy which
- allows arbitrary carbon-copying of the bug closure messages.
- Cleaned up code by putting Cnf["Melanie::Options"] sub-tree into a
- separate variable.
-
-2001-02-27 Anthony Towns <ajt@debian.org>
-
- * ziyi: new program; generates Release files.
-
-2001-02-25 James Troup <james@nocrew.org>
-
- * katie (reject): add missing '\n' to error message.
- (manual_reject): likewise.
- (install): catch exceptions from moving the changes file into DONE
- and ignore them.
-
- * tea (check_md5sums): new function.
-
-2001-02-25 Michael Beattie <mjb@debian.org>
-
- * melanie: use $EDITOR if available.
-
-2001-02-15 James Troup <james@nocrew.org>
-
- * utils.py (parse_changes): don't crash and burn on empty .changes
- files. Symptoms noticed by mjb@.
-
-2001-02-15 Adam Heath <doogie@debian.org>
-
- * denise (main): use an absolute path for the output filename.
-
- * sql-aptvc.cpp: don't #include <utils/builtins.h> as it causes
- compile errors with postgresql-dev >= 7.0.
-
-2001-02-12 James Troup <james@nocrew.org>
-
- * rene: initial version.
-
- * andrea: initial version.
-
- * catherine (main): remove obsolete assignment of arguments.
-
-2001-02-09 James Troup <james@nocrew.org>
-
- * catherine: first working version.
-
-2001-02-06 James Troup <james@nocrew.org>
-
- * katie (check_files): validate the priority field; i.e. ensure it
- doesn't contain a '/' (to catch people prepending the priority
- with the component rather than the section).
- (check_override): don't warn about source packages; the only check
- is on section and we have no GUI tools that would use the Section
- field for a Sources file.
- (announce): use tags rather than severities for NMUs. Requested
- by Josip Rodin <joy@>. [#78035]
-
-2001-02-04 James Troup <james@nocrew.org>
-
- * tea (check_override): new function; ensures packages in suites
- are also in the override file. Thanks to bod@ for noticing that
- such packages existed.
-
- * katie: move file type compiled regular expressions to utils as
- catherine uses them too.
- (check_changes): always default maintainer822 to the installer
- address so that any bail out won't cause undefs later.
- (check_files): update file type re's to match above.
- (stable_install): likewise.
- (reject): handle any exception from moving the changes files. Fixes
- crashes on unreadable changes files.
-
- * melanie (main): add an explanation of why things are not removed
- from testing.
-
-2001-01-31 James Troup <james@nocrew.org>
-
- * melanie (main): ignore a) no message, b) removing from stable or
- testing when invoked with -n/--no-action.
-
- * katie (check_override): lower section before checking to see if
- we're whining about 'non-US' versus 'non-US/main'.
-
- * sql-aptvc.cpp: new file; wrapper around apt's version comparison
- function so that we can use inside of PostgreSQL.
-
-2001-01-28 James Troup <james@nocrew.org>
-
- * katie: updated to pass new flag to parse_changes() and deal with
- the exception raised on invalid .dsc's if appropriate.
- * shania (main): likewise.
- * melanie (main): likewise.
-
- * tea (check_dscs): new function to validate all .dsc files in
- unstable.
-
- * utils.py (parse_changes): if passed an additional flag, validate
- the .dsc file to ensure it's extractable by dpkg-source.
- Requested by Ben Collins <bcollins@>.
-
-2001-01-27 James Troup <james@nocrew.org>
-
- * madison (main): connect to the DB as nobody.
-
- * katie (check_files): remove support for -r/--no-version-check
- since it makes no sense under katie (jenna will automatically
- remove the (new) older version) and was evil in any event.
- (check_changes): add missing new line to rejection message.
- (check_dsc): likewise.
- (process_it): reset reject_message here.
- (main): not here. Also remove support for -r.
-
-2001-01-26 James Troup <james@nocrew.org>
-
- * katie (check_override): don't whine about 'non-US/main' versus
- 'non-US'.
-
-2001-01-26 Michael Beattie <mjb@debian.org>
-
- * natalie.py (usage): new function.
- (main): use it.
-
-2001-01-25 Antti-Juhani Kaijanaho <gaia@iki.fi>
-
- * update-mirrorlists: Update README.non-US too (request from Joy).
-
-2001-01-25 James Troup <james@nocrew.org>
-
- * katie (reject): catch any exception from utils.move() and just
- pass, we previously only caught can't-overwrite errors and not
- can't-read ones.
-
- * jenna (generate_src_list): use ORDER BY in selects to avoid
- unnecessary changes to Packages files.
- (generate_bin_list): likewise.
-
- * utils.py (extract_component_from_section): separated out from
- build_file_list() as it's now used by claire too.
-
- * claire.py (find_dislocated_stable): rewrite the query to extract
- section information and handle component-less locations properly.
- Thanks to ajt@ for the improved queries.
- (fix_component_section): new function to fix components and
- sections.
-
-2001-01-23 James Troup <james@nocrew.org>
-
- * katie (check_files): set file type for (u?)debs first thing, so
- that if we continue, other functions which rely on file type
- existing don't bomb out. If apt_pkg or apt_inst raise an
- exception when parsing the control file, don't try any other
- checks, just drop out.
- (check_changes): new test to ensure there is actually a target
- distribution.
-
-2001-01-22 James Troup <james@nocrew.org>
-
- * katie (usage): s/dry-run/no-action/. Noticed by Peter Gervai
- <grin@>.
- (check_changes): when mapping to unstable, remember to actually
- add unstable to the suite list and not just remove the invalid
- suite.
-
-2001-01-21 James Troup <james@nocrew.org>
-
- * katie (check_files): catch exceptions from debExtractControl()
- and reject packages which raise any.
-
-2001-01-19 James Troup <james@nocrew.org>
-
- * katie (check_signature): basename() file name in rejection
- message.
-
-2001-01-18 James Troup <james@nocrew.org>
-
- * katie (in_override_p): remember the section and priority from
- the override file so we can check them against the package later.
- (check_override): new function; checks section and priority (for
- binaries) from the package against the override file and mails the
- maintainer about any disparities.
- (install): call check_override after announcing the upload.
-
-2001-01-16 James Troup <james@nocrew.org>
-
- * utils.py (build_file_list): catch ValueError's from splitting up
- the files field and translate it into a parse error.
-
- * tea: add support for finding unreferenced files.
-
- * katie (in_override_p): add support for suite-aliasing so that
- proposed-updates uploads work again.
- (check_changes): catch parse errors from utils.build_file_list().
- (check_dsc): likewise.
- (check_diff): yet more dpkg breakage so we require an even newer
- version.
-
- * jenna (generate_bin_list): don't do nasty path munging that's no
- longer needed.
-
- * denise (main): support for non-US; and rename testing's override
- files so they're based on testing's codename.
-
-2001-01-16 Martin Michlmayr <tbm@cyrius.com>
-
- * melanie: add to the bug closing message explaining what happens
- (or rather doesn't) with bugs against packages that have been
- removed.
-
-2001-01-14 James Troup <james@nocrew.org>
-
- * charisma (main): fix silly off-by-one error; suite priority
- checking was done using "less than" rather than "less than or
- equal to" which was causing weird heisenbugs with wrong Maintainer
- fields.
-
-2001-01-10 James Troup <james@nocrew.org>
-
- * katie (in_override_p): adapted to use SQL-based overrides.
- read_override_file function disappears.
-
- * db_access.py: add new functions get_section_id, get_priority_id
- and get_override_type_id.
- (get_architecture_id): return -1 if the architecture is not found.
-
- * heidi: switch %d -> %d in all SQL queries.
- (get_list): Use string.join where appropriate.
-
- * rhona (in_override_p): don't die if the override file doesn't
- exist.
- (main): warn if the override file doesn't exist.
-
- * alyson: new script; will eventually sync the config file and the
- SQL database.
-
- * natalie.py: new script; manipulates overrides.
-
- * melanie: new script; removes packages from suites.
-
-2001-01-08 James Troup <james@nocrew.org>
-
- * katie (re_bad_diff): whee; dpkg 1.8.1.1 didn't actually fix
- anything it just changed the symptom. Recognise the new breakage
- and reject them too.
-
-2001-01-07 James Troup <james@nocrew.org>
-
- * katie (check_dsc): when adding the cwd copy of the .orig.tar.gz
- to the .changes file, be sure to set up files[filename]["type"]
- too.
-
-2001-01-06 James Troup <james@nocrew.org>
-
- * katie (check_diff): new function; detects bad diff files
- produced by dpkg 1.8.1 and rejects them.
- (process_it): call check_diff().
- (check_dsc): gar. Add support for multiple versions of the
- .orig.tar.gz file in the archive from -sa uploads. Check md5sum
- and size against all versions and use one which matches if
- possible and exclude any that don't from being poolized to avoid
- file overwrites. Thanks to broonie@ for providing the example.
- (install): skip any files marked as excluded as above.
-
-2001-01-05 James Troup <james@nocrew.org>
-
- * heidi (process_file): add missing argument to error message.
-
-2001-01-04 James Troup <james@nocrew.org>
-
- * heidi (main): fix handling of multiple files by reading all
- files not just the first file n times (where n = the number of
- files passed as arguments).
-
-2001-01-04 Anthony Towns <ajt@debian.org>
-
- * katie (check_dsc): proper fix for the code which locates the
- .orig.tar.gz; check for '<filename>$' or '^<filename>$'.
-
-2000-12-20 James Troup <james@nocrew.org>
-
- * rhona: replace IN's with EXISTS's to make DELETE time for
- binaries and source sane on auric. Add a -n/--no-action flag and
- make it stop actions if used. Fixed a bug in binaries deletion
- with no StayOfExecution (s/</<=/). Add working -h/--help and
- -V/--version. Giving timing info on deletion till I'm sure it's
- sane.
-
- * katie (check_changes): map testing to unstable.
-
- * madison: new script; shows versions in different architectures.
-
- * katie (check_dsc): ensure size matches as well as md5sum;
- suggested by Ben Collins <bcollins@debian.org> in Debian Bug
- #69702.
-
-2000-12-19 James Troup <james@nocrew.org>
-
- * katie (reject): ignore the "can't overwrite file" exception from
- utils.move() and leave the files where they are.
- (reject): doh! os.access() test was reversed so we only tried to
- move files which didn't exist... replaced with os.path.exists()
- test the right way round.
-
- * utils.py (move): raise an exception if we can't overwrite the
- destination file.
- (copy): likewise.
-
-2000-12-18 James Troup <james@nocrew.org>
-
- * rhona: first working version.
-
- * db_access.py (get_files_id): force both sizes to be integers.
-
- * katie (main): use size_type().
-
- * utils.py (size_type): new function; pretty prints a file size.
-
-2000-12-17 James Troup <james@nocrew.org>
-
- * charisma (main): do version compares so that older packages do
- not override newer ones and process source first as source wins
- over binaries in terms of who we think of as the Maintainer.
-
-2000-12-15 James Troup <james@nocrew.org>
-
- * katie (install): use the files id for the .orig.tar.gz from
- check_dsc().
- (install): limit select for legacy source to a) source in legacy
- or legacy-mixed type locations and b) distinct on files.id.
- (install): rather than the bizarre insert new, delete old method
- for moving legacy source into the pool, use a simple update of
- files.
- (process_it): initialize some globals before each process.
-
-2000-12-14 James Troup <james@nocrew.org>
-
- * katie (in_override_p): index on binary_type too since .udeb
- overrides are in a different file.
- (read_override_file): likewise.
- (check_files): correct filename passed to get_files_id().
- (check_dsc): we _have_ to prepend '/' to the filename to avoid
- mismatches like jabber.orig.tar.gz versus libjabber.orig.tar.gz.
- (check_dsc): remember the files id of the .orig.tar.gz, not the
- location id.
-
-2000-12-13 James Troup <james@nocrew.org>
-
- * utils.py (poolify): force the component to lower case except for
- non-US.
-
- * katie (in_override_p): handle .udeb-specific override files.
- (check_files): pass the binary type to in_override_p().
- (check_dsc): remember the location id of the old .orig.tar.gz in
- case it's not in the pool.
- (install): use location id from dsc_files; which is where
- check_dsc() puts it for old .orig.tar.gz files.
- (install): install files after all DB work is complete.
- (reject): basename() the changes filename.
- (manual_reject): likewise.
-
- * shania: new program; replaces incomingcleaner.
-
-2000-12-05 James Troup <james@nocrew.org>
-
- * katie (check_changes): if inside stable and can't find files
- from the .changes; assume it's installed in the pool and chdir()
- to there.
- (check_files): we are not installing for stable installs, so don't
- check for overwriting existing files.
- (check_dsc): likewise.
- (check_dsc): reorder .orig.tar.gz handling so that we search in
- the pool first and only then fall back on any .orig.tar.gz in the
- cwd; this avoids false positives on the overwrite check when
- people needlessly reupload the .orig.tar.gz in a non-sa upload.
- (install): if this is a stable install, bail out to
- stable_install() immediately.
- (install): dsc_files handling was horribly broken. a) we need to
- add files from the .dsc and not the .changes (duh), b) we need to
- add the .dsc file itself to dsc_files (to be consistent with neve
- if for no other reason).
- (stable_install): new function; handles installs from inside
- proposed-updates to stable.
- (acknowledge_new): basename changes_filename before doing
- anything.
- (process_it): absolutize the changes filename to avoid the
- requirement of being in the same directory as the .changes file.
- (process_it): save and restore the cwd as stable installs can
- potentially jump into the pool to find files.
-
- * jenna: dislocated_files support using claire.
-
- * heidi (process_file): select package field from binaries
- explicitly.
-
- * db_access.py (get_files_id): fix cache key used.
-
- * utils.py (build_file_list): fix 'non-US/non-free' case in
- section/component splitting.
- (move): use os.path.isdir() rather than stat.
- (copy): likewise.
-
- * claire.py: new file; stable in non-stable munger.
-
- * tea: new file; simply ensures all files in the DB exist.
-
-2000-12-01 James Troup <james@nocrew.org>
-
- * katie (check_dsc): use regex_safe().
- (check_changes): typo in changes{} key:
- s/distributions/distribution/.
- (install): use changes["source"], not files[file]["source"] as the
- latter may not exist and the former is used elsewhere. Commit the
- SQL transaction earlier.
-
- * utils.py (regex_safe): new function; escapes characters which
- have meaning to SQL's regex comparison operator ('~').
-
-2000-11-30 James Troup <james@nocrew.org>
-
- * katie (install): pool_location is based on source package name,
- not binary package.
-
- * utils.py (move): if dest is a directory, append the filename
- before chmod-ing.
- (copy): ditto.
-
- * katie (check_files): don't allow overwriting of existing .debs.
- (check_dsc): don't allow overwriting of existing source files.
-
-2000-11-27 James Troup <james@nocrew.org>
-
- * katie (check_signature): don't try to load rsaref; it's
- obsolete.
- (in_override_p): don't try to lookup override entries for packages
- with an invalid suite name.
- (announce): don't assume the suite name is valid; use Find() to
- lookup the mailing list name for announcements.
-
- * utils.py (where_am_i): typo; hostname is in the first element,
- not second.
-
- * db_access.py (get_suite_id): return -1 on an unknown suite.
-
-2000-11-26 James Troup <james@nocrew.org>
-
- * katie (install): fix CopyChanges handling; typo in checking
- Cnf for CopyChanges flag and was calling non-existent function
- copy_file.
-
- * utils.py (copy): new function; clone of move without the
- unlink().
-
-2000-11-25 James Troup <james@nocrew.org>
-
- * utils.py (build_file_list): handle non-US prefixes properly
- (i.e. 'non-US' -> 'non-US/main' and 'non-US/libs' -> 'non-US/main'
- + 'libs' not 'non-US/libs').
- (send_mail): add '-odq' to sendmail invocation to avoid DNS lookup
- delays. This is possibly(/probably) exim specific and (like other
- sendmail options) needs to be in the config file.
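The non-US prefix mapping described in this entry can be sketched as follows (the function name and exact rule set are assumptions based on this entry, not the real build_file_list code):

```python
def split_non_us(section):
    # Map a Section field onto (component, section) per the rules above:
    # 'non-US' lives in the 'non-US/main' component, and 'non-US/libs'
    # becomes component 'non-US/main' with section 'libs'.
    if section.startswith("non-US"):
        rest = section[len("non-US"):].lstrip("/")
        if rest in ("", "main", "contrib", "non-free"):
            # 'non-US' and 'non-US/main' both map to the non-US/main component
            return ("non-US/" + (rest or "main"), section)
        # subsection, e.g. 'non-US/libs' -> ('non-US/main', 'libs')
        return ("non-US/main", rest)
    return ("main", section)
```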
-
-2000-11-24 James Troup <james@nocrew.org>
-
- * rhona (check_sources): we need file id from dsc_files; not id.
- Handle non .dsc source files being re-referenced in dsc_files.
-
- * katie (in_override_p): strip out any 'non-US' prefix.
- (check_files): use utils.where_am_i() rather than hardcoding.
- (check_files): validate the component.
- (install): use utils.where_am_i() rather than hardcoding.
- (install): fix mail to go to actual recipient.
- (reject): likewise.
- (manual_reject): likewise.
- (acknowledge_new): likewise.
- (announce): likewise.
-
- * db_access.py (get_component_id): ignore case when searching for
- the component and don't crash if the component can't be found, but
- return -1.
- (get_location_id): handle -1 from get_component_id().
-
- * jenna (generate_src_list): don't bring 'suite' into our big
- multi-table-joining select as we already know the 'suite_id'.
- (generate_bin_list): likewise.
-
- * neve (main): don't quit if not on ftp-master.
- (process_packages): remove unused variable 'suite_codename'.
-
- * utils.py (move): actually move the file.
- (build_file_list): handle non-US prefixes in the section.
-
- * catherine (main): use which_conf_file().
- * charisma (main): likewise.
- * heidi (main): likewise.
- * jenna (main): likewise.
- * katie (main): likewise.
- * neve (main): likewise.
- * rhona (main): likewise.
-
- * utils.py (where_am_i): new routine; determines the archive as
- understood by other 'dak' programs.
- (which_conf_file): new routine; determines the conf file to read.
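Based on the descriptions above, the pair of helpers presumably look something like this; the environment-variable names (ARCHIVE, KATIE_CONF) are guesses for illustration, though /etc/katie/katie.conf appears elsewhere in these files as the default config path:

```python
import os
import socket

def where_am_i():
    # The archive name as other 'dak' programs understand it:
    # the unqualified hostname, unless overridden.
    return os.environ.get("ARCHIVE", socket.getfqdn().split(".")[0])

def which_conf_file():
    # Allow the config file to be overridden per invocation,
    # falling back to the site-wide default.
    return os.environ.get("KATIE_CONF", "/etc/katie/katie.conf")
```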
-
-2000-11-17 James Troup <james@nocrew.org>
-
- * katie (install): fix where .changes files for proposed-updates
- go.
-
-2000-11-04 James Troup <james@nocrew.org>
-
- * jenna (main): handle architecture properly if no
- -a/--architecture argument is given, i.e. reset architecture with
- the values for the suite for each suite.
-
- * Add apt_pkg.init() to the start of all scripts as it's now
- required by python-apt.
-
-2000-10-29 James Troup <james@nocrew.org>
-
- * jenna (generate_bin_list): take an additional argument 'type'
- and use it in the SELECT.
- (main): if processing component 'main', process udebs and debs.
-
- * neve (process_packages): set up 'type' in 'binaries' (by
- assuming .deb).
-
- * katie (re_isadeb): accept ".udeb" or ".deb" as a file ending.
- (check_files): set up files[file]["dbtype"].
- (install): use files[file]["dbtype"] to set up the 'type' field in
- the 'binaries' table.
-
- * init_pool.sql: add a 'type' field to the 'binaries' table to
- distinguish between ".udeb" and ".deb" files.
-
- * utils.py (move): scrap basename() usage; use a "dir_p(dest) ?
- dest : dirname(dest)" type check instead.
-
- * katie (check_dsc): handle the case of an .orig.tar.gz not found
- in the pool without crashing. Also handle the case of being asked
- to look for something other than an .orig.tar.gz in the pool.
-
-2000-10-26 James Troup <james@nocrew.org>
-
- * katie (install): fix filenames put into files table during
- poolification of legacy source.
-
- * utils.py (move): work around a bug in os.path.basename() which
- cunningly returns '' if there is a trailing slash on the path
- passed to it.
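The behaviour being worked around is easy to demonstrate; os.path.basename() really does return an empty string for a path with a trailing slash:

```python
import os.path

# basename() splits at the last '/', so a trailing slash leaves an
# empty final component.
print(os.path.basename("/org/ftp.debian.org/queue/"))  # -> ''

# Stripping the trailing slash first gives the expected answer.
path = "/org/ftp.debian.org/queue/".rstrip("/")
print(os.path.basename(path))  # -> 'queue'
```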
-
- * katie (check_dsc): Remove more cruft. If we find the
- .orig.tar.gz in the pool and it's in a legacy (or legacy-mixed)
- location, make a note of that so we can fix things in install().
- (install): as above. Move any old source out of legacy locations
- so that 'apt-get source' will work.
- (process_it): reset the flag that indicates to install that the
- source needs to be moved.
-
- * cron.daily: more. Nowhere near complete yet though.
-
- * katie (install): don't run os.makedirs, a) utils.move() does
- this now, b) we weren't removing the user's umask and were
- creating dirs with SNAFU permissions.
- (check_dsc): rewrite the .orig.tar.gz handling to take into
- account, err, package pools. i.e. look anywhere in the pool
- rather than faffing around with two simple paths.
-
- * neve (process_sources): add the .dsc to dsc_files too.
-
-2000-10-25 James Troup <james@nocrew.org>
-
- * neve (process_sources): don't duplicate .orig.tar.gz's.
-
-2000-10-23 James Troup <james@nocrew.org>
-
- * utils.py (re_extract_src_version): moved here.
-
- * neve: move re_extract_src_version to utils.
- (process_packages): reflect change.
-
- * katie (install): reflect change.
-
-2000-10-19 James Troup <james@nocrew.org>
-
- * jenna (generate_src_list): handle locations with null
- components.
- (generate_bin_list): likewise.
-
+++ /dev/null
-This file maps each file available in the Debian GNU/Linux system to
-the package from which it originates. It includes packages from the
-DIST distribution for the ARCH architecture.
-
-You can use this list to determine which package contains a specific
-file, or whether or not a specific file is available. The list is
-updated weekly, each architecture on a different day.
-
-When a file is contained in more than one package, all packages are
-listed. When a directory is contained in more than one package, only
-the first is listed.
-
-The best way to search quickly for a file is with the Unix `grep'
-utility, as in `grep <regular expression> CONTENTS':
-
- $ grep nose Contents
- etc/nosendfile net/sendfile
- usr/X11R6/bin/noseguy x11/xscreensaver
- usr/X11R6/man/man1/noseguy.1x.gz x11/xscreensaver
- usr/doc/examples/ucbmpeg/mpeg_encode/nosearch.param graphics/ucbmpeg
- usr/lib/cfengine/bin/noseyparker admin/cfengine
-
-This list contains files in all packages, even though not all of the
-packages are installed on an actual system at once. If you want to
-find out which packages on an installed Debian system provide a
-particular file, you can use `dpkg --search <filename>':
-
- $ dpkg --search /usr/bin/dselect
- dpkg: /usr/bin/dselect
-
-
-FILE LOCATION
+++ /dev/null
-#!/usr/bin/make -f
-
-CXXFLAGS = -I/usr/include/postgresql/ -I/usr/include/postgresql/server/ -fPIC -Wall
-CFLAGS = -fPIC -Wall
-LDFLAGS = -fPIC
-LIBS = -lapt-pkg
-
-C++ = g++
-
-all: sql-aptvc.so
-
-sql-aptvc.o: sql-aptvc.cpp
-sql-aptvc.so: sql-aptvc.o
- $(C++) $(LDFLAGS) -shared -o $@ $< $(LIBS)
-clean:
- rm -f sql-aptvc.so sql-aptvc.o
-
+++ /dev/null
-The katie software is based in large part on 'dinstall' by Guy Maor.
-The original 'katie' script was pretty much a line by line
-reimplementation of the perl 'dinstall' in python.
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-[Alphabetical Order]
-
-Adam Heath <doogie@debian.org>
-Anthony Towns <ajt@debian.org>
-Antti-Juhani Kaijanaho <ajk@debian.org>
-Ben Collins <bcollins@debian.org>
-Brendan O'Dea <bod@debian.org>
-Daniel Jacobowitz <dan@debian.org>
-Daniel Silverstone <dsilvers@debian.org>
-Drake Diedrich <dld@debian.org>
-Guy Maor <maor@debian.org>
-Jason Gunthorpe <jgg@debian.org>
-Joey Hess <joeyh@debian.org>
-Mark Brown <broonie@debian.org>
-Martin Michlmayr <tbm@debian.org>
-Michael Beattie <mjb@debian.org>
-Randall Donald <rdonald@debian.org>
-Ryan Murray <rmurray@debian.org>
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-Special thanks go to Jason and AJ; without their patient help, none of
-this would have been possible.
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+++ /dev/null
- TODO
- ====
-
-[NB: I use this as a thought record/scribble, not everything on here
- makes sense and/or is actually ever going to get done, so IIWY I
- wouldn't use it as gospel for the future of katie or as a TODO
- list for random hacking.]
-
-================================================================================
-
-Others
-------
-
- o cindy should remove the src-only override when a binary+source override
- exists
-
- o reject on > or < in a version constraint
-
-23:07 < aba> elmo: and, how about enhancing rene to spot half-dropped
- binaries on one arch (i.e. package used to build A and B, but B is
- no longer built on some archs)?
-
- o tabnanny the source
-
- o drop map-unreleased
-
- o check email only portions of addresses match too, iff the names
- don't, helps with the "James Troup <james@nocrew.org>"
- vs. "<james@nocrew.org>" case.
-
- o ensure .dsc section/prio match .changes section/prio
-
- o rhona's kind of crap when asked to remove a lot of files (e.g. 2k
- or so).
-
- o we don't handle the case where an identical orig.tar.gz is
- mentioned in the .changes, but not in unchecked; but should we
- care?
-
- o madison could do better sanity checking for -g/-G (e.g. not more
- than one suite, etc.)
-
- o use python2.2-tarfile (once it's in stable?) to check orig.tar.gz
- timestamps too.
-
- o need to decide on whether we're trying for most errors at once.. if
- so (probably) then make sure code doesn't assume variables exist and
- either way do something about checking error code of check_dsc and
- later functions so we skip later checks if they're bailing.
-
- o the .katie stuff is fundamentally braindamaged, it's not versioned
- so there's no way to change the format, yay me. need to fix.
- probably by putting a version var as the first thing and checking
- that.. auto-upgrade at least from original format would be good.
- might also be a good idea to put everything in one big dict after
- that?
-
- o [?, wishlist, distant future] RFC2047-ing should be extended to
- all headers of mails sent out.
-
- o reject sparc64 binaries in a non '*64*' package.
-
- o katie.py(source_exists): a) we take arguments as parameters that
- we could figure out for ourselves (we're part of the Katie class
- after all), b) we have this 3rd argument which defaults to "any"
- but could in fact be dropped since no one uses it like that.
-
- o jennifer: doesn't handle bin-only NMUs of stuff still in NEW,
- BYHAND or ACCEPTED (but not the pool) - not a big deal, upload can
- be retried once the source is in the archive, but still.
-
- o security global mail overrides should special case buildd stuff so
- that buildds get ACCEPTED mails (or maybe amber (?)), that way
- upload-security doesn't grow boundlessly.
-
- o amber should upload sourceful packages first, otherwise with big
- packages (e.g. X) and esp. when source is !i386, half the arches
- can be uploaded without source, get copied into queue/unaccepted
- and promptly rejected.
-
- o rene's NVIU check doesn't catch cases where source package changed
- name, should check binaries too. [debian-devel@l.d.o, 2004-02-03]
-
- o cnf[melanie::logfile] is misnamed...
-
-<aj> i'd be kinda inclined to go with insisting the .changes file take
- the form ---- BEGIN PGP MESSAGE --- <non -- BEGIN/END lines> --
- BEGIN PGP SIG -- END PGP MESSAGE -- with no lines before or after,
- and rejecting .changes that didn't match that
-
- o rene should check for source packages not building any binaries
-
- o heidi should have a diff mode that accepts diff output!
-
- o halle doesn't deal with melanie'd packages, partial replacements
- etc. and more.
-
- o lauren, the tramp, blindly deletes with no check that the delete
- failed which it might well given we only look for package/version,
- not package/version _in p-u_. duh.
-
- o melanie should remove obsolete changes when removing from p-u, or
- at least warn. or halle should handle it.
-
- o need a testsuite _badly_
-
- o lisa should have a Bitch-Then-Accept option
-
- o jennifer crashes if run as a user in -n mode when orig.tar.gz is
- in queue/new...
-
-<elmo_home> [<random>maybe I should reject debian packages with a non-Debian origin or bugs field</>]
-<Kamion> [<random>agreed; dunno what origin does but non-Debian bugs fields would be bad]
-
- o rhona should make use of select..except select, temporary tables
- etc. rather than looping and calling SQL every time so we can do
- suite removal sanely (see potato-removal document)
-
- o melanie will happily include packages in the Cc list that aren't
- being removed...
-
- o melanie doesn't remove udebs when removing the source they build from
-
- o check_dsc_against_db's "delete an entry from files while you're
- not looking" habit is Evil and Bad.
-
- o lisa allows you to edit the section and change the component, but
- really shouldn't.
-
- o melanie needs to, when not sending bug close mails, promote Cc: to
- To: and send the mail anyways.
-
- o the lockfile (Archive_Maintenance_In_Progress) should probably be in a conf file
-
- o madison should cross-check the b.source field and if it's not null
- and s.name linked from it != the source given in
- -S/--source-and-binary ignore.
-
- o lauren sucks; she should a) only spam d-i for sourceful
- rejections, b) sort stuff so she rejects sourceful stuff first. the
- non-sourceful should probably get a form mail, c) automate the
- non-sourceful stuff (see b).
-
- o jennifer should do q-d stuff for faster AA [ryan]
-
- o split the morgue into source and binary so binaries can be purged first!
-
- o per-architecture priorities for things like different arch'es
- gcc's, silly BSD libftw, palo, etc.
-
- o use postgres 7.2's built-in stat features to figure out how indices are used etc.
-
- o neve shouldn't be using location, she should run down suites instead
-
- o halle needs to know about udebs
-
- o by default hamstring katie's mail sending so that she won't send
- anything until someone edits a script; she's been used far too
- much to send spam atm :(
-
- o $ftpdir/indices isn't created by rose because it's not in katie.conf
-
- o sanity check depends/recommends/suggests too? in fact for any
- empty field?
-
-[minor] kelly's copychanges, copykatie handling sucks, the per-suite
- thing is static for all packages, so work out in advance dummy.
-
-[madison] # filenames ?
-[madison] # maintainer, component, install date (source only?), fingerprint?
-
- o UrgencyLog stuff should minimize its bombing out (?)
- o Log stuff should open the log file
-
- o helena should footnote the actual notes, and also * the versions
- with notes so we can see new versions since being noted...
-
- o helena should have alternative sorting options, including reverse
- and with or without differentiation.
-
- o julia should sync debadmin and ftpmaster (?)
-
- o <drow> Can't read file.:
- /org/security.debian.org/queue/accepted/accepted/apache-perl_1.3.9-14.1-1.21.20000309-1_sparc.katie.
- You assume that the filenames are relative to accepted/, might want
- to doc or fix that.
-
- o <neuro> the orig was in NEW, the changes that caused it to be NEW
- were pulled out in -2, and we end up with no orig in the archive
- :(
-
- o SecurityQueueBuild doesn't handle the case of foo_3.3woody1
- with a new .orig.tar.gz followed by a foo_3.3potato1 with the same
- .orig.tar.gz; jennifer sees it and copes, but the AA code doesn't
- and can't really easily know so the potato AA dir is left with no
- .orig.tar.gz copy. doh.
-
- o orig.tar.gz in accepted not handled properly (?)
-
- o amber doesn't include .orig.tar.gz but it should
-
- o permissions (paranoia, group write, etc.) configurability and overhaul
-
- o remember duplicate copyrights in lisa and skip them, per package
-
- o <M>ove option for lisa byhand processing
-
- o rene could do with overrides
-
- o db_access.get_location_id should handle the lack of archive_id properly
-
- o the whole versioncmp thing should be documented
-
- o lisa doesn't do the right thing with -2 and -1 uploads, as you can
- end up with the .orig.tar.gz not in the pool
-
- o lisa exits if you check twice (aj)
-
- o lisa doesn't trap signals from fernanda properly
-
- o queued and/or perl on sparc stable sucks - reimplement it.
-
- o aj's bin nmu changes
-
- o Lisa:
- * priority >> optional
- * arch != {any,all}
- * build-depends wrong (via andrea)
- * suid
- * conflicts
- * notification/stats to admin daily
- o trap fernanda exiting
- o distinguish binary only versus others (neuro)
-
- o cache changes parsed from ordering (careful tho: would be caching
- .changes from world writable incoming, not holding)
-
- o katie doesn't recognise bin-only NMUs correctly in terms of telling
- who their source is; source-must-exist does, but the info is not
- propagated down.
-
- o Fix BTS vs. katie sync issues by queueing (via BSMTP) BTS mail so
- that it can be released on demand (e.g. ETRN to exim).
-
- o maintainers file needs overrides
-
- [ change override.maintainer to override.maintainer-from +
- override.maintainer-to and have them reference the maintainers
- table. Then fix charisma to use them and write some scripting
- to handle the Santiago situation. ]
-
- o Validate Depends (et al.) [it should match \(\s*(<<|<|<=|=|>=|>|>>)\s*<VERSIONREGEXP>\)]
-
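The version-constraint validation item above can be sketched with Python's re module (the VERSIONREGEXP stand-in here is deliberately permissive; Debian policy's real version grammar is stricter):

```python
import re

# Permissive stand-in for <VERSIONREGEXP>: digit first, then version chars.
_VERSION = r"[0-9][A-Za-z0-9.+:~-]*"
# Longest relations first so '<<' is not half-matched as '<'.
_CONSTRAINT = re.compile(
    r"^\(\s*(<<|<=|>=|>>|=|<|>)\s*(" + _VERSION + r")\s*\)$")

def check_constraint(c):
    # Return the (relation, version) pair, or None if malformed.
    # The bare '<' and '>' forms are legal syntax but deprecated (see
    # the 'reject on > or <' item earlier in this list), so callers
    # may want to reject those relations separately.
    m = _CONSTRAINT.match(c)
    return m and (m.group(1), m.group(2))
```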
- o Clean up DONE; archive to tar file every 2 weeks, update tar tvzf INDEX file.
-
- o testing-updates suite: if binary-only and version << version in
- unstable and source-ver ~= source-ver in testing; then map
- unstable -> testing-updates ?
-
- o hooks or configurability for debian specific checks (e.g. check_urgency, auto-building support)
-
- o morgue needs auto-cleaning (?)
-
- o saffron: two modes, all included, separate
- o saffron: add non-US
- o saffron: add ability to control components, architectures, archives, suites
- o saffron: add key to expand header
-
-================================================================================
-
-queue/approved
---------------
-
- o What to do with multi-suite uploads? Presumably hold in unapproved
- and warn? Or what? Can't accept just for unstable or reject just
- from stable.
-
- o Whenever we check for anything in accepted we also need to check in
- unapproved.
-
- o non-sourceful uploads should go straight through if they have
- source in accepted or the archive.
-
- o security uploads on auric should be pre-approved.
-
-================================================================================
-
-Less Urgent
------------
-
- o change utils.copy to try rename() first
-
- o [hard, long term] unchecked -> accepted should go into the db, not
- a suite, but similar. this would allow katie to get even faster,
- make madison more useful, decomplexify specialacceptedautobuild
- and generally be more sane. may even be helpful to have e.g. new
- in the DB, so that we avoid corner cases like the .orig.tar.gz
- disappearing 'cos the package has been entirely removed but was
- still on stayofexecution when it entered new.
-
- o Logging [mostly done] (todo: rhona (hard), .. ?)
-
- o jennifer: the tar extractor class doesn't need to be redone for each package
-
- o reverse of source-must-exist; i.e. binary-for-source-must-not-exist
- o REJECT reminders in shania.
- o fernanda should check for conflicts and warn about them vis-a-vis priority [rmurray]
- o store a list of removed/files versions; also compare against them.
- [but be careful about scalability]
-
- o fernanda: print_copyright should be a lot more intelligent
- @ handle copyright.gz
- @ handle copyright.ja and copyright
- @ handle (detect at least) symlinks to another package's doc directory
- @ handle and/or fall back on source files (?)
-
- o To incorporate from utils:
- @ unreject
-
- o auto-purge out-of-date stuff from non-free/contrib so that testing and stuff works
- o doogie's binary -> source index
- o jt's web stuff, matt's changelog stuff (overlap)
-
- o [Hard] Need to merge non-non-US and non-US DBs.
-
- o experimental needs to auto clean (relative to unstable) [partial: rene warns about this]
-
- o Do a checkpc(1)-a-like which sanitizes a config file.
- o fix parse_changes()/build_file_list() to sanity check filenames
- o safety check and/or rename debs so they match what they should be
-
- o Improve andrea.
- o Need to optimize all the queries by using EXPLAIN and building some INDEXes.
- [postgresql 7.2 will help here]
- o Need to enclose all the setting SQL stuff in transactions (mostly done).
- o Need to finish alyson (a way to sync katie.conf and the DB)
- o Need the ability to rebuild all other tables from dists _or_ pools (in the event of disaster) (?)
- o Make the --help and --version options do stuff for all scripts
-
- o charisma can't handle whitespace-only lines (for the moment, this is a feature)
-
- o generic way of saying isabinary and isadsc. (?)
-
- o s/distribution/suite/g
-
- o cron.weekly:
- @ weekly postins to d-c (?)
- @ backup of report (?)
- @ backup of changes.tgz (?)
-
- o --help doesn't work without /etc/katie/katie.conf (or similar) at
- least existing.
-
- o rename andrea (clashes with existing andrea)...
-
- * Harder:
-
- o interrupting of stracing jennifer causes exception errors from apt_inst calls
- o dependency checking (esp. stable) (partially done)
- o override checks sucks; it needs to track changes made by the
- maintainer and pass them onto ftpmaster instead of warning the
- maintainer.
- o need to do proper rfc822 escaping of from lines (as opposed to s/\.//g)
- o Revisit linking of binary->source in install() in katie.
- o Fix component handling in overrides (aj)
- o Fix lack of entries in source overrides (aj)
- o direport misreports things as section 'devel' (? we don't use direport)
- o vrfy check of every Maintainer+Changed-By address; valid for 3 months.
- o binary-all should be done on a per-source, per-architecture package
- basis to avoid, e.g. the perl-modules problem.
- o a source-missing-diff check: if the version has a - in it, and it
- is sourceful, it needs orig and diff, e.g. if someone uploads
- esound_0.2.22-6, and it is sourceful, and there is no diff ->
- REJECT (version has a dash, therefore not debian native.)
- o check linking of .tar.gz's to .dsc's.. see proftpd 1.2.1 as an example
- o archive needs md5sum'ed regularly, but takes too long to do all
- in one go; make progressive or weekly.
- o katie/jenna/rhona/whatever needs to clear out .changes
- files from p-u when removing stuff superseded by newer versions.
- [but for now we have halle]
- o test sig checking stuff in test/ (stupid thing is not modularized due to global abuse)
- o when encountering suspicous things (e.g. file tainting) do something more drastic
-
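The source-missing-diff check above boils down to a test on the version: a dash after any epoch means the package is not Debian-native, so a sourceful upload must carry both an orig and a diff. A sketch (the file list is simplified to plain names; the real check works on the parsed .changes):

```python
def missing_diff(version, files):
    # Strip any epoch; a '-' in what remains means the package is
    # not Debian-native, so sourceful uploads need orig + diff.
    upstream = version.split(":", 1)[-1]
    if "-" not in upstream:
        return False  # native package: a single .tar.gz is fine
    has_orig = any(f.endswith(".orig.tar.gz") for f in files)
    has_diff = any(f.endswith(".diff.gz") for f in files)
    return not (has_orig and has_diff)
```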
- * Easy:
-
- o suite mapping and component mapping are parsed per changes file,
- they should probably be stored in a dictionary created at startup.
- o don't stat/md5sum files you have entries for in the DB, moron
- boy (Katie.check_source_blah_blah)
- o promote changes["changes"] to mandatory in katie.py(dump_vars)
- after a month or so (or until all .katie files in the queue
- contain it).
- o melanie should behave better with -a and without -b; see
- gcc-defaults removal for an example.
- o Reject on misconfigured kernel-package uploads
- o utils.extract_component_from_section: main/utils -> main/utils, main rather than utils, main
- o Fix jennifer to warn if run when not in incoming or p-u
- o katie should validate multi-suite uploads; only possible valid one
- is "stable unstable"
- o cron.daily* should change umask (aj sucks)
- o Rene doesn't look at debian-installer but should.
- o Rene needs to check for binary-less source packages.
- o Rene could accept a suite argument (?)
- o byhand stuff should send notification
- o catherine should update db; move files, not the other way around [neuro]
- o melanie should update the stable changelog [joey]
- o update tagdb.dia
-
- * Bizarre/uncertain:
-
- o drop rather dubious currval stuff (?)
- o rationalize os.path.join() usage
- o Rene also doesn't seem to warn about missing binary packages (??)
- o logging: hostname + pid ?
- o ANAIS should be done in katie (?)
- o Add an 'add' ability to melanie (? separate prog maybe)
- o Replicate old dinstall report stuff (? needed ?)
- o Handle the case of 1:1.1 which would overwrite 1.1 (?)
- o maybe drop -r/--regex in madison, make it the default and
- implement -e/--exact (a la joey's "elmo")
- o dsc files are not checked for existence/perms (only an issue if
- they're in the .dsc, but not the .changes.. possible?)
-
- * Cleanups & misc:
-
- o db_access' get_files needs to use exceptions not this None, > 0, < 0 return val BS (?)
- o The untouchable flag doesn't stop new packages being added to ``untouchable'' suites
-
-================================================================================
-
-Packaging
----------
-
- o Fix stuff to look in sensible places for libs and config file in debian package (?)
-
-================================================================================
-
- --help manpage
------------------------------------------------------------------------------
-alyson X
-amber X
-andrea X
-ashley X
-catherine X X
-charisma X X
-cindy X X
-claire X
-denise X
-fernanda X
-halle X
-heidi X X
-helena X
-jenna X
-jennifer X
-jeri X
-julia X X
-kelly X X
-lisa X X
-madison X X
-melanie X X
-natalie X X
-neve X
-rene X
-rose X
-rhona X X
-saffron X
-shania X
-tea X
-ziyi X
-
-
-================================================================================
-
-Random useful-at-some-point SQL
--------------------------------
-
-UPDATE files SET last_used = '1980-01-01'
- FROM binaries WHERE binaries.architecture = <x>
- AND binaries.file = files.id;
-
-DELETE FROM bin_associations
- WHERE EXISTS (SELECT id FROM binaries
- WHERE architecture = <x>
- AND id = bin_associations.bin);
-
-================================================================================
+++ /dev/null
--- Fix up after population of the database...
-
--- First of all readd the constraints (takes ~1:30 on auric)
-
-ALTER TABLE files ADD CONSTRAINT files_location FOREIGN KEY (location) REFERENCES location(id) MATCH FULL;
-
-ALTER TABLE source ADD CONSTRAINT source_maintainer FOREIGN KEY (maintainer) REFERENCES maintainer(id) MATCH FULL;
-ALTER TABLE source ADD CONSTRAINT source_file FOREIGN KEY (file) REFERENCES files(id) MATCH FULL;
-ALTER TABLE source ADD CONSTRAINT source_sig_fpr FOREIGN KEY (sig_fpr) REFERENCES fingerprint(id) MATCH FULL;
-
-ALTER TABLE dsc_files ADD CONSTRAINT dsc_files_source FOREIGN KEY (source) REFERENCES source(id) MATCH FULL;
-ALTER TABLE dsc_files ADD CONSTRAINT dsc_files_file FOREIGN KEY (file) REFERENCES files(id) MATCH FULL;
-
-ALTER TABLE binaries ADD CONSTRAINT binaries_maintainer FOREIGN KEY (maintainer) REFERENCES maintainer(id) MATCH FULL;
-ALTER TABLE binaries ADD CONSTRAINT binaries_source FOREIGN KEY (source) REFERENCES source(id) MATCH FULL;
-ALTER TABLE binaries ADD CONSTRAINT binaries_architecture FOREIGN KEY (architecture) REFERENCES architecture(id) MATCH FULL;
-ALTER TABLE binaries ADD CONSTRAINT binaries_file FOREIGN KEY (file) REFERENCES files(id) MATCH FULL;
-ALTER TABLE binaries ADD CONSTRAINT binaries_sig_fpr FOREIGN KEY (sig_fpr) REFERENCES fingerprint(id) MATCH FULL;
-
-ALTER TABLE suite_architectures ADD CONSTRAINT suite_architectures_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
-ALTER TABLE suite_architectures ADD CONSTRAINT suite_architectures_architecture FOREIGN KEY (architecture) REFERENCES architecture(id) MATCH FULL;
-
-ALTER TABLE bin_associations ADD CONSTRAINT bin_associations_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
-ALTER TABLE bin_associations ADD CONSTRAINT bin_associations_bin FOREIGN KEY (bin) REFERENCES binaries(id) MATCH FULL;
-
-ALTER TABLE src_associations ADD CONSTRAINT src_associations_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
-ALTER TABLE src_associations ADD CONSTRAINT src_associations_source FOREIGN KEY (source) REFERENCES source(id) MATCH FULL;
-
-ALTER TABLE override ADD CONSTRAINT override_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
-ALTER TABLE override ADD CONSTRAINT override_component FOREIGN KEY (component) REFERENCES component(id) MATCH FULL;
-ALTER TABLE override ADD CONSTRAINT override_priority FOREIGN KEY (priority) REFERENCES priority(id) MATCH FULL;
-ALTER TABLE override ADD CONSTRAINT override_section FOREIGN KEY (section) REFERENCES section(id) MATCH FULL;
-ALTER TABLE override ADD CONSTRAINT override_type FOREIGN KEY (type) REFERENCES override_type(id) MATCH FULL;
-
-ALTER TABLE queue_build ADD CONSTRAINT queue_build_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
-ALTER TABLE queue_build ADD CONSTRAINT queue_build_queue FOREIGN KEY (queue) REFERENCES queue(id) MATCH FULL;
-
--- Then correct all the id SERIAL PRIMARY KEY columns...
-
-CREATE FUNCTION files_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM files'
- LANGUAGE 'sql';
-CREATE FUNCTION source_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM source'
- LANGUAGE 'sql';
-CREATE FUNCTION src_associations_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM src_associations'
- LANGUAGE 'sql';
-CREATE FUNCTION dsc_files_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM dsc_files'
- LANGUAGE 'sql';
-CREATE FUNCTION binaries_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM binaries'
- LANGUAGE 'sql';
-CREATE FUNCTION bin_associations_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM bin_associations'
- LANGUAGE 'sql';
-CREATE FUNCTION section_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM section'
- LANGUAGE 'sql';
-CREATE FUNCTION priority_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM priority'
- LANGUAGE 'sql';
-CREATE FUNCTION override_type_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM override_type'
- LANGUAGE 'sql';
-CREATE FUNCTION maintainer_id_max() RETURNS INT4
- AS 'SELECT max(id) FROM maintainer'
- LANGUAGE 'sql';
-
-SELECT setval('files_id_seq', files_id_max());
-SELECT setval('source_id_seq', source_id_max());
-SELECT setval('src_associations_id_seq', src_associations_id_max());
-SELECT setval('dsc_files_id_seq', dsc_files_id_max());
-SELECT setval('binaries_id_seq', binaries_id_max());
-SELECT setval('bin_associations_id_seq', bin_associations_id_max());
-SELECT setval('section_id_seq', section_id_max());
-SELECT setval('priority_id_seq', priority_id_max());
-SELECT setval('override_type_id_seq', override_type_id_max());
-SELECT setval('maintainer_id_seq', maintainer_id_max());
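The ten helper functions above exist only to feed setval(). Assuming a PostgreSQL version that accepts a scalar subquery as a function argument (7.1 and later), each pair collapses to a single statement with no helper function, e.g.:

```sql
-- One statement per sequence, no *_id_max() function needed:
SELECT setval('files_id_seq', (SELECT max(id) FROM files));
SELECT setval('source_id_seq', (SELECT max(id) FROM source));
```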
-
--- Vacuum the tables for efficiency
-
-VACUUM archive;
-VACUUM component;
-VACUUM architecture;
-VACUUM maintainer;
-VACUUM location;
-VACUUM files;
-VACUUM source;
-VACUUM dsc_files;
-VACUUM binaries;
-VACUUM suite;
-VACUUM suite_architectures;
-VACUUM bin_associations;
-VACUUM src_associations;
-VACUUM section;
-VACUUM priority;
-VACUUM override_type;
-VACUUM override;
-
--- FIXME: has to be a better way to do this
-GRANT ALL ON architecture, architecture_id_seq, archive,
- archive_id_seq, bin_associations, bin_associations_id_seq, binaries,
- binaries_id_seq, component, component_id_seq, dsc_files,
- dsc_files_id_seq, files, files_id_seq, fingerprint,
- fingerprint_id_seq, location, location_id_seq, maintainer,
- maintainer_id_seq, override, override_type, override_type_id_seq,
- priority, priority_id_seq, section, section_id_seq, source,
- source_id_seq, src_associations, src_associations_id_seq, suite,
- suite_architectures, suite_id_seq, queue_build, uid,
- uid_id_seq TO GROUP ftpmaster;
-
--- Read only access to user 'nobody'
-GRANT SELECT ON architecture, architecture_id_seq, archive,
- archive_id_seq, bin_associations, bin_associations_id_seq, binaries,
- binaries_id_seq, component, component_id_seq, dsc_files,
- dsc_files_id_seq, files, files_id_seq, fingerprint,
- fingerprint_id_seq, location, location_id_seq, maintainer,
- maintainer_id_seq, override, override_type, override_type_id_seq,
- priority, priority_id_seq, section, section_id_seq, source,
- source_id_seq, src_associations, src_associations_id_seq, suite,
- suite_architectures, suite_id_seq, queue_build, uid,
- uid_id_seq TO PUBLIC;
+++ /dev/null
-#!/usr/bin/env python
-
-# Microscopic modification and query tool for overrides in projectb
-# Copyright (C) 2004 Daniel Silverstone <dsilvers@digital-scurf.org>
-# $Id: alicia,v 1.6 2004-11-27 17:58:13 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-
-################################################################################
-## So line up your soldiers and she'll shoot them all down
-## Coz Alisha Rules The World
-## You think you found a dream, then it shatters and it seems,
-## That Alisha Rules The World
-################################################################################
-
-import pg, sys;
-import utils, db_access;
-import apt_pkg, logging;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-################################################################################
-
-# Shamelessly stolen from melanie. Should probably end up in utils.py
-def game_over():
- answer = utils.our_raw_input("Continue (y/N)? ").lower();
- if answer != "y":
- print "Aborted."
- sys.exit(1);
-
-
-def usage (exit_code=0):
- print """Usage: alicia [OPTIONS] package [section] [priority]
-Make microchanges or microqueries of the overrides
-
- -h, --help show this help and exit
- -d, --done=BUG# send priority/section change as closure to bug#
- -n, --no-action don't do anything
- -s, --suite specify the suite to use
-"""
- sys.exit(exit_code)
-
-def main ():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
-
- Arguments = [('h',"help","Alicia::Options::Help"),
- ('d',"done","Alicia::Options::Done", "HasArg"),
- ('n',"no-action","Alicia::Options::No-Action"),
- ('s',"suite","Alicia::Options::Suite", "HasArg"),
- ];
- for i in ["help", "no-action"]:
- if not Cnf.has_key("Alicia::Options::%s" % (i)):
- Cnf["Alicia::Options::%s" % (i)] = "";
- if not Cnf.has_key("Alicia::Options::Suite"):
- Cnf["Alicia::Options::Suite"] = "unstable";
-
- arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Alicia::Options")
-
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- if not arguments:
- utils.fubar("package name is a required argument.");
-
- package = arguments.pop(0);
- suite = Options["Suite"]
- if arguments and len(arguments) > 2:
- utils.fubar("Too many arguments");
-
- if arguments and len(arguments) == 1:
- # Determine if the argument is a priority or a section...
- arg = arguments.pop();
- q = projectB.query("""
- SELECT ( SELECT COUNT(*) FROM section WHERE section=%s ) AS secs,
- ( SELECT COUNT(*) FROM priority WHERE priority=%s ) AS prios
- """ % ( pg._quote(arg,"str"), pg._quote(arg,"str")));
- r = q.getresult();
- if r[0][0] == 1:
- arguments = (arg,".");
- elif r[0][1] == 1:
- arguments = (".",arg);
- else:
- utils.fubar("%s is not a valid section or priority" % (arg));
-
-
- # Retrieve current section/priority...
- q = projectB.query("""
- SELECT priority.priority AS prio, section.section AS sect
- FROM override, priority, section, suite
- WHERE override.priority = priority.id
- AND override.section = section.id
- AND override.package = %s
- AND override.suite = suite.id
- AND suite.suite_name = %s
- """ % (pg._quote(package,"str"), pg._quote(suite,"str")));
-
- if q.ntuples() == 0:
- utils.fubar("Unable to find package %s" % (package));
- if q.ntuples() > 1:
- utils.fubar("%s is ambiguous. Matches %d packages" % (package,q.ntuples()));
-
- r = q.getresult();
- oldsection = r[0][1];
- oldpriority = r[0][0];
-
- if not arguments:
- print "%s is in section '%s' at priority '%s'" % (
- package,oldsection,oldpriority);
- sys.exit(0);
-
- # At this point, we have a new section and priority... check they're valid...
- newsection, newpriority = arguments;
-
- if newsection == ".":
- newsection = oldsection;
- if newpriority == ".":
- newpriority = oldpriority;
-
- q = projectB.query("SELECT id FROM section WHERE section=%s" % (
- pg._quote(newsection,"str")));
-
- if q.ntuples() == 0:
- utils.fubar("Supplied section %s is invalid" % (newsection));
- newsecid = q.getresult()[0][0];
-
- q = projectB.query("SELECT id FROM priority WHERE priority=%s" % (
- pg._quote(newpriority,"str")));
-
- if q.ntuples() == 0:
- utils.fubar("Supplied priority %s is invalid" % (newpriority));
- newprioid = q.getresult()[0][0];
-
- if newpriority == oldpriority and newsection == oldsection:
- print "I: Doing nothing"
- sys.exit(0);
-
- # If we're in no-action mode
- if Options["No-Action"]:
- if newpriority != oldpriority:
- print "I: Would change priority from %s to %s" % (oldpriority,newpriority);
- if newsection != oldsection:
- print "I: Would change section from %s to %s" % (oldsection,newsection);
- if Options.has_key("Done"):
- print "I: Would also close bug(s): %s" % (Options["Done"]);
-
- sys.exit(0);
-
- if newpriority != oldpriority:
- print "I: Will change priority from %s to %s" % (oldpriority,newpriority);
- if newsection != oldsection:
- print "I: Will change section from %s to %s" % (oldsection,newsection);
-
- if not Options.has_key("Done"):
- pass;
- #utils.warn("No bugs to close have been specified. No one will know you have done this.");
- else:
- print "I: Will close bug(s): %s" % (Options["Done"]);
-
- game_over();
-
- Logger = logging.Logger(Cnf, "alicia");
-
- projectB.query("BEGIN WORK");
- # We're in "do it" mode, we have something to do... do it
- if newpriority != oldpriority:
- q = projectB.query("""
- UPDATE override
- SET priority=%d
- WHERE package=%s
- AND suite = (SELECT id FROM suite WHERE suite_name=%s)""" % (
- newprioid,
- pg._quote(package,"str"),
- pg._quote(suite,"str") ));
- Logger.log(["changed priority",package,oldpriority,newpriority]);
-
- if newsection != oldsection:
- q = projectB.query("""
- UPDATE override
- SET section=%d
- WHERE package=%s
- AND suite = (SELECT id FROM suite WHERE suite_name=%s)""" % (
- newsecid,
- pg._quote(package,"str"),
- pg._quote(suite,"str") ));
- Logger.log(["changed section",package,oldsection,newsection]);
- projectB.query("COMMIT WORK");
-
- if Options.has_key("Done"):
- Subst = {};
- Subst["__ALICIA_ADDRESS__"] = Cnf["Alicia::MyEmailAddress"];
- Subst["__BUG_SERVER__"] = Cnf["Dinstall::BugServer"];
- bcc = [];
- if Cnf.Find("Dinstall::Bcc") != "":
- bcc.append(Cnf["Dinstall::Bcc"]);
- if Cnf.Find("Alicia::Bcc") != "":
- bcc.append(Cnf["Alicia::Bcc"]);
- if bcc:
- Subst["__BCC__"] = "Bcc: " + ", ".join(bcc);
- else:
- Subst["__BCC__"] = "X-Filler: 42";
- Subst["__CC__"] = "X-Katie: alicia $Revision: 1.6 $";
- Subst["__ADMIN_ADDRESS__"] = Cnf["Dinstall::MyAdminAddress"];
- Subst["__DISTRO__"] = Cnf["Dinstall::MyDistribution"];
- Subst["__WHOAMI__"] = utils.whoami();
-
- summary = "Concerning package %s...\n" % (package);
- summary += "Operating on the %s suite\n" % (suite);
- if newpriority != oldpriority:
- summary += "Changed priority from %s to %s\n" % (oldpriority,newpriority);
- if newsection != oldsection:
- summary += "Changed section from %s to %s\n" % (oldsection,newsection);
- Subst["__SUMMARY__"] = summary;
-
- for bug in utils.split_args(Options["Done"]):
- Subst["__BUG_NUMBER__"] = bug;
- mail_message = utils.TemplateSubst(
- Subst,Cnf["Dir::Templates"]+"/alicia.bug-close");
- utils.send_mail(mail_message);
- Logger.log(["closed bug",bug]);
-
- Logger.close();
-
- print "Done";
-
-#################################################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-#!/usr/bin/env python
-
-# Sync the ISC configuration file and the SQL database
-# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
-# $Id: alyson,v 1.12 2003-09-07 13:52:07 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import pg, sys;
-import utils, db_access;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: alyson
-Initializes some tables in the projectB database based on the config file.
-
- -h, --help show this help and exit."""
- sys.exit(exit_code)
-
-################################################################################
-
-def get (c, i):
- if c.has_key(i):
- return "'%s'" % (c[i]);
- else:
- return "NULL";
-
-def main ():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
- Arguments = [('h',"help","Alyson::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Alyson::Options::%s" % (i)):
- Cnf["Alyson::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Alyson::Options")
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- # archive
-
- projectB.query("BEGIN WORK");
- projectB.query("DELETE FROM archive");
- for name in Cnf.SubTree("Archive").List():
- Archive = Cnf.SubTree("Archive::%s" % (name));
- origin_server = get(Archive, "OriginServer");
- description = get(Archive, "Description");
- projectB.query("INSERT INTO archive (name, origin_server, description) VALUES ('%s', %s, %s)" % (name, origin_server, description));
- projectB.query("COMMIT WORK");
-
- # architecture
-
- projectB.query("BEGIN WORK");
- projectB.query("DELETE FROM architecture");
- for arch in Cnf.SubTree("Architectures").List():
- description = Cnf["Architectures::%s" % (arch)];
- projectB.query("INSERT INTO architecture (arch_string, description) VALUES ('%s', '%s')" % (arch, description));
- projectB.query("COMMIT WORK");
-
- # component
-
- projectB.query("BEGIN WORK");
- projectB.query("DELETE FROM component");
- for name in Cnf.SubTree("Component").List():
- Component = Cnf.SubTree("Component::%s" % (name));
- description = get(Component, "Description");
- if Component.get("MeetsDFSG").lower() == "true":
- meets_dfsg = "true";
- else:
- meets_dfsg = "false";
- projectB.query("INSERT INTO component (name, description, meets_dfsg) VALUES ('%s', %s, %s)" % (name, description, meets_dfsg));
- projectB.query("COMMIT WORK");
-
- # location
-
- projectB.query("BEGIN WORK");
- projectB.query("DELETE FROM location");
- for location in Cnf.SubTree("Location").List():
- Location = Cnf.SubTree("Location::%s" % (location));
- archive_id = db_access.get_archive_id(Location["Archive"]);
- type = Location.get("type");
- if type == "legacy-mixed":
- projectB.query("INSERT INTO location (path, archive, type) VALUES ('%s', %d, '%s')" % (location, archive_id, Location["type"]));
- elif type == "legacy" or type == "pool":
- for component in Cnf.SubTree("Component").List():
- component_id = db_access.get_component_id(component);
- projectB.query("INSERT INTO location (path, component, archive, type) VALUES ('%s', %d, %d, '%s')" %
- (location, component_id, archive_id, type));
- else:
- utils.fubar("type '%s' not recognised in location %s." % (type, location));
- projectB.query("COMMIT WORK");
-
- # suite
-
- projectB.query("BEGIN WORK");
- projectB.query("DELETE FROM suite")
- for suite in Cnf.SubTree("Suite").List():
- Suite = Cnf.SubTree("Suite::%s" %(suite))
- version = get(Suite, "Version");
- origin = get(Suite, "Origin");
- description = get(Suite, "Description");
- projectB.query("INSERT INTO suite (suite_name, version, origin, description) VALUES ('%s', %s, %s, %s)"
- % (suite.lower(), version, origin, description));
- for architecture in Cnf.ValueList("Suite::%s::Architectures" % (suite)):
- architecture_id = db_access.get_architecture_id (architecture);
- if architecture_id < 0:
- utils.fubar("architecture '%s' not found in architecture table for suite %s." % (architecture, suite));
- projectB.query("INSERT INTO suite_architectures (suite, architecture) VALUES (currval('suite_id_seq'), %d)" % (architecture_id));
- projectB.query("COMMIT WORK");
-
- # override_type
-
- projectB.query("BEGIN WORK");
- projectB.query("DELETE FROM override_type");
- for type in Cnf.ValueList("OverrideType"):
- projectB.query("INSERT INTO override_type (type) VALUES ('%s')" % (type));
- projectB.query("COMMIT WORK");
-
- # priority
-
- projectB.query("BEGIN WORK");
- projectB.query("DELETE FROM priority");
- for priority in Cnf.SubTree("Priority").List():
- projectB.query("INSERT INTO priority (priority, level) VALUES ('%s', %s)" % (priority, Cnf["Priority::%s" % (priority)]));
- projectB.query("COMMIT WORK");
-
- # section
-
- projectB.query("BEGIN WORK");
- projectB.query("DELETE FROM section");
- for component in Cnf.SubTree("Component").List():
- if Cnf["Natalie::ComponentPosition"] == "prefix":
- suffix = "";
- if component != "main":
- prefix = component + '/';
- else:
- prefix = "";
- else:
- prefix = "";
- component = component.replace("non-US/", "");
- if component != "main":
- suffix = '/' + component;
- else:
- suffix = "";
- for section in Cnf.ValueList("Section"):
- projectB.query("INSERT INTO section (section) VALUES ('%s%s%s')" % (prefix, section, suffix));
- projectB.query("COMMIT WORK");
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Wrapper for Debian Security team
-# Copyright (C) 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: amber,v 1.11 2005-11-26 07:52:06 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
-# USA
-
-################################################################################
-
-# <aj> neuro: <usual question>?
-# <neuro> aj: PPG: the movie! july 3!
-# <aj> _PHWOAR_!!!!!
-# <aj> (you think you can distract me, and you're right)
-# <aj> urls?!
-# <aj> promo videos?!
-# <aj> where, where!?
-
-################################################################################
-
-import commands, os, pwd, re, sys, time;
-import apt_pkg;
-import katie, utils;
-
-################################################################################
-
-Cnf = None;
-Options = None;
-Katie = None;
-
-re_taint_free = re.compile(r"^['/;\-\+\.\s\w]+$");
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: amber ADV_NUMBER CHANGES_FILE[...]
-Install CHANGES_FILE(s) as security advisory ADV_NUMBER
-
- -h, --help show this help and exit
- -n, --no-action don't do anything
-
-"""
- sys.exit(exit_code)
-
-################################################################################
-
-def do_upload(changes_files):
- file_list = "";
- suites = {};
- component_mapping = {};
- for component in Cnf.SubTree("Amber::ComponentMappings").List():
- component_mapping[component] = Cnf["Amber::ComponentMappings::%s" % (component)];
- uploads = {}; # uploads[uri] = file_list;
- changesfiles = {}; # changesfiles[uri] = file_list;
- package_list = {}; # package_list[source_name][version];
- changes_files.sort(utils.changes_compare);
- for changes_file in changes_files:
- changes_file = utils.validate_changes_file_arg(changes_file);
- # Reset variables
- components = {};
- upload_uris = {};
- file_list = [];
- Katie.init_vars();
- # Parse the .katie file for the .changes file
- Katie.pkg.changes_file = changes_file;
- Katie.update_vars();
- files = Katie.pkg.files;
- changes = Katie.pkg.changes;
- dsc = Katie.pkg.dsc;
- # We have the changes; if it's amd64, skip it so it isn't uploaded to ftp-master
- if changes["architecture"].has_key("amd64"):
- print "Not uploading amd64 part to ftp-master\n";
- continue
- if changes["distribution"].has_key("oldstable-security"):
- print "Not uploading oldstable-security changes to ftp-master\n";
- continue
- # Build the file list for this .changes file
- for file in files.keys():
- poolname = os.path.join(Cnf["Dir::Root"], Cnf["Dir::PoolRoot"],
- utils.poolify(changes["source"], files[file]["component"]),
- file);
- file_list.append(poolname);
- orig_component = files[file].get("original component", files[file]["component"]);
- components[orig_component] = "";
- # Determine the upload uri for this .changes file
- for component in components.keys():
- upload_uri = component_mapping.get(component);
- if upload_uri:
- upload_uris[upload_uri] = "";
- num_upload_uris = len(upload_uris.keys());
- if num_upload_uris == 0:
- utils.fubar("%s: No valid upload URI found from components (%s)."
- % (changes_file, ", ".join(components.keys())));
- elif num_upload_uris > 1:
- utils.fubar("%s: more than one upload URI (%s) from components (%s)."
- % (changes_file, ", ".join(upload_uris.keys()),
- ", ".join(components.keys())));
- upload_uri = upload_uris.keys()[0];
- # Update the file list for the upload uri
- if not uploads.has_key(upload_uri):
- uploads[upload_uri] = [];
- uploads[upload_uri].extend(file_list);
- # Update the changes list for the upload uri
- if not changesfiles.has_key(upload_uri):
- changesfiles[upload_uri] = [];
- changesfiles[upload_uri].append(changes_file);
- # Remember the suites and source name/version
- for suite in changes["distribution"].keys():
- suites[suite] = "";
- # Remember the source name and version
- if changes["architecture"].has_key("source") and \
- changes["distribution"].has_key("testing"):
- if not package_list.has_key(dsc["source"]):
- package_list[dsc["source"]] = {};
- package_list[dsc["source"]][dsc["version"]] = "";
-
- if not Options["No-Action"]:
- answer = yes_no("Upload files to main archive (Y/n)?");
- if answer != "y":
- return;
-
- for uri in uploads.keys():
- uploads[uri].extend(changesfiles[uri]);
- (host, path) = uri.split(":");
- file_list = " ".join(uploads[uri]);
- print "Uploading files to %s..." % (host);
- spawn("lftp -c 'open %s; cd %s; put %s'" % (host, path, file_list));
-
- if not Options["No-Action"]:
- filename = "%s/testing-processed" % (Cnf["Dir::Log"]);
- file = utils.open_file(filename, 'a');
- for source in package_list.keys():
- for version in package_list[source].keys():
- file.write(" ".join([source, version])+'\n');
- file.close();
-
-######################################################################
-# This function was originally written by aj and NIHishly merged into
-# amber by me.
-
-def make_advisory(advisory_nr, changes_files):
- adv_packages = [];
- updated_pkgs = {}; # updated_pkgs[distro][arch][file] = {path,md5,size}
-
- for arg in changes_files:
- arg = utils.validate_changes_file_arg(arg);
- Katie.pkg.changes_file = arg;
- Katie.init_vars();
- Katie.update_vars();
-
- src = Katie.pkg.changes["source"];
- if src not in adv_packages:
- adv_packages += [src];
-
- suites = Katie.pkg.changes["distribution"].keys();
- for suite in suites:
- if not updated_pkgs.has_key(suite):
- updated_pkgs[suite] = {};
-
- files = Katie.pkg.files;
- for file in files.keys():
- arch = files[file]["architecture"];
- md5 = files[file]["md5sum"];
- size = files[file]["size"];
- poolname = Cnf["Dir::PoolRoot"] + \
- utils.poolify(src, files[file]["component"]);
- if arch == "source" and file.endswith(".dsc"):
- dscpoolname = poolname;
- for suite in suites:
- if not updated_pkgs[suite].has_key(arch):
- updated_pkgs[suite][arch] = {}
- updated_pkgs[suite][arch][file] = {
- "md5": md5, "size": size,
- "poolname": poolname };
-
- dsc_files = Katie.pkg.dsc_files;
- for file in dsc_files.keys():
- arch = "source"
- if not dsc_files[file].has_key("files id"):
- continue;
-
- # otherwise, it's already in the pool and needs to be
- # listed specially
- md5 = dsc_files[file]["md5sum"];
- size = dsc_files[file]["size"];
- for suite in suites:
- if not updated_pkgs[suite].has_key(arch):
- updated_pkgs[suite][arch] = {};
- updated_pkgs[suite][arch][file] = {
- "md5": md5, "size": size,
- "poolname": dscpoolname };
-
- if os.environ.has_key("SUDO_UID"):
- whoami = long(os.environ["SUDO_UID"]);
- else:
- whoami = os.getuid();
- whoamifull = pwd.getpwuid(whoami);
- username = whoamifull[4].split(",")[0];
-
- Subst = {
- "__ADVISORY__": advisory_nr,
- "__WHOAMI__": username,
- "__DATE__": time.strftime("%B %d, %Y", time.gmtime(time.time())),
- "__PACKAGE__": ", ".join(adv_packages),
- "__KATIE_ADDRESS__": Cnf["Dinstall::MyEmailAddress"]
- };
-
- if Cnf.has_key("Dinstall::Bcc"):
- Subst["__BCC__"] = "Bcc: %s" % (Cnf["Dinstall::Bcc"]);
-
- adv = "";
- archive = Cnf["Archive::%s::PrimaryMirror" % (utils.where_am_i())];
- for suite in updated_pkgs.keys():
- suite_header = "%s %s (%s)" % (Cnf["Dinstall::MyDistribution"],
- Cnf["Suite::%s::Version" % suite], suite);
- adv += "%s\n%s\n\n" % (suite_header, "-"*len(suite_header));
-
- arches = Cnf.ValueList("Suite::%s::Architectures" % suite);
- if "source" in arches:
- arches.remove("source");
- if "all" in arches:
- arches.remove("all");
- arches.sort();
-
- adv += " %s was released for %s.\n\n" % (
- suite.capitalize(), utils.join_with_commas_and(arches));
-
- for a in ["source", "all"] + arches:
- if not updated_pkgs[suite].has_key(a):
- continue;
-
- if a == "source":
- adv += " Source archives:\n\n";
- elif a == "all":
- adv += " Architecture independent packages:\n\n";
- else:
- adv += " %s architecture (%s)\n\n" % (a,
- Cnf["Architectures::%s" % a]);
-
- for file in updated_pkgs[suite][a].keys():
- adv += " http://%s/%s%s\n" % (
- archive, updated_pkgs[suite][a][file]["poolname"], file);
- adv += " Size/MD5 checksum: %8s %s\n" % (
- updated_pkgs[suite][a][file]["size"],
- updated_pkgs[suite][a][file]["md5"]);
- adv += "\n";
- adv = adv.rstrip();
-
- Subst["__ADVISORY_TEXT__"] = adv;
-
- adv = utils.TemplateSubst(Subst, Cnf["Dir::Templates"]+"/amber.advisory");
- if not Options["No-Action"]:
- utils.send_mail (adv);
- else:
- print "[<Would send template advisory mail>]";
-
-######################################################################
-
-def init():
- global Cnf, Katie, Options;
-
- apt_pkg.init();
- Cnf = utils.get_conf();
-
- Arguments = [('h', "help", "Amber::Options::Help"),
- ('n', "no-action", "Amber::Options::No-Action")];
-
- for i in [ "help", "no-action" ]:
- Cnf["Amber::Options::%s" % (i)] = "";
-
- arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Amber::Options")
- Katie = katie.Katie(Cnf);
-
- if Options["Help"]:
- usage(0);
-
- if not arguments:
- usage(1);
-
- advisory_number = arguments[0];
- changes_files = arguments[1:];
- if advisory_number.endswith(".changes"):
- utils.warn("first argument must be the advisory number.");
- usage(1);
- for file in changes_files:
- file = utils.validate_changes_file_arg(file);
- return (advisory_number, changes_files);
-
-######################################################################
-
-def yes_no(prompt):
- while 1:
- answer = utils.our_raw_input(prompt+" ").lower();
- if answer == "y" or answer == "n":
- break;
- else:
- print "Invalid answer; please try again.";
- return answer;
-
-######################################################################
-
-def spawn(command):
- if not re_taint_free.match(command):
- utils.fubar("Invalid character in \"%s\"." % (command));
-
- if Options["No-Action"]:
- print "[%s]" % (command);
- else:
- (result, output) = commands.getstatusoutput(command);
- if (result != 0):
- utils.fubar("Invocation of '%s' failed:\n%s\n" % (command, output), result);
-
-######################################################################
-
-
-def main():
- (advisory_number, changes_files) = init();
-
- if not Options["No-Action"]:
- print "About to install the following files: "
- for file in changes_files:
- print " %s" % (file);
- answer = yes_no("Continue (Y/n)?");
- if answer == "n":
- sys.exit(0);
-
- os.chdir(Cnf["Dir::Queue::Accepted"]);
- print "Installing packages into the archive...";
- spawn("%s/kelly -pa %s" % (Cnf["Dir::Katie"], " ".join(changes_files)));
- os.chdir(Cnf["Dir::Katie"]);
- print "Updating file lists for apt-ftparchive...";
- spawn("./jenna");
- print "Updating Packages and Sources files...";
- spawn("apt-ftparchive generate %s" % (utils.which_apt_conf_file()));
- print "Updating Release files...";
- spawn("./ziyi");
-
- if not Options["No-Action"]:
- os.chdir(Cnf["Dir::Queue::Done"]);
- else:
- os.chdir(Cnf["Dir::Queue::Accepted"]);
- print "Generating template advisory...";
- make_advisory(advisory_number, changes_files);
-
- # Trigger security mirrors
- spawn("sudo -u archvsync /home/archvsync/signal_security");
-
- do_upload(changes_files);
-
-################################################################################
-
-if __name__ == '__main__':
- main();
-
-################################################################################
+++ /dev/null
-#!/usr/bin/env python
-
-# Check for fixable discrepancies between stable and unstable
-# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
-# $Id: andrea,v 1.10 2003-09-07 13:52:13 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-
-################################################################################
-
-import pg, sys;
-import utils, db_access;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: andrea
-Looks for fixable discrepancies between stable and unstable.
-
- -h, --help show this help and exit."""
- sys.exit(exit_code)
-
-################################################################################
-
-def main ():
- global Cnf, projectB;
-
- Cnf = utils.get_conf();
- Arguments = [('h',"help","Andrea::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Andrea::Options::%s" % (i)):
- Cnf["Andrea::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Andrea::Options")
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- src_suite = "stable";
- dst_suite = "unstable";
-
- src_suite_id = db_access.get_suite_id(src_suite);
- dst_suite_id = db_access.get_suite_id(dst_suite);
- arch_all_id = db_access.get_architecture_id("all");
- dsc_type_id = db_access.get_override_type_id("dsc");
-
- for arch in Cnf.ValueList("Suite::%s::Architectures" % (src_suite)):
- if arch == "source":
- continue;
-
- # Arch: all doesn't work; consider packages which go from
- # arch: all to arch: any, e.g. debconf... needs more checks
- # and thought later.
-
- if arch == "all":
- continue;
- arch_id = db_access.get_architecture_id(arch);
- q = projectB.query("""
-SELECT b_src.package, b_src.version, a.arch_string
- FROM binaries b_src, bin_associations ba, override o, architecture a
- WHERE ba.bin = b_src.id AND ba.suite = %s AND b_src.architecture = %s
- AND a.id = b_src.architecture AND o.package = b_src.package
- AND o.suite = %s AND o.type != %s AND NOT EXISTS
- (SELECT 1 FROM bin_associations ba2, binaries b_dst
- WHERE ba2.bin = b_dst.id AND b_dst.package = b_src.package
- AND (b_dst.architecture = %s OR b_dst.architecture = %s)
- AND ba2.suite = %s AND EXISTS
- (SELECT 1 FROM bin_associations ba3, binaries b2
- WHERE ba3.bin = b2.id AND ba3.suite = %s AND b2.package = b_dst.package))
-ORDER BY b_src.package;"""
- % (src_suite_id, arch_id, dst_suite_id, dsc_type_id, arch_id, arch_all_id, dst_suite_id, dst_suite_id));
- for i in q.getresult():
- print " ".join(i);
-
-#######################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-Dir
-{
- ArchiveDir "/org/ftp.debian.org/ftp/";
- OverrideDir "/org/ftp.debian.org/scripts/override/";
- CacheDir "/org/ftp.debian.org/database/";
-};
-
-Default
-{
- Packages::Compress ". gzip";
- Sources::Compress "gzip";
- Contents::Compress "gzip";
- DeLinkLimit 0;
- MaxContentsChange 25000;
- FileMode 0664;
-};
-
-TreeDefault
-{
- Contents::Header "/org/ftp.debian.org/katie/Contents.top";
-};
-
-tree "dists/proposed-updates"
-{
- FileList "/org/ftp.debian.org/database/dists/proposed-updates_$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/ftp.debian.org/database/dists/proposed-updates_$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
- BinOverride "override.sarge.$(SECTION)";
- ExtraOverride "override.sarge.extra.$(SECTION)";
- SrcOverride "override.sarge.$(SECTION).src";
- Contents " ";
-};
-
-tree "dists/testing"
-{
- FakeDI "dists/unstable";
- FileList "/org/ftp.debian.org/database/dists/testing_$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/ftp.debian.org/database/dists/testing_$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
- BinOverride "override.etch.$(SECTION)";
- ExtraOverride "override.etch.extra.$(SECTION)";
- SrcOverride "override.etch.$(SECTION).src";
- Packages::Compress ". gzip bzip2";
- Sources::Compress "gzip bzip2";
-};
-
-tree "dists/testing-proposed-updates"
-{
- FileList "/org/ftp.debian.org/database/dists/testing-proposed-updates_$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/ftp.debian.org/database/dists/testing-proposed-updates_$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
- BinOverride "override.etch.$(SECTION)";
- ExtraOverride "override.etch.extra.$(SECTION)";
- SrcOverride "override.etch.$(SECTION).src";
- Contents " ";
-};
-
-tree "dists/unstable"
-{
- FileList "/org/ftp.debian.org/database/dists/unstable_$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/ftp.debian.org/database/dists/unstable_$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc source";
- BinOverride "override.sid.$(SECTION)";
- ExtraOverride "override.sid.extra.$(SECTION)";
- SrcOverride "override.sid.$(SECTION).src";
- Packages::Compress "gzip bzip2";
- Sources::Compress "gzip bzip2";
-};
-
-// debian-installer
-
-tree "dists/testing/main"
-{
- FileList "/org/ftp.debian.org/database/dists/testing_main_$(SECTION)_binary-$(ARCH).list";
- Sections "debian-installer";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc";
- BinOverride "override.etch.main.$(SECTION)";
- SrcOverride "override.etch.main.src";
- BinCacheDB "packages-debian-installer-$(ARCH).db";
- Packages::Extensions ".udeb";
- Contents "$(DIST)/../Contents-udeb";
-};
-
-tree "dists/testing-proposed-updates/main"
-{
- FileList "/org/ftp.debian.org/database/dists/testing-proposed-updates_main_$(SECTION)_binary-$(ARCH).list";
- Sections "debian-installer";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc";
- BinOverride "override.etch.main.$(SECTION)";
- SrcOverride "override.etch.main.src";
- BinCacheDB "packages-debian-installer-$(ARCH).db";
- Packages::Extensions ".udeb";
- Contents " ";
-};
-
-tree "dists/unstable/main"
-{
- FileList "/org/ftp.debian.org/database/dists/unstable_main_$(SECTION)_binary-$(ARCH).list";
- Sections "debian-installer";
- Architectures "alpha arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc";
- BinOverride "override.sid.main.$(SECTION)";
- SrcOverride "override.sid.main.src";
- BinCacheDB "packages-debian-installer-$(ARCH).db";
- Packages::Extensions ".udeb";
- Contents "$(DIST)/../Contents-udeb";
-};
-
-// Experimental
-
-tree "project/experimental"
-{
- FileList "/org/ftp.debian.org/database/dists/experimental_$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/ftp.debian.org/database/dists/experimental_$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc source";
- BinOverride "override.sid.$(SECTION)";
- SrcOverride "override.sid.$(SECTION).src";
- Contents " ";
-};
+++ /dev/null
-Dir
-{
- ArchiveDir "/org/non-us.debian.org/ftp/";
- OverrideDir "/org/non-us.debian.org/scripts/override/";
- CacheDir "/org/non-us.debian.org/database/";
-};
-
-Default
-{
- Packages::Compress ". gzip";
- Sources::Compress "gzip";
- Contents::Compress "gzip";
- DeLinkLimit 0;
- MaxContentsChange 6000;
- FileMode 0664;
-}
-
-TreeDefault
-{
- Contents::Header "/org/non-us.debian.org/katie/Contents.top";
-};
-
-tree "dists/proposed-updates/non-US"
-{
- FileList "/org/non-us.debian.org/database/dists/proposed-updates_non-US/$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/non-us.debian.org/database/dists/proposed-updates_non-US/$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
- BinOverride "override.woody.$(SECTION)";
- SrcOverride "override.woody.$(SECTION).src";
- Contents " ";
-};
-
-tree "dists/testing/non-US"
-{
- FileList "/org/non-us.debian.org/database/dists/testing_non-US/$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/non-us.debian.org/database/dists/testing_non-US/$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
- BinOverride "override.sarge.$(SECTION)";
- SrcOverride "override.sarge.$(SECTION).src";
-};
-
-tree "dists/testing-proposed-updates/non-US"
-{
- FileList "/org/non-us.debian.org/database/dists/testing-proposed-updates_non-US/$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/non-us.debian.org/database/dists/testing-proposed-updates_non-US/$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
- BinOverride "override.sarge.$(SECTION)";
- SrcOverride "override.sarge.$(SECTION).src";
- Contents " ";
-};
-
-tree "dists/unstable/non-US"
-{
- FileList "/org/non-us.debian.org/database/dists/unstable_non-US/$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/non-us.debian.org/database/dists/unstable_non-US/$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc source";
- BinOverride "override.sid.$(SECTION)";
- SrcOverride "override.sid.$(SECTION).src";
-};
+++ /dev/null
-Dir
-{
- ArchiveDir "/org/security.debian.org/ftp/";
- OverrideDir "/org/security.debian.org/override/";
- CacheDir "/org/security.debian.org/katie-database/";
-};
-
-Default
-{
- Packages::Compress ". gzip";
- Sources::Compress "gzip";
- DeLinkLimit 0;
- FileMode 0664;
-}
-
-tree "dists/oldstable/updates"
-{
- FileList "/org/security.debian.org/katie-database/dists/oldstable_updates/$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/security.debian.org/katie-database/dists/oldstable_updates/$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 mips mipsel m68k powerpc s390 sparc source";
- BinOverride "override.woody.$(SECTION)";
- ExtraOverride "override.woody.extra.$(SECTION)";
- SrcOverride "override.woody.$(SECTION).src";
- Contents " ";
-};
-
-tree "dists/stable/updates"
-{
- FileList "/org/security.debian.org/katie-database/dists/stable_updates/$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/security.debian.org/katie-database/dists/stable_updates/$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha amd64 arm hppa i386 ia64 mips mipsel m68k powerpc s390 sparc source";
- BinOverride "override.sarge.$(SECTION)";
- ExtraOverride "override.sarge.extra.$(SECTION)";
- SrcOverride "override.sarge.$(SECTION).src";
- Contents " ";
-};
-
-tree "dists/testing/updates"
-{
- FileList "/org/security.debian.org/katie-database/dists/testing_updates/$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/security.debian.org/katie-database/dists/testing_updates/$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 mips mipsel m68k powerpc s390 sparc source";
- BinOverride "override.etch.$(SECTION)";
- ExtraOverride "override.etch.extra.$(SECTION)";
- SrcOverride "override.etch.$(SECTION).src";
- Contents " ";
-};
+++ /dev/null
-Dir
-{
- ArchiveDir "/org/incoming.debian.org/buildd/";
- OverrideDir "/org/ftp.debian.org/scripts/override/";
- CacheDir "/org/ftp.debian.org/database/";
-};
-
-Default
-{
- Packages::Compress "gzip";
- Sources::Compress "gzip";
- DeLinkLimit 0;
- FileMode 0664;
-}
-
-bindirectory "incoming"
-{
- Packages "Packages";
- Contents " ";
-
- BinOverride "override.sid.all3";
- BinCacheDB "packages-accepted.db";
-
- FileList "/org/ftp.debian.org/database/dists/unstable_accepted.list";
-
- PathPrefix "";
- Packages::Extensions ".deb .udeb";
-};
-
-bindirectory "incoming/"
-{
- Sources "Sources";
- BinOverride "override.sid.all3";
- SrcOverride "override.sid.all3.src";
- SourceFileList "/org/ftp.debian.org/database/dists/unstable_accepted.list";
-};
-
+++ /dev/null
-Dir
-{
- ArchiveDir "/org/security.debian.org/buildd/";
- OverrideDir "/org/security.debian.org/override/";
- CacheDir "/org/security.debian.org/katie-database/";
-};
-
-Default
-{
- Packages::Compress ". gzip";
- Sources::Compress ". gzip";
- DeLinkLimit 0;
- FileMode 0664;
-}
-
-bindirectory "etch"
-{
- Packages "etch/Packages";
- Sources "etch/Sources";
- Contents " ";
-
- BinOverride "override.etch.all3";
- BinCacheDB "packages-accepted-etch.db";
- PathPrefix "";
- Packages::Extensions ".deb .udeb";
-};
-
-bindirectory "woody"
-{
- Packages "woody/Packages";
- Sources "woody/Sources";
- Contents " ";
-
- BinOverride "override.woody.all3";
- BinCacheDB "packages-accepted-woody.db";
- PathPrefix "";
- Packages::Extensions ".deb .udeb";
-};
-
-bindirectory "sarge"
-{
- Packages "sarge/Packages";
- Sources "sarge/Sources";
- Contents " ";
-
- BinOverride "override.sarge.all3";
- BinCacheDB "packages-accepted-sarge.db";
- PathPrefix "";
- Packages::Extensions ".deb .udeb";
-};
-
+++ /dev/null
-Dir
-{
- ArchiveDir "/org/ftp.debian.org/ftp/";
- OverrideDir "/org/ftp.debian.org/scripts/override/";
- CacheDir "/org/ftp.debian.org/database/";
-};
-
-Default
-{
- Packages::Compress ". gzip";
- Sources::Compress "gzip";
- Contents::Compress "gzip";
- DeLinkLimit 0;
- FileMode 0664;
-}
-
-TreeDefault
-{
- Contents::Header "/org/ftp.debian.org/katie/Contents.top";
-};
-
-tree "dists/stable"
-{
- FileList "/org/ftp.debian.org/database/dists/stable_$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/ftp.debian.org/database/dists/stable_$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
- BinOverride "override.sarge.$(SECTION)";
- ExtraOverride "override.sarge.extra.$(SECTION)";
- SrcOverride "override.sarge.$(SECTION).src";
-};
-
-// debian-installer
-
-tree "dists/stable/main"
-{
- FileList "/org/ftp.debian.org/database/dists/stable_main_$(SECTION)_binary-$(ARCH).list";
- Sections "debian-installer";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc";
- BinOverride "override.sarge.main.$(SECTION)";
- SrcOverride "override.sarge.main.src";
- BinCacheDB "packages-debian-installer-$(ARCH).db";
- Packages::Extensions ".udeb";
- Contents " ";
-};
+++ /dev/null
-Dir
-{
- ArchiveDir "/org/non-us.debian.org/ftp/";
- OverrideDir "/org/non-us.debian.org/scripts/override/";
- CacheDir "/org/non-us.debian.org/database/";
-};
-
-Default
-{
- Packages::Compress ". gzip";
- Sources::Compress "gzip";
- Contents::Compress "gzip";
- DeLinkLimit 0;
- FileMode 0664;
-}
-
-TreeDefault
-{
- Contents::Header "/org/non-us.debian.org/katie/Contents.top";
-};
-
-tree "dists/stable/non-US"
-{
- FileList "/org/non-us.debian.org/database/dists/stable_non-US/$(SECTION)_binary-$(ARCH).list";
- SourceFileList "/org/non-us.debian.org/database/dists/stable_non-US/$(SECTION)_source.list";
- Sections "main contrib non-free";
- Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
- BinOverride "override.woody.$(SECTION)";
- SrcOverride "override.woody.$(SECTION).src";
-};
+++ /dev/null
-#!/usr/bin/env python
-
-# Dump variables from a .katie file to stdout
-# Copyright (C) 2001, 2002, 2004 James Troup <james@nocrew.org>
-# $Id: ashley,v 1.11 2004-11-27 16:05:12 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <elmo> ooooooooooooooohhhhhhhhhhhhhhhhhhhhhhhhh dddddddddeeeeeeeaaaaaaaarrrrrrrrrrr
-# <elmo> iiiiiiiiiiiii tttttttttthhhhhhhhiiiiiiiiiiiinnnnnnnnnkkkkkkkkkkkkk iiiiiiiiiiiiii mmmmmmmmmmeeeeeeeesssssssssssssssseeeeeeeddd uuuupppppppppppp ttttttttthhhhhhhheeeeeeee xxxxxxxssssssseeeeeeeeettttttttttttt aaaaaaaarrrrrrrggggggsssssssss
-#
-# ['xset r rate 30 250' bad, mmkay]
-
-################################################################################
-
-import sys;
-import katie, utils;
-import apt_pkg;
-
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: ashley FILE...
-Dumps the info in .katie FILE(s).
-
- -h, --help show this help and exit."""
- sys.exit(exit_code)
-
-################################################################################
-
-def main():
- Cnf = utils.get_conf()
- Arguments = [('h',"help","Ashley::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Ashley::Options::%s" % (i)):
- Cnf["Ashley::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Ashley::Options")
- if Options["Help"]:
- usage();
-
- k = katie.Katie(Cnf);
- for arg in sys.argv[1:]:
- arg = utils.validate_changes_file_arg(arg,require_changes=-1);
- k.pkg.changes_file = arg;
- print "%s:" % (arg);
- k.init_vars();
- k.update_vars();
-
- changes = k.pkg.changes;
- print " Changes:";
- # Mandatory changes fields
- for i in [ "source", "version", "maintainer", "urgency", "changedby822",
- "changedby2047", "changedbyname", "maintainer822",
- "maintainer2047", "maintainername", "maintaineremail",
- "fingerprint", "changes" ]:
- print " %s: %s" % (i.capitalize(), changes[i]);
- del changes[i];
- # Mandatory changes lists
- for i in [ "distribution", "architecture", "closes" ]:
- print " %s: %s" % (i.capitalize(), " ".join(changes[i].keys()));
- del changes[i];
- # Optional changes fields
- for i in [ "changed-by", "filecontents", "format" ]:
- if changes.has_key(i):
- print " %s: %s" % (i.capitalize(), changes[i]);
- del changes[i];
- print;
- if changes:
- utils.warn("changes still has the following unrecognised keys: %s" % (changes.keys()));
-
- dsc = k.pkg.dsc;
- print " Dsc:";
- for i in [ "source", "version", "maintainer", "fingerprint", "uploaders",
- "bts changelog" ]:
- if dsc.has_key(i):
- print " %s: %s" % (i.capitalize(), dsc[i]);
- del dsc[i];
- print;
- if dsc:
- utils.warn("dsc still has the following unrecognised keys: %s" % (dsc.keys()));
-
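The sections of ashley all follow the same dump-and-delete pattern: print the known keys, remove them, and warn about whatever is left over. A standalone sketch of that pattern (the function name is illustrative, not part of ashley itself):

```python
# Sketch of ashley's dump-and-delete pattern: known keys are printed and
# removed from a copy, so anything remaining is unrecognised.
def dump_known(mapping, known_keys):
    leftover = dict(mapping)
    for key in known_keys:
        if key in leftover:
            print("  %s: %s" % (key.capitalize(), leftover.pop(key)))
    return leftover
```

Whatever `dump_known` returns would then be handed to `utils.warn`, as the script does for changes, dsc and files.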
- files = k.pkg.files;
- print " Files:"
- for file in files.keys():
- print " %s:" % (file);
- for i in [ "package", "version", "architecture", "type", "size",
- "md5sum", "component", "location id", "source package",
- "source version", "maintainer", "dbtype", "files id",
- "new", "section", "priority", "pool name" ]:
- if files[file].has_key(i):
- print " %s: %s" % (i.capitalize(), files[file][i]);
- del files[file][i];
- if files[file]:
- utils.warn("files[%s] still has the following unrecognised keys: %s" % (file, files[file].keys()));
- print;
-
- dsc_files = k.pkg.dsc_files;
- print " Dsc Files:";
- for file in dsc_files.keys():
- print " %s:" % (file);
- # Mandatory fields
- for i in [ "size", "md5sum" ]:
- print " %s: %s" % (i.capitalize(), dsc_files[file][i]);
- del dsc_files[file][i];
- # Optional fields
- for i in [ "files id" ]:
- if dsc_files[file].has_key(i):
- print " %s: %s" % (i.capitalize(), dsc_files[file][i]);
- del dsc_files[file][i];
- if dsc_files[file]:
- utils.warn("dsc_files[%s] still has the following unrecognised keys: %s" % (file, dsc_files[file].keys()));
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Prepare and maintain partial trees by architecture
-# Copyright (C) 2004 Daniel Silverstone <dsilvers@digital-scurf.org>
-# $Id: billie,v 1.4 2004-11-27 16:06:42 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-
-###############################################################################
-## <kinnison> So Martin, do you have a quote for me yet?
-## <tbm> Make something damned stupid up and attribute it to me, that's okay
-###############################################################################
-
-import pg, pwd, sys;
-import utils, db_access;
-import apt_pkg, logging;
-
-from stat import S_ISDIR, S_ISLNK, S_ISREG;
-import os;
-import cPickle;
-
-## Master path is the main repository
-#MASTER_PATH = "/org/ftp.debian.org/scratch/dsilvers/master";
-
-MASTER_PATH = "***Configure Billie::FTPPath Please***";
-TREE_ROOT = "***Configure Billie::TreeRootPath Please***";
-TREE_DB_ROOT = "***Configure Billie::TreeDatabasePath Please***";
-trees = []
-
-###############################################################################
-# A BillieTarget represents one partial tree: a set of archs, a path,
-# and whether or not the target includes source.
-##################
-
-class BillieTarget:
- def __init__(self, name, archs, source):
- self.name = name;
- self.root = "%s/%s" % (TREE_ROOT,name);
- self.archs = archs.split(",");
- self.source = source;
- self.dbpath = "%s/%s.db" % (TREE_DB_ROOT,name);
- self.db = BillieDB();
- if os.path.exists( self.dbpath ):
- self.db.load_from_file( self.dbpath );
-
- ## Save the db back to disk
- def save_db(self):
- self.db.save_to_file( self.dbpath );
-
- ## Returns 1 if the pool path matches one of this target's archs (or source)
- def poolish_match(self, path):
- for a in self.archs:
- if path.endswith( "_%s.deb" % (a) ):
- return 1;
- if path.endswith( "_%s.udeb" % (a) ):
- return 1;
- if self.source:
- if (path.endswith( ".tar.gz" ) or
- path.endswith( ".diff.gz" ) or
- path.endswith( ".dsc" )):
- return 1;
- return 0;
-
- ## Returns 0 if the path is not wanted for this target under dists/
- def distish_match(self,path):
- for a in self.archs:
- if path.endswith("/Contents-%s.gz" % (a)):
- return 1;
- if path.find("/binary-%s/" % (a)) != -1:
- return 1;
- if path.find("/installer-%s/" % (a)) != -1:
- return 1;
- if path.find("/source/") != -1:
- if self.source:
- return 1;
- else:
- return 0;
- if path.find("/Contents-") != -1:
- return 0;
- if path.find("/binary-") != -1:
- return 0;
- if path.find("/installer-") != -1:
- return 0;
- return 1;
-
-##############################################################################
-# The applicable function is a predicate: given a path and a target
-# object, it decides whether the path is wanted for that target.
-#
-# 'verbatim' is a list of files which are always copied; it should
-# eventually be loaded from a config file.
-##################
-
-verbatim = [
- ];
-
-verbprefix = [
- "/tools/",
- "/README",
- "/doc/"
- ];
-
-def applicable(path, target):
- if path.startswith("/pool/"):
- return target.poolish_match(path);
- if (path.startswith("/dists/") or
- path.startswith("/project/experimental/")):
- return target.distish_match(path);
- if path in verbatim:
- return 1;
- for prefix in verbprefix:
- if path.startswith(prefix):
- return 1;
- return 0;
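The pool-path matching above can be pictured with a trimmed-down stand-in that takes plain values instead of a BillieTarget (the names below are illustrative only, not part of billie):

```python
# Simplified stand-in for poolish_match/applicable: pool files are kept
# when their architecture suffix matches, or when they are source files
# and the target wants source.
def wanted_pool_file(path, archs, want_source=True):
    for a in archs:
        if path.endswith("_%s.deb" % a) or path.endswith("_%s.udeb" % a):
            return True
    return want_source and path.endswith((".dsc", ".diff.gz", ".tar.gz"))
```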
-
-
-##############################################################################
-# A BillieDir is a representation of a tree.
-# It distinguishes files dirs and links
-# Dirs are dicts of (name, BillieDir)
-# Files are dicts of (name, inode)
-# Links are dicts of (name, target)
-##############
-
-class BillieDir:
- def __init__(self):
- self.dirs = {};
- self.files = {};
- self.links = {};
-
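The three-dict layout described in the comment above can be pictured with plain dicts (the inode numbers and names below are made up):

```python
# Plain-dict picture of a BillieDir node: dirs map to child nodes,
# files map to inodes, links map to symlink targets.
node = {"dirs": {}, "files": {}, "links": {}}
node["files"]["README"] = 12345           # name -> inode
node["links"]["stable"] = "dists/sarge"   # name -> symlink target
node["dirs"]["pool"] = {"dirs": {}, "files": {"foo_1.0.dsc": 67890}, "links": {}}
```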
-##############################################################################
-# A BillieDB is a container for a BillieDir...
-##############
-
-class BillieDB:
- ## Initialise a BillieDB as containing nothing
- def __init__(self):
- self.root = BillieDir();
-
- def _internal_recurse(self, path):
- bdir = BillieDir();
- dl = os.listdir( path );
- dl.sort();
- dirs = [];
- for ln in dl:
- lnl = os.lstat( "%s/%s" % (path, ln) );
- if S_ISDIR(lnl[0]):
- dirs.append(ln);
- elif S_ISLNK(lnl[0]):
- bdir.links[ln] = os.readlink( "%s/%s" % (path, ln) );
- elif S_ISREG(lnl[0]):
- bdir.files[ln] = lnl[1];
- else:
- utils.fubar( "Confused by %s/%s -- not a dir, link or file" %
- ( path, ln ) );
- for d in dirs:
- bdir.dirs[d] = self._internal_recurse( "%s/%s" % (path,d) );
-
- return bdir;
-
- ## Recurse through a given path, building the tree accordingly
- def init_from_dir(self, dirp):
- self.root = self._internal_recurse( dirp );
-
- ## Load this BillieDB from file
- def load_from_file(self, fname):
- f = open(fname, "r");
- self.root = cPickle.load(f);
- f.close();
-
- ## Save this BillieDB to a file
- def save_to_file(self, fname):
- f = open(fname, "w");
- cPickle.dump( self.root, f, 1 );
- f.close();
-
-
-##############################################################################
-# Helper functions for the tree syncing...
-##################
-
-def _pth(a,b):
- return "%s/%s" % (a,b);
-
-def do_mkdir(targ,path):
- if not os.path.exists( _pth(targ.root, path) ):
- os.makedirs( _pth(targ.root, path) );
-
-def do_mkdir_f(targ,path):
- do_mkdir(targ, os.path.dirname(path));
-
-def do_link(targ,path):
- do_mkdir_f(targ,path);
- os.link( _pth(MASTER_PATH, path),
- _pth(targ.root, path));
-
-def do_symlink(targ,path,link):
- do_mkdir_f(targ,path);
- os.symlink( link, _pth(targ.root, path) );
-
-def do_unlink(targ,path):
- os.unlink( _pth(targ.root, path) );
-
-def do_unlink_dir(targ,path):
- os.system( "rm -Rf '%s'" % _pth(targ.root, path) );
-
-##############################################################################
-# Reconciling a target with the sourcedb
-################
-
-def _internal_reconcile( path, srcdir, targdir, targ ):
- # Remove any links in targdir which aren't in srcdir
- # Or which aren't applicable
- rm = []
- for k in targdir.links.keys():
- if applicable( _pth(path, k), targ ):
- if not srcdir.links.has_key(k):
- rm.append(k);
- else:
- rm.append(k);
- for k in rm:
- #print "-L-", _pth(path,k)
- do_unlink(targ, _pth(path,k))
- del targdir.links[k];
-
- # Remove any files in targdir which aren't in srcdir
- # Or which aren't applicable
- rm = []
- for k in targdir.files.keys():
- if applicable( _pth(path, k), targ ):
- if not srcdir.files.has_key(k):
- rm.append(k);
- else:
- rm.append(k);
- for k in rm:
- #print "-F-", _pth(path,k)
- do_unlink(targ, _pth(path,k))
- del targdir.files[k];
-
- # Remove any dirs in targdir which aren't in srcdir
- rm = []
- for k in targdir.dirs.keys():
- if not srcdir.dirs.has_key(k):
- rm.append(k);
- for k in rm:
- #print "-D-", _pth(path,k)
- do_unlink_dir(targ, _pth(path,k))
- del targdir.dirs[k];
-
- # Add/update files
- for k in srcdir.files.keys():
- if applicable( _pth(path,k), targ ):
- if not targdir.files.has_key(k):
- #print "+F+", _pth(path,k)
- do_link( targ, _pth(path,k) );
- targdir.files[k] = srcdir.files[k];
- else:
- if targdir.files[k] != srcdir.files[k]:
- #print "*F*", _pth(path,k);
- do_unlink( targ, _pth(path,k) );
- do_link( targ, _pth(path,k) );
- targdir.files[k] = srcdir.files[k];
-
- # Add/update links
- for k in srcdir.links.keys():
- if applicable( _pth(path,k), targ ):
- if not targdir.links.has_key(k):
- targdir.links[k] = srcdir.links[k];
- #print "+L+",_pth(path,k), "->", srcdir.links[k]
- do_symlink( targ, _pth(path,k), targdir.links[k] );
- else:
- if targdir.links[k] != srcdir.links[k]:
- do_unlink( targ, _pth(path,k) );
- targdir.links[k] = srcdir.links[k];
- #print "*L*", _pth(path,k), "to ->", srcdir.links[k]
- do_symlink( targ, _pth(path,k), targdir.links[k] );
-
- # Do dirs
- for k in srcdir.dirs.keys():
- if not targdir.dirs.has_key(k):
- targdir.dirs[k] = BillieDir();
- #print "+D+", _pth(path,k)
- _internal_reconcile( _pth(path,k), srcdir.dirs[k],
- targdir.dirs[k], targ );
-
-
-def reconcile_target_db( src, targ ):
- _internal_reconcile( "", src.root, targ.db.root, targ );
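The reconcile pass above boils down to a set comparison at each directory level: delete what the target has but the source lacks, then add or refresh what the source has. A minimal sketch on flat name-to-inode dicts (this ignores links, subdirectories and the applicable() filter):

```python
# Flat-dict sketch of the reconcile step: returns what was removed and
# what was added or refreshed, mutating the target in place.
def reconcile(src, targ):
    removed = [name for name in targ if name not in src]
    for name in removed:
        del targ[name]
    changed = []
    for name, inode in src.items():
        if targ.get(name) != inode:
            targ[name] = inode       # corresponds to do_link/do_unlink
            changed.append(name)
    return removed, changed
```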
-
-###############################################################################
-
-def load_config():
- global MASTER_PATH
- global TREE_ROOT
- global TREE_DB_ROOT
- global trees
-
- MASTER_PATH = Cnf["Billie::FTPPath"];
- TREE_ROOT = Cnf["Billie::TreeRootPath"];
- TREE_DB_ROOT = Cnf["Billie::TreeDatabasePath"];
-
- for a in Cnf.ValueList("Billie::BasicTrees"):
- trees.append( BillieTarget( a, "%s,all" % a, 1 ) )
-
- for n in Cnf.SubTree("Billie::CombinationTrees").List():
- archs = Cnf.ValueList("Billie::CombinationTrees::%s" % n)
- source = 0
- if "source" in archs:
- source = 1
- archs.remove("source")
- archs = ",".join(archs)
- trees.append( BillieTarget( n, archs, source ) );
-
-def do_list ():
- print "Master path",MASTER_PATH
- print "Trees at",TREE_ROOT
- print "DBs at",TREE_DB_ROOT
-
- for tree in trees:
- print tree.name,"contains",", ".join(tree.archs),
- if tree.source:
- print " [source]"
- else:
- print ""
-
-def do_help ():
- print """Usage: billie [OPTIONS]
-Generate hardlink trees of certain architectures
-
- -h, --help show this help and exit
- -l, --list list the configuration and exit
-"""
-
-
-def main ():
- global Cnf
-
- Cnf = utils.get_conf()
-
- Arguments = [('h',"help","Billie::Options::Help"),
- ('l',"list","Billie::Options::List"),
- ];
-
- arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Cnf["Billie::Options::cake"] = "";
- Options = Cnf.SubTree("Billie::Options")
-
- print "Loading configuration..."
- load_config();
- print "Loaded."
-
- if Options.has_key("Help"):
- do_help();
- return;
- if Options.has_key("List"):
- do_list();
- return;
-
-
- src = BillieDB()
- print "Scanning", MASTER_PATH
- src.init_from_dir(MASTER_PATH)
- print "Scanned"
-
- for tree in trees:
- print "Reconciling tree:",tree.name
- reconcile_target_db( src, tree );
- print "Saving updated DB...",
- tree.save_db();
- print "Done"
-
-##############################################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-#!/usr/bin/env python
-
-# Poolify (move packages from "legacy" type locations to pool locations)
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: catherine,v 1.19 2004-03-11 00:20:51 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# "Welcome to where time stands still,
-# No one leaves and no one will."
-# - Sanitarium - Metallica / Master of Puppets
-
-################################################################################
-
-import os, pg, re, stat, sys;
-import utils, db_access;
-import apt_pkg, apt_inst;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-re_isadeb = re.compile (r"(.+?)_(.+?)(_(.+))?\.u?deb$");
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: catherine [OPTIONS]
-Migrate packages from legacy locations into the pool.
-
- -l, --limit=AMOUNT only migrate AMOUNT Kb of packages
- -n, --no-action don't do anything
- -v, --verbose explain what is being done
- -h, --help show this help and exit"""
-
- sys.exit(exit_code)
-
-################################################################################
-
-# Q is a python-postgresql query result set and must have the
-# following four columns:
-# o files.id (as 'files_id')
-# o files.filename
-# o location.path
-# o component.name (as 'component')
-#
-# limit is a value in bytes or -1 for no limit (use with care!)
-# verbose and no_action are booleans
-
-def poolize (q, limit, verbose, no_action):
- poolized_size = 0L;
- poolized_count = 0;
-
- # Walk the query results, poolizing until the limit is reached
- qd = q.dictresult();
- for qid in qd:
- legacy_filename = qid["path"]+qid["filename"];
- size = os.stat(legacy_filename)[stat.ST_SIZE];
- if (poolized_size + size) > limit and limit >= 0:
- utils.warn("Hit %s limit." % (utils.size_type(limit)));
- break;
- poolized_size += size;
- poolized_count += 1;
- base_filename = os.path.basename(legacy_filename);
- destination_filename = base_filename;
- # Work out the source package name
- if re_isadeb.match(base_filename):
- control = apt_pkg.ParseSection(apt_inst.debExtractControl(utils.open_file(legacy_filename)))
- package = control.Find("Package", "");
- source = control.Find("Source", package);
- if source.find("(") != -1:
- m = utils.re_extract_src_version.match(source)
- source = m.group(1)
- # If it's a binary, we need to also rename the file to include the architecture
- version = control.Find("Version", "");
- architecture = control.Find("Architecture", "");
- if package == "" or version == "" or architecture == "":
- utils.fubar("%s: couldn't determine required information to rename .deb file." % (legacy_filename));
- version = utils.re_no_epoch.sub('', version);
- destination_filename = "%s_%s_%s.deb" % (package, version, architecture);
- else:
- m = utils.re_issource.match(base_filename)
- if m:
- source = m.group(1);
- else:
- utils.fubar("expansion of source filename '%s' failed." % (legacy_filename));
- # Work out the component name
- component = qid["component"];
- if component == "":
- q = projectB.query("SELECT DISTINCT(c.name) FROM override o, component c WHERE o.package = '%s' AND o.component = c.id;" % (source));
- ql = q.getresult();
- if not ql:
- utils.fubar("No override match for '%s' so I can't work out the component." % (source));
- if len(ql) > 1:
- utils.fubar("Multiple override matches for '%s' so I can't work out the component." % (source));
- component = ql[0][0];
- # Work out the new location
- q = projectB.query("SELECT l.id FROM location l, component c WHERE c.name = '%s' AND c.id = l.component AND l.type = 'pool';" % (component));
- ql = q.getresult();
- if len(ql) != 1:
- utils.fubar("couldn't determine location ID for '%s'. [query returned %d matches, not 1 as expected]" % (source, len(ql)));
- location_id = ql[0][0];
- # First move the files to the new location
- pool_location = utils.poolify (source, component);
- pool_filename = pool_location + destination_filename;
- destination = Cnf["Dir::Pool"] + pool_location + destination_filename;
- if os.path.exists(destination):
- utils.fubar("'%s' already exists in the pool; serious FUBARity." % (legacy_filename));
- if verbose:
- print "Moving: %s -> %s" % (legacy_filename, destination);
- if not no_action:
- utils.move(legacy_filename, destination);
- # Then Update the DB's files table
- if verbose:
- print "SQL: UPDATE files SET filename = '%s', location = '%s' WHERE id = '%s'" % (pool_filename, location_id, qid["files_id"]);
- if not no_action:
- q = projectB.query("UPDATE files SET filename = '%s', location = '%s' WHERE id = '%s'" % (pool_filename, location_id, qid["files_id"]));
-
- sys.stderr.write("Poolized %s in %s files.\n" % (utils.size_type(poolized_size), poolized_count));
-
-################################################################################
-
-def main ():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
-
- for i in ["help", "limit", "no-action", "verbose" ]:
- if not Cnf.has_key("Catherine::Options::%s" % (i)):
- Cnf["Catherine::Options::%s" % (i)] = "";
-
-
- Arguments = [('h',"help","Catherine::Options::Help"),
- ('l',"limit", "Catherine::Options::Limit", "HasArg"),
- ('n',"no-action","Catherine::Options::No-Action"),
- ('v',"verbose","Catherine::Options::Verbose")];
-
- apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Catherine::Options")
-
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- if not Options["Limit"]:
- limit = -1;
- else:
- limit = int(Options["Limit"]) * 1024;
-
- # -n/--no-action implies -v/--verbose
- if Options["No-Action"]:
- Options["Verbose"] = "true";
-
- # Sanity check the limit argument
- if limit > 0 and limit < 1024:
- utils.fubar("-l/--limit takes an argument with a value in kilobytes.");
-
- # Grab a list of all files not already in the pool
- q = projectB.query("""
-SELECT l.path, f.filename, f.id as files_id, c.name as component
- FROM files f, location l, component c WHERE
- NOT EXISTS (SELECT 1 FROM location l WHERE l.type = 'pool' AND f.location = l.id)
- AND NOT (f.filename ~ '^potato') AND f.location = l.id AND l.component = c.id
-UNION SELECT l.path, f.filename, f.id as files_id, null as component
- FROM files f, location l WHERE
- NOT EXISTS (SELECT 1 FROM location l WHERE l.type = 'pool' AND f.location = l.id)
- AND NOT (f.filename ~ '^potato') AND f.location = l.id AND NOT EXISTS
- (SELECT 1 FROM location l WHERE l.component IS NOT NULL AND f.location = l.id);""");
-
- poolize(q, limit, Options["Verbose"], Options["No-Action"]);
-
-#######################################################################################
-
-if __name__ == '__main__':
- main()
-
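For context on the poolization step above: the destination path comes from utils.poolify, which maps a source package name to its pool subdirectory. A minimal re-implementation of that layout rule (a sketch of the convention, not the dak original):

```python
def poolify(source, component=""):
    # Map a source package to its pool subdirectory, mirroring the
    # Debian pool layout (sketch of utils.poolify, not the original).
    if component:
        component += "/"
    # lib* packages use a four-character prefix (libf/libfoo/),
    # everything else the first letter (f/foo/).
    prefix = source[:4] if source.startswith("lib") else source[:1]
    return "%s%s/%s/" % (component, prefix, source)
```

Together with Dir::Pool and the renamed destination_filename, this yields paths like pool/main/a/apt/apt_1.0_i386.deb.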
+++ /dev/null
-#!/usr/bin/env python
-
-# Generate Maintainers file used by e.g. the Debian Bug Tracking System
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: charisma,v 1.18 2004-06-17 15:02:02 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# ``As opposed to "Linux sucks. Respect my academic authoritah, damn
-# you!" or whatever all this hot air amounts to.''
-# -- ajt@ in _that_ thread on debian-devel@
-
-################################################################################
-
-import pg, sys;
-import db_access, utils;
-import apt_pkg;
-
-################################################################################
-
-projectB = None
-Cnf = None
-maintainer_from_source_cache = {}
-packages = {}
-fixed_maintainer_cache = {}
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: charisma [OPTION] EXTRA_FILE[...]
-Generate an index of packages <=> Maintainers.
-
- -h, --help show this help and exit
-"""
- sys.exit(exit_code)
-
-################################################################################
-
-def fix_maintainer (maintainer):
- global fixed_maintainer_cache;
-
- if not fixed_maintainer_cache.has_key(maintainer):
- fixed_maintainer_cache[maintainer] = utils.fix_maintainer(maintainer)[0]
-
- return fixed_maintainer_cache[maintainer]
-
-def get_maintainer (maintainer):
- return fix_maintainer(db_access.get_maintainer(maintainer));
-
-def get_maintainer_from_source (source_id):
- global maintainer_from_source_cache
-
- if not maintainer_from_source_cache.has_key(source_id):
- q = projectB.query("SELECT m.name FROM maintainer m, source s WHERE s.id = %s and s.maintainer = m.id" % (source_id));
- maintainer = q.getresult()[0][0]
- maintainer_from_source_cache[source_id] = fix_maintainer(maintainer)
-
- return maintainer_from_source_cache[source_id]
-
-################################################################################
-
-def main():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
-
- Arguments = [('h',"help","Charisma::Options::Help")];
- if not Cnf.has_key("Charisma::Options::Help"):
- Cnf["Charisma::Options::Help"] = "";
-
- extra_files = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Charisma::Options");
-
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- for suite in Cnf.SubTree("Suite").List():
- suite = suite.lower();
- suite_priority = int(Cnf["Suite::%s::Priority" % (suite)]);
-
- # Source packages
- q = projectB.query("SELECT s.source, s.version, m.name FROM src_associations sa, source s, suite su, maintainer m WHERE su.suite_name = '%s' AND sa.suite = su.id AND sa.source = s.id AND m.id = s.maintainer" % (suite))
- sources = q.getresult();
- for source in sources:
- package = source[0];
- version = source[1];
- maintainer = fix_maintainer(source[2]);
- if packages.has_key(package):
- if packages[package]["priority"] <= suite_priority:
- if apt_pkg.VersionCompare(packages[package]["version"], version) < 0:
- packages[package] = { "maintainer": maintainer, "priority": suite_priority, "version": version };
- else:
- packages[package] = { "maintainer": maintainer, "priority": suite_priority, "version": version };
-
- # Binary packages
- q = projectB.query("SELECT b.package, b.source, b.maintainer, b.version FROM bin_associations ba, binaries b, suite s WHERE s.suite_name = '%s' AND ba.suite = s.id AND ba.bin = b.id" % (suite));
- binaries = q.getresult();
- for binary in binaries:
- package = binary[0];
- source_id = binary[1];
- version = binary[3];
- # Use the source maintainer first, falling back on the binary maintainer only as a last resort
- if source_id:
- maintainer = get_maintainer_from_source(source_id);
- else:
- maintainer = get_maintainer(binary[2]);
- if packages.has_key(package):
- if packages[package]["priority"] <= suite_priority:
- if apt_pkg.VersionCompare(packages[package]["version"], version) < 0:
- packages[package] = { "maintainer": maintainer, "priority": suite_priority, "version": version };
- else:
- packages[package] = { "maintainer": maintainer, "priority": suite_priority, "version": version };
-
- # Process any additional Maintainer files (e.g. from non-US or pseudo packages)
- for filename in extra_files:
- file = utils.open_file(filename);
- for line in file.readlines():
- line = utils.re_comments.sub('', line).strip();
- if line == "":
- continue;
- split = line.split();
- lhs = split[0];
- maintainer = fix_maintainer(" ".join(split[1:]));
- if lhs.find('~') != -1:
- (package, version) = lhs.split('~');
- else:
- package = lhs;
- version = '*';
- # A version of '*' overwhelms all real version numbers
- if not packages.has_key(package) or version == '*' \
- or apt_pkg.VersionCompare(packages[package]["version"], version) < 0:
- packages[package] = { "maintainer": maintainer, "version": version };
- file.close();
-
- package_keys = packages.keys()
- package_keys.sort()
- for package in package_keys:
- lhs = "~".join([package, packages[package]["version"]]);
- print "%-30s %s" % (lhs, packages[package]["maintainer"]);
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
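The suite loop above applies the same replacement rule twice (once for sources, once for binaries): an existing entry is only displaced by an equal-or-higher-priority suite carrying a newer version. Factored out as a sketch, with the comparison function injected because the original relies on apt_pkg.VersionCompare:

```python
def update_entry(packages, package, maintainer, priority, version, version_cmp):
    # Record (maintainer, version) for a package, preferring
    # higher-priority suites and, at equal or higher priority,
    # newer versions. Sketch of charisma's selection rule.
    entry = packages.get(package)
    if entry is None:
        packages[package] = {"maintainer": maintainer,
                             "priority": priority, "version": version}
    elif entry["priority"] <= priority and \
            version_cmp(entry["version"], version) < 0:
        packages[package] = {"maintainer": maintainer,
                             "priority": priority, "version": version}
```

A lower-priority suite never displaces an existing entry, regardless of its version.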
+++ /dev/null
-#!/usr/bin/env python
-
-# Cruft checker and hole filler for overrides
-# Copyright (C) 2000, 2001, 2002, 2004 James Troup <james@nocrew.org>
-# Copyright (C) 2005 Jeroen van Wolffelaar <jeroen@wolffelaar.nl>
-# $Id: cindy,v 1.14 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-######################################################################
-# NB: cindy is not a good idea with New Incoming as she doesn't take #
-# into account accepted. You can minimize the impact of this by #
-# running her immediately after kelly but that's still racy because #
-# lisa doesn't lock with kelly. A better long term fix is the evil #
-# plan for accepted to be in the DB. #
-######################################################################
-
-# cindy should now work fine being run during cron.daily, for example just
-# before denise (after kelly and jenna). At that point, queue/accepted should
-# be empty and everything installed. Cindy now takes into account suites
-# sharing overrides.
-
-# TODO:
-# * Only update out-of-sync overrides when corresponding versions are equal to
-# some degree
-# * consistency checks like:
-# - section=debian-installer only for udeb and dsc
-# - priority=source iff dsc
-# - (suite, package, 'dsc') is unique,
-# - just as (suite, package, (u)deb) (yes, across components!)
-# - sections match their component (each component has an own set of sections,
-# could probably be reduced...)
-
-################################################################################
-
-import pg, sys, os;
-import utils, db_access, logging;
-import apt_pkg;
-
-################################################################################
-
-Options = None;
-projectB = None;
-Logger = None
-sections = {}
-priorities = {}
-blacklist = {}
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: cindy
-Check for cruft in overrides.
-
- -n, --no-action don't do anything
- -h, --help show this help and exit"""
-
- sys.exit(exit_code)
-
-################################################################################
-
-def gen_blacklist(dir):
- for entry in os.listdir(dir):
- entry = entry.split('_')[0]
- blacklist[entry] = 1
-
-def process(osuite, affected_suites, originosuite, component, type):
- global Logger, Options, projectB, sections, priorities;
-
- osuite_id = db_access.get_suite_id(osuite);
- if osuite_id == -1:
- utils.fubar("Suite '%s' not recognised." % (osuite));
- originosuite_id = None
- if originosuite:
- originosuite_id = db_access.get_suite_id(originosuite);
- if originosuite_id == -1:
- utils.fubar("Suite '%s' not recognised." % (originosuite));
-
- component_id = db_access.get_component_id(component);
- if component_id == -1:
- utils.fubar("Component '%s' not recognised." % (component));
-
- type_id = db_access.get_override_type_id(type);
- if type_id == -1:
- utils.fubar("Type '%s' not recognised. (Valid types are deb, udeb and dsc)" % (type));
- dsc_type_id = db_access.get_override_type_id("dsc");
- deb_type_id = db_access.get_override_type_id("deb")
-
- source_priority_id = db_access.get_priority_id("source")
-
- if type == "deb" or type == "udeb":
- packages = {};
- q = projectB.query("""
-SELECT b.package FROM binaries b, bin_associations ba, files f,
- location l, component c
- WHERE b.type = '%s' AND b.id = ba.bin AND f.id = b.file AND l.id = f.location
- AND c.id = l.component AND ba.suite IN (%s) AND c.id = %s
-""" % (type, ",".join(map(str,affected_suites)), component_id));
- for i in q.getresult():
- packages[i[0]] = 0;
-
- src_packages = {};
- q = projectB.query("""
-SELECT s.source FROM source s, src_associations sa, files f, location l,
- component c
- WHERE s.id = sa.source AND f.id = s.file AND l.id = f.location
- AND c.id = l.component AND sa.suite IN (%s) AND c.id = %s
-""" % (",".join(map(str,affected_suites)), component_id));
- for i in q.getresult():
- src_packages[i[0]] = 0;
-
- # -----------
- # Drop unused overrides
-
- q = projectB.query("SELECT package, priority, section, maintainer FROM override WHERE suite = %s AND component = %s AND type = %s" % (osuite_id, component_id, type_id));
- projectB.query("BEGIN WORK");
- if type == "dsc":
- for i in q.getresult():
- package = i[0];
- if src_packages.has_key(package):
- src_packages[package] = 1
- else:
- if blacklist.has_key(package):
- utils.warn("%s in incoming, not touching" % package)
- continue
- Logger.log(["removing unused override", osuite, component,
- type, package, priorities[i[1]], sections[i[2]], i[3]])
- if not Options["No-Action"]:
- projectB.query("""DELETE FROM override WHERE package =
- '%s' AND suite = %s AND component = %s AND type =
- %s""" % (package, osuite_id, component_id, type_id));
- # create source overrides based on binary overrides, as source
- # overrides do not always get created
- q = projectB.query(""" SELECT package, priority, section,
- maintainer FROM override WHERE suite = %s AND component = %s
- """ % (osuite_id, component_id));
- for i in q.getresult():
- package = i[0]
- if not src_packages.has_key(package) or src_packages[package]:
- continue
- src_packages[package] = 1
-
- Logger.log(["add missing override", osuite, component,
- type, package, "source", sections[i[2]], i[3]])
- if not Options["No-Action"]:
- projectB.query("""INSERT INTO override (package, suite,
- component, priority, section, type, maintainer) VALUES
- ('%s', %s, %s, %s, %s, %s, '%s')""" % (package,
- osuite_id, component_id, source_priority_id, i[2],
- dsc_type_id, i[3]));
- # Check whether originosuite has an override for us we can
- # copy
- if originosuite:
- q = projectB.query("""SELECT origin.package, origin.priority,
- origin.section, origin.maintainer, target.priority,
- target.section, target.maintainer FROM override origin LEFT
- JOIN override target ON (origin.package = target.package AND
- target.suite=%s AND origin.component = target.component AND origin.type =
- target.type) WHERE origin.suite = %s AND origin.component = %s
- AND origin.type = %s""" %
- (osuite_id, originosuite_id, component_id, type_id));
- for i in q.getresult():
- package = i[0]
- if not src_packages.has_key(package) or src_packages[package]:
- if i[4] and (i[1] != i[4] or i[2] != i[5] or i[3] != i[6]):
- Logger.log(["syncing override", osuite, component,
- type, package, "source", sections[i[5]], i[6], "source", sections[i[2]], i[3]])
- if not Options["No-Action"]:
- projectB.query("""UPDATE override SET section=%s,
- maintainer='%s' WHERE package='%s' AND
- suite=%s AND component=%s AND type=%s""" %
- (i[2], i[3], package, osuite_id, component_id,
- dsc_type_id));
- continue
- # we can copy
- src_packages[package] = 1
- Logger.log(["copying missing override", osuite, component,
- type, package, "source", sections[i[2]], i[3]])
- if not Options["No-Action"]:
- projectB.query("""INSERT INTO override (package, suite,
- component, priority, section, type, maintainer) VALUES
- ('%s', %s, %s, %s, %s, %s, '%s')""" % (package,
- osuite_id, component_id, source_priority_id, i[2],
- dsc_type_id, i[3]));
-
- for package, hasoverride in src_packages.items():
- if not hasoverride:
- utils.warn("%s has no override!" % package)
-
- else: # binary override
- for i in q.getresult():
- package = i[0];
- if packages.has_key(package):
- packages[package] = 1
- else:
- if blacklist.has_key(package):
- utils.warn("%s in incoming, not touching" % package)
- continue
- Logger.log(["removing unused override", osuite, component,
- type, package, priorities[i[1]], sections[i[2]], i[3]])
- if not Options["No-Action"]:
- projectB.query("""DELETE FROM override WHERE package =
- '%s' AND suite = %s AND component = %s AND type =
- %s""" % (package, osuite_id, component_id, type_id));
-
- # Check whether originosuite has an override for us we can
- # copy
- if originosuite:
- q = projectB.query("""SELECT origin.package, origin.priority,
- origin.section, origin.maintainer, target.priority,
- target.section, target.maintainer FROM override origin LEFT
- JOIN override target ON (origin.package = target.package AND
- target.suite=%s AND origin.component = target.component AND
- origin.type = target.type) WHERE origin.suite = %s AND
- origin.component = %s AND origin.type = %s""" % (osuite_id,
- originosuite_id, component_id, type_id));
- for i in q.getresult():
- package = i[0]
- if not packages.has_key(package) or packages[package]:
- if i[4] and (i[1] != i[4] or i[2] != i[5] or i[3] != i[6]):
- Logger.log(["syncing override", osuite, component,
- type, package, priorities[i[4]], sections[i[5]],
- i[6], priorities[i[1]], sections[i[2]], i[3]])
- if not Options["No-Action"]:
- projectB.query("""UPDATE override SET priority=%s, section=%s,
- maintainer='%s' WHERE package='%s' AND
- suite=%s AND component=%s AND type=%s""" %
- (i[1], i[2], i[3], package, osuite_id,
- component_id, type_id));
- continue
- # we can copy
- packages[package] = 1
- Logger.log(["copying missing override", osuite, component,
- type, package, priorities[i[1]], sections[i[2]], i[3]])
- if not Options["No-Action"]:
- projectB.query("""INSERT INTO override (package, suite,
- component, priority, section, type, maintainer) VALUES
- ('%s', %s, %s, %s, %s, %s, '%s')""" % (package, osuite_id, component_id, i[1], i[2], type_id, i[3]));
-
- for package, hasoverride in packages.items():
- if not hasoverride:
- utils.warn("%s has no override!" % package)
-
- projectB.query("COMMIT WORK");
- sys.stdout.flush()
-
-
-################################################################################
-
-def main ():
- global Logger, Options, projectB, sections, priorities;
-
- Cnf = utils.get_conf()
-
- Arguments = [('h',"help","Cindy::Options::Help"),
- ('n',"no-action", "Cindy::Options::No-Action")];
- for i in [ "help", "no-action" ]:
- if not Cnf.has_key("Cindy::Options::%s" % (i)):
- Cnf["Cindy::Options::%s" % (i)] = "";
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
- Options = Cnf.SubTree("Cindy::Options")
-
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- # init sections, priorities:
- q = projectB.query("SELECT id, section FROM section")
- for i in q.getresult():
- sections[i[0]] = i[1]
- q = projectB.query("SELECT id, priority FROM priority")
- for i in q.getresult():
- priorities[i[0]] = i[1]
-
- if not Options["No-Action"]:
- Logger = logging.Logger(Cnf, "cindy")
- else:
- Logger = logging.Logger(Cnf, "cindy", 1)
-
- gen_blacklist(Cnf["Dir::Queue::Accepted"])
-
- for osuite in Cnf.SubTree("Cindy::OverrideSuites").List():
- if "1" != Cnf["Cindy::OverrideSuites::%s::Process" % osuite]:
- continue
-
- osuite = osuite.lower()
-
- originosuite = None
- originremark = ""
- try:
- originosuite = Cnf["Cindy::OverrideSuites::%s::OriginSuite" % osuite];
- originosuite = originosuite.lower()
- originremark = " taking missing from %s" % originosuite
- except KeyError:
- pass
-
- print "Processing %s%s..." % (osuite, originremark);
- # Get a list of all suites that use the override file of 'osuite'
- ocodename = Cnf["Suite::%s::codename" % osuite]
- suites = []
- for suite in Cnf.SubTree("Suite").List():
- if ocodename == Cnf["Suite::%s::OverrideCodeName" % suite]:
- suites.append(suite)
-
- q = projectB.query("SELECT id FROM suite WHERE suite_name in (%s)" \
- % ", ".join(map(repr, suites)).lower())
-
- suiteids = []
- for i in q.getresult():
- suiteids.append(i[0])
-
- if len(suiteids) != len(suites) or len(suiteids) < 1:
- utils.fubar("Couldn't find ids of all suites: %s" % suites)
-
- for component in Cnf.SubTree("Component").List():
- if component == "mixed":
- continue; # Ick
- # It is crucial for the dsc override creation based on binary
- # overrides that 'dsc' goes first
- otypes = Cnf.ValueList("OverrideType")
- otypes.remove("dsc")
- otypes = ["dsc"] + otypes
- for otype in otypes:
- print "Processing %s [%s - %s] using %s..." \
- % (osuite, component, otype, suites);
- sys.stdout.flush()
- process(osuite, suiteids, originosuite, component, otype);
-
- Logger.close()
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
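The core of process() above is a mark-and-sweep over override entries: anything backed by a live source or binary package is kept, anything still sitting in the incoming queue is left alone, and the rest is removable cruft. The partitioning, reduced to a sketch:

```python
def split_overrides(known_packages, overrides, incoming=()):
    # Partition override entries into (kept, removable), skipping
    # anything still in the incoming queue -- a sketch of cindy's
    # core loop, without the database and logging machinery.
    kept, removable = [], []
    incoming = set(incoming)
    for pkg in overrides:
        if pkg in known_packages or pkg in incoming:
            kept.append(pkg)   # live package, or "in incoming, not touching"
        else:
            removable.append(pkg)
    return kept, removable
```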
+++ /dev/null
-#!/usr/bin/env python
-
-# 'Fix' stable to make debian-cd and dpkg -BORGiE users happy
-# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
-# $Id: claire.py,v 1.19 2003-09-07 13:52:11 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# _ _ ____
-# | \ | | __ )_
-# | \| | _ (_)
-# | |\ | |_) | This has been obsoleted since the release of woody.
-# |_| \_|____(_)
-#
-
-################################################################################
-
-import os, pg, re, sys;
-import utils, db_access;
-import apt_pkg;
-
-################################################################################
-
-re_strip_section_prefix = re.compile(r'.*/');
-
-Cnf = None;
-projectB = None;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: claire [OPTIONS]
-Create compatibility symlinks from legacy locations to the pool.
-
- -v, --verbose explain what is being done
- -h, --help show this help and exit"""
-
- sys.exit(exit_code)
-
-################################################################################
-
-def fix_component_section (component, section):
- if component == "":
- component = utils.extract_component_from_section(section)[1];
-
- # FIXME: ugly hacks to work around override brain damage
- section = re_strip_section_prefix.sub('', section);
- section = section.lower().replace('non-us', '');
- if section == "main" or section == "contrib" or section == "non-free":
- section = '';
- if section != '':
- section += '/';
-
- return (component, section);
-
-################################################################################
-
-def find_dislocated_stable(Cnf, projectB):
- dislocated_files = {}
-
- codename = Cnf["Suite::Stable::Codename"];
-
- # Source
- q = projectB.query("""
-SELECT DISTINCT ON (f.id) c.name, sec.section, l.path, f.filename, f.id
- FROM component c, override o, section sec, source s, files f, location l,
- dsc_files df, suite su, src_associations sa, files f2, location l2
- WHERE su.suite_name = 'stable' AND sa.suite = su.id AND sa.source = s.id
- AND f2.id = s.file AND f2.location = l2.id AND df.source = s.id
- AND f.id = df.file AND f.location = l.id AND o.package = s.source
- AND sec.id = o.section AND NOT (f.filename ~ '^%s/')
- AND l.component = c.id AND o.suite = su.id
-""" % (codename));
-# Only needed if you have files in legacy-mixed locations
-# UNION SELECT DISTINCT ON (f.id) null, sec.section, l.path, f.filename, f.id
-# FROM component c, override o, section sec, source s, files f, location l,
-# dsc_files df, suite su, src_associations sa, files f2, location l2
-# WHERE su.suite_name = 'stable' AND sa.suite = su.id AND sa.source = s.id
-# AND f2.id = s.file AND f2.location = l2.id AND df.source = s.id
-# AND f.id = df.file AND f.location = l.id AND o.package = s.source
-# AND sec.id = o.section AND NOT (f.filename ~ '^%s/') AND o.suite = su.id
-# AND NOT EXISTS (SELECT 1 FROM location l WHERE l.component IS NOT NULL AND f.location = l.id);
- for i in q.getresult():
- (component, section) = fix_component_section(i[0], i[1]);
- if Cnf.FindB("Dinstall::LegacyStableHasNoSections"):
- section="";
- dest = "%sdists/%s/%s/source/%s%s" % (Cnf["Dir::Root"], codename, component, section, os.path.basename(i[3]));
- if not os.path.exists(dest):
- src = i[2]+i[3];
- src = utils.clean_symlink(src, dest, Cnf["Dir::Root"]);
- if Cnf.Find("Claire::Options::Verbose"):
- print src+' -> '+dest
- os.symlink(src, dest);
- dislocated_files[i[4]] = dest;
-
- # Binary
- architectures = filter(utils.real_arch, Cnf.ValueList("Suite::Stable::Architectures"));
- q = projectB.query("""
-SELECT DISTINCT ON (f.id) c.name, a.arch_string, sec.section, b.package,
- b.version, l.path, f.filename, f.id
- FROM architecture a, bin_associations ba, binaries b, component c, files f,
- location l, override o, section sec, suite su
- WHERE su.suite_name = 'stable' AND ba.suite = su.id AND ba.bin = b.id
- AND f.id = b.file AND f.location = l.id AND o.package = b.package
- AND sec.id = o.section AND NOT (f.filename ~ '^%s/')
- AND b.architecture = a.id AND l.component = c.id AND o.suite = su.id""" %
- (codename));
-# Only needed if you have files in legacy-mixed locations
-# UNION SELECT DISTINCT ON (f.id) null, a.arch_string, sec.section, b.package,
-# b.version, l.path, f.filename, f.id
-# FROM architecture a, bin_associations ba, binaries b, component c, files f,
-# location l, override o, section sec, suite su
-# WHERE su.suite_name = 'stable' AND ba.suite = su.id AND ba.bin = b.id
-# AND f.id = b.file AND f.location = l.id AND o.package = b.package
-# AND sec.id = o.section AND NOT (f.filename ~ '^%s/')
-# AND b.architecture = a.id AND o.suite = su.id AND NOT EXISTS
-# (SELECT 1 FROM location l WHERE l.component IS NOT NULL AND f.location = l.id);
- for i in q.getresult():
- (component, section) = fix_component_section(i[0], i[2]);
- if Cnf.FindB("Dinstall::LegacyStableHasNoSections"):
- section="";
- architecture = i[1];
- package = i[3];
- version = utils.re_no_epoch.sub('', i[4]);
- src = i[5]+i[6];
-
- dest = "%sdists/%s/%s/binary-%s/%s%s_%s.deb" % (Cnf["Dir::Root"], codename, component, architecture, section, package, version);
- src = utils.clean_symlink(src, dest, Cnf["Dir::Root"]);
- if not os.path.exists(dest):
- if Cnf.Find("Claire::Options::Verbose"):
- print src+' -> '+dest;
- os.symlink(src, dest);
- dislocated_files[i[7]] = dest;
- # Add per-arch symlinks for arch: all debs
- if architecture == "all":
- for arch in architectures:
- dest = "%sdists/%s/%s/binary-%s/%s%s_%s.deb" % (Cnf["Dir::Root"], codename, component, arch, section, package, version);
- if not os.path.exists(dest):
- if Cnf.Find("Claire::Options::Verbose"):
- print src+' -> '+dest
- os.symlink(src, dest);
-
- return dislocated_files
-
-################################################################################
-
-def main ():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
-
- Arguments = [('h',"help","Claire::Options::Help"),
- ('v',"verbose","Claire::Options::Verbose")];
- for i in ["help", "verbose" ]:
- if not Cnf.has_key("Claire::Options::%s" % (i)):
- Cnf["Claire::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Claire::Options")
-
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
-
- db_access.init(Cnf, projectB);
-
- find_dislocated_stable(Cnf, projectB);
-
-#######################################################################################
-
-if __name__ == '__main__':
- main();
-
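The symlinks claire creates are relative, computed via utils.clean_symlink so the tree stays relocatable. Under the assumption that src and dest both live under the archive root, os.path.relpath produces the same shape of target (a sketch, not the original helper, which also takes the root for validation):

```python
import os

def clean_symlink(src, dest):
    # Return a relative symlink target from dest's directory to src,
    # so "dists/..." entries point back into "pool/..." without
    # embedding the absolute archive root.
    return os.path.relpath(src, os.path.dirname(dest))
```

For example, a stable binary under dists/ four levels deep resolves to a ../../../../pool/... target.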
--- /dev/null
+Dir
+{
+ ArchiveDir "/org/non-us.debian.org/ftp/";
+ OverrideDir "/org/non-us.debian.org/scripts/override/";
+ CacheDir "/org/non-us.debian.org/database/";
+};
+
+Default
+{
+ Packages::Compress ". gzip";
+ Sources::Compress "gzip";
+ Contents::Compress "gzip";
+ DeLinkLimit 0;
+ MaxContentsChange 6000;
+ FileMode 0664;
+}
+
+TreeDefault
+{
+ Contents::Header "/org/non-us.debian.org/katie/Contents.top";
+};
+
+tree "dists/proposed-updates/non-US"
+{
+ FileList "/org/non-us.debian.org/database/dists/proposed-updates_non-US/$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/non-us.debian.org/database/dists/proposed-updates_non-US/$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
+ BinOverride "override.woody.$(SECTION)";
+ SrcOverride "override.woody.$(SECTION).src";
+ Contents " ";
+};
+
+tree "dists/testing/non-US"
+{
+ FileList "/org/non-us.debian.org/database/dists/testing_non-US/$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/non-us.debian.org/database/dists/testing_non-US/$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
+ BinOverride "override.sarge.$(SECTION)";
+ SrcOverride "override.sarge.$(SECTION).src";
+};
+
+tree "dists/testing-proposed-updates/non-US"
+{
+ FileList "/org/non-us.debian.org/database/dists/testing-proposed-updates_non-US/$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/non-us.debian.org/database/dists/testing-proposed-updates_non-US/$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
+ BinOverride "override.sarge.$(SECTION)";
+ SrcOverride "override.sarge.$(SECTION).src";
+ Contents " ";
+};
+
+tree "dists/unstable/non-US"
+{
+ FileList "/org/non-us.debian.org/database/dists/unstable_non-US/$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/non-us.debian.org/database/dists/unstable_non-US/$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc source";
+ BinOverride "override.sid.$(SECTION)";
+ SrcOverride "override.sid.$(SECTION).src";
+};
--- /dev/null
+Dir
+{
+ ArchiveDir "/org/non-us.debian.org/ftp/";
+ OverrideDir "/org/non-us.debian.org/scripts/override/";
+ CacheDir "/org/non-us.debian.org/database/";
+};
+
+Default
+{
+ Packages::Compress ". gzip";
+ Sources::Compress "gzip";
+ Contents::Compress "gzip";
+ DeLinkLimit 0;
+ FileMode 0664;
+}
+
+TreeDefault
+{
+ Contents::Header "/org/non-us.debian.org/katie/Contents.top";
+};
+
+tree "dists/stable/non-US"
+{
+ FileList "/org/non-us.debian.org/database/dists/stable_non-US/$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/non-us.debian.org/database/dists/stable_non-US/$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
+ BinOverride "override.woody.$(SECTION)";
+ SrcOverride "override.woody.$(SECTION).src";
+};
--- /dev/null
+#! /bin/sh
+#
+# Executed daily via cron, out of troup's crontab.
+
+set -e
+export SCRIPTVARS=/org/non-us.debian.org/katie/vars-non-US
+. $SCRIPTVARS
+
+################################################################################
+
+echo Archive maintenance started at $(date +%X)
+
+NOTICE="$ftpdir/Archive_Maintenance_In_Progress"
+
+cleanup() {
+ rm -f "$NOTICE"
+}
+trap cleanup 0
+
+rm -f "$NOTICE"
+cat > "$NOTICE" <<EOF
+Packages are currently being installed and indices rebuilt.
+Maintenance is automatic, starting at 13:52 US Central time, and
+ending at about 15:30. This file is then removed.
+
+You should not mirror the archive during this period.
+EOF
+
+################################################################################
+
+echo "Creating pre-daily-cron-job backup of projectb database..."
+pg_dump projectb > /org/non-us.debian.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
+
+################################################################################
+
+update-readmenonus
+
+################################################################################
+
+if [ ! -z "$(find "$accepted" -name \*.changes -maxdepth 1 -mindepth 1)" ]; then
+ cd $accepted
+ rm -f REPORT
+ kelly -pa *.changes | tee REPORT | \
+ mail -s "Non-US Install for $(date +%D)" ftpmaster@ftp-master.debian.org
+ chgrp debadmin REPORT
+ chmod 664 REPORT
+else
+ echo "kelly: Nothing to install."
+fi
+
+cd $masterdir
+symlinks -d -r $ftpdir
+
+cd $masterdir
+jenna
+
+# Generate override files
+cd $overridedir
+denise
+# FIXME
+rm -f override.potato.all3
+for i in main contrib non-free; do cat override.potato.$i >> override.potato.all3; done
+
+# Generate Packages and Sources files
+cd $masterdir
+apt-ftparchive generate apt.conf-non-US
+# Generate Release files
+ziyi
+
+# Clean out old packages
+rhona
+shania
+
+# Generate the Maintainers file
+cd $indices
+charisma > .new-maintainers_versions
+mv -f .new-maintainers_versions Maintainers_Versions
+sed -e "s/~[^ ]*\([ ]\)/\1/" < Maintainers_Versions | awk '{printf "%-20s ", $1; for (i=2; i<=NF; i++) printf "%s ", $i; printf "\n";}' > .new-maintainers
+mv -f .new-maintainers Maintainers
+gzip -9v <Maintainers >.new-maintainers.gz
+mv -f .new-maintainers.gz Maintainers.gz
+gzip -9v <Maintainers_Versions >.new-maintainers_versions.gz
+mv -f .new-maintainers_versions.gz Maintainers_Versions.gz
+rm -f Maintainers_Versions
+
+cd $masterdir
+copyoverrides
+mklslar
+mkchecksums
+
+rm -f $NOTICE
+echo Archive maintenance finished at $(date +%X)
+
+################################################################################
+
+echo "Creating post-daily-cron-job backup of projectb database..."
+pg_dump projectb > /org/non-us.debian.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
+
+################################################################################
+
+# Vacuum the database
+echo "VACUUM; VACUUM ANALYZE;" | psql projectb 2>&1 | grep -v "^NOTICE: Skipping.*only table owner can VACUUM it$"
+
+################################################################################
+
+# Send a report on NEW/BYHAND packages
+helena | mail -e -s "[non-US] NEW and BYHAND on $(date +%D)" ftpmaster@ftp-master.debian.org
+# and one on crufty packages
+rene | mail -e -s "[non-US] rene run for $(date +%D)" ftpmaster@ftp-master.debian.org
+
+################################################################################
+ulimit -m 90000 -d 90000 -s 10000 -v 90000
+
+run-parts --report /org/non-us.debian.org/scripts/distmnt
+
+echo Daily cron scripts successful.
--- /dev/null
+#! /bin/sh
+#
+# Executed hourly via cron, out of troup's crontab.
+
+set -e
+export SCRIPTVARS=/org/non-us.debian.org/katie/vars-non-US
+. $SCRIPTVARS
+
+cd $masterdir
+julia
--- /dev/null
+#! /bin/sh
+
+set -e
+export SCRIPTVARS=/org/non-us.debian.org/katie/vars-non-US
+. $SCRIPTVARS
+
+cd $unchecked
+
+changes=$(find . -maxdepth 1 -mindepth 1 -type f -name \*.changes | sed -e "s,./,," | xargs)
+report=$queuedir/REPORT
+timestamp=$(date "+%Y-%m-%d %H:%M")
+
+if [ ! -z "$changes" ]; then
+ echo "$timestamp": "$changes" >> $report
+ jennifer -a $changes >> $report
+ echo "--" >> $report
+else
+ echo "$timestamp": Nothing to do >> $report
+fi;
--- /dev/null
+#!/bin/sh
+#
+# Run once a week via cron, out of katie's crontab.
+
+set -e
+export SCRIPTVARS=/org/non-us.debian.org/katie/vars-non-US
+. $SCRIPTVARS
+
+################################################################################
+
+# Purge empty directories
+
+if [ ! -z "$(find $ftpdir/pool/ -type d -empty)" ]; then
+ find $ftpdir/pool/ -type d -empty | xargs rmdir;
+fi
+
+# Clean up apt-ftparchive's databases
+
+cd $masterdir
+apt-ftparchive -q clean apt.conf-non-US
+
+################################################################################
--- /dev/null
+Dinstall
+{
+ PGPKeyring "/org/keyring.debian.org/keyrings/debian-keyring.pgp";
+ GPGKeyring "/org/keyring.debian.org/keyrings/debian-keyring.gpg";
+ SigningKeyring "/org/non-us.debian.org/s3kr1t/dot-gnupg/secring.gpg";
+ SigningPubKeyring "/org/non-us.debian.org/s3kr1t/dot-gnupg/pubring.gpg";
+ SigningKeyIds "1DB114E0";
+ SendmailCommand "/usr/sbin/sendmail -odq -oi -t";
+ MyEmailAddress "Debian Installer <installer@ftp-master.debian.org>";
+ MyAdminAddress "ftpmaster@debian.org";
+ MyHost "debian.org"; // used for generating user@my_host addresses in e.g. manual_reject()
+ MyDistribution "Debian"; // Used in emails
+ BugServer "bugs.debian.org";
+ PackagesServer "packages.debian.org";
+ TrackingServer "packages.qa.debian.org";
+ LockFile "/org/non-us.debian.org/katie/lock";
+ Bcc "archive@ftp-master.debian.org";
+ GroupOverrideFilename "override.group-maint";
+ FutureTimeTravelGrace 28800; // 8 hours
+ PastCutoffYear "1984";
+ SkipTime 300;
+ CloseBugs "true";
+ SuiteSuffix "non-US";
+ OverrideDisparityCheck "true";
+ StableDislocationSupport "false";
+ Reject
+ {
+ NoSourceOnly "true";
+ };
+};
+
+Lauren
+{
+ StableRejector "Martin (Joey) Schulze <joey@debian.org>";
+ MoreInfoURL "http://people.debian.org/~joey/3.0r4/";
+};
+
+Julia
+{
+ ValidGID "800";
+ // Comma separated list of users who are in Postgres but not the passwd file
+ KnownPostgres "udmsearch,postgres,www-data,katie,auric";
+};
+
+Shania
+{
+ Options
+ {
+ Days 14;
+ };
+ MorgueSubDir "shania";
+};
+
+
+Catherine
+{
+ Options
+ {
+ Limit 10240;
+ };
+};
+
+Natalie
+{
+ Options
+ {
+ Component "non-US/main";
+ Suite "unstable";
+ Type "deb";
+ };
+ ComponentPosition "suffix"; // Whether the component is prepended or appended to the section name
+};
+
+Melanie
+{
+ Options
+ {
+ Suite "unstable";
+ };
+ MyEmailAddress "Debian Archive Maintenance <ftpmaster@ftp-master.debian.org>";
+ LogFile "/home/troup/public_html/removals.txt";
+ Bcc "removed-packages@qa.debian.org";
+};
+
+Neve
+{
+ ExportDir "/org/non-us.debian.org/katie/neve-files/";
+};
+
+Rhona
+{
+ // How long (in seconds) dead packages are left before being killed
+ StayOfExecution 129600; // 1.5 days
+ MorgueSubDir "rhona";
+};
+
+Suite
+{
+
+ Stable
+ {
+ Components
+ {
+ non-US/main;
+ non-US/contrib;
+ non-US/non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "debian-changes@lists.debian.org";
+ Version "3.0r4";
+ Origin "Debian";
+ Description "Debian 3.0r4 Released 31st December 2004";
+ CodeName "woody";
+ OverrideCodeName "woody";
+ Priority "3";
+ Untouchable "1";
+ ChangeLogBase "dists/stable/non-US/";
+ };
+
+ Proposed-Updates
+ {
+ Components
+ {
+ non-US/main;
+ non-US/contrib;
+ non-US/non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "debian-changes@lists.debian.org";
+ CopyChanges "dists/proposed-updates/";
+ CopyKatie "/org/non-us.debian.org/queue/proposed-updates/";
+ Version "3.0-updates";
+ Origin "Debian";
+ Description "Debian 3.0 Proposed Updates - Not Released";
+ CodeName "proposed-updates";
+ OverrideCodeName "woody";
+ OverrideSuite "stable";
+ Priority "4";
+ VersionChecks
+ {
+ MustBeNewerThan
+ {
+ Stable;
+ };
+ MustBeOlderThan
+ {
+ Unstable;
+ Experimental;
+ };
+ };
+ };
+
+ Testing
+ {
+ Components
+ {
+ non-US/main;
+ non-US/contrib;
+ non-US/non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Origin "Debian";
+ Description "Debian Testing distribution - Not Released";
+ CodeName "sarge";
+ OverrideCodeName "sarge";
+ Priority "5";
+ };
+
+ Testing-Proposed-Updates
+ {
+ Components
+ {
+ non-US/main;
+ non-US/contrib;
+ non-US/non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Origin "Debian";
+ Description "Debian Testing distribution updates - Not Released";
+ CodeName "testing-proposed-updates";
+ OverrideCodeName "sarge";
+ OverrideSuite "unstable";
+ Priority "6";
+ VersionChecks
+ {
+ MustBeNewerThan
+ {
+ Stable;
+ Proposed-Updates;
+ Testing;
+ };
+ MustBeOlderThan
+ {
+ Unstable;
+ Experimental;
+ };
+ };
+ };
+
+ Unstable
+ {
+ Components
+ {
+ non-US/main;
+ non-US/contrib;
+ non-US/non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ hurd-i386;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sh;
+ sparc;
+ };
+ Announce "debian-devel-changes@lists.debian.org";
+ Origin "Debian";
+ Description "Debian Unstable - Not Released";
+ CodeName "sid";
+ OverrideCodeName "sid";
+ Priority "7";
+ VersionChecks
+ {
+ MustBeNewerThan
+ {
+ Stable;
+ Proposed-Updates;
+ Testing;
+ Testing-Proposed-Updates;
+ };
+ };
+ };
+
+};
+
+SuiteMappings
+{
+ // JT - temp measure
+ "map testing-security proposed-updates";
+
+ "map stable proposed-updates";
+ "map stable-security proposed-updates";
+ "map-unreleased stable unstable";
+ "map-unreleased proposed-updates unstable";
+ "map testing testing-proposed-updates";
+ //"map testing-security testing-proposed-updates";
+ "map-unreleased testing unstable";
+ "map-unreleased testing-proposed-updates unstable";
+};
+
+Dir
+{
+ Root "/org/non-us.debian.org/ftp/";
+ Pool "/org/non-us.debian.org/ftp/pool/";
+ PoolRoot "pool/";
+ Templates "/org/non-us.debian.org/katie/templates/";
+ Override "/org/non-us.debian.org/scripts/override/";
+ Lists "/org/non-us.debian.org/database/dists/";
+ Log "/org/non-us.debian.org/log/";
+ Morgue "/org/non-us.debian.org/morgue/";
+ MorgueReject "reject";
+ UrgencyLog "/org/non-us.debian.org/testing/";
+ Queue
+ {
+ Accepted "/org/non-us.debian.org/queue/accepted/";
+ Byhand "/org/non-us.debian.org/queue/byhand/";
+ Done "/org/non-us.debian.org/queue/done/";
+ Holding "/org/non-us.debian.org/queue/holding/";
+ New "/org/non-us.debian.org/queue/new/";
+ Reject "/org/non-us.debian.org/queue/reject/";
+ Unchecked "/org/non-us.debian.org/queue/unchecked/";
+ };
+};
+
+DB
+{
+ Name "projectb";
+ Host "";
+ Port -1;
+};
+
+Architectures
+{
+
+ source "Source";
+ all "Architecture Independent";
+ alpha "DEC Alpha";
+ hurd-i386 "Intel ia32 running the HURD";
+ hppa "HP PA RISC";
+ arm "ARM";
+ i386 "Intel ia32";
+ ia64 "Intel ia64";
+ m68k "Motorola Mc680x0";
+ mips "MIPS (Big Endian)";
+ mipsel "MIPS (Little Endian)";
+ powerpc "PowerPC";
+ s390 "IBM S/390";
+ sh "Hitachi SuperH";
+ sparc "Sun SPARC/UltraSPARC";
+
+};
+
+Archive
+{
+
+ non-US
+ {
+ OriginServer "non-us.debian.org";
+ PrimaryMirror "non-us.debian.org";
+ Description "Non-US Archive for the Debian project";
+ };
+
+};
+
+Component
+{
+
+ non-US/main
+ {
+ Description "Main (non-US)";
+ MeetsDFSG "true";
+ };
+
+ non-US/contrib
+ {
+ Description "Contrib (non-US)";
+ MeetsDFSG "true";
+ };
+
+ non-US/non-free
+ {
+ Description "Software that fails to meet the DFSG (non-US)";
+ MeetsDFSG "false";
+ };
+
+};
+
+Section
+{
+
+ non-US;
+
+};
+
+Priority
+{
+
+ required 1;
+ important 2;
+ standard 3;
+ optional 4;
+ extra 5;
+ source 0; // i.e. unused
+
+};
+
+OverrideType
+{
+
+ deb;
+ udeb;
+ dsc;
+
+};
+
+Location
+{
+ /org/non-us.debian.org/ftp/dists/
+ {
+ Archive "non-US";
+ Type "legacy";
+ };
+
+ /org/non-us.debian.org/ftp/dists/old-proposed-updates/
+ {
+ Archive "non-US";
+ Type "legacy-mixed";
+ };
+
+ /org/non-us.debian.org/ftp/pool/
+ {
+ Archive "non-US";
+ Suites
+ {
+ OldStable;
+ Stable;
+ Proposed-Updates;
+ Testing;
+ Testing-Proposed-Updates;
+ Unstable;
+ };
+ Type "pool";
+ };
+
+};
+
+Urgency
+{
+ Default "low";
+ Valid
+ {
+ low;
+ medium;
+ high;
+ emergency;
+ critical;
+ };
+};
--- /dev/null
+# locations used by many scripts
+
+nonushome=/org/non-us.debian.org
+ftpdir=$nonushome/ftp
+indices=$ftpdir/indices-non-US
+archs="alpha arm hppa hurd-i386 i386 ia64 m68k mips mipsel powerpc s390 sh sparc"
+
+masterdir=$nonushome/katie
+overridedir=$nonushome/scripts/override
+dbdir=$nonushome/database/
+queuedir=$nonushome/queue/
+unchecked=$queuedir/unchecked/
+accepted=$queuedir/accepted/
+incoming=$nonushome/incoming/
+
+packagesfiles=packagesfiles-non-US
+sourcesfiles=sourcesfiles-non-US
+contentsfiles=contentsfiles-non-US
+
+copyoverrides="potato.contrib potato.contrib.src potato.main potato.main.src potato.non-free potato.non-free.src woody.contrib woody.contrib.src woody.main woody.main.src woody.non-free woody.non-free.src sarge.contrib sarge.contrib.src sarge.main sarge.main.src sarge.non-free sarge.non-free.src sid.contrib sid.contrib.src sid.main sid.main.src sid.non-free sid.non-free.src"
+
+PATH=$masterdir:$PATH
+umask 022
--- /dev/null
+Dir
+{
+ ArchiveDir "/org/security.debian.org/ftp/";
+ OverrideDir "/org/security.debian.org/override/";
+ CacheDir "/org/security.debian.org/katie-database/";
+};
+
+Default
+{
+ Packages::Compress ". gzip";
+ Sources::Compress "gzip";
+ DeLinkLimit 0;
+ FileMode 0664;
+}
+
+tree "dists/oldstable/updates"
+{
+ FileList "/org/security.debian.org/katie-database/dists/oldstable_updates/$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/security.debian.org/katie-database/dists/oldstable_updates/$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 mips mipsel m68k powerpc s390 sparc source";
+ BinOverride "override.woody.$(SECTION)";
+ ExtraOverride "override.woody.extra.$(SECTION)";
+ SrcOverride "override.woody.$(SECTION).src";
+ Contents " ";
+};
+
+tree "dists/stable/updates"
+{
+ FileList "/org/security.debian.org/katie-database/dists/stable_updates/$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/security.debian.org/katie-database/dists/stable_updates/$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha amd64 arm hppa i386 ia64 mips mipsel m68k powerpc s390 sparc source";
+ BinOverride "override.sarge.$(SECTION)";
+ ExtraOverride "override.sarge.extra.$(SECTION)";
+ SrcOverride "override.sarge.$(SECTION).src";
+ Contents " ";
+};
+
+tree "dists/testing/updates"
+{
+ FileList "/org/security.debian.org/katie-database/dists/testing_updates/$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/security.debian.org/katie-database/dists/testing_updates/$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 mips mipsel m68k powerpc s390 sparc source";
+ BinOverride "override.etch.$(SECTION)";
+ ExtraOverride "override.etch.extra.$(SECTION)";
+ SrcOverride "override.etch.$(SECTION).src";
+ Contents " ";
+};
--- /dev/null
+Dir
+{
+ ArchiveDir "/org/security.debian.org/buildd/";
+ OverrideDir "/org/security.debian.org/override/";
+ CacheDir "/org/security.debian.org/katie-database/";
+};
+
+Default
+{
+ Packages::Compress ". gzip";
+ Sources::Compress ". gzip";
+ DeLinkLimit 0;
+ FileMode 0664;
+}
+
+bindirectory "etch"
+{
+ Packages "etch/Packages";
+ Sources "etch/Sources";
+ Contents " ";
+
+ BinOverride "override.etch.all3";
+ BinCacheDB "packages-accepted-etch.db";
+ PathPrefix "";
+ Packages::Extensions ".deb .udeb";
+};
+
+bindirectory "woody"
+{
+ Packages "woody/Packages";
+ Sources "woody/Sources";
+ Contents " ";
+
+ BinOverride "override.woody.all3";
+ BinCacheDB "packages-accepted-woody.db";
+ PathPrefix "";
+ Packages::Extensions ".deb .udeb";
+};
+
+bindirectory "sarge"
+{
+ Packages "sarge/Packages";
+ Sources "sarge/Sources";
+ Contents " ";
+
+ BinOverride "override.sarge.all3";
+ BinCacheDB "packages-accepted-sarge.db";
+ PathPrefix "";
+ Packages::Extensions ".deb .udeb";
+};
+
--- /dev/null
+#! /bin/bash
+#
+# Executed after jennifer (merge there??)
+
+ARCHS_oldstable="alpha arm hppa i386 ia64 m68k mips mipsel powerpc sparc s390"
+ARCHS_stable="$ARCHS_oldstable"
+ARCHS_testing="$ARCHS_stable"
+DISTS="oldstable stable testing"
+SSH_SOCKET=~/.ssh/buildd.debian.org.socket
+
+set -e
+export SCRIPTVARS=/org/security.debian.org/katie/vars-security
+. $SCRIPTVARS
+
+if [ ! -e $ftpdir/Archive_Maintenance_In_Progress ]; then
+ cd $masterdir
+ for d in $DISTS; do
+ eval SOURCES_$d=`stat -c "%Y" $base/buildd/$d/Sources.gz`
+ eval PACKAGES_$d=`stat -c "%Y" $base/buildd/$d/Packages.gz`
+ done
+ apt-ftparchive -qq generate apt.conf.buildd-security
+ dists=
+ for d in $DISTS; do
+ eval NEW_SOURCES_$d=`stat -c "%Y" $base/buildd/$d/Sources.gz`
+ eval NEW_PACKAGES_$d=`stat -c "%Y" $base/buildd/$d/Packages.gz`
+ old=SOURCES_$d
+ new=NEW_$old
+ if [ ${!new} -gt ${!old} ]; then
+ if [ -z "$dists" ]; then
+ dists="$d"
+ else
+ dists="$dists $d"
+ fi
+ continue
+ fi
+ old=PACKAGES_$d
+ new=NEW_$old
+ if [ ${!new} -gt ${!old} ]; then
+ if [ -z "$dists" ]; then
+ dists="$d"
+ else
+ dists="$dists $d"
+ fi
+ continue
+ fi
+ done
+ if [ ! -z "$dists" ]; then
+ # setup ssh master process
+ ssh buildd@buildd -S $SSH_SOCKET -MN 2> /dev/null &
+ SSH_PID=$!
+ while [ ! -S $SSH_SOCKET ]; do
+ sleep 1
+ done
+ trap 'kill -TERM $SSH_PID' 0
+ for d in $dists; do
+ archs=ARCHS_$d
+ ARCHS=${!archs}
+ cd /org/security.debian.org/buildd/$d
+ for a in $ARCHS; do
+ quinn-diff -a /org/security.debian.org/buildd/Packages-arch-specific -A $a 2>/dev/null | ssh buildd@buildd -S $SSH_SOCKET wanna-build -d $d-security -b $a/build-db --merge-partial-quinn
+ ssh buildd@buildd -S $SSH_SOCKET wanna-build -d $d-security -A $a -b $a/build-db --merge-packages < Packages
+ done
+ done
+ fi
+fi
+
+ssh buildd@bester.farm.ftbfs.de -i ~/.ssh/id_bester sleep 1
--- /dev/null
+#! /bin/sh
+#
+# Executed daily via cron, out of katie's crontab.
+
+set -e
+export SCRIPTVARS=/org/security.debian.org/katie/vars-security
+. $SCRIPTVARS
+
+################################################################################
+
+# Fix overrides
+
+rsync -ql ftp-master::indices/override\* $overridedir
+
+cd $overridedir
+find . -name override\*.gz -type f -maxdepth 1 -mindepth 1 | xargs gunzip -f
+find . -type l -maxdepth 1 -mindepth 1 | xargs rm
+
+rm -fr non-US
+mkdir non-US
+cd non-US
+rsync -ql non-us::indices/override\* .
+find . -name override\*.gz -type f -maxdepth 1 -mindepth 1 | xargs gunzip
+find . -type l -maxdepth 1 -mindepth 1 | xargs rm
+for i in *; do
+ if [ -f ../$i ]; then
+ cat $i >> ../$i;
+ fi;
+done
+cd ..
+rm -fr non-US
+
+for suite in $suites; do
+ case $suite in
+ oldstable) override_suite=woody;;
+ stable) override_suite=sarge;;
+ testing) override_suite=etch;;
+ *) echo "Unknown suite type ($suite)"; exit 1;;
+ esac
+ for component in $components; do
+ for override_type in $override_types; do
+ case $override_type in
+ deb) type="" ;;
+ dsc) type=".src" ;;
+ udeb) type=".debian-installer" ;;
+ esac
+ # XXX this udeb special-casing is a hack; restructure it
+ if [ "$override_type" = "udeb" ]; then
+ if [ ! "$component" = "main" ]; then
+ continue;
+ fi
+ if [ "$suite" = "unstable" ]; then
+ $masterdir/natalie -q -S -t $override_type -s $suite -c updates/$component < override.$override_suite.$component$type
+ fi
+ else
+ $masterdir/natalie -q -S -t $override_type -s $suite -c updates/$component < override.$override_suite.$component$type
+ fi
+ case $suite in
+ oldstable)
+ if [ ! "$override_type" = "udeb" ]; then
+ $masterdir/natalie -q -a -t $override_type -s $suite -c updates/$component < override.sarge.$component$type
+ fi
+ $masterdir/natalie -q -a -t $override_type -s $suite -c updates/$component < override.sid.$component$type
+ ;;
+ stable)
+ $masterdir/natalie -q -a -t $override_type -s $suite -c updates/$component < override.sid.$component$type
+ ;;
+ testing)
+ $masterdir/natalie -q -a -t $override_type -s $suite -c updates/$component < override.sid.$component$type
+ ;;
+ *) echo "Unknown suite type ($suite)"; exit 1;;
+ esac
+ done
+ done
+done
+
+# Generate .all3 overrides for the buildd support
+for dist in woody sarge etch; do
+ rm -f override.$dist.all3
+ components="main contrib non-free";
+ if [ -f override.$dist.main.debian-installer ]; then
+ components="$components main.debian-installer";
+ fi
+ for component in $components; do
+ cat override.$dist.$component >> override.$dist.all3;
+ done;
+done
+
+################################################################################
+
+# Freshen Packages-Arch-Specific
+
+wget -q http://buildd.debian.org/quinn-diff/Packages-arch-specific -O $base/buildd/Packages-arch-specific
+
+################################################################################
+
+cd $masterdir
+shania
+rhona
+apt-ftparchive -q clean apt.conf-security
+apt-ftparchive -q clean apt.conf.buildd-security
+
+symlinks -d -r $ftpdir
+
+pg_dump obscurity > /org/security.debian.org/katie-backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
+
+# Vacuum the database
+set +e
+echo "VACUUM; VACUUM ANALYZE;" | psql obscurity 2>&1 | egrep -v "^NOTICE: Skipping \"pg_.*only table or database owner can VACUUM it$|^VACUUM$"
+set -e
+
+################################################################################
--- /dev/null
+#! /bin/sh
+
+set -e
+export SCRIPTVARS=/org/security.debian.org/katie/vars-security
+. $SCRIPTVARS
+
+cd $unchecked
+
+changes=$(find . -maxdepth 1 -mindepth 1 -type f -name \*.changes | sed -e "s,./,," | xargs)
+report=$queuedir/REPORT
+timestamp=$(date "+%Y-%m-%d %H:%M")
+
+if [ -z "$changes" ]; then
+ echo "$timestamp": Nothing to do >> $report
+ exit 0;
+fi;
+
+echo "$timestamp": "$changes" >> $report
+jennifer -a $changes >> $report
+echo "--" >> $report
+
+sh $masterdir/cron.buildd-security
--- /dev/null
+Dinstall
+{
+ PGPKeyring "/org/keyring.debian.org/keyrings/debian-keyring.pgp";
+ GPGKeyring "/org/keyring.debian.org/keyrings/debian-keyring.gpg";
+ SigningKeyring "/org/non-us.debian.org/s3kr1t/dot-gnupg/secring.gpg";
+ SigningPubKeyring "/org/non-us.debian.org/s3kr1t/dot-gnupg/pubring.gpg";
+ SigningKeyIds "4F368D5D";
+ SendmailCommand "/usr/sbin/sendmail -odq -oi -t";
+ MyEmailAddress "Debian Installer <installer@ftp-master.debian.org>";
+ MyAdminAddress "ftpmaster@debian.org";
+ MyHost "debian.org"; // used for generating user@my_host addresses in e.g. manual_reject()
+ MyDistribution "Debian"; // Used in emails
+ BugServer "bugs.debian.org";
+ PackagesServer "packages.debian.org";
+ LockFile "/org/security.debian.org/katie/lock";
+ Bcc "archive@ftp-master.debian.org";
+ // GroupOverrideFilename "override.group-maint";
+ FutureTimeTravelGrace 28800; // 8 hours
+ PastCutoffYear "1984";
+ SkipTime 300;
+ CloseBugs "false";
+ OverrideDisparityCheck "false";
+ BXANotify "false";
+ QueueBuildSuites
+ {
+ oldstable;
+ stable;
+ testing;
+ };
+ SecurityQueueHandling "true";
+ SecurityQueueBuild "true";
+ DefaultSuite "Testing";
+ SuiteSuffix "updates";
+ OverrideMaintainer "katie@security.debian.org";
+ StableDislocationSupport "false";
+ LegacyStableHasNoSections "false";
+};
+
+Julia
+{
+ ValidGID "800";
+ // Comma separated list of users who are in Postgres but not the passwd file
+ KnownPostgres "postgres,katie,www-data,udmsearch";
+};
+
+Helena
+{
+ Directories
+ {
+ byhand;
+ new;
+ accepted;
+ };
+};
+
+Shania
+{
+ Options
+ {
+ Days 14;
+ };
+ MorgueSubDir "shania";
+};
+
+Melanie
+{
+ Options
+ {
+ Suite "unstable";
+ };
+
+ MyEmailAddress "Debian Archive Maintenance <ftpmaster@ftp-master.debian.org>";
+ LogFile "/org/security.debian.org/katie-log/removals.txt";
+};
+
+Neve
+{
+ ExportDir "/org/security.debian.org/katie/neve-files/";
+};
+
+Rhona
+{
+ // How long (in seconds) dead packages are left before being killed
+ StayOfExecution 129600; // 1.5 days
+ QueueBuildStayOfExecution 86400; // 24 hours
+ MorgueSubDir "rhona";
+ OverrideFilename "override.source-only";
+};
+
+Amber
+{
+ ComponentMappings
+ {
+ main "ftp-master.debian.org:/pub/UploadQueue";
+ contrib "ftp-master.debian.org:/pub/UploadQueue";
+ non-free "ftp-master.debian.org:/pub/UploadQueue";
+ non-US/main "non-us.debian.org:/pub/UploadQueue";
+ non-US/contrib "non-us.debian.org:/pub/UploadQueue";
+ non-US/non-free "non-us.debian.org:/pub/UploadQueue";
+ };
+};
+
+Suite
+{
+ // Priority determines which suite is used for the Maintainers file
+ // as generated by charisma (highest wins).
+
+ Oldstable
+ {
+ Components
+ {
+ updates/main;
+ updates/contrib;
+ updates/non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "katie@security.debian.org";
+ Version "3.0";
+ Origin "Debian";
+ Label "Debian-Security";
+ Description "Debian 3.0 Security Updates";
+ CodeName "woody";
+ OverrideCodeName "woody";
+ CopyKatie "/org/security.debian.org/queue/done/";
+ };
+
+ Stable
+ {
+ Components
+ {
+ updates/main;
+ updates/contrib;
+ updates/non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ amd64;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "katie@security.debian.org";
+ Version "3.1";
+ Origin "Debian";
+ Label "Debian-Security";
+ Description "Debian 3.1 Security Updates";
+ CodeName "sarge";
+ OverrideCodeName "sarge";
+ CopyKatie "/org/security.debian.org/queue/done/";
+ };
+
+ Testing
+ {
+ Components
+ {
+ updates/main;
+ updates/contrib;
+ updates/non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ amd64;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "katie@security.debian.org";
+ Version "x.y";
+ Origin "Debian";
+ Label "Debian-Security";
+ Description "Debian x.y Security Updates";
+ CodeName "etch";
+ OverrideCodeName "etch";
+ CopyKatie "/org/security.debian.org/queue/done/";
+ };
+
+};
+
+SuiteMappings
+{
+ "silent-map oldstable-security oldstable";
+ "silent-map stable-security stable";
+ // JT - FIXME, hackorama
+ // "silent-map testing-security stable";
+ "silent-map testing-security testing";
+};
+
+Dir
+{
+ Root "/org/security.debian.org/ftp/";
+ Pool "/org/security.debian.org/ftp/pool/";
+ Katie "/org/security.debian.org/katie/";
+ Templates "/org/security.debian.org/katie/templates/";
+ PoolRoot "pool/";
+ Override "/org/security.debian.org/override/";
+ Lock "/org/security.debian.org/lock/";
+ Lists "/org/security.debian.org/katie-database/dists/";
+ Log "/org/security.debian.org/katie-log/";
+ Morgue "/org/security.debian.org/morgue/";
+ MorgueReject "reject";
+ QueueBuild "/org/security.debian.org/buildd/";
+ Queue
+ {
+ Accepted "/org/security.debian.org/queue/accepted/";
+ Byhand "/org/security.debian.org/queue/byhand/";
+ Done "/org/security.debian.org/queue/done/";
+ Holding "/org/security.debian.org/queue/holding/";
+ New "/org/security.debian.org/queue/new/";
+ Reject "/org/security.debian.org/queue/reject/";
+ Unchecked "/org/security.debian.org/queue/unchecked/";
+
+ Embargoed "/org/security.debian.org/queue/embargoed/";
+ Unembargoed "/org/security.debian.org/queue/unembargoed/";
+ Disembargo "/org/security.debian.org/queue/unchecked-disembargo/";
+ };
+};
+
+DB
+{
+ Name "obscurity";
+ Host "";
+ Port -1;
+
+};
+
+Architectures
+{
+
+ source "Source";
+ all "Architecture Independent";
+ alpha "DEC Alpha";
+ hppa "HP PA RISC";
+ arm "ARM";
+ i386 "Intel ia32";
+ ia64 "Intel ia64";
+ m68k "Motorola Mc680x0";
+ mips "MIPS (Big Endian)";
+ mipsel "MIPS (Little Endian)";
+ powerpc "PowerPC";
+ s390 "IBM S/390";
+ sparc "Sun SPARC/UltraSPARC";
+ amd64 "AMD x86_64 (AMD64)";
+
+};
+
+Archive
+{
+
+ security
+ {
+ OriginServer "security.debian.org";
+ PrimaryMirror "security.debian.org";
+ Description "Security Updates for the Debian project";
+ };
+
+};
+
+Component
+{
+
+ updates/main
+ {
+ Description "Main (updates)";
+ MeetsDFSG "true";
+ };
+
+ updates/contrib
+ {
+ Description "Contrib (updates)";
+ MeetsDFSG "true";
+ };
+
+ updates/non-free
+ {
+ Description "Software that fails to meet the DFSG";
+ MeetsDFSG "false";
+ };
+
+};
+
+ComponentMappings
+{
+ "main updates/main";
+ "contrib updates/contrib";
+ "non-free updates/non-free";
+ "non-US/main updates/main";
+ "non-US/contrib updates/contrib";
+ "non-US/non-free updates/non-free";
+};
+
+Section
+{
+ admin;
+ base;
+ comm;
+ debian-installer;
+ devel;
+ doc;
+ editors;
+ electronics;
+ embedded;
+ games;
+ gnome;
+ graphics;
+ hamradio;
+ interpreters;
+ kde;
+ libdevel;
+ libs;
+ mail;
+ math;
+ misc;
+ net;
+ news;
+ oldlibs;
+ otherosfs;
+ perl;
+ python;
+ science;
+ shells;
+ sound;
+ tex;
+ text;
+ utils;
+ web;
+ x11;
+ non-US;
+};
+
+Priority
+{
+ required 1;
+ important 2;
+ standard 3;
+ optional 4;
+ extra 5;
+ source 0; // i.e. unused
+};
+
+OverrideType
+{
+ deb;
+ udeb;
+ dsc;
+};
+
+Location
+{
+ /org/security.debian.org/ftp/dists/
+ {
+ Archive "security";
+ Type "legacy";
+ };
+
+ /org/security.debian.org/ftp/pool/
+ {
+ Archive "security";
+ Suites
+ {
+ Oldstable;
+ Stable;
+ Testing;
+ };
+ Type "pool";
+ };
+};
+
+Urgency
+{
+ Default "low";
+ Valid
+ {
+ low;
+ medium;
+ high;
+ emergency;
+ critical;
+ };
+};
--- /dev/null
+# locations used by many scripts
+
+base=/org/security.debian.org
+ftpdir=$base/ftp/
+masterdir=$base/katie/
+overridedir=$base/override
+queuedir=$base/queue/
+unchecked=$queuedir/unchecked/
+accepted=$queuedir/accepted/
+done=$queuedir/done/
+
+uploadhost=ftp-master.debian.org
+uploaddir=/pub/UploadQueue/
+
+components="main non-free contrib"
+suites="oldstable stable testing"
+override_types="deb dsc udeb"
+
+PATH=$masterdir:$PATH
+umask 022
+
--- /dev/null
+This file maps each file available in the Debian GNU/Linux system to
+the package from which it originates. It includes packages from the
+DIST distribution for the ARCH architecture.
+
+You can use this list to determine which package contains a specific
+file, or whether or not a specific file is available. The list is
+updated weekly, each architecture on a different day.
+
+When a file is contained in more than one package, all packages are
+listed. When a directory is contained in more than one package, only
+the first is listed.
+
+The best way to search quickly for a file is with the Unix `grep'
+utility, as in `grep <regular expression> CONTENTS':
+
+ $ grep nose Contents
+ etc/nosendfile net/sendfile
+ usr/X11R6/bin/noseguy x11/xscreensaver
+ usr/X11R6/man/man1/noseguy.1x.gz x11/xscreensaver
+ usr/doc/examples/ucbmpeg/mpeg_encode/nosearch.param graphics/ucbmpeg
+ usr/lib/cfengine/bin/noseyparker admin/cfengine
+
+This list contains files in all packages, even though not all of the
+packages are installed on an actual system at once. If you want to
+find out which packages on an installed Debian system provide a
+particular file, you can use `dpkg --search <filename>':
+
+ $ dpkg --search /usr/bin/dselect
+ dpkg: /usr/bin/dselect
+
+
+FILE LOCATION
--- /dev/null
+Dir
+{
+ ArchiveDir "/org/ftp.debian.org/ftp/";
+ OverrideDir "/org/ftp.debian.org/scripts/override/";
+ CacheDir "/org/ftp.debian.org/database/";
+};
+
+Default
+{
+ Packages::Compress ". gzip";
+ Sources::Compress "gzip";
+ Contents::Compress "gzip";
+ DeLinkLimit 0;
+ MaxContentsChange 25000;
+ FileMode 0664;
+}
+
+TreeDefault
+{
+ Contents::Header "/org/ftp.debian.org/katie/Contents.top";
+};
+
+tree "dists/proposed-updates"
+{
+ FileList "/org/ftp.debian.org/database/dists/proposed-updates_$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/ftp.debian.org/database/dists/proposed-updates_$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
+ BinOverride "override.sarge.$(SECTION)";
+ ExtraOverride "override.sarge.extra.$(SECTION)";
+ SrcOverride "override.sarge.$(SECTION).src";
+ Contents " ";
+};
+
+tree "dists/testing"
+{
+ FakeDI "dists/unstable";
+ FileList "/org/ftp.debian.org/database/dists/testing_$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/ftp.debian.org/database/dists/testing_$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
+ BinOverride "override.etch.$(SECTION)";
+ ExtraOverride "override.etch.extra.$(SECTION)";
+ SrcOverride "override.etch.$(SECTION).src";
+ Packages::Compress ". gzip bzip2";
+ Sources::Compress "gzip bzip2";
+};
+
+tree "dists/testing-proposed-updates"
+{
+ FileList "/org/ftp.debian.org/database/dists/testing-proposed-updates_$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/ftp.debian.org/database/dists/testing-proposed-updates_$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
+ BinOverride "override.etch.$(SECTION)";
+ ExtraOverride "override.etch.extra.$(SECTION)";
+ SrcOverride "override.etch.$(SECTION).src";
+ Contents " ";
+};
+
+tree "dists/unstable"
+{
+ FileList "/org/ftp.debian.org/database/dists/unstable_$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/ftp.debian.org/database/dists/unstable_$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc source";
+ BinOverride "override.sid.$(SECTION)";
+ ExtraOverride "override.sid.extra.$(SECTION)";
+ SrcOverride "override.sid.$(SECTION).src";
+ Packages::Compress "gzip bzip2";
+ Sources::Compress "gzip bzip2";
+};
+
+// debian-installer
+
+tree "dists/testing/main"
+{
+ FileList "/org/ftp.debian.org/database/dists/testing_main_$(SECTION)_binary-$(ARCH).list";
+ Sections "debian-installer";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc";
+ BinOverride "override.etch.main.$(SECTION)";
+ SrcOverride "override.etch.main.src";
+ BinCacheDB "packages-debian-installer-$(ARCH).db";
+ Packages::Extensions ".udeb";
+ Contents "$(DIST)/../Contents-udeb";
+};
+
+tree "dists/testing-proposed-updates/main"
+{
+ FileList "/org/ftp.debian.org/database/dists/testing-proposed-updates_main_$(SECTION)_binary-$(ARCH).list";
+ Sections "debian-installer";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc";
+ BinOverride "override.etch.main.$(SECTION)";
+ SrcOverride "override.etch.main.src";
+ BinCacheDB "packages-debian-installer-$(ARCH).db";
+ Packages::Extensions ".udeb";
+ Contents " ";
+};
+
+tree "dists/unstable/main"
+{
+ FileList "/org/ftp.debian.org/database/dists/unstable_main_$(SECTION)_binary-$(ARCH).list";
+ Sections "debian-installer";
+ Architectures "alpha arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc";
+ BinOverride "override.sid.main.$(SECTION)";
+ SrcOverride "override.sid.main.src";
+ BinCacheDB "packages-debian-installer-$(ARCH).db";
+ Packages::Extensions ".udeb";
+ Contents "Contents $(DIST)/../Contents-udeb";
+};
+
+// Experimental
+
+tree "project/experimental"
+{
+ FileList "/org/ftp.debian.org/database/dists/experimental_$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/ftp.debian.org/database/dists/experimental_$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa hurd-i386 i386 ia64 mips mipsel m68k powerpc s390 sh sparc source";
+ BinOverride "override.sid.$(SECTION)";
+ SrcOverride "override.sid.$(SECTION).src";
+ Contents " ";
+};
--- /dev/null
+Dir
+{
+ ArchiveDir "/org/incoming.debian.org/buildd/";
+ OverrideDir "/org/ftp.debian.org/scripts/override/";
+ CacheDir "/org/ftp.debian.org/database/";
+};
+
+Default
+{
+ Packages::Compress "gzip";
+ Sources::Compress "gzip";
+ DeLinkLimit 0;
+ FileMode 0664;
+}
+
+bindirectory "incoming"
+{
+ Packages "Packages";
+ Contents " ";
+
+ BinOverride "override.sid.all3";
+ BinCacheDB "packages-accepted.db";
+
+ FileList "/org/ftp.debian.org/database/dists/unstable_accepted.list";
+
+ PathPrefix "";
+ Packages::Extensions ".deb .udeb";
+};
+
+bindirectory "incoming/"
+{
+ Sources "Sources";
+ BinOverride "override.sid.all3";
+ SrcOverride "override.sid.all3.src";
+ SourceFileList "/org/ftp.debian.org/database/dists/unstable_accepted.list";
+};
+
--- /dev/null
+Dir
+{
+ ArchiveDir "/org/ftp.debian.org/ftp/";
+ OverrideDir "/org/ftp.debian.org/scripts/override/";
+ CacheDir "/org/ftp.debian.org/database/";
+};
+
+Default
+{
+ Packages::Compress ". gzip";
+ Sources::Compress "gzip";
+ Contents::Compress "gzip";
+ DeLinkLimit 0;
+ FileMode 0664;
+}
+
+TreeDefault
+{
+ Contents::Header "/org/ftp.debian.org/katie/Contents.top";
+};
+
+tree "dists/stable"
+{
+ FileList "/org/ftp.debian.org/database/dists/stable_$(SECTION)_binary-$(ARCH).list";
+ SourceFileList "/org/ftp.debian.org/database/dists/stable_$(SECTION)_source.list";
+ Sections "main contrib non-free";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc source";
+ BinOverride "override.sarge.$(SECTION)";
+ ExtraOverride "override.sarge.extra.$(SECTION)";
+ SrcOverride "override.sarge.$(SECTION).src";
+};
+
+// debian-installer
+
+tree "dists/stable/main"
+{
+ FileList "/org/ftp.debian.org/database/dists/stable_main_$(SECTION)_binary-$(ARCH).list";
+ Sections "debian-installer";
+ Architectures "alpha arm hppa i386 ia64 m68k mips mipsel powerpc s390 sparc";
+ BinOverride "override.sarge.main.$(SECTION)";
+ SrcOverride "override.sarge.main.src";
+ BinCacheDB "packages-debian-installer-$(ARCH).db";
+ Packages::Extensions ".udeb";
+ Contents " ";
+};
--- /dev/null
+#! /bin/sh
+#
+# Executed hourly via cron, out of katie's crontab.
+
+ARCHS="alpha arm hppa i386 ia64 m68k mips mipsel powerpc sparc s390"
+
+set -e
+export SCRIPTVARS=/org/ftp.debian.org/katie/vars
+. $SCRIPTVARS
+
+LOCKFILE="/org/wanna-build/tmp/DB_Maintenance_In_Progress"
+
+if [ ! -e "$ftpdir/Archive_Maintenance_In_Progress" ]; then
+ if lockfile -r3 $LOCKFILE; then
+ cleanup() {
+ rm -f "$LOCKFILE"
+ }
+ trap cleanup 0
+ cd /org/incoming.debian.org/buildd
+ cp /org/wanna-build/tmp/Sources.unstable-old Sources
+ gzip -cd Sources.gz >> Sources
+ for a in $ARCHS; do
+ cp /org/wanna-build/tmp/Packages.unstable.$a-old Packages
+ gzip -cd /org/incoming.debian.org/buildd/Packages.gz >> Packages
+ quinn-diff -i -a /org/buildd.debian.org/web/quinn-diff/Packages-arch-specific -A $a 2>/dev/null | perl -pe 's#^(non-US/)?(non-free)/.*$##msg' | wanna-build -b $a/build-db --merge-partial-quinn 2> /dev/null
+ wanna-build -A $a -b $a/build-db --merge-packages Packages 2>/dev/null
+ done
+ rm -f Sources Packages
+ fi
+fi
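The cron.buildd script above guards against concurrent runs with procmail's lockfile(1) plus an EXIT trap. A standalone sketch of the same lock-then-trap pattern, using mkdir as a portable stand-in for lockfile(1) (the lock path is illustrative):

```shell
#!/bin/sh
# mkdir is atomic: it fails if the lock directory already exists, so only
# one instance of the job proceeds.  The EXIT trap removes the lock
# however the job terminates.  Run in a subshell so the trap fires here.
LOCKDIR="./maintenance.lock"

(
    if mkdir "$LOCKDIR" 2>/dev/null; then
        trap 'rmdir "$LOCKDIR"' 0      # cleanup on any exit
        echo "lock acquired, doing work"
    else
        echo "another run in progress, skipping"
    fi
)
```

After the subshell exits, its EXIT trap has already removed the lock, so the next scheduled run can proceed.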
--- /dev/null
+#! /bin/sh
+#
+# Executed daily via cron, out of katie's crontab.
+
+set -e
+export SCRIPTVARS=/org/ftp.debian.org/katie/vars
+. $SCRIPTVARS
+
+################################################################################
+
+echo Archive maintenance started at $(date +%X)
+
+NOTICE="$ftpdir/Archive_Maintenance_In_Progress"
+LOCKCU="$lockdir/daily.lock"
+LOCKAC="$lockdir/unchecked.lock"
+
+cleanup() {
+ rm -f "$NOTICE"
+ rm -f "$LOCKCU"
+}
+trap cleanup 0
+
+rm -f "$NOTICE"
+lockfile -l 3600 $LOCKCU
+cat > "$NOTICE" <<EOF
+Packages are currently being installed and indices rebuilt.
+Maintenance is automatic, starting at 13:52 US Central time, and
+ending at about 15:30. This file is then removed.
+
+You should not mirror the archive during this period.
+EOF
+
+################################################################################
+
+echo "Creating pre-daily-cron-job backup of projectb database..."
+pg_dump projectb > /org/ftp.debian.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
+
+################################################################################
+
+update-bugdoctxt
+update-mirrorlists
+update-mailingliststxt
+
+################################################################################
+
+lockfile $LOCKAC
+cd $accepted
+rm -f REPORT
+kelly -pa *.changes | tee REPORT | \
+ mail -s "Install for $(date +%D)" ftpmaster@ftp-master.debian.org
+chgrp debadmin REPORT
+chmod 664 REPORT
+
+cd $masterdir
+cindy
+rm -f $LOCKAC
+
+symlinks -d -r $ftpdir
+
+cd $masterdir
+jenna
+
+# Update fingerprints
+# [JT - disabled, emilie currently can ask questions]
+#emilie
+
+# Generate override files
+cd $overridedir
+denise
+
+# Update task overrides for testing and unstable
+# [JT 2004-02-04 disabled; copying in by hand for now]
+#cat $extoverridedir/task | perl -ne 'print if /^\S+\sTask\s\S+(,\s*\S+)*$/;' > override.sarge.extra.main
+#cat $extoverridedir/task | perl -ne 'print if /^\S+\sTask\s\S+(,\s*\S+)*$/;' > override.sid.extra.main
+
+# FIXME
+rm -f override.potato.all3 override.sid.all3
+for i in main contrib non-free; do cat override.potato.$i >> override.potato.all3; done
+for i in main contrib non-free main.debian-installer; do cat override.sid.$i >> override.sid.all3; done
+
+# Generate Packages and Sources files
+cd $masterdir
+apt-ftparchive generate apt.conf
+# Generate *.diff/ incremental updates
+tiffani
+# Generate Release files
+ziyi
+
+# Clean out old packages
+rhona
+shania
+
+# The unstable_accepted list needs to be rebuilt, as files have moved.  Due
+# to unaccepts, we need to update it before wanna-build is updated.
+psql projectb -A -t -q -c "SELECT filename FROM queue_build WHERE suite = 5 AND queue = 0 AND in_queue = true AND filename ~ 'd(sc|eb)$'" > $dbdir/dists/unstable_accepted.list
+apt-ftparchive generate apt.conf.buildd
+
+mkmaintainers
+copyoverrides
+mklslar
+mkchecksums
+#
+# Fetch bugs information before unchecked processing is allowed again.
+/org/ftp.debian.org/testing/britney bugs
+rm -f $NOTICE
+sudo -u archvsync /home/archvsync/pushmerkel
+
+rm -f $LOCKCU
+echo Archive maintenance finished at $(date +%X)
+
+################################################################################
+
+echo "Creating post-daily-cron-job backup of projectb database..."
+POSTDUMP=/org/ftp.debian.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
+pg_dump projectb > $POSTDUMP
+(cd /org/ftp.debian.org/backup; ln -sf $POSTDUMP current)
+
+################################################################################
+
+# Vacuum the database
+echo "VACUUM; VACUUM ANALYZE;" | psql projectb 2>&1 | grep -v "^NOTICE: Skipping.*only table owner can VACUUM it$"
+
+################################################################################
+
+# Send a report on NEW/BYHAND packages
+helena | mail -e -s "NEW and BYHAND on $(date +%D)" ftpmaster@ftp-master.debian.org
+# and one on crufty packages
+rene | tee $webdir/rene-daily.txt | mail -e -s "rene run for $(date +%D)" ftpmaster@ftp-master.debian.org
+
+################################################################################
+
+# Run billie
+
+#time billie
+
+################################################################################
+
+ulimit -m 90000 -d 90000 -s 10000 -v 90000
+
+run-parts --report /org/ftp.debian.org/scripts/distmnt
+
+echo Daily cron scripts successful.
+# Stats pr0n
+
+cd $masterdir
+update-ftpstats $base/log/* > $base/misc/ftpstats.data
+R --slave --vanilla < $base/misc/ftpstats.R
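The post-run backup step in cron.daily writes a timestamped pg_dump and repoints a `current` symlink at it. A self-contained sketch of that rotation, with a plain file standing in for the database dump (all paths local and illustrative):

```shell
#!/bin/sh
# Sketch of the dump-then-relink rotation: each run leaves a timestamped
# dump plus a "current" symlink pointing at the newest one.  echo stands
# in for pg_dump here.
BACKUPDIR="./backup"
mkdir -p "$BACKUPDIR"
DUMP="$BACKUPDIR/dump_$(date +%Y.%m.%d-%H:%M:%S)"
echo "-- dump contents would go here" > "$DUMP"
# ln -sf replaces any existing symlink, so "current" always tracks the
# latest dump without ever dangling between runs.
(cd "$BACKUPDIR" && ln -sf "$(basename "$DUMP")" current)
```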
--- /dev/null
+#! /bin/sh
+#
+# Executed hourly via cron, out of troup's crontab.
+
+set -e
+export SCRIPTVARS=/org/ftp.debian.org/katie/vars
+. $SCRIPTVARS
+
+cd $masterdir
+julia
+helena -n > $webdir/new.html
--- /dev/null
+#!/bin/sh
+#
+# Run at the beginning of the month via cron, out of katie's crontab.
+
+set -e
+export SCRIPTVARS=/org/ftp.debian.org/katie/vars
+. $SCRIPTVARS
+
+################################################################################
+
+DATE=$(date -d yesterday +%y%m)
+
+cd /org/ftp.debian.org/mail/archive
+for m in mail bxamail; do
+ if [ -f $m ]; then
+ mv $m ${m}-$DATE
+ sleep 20
+ gzip -9 ${m}-$DATE
+ chgrp debadmin ${m}-$DATE.gz
+ chmod 660 ${m}-$DATE.gz
+ fi
+done
+
+################################################################################
--- /dev/null
+#! /bin/sh
+
+set -e
+export SCRIPTVARS=/org/ftp.debian.org/katie/vars
+. $SCRIPTVARS
+
+LOCKFILE="$lockdir/unchecked.lock"
+NOTICE="$lockdir/daily.lock"
+
+cleanup() {
+ rm -f "$LOCKFILE"
+ if [ ! -z "$LOCKDAILY" ]; then
+ rm -f "$NOTICE"
+ fi
+}
+
+# only run one cron.unchecked
+if lockfile -r3 $LOCKFILE; then
+ trap cleanup 0
+ cd $unchecked
+
+ changes=$(find . -maxdepth 1 -mindepth 1 -type f -name \*.changes | sed -e "s,./,," | xargs)
+ report=$queuedir/REPORT
+ timestamp=$(date "+%Y-%m-%d %H:%M")
+
+ if [ ! -z "$changes" ]; then
+ echo "$timestamp": "$changes" >> $report
+ jennifer -a $changes >> $report
+ echo "--" >> $report
+
+ if lockfile -r3 $NOTICE; then
+ LOCKDAILY="YES"
+ psql projectb -A -t -q -c "SELECT filename FROM queue_build WHERE queue = 0 AND suite = 5 AND in_queue = true AND filename ~ 'd(sc|eb)$'" > $dbdir/dists/unstable_accepted.list
+ cd $overridedir
+ denise >/dev/null 2>&1
+ rm -f override.sid.all3 override.sid.all3.src
+ for i in main contrib non-free main.debian-installer; do
+ cat override.sid.$i >> override.sid.all3
+ if [ "$i" != "main.debian-installer" ]; then
+ cat override.sid.$i.src >> override.sid.all3.src
+ fi
+ done
+ cd $masterdir
+ apt-ftparchive -qq generate apt.conf.buildd
+ . $masterdir/cron.buildd
+ fi
+ else
+ echo "$timestamp": Nothing to do >> $report
+ fi
+fi
+
--- /dev/null
+#!/bin/sh
+#
+# Run once a week via cron, out of katie's crontab.
+
+set -e
+export SCRIPTVARS=/org/ftp.debian.org/katie/vars
+. $SCRIPTVARS
+
+################################################################################
+
+# Purge empty directories
+
+if [ ! -z "$(find $ftpdir/pool/ -type d -empty)" ]; then
+ find $ftpdir/pool/ -type d -empty | xargs rmdir
+fi
+
+# Clean up apt-ftparchive's databases
+
+cd $masterdir
+apt-ftparchive -q clean apt.conf
+apt-ftparchive -q clean apt.conf.buildd
+
+################################################################################
--- /dev/null
+Dinstall
+{
+ PGPKeyring "/org/keyring.debian.org/keyrings/debian-keyring.pgp";
+ GPGKeyring "/org/keyring.debian.org/keyrings/debian-keyring.gpg";
+ SigningKeyring "/org/ftp.debian.org/s3kr1t/dot-gnupg/secring.gpg";
+ SigningPubKeyring "/org/ftp.debian.org/s3kr1t/dot-gnupg/pubring.gpg";
+ SigningKeyIds "4F368D5D";
+ SendmailCommand "/usr/sbin/sendmail -odq -oi -t";
+ MyEmailAddress "Debian Installer <installer@ftp-master.debian.org>";
+ MyAdminAddress "ftpmaster@debian.org";
+ MyHost "debian.org"; // used for generating user@my_host addresses in e.g. manual_reject()
+ MyDistribution "Debian"; // Used in emails
+ BugServer "bugs.debian.org";
+ PackagesServer "packages.debian.org";
+ TrackingServer "packages.qa.debian.org";
+ LockFile "/org/ftp.debian.org/lock/dinstall.lock";
+ Bcc "archive@ftp-master.debian.org";
+ GroupOverrideFilename "override.group-maint";
+ FutureTimeTravelGrace 28800; // 8 hours
+ PastCutoffYear "1984";
+ SkipTime 300;
+ BXANotify "true";
+ CloseBugs "true";
+ OverrideDisparityCheck "true";
+ StableDislocationSupport "false";
+ DefaultSuite "unstable";
+ QueueBuildSuites
+ {
+ unstable;
+ };
+ Reject
+ {
+ NoSourceOnly "true";
+ };
+};
+
+Tiffani
+{
+ Options
+ {
+ TempDir "/org/ftp.debian.org/tiffani";
+ MaxDiffs { Default 90; };
+ };
+};
+
+Alicia
+{
+ MyEmailAddress "Debian FTP Masters <ftpmaster@ftp-master.debian.org>";
+};
+
+Billie
+{
+ FTPPath "/org/ftp.debian.org/ftp";
+ TreeRootPath "/org/ftp.debian.org/scratch/dsilvers/treeroots";
+ TreeDatabasePath "/org/ftp.debian.org/scratch/dsilvers/treedbs";
+ BasicTrees { alpha; arm; hppa; hurd-i386; i386; ia64; mips; mipsel; powerpc; s390; sparc; m68k };
+ CombinationTrees
+ {
+ popular { i386; powerpc; all; source; };
+ source { source; };
+ everything { source; all; alpha; arm; hppa; hurd-i386; i386; ia64; mips; mipsel; powerpc; s390; sparc; m68k; };
+ };
+};
+
+Julia
+{
+ ValidGID "800";
+ // Comma-separated list of users who are in Postgres but not the passwd file
+ KnownPostgres "postgres,katie";
+};
+
+Shania
+{
+ Options
+ {
+ Days 14;
+ };
+ MorgueSubDir "shania";
+};
+
+Natalie
+{
+ Options
+ {
+ Component "main";
+ Suite "unstable";
+ Type "deb";
+ };
+
+ ComponentPosition "prefix"; // Whether the component is prepended or appended to the section name
+};
+
+Melanie
+{
+ Options
+ {
+ Suite "unstable";
+ };
+
+ MyEmailAddress "Debian Archive Maintenance <ftpmaster@ftp-master.debian.org>";
+ LogFile "/org/ftp.debian.org/web/removals.txt";
+ Bcc "removed-packages@qa.debian.org";
+};
+
+Neve
+{
+ ExportDir "/org/ftp.debian.org/katie/neve-files/";
+};
+
+Lauren
+{
+ StableRejector "Martin (Joey) Schulze <joey@debian.org>";
+ MoreInfoURL "http://people.debian.org/~joey/3.1r2/";
+};
+
+Emilie
+{
+ LDAPDn "ou=users,dc=debian,dc=org";
+ LDAPServer "db.debian.org";
+ ExtraKeyrings
+ {
+ "/org/keyring.debian.org/keyrings/removed-keys.pgp";
+ "/org/keyring.debian.org/keyrings/removed-keys.gpg";
+ "/org/keyring.debian.org/keyrings/extra-keys.pgp";
+ };
+ KeyServer "wwwkeys.eu.pgp.net";
+};
+
+Rhona
+{
+ // How long (in seconds) dead packages are left before being killed
+ StayOfExecution 129600; // 1.5 days
+ QueueBuildStayOfExecution 86400; // 24 hours
+ MorgueSubDir "rhona";
+};
+
+Lisa
+{
+ AcceptedLockFile "/org/ftp.debian.org/lock/unchecked.lock";
+};
+
+Cindy
+{
+ OverrideSuites
+ {
+ Stable
+ {
+ Process "0";
+ };
+
+ Testing
+ {
+ Process "1";
+ OriginSuite "Unstable";
+ };
+
+ Unstable
+ {
+ Process "1";
+ };
+ };
+};
+
+Suite
+{
+ Oldstable
+ {
+ Components
+ {
+ main;
+ contrib;
+ non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "debian-changes@lists.debian.org";
+ Version "3.0r6";
+ Origin "Debian";
+ Description "Debian 3.0r6 Released 31 May 2005";
+ CodeName "woody";
+ OverrideCodeName "woody";
+ Priority "1";
+ Untouchable "1";
+ };
+
+ Stable
+ {
+ Components
+ {
+ main;
+ contrib;
+ non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "debian-changes@lists.debian.org";
+ Version "3.1r1";
+ Origin "Debian";
+ Description "Debian 3.1r1 Released 17 December 2005";
+ CodeName "sarge";
+ OverrideCodeName "sarge";
+ Priority "3";
+ Untouchable "1";
+ ChangeLogBase "dists/stable/";
+ UdebComponents
+ {
+ main;
+ };
+ };
+
+ Proposed-Updates
+ {
+ Components
+ {
+ main;
+ contrib;
+ non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "debian-changes@lists.debian.org";
+ CopyChanges "dists/proposed-updates/";
+ CopyKatie "/org/ftp.debian.org/queue/proposed-updates/";
+ Version "3.1-updates";
+ Origin "Debian";
+ Description "Debian 3.1 Proposed Updates - Not Released";
+ CodeName "proposed-updates";
+ OverrideCodeName "sarge";
+ OverrideSuite "stable";
+ Priority "4";
+ VersionChecks
+ {
+ MustBeNewerThan
+ {
+ Stable;
+ };
+ MustBeOlderThan
+ {
+ Testing;
+ Unstable;
+ Experimental;
+ };
+ Enhances
+ {
+ Stable;
+ };
+ };
+ UdebComponents
+ {
+ main;
+ };
+ };
+
+ Testing
+ {
+ Components
+ {
+ main;
+ contrib;
+ non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "debian-testing-changes@lists.debian.org";
+ Origin "Debian";
+ Description "Debian Testing distribution - Not Released";
+ CodeName "etch";
+ OverrideCodeName "etch";
+ Priority "5";
+ UdebComponents
+ {
+ main;
+ };
+ };
+
+ Testing-Proposed-Updates
+ {
+ Components
+ {
+ main;
+ contrib;
+ non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sparc;
+ };
+ Announce "debian-testing-changes@lists.debian.org";
+ Origin "Debian";
+ Description "Debian Testing distribution updates - Not Released";
+ CodeName "testing-proposed-updates";
+ OverrideCodeName "etch";
+ OverrideSuite "testing";
+ Priority "6";
+ VersionChecks
+ {
+ MustBeNewerThan
+ {
+ Stable;
+ Proposed-Updates;
+ Testing;
+ };
+ MustBeOlderThan
+ {
+ Unstable;
+ Experimental;
+ };
+ Enhances
+ {
+ Testing;
+ };
+ };
+ UdebComponents
+ {
+ main;
+ };
+ };
+
+ Unstable
+ {
+ Components
+ {
+ main;
+ contrib;
+ non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ hurd-i386;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sh;
+ sparc;
+ };
+ Announce "debian-devel-changes@lists.debian.org";
+ Origin "Debian";
+ Description "Debian Unstable - Not Released";
+ CodeName "sid";
+ OverrideCodeName "sid";
+ Priority "7";
+ VersionChecks
+ {
+ MustBeNewerThan
+ {
+ Stable;
+ Proposed-Updates;
+ Testing;
+ Testing-Proposed-Updates;
+ };
+ };
+ UdebComponents
+ {
+ main;
+ };
+ };
+
+ Experimental
+ {
+ Components
+ {
+ main;
+ contrib;
+ non-free;
+ };
+ Architectures
+ {
+ source;
+ all;
+ alpha;
+ arm;
+ hppa;
+ hurd-i386;
+ i386;
+ ia64;
+ m68k;
+ mips;
+ mipsel;
+ powerpc;
+ s390;
+ sh;
+ sparc;
+ };
+ Announce "debian-devel-changes@lists.debian.org";
+ Origin "Debian";
+ Description "Experimental packages - not released; use at your own risk.";
+ CodeName "experimental";
+ NotAutomatic "yes";
+ OverrideCodeName "sid";
+ OverrideSuite "unstable";
+ Priority "0";
+ Tree "project/experimental";
+ VersionChecks
+ {
+ MustBeNewerThan
+ {
+ Stable;
+ Proposed-Updates;
+ Testing;
+ Testing-Proposed-Updates;
+ Unstable;
+ };
+ };
+
+ };
+
+};
+
+SuiteMappings
+{
+ "propup-version stable-security testing testing-proposed-updates unstable";
+ "propup-version testing-security unstable";
+ "map stable proposed-updates";
+ "map stable-security proposed-updates";
+ "map-unreleased stable unstable";
+ "map-unreleased proposed-updates unstable";
+ "map testing testing-proposed-updates";
+ "map testing-security testing-proposed-updates";
+ "map-unreleased testing unstable";
+ "map-unreleased testing-proposed-updates unstable";
+};
+
+Dir
+{
+ Root "/org/ftp.debian.org/ftp/";
+ Pool "/org/ftp.debian.org/ftp/pool/";
+ Templates "/org/ftp.debian.org/katie/templates/";
+ PoolRoot "pool/";
+ Lists "/org/ftp.debian.org/database/dists/";
+ Log "/org/ftp.debian.org/log/";
+ Lock "/org/ftp.debian.org/lock";
+ Morgue "/org/ftp.debian.org/morgue/";
+ MorgueReject "reject";
+ Override "/org/ftp.debian.org/scripts/override/";
+ QueueBuild "/org/incoming.debian.org/buildd/";
+ UrgencyLog "/org/ftp.debian.org/testing/urgencies/";
+ Queue
+ {
+ Accepted "/org/ftp.debian.org/queue/accepted/";
+ Byhand "/org/ftp.debian.org/queue/byhand/";
+ Done "/org/ftp.debian.org/queue/done/";
+ Holding "/org/ftp.debian.org/queue/holding/";
+ New "/org/ftp.debian.org/queue/new/";
+ Reject "/org/ftp.debian.org/queue/reject/";
+ Unchecked "/org/ftp.debian.org/queue/unchecked/";
+ BTSVersionTrack "/org/ftp.debian.org/queue/bts_version_track/";
+ };
+};
+
+DB
+{
+ Name "projectb";
+ Host "";
+ Port -1;
+
+ NonUSName "projectb";
+ NonUSHost "non-US.debian.org";
+ NonUSPort -1;
+ NonUSUser "auric";
+ NonUSPassword "moo";
+};
+
+Architectures
+{
+ source "Source";
+ all "Architecture Independent";
+ alpha "DEC Alpha";
+ hurd-i386 "Intel ia32 running the HURD";
+ hppa "HP PA RISC";
+ arm "ARM";
+ i386 "Intel ia32";
+ ia64 "Intel ia64";
+ m68k "Motorola Mc680x0";
+ mips "MIPS (Big Endian)";
+ mipsel "MIPS (Little Endian)";
+ powerpc "PowerPC";
+ s390 "IBM S/390";
+ sh "Hitachi SuperH";
+ sparc "Sun SPARC/UltraSPARC";
+};
+
+Archive
+{
+ ftp-master
+ {
+ OriginServer "ftp-master.debian.org";
+ PrimaryMirror "ftp.debian.org";
+ Description "Master Archive for the Debian project";
+ };
+};
+
+Component
+{
+ main
+ {
+ Description "Main";
+ MeetsDFSG "true";
+ };
+
+ contrib
+ {
+ Description "Contrib";
+ MeetsDFSG "true";
+ };
+
+ non-free
+ {
+ Description "Software that fails to meet the DFSG";
+ MeetsDFSG "false";
+ };
+
+ mixed // **NB:** only used for overrides; not yet used in other code
+ {
+ Description "Legacy Mixed";
+ MeetsDFSG "false";
+ };
+};
+
+Section
+{
+ admin;
+ base;
+ comm;
+ debian-installer;
+ devel;
+ doc;
+ editors;
+ embedded;
+ electronics;
+ games;
+ gnome;
+ graphics;
+ hamradio;
+ interpreters;
+ kde;
+ libdevel;
+ libs;
+ mail;
+ math;
+ misc;
+ net;
+ news;
+ oldlibs;
+ otherosfs;
+ perl;
+ python;
+ science;
+ shells;
+ sound;
+ tex;
+ text;
+ utils;
+ web;
+ x11;
+};
+
+Priority
+{
+ required 1;
+ important 2;
+ standard 3;
+ optional 4;
+ extra 5;
+ source 0; // i.e. unused
+};
+
+OverrideType
+{
+ deb;
+ udeb;
+ dsc;
+};
+
+Location
+{
+
+ // Pool locations on ftp-master.debian.org
+ /org/ftp.debian.org/ftp/pool/
+ {
+ Archive "ftp-master";
+ Type "pool";
+ };
+
+};
+
+Urgency
+{
+ Default "low";
+ Valid
+ {
+ low;
+ medium;
+ high;
+ emergency;
+ critical;
+ };
+};
--- /dev/null
+base Base system (baseX_Y.tgz) general bugs
+install Installation system
+installation Installation system
+cdrom Installation system
+boot-floppy Installation system
+spam Spam (reassign spam to here so we can complain about it)
+press Press release issues
+kernel Problems with the Linux kernel, or with the kernel shipped by Debian
+project Problems related to project administration
+general General problems (e.g. "many manpages are mode 755")
+slink-cd Slink CD
+potato-cd Potato CD
+listarchives Problems with the WWW mailing list archives
+qa.debian.org The Quality Assurance group
+ftp.debian.org Problems with the FTP site
+www.debian.org Problems with the WWW site
+bugs.debian.org The bug tracking system, @bugs.debian.org
+nonus.debian.org Problems with the non-US FTP site
+lists.debian.org The mailing lists, debian-*@lists.debian.org
+wnpp Work-Needing and Prospective Packages list
+cdimage.debian.org CD Image issues
+tech-ctte The Debian Technical Committee (see the Constitution)
+mirrors Problems with the official mirrors
+security.debian.org The Debian Security Team
+installation-reports Reports of installation problems with stable & testing
+upgrade-reports Reports of upgrade problems for stable & testing
+release-notes Problems with the Release Notes
--- /dev/null
+base Anthony Towns <debootstrap@packages.debian.org>
+install Debian Install Team <debian-boot@lists.debian.org>
+installation Debian Install Team <debian-boot@lists.debian.org>
+cdrom Debian CD-ROM Team <debian-cd@lists.debian.org>
+boot-floppy Debian Install Team <debian-boot@lists.debian.org>
+press press@debian.org
+bugs.debian.org Debian Bug Tracking Team <owner@bugs.debian.org>
+ftp.debian.org James Troup and others <ftpmaster@ftp-master.debian.org>
+qa.debian.org debian-qa@lists.debian.org
+nonus.debian.org Michael Beattie and others <ftpmaster@debian.org>
+www.debian.org Debian WWW Team <debian-www@lists.debian.org>
+mirrors Debian Mirrors Team <mirrors@debian.org>
+listarchives Debian List Archive Team <listarchives@debian.org>
+project debian-project@lists.debian.org
+general debian-devel@lists.debian.org
+kernel Debian Kernel Team <debian-kernel@lists.debian.org>
+lists.debian.org Debian Listmaster Team <listmaster@lists.debian.org>
+spam spam@debian.org
+slink-cd Steve McIntyre <stevem@chiark.greenend.org.uk>
+potato-cd Steve McIntyre <stevem@chiark.greenend.org.uk>
+wnpp wnpp@debian.org
+cdimage.debian.org Debian CD-ROM Team <debian-cd@lists.debian.org>
+tech-ctte Technical Committee <debian-ctte@lists.debian.org>
+security.debian.org Debian Security Team <team@security.debian.org>
+installation-reports Debian Install Team <debian-boot@lists.debian.org>
+upgrade-reports Debian Testing Group <debian-testing@lists.debian.org>
+release-notes Debian Documentation Team <debian-doc@lists.debian.org>
--- /dev/null
+# locations used by many scripts
+
+base=/org/ftp.debian.org
+ftpdir=$base/ftp
+webdir=$base/web
+indices=$ftpdir/indices
+archs="alpha arm hppa hurd-i386 i386 ia64 m68k mips mipsel powerpc s390 sh sparc"
+
+scriptdir=$base/scripts
+masterdir=$base/katie/
+dbdir=$base/database/
+lockdir=$base/lock/
+overridedir=$scriptdir/override
+extoverridedir=$scriptdir/external-overrides
+
+queuedir=$base/queue/
+unchecked=$queuedir/unchecked/
+accepted=$queuedir/accepted/
+incoming=$base/incoming
+
+ftpgroup=debadmin
+
+copyoverrides="etch.contrib etch.contrib.src etch.main etch.main.src etch.non-free etch.non-free.src etch.extra.main etch.extra.non-free etch.extra.contrib etch.main.debian-installer woody.contrib woody.contrib.src woody.main woody.main.src woody.non-free woody.non-free.src sarge.contrib sarge.contrib.src sarge.main sarge.main.src sarge.non-free sarge.non-free.src sid.contrib sid.contrib.src sid.main sid.main.debian-installer sid.main.src sid.non-free sid.non-free.src sid.extra.contrib sid.extra.main sid.extra.non-free woody.extra.contrib woody.extra.main woody.extra.non-free sarge.extra.contrib sarge.extra.main sarge.extra.non-free"
+
+PATH=$masterdir:$PATH
+umask 022
+
--- /dev/null
+// Example /etc/katie/katie.conf
+
+Config
+{
+ // FQDN hostname
+ auric.debian.org
+ {
+
+ // Optional hostname as it appears in the database (if it differs
+ // from the FQDN hostname).
+ DatbaseHostname "ftp-master";
+
+ // Optional filename of katie's config file; if not present, this
+ // file is assumed to contain katie config info.
+ KatieConfig "/org/ftp.debian.org/katie/katie.conf";
+
+ // Optional filename of apt-ftparchive's config file; if not
+ // present, the file is assumed to be 'apt.conf' in the same
+ // directory as this file.
+ AptConfig "/org/ftp.debian.org/katie/apt.conf";
+ }
+
+}
+++ /dev/null
-Index: katie
-===================================================================
-RCS file: /cvs/dak/dak/katie,v
-retrieving revision 1.28
-diff -u -r1.28 katie
---- katie 2001/02/06 00:39:44 1.28
-+++ katie 2001/02/09 18:14:49
-@@ -102,7 +102,13 @@
- def check_signature (filename):
- global reject_message
-
-- (result, output) = commands.getstatusoutput("gpg --emulate-md-encode-bug --batch --no-options --no-default-keyring --always-trust --keyring=%s --keyring=%s < %s >/dev/null" % (Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"], filename))
-+ if Cnf.FindB("Dinstall::NoSigCheck"):
-+ return 1
-+ keyrings = ""
-+ for keyring in Cnf.ValueList("Dinstall::Keyrings"):
-+ keyrings = keyrings + " --keyring " + keyring;
-+
-+ (result, output) = commands.getstatusoutput("gpg --emulate-md-encode-bug --batch --no-options --no-default-keyring --always-trust %s < %s >/dev/null" % (keyrings, filename))
- if (result != 0):
- reject_message = "Rejected: GPG signature check failed on `%s'.\n%s\n" % (os.path.basename(filename), output)
- return 0
+++ /dev/null
-#! /bin/sh
-# $Id: copyoverrides,v 1.2 2001-01-10 06:01:07 troup Exp $
-
-set -e
-. $SCRIPTVARS
-echo 'Copying override files into public view ...'
-
-for f in $copyoverrides ; do
- cd $overridedir
- chmod g+w override.$f
-
- cd $indices
- rm -f .newover-$f.gz
- pc="`gzip 2>&1 -9nv <$overridedir/override.$f >.newover-$f.gz`"
- set +e
- nf=override.$f.gz
- cmp -s .newover-$f.gz $nf
- rc=$?
- set -e
- if [ $rc = 0 ]; then
- rm -f .newover-$f.gz
- elif [ $rc = 1 -o ! -f $nf ]; then
- echo " installing new $nf $pc"
- mv -f .newover-$f.gz $nf
- chmod g+w $nf
- else
- echo $? $pc
- exit 1
- fi
-done
+++ /dev/null
-#! /bin/sh
-#
-# Executed hourly via cron, out of katie's crontab.
-
-ARCHS="alpha arm hppa i386 ia64 m68k mips mipsel powerpc sparc s390"
-
-set -e
-export SCRIPTVARS=/org/ftp.debian.org/katie/vars
-. $SCRIPTVARS
-
-LOCKFILE="/org/wanna-build/tmp/DB_Maintenance_In_Progress"
-
-if [ ! -e "$ftpdir/Archive_Maintenance_In_Progress" ]; then
- if lockfile -r3 $LOCKFILE; then
- cleanup() {
- rm -f "$LOCKFILE"
- }
- trap cleanup 0
- cd /org/incoming.debian.org/buildd
- cp /org/wanna-build/tmp/Sources.unstable-old Sources
- gzip -cd Sources.gz >> Sources
- for a in $ARCHS; do
- cp /org/wanna-build/tmp/Packages.unstable.$a-old Packages
- gzip -cd /org/incoming.debian.org/buildd/Packages.gz >> Packages
- quinn-diff -i -a /org/buildd.debian.org/web/quinn-diff/Packages-arch-specific -A $a 2>/dev/null | perl -pi -e 's#^(non-US/)?(non-free)/.*$##msg' | wanna-build -b $a/build-db --merge-partial-quinn 2> /dev/null
- wanna-build -A $a -b $a/build-db --merge-packages Packages 2>/dev/null
- done
- rm -f Sources Packages
- fi
-fi
+++ /dev/null
-#! /bin/bash
-#
-# Executed after jennifer (merge there??)
-
-ARCHS_oldstable="alpha arm hppa i386 ia64 m68k mips mipsel powerpc sparc s390"
-ARCHS_stable="$ARCHS_oldstable"
-ARCHS_testing="$ARCHS_stable"
-DISTS="oldstable stable testing"
-SSH_SOCKET=~/.ssh/buildd.debian.org.socket
-
-set -e
-export SCRIPTVARS=/org/security.debian.org/katie/vars-security
-. $SCRIPTVARS
-
-if [ ! -e $ftpdir/Archive_Maintenance_In_Progress ]; then
- cd $masterdir
- for d in $DISTS; do
- eval SOURCES_$d=`stat -c "%Y" $base/buildd/$d/Sources.gz`
- eval PACKAGES_$d=`stat -c "%Y" $base/buildd/$d/Packages.gz`
- done
- apt-ftparchive -qq generate apt.conf.buildd-security
- dists=
- for d in $DISTS; do
- eval NEW_SOURCES_$d=`stat -c "%Y" $base/buildd/$d/Sources.gz`
- eval NEW_PACKAGES_$d=`stat -c "%Y" $base/buildd/$d/Packages.gz`
- old=SOURCES_$d
- new=NEW_$old
- if [ ${!new} -gt ${!old} ]; then
- if [ -z "$dists" ]; then
- dists="$d"
- else
- dists="$dists $d"
- fi
- continue
- fi
- old=PACKAGES_$d
- new=NEW_$old
- if [ ${!new} -gt ${!old} ]; then
- if [ -z "$dists" ]; then
- dists="$d"
- else
- dists="$dists $d"
- fi
- continue
- fi
- done
- if [ ! -z "$dists" ]; then
- # setup ssh master process
- ssh buildd@buildd -S $SSH_SOCKET -MN 2> /dev/null &
- SSH_PID=$!
- while [ ! -S $SSH_SOCKET ]; do
- sleep 1
- done
- trap 'kill -TERM $SSH_PID' 0
- for d in $dists; do
- archs=ARCHS_$d
- ARCHS=${!archs}
- cd /org/security.debian.org/buildd/$d
- for a in $ARCHS; do
- quinn-diff -a /org/security.debian.org/buildd/Packages-arch-specific -A $a 2>/dev/null | ssh buildd@buildd -S $SSH_SOCKET wanna-build -d $d-security -b $a/build-db --merge-partial-quinn
- ssh buildd@buildd -S $SSH_SOCKET wanna-build -d $d-security -A $a -b $a/build-db --merge-packages < Packages
- done
- done
- fi
-fi
-
-ssh buildd@bester.farm.ftbfs.de -i ~/.ssh/id_bester sleep 1
+++ /dev/null
-#! /bin/sh
-#
-# Executed daily via cron, out of katie's crontab.
-
-set -e
-export SCRIPTVARS=/org/ftp.debian.org/katie/vars
-. $SCRIPTVARS
-
-################################################################################
-
-echo Archive maintenance started at $(date +%X)
-
-NOTICE="$ftpdir/Archive_Maintenance_In_Progress"
-LOCKCU="$lockdir/daily.lock"
-LOCKAC="$lockdir/unchecked.lock"
-
-cleanup() {
- rm -f "$NOTICE"
- rm -f "$LOCKCU"
-}
-trap cleanup 0
-
-rm -f "$NOTICE"
-lockfile -l 3600 $LOCKCU
-cat > "$NOTICE" <<EOF
-Packages are currently being installed and indices rebuilt.
-Maintenance is automatic, starting at 13:52 US Central time, and
-ending at about 15:30. This file is then removed.
-
-You should not mirror the archive during this period.
-EOF
-
-################################################################################
-
-echo "Creating pre-daily-cron-job backup of projectb database..."
-pg_dump projectb > /org/ftp.debian.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
-
-################################################################################
-
-update-bugdoctxt
-update-mirrorlists
-update-mailingliststxt
-
-################################################################################
-
-lockfile $LOCKAC
-cd $accepted
-rm -f REPORT
-kelly -pa *.changes | tee REPORT | \
- mail -s "Install for $(date +%D)" ftpmaster@ftp-master.debian.org
-chgrp debadmin REPORT
-chmod 664 REPORT
-
-cd $masterdir
-cindy
-rm -f $LOCKAC
-
-symlinks -d -r $ftpdir
-
-cd $masterdir
-jenna
-
-# Update fingerprints
-# [JT - disabled, emilie currently can ask questions]
-#emilie
-
-# Generate override files
-cd $overridedir
-denise
-
-# Update task overrides for testing and unstable
-# [JT 2004-02-04 disabled; copying in by hand for now]
-#cat $extoverridedir/task | perl -ne 'print if /^\S+\sTask\s\S+(,\s*\S+)*$/;' > override.sarge.extra.main
-#cat $extoverridedir/task | perl -ne 'print if /^\S+\sTask\s\S+(,\s*\S+)*$/;' > override.sid.extra.main
-
-# FIXME
-rm -f override.potato.all3 override.sid.all3
-for i in main contrib non-free; do cat override.potato.$i >> override.potato.all3; done
-for i in main contrib non-free main.debian-installer; do cat override.sid.$i >> override.sid.all3; done
-
-# Generate Packages and Sources files
-cd $masterdir
-apt-ftparchive generate apt.conf
-# Generate *.diff/ incremental updates
-tiffani
-# Generate Release files
-ziyi
-
-# Clean out old packages
-rhona
-shania
-
-# Needs to be rebuilt, as files have moved. Due to unaccepts, we need to
-# update this before wanna-build is updated.
-psql projectb -A -t -q -c "SELECT filename FROM queue_build WHERE suite = 5 AND queue = 0 AND in_queue = true AND filename ~ 'd(sc|eb)$'" > $dbdir/dists/unstable_accepted.list
-apt-ftparchive generate apt.conf.buildd
-
-mkmaintainers
-copyoverrides
-mklslar
-mkchecksums
-#
-# Fetch bugs information before unchecked processing is allowed again.
-/org/ftp.debian.org/testing/britney bugs
-rm -f $NOTICE
-sudo -u archvsync /home/archvsync/pushmerkel
-
-rm -f $LOCKCU
-echo Archive maintenance finished at $(date +%X)
-
-################################################################################
-
-echo "Creating post-daily-cron-job backup of projectb database..."
-POSTDUMP=/org/ftp.debian.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
-pg_dump projectb > $POSTDUMP
-(cd /org/ftp.debian.org/backup; ln -sf $POSTDUMP current)
-
-################################################################################
-
-# Vacuum the database
-echo "VACUUM; VACUUM ANALYZE;" | psql projectb 2>&1 | grep -v "^NOTICE: Skipping.*only table owner can VACUUM it$"
-
-################################################################################
-
-# Send a report on NEW/BYHAND packages
-helena | mail -e -s "NEW and BYHAND on $(date +%D)" ftpmaster@ftp-master.debian.org
-# and one on crufty packages
-rene | tee $webdir/rene-daily.txt | mail -e -s "rene run for $(date +%D)" ftpmaster@ftp-master.debian.org
-
-################################################################################
-
-# Run billie
-
-#time billie
-
-################################################################################
-
-ulimit -m 90000 -d 90000 -s 10000 -v 90000
-
-run-parts --report /org/ftp.debian.org/scripts/distmnt
-
-echo Daily cron scripts successful.
-# Stats pr0n
-
-cd $masterdir
-update-ftpstats $base/log/* > $base/misc/ftpstats.data
-R --slave --vanilla < $base/misc/ftpstats.R
+++ /dev/null
-#! /bin/sh
-#
-# Executed daily via cron, out of troup's crontab.
-
-set -e
-export SCRIPTVARS=/org/non-us.debian.org/katie/vars-non-US
-. $SCRIPTVARS
-
-################################################################################
-
-echo Archive maintenance started at $(date +%X)
-
-NOTICE="$ftpdir/Archive_Maintenance_In_Progress"
-
-cleanup() {
- rm -f "$NOTICE"
-}
-trap cleanup 0
-
-rm -f "$NOTICE"
-cat > "$NOTICE" <<EOF
-Packages are currently being installed and indices rebuilt.
-Maintenance is automatic, starting at 13:52 US Central time, and
-ending at about 15:30. This file is then removed.
-
-You should not mirror the archive during this period.
-EOF
-
-################################################################################
-
-echo "Creating pre-daily-cron-job backup of projectb database..."
-pg_dump projectb > /org/non-us.debian.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
-
-################################################################################
-
-update-readmenonus
-
-################################################################################
-
-if [ ! -z "$(find "$accepted" -name \*.changes -maxdepth 1 -mindepth 1)" ]; then
- cd $accepted
- rm -f REPORT
- kelly -pa *.changes | tee REPORT | \
- mail -s "Non-US Install for $(date +%D)" ftpmaster@ftp-master.debian.org
- chgrp debadmin REPORT
- chmod 664 REPORT
-else
- echo "kelly: Nothing to install."
-fi
-
-cd $masterdir
-symlinks -d -r $ftpdir
-
-cd $masterdir
-jenna
-
-# Generate override files
-cd $overridedir
-denise
-# FIXME
-rm -f override.potato.all3
-for i in main contrib non-free; do cat override.potato.$i >> override.potato.all3; done
-
-# Generate Packages and Sources files
-cd $masterdir
-apt-ftparchive generate apt.conf-non-US
-# Generate Release files
-ziyi
-
-# Clean out old packages
-rhona
-shania
-
-# Generate the Maintainers file
-cd $indices
-charisma > .new-maintainers_versions
-mv -f .new-maintainers_versions Maintainers_Versions
-sed -e "s/~[^ ]*\([ ]\)/\1/" < Maintainers_Versions | awk '{printf "%-20s ", $1; for (i=2; i<=NF; i++) printf "%s ", $i; printf "\n";}' > .new-maintainers
-mv -f .new-maintainers Maintainers
-gzip -9v <Maintainers >.new-maintainers.gz
-mv -f .new-maintainers.gz Maintainers.gz
-gzip -9v <Maintainers_Versions >.new-maintainers_versions.gz
-mv -f .new-maintainers_versions.gz Maintainers_Versions.gz
-rm -f Maintainers_Versions
-
-cd $masterdir
-copyoverrides
-mklslar
-mkchecksums
-
-rm -f $NOTICE
-echo Archive maintenance finished at $(date +%X)
-
-################################################################################
-
-echo "Creating post-daily-cron-job backup of projectb database..."
-pg_dump projectb > /org/non-us.debian.org/backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
-
-################################################################################
-
-# Vacuum the database
-echo "VACUUM; VACUUM ANALYZE;" | psql projectb 2>&1 | grep -v "^NOTICE: Skipping.*only table owner can VACUUM it$"
-
-################################################################################
-
-# Send a report on NEW/BYHAND packages
-helena | mail -e -s "[non-US] NEW and BYHAND on $(date +%D)" ftpmaster@ftp-master.debian.org
-# and one on crufty packages
-rene | mail -e -s "[non-US] rene run for $(date +%D)" ftpmaster@ftp-master.debian.org
-
-################################################################################
-ulimit -m 90000 -d 90000 -s 10000 -v 90000
-
-run-parts --report /org/non-us.debian.org/scripts/distmnt
-
-echo Daily cron scripts successful.
+++ /dev/null
-#! /bin/sh
-#
-# Executed daily via cron, out of katie's crontab.
-
-set -e
-export SCRIPTVARS=/org/security.debian.org/katie/vars-security
-. $SCRIPTVARS
-
-################################################################################
-
-# Fix overrides
-
-rsync -ql ftp-master::indices/override\* $overridedir
-
-cd $overridedir
-find . -name override\*.gz -type f -maxdepth 1 -mindepth 1 | xargs gunzip -f
-find . -type l -maxdepth 1 -mindepth 1 | xargs rm
-
-rm -fr non-US
-mkdir non-US
-cd non-US
-rsync -ql non-us::indices/override\* .
-find . -name override\*.gz -type f -maxdepth 1 -mindepth 1 | xargs gunzip
-find . -type l -maxdepth 1 -mindepth 1 | xargs rm
-for i in *; do
- if [ -f ../$i ]; then
- cat $i >> ../$i;
- fi;
-done
-cd ..
-rm -fr non-US
-
-for suite in $suites; do
- case $suite in
- oldstable) override_suite=woody;;
- stable) override_suite=sarge;;
- testing) override_suite=etch;;
- *) echo "Unknown suite type ($suite)"; exit 1;;
- esac
- for component in $components; do
- for override_type in $override_types; do
- case $override_type in
- deb) type="" ;;
- dsc) type=".src" ;;
- udeb) type=".debian-installer" ;;
- esac
- # XXX RUN AFUCKINGAWAY
- if [ "$override_type" = "udeb" ]; then
- if [ ! "$component" = "main" ]; then
- continue;
- fi
- if [ "$suite" = "unstable" ]; then
- $masterdir/natalie -q -S -t $override_type -s $suite -c updates/$component < override.$override_suite.$component$type
- fi
- else
- $masterdir/natalie -q -S -t $override_type -s $suite -c updates/$component < override.$override_suite.$component$type
- fi
- case $suite in
- oldstable)
- if [ ! "$override_type" = "udeb" ]; then
- $masterdir/natalie -q -a -t $override_type -s $suite -c updates/$component < override.sarge.$component$type
- fi
- $masterdir/natalie -q -a -t $override_type -s $suite -c updates/$component < override.sid.$component$type
- ;;
- stable)
- $masterdir/natalie -q -a -t $override_type -s $suite -c updates/$component < override.sid.$component$type
- ;;
- testing)
- $masterdir/natalie -q -a -t $override_type -s $suite -c updates/$component < override.sid.$component$type
- ;;
- *) echo "Unknown suite type ($suite)"; exit 1;;
- esac
- done
- done
-done
-
-# Generate .all3 overides for the buildd support
-for dist in woody sarge etch; do
- rm -f override.$dist.all3
- components="main contrib non-free";
- if [ -f override.$dist.main.debian-installer ]; then
- components="$components main.debian-installer";
- fi
- for component in $components; do
- cat override.$dist.$component >> override.$dist.all3;
- done;
-done
-
-################################################################################
-
-# Freshen Packages-Arch-Specific
-
-wget -qN http://buildd.debian.org/quinn-diff/Packages-arch-specific -O $base/buildd/Packages-arch-specific
-
-################################################################################
-
-cd $masterdir
-shania
-rhona
-apt-ftparchive -q clean apt.conf-security
-apt-ftparchive -q clean apt.conf.buildd-security
-
-symlinks -d -r $ftpdir
-
-pg_dump obscurity > /org/security.debian.org/katie-backup/dump_$(date +%Y.%m.%d-%H:%M:%S)
-
-# Vacuum the database
-set +e
-echo "VACUUM; VACUUM ANALYZE;" | psql obscurity 2>&1 | egrep -v "^NOTICE: Skipping \"pg_.*only table or database owner can VACUUM it$|^VACUUM$"
-set -e
-
-################################################################################
+++ /dev/null
-#! /bin/sh
-#
-# Executed hourly via cron, out of troup's crontab.
-
-set -e
-export SCRIPTVARS=/org/ftp.debian.org/katie/vars
-. $SCRIPTVARS
-
-cd $masterdir
-julia
-helena -n > $webdir/new.html
+++ /dev/null
-#! /bin/sh
-#
-# Executed hourly via cron, out of troup's crontab.
-
-set -e
-export SCRIPTVARS=/org/non-us.debian.org/katie/vars-non-US
-. $SCRIPTVARS
-
-cd $masterdir
-julia
+++ /dev/null
-#!/bin/sh
-#
-# Run at the beginning of the month via cron, out of katie's crontab.
-
-set -e
-export SCRIPTVARS=/org/ftp.debian.org/katie/vars
-. $SCRIPTVARS
-
-################################################################################
-
-DATE=`date -d yesterday +%y%m`
-
-cd /org/ftp.debian.org/mail/archive
-for m in mail bxamail; do
- if [ -f $m ]; then
- mv $m ${m}-$DATE
- sleep 20
- gzip -9 ${m}-$DATE
- chgrp debadmin ${m}-$DATE.gz
- chmod 660 ${m}-$DATE.gz
- fi;
-done
-
-################################################################################
+++ /dev/null
-#! /bin/sh
-
-set -e
-export SCRIPTVARS=/org/ftp.debian.org/katie/vars
-. $SCRIPTVARS
-
-LOCKFILE="$lockdir/unchecked.lock"
-NOTICE="$lockdir/daily.lock"
-
-cleanup() {
- rm -f "$LOCKFILE"
- if [ ! -z "$LOCKDAILY" ]; then
- rm -f "$NOTICE"
- fi
-}
-
-# only run one cron.unchecked
-if lockfile -r3 $LOCKFILE; then
- trap cleanup 0
- cd $unchecked
-
- changes=$(find . -maxdepth 1 -mindepth 1 -type f -name \*.changes | sed -e "s,./,," | xargs)
- report=$queuedir/REPORT
- timestamp=$(date "+%Y-%m-%d %H:%M")
-
- if [ ! -z "$changes" ]; then
- echo "$timestamp": "$changes" >> $report
- jennifer -a $changes >> $report
- echo "--" >> $report
-
- if lockfile -r3 $NOTICE; then
- LOCKDAILY="YES"
- psql projectb -A -t -q -c "SELECT filename FROM queue_build WHERE queue = 0 AND suite = 5 AND in_queue = true AND filename ~ 'd(sc|eb)$'" > $dbdir/dists/unstable_accepted.list
- cd $overridedir
- denise &>/dev/null
- rm -f override.sid.all3 override.sid.all3.src
- for i in main contrib non-free main.debian-installer; do
- cat override.sid.$i >> override.sid.all3
- if [ "$i" != "main.debian-installer" ]; then
- cat override.sid.$i.src >> override.sid.all3.src
- fi
- done
- cd $masterdir
- apt-ftparchive -qq generate apt.conf.buildd
- . $masterdir/cron.buildd
- fi
- else
- echo "$timestamp": Nothing to do >> $report
- fi
-fi
-
+++ /dev/null
-#! /bin/sh
-
-set -e
-export SCRIPTVARS=/org/non-us.debian.org/katie/vars-non-US
-. $SCRIPTVARS
-
-cd $unchecked
-
-changes=$(find . -maxdepth 1 -mindepth 1 -type f -name \*.changes | sed -e "s,./,," | xargs)
-report=$queuedir/REPORT
-timestamp=$(date "+%Y-%m-%d %H:%M")
-
-if [ ! -z "$changes" ]; then
- echo "$timestamp": "$changes" >> $report
- jennifer -a $changes >> $report
- echo "--" >> $report
-else
- echo "$timestamp": Nothing to do >> $report
-fi;
+++ /dev/null
-#! /bin/sh
-
-set -e
-export SCRIPTVARS=/org/security.debian.org/katie/vars-security
-. $SCRIPTVARS
-
-cd $unchecked
-
-changes=$(find . -maxdepth 1 -mindepth 1 -type f -name \*.changes | sed -e "s,./,," | xargs)
-report=$queuedir/REPORT
-timestamp=$(date "+%Y-%m-%d %H:%M")
-
-if [ -z "$changes" ]; then
- echo "$timestamp": Nothing to do >> $report
- exit 0;
-fi;
-
-echo "$timestamp": "$changes" >> $report
-jennifer -a $changes >> $report
-echo "--" >> $report
-
-sh $masterdir/cron.buildd-security
+++ /dev/null
-#!/bin/sh
-#
-# Run once a week via cron, out of katie's crontab.
-
-set -e
-export SCRIPTVARS=/org/ftp.debian.org/katie/vars
-. $SCRIPTVARS
-
-################################################################################
-
-# Purge empty directories
-
-if [ ! -z "$(find $ftpdir/pool/ -type d -empty)" ]; then
- find $ftpdir/pool/ -type d -empty | xargs rmdir;
-fi
-
-# Clean up apt-ftparchive's databases
-
-cd $masterdir
-apt-ftparchive -q clean apt.conf
-apt-ftparchive -q clean apt.conf.buildd
-
-################################################################################
+++ /dev/null
-#!/bin/sh
-#
-# Run once a week via cron, out of katie's crontab.
-
-set -e
-export SCRIPTVARS=/org/non-us.debian.org/katie/vars-non-US
-. $SCRIPTVARS
-
-################################################################################
-
-# Purge empty directories
-
-if [ ! -z "$(find $ftpdir/pool/ -type d -empty)" ]; then
- find $ftpdir/pool/ -type d -empty | xargs rmdir;
-fi
-
-# Clean up apt-ftparchive's databases
-
-cd $masterdir
-apt-ftparchive -q clean apt.conf-non-US
-
-################################################################################
--- /dev/null
+2005-12-16 Ryan Murray <rmurray@debian.org>
+
+ * halle: add support for udebs
+ * kelly: stable_install: add support for binNMU versions
+
+2005-12-05 Anthony Towns <aj@erisian.com.au>
+
+ * katie.py: Move accept() autobuilding support into separate function
+ (queue_build), and generalise to build different queues
+
+ * db_access.py: Add get_or_set_queue_id instead of hardcoding accepted=0
+
+ * jennifer: Initial support for enabling embargo handling with the
+ Dinstall::SecurityQueueHandling option.
+ * jennifer: Shift common code into remove_from_unchecked and move_to_dir
+ functions.
+
+ * katie.conf-security: Include embargo options
+ * katie.conf-security: Add Lock dir
+ * init_pool.sql-security: Create disembargo table
+ * init_pool.sql-security: Add constraints for disembargo table
+
+2005-11-26 Anthony Towns <aj@erisian.com.au>
+
+ * Merge of changes from klecker, by various people
+
+ * amber: special casing for not passing on amd64 and oldstable updates
+ * amber: security mirror triggering
+ * templates/amber.advisory: updated advisory structure
+ * apt.conf.buildd-security: update for sarge's release
+ * apt.conf-security: update for sarge's release
+ * cron.buildd-security: generalise suite support, update for sarge's release
+ * cron.daily-security: update for sarge's release, add udeb support
+ * vars-security: update for sarge's release
+ * katie.conf-security: update for sarge's release, add amd64 support,
+ update signing key
+
+ * docs/README.names, docs/README.quotes: include the additions
+
+2005-11-25 Anthony Towns <aj@erisian.com.au>
+
+ * Changed accepted_autobuild to queue_build everywhere.
+ * Add a queue table.
+ * Add a "queue" field in the queue_build table (currently always 0)
+
+ * jennifer: Restructure to make it easier to support special
+ purpose queues between unchecked and accepted.
+
+2005-11-25 Anthony Towns <aj@erisian.com.au>
+
+ * Finishing merge of changes from spohr, by various people still
+
+ * jennifer: If changed-by parsing fails, set variables to "" so REJECT
+ works
+ * jennifer: Re-enable .deb ar format checking
+ * katie.py: Convert to +bX binNMU special casing
+ * rhona: Add some debug output when deleting binaries
+ * cron.daily: Add emilie
+ * cron.unchecked: Add lock files
+
+2005-11-15 Anthony Towns <aj@erisian.com.au>
+
+ * Merge of changes from spohr, by various people.
+
+ * tiffani: new script to do patches to Packages, Sources and Contents
+ files for quicker downloads.
+ * ziyi: update to authenticate tiffani generated files
+
+ * dak: new script to provide a single binary with less arbitrary names
+ for access to dak functionality.
+
+ * cindy: script implemented
+
+ * saffron: cope with suites that don't have a Priority specified
+ * heidi: use get_suite_id()
+ * denise: don't hardcode stable and unstable, or limit udebs to unstable
+ * denise: remove override munging for testing (now done by cindy)
+ * helena: expanded help, added new, sort and age options, and fancy headers
+ * jennifer: require description, add a reject for missing dsc file
+ * jennifer: change lock file
+ * kelly: propagation support
+ * lisa: honour accepted lock, use mtime not ctime, add override type_id
+ * madison: don't say "dep-retry"
+ * melanie: bug fix in output (missing %)
+ * natalie: cope with maintainer_override == None; add type_id for overrides
+ * nina: use mtime, not ctime
+
+ * katie.py: propagation bug fixes
+ * logging.py: add debugging support, use | as the logfile separator
+
+ * katie.conf: updated signing key (4F368D5D)
+ * katie.conf: changed lockfile to dinstall.lock
+ * katie.conf: added Lisa::AcceptedLockFile, Dir::Lock
+ * katie.conf: added tiffani, cindy support
+ * katie.conf: updated to match 3.0r6 release
+ * katie.conf: updated to match sarge's release
+
+ * apt.conf: update for sarge's release
+ * apt.conf.stable: update for sarge's release
+ * apt.conf: bump daily max Contents change to 25MB from 12MB
+
+ * cron.daily: add accepted lock and invoke cindy
+ * cron.daily: add daily.lock
+ * cron.daily: invoke tiffani
+ * cron.daily: rebuild accepted buildd stuff
+ * cron.daily: save rene-daily output on the web site
+ * cron.daily: disable billie
+ * cron.daily: add stats pr0n
+
+ * cron.hourly: invoke helena
+
+ * pseudo-packages.maintainers,.descriptions: miscellaneous updates
+ * vars: add lockdir, add etch to copyoverrides
+ * Makefile: add -Ipostgresql/server to CXXFLAGS
+
+ * docs/: added README.quotes
+ * docs/: added manpages for alicia, catherine, charisma, cindy, heidi,
+ julia, katie, kelly, lisa, madison, melanie, natalie, rhona.
+
+ * TODO: correct spelling of "conflicts"
+
+2005-05-28 James Troup <james@nocrew.org>
+
+ * helena (process_changes_files): use MTIME rather than CTIME (the
+ C's not for 'creation', stupid).
+ * lisa (sort_changes): likewise.
+
+ * jennifer (check_distributions): use has_key rather than an 'in'
+ test which doesn't work with python2.1. [Probably by AJ]
+
+2005-03-19 James Troup <james@nocrew.org>
+
+ * rene (main): use Suite::<suite>::UdebComponents to determine
+ what components have udebs rather than assuming only 'main' does.
+
+2005-03-18 James Troup <james@nocrew.org>
+
+ * utils.py (rfc2047_encode): use codecs.lookup() rather than
+ encodings.<encoding>.Codec().decode() as encodings.utf_8 no longer
+ has a Codec() module in python2.4. Thanks to Andrew Bennetts
+ <andrew@ubuntu.com>.
+
+2005-03-06 Joerg Jaspert <ganneff@debian.org>
+
+ * helena: add -n/--new HTML output option and improved sorting
+ options.
+
+2005-03-06 Ryan Murray <rmurray@debian.org>
+
+ * shania (main): use Cnf::Dir::Reject instead of REJECT
+
+2005-02-08 James Troup <james@nocrew.org>
+
+ * rene (main): add partial NBS support by checking that binary
+ packages are built by their real parent and not some random
+ stranger.
+ (do_partial_nbs): likewise.
+
+2005-01-18 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.build_summaries): avoid leaking file handle when
+ extracting package description.
+ (Katie.force_reject): remember and close each file descriptor we
+ use.
+ (Katie.do_reject): s/file/temp_fh/ to avoid pychecker warning.
+ s/reason_file/reason_fd/ because it's a file descriptor.
+ (Katie.check_dsc_against_db): avoid leaking file handle whenever
+ invoking apt_pkg.md5sum().
+
+ * jennifer (check_deb_ar): new function: sanity check the ar
+ contents of a .deb.
+ (check_files): use it.
+ (check_timestamps): check for data.tar.bz2 if data.tar.gz can't be
+ found.
+ (check_files): accept 'raw-installer' as an alias for 'byhand'.
+
+2005-01-14 Anthony Towns <ajt@debian.org>
+
+ * kelly: when UNACCEPTing, don't double up the "Rejecting:"
+
+ * propup stuff (thanks to Andreas Barth)
+ * katie.conf: add stable MustBeOlderThan testing, add -security
+ propup
+ * jennifer: set distribution-version in .katie if propup may be needed
+ * katie.py: add propagation to cross_suite_version_check
+
+2004-11-27 James Troup <james@nocrew.org>
+
+ * nina: new script to split monolithic queue/done into date-based
+ hierarchy.
+
+ * rene (usage): document -s/--suite.
+ (add_nbs): use .setdefault().
+ (do_anais): likewise.
+ (do_nbs): don't set a string to "" and then += it.
+ (do_obsolete_source): new function - looks for obsolete source
+ packages (i.e source packages whose binary packages are ALL a)
+ claimed by someone else and b) newer when built from the other
+ source package).
+ (main): support -s/--suite. Add 'obsolete source' to both 'daily'
+ and 'full' check modes. Check for obsolete source packages.
+ linux-wlan-ng has been fixed - remove hideous bodge.
+
+ * jennifer (check_distributions): support 'reject' suite map type.
+
+ * utils.py (validate_changes_file_arg): s/file/filename/.
+ s/fatal/require_changes/. If require_changes is -1, ignore errors
+ and return the .changes filename regardless.
+ (re_no_epoch): s/\*/+/ as there must be a digit in an epoch.
+ (re_no_revision): don't escape '-', it's not a special character.
+ s/\*/+/ as there must be at least one non-dash character after the
+ dash in a revision. Thanks to Christian Reis for noticing both of
+ these.
+
+ * ashley (main): pass require_changes=-1 to
+ utils.validate_changes_file_arg().
+
+ * pseudo-packages.maintainers (kernel): switch to 'Debian Kernel
+ Team <debian-kernel@lists.debian.org>'.
+
+ * katie.py (Katie.in_override_p): fix .startswith() usage.
+
+ * katie.conf (Dinstall::DefaultSuite): add as 'unstable'.
+ (Lauren::MoreInfoURL): update to 3.0r3.
+ (Suite::Stable::Version): likewise.
+ (Suite::Stable::Description): likewise.
+
+ * cron.daily: disable automatic task override generation.
+
+ * cindy (process): restrict "find all packages" queries by
+ component. Respect Options["No-Action"].
+ (main): add -n/--no-action support. Only run on unstable. Rename
+ type to otype (pychecker).
+
+2004-11-27 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * katie.conf (Billie::BasicTrees): add all architectures.
+ (Billie::CombinationTrees): remove 'welovehp' and 'embedded', add
+ 'everything'.
+
+ * cron.daily: Update a 'current' symlink when creating the
+ post-daily-cron-job database backup to aid mirroring to merkel.
+ Run billie.
+
+ * billie (BillieTarget.poolish_match): handle .udeb too.
+
+2004-10-13 Ryan Murray <rmurray@debian.org>
+
+ * amber (do_upload): Sort changes files in "katie" order so that
+ source always arrives before binary-only rebuilds
+
+2004-10-05 James Troup <james@nocrew.org>
+
+ * jennifer (check_dsc): correct reject message on invalid
+ Maintainer field.
+
+2004-09-20 James Troup <james@nocrew.org>
+
+ * alicia: remove unused 'pwd' import.
+
+ * tea (check_override): underline suite name in output properly.
+
+ * rene (main): read a compressed Packages file.
+ * tea (validate_packages): likewise.
+
+ * katie.py (re_fdnic): add 'r' prefix.
+ (re_bin_only_nmu_of_mu): likewise.
+ (re_bin_only_nmu_of_nmu): likewise.
+
+ * madison (main): retrieve component information too and display
+ it if it's not 'main'.
+ * melanie (reverse_depends_check): likewise.
+
+ * utils.py (pp_dep): renamed...
+ (pp_deps): ... to this.
+ * jeri (check_dep): update calls to utils.pp_deps().
+ * melanie (reverse_depends_check): likewise.
+
+ * jennifer (check_changes): move initialization of email variables
+ from here...
+ (process_it): ...to here as we no longer always run
+ check_changes(). Don't bother to initialize
+ changes["architecture"].
+
+ * denise (list): renamed to...
+ (do_list): ...this to avoid name clash with builtin 'list'.
+ Similarly, s/file/output_file/, s/type/otype/. Use .setdefault()
+ for dictionaries.
+ (main): Likewise for name clash avoidance and also
+ s/override_type/suffix/. Adjust call to do_list().
+
+2004-09-01 Ryan Murray <rmurray@debian.org>
+
+ * tea (check_files): check the pool/ directory instead of dists/
+
+2004-08-04 James Troup <james@nocrew.org>
+
+ * jenna (cleanup): use .setdefault() for dictionaries.
+ (write_filelists): likewise.
+
+ (write_filelists): Use utils.split_args() not split() to split
+ command line arguments.
+ (stable_dislocation_p): likewise.
+
+ (write_filelists): Add support for mapping side of suite-based
+ "Arch: all mapping".
+ (do_da_do_da): ensure that if we're not doing all suites we
+ process enough to be able to correctly map arch: all packages.
+
+ * utils.py (cant_open_exc): correct exception string,
+ s/read/open/, s/.$//.
+
+ * templates/amber.advisory: update to match reality a little
+ better.
+
+ * melanie (reverse_depends_check): read Packages.gz rather than
+ Packages.
+
+ * jennifer (check_files): check for unknown component before
+ checking for NEWness.
+
+ * katie.py (Katie.in_override_p): use .startswith in favour of a
+ slice.
+
+ * docs/melanie.1.sgml: document -R/--rdep-check.
+
+2004-07-12 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * billie (main): Make the verbatim lists include all the README
+ elements.
+ * docs/README.names: Add billie in (correcting oversight)
+
+2004-07-01 James Troup <james@nocrew.org>
+
+ * emilie (main): handle woody's case-sensitive python-ldap,
+ s/keyfingerprint/keyFingerPrint/.
+
+2004-06-25 James Troup <james@nocrew.org>
+
+ * debian/control (Depends): add dpkg-dev since jennifer uses
+ dpkg-source.
+
+2004-06-24 James Troup <james@nocrew.org>
+
+ * melanie (main): s/file/temp_file/ and close file handle before
+ removing the temporary file.
+ (main): don't warn about needing a --carbon-copy if in no-action
+ mode.
+
+ * rene (do_nbs): pcmcia-cs has been fixed - remove hideous bodge.
+ (main): likewise.
+
+ * test/006/test.py (main): check bracketed email-only form.
+
+ * utils.py (fix_maintainer): if the Maintainer string is bracketed
+ email-only, strip the brackets so we don't end up with
+ <<james@nocrew.org>>.
+
+2004-06-20 James Troup <james@nocrew.org>
+
+ * jennifer (process_it): only run check_changes() if
+ check_signature() returns something. (Likewise)
+
+ * utils.py (changes_compare): if there's no changes["version"] use
+ "0" rather than None. (Avoids a crash on unsigned changes file.)
+
+2004-06-17 Martin Michlmayr <tbm@cyrius.com>
+
+ * jeri (pp_dep): moved from here to ...
+ * utils.py (pp_dep): here.
+
+ * melanie (main): add reverse dependency checking.
+
+2004-06-17 James Troup <james@nocrew.org>
+
+ * jennifer (check_dsc): s/dsc_whitespace_rules/signing_rules/.
+ * tea (check_dscs): likewise.
+
+ * utils.py (parse_changes): s/dsc_whitespace_rules/signing_rules/,
+ change from boolean to a variable with 3 possible values, 0 and 1
+ as before, -1 means don't require a signature. Makes
+ parse_changes() useful for parsing arbitary RFC822-style files,
+ e.g. 'Release' files.
+ (check_signature): add support for detached signatures by passing
+ the files the signature is for as an optional third argument.
+ s/filename/sig_filename/g. Add a fourth optional argument to
+ choose the keyring(s) to use. Don't os.path.basename() the
+ sig_filename before checking it for taint.
+ (re_taint_free): allow '/'.
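For the detached-signature case, the verification boils down to handing gpgv both the signature file and the file it signs. A hedged sketch of the command construction only (function name hypothetical; dak's real check_signature also parses gpgv's status output):

```python
def gpgv_command(sig_filename, keyrings, data_filename=None):
    """Build a gpgv invocation.  For a detached signature (e.g.
    Release.gpg over Release) the signed data file is passed after
    the signature file; for an inline signature it is omitted."""
    cmd = ["gpgv", "--status-fd", "1"]
    for keyring in keyrings:
        cmd += ["--keyring", keyring]
    cmd.append(sig_filename)
    if data_filename:
        cmd.append(data_filename)
    return cmd
```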
+
+2004-06-11 James Troup <james@nocrew.org>
+
+ * tea (check_files): make override.unreadable optional.
+ (validate_sources): close the Sources file handle.
+
+ * docs/README.first: clarify that 'alyson' and running
+ add_constraints.sql by hand is something you only want to do if
+ you're not running 'neve'.
+
+ * docs/README.config (Location::$LOCATION::Suites): document.
+
+ * db_access.py (do_query): also print out the result of the query.
+
+2004-06-10 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.cross_suite_version_check): post-woody versions
+ of python-apt's apt_pkg.VersionCompare() function apparently
+ return variable integers for less than or greater than results -
+ update our result checking to match.
+ * jenna (resolve_arch_all_vs_any): likewise.
+ * charisma (main): likewise.
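The fix amounts to comparing only the sign of the result rather than an exact value. In sketch form (helper name is ours, not dak's):

```python
def result_sign(n):
    # Newer apt_pkg.VersionCompare() returns arbitrary negative or
    # positive integers, so normalise a cmp-style result to -1/0/1
    # before testing it.
    return (n > 0) - (n < 0)

# 'result_sign(apt_pkg.VersionCompare(a, b)) < 0' is then safe where
# a literal 'apt_pkg.VersionCompare(a, b) == -1' check was not.
```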
+
+2004-06-09 James Troup <james@nocrew.org>
+
+ * jennifer (process_it): s/changes_valid/valid_changes_p/. Add
+ valid_dsc_p and don't run check_source() if check_dsc() failed.
+ (check_dsc): on fatal failures return 0 so check_source() isn't
+ run (since it makes fatal assumptions about the presence of
+ mandatory .dsc fields).
+ Remove unused and obsolete re_bad_diff and re_is_changes regexps.
+
+2004-05-07 James Troup <james@nocrew.org>
+
+ * katie.conf (Rhona::OverrideFilename): unused and obsolete, remove.
+ * katie.conf-non-US (Rhona::OverrideFilename): likewise.
+
+ * katie.conf (Dir::Override): remove duplicate definition.
+
+ * neve (get_or_set_files_id): add an always-NULL last_used column
+ to output.
+
+2004-04-27 James Troup <james@nocrew.org>
+
+ * apt.conf-security (tree "dists/stable/updates"): add
+ ExtraOverride - noticed by Joey Hess (#246050).
+ (tree "dists/testing/updates"): likewise.
+
+2004-04-20 James Troup <james@nocrew.org>
+
+ * jennifer (check_files): check for existing .changes or .katie
+ files of the same name in the Suite::<suite>::Copy{Changes,Katie}
+ directories.
+
+2004-04-19 James Troup <james@nocrew.org>
+
+ * jennifer (check_source): handle failure to remove the temporary
+ directory (used for source tree extraction) better, specifically:
+ if we fail with -EACCES, chmod -R u+rwx the temporary directory
+ and try again and if that works, REJECT the package.
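The retry logic might look like this sketch (function name hypothetical; in jennifer the caller REJECTs the package when the second attempt is what succeeds):

```python
import errno
import os
import shutil
import stat

def remove_extracted_tree(path):
    """Remove a source-extraction tree.  If removal fails with EACCES
    (e.g. the source ships unreadable directories), chmod -R u+rwx
    and retry once.  Returns True if the chmod was needed."""
    try:
        shutil.rmtree(path)
    except OSError as e:
        if e.errno != errno.EACCES:
            raise
        # os.walk is top-down, so each directory is made traversable
        # before we descend into it.
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                p = os.path.join(root, name)
                os.chmod(p, os.stat(p).st_mode | stat.S_IRWXU)
        shutil.rmtree(path)
        return True
    return False
```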
+
+2004-04-17 James Troup <james@nocrew.org>
+
+ * docs/madison.1.sgml: document -b/--binary-type,
+ -g/--greaterorequal and -G/--greaterthan.
+
+ * madison (usage): -b/--binary-type only takes a single argument.
+ Document -g/--greaterorequal and -G/--greaterthan.
+ (main): add support for -g/--greaterorequal and -G/--greaterthan.
+
+2004-04-12 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * billie: Cleaned up a load of comments, added /README.non-US to
+ the verbatim matches list.
+
+2004-04-07 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * utils.py (size_type): Make it use real binary megabytes and
+ kilobytes, instead of the marketing terms used before.
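In sketch form, with thresholds and formatting chosen for illustration:

```python
def size_type(nbytes):
    """Pretty-print a size using binary units: 1 KB = 1024 bytes and
    1 MB = 1024 * 1024 bytes -- not the 1000-based marketing units."""
    if nbytes > 1024 * 1024:
        return "%.1f MB" % (nbytes / (1024.0 * 1024.0))
    if nbytes > 1024:
        return "%.1f KB" % (nbytes / 1024.0)
    return "%d B" % nbytes
```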
+
+2004-04-07 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.check_dsc_against_db): in the case we're
+ ignoring an identical-to-existing orig.tar.gz remember the path to
+ the existing version in pkg.orig_tar_gz. Adjust the query to grab
+ location.path too to be able to do so.
+
+2004-04-03 James Troup <james@nocrew.org>
+
+ * debian/control (Depends): add python2.1-email | python (>= 2.2)
+ needed for new utils.rfc2047_encode() function.
+
+ * utils.py (re_parse_maintainer): allow whitespace inside the
+ email address.
+ (Error): new exception base class.
+ (ParseMaintError): new exception class.
+ (force_to_utf8): new function.
+ (rfc2047_encode): likewise.
+ (fix_maintainer): rework. use force_to_utf8() to force name and
+ rfc822 return values to always use UTF-8. use rfc2047_encode() to
+ return an rfc2047 value. Validate the address to catch missing
+ email addresses and (some) broken ones.
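A minimal stdlib-based sketch of the two new helpers (dak's versions predate email.header and differ in detail):

```python
from email.header import Header

def force_to_utf8(s):
    """Ensure a value is UTF-8 text; bytes are decoded leniently."""
    if isinstance(s, bytes):
        return s.decode("utf-8", "replace")
    return s

def rfc2047_encode(s):
    """RFC 2047-encode a header value for mail; plain ASCII passes
    through unchanged."""
    try:
        s.encode("ascii")
        return s
    except UnicodeEncodeError:
        return Header(s, "utf-8").encode()
```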
+
+ * katie.py (nmu_p.is_an_nmu): adapt for new utils.fix_maintainer()
+ by adopting foo2047 return value.
+ (Katie.dump_vars): add changedby2047 and maintainer2047 as
+ mandatory changes fields. Promote changes and maintainer822 to
+ mandatory fields.
+ (Katie.update_subst): default maintainer2047 rather than
+ maintainer822. Use foo2047 rather than foo822 when setting
+ __MAINTAINER_TO__ or __MAINTAINER_FROM__.
+
+ * jennifer (check_changes): set default changes["maintainer2047"]
+ and changes["changedby2047"] values rather than their 822
+ equivalents. Makes changes["changes"] a mandatory field. Adapt
+ to new utils.fix_maintainer() - reject on exception and adopt
+ foo2047 return value.
+ (check_dsc): if a mandatory field is missing don't do any further
+ checks and as a result reduce paranoia about dsc[var] existence.
+ Validate the maintainer field by calling new
+ utils.fix_maintainer().
+
+ * ashley (main): add changedby2047 and maintainer2047 to mandatory
+ changes fields. Promote maintainer822 to a mandatory changes
+ field. Add "pool name" to files fields.
+
+ * test/006/test.py: new file - tests for new
+ utils.fix_maintainer().
+
+2004-04-01 James Troup <james@nocrew.org>
+
+ * templates/lisa.prod (To): use __MAINTAINER_TO__ not __MAINTAINER__.
+
+ * jennifer (get_changelog_versions): create a symlink mirror of
+ the source files in the temporary directory.
+ (check_source): if check_dsc_against_db() couldn't find the
+ orig.tar.gz bail out.
+
+ * katie.py (Katie.check_dsc_against_db): if the orig.tar.gz is not
+ part of the upload store the path to it in pkg.orig_tar_gz and if
+ it can't be found set pkg.orig_tar_gz to -1.
+
+ Explicitly return the second value as None in the (usual) case
+ where we don't have to reprocess. Remove obsolete diagnostic
+ logs.
+
+ * lisa (prod_maintainer): don't return anything, no one cares. (pychecker)
+
+ * utils.py (temp_filename): new helper function that wraps around
+ tempfile.mktemp().
+
+ * katie.py (Katie.do_reject): use it and don't import tempfile.
+ * lisa (prod_maintainer): likewise.
+ (edit_note): likewise.
+ (edit_new): likewise.
+ * lauren (reject): likewise.
+ * melanie (main): likewise.
+ * neve (do_sources): likewise.
+ * rene (main): likewise.
+ * tea (validate_sources): likewise.
+
+2004-03-31 James Troup <james@nocrew.org>
+
+ * tea (validate_sources): remove unused 's' temporary variable.
+
+2004-03-21 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * billie: Added
+ * katie.conf (Billie): Added sample Billie stanza to katie.conf
+
+2004-03-15 James Troup <james@nocrew.org>
+
+ * jennifer (check_dsc): check changes["architecture"] for
+ source before we do anything else.
+
+2004-03-12 James Troup <james@nocrew.org>
+
+ * docs/README.config (Dir::Queue::BTSVersionTrack): document.
+
+ * katie.conf (Dir::Queue::BTSVersionTrack): define.
+
+ * katie.py (Katie.accept): add support for DebBugs Version
+ Tracking by writing out .versions (generated in jennifer's
+ get_changelog_versions()) and .debinfo (mapping of binary ->
+ source) files.
+
+ * ashley (main): add dsc["bts changelog"].
+
+ * katie.py (Katie.dump_vars): store dsc["bts changelog"] too.
+
+ * jennifer (check_diff): obsoleted by check_source(), removed.
+ (check_source): new function: create a temporary directory and
+ move into it and call get_changelog_versions().
+ (get_changelog_versions): new function: extract the source package
+ and optionally parse debian/changelog to obtain the version
+ history for the BTS.
+ (process_it): call check_source() rather than check_diff().
+
+2004-03-08 James Troup <james@nocrew.org>
+
+ * lisa (edit_index): Fix logic swap from 'use "if varfoo in
+ listbar" rather than "if listbar.count(varfoo)"' change on
+ 2004-02-24.
+
+2004-03-05 James Troup <james@nocrew.org>
+
+ * alicia (main): don't warn about not closing bugs - we don't
+ manage overrides through the BTS.
+
+2004-02-27 Martin Michlmayr <tbm@cyrius.com>
+
+ * docs/README.config: lots of updates and corrections.
+ * docs/README.first: likewise.
+
+ * docs/README.config: drop unused Dir::Queue::Root.
+ * katie.conf-non-US: likewise.
+ * katie.conf: likewise.
+ * katie.conf-security: likewise.
+
+2004-02-27 James Troup <james@nocrew.org>
+
+ * rose (process_tree): use 'if var in [ list ]' rather than long
+ 'if var == foo or var == bar or var == baz'. Suggested by Martin
+ Michlmayr.
+
+ * jennifer (check_files): reduce 'if var != None' to 'if var' as
+ suggested by Martin Michlmayr.
+ * catherine (poolize): likewise.
+ * charisma (main): likewise.
+ * halle (check_changes): likewise.
+ * heidi (main): likewise.
+ (process_file): likewise.
+ * kelly (install): likewise.
+ (stable_install): likewise.
+ * utils.py (fix_maintainer): likewise.
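One caveat worth noting about this shortening: `if var` and `if var != None` are not equivalent for falsy values, so the change is only safe where empty values should be treated like None:

```python
var = ""
assert not var         # the empty string is false...
assert var != None     # ...but it is not None
# So 'if var' skips a branch that 'if var != None' would take; fine
# for these option/field checks, wrong for e.g. a count that may be 0.
```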
+
+ * apt.conf: add support for debian-installer in testing-proposed-updates.
+ * katie.conf (Suite::Testing-Proposed-Updates::UdebComponents):
+ add - set to main.
+
+ * mkmaintainers: add "-T15" option to wget of non-US packages file
+ so that we don't hang cron.daily if non-US is down.
+
+ * templates/lisa.prod (Subject): Prefix with "Comments regarding".
+
+ * templates/jennifer.bug-close: add Source and Source-Version
+ pseudo-headers that may be used for BTS Version Tracking someday
+ [ajt@].
+
+ * rene (do_nbs): special case linux-wlan-ng like we do for pcmcia.
+ (main): likewise.
+
+ * cron.unchecked: it's /org/ftp.debian.org not ftp-master.
+
+2004-02-25 James Troup <james@nocrew.org>
+
+ * katie.conf (SuiteMappings): don't map testing-security to
+ proposed-updates.
+
+2004-02-24 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.__init__): remove unused 'values' field.
+
+ * utils.py (extract_component_from_section): use 's.find(c) != -1'
+ rather than 's.count(c) > 0'.
+
+ * katie.py (Katie.source_exists): use "if varfoo in listbar"
+ rather than "if listbar.count(varfoo)".
+ * halle (check_joey): likewise.
+ * jeri (check_joey): likewise.
+ * lisa (edit_index): likewise.
+ * jenna (stable_dislocation_p): likewise.
+
+ * jennifer (main): remove unused global 'nmu'.
+
+2004-02-03 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * pseudo-packages.maintainers (ftp.debian.org): Changed the maintainer
+ to be ftpmaster@ftp-master.debian.org to bring it into line with how
+ the dak tools close bugs.
+
+2004-02-02 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * katie.conf (Alicia): Added an Alicia section with email address
+ * templates/alicia.bug-close: Added
+ * docs/alicia.1.sgml: Added the docs for the -d/--done argument
+ * alicia (main): Added a -d/--done argument
+
+2004-02-02 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * templates/lisa.prod: Oops, missed a BITCH->PROD conversion
+
+2004-01-30 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * alicia (usage): Fixed usage message to offer section and priority
+ as separately optional arguments.
+ * alicia (main): Added a % (arg) interpolation needed when only
+ one of section or priority is provided and it cannot be found.
+
+2004-01-29 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * lisa (prod_maintainer): Added function to prod the maintainer without
+ accepting or rejecting the package
+ * templates/lisa.prod: Added this template for the prodding mail
+
+ * .cvsignore: Added neve-files which turns up in new installations
+
+2004-01-29 Daniel Silverstone <dsilvers@digital-scurf.org>
+
+ * alicia: Added
+ * docs/alicia.1.sgml: Added
+ * docs/Makefile: Added alicia to the list of manpages to build
+ * docs/README.names: Noted what alicia does
+ * docs/README.first: Noted where alicia is useful
+
+2004-01-21 James Troup <james@nocrew.org>
+
+ * madison (main): add -b/--binary-type.
+ (usage): likewise.
+
+ * denise (main): generate debian-installer overrides for testing
+ too.
+ * apt.conf: add support for debian-installer in testing.
+ * katie.conf (Suite::Testing::UdebComponents): set to main.
+
+ * katie.conf (Dinstall::SigningKeyIds): 2004 key.
+ * katie.conf-non-US (Dinstall::SigningKeyIds): likewise.
+ * katie.conf-security (Dinstall::SigningKeyIds): likewise.
+
+ * utils.py (parse_changes): don't process data not inside the
+ signed data. Thanks to Andrew Suffield <asuffield@debian.org> for
+ pointing this out.
+ * test/005/test.py (main): new test to test for above.
+
+2004-01-04 James Troup <james@nocrew.org>
+
+ * jenna (write_filelists): correct typo, s/Components/Component/
+ for Options.
+
+2004-01-04 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd: move update of overrides and Packages file...
+ * cron.unchecked: to here.
+ * katie.conf-non-US: (Dinstall::SigningKeyIds) update for 2003v2 key
+ * katie.conf-security: likewise
+
+2003-11-20 James Troup <james@nocrew.org>
+
+ * jenna (main): don't use utils.try_with_debug(), it produces way
+ too much output.
+
+ * halle (check_changes): don't error out if a .changes refers to a
+ non-existent package, just warn and skip the file.
+
+ * docs/README.stable-point-release: mention halle and .changes
+ obsoleted by removal through melanie. Update for 3.0r2.
+
+ * katie.conf (Suite::Stable::Version): bump to 3.0r2.
+ (Suite::Stable::Description): update for 3.0r2.
+ (Lauren::MoreInfoURL): likewise.
+ * katie.conf-non-US (Suite::Stable::Version): likewise.
+ (Suite::Stable::Description): likewise.
+ (Lauren::MoreInfoURL): likewise.
+
+ * apt.conf.stable (Default): don't define MaxContentsChange.
+ * apt.conf.stable-non-US (Default): likewise.
+
+ * lauren (reject): hack to work around partial replacement of an
+ upload, i.e. one or more binaries superseded by another source
+ package.
+
+2003-11-17 James Troup <james@nocrew.org>
+
+ * pseudo-packages.maintainers: point installation-reports at
+ debian-boot@l.d.o rather than debian-testing@l.d.o at jello@d.o's
+ request.
+
+ * utils.py (parse_changes): calculate the number of lines once
+ with len() rather than max().
+
+ * jennifer (check_dsc): handle the .orig.tar.gz disappearing from
+ files, since check_dsc_against_db() deletes the .orig.tar.gz
+ entry.
+
+2003-11-13 Ryan Murray <rmurray@debian.org>
+
+ * apt.conf: specify a src override file for debian-installer
+
+2003-11-10 James Troup <james@nocrew.org>
+
+ * fernanda.py (strip_pgp_signature): new function - strips PGP
+ signature from a file and returns the modified contents of the
+ file in a string.
+ (display_changes): use it.
+ (read_dsc): likewise.
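The armour-stripping can be sketched like this (a simplification: dash-escaped lines are not un-escaped here, and fernanda's real function works on a filename rather than a string):

```python
def strip_pgp_signature(text):
    """Return text with OpenPGP clearsign armour removed; unsigned
    input comes back unchanged."""
    if "-----BEGIN PGP SIGNED MESSAGE-----" not in text:
        return text
    out = []
    in_armour_headers = False
    in_body = False
    for line in text.splitlines(True):
        if line.startswith("-----BEGIN PGP SIGNED MESSAGE-----"):
            in_armour_headers = True
        elif in_armour_headers:
            if not line.strip():           # blank line ends the headers
                in_armour_headers = False
                in_body = True
        elif line.startswith("-----BEGIN PGP SIGNATURE-----"):
            break
        elif in_body:
            out.append(line)
    return "".join(out)
```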
+
+2003-11-09 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd: export accepted_autobuild table for unstable, and use
+ it to generate the incoming Packages/Sources rather than having apt
+ walk the directory.
+ * apt.conf.buildd: use exported table from cron.buildd to generate
+ Packages/Sources
+
+2003-11-07 James Troup <james@nocrew.org>
+
+ * kelly: import errno.
+
+ * katie.py (Katie.build_summaries): sort override disparities.
+
+ * kelly (install): set dsc_component based on the .dsc's component
+ not a random binary's.
+
+2003-10-29 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.build_summaries): don't assume changes["source"]
+ exists since it might not.
+
+2003-10-20 James Troup <james@nocrew.org>
+
+ * pseudo-packages.maintainers: update security.d.o to use
+ team@s.d.o at joy@'s request.
+
+2003-10-17 James Troup <james@nocrew.org>
+
+ * jennifer (check_dsc): use .startswith rather than .find() == 0.
+
+2003-10-17 Martin Michlmayr <tbm@cyrius.com>
+
+ * tea (chk_bd_process_dir): use .endswith rather than slice.
+
+2003-10-14 James Troup <james@nocrew.org>
+
+ * tea (check_build_depends): new function.
+ (chk_bd_process_dir): likewise. Validates build-depends in .dsc's
+ in the archive.
+ (main): update for new function.
+ (usage): likewise.
+
+ * katie.py (Katie.do_reject): sanitize variable names,
+ s/reject_filename/reason_filename/, s/fd/reason_fd/. Move shared
+ os.close() to outside if clause.
+
+ * jennifer (check_dsc): check build-depends and
+ build-depends-indep by running them past apt_pkg.ParseSrcDepends.
+ Fold the ARRAY check into the same code block and tidy up its
+ rejection message.
+ (check_changes): ensure that the Files field is non-empty.
+ Suggested by Santiago Vila <sanvila@unex.es>
+ (check_changes): normalize reject messages.
+ (check_dsc): instead of doing most of the checks inside a for loop
+ and an if, find the dsc_filename in a short loop over files first
+ and then do all the checks. Add check for more than one .dsc in a
+ .changes which we can't handle. Normalize reject messages.
+
+2003-10-13 James Troup <james@nocrew.org>
+
+ * katie.conf (Dinstall::Reject::NoSourceOnly): set to true.
+ * katie.conf-non-US (Dinstall::Reject::NoSourceOnly): likewise.
+
+ * jennifer (check_files): Set 'has_binaries' and 'has_source'
+ variables while iterating over 'files'. Don't regenerate it when
+ checking for source if source is mentioned.
+
+ Reject source only uploads if the config variable
+ Dinstall::Reject::NoSourceOnly is set.
+
+2003-10-03 James Troup <james@nocrew.org>
+
+ * rene (main): add nasty hardcoded reference to debian-installer
+ so we detect NBS .udebs.
+
+2003-09-29 James Troup <james@nocrew.org>
+
+ * apt.conf (old-proposed-updates): remove.
+ * apt.conf-non-US (old-proposed-updates): likewise.
+
+2003-09-24 James Troup <james@nocrew.org>
+
+ * tea (check_files_not_symlinks): new function, ensure files
+ mentioned in the database aren't symlinks. Includes code to
+ update any files that are like this to their real filenames +
+ location; commented out for now, though.
+ (usage): update for new function.
+ (main): likewise.
+
+2003-09-24 Anthony Towns <ajt@debian.org>
+
+ * vars: external-overrides variable added
+ * cron.daily: Update testing/unstable Task: overrides from joeyh
+ managed external source.
+
+2003-09-22 James Troup <james@nocrew.org>
+
+ * kelly (install): if we can't move the .changes into queue/done,
+ fail rather than warn and carry on. The old behaviour pre-dates NI and
+ doesn't make much sense now since jennifer checks both
+ queue/accepted and queue/done for any .changes files she's
+ processing.
+
+ * utils.py (move): don't throw exceptions on existing files or
+ can't overwrite, instead just fubar out.
+
+ * jennifer (check_dsc): also check Build-Depends-Indep for
+ ARRAY-lossage. Noticed by Matt Zimmerman <mdz@debian.org>.
+
+2003-09-18 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.close_bugs): only log the bugs we've closed
+ once.
+
+ * kelly (main): log as 'kelly', not 'katie'.
+
+2003-09-16 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.check_binary_against_db): likewise normalize.
+
+ * jennifer (check_changes): normalize reject message for "changes
+ file already exists" to be %s: <foo>.
+ (check_dsc): add a check for 'Build-Depends: ARRAY(<hex>)'
+ produced by broken dpkg-source in 1.10.11. Tone down and
+ normalize rejection message for incompatible 'Format' version
+ numbers.
+ (check_diff): likewise tone down and normalize.
+
+2003-09-07 James Troup <james@nocrew.org>
+
+ * utils.py (parse_changes): if dsc_whitespace_rules is false,
+ don't bomb out on bogus empty lines.
+ (build_file_list): check for changes["files"] earlier. use Dict
+ to create files[name] dictionary.
+ (send_mail): don't bother validating arguments.
+ (check_signature): minor improvements to some of the rejection
+ messages including listing the key id of the key that wasn't found
+ in the keyring.
+ (wrap): new function.
+
+ * tea: add new check 'validate-indices' that ensures all files
+ mentioned in indices (Packages, Sources) files do in fact exist.
+
+ * catherine (poolize): use a local re_isadeb which handles legacy
+ (i.e. no architecture) style .deb filenames.
+
+ * rosamund: new script.
+
+ * rhona (check_binaries): when checking for binary packages not in
+ a suite, don't bother selecting files that already have a
+ last_used date.
+ (check_sources): likewise.
+
+ * rhona: change all SQL EXISTS sub-query clauses to use the
+ postgres suggested convention of "SELECT 1 FROM".
+ * andrea (main): likewise.
+ * tea (check_override): likewise.
+ * catherine (main): likewise.
+
+ * katie.conf (Suite): remove OldStable and Old-Proposed-Updates
+ entries and in other suites MustBeNewerThan's.
+ (SuiteMappings): likewise.
+ * katie.conf-non-US: likewise.
+ * katie.conf-security: likewise.
+
+ * apt.conf-security: remove oldstable.
+ * apt.conf.stable: likewise.
+ * apt.conf.stable-non-US: likewise.
+ * cron.buildd-security: likewise.
+ * cron.daily-security: likewise.
+ * vars-security (suites): likewise.
+ * wanna-build/trigger.daily: likewise.
+
+ * claire.py (clean_symlink): move...
+ * utils.py (clean_symlink): here.
+
+ * claire.py (find_dislocated_stable): update accordingly.
+
+2003-08-16 Anthony Towns <ajt@debian.org>
+
+ * katie.py (source_exists): expand the list of distributions
+ the source may exist in to include any suite that's mapped to
+ the destination suite (even transitively (a->b->c)). This should
+ unbreak binary uploads to *-proposed-updates.
+
+2003-08-09 Randall Donald <rdonald@debian.org>
+
+ * lisa (recheck): change changes["distribution"].keys() to
+ Katie.pkg.changes...
+
+2003-08-08 Randall Donald <rdonald@debian.org>
+
+ * katie.py: only tag bugs as fixed-in-experimental for
+ experimental uploads
+
+2003-07-26 Anthony Towns <ajt@debian.org>
+
+ * katie.py (source_exists): add an extra parameter to limit the
+ distribution(s) the source must exist in.
+ * kelly, lisa, jennifer: update to use the new source_exists
+
+2003-07-15 Anthony Towns <ajt@debian.org>
+
+ * ziyi: quick hack to support a FakeDI line in apt.conf to generate
+ checksums for debian-installer stuff even when it's just a symlink to
+ another suite
+
+ * apt.conf: add the FakeDI line
+
+2003-06-09 James Troup <james@nocrew.org>
+
+ * kelly (check): make sure the 'file' we're looking for in 'files'
+ hasn't been deleted by katie.check_dsc_against_db().
+
+2003-05-07 James Troup <james@nocrew.org>
+
+ * helena (time_pp): fix s/years/year/ typo.
+
+2003-04-29 James Troup <james@nocrew.org>
+
+ * madison (usage): document -c/--component.
+
+ * madison (usage): Fix s/seperated/separated/.
+ * melanie (usage): likewise.
+ * jenna (usage): likewise.
+
+2003-04-24 James Troup <james@nocrew.org>
+
+ * cron.daily-non-US: if there's nothing for kelly to install, say
+ so.
+
+ * jennifer (check_timestamps): print sys.exc_value as well as
+ sys.exc_type when capturing exceptions. Prefix 'timestamp check
+ failed' with 'deb contents' to make it clearer what timestamp(s)
+ are being checked.
+
+2003-04-15 James Troup <james@nocrew.org>
+
+ * cron.daily-non-US: only run kelly if there are some .changes
+ files in accepted.
+
+ * rene: add -m/--mode argument which can be either daily (default)
+ or full. In daily mode only 'nviu' and 'nbs' checks are run.
+ Various changes to make this possible including a poor attempt at
+ splitting main() up a little. De-hardcode suite numbers from SQL
+ queries and return quietly from do_nviu() if experimental doesn't
+ exist (i.e. non-US). Hardcode pcmcia-cs as dubious NBS since it
+ is.
+
+ * debian/control (Depends): remove python-zlib as it's obsolete.
+
+ * charisma (main): don't slice the \n off strings when we're
+ strip()-ing it anyway.
+ * heidi (set_suite): likewise.
+ (process_file): likewise.
+ * natalie (process_file): likewise.
+
+2003-04-08 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.check_dsc_against_db): improve the speed of two
+ slow queries by using one LIKE '%foo%' and then matching against
+ '%s' or '/%s$' in python. Also only join location when we need it
+ (i.e. the .orig.tar.gz query). On auric, this knocks ~3s of each
+ query, so 6s for each sourceful package.
+
+ * cron.daily: invoke rene and send the report to ftpmaster.
+ * cron.daily-non-US: likewise.
+
+2003-03-14 James Troup <james@nocrew.org>
+
+ * utils.py (send_mail): default filename to blank.
+ * amber (make_advisory): adapt.
+ * jennifer (acknowledge_new): likewise.
+ * katie.py (Katie.close_bugs): likewise.
+ (Katie.announce): likewise.
+ (Katie.accept): likewise.
+ (Katie.check_override): likewise.
+ (Katie.do_reject): likewise.
+ * kelly (do_reject): likewise.
+ (stable_install): likewise.
+ * lisa (do_bxa_notification): likewise.
+ * lauren (reject): likewise.
+ * melanie (main): likewise.
+
+ * rene (add_nbs): check any NBS packages against unstable to see
+ if they haven't been removed already.
+
+ * templates/katie.rejected: remove paragraph about rejected files
+ since they're o-rwx due to c-i-m and the uploader can't do
+ anything about them and shania will take care of them anyway.
+
+ * madison (usage): update usage example to use comma separation.
+ * melanie (usage): likewise.
+
+ * utils.py (split_args): new function; splits command line
+ arguments either by space or comma (whichever is used). Also has
+ optional-but-default DWIM spurious space detection to avoid
+ 'command -a i386, m68k' problems.
+ (parse_args): use it.
+ * melanie (main): likewise.
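In outline (error handling hedged to a ValueError here, where dak fubar()s out instead):

```python
def split_args(s, dwim=True):
    """Split an argument list on commas if any are present, else on
    whitespace.  With dwim set, a trailing comma is taken as the
    tell-tale of 'command -a i386, m68k' -- the shell already split
    off 'm68k' -- and rejected rather than silently mis-parsed."""
    if "," not in s:
        return s.split()
    if dwim and s.endswith(","):
        raise ValueError("split_args: trailing comma, spurious space?")
    return s.split(",")
```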
+
+ * melanie (main): force the admin to tell someone if we're not
+ doing a rene-led removal (or closing a bug, which counts as
+ telling someone).
+
+2003-03-05 James Troup <james@nocrew.org>
+
+ * katie.conf (Section): add embedded, gnome, kde, libdevel, perl
+ and python sections.
+ * katie.conf-security (Section): likewise.
+
+ * add_constraints.sql: add uid and uid_id_seq to grants.
+
+ * lisa (determine_new): also warn about adding overrides to
+ oldstable.
+
+ * madison (main): make the -S/--source-and-binary query obey
+ -s/--suite restrictions.
+
+2003-03-03 James Troup <james@nocrew.org>
+
+ * madison (main): if the Archive_Maintenance_In_Progress lockfile
+ exists, warn the user that our output might seem strange. (People
+ get confused by multiple versions in a suite which happens
+ post-kelly but pre-jenna.)
+
+2003-02-21 James Troup <james@nocrew.org>
+
+ * kelly (main): we don't need to worry about StableRejector.
+
+ * melanie (main): sort versions with apt_pkg.VersionCompare()
+ prior to output.
+
+ * lauren: new script to manually reject packages from
+ proposed-updates. Updated code from pre-NI kelly (nee katie).
+
+2003-02-20 James Troup <james@nocrew.org>
+
+ * kelly (init): remove unused -m/--manual-reject argument.
+
+ * katie.py (Katie.force_reject): renamed from force_move to make
+ it more explicit what this function does.
+ (Katie.do_reject): update to match.
+
+ * utils.py (prefix_multi_line_string): add an optional argument
+ include_blank_lines which defaults to 0. If non-zero, blank lines
+ will be included in the output.
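A sketch of the helper with that optional argument (a simplified rendering, not dak's exact code):

```python
def prefix_multi_line_string(text, prefix, include_blank_lines=False):
    """Prefix each line of a multi-line string; blank lines are
    dropped unless include_blank_lines is set."""
    out = []
    for line in text.split("\n"):
        line = line.strip()
        if line or include_blank_lines:
            out.append(prefix + line)
    return "\n".join(out)
```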
+
+ * katie.py (Katie.do_reject): don't add leading space to each line
+ of the reject message. Include blank lines when showing the
+ message to the user.
+
+2003-02-19 Martin Michlmayr <tbm@cyrius.com>
+
+ * utils.py (fix_maintainer): replace pointless re.sub() with
+ simple string format.
+
+2003-02-11 James Troup <james@nocrew.org>
+
+ * lisa (edit_overrides): only strip-to-one-char and upper-case
+ non-numeric answers. Fixes editing of items with indices >= 10;
+ noticed by Randall.
+ (edit_overrides): correct order of arguments to "not a valid
+ index" error message.
+
+ * jenna (cleanup): call resolve_arch_all_vs_any() rather than
+ remove_duplicate_versions(); thanks to aj for the initial
+ diagnosis.
+ (remove_duplicate_versions): correct how we return
+ dominant_versions.
+ (resolve_arch_all_vs_any): arch_all_versions needs to be a list of
+ a tuple rather than just a tuple.
+
+2003-02-10 James Troup <james@nocrew.org>
+
+ * emilie: new script - sync fingerprint and uid tables with a
+ debian.org LDAP DB.
+
+ * init_pool.sql: new table 'uid'; contains user ids. Reference it
+ in 'fingerprint'.
+
+ * db_access.py (get_or_set_uid_id): new function.
+
+ * jennifer (main): update locking to a) not use FCNTL (deprecated
+ in python >= 2.2) and b) acknowledge upstream's broken
+ implementation of lockf (see Debian bug #74777), c) try to acquire
+ the lock non-blocking.
+ * kelly (main): likewise.
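A sketch of that non-blocking locking scheme using fcntl.lockf (names are hypothetical; the errno check covers the lockf quirks mentioned above, where a held lock can surface as either EACCES or EAGAIN):

```python
import errno
import fcntl
import os

def acquire_lock(path):
    """Take an exclusive lock without blocking, so a second jennifer
    or kelly run fails fast instead of queueing behind the first.
    Returns the open fd, which must stay open to hold the lock."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError as e:
        os.close(fd)
        if e.errno in (errno.EACCES, errno.EAGAIN):
            raise RuntimeError("lock %s held by another process" % path)
        raise
    return fd
```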
+
+ * contrib/python_1.5.2-fcntl_lockf.diff: obsolete, removed.
+
+ * madison (main): only append the package to new_packages if it's
+ not already in there; fixes -S/--source-and-binary for cases where
+ the source builds a binary package of the same name.
+
+2003-02-10 Anthony Towns <ajt@debian.org>
+
+ * madison (main): use explicit JOIN syntax for
+ -S/--source-and-binary queries to reduce the query's runtime from
+ >10 seconds to negligible.
+
+2003-02-08 James Troup <james@nocrew.org>
+
+ * rene (main): in the NVIU output, append items to lists, not
+ extend them; fixes amusing suggestion that "g n u m e r i c" (sic)
+ should be removed.
+
+2003-02-07 James Troup <james@nocrew.org>
+
+ * apt.conf (tree "dists/unstable"): Add bzip2-ed Packages and
+ Sources [aj].
+
+ * pseudo-packages.maintainers (bugs.debian.org): s/Darren
+ O. Benham/Adam Heath/.
+
+ * katie.conf (Suite::Stable::Version): bump to 3.0r1a.
+ (Suite::Stable::Description): update for 3.0r1a.
+ (Dinstall::SigningKeyIds): update for 2003 key [aj].
+
+ * utils.py (gpgv_get_status_output): rename from
+ get_status_output().
+
+ * neve (check_signature): use gpgv_get_status_output and Dict from
+ utils.py. Add missing newline to error message about duplicate tokens.
+
+ * saffron (per_arch_space_use): also print space used by source.
+ (output_format): correct string.join() invocation.
+
+ * jennifer (check_signature): ignore duplicate EXPIRED tokens.
+
+2003-02-04 James Troup <james@nocrew.org>
+
+ * cron.buildd: correct generation of Packages/Sources and grep out
+ non-US/non-free as well as non-free.
+
+2003-02-03 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd: generate quinn-diff output with full Packages/Sources
+ files to get out-of-date vs. uncompiled right.
+ * apt.conf.buildd: no longer generate uncompressed files, as they
+ are generated in cron.buildd instead
+ * cron.buildd: add -i option to quinn-diff to ignore binary-all packages
+ * apt.conf.buildd: remove and readd udeb to extensions. If the udebs
+ aren't in the packages file, the arch that uploaded them will build
+ them anyways...
+
+2003-01-30 James Troup <james@nocrew.org>
+
+ * rene (main): only print suggested melanie command when there's
+ some NBS to remove.
+
+2003-01-30 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd: fix incorrectly inverted lockfile check
+
+2003-01-29 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd: generate override.sid.all3.src
+ * apt.conf.buildd: use generated override.sid.all3.src
+
+2003-01-27 Martin Michlmayr <tbm@cyrius.com>
+
+ * utils.py (get_status_output): moved from jennifer.
+ (Dict): likewise.
+ (check_signature): likewise.
+
+ * jennifer (get_status_output): moved to utils.py.
+ (Dict): likewise.
+ (check_signature): likewise.
+
+ * utils.py (check_signature): add an argument to specify which
+ function to call when an error was found.
+ (check_signature): document this function better.
+
+ * jennifer (check_files): pass the reject function as an argument
+ to utils.check_signature.
+ (process_it): likewise.
+
+2003-01-20 James Troup <james@nocrew.org>
+
+ * rene (main): lots of changes to improve the output and make it
+ more useful.
+
+ * katie.py (Katie.check_override): make the override messages
+ clearer (hopefully).
+
+2002-12-26 James Troup <james@nocrew.org>
+
+ * ziyi (usage): document the ability to pass suite(s) as
+ argument(s).
+ (main): read apt.conf after checking for -h/--help.
+
+ * tea (main): take the check to run as an argument.
+
+ * saffron.R: R script to graph daily install runs.
+
+ * saffron: new script; various stats functions.
+
+ * rhona (main): connect to the database after checking for -h/--help.
+
+ * neve (do_da_do_da): if no -a/--action option is given, bail out.
+
+ * melanie (main): sort versions with utils.arch_compare_sw().
+
+ * madison (usage): alphabetize order of options.
+ * melanie (usage): likewise.
+
+ * kelly (usage): fix usage short description (we aren't dinstall).
+
+ * julia (usage): fix usage description and alphabetize order of
+ options.
+
+ * jeri (usage): fix usage short description.
+
+ * jennifer (main): move --help and --version checks from here...
+ (init): to here so that they work with an empty katie.conf.
+ * kelly: likewise.
+
+ * alyson (usage): new function.
+ (main): use it.
+ * andrea: likewise.
+ * ashley: likewise.
+ * cindy: likewise.
+ * denise: likewise.
+ * helena: likewise.
+ * neve: likewise.
+ * rene: likewise.
+ * rose: likewise.
+ * tea: likewise.
+
+ * apt.conf.stable (tree "dists/stable"): add missing ExtraOverride
+ entry that caused tasks to be omitted from 3.0r1.
+
+2002-12-10 James Troup <james@nocrew.org>
+
+ * jennifer (check_files): sanity check the Depends field to ensure
+ it's non-empty if present since apt chokes on an empty one.
+ Thanks to Ryan Murray for the idea.
+
+2002-12-08 James Troup <james@nocrew.org>
+
+ * katie.conf-security (Helena::Directories): new; include accepted
+ in addition to byhand and new.
+
+ * helena (process_changes_files): use utils.arch_compare_sw().
+ Justify things based on the longest [package, version,
+ architecture]. Reduce '[note]' to '[N]' to save space, and remove
+ the commas in architecture and version lists for the same reason.
+ (main): make directories we process configurable through
+ Helena::Directories in the config file; if that doesn't exist
+ default to the old hardcoded values (byhand & new).
+
+ * utils.py (arch_compare_sw): moved here from madison.
+ * madison (main): adjust to compensate.
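The arch_compare_sw comparator mentioned above can be illustrated with a short sketch. This is a hypothetical re-creation: the "source"-first ordering and alphabetical fallback are assumptions based on how the helper is used for justified output, not the actual utils.arch_compare_sw code.

```python
def arch_compare_sw(a, b):
    # Sort architecture names with "source" first, then alphabetically.
    # Hypothetical sketch -- dak's real comparator may differ in detail.
    key_a = (a != "source", a)  # False < True, so "source" sorts first
    key_b = (b != "source", b)
    return (key_a > key_b) - (key_a < key_b)

# Modern Python sorting takes a key function rather than a comparator:
archs = sorted(["i386", "source", "all", "hppa"],
               key=lambda x: (x != "source", x))
```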
+
+2002-12-06 James Troup <james@nocrew.org>
+
+ * ziyi (main): fix "suite foo not in apt.conf" msg to use the
+ right filename.
+
+2002-12-05 James Troup <james@nocrew.org>
+
+ * katie.conf-non-US (Julia::KnownPostgres): add 'udmsearch'.
+
+2002-11-28 Randall Donald <rdonald@debian.org>
+
+ * fernanda.py (read_control): fix typo of 'Architecture'.
+
+2002-11-26 James Troup <james@nocrew.org>
+
+ * lisa (check_pkg): call less with '-R' so we see the colour from
+ Randall's fernanda changes.
+
+ * neve (process_sources): if Directory points to a legacy location
+ but the .dsc isn't there, assume it's broken and look in the pool.
+ (update_section): new, borrowed from alyson.
+ (do_da_do_da): use it.
+ (process_packages): add suite_id to the cache key used for
+ arch_all_cache since otherwise we only add a package to the first
+ suite it's in and ignore any subsequent ones.
+
+ * katie.conf-non-US (Location): fixed to reflect reality (all
+ suites, except old-proposed-updates (which is legacy-mixed)) are
+ pool.
+
+ * utils.py (try_with_debug): wrapper for print_exc().
+ * jenna (main): use it.
+ * neve (main): likewise.
+
+2002-11-25 Randall Donald <rdonald@debian.org>
+
+ * fernanda.py (main): added -R to less command line for raw control
+ character support to print colours.
+ (check_deb): instead of running dpkg -I on the deb file, call
+ output_deb_info, the new colourized control reporter.
+ (check_dsc): add call to the colourized dsc info reader, read_dsc,
+ instead of printing out each .dsc line.
+ (output_deb_info): new function. Outputs each key/value pair from
+ read_control except in special cases where we highlight section,
+ maintainer, architecture, depends and recommends.
+ (create_depends_string): new function. Takes a Depends tree and looks
+ up its components via the projectb db, colourizes them and constructs
+ a depends string in the original order.
+ (read_dsc): new function. Reads and parses .dsc info via
+ utils.parse_changes. Build-Depends and Build-Depends-Indep are
+ colourized.
+ (read_control): new function. Reads and parses control info via
+ apt_pkg. Depends and Recommends are split into list structures,
+ Section and Architecture are colourized. Maintainer is colourized
+ if it has a localhost.localdomain address.
+ (split_depends): new function. Creates a list of lists of
+ dictionaries of depends (package, version relation). The top list is
+ collected from comma-delimited items; sub-lists are '|' delimited.
+ (get_comma_list): new function. Splits string input on commas.
+ (get_or_list): new function. Splits string input on '|' delimiters.
+ (get_depends_parts): new function. Creates a dictionary of package
+ name and version relation from a dependency string.
+ Colours for section and depends are per component. Unfound depends
+ are in bold. Lookups using version info are not supported yet.
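The split_depends structure described above can be sketched as follows. This is a minimal approximation (the regex and dictionary field names are assumptions), omitting the projectb lookups and colourizing that fernanda.py actually performs.

```python
import re

def split_depends(depends):
    # Top-level list: comma-delimited clauses; each clause is a list of
    # |-delimited alternatives; each alternative is a dict holding the
    # package name and its (relation, version), as described above.
    result = []
    for clause in depends.split(","):
        alternatives = []
        for alt in clause.split("|"):
            m = re.match(r"^(\S+)(?:\s*\(\s*(<<|<=|=|>=|>>)\s*([^)]+?)\s*\))?$",
                         alt.strip())
            if m:
                package, relation, version = m.groups()
                alternatives.append({"package": package,
                                     "relation": relation,
                                     "version": version})
        result.append(alternatives)
    return result

parsed = split_depends("libc6 (>= 2.2.4-4), exim | mail-transport-agent")
```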
+
+2002-11-22 James Troup <james@nocrew.org>
+
+ * katie.conf-security (Julia::KnownPostgres): add 'www-data' and
+ 'udmsearch'.
+
+ * amber (make_advisory): string.atol() is deprecated and hasn't
+ been ported to string methods. Use long() instead.
+
+ * init_pool.sql: explicitly specify the encoding (SQL_ASCII) when
+ creating the database since we'll fail badly if it's created with
+ e.g. UNICODE encoding.
+
+ * rose (main): AptCnf is a global.
+
+ * neve (get_location_path): new function; determines the location
+ from the first (left-most) directory of a Filename/Directory.
+ (process_sources): don't need 'location' anymore. Use
+ utils.warn(). Use the Directory: field for each package to find
+ the .dsc. Use get_location_path() to determine the location for
+ each .dsc.
+ (process_packages): don't need 'location' anymore. Use
+ utils.warn(). Use get_location_path().
+ (do_sources): don't need 'location', drop 'prefix' in favour of
+ being told the full path to the Sources file, like
+ process_packages().
+ (do_da_do_da): main() renamed, so that main can call us in a
+ try/except. Adapt for the changes in do_sources() and
+ process_packages() above. Assume Sources and Packages file are in
+ <root>/dists/<etc.>. Treat pool locations like we do legacy ones.
+
+ * katie.conf-security (Location): fixed to reflect reality (all
+ suites are pool, not legacy).
+
+ * utils.py (print_exc): more useful (i.e. much more verbose)
+ traceback; a recipe from the Python cookbook.
+ * jenna (main): use it.
+ * neve (main): likewise.
+
+2002-11-19 James Troup <james@nocrew.org>
+
+ * kelly (install): fix brain-damaged CopyChanges/CopyKatie
+ handling which was FUBAR for multi-suite uploads. Now we just
+ make a dictionary of destinations to copy to and iterate over
+ those.
+
+ * fernanda.py (check_deb): run linda as well as lintian.
+
+2002-10-21 James Troup <james@nocrew.org>
+
+ * melanie (main): change X-Melanie to X-Katie and prefix it with
+ 'melanie '.
+
+ * lisa (main): prefix X-Katie with 'lisa '.
+
+ * jennifer (clean_holding): fix typo in string method changes;
+ s/file.find(file/file.find(/.
+
+ * cron.daily: invoke helena and send the report to ftpmaster.
+ * cron.daily-non-US: likewise.
+
+2002-10-16 James Troup <james@nocrew.org>
+
+ * kelly (check): call reject() with a blank prefix when parsing
+ the return of check_dsc_against_db() since it does its own
+ prefix-ing.
+
+ * rose: new script; only handles directory creation initially.
+
+ * katie.conf (Dinstall::NewAckList): obsolete, removed.
+ * katie.conf-non-US (Dinstall::NewAckList): likewise.
+
+2002-10-06 James Troup <james@nocrew.org>
+
+ * rene (main): remove bogus argument handling.
+
+ * kelly: katie, renamed.
+ * cron.daily: adapt for katie being renamed to kelly.
+ * cron.daily-non-US: likewise.
+ * amber (main): likewise.
+
+ * Changes for python 2.1.
+
+ * kelly: time.strftime no longer requires a second argument of
+ "time.localtime(time.time())".
+ * logging.py: likewise.
+ * rhona: likewise.
+ * shania (init): likewise.
+
+ * amber: use augmented assignment.
+ * catherine (poolize): likewise.
+ * claire.py (fix_component_section): likewise.
+ * halle (check_changes): likewise.
+ * helena: likewise.
+ * jenna: likewise.
+ * jennifer: likewise.
+ * jeri: likewise.
+ * katie.py: likewise.
+ * kelly: likewise.
+ * lisa: likewise.
+ * madison (main): likewise.
+ * melanie: likewise.
+ * natalie: likewise.
+ * neve: likewise.
+ * rhona: likewise.
+ * tea: likewise.
+ * utils.py: likewise.
+ * ziyi: likewise.
+
+ * amber: use .endswith.
+ * fernanda.py: likewise.
+ * halle (main): likewise.
+ * jennifer: likewise.
+ * jeri: likewise.
+ * katie.py: likewise.
+ * kelly: likewise.
+ * lisa: likewise.
+ * neve: likewise.
+ * shania (main): likewise.
+ * utils.py: likewise.
+
+ * alyson: use string methods.
+ * amber: likewise.
+ * andrea: likewise.
+ * ashley: likewise.
+ * catherine: likewise.
+ * charisma: likewise.
+ * claire.py: likewise.
+ * db_access.py: likewise.
+ * denise: likewise.
+ * halle: likewise.
+ * heidi: likewise.
+ * helena: likewise.
+ * jenna: likewise.
+ * jennifer: likewise.
+ * jeri: likewise.
+ * julia: likewise.
+ * katie.py: likewise.
+ * kelly: likewise.
+ * lisa: likewise.
+ * logging.py: likewise.
+ * madison: likewise.
+ * melanie: likewise.
+ * natalie: likewise.
+ * neve: likewise.
+ * rene: likewise.
+ * tea: likewise.
+ * utils.py: likewise.
+ * ziyi: likewise.
+
+2002-09-20 Martin Michlmayr <tbm@cyrius.com>
+
+ * utils.py (parse_changes): use <string>.startswith() rather than
+ string.find().
+
+2002-08-27 Anthony Towns <ajt@debian.org>
+
+ * katie.py (in_override_p): when searching for a source override,
+ and the dsc query misses, search for both udeb and deb overrides
+ as well. Should fix the UNACCEPT issues with udebs.
+
+2002-08-24 James Troup <james@nocrew.org>
+
+ * melanie (main): remove gratuitous WHERE EXISTS sub-select from
+ source+binary package finding code which was causing severe
+ performance degradation with postgres 7.2.
+
+2002-08-14 James Troup <james@nocrew.org>
+
+ * julia (main): use the pwd.getpwall() to get system user info
+ rather than trying to read a password file. Add a -n/--no-action
+ option.
+
+ * cron.hourly: julia no longer takes any arguments.
+ * cron.hourly-non-US: likewise.
+
+2002-08-07 James Troup <james@nocrew.org>
+
+ * katie (install): handle multi-suite uploads when CopyChanges
+ and/or CopyKatie are in use, ensuring we only copy stuff once.
+
+2002-08-01 Ryan Murray <rmurray@debian.org>
+
+ * wanna-build/trigger.daily: initial commit, with locking
+ * cron.buildd: add locking against daily run
+
+2002-07-30 James Troup <james@nocrew.org>
+
+ * melanie (main): readd creation of suite_ids_list so melanie is
+ remotely useful again.
+
+ * katie.conf: adapt for woody release; disable
+ StableDislocationSupport, add oldstable, adjust other suites and
+ mappings, fix up location.
+ * katie.conf-non-US: likewise.
+ * katie.conf-security: likewise.
+
+ * apt.conf.stable: adapt for woody release; add oldstable, adjust
+ stable.
+ * apt.conf.stable-non-US: likewise.
+
+ * apt.conf-security: adapt for woody release; add oldstable,
+ adjust stable and testing.
+ * cron.daily-security: likewise.
+ * cron.buildd-security: likewise.
+
+ * apt.conf: adapt for woody release; rename woody-proposed-updates
+ to testing-proposed-updates and proposed-updates to
+ old-proposed-updates.
+ * apt.conf-non-US: likewise.
+
+ * vars-non-US (copyoverrides): add sarge.
+ * vars (copyoverrides): likewise.
+
+ * vars-security (suites): add oldstable.
+
+2002-07-22 Ryan Murray <rmurray@debian.org>
+
+ * apt.conf.security-buildd: use suite codenames instead of
+ distnames.
+
+2002-07-16 James Troup <james@nocrew.org>
+
+ * denise (main): fix filenames for testing override files.
+
+2002-07-14 James Troup <james@nocrew.org>
+
+ * jennifer (process_it): call check_md5sums later so we can check
+ files in the .dsc too
+ (check_md5sums): check files in the .dsc too. Check both md5sum
+ and size.
+
+ * melanie (main): use parse_args() and join_with_commas_and() from
+ utils. If there's nothing to do, say so and exit, don't ask for
+ confirmation etc.
+
+ * amber (join_with_commas_and): moved from here to ...
+ * utils.py (join_with_commas_and): here.
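Judging by its name and its use in human-readable mails, join_with_commas_and probably behaves like the sketch below. This is an assumption based on the name alone, not the actual utils.py implementation.

```python
def join_with_commas_and(items):
    # "a", "a and b", "a, b and c" -- hypothetical re-creation; the
    # real utils.join_with_commas_and may differ in edge cases.
    if not items:
        return ""
    if len(items) == 1:
        return items[0]
    return ", ".join(items[:-1]) + " and " + items[-1]
```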
+
+2002-07-13 James Troup <james@nocrew.org>
+
+ * madison (main): use parse_args() from utils. Support
+ -c/--component.
+
+ * jenna (parse_args): moved from here to ...
+ * utils.py (parse_args): here.
+
+ * katie.conf (Architectures): minor corrections to the description
+ for arm, mips and mipsel.
+ * katie.conf-non-US (Architectures): likewise.
+ * katie.conf-security (Architectures): likewise.
+
+ * cron.daily-security: use natalie's new -a/--add functionality to
+ flesh out the security overrides.
+
+2002-07-12 James Troup <james@nocrew.org>
+
+ * cron.buildd (ARCHS): add arm.
+
+ * katie.conf: 2.2r7 was released.
+ * katie.conf-non-US: likewise.
+
+ * utils.py (parse_changes): handle a multi-line field with no
+ starting line.
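A "multi-line field with no starting line" is one whose value begins only on the continuation lines, e.g. "Description:" immediately followed by indented lines. A minimal sketch of handling that case (this is an illustration, not dak's actual utils.parse_changes):

```python
def parse_fields(text):
    # Minimal RFC822-style field parser; handles a field that is empty
    # after the colon, with its value entirely in continuation lines.
    fields, current = {}, None
    for line in text.splitlines():
        if line[:1] in (" ", "\t") and current is not None:
            fields[current] += line.strip() + "\n"   # continuation line
        elif ":" in line:
            current, _, value = line.partition(":")
            fields[current] = value.strip() + "\n" if value.strip() else ""
    return fields

changes = parse_fields("Source: foo\nDescription:\n foo - a test package\n")
```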
+
+2002-06-25 James Troup <james@nocrew.org>
+
+ * templates/amber.advisory (To): add missing email address since
+ __WHOAMI__ is only a name.
+
+ * katie.conf-security (Melanie::LogFile): correct to go somewhere
+ katie has write access to.
+ (Location::/org/security.debian.org/ftp/dists/::Suites): add
+ Testing.
+
+ * natalie: add support for -a/--add which adds packages only
+ (ignoring changes and deletions).
+
+ * katie.py (Katie.announce): Dinstall::CloseBugs is a boolean so
+ use FindB, not get.
+
+2002-06-22 James Troup <james@nocrew.org>
+
+ * jennifer (check_files): validate the package name and version
+ field. If 'Package', 'Version' or 'Architecture' are missing,
+ don't try any further checks.
+ (check_dsc): likewise.
+
+ * utils.py (re_taint_free): add '~' as a valid character.
+
+2002-06-20 Anthony Towns <ajt@debian.org>
+
+ * katie.conf-non-US: add OverrideSuite for w-p-u to allow uploads
+
+2002-06-09 James Troup <james@nocrew.org>
+
+ * jennifer (check_files): reduce useless code.
+
+ * cron.daily-security: run symlinks -dr on $ftpdir.
+
+ * vars-security (ftpdir): add.
+
+2002-06-08 James Troup <james@nocrew.org>
+
+ * neve (update_override_type): transaction is handled higher up in
+ main().
+ (update_priority): likewise.
+ (process_sources): remove code that makes testing a duplicate of
+ stable.
+ (process_packages): likewise.
+
+ * templates/amber.advisory: add missing mail headers.
+
+ * cron.daily-security: also call apt-ftparchive clean for
+ apt.conf.buildd-security.
+ * cron.weekly: likewise.
+
+ * amber (do_upload): write out a list of source packages (and
+ their version) uploaded for testing.
+ (make_advisory): add more Subst mappings for the mail headers.
+ (spawn): check for suspicious characters in the command and abort
+ if they're found.
+
+2002-06-07 James Troup <james@nocrew.org>
+
+ * ziyi (main): remove the 'nonus'/'security' hacks and use
+ Dinstall::SuiteSuffix (if it exists) instead. Always try to write
+ the lower level Release files, even if they don't exist. fubar
+ out if we can't open a lower level Release file for writing.
+
+ * katie.conf-non-US (Dinstall): add SuiteSuffix, used to simplify
+ ziyi.
+ * katie.conf-security (Dinstall): likewise.
+
+ * amber (do_upload): renamed from get_file_list(). Determine the
+ upload host from the original component.
+ (init): Add a -n/--no-action option. Fix how we get changes_files
+ (i.e. from the return value of apt_pkg.ParseCommandLine(), not
+ sys.argv). Add an Options global.
+ (make_advisory): support -n/--no-action.
+ (spawn): likewise.
+ (main): likewise.
+ (usage): document -n/--no-action.
+
+ * cron.buildd-security: fix path to Packages-arch-specific in
+ quinn-diff invocation.
+
+ * katie.conf-security (Dinstall::AcceptedAutoBuildSuites): change
+ to proper suite names (i.e. stable, testing) rather than codenames
+ (potato, woody).
+ (Dinstall::DefaultSuite): likewise.
+ (Suite): likewise.
+ (Location::/org/security.debian.org/ftp/dists/::Suites): likewise.
+ * vars-security (suites): likewise.
+ * apt.conf-security: likewise.
+
+ * katie.conf-security (Component): add "updates/" prefix.
+ (Suite::*::Components): likewise.
+ (ComponentMappings): new; map all {ftp-master,non-US} components
+ -> updates/<foo>.
+
+ * katie.conf-security (Natalie): removed; the options have
+ defaults and ComponentPosition is used by alyson which doesn't
+ work on security.d.o.
+ (Amber): replace UploadHost and UploadDir with ComponentMappings
+ which is a mapping of components -> URI.
+ (Suite::*::CodeName): strip bogus "/updates" suffix hack.
+ (SuiteMappings): use "silent-map" in preference to "map".
+
+ * cron.unchecked-security: fix call to cron.buildd-security.
+
+ * cron.daily-security: map local suite (stable) -> override suite
+ (potato) when fixing overrides. Correct component in natalie call
+ to take into account "updates/" prefix. Fix cut'n'waste in
+ override.$dist.all3 generation, the old files weren't being
+ removed, so they were endlessly growing.
+
+ * neve (main): don't use .Find for the CodeName since we require
+ it. Location::*::Suites is a ValueList.
+ (check_signature): ignore duplicate SIGEXPIRED tokens. Don't bomb
+ out on expired keys, just warn.
+ (update_override_type): new function; lifted from alyson.
+ (update_priority): likewise.
+ (main): use update_{override_type,priority}().
+
+ * jennifer (check_distributions): remove redundant test for
+ SuiteMappings; ValueList("does-not-exist") returns [] which is
+ fine. Add support for a "silent-map" type which doesn't warn
+ about the mapping to the user.
+ (check_files): add support for ComponentMappings, similar to
+ SuiteMappings, but there's no type, just a source and a
+ destination and the original component is stored in "original
+ component".
+ * katie.py (Katie.dump_vars): add "original component" as an
+ optional files[file] dump variable.
+
+ * claire.py (find_dislocated_stable): dehardcode 'potato' in SQL
+ query. Add support for section-less legacy locations like current
+ security.debian.org through YetAnotherConfigBoolean
+ 'LegacyStableHasNoSections'.
+ * katie.conf-security (Dinstall): LegacyStableHasNoSections is true.
+
+ * utils.py (real_arch): moved here from ziyi.
+ * ziyi (real_arch): moved to utils.py.
+ * ziyi (main): likewise.
+
+ * claire.py (find_dislocated_stable): use real_arch() with
+ filter() to strip out source and all.
+ * neve (main): likewise.
+ * rene (main): likewise.
+ * jeri (parse_packages): likewise.
+
+2002-06-06 James Troup <james@nocrew.org>
+
+ * tea (check_missing_tar_gz_in_dsc): modified patch from Martin
+ Michlmayr <tbm@cyrius.com> to be more verbose about what we're
+ doing.
+
+2002-05-23 Martin Michlmayr <tbm@cyrius.com>
+
+ * jeri (check_joey): check if the line contains two elements
+ before accessing the second. Also, strip trailing spaces as well
+ as the newline.
+ * halle (check_joey): likewise.
+
+2002-06-05 James Troup <james@nocrew.org>
+
+ * cron.unchecked-security: new file; like cron.unchecked but if
+ there's nothing to do exit so we don't call cron.buildd-security.
+
+ * apt.conf.buildd-security: new file.
+
+ * vars (archs): alphabetize.
+ * vars-non-US (archs): likewise.
+
+ * vars-security: add unchecked.
+
+ * madison (main): reduce the rather bizarrely verbose and
+ purposeless code that printed arches to a simple string.join.
+
+ * katie.conf (Suites::Unstable): add UdebComponents, a new
+ valuelist of suites, used by jenna to flesh out the list of
+ <suite>_main-debian-installer-binary-<arch>.list files generated.
+
+ * katie.conf (Dinstall): add StableDislocationSupport, a new
+ boolean used by jenna to enable or disable stable dislocation
+ support (i.e. claire), as true.
+ * katie.conf-non-US: likewise.
+ * katie.conf-security: likewise.
+
+ * cron.daily-security: generate .all3 overrides for the buildd
+ support. Freshen a local copy of Packages-arch-specific from
+ buildd.debian.org.
+
+ * claire.py (find_dislocated_stable): disable the support for
+ files in legacy-mixed locations since none of the Debian archives
+ have any anymore.
+
+ * helena: new script; produces a report on NEW and BYHAND
+ packages.
+
+ * jenna: rewritten from scratch to fix speed problems. Run time
+ on auric goes down from 31.5 minutes to 3.5 minutes. Of that 3.5
+ minutes, 105 seconds are the monster query and 70 odd seconds is
+ claire.
+
+ * apt.conf.buildd (Default): remove MaxContentsChange as it's
+ irrelevant.
+
+2002-06-05 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd-security: new file.
+
+2002-06-05 Matt Kraai <kraai@alumni.cmu.edu>
+
+ * denise (list): take a file argument and use it.
+ (main): don't abuse sys.stdout, just write to the file.
+
+ * claire.py (usage): Fix misspelling.
+ (clean_symlink): Simplify.
+ (find_dislocated_stable): Avoid unnecessary work.
+
+2002-05-29 James Troup <james@nocrew.org>
+
+ * cameron: removed; apt-ftparchive can simply walk the directory.
+
+2002-05-26 Anthony Towns <ajt@debian.org>
+
+ * katie.conf{,-non-US}: Map testing to testing-proposed-updates
+ for the autobuilders.
+
+2002-05-24 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd: update override files before running apt-ftparchive
+
+2002-05-23 Martin Michlmayr <tbm@cyrius.com>
+
+ * amber (main): remove extra space in prompt.
+
+ * utils.py (validate_changes_file_arg): use original filename in
+ error messages.
+
+ * jeri (check_joey): close file after use.
+ (parse_packages): likewise.
+ (main): setup debug option properly.
+
+ * melanie (main): remove unused packages variable and simplify the
+ code to build up con_packages by using repr().
+
+2002-05-23 James Troup <james@nocrew.org>
+
+ * lisa (recheck): when we reject, also return 0 so the package is
+ skipped.
+ (sg_compare): fix note sorting.
+ (recheck): remove the .katie file after rejection.
+
+ * katie.py (Katie.accept): accepted auto-build support take 3;
+ this time adding support for security. Security needs a) non-pool
+ files copied rather than symlinked, since accepted is readable only
+ by katie/security and www-data needs to be able to see the files,
+ and b) per-suite directories. SpecialAcceptedAutoBuild becomes
+ AcceptedAutoBuildSuites and is a ValueList containing the suites.
+ SecurityAcceptedAutoBuild is a new boolean which controls whether
+ or not normal or security style is used. The unstable_accepted
+ table was renamed to accepted_autobuild and a suite column added.
+ Also fix a bug noticed by Ryan where an existing orig.tar.gz
+ didn't have its last_used/in_accepted flags correctly updated.
+ * katie (install): likewise.
+ * rhona (clean_accepted_autobuild): likewise.
+
+2002-05-22 James Troup <james@nocrew.org>
+
+ * lisa (sort_changes): new function; sorts changes properly.
+ Finally.
+ (sg_compare): new function; helper for sort_changes(). Sorts by
+ have note and time of oldest upload.
+ (indiv_sg_compare): new function; helper for sort_changes().
+ Sorts by source version, have source and filename.
+ (main): use sort_changes().
+ (changes_compare): obsoleted; removed.
+
+2002-05-20 James Troup <james@nocrew.org>
+
+ * rhona (clean_accepted_autobuild): don't die if a file we're
+ trying to remove doesn't exist. Makes rhona more friendly to
+ katie/katie.py crashes/bugs without any undue cost.
+
+2002-05-19 James Troup <james@nocrew.org>
+
+ * lisa (main): if sorting a large number of changes give some
+ feedback.
+ (recheck): new function, run the same checks (modulo NEW,
+ obviously) as katie does, if they fail do the standard
+ reject/skip/quit dance.
+ (do_pkg): use it.
+
+ * katie (install): don't try to unlink the symlink in the
+ AcceptedAutoBuild support if the destination is not a symlink (or
+ doesn't exist). Avoids unnecessary bombs on previous partial
+ accepts and will still bomb hard if the file exists and isn't a
+ symlink.
+
+ * utils.py: blah, commands _is_ used when the mail stuff isn't
+ commented out like it is in my test environment.
+
+ * lisa (changes_compare): "has note" overrides everything else.
+ Use .katie files rather than running parse_changes, faster and
+ allows "has note" to work. Correct version compare, it was
+ reversed. Ctime check should only kick in if the source packages
+ are not the same.
+ (print_new): print out and return any note. Rename 'ret_code' to
+ 'broken'.
+ (edit_new): renamed from spawn_editor. Don't leak file
+ descriptors. Clean up error message if editor fails.
+ (edit_note): new function, allows one to edit notes.
+ (do_new): add note support, editing and removing.
+ (init): kill -s/--sort; with notes we always want to use our
+ sorting method.
+ (usage): likewise.
+
+ * katie.py (Katie.dump_vars): add "lisa note" as an optional
+ changes field.
+
+ * utils.py (build_file_list): rename 'dsc' to 'is_a_dsc' and have
+ it default to 0. Adapt tests to assume it's boolean.
+ * fernanda.py (check_changes): adjust call appropriately.
+ * halle (check_changes): likewise.
+ * jennifer (check_changes): likewise.
+ * jeri (check_changes): likewise.
+ * shania (flush_orphans): likewise.
+
+ * jennifer (check_dsc): pass is_a_dsc by name when calling
+ build_file_list() for clarity.
+ * shania (flush_orphans): likewise.
+ * tea (check_missing_tar_gz_in_dsc): likewise.
+
+ * jennifer (check_dsc): pass dsc_whitespace_rules by name when
+ calling parse_changes() for clarity.
+ * tea (check_dscs): likewise.
+
+ * utils.py (parse_changes): make dsc_whitespace_rules default to
+ false.
+ * halle (check_changes): adjust call appropriately.
+ * jennifer (check_changes): likewise.
+ * jeri (check_changes): likewise.
+ * lisa (changes_compare): likewise.
+ * utils.py (changes_compare): likewise.
+ * melanie (main): likewise.
+ * shania (flush_orphans): likewise.
+ * fernanda.py (check_changes): likewise.
+
+2002-05-18 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.dump_vars): make the .katie file unreadable;
+ it's not useful and is by and large a duplication of information
+ available in readable format in other files.
+
+2002-05-16 Ryan Murray <rmurray@debian.org>
+
+ * melanie: Dir::TemplatesDir -> Dir::Templates
+
+2002-05-15 Ryan Murray <rmurray@debian.org>
+
+ * cameron: correct the use of os.path.join
+
+2002-05-15 Anthony Towns <ajt@debian.org>
+
+ * ziyi: Update to match the new format for Architectures/Components
+ in katie.conf.
+
+2002-05-14 James Troup <james@nocrew.org>
+
+ * amber: new script; 'installer' wrapper script for the security
+ team.
+
+ * katie.py (Katie.announce): remove unused 'dsc' local
+ variable. (pychecker)
+
+ * ziyi: pre-define AptCnf and out globals to None. (pychecker)
+
+ * neve: don't import sys, we don't use it. (pychecker)
+ (check_signature): fix return type mismatch. (pychecker)
+
+ * utils.py: don't import commands, we don't use it. (pychecker)
+
+ * katie (install): SpecialAcceptedAutoBuild is a boolean.
+
+ * katie.py (Katie.dump_vars): don't store "oldfiles", it's
+ obsoleted by the change to "othercomponents" handling in jennifer
+ detailed below.
+ (Katie.cross_suite_version_check): new function; implements
+ cross-suite version checking rules specified in the conf file
+ while also enforcing the standard "must be newer than target
+ suite" rule.
+ (Katie.check_binary_against_db): renamed, since it's invoked once
+ per-binary, "binaries" was inaccurate. Use
+ cross_suite_version_check() and don't bother with the "oldfiles"
+ rubbish as jennifer works out "othercomponents" herself now.
+ (Katie.check_source_against_db): use cross_suite_version_check().
+
+ * katie (check): the version and file overwrite checks
+ (check_{binary,source,dsc}_against_db) are not per-suite.
+
+ * jennifer (check_files): less duplication of
+ 'control.Find("Architecture", "")' by putting it in a local
+ variable.
+ (check_files): call check_binary_against_db higher up since it's
+ not a per-suite check.
+ (check_files): get "othercomponents" directly rather than having
+ check_binary_against_db do it for us.
+
+ * heidi (main): 'if x:', not 'if x != []:'.
+ * katie.py (Katie.in_override_p): likewise.
+ (Katie.check_dsc_against_db): likewise.
+ * natalie (main): likewise.
+ * rene (main): likewise.
+ * ziyi (real_arch): likewise.
+
+ * alyson (main): Suite::%s::Architectures, Suite::%s::Components
+ and OverrideType are now value lists, not lists.
+ * andrea (main): likewise.
+ * cindy (main): likewise.
+ * claire.py (find_dislocated_stable): likewise.
+ * denise (main): likewise.
+ * jenna (main): likewise.
+ * jennifer (check_distributions): likewise.
+ (check_files): likewise.
+ (check_urgency): likewise (Urgency::Valid).
+ * jeri (parse_packages): likewise.
+ * neve (main): likewise (and Location::%s::Suites).
+ * rene (main): likewise.
+
+2002-05-13 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.check_source_against_db): correct case of reject
+ message to be consistent with binary checks.
+
+ * jennifer (get_status_output): don't leak 2 file descriptors per
+ invocation.
+ (check_signature): add missing '\n' to "duplicate status token"
+ error message.
+
+2002-05-09 James Troup <james@nocrew.org>
+
+ * utils.py (validate_changes_file_arg): new function; validates an
+ argument which should be a .changes file.
+ * ashley (main): use it.
+ * lisa (main): likewise.
+
+ * katie.py (Katie.check_dsc_against_db): since there can be more
+ than one .orig.tar.gz make sure we don't assume the .orig.tar.gz
+ entry still exists in files.
+
+ * jennifer (check_dsc): handle the .orig.tar.gz disappearing from
+ files, since check_dsc_against_db() deletes the .orig.tar.gz
+ entry.
+
+ * cameron: cleanups.
+
+ * utils.py (changes_compare): change sort order so that source
+ name and source version trump 'have source'; this should fix
+ UNACCEPT problems in katie where -1 hppa+source & i386, -2
+ i386&source & hppa lead to -1 i386 unaccept. Problem worked out
+ by Ryan.
+
+ * lisa (main): allow the arguments to be .katie files too.
+
+2002-05-07 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd: add s390 to arch list again
+
+2002-05-05 Ryan Murray <rmurray@debian.org>
+
+ * cron.buildd: new script, update w-b database from unstable_accepted
+ table
+ * cameron: new script, take list in unstable_accepted and write out
+ a file list for apt-ftparchive
+ * apt.conf.buildd: new apt configuration for Packages/Sources for
+ unstable_accepted
+ * vars: add s390 to arch list.
+
+2002-05-03 James Troup <james@nocrew.org>
+
+ * neve (main): don't hard code the calling user as that doesn't
+ work with modern postgres installs. Fix psql invocation for
+ init_pool.sql (database name required). Don't hard code the
+ database name.
+ (process_sources): add support for fingerprint and install_date.
+ (process_packages): add support for fingerprint.
+ (do_sources): pass in the directory, fingerprint support needs it.
+ (get_status_output): borrowed from jennifer.
+ (reject): likewise.
+ (check_signature): likewise.
+
+ * katie (install): only try to log urgencies if Urgency_Logger is
+ defined.
+ (main): only initialize Urgency_Logger if Dir::UrgencyLog is
+ defined; only close Urgency_Logger if it's defined.
+
+ * catherine (poolize): adapt for Dir rationalization.
+ * claire.py (find_dislocated_stable): likewise.
+ * denise (main): likewise.
+ * halle (check_joey): likewise.
+ * jenna: likewise.
+ * jennifer: likewise.
+ * jeri: likewise.
+ * katie.py: likewise.
+ * katie: likewise.
+ * lisa (do_bxa_notification): likewise.
+ * logging.py (Logger.__init__): likewise.
+ * rene (main): likewise.
+ * rhona (clean): likewise.
+ * shania (init): likewise.
+ * tea: likewise.
+ * ziyi: likewise.
+
+ * lisa (add_overrides): Dinstall::BXANotify is a boolean, use
+ FindB, not FindI.
+
+ * rhona (clean_accepted_autobuild): SpecialAcceptedAutoBuild is a
+ boolean, use FindB, not get.
+
+ * katie.py (Katie.check_dsc_against_db): ignore duplicate
+ .orig.tar.gz's which are an exact (size/md5sum) match.
+
+ * ashley (main): really allow *.katie files as arguments too;
+ noticed by aj.
+
+ * sql-aptvc.cpp: postgres.h moved to a "server" subdirectory.
+
+2002-05-03 Anthony Towns <ajt@debian.org>
+
+ * ziyi: support for security.
+
+2002-05-02 James Troup <james@nocrew.org>
+
+ * jennifer (accept): call Katie.check_override() unconditionally,
+ as the no-mail check moved into that function.
+ (do_byhand): likewise.
+
+ * katie.py (Katie.check_override): don't do anything if we're a)
+ not sending mail or b) the override disparity checks have been
+ disabled via Dinstall::OverrideDisparityCheck.
+
+ * jennifer (check_files): don't hard code Unstable as the suite
+ used to check for architecture validity; use
+ Dinstall::DefaultSuite instead, if it exists.
+ (accept): conditionalize
+
+ * katie.py (Katie.update_subst): support global maintainer
+ override with Dinstall::OverrideMaintainer.
+
+ * jennifer (check_distributions): new function, Distribution
+ validation and mapping. Uses new SuiteMappings variable from
+ config file to abstract suite mappings.
+ (check_changes): call it.
+
+ * natalie: renamed; nothing imports or likely will for some time.
+
+ * denise (main): remove unused natalie import and init().
+
+ * natalie.py (init): removed.
+ (main): initialize here instead and don't hardcode the database
+ name.
+
+2002-04-30 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.close_bugs): new function, split out from
+ announce().
+ (Katie.announce): only call close_bugs() if Dinstall::CloseBugs is
+ true.
+ (Katie.close_bugs): return immediately if there are no bugs to
+ close.
+
+ * jennifer (acknowledge_new): adapt for new utils.TemplateSubst().
+ * katie (do_reject): likewise.
+ (stable_install): likewise.
+ * katie.py (Katie.announce): likewise.
+ (Katie.accept): likewise.
+ (Katie.check_override): likewise.
+ (Katie.do_reject): likewise.
+ * lisa (do_bxa_notification): likewise.
+ * melanie (main): likewise.
+
+ * utils.py (TemplateSubst): change second argument to be a
+ filename rather than a template since every caller opened a file
+ on the fly, which was ugly and leaked file descriptors.
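A hedged sketch of the reworked helper (the function name, the substitution-map argument, and the __KEY__ convention are assumptions about the real utils.TemplateSubst): taking a filename means the template file is opened and closed in one place, so callers can no longer leak descriptors.

```python
import tempfile

def template_subst(subst_map, template_filename):
    """Read the template file and substitute each key occurrence."""
    with open(template_filename) as f:  # opened and closed here, not by callers
        template = f.read()
    for key, value in subst_map.items():
        template = template.replace(key, str(value))
    return template

# throwaway demo template (hypothetical variable names)
with tempfile.NamedTemporaryFile("w", suffix=".tpl", delete=False) as tf:
    tf.write("To: __MAINTAINER__\nSubject: Accepted __SOURCE__\n")
    tpl_path = tf.name
mail = template_subst({"__MAINTAINER__": "someone@example.org",
                       "__SOURCE__": "hello"}, tpl_path)
```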
+
+2002-04-29 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.announce): (modified) patch from Raphael Hertzog
+ <hertzog@debian.org> to send 'accepted' announce mails to the
+ PTS. [#128140]
+
+2002-04-24 James Troup <james@nocrew.org>
+
+ * init_pool.sql (unstable_accepted): add two new fields to
+ unstable_accepted; in_accepted is a boolean indicating whether or
+ not the file is in accepted and last_used is a timestamp used by
+ rhona to determine when to remove symlinks for installed packages.
+
+ * katie.py (Katie.accept): auto-build support take 2. Create
+ symlinks for all files into a separate directory. Add files to
+ unstable_accepted as paths to the new directory; mark them as
+ being in accepted for cameron. Properly conditionalize it on a
+ configuration variable.
+
+ * katie (install): likewise. Update symlinks to point into the
+ pool; mark the files for later deletion by rhona and mark them as
+ not being in accepted for cameron.
+
+ * rhona (clean_accepted_autobuild): new function.
+
+2002-04-22 James Troup <james@nocrew.org>
+
+ * jennifer (check_files): handle db_access.get_location_id()
+ returning -1 properly/better.
+
+ * rhona (clean_fingerprints): new function.
+
+2002-04-21 James Troup <james@nocrew.org>
+
+ * utils.py (touch_file): unused; remove.
+ (plural): likewise.
+
+ * jennifer (check_files): close file descriptor used to get the
+ control file.
+ (check_md5sums): likewise.
+ (callback): likewise.
+
+ * katie.py (Katie.do_reject): handle manual rejects much better;
+ call the editor first and get confirmation from the user before
+ proceeding.
+
+ * jennifer (check_signature): prefix_multi_line_string() moved to
+ utils.
+
+ * utils.py (prefix_multi_line_string): moved here from jennifer.
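The moved helper is small enough to sketch; the exact semantics of the real utils.prefix_multi_line_string are assumed here:

```python
def prefix_multi_line_string(text, prefix):
    """Prepend prefix to every line of a multi-line string."""
    return "\n".join(prefix + line for line in text.split("\n"))
```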
+
+2002-04-20 James Troup <james@nocrew.org>
+
+ * lisa (main): handle non-existent files.
+
+ * ashley (main): allow *.katie files as arguments too.
+
+2002-04-19 James Troup <james@nocrew.org>
+
+ * katie.py (Katie.accept): add stuff to help auto-building from
+ accepted; if the .orig.tar.gz is not part of the upload (i.e. it's
+ in the pool), create a symlink to it in the accepted directory and
+ add the .dsc and .{u,}deb(s) to a new 'unstable_accepted' table.
+
+ * katie (install): undo the "auto-building from accepted" stuff
+ (i.e. remove the .orig.tar.gz symlink and remove the files from
+ unstable_accepted table).
+
+2002-04-16 James Troup <james@nocrew.org>
+
+ * jennifer (upload_too_new): fix typo which was causing all
+ timestamp comparisons to be against the .changes file. Also move
+ back to the original directory so we do the comparisons against
+ accurate timestamps.
+
+ * tea (check_missing_tar_gz_in_dsc): new function.
+
+ * jennifer (check_dsc): add a check to ensure there is a .tar.gz
+ file mentioned in the .dsc.
+
+ * lisa (main): use X-Katie in the mail headers, not X-Lisa; that
+ way mails reach debian-{devel-,}changes@l.d.o.
+
+2002-04-02 Ryan Murray <rmurray@debian.org>
+
+ * cron.daily: run shania after rhona
+ * cron.daily-non-US: likewise.
+
+2002-04-01 James Troup <james@nocrew.org>
+
+ * katie: re-add proposed-updates/stable install support.
+
+ * katie.py (Katie.dump_vars): add changes["changes"] as an
+ optional field; should be mandatory later.
+
+2002-03-31 James Troup <james@nocrew.org>
+
+ * katie (install): support a Suite::<foo>::CopyKatie similar to
+ CopyChanges. Done separately because .katie files don't need to
+ be mirrored and will probably be copied to another directory as a
+ result.
+
+ * halle (main): add missing debug to options.
+
+2002-03-29 James Troup <james@nocrew.org>
+
+ * madison (main): add a -r/--regex option.
+
+2002-03-26 James Troup <james@nocrew.org>
+
+ * lisa: don't trample on changes["distribution"]; make a copy of
+ it as changes["suite"] instead and use that.
+
+2002-03-16 Anthony Towns <ajt@debian.org>
+
+ * templates/lisa.bxa_notification: Fix some grammatical errors.
+ Encourage contact via bxa@ftp-master email address.
+
+2002-03-15 James Troup <james@nocrew.org>
+
+ * jennifer (check_timestamps): remove bogus raise in except.
+
+2002-03-15 Anthony Towns <ajt@debian.org>
+
+ * cron.monthly: rotate mail/archive/bxamail as well as
+ mail/archive/mail. This is for a complete archive of
+ correspondence with the BXA.
+
+2002-03-14 Anthony Towns <ajt@debian.org>
+
+ * crypto-in-main changes.
+
+ * utils.py (move, copy): add an optional perms= parameter to let you
+ set the resulting permissions of the moved/copied file
+ * katie.py (force_move): rejected/morgued files should be unreadable
+ * jennifer (do_byhand, acknowledge_new): pending new and byhand files
+ should be unreadable.
+
+2002-03-07 Ryan Murray <rmurray@debian.org>
+
+ * katie (install): check for existence of "files id" key as well as
+ it being set to a valid value.
+ * katie (install): check for existence and valid value for location
+ id as well.
+
+2002-03-05 Ryan Murray <rmurray@debian.org>
+
+ * katie.py (do_reject): reread the reason file after editing it.
+
+2002-02-25 James Troup <james@nocrew.org>
+
+ * jennifer (check_changes): don't enforce sanity in .changes file
+ names since it doesn't seem to be possible; pcmcia-cs and similar
+ freak show packages in particular cause real problems.
+
+ * katie.py (Katie.check_dsc_against_db): initialize 'found' for
+ each dsc_file since the .orig.tar.gz checking code now uses it as
+ a boolean. Fixes bizarro rejections which bogusly claimed
+ .diff.gz md5sum/size was incorrect.
+
+2002-02-24 James Troup <james@nocrew.org>
+
+ * katie (process_it): reset reject_message.
+
+2002-02-22 James Troup <james@nocrew.org>
+
+ * db_access.py(set_files_id): disable use of
+ currval('files_id_seq') because it was taking 3 seconds on auric
+ which is insane (most calls take < 0.01) and simply call
+ get_files_id() for the return value instead.
+
+ * katie.py (Katie.do_query): convenience function; unused by
+ default, useful for profiling.
+ * db_access.py (do_query): likewise.
+
+ * katie (install): fix insert SQL call when binary has no source.
+
+ * lisa (determine_new): auto-correct non-US/main to non-US.
+ (determine_new): add a warning when adding things to stable.
+ (edit_index): use our_raw_input().
+ (edit_overrides): likewise.
+ (do_new): likewise. Use re.search() not re.match() since the
+ default answer may not be the first one.
+ (do_byhand): likewise.
+ (do_new): Default to 'S'kip and never 'A'dd.
+
+ * jennifer (action): pass prompt to our_raw_input().
+ * melanie (game_over): likewise.
+ * katie (action): likewise.
+
+ * utils.py (our_raw_input): add an optional prompt argument to
+ make the function more usable as a drop in replacement for
+ raw_input().
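A sketch of the drop-in replacement described above; the stream parameter is our addition for testability, while the real function presumably reads sys.stdin directly:

```python
import sys, io

def our_raw_input(prompt="", stream=None):
    """raw_input() work-alike with an optional prompt argument."""
    if prompt:
        sys.stdout.write(prompt)
        sys.stdout.flush()
    line = (stream or sys.stdin).readline()
    if not line:  # EOF
        raise EOFError
    return line.rstrip("\n")
```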
+
+ * jennifer (check_files): correct reject() to not double prefix
+ when using katie.py based functions.
+ (check_dsc): likewise.
+
+ * katie.py (Katie.reject): prepend a new line if appropriate
+ rather than appending one to avoid double new lines when caller
+ adds one of his own.
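The prepend-not-append rule can be sketched with a toy accumulator (not the real Katie class):

```python
class RejectLog:
    """Toy accumulator illustrating newline handling in reject()."""

    def __init__(self):
        self.reject_message = ""

    def reject(self, text):
        # put the separating newline before the new entry rather than
        # after it, so a caller that appends its own trailing "\n"
        # never produces a blank line
        if self.reject_message:
            self.reject_message += "\n"
        self.reject_message += text
```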
+
+ * lisa (determine_new): warn if the package is also in other
+ components.
+
+2002-02-20 James Troup <james@nocrew.org>
+
+ * jennifer (check_files): if .changes file lists "source" in
+ Architecture field, there must be a .dsc.
+
+2002-02-15 James Troup <james@nocrew.org>
+
+ * ashley (main): add some missing fields.
+
+ * katie.py (Katie.check_dsc_against_db): fix to take into account
+ the fact that the .orig.tar.gz might be in byhand, accepted or
+ new. Also fix calling of reject().
+ (Katie.check_binaries_against_db): fix calling of reject().
+ (Katie.check_source_against_db): likewise.
+ (Katie.dump_vars): add missing variables used for bug closures.
+
+ * lisa (changes_compare_by_time): sort by reverse time.
+
+ * katie.py (Katie.accept): log.
+ (Katie.dump_vars): missing has_key test for optional changes fields.
+
+ * jennifer (main): print "Accepted blah blah" to stdout, not stderr.
+ (process_it): traceback goes to stderr, not stdout.
+ (acknowledge_new): log.
+ (do_byhand): likewise.
+
+ * katie.py (Katie.update_subst): fix typo (Cnf vs. self.Cnf).
+
+ * add_constraints.sql: add grants for the new fingerprint table.
+
+2002-02-13 James Troup <james@nocrew.org>
+
+ * katie (do_reject): basename the .changes filename before trying
+ to use it to construct the .reason filename.
+ (process_it): call Katie.update_subst() so do_reject() DTRT with
+ the mail template.
+ (do_reject): setup the mail template correctly.
+
+2002-02-12 James Troup <james@nocrew.org>
+
+ * tea (process_dir): renamed 'arg' to 'unused' for clarity.
+ (check_files): don't abuse global dictionaries.
+ (Ent): use all variables.
+ (check_timestamps): don't abuse global dictionaries.
+
+ * fernanda.py: renamed to .py so lisa can import it.
+ (check_dsc): remove unused local variables (pychecker).
+ (display_changes): split off from check_changes.
+
+ * katie: rewritten; most of the functionality moves to jennifer;
+ what's left is the code to install packages once a day from the
+ 'accepted' directory.
+
+ * jennifer: new program, processes packages in 'unchecked'
+ (i.e. most of the non-install functionality of old katie).
+
+ * katie.py: common functions shared between the clique of
+ jennifer, lisa and katie.
+
+ * lisa: new program; handles NEW and BYHAND packages.
+
+ * jeri (usage): new function.
+ (main): use it.
+ (check_package): remove unused local variable (pychecker).
+
+ * init_pool.sql: new table fingerprint. Add fingerprint columns to
+ binaries and source. Add install_date to source.
+
+ * halle (usage): new function.
+ (main): use it. Remove unused options.
+ (check_changes): remove unused local variable (pychecker).
+
+ * add_constraints.sql: add fingerprint references.
+
+ * db_access.py (get_or_set_fingerprint_id): new function.
+
+ * ashley (main): new program; dumps the contents of a .katie file
+ to stdout.
+
+ * alyson (main): remove option handling since we don't actually
+ support any.
+ * cindy (main): likewise.
+
+ * remove unnecessary imports and pre-define globals (pychecker).
+
+2002-02-11 Anthony Towns <ajt@debian.org>
+
+ * added installation-report and upgrade-report pseudo-packages
+
+2002-01-28 Martin Michlmayr <tbm@cyrius.com>
+
+ * katie (update_subst): use Dinstall::TrackingServer.
+ * melanie (main): likewise.
+
+2002-01-27 James Troup <james@nocrew.org>
+
+ * shania (main): it's IntLevel not IntVal; thanks to tbm@ for
+ noticing, jgg@ for fix.
+
+2002-01-19 James Troup <james@nocrew.org>
+
+ * utils.py (extract_component_from_section): handle non-US
+ non-main properly.
+
+2002-01-12 James Troup <james@nocrew.org>
+
+ * madison: add support for -S/--source-and-binary which displays
+ information for the source package and all its binary children.
+
+2002-01-13 Anthony Towns <ajt@debian.org>
+
+ * katie.conf: Remove Catherine Limit and bump stable to 2.2r5
+ * katie.conf: Add Dinstall::SigningKeyIds option, set to the 2001
+ and 2002 key ids.
+ * katie.conf-non-US: Likewise.
+ * ziyi: Support Dinstall::SigningKeyIds to sign a Release file with
+ multiple keys automatically. This is probably only useful for
+ transitioning from an expired (or revoked?) key.
+
+2002-01-08 Ryan Murray <rmurray@debian.org>
+
+ * debian/python-dep: new file that prints out python:Depends for
+ substvars
+ * debian/control: use python:Depends, build-depend on python
+ lower Depends: on postgresql to Suggests:
+ * debian/rules: determine python version, install to the correct
+ versioned dir
+
+2001-12-18 Anthony Towns <ajt@debian.org>
+
+ * ziyi: unlink Release files before overwriting them (in case they've
+ been merged)
+ * ziyi: always include checksums/sizes for the uncompressed versions
+ of Sources and Packages, even if they're not present on disk
+
+2001-11-26 Ryan Murray <rmurray@debian.org>
+
+ * ziyi (main): add SigningPubKey config option
+ * katie.conf: use SigningPubKey config option
+ * katie.conf-non-US: likewise
+
+2001-11-24 James Troup <james@nocrew.org>
+
+ * katie (acknowledge_new): log newness.
+
+2001-11-24 Anthony Towns <ajt@debian.org>
+
+ * ziyi (real_arch): bail out if some moron forgot to reset
+ untouchable on stable.
+ (real_arch): source Release files.
+
+2001-11-19 James Troup <james@nocrew.org>
+
+ * claire.py (main): don't use apt_pkg.ReadConfigFileISC and
+ utils.get_conf().
+ * shania (main): likewise.
+
+ * rhona (main): add default options.
+
+ * db_access.py (get_archive_id): case independent.
+
+ * katie (action): sort files so that ordering is consistent
+ between mails; noticed/requested by Joey.
+
+2001-11-17 Ryan Murray <rmurray@debian.org>
+
+ * utils.py: add get_conf function; change startup code to read all
+ config files into the Cnf that get_conf returns. Use the component
+ list from the katie conf rather than the hardcoded list.
+ * all scripts: use new get_conf function
+ * shania: fix try/except around changes files
+ * jenna: only do debian-installer if it is a section in Cnf
+
+2001-11-16 Ryan Murray <rmurray@debian.org>
+
+ * shania (main): Initialize days to a string of a number.
+ (main): Initialize Cnf vars before reading in Cnf
+
+2001-11-14 Ryan Murray <rmurray@debian.org>
+
+ * shania (main): Initialize days to a number.
+
+2001-11-04 James Troup <james@nocrew.org>
+
+ * docs/Makefile: use docbook-utils' docbook2man binary.
+
+ * Change all "if foo == []" constructs into "if not foo".
+
+ * katie (check_changes): when installing into stable from
+ proposed-updates, remove all non-stable target distributions.
+ (check_override): don't check for override disparities on stable
+ installs.
+ (stable_install): update install_bytes appropriately.
+ (reject): stable rejection support; i.e. don't remove files when
+ rejecting files in the pool, rather remove them from the
+ proposed-update suite instead, rhona will do the rest.
+ (manual_reject): support for a stable specific template.
+ (main): set up stable rejector in subst.
+
+2001-11-04 Martin Michlmayr <tbm@cyrius.com>
+
+ * debian/control (Build-Depends): docbook2man has been superseded
+ by docbook-utils.
+
+ * neve (main): exit with a more useful error message.
+ (update_suites): Suite::<suite>::Version, Origin and Description
+ are not required, so don't fail if they don't exist.
+
+ * db_access.py (get_archive_id): return -1 on error rather than
+ raise an exception.
+ (get_location_id): likewise.
+
+ * madison (main): don't exit on the first not-found package,
+ rather exit with an appropriate return code after processing all
+ packages.
+
+2001-11-03 James Troup <james@nocrew.org>
+
+ * claire.py (find_dislocated_stable): add per-architecture
+ symlinks for dislocated architecture: all debs.
+
+2001-10-19 Anthony Towns <ajt@debian.org>
+
+ * apt.conf*, katie.conf*: add mips, mipsel, s390 to testing.
+
+2001-10-10 Anthony Towns <ajt@debian.org>
+
+ * claire.py (fix_component_section): do _not_ assign to None under
+ any circumstances.
+
+2001-10-07 Martin Michlmayr <tbm@cyrius.com>
+
+ * melanie (main): don't duplicate architectures when removing from
+ more than one suite.
+
+ * heidi (main, process_file, get_list): report suite name not ID.
+
+ * naima (nmu_p): be case insensitive.
+
+ * naima (action): more list handling clean ups.
+
+ * melanie (main): clean up lists handling to use string.join and
+ IN's.
+
+ * madison (main): clean up suite and architecture argument parsing
+ to use slices less and string.join more.
+
+ * utils.py (parse_changes): Use string.find() instead of slices for
+ string comparisons, thereby avoiding hardcoding the length of strings.
+ * ziyi (main): likewise.
+
+2001-10-07 James Troup <james@nocrew.org>
+
+ * Remove mode argument from utils.open_files() calls if it's the
+ default, i.e. 'r'.
+
+2001-09-27 James Troup <james@nocrew.org>
+
+ * katie (init): new function; options clean up.
+ (usage): add missing options, remove obsolete ones.
+ (main): adapt for the two changes above. If the lock file or
+ new-ack log file don't exist, create them. Don't try to open the
+ new-ack log file except when running in new-ack mode.
+
+ * alyson (main): initialize all the tables that are based on the
+ conf file.
+
+ * utils.py (touch_file): like touch(1).
+ (where_am_i): typo.
+
+ * catherine (usage): new.
+ (main): use it. options cleanup.
+ * claire.py: likewise.
+ * fernanda: likewise.
+ * heidi: likewise.
+ * jenna: likewise.
+ * shania: likewise.
+ * ziyi: likewise.
+
+ * andrea: options cleanup.
+ * charisma: likewise.
+ * julia: likewise.
+ * madison: likewise.
+ * melanie: likewise.
+ * natalie: likewise.
+ * rhona: likewise.
+ * tea: likewise.
+
+2001-09-26 James Troup <james@nocrew.org>
+
+ * utils.py: default to sane config file locations
+ (/etc/katie/{apt,katie}.conf). They can be the actual config files
+ or they can point to the real ones through use of a new Config
+ section. Based on an old patch by Adam Heath.
+ (where_am_i): use the new default config stuff.
+ (which_conf_file): likewise.
+ (which_apt_conf_file): likewise.
+
+ * charisma (main): output defaults to
+ `Package~Version\tMaintainer'; input can be of either form. When
+ parsing the new format do version comparisons, when parsing the
+ old format assume anything in the extra file wins. This fixes the
+ problem of newer non-US packages being overwhelmed by older
+ versions still in stable on main.
+
+2001-09-17 James Troup <james@nocrew.org>
+
+ * natalie.py (list): use result_join().
+
+ * denise (main): result_join() moved to utils.
+
+ * utils.py (result_join): move to utils; add an optional separator
+ argument.
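A sketch of the helper with its optional separator (the default separator and None-handling details are assumptions):

```python
def result_join(original, sep="\t"):
    """Like string.join() but maps None entries to the empty string."""
    return sep.join("" if entry is None else str(entry) for entry in original)
```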
+
+2001-09-14 James Troup <james@nocrew.org>
+
+ * heidi (set_suite): new function; does --set like natalie does,
+ i.e. turns it into a sequence of --add's and --remove's
+ internally. This is a major win (~20 minute run time -> ~20
+ seconds) in the common, everyday (i.e. testing) case.
+ (get_id): common code used by set_suite() and process_file().
+ (process_file): call set_suite() and get_id().
+ (main): add logging support.
+
+ * julia: new script; syncs PostgreSQL with (LDAP-generated) passwd
+ files.
+
+ * utils.py (parse_changes): use slices or simple string comparison
+ in favour of regexes where possible.
+
+ * sql-aptvc.cpp (versioncmp): rewritten to take into account the
+ fact that the strings VARDATA() points to are not null terminated.
+
+ * denise (result_join): new function; like string.join() but
+ handles None's.
+ (list): use it.
+ (main): likewise.
+
+ * charisma (main): python-pygresql 7.1 returns None not "".
+
+2001-09-14 Ryan Murray <rmurray@debian.org>
+
+ * natalie.py (list): handle python-pygresql 7.1 returning None.
+
+2001-09-10 Martin Michlmayr <tbm@cyrius.com>
+
+ * madison (main): return 1 if no package is found.
+
+2001-09-08 Martin Michlmayr <tbm@cyrius.com>
+
+ * madison (main): better error handling for incorrect
+ -a/--architecture or -s/--suite arguments.
+ (usage): new.
+ (main): use it.
+
+2001-09-05 Ryan Murray <rmurray@debian.org>
+
+ * charisma, madison, katie: remove use of ROUser
+ * katie.conf,katie.conf-non-US: remove definition of ROUser
+
+2001-08-26 James Troup <james@nocrew.org>
+
+ * katie (nmu_p.is_an_nmu): use maintaineremail to check for group
+ maintained packages at cjwatson@'s request.
+
+2001-08-21 James Troup <james@nocrew.org>
+
+ * madison (main): add -a/--architecture support.
+
+ * jenna: use logging instead of being overly verbose on stdout.
+
+2001-08-11 Ryan Murray <rmurray@debian.org>
+
+ * melanie: add functional help option
+
+2001-08-07 Anthony Towns <ajt@debian.org>
+
+ * apt.conf, katie.conf: Add ia64 and hppa to testing.
+
+2001-07-28 James Troup <james@nocrew.org>
+
+ * katie (check_dsc): ensure source version is >> than existing
+ source in target suite.
+
+2001-07-25 James Troup <james@nocrew.org>
+
+ * natalie.py: add logging support.
+
+ * utils.py (open_file): make the second argument optional and
+ default to read-only.
+
+ * rene (main): work around broken source packages that duplicate
+ arch: all packages with arch: !all packages (no longer allowed
+ into the archive by katie).
+
+2001-07-13 James Troup <james@nocrew.org>
+
+ * katie (action): don't assume distribution is a dictionary.
+ (update_subst): don't assume architecture is a dictionary and that
+ maintainer822 is defined.
+ (check_changes): recognise nk_format exceptions.
+ (check_changes): reject on 'testing' only uploads.
+ (check_files): when checking to ensure all packages are newer
+ versions check against arch-all packages too.
+ (check_dsc): enforce the existence of a sane set of mandatory
+ fields. Ensure the version number in the .dsc (modulo epoch)
+ matches the version number in the .changes file.
+
+ * utils.py (changes_compare): ignore all exceptions when parsing
+ the changes files.
+ (build_file_list): don't UNDEF on a changes file with no format
+ field.
+
+2001-07-07 James Troup <james@nocrew.org>
+
+ * katie (nmu_p.is_an_nmu): check 'changedby822' for emptiness
+ rather than 'changedbyname' to avoid false negatives on uploads
+ with an email-address-only Changed-By field.
+ (check_dsc): don't overwrite reject_message; append to it.
+ (check_signature): likewise.
+ (check_changes): likewise.
+ (announce): condition logging on 'action'.
+
+ * logging.py: new logging module.
+
+ * katie: Cleaned up code by putting Cnf["Dinstall::Options"]
+ sub-tree into a separate (global) variable.
+ (check_dsc): ensure format is 1.0 to retain backwards
+ compatibility with dpkg-source in potato.
+ (main): only try to obtain the lock when not running in no-action
+ mode.
+ Use the new logging module.
+
+ * christina: initial version; only partially usable.
+
+2001-06-28 Anthony Towns <ajt@debian.org>
+
+ * apt.conf: Add ExtraOverrides to auric.
+
+2001-06-25 James Troup <james@nocrew.org>
+
+ * katie (nmu_p.is_an_nmu): the wonderful dpkg developers decided
+ they preferred the name 'Uploaders'.
+
+2001-06-23 James Troup <james@nocrew.org>
+
+ * katie (check_files): fix typo in uncommon rejection message,
+ s/sourceversion/source version/.
+
+ * denise (main): we can't use print because stdout has been
+ redirected.
+
+ * katie (source_exists): new function; moved out of check_files()
+ and added support for binary-only NMUs of earlier sourceful NMUs.
+
+ * rhona (clean): find_next_free has moved.
+
+ * utils.py (find_next_free): new function; moved here from rhona.
+ Change too_many to be an argument with a default value, rather
+ than a hardcoded variable.
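The rename-instead-of-overwrite behaviour can be sketched as follows; the default value of 20 for too_many is an assumption, as is the exact error handling:

```python
import os

def find_next_free(dest, too_many=20):
    """Return dest if unused, else the first free 'dest.<n>' variant."""
    orig_dest = dest
    extra = 0
    while os.path.exists(dest):
        dest = "%s.%d" % (orig_dest, extra)
        extra += 1
        if extra > too_many:
            raise RuntimeError("%s: tried too many alternative filenames"
                               % orig_dest)
    return dest
```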
+
+ * shania: rewritten to work better; REJECTion reminder mail
+ handling got lost though.
+
+2001-06-22 James Troup <james@nocrew.org>
+
+ * rhona (main): remove unused override code.
+
+ * fernanda (main): remove extraneous \n's from utils.warn calls.
+ * natalie.py (list): likewise.
+
+ * catherine, cindy, denise, heidi, jenna, katie, neve, rhona, tea:
+ use utils.{warn,fubar} where appropriate.
+
+2001-06-21 James Troup <james@nocrew.org>
+
+ * katie (nmu_p): new class that encapsulates the "is a nmu?"
+ functionality.
+ (nmu_p.is_an_nmu): add support for multiple maintainers specified
+ by the "Maintainers" field in the .dsc file and maintainer groups.
+ (nmu_p.__init__): read in the list of group maintainer names.
+ (announce): use nmu_p.
+
+2001-06-20 James Troup <james@nocrew.org>
+
+ * rene (main): hardcode the suite experimental is compared to by
+ name rather than number.
+
+ * katie (check_files): differentiate between doesn't-exist and
+ permission-denied in "can not read" rejections; requested by edd@.
+ (check_dsc): use os.path.exists rather than os.access to allow the
+ above check to kick in.
+
+ * heidi (process_file): read all input before doing anything and
+ use transactions.
+
+2001-06-15 James Troup <james@nocrew.org>
+
+ * fernanda: new script; replaces old 'check' shell script
+ nastiness.
+
+2001-06-14 James Troup <james@nocrew.org>
+
+ * katie: actually import traceback module to avoid amusing
+ infinite loop.
+
+2001-06-10 James Troup <james@nocrew.org>
+
+ * utils.py (extract_component_from_section): fix to handle just
+ 'non-free' and 'contrib'. Also fix to handle non-US in a
+ completely case insensitive manner as a component.
+
+2001-06-08 James Troup <james@nocrew.org>
+
+ * madison (arch_compare): sort function that sorts 'source' first
+ then alphabetically.
+ (main): use it.
+
+2001-06-05 Jeff Licquia <jlicquia@progeny.com>
+
+ * catherine (poolize): explicitly make poolized_size a long so it
+ doesn't overflow when poolizing e.g. entire archives.
+
+2001-06-01 James Troup <james@nocrew.org>
+
+ * utils.py (send_mail): throw exceptions rather than exiting.
+
+ * katie (process_it): catch exceptions and ignore them.
+
+2001-06-01 Michael Beattie <mjb@debian.org>
+
+ * added update-mailingliststxt and update-readmenonus to update
+ those files, respectively. modified cron.daily{,-non-US} to
+ use them.
+
+2001-05-31 Anthony Towns <ajt@debian.org>
+
+ * rhona: make StayOfExecution work.
+
+2001-05-31 James Troup <james@nocrew.org>
+
+ * rhona (find_next_free): fixes to not overwrite files but rename
+ them by appending .<n> instead.
+ (clean): use find_next_free and use dated sub-directories in the
+ morgue.
+
+ * utils.py (move): don't overwrite files unless forced to.
+ (copy): likewise.
+
+2001-05-24 James Troup <james@nocrew.org>
+
+ * katie (check_files): determine the source version here instead
+ of during install().
+ (check_files): check for existent source with bin-only NMU
+ support.
+ (main): sort the list of changes so that the source-must-exist
+ check Does The Right Thing(tm).
+
+ * utils.py (changes_compare): new function; sorts a list of
+ changes files by 'have-source', source, version.
+ (cc_fix_changes): helper function.
+ (parse_changes): use compiled regexes.
+ (fix_maintainer): likewise.
+
+ * rene (main): warn about packages in experimental that are
+ superseded by newer versions in unstable.
+
+2001-05-21 James Troup <james@nocrew.org>
+
+ * rene (main): add support for checking for ANAIS (Architecture
+ Not Allowed In Source) packages.
+
+2001-05-17 James Troup <james@nocrew.org>
+
+ * katie (check_changes): initialize `architecture' dictionary in
+ changes global so that if we can't parse the changes file for
+ whatever reason we don't undef later on.
+
+ * utils.py (parse_changes): fix handling of multi-line fields
+ where the first line did have data.
+
+2001-05-05 Anthony Towns <ajt@debian.org>
+
+ * ziyi: Add "NotAutomatic: yes" to experimental Release files.
+ (It should always have been there. Ooopsy.)
+
+2001-05-03 Anthony Towns <ajt@debian.org>
+
+ * jenna: Cleanup packages that move from arch:any to arch:all or
+ vice-versa.
+
+2001-04-24 Anthony Towns <ajt@debian.org>
+
+ * ziyi: add ``SHA1:'' info to Release files. Also hack them up to
+ cope with debian-installer and boot-floppies' md5sum.txt.
+
+2001-04-16 James Troup <james@nocrew.org>
+
+ * katie (check_changes): add missing %s format string argument.
+ (stable_install): temporary work around for morgue location to
+ move installed changes files into.
+ (stable_install): helps if you actually read in the template.
+ (manual_reject): fix for editing of reject messages which was
+ using the wrong variable name.
+
+ * jenna (generate_src_list): typo; s/package/source/; fixes undef crash.
+
+2001-04-13 James Troup <james@nocrew.org>
+
+ * katie (manual_reject): Cc the installer.
+ (reject): don't.
+ (check_changes): remove unused maintainer-determination code.
+ (update_subst): add support for Changed-By when setting the
+ *MAINTAINER* variables.
+
+ * rene (bar): new function to check for packages on architectures
+ when they shouldn't be.
+
+ * natalie.py (main): use fubar() and warn() from utils.
+
+ * utils.py (whoami): new mini-function.
+ * melanie (main): use it.
+ * katie (manual_reject): likewise.
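whoami() plausibly pulls the invoking user's real name from the passwd GECOS field; this sketch is an assumption about the implementation, not a copy of it:

```python
import os, pwd

def whoami():
    """Full name of the invoking user, from the passwd GECOS field."""
    return pwd.getpwuid(os.getuid()).pw_gecos.split(",")[0]
```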
+
+2001-04-03 James Troup <james@nocrew.org>
+
+ * katie (action): ignore exceptions from os.path.getmtime() so we
+ don't crash on non-existent changes files (e.g. when they are
+ moved between the start of the install run in cron.daily and the
+ time we get round to processing them).
+
+ * madison (main): also list source and accept -s/--suite.
+
+ * jenna (generate_src_list): missing \n in error message.
+
+ * katie (update_subst): add sane defaults for when changes is
+ skeletal.
+
+2001-03-29 James Troup <james@nocrew.org>
+
+ * melanie (main): use fubar() and warn() from utils. Remember who
+ the maintainers of the removed packages are and display that info
+ to the user. Re-add support for melanie-specific Bcc-ing that got
+ lost in the TemplateSubst transition.
+
+ * utils.py (fubar): new function.
+ (warn): likewise.
+
+ * db_access.py (get_maintainer): as below.
+
+ * charisma (get_maintainer): moved the bulk of this function to
+ db_access so that melanie can use it too.
+
+ * claire.py (find_dislocated_stable): restrict the override join
+ to those entries where the suite is stable; this avoids problems
+ with packages which have moved to new sections (e.g. science)
+ between stable and unstable.
+
+2001-03-24 James Troup <james@nocrew.org>
+
+ * catherine (poolize): new function; not yet fully independent of
+ main().
+ (main): use it.
+
+ * katie (stable_install): __SUITE__ needs to be space prefixed
+ because buildds check for 'INSTALLED$'.
+
+2001-03-22 James Troup <james@nocrew.org>
+
+ * utils.py (regex_safe): also need to escape '.'; noticed by ajt@.
+
+ * jenna: rewritten; now does deletions on a per-suite level
+ instead of a per-suite-component-architecture-type level. This
+ allows multi-component packages to be auto-cleaned (and as a
+ bonus, reduces the code size and duplication).
+
+2001-03-22 Anthony Towns <ajt@debian.org>
+
+ * ziyi (main): fix ziyi to overwrite existing Release.gpg files
+ instead of just giving a gpg error.
+
+2001-03-21 James Troup <james@nocrew.org>
+
+ * madison (main): use apt_pkg.VersionCompare to sort versions so
+ that output is correctly sorted for packages like debhelper.
+ Noticed by ajt@.
+
+ * tea (check_source_in_one_dir): actually find problematic source
+ packages.
+
+ * katie (check_dsc): remember the orig.tar.gz's location ID if
+ it's not in a legacy suite.
+ (check_diff): we don't use orig_tar_id.
+ (install): add code to handle sourceful diff-only upload of
+ packages which change components by copying the .orig.tar.gz into
+ the new component, if it doesn't already exist there.
+ (process_it): reset orig_tar_location (as above).
+
+ * melanie (main): use template substitution for the bug closing
+ emails.
+ (main): don't hardcode bugs.debian.org or packages.debian.org
+ either; use configuration items.
+
+ * katie: likewise.
+
+ * natalie.py (init): use None rather than 'localhost' for the
+ hostname given to pg.connect.
+
+ * utils.py (TemplateSubst): new function; lifted from
+ userdir-ldap.
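TemplateSubst's behaviour can be sketched as a simple search-and-replace over a mapping; the marker names below are illustrative, not dak's actual template keys:

```python
def template_subst(mapping, template):
    # Minimal sketch of utils.py's TemplateSubst: substitute every
    # marker found in the template with its value.
    for marker, value in mapping.items():
        template = template.replace(marker, value)
    return template

msg = template_subst({"__PACKAGE__": "hello", "__VERSION__": "1.0"},
                     "Removing __PACKAGE__ (__VERSION__) from unstable.")
print(msg)  # Removing hello (1.0) from unstable.
```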
+
+2001-03-21 Ryan Murray <rmurray@debian.org>
+
+ * katie (announce): fix the case of non-existent
+ Suite::$SUITE::Announce.
+
+2001-03-20 Ryan Murray <rmurray@debian.org>
+
+ * debian/rules (binary-indep): install melanie into /usr/bin/ not
+ /usr/.
+
+ * alyson (main): use config variable for database name.
+ * andrea (main): likewise.
+ * catherine (main): likewise.
+ * charisma (main): likewise.
+ * cindy (main): likewise.
+ * claire.py (main): likewise.
+ * denise (main): likewise.
+ * heidi (main): likewise.
+ * jenna (main): likewise.
+ * katie (main): likewise.
+ * madison (main): likewise.
+ * melanie (main): likewise.
+ * neve (main): likewise.
+ * rhona (main): likewise.
+ * tea (main): likewise.
+
+2001-03-15 James Troup <james@nocrew.org>
+
+ * rhona (check_sources): fixed evil off by one (letter) error
+ which was causing only .dsc files to be deleted when cleaning
+ source packages.
+
+ * charisma (get_maintainer_from_source): remove really stupid and
+ gratuitous IN sub-query and replace with normal inner join.
+ (main): connect as read-only user nobody.
+
+ * rhona (clean_maintainers): rewritten to use SELECT and sub-query
+ with EXISTS.
+ (check_files): likewise; still disabled by default though.
+ (clean_binaries): add ' seconds' to the mysterious number in the
+ output.
+ (clean): likewise.
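The rewrite described above replaces dictionary-building loops with a correlated sub-query. A self-contained illustration of the "NOT EXISTS" pattern for finding unreferenced rows (schema and names invented for the demo; sqlite3 stands in for PostgreSQL):

```python
import sqlite3

# Illustrative schema, not dak's actual tables.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE maintainer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE source (id INTEGER PRIMARY KEY, maintainer INTEGER);
INSERT INTO maintainer VALUES (1, 'used'), (2, 'orphaned');
INSERT INTO source VALUES (1, 1);
""")
# Unreferenced maintainers, expressed with a correlated sub-query
# rather than "id NOT IN (SELECT ...)".
rows = db.execute("""
SELECT m.id, m.name FROM maintainer m
 WHERE NOT EXISTS (SELECT 1 FROM source s WHERE s.maintainer = m.id)
""").fetchall()
print(rows)  # [(2, 'orphaned')]
```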
+
+ * tea (check_files): add missing global declaration on db_files.
+
+2001-03-14 James Troup <james@nocrew.org>
+
+ * rhona: rewritten large chunks. Removed a lot of the silly
+ selecting into dictionaries and replaced it with 'where exists'
+ based sub queries. Added support for StayOfExecution. Fix the
+ problem with deleting dsc_files too early and disable cleaning of
+ unattached files.
+
+2001-03-14 Anthony Towns <ajt@debian.org>
+
+ * katie (announce): also check Changed-By when trying to detect
+ NMUs.
+
+2001-03-06 Anthony Towns <ajt@debian.org>
+
+ * ziyi (main): Generate Release.gpg files as well, using the key from
+ Dinstall::SigningKey in katie.conf, if present. That key has to be
+ passwordless, and hence kept fairly secret.
+
+2001-03-02 James Troup <james@nocrew.org>
+
+ * utils.py (str_isnum): new function; checks to see if the string
+ is a number.
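A sketch of what str_isnum amounts to (the real implementation in utils.py may differ, e.g. it could use a compiled regex):

```python
def str_isnum(s):
    # True iff the string consists solely of decimal digits.
    return s.isdigit()

print(str_isnum("1024"))  # True
```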
+
+ * shania (main): fix _hideous_ bug which was causing all files > 2
+ weeks old to be deleted from incoming, even if they were part of a
+ valid upload.
+
+2001-02-27 James Troup <james@nocrew.org>
+
+ * melanie (main): accept new argument -C/--carbon-copy which
+ allows arbitrary carbon-copying of the bug closure messages.
+ Cleaned up code by putting Cnf["Melanie::Options"] sub-tree into a
+ separate variable.
+
+2001-02-27 Anthony Towns <ajt@debian.org>
+
+ * ziyi: new program; generates Release files.
+
+2001-02-25 James Troup <james@nocrew.org>
+
+ * katie (reject): add missing '\n' to error message.
+ (manual_reject): likewise.
+ (install): catch exceptions from moving the changes file into DONE
+ and ignore them.
+
+ * tea (check_md5sums): new function.
+
+2001-02-25 Michael Beattie <mjb@debian.org>
+
+ * melanie: use $EDITOR if available.
+
+2001-02-15 James Troup <james@nocrew.org>
+
+ * utils.py (parse_changes): don't crash and burn on empty .changes
+ files. Symptoms noticed by mjb@.
+
+2001-02-15 Adam Heath <doogie@debian.org>
+
+ * denise (main): use an absolute path for the output filename.
+
+ * sql-aptvc.cpp: don't #include <utils/builtins.h> as it causes
+ compile errors with postgresql-dev >= 7.0.
+
+2001-02-12 James Troup <james@nocrew.org>
+
+ * rene: initial version.
+
+ * andrea: initial version.
+
+ * catherine (main): remove obsolete assignment of arguments.
+
+2001-02-09 James Troup <james@nocrew.org>
+
+ * catherine: first working version.
+
+2001-02-06 James Troup <james@nocrew.org>
+
+ * katie (check_files): validate the priority field; i.e. ensure it
+ doesn't contain a '/' (to catch people prepending the priority
+ with the component rather than the section).
+ (check_override): don't warn about source packages; the only check
+ is on section and we have no GUI tools that would use the Section
+ field for a Sources file.
+ (announce): use tags rather than severities for NMUs. Requested
+ by Josip Rodin <joy@>. [#78035]
+
+2001-02-04 James Troup <james@nocrew.org>
+
+ * tea (check_override): new function; ensures packages in suites
+ are also in the override file. Thanks to bod@ for noticing that
+ such packages existed.
+
+ * katie: move file type compiled regular expressions to utils as
+ catherine uses them too.
+ (check_changes): always default maintainer822 to the installer
+ address so that any bail out won't cause undefs later.
+ (check_files): update file type re's to match above.
+ (stable_install): likewise.
+ (reject): handle any exception from moving the changes files. Fixes
+ crashes on unreadable changes files.
+
+ * melanie (main): add an explanation of why things are not removed
+ from testing.
+
+2001-01-31 James Troup <james@nocrew.org>
+
+ * melanie (main): ignore a) no message, b) removing from stable or
+ testing when invoked with -n/--no-action.
+
+ * katie (check_override): lower section before checking to see if
+ we're whining about 'non-US' versus 'non-US/main'.
+
+ * sql-aptvc.cpp: new file; wrapper around apt's version comparison
+ function so that we can use inside of PostgreSQL.
+
+2001-01-28 James Troup <james@nocrew.org>
+
+ * katie: updated to pass new flag to parse_changes() and deal with
+ the exception raised on invalid .dsc's if appropriate.
+ * shania (main): likewise.
+ * melanie (main): likewise.
+
+ * tea (check_dscs): new function to validate all .dsc files in
+ unstable.
+
+ * utils.py (parse_changes): if passed an additional flag, validate
+ the .dsc file to ensure it's extractable by dpkg-source.
+ Requested by Ben Collins <bcollins@>.
+
+2001-01-27 James Troup <james@nocrew.org>
+
+ * madison (main): connect to the DB as nobody.
+
+ * katie (check_files): remove support for -r/--no-version-check
+ since it makes no sense under katie (jenna will automatically
+ remove the (new) older version) and was evil in any event.
+ (check_changes): add missing new line to rejection message.
+ (check_dsc): likewise.
+ (process_it): reset reject_message here.
+ (main): not here. Also remove support for -r.
+
+2001-01-26 James Troup <james@nocrew.org>
+
+ * katie (check_override): don't whine about 'non-US/main' versus
+ 'non-US'.
+
+2001-01-26 Michael Beattie <mjb@debian.org>
+
+ * natalie.py (usage): new function.
+ (main): use it.
+
+2001-01-25 Antti-Juhani Kaijanaho <gaia@iki.fi>
+
+ * update-mirrorlists: Update README.non-US too (request from Joy).
+
+2001-01-25 James Troup <james@nocrew.org>
+
+ * katie (reject): catch any exception from utils.move() and just
+ pass, we previously only caught can't-overwrite errors and not
+ can't-read ones.
+
+ * jenna (generate_src_list): use ORDER BY in selects to avoid
+ unnecessary changes to Packages files.
+ (generate_bin_list): likewise.
+
+ * utils.py (extract_component_from_section): separated out from
+ build_file_list() as it's now used by claire too.
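The split that extract_component_from_section() performs can be sketched roughly as follows. This is simplified: dak's real rules also special-case non-US, and the return shape here is an assumption for the demo:

```python
def extract_component_from_section(section):
    # Treat everything before the final '/' as the component, and
    # default bare sections such as 'admin' to component 'main'.
    if "/" in section:
        component, _, section = section.rpartition("/")
    else:
        component = "main"
    return (section, component)

print(extract_component_from_section("contrib/net"))  # ('net', 'contrib')
print(extract_component_from_section("admin"))        # ('admin', 'main')
```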
+
+ * claire.py (find_dislocated_stable): rewrite the query to extract
+ section information and handle component-less locations properly.
+ Thanks to ajt@ for the improved queries.
+ (fix_component_section): new function to fix components and
+ sections.
+
+2001-01-23 James Troup <james@nocrew.org>
+
+ * katie (check_files): set file type for (u?)debs first thing, so
+ that if we continue, other functions which rely on file type
+ existing don't bomb out. If apt_pkg or apt_inst raise an
+ exception when parsing the control file, don't try any other
+ checks, just drop out.
+ (check_changes): new test to ensure there is actually a target
+ distribution.
+
+2001-01-22 James Troup <james@nocrew.org>
+
+ * katie (usage): s/dry-run/no-action/. Noticed by Peter Gervai
+ <grin@>.
+ (check_changes): when mapping to unstable, remember to actually
+ add unstable to the suite list and not just remove the invalid
+ suite.
+
+2001-01-21 James Troup <james@nocrew.org>
+
+ * katie (check_files): catch exceptions from debExtractControl()
+ and reject packages which raise any.
+
+2001-01-19 James Troup <james@nocrew.org>
+
+ * katie (check_signature): basename() file name in rejection
+ message.
+
+2001-01-18 James Troup <james@nocrew.org>
+
+ * katie (in_override_p): remember the section and priority from
+ the override file so we can check them against the package later.
+ (check_override): new function; checks section and priority (for
+ binaries) from the package against the override file and mails the
+ maintainer about any disparities.
+ (install): call check_override after announcing the upload.
+
+2001-01-16 James Troup <james@nocrew.org>
+
+ * utils.py (build_file_list): catch ValueError's from splitting up
+ the files field and translate it into a parse error.
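The guard described above, translating a bare ValueError from splitting the Files: field into a proper parse error, looks roughly like this; the exception and helper names are ours, not dak's:

```python
class ParseChangesError(Exception):
    """Hypothetical stand-in for the parse-error exception utils.py raises."""

def split_files_line(line):
    # A malformed Files: line raises ValueError from tuple unpacking;
    # translate it into the parser's own exception.
    try:
        md5sum, size, section, priority, name = line.split()
    except ValueError:
        raise ParseChangesError("malformed Files: line: %r" % line)
    return {"md5sum": md5sum, "size": size, "section": section,
            "priority": priority, "name": name}

entry = split_files_line("0123456789abcdef 1024 admin optional hello_1.0.dsc")
print(entry["name"])  # hello_1.0.dsc
```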
+
+ * tea: add support for finding unreferenced files.
+
+ * katie (in_override_p): add support for suite-aliasing so that
+ proposed-updates uploads work again.
+ (check_changes): catch parse errors from utils.build_file_list().
+ (check_dsc): likewise.
+ (check_diff): yet more dpkg breakage, so we require an even newer
+ version.
+
+ * jenna (generate_bin_list): don't do nasty path munging that's no
+ longer needed.
+
+ * denise (main): support for non-US; and rename testing's override
+ files so they're based on testing's codename.
+
+2001-01-16 Martin Michlmayr <tbm@cyrius.com>
+
+ * melanie: add to the bug closing message explaining what happens
+ (or rather doesn't) with bugs against packages that have been
+ removed.
+
+2001-01-14 James Troup <james@nocrew.org>
+
+ * charisma (main): fix silly off-by-one error; suite priority
+ checking was done using "less than" rather than "less than or
+ equal to" which was causing weird heisenbugs with wrong Maintainer
+ fields.
+
+2001-01-10 James Troup <james@nocrew.org>
+
+ * katie (in_override_p): adapted to use SQL-based overrides.
+ read_override_file function disappears.
+
+ * db_access.py: add new functions get_section_id, get_priority_id
+ and get_override_type_id.
+ (get_architecture_id): return -1 if the architecture is not found.
+
+ * heidi: switch %d -> %d in all SQL queries.
+ (get_list): Use string.join where appropriate.
+
+ * rhona (in_override_p): don't die if the override file doesn't
+ exist.
+ (main): warn if the override file doesn't exist.
+
+ * alyson: new script; will eventually sync the config file and the
+ SQL database.
+
+ * natalie.py: new script; manipulates overrides.
+
+ * melanie: new script; removes packages from suites.
+
+2001-01-08 James Troup <james@nocrew.org>
+
+ * katie (re_bad_diff): whee; dpkg 1.8.1.1 didn't actually fix
+ anything it just changed the symptom. Recognise the new breakage
+ and reject them too.
+
+2001-01-07 James Troup <james@nocrew.org>
+
+ * katie (check_dsc): when adding the cwd copy of the .orig.tar.gz
+ to the .changes file, be sure to set up files[filename]["type"]
+ too.
+
+2001-01-06 James Troup <james@nocrew.org>
+
+ * katie (check_diff): new function; detects bad diff files
+ produced by dpkg 1.8.1 and rejects them.
+ (process_it): call check_diff().
+ (check_dsc): gar. Add support for multiple versions of the
+ .orig.tar.gz file in the archive from -sa uploads. Check md5sum
+ and size against all versions and use one which matches if
+ possible and exclude any that don't from being poolized to avoid
+ file overwrites. Thanks to broonie@ for providing the example.
+ (install): skip any files marked as excluded as above.
+
+2001-01-05 James Troup <james@nocrew.org>
+
+ * heidi (process_file): add missing argument to error message.
+
+2001-01-04 James Troup <james@nocrew.org>
+
+ * heidi (main): fix handling of multiple files by reading all
+ files not just the first file n times (where n = the number of
+ files passed as arguments).
+
+2001-01-04 Anthony Towns <ajt@debian.org>
+
+ * katie (check_dsc): proper fix for the code which locates the
+ .orig.tar.gz; check for '<filename>$' or '^<filename>$'.
+
+2000-12-20 James Troup <james@nocrew.org>
+
+ * rhona: replace IN's with EXISTS's to make DELETE time for
+ binaries and source sane on auric. Add a -n/--no-action flag and
+ make it stop actions if used. Fixed a bug in binaries deletion
+ with no StayOfExecution (s/</<=/). Add working -h/--help and
+ -V/--version. Giving timing info on deletion till I'm sure it's
+ sane.
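The IN-to-EXISTS rewrite can be illustrated with a self-contained demo; the schema is invented for the example and sqlite3 stands in for PostgreSQL:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE files (id INTEGER PRIMARY KEY, last_used INTEGER);
CREATE TABLE binaries (id INTEGER PRIMARY KEY, file INTEGER);
INSERT INTO files VALUES (1, 100), (2, 100);
INSERT INTO binaries VALUES (1, 1);
""")
# Slow form:   DELETE FROM binaries WHERE file IN (SELECT id FROM files ...)
# Faster form: a correlated EXISTS, which the planner can execute as a join.
db.execute("""
DELETE FROM binaries WHERE EXISTS
  (SELECT 1 FROM files f WHERE f.id = binaries.file AND f.last_used <= 100)
""")
remaining = db.execute("SELECT COUNT(*) FROM binaries").fetchone()[0]
print(remaining)  # 0
```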
+
+ * katie (check_changes): map testing to unstable.
+
+ * madison: new script; shows versions in different architectures.
+
+ * katie (check_dsc): ensure size matches as well as md5sum;
+ suggested by Ben Collins <bcollins@debian.org> in Debian Bug
+ #69702.
+
+2000-12-19 James Troup <james@nocrew.org>
+
+ * katie (reject): ignore the "can't overwrite file" exception from
+ utils.move() and leave the files where they are.
+ (reject): doh! os.access() test was reversed so we only tried to
+ move files which didn't exist... replaced with os.path.exists()
+ test the right way round.
+
+ * utils.py (move): raise an exception if we can't overwrite the
+ destination file.
+ (copy): likewise.
+
+2000-12-18 James Troup <james@nocrew.org>
+
+ * rhona: first working version.
+
+ * db_access.py (get_files_id): force both sizes to be integers.
+
+ * katie (main): use size_type().
+
+ * utils.py (size_type): new function; pretty prints a file size.
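size_type's pretty-printing might look like this; the thresholds and formats here are assumptions, not necessarily dak's exact output:

```python
def size_type(size):
    # Pretty-print a file size in bytes as a human-readable string.
    if size > 1024 * 1024:
        return "%.1f MB" % (size / (1024.0 * 1024.0))
    if size > 1024:
        return "%.1f KB" % (size / 1024.0)
    return "%d B" % size

print(size_type(3 * 1024 * 1024))  # 3.0 MB
```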
+
+2000-12-17 James Troup <james@nocrew.org>
+
+ * charisma (main): do version compares so that older packages do
+ not override newer ones and process source first as source wins
+ over binaries in terms of who we think of as the Maintainer.
+
+2000-12-15 James Troup <james@nocrew.org>
+
+ * katie (install): use the files id for the .orig.tar.gz from
+ check_dsc().
+ (install): limit select for legacy source to a) source in legacy
+ or legacy-mixed type locations and b) distinct on files.id.
+ (install): rather than the bizarre insert new, delete old method
+ for moving legacy source into the pool, use a simple update of
+ files.
+ (process_it): initialize some globals before each process.
+
+2000-12-14 James Troup <james@nocrew.org>
+
+ * katie (in_override_p): index on binary_type too since .udeb
+ overrides are in a different file.
+ (read_override_file): likewise.
+ (check_files): correct filename passed to get_files_id().
+ (check_dsc): we _have_ to prepend '/' to the filename to avoid
+ mismatches like jabber.orig.tar.gz versus libjabber.orig.tar.gz.
+ (check_dsc): remember the files id of the .orig.tar.gz, not the
+ location id.
+
+2000-12-13 James Troup <james@nocrew.org>
+
+ * utils.py (poolify): force the component to lower case except for
+ non-US.
+
+ * katie (in_override_p): handle .udeb-specific override files.
+ (check_files): pass the binary type to in_override_p().
+ (check_dsc): remember the location id of the old .orig.tar.gz in
+ case it's not in the pool.
+ (install): use location id from dsc_files; which is where
+ check_dsc() puts it for old .orig.tar.gz files.
+ (install): install files after all DB work is complete.
+ (reject): basename() the changes filename.
+ (manual_reject): likewise.
+
+ * shania: new program; replaces incomingcleaner.
+
+2000-12-05 James Troup <james@nocrew.org>
+
+ * katie (check_changes): if inside stable and can't find files
+ from the .changes; assume it's installed in the pool and chdir()
+ to there.
+ (check_files): we are not installing for stable installs, so don't
+ check for overwriting existing files.
+ (check_dsc): likewise.
+ (check_dsc): reorder .orig.tar.gz handling so that we search in
+ the pool first and only then fall back on any .orig.tar.gz in the
+ cwd; this avoids false positives on the overwrite check when
+ people needlessly reupload the .orig.tar.gz in a non-sa upload.
+ (install): if this is a stable install, bail out to
+ stable_install() immediately.
+ (install): dsc_files handling was horribly broken. a) we need to
+ add files from the .dsc and not the .changes (duh), b) we need to
+ add the .dsc file itself to dsc_files (to be consistent with neve
+ if for no other reason).
+ (stable_install): new function; handles installs from inside
+ proposed-updates to stable.
+ (acknowledge_new): basename changes_filename before doing
+ anything.
+ (process_it): absolutize the changes filename to avoid the
+ requirement of being in the same directory as the .changes file.
+ (process_it): save and restore the cwd as stable installs can
+ potentially jump into the pool to find files.
+
+ * jenna: dislocated_files support using claire.
+
+ * heidi (process_file): select package field from binaries
+ explicitly.
+
+ * db_access.py (get_files_id): fix cache key used.
+
+ * utils.py (build_file_list): fix 'non-US/non-free' case in
+ section/component splitting.
+ (move): use os.path.isdir() rather than stat.
+ (copy): likewise.
+
+ * claire.py: new file; stable in non-stable munger.
+
+ * tea: new file; simply ensures all files in the DB exist.
+
+2000-12-01 James Troup <james@nocrew.org>
+
+ * katie (check_dsc): use regex_safe().
+ (check_changes): typo in changes{} key:
+ s/distributions/distribution/.
+ (install): use changes["source"], not files[file]["source"] as the
+ latter may not exist and the former is used elsewhere. Commit the
+ SQL transaction earlier.
+
+ * utils.py (regex_safe): new function; escapes characters which
+ have meaning to SQL's regex comparison operator ('~').
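A sketch of what regex_safe has to do: escape every character that is significant to the POSIX regexes behind SQL's '~' operator so a filename matches literally. The character list is illustrative (note '.' is included, whose omission the 2001-03-22 entry fixes); Python's own `re.escape` does the equivalent job for Python regexes:

```python
def regex_safe(s):
    # Escape backslash first so later escapes aren't doubled, then the
    # remaining regex metacharacters.
    for c in "\\.+?*[]^$":
        s = s.replace(c, "\\" + c)
    return s

print(regex_safe("hello_1.0.orig.tar.gz"))  # hello_1\.0\.orig\.tar\.gz
```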
+
+2000-11-30 James Troup <james@nocrew.org>
+
+ * katie (install): pool_location is based on source package name,
+ not binary package.
+
+ * utils.py (move): if dest is a directory, append the filename
+ before chmod-ing.
+ (copy): ditto.
+
+ * katie (check_files): don't allow overwriting of existing .debs.
+ (check_dsc): don't allow overwriting of existing source files.
+
+2000-11-27 James Troup <james@nocrew.org>
+
+ * katie (check_signature): don't try to load rsaref; it's
+ obsolete.
+ (in_override_p): don't try to lookup override entries for packages
+ with an invalid suite name.
+ (announce): don't assume the suite name is valid; use Find() to
+ lookup the mailing list name for announcements.
+
+ * utils.py (where_am_i): typo; hostname is in the first element,
+ not second.
+
+ * db_access.py (get_suite_id): return -1 on an unknown suite.
+
+2000-11-26 James Troup <james@nocrew.org>
+
+ * katie (install): fix CopyChanges handling; typo in checking
+ Cnf for CopyChanges flag and was calling non-existent function
+ copy_file.
+
+ * utils.py (copy): new function; clone of move without the
+ unlink().
+
+2000-11-25 James Troup <james@nocrew.org>
+
+ * utils.py (build_file_list): handle non-US prefixes properly
+ (i.e. 'non-US' -> 'non-US/main' and 'non-US/libs' -> 'non-US/main'
+ + 'libs' not 'non-US/libs').
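The non-US prefix normalisation described above can be sketched like so; the helper name and the component list are assumptions for the demo:

```python
KNOWN_COMPONENTS = {"main", "contrib", "non-free"}  # assumed component set

def map_non_us(prefix):
    # A bare 'non-US' means component 'non-US/main'; a suffix that is
    # not a real component (e.g. 'libs') is actually a section, so the
    # component falls back to 'non-US/main' and the suffix is returned
    # as the section.
    if prefix == "non-US":
        return ("non-US/main", None)
    if prefix.startswith("non-US/"):
        rest = prefix.split("/", 1)[1]
        if rest in KNOWN_COMPONENTS:
            return ("non-US/" + rest, None)
        return ("non-US/main", rest)
    return (prefix, None)

print(map_non_us("non-US/libs"))  # ('non-US/main', 'libs')
```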
+ (send_mail): add '-odq' to sendmail invocation to avoid DNS lookup
+ delays. This is possibly (probably) exim-specific and (like other
+ sendmail options) needs to be in the config file.
+
+2000-11-24 James Troup <james@nocrew.org>
+
+ * rhona (check_sources): we need file id from dsc_files; not id.
+ Handle non .dsc source files being re-referenced in dsc_files.
+
+ * katie (in_override_p): strip out any 'non-US' prefix.
+ (check_files): use utils.where_am_i() rather than hardcoding.
+ (check_files): validate the component.
+ (install): use utils.where_am_i() rather than hardcoding.
+ (install): fix mail to go to actual recipient.
+ (reject): likewise.
+ (manual_reject): likewise.
+ (acknowledge_new): likewise.
+ (announce): likewise.
+
+ * db_access.py (get_component_id): ignore case when searching for
+ the component and don't crash if the component can't be found, but
+ return -1.
+ (get_location_id): handle -1 from get_component_id().
+
+ * jenna (generate_src_list): don't bring 'suite' into our big
+ multi-table-joining select as we already know the 'suite_id'.
+ (generate_bin_list): likewise.
+
+ * neve (main): don't quit if not on ftp-master.
+ (process_packages): remove unused variable 'suite_codename'.
+
+ * utils.py (move): actually move the file.
+ (build_file_list): handle non-US prefixes in the section.
+
+ * catherine (main): use which_conf_file().
+ * charisma (main): likewise.
+ * heidi (main): likewise.
+ * jenna (main): likewise.
+ * katie (main): likewise.
+ * neve (main): likewise.
+ * rhona (main): likewise.
+
+ * utils.py (where_am_i): new routine; determines the archive as
+ understood by other 'dak' programs.
+ (which_conf_file): new routine; determines the conf file to read.
+
+2000-11-17 James Troup <james@nocrew.org>
+
+ * katie (install): fix where .changes files for proposed-updates
+ go.
+
+2000-11-04 James Troup <james@nocrew.org>
+
+ * jenna (main): handle architecture properly if no
+ -a/--architecture argument is given, i.e. reset architecture with
+ the values for the suite for each suite.
+
+ * Add apt_pkg.init() to the start of all scripts as it's now
+ required by python-apt.
+
+2000-10-29 James Troup <james@nocrew.org>
+
+ * jenna (generate_bin_list): take an additional argument 'type'
+ and use it in the SELECT.
+ (main): if processing component 'main', process udebs and debs.
+
+ * neve (process_packages): set up 'type' in 'binaries' (by
+ assuming .deb).
+
+ * katie (re_isadeb): accept ".udeb" or ".deb" as a file ending.
+ (check_files): set up files[file]["dbtype"].
+ (install): use files[file]["dbtype"] to set up the 'type' field in
+ the 'binaries' table.
+
+ * init_pool.sql: add a 'type' field to the 'binaries' table to
+ distinguish between ".udeb" and ".deb" files.
+
+ * utils.py (move): scrap basename() usage; use a "dir_p(dest) ?
+ dest : dirname(dest)" type check instead.
+
+ * katie (check_dsc): handle the case of an .orig.tar.gz not found
+ in the pool without crashing. Also handle the case of being asked
+ to look for something other than an .orig.tar.gz in the pool.
+
+2000-10-26 James Troup <james@nocrew.org>
+
+ * katie (install): fix filenames put into files table during
+ poolification of legacy source.
+
+ * utils.py (move): work around a bug in os.path.basename() which
+ cunningly returns '' if there is a trailing slash on the path
+ passed to it.
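The trailing-slash behaviour noted above (strictly documented behaviour rather than a bug, but surprising) is easy to demonstrate, along with the workaround:

```python
import os.path

def safe_basename(path):
    # os.path.basename() returns '' for a path with a trailing slash;
    # strip the slash first to get the last component.
    if path.endswith("/"):
        path = path[:-1]
    return os.path.basename(path)

print(os.path.basename("/org/pool/main/"))  # '' -- the surprising case
print(safe_basename("/org/pool/main/"))     # 'main'
```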
+
+ * katie (check_dsc): Remove more cruft. If we find the
+ .orig.tar.gz in the pool and it's in a legacy (or legacy-mixed)
+ location, make a note of that so we can fix things in install().
+ (install): as above. Move any old source out of legacy locations
+ so that 'apt-get source' will work.
+ (process_it): reset the flag that indicates to install that the
+ source needs to be moved.
+
+ * cron.daily: more. Nowhere near complete yet though.
+
+ * katie (install): don't run os.makedirs, a) utils.move() does
+ this now, b) we weren't removing the user's umask and were
+ creating dirs with SNAFU permissions.
+ (check_dsc): rewrite the .orig.tar.gz handling to take into
+ account, err, package pools. i.e. look anywhere in the pool
+ rather than faffing around with two simple paths.
+
+ * neve (process_sources): add the .dsc to dsc_files too.
+
+2000-10-25 James Troup <james@nocrew.org>
+
+ * neve (process_sources): don't duplicate .orig.tar.gz's.
+
+2000-10-23 James Troup <james@nocrew.org>
+
+ * utils.py (re_extract_src_version): moved here.
+
+ * neve: move re_extract_src_version to utils.
+ (process_packages): reflect change.
+
+ * katie (install): reflect change.
+
+2000-10-19 James Troup <james@nocrew.org>
+
+ * jenna (generate_src_list): handle locations with null
+ components.
+ (generate_bin_list): likewise.
+
--- /dev/null
+#!/usr/bin/env python
+
+# Various different sanity checks
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: tea,v 1.31 2004-11-27 18:03:11 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# And, lo, a great and menacing voice rose from the depths, and with
+# great wrath and vehemence its voice boomed across the
+# land... ``hehehehehehe... that *tickles*''
+# -- aj on IRC
+
+################################################################################
+
+import commands, os, pg, stat, string, sys, time;
+import db_access, utils;
+import apt_pkg, apt_inst;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+db_files = {};
+waste = 0.0;
+excluded = {};
+current_file = None;
+future_files = {};
+current_time = time.time();
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: tea MODE
+Run various sanity checks of the archive and/or database.
+
+ -h, --help show this help and exit.
+
+The following MODEs are available:
+
+ md5sums - validate the md5sums stored in the database
+ files - check files in the database against what's in the archive
+ dsc-syntax - validate the syntax of .dsc files in the archive
+ missing-overrides - check for missing overrides
+ source-in-one-dir - ensure the source for each package is in one directory
+ timestamps - check for future timestamps in .deb's
+ tar-gz-in-dsc - ensure each .dsc lists a .tar.gz file
+ validate-indices - ensure files mentioned in Packages & Sources exist
+ files-not-symlinks - check files in the database aren't symlinks
+ validate-builddeps - validate build-dependencies of .dsc files in the archive
+"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def process_dir (unused, dirname, filenames):
+ global waste, db_files, excluded;
+
+ if dirname.find('/disks-') != -1 or dirname.find('upgrade-') != -1:
+ return;
+ # hack; can't handle .changes files
+ if dirname.find('proposed-updates') != -1:
+ return;
+ for name in filenames:
+ filename = os.path.abspath(dirname+'/'+name);
+ filename = filename.replace('potato-proposed-updates', 'proposed-updates');
+ if os.path.isfile(filename) and not os.path.islink(filename) and not db_files.has_key(filename) and not excluded.has_key(filename):
+ waste += os.stat(filename)[stat.ST_SIZE];
+ print filename
+
+################################################################################
+
+def check_files():
+ global db_files;
+
+ print "Building list of database files...";
+ q = projectB.query("SELECT l.path, f.filename FROM files f, location l WHERE f.location = l.id")
+ ql = q.getresult();
+
+ db_files.clear();
+ for i in ql:
+ filename = os.path.abspath(i[0] + i[1]);
+ db_files[filename] = "";
+ if os.access(filename, os.R_OK) == 0:
+ utils.warn("'%s' doesn't exist." % (filename));
+
+ filename = Cnf["Dir::Override"]+'override.unreferenced';
+ if os.path.exists(filename):
+ file = utils.open_file(filename);
+ for filename in file.readlines():
+ filename = filename[:-1];
+ excluded[filename] = "";
+
+ print "Checking against existing files...";
+
+ os.path.walk(Cnf["Dir::Root"]+'pool/', process_dir, None);
+
+ print
+ print "%s wasted..." % (utils.size_type(waste));
+
+################################################################################
+
+def check_dscs():
+ count = 0;
+ suite = 'unstable';
+ for component in Cnf.SubTree("Component").List():
+ if component == "mixed":
+ continue;
+ component = component.lower();
+ list_filename = '%s%s_%s_source.list' % (Cnf["Dir::Lists"], suite, component);
+ list_file = utils.open_file(list_filename);
+ for line in list_file.readlines():
+ file = line[:-1];
+ try:
+ utils.parse_changes(file, signing_rules=1);
+ except utils.invalid_dsc_format_exc, line:
+ utils.warn("syntax error in .dsc file '%s', line %s." % (file, line));
+ count += 1;
+
+ if count:
+ utils.warn("Found %s invalid .dsc files." % (count));
+
+################################################################################
+
+def check_override():
+ for suite in [ "stable", "unstable" ]:
+ print suite
+ print "-"*len(suite)
+ print
+ suite_id = db_access.get_suite_id(suite);
+ q = projectB.query("""
+SELECT DISTINCT b.package FROM binaries b, bin_associations ba
+ WHERE b.id = ba.bin AND ba.suite = %s AND NOT EXISTS
+ (SELECT 1 FROM override o WHERE o.suite = %s AND o.package = b.package)"""
+ % (suite_id, suite_id));
+ print q
+ q = projectB.query("""
+SELECT DISTINCT s.source FROM source s, src_associations sa
+ WHERE s.id = sa.source AND sa.suite = %s AND NOT EXISTS
+ (SELECT 1 FROM override o WHERE o.suite = %s and o.package = s.source)"""
+ % (suite_id, suite_id));
+ print q
+
+################################################################################
+
+# Ensure that the source files for any given package are all in one
+# directory so that 'apt-get source' works...
+
+def check_source_in_one_dir():
+ # Not the most enterprising method, but hey...
+ broken_count = 0;
+ q = projectB.query("SELECT id FROM source;");
+ for i in q.getresult():
+ source_id = i[0];
+ q2 = projectB.query("""
+SELECT l.path, f.filename FROM files f, dsc_files df, location l WHERE df.source = %s AND f.id = df.file AND l.id = f.location"""
+ % (source_id));
+ first_path = "";
+ first_filename = "";
+ broken = 0;
+ for j in q2.getresult():
+ filename = j[0] + j[1];
+ path = os.path.dirname(filename);
+ if first_path == "":
+ first_path = path;
+ first_filename = filename;
+ elif first_path != path:
+ symlink = path + '/' + os.path.basename(first_filename);
+ if not os.path.exists(symlink):
+ broken = 1;
+ print "WOAH, we got a live one here... %s [%s] {%s}" % (filename, source_id, symlink);
+ if broken:
+ broken_count += 1;
+ print "Found %d source packages where the source is not all in one directory." % (broken_count);
+
+################################################################################
+
+def check_md5sums():
+ print "Getting file information from database...";
+ q = projectB.query("SELECT l.path, f.filename, f.md5sum, f.size FROM files f, location l WHERE f.location = l.id")
+ ql = q.getresult();
+
+ print "Checking file md5sums & sizes...";
+ for i in ql:
+ filename = os.path.abspath(i[0] + i[1]);
+ db_md5sum = i[2];
+ db_size = int(i[3]);
+ try:
+ file = utils.open_file(filename);
+ except:
+ utils.warn("can't open '%s'." % (filename));
+ continue;
+ md5sum = apt_pkg.md5sum(file);
+ size = os.stat(filename)[stat.ST_SIZE];
+ if md5sum != db_md5sum:
+ utils.warn("**WARNING** md5sum mismatch for '%s' ('%s' [current] vs. '%s' [db])." % (filename, md5sum, db_md5sum));
+ if size != db_size:
+ utils.warn("**WARNING** size mismatch for '%s' ('%s' [current] vs. '%s' [db])." % (filename, size, db_size));
+
+ print "Done."
+
+################################################################################
+#
+# Check all files for timestamps in the future; common on hardware
+# (e.g. alpha) whose clocks default to far-future dates.
+
+def Ent(Kind,Name,Link,Mode,UID,GID,Size,MTime,Major,Minor):
+ global future_files;
+
+ if MTime > current_time:
+ future_files[current_file] = MTime;
+ print "%s: %s '%s','%s',%u,%u,%u,%u,%u,%u,%u" % (current_file, Kind,Name,Link,Mode,UID,GID,Size, MTime, Major, Minor);
+
+def check_timestamps():
+ global current_file;
+
+ q = projectB.query("SELECT l.path, f.filename FROM files f, location l WHERE f.location = l.id AND f.filename ~ '[.]deb$'")
+ ql = q.getresult();
+ db_files.clear();
+ count = 0;
+ for i in ql:
+ filename = os.path.abspath(i[0] + i[1]);
+ if os.access(filename, os.R_OK):
+ file = utils.open_file(filename);
+ current_file = filename;
+ sys.stderr.write("Processing %s.\n" % (filename));
+ apt_inst.debExtract(file,Ent,"control.tar.gz");
+ file.seek(0);
+ apt_inst.debExtract(file,Ent,"data.tar.gz");
+ count += 1;
+ print "Checked %d files (out of %d)." % (count, len(ql));
+
+################################################################################
+
+def check_missing_tar_gz_in_dsc():
+ count = 0;
+
+ print "Building list of database files...";
+ q = projectB.query("SELECT l.path, f.filename FROM files f, location l WHERE f.location = l.id AND f.filename ~ '\\.dsc$'");
+ ql = q.getresult();
+ if ql:
+ print "Checking %d files..." % len(ql);
+ else:
+ print "No files to check."
+ for i in ql:
+ filename = os.path.abspath(i[0] + i[1]);
+ try:
+ # NB: don't enforce .dsc syntax
+ dsc = utils.parse_changes(filename);
+ except:
+ utils.fubar("error parsing .dsc file '%s'." % (filename));
+ dsc_files = utils.build_file_list(dsc, is_a_dsc=1);
+ has_tar = 0;
+ for file in dsc_files.keys():
+ m = utils.re_issource.match(file);
+ if not m:
+ utils.fubar("%s not recognised as source." % (file));
+ type = m.group(3);
+ if type == "orig.tar.gz" or type == "tar.gz":
+ has_tar = 1;
+ if not has_tar:
+ utils.warn("%s has no .tar.gz in the .dsc file." % (filename));
+ count += 1;
+
+ if count:
+ utils.warn("Found %s invalid .dsc files." % (count));
+
+
+################################################################################
+
+def validate_sources(suite, component):
+ filename = "%s/dists/%s/%s/source/Sources.gz" % (Cnf["Dir::Root"], suite, component);
+ print "Processing %s..." % (filename);
+ # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
+ temp_filename = utils.temp_filename();
+ (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
+ if (result != 0):
+ sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output));
+ sys.exit(result);
+ sources = utils.open_file(temp_filename);
+ Sources = apt_pkg.ParseTagFile(sources);
+ while Sources.Step():
+ source = Sources.Section.Find('Package');
+ directory = Sources.Section.Find('Directory');
+ files = Sources.Section.Find('Files');
+ for i in files.split('\n'):
+ (md5, size, name) = i.split();
+ filename = "%s/%s/%s" % (Cnf["Dir::Root"], directory, name);
+ if not os.path.exists(filename):
+ if directory.find("potato") == -1:
+ print "W: %s missing." % (filename);
+ else:
+ pool_location = utils.poolify (source, component);
+ pool_filename = "%s/%s/%s" % (Cnf["Dir::Pool"], pool_location, name);
+ if not os.path.exists(pool_filename):
+ print "E: %s missing (%s)." % (filename, pool_filename);
+ else:
+ # Create symlink
+ pool_filename = os.path.normpath(pool_filename);
+ filename = os.path.normpath(filename);
+ src = utils.clean_symlink(pool_filename, filename, Cnf["Dir::Root"]);
+ print "Symlinking: %s -> %s" % (filename, src);
+ #os.symlink(src, filename);
+ sources.close();
+ os.unlink(temp_filename);
+
+########################################
+
+def validate_packages(suite, component, architecture):
+ filename = "%s/dists/%s/%s/binary-%s/Packages.gz" \
+ % (Cnf["Dir::Root"], suite, component, architecture);
+ print "Processing %s..." % (filename);
+ # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
+ temp_filename = utils.temp_filename();
+ (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
+ if (result != 0):
+ sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output));
+ sys.exit(result);
+ packages = utils.open_file(temp_filename);
+ Packages = apt_pkg.ParseTagFile(packages);
+ while Packages.Step():
+ filename = "%s/%s" % (Cnf["Dir::Root"], Packages.Section.Find('Filename'));
+ if not os.path.exists(filename):
+ print "W: %s missing." % (filename);
+ packages.close();
+ os.unlink(temp_filename);
+
+########################################
+
+def check_indices_files_exist():
+ for suite in [ "stable", "testing", "unstable" ]:
+ for component in Cnf.ValueList("Suite::%s::Components" % (suite)):
+ architectures = Cnf.ValueList("Suite::%s::Architectures" % (suite));
+ for arch in map(string.lower, architectures):
+ if arch == "source":
+ validate_sources(suite, component);
+ elif arch == "all":
+ continue;
+ else:
+ validate_packages(suite, component, arch);
+
+################################################################################
+
+def check_files_not_symlinks():
+ print "Building list of database files... ",
+ before = time.time();
+ q = projectB.query("SELECT l.path, f.filename, f.id FROM files f, location l WHERE f.location = l.id")
+ print "done. (%d seconds)" % (int(time.time()-before));
+ q_files = q.getresult();
+
+# locations = {};
+# q = projectB.query("SELECT l.path, c.name, l.id FROM location l, component c WHERE l.component = c.id");
+# for i in q.getresult():
+# path = os.path.normpath(i[0] + i[1]);
+# locations[path] = (i[0], i[2]);
+
+# q = projectB.query("BEGIN WORK");
+ for i in q_files:
+ filename = os.path.normpath(i[0] + i[1]);
+# file_id = i[2];
+ if os.access(filename, os.R_OK) == 0:
+ utils.warn("%s: doesn't exist." % (filename));
+ else:
+ if os.path.islink(filename):
+ utils.warn("%s: is a symlink." % (filename));
+ # You probably don't want to use the rest of this...
+# print "%s: is a symlink." % (filename);
+# dest = os.readlink(filename);
+# if not os.path.isabs(dest):
+# dest = os.path.normpath(os.path.join(os.path.dirname(filename), dest));
+# print "--> %s" % (dest);
+# # Determine suitable location ID
+# # [in what must be the suckiest way possible?]
+# location_id = None;
+# for path in locations.keys():
+# if dest.find(path) == 0:
+# (location, location_id) = locations[path];
+# break;
+# if not location_id:
+# utils.fubar("Can't find location for %s (%s)." % (dest, filename));
+# new_filename = dest.replace(location, "");
+# q = projectB.query("UPDATE files SET filename = '%s', location = %s WHERE id = %s" % (new_filename, location_id, file_id));
+# q = projectB.query("COMMIT WORK");
+
+################################################################################
+
+def chk_bd_process_dir (unused, dirname, filenames):
+ for name in filenames:
+ if not name.endswith(".dsc"):
+ continue;
+ filename = os.path.abspath(dirname+'/'+name);
+ dsc = utils.parse_changes(filename);
+ for field_name in [ "build-depends", "build-depends-indep" ]:
+ field = dsc.get(field_name);
+ if field:
+ try:
+ apt_pkg.ParseSrcDepends(field);
+ except:
+ print "E: [%s] %s: %s" % (filename, field_name, field);
+ pass;
+
+################################################################################
+
+def check_build_depends():
+ os.path.walk(Cnf["Dir::Root"], chk_bd_process_dir, None);
+
+################################################################################
+
+def main ():
+ global Cnf, projectB, db_files, waste, excluded;
+
+ Cnf = utils.get_conf();
+ Arguments = [('h',"help","Tea::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Tea::Options::%s" % (i)):
+ Cnf["Tea::Options::%s" % (i)] = "";
+
+ args = apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Tea::Options")
+ if Options["Help"]:
+ usage();
+
+ if len(args) < 1:
+ utils.warn("tea requires at least one argument");
+ usage(1);
+ elif len(args) > 1:
+ utils.warn("tea accepts only one argument");
+ usage(1);
+ mode = args[0].lower();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ if mode == "md5sums":
+ check_md5sums();
+ elif mode == "files":
+ check_files();
+ elif mode == "dsc-syntax":
+ check_dscs();
+ elif mode == "missing-overrides":
+ check_override();
+ elif mode == "source-in-one-dir":
+ check_source_in_one_dir();
+ elif mode == "timestamps":
+ check_timestamps();
+ elif mode == "tar-gz-in-dsc":
+ check_missing_tar_gz_in_dsc();
+ elif mode == "validate-indices":
+ check_indices_files_exist();
+ elif mode == "files-not-symlinks":
+ check_files_not_symlinks();
+ elif mode == "validate-builddeps":
+ check_build_depends();
+ else:
+ utils.warn("unknown mode '%s'" % (mode));
+ usage(1);
+
+################################################################################
+
+if __name__ == '__main__':
+ main();
+
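The gunzip-to-temp-file dance in validate_sources()/validate_packages() exists only because apt_pkg.ParseTagFile needs a real file handle. As a minimal, self-contained sketch (in modern Python 3, with a hypothetical `parse_index` helper standing in for apt's tag-file parser), the same stanza walk looks like this:

```python
import gzip

def parse_index(path):
    """Walk a gzip-compressed Packages/Sources-style index stanza by stanza.

    Sketch of the pattern validate_sources()/validate_packages() implement
    above: the old code shells out to gunzip into a temp file because
    apt_pkg.ParseTagFile needs a real file handle; gzip.open() in text mode
    sidesteps that here. Field handling is deliberately simplified.
    """
    stanzas, fields, key = [], {}, None
    with gzip.open(path, "rt") as index:
        for raw in index:
            line = raw.rstrip("\n")
            if not line:                    # blank line terminates a stanza
                if fields:
                    stanzas.append(fields)
                fields, key = {}, None
            elif line[0] in " \t" and key:  # continuation line (e.g. Files:)
                fields[key] += "\n" + line.strip()
            else:                           # "Field: value"
                key, _, value = line.partition(":")
                fields[key] = value.strip()
    if fields:                              # last stanza may lack a blank line
        stanzas.append(fields)
    return stanzas
```

Each stanza's "Directory" and "Files" fields are what drive the existence checks and pool symlinking in validate_sources() above.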
--- /dev/null
+#!/usr/bin/env python
+
+# Cruft checker and hole filler for overrides
+# Copyright (C) 2000, 2001, 2002, 2004 James Troup <james@nocrew.org>
+# Copyright (C) 2005 Jeroen van Wolffelaar <jeroen@wolffelaar.nl>
+# $Id: cindy,v 1.14 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+######################################################################
+# NB: cindy is not a good idea with New Incoming as she doesn't take #
+# into account accepted. You can minimize the impact of this by #
+# running her immediately after kelly but that's still racy because #
+# lisa doesn't lock with kelly. A better long term fix is the evil #
+# plan for accepted to be in the DB. #
+######################################################################
+
+# cindy should now work fine when run from cron.daily, for example just
+# before denise (after kelly and jenna). At that point, queue/accepted should
+# be empty and everything installed. Cindy now also takes into account
+# suites sharing overrides.
+
+# TODO:
+# * Only update out-of-sync overrides when corresponding versions are equal to
+# some degree
+# * consistency checks like:
+# - section=debian-installer only for udeb and dsc
+# - priority=source iff dsc
+# - (suite, package, 'dsc') is unique,
+# - just as (suite, package, (u)deb) (yes, across components!)
+# - sections match their component (each component has an own set of sections,
+# could probably be reduced...)
+
+################################################################################
+
+import pg, sys, os;
+import utils, db_access, logging;
+import apt_pkg;
+
+################################################################################
+
+Options = None;
+projectB = None;
+Logger = None
+sections = {}
+priorities = {}
+blacklist = {}
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: cindy
+Check for cruft in overrides.
+
+ -n, --no-action don't do anything
+ -h, --help show this help and exit"""
+
+ sys.exit(exit_code)
+
+################################################################################
+
+def gen_blacklist(dir):
+ for entry in os.listdir(dir):
+ entry = entry.split('_')[0]
+ blacklist[entry] = 1
+
+def process(osuite, affected_suites, originosuite, component, type):
+ global Logger, Options, projectB, sections, priorities;
+
+ osuite_id = db_access.get_suite_id(osuite);
+ if osuite_id == -1:
+ utils.fubar("Suite '%s' not recognised." % (osuite));
+ originosuite_id = None
+ if originosuite:
+ originosuite_id = db_access.get_suite_id(originosuite);
+ if originosuite_id == -1:
+ utils.fubar("Suite '%s' not recognised." % (originosuite));
+
+ component_id = db_access.get_component_id(component);
+ if component_id == -1:
+ utils.fubar("Component '%s' not recognised." % (component));
+
+ type_id = db_access.get_override_type_id(type);
+ if type_id == -1:
+ utils.fubar("Type '%s' not recognised. (Valid types are deb, udeb and dsc)" % (type));
+ dsc_type_id = db_access.get_override_type_id("dsc");
+ deb_type_id = db_access.get_override_type_id("deb")
+
+ source_priority_id = db_access.get_priority_id("source")
+
+ if type == "deb" or type == "udeb":
+ packages = {};
+ q = projectB.query("""
+SELECT b.package FROM binaries b, bin_associations ba, files f,
+ location l, component c
+ WHERE b.type = '%s' AND b.id = ba.bin AND f.id = b.file AND l.id = f.location
+ AND c.id = l.component AND ba.suite IN (%s) AND c.id = %s
+""" % (type, ",".join(map(str,affected_suites)), component_id));
+ for i in q.getresult():
+ packages[i[0]] = 0;
+
+ src_packages = {};
+ q = projectB.query("""
+SELECT s.source FROM source s, src_associations sa, files f, location l,
+ component c
+ WHERE s.id = sa.source AND f.id = s.file AND l.id = f.location
+ AND c.id = l.component AND sa.suite IN (%s) AND c.id = %s
+""" % (",".join(map(str,affected_suites)), component_id));
+ for i in q.getresult():
+ src_packages[i[0]] = 0;
+
+ # -----------
+ # Drop unused overrides
+
+ q = projectB.query("SELECT package, priority, section, maintainer FROM override WHERE suite = %s AND component = %s AND type = %s" % (osuite_id, component_id, type_id));
+ projectB.query("BEGIN WORK");
+ if type == "dsc":
+ for i in q.getresult():
+ package = i[0];
+ if src_packages.has_key(package):
+ src_packages[package] = 1
+ else:
+ if blacklist.has_key(package):
+ utils.warn("%s in incoming, not touching" % package)
+ continue
+ Logger.log(["removing unused override", osuite, component,
+ type, package, priorities[i[1]], sections[i[2]], i[3]])
+ if not Options["No-Action"]:
+ projectB.query("""DELETE FROM override WHERE package =
+ '%s' AND suite = %s AND component = %s AND type =
+ %s""" % (package, osuite_id, component_id, type_id));
+ # create source overrides based on binary overrides, since source
+ # overrides do not always get created
+ q = projectB.query(""" SELECT package, priority, section,
+ maintainer FROM override WHERE suite = %s AND component = %s
+ """ % (osuite_id, component_id));
+ for i in q.getresult():
+ package = i[0]
+ if not src_packages.has_key(package) or src_packages[package]:
+ continue
+ src_packages[package] = 1
+
+ Logger.log(["add missing override", osuite, component,
+ type, package, "source", sections[i[2]], i[3]])
+ if not Options["No-Action"]:
+ projectB.query("""INSERT INTO override (package, suite,
+ component, priority, section, type, maintainer) VALUES
+ ('%s', %s, %s, %s, %s, %s, '%s')""" % (package,
+ osuite_id, component_id, source_priority_id, i[2],
+ dsc_type_id, i[3]));
+ # Check whether originosuite has an override for us we can
+ # copy
+ if originosuite:
+ q = projectB.query("""SELECT origin.package, origin.priority,
+ origin.section, origin.maintainer, target.priority,
+ target.section, target.maintainer FROM override origin LEFT
+ JOIN override target ON (origin.package = target.package AND
+ target.suite=%s AND origin.component = target.component AND origin.type =
+ target.type) WHERE origin.suite = %s AND origin.component = %s
+ AND origin.type = %s""" %
+ (osuite_id, originosuite_id, component_id, type_id));
+ for i in q.getresult():
+ package = i[0]
+ if not src_packages.has_key(package) or src_packages[package]:
+ if i[4] and (i[1] != i[4] or i[2] != i[5] or i[3] != i[6]):
+ Logger.log(["syncing override", osuite, component,
+ type, package, "source", sections[i[5]], i[6], "source", sections[i[2]], i[3]])
+ if not Options["No-Action"]:
+ projectB.query("""UPDATE override SET section=%s,
+ maintainer='%s' WHERE package='%s' AND
+ suite=%s AND component=%s AND type=%s""" %
+ (i[2], i[3], package, osuite_id, component_id,
+ dsc_type_id));
+ continue
+ # we can copy
+ src_packages[package] = 1
+ Logger.log(["copying missing override", osuite, component,
+ type, package, "source", sections[i[2]], i[3]])
+ if not Options["No-Action"]:
+ projectB.query("""INSERT INTO override (package, suite,
+ component, priority, section, type, maintainer) VALUES
+ ('%s', %s, %s, %s, %s, %s, '%s')""" % (package,
+ osuite_id, component_id, source_priority_id, i[2],
+ dsc_type_id, i[3]));
+
+ for package, hasoverride in src_packages.items():
+ if not hasoverride:
+ utils.warn("%s has no override!" % package)
+
+ else: # binary override
+ for i in q.getresult():
+ package = i[0];
+ if packages.has_key(package):
+ packages[package] = 1
+ else:
+ if blacklist.has_key(package):
+ utils.warn("%s in incoming, not touching" % package)
+ continue
+ Logger.log(["removing unused override", osuite, component,
+ type, package, priorities[i[1]], sections[i[2]], i[3]])
+ if not Options["No-Action"]:
+ projectB.query("""DELETE FROM override WHERE package =
+ '%s' AND suite = %s AND component = %s AND type =
+ %s""" % (package, osuite_id, component_id, type_id));
+
+ # Check whether originosuite has an override for us we can
+ # copy
+ if originosuite:
+ q = projectB.query("""SELECT origin.package, origin.priority,
+ origin.section, origin.maintainer, target.priority,
+ target.section, target.maintainer FROM override origin LEFT
+ JOIN override target ON (origin.package = target.package AND
+ target.suite=%s AND origin.component = target.component AND
+ origin.type = target.type) WHERE origin.suite = %s AND
+ origin.component = %s AND origin.type = %s""" % (osuite_id,
+ originosuite_id, component_id, type_id));
+ for i in q.getresult():
+ package = i[0]
+ if not packages.has_key(package) or packages[package]:
+ if i[4] and (i[1] != i[4] or i[2] != i[5] or i[3] != i[6]):
+ Logger.log(["syncing override", osuite, component,
+ type, package, priorities[i[4]], sections[i[5]],
+ i[6], priorities[i[1]], sections[i[2]], i[3]])
+ if not Options["No-Action"]:
+ projectB.query("""UPDATE override SET priority=%s, section=%s,
+ maintainer='%s' WHERE package='%s' AND
+ suite=%s AND component=%s AND type=%s""" %
+ (i[1], i[2], i[3], package, osuite_id,
+ component_id, type_id));
+ continue
+ # we can copy
+ packages[package] = 1
+ Logger.log(["copying missing override", osuite, component,
+ type, package, priorities[i[1]], sections[i[2]], i[3]])
+ if not Options["No-Action"]:
+ projectB.query("""INSERT INTO override (package, suite,
+ component, priority, section, type, maintainer) VALUES
+ ('%s', %s, %s, %s, %s, %s, '%s')""" % (package, osuite_id, component_id, i[1], i[2], type_id, i[3]));
+
+ for package, hasoverride in packages.items():
+ if not hasoverride:
+ utils.warn("%s has no override!" % package)
+
+ projectB.query("COMMIT WORK");
+ sys.stdout.flush()
+
+
+################################################################################
+
+def main ():
+ global Logger, Options, projectB, sections, priorities;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('h',"help","Cindy::Options::Help"),
+ ('n',"no-action", "Cindy::Options::No-Action")];
+ for i in [ "help", "no-action" ]:
+ if not Cnf.has_key("Cindy::Options::%s" % (i)):
+ Cnf["Cindy::Options::%s" % (i)] = "";
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+ Options = Cnf.SubTree("Cindy::Options")
+
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ # init sections, priorities:
+ q = projectB.query("SELECT id, section FROM section")
+ for i in q.getresult():
+ sections[i[0]] = i[1]
+ q = projectB.query("SELECT id, priority FROM priority")
+ for i in q.getresult():
+ priorities[i[0]] = i[1]
+
+ if not Options["No-Action"]:
+ Logger = logging.Logger(Cnf, "cindy")
+ else:
+ Logger = logging.Logger(Cnf, "cindy", 1)
+
+ gen_blacklist(Cnf["Dir::Queue::Accepted"])
+
+ for osuite in Cnf.SubTree("Cindy::OverrideSuites").List():
+ if "1" != Cnf["Cindy::OverrideSuites::%s::Process" % osuite]:
+ continue
+
+ osuite = osuite.lower()
+
+ originosuite = None
+ originremark = ""
+ try:
+ originosuite = Cnf["Cindy::OverrideSuites::%s::OriginSuite" % osuite];
+ originosuite = originosuite.lower()
+ originremark = " taking missing from %s" % originosuite
+ except KeyError:
+ pass
+
+ print "Processing %s%s..." % (osuite, originremark);
+ # Get a list of all suites that use the override file of 'osuite'
+ ocodename = Cnf["Suite::%s::codename" % osuite]
+ suites = []
+ for suite in Cnf.SubTree("Suite").List():
+ if ocodename == Cnf["Suite::%s::OverrideCodeName" % suite]:
+ suites.append(suite)
+
+ q = projectB.query("SELECT id FROM suite WHERE suite_name in (%s)" \
+ % ", ".join(map(repr, suites)).lower())
+
+ suiteids = []
+ for i in q.getresult():
+ suiteids.append(i[0])
+
+ if len(suiteids) != len(suites) or len(suiteids) < 1:
+ utils.fubar("Couldn't find IDs of all suites: %s" % suites)
+
+ for component in Cnf.SubTree("Component").List():
+ if component == "mixed":
+ continue; # Ick
+ # It is crucial for the dsc override creation based on binary
+ # overrides that 'dsc' goes first
+ otypes = Cnf.ValueList("OverrideType")
+ otypes.remove("dsc")
+ otypes = ["dsc"] + otypes
+ for otype in otypes:
+ print "Processing %s [%s - %s] using %s..." \
+ % (osuite, component, otype, suites);
+ sys.stdout.flush()
+ process(osuite, suiteids, originosuite, component, otype);
+
+ Logger.close()
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+
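Stripped of SQL, the per-(suite, component, type) pass that process() performs above reduces to a small set reconciliation: drop overrides for packages no longer present, keep the rest, and borrow missing entries from the OriginSuite. A pure-Python sketch (all names hypothetical; the real code issues DELETE/INSERT statements against the override table row by row):

```python
def reconcile_overrides(present, overrides, origin=None):
    """Sketch of one process() pass in cindy (names hypothetical).

    present:   set of package names actually in the suite/component
    overrides: dict package -> (priority, section) for the target suite
    origin:    optional override dict of the OriginSuite to copy from
    Returns (kept, removed, copied).
    """
    # Overrides whose package vanished from the suite are cruft.
    removed = sorted(p for p in overrides if p not in present)
    kept = {p: v for p, v in overrides.items() if p in present}
    # Packages present but lacking an override may inherit one from origin.
    copied = {}
    if origin:
        for pkg, entry in origin.items():
            if pkg in present and pkg not in kept:
                copied[pkg] = entry
    return kept, removed, copied
```

Anything still in `present` but in neither `kept` nor `copied` afterwards corresponds to cindy's "%s has no override!" warning.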
--- /dev/null
+#!/usr/bin/env python
+
+# Dependency check proposed-updates
+# Copyright (C) 2001, 2002, 2004 James Troup <james@nocrew.org>
+# $Id: jeri,v 1.15 2005-02-08 22:43:45 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# | > amd64 is more mature than even some released architectures
+# |
+# | This might be true of the architecture, unfortunately it seems to be the
+# | exact opposite for most of the people involved with it.
+#
+# <1089213290.24029.6.camel@descent.netsplit.com>
+
+################################################################################
+
+import pg, sys, os;
+import utils, db_access
+import apt_pkg, apt_inst;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+Options = None;
+stable = {};
+stable_virtual = {};
+architectures = None;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: jeri [OPTION] <CHANGES FILE | DEB FILE | ADMIN FILE>[...]
+(Very) Basic dependency checking for proposed-updates.
+
+ -q, --quiet be quieter about what is being done
+ -v, --verbose be more verbose about what is being done
+ -h, --help show this help and exit
+
+Need either changes files, deb files or an admin.txt file with a '.joey' suffix."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def d_test (dict, key, positive, negative):
+ if not dict:
+ return negative;
+ if dict.has_key(key):
+ return positive;
+ else:
+ return negative;
+
+################################################################################
+
+def check_dep (depends, dep_type, check_archs, filename, files):
+ pkg_unsat = 0;
+ for arch in check_archs:
+ for parsed_dep in apt_pkg.ParseDepends(depends):
+ unsat = [];
+ for atom in parsed_dep:
+ (dep, version, constraint) = atom;
+ # As a real package?
+ if stable.has_key(dep):
+ if stable[dep].has_key(arch):
+ if apt_pkg.CheckDep(stable[dep][arch], constraint, version):
+ if Options["debug"]:
+ print "Found %s as a real package." % (utils.pp_deps(parsed_dep));
+ unsat = 0;
+ break;
+ # As a virtual?
+ if stable_virtual.has_key(dep):
+ if stable_virtual[dep].has_key(arch):
+ if not constraint and not version:
+ if Options["debug"]:
+ print "Found %s as a virtual package." % (utils.pp_deps(parsed_dep));
+ unsat = 0;
+ break;
+ # As part of the same .changes?
+ epochless_version = utils.re_no_epoch.sub('', version)
+ dep_filename = "%s_%s_%s.deb" % (dep, epochless_version, arch);
+ if files.has_key(dep_filename):
+ if Options["debug"]:
+ print "Found %s in the same upload." % (utils.pp_deps(parsed_dep));
+ unsat = 0;
+ break;
+ # Not found...
+ # [FIXME: must be a better way ... ]
+ error = "%s not found. [Real: " % (utils.pp_deps(parsed_dep))
+ if stable.has_key(dep):
+ if stable[dep].has_key(arch):
+ error += "%s:%s:%s" % (dep, arch, stable[dep][arch]);
+ else:
+ error += "%s:-:-" % (dep);
+ else:
+ error += "-:-:-";
+ error += ", Virtual: ";
+ if stable_virtual.has_key(dep):
+ if stable_virtual[dep].has_key(arch):
+ error += "%s:%s" % (dep, arch);
+ else:
+ error += "%s:-" % (dep);
+ else:
+ error += "-:-";
+ error += ", Upload: ";
+ if files.has_key(dep_filename):
+ error += "yes";
+ else:
+ error += "no";
+ error += "]";
+ unsat.append(error);
+
+ if unsat:
+ sys.stderr.write("MWAAP! %s: '%s' %s cannot be satisfied:\n" % (filename, utils.pp_deps(parsed_dep), dep_type));
+ for error in unsat:
+ sys.stderr.write(" %s\n" % (error));
+ pkg_unsat = 1;
+
+ return pkg_unsat;
+
+def check_package(filename, files):
+ try:
+ control = apt_pkg.ParseSection(apt_inst.debExtractControl(utils.open_file(filename)));
+ except:
+ utils.warn("%s: debExtractControl() raised %s." % (filename, sys.exc_type));
+ return 1;
+ Depends = control.Find("Depends");
+ Pre_Depends = control.Find("Pre-Depends");
+ #Recommends = control.Find("Recommends");
+ pkg_arch = control.Find("Architecture");
+ base_file = os.path.basename(filename);
+ if pkg_arch == "all":
+ check_archs = architectures;
+ else:
+ check_archs = [pkg_arch];
+
+ pkg_unsat = 0;
+ if Pre_Depends:
+ pkg_unsat += check_dep(Pre_Depends, "pre-dependency", check_archs, base_file, files);
+
+ if Depends:
+ pkg_unsat += check_dep(Depends, "dependency", check_archs, base_file, files);
+ #if Recommends:
+ #pkg_unsat += check_dep(Recommends, "recommendation", check_archs, base_file, files);
+
+ return pkg_unsat;
+
+################################################################################
+
+def pass_fail (filename, result):
+ if not Options["quiet"]:
+ print "%s:" % (os.path.basename(filename)),
+ if result:
+ print "FAIL";
+ else:
+ print "ok";
+
+################################################################################
+
+def check_changes (filename):
+ try:
+ changes = utils.parse_changes(filename);
+ files = utils.build_file_list(changes);
+ except:
+ utils.warn("Error parsing changes file '%s'" % (filename));
+ return;
+
+ result = 0;
+
+ # Move to the pool directory
+ cwd = os.getcwd();
+ file = files.keys()[0];
+ pool_dir = Cnf["Dir::Pool"] + '/' + utils.poolify(changes["source"], files[file]["component"]);
+ os.chdir(pool_dir);
+
+ changes_result = 0;
+ for file in files.keys():
+ if file.endswith(".deb"):
+ result = check_package(file, files);
+ if Options["verbose"]:
+ pass_fail(file, result);
+ changes_result += result;
+
+ pass_fail (filename, changes_result);
+
+ # Move back
+ os.chdir(cwd);
+
+################################################################################
+
+def check_deb (filename):
+ result = check_package(filename, {});
+ pass_fail(filename, result);
+
+
+################################################################################
+
+def check_joey (filename):
+ file = utils.open_file(filename);
+
+ cwd = os.getcwd();
+ os.chdir("%s/dists/proposed-updates" % (Cnf["Dir::Root"]));
+
+ for line in file.readlines():
+ line = line.rstrip();
+ if line.find('install') != -1:
+ split_line = line.split();
+ if len(split_line) != 2:
+ utils.fubar("Parse error (not exactly 2 elements): %s" % (line));
+ install_type = split_line[0];
+ if install_type not in [ "install", "install-u", "sync-install" ]:
+ utils.fubar("Unknown install type ('%s') from: %s" % (install_type, line));
+ changes_filename = split_line[1]
+ if Options["debug"]:
+ print "Processing %s..." % (changes_filename);
+ check_changes(changes_filename);
+ file.close();
+
+ os.chdir(cwd);
+
+################################################################################
+
+def parse_packages():
+ global stable, stable_virtual, architectures;
+
+ # Parse the Packages files (since it's a sub-second operation on auric)
+ suite = "stable";
+ stable = {};
+ components = Cnf.ValueList("Suite::%s::Components" % (suite));
+ architectures = filter(utils.real_arch, Cnf.ValueList("Suite::%s::Architectures" % (suite)));
+ for component in components:
+ for architecture in architectures:
+ filename = "%s/dists/%s/%s/binary-%s/Packages" % (Cnf["Dir::Root"], suite, component, architecture);
+ packages = utils.open_file(filename, 'r');
+ Packages = apt_pkg.ParseTagFile(packages);
+ while Packages.Step():
+ package = Packages.Section.Find('Package');
+ version = Packages.Section.Find('Version');
+ provides = Packages.Section.Find('Provides');
+ if not stable.has_key(package):
+ stable[package] = {};
+ stable[package][architecture] = version;
+ if provides:
+ for virtual_pkg in provides.split(","):
+ virtual_pkg = virtual_pkg.strip();
+ if not stable_virtual.has_key(virtual_pkg):
+ stable_virtual[virtual_pkg] = {};
+ stable_virtual[virtual_pkg][architecture] = "NA";
+ packages.close()
+
+################################################################################
+
+def main ():
+ global Cnf, projectB, Options;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('d', "debug", "Jeri::Options::Debug"),
+ ('q',"quiet","Jeri::Options::Quiet"),
+ ('v',"verbose","Jeri::Options::Verbose"),
+ ('h',"help","Jeri::Options::Help")];
+ for i in [ "debug", "quiet", "verbose", "help" ]:
+ if not Cnf.has_key("Jeri::Options::%s" % (i)):
+ Cnf["Jeri::Options::%s" % (i)] = "";
+
+ arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Jeri::Options")
+
+ if Options["Help"]:
+ usage(0);
+ if not arguments:
+ utils.fubar("need at least one package name as an argument.");
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ print "Parsing packages files...",
+ parse_packages();
+ print "done.";
+
+ for file in arguments:
+ if file.endswith(".changes"):
+ check_changes(file);
+ elif file.endswith(".deb"):
+ check_deb(file);
+ elif file.endswith(".joey"):
+ check_joey(file);
+ else:
+ utils.fubar("Unrecognised file type: '%s'." % (file));
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
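The heart of jeri's check_dep() above is a per-OR-group test: one group of alternatives is satisfied if any alternative exists as a real package with an acceptable version, as an (unversioned) virtual package, or inside the same upload. A toy Python 3 sketch of that logic (the version compare here is dotted-integer only, nothing like real Debian version ordering, which the original defers to apt_pkg.CheckDep):

```python
def _ver(v):
    # Toy version parse: dotted integers only -- NOT Debian version ordering.
    return tuple(int(x) for x in v.split("."))

def _satisfies(have, op, want):
    if op == "=":
        return _ver(have) == _ver(want)
    if op == ">=":
        return _ver(have) >= _ver(want)
    if op == "<=":
        return _ver(have) <= _ver(want)
    return True  # unversioned dependency

def or_group_satisfied(alternatives, real, virtual, upload):
    """One OR-group of a Depends line, as check_dep() walks it above.

    alternatives: [(name, version, op), ...] as apt's parser yields them
    real:         package -> version available in stable
    virtual:      set of names some stable package Provides
    upload:       set of "name_version" keys from the same .changes
    """
    for name, version, op in alternatives:
        if name in real and _satisfies(real[name], op, version):
            return True   # satisfied as a real package
        if name in virtual and not op:
            return True   # satisfied as a virtual package (unversioned only)
        if "%s_%s" % (name, version) in upload:
            return True   # satisfied by a binary in the same upload
    return False
```

A package then fails (jeri's "MWAAP!" case) only when every alternative in the group falls through all three checks on some architecture.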
--- /dev/null
+#!/usr/bin/env python
+
+# Remove obsolete .changes files from proposed-updates
+# Copyright (C) 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: halle,v 1.13 2005-12-17 10:57:03 rmurray Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, pg, re, sys;
+import utils, db_access;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+Options = None;
+pu = {};
+
+ re_isdeb = re.compile (r"^(.+)_(.+?)_(.+?)\.u?deb$");
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: halle [OPTION] <CHANGES FILE | ADMIN FILE>[...]
+Remove obsolete changes files from proposed-updates.
+
+ -v, --verbose be more verbose about what is being done
+ -h, --help show this help and exit
+
+Need either changes files or an admin.txt file with a '.joey' suffix."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def check_changes (filename):
+ try:
+ changes = utils.parse_changes(filename);
+ files = utils.build_file_list(changes);
+ except:
+ utils.warn("Couldn't read changes file '%s'." % (filename));
+ return;
+ num_files = len(files.keys());
+ for file in files.keys():
+ if utils.re_isadeb.match(file):
+ m = re_isdeb.match(file);
+ pkg = m.group(1);
+ version = m.group(2);
+ arch = m.group(3);
+ if Options["debug"]:
+ print "BINARY: %s ==> %s_%s_%s" % (file, pkg, version, arch);
+ else:
+ m = utils.re_issource.match(file)
+ if m:
+ pkg = m.group(1);
+ version = m.group(2);
+ type = m.group(3);
+ if type != "dsc":
+ del files[file];
+ num_files -= 1;
+ continue;
+ arch = "source";
+ if Options["debug"]:
+ print "SOURCE: %s ==> %s_%s_%s" % (file, pkg, version, arch);
+ else:
+ utils.fubar("unknown type, fix me");
+ if not pu.has_key(pkg):
+ # FIXME
+ utils.warn("%s doesn't seem to exist in p-u?? (from %s [%s])" % (pkg, file, filename));
+ continue;
+ if not pu[pkg].has_key(arch):
+ # FIXME
+ utils.warn("%s doesn't seem to exist for %s in p-u?? (from %s [%s])" % (pkg, arch, file, filename));
+ continue;
+ pu_version = utils.re_no_epoch.sub('', pu[pkg][arch]);
+ if pu_version == version:
+ if Options["verbose"]:
+ print "%s: ok" % (file);
+ else:
+ if Options["verbose"]:
+ print "%s: superseded, removing. [%s]" % (file, pu_version);
+ del files[file];
+
+ new_num_files = len(files.keys());
+ if new_num_files == 0:
+ print "%s: no files left, superseded by %s" % (filename, pu_version);
+ dest = Cnf["Dir::Morgue"] + "/misc/";
+ utils.move(filename, dest);
+ elif new_num_files < num_files:
+ print "%s: lost files, MWAAP." % (filename);
+ else:
+ if Options["verbose"]:
+ print "%s: ok" % (filename);
+
+################################################################################
+
+def check_joey (filename):
+ file = utils.open_file(filename);
+
+ cwd = os.getcwd();
+ os.chdir("%s/dists/proposed-updates" % (Cnf["Dir::Root"]));
+
+ for line in file.readlines():
+ line = line.rstrip();
+ if line.find('install') != -1:
+ split_line = line.split();
+ if len(split_line) != 2:
+ utils.fubar("Parse error (not exactly 2 elements): %s" % (line));
+ install_type = split_line[0];
+ if install_type not in [ "install", "install-u", "sync-install" ]:
+ utils.fubar("Unknown install type ('%s') from: %s" % (install_type, line));
+ changes_filename = split_line[1]
+ if Options["debug"]:
+ print "Processing %s..." % (changes_filename);
+ check_changes(changes_filename);
+
+ os.chdir(cwd);
+
+################################################################################
+
+def init_pu ():
+ global pu;
+
+ q = projectB.query("""
+SELECT b.package, b.version, a.arch_string
+ FROM bin_associations ba, binaries b, suite su, architecture a
+ WHERE b.id = ba.bin AND ba.suite = su.id
+ AND su.suite_name = 'proposed-updates' AND a.id = b.architecture
+UNION SELECT s.source, s.version, 'source'
+ FROM src_associations sa, source s, suite su
+ WHERE s.id = sa.source AND sa.suite = su.id
+ AND su.suite_name = 'proposed-updates'
+ORDER BY package, version, arch_string;
+""");
+ ql = q.getresult();
+ for i in ql:
+ pkg = i[0];
+ version = i[1];
+ arch = i[2];
+ if not pu.has_key(pkg):
+ pu[pkg] = {};
+ pu[pkg][arch] = version;
+
+def main ():
+ global Cnf, projectB, Options;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('d', "debug", "Halle::Options::Debug"),
+ ('v',"verbose","Halle::Options::Verbose"),
+ ('h',"help","Halle::Options::Help")];
+ for i in [ "debug", "verbose", "help" ]:
+ if not Cnf.has_key("Halle::Options::%s" % (i)):
+ Cnf["Halle::Options::%s" % (i)] = "";
+
+ arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Halle::Options")
+
+ if Options["Help"]:
+ usage(0);
+ if not arguments:
+ utils.fubar("need at least one changes or .joey file as an argument.");
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ init_pu();
+
+ for file in arguments:
+ if file.endswith(".changes"):
+ check_changes(file);
+ elif file.endswith(".joey"):
+ check_joey(file);
+ else:
+ utils.fubar("Unrecognised file type: '%s'." % (file));
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Clean incoming of old unused files
+# Copyright (C) 2000, 2001, 2002 James Troup <james@nocrew.org>
+# $Id: shania,v 1.18 2005-03-06 21:51:51 rmurray Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <aj> Bdale, a ham-er, and the leader,
+# <aj> Willy, a GCC maintainer,
+# <aj> Lamont-work, 'cause he's the top uploader....
+# <aj> Penguin Puff' save the day!
+# <aj> Porting code, trying to build the world,
+# <aj> Here they come just in time...
+# <aj> The Penguin Puff' Guys!
+# <aj> [repeat]
+# <aj> Penguin Puff'!
+# <aj> willy: btw, if you don't maintain gcc you need to start, since
+# the lyrics fit really well that way
+
+################################################################################
+
+import os, stat, sys, time;
+import utils;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+Options = None;
+del_dir = None;
+delete_date = None;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: shania [OPTIONS]
+Clean out incoming directories.
+
+ -d, --days=DAYS remove anything older than DAYS old
+ -i, --incoming=INCOMING the incoming directory to clean
+ -n, --no-action don't do anything
+ -v, --verbose explain what is being done
+ -h, --help show this help and exit"""
+
+ sys.exit(exit_code)
+
+################################################################################
+
+def init ():
+ global delete_date, del_dir;
+
+ delete_date = int(time.time())-(int(Options["Days"])*86400);
+
+ # Ensure the directory we move files into exists
+ if not Options["No-Action"]:
+ date = time.strftime("%Y-%m-%d");
+ del_dir = Cnf["Dir::Morgue"] + '/' + Cnf["Shania::MorgueSubDir"] + '/' + date;
+ if not os.path.exists(del_dir):
+ os.makedirs(del_dir, 02775);
+ if not os.path.isdir(del_dir):
+ utils.fubar("%s must be a directory." % (del_dir));
+
+ # Move to the directory to clean
+ incoming = Options["Incoming"];
+ if incoming == "":
+ incoming = Cnf["Dir::Queue::Unchecked"];
+ os.chdir(incoming);
+
+ # Move a file to the morgue
+def remove (file):
+ if os.access(file, os.R_OK):
+ dest_filename = del_dir + '/' + os.path.basename(file);
+ # If the destination file exists, try to find another filename to use
+ if os.path.exists(dest_filename):
+ dest_filename = utils.find_next_free(dest_filename, 10);
+ utils.move(file, dest_filename, 0660);
+ else:
+ utils.warn("skipping '%s', permission denied." % (os.path.basename(file)));
+
+# Removes any old files.
+# [Used for Incoming/REJECT]
+#
+def flush_old ():
+ for file in os.listdir('.'):
+ if os.path.isfile(file):
+ if os.stat(file)[stat.ST_MTIME] < delete_date:
+ if Options["No-Action"]:
+ print "I: Would delete '%s'." % (os.path.basename(file));
+ else:
+ if Options["Verbose"]:
+ print "Removing '%s' (to '%s')." % (os.path.basename(file), del_dir);
+ remove(file);
+ else:
+ if Options["Verbose"]:
+ print "Skipping, too new, '%s'." % (os.path.basename(file));
+
+# Removes any files which are old orphans (not associated with a valid .changes file).
+# [Used for Incoming]
+#
+def flush_orphans ():
+ all_files = {};
+ changes_files = [];
+
+ # Build up the list of all files in the directory
+ for i in os.listdir('.'):
+ if os.path.isfile(i):
+ all_files[i] = 1;
+ if i.endswith(".changes"):
+ changes_files.append(i);
+
+ # Process all .changes and .dsc files.
+ for changes_filename in changes_files:
+ try:
+ changes = utils.parse_changes(changes_filename);
+ files = utils.build_file_list(changes);
+ except:
+ utils.warn("error processing '%s'; skipping it. [Got %s]" % (changes_filename, sys.exc_type));
+ continue;
+
+ dsc_files = {};
+ for file in files.keys():
+ if file.endswith(".dsc"):
+ try:
+ dsc = utils.parse_changes(file);
+ dsc_files = utils.build_file_list(dsc, is_a_dsc=1);
+ except:
+ utils.warn("error processing '%s'; skipping it. [Got %s]" % (file, sys.exc_type));
+ continue;
+
+ # Ensure all the files we've seen aren't deleted
+ keys = [];
+ for i in (files.keys(), dsc_files.keys(), [changes_filename]):
+ keys.extend(i);
+ for key in keys:
+ if all_files.has_key(key):
+ if Options["Verbose"]:
+ print "Skipping, has parents, '%s'." % (key);
+ del all_files[key];
+
+ # Anything left at this stage is not referenced by a .changes (or
+ # a .dsc) and should be deleted if old enough.
+ for file in all_files.keys():
+ if os.stat(file)[stat.ST_MTIME] < delete_date:
+ if Options["No-Action"]:
+ print "I: Would delete '%s'." % (os.path.basename(file));
+ else:
+ if Options["Verbose"]:
+ print "Removing '%s' (to '%s')." % (os.path.basename(file), del_dir);
+ remove(file);
+ else:
+ if Options["Verbose"]:
+ print "Skipping, too new, '%s'." % (os.path.basename(file));
+
+################################################################################
+
+def main ():
+ global Cnf, Options;
+
+ Cnf = utils.get_conf()
+
+ for i in ["Help", "Incoming", "No-Action", "Verbose" ]:
+ if not Cnf.has_key("Shania::Options::%s" % (i)):
+ Cnf["Shania::Options::%s" % (i)] = "";
+ if not Cnf.has_key("Shania::Options::Days"):
+ Cnf["Shania::Options::Days"] = "14";
+
+ Arguments = [('h',"help","Shania::Options::Help"),
+ ('d',"days","Shania::Options::Days", "IntLevel"),
+ ('i',"incoming","Shania::Options::Incoming", "HasArg"),
+ ('n',"no-action","Shania::Options::No-Action"),
+ ('v',"verbose","Shania::Options::Verbose")];
+
+ apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Shania::Options")
+
+ if Options["Help"]:
+ usage();
+
+ init();
+
+ if Options["Verbose"]:
+ print "Processing incoming..."
+ flush_orphans();
+
+ reject = Cnf["Dir::Queue::Reject"]
+ if os.path.exists(reject) and os.path.isdir(reject):
+ if Options["Verbose"]:
+ print "Processing incoming/REJECT..."
+ os.chdir(reject);
+ flush_old();
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main();
--- /dev/null
+#!/usr/bin/env python
+
+# rhona, cleans up unassociated binary and source packages
+# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
+# $Id: rhona,v 1.29 2005-11-25 06:59:45 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# 07:05|<elmo> well.. *shrug*.. no, probably not.. but to fix it,
+# | we're going to have to implement reference counting
+# | through dependencies.. do we really want to go down
+# | that road?
+#
+# 07:05|<Culus> elmo: Augh! <brain jumps out of skull>
+
+################################################################################
+
+import os, pg, stat, sys, time
+import apt_pkg
+import utils
+
+################################################################################
+
+projectB = None;
+Cnf = None;
+Options = None;
+now_date = None; # mark newly "deleted" things as deleted "now"
+ delete_date = None; # delete things marked "deleted" earlier than this
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: rhona [OPTIONS]
+Clean old packages from suites.
+
+ -n, --no-action don't do anything
+ -h, --help show this help and exit"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def check_binaries():
+ global delete_date, now_date;
+
+ print "Checking for orphaned binary packages..."
+
+ # Get the list of binary packages not in a suite and mark them for
+ # deletion.
+ q = projectB.query("""
+SELECT b.file FROM binaries b, files f
+ WHERE f.last_used IS NULL AND b.file = f.id
+ AND NOT EXISTS (SELECT 1 FROM bin_associations ba WHERE ba.bin = b.id)""");
+ ql = q.getresult();
+
+ projectB.query("BEGIN WORK");
+ for i in ql:
+ file_id = i[0];
+ projectB.query("UPDATE files SET last_used = '%s' WHERE id = %s AND last_used IS NULL" % (now_date, file_id))
+ projectB.query("COMMIT WORK");
+
+ # Check for any binaries which are marked for eventual deletion
+ # but are now used again.
+ q = projectB.query("""
+SELECT b.file FROM binaries b, files f
+ WHERE f.last_used IS NOT NULL AND f.id = b.file
+ AND EXISTS (SELECT 1 FROM bin_associations ba WHERE ba.bin = b.id)""");
+ ql = q.getresult();
+
+ projectB.query("BEGIN WORK");
+ for i in ql:
+ file_id = i[0];
+ projectB.query("UPDATE files SET last_used = NULL WHERE id = %s" % (file_id));
+ projectB.query("COMMIT WORK");
+
+########################################
+
+def check_sources():
+ global delete_date, now_date;
+
+ print "Checking for orphaned source packages..."
+
+ # Get the list of source packages not in a suite and not used by
+ # any binaries.
+ q = projectB.query("""
+SELECT s.id, s.file FROM source s, files f
+ WHERE f.last_used IS NULL AND s.file = f.id
+ AND NOT EXISTS (SELECT 1 FROM src_associations sa WHERE sa.source = s.id)
+ AND NOT EXISTS (SELECT 1 FROM binaries b WHERE b.source = s.id)""");
+
+ #### XXX: this should ignore cases where the files for the binary b
+ #### have been marked for deletion (so the delay between bins go
+ #### byebye and sources go byebye is 0 instead of StayOfExecution)
+
+ ql = q.getresult();
+
+ projectB.query("BEGIN WORK");
+ for i in ql:
+ source_id = i[0];
+ dsc_file_id = i[1];
+
+ # Mark the .dsc file for deletion
+ projectB.query("UPDATE files SET last_used = '%s' WHERE id = %s AND last_used IS NULL" % (now_date, dsc_file_id))
+ # Mark all other files referenced by the .dsc too, if they're not used by anyone else
+ x = projectB.query("SELECT f.id FROM files f, dsc_files d WHERE d.source = %s AND d.file = f.id" % (source_id));
+ for j in x.getresult():
+ file_id = j[0];
+ y = projectB.query("SELECT id FROM dsc_files d WHERE d.file = %s" % (file_id));
+ if len(y.getresult()) == 1:
+ projectB.query("UPDATE files SET last_used = '%s' WHERE id = %s AND last_used IS NULL" % (now_date, file_id));
+ projectB.query("COMMIT WORK");
+
+ # Check for any sources which are marked for deletion but which
+ # are now used again.
+
+ q = projectB.query("""
+SELECT f.id FROM source s, files f, dsc_files df
+ WHERE f.last_used IS NOT NULL AND s.id = df.source AND df.file = f.id
+ AND ((EXISTS (SELECT 1 FROM src_associations sa WHERE sa.source = s.id))
+ OR (EXISTS (SELECT 1 FROM binaries b WHERE b.source = s.id)))""");
+
+ #### XXX: this should also handle deleted binaries specially (ie, not
+ #### reinstate sources because of them)
+
+ ql = q.getresult();
+ # Could be done in SQL; but left this way for hysterical raisins
+ # [and freedom to innovate don'cha know?]
+ projectB.query("BEGIN WORK");
+ for i in ql:
+ file_id = i[0];
+ projectB.query("UPDATE files SET last_used = NULL WHERE id = %s" % (file_id));
+ projectB.query("COMMIT WORK");
+
+########################################
+
+def check_files():
+ global delete_date, now_date;
+
+ # FIXME: this is evil; nothing should ever be in this state. if
+ # they are, it's a bug and the files should not be auto-deleted.
+
+ return;
+
+ print "Checking for unused files..."
+ q = projectB.query("""
+SELECT id FROM files f
+ WHERE NOT EXISTS (SELECT 1 FROM binaries b WHERE b.file = f.id)
+ AND NOT EXISTS (SELECT 1 FROM dsc_files df WHERE df.file = f.id)""");
+
+ projectB.query("BEGIN WORK");
+ for i in q.getresult():
+ file_id = i[0];
+ projectB.query("UPDATE files SET last_used = '%s' WHERE id = %s" % (now_date, file_id));
+ projectB.query("COMMIT WORK");
+
+def clean_binaries():
+ global delete_date, now_date;
+
+ # We do this here so that the binaries we remove will have their
+ # source also removed (if possible).
+
+ # XXX: why doesn't this remove the files here as well? I don't think it
+ # buys anything keeping this separate
+ print "Cleaning binaries from the DB..."
+ if not Options["No-Action"]:
+ before = time.time();
+ sys.stdout.write("[Deleting from binaries table... ");
+ sys.stderr.write("DELETE FROM binaries WHERE EXISTS (SELECT 1 FROM files WHERE binaries.file = files.id AND files.last_used <= '%s')\n" % (delete_date));
+ projectB.query("DELETE FROM binaries WHERE EXISTS (SELECT 1 FROM files WHERE binaries.file = files.id AND files.last_used <= '%s')" % (delete_date));
+ sys.stdout.write("done. (%d seconds)]\n" % (int(time.time()-before)));
+
+########################################
+
+def clean():
+ global delete_date, now_date;
+ count = 0;
+ size = 0;
+
+ print "Cleaning out packages..."
+
+ date = time.strftime("%Y-%m-%d");
+ dest = Cnf["Dir::Morgue"] + '/' + Cnf["Rhona::MorgueSubDir"] + '/' + date;
+ if not os.path.exists(dest):
+ os.mkdir(dest);
+
+ # Delete from source
+ if not Options["No-Action"]:
+ before = time.time();
+ sys.stdout.write("[Deleting from source table... ");
+ projectB.query("DELETE FROM dsc_files WHERE EXISTS (SELECT 1 FROM source s, files f, dsc_files df WHERE f.last_used <= '%s' AND s.file = f.id AND s.id = df.source AND df.id = dsc_files.id)" % (delete_date));
+ projectB.query("DELETE FROM source WHERE EXISTS (SELECT 1 FROM files WHERE source.file = files.id AND files.last_used <= '%s')" % (delete_date));
+ sys.stdout.write("done. (%d seconds)]\n" % (int(time.time()-before)));
+
+ # Delete files from the pool
+ q = projectB.query("SELECT l.path, f.filename FROM location l, files f WHERE f.last_used <= '%s' AND l.id = f.location" % (delete_date));
+ for i in q.getresult():
+ filename = i[0] + i[1];
+ if not os.path.exists(filename):
+ utils.warn("cannot find '%s'." % (filename));
+ continue;
+ if os.path.isfile(filename):
+ if os.path.islink(filename):
+ count += 1;
+ if Options["No-Action"]:
+ print "Removing symlink %s..." % (filename);
+ else:
+ os.unlink(filename);
+ else:
+ size += os.stat(filename)[stat.ST_SIZE];
+ count += 1;
+
+ dest_filename = dest + '/' + os.path.basename(filename);
+ # If the destination file exists, try to find another filename to use
+ if os.path.exists(dest_filename):
+ dest_filename = utils.find_next_free(dest_filename);
+
+ if Options["No-Action"]:
+ print "Cleaning %s -> %s ..." % (filename, dest_filename);
+ else:
+ utils.move(filename, dest_filename);
+ else:
+ utils.fubar("%s is neither symlink nor file?!" % (filename));
+
+ # Delete from the 'files' table
+ if not Options["No-Action"]:
+ before = time.time();
+ sys.stdout.write("[Deleting from files table... ");
+ projectB.query("DELETE FROM files WHERE last_used <= '%s'" % (delete_date));
+ sys.stdout.write("done. (%d seconds)]\n" % (int(time.time()-before)));
+ if count > 0:
+ sys.stderr.write("Cleaned %d files, %s.\n" % (count, utils.size_type(size)));
+
+################################################################################
+
+def clean_maintainers():
+ print "Cleaning out unused Maintainer entries..."
+
+ q = projectB.query("""
+SELECT m.id FROM maintainer m
+ WHERE NOT EXISTS (SELECT 1 FROM binaries b WHERE b.maintainer = m.id)
+ AND NOT EXISTS (SELECT 1 FROM source s WHERE s.maintainer = m.id)""");
+ ql = q.getresult();
+
+ count = 0;
+ projectB.query("BEGIN WORK");
+ for i in ql:
+ maintainer_id = i[0];
+ if not Options["No-Action"]:
+ projectB.query("DELETE FROM maintainer WHERE id = %s" % (maintainer_id));
+ count += 1;
+ projectB.query("COMMIT WORK");
+
+ if count > 0:
+ sys.stderr.write("Cleared out %d maintainer entries.\n" % (count));
+
+################################################################################
+
+def clean_fingerprints():
+ print "Cleaning out unused fingerprint entries..."
+
+ q = projectB.query("""
+SELECT f.id FROM fingerprint f
+ WHERE NOT EXISTS (SELECT 1 FROM binaries b WHERE b.sig_fpr = f.id)
+ AND NOT EXISTS (SELECT 1 FROM source s WHERE s.sig_fpr = f.id)""");
+ ql = q.getresult();
+
+ count = 0;
+ projectB.query("BEGIN WORK");
+ for i in ql:
+ fingerprint_id = i[0];
+ if not Options["No-Action"]:
+ projectB.query("DELETE FROM fingerprint WHERE id = %s" % (fingerprint_id));
+ count += 1;
+ projectB.query("COMMIT WORK");
+
+ if count > 0:
+ sys.stderr.write("Cleared out %d fingerprint entries.\n" % (count));
+
+################################################################################
+
+def clean_queue_build():
+ global now_date;
+
+ if not Cnf.ValueList("Dinstall::QueueBuildSuites") or Options["No-Action"]:
+ return;
+
+ print "Cleaning out queue build symlinks..."
+
+ our_delete_date = time.strftime("%Y-%m-%d %H:%M", time.localtime(time.time()-int(Cnf["Rhona::QueueBuildStayOfExecution"])));
+ count = 0;
+
+ q = projectB.query("SELECT filename FROM queue_build WHERE last_used <= '%s'" % (our_delete_date));
+ for i in q.getresult():
+ filename = i[0];
+ if not os.path.exists(filename):
+ utils.warn("%s (from queue_build) doesn't exist." % (filename));
+ continue;
+ if not Cnf.FindB("Dinstall::SecurityQueueBuild") and not os.path.islink(filename):
+ utils.fubar("%s (from queue_build) should be a symlink but isn't." % (filename));
+ os.unlink(filename);
+ count += 1;
+ projectB.query("DELETE FROM queue_build WHERE last_used <= '%s'" % (our_delete_date));
+
+ if count:
+ sys.stderr.write("Cleaned %d queue_build files.\n" % (count));
+
+################################################################################
+
+def main():
+ global Cnf, Options, projectB, delete_date, now_date;
+
+ Cnf = utils.get_conf()
+ for i in ["Help", "No-Action" ]:
+ if not Cnf.has_key("Rhona::Options::%s" % (i)):
+ Cnf["Rhona::Options::%s" % (i)] = "";
+
+ Arguments = [('h',"help","Rhona::Options::Help"),
+ ('n',"no-action","Rhona::Options::No-Action")];
+
+ apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Rhona::Options")
+
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+
+ now_date = time.strftime("%Y-%m-%d %H:%M");
+ delete_date = time.strftime("%Y-%m-%d %H:%M", time.localtime(time.time()-int(Cnf["Rhona::StayOfExecution"])));
+
+ check_binaries();
+ clean_binaries();
+ check_sources();
+ check_files();
+ clean();
+ clean_maintainers();
+ clean_fingerprints();
+ clean_queue_build();
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Check for fixable discrepancies between stable and unstable
+# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
+# $Id: andrea,v 1.10 2003-09-07 13:52:13 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+
+################################################################################
+
+import pg, sys;
+import utils, db_access;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: andrea
+Looks for fixable discrepancies between stable and unstable.
+
+ -h, --help show this help and exit."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def main ():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf();
+ Arguments = [('h',"help","Andrea::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Andrea::Options::%s" % (i)):
+ Cnf["Andrea::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Andrea::Options")
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ src_suite = "stable";
+ dst_suite = "unstable";
+
+ src_suite_id = db_access.get_suite_id(src_suite);
+ dst_suite_id = db_access.get_suite_id(dst_suite);
+ arch_all_id = db_access.get_architecture_id("all");
+ dsc_type_id = db_access.get_override_type_id("dsc");
+
+ for arch in Cnf.ValueList("Suite::%s::Architectures" % (src_suite)):
+ if arch == "source":
+ continue;
+
+ # Arch: all doesn't work; consider packages which go from
+ # arch: all to arch: any, e.g. debconf... needs more checks
+ # and thought later.
+
+ if arch == "all":
+ continue;
+ arch_id = db_access.get_architecture_id(arch);
+ q = projectB.query("""
+SELECT b_src.package, b_src.version, a.arch_string
+ FROM binaries b_src, bin_associations ba, override o, architecture a
+ WHERE ba.bin = b_src.id AND ba.suite = %s AND b_src.architecture = %s
+ AND a.id = b_src.architecture AND o.package = b_src.package
+ AND o.suite = %s AND o.type != %s AND NOT EXISTS
+ (SELECT 1 FROM bin_associations ba2, binaries b_dst
+ WHERE ba2.bin = b_dst.id AND b_dst.package = b_src.package
+ AND (b_dst.architecture = %s OR b_dst.architecture = %s)
+ AND ba2.suite = %s AND EXISTS
+ (SELECT 1 FROM bin_associations ba3, binaries b2
+ WHERE ba3.bin = b2.id AND ba3.suite = %s AND b2.package = b_dst.package))
+ORDER BY b_src.package;"""
+ % (src_suite_id, arch_id, dst_suite_id, dsc_type_id, arch_id, arch_all_id, dst_suite_id, dst_suite_id));
+ for i in q.getresult():
+ print " ".join(i);
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Manipulate override files
+# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
+# $Id: natalie,v 1.7 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# On 30 Nov 1998, James Troup wrote:
+#
+# > James Troup<2> <troup2@debian.org>
+# >
+# > James is a clone of James; he's going to take over the world.
+# > After he gets some sleep.
+#
+# Could you clone other things too? Sheep? Llamas? Giant mutant turnips?
+#
+# Your clone will need some help to take over the world, maybe clone up an
+# army of penguins and threaten to unleash them on the world, forcing
+# governments to sway to the new James' will!
+#
+# Yes, I can envision a day when James' duplicate decides to take a horrific
+# vengance on the James that spawned him and unleashes his fury in the form
+# of thousands upon thousands of chickens that look just like Captin Blue
+# Eye! Oh the horror.
+#
+# Now you'll have to were name tags to people can tell you apart, unless of
+# course the new clone is truely evil in which case he should be easy to
+# identify!
+#
+# Jason
+# Chicken. Black. Helicopters.
+# Be afraid.
+
+# <Pine.LNX.3.96.981130011300.30365Z-100000@wakko>
+
+################################################################################
+
+import pg, sys, time;
+import utils, db_access, logging;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+Logger = None;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: natalie.py [OPTIONS]
+ -h, --help print this help and exit
+
+ -c, --component=CMPT list/set overrides by component
+ (contrib,*main,non-free)
+ -s, --suite=SUITE list/set overrides by suite
+ (experimental,stable,testing,*unstable)
+ -t, --type=TYPE list/set overrides by type
+ (*deb,dsc,udeb)
+
+ -a, --add add overrides (changes and deletions are ignored)
+ -S, --set set overrides
+ -l, --list list overrides
+
+ -q, --quiet be less verbose
+
+ starred (*) values are default"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def process_file (file, suite, component, type, action):
+ suite_id = db_access.get_suite_id(suite);
+ if suite_id == -1:
+ utils.fubar("Suite '%s' not recognised." % (suite));
+
+ component_id = db_access.get_component_id(component);
+ if component_id == -1:
+ utils.fubar("Component '%s' not recognised." % (component));
+
+ type_id = db_access.get_override_type_id(type);
+ if type_id == -1:
+ utils.fubar("Type '%s' not recognised. (Valid types are deb, udeb and dsc.)" % (type));
+
+ # --set is done mostly internally for performance reasons; most
+ # invocations of --set will be updates and making people wait 2-3
+ # minutes while 6000 select+inserts are run needlessly isn't cool.
+
+ original = {};
+ new = {};
+ c_skipped = 0;
+ c_added = 0;
+ c_updated = 0;
+ c_removed = 0;
+ c_error = 0;
+
+ q = projectB.query("SELECT o.package, o.priority, o.section, o.maintainer, p.priority, s.section FROM override o, priority p, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s and o.priority = p.id and o.section = s.id"
+ % (suite_id, component_id, type_id));
+ for i in q.getresult():
+ original[i[0]] = i[1:];
+
+ start_time = time.time();
+ projectB.query("BEGIN WORK");
+ for line in file.readlines():
+ line = utils.re_comments.sub('', line).strip();
+ if line == "":
+ continue;
+
+ maintainer_override = None;
+ if type == "dsc":
+ split_line = line.split(None, 2);
+ if len(split_line) == 2:
+ (package, section) = split_line;
+ elif len(split_line) == 3:
+ (package, section, maintainer_override) = split_line;
+ else:
+ utils.warn("'%s' does not break into 'package section [maintainer-override]'." % (line));
+ c_error += 1;
+ continue;
+ priority = "source";
+ else: # binary or udeb
+ split_line = line.split(None, 3);
+ if len(split_line) == 3:
+ (package, priority, section) = split_line;
+ elif len(split_line) == 4:
+ (package, priority, section, maintainer_override) = split_line;
+ else:
+ utils.warn("'%s' does not break into 'package priority section [maintainer-override]'." % (line));
+ c_error += 1;
+ continue;
+
+ section_id = db_access.get_section_id(section);
+ if section_id == -1:
+ utils.warn("'%s' is not a valid section. ['%s' in suite %s, component %s]." % (section, package, suite, component));
+ c_error += 1;
+ continue;
+ priority_id = db_access.get_priority_id(priority);
+ if priority_id == -1:
+ utils.warn("'%s' is not a valid priority. ['%s' in suite %s, component %s]." % (priority, package, suite, component));
+ c_error += 1;
+ continue;
+
+ if new.has_key(package):
+ utils.warn("Can't insert duplicate entry for '%s'; ignoring all but the first. [suite %s, component %s]" % (package, suite, component));
+ c_error += 1;
+ continue;
+ new[package] = "";
+ if original.has_key(package):
+ (old_priority_id, old_section_id, old_maintainer_override, old_priority, old_section) = original[package];
+ if action == "add" or (old_priority_id == priority_id and \
+ old_section_id == section_id and \
+ ((old_maintainer_override == maintainer_override) or \
+ (old_maintainer_override == "" and maintainer_override == None))):
+ # If it's unchanged or we're in 'add only' mode, ignore it
+ c_skipped += 1;
+ continue;
+ else:
+ # If it's changed, delete the old one so we can
+ # reinsert it with the new information
+ c_updated += 1;
+ projectB.query("DELETE FROM override WHERE suite = %s AND component = %s AND package = '%s' AND type = %s"
+ % (suite_id, component_id, package, type_id));
+ # Log changes
+ if old_priority_id != priority_id:
+ Logger.log(["changed priority",package,old_priority,priority]);
+ if old_section_id != section_id:
+ Logger.log(["changed section",package,old_section,section]);
+ if old_maintainer_override != maintainer_override:
+ Logger.log(["changed maintainer override",package,old_maintainer_override,maintainer_override]);
+ update_p = 1;
+ else:
+ c_added += 1;
+ update_p = 0;
+
+ if maintainer_override:
+ projectB.query("INSERT INTO override (suite, component, type, package, priority, section, maintainer) VALUES (%s, %s, %s, '%s', %s, %s, '%s')"
+ % (suite_id, component_id, type_id, package, priority_id, section_id, maintainer_override));
+ else:
+ projectB.query("INSERT INTO override (suite, component, type, package, priority, section,maintainer) VALUES (%s, %s, %s, '%s', %s, %s, '')"
+ % (suite_id, component_id, type_id, package, priority_id, section_id));
+
+ if not update_p:
+ Logger.log(["new override",suite,component,type,package,priority,section,maintainer_override]);
+
+ if action != "add":
+ # Delete any packages which were removed
+ for package in original.keys():
+ if not new.has_key(package):
+ projectB.query("DELETE FROM override WHERE suite = %s AND component = %s AND package = '%s' AND type = %s"
+ % (suite_id, component_id, package, type_id));
+ c_removed += 1;
+ Logger.log(["removed override",suite,component,type,package]);
+
+ projectB.query("COMMIT WORK");
+ if not Cnf["Natalie::Options::Quiet"]:
+ print "Done in %d seconds. [Updated = %d, Added = %d, Removed = %d, Skipped = %d, Errors = %d]" % (int(time.time()-start_time), c_updated, c_added, c_removed, c_skipped, c_error);
+ Logger.log(["set complete",c_updated, c_added, c_removed, c_skipped, c_error]);
+
+################################################################################
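The performance trick described in process_file's comment (read everything once, then only touch rows that changed) reduces each parsed line to a three-way decision. A hedged sketch of that classification alone, independent of the database (the entry tuples here are illustrative):

```python
def classify_override(package, new_entry, original, action):
    """Decide what natalie's --set/--add pass would do with one entry.

    original maps package -> existing entry; "add" mode never updates
    existing entries, it only inserts missing ones.
    Returns one of: "add", "skip", "update".
    """
    if package not in original:
        return "add"
    if action == "add" or original[package] == new_entry:
        # Unchanged, or add-only mode: leave the existing row alone
        return "skip"
    # Changed entry: delete the old row and reinsert with new data
    return "update"
```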
+
+def list(suite, component, type):
+ suite_id = db_access.get_suite_id(suite);
+ if suite_id == -1:
+ utils.fubar("Suite '%s' not recognised." % (suite));
+
+ component_id = db_access.get_component_id(component);
+ if component_id == -1:
+ utils.fubar("Component '%s' not recognised." % (component));
+
+ type_id = db_access.get_override_type_id(type);
+ if type_id == -1:
+ utils.fubar("Type '%s' not recognised. (Valid types are deb, udeb and dsc)" % (type));
+
+ if type == "dsc":
+ q = projectB.query("SELECT o.package, s.section, o.maintainer FROM override o, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s AND o.section = s.id ORDER BY s.section, o.package" % (suite_id, component_id, type_id));
+ for i in q.getresult():
+ print utils.result_join(i);
+ else:
+ q = projectB.query("SELECT o.package, p.priority, s.section, o.maintainer, p.level FROM override o, priority p, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s AND o.priority = p.id AND o.section = s.id ORDER BY s.section, p.level, o.package" % (suite_id, component_id, type_id));
+ for i in q.getresult():
+ print utils.result_join(i[:-1]);
+
+################################################################################
+
+def main ():
+ global Cnf, projectB, Logger;
+
+ Cnf = utils.get_conf();
+ Arguments = [('a', "add", "Natalie::Options::Add"),
+ ('c', "component", "Natalie::Options::Component", "HasArg"),
+ ('h', "help", "Natalie::Options::Help"),
+ ('l', "list", "Natalie::Options::List"),
+ ('q', "quiet", "Natalie::Options::Quiet"),
+ ('s', "suite", "Natalie::Options::Suite", "HasArg"),
+ ('S', "set", "Natalie::Options::Set"),
+ ('t', "type", "Natalie::Options::Type", "HasArg")];
+
+ # Default arguments
+ for i in [ "add", "help", "list", "quiet", "set" ]:
+ if not Cnf.has_key("Natalie::Options::%s" % (i)):
+ Cnf["Natalie::Options::%s" % (i)] = "";
+ if not Cnf.has_key("Natalie::Options::Component"):
+ Cnf["Natalie::Options::Component"] = "main";
+ if not Cnf.has_key("Natalie::Options::Suite"):
+ Cnf["Natalie::Options::Suite"] = "unstable";
+ if not Cnf.has_key("Natalie::Options::Type"):
+ Cnf["Natalie::Options::Type"] = "deb";
+
+ file_list = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+
+ if Cnf["Natalie::Options::Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ action = None;
+ for i in [ "add", "list", "set" ]:
+ if Cnf["Natalie::Options::%s" % (i)]:
+ if action:
+ utils.fubar("Can not perform more than one action at once.");
+ action = i;
+
+ (suite, component, type) = (Cnf["Natalie::Options::Suite"], Cnf["Natalie::Options::Component"], Cnf["Natalie::Options::Type"])
+
+ if action == "list":
+ list(suite, component, type);
+ else:
+ Logger = logging.Logger(Cnf, "natalie");
+ if file_list:
+ for file in file_list:
+ process_file(utils.open_file(file), suite, component, type, action);
+ else:
+ process_file(sys.stdin, suite, component, type, action);
+ Logger.close();
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Manipulate suite tags
+# Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
+# $Id: heidi,v 1.19 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+#######################################################################################
+
+# 8to6Guy: "Wow, Bob, You look rough!"
+# BTAF: "Mbblpmn..."
+# BTAF <.oO>: "You moron! This is what you get for staying up all night drinking vodka and salad dressing!"
+# BTAF <.oO>: "This coffee I.V. drip is barely even keeping me awake! I need something with more kick! But what?"
+# BTAF: "OMIGOD! I OVERDOSED ON HEROIN"
+# CoWorker#n: "Give him air!!"
+# CoWorker#n+1: "We need a syringe full of adrenaline!"
+# CoWorker#n+2: "Stab him in the heart!"
+# BTAF: "*YES!*"
+# CoWorker#n+3: "Bob's been overdosing quite a bit lately..."
+# CoWorker#n+4: "Third time this week."
+
+# -- http://www.angryflower.com/8to6.gif
+
+#######################################################################################
+
+# Adds or removes packages from a suite. Takes the package list
+# either from stdin or from files given on the command line. The
+# special action "set" will reset the suite (!) and re-add all
+# packages from scratch.
+
+#######################################################################################
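heidi's input is one "package version architecture" triple per line, and the "set" action amounts to a set difference between what the suite currently contains and what the file lists (set_suite below builds both sides as dictionaries keyed on that triple). A standalone sketch of the reconciliation, with illustrative keys:

```python
def suite_diff(current, desired):
    """Return (to_add, to_remove) as set_suite's reconciliation would.

    Both arguments are iterables of "package version architecture" keys;
    to_add is what the file lists but the suite lacks, to_remove the reverse.
    """
    current = set(current)
    desired = set(desired)
    return sorted(desired - current), sorted(current - desired)
```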
+
+import pg, sys;
+import apt_pkg;
+import utils, db_access, logging;
+
+#######################################################################################
+
+Cnf = None;
+projectB = None;
+Logger = None;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: heidi [OPTIONS] [FILE]
+Display or alter the contents of a suite using FILE(s), or stdin.
+
+ -a, --add=SUITE add to SUITE
+ -h, --help show this help and exit
+ -l, --list=SUITE list the contents of SUITE
+ -r, --remove=SUITE remove from SUITE
+ -s, --set=SUITE set SUITE"""
+
+ sys.exit(exit_code)
+
+#######################################################################################
+
+def get_id (package, version, architecture):
+ if architecture == "source":
+ q = projectB.query("SELECT id FROM source WHERE source = '%s' AND version = '%s'" % (package, version))
+ else:
+ q = projectB.query("SELECT b.id FROM binaries b, architecture a WHERE b.package = '%s' AND b.version = '%s' AND (a.arch_string = '%s' OR a.arch_string = 'all') AND b.architecture = a.id" % (package, version, architecture))
+
+ ql = q.getresult();
+ if not ql:
+ utils.warn("Couldn't find '%s~%s~%s'." % (package, version, architecture));
+ return None;
+ if len(ql) > 1:
+ utils.warn("Found more than one match for '%s~%s~%s'." % (package, version, architecture));
+ return None;
+ id = ql[0][0];
+ return id;
+
+#######################################################################################
+
+def set_suite (file, suite_id):
+ lines = file.readlines();
+
+ projectB.query("BEGIN WORK");
+
+ # Build up a dictionary of what is currently in the suite
+ current = {};
+ q = projectB.query("SELECT b.package, b.version, a.arch_string, ba.id FROM binaries b, bin_associations ba, architecture a WHERE ba.suite = %s AND ba.bin = b.id AND b.architecture = a.id" % (suite_id));
+ ql = q.getresult();
+ for i in ql:
+ key = " ".join(i[:3]);
+ current[key] = i[3];
+ q = projectB.query("SELECT s.source, s.version, sa.id FROM source s, src_associations sa WHERE sa.suite = %s AND sa.source = s.id" % (suite_id));
+ ql = q.getresult();
+ for i in ql:
+ key = " ".join(i[:2]) + " source";
+ current[key] = i[2];
+
+ # Build up a dictionary of what should be in the suite
+ desired = {};
+ for line in lines:
+ split_line = line.strip().split();
+ if len(split_line) != 3:
+ utils.warn("'%s' does not break into 'package version architecture'." % (line[:-1]));
+ continue;
+ key = " ".join(split_line);
+ desired[key] = "";
+
+ # Check to see which packages need to be removed, and remove them
+ for key in current.keys():
+ if not desired.has_key(key):
+ (package, version, architecture) = key.split();
+ id = current[key];
+ if architecture == "source":
+ q = projectB.query("DELETE FROM src_associations WHERE id = %s" % (id));
+ else:
+ q = projectB.query("DELETE FROM bin_associations WHERE id = %s" % (id));
+ Logger.log(["removed",key,id]);
+
+ # Check to see which packages need to be added, and add them
+ for key in desired.keys():
+ if not current.has_key(key):
+ (package, version, architecture) = key.split();
+ id = get_id (package, version, architecture);
+ if not id:
+ continue;
+ if architecture == "source":
+ q = projectB.query("INSERT INTO src_associations (suite, source) VALUES (%s, %s)" % (suite_id, id));
+ else:
+ q = projectB.query("INSERT INTO bin_associations (suite, bin) VALUES (%s, %s)" % (suite_id, id));
+ Logger.log(["added",key,id]);
+
+ projectB.query("COMMIT WORK");
+
+#######################################################################################
+
+def process_file (file, suite, action):
+
+ suite_id = db_access.get_suite_id(suite);
+
+ if action == "set":
+ set_suite (file, suite_id);
+ return;
+
+ lines = file.readlines();
+
+ projectB.query("BEGIN WORK");
+
+ for line in lines:
+ split_line = line.strip().split();
+ if len(split_line) != 3:
+ utils.warn("'%s' does not break into 'package version architecture'." % (line[:-1]));
+ continue;
+
+ (package, version, architecture) = split_line;
+
+ id = get_id(package, version, architecture);
+ if not id:
+ continue;
+
+ if architecture == "source":
+ # Find the existing association's ID, if any
+ q = projectB.query("SELECT id FROM src_associations WHERE suite = %s and source = %s" % (suite_id, id));
+ ql = q.getresult();
+ if not ql:
+ association_id = None;
+ else:
+ association_id = ql[0][0];
+ # Take action
+ if action == "add":
+ if association_id:
+ utils.warn("'%s~%s~%s' already exists in suite %s." % (package, version, architecture, suite));
+ continue;
+ else:
+ q = projectB.query("INSERT INTO src_associations (suite, source) VALUES (%s, %s)" % (suite_id, id));
+ elif action == "remove":
+ if association_id == None:
+ utils.warn("'%s~%s~%s' doesn't exist in suite %s." % (package, version, architecture, suite));
+ continue;
+ else:
+ q = projectB.query("DELETE FROM src_associations WHERE id = %s" % (association_id));
+ else:
+ # Find the existing association's ID, if any
+ q = projectB.query("SELECT id FROM bin_associations WHERE suite = %s and bin = %s" % (suite_id, id));
+ ql = q.getresult();
+ if not ql:
+ association_id = None;
+ else:
+ association_id = ql[0][0];
+ # Take action
+ if action == "add":
+ if association_id:
+ utils.warn("'%s~%s~%s' already exists in suite %s." % (package, version, architecture, suite));
+ continue;
+ else:
+ q = projectB.query("INSERT INTO bin_associations (suite, bin) VALUES (%s, %s)" % (suite_id, id));
+ elif action == "remove":
+ if association_id == None:
+ utils.warn("'%s~%s~%s' doesn't exist in suite %s." % (package, version, architecture, suite));
+ continue;
+ else:
+ q = projectB.query("DELETE FROM bin_associations WHERE id = %s" % (association_id));
+
+ projectB.query("COMMIT WORK");
+
+#######################################################################################
+
+def get_list (suite):
+ suite_id = db_access.get_suite_id(suite);
+ # List binaries
+ q = projectB.query("SELECT b.package, b.version, a.arch_string FROM binaries b, bin_associations ba, architecture a WHERE ba.suite = %s AND ba.bin = b.id AND b.architecture = a.id" % (suite_id));
+ ql = q.getresult();
+ for i in ql:
+ print " ".join(i);
+
+ # List source
+ q = projectB.query("SELECT s.source, s.version FROM source s, src_associations sa WHERE sa.suite = %s AND sa.source = s.id" % (suite_id));
+ ql = q.getresult();
+ for i in ql:
+ print " ".join(i) + " source";
+
+#######################################################################################
+
+def main ():
+ global Cnf, projectB, Logger;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('a',"add","Heidi::Options::Add", "HasArg"),
+ ('h',"help","Heidi::Options::Help"),
+ ('l',"list","Heidi::Options::List","HasArg"),
+ ('r',"remove", "Heidi::Options::Remove", "HasArg"),
+ ('s',"set", "Heidi::Options::Set", "HasArg")];
+
+ for i in ["add", "help", "list", "remove", "set", "version" ]:
+ if not Cnf.has_key("Heidi::Options::%s" % (i)):
+ Cnf["Heidi::Options::%s" % (i)] = "";
+
+ file_list = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Heidi::Options")
+
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"],int(Cnf["DB::Port"]));
+
+ db_access.init(Cnf, projectB);
+
+ action = None;
+
+ for i in ("add", "list", "remove", "set"):
+ if Cnf["Heidi::Options::%s" % (i)] != "":
+ suite = Cnf["Heidi::Options::%s" % (i)];
+ if db_access.get_suite_id(suite) == -1:
+ utils.fubar("Unknown suite '%s'." %(suite));
+ else:
+ if action:
+ utils.fubar("Can only perform one action at a time.");
+ action = i;
+
+ # Need an action...
+ if action == None:
+ utils.fubar("No action specified.");
+
+ # Safety/Sanity check
+ if action == "set" and suite != "testing":
+ utils.fubar("Will not reset a suite other than testing.");
+
+ if action == "list":
+ get_list(suite);
+ else:
+ Logger = logging.Logger(Cnf, "heidi");
+ if file_list:
+ for file in file_list:
+ process_file(utils.open_file(file), suite, action);
+ else:
+ process_file(sys.stdin, suite, action);
+ Logger.close();
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Check for obsolete binary packages
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: rene,v 1.23 2005-04-16 09:19:20 rmurray Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# ``If you're claiming that's a "problem" that needs to be "fixed",
+# you might as well write some letters to God about how unfair entropy
+# is while you're at it.'' -- 20020802143104.GA5628@azure.humbug.org.au
+
+## TODO: fix NBS looping for version, implement Dubious NBS, fix up output of duplicate source package stuff, improve experimental ?, add support for non-US ?, add overrides, avoid ANAIS for duplicated packages
+
+################################################################################
+
+import commands, pg, os, string, sys, time;
+import utils, db_access;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+suite_id = None;
+no_longer_in_suite = {}; # Really should be static to add_nbs, but I'm lazy
+
+source_binaries = {};
+source_versions = {};
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: rene [OPTIONS]
+Check for obsolete or duplicated packages.
+
+ -h, --help show this help and exit.
+ -m, --mode=MODE choose the MODE to run in (full or daily).
+ -s, --suite=SUITE check suite SUITE."""
+ sys.exit(exit_code)
+
+################################################################################
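rene's central NBS test (later in main) compares the newest version of each binary no longer built from source against its source's version: equal versions are merely "dubious" NBS, differing ones are real removal candidates. Sketched here with a plain lexicographic sort standing in for apt_pkg.VersionCompare (an assumption for illustration only; Debian version ordering is more subtle):

```python
def classify_nbs(binary_versions, source_version):
    """Classify an orphaned binary as "dubious" or "real" NBS.

    binary_versions: the versions of the binary seen in the Packages files.
    source_version:  the version of the source that used to build it.
    """
    # Stand-in for a proper Debian version sort (apt_pkg.VersionCompare)
    latest = sorted(binary_versions)[-1]
    if latest == source_version:
        return "dubious"
    return "real"
```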
+
+def add_nbs(nbs_d, source, version, package):
+ # Ensure the package is still in the suite (someone may have already removed it)
+ if no_longer_in_suite.has_key(package):
+ return;
+ else:
+ q = projectB.query("SELECT b.id FROM binaries b, bin_associations ba WHERE ba.bin = b.id AND ba.suite = %s AND b.package = '%s' LIMIT 1" % (suite_id, package));
+ if not q.getresult():
+ no_longer_in_suite[package] = "";
+ return;
+
+ nbs_d.setdefault(source, {})
+ nbs_d[source].setdefault(version, {})
+ nbs_d[source][version][package] = "";
+
+################################################################################
+
+# Check for packages built on architectures they shouldn't be.
+def do_anais(architecture, binaries_list, source):
+ if architecture == "any" or architecture == "all":
+ return "";
+
+ anais_output = "";
+ architectures = {};
+ for arch in architecture.split():
+ architectures[arch.strip()] = "";
+ for binary in binaries_list:
+ q = projectB.query("SELECT a.arch_string, b.version FROM binaries b, bin_associations ba, architecture a WHERE ba.suite = %s AND ba.bin = b.id AND b.architecture = a.id AND b.package = '%s'" % (suite_id, binary));
+ ql = q.getresult();
+ versions = [];
+ for i in ql:
+ arch = i[0];
+ version = i[1];
+ if architectures.has_key(arch):
+ versions.append(version);
+ versions.sort(apt_pkg.VersionCompare);
+ if versions:
+ latest_version = versions.pop()
+ else:
+ latest_version = None;
+ # Check for 'invalid' architectures
+ versions_d = {}
+ for i in ql:
+ arch = i[0];
+ version = i[1];
+ if not architectures.has_key(arch):
+ versions_d.setdefault(version, [])
+ versions_d[version].append(arch)
+
+ if versions_d != {}:
+ anais_output += "\n (*) %s_%s [%s]: %s\n" % (binary, latest_version, source, architecture);
+ versions = versions_d.keys();
+ versions.sort(apt_pkg.VersionCompare);
+ for version in versions:
+ arches = versions_d[version];
+ arches.sort();
+ anais_output += " o %s: %s\n" % (version, ", ".join(arches));
+ return anais_output;
+
+################################################################################
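do_anais above boils down to: split the source's Architecture field into an allow-set, then flag any (arch, version) pair the archive reports outside that set. A minimal sketch of that filtering step, detached from the database (the field value and observed pairs are hypothetical example data):

```python
def invalid_arches(architecture_field, observed):
    """Map version -> sorted arches not named in the Architecture field.

    observed is a list of (arch, version) pairs, as do_anais gets from
    its query over binaries/bin_associations/architecture.
    """
    allowed = set(architecture_field.split())
    bad = {}
    for arch, version in observed:
        if arch not in allowed:
            bad.setdefault(version, []).append(arch)
    return {version: sorted(arches) for version, arches in bad.items()}
```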
+
+def do_nviu():
+ experimental_id = db_access.get_suite_id("experimental");
+ if experimental_id == -1:
+ return;
+ # Check for packages in experimental obsoleted by versions in unstable
+ q = projectB.query("""
+SELECT s.source, s.version AS experimental, s2.version AS unstable
+ FROM src_associations sa, source s, source s2, src_associations sa2
+ WHERE sa.suite = %s AND sa2.suite = %s AND sa.source = s.id
+ AND sa2.source = s2.id AND s.source = s2.source
+ AND versioncmp(s.version, s2.version) < 0""" % (experimental_id,
+ db_access.get_suite_id("unstable")));
+ ql = q.getresult();
+ if ql:
+ nviu_to_remove = [];
+ print "Newer version in unstable";
+ print "-------------------------";
+ print ;
+ for i in ql:
+ (source, experimental_version, unstable_version) = i;
+ print " o %s (%s, %s)" % (source, experimental_version, unstable_version);
+ nviu_to_remove.append(source);
+ print
+ print "Suggested command:"
+ print " melanie -m \"[rene] NVIU\" -s experimental %s" % (" ".join(nviu_to_remove));
+ print
+
+################################################################################
+
+def do_nbs(real_nbs):
+ output = "Not Built from Source\n";
+ output += "---------------------\n\n";
+
+ nbs_to_remove = [];
+ nbs_keys = real_nbs.keys();
+ nbs_keys.sort();
+ for source in nbs_keys:
+ output += " * %s_%s builds: %s\n" % (source,
+ source_versions.get(source, "??"),
+ source_binaries.get(source, "(source does not exist)"));
+ output += " but no longer builds:\n"
+ versions = real_nbs[source].keys();
+ versions.sort(apt_pkg.VersionCompare);
+ for version in versions:
+ packages = real_nbs[source][version].keys();
+ packages.sort();
+ for pkg in packages:
+ nbs_to_remove.append(pkg);
+ output += " o %s: %s\n" % (version, ", ".join(packages));
+
+ output += "\n";
+
+ if nbs_to_remove:
+ print output;
+
+ print "Suggested command:"
+ print " melanie -m \"[rene] NBS\" -b %s" % (" ".join(nbs_to_remove));
+ print
+
+################################################################################
+
+def do_dubious_nbs(dubious_nbs):
+ print "Dubious NBS";
+ print "-----------";
+ print ;
+
+ dubious_nbs_keys = dubious_nbs.keys();
+ dubious_nbs_keys.sort();
+ for source in dubious_nbs_keys:
+ print " * %s_%s builds: %s" % (source,
+ source_versions.get(source, "??"),
+ source_binaries.get(source, "(source does not exist)"));
+ print " won't admit to building:"
+ versions = dubious_nbs[source].keys();
+ versions.sort(apt_pkg.VersionCompare);
+ for version in versions:
+ packages = dubious_nbs[source][version].keys();
+ packages.sort();
+ print " o %s: %s" % (version, ", ".join(packages));
+
+ print ;
+
+################################################################################
+
+def do_obsolete_source(duplicate_bins, bin2source):
+ obsolete = {}
+ for key in duplicate_bins.keys():
+ (source_a, source_b) = key.split('~')
+ for source in [ source_a, source_b ]:
+ if not obsolete.has_key(source):
+ if not source_binaries.has_key(source):
+ # Source has already been removed
+ continue;
+ else:
+ obsolete[source] = map(string.strip,
+ source_binaries[source].split(','))
+ for binary in duplicate_bins[key]:
+ if bin2source.has_key(binary) and bin2source[binary]["source"] == source:
+ continue
+ if binary in obsolete[source]:
+ obsolete[source].remove(binary)
+
+ to_remove = []
+ output = "Obsolete source package\n"
+ output += "-----------------------\n\n"
+ obsolete_keys = obsolete.keys()
+ obsolete_keys.sort()
+ for source in obsolete_keys:
+ if not obsolete[source]:
+ to_remove.append(source)
+ output += " * %s (%s)\n" % (source, source_versions[source])
+ for binary in map(string.strip, source_binaries[source].split(',')):
+ if bin2source.has_key(binary):
+ output += " o %s (%s) is built by %s.\n" \
+ % (binary, bin2source[binary]["version"],
+ bin2source[binary]["source"])
+ else:
+ output += " o %s is not built.\n" % binary
+ output += "\n"
+
+ if to_remove:
+ print output;
+
+ print "Suggested command:"
+ print " melanie -S -p -m \"[rene] obsolete source package\" %s" % (" ".join(to_remove));
+ print
+
+################################################################################
+
+def main ():
+ global Cnf, projectB, suite_id, source_binaries, source_versions;
+
+ Cnf = utils.get_conf();
+
+ Arguments = [('h',"help","Rene::Options::Help"),
+ ('m',"mode","Rene::Options::Mode", "HasArg"),
+ ('s',"suite","Rene::Options::Suite","HasArg")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Rene::Options::%s" % (i)):
+ Cnf["Rene::Options::%s" % (i)] = "";
+ Cnf["Rene::Options::Suite"] = Cnf["Dinstall::DefaultSuite"];
+
+ if not Cnf.has_key("Rene::Options::Mode"):
+ Cnf["Rene::Options::Mode"] = "daily";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Rene::Options")
+ if Options["Help"]:
+ usage();
+
+ # Set up checks based on mode
+ if Options["Mode"] == "daily":
+ checks = [ "nbs", "nviu", "obsolete source" ];
+ elif Options["Mode"] == "full":
+ checks = [ "nbs", "nviu", "obsolete source", "dubious nbs", "bnb", "bms", "anais" ];
+ else:
+ utils.warn("%s is not a recognised mode - only 'full' or 'daily' are understood." % (Options["Mode"]));
+ usage(1);
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ bin_pkgs = {};
+ src_pkgs = {};
+ bin2source = {}
+ bins_in_suite = {};
+ nbs = {};
+ source_versions = {};
+
+ anais_output = "";
+ duplicate_bins = {};
+
+ suite = Options["Suite"]
+ suite_id = db_access.get_suite_id(suite);
+
+ bin_not_built = {};
+
+ if "bnb" in checks:
+ # Initialize a large hash table of all binary packages
+ before = time.time();
+ sys.stderr.write("[Getting a list of binary packages in %s..." % (suite));
+ q = projectB.query("SELECT distinct b.package FROM binaries b, bin_associations ba WHERE ba.suite = %s AND ba.bin = b.id" % (suite_id));
+ ql = q.getresult();
+ sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
+ for i in ql:
+ bins_in_suite[i[0]] = "";
+
+ # Checks based on the Sources files
+ components = Cnf.ValueList("Suite::%s::Components" % (suite));
+ for component in components:
+ filename = "%s/dists/%s/%s/source/Sources.gz" % (Cnf["Dir::Root"], suite, component);
+ # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
+ temp_filename = utils.temp_filename();
+ (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
+ if (result != 0):
+ sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output));
+ sys.exit(result);
+ sources = utils.open_file(temp_filename);
+ Sources = apt_pkg.ParseTagFile(sources);
+ while Sources.Step():
+ source = Sources.Section.Find('Package');
+ source_version = Sources.Section.Find('Version');
+ architecture = Sources.Section.Find('Architecture');
+ binaries = Sources.Section.Find('Binary');
+ binaries_list = map(string.strip, binaries.split(','));
+
+ if "bnb" in checks:
+ # Check for binaries not built on any architecture.
+ for binary in binaries_list:
+ if not bins_in_suite.has_key(binary):
+ bin_not_built.setdefault(source, {})
+ bin_not_built[source][binary] = "";
+
+ if "anais" in checks:
+ anais_output += do_anais(architecture, binaries_list, source);
+
+ # Check for duplicated packages and build indices for checking "no source" later
+ source_index = component + '/' + source;
+ if src_pkgs.has_key(source):
+ print " %s is a duplicated source package (%s and %s)" % (source, source_index, src_pkgs[source]);
+ src_pkgs[source] = source_index;
+ for binary in binaries_list:
+ if bin_pkgs.has_key(binary):
+ key_list = [ source, bin_pkgs[binary] ]
+ key_list.sort()
+ key = '~'.join(key_list)
+ duplicate_bins.setdefault(key, [])
+ duplicate_bins[key].append(binary);
+ bin_pkgs[binary] = source;
+ source_binaries[source] = binaries;
+ source_versions[source] = source_version;
+
+ sources.close();
+ os.unlink(temp_filename);
+
+ # Checks based on the Packages files
+ for component in components + ['main/debian-installer']:
+ architectures = filter(utils.real_arch, Cnf.ValueList("Suite::%s::Architectures" % (suite)));
+ for architecture in architectures:
+ filename = "%s/dists/%s/%s/binary-%s/Packages.gz" % (Cnf["Dir::Root"], suite, component, architecture);
+ # apt_pkg.ParseTagFile needs a real file handle
+ temp_filename = utils.temp_filename();
+ (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
+ if (result != 0):
+ sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output));
+ sys.exit(result);
+ packages = utils.open_file(temp_filename);
+ Packages = apt_pkg.ParseTagFile(packages);
+ while Packages.Step():
+ package = Packages.Section.Find('Package');
+ source = Packages.Section.Find('Source', "");
+ version = Packages.Section.Find('Version');
+ if source == "":
+ source = package;
+ if bin2source.has_key(package) and \
+ apt_pkg.VersionCompare(version, bin2source[package]["version"]) > 0:
+ bin2source[package]["version"] = version
+ bin2source[package]["source"] = source
+ else:
+ bin2source[package] = {}
+ bin2source[package]["version"] = version
+ bin2source[package]["source"] = source
+ if source.find("(") != -1:
+ m = utils.re_extract_src_version.match(source);
+ source = m.group(1);
+ version = m.group(2);
+ if not bin_pkgs.has_key(package):
+ nbs.setdefault(source,{})
+ nbs[source].setdefault(package, {})
+ nbs[source][package][version] = "";
+ else:
+ previous_source = bin_pkgs[package]
+ if previous_source != source:
+ key_list = [ source, previous_source ]
+ key_list.sort()
+ key = '~'.join(key_list)
+ duplicate_bins.setdefault(key, [])
+ if package not in duplicate_bins[key]:
+ duplicate_bins[key].append(package)
+ packages.close();
+ os.unlink(temp_filename);
+
+ if "obsolete source" in checks:
+ do_obsolete_source(duplicate_bins, bin2source)
+
+ # Distinguish dubious (version numbers match) and 'real' NBS (they don't)
+ dubious_nbs = {};
+ real_nbs = {};
+ for source in nbs.keys():
+ for package in nbs[source].keys():
+ versions = nbs[source][package].keys();
+ versions.sort(apt_pkg.VersionCompare);
+ latest_version = versions.pop();
+ source_version = source_versions.get(source,"0");
+ if apt_pkg.VersionCompare(latest_version, source_version) == 0:
+ add_nbs(dubious_nbs, source, latest_version, package);
+ else:
+ add_nbs(real_nbs, source, latest_version, package);
+
+ if "nviu" in checks:
+ do_nviu();
+
+ if "nbs" in checks:
+ do_nbs(real_nbs);
+
+ ###
+
+ if Options["Mode"] == "full":
+ print "="*75
+ print
+
+ if "bnb" in checks:
+ print "Unbuilt binary packages";
+ print "-----------------------";
+ print
+ keys = bin_not_built.keys();
+ keys.sort();
+ for source in keys:
+ binaries = bin_not_built[source].keys();
+ binaries.sort();
+ print " o %s: %s" % (source, ", ".join(binaries));
+ print ;
+
+ if "bms" in checks:
+ print "Built from multiple source packages";
+ print "-----------------------------------";
+ print ;
+ keys = duplicate_bins.keys();
+ keys.sort();
+ for key in keys:
+ (source_a, source_b) = key.split("~");
+ print " o %s & %s => %s" % (source_a, source_b, ", ".join(duplicate_bins[key]));
+ print ;
+
+ if "anais" in checks:
+ print "Architecture Not Allowed In Source";
+ print "----------------------------------";
+ print anais_output;
+ print ;
+
+ if "dubious nbs" in checks:
+ do_dubious_nbs(dubious_nbs);
+
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+#!/usr/bin/env python
+
+# Dump variables from a .katie file to stdout
+# Copyright (C) 2001, 2002, 2004 James Troup <james@nocrew.org>
+# $Id: ashley,v 1.11 2004-11-27 16:05:12 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <elmo> ooooooooooooooohhhhhhhhhhhhhhhhhhhhhhhhh dddddddddeeeeeeeaaaaaaaarrrrrrrrrrr
+# <elmo> iiiiiiiiiiiii tttttttttthhhhhhhhiiiiiiiiiiiinnnnnnnnnkkkkkkkkkkkkk iiiiiiiiiiiiii mmmmmmmmmmeeeeeeeesssssssssssssssseeeeeeeddd uuuupppppppppppp ttttttttthhhhhhhheeeeeeee xxxxxxxssssssseeeeeeeeettttttttttttt aaaaaaaarrrrrrrggggggsssssssss
+#
+# ['xset r rate 30 250' bad, mmkay]
+
+################################################################################
+
+import sys;
+import katie, utils;
+import apt_pkg;
+
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: ashley FILE...
+Dumps the info in .katie FILE(s).
+
+ -h, --help show this help and exit."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def main():
+ Cnf = utils.get_conf()
+ Arguments = [('h',"help","Ashley::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Ashley::Options::%s" % (i)):
+ Cnf["Ashley::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Ashley::Options")
+ if Options["Help"]:
+ usage();
+
+ k = katie.Katie(Cnf);
+ for arg in sys.argv[1:]:
+ arg = utils.validate_changes_file_arg(arg,require_changes=-1);
+ k.pkg.changes_file = arg;
+ print "%s:" % (arg);
+ k.init_vars();
+ k.update_vars();
+
+ changes = k.pkg.changes;
+ print " Changes:";
+ # Mandatory changes fields
+ for i in [ "source", "version", "maintainer", "urgency", "changedby822",
+ "changedby2047", "changedbyname", "maintainer822",
+ "maintainer2047", "maintainername", "maintaineremail",
+ "fingerprint", "changes" ]:
+ print " %s: %s" % (i.capitalize(), changes[i]);
+ del changes[i];
+ # Mandatory changes lists
+ for i in [ "distribution", "architecture", "closes" ]:
+ print " %s: %s" % (i.capitalize(), " ".join(changes[i].keys()));
+ del changes[i];
+ # Optional changes fields
+ for i in [ "changed-by", "filecontents", "format" ]:
+ if changes.has_key(i):
+ print " %s: %s" % (i.capitalize(), changes[i]);
+ del changes[i];
+ print;
+ if changes:
+ utils.warn("changes still has the following unrecognised keys: %s" % (changes.keys()));
+
+ dsc = k.pkg.dsc;
+ print " Dsc:";
+ for i in [ "source", "version", "maintainer", "fingerprint", "uploaders",
+ "bts changelog" ]:
+ if dsc.has_key(i):
+ print " %s: %s" % (i.capitalize(), dsc[i]);
+ del dsc[i];
+ print;
+ if dsc:
+ utils.warn("dsc still has the following unrecognised keys: %s" % (dsc.keys()));
+
+ files = k.pkg.files;
+ print " Files:"
+ for file in files.keys():
+ print " %s:" % (file);
+ for i in [ "package", "version", "architecture", "type", "size",
+ "md5sum", "component", "location id", "source package",
+ "source version", "maintainer", "dbtype", "files id",
+ "new", "section", "priority", "pool name" ]:
+ if files[file].has_key(i):
+ print " %s: %s" % (i.capitalize(), files[file][i]);
+ del files[file][i];
+ if files[file]:
+ utils.warn("files[%s] still has the following unrecognised keys: %s" % (file, files[file].keys()));
+ print;
+
+ dsc_files = k.pkg.dsc_files;
+ print " Dsc Files:";
+ for file in dsc_files.keys():
+ print " %s:" % (file);
+ # Mandatory fields
+ for i in [ "size", "md5sum" ]:
+ print " %s: %s" % (i.capitalize(), dsc_files[file][i]);
+ del dsc_files[file][i];
+ # Optional fields
+ for i in [ "files id" ]:
+ if dsc_files[file].has_key(i):
+ print " %s: %s" % (i.capitalize(), dsc_files[file][i]);
+ del dsc_files[file][i];
+ if dsc_files[file]:
+ utils.warn("dsc_files[%s] still has the following unrecognised keys: %s" % (file, dsc_files[file].keys()));
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Script to automate some parts of checking NEW packages
+# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
+# $Id: fernanda.py,v 1.10 2003-11-10 23:01:17 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <Omnic> elmo wrote docs?!!?!?!?!?!?!
+# <aj> as if he wasn't scary enough before!!
+# * aj imagines a little red furry toy sitting hunched over a computer
+# tapping furiously and giggling to himself
+# <aj> eventually he stops, and his heads slowly spins around and you
+# see this really evil grin and then he sees you, and picks up a
+# knife from beside the keyboard and throws it at you, and as you
+# breathe your last breath, he starts giggling again
+# <aj> but i should be telling this to my psychiatrist, not you guys,
+# right? :)
+
+################################################################################
+
+import errno, os, re, sys
+import utils
+import apt_pkg, apt_inst
+import pg, db_access
+
+################################################################################
+
+re_package = re.compile(r"^(.+?)_.*");
+re_doc_directory = re.compile(r".*/doc/([^/]*).*");
+
+re_contrib = re.compile('^contrib/')
+re_nonfree = re.compile('^non\-free/')
+
+re_arch = re.compile("Architecture: .*")
+re_builddep = re.compile("Build-Depends: .*")
+re_builddepind = re.compile("Build-Depends-Indep: .*")
+
+re_localhost = re.compile("localhost\.localdomain")
+re_version = re.compile('^(.*)\((.*)\)')
+
+re_newlinespace = re.compile('\n')
+re_spacestrip = re.compile('(\s)')
+
+################################################################################
+
+# Colour definitions
+
+# Main
+main_colour = "\033[36m";
+# Contrib
+contrib_colour = "\033[33m";
+# Non-Free
+nonfree_colour = "\033[31m";
+# Arch
+arch_colour = "\033[32m";
+# End
+end_colour = "\033[0m";
+# Bold
+bold_colour = "\033[1m";
+# Bad maintainer
+maintainer_colour = arch_colour;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+Cnf = utils.get_conf()
+projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]))
+db_access.init(Cnf, projectB);
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: fernanda [PACKAGE]...
+Check NEW package(s).
+
+ -h, --help show this help and exit
+
+PACKAGE can be a .changes, .dsc, .deb or .udeb filename."""
+
+ sys.exit(exit_code)
+
+################################################################################
+
+def get_depends_parts(depend) :
+ v_match = re_version.match(depend)
+ if v_match:
+ d_parts = { 'name' : v_match.group(1), 'version' : v_match.group(2) }
+ else :
+ d_parts = { 'name' : depend , 'version' : '' }
+ return d_parts
+
+def get_or_list(depend) :
+ or_list = depend.split("|");
+ return or_list
+
+def get_comma_list(depend) :
+ dep_list = depend.split(",");
+ return dep_list
+
+def split_depends (d_str) :
+ # creates a list of lists of dictionaries of depends (package,version relation)
+
+ d_str = re_spacestrip.sub('',d_str);
+ depends_tree = [];
+ # first split the depends string on the comma delimiter
+ dep_list = get_comma_list(d_str);
+ d = 0;
+ while d < len(dep_list):
+ # put depends into their own list
+ depends_tree.append([dep_list[d]]);
+ d += 1;
+ d = 0;
+ while d < len(depends_tree):
+ k = 0;
+ # split up Or'd depends into a multi-item list
+ depends_tree[d] = get_or_list(depends_tree[d][0]);
+ while k < len(depends_tree[d]):
+ # split depends into {package, version relation}
+ depends_tree[d][k] = get_depends_parts(depends_tree[d][k]);
+ k += 1;
+ d += 1;
+ return depends_tree;
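For illustration only, the nested structure split_depends builds can be reproduced in a standalone sketch; this is a modern Python 3 re-implementation (using re.sub in place of re_spacestrip), not the code above:

```python
import re

# Matches "name(relation+version)" once whitespace is stripped,
# e.g. "libc6(>=2.3.2)" -> ("libc6", ">=2.3.2").
re_version = re.compile(r'^(.*)\((.*)\)')

def split_depends(d_str):
    """Return a list (comma groups) of lists (| alternatives) of
    {'name': ..., 'version': ...} dictionaries."""
    d_str = re.sub(r'\s', '', d_str)
    tree = []
    for group in d_str.split(','):
        alternatives = []
        for alt in group.split('|'):
            m = re_version.match(alt)
            if m:
                alternatives.append({'name': m.group(1), 'version': m.group(2)})
            else:
                alternatives.append({'name': alt, 'version': ''})
        tree.append(alternatives)
    return tree

# "libc6 (>= 2.3.2), perl | python" parses to one versioned group
# followed by one group of two unversioned alternatives.
example = split_depends("libc6 (>= 2.3.2), perl | python")
```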
+
+def read_control (filename):
+ recommends = [];
+ depends = [];
+ section = '';
+ maintainer = '';
+ arch = '';
+
+ deb_file = utils.open_file(filename);
+ try:
+ extracts = apt_inst.debExtractControl(deb_file);
+ control = apt_pkg.ParseSection(extracts);
+ except:
+ print "can't parse control info";
+ control = '';
+
+ deb_file.close();
+
+ if control == '':
+ # Parsing failed; return empty values rather than crash on ''.keys()
+ return (control, [], section, depends, recommends, arch, maintainer);
+
+ control_keys = control.keys();
+
+ if control.has_key("Depends"):
+ depends_str = control.Find("Depends");
+ # create list of dependency lists
+ depends = split_depends(depends_str);
+
+ if control.has_key("Recommends"):
+ recommends_str = control.Find("Recommends");
+ recommends = split_depends(recommends_str);
+
+ if control.has_key("Section"):
+ section_str = control.Find("Section");
+
+ c_match = re_contrib.search(section_str)
+ nf_match = re_nonfree.search(section_str)
+ if c_match :
+ # contrib colour
+ section = contrib_colour + section_str + end_colour
+ elif nf_match :
+ # non-free colour
+ section = nonfree_colour + section_str + end_colour
+ else :
+ # main
+ section = main_colour + section_str + end_colour
+ if control.has_key("Architecture"):
+ arch_str = control.Find("Architecture")
+ arch = arch_colour + arch_str + end_colour
+
+ if control.has_key("Maintainer"):
+ maintainer = control.Find("Maintainer")
+ localhost = re_localhost.search(maintainer)
+ if localhost:
+ #highlight bad email
+ maintainer = maintainer_colour + maintainer + end_colour;
+
+ return (control, control_keys, section, depends, recommends, arch, maintainer)
+
+def read_dsc (dsc_filename):
+ dsc = {};
+
+ dsc_file = utils.open_file(dsc_filename);
+ try:
+ dsc = utils.parse_changes(dsc_filename);
+ except:
+ print "can't parse .dsc file";
+ dsc_file.close();
+
+ filecontents = strip_pgp_signature(dsc_filename);
+
+ if dsc.has_key("build-depends"):
+ builddep = split_depends(dsc["build-depends"]);
+ builddepstr = create_depends_string(builddep);
+ filecontents = re_builddep.sub("Build-Depends: "+builddepstr, filecontents);
+
+ if dsc.has_key("build-depends-indep"):
+ builddepindstr = create_depends_string(split_depends(dsc["build-depends-indep"]));
+ filecontents = re_builddepind.sub("Build-Depends-Indep: "+builddepindstr, filecontents);
+
+ if dsc.has_key("architecture") :
+ if (dsc["architecture"] != "any"):
+ newarch = arch_colour + dsc["architecture"] + end_colour;
+ filecontents = re_arch.sub("Architecture: " + newarch, filecontents);
+
+ return filecontents;
+
+def create_depends_string (depends_tree):
+ # just look up unstable for now. possibly pull from .changes later
+ suite = "unstable";
+ result = "";
+ comma_count = 1;
+ for l in depends_tree:
+ if (comma_count >= 2):
+ result += ", ";
+ or_count = 1
+ for d in l:
+ if (or_count >= 2 ):
+ result += " | "
+ # doesn't do version lookup yet.
+
+ q = projectB.query("SELECT DISTINCT(b.package), b.version, c.name, su.suite_name FROM binaries b, files fi, location l, component c, bin_associations ba, suite su WHERE b.package='%s' AND b.file = fi.id AND fi.location = l.id AND l.component = c.id AND ba.bin=b.id AND ba.suite = su.id AND su.suite_name='%s' ORDER BY b.version desc" % (d['name'], suite));
+ ql = q.getresult();
+ if ql:
+ i = ql[0];
+
+ if i[2] == "contrib":
+ result += contrib_colour + d['name'];
+ elif i[2] == "non-free":
+ result += nonfree_colour + d['name'];
+ else :
+ result += main_colour + d['name'];
+
+ if d['version'] != '' :
+ result += " (%s)" % (d['version']);
+ result += end_colour;
+ else:
+ result += bold_colour + d['name'];
+ if d['version'] != '' :
+ result += " (%s)" % (d['version']);
+ result += end_colour;
+ or_count += 1;
+ comma_count += 1;
+ return result;
+
+def output_deb_info(filename):
+ (control, control_keys, section, depends, recommends, arch, maintainer) = read_control(filename);
+
+ if control == '':
+ print "no control info"
+ else:
+ for key in control_keys :
+ output = " " + key + ": "
+ if key == 'Depends':
+ output += create_depends_string(depends);
+ elif key == 'Recommends':
+ output += create_depends_string(recommends);
+ elif key == 'Section':
+ output += section;
+ elif key == 'Architecture':
+ output += arch;
+ elif key == 'Maintainer':
+ output += maintainer;
+ elif key == 'Description':
+ desc = control.Find(key);
+ desc = re_newlinespace.sub('\n ', desc);
+ output += desc;
+ else:
+ output += control.Find(key);
+ print output;
+
+def do_command (command, filename):
+ o = os.popen("%s %s" % (command, filename));
+ print o.read();
+
+def print_copyright (deb_filename):
+ package = re_package.sub(r'\1', deb_filename);
+ o = os.popen("ar p %s data.tar.gz | tar tzvf - | egrep 'usr(/share)?/doc/[^/]*/copyright' | awk '{ print $6 }' | head -n 1" % (deb_filename));
+ copyright = o.read()[:-1];
+
+ if copyright == "":
+ print "WARNING: No copyright found, please check package manually."
+ return;
+
+ doc_directory = re_doc_directory.sub(r'\1', copyright);
+ if package != doc_directory:
+ print "WARNING: wrong doc directory (expected %s, got %s)." % (package, doc_directory);
+ return;
+
+ o = os.popen("ar p %s data.tar.gz | tar xzOf - %s" % (deb_filename, copyright));
+ print o.read();
+
+def check_dsc (dsc_filename):
+ print "---- .dsc file for %s ----" % (dsc_filename);
+ (dsc) = read_dsc(dsc_filename)
+ print dsc
+
+def check_deb (deb_filename):
+ filename = os.path.basename(deb_filename);
+
+ if filename.endswith(".udeb"):
+ is_a_udeb = 1;
+ else:
+ is_a_udeb = 0;
+
+ print "---- control file for %s ----" % (filename);
+ #do_command ("dpkg -I", deb_filename);
+ output_deb_info(deb_filename)
+
+ if is_a_udeb:
+ print "---- skipping lintian check for µdeb ----";
+ print ;
+ else:
+ print "---- lintian check for %s ----" % (filename);
+ do_command ("lintian", deb_filename);
+ print "---- linda check for %s ----" % (filename);
+ do_command ("linda", deb_filename);
+
+ print "---- contents of %s ----" % (filename);
+ do_command ("dpkg -c", deb_filename);
+
+ if is_a_udeb:
+ print "---- skipping copyright for µdeb ----";
+ else:
+ print "---- copyright of %s ----" % (filename);
+ print_copyright(deb_filename);
+
+ print "---- file listing of %s ----" % (filename);
+ do_command ("ls -l", deb_filename);
+
+# Read a file, strip the signature and return the modified contents as
+# a string.
+def strip_pgp_signature (filename):
+ file = utils.open_file (filename);
+ contents = "";
+ inside_signature = 0;
+ skip_next = 0;
+ for line in file.readlines():
+ if line[:-1] == "":
+ continue;
+ if inside_signature:
+ continue;
+ if skip_next:
+ skip_next = 0;
+ continue;
+ if line.startswith("-----BEGIN PGP SIGNED MESSAGE"):
+ skip_next = 1;
+ continue;
+ if line.startswith("-----BEGIN PGP SIGNATURE"):
+ inside_signature = 1;
+ continue;
+ if line.startswith("-----END PGP SIGNATURE"):
+ inside_signature = 0;
+ continue;
+ contents += line;
+ file.close();
+ return contents;
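The stripping loop above can be exercised on a string instead of a file; a rough Python 3 equivalent (same state machine, blank lines dropped as in the original):

```python
def strip_pgp_signature(text):
    """Remove clearsign armour from a message given as a string."""
    contents = []
    inside_signature = False
    skip_next = False            # skips the "Hash:" line after BEGIN
    for line in text.splitlines(True):
        if line.rstrip('\n') == "":
            continue             # blank lines are dropped, as in the original
        if inside_signature:
            continue
        if skip_next:
            skip_next = False
            continue
        if line.startswith("-----BEGIN PGP SIGNED MESSAGE"):
            skip_next = True
            continue
        if line.startswith("-----BEGIN PGP SIGNATURE"):
            inside_signature = True
            continue
        if line.startswith("-----END PGP SIGNATURE"):
            inside_signature = False
            continue
        contents.append(line)
    return "".join(contents)

signed = ("-----BEGIN PGP SIGNED MESSAGE-----\n"
          "Hash: SHA1\n"
          "\n"
          "Format: 1.7\n"
          "-----BEGIN PGP SIGNATURE-----\n"
          "iQCVAwUB...\n"
          "-----END PGP SIGNATURE-----\n")
assert strip_pgp_signature(signed) == "Format: 1.7\n"
```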
+
+# Display the .changes [without the signature]
+def display_changes (changes_filename):
+ print "---- .changes file for %s ----" % (changes_filename);
+ print strip_pgp_signature(changes_filename);
+
+def check_changes (changes_filename):
+ display_changes(changes_filename);
+
+ changes = utils.parse_changes (changes_filename);
+ files = utils.build_file_list(changes);
+ for file in files.keys():
+ if file.endswith(".deb") or file.endswith(".udeb"):
+ check_deb(file);
+ if file.endswith(".dsc"):
+ check_dsc(file);
+ # else: => byhand
+
+def main ():
+ global Cnf, projectB;
+
+# Cnf = utils.get_conf()
+
+ Arguments = [('h',"help","Fernanda::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Fernanda::Options::%s" % (i)):
+ Cnf["Fernanda::Options::%s" % (i)] = "";
+
+ args = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Fernanda::Options")
+
+ if Options["Help"]:
+ usage();
+
+ stdout_fd = sys.stdout;
+
+ for file in args:
+ try:
+ # Pipe output for each argument through less
+ less_fd = os.popen("less -R -", 'w', 0);
+ # -R added to display raw control chars for colour
+ sys.stdout = less_fd;
+
+ try:
+ if file.endswith(".changes"):
+ check_changes(file);
+ elif file.endswith(".deb") or file.endswith(".udeb"):
+ check_deb(file);
+ elif file.endswith(".dsc"):
+ check_dsc(file);
+ else:
+ utils.fubar("Unrecognised file type: '%s'." % (file));
+ finally:
+ # Reset stdout here so future less invocations aren't FUBAR
+ less_fd.close();
+ sys.stdout = stdout_fd;
+ except IOError, e:
+ if errno.errorcode[e.errno] == 'EPIPE':
+ utils.warn("[fernanda] Caught EPIPE; skipping.");
+ pass;
+ else:
+ raise;
+ except KeyboardInterrupt:
+ utils.warn("[fernanda] Caught C-c; skipping.");
+ pass;
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Check for users with no packages in the archive
+# Copyright (C) 2003 James Troup <james@nocrew.org>
+# $Id: rosamund,v 1.1 2003-09-07 13:48:51 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import ldap, pg, sys, time;
+import apt_pkg;
+import utils;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: rosamund
+Checks for users with no packages in the archive
+
+ -h, --help show this help and exit."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def get_ldap_value(entry, value):
+ ret = entry.get(value);
+ if not ret:
+ return "";
+ else:
+ # FIXME: what about > 0 ?
+ return ret[0];
+
+def main():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+ Arguments = [('h',"help","Rosamund::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Rosamund::Options::%s" % (i)):
+ Cnf["Rosamund::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Rosamund::Options")
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+
+ before = time.time();
+ sys.stderr.write("[Getting info from the LDAP server...");
+ LDAPDn = Cnf["Emilie::LDAPDn"];
+ LDAPServer = Cnf["Emilie::LDAPServer"];
+ l = ldap.open(LDAPServer);
+ l.simple_bind_s("","");
+ Attrs = l.search_s(LDAPDn, ldap.SCOPE_ONELEVEL,
+ "(&(keyfingerprint=*)(gidnumber=%s))" % (Cnf["Julia::ValidGID"]),
+ ["uid", "cn", "mn", "sn", "createtimestamp"]);
+ sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
+
+
+ db_uid = {};
+ db_unstable_uid = {};
+
+ before = time.time();
+ sys.stderr.write("[Getting UID info for entire archive...");
+ q = projectB.query("SELECT DISTINCT u.uid FROM uid u, fingerprint f WHERE f.uid = u.id;");
+ sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
+ for i in q.getresult():
+ db_uid[i[0]] = "";
+
+ before = time.time();
+ sys.stderr.write("[Getting UID info for unstable...");
+ q = projectB.query("""
+SELECT DISTINCT u.uid FROM suite su, src_associations sa, source s, fingerprint f, uid u
+ WHERE f.uid = u.id AND sa.source = s.id AND sa.suite = su.id
+ AND su.suite_name = 'unstable' AND s.sig_fpr = f.id
+UNION
+SELECT DISTINCT u.uid FROM suite su, bin_associations ba, binaries b, fingerprint f, uid u
+ WHERE f.uid = u.id AND ba.bin = b.id AND ba.suite = su.id
+ AND su.suite_name = 'unstable' AND b.sig_fpr = f.id""");
+ sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
+ for i in q.getresult():
+ db_unstable_uid[i[0]] = "";
+
+ now = time.time();
+
+ for i in Attrs:
+ entry = i[1];
+ uid = entry["uid"][0];
+ created = time.mktime(time.strptime(entry["createtimestamp"][0][:8], '%Y%m%d'));
+ diff = now - created;
+ # 31536000 is 1 year in seconds, i.e. 60 * 60 * 24 * 365
+ if diff < 31536000 / 2:
+ when = "Less than 6 months ago";
+ elif diff < 31536000:
+ when = "Less than 1 year ago";
+ elif diff < 31536000 * 1.5:
+ when = "Less than 18 months ago";
+ elif diff < 31536000 * 2:
+ when = "Less than 2 years ago";
+ elif diff < 31536000 * 3:
+ when = "Less than 3 years ago";
+ else:
+ when = "More than 3 years ago";
+ name = " ".join([get_ldap_value(entry, "cn"),
+ get_ldap_value(entry, "mn"),
+ get_ldap_value(entry, "sn")]);
+ if not db_uid.has_key(uid):
+ print "NONE %s (%s) %s" % (uid, name, when);
+ else:
+ if not db_unstable_uid.has_key(uid):
+ print "NOT_UNSTABLE %s (%s) %s" % (uid, name, when);
+
+############################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+#!/usr/bin/env python
+
+###########################################################
+# generates partial package updates list
+
+# idea and basic implementation by Anthony, some changes by Andreas
+# parts are stolen from ziyi
+#
+# Copyright (C) 2004-5 Anthony Towns <aj@azure.humbug.org.au>
+# Copyright (C) 2004-5 Andreas Barth <aba@not.so.argh.org>
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+
+# < elmo> bah, don't bother me with annoying facts
+# < elmo> I was on a roll
+
+
+################################################################################
+
+import sys, os, tempfile
+import apt_pkg
+import utils
+
+################################################################################
+
+projectB = None;
+Cnf = None;
+Logger = None;
+Options = None;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: tiffani [OPTIONS] [suites]
+Write out ed-style diffs to Packages/Source lists
+
+ -h, --help show this help and exit
+ -c give the canonical path of the file
+ -p name for the patch (defaults to current time)
+ -n take no action
+ """
+ sys.exit(exit_code);
+
+
+def tryunlink(file):
+ try:
+ os.unlink(file)
+ except OSError:
+ print "warning: could not remove %s" % (file)
+
+def smartstat(file):
+ for ext in ["", ".gz", ".bz2"]:
+ if os.path.isfile(file + ext):
+ return (ext, os.stat(file + ext))
+ return (None, None)
+
+def smartlink(f, t):
+ if os.path.isfile(f):
+ os.link(f,t)
+ elif os.path.isfile("%s.gz" % (f)):
+ os.system("gzip -d < %s.gz > %s" % (f, t))
+ elif os.path.isfile("%s.bz2" % (f)):
+ os.system("bzip2 -d < %s.bz2 > %s" % (f, t))
+ else:
+ print "missing: %s" % (f)
+ raise IOError, f
+
+def smartopen(file):
+ if os.path.isfile(file):
+ f = open(file, "r")
+ elif os.path.isfile("%s.gz" % file):
+ f = create_temp_file(os.popen("zcat %s.gz" % file, "r"))
+ elif os.path.isfile("%s.bz2" % file):
+ f = create_temp_file(os.popen("bzcat %s.bz2" % file, "r"))
+ else:
+ f = None
+ return f
+
+def pipe_file(f, t):
+ f.seek(0)
+ while 1:
+ l = f.read()
+ if not l: break
+ t.write(l)
+ t.close()
+
+class Updates:
+ def __init__(self, readpath = None, max = 14):
+ self.can_path = None
+ self.history = {}
+ self.history_order = []
+ self.max = max
+ self.readpath = readpath
+ self.filesizesha1 = None
+
+ if readpath:
+ try:
+ f = open(readpath + "/Index")
+ x = f.readline()
+
+ def read_hashs(ind, f, self, x=x):
+ while 1:
+ x = f.readline()
+ if not x or x[0] != " ": break
+ l = x.split()
+ if not self.history.has_key(l[2]):
+ self.history[l[2]] = [None,None]
+ self.history_order.append(l[2])
+ self.history[l[2]][ind] = (l[0], int(l[1]))
+ return x
+
+ while x:
+ l = x.split()
+
+ if len(l) == 0:
+ x = f.readline()
+ continue
+
+ if l[0] == "SHA1-History:":
+ x = read_hashs(0,f,self)
+ continue
+
+ if l[0] == "SHA1-Patches:":
+ x = read_hashs(1,f,self)
+ continue
+
+ if l[0] == "Canonical-Name:" or l[0]=="Canonical-Path:":
+ self.can_path = l[1]
+
+ if l[0] == "SHA1-Current:" and len(l) == 3:
+ self.filesizesha1 = (l[1], int(l[2]))
+
+ x = f.readline()
+
+ except IOError:
+ pass
+
+ def dump(self, out=sys.stdout):
+ if self.can_path:
+ out.write("Canonical-Path: %s\n" % (self.can_path))
+
+ if self.filesizesha1:
+ out.write("SHA1-Current: %s %7d\n" % (self.filesizesha1))
+
+ hs = self.history
+ l = self.history_order[:]
+
+ cnt = len(l)
+ if cnt > self.max:
+ for h in l[:cnt-self.max]:
+ tryunlink("%s/%s.gz" % (self.readpath, h))
+ del hs[h]
+ l = l[cnt-self.max:]
+ self.history_order = l[:]
+
+ out.write("SHA1-History:\n")
+ for h in l:
+ out.write(" %s %7d %s\n" % (hs[h][0][0], hs[h][0][1], h))
+ out.write("SHA1-Patches:\n")
+ for h in l:
+ out.write(" %s %7d %s\n" % (hs[h][1][0], hs[h][1][1], h))
+
+def create_temp_file(r):
+ f = tempfile.TemporaryFile()
+ while 1:
+ x = r.readline()
+ if not x: break
+ f.write(x)
+ r.close()
+ del x,r
+ f.flush()
+ f.seek(0)
+ return f
+
+def sizesha1(f):
+ size = os.fstat(f.fileno())[6]
+ f.seek(0)
+ sha1sum = apt_pkg.sha1sum(f)
+ return (sha1sum, size)
+
+def genchanges(Options, outdir, oldfile, origfile, maxdiffs = 14):
+ if Options.has_key("NoAct"):
+ print "not doing anything"
+ return
+
+ patchname = Options["PatchName"]
+
+ # origfile = /path/to/Packages
+ # oldfile = ./Packages
+ # newfile = ./Packages.tmp
+ # difffile = outdir/patchname
+ # index => outdir/Index
+
+ # (outdir, oldfile, origfile) = argv
+
+ newfile = oldfile + ".new"
+ difffile = "%s/%s" % (outdir, patchname)
+
+ upd = Updates(outdir, int(maxdiffs))
+ (oldext, oldstat) = smartstat(oldfile)
+ (origext, origstat) = smartstat(origfile)
+ if not origstat:
+ print "%s doesn't exist" % (origfile)
+ return
+ if not oldstat:
+ print "initial run"
+ os.link(origfile + origext, oldfile + origext)
+ return
+
+ if oldstat[1:3] == origstat[1:3]:
+ print "hardlink unbroken, assuming unchanged"
+ return
+
+ oldf = smartopen(oldfile)
+ oldsizesha1 = sizesha1(oldf)
+
+ # should probably early exit if either of these checks fail
+ # alternatively (optionally?) could just trim the patch history
+
+ if upd.filesizesha1:
+ if upd.filesizesha1 != oldsizesha1:
+ print "old file seems to have changed! %s %s => %s %s" % (upd.filesizesha1 + oldsizesha1)
+
+ # XXX this should be usable now
+ #
+ #for d in upd.history.keys():
+ # df = smartopen("%s/%s" % (outdir,d))
+ # act_sha1size = sizesha1(df)
+ # df.close()
+ # exp_sha1size = upd.history[d][1]
+ # if act_sha1size != exp_sha1size:
+ # print "patch file %s seems to have changed! %s %s => %s %s" % \
+ # (d,) + exp_sha1size + act_sha1size
+
+ if Options.has_key("CanonicalPath"): upd.can_path=Options["CanonicalPath"]
+
+ if os.path.exists(newfile): os.unlink(newfile)
+ smartlink(origfile, newfile)
+ newf = open(newfile, "r")
+ newsizesha1 = sizesha1(newf)
+ newf.close()
+
+ if newsizesha1 == oldsizesha1:
+ os.unlink(newfile)
+ oldf.close()
+ print "file unchanged, not generating diff"
+ else:
+ if not os.path.isdir(outdir): os.mkdir(outdir)
+ print "generating diff"
+ w = os.popen("diff --ed - %s | gzip -c -9 > %s.gz" %
+ (newfile, difffile), "w")
+ pipe_file(oldf, w)
+ oldf.close()
+
+ difff = smartopen(difffile)
+ difsizesha1 = sizesha1(difff)
+ difff.close()
+
+ upd.history[patchname] = (oldsizesha1, difsizesha1)
+ upd.history_order.append(patchname)
+
+ upd.filesizesha1 = newsizesha1
+
+ os.unlink(oldfile + oldext)
+ os.link(origfile + origext, oldfile + origext)
+ os.unlink(newfile)
+
+ f = open(outdir + "/Index", "w")
+ upd.dump(f)
+ f.close()
+
+
+def main():
+ global Cnf, Options, Logger
+
+ os.umask(0002);
+
+ Cnf = utils.get_conf();
+ Arguments = [ ('h', "help", "Tiffani::Options::Help"),
+ ('c', None, "Tiffani::Options::CanonicalPath", "hasArg"),
+ ('p', "patchname", "Tiffani::Options::PatchName", "hasArg"),
+ ('r', "rootdir", "Tiffani::Options::RootDir", "hasArg"),
+ ('d', "tmpdir", "Tiffani::Options::TempDir", "hasArg"),
+ ('m', "maxdiffs", "Tiffani::Options::MaxDiffs", "hasArg"),
+ ('n', "n-act", "Tiffani::Options::NoAct"),
+ ];
+ suites = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Tiffani::Options");
+ if Options.has_key("Help"): usage();
+
+ maxdiffs = Options.get("MaxDiffs::Default", "14")
+ maxpackages = Options.get("MaxDiffs::Packages", maxdiffs)
+ maxcontents = Options.get("MaxDiffs::Contents", maxdiffs)
+ maxsources = Options.get("MaxDiffs::Sources", maxdiffs)
+
+ if not Options.has_key("PatchName"):
+ format = "%Y-%m-%d-%H%M.%S"
+ i,o = os.popen2("date +%s" % (format))
+ i.close()
+ Options["PatchName"] = o.readline()[:-1]
+ o.close()
+
+ AptCnf = apt_pkg.newConfiguration()
+ apt_pkg.ReadConfigFileISC(AptCnf,utils.which_apt_conf_file())
+
+ if Options.has_key("RootDir"): Cnf["Dir::Root"] = Options["RootDir"]
+
+ if not suites:
+ suites = Cnf.SubTree("Suite").List()
+
+ for suite in suites:
+ if suite == "Experimental": continue
+
+ print "Processing: " + suite
+ SuiteBlock = Cnf.SubTree("Suite::" + suite)
+
+ if SuiteBlock.has_key("Untouchable"):
+ print "Skipping: " + suite + " (untouchable)"
+ continue
+
+ suite = suite.lower()
+
+ architectures = SuiteBlock.ValueList("Architectures")
+
+ if SuiteBlock.has_key("Components"):
+ components = SuiteBlock.ValueList("Components")
+ else:
+ components = []
+
+ suite_suffix = Cnf.Find("Dinstall::SuiteSuffix");
+ if components and suite_suffix:
+ longsuite = suite + "/" + suite_suffix;
+ else:
+ longsuite = suite;
+
+ tree = SuiteBlock.get("Tree", "dists/%s" % (longsuite))
+
+ if AptCnf.has_key("tree::%s" % (tree)):
+ sections = AptCnf["tree::%s::Sections" % (tree)].split()
+ elif AptCnf.has_key("bindirectory::%s" % (tree)):
+ sections = AptCnf["bindirectory::%s::Sections" % (tree)].split()
+ else:
+ aptcnf_filename = os.path.basename(utils.which_apt_conf_file());
+ print "ALERT: suite %s not in %s, nor untouchable!" % (suite, aptcnf_filename);
+ continue
+
+ for architecture in architectures:
+ if architecture == "all":
+ continue
+
+ if architecture != "source":
+ # Process Contents
+ file = "%s/Contents-%s" % (Cnf["Dir::Root"] + tree,
+ architecture)
+ storename = "%s/%s_contents_%s" % (Options["TempDir"], suite, architecture)
+ print "running contents for %s %s : " % (suite, architecture),
+ genchanges(Options, file + ".diff", storename, file, \
+ Cnf.get("Suite::%s::Tiffani::MaxDiffs::Contents" % (suite), maxcontents))
+
+ # use sections instead of components since katie.conf
+ # treats "foo/bar main" as suite "foo", suitesuffix "bar" and
+ # component "bar/main". suck.
+
+ for component in sections:
+ if architecture == "source":
+ longarch = architecture
+ packages = "Sources"
+ maxsuite = maxsources
+ else:
+ longarch = "binary-%s"% (architecture)
+ packages = "Packages"
+ maxsuite = maxpackages
+
+ file = "%s/%s/%s/%s" % (Cnf["Dir::Root"] + tree,
+ component, longarch, packages)
+ storename = "%s/%s_%s_%s" % (Options["TempDir"], suite, component, architecture)
+ print "running for %s %s %s : " % (suite, component, architecture),
+ genchanges(Options, file + ".diff", storename, file, \
+ Cnf.get("Suite::%s::Tiffani::MaxDiffs::%s" % (suite, packages), maxsuite))
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+#!/usr/bin/env python
+
+# Create all the Release files
+
+# Copyright (C) 2001, 2002 Anthony Towns <ajt@debian.org>
+# $Id: ziyi,v 1.27 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+# ``Bored now''
+
+################################################################################
+
+import sys, os, popen2, tempfile, stat, time
+import utils
+import apt_pkg
+
+################################################################################
+
+Cnf = None
+projectB = None
+out = None
+AptCnf = None
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: ziyi [OPTION]... [SUITE]...
+Generate Release files (for SUITE).
+
+ -h, --help show this help and exit
+
+If no SUITE is given, Release files are generated for all suites."""
+
+ sys.exit(exit_code)
+
+################################################################################
+
+def add_tiffani (files, path, indexstem):
+ index = "%s.diff/Index" % (indexstem)
+ filepath = "%s/%s" % (path, index)
+ if os.path.exists(filepath):
+ #print "ALERT: there was a tiffani file %s" % (filepath)
+ files.append(index)
+
+def compressnames (tree,type,file):
+ compress = AptCnf.get("%s::%s::Compress" % (tree,type), AptCnf.get("Default::%s::Compress" % (type), ". gzip"))
+ result = []
+ cl = compress.split()
+ uncompress = ("." not in cl)
+ for mode in cl:
+ if mode == ".":
+ result.append(file)
+ elif mode == "gzip":
+ if uncompress:
+ result.append("<zcat/.gz>" + file)
+ uncompress = 0
+ result.append(file + ".gz")
+ elif mode == "bzip2":
+ if uncompress:
+ result.append("<bzcat/.bz2>" + file)
+ uncompress = 0
+ result.append(file + ".bz2")
+ return result
+
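As a concrete illustration of what compressnames produces, here is a stripped-down Python 3 rendering of the same logic (the config lookup is replaced by passing the Compress value directly; filenames are illustrative). The `<zcat/.gz>` pseudo-name tells the checksum code further down to decompress on the fly when no uncompressed index is kept on disk:

```python
def compressnames(compress, filename):
    """Map a Compress setting (e.g. ". gzip") to the list of index
    names to emit.  "." means the uncompressed file exists on disk;
    when it doesn't, a "<cat/ext>" pseudo-name requests on-the-fly
    decompression so the uncompressed checksum can still be listed."""
    result = []
    cl = compress.split()
    uncompress = "." not in cl
    for mode in cl:
        if mode == ".":
            result.append(filename)
        elif mode == "gzip":
            if uncompress:
                result.append("<zcat/.gz>" + filename)
                uncompress = False
            result.append(filename + ".gz")
        elif mode == "bzip2":
            if uncompress:
                result.append("<bzcat/.bz2>" + filename)
                uncompress = False
            result.append(filename + ".bz2")
    return result

print(compressnames(". gzip", "main/binary-i386/Packages"))
# -> ['main/binary-i386/Packages', 'main/binary-i386/Packages.gz']
print(compressnames("gzip bzip2", "main/binary-i386/Packages"))
# -> ['<zcat/.gz>main/binary-i386/Packages',
#     'main/binary-i386/Packages.gz', 'main/binary-i386/Packages.bz2']
```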
+def create_temp_file (cmd):
+ f = tempfile.TemporaryFile()
+ r = popen2.popen2(cmd)
+ r[1].close()
+ r = r[0]
+ size = 0
+ while 1:
+ x = r.readline()
+ if not x:
+ r.close()
+ del x,r
+ break
+ f.write(x)
+ size += len(x)
+ f.flush()
+ f.seek(0)
+ return (size, f)
+
+def print_md5sha_files (tree, files, hashop):
+ path = Cnf["Dir::Root"] + tree + "/"
+ for name in files:
+ try:
+ if name[0] == "<":
+ j = name.index("/")
+ k = name.index(">")
+ (cat, ext, name) = (name[1:j], name[j+1:k], name[k+1:])
+ (size, file_handle) = create_temp_file("%s %s%s%s" %
+ (cat, path, name, ext))
+ else:
+ size = os.stat(path + name)[stat.ST_SIZE]
+ file_handle = utils.open_file(path + name)
+ except utils.cant_open_exc:
+ print "ALERT: Couldn't open " + path + name
+ else:
+ hash = hashop(file_handle)
+ file_handle.close()
+ out.write(" %s %8d %s\n" % (hash, size, name))
+
+def print_md5_files (tree, files):
+ print_md5sha_files (tree, files, apt_pkg.md5sum)
+
+def print_sha1_files (tree, files):
+ print_md5sha_files (tree, files, apt_pkg.sha1sum)
+
+################################################################################
+
+def main ():
+ global Cnf, AptCnf, projectB, out
+ out = sys.stdout;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('h',"help","Ziyi::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Ziyi::Options::%s" % (i)):
+ Cnf["Ziyi::Options::%s" % (i)] = "";
+
+ suites = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv)
+ Options = Cnf.SubTree("Ziyi::Options")
+
+ if Options["Help"]:
+ usage();
+
+ AptCnf = apt_pkg.newConfiguration()
+ apt_pkg.ReadConfigFileISC(AptCnf,utils.which_apt_conf_file())
+
+ if not suites:
+ suites = Cnf.SubTree("Suite").List()
+
+ for suite in suites:
+ print "Processing: " + suite
+ SuiteBlock = Cnf.SubTree("Suite::" + suite)
+
+ if SuiteBlock.has_key("Untouchable"):
+ print "Skipping: " + suite + " (untouchable)"
+ continue
+
+ suite = suite.lower()
+
+ origin = SuiteBlock["Origin"]
+ label = SuiteBlock.get("Label", origin)
+ version = SuiteBlock.get("Version", "")
+ codename = SuiteBlock.get("CodeName", "")
+
+ if SuiteBlock.has_key("NotAutomatic"):
+ notautomatic = "yes"
+ else:
+ notautomatic = ""
+
+ if SuiteBlock.has_key("Components"):
+ components = SuiteBlock.ValueList("Components")
+ else:
+ components = []
+
+ suite_suffix = Cnf.Find("Dinstall::SuiteSuffix");
+ if components and suite_suffix:
+ longsuite = suite + "/" + suite_suffix;
+ else:
+ longsuite = suite;
+
+ tree = SuiteBlock.get("Tree", "dists/%s" % (longsuite))
+
+ if AptCnf.has_key("tree::%s" % (tree)):
+ pass
+ elif AptCnf.has_key("bindirectory::%s" % (tree)):
+ pass
+ else:
+ aptcnf_filename = os.path.basename(utils.which_apt_conf_file());
+ print "ALERT: suite %s not in %s, nor untouchable!" % (suite, aptcnf_filename);
+ continue
+
+ print Cnf["Dir::Root"] + tree + "/Release"
+ out = open(Cnf["Dir::Root"] + tree + "/Release", "w")
+
+ out.write("Origin: %s\n" % (origin))
+ out.write("Label: %s\n" % (label))
+ out.write("Suite: %s\n" % (suite))
+ if version != "":
+ out.write("Version: %s\n" % (version))
+ if codename != "":
+ out.write("Codename: %s\n" % (codename))
+ out.write("Date: %s\n" % (time.strftime("%a, %d %b %Y %H:%M:%S UTC", time.gmtime(time.time()))))
+ if notautomatic != "":
+ out.write("NotAutomatic: %s\n" % (notautomatic))
+ out.write("Architectures: %s\n" % (" ".join(filter(utils.real_arch, SuiteBlock.ValueList("Architectures")))))
+ if components:
+ out.write("Components: %s\n" % (" ".join(components)))
+
+ out.write("Description: %s\n" % (SuiteBlock["Description"]))
+
+ files = []
+
+ if AptCnf.has_key("tree::%s" % (tree)):
+ for sec in AptCnf["tree::%s::Sections" % (tree)].split():
+ for arch in AptCnf["tree::%s::Architectures" % (tree)].split():
+ if arch == "source":
+ filepath = "%s/%s/Sources" % (sec, arch)
+ for file in compressnames("tree::%s" % (tree), "Sources", filepath):
+ files.append(file)
+ add_tiffani(files, Cnf["Dir::Root"] + tree, filepath)
+ else:
+ disks = "%s/disks-%s" % (sec, arch)
+ diskspath = Cnf["Dir::Root"]+tree+"/"+disks
+ if os.path.exists(diskspath):
+ for dir in os.listdir(diskspath):
+ if os.path.exists("%s/%s/md5sum.txt" % (diskspath, dir)):
+ files.append("%s/%s/md5sum.txt" % (disks, dir))
+
+ filepath = "%s/binary-%s/Packages" % (sec, arch)
+ for file in compressnames("tree::%s" % (tree), "Packages", filepath):
+ files.append(file)
+ add_tiffani(files, Cnf["Dir::Root"] + tree, filepath)
+
+ if arch == "source":
+ rel = "%s/%s/Release" % (sec, arch)
+ else:
+ rel = "%s/binary-%s/Release" % (sec, arch)
+ relpath = Cnf["Dir::Root"]+tree+"/"+rel
+
+ try:
+ release = open(relpath, "w")
+ #release = open(longsuite.replace("/","_") + "_" + arch + "_" + sec + "_Release", "w")
+ except IOError:
+ utils.fubar("Couldn't write to " + relpath);
+
+ release.write("Archive: %s\n" % (suite))
+ if version != "":
+ release.write("Version: %s\n" % (version))
+ if suite_suffix:
+ release.write("Component: %s/%s\n" % (suite_suffix,sec));
+ else:
+ release.write("Component: %s\n" % (sec));
+ release.write("Origin: %s\n" % (origin))
+ release.write("Label: %s\n" % (label))
+ if notautomatic != "":
+ release.write("NotAutomatic: %s\n" % (notautomatic))
+ release.write("Architecture: %s\n" % (arch))
+ release.close()
+ files.append(rel)
+
+ if AptCnf.has_key("tree::%s/main" % (tree)):
+ sec = AptCnf["tree::%s/main::Sections" % (tree)].split()[0]
+ if sec != "debian-installer":
+ print "ALERT: weird non debian-installer section in %s" % (tree)
+
+ for arch in AptCnf["tree::%s/main::Architectures" % (tree)].split():
+ if arch != "source": # always true
+ for file in compressnames("tree::%s/main" % (tree), "Packages", "main/%s/binary-%s/Packages" % (sec, arch)):
+ files.append(file)
+ elif AptCnf.has_key("tree::%s::FakeDI" % (tree)):
+ usetree = AptCnf["tree::%s::FakeDI" % (tree)]
+ sec = AptCnf["tree::%s/main::Sections" % (usetree)].split()[0]
+ if sec != "debian-installer":
+ print "ALERT: weird non debian-installer section in %s" % (usetree)
+
+ for arch in AptCnf["tree::%s/main::Architectures" % (usetree)].split():
+ if arch != "source": # always true
+ for file in compressnames("tree::%s/main" % (usetree), "Packages", "main/%s/binary-%s/Packages" % (sec, arch)):
+ files.append(file)
+
+ elif AptCnf.has_key("bindirectory::%s" % (tree)):
+ for file in compressnames("bindirectory::%s" % (tree), "Packages", AptCnf["bindirectory::%s::Packages" % (tree)]):
+ files.append(file.replace(tree+"/","",1))
+ for file in compressnames("bindirectory::%s" % (tree), "Sources", AptCnf["bindirectory::%s::Sources" % (tree)]):
+ files.append(file.replace(tree+"/","",1))
+ else:
+ print "ALERT: no tree/bindirectory for %s" % (tree)
+
+ out.write("MD5Sum:\n")
+ print_md5_files(tree, files)
+ out.write("SHA1:\n")
+ print_sha1_files(tree, files)
+
+ out.close()
+ if Cnf.has_key("Dinstall::SigningKeyring"):
+ keyring = "--secret-keyring \"%s\"" % Cnf["Dinstall::SigningKeyring"]
+ if Cnf.has_key("Dinstall::SigningPubKeyring"):
+ keyring += " --keyring \"%s\"" % Cnf["Dinstall::SigningPubKeyring"]
+
+ arguments = "--no-options --batch --no-tty --armour"
+ if Cnf.has_key("Dinstall::SigningKeyIds"):
+ signkeyids = Cnf["Dinstall::SigningKeyIds"].split()
+ else:
+ signkeyids = [""]
+
+ dest = Cnf["Dir::Root"] + tree + "/Release.gpg"
+ if os.path.exists(dest):
+ os.unlink(dest)
+
+ for keyid in signkeyids:
+ if keyid != "": defkeyid = "--default-key %s" % keyid
+ else: defkeyid = ""
+ os.system("gpg %s %s %s --detach-sign <%s >>%s" %
+ (keyring, defkeyid, arguments,
+ Cnf["Dir::Root"] + tree + "/Release", dest))
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Populate the DB
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: neve,v 1.20 2004-06-17 14:59:57 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+###############################################################################
+
+# 04:36|<aj> elmo: you're making me waste 5 seconds per architecture!!!!!! YOU BASTARD!!!!!
+
+###############################################################################
+
+# This code is a horrible mess for two reasons:
+
+# (o) For Debian's usage, it's doing something like 160k INSERTs;
+# even on auric, that makes the program unusable unless we get
+# involved in all sorts of silly optimization games (local dicts to
+# avoid redundant SELECTs, using COPY FROM rather than INSERTs, etc.)
+
+# (o) It's very site specific, because I don't expect to use this
+# script again in a hurry, and I don't want to spend any more time
+# on it than absolutely necessary.
+
+###############################################################################
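The "silly optimization games" mentioned above boil down to two tricks, which a minimal Python 3 sketch of the get_or_set_files_id pattern (defined further down in this file) can show: a local dict hands out each row's id exactly once without hitting the database, and rows are buffered as tab-separated text so they can be bulk-loaded with a single COPY FROM instead of one INSERT per row. The StringIO here stands in for the export file later fed to COPY:

```python
import io

files_id_cache = {}                # key -> id, avoids redundant SELECT/INSERT
files_id_serial = 0
files_query_cache = io.StringIO()  # stand-in for the file fed to COPY FROM

def get_or_set_files_id(filename, size, md5sum, location_id):
    """Return a stable id for a file row, writing the row to the
    COPY buffer only the first time this exact key is seen."""
    global files_id_serial
    key = (filename, size, md5sum, location_id)
    if key not in files_id_cache:
        files_id_serial += 1
        # \N is PostgreSQL's COPY notation for NULL (the last column)
        files_query_cache.write("%d\t%s\t%s\t%s\t%d\t\\N\n"
                                % (files_id_serial, filename, size, md5sum, location_id))
        files_id_cache[key] = files_id_serial
    return files_id_cache[key]

a = get_or_set_files_id("pool/main/a/apt/apt_0.5.4.dsc", "610", "abc123", 1)
b = get_or_set_files_id("pool/main/a/apt/apt_0.5.4.dsc", "610", "abc123", 1)
# duplicate key: same id both times, and only one line buffered for COPY FROM
```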
+
+import commands, os, pg, re, sys, time;
+import apt_pkg;
+import db_access, utils;
+
+###############################################################################
+
+re_arch_from_filename = re.compile(r"binary-[^/]+")
+
+###############################################################################
+
+Cnf = None;
+projectB = None;
+files_id_cache = {};
+source_cache = {};
+arch_all_cache = {};
+binary_cache = {};
+location_path_cache = {};
+#
+files_id_serial = 0;
+source_id_serial = 0;
+src_associations_id_serial = 0;
+dsc_files_id_serial = 0;
+files_query_cache = None;
+source_query_cache = None;
+src_associations_query_cache = None;
+dsc_files_query_cache = None;
+orig_tar_gz_cache = {};
+#
+binaries_id_serial = 0;
+binaries_query_cache = None;
+bin_associations_id_serial = 0;
+bin_associations_query_cache = None;
+#
+source_cache_for_binaries = {};
+reject_message = "";
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: neve
+Initializes a projectB database from an existing archive
+
+ -a, --action actually perform the initialization
+ -h, --help show this help and exit."""
+ sys.exit(exit_code)
+
+###############################################################################
+
+def reject (str, prefix="Rejected: "):
+ global reject_message;
+ if str:
+ reject_message += prefix + str + "\n";
+
+###############################################################################
+
+def check_signature (filename):
+ if not utils.re_taint_free.match(os.path.basename(filename)):
+ reject("!!WARNING!! tainted filename: '%s'." % (filename));
+ return None;
+
+ status_read, status_write = os.pipe();
+ cmd = "gpgv --status-fd %s --keyring %s --keyring %s %s" \
+ % (status_write, Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"], filename);
+ (output, status, exit_status) = utils.gpgv_get_status_output(cmd, status_read, status_write);
+
+ # Process the status-fd output
+ keywords = {};
+ bad = internal_error = "";
+ for line in status.split('\n'):
+ line = line.strip();
+ if line == "":
+ continue;
+ split = line.split();
+ if len(split) < 2:
+ internal_error += "gpgv status line is malformed (< 2 atoms) ['%s'].\n" % (line);
+ continue;
+ (gnupg, keyword) = split[:2];
+ if gnupg != "[GNUPG:]":
+ internal_error += "gpgv status line is malformed (incorrect prefix '%s').\n" % (gnupg);
+ continue;
+ args = split[2:];
+ if keywords.has_key(keyword) and keyword != "NODATA" and keyword != "SIGEXPIRED":
+ internal_error += "found duplicate status token ('%s').\n" % (keyword);
+ continue;
+ else:
+ keywords[keyword] = args;
+
+ # If we failed to parse the status-fd output, let's just whine and bail now
+ if internal_error:
+ reject("internal error while performing signature check on %s." % (filename));
+ reject(internal_error, "");
+ reject("Please report the above errors to the Archive maintainers by replying to this mail.", "");
+ return None;
+
+ # Now check for obviously bad things in the processed output
+ if keywords.has_key("SIGEXPIRED"):
+ utils.warn("%s: signing key has expired." % (filename));
+ if keywords.has_key("KEYREVOKED"):
+ reject("key used to sign %s has been revoked." % (filename));
+ bad = 1;
+ if keywords.has_key("BADSIG"):
+ reject("bad signature on %s." % (filename));
+ bad = 1;
+ if keywords.has_key("ERRSIG") and not keywords.has_key("NO_PUBKEY"):
+ reject("failed to check signature on %s." % (filename));
+ bad = 1;
+ if keywords.has_key("NO_PUBKEY"):
+ args = keywords["NO_PUBKEY"];
+ if len(args) < 1:
+ reject("internal error while checking signature on %s." % (filename));
+ bad = 1;
+ else:
+ fingerprint = args[0];
+ if keywords.has_key("BADARMOR"):
+ reject("ascii armour of signature was corrupt in %s." % (filename));
+ bad = 1;
+ if keywords.has_key("NODATA"):
+ utils.warn("no signature found for %s." % (filename));
+ return "NOSIG";
+ #reject("no signature found in %s." % (filename));
+ #bad = 1;
+
+ if bad:
+ return None;
+
+ # Next, check that gpgv exited with a zero return code
+ if exit_status and not keywords.has_key("NO_PUBKEY"):
+ reject("gpgv failed while checking %s." % (filename));
+ if status.strip():
+ reject(utils.prefix_multi_line_string(status, " [GPG status-fd output:] "), "");
+ else:
+ reject(utils.prefix_multi_line_string(output, " [GPG output:] "), "");
+ return None;
+
+ # Sanity check the good stuff we expect
+ if not keywords.has_key("VALIDSIG"):
+ if not keywords.has_key("NO_PUBKEY"):
+ reject("signature on %s does not appear to be valid [No VALIDSIG]." % (filename));
+ bad = 1;
+ else:
+ args = keywords["VALIDSIG"];
+ if len(args) < 1:
+ reject("internal error while checking signature on %s." % (filename));
+ bad = 1;
+ else:
+ fingerprint = args[0];
+ if not keywords.has_key("GOODSIG") and not keywords.has_key("NO_PUBKEY"):
+ reject("signature on %s does not appear to be valid [No GOODSIG]." % (filename));
+ bad = 1;
+ if not keywords.has_key("SIG_ID") and not keywords.has_key("NO_PUBKEY"):
+ reject("signature on %s does not appear to be valid [No SIG_ID]." % (filename));
+ bad = 1;
+
+ # Finally ensure there's not something we don't recognise
+ known_keywords = utils.Dict(VALIDSIG="",SIG_ID="",GOODSIG="",BADSIG="",ERRSIG="",
+ SIGEXPIRED="",KEYREVOKED="",NO_PUBKEY="",BADARMOR="",
+ NODATA="");
+
+ for keyword in keywords.keys():
+ if not known_keywords.has_key(keyword):
+ reject("found unknown status token '%s' from gpgv with args '%r' in %s." % (keyword, keywords[keyword], filename));
+ bad = 1;
+
+ if bad:
+ return None;
+ else:
+ return fingerprint;
+
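The status-fd parsing loop in check_signature above can be shown in isolation. This is a hedged Python 3 sketch of just that step (the sample status lines are made up, not real gpgv output for any particular key): each `[GNUPG:] KEYWORD args...` line becomes a dict entry, with malformed or duplicated lines collected as internal errors:

```python
def parse_status_fd(status):
    """Turn gpgv --status-fd output into {keyword: args}, flagging
    malformed lines the same way check_signature does."""
    keywords, errors = {}, []
    for line in status.split("\n"):
        line = line.strip()
        if not line:
            continue
        split = line.split()
        if len(split) < 2:
            errors.append("malformed (< 2 atoms): %r" % line)
            continue
        gnupg, keyword = split[:2]
        if gnupg != "[GNUPG:]":
            errors.append("incorrect prefix %r" % gnupg)
            continue
        # NODATA and SIGEXPIRED may legitimately repeat
        if keyword in keywords and keyword not in ("NODATA", "SIGEXPIRED"):
            errors.append("duplicate token %r" % keyword)
            continue
        keywords[keyword] = split[2:]
    return keywords, errors

kw, errs = parse_status_fd(
    "[GNUPG:] GOODSIG ABCDEF cafe <cafe@example.org>\n"
    "[GNUPG:] VALIDSIG 0123456789ABCDEF 2004-06-17\n")
```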
+################################################################################
+
+# Strip the location path (sub) from a filename or directory (s) so that
+# what remains is pool-relative; only the overlapping part of sub is removed.
+def poolify (s, sub):
+ for i in xrange(len(sub)):
+ if sub[i:] == s[0:len(sub)-i]:
+ return s[len(sub)-i:];
+ return s;
+
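To make poolify's overlap-stripping concrete, here is a Python 3 transcription with illustrative paths (the `/srv/ftp/pool/` archive root is made up). It strips the longest suffix of the location path that matches the start of the filename, which handles both absolute paths and paths that already begin mid-way through the location:

```python
def poolify(s, sub):
    # Find the longest suffix of the location path `sub` that is a
    # prefix of `s`, and strip it; if nothing overlaps, return s as-is.
    for i in range(len(sub)):
        if sub[i:] == s[0:len(sub) - i]:
            return s[len(sub) - i:]
    return s

# Full location prefix present: strip all of it.
print(poolify("/srv/ftp/pool/main/a/apt/apt_0.5.dsc", "/srv/ftp/pool/"))
# -> main/a/apt/apt_0.5.dsc

# Only the trailing "pool/" overlaps: strip just that part.
print(poolify("pool/main/a/apt/apt_0.5.dsc", "/srv/ftp/pool/"))
# -> main/a/apt/apt_0.5.dsc

# No overlap at all: the path comes back unchanged.
print(poolify("dists/sid/foo", "/srv/ftp/pool/"))
# -> dists/sid/foo
```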
+def update_archives ():
+ projectB.query("DELETE FROM archive")
+ for archive in Cnf.SubTree("Archive").List():
+ SubSec = Cnf.SubTree("Archive::%s" % (archive));
+ projectB.query("INSERT INTO archive (name, origin_server, description) VALUES ('%s', '%s', '%s')"
+ % (archive, SubSec["OriginServer"], SubSec["Description"]));
+
+def update_components ():
+ projectB.query("DELETE FROM component")
+ for component in Cnf.SubTree("Component").List():
+ SubSec = Cnf.SubTree("Component::%s" % (component));
+ projectB.query("INSERT INTO component (name, description, meets_dfsg) VALUES ('%s', '%s', '%s')" %
+ (component, SubSec["Description"], SubSec["MeetsDFSG"]));
+
+def update_locations ():
+ projectB.query("DELETE FROM location")
+ for location in Cnf.SubTree("Location").List():
+ SubSec = Cnf.SubTree("Location::%s" % (location));
+ archive_id = db_access.get_archive_id(SubSec["archive"]);
+ type = SubSec.Find("type");
+ if type == "legacy-mixed":
+ projectB.query("INSERT INTO location (path, archive, type) VALUES ('%s', %d, '%s')" % (location, archive_id, SubSec["type"]));
+ else:
+ for component in Cnf.SubTree("Component").List():
+ component_id = db_access.get_component_id(component);
+ projectB.query("INSERT INTO location (path, component, archive, type) VALUES ('%s', %d, %d, '%s')" %
+ (location, component_id, archive_id, SubSec["type"]));
+
+def update_architectures ():
+ projectB.query("DELETE FROM architecture")
+ for arch in Cnf.SubTree("Architectures").List():
+ projectB.query("INSERT INTO architecture (arch_string, description) VALUES ('%s', '%s')" % (arch, Cnf["Architectures::%s" % (arch)]))
+
+def update_suites ():
+ projectB.query("DELETE FROM suite")
+ for suite in Cnf.SubTree("Suite").List():
+ SubSec = Cnf.SubTree("Suite::%s" %(suite))
+ projectB.query("INSERT INTO suite (suite_name) VALUES ('%s')" % suite.lower());
+ for i in ("Version", "Origin", "Description"):
+ if SubSec.has_key(i):
+ projectB.query("UPDATE suite SET %s = '%s' WHERE suite_name = '%s'" % (i.lower(), SubSec[i], suite.lower()))
+ for architecture in Cnf.ValueList("Suite::%s::Architectures" % (suite)):
+ architecture_id = db_access.get_architecture_id (architecture);
+ projectB.query("INSERT INTO suite_architectures (suite, architecture) VALUES (currval('suite_id_seq'), %d)" % (architecture_id));
+
+def update_override_type():
+ projectB.query("DELETE FROM override_type");
+ for type in Cnf.ValueList("OverrideType"):
+ projectB.query("INSERT INTO override_type (type) VALUES ('%s')" % (type));
+
+def update_priority():
+ projectB.query("DELETE FROM priority");
+ for priority in Cnf.SubTree("Priority").List():
+ projectB.query("INSERT INTO priority (priority, level) VALUES ('%s', %s)" % (priority, Cnf["Priority::%s" % (priority)]));
+
+def update_section():
+ projectB.query("DELETE FROM section");
+ for component in Cnf.SubTree("Component").List():
+ if Cnf["Natalie::ComponentPosition"] == "prefix":
+ suffix = "";
+ if component != 'main':
+ prefix = component + '/';
+ else:
+ prefix = "";
+ else:
+ prefix = "";
+ component = component.replace("non-US/", "");
+ if component != 'main':
+ suffix = '/' + component;
+ else:
+ suffix = "";
+ for section in Cnf.ValueList("Section"):
+ projectB.query("INSERT INTO section (section) VALUES ('%s%s%s')" % (prefix, section, suffix));
+
+def get_location_path(directory):
+ global location_path_cache;
+
+ if location_path_cache.has_key(directory):
+ return location_path_cache[directory];
+
+ q = projectB.query("SELECT DISTINCT path FROM location WHERE path ~ '%s'" % (directory));
+ try:
+ path = q.getresult()[0][0];
+ except:
+ utils.fubar("[neve] get_location_path(): Couldn't get path for %s" % (directory));
+ location_path_cache[directory] = path;
+ return path;
+
+################################################################################
+
+def get_or_set_files_id (filename, size, md5sum, location_id):
+ global files_id_cache, files_id_serial, files_query_cache;
+
+ cache_key = "~".join((filename, size, md5sum, repr(location_id)));
+ if not files_id_cache.has_key(cache_key):
+ files_id_serial += 1
+ files_query_cache.write("%d\t%s\t%s\t%s\t%d\t\\N\n" % (files_id_serial, filename, size, md5sum, location_id));
+ files_id_cache[cache_key] = files_id_serial
+
+ return files_id_cache[cache_key]
+
+###############################################################################
+
+def process_sources (filename, suite, component, archive):
+ global source_cache, source_query_cache, src_associations_query_cache, dsc_files_query_cache, source_id_serial, src_associations_id_serial, dsc_files_id_serial, source_cache_for_binaries, orig_tar_gz_cache, reject_message;
+
+ suite = suite.lower();
+ suite_id = db_access.get_suite_id(suite);
+ try:
+ file = utils.open_file (filename);
+ except utils.cant_open_exc:
+ utils.warn("can't open '%s'" % (filename));
+ return;
+ Scanner = apt_pkg.ParseTagFile(file);
+ while Scanner.Step() != 0:
+ package = Scanner.Section["package"];
+ version = Scanner.Section["version"];
+ directory = Scanner.Section["directory"];
+ dsc_file = os.path.join(Cnf["Dir::Root"], directory, "%s_%s.dsc" % (package, utils.re_no_epoch.sub('', version)));
+ # Sometimes the Directory path is a lie; check in the pool
+ if not os.path.exists(dsc_file):
+ if directory.split('/')[0] == "dists":
+ directory = Cnf["Dir::PoolRoot"] + utils.poolify(package, component);
+ dsc_file = os.path.join(Cnf["Dir::Root"], directory, "%s_%s.dsc" % (package, utils.re_no_epoch.sub('', version)));
+ if not os.path.exists(dsc_file):
+ utils.fubar("%s not found." % (dsc_file));
+ install_date = time.strftime("%Y-%m-%d", time.localtime(os.path.getmtime(dsc_file)));
+ fingerprint = check_signature(dsc_file);
+ fingerprint_id = db_access.get_or_set_fingerprint_id(fingerprint);
+ if reject_message:
+ utils.fubar("%s: %s" % (dsc_file, reject_message));
+ maintainer = Scanner.Section["maintainer"]
+ maintainer = maintainer.replace("'", "\\'");
+ maintainer_id = db_access.get_or_set_maintainer_id(maintainer);
+ location = get_location_path(directory.split('/')[0]);
+ location_id = db_access.get_location_id (location, component, archive);
+ if not directory.endswith("/"):
+ directory += '/';
+ directory = poolify (directory, location);
+ if directory != "" and not directory.endswith("/"):
+ directory += '/';
+ no_epoch_version = utils.re_no_epoch.sub('', version);
+ # Add all files referenced by the .dsc to the files table
+ ids = [];
+ for line in Scanner.Section["files"].split('\n'):
+ id = None;
+ (md5sum, size, filename) = line.strip().split();
+ # Don't duplicate .orig.tar.gz's
+ if filename.endswith(".orig.tar.gz"):
+ cache_key = "%s~%s~%s" % (filename, size, md5sum);
+ if orig_tar_gz_cache.has_key(cache_key):
+ id = orig_tar_gz_cache[cache_key];
+ else:
+ id = get_or_set_files_id (directory + filename, size, md5sum, location_id);
+ orig_tar_gz_cache[cache_key] = id;
+ else:
+ id = get_or_set_files_id (directory + filename, size, md5sum, location_id);
+ ids.append(id);
+ # If this is the .dsc itself; save the ID for later.
+ if filename.endswith(".dsc"):
+ files_id = id;
+ filename = directory + package + '_' + no_epoch_version + '.dsc'
+ cache_key = "%s~%s" % (package, version);
+ if not source_cache.has_key(cache_key):
+ nasty_key = "%s~%s" % (package, version)
+ source_id_serial += 1;
+ if not source_cache_for_binaries.has_key(nasty_key):
+ source_cache_for_binaries[nasty_key] = source_id_serial;
+ tmp_source_id = source_id_serial;
+ source_cache[cache_key] = source_id_serial;
+ source_query_cache.write("%d\t%s\t%s\t%d\t%d\t%s\t%s\n" % (source_id_serial, package, version, maintainer_id, files_id, install_date, fingerprint_id))
+ for id in ids:
+ dsc_files_id_serial += 1;
+ dsc_files_query_cache.write("%d\t%d\t%d\n" % (dsc_files_id_serial, tmp_source_id,id));
+ else:
+ tmp_source_id = source_cache[cache_key];
+
+ src_associations_id_serial += 1;
+ src_associations_query_cache.write("%d\t%d\t%d\n" % (src_associations_id_serial, suite_id, tmp_source_id))
+
+ file.close();
+
+###############################################################################
+
+def process_packages (filename, suite, component, archive):
+ global arch_all_cache, binary_cache, binaries_id_serial, binaries_query_cache, bin_associations_id_serial, bin_associations_query_cache, reject_message;
+
+ count_total = 0;
+ count_bad = 0;
+ suite = suite.lower();
+ suite_id = db_access.get_suite_id(suite);
+ try:
+ file = utils.open_file (filename);
+ except utils.cant_open_exc:
+ utils.warn("can't open '%s'" % (filename));
+ return;
+ Scanner = apt_pkg.ParseTagFile(file);
+ while Scanner.Step() != 0:
+ package = Scanner.Section["package"]
+ version = Scanner.Section["version"]
+ maintainer = Scanner.Section["maintainer"]
+ maintainer = maintainer.replace("'", "\\'")
+ maintainer_id = db_access.get_or_set_maintainer_id(maintainer);
+ architecture = Scanner.Section["architecture"]
+ architecture_id = db_access.get_architecture_id (architecture);
+ fingerprint = "NOSIG";
+ fingerprint_id = db_access.get_or_set_fingerprint_id(fingerprint);
+ if not Scanner.Section.has_key("source"):
+ source = package
+ else:
+ source = Scanner.Section["source"]
+ source_version = ""
+ if source.find("(") != -1:
+ m = utils.re_extract_src_version.match(source)
+ source = m.group(1)
+ source_version = m.group(2)
+ if not source_version:
+ source_version = version
+ filename = Scanner.Section["filename"]
+ location = get_location_path(filename.split('/')[0]);
+ location_id = db_access.get_location_id (location, component, archive)
+ filename = poolify (filename, location)
+ if architecture == "all":
+ filename = re_arch_from_filename.sub("binary-all", filename);
+ cache_key = "%s~%s" % (source, source_version);
+ source_id = source_cache_for_binaries.get(cache_key, None);
+ size = Scanner.Section["size"];
+ md5sum = Scanner.Section["md5sum"];
+ files_id = get_or_set_files_id (filename, size, md5sum, location_id);
+ type = "deb"; # FIXME
+ cache_key = "%s~%s~%s~%d~%d~%d~%d" % (package, version, repr(source_id), architecture_id, location_id, files_id, suite_id);
+ if not arch_all_cache.has_key(cache_key):
+ arch_all_cache[cache_key] = 1;
+ cache_key = "%s~%s~%s~%d" % (package, version, repr(source_id), architecture_id);
+ if not binary_cache.has_key(cache_key):
+ if not source_id:
+ source_id = "\N";
+ count_bad += 1;
+ else:
+ source_id = repr(source_id);
+ binaries_id_serial += 1;
+ binaries_query_cache.write("%d\t%s\t%s\t%d\t%s\t%d\t%d\t%s\t%s\n" % (binaries_id_serial, package, version, maintainer_id, source_id, architecture_id, files_id, type, fingerprint_id));
+ binary_cache[cache_key] = binaries_id_serial;
+ tmp_binaries_id = binaries_id_serial;
+ else:
+ tmp_binaries_id = binary_cache[cache_key];
+
+ bin_associations_id_serial += 1;
+ bin_associations_query_cache.write("%d\t%d\t%d\n" % (bin_associations_id_serial, suite_id, tmp_binaries_id));
+ count_total += 1;
+
+ file.close();
+ if count_bad != 0:
+ print "%d binary packages processed; %d with no source match which is %.2f%%" % (count_total, count_bad, (float(count_bad)/count_total)*100);
+ else:
+ print "%d binary packages processed; 0 with no source match which is 0%%" % (count_total);
+
+###############################################################################
+
+def do_sources(sources, suite, component, server):
+ temp_filename = utils.temp_filename();
+ (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (sources, temp_filename));
+ if (result != 0):
+ utils.fubar("Gunzip invocation failed!\n%s" % (output), result);
+ print 'Processing '+sources+'...';
+ process_sources (temp_filename, suite, component, server);
+ os.unlink(temp_filename);
+
+###############################################################################
+
+def do_da_do_da ():
+ global Cnf, projectB, query_cache, files_query_cache, source_query_cache, src_associations_query_cache, dsc_files_query_cache, bin_associations_query_cache, binaries_query_cache;
+
+ Cnf = utils.get_conf();
+ Arguments = [('a', "action", "Neve::Options::Action"),
+ ('h', "help", "Neve::Options::Help")];
+ for i in [ "action", "help" ]:
+ if not Cnf.has_key("Neve::Options::%s" % (i)):
+ Cnf["Neve::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Neve::Options")
+ if Options["Help"]:
+ usage();
+
+ if not Options["Action"]:
+ utils.warn("""no -a/--action given; not doing anything.
+Please read the documentation before running this script.
+""");
+ usage(1);
+
+ print "Re-Creating DB..."
+ (result, output) = commands.getstatusoutput("psql -f init_pool.sql template1");
+ if (result != 0):
+ utils.fubar("psql invocation failed!\n", result);
+ print output;
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+
+ db_access.init (Cnf, projectB);
+
+ print "Adding static tables from conf file..."
+ projectB.query("BEGIN WORK");
+ update_architectures();
+ update_components();
+ update_archives();
+ update_locations();
+ update_suites();
+ update_override_type();
+ update_priority();
+ update_section();
+ projectB.query("COMMIT WORK");
+
+ files_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"files","w");
+ source_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"source","w");
+ src_associations_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"src_associations","w");
+ dsc_files_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"dsc_files","w");
+ binaries_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"binaries","w");
+ bin_associations_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"bin_associations","w");
+
+ projectB.query("BEGIN WORK");
+ # Process Sources files to populate `source' and friends
+ for location in Cnf.SubTree("Location").List():
+ SubSec = Cnf.SubTree("Location::%s" % (location));
+ server = SubSec["Archive"];
+ type = Cnf.Find("Location::%s::Type" % (location));
+ if type == "legacy-mixed":
+ sources = location + 'Sources.gz';
+ suite = Cnf.Find("Location::%s::Suite" % (location));
+ do_sources(sources, suite, "", server);
+ elif type == "legacy" or type == "pool":
+ for suite in Cnf.ValueList("Location::%s::Suites" % (location)):
+ for component in Cnf.SubTree("Component").List():
+ sources = Cnf["Dir::Root"] + "dists/" + Cnf["Suite::%s::CodeName" % (suite)] + '/' + component + '/source/' + 'Sources.gz';
+ do_sources(sources, suite, component, server);
+ else:
+ utils.fubar("Unknown location type ('%s')." % (type));
+
+ # Process Packages files to populate `binaries' and friends
+
+ for location in Cnf.SubTree("Location").List():
+ SubSec = Cnf.SubTree("Location::%s" % (location));
+ server = SubSec["Archive"];
+ type = Cnf.Find("Location::%s::Type" % (location));
+ if type == "legacy-mixed":
+ packages = location + 'Packages';
+ suite = Cnf.Find("Location::%s::Suite" % (location));
+ print 'Processing '+location+'...';
+ process_packages (packages, suite, "", server);
+ elif type == "legacy" or type == "pool":
+ for suite in Cnf.ValueList("Location::%s::Suites" % (location)):
+ for component in Cnf.SubTree("Component").List():
+ architectures = filter(utils.real_arch,
+ Cnf.ValueList("Suite::%s::Architectures" % (suite)));
+ for architecture in architectures:
+ packages = Cnf["Dir::Root"] + "dists/" + Cnf["Suite::%s::CodeName" % (suite)] + '/' + component + '/binary-' + architecture + '/Packages'
+ print 'Processing '+packages+'...';
+ process_packages (packages, suite, component, server);
+
+ files_query_cache.close();
+ source_query_cache.close();
+ src_associations_query_cache.close();
+ dsc_files_query_cache.close();
+ binaries_query_cache.close();
+ bin_associations_query_cache.close();
+ print "Writing data to `files' table...";
+ projectB.query("COPY files FROM '%s'" % (Cnf["Neve::ExportDir"]+"files"));
+ print "Writing data to `source' table...";
+ projectB.query("COPY source FROM '%s'" % (Cnf["Neve::ExportDir"]+"source"));
+ print "Writing data to `src_associations' table...";
+ projectB.query("COPY src_associations FROM '%s'" % (Cnf["Neve::ExportDir"]+"src_associations"));
+ print "Writing data to `dsc_files' table...";
+ projectB.query("COPY dsc_files FROM '%s'" % (Cnf["Neve::ExportDir"]+"dsc_files"));
+ print "Writing data to `binaries' table...";
+ projectB.query("COPY binaries FROM '%s'" % (Cnf["Neve::ExportDir"]+"binaries"));
+ print "Writing data to `bin_associations' table...";
+ projectB.query("COPY bin_associations FROM '%s'" % (Cnf["Neve::ExportDir"]+"bin_associations"));
+ print "Committing...";
+ projectB.query("COMMIT WORK");
+
+ # Add the constraints and otherwise generally clean up the database.
+ # See add_constraints.sql for more details...
+
+ print "Running add_constraints.sql...";
+ (result, output) = commands.getstatusoutput("psql %s < add_constraints.sql" % (Cnf["DB::Name"]));
+ print output
+ if (result != 0):
+ utils.fubar("psql invocation failed!\n%s" % (output), result);
+
+ return;
+
+################################################################################
+
+def main():
+ utils.try_with_debug(do_da_do_da);
+
+################################################################################
+
+if __name__ == '__main__':
+ main();
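The hunk above opens by splitting a Packages `Source:` field of the form `pkg (version)` into name and version, falling back to the binary's own version when none is given (the non-binNMU case). A minimal standalone sketch of that logic — the regex here is an assumed equivalent of `utils.re_extract_src_version`, and `split_source_field` is a hypothetical helper name:

```python
import re

# Assumed equivalent of utils.re_extract_src_version: captures the
# source package name and the parenthesised version, if present.
re_extract_src_version = re.compile(r"(\S+)\s*\((.*)\)")

def split_source_field(source, binary_version):
    """Return (source_name, source_version) for a Packages 'Source:' field.

    Falls back to the binary's own version when no explicit source
    version is given, mirroring the logic at the top of this hunk.
    """
    source_version = ""
    if source.find("(") != -1:
        m = re_extract_src_version.match(source)
        source = m.group(1)
        source_version = m.group(2)
    if not source_version:
        source_version = binary_version
    return source, source_version
```

For example, `split_source_field("hello (2.10-1)", "2.10-1+b1")` yields `("hello", "2.10-1")`, which is why binNMUs still resolve to the right source record.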
--- /dev/null
+#!/usr/bin/env python
+
+# Sync fingerprint and uid tables with a debian.org LDAP DB
+# Copyright (C) 2003, 2004 James Troup <james@nocrew.org>
+# $Id: emilie,v 1.3 2004-11-27 13:25:35 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <elmo> ping@debian.org ?
+# <aj> missing@ ? wtfru@ ?
+# <elmo> giggle
+# <elmo> I like wtfru
+# <aj> all you have to do is retrofit wtfru into an acronym and no one
+# could possibly be offended!
+# <elmo> aj: worried terriers for russian unity ?
+# <aj> uhhh
+# <aj> ooookkkaaaaay
+# <elmo> wthru is a little less offensive maybe? but myabe that's
+# just because I read h as heck, not hell
+# <elmo> ho hum
+# <aj> (surely the "f" stands for "freedom" though...)
+# <elmo> where the freedom are you?
+# <aj> 'xactly
+# <elmo> or worried terriers freed (of) russian unilateralism ?
+# <aj> freedom -- it's the "foo" of the 21st century
+# <aj> oo, how about "wat@" as in wherefore art thou?
+# <neuro> or worried attack terriers
+# <aj> Waning Trysts Feared - Return? Unavailable?
+# <aj> (i find all these terriers more worrying, than worried)
+# <neuro> worrying attack terriers, then
+
+################################################################################
+
+import commands, ldap, pg, re, sys, time;
+import apt_pkg;
+import db_access, utils;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+re_gpg_fingerprint = re.compile(r"^\s+Key fingerprint = (.*)$", re.MULTILINE);
+re_debian_address = re.compile(r"^.*<(.*)@debian\.org>$", re.MULTILINE);
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: emilie
+Syncs fingerprint and uid tables with a debian.org LDAP DB
+
+ -h, --help show this help and exit."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def get_ldap_value(entry, value):
+ ret = entry.get(value);
+ if not ret:
+ return "";
+ else:
+ # FIXME: what about > 0 ?
+ return ret[0];
+
+def main():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+ Arguments = [('h',"help","Emilie::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Emilie::Options::%s" % (i)):
+ Cnf["Emilie::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Emilie::Options")
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ #before = time.time();
+ #sys.stderr.write("[Getting info from the LDAP server...");
+ LDAPDn = Cnf["Emilie::LDAPDn"];
+ LDAPServer = Cnf["Emilie::LDAPServer"];
+ l = ldap.open(LDAPServer);
+ l.simple_bind_s("","");
+ Attrs = l.search_s(LDAPDn, ldap.SCOPE_ONELEVEL,
+ "(&(keyfingerprint=*)(gidnumber=%s))" % (Cnf["Julia::ValidGID"]),
+ ["uid", "keyfingerprint"]);
+ #sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
+
+
+ projectB.query("BEGIN WORK");
+
+
+ # Sync LDAP with DB
+ db_fin_uid = {};
+ ldap_fin_uid_id = {};
+ q = projectB.query("""
+SELECT f.fingerprint, f.id, u.uid FROM fingerprint f, uid u WHERE f.uid = u.id
+ UNION SELECT f.fingerprint, f.id, null FROM fingerprint f where f.uid is null""");
+ for i in q.getresult():
+ (fingerprint, fingerprint_id, uid) = i;
+ db_fin_uid[fingerprint] = (uid, fingerprint_id);
+
+ for i in Attrs:
+ entry = i[1];
+ fingerprints = entry["keyFingerPrint"];
+ uid = entry["uid"][0];
+ uid_id = db_access.get_or_set_uid_id(uid);
+ for fingerprint in fingerprints:
+ ldap_fin_uid_id[fingerprint] = (uid, uid_id);
+ if db_fin_uid.has_key(fingerprint):
+ (existing_uid, fingerprint_id) = db_fin_uid[fingerprint];
+ if not existing_uid:
+ q = projectB.query("UPDATE fingerprint SET uid = %s WHERE id = %s" % (uid_id, fingerprint_id));
+ print "Assigning %s to 0x%s." % (uid, fingerprint);
+ else:
+ if existing_uid != uid:
+ utils.fubar("%s has %s in LDAP, but projectB says it should be %s." % (uid, fingerprint, existing_uid));
+
+ # Try to update people who sign with non-primary key
+ q = projectB.query("SELECT fingerprint, id FROM fingerprint WHERE uid is null");
+ for i in q.getresult():
+ (fingerprint, fingerprint_id) = i;
+ cmd = "gpg --no-default-keyring --keyring=%s --keyring=%s --fingerprint %s" \
+ % (Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"],
+ fingerprint);
+ (result, output) = commands.getstatusoutput(cmd);
+ if result == 0:
+ m = re_gpg_fingerprint.search(output);
+ if not m:
+ print output
+ utils.fubar("0x%s: No fingerprint found in gpg output but it returned 0?\n%s" % (fingerprint, utils.prefix_multi_line_string(output, " [GPG output:] ")));
+ primary_key = m.group(1);
+ primary_key = primary_key.replace(" ","");
+ if not ldap_fin_uid_id.has_key(primary_key):
+ utils.fubar("0x%s (from 0x%s): no UID found in LDAP" % (primary_key, fingerprint));
+ (uid, uid_id) = ldap_fin_uid_id[primary_key];
+ q = projectB.query("UPDATE fingerprint SET uid = %s WHERE id = %s" % (uid_id, fingerprint_id));
+ print "Assigning %s to 0x%s." % (uid, fingerprint);
+ else:
+ extra_keyrings = "";
+ for keyring in Cnf.ValueList("Emilie::ExtraKeyrings"):
+ extra_keyrings += " --keyring=%s" % (keyring);
+ cmd = "gpg --keyring=%s --keyring=%s %s --list-key %s" \
+ % (Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"],
+ extra_keyrings, fingerprint);
+ (result, output) = commands.getstatusoutput(cmd);
+ if result != 0:
+ cmd = "gpg --keyserver=%s --allow-non-selfsigned-uid --recv-key %s" % (Cnf["Emilie::KeyServer"], fingerprint);
+ (result, output) = commands.getstatusoutput(cmd);
+ if result != 0:
+ print "0x%s: NOT found on keyserver." % (fingerprint);
+ print cmd
+ print result
+ print output
+ continue;
+ else:
+ cmd = "gpg --list-key %s" % (fingerprint);
+ (result, output) = commands.getstatusoutput(cmd);
+ if result != 0:
+ print "0x%s: --list-key returned error after --recv-key didn't." % (fingerprint);
+ print cmd
+ print result
+ print output
+ continue;
+ m = re_debian_address.search(output);
+ if m:
+ guess_uid = m.group(1);
+ else:
+ guess_uid = "???";
+ name = " ".join(output.split('\n')[0].split()[3:]);
+ print "0x%s -> %s -> %s" % (fingerprint, name, guess_uid);
+ # FIXME: make me optionally non-interactive
+ # FIXME: default to the guessed ID
+ uid = None;
+ while not uid:
+ uid = utils.our_raw_input("Map to which UID ? ");
+ Attrs = l.search_s(LDAPDn,ldap.SCOPE_ONELEVEL,"(uid=%s)" % (uid), ["cn","mn","sn"])
+ if not Attrs:
+ print "That UID doesn't exist in LDAP!"
+ uid = None;
+ else:
+ entry = Attrs[0][1];
+ name = " ".join([get_ldap_value(entry, "cn"),
+ get_ldap_value(entry, "mn"),
+ get_ldap_value(entry, "sn")]);
+ prompt = "Map to %s - %s (y/N) ? " % (uid, name.replace("  ", " "));
+ yn = utils.our_raw_input(prompt).lower();
+ if yn == "y":
+ uid_id = db_access.get_or_set_uid_id(uid);
+ projectB.query("UPDATE fingerprint SET uid = %s WHERE id = %s" % (uid_id, fingerprint_id));
+ print "Assigning %s to 0x%s." % (uid, fingerprint);
+ else:
+ uid = None;
+ projectB.query("COMMIT WORK");
+
+############################################################
+
+if __name__ == '__main__':
+ main()
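The non-primary-key pass above shells out to `gpg --fingerprint` and scrapes the primary fingerprint out of the human-readable output with `re_gpg_fingerprint`. A self-contained sketch of that extraction, using the same pattern (the sample output below is illustrative, not real key data):

```python
import re

# Same pattern emilie uses to find the "Key fingerprint =" line in
# `gpg --fingerprint` output.
re_gpg_fingerprint = re.compile(r"^\s+Key fingerprint = (.*)$", re.MULTILINE)

def primary_fingerprint(gpg_output):
    """Extract and normalise (strip spaces from) the fingerprint in
    gpg's human-readable output; return None if no fingerprint line
    is present."""
    m = re_gpg_fingerprint.search(gpg_output)
    if not m:
        return None
    return m.group(1).replace(" ", "")

# Illustrative gpg output (fabricated key data, for shape only).
sample = (
    "pub   1024D/12345678 2003-01-01\n"
    "      Key fingerprint = 1234 5678 9ABC DEF0 1234  5678 9ABC DEF0 1234 5678\n"
    "uid   Example Developer <example@debian.org>\n"
)
```

Normalising away the spaces matters because the LDAP `keyFingerPrint` values being matched against are stored without them.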
--- /dev/null
+#!/usr/bin/env python
+
+# Sync PostgreSQL users with system users
+# Copyright (C) 2001, 2002 James Troup <james@nocrew.org>
+# $Id: julia,v 1.9 2003-01-02 18:12:50 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <aj> ARRRGGGHHH
+# <aj> what's wrong with me!?!?!?
+# <aj> i was just nice to some mormon doorknockers!!!
+# <Omnic> AJ?!?!
+# <aj> i know!!!!!
+# <Omnic> I'm gonna have to kick your ass when you come over
+# <Culus> aj: GET THE HELL OUT OF THE CABAL! :P
+
+################################################################################
+
+import pg, pwd, sys;
+import utils;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: julia [OPTION]...
+Sync PostgreSQL's users with system users.
+
+ -h, --help show this help and exit
+ -n, --no-action don't do anything
+ -q, --quiet be quiet about what is being done
+ -v, --verbose explain what is being done"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def main ():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('n', "no-action", "Julia::Options::No-Action"),
+ ('q', "quiet", "Julia::Options::Quiet"),
+ ('v', "verbose", "Julia::Options::Verbose"),
+ ('h', "help", "Julia::Options::Help")];
+ for i in [ "no-action", "quiet", "verbose", "help" ]:
+ if not Cnf.has_key("Julia::Options::%s" % (i)):
+ Cnf["Julia::Options::%s" % (i)] = "";
+
+ arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Julia::Options")
+
+ if Options["Help"]:
+ usage();
+ elif arguments:
+ utils.warn("julia takes no non-option arguments.");
+ usage(1);
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ valid_gid = Cnf.get("Julia::ValidGID","");
+ if valid_gid:
+ valid_gid = int(valid_gid);
+
+ passwd_unames = {};
+ for entry in pwd.getpwall():
+ uname = entry[0];
+ gid = entry[3];
+ if valid_gid and gid != valid_gid:
+ if Options["Verbose"]:
+ print "Skipping %s (GID %s != Valid GID %s)." % (uname, gid, valid_gid);
+ continue;
+ passwd_unames[uname] = "";
+
+ postgres_unames = {};
+ q = projectB.query("SELECT usename FROM pg_user");
+ ql = q.getresult();
+ for i in ql:
+ uname = i[0];
+ postgres_unames[uname] = "";
+
+ known_postgres_unames = {};
+ for i in Cnf.get("Julia::KnownPostgres","").split(","):
+ uname = i.strip();
+ known_postgres_unames[uname] = "";
+
+ keys = postgres_unames.keys()
+ keys.sort();
+ for uname in keys:
+ if not passwd_unames.has_key(uname) and not known_postgres_unames.has_key(uname):
+ print "W: %s is in Postgres but not the passwd file or list of known Postgres users." % (uname);
+
+ keys = passwd_unames.keys()
+ keys.sort();
+ for uname in keys:
+ if not postgres_unames.has_key(uname):
+ if not Options["Quiet"]:
+ print "Creating %s user in Postgres." % (uname);
+ if not Options["No-Action"]:
+ q = projectB.query('CREATE USER "%s"' % (uname));
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
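julia's two loops above amount to a set reconciliation: Postgres users with no passwd entry (and not on the known-users list) are warned about, and passwd users missing from Postgres get a `CREATE USER`. A standalone sketch of that comparison (`reconcile_users` is a hypothetical helper, not part of julia):

```python
def reconcile_users(passwd_unames, postgres_unames, known_postgres_unames):
    """Mirror julia's two passes: report Postgres users with no passwd
    entry, and list passwd users that still need a Postgres role.

    All three arguments are iterables of user names; returns
    (strays, to_create) as sorted lists.
    """
    passwd = set(passwd_unames)
    postgres = set(postgres_unames)
    known = set(known_postgres_unames)
    # In Postgres but neither in passwd nor explicitly whitelisted.
    strays = sorted(postgres - passwd - known)
    # In passwd but missing from Postgres: candidates for CREATE USER.
    to_create = sorted(passwd - postgres)
    return strays, to_create
```

The `Julia::KnownPostgres` whitelist exists precisely so service accounts like `postgres` itself don't trigger the stray-user warning on every run.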
--- /dev/null
+#!/usr/bin/env python
+
+# Sync the ISC configuration file and the SQL database
+# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
+# $Id: alyson,v 1.12 2003-09-07 13:52:07 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import pg, sys;
+import utils, db_access;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: alyson
+Initializes some tables in the projectB database based on the config file.
+
+ -h, --help show this help and exit."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def get (c, i):
+ if c.has_key(i):
+ return "'%s'" % (c[i]);
+ else:
+ return "NULL";
+
+def main ():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+ Arguments = [('h',"help","Alyson::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Alyson::Options::%s" % (i)):
+ Cnf["Alyson::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Alyson::Options")
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ # archive
+
+ projectB.query("BEGIN WORK");
+ projectB.query("DELETE FROM archive");
+ for name in Cnf.SubTree("Archive").List():
+ Archive = Cnf.SubTree("Archive::%s" % (name));
+ origin_server = get(Archive, "OriginServer");
+ description = get(Archive, "Description");
+ projectB.query("INSERT INTO archive (name, origin_server, description) VALUES ('%s', %s, %s)" % (name, origin_server, description));
+ projectB.query("COMMIT WORK");
+
+ # architecture
+
+ projectB.query("BEGIN WORK");
+ projectB.query("DELETE FROM architecture");
+ for arch in Cnf.SubTree("Architectures").List():
+ description = Cnf["Architectures::%s" % (arch)];
+ projectB.query("INSERT INTO architecture (arch_string, description) VALUES ('%s', '%s')" % (arch, description));
+ projectB.query("COMMIT WORK");
+
+ # component
+
+ projectB.query("BEGIN WORK");
+ projectB.query("DELETE FROM component");
+ for name in Cnf.SubTree("Component").List():
+ Component = Cnf.SubTree("Component::%s" % (name));
+ description = get(Component, "Description");
+ if Component.get("MeetsDFSG", "").lower() == "true":
+ meets_dfsg = "true";
+ else:
+ meets_dfsg = "false";
+ projectB.query("INSERT INTO component (name, description, meets_dfsg) VALUES ('%s', %s, %s)" % (name, description, meets_dfsg));
+ projectB.query("COMMIT WORK");
+
+ # location
+
+ projectB.query("BEGIN WORK");
+ projectB.query("DELETE FROM location");
+ for location in Cnf.SubTree("Location").List():
+ Location = Cnf.SubTree("Location::%s" % (location));
+ archive_id = db_access.get_archive_id(Location["Archive"]);
+ type = Location.get("type");
+ if type == "legacy-mixed":
+ projectB.query("INSERT INTO location (path, archive, type) VALUES ('%s', %d, '%s')" % (location, archive_id, Location["type"]));
+ elif type == "legacy" or type == "pool":
+ for component in Cnf.SubTree("Component").List():
+ component_id = db_access.get_component_id(component);
+ projectB.query("INSERT INTO location (path, component, archive, type) VALUES ('%s', %d, %d, '%s')" %
+ (location, component_id, archive_id, type));
+ else:
+ utils.fubar("type '%s' not recognised in location %s." % (type, location));
+ projectB.query("COMMIT WORK");
+
+ # suite
+
+ projectB.query("BEGIN WORK");
+ projectB.query("DELETE FROM suite")
+ for suite in Cnf.SubTree("Suite").List():
+ Suite = Cnf.SubTree("Suite::%s" %(suite))
+ version = get(Suite, "Version");
+ origin = get(Suite, "Origin");
+ description = get(Suite, "Description");
+ projectB.query("INSERT INTO suite (suite_name, version, origin, description) VALUES ('%s', %s, %s, %s)"
+ % (suite.lower(), version, origin, description));
+ for architecture in Cnf.ValueList("Suite::%s::Architectures" % (suite)):
+ architecture_id = db_access.get_architecture_id (architecture);
+ if architecture_id < 0:
+ utils.fubar("architecture '%s' not found in architecture table for suite %s." % (architecture, suite));
+ projectB.query("INSERT INTO suite_architectures (suite, architecture) VALUES (currval('suite_id_seq'), %d)" % (architecture_id));
+ projectB.query("COMMIT WORK");
+
+ # override_type
+
+ projectB.query("BEGIN WORK");
+ projectB.query("DELETE FROM override_type");
+ for type in Cnf.ValueList("OverrideType"):
+ projectB.query("INSERT INTO override_type (type) VALUES ('%s')" % (type));
+ projectB.query("COMMIT WORK");
+
+ # priority
+
+ projectB.query("BEGIN WORK");
+ projectB.query("DELETE FROM priority");
+ for priority in Cnf.SubTree("Priority").List():
+ projectB.query("INSERT INTO priority (priority, level) VALUES ('%s', %s)" % (priority, Cnf["Priority::%s" % (priority)]));
+ projectB.query("COMMIT WORK");
+
+ # section
+
+ projectB.query("BEGIN WORK");
+ projectB.query("DELETE FROM section");
+ for component in Cnf.SubTree("Component").List():
+ if Cnf["Natalie::ComponentPosition"] == "prefix":
+ suffix = "";
+ if component != "main":
+ prefix = component + '/';
+ else:
+ prefix = "";
+ else:
+ prefix = "";
+ component = component.replace("non-US/", "");
+ if component != "main":
+ suffix = '/' + component;
+ else:
+ suffix = "";
+ for section in Cnf.ValueList("Section"):
+ projectB.query("INSERT INTO section (section) VALUES ('%s%s%s')" % (prefix, section, suffix));
+ projectB.query("COMMIT WORK");
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+
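The section loop above builds section names with the component either prefixed or suffixed, collapsing `main` and the `non-US/` marker away. The branching reads awkwardly in place; a compact standalone sketch of the same naming rule (`section_name` is a hypothetical helper for illustration):

```python
def section_name(component, section, component_position="prefix"):
    """Build a section name the way alyson's loop does: the component
    is either prefixed ("contrib/net") or suffixed ("net/contrib"),
    with "main" and the "non-US/" marker collapsed away in suffix mode.
    """
    if component_position == "prefix":
        prefix = component + '/' if component != "main" else ""
        suffix = ""
    else:
        prefix = ""
        component = component.replace("non-US/", "")
        suffix = '/' + component if component != "main" else ""
    return "%s%s%s" % (prefix, section, suffix)
```

So with `Natalie::ComponentPosition` set to `prefix`, `contrib` plus `net` becomes `contrib/net`; in suffix mode, `non-US/contrib` plus `net` becomes `net/contrib`.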
--- /dev/null
+#!/usr/bin/env python
+
+# Initial setup of an archive
+# Copyright (C) 2002, 2004 James Troup <james@nocrew.org>
+# $Id: rose,v 1.4 2004-03-11 00:20:51 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, sys;
+import utils;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+AptCnf = None;
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: rose
+Creates directories for an archive based on the katie.conf configuration file.
+
+ -h, --help show this help and exit."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def do_dir(target, config_name):
+ if os.path.exists(target):
+ if not os.path.isdir(target):
+ utils.fubar("%s (%s) is not a directory." % (target, config_name));
+ else:
+ print "Creating %s ..." % (target);
+ os.makedirs(target);
+
+def process_file(config, config_name):
+ if config.has_key(config_name):
+ target = os.path.dirname(config[config_name]);
+ do_dir(target, config_name);
+
+def process_tree(config, tree):
+ for entry in config.SubTree(tree).List():
+ entry = entry.lower();
+ if tree == "Dir":
+ if entry in [ "poolroot", "queue", "morguereject" ]:
+ continue;
+ config_name = "%s::%s" % (tree, entry);
+ target = config[config_name];
+ do_dir(target, config_name);
+
+def process_morguesubdir(subdir):
+ config_name = "%s::MorgueSubDir" % (subdir);
+ if Cnf.has_key(config_name):
+ target = os.path.join(Cnf["Dir::Morgue"], Cnf[config_name]);
+ do_dir(target, config_name);
+
+######################################################################
+
+def create_directories():
+ # Process directories from katie.conf
+ process_tree(Cnf, "Dir");
+ process_tree(Cnf, "Dir::Queue");
+ for file in [ "Dinstall::LockFile", "Melanie::LogFile", "Neve::ExportDir" ]:
+ process_file(Cnf, file);
+ for subdir in [ "Shania", "Rhona" ]:
+ process_morguesubdir(subdir);
+
+ # Process directories from apt.conf
+ process_tree(AptCnf, "Dir");
+ for tree in AptCnf.SubTree("Tree").List():
+ config_name = "Tree::%s" % (tree);
+ tree_dir = os.path.join(Cnf["Dir::Root"], tree);
+ do_dir(tree_dir, tree);
+ for file in [ "FileList", "SourceFileList" ]:
+ process_file(AptCnf, "%s::%s" % (config_name, file));
+ for component in AptCnf["%s::Sections" % (config_name)].split():
+ for architecture in AptCnf["%s::Architectures" % (config_name)].split():
+ if architecture != "source":
+ architecture = "binary-"+architecture;
+ target = os.path.join(tree_dir,component,architecture);
+ do_dir(target, "%s, %s, %s" % (tree, component, architecture));
+
+
+################################################################################
+
+def main ():
+ global AptCnf, Cnf, projectB;
+
+ Cnf = utils.get_conf()
+ Arguments = [('h',"help","Rose::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Rose::Options::%s" % (i)):
+ Cnf["Rose::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Rose::Options")
+ if Options["Help"]:
+ usage();
+
+ AptCnf = apt_pkg.newConfiguration();
+ apt_pkg.ReadConfigFileISC(AptCnf,utils.which_apt_conf_file());
+
+ create_directories();
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# DB access functions
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: db_access.py,v 1.18 2005-12-05 05:08:10 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import sys, time, types;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+suite_id_cache = {};
+section_id_cache = {};
+priority_id_cache = {};
+override_type_id_cache = {};
+architecture_id_cache = {};
+archive_id_cache = {};
+component_id_cache = {};
+location_id_cache = {};
+maintainer_id_cache = {};
+source_id_cache = {};
+files_id_cache = {};
+maintainer_cache = {};
+fingerprint_id_cache = {};
+queue_id_cache = {};
+uid_id_cache = {};
+
+################################################################################
+
+def init (config, sql):
+ global Cnf, projectB
+
+ Cnf = config;
+ projectB = sql;
+
+
+def do_query(q):
+ sys.stderr.write("query: \"%s\" ... " % (q));
+ before = time.time();
+ r = projectB.query(q);
+ time_diff = time.time()-before;
+ sys.stderr.write("took %.3f seconds.\n" % (time_diff));
+ if type(r) is int:
+ sys.stderr.write("int result: %s\n" % (r));
+ elif type(r) is types.NoneType:
+ sys.stderr.write("result: None\n");
+ else:
+ sys.stderr.write("pgresult: %s\n" % (r.getresult()));
+ return r;
+
+################################################################################
+
+def get_suite_id (suite):
+ global suite_id_cache
+
+ if suite_id_cache.has_key(suite):
+ return suite_id_cache[suite]
+
+ q = projectB.query("SELECT id FROM suite WHERE suite_name = '%s'" % (suite))
+ ql = q.getresult();
+ if not ql:
+ return -1;
+
+ suite_id = ql[0][0];
+ suite_id_cache[suite] = suite_id
+
+ return suite_id
+
+def get_section_id (section):
+ global section_id_cache
+
+ if section_id_cache.has_key(section):
+ return section_id_cache[section]
+
+ q = projectB.query("SELECT id FROM section WHERE section = '%s'" % (section))
+ ql = q.getresult();
+ if not ql:
+ return -1;
+
+ section_id = ql[0][0];
+ section_id_cache[section] = section_id
+
+ return section_id
+
+def get_priority_id (priority):
+ global priority_id_cache
+
+ if priority_id_cache.has_key(priority):
+ return priority_id_cache[priority]
+
+ q = projectB.query("SELECT id FROM priority WHERE priority = '%s'" % (priority))
+ ql = q.getresult();
+ if not ql:
+ return -1;
+
+ priority_id = ql[0][0];
+ priority_id_cache[priority] = priority_id
+
+ return priority_id
+
+def get_override_type_id (type):
+ global override_type_id_cache;
+
+ if override_type_id_cache.has_key(type):
+ return override_type_id_cache[type];
+
+ q = projectB.query("SELECT id FROM override_type WHERE type = '%s'" % (type));
+ ql = q.getresult();
+ if not ql:
+ return -1;
+
+ override_type_id = ql[0][0];
+ override_type_id_cache[type] = override_type_id;
+
+ return override_type_id;
+
+def get_architecture_id (architecture):
+ global architecture_id_cache;
+
+ if architecture_id_cache.has_key(architecture):
+ return architecture_id_cache[architecture];
+
+ q = projectB.query("SELECT id FROM architecture WHERE arch_string = '%s'" % (architecture))
+ ql = q.getresult();
+ if not ql:
+ return -1;
+
+ architecture_id = ql[0][0];
+ architecture_id_cache[architecture] = architecture_id;
+
+ return architecture_id;
+
+def get_archive_id (archive):
+ global archive_id_cache
+
+ archive = archive.lower();
+
+ if archive_id_cache.has_key(archive):
+ return archive_id_cache[archive]
+
+ q = projectB.query("SELECT id FROM archive WHERE lower(name) = '%s'" % (archive));
+ ql = q.getresult();
+ if not ql:
+ return -1;
+
+ archive_id = ql[0][0]
+ archive_id_cache[archive] = archive_id
+
+ return archive_id
+
+def get_component_id (component):
+ global component_id_cache
+
+ component = component.lower();
+
+ if component_id_cache.has_key(component):
+ return component_id_cache[component]
+
+ q = projectB.query("SELECT id FROM component WHERE lower(name) = '%s'" % (component))
+ ql = q.getresult();
+ if not ql:
+ return -1;
+
+ component_id = ql[0][0];
+ component_id_cache[component] = component_id
+
+ return component_id
+
+def get_location_id (location, component, archive):
+ global location_id_cache
+
+ cache_key = location + '~' + component + '~' + archive
+ if location_id_cache.has_key(cache_key):
+ return location_id_cache[cache_key]
+
+ archive_id = get_archive_id (archive)
+ if component != "":
+ component_id = get_component_id (component)
+ if component_id == -1:
+ return -1;
+ q = projectB.query("SELECT id FROM location WHERE path = '%s' AND component = %d AND archive = %d" % (location, component_id, archive_id))
+ else:
+ q = projectB.query("SELECT id FROM location WHERE path = '%s' AND archive = %d" % (location, archive_id))
+ ql = q.getresult();
+ if not ql:
+ return -1;
+
+ location_id = ql[0][0]
+ location_id_cache[cache_key] = location_id
+
+ return location_id
+
+def get_source_id (source, version):
+ global source_id_cache
+
+ cache_key = source + '~' + version + '~'
+ if source_id_cache.has_key(cache_key):
+ return source_id_cache[cache_key]
+
+ q = projectB.query("SELECT id FROM source s WHERE s.source = '%s' AND s.version = '%s'" % (source, version))
+
+ if not q.getresult():
+ return None
+
+ source_id = q.getresult()[0][0]
+ source_id_cache[cache_key] = source_id
+
+ return source_id
+
+################################################################################
+
+def get_or_set_maintainer_id (maintainer):
+ global maintainer_id_cache
+
+ if maintainer_id_cache.has_key(maintainer):
+ return maintainer_id_cache[maintainer]
+
+ q = projectB.query("SELECT id FROM maintainer WHERE name = '%s'" % (maintainer))
+ if not q.getresult():
+ projectB.query("INSERT INTO maintainer (name) VALUES ('%s')" % (maintainer))
+ q = projectB.query("SELECT id FROM maintainer WHERE name = '%s'" % (maintainer))
+ maintainer_id = q.getresult()[0][0]
+ maintainer_id_cache[maintainer] = maintainer_id
+
+ return maintainer_id
+
+################################################################################
+
+def get_or_set_uid_id (uid):
+ global uid_id_cache;
+
+ if uid_id_cache.has_key(uid):
+ return uid_id_cache[uid];
+
+ q = projectB.query("SELECT id FROM uid WHERE uid = '%s'" % (uid))
+ if not q.getresult():
+ projectB.query("INSERT INTO uid (uid) VALUES ('%s')" % (uid));
+ q = projectB.query("SELECT id FROM uid WHERE uid = '%s'" % (uid));
+ uid_id = q.getresult()[0][0];
+ uid_id_cache[uid] = uid_id;
+
+ return uid_id;
+
+################################################################################
+
+def get_or_set_fingerprint_id (fingerprint):
+ global fingerprint_id_cache;
+
+ if fingerprint_id_cache.has_key(fingerprint):
+ return fingerprint_id_cache[fingerprint]
+
+ q = projectB.query("SELECT id FROM fingerprint WHERE fingerprint = '%s'" % (fingerprint));
+ if not q.getresult():
+ projectB.query("INSERT INTO fingerprint (fingerprint) VALUES ('%s')" % (fingerprint));
+ q = projectB.query("SELECT id FROM fingerprint WHERE fingerprint = '%s'" % (fingerprint));
+ fingerprint_id = q.getresult()[0][0];
+ fingerprint_id_cache[fingerprint] = fingerprint_id;
+
+ return fingerprint_id;
+
+################################################################################
+
+def get_files_id (filename, size, md5sum, location_id):
+ global files_id_cache
+
+ # Returns the files.id on success, None if the file is not yet in the
+ # database, -1 if the lookup was ambiguous (more than one match) and
+ # -2 if the size or md5sum doesn't match the existing entry.
+ cache_key = "%s~%d" % (filename, location_id);
+
+ if files_id_cache.has_key(cache_key):
+ return files_id_cache[cache_key]
+
+ size = int(size);
+ q = projectB.query("SELECT id, size, md5sum FROM files WHERE filename = '%s' AND location = %d" % (filename, location_id));
+ ql = q.getresult();
+ if ql:
+ if len(ql) != 1:
+ return -1;
+ ql = ql[0];
+ orig_size = int(ql[1]);
+ orig_md5sum = ql[2];
+ if orig_size != size or orig_md5sum != md5sum:
+ return -2;
+ files_id_cache[cache_key] = ql[0];
+ return files_id_cache[cache_key]
+ else:
+ return None
+
+################################################################################
+
+def get_or_set_queue_id (queue):
+ global queue_id_cache
+
+ if queue_id_cache.has_key(queue):
+ return queue_id_cache[queue]
+
+ q = projectB.query("SELECT id FROM queue WHERE queue_name = '%s'" % (queue))
+ if not q.getresult():
+ projectB.query("INSERT INTO queue (queue_name) VALUES ('%s')" % (queue))
+ q = projectB.query("SELECT id FROM queue WHERE queue_name = '%s'" % (queue))
+ queue_id = q.getresult()[0][0]
+ queue_id_cache[queue] = queue_id
+
+ return queue_id
+
+################################################################################
+
+def set_files_id (filename, size, md5sum, location_id):
+ global files_id_cache
+
+ projectB.query("INSERT INTO files (filename, size, md5sum, location) VALUES ('%s', %d, '%s', %d)" % (filename, long(size), md5sum, location_id));
+
+ return get_files_id (filename, size, md5sum, location_id);
+
+ ### currval has issues with postgresql 7.1.3 when the table is big;
+ ### it was taking ~3 seconds to return on auric which is very Not
+ ### Cool(tm).
+ ##
+ ##q = projectB.query("SELECT id FROM files WHERE id = currval('files_id_seq')");
+ ##ql = q.getresult()[0];
+ ##cache_key = "%s~%d" % (filename, location_id);
+ ##files_id_cache[cache_key] = ql[0]
+ ##return files_id_cache[cache_key];
+
+################################################################################
+
+def get_maintainer (maintainer_id):
+ global maintainer_cache;
+
+ if not maintainer_cache.has_key(maintainer_id):
+ q = projectB.query("SELECT name FROM maintainer WHERE id = %s" % (maintainer_id));
+ maintainer_cache[maintainer_id] = q.getresult()[0][0];
+
+ return maintainer_cache[maintainer_id];
+
+################################################################################
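+
+# Example usage (illustrative sketch only -- it assumes init() has been
+# called with a parsed configuration and an open pg connection, as
+# Katie.__init__ does):
+#
+#   suite_id = db_access.get_suite_id("unstable");   # -1 if unknown
+#   maint_id = db_access.get_or_set_maintainer_id("Foo Bar <foo@debian.org>");
+#
+# The get_*() helpers are read-only lookups; the get_or_set_*() helpers
+# INSERT a row on a miss, so only use them where creating the entry is an
+# acceptable side effect.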
--- /dev/null
+#!/usr/bin/env python
+
+# Logging functions
+# Copyright (C) 2001, 2002 James Troup <james@nocrew.org>
+# $Id: logging.py,v 1.4 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, pwd, time, sys;
+import utils;
+
+################################################################################
+
+class Logger:
+ "Logger object"
+ Cnf = None;
+ logfile = None;
+ program = None;
+
+ def __init__ (self, Cnf, program, debug=0):
+ "Initialize a new Logger object"
+ self.Cnf = Cnf;
+ self.program = program;
+ # Create the log directory if it doesn't exist
+ logdir = Cnf["Dir::Log"];
+ if not os.path.exists(logdir):
+ umask = os.umask(00000);
+ os.makedirs(logdir, 02775);
+ os.umask(umask);
+ # Open the logfile
+ logfilename = "%s/%s" % (logdir, time.strftime("%Y-%m"));
+ logfile = None
+ if debug:
+ logfile = sys.stderr
+ else:
+ logfile = utils.open_file(logfilename, 'a');
+ self.logfile = logfile;
+ # Log the start of the program
+ user = pwd.getpwuid(os.getuid())[0];
+ self.log(["program start", user]);
+
+ def log (self, details):
+ "Log an event"
+ # Prepend the timestamp and program name
+ details.insert(0, self.program);
+ timestamp = time.strftime("%Y%m%d%H%M%S");
+ details.insert(0, timestamp);
+ # Force the contents of the list to be string.join-able
+ details = map(str, details);
+ # Write out the entry as a single pipe-separated line
+ self.logfile.write("|".join(details)+'\n');
+ # Flush the output to enable tail-ing
+ self.logfile.flush();
+
+ def close (self):
+ "Close a Logger object"
+ self.log(["program end"]);
+ self.logfile.flush();
+ self.logfile.close();
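+
+# Example usage (illustrative sketch only -- assumes a parsed Cnf with
+# Dir::Log pointing at a writable directory):
+#
+#   logger = Logger(Cnf, "jennifer");
+#   logger.log(["accept", changes_file]);
+#   logger.close();
+#
+# Each log() call appends one pipe-separated line of the form
+#   <timestamp>|<program>|<detail>|...
+# e.g. "20051216120000|jennifer|accept|foo_1.0-1_i386.changes".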
--- /dev/null
+#!/usr/bin/env python
+
+# Utility functions for katie
+# Copyright (C) 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
+# $Id: katie.py,v 1.59 2005-12-17 10:57:03 rmurray Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+###############################################################################
+
+import cPickle, errno, os, pg, re, stat, string, sys, time;
+import utils, db_access;
+import apt_inst, apt_pkg;
+
+from types import *;
+
+###############################################################################
+
+re_isanum = re.compile (r"^\d+$");
+re_default_answer = re.compile(r"\[(.*)\]");
+re_fdnic = re.compile(r"\n\n");
+re_bin_only_nmu = re.compile(r"\+b\d+$");
+
+###############################################################################
+
+# Convenience wrapper to carry around all the package information in one place.
+
+class Pkg:
+ def __init__(self, **kwds):
+ self.__dict__.update(kwds);
+
+ def update(self, **kwds):
+ self.__dict__.update(kwds);
+
+###############################################################################
+
+class nmu_p:
+ # Read in the group maintainer override file
+ def __init__ (self, Cnf):
+ self.group_maint = {};
+ self.Cnf = Cnf;
+ if Cnf.get("Dinstall::GroupOverrideFilename"):
+ filename = Cnf["Dir::Override"] + Cnf["Dinstall::GroupOverrideFilename"];
+ file = utils.open_file(filename);
+ for line in file.readlines():
+ line = utils.re_comments.sub('', line).lower().strip();
+ if line != "":
+ self.group_maint[line] = 1;
+ file.close();
+
+ def is_an_nmu (self, pkg):
+ Cnf = self.Cnf;
+ changes = pkg.changes;
+ dsc = pkg.dsc;
+
+ i = utils.fix_maintainer (dsc.get("maintainer",
+ Cnf["Dinstall::MyEmailAddress"]).lower());
+ (dsc_rfc822, dsc_rfc2047, dsc_name, dsc_email) = i;
+ # changes["changedbyname"] == dsc_name is probably never true, but better safe than sorry
+ if dsc_name == changes["maintainername"].lower() and \
+ (changes["changedby822"] == "" or changes["changedbyname"].lower() == dsc_name):
+ return 0;
+
+ if dsc.has_key("uploaders"):
+ uploaders = dsc["uploaders"].lower().split(",");
+ uploadernames = {};
+ for i in uploaders:
+ (rfc822, rfc2047, name, email) = utils.fix_maintainer (i.strip());
+ uploadernames[name] = "";
+ if uploadernames.has_key(changes["changedbyname"].lower()):
+ return 0;
+
+ # Some group-maintained packages (e.g. Debian QA) are never NMUs
+ if self.group_maint.has_key(changes["maintaineremail"].lower()):
+ return 0;
+
+ return 1;
+
+###############################################################################
+
+class Katie:
+
+ def __init__(self, Cnf):
+ self.Cnf = Cnf;
+ # Read in the group-maint override file
+ self.nmu = nmu_p(Cnf);
+ self.accept_count = 0;
+ self.accept_bytes = 0L;
+ self.pkg = Pkg(changes = {}, dsc = {}, dsc_files = {}, files = {},
+ legacy_source_untouchable = {});
+
+ # Initialize the substitution template mapping
+ Subst = self.Subst = {};
+ Subst["__ADMIN_ADDRESS__"] = Cnf["Dinstall::MyAdminAddress"];
+ Subst["__BUG_SERVER__"] = Cnf["Dinstall::BugServer"];
+ Subst["__DISTRO__"] = Cnf["Dinstall::MyDistribution"];
+ Subst["__KATIE_ADDRESS__"] = Cnf["Dinstall::MyEmailAddress"];
+
+ self.projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, self.projectB);
+
+ ###########################################################################
+
+ def init_vars (self):
+ for i in [ "changes", "dsc", "files", "dsc_files", "legacy_source_untouchable" ]:
+ exec "self.pkg.%s.clear();" % (i);
+ self.pkg.orig_tar_id = None;
+ self.pkg.orig_tar_location = "";
+ self.pkg.orig_tar_gz = None;
+
+ ###########################################################################
+
+ def update_vars (self):
+ dump_filename = self.pkg.changes_file[:-8]+".katie";
+ dump_file = utils.open_file(dump_filename);
+ p = cPickle.Unpickler(dump_file);
+ for i in [ "changes", "dsc", "files", "dsc_files", "legacy_source_untouchable" ]:
+ exec "self.pkg.%s.update(p.load());" % (i);
+ for i in [ "orig_tar_id", "orig_tar_location" ]:
+ exec "self.pkg.%s = p.load();" % (i);
+ dump_file.close();
+
+ ###########################################################################
+
+ # This could just dump the dictionaries as is, but I'd like to avoid
+ # this so there's some idea of what katie & lisa use from jennifer
+
+ def dump_vars(self, dest_dir):
+ for i in [ "changes", "dsc", "files", "dsc_files",
+ "legacy_source_untouchable", "orig_tar_id", "orig_tar_location" ]:
+ exec "%s = self.pkg.%s;" % (i,i);
+ dump_filename = os.path.join(dest_dir,self.pkg.changes_file[:-8] + ".katie");
+ dump_file = utils.open_file(dump_filename, 'w');
+ try:
+ os.chmod(dump_filename, 0660);
+ except OSError, e:
+ if errno.errorcode[e.errno] == 'EPERM':
+ perms = stat.S_IMODE(os.stat(dump_filename)[stat.ST_MODE]);
+ if perms & stat.S_IROTH:
+ utils.fubar("%s is world readable and chmod failed." % (dump_filename));
+ else:
+ raise;
+
+ p = cPickle.Pickler(dump_file, 1);
+ for i in [ "d_changes", "d_dsc", "d_files", "d_dsc_files" ]:
+ exec "%s = {}" % i;
+ ## files
+ for file in files.keys():
+ d_files[file] = {};
+ for i in [ "package", "version", "architecture", "type", "size",
+ "md5sum", "component", "location id", "source package",
+ "source version", "maintainer", "dbtype", "files id",
+ "new", "section", "priority", "othercomponents",
+ "pool name", "original component" ]:
+ if files[file].has_key(i):
+ d_files[file][i] = files[file][i];
+ ## changes
+ # Mandatory changes fields
+ for i in [ "distribution", "source", "architecture", "version",
+ "maintainer", "urgency", "fingerprint", "changedby822",
+ "changedby2047", "changedbyname", "maintainer822",
+ "maintainer2047", "maintainername", "maintaineremail",
+ "closes", "changes" ]:
+ d_changes[i] = changes[i];
+ # Optional changes fields
+ for i in [ "changed-by", "filecontents", "format", "lisa note", "distribution-version" ]:
+ if changes.has_key(i):
+ d_changes[i] = changes[i];
+ ## dsc
+ for i in [ "source", "version", "maintainer", "fingerprint",
+ "uploaders", "bts changelog" ]:
+ if dsc.has_key(i):
+ d_dsc[i] = dsc[i];
+ ## dsc_files
+ for file in dsc_files.keys():
+ d_dsc_files[file] = {};
+ # Mandatory dsc_files fields
+ for i in [ "size", "md5sum" ]:
+ d_dsc_files[file][i] = dsc_files[file][i];
+ # Optional dsc_files fields
+ for i in [ "files id" ]:
+ if dsc_files[file].has_key(i):
+ d_dsc_files[file][i] = dsc_files[file][i];
+
+ for i in [ d_changes, d_dsc, d_files, d_dsc_files,
+ legacy_source_untouchable, orig_tar_id, orig_tar_location ]:
+ p.dump(i);
+ dump_file.close();
+
+ ###########################################################################
+
+ # Set up the per-package template substitution mappings
+
+ def update_subst (self, reject_message = ""):
+ Subst = self.Subst;
+ changes = self.pkg.changes;
+ # If jennifer crashed out in the right place, architecture may still be a string.
+ if not changes.has_key("architecture") or not isinstance(changes["architecture"], DictType):
+ changes["architecture"] = { "Unknown" : "" };
+ # and maintainer2047 may not exist.
+ if not changes.has_key("maintainer2047"):
+ changes["maintainer2047"] = self.Cnf["Dinstall::MyEmailAddress"];
+
+ Subst["__ARCHITECTURE__"] = " ".join(changes["architecture"].keys());
+ Subst["__CHANGES_FILENAME__"] = os.path.basename(self.pkg.changes_file);
+ Subst["__FILE_CONTENTS__"] = changes.get("filecontents", "");
+
+ # For source uploads the Changed-By field wins; otherwise Maintainer wins.
+ if changes["architecture"].has_key("source") and changes["changedby822"] != "" and (changes["changedby822"] != changes["maintainer822"]):
+ Subst["__MAINTAINER_FROM__"] = changes["changedby2047"];
+ Subst["__MAINTAINER_TO__"] = "%s, %s" % (changes["changedby2047"],
+ changes["maintainer2047"]);
+ Subst["__MAINTAINER__"] = changes.get("changed-by", "Unknown");
+ else:
+ Subst["__MAINTAINER_FROM__"] = changes["maintainer2047"];
+ Subst["__MAINTAINER_TO__"] = changes["maintainer2047"];
+ Subst["__MAINTAINER__"] = changes.get("maintainer", "Unknown");
+ if self.Cnf.has_key("Dinstall::TrackingServer") and changes.has_key("source"):
+ Subst["__MAINTAINER_TO__"] += "\nBcc: %s@%s" % (changes["source"], self.Cnf["Dinstall::TrackingServer"])
+
+ # Apply any global override of the Maintainer field
+ if self.Cnf.get("Dinstall::OverrideMaintainer"):
+ Subst["__MAINTAINER_TO__"] = self.Cnf["Dinstall::OverrideMaintainer"];
+ Subst["__MAINTAINER_FROM__"] = self.Cnf["Dinstall::OverrideMaintainer"];
+
+ Subst["__REJECT_MESSAGE__"] = reject_message;
+ Subst["__SOURCE__"] = changes.get("source", "Unknown");
+ Subst["__VERSION__"] = changes.get("version", "Unknown");
+
+ ###########################################################################
+
+ def build_summaries(self):
+ changes = self.pkg.changes;
+ files = self.pkg.files;
+
+ byhand = summary = new = "";
+
+ # changes["distribution"] may not exist in corner cases
+ # (e.g. unreadable changes files)
+ if not changes.has_key("distribution") or not isinstance(changes["distribution"], DictType):
+ changes["distribution"] = {};
+
+ file_keys = files.keys();
+ file_keys.sort();
+ for file in file_keys:
+ if files[file].has_key("byhand"):
+ byhand = 1
+ summary += file + " byhand\n"
+ elif files[file].has_key("new"):
+ new = 1
+ summary += "(new) %s %s %s\n" % (file, files[file]["priority"], files[file]["section"])
+ if files[file].has_key("othercomponents"):
+ summary += "WARNING: Already present in %s distribution.\n" % (files[file]["othercomponents"])
+ if files[file]["type"] == "deb":
+ deb_fh = utils.open_file(file)
+ summary += apt_pkg.ParseSection(apt_inst.debExtractControl(deb_fh))["Description"] + '\n';
+ deb_fh.close()
+ else:
+ files[file]["pool name"] = utils.poolify (changes.get("source",""), files[file]["component"])
+ destination = self.Cnf["Dir::PoolRoot"] + files[file]["pool name"] + file
+ summary += file + "\n to " + destination + "\n"
+
+ short_summary = summary;
+
+ # This is for direport's benefit...
+ f = re_fdnic.sub("\n .\n", changes.get("changes",""));
+
+ if byhand or new:
+ summary += "Changes: " + f;
+
+ summary += self.announce(short_summary, 0)
+
+ return (summary, short_summary);
+
+ ###########################################################################
+
+ def close_bugs (self, summary, action):
+ changes = self.pkg.changes;
+ Subst = self.Subst;
+ Cnf = self.Cnf;
+
+ bugs = changes["closes"].keys();
+
+ if not bugs:
+ return summary;
+
+ bugs.sort();
+ if not self.nmu.is_an_nmu(self.pkg):
+ if changes["distribution"].has_key("experimental"):
+ # tag bugs as fixed-in-experimental for uploads to experimental
+ summary += "Setting bugs to severity fixed: ";
+ control_message = "";
+ for bug in bugs:
+ summary += "%s " % (bug);
+ control_message += "tag %s + fixed-in-experimental\n" % (bug);
+ if action and control_message != "":
+ Subst["__CONTROL_MESSAGE__"] = control_message;
+ mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.bug-experimental-fixed");
+ utils.send_mail (mail_message);
+ if action:
+ self.Logger.log(["setting bugs to fixed"]+bugs);
+ else:
+ summary += "Closing bugs: ";
+ for bug in bugs:
+ summary += "%s " % (bug);
+ if action:
+ Subst["__BUG_NUMBER__"] = bug;
+ if changes["distribution"].has_key("stable"):
+ Subst["__STABLE_WARNING__"] = """
+Note that this package is not part of the released stable Debian
+distribution. It may have dependencies on other unreleased software,
+or other instabilities. Please take care if you wish to install it.
+The update will eventually make its way into the next released Debian
+distribution.""";
+ else:
+ Subst["__STABLE_WARNING__"] = "";
+ mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.bug-close");
+ utils.send_mail (mail_message);
+ if action:
+ self.Logger.log(["closing bugs"]+bugs);
+
+ else: # NMU
+ summary += "Setting bugs to severity fixed: ";
+ control_message = "";
+ for bug in bugs:
+ summary += "%s " % (bug);
+ control_message += "tag %s + fixed\n" % (bug);
+ if action and control_message != "":
+ Subst["__CONTROL_MESSAGE__"] = control_message;
+ mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.bug-nmu-fixed");
+ utils.send_mail (mail_message);
+ if action:
+ self.Logger.log(["setting bugs to fixed"]+bugs);
+ summary += "\n";
+ return summary;
+
+ ###########################################################################
+
+ def announce (self, short_summary, action):
+ Subst = self.Subst;
+ Cnf = self.Cnf;
+ changes = self.pkg.changes;
+
+ # Only do announcements for source uploads with a recent dpkg-dev installed
+ if float(changes.get("format", 0)) < 1.6 or not changes["architecture"].has_key("source"):
+ return "";
+
+ lists_done = {};
+ summary = "";
+ Subst["__SHORT_SUMMARY__"] = short_summary;
+
+ for dist in changes["distribution"].keys():
+ list = Cnf.Find("Suite::%s::Announce" % (dist));
+ if list == "" or lists_done.has_key(list):
+ continue;
+ lists_done[list] = 1;
+ summary += "Announcing to %s\n" % (list);
+
+ if action:
+ Subst["__ANNOUNCE_LIST_ADDRESS__"] = list;
+ if Cnf.get("Dinstall::TrackingServer") and changes["architecture"].has_key("source"):
+ Subst["__ANNOUNCE_LIST_ADDRESS__"] = Subst["__ANNOUNCE_LIST_ADDRESS__"] + "\nBcc: %s@%s" % (changes["source"], Cnf["Dinstall::TrackingServer"]);
+ mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.announce");
+ utils.send_mail (mail_message);
+
+ if Cnf.FindB("Dinstall::CloseBugs"):
+ summary = self.close_bugs(summary, action);
+
+ return summary;
+
+ ###########################################################################
+
+ def accept (self, summary, short_summary):
+ Cnf = self.Cnf;
+ Subst = self.Subst;
+ files = self.pkg.files;
+ changes = self.pkg.changes;
+ changes_file = self.pkg.changes_file;
+ dsc = self.pkg.dsc;
+
+ print "Accepting."
+ self.Logger.log(["Accepting changes",changes_file]);
+
+ self.dump_vars(Cnf["Dir::Queue::Accepted"]);
+
+ # Move all the files into the accepted directory
+ utils.move(changes_file, Cnf["Dir::Queue::Accepted"]);
+ file_keys = files.keys();
+ for file in file_keys:
+ utils.move(file, Cnf["Dir::Queue::Accepted"]);
+ self.accept_bytes += float(files[file]["size"])
+ self.accept_count += 1;
+
+ # Send accept mail, announce to lists, close bugs and check for
+ # override disparities
+ if not Cnf["Dinstall::Options::No-Mail"]:
+ Subst["__SUITE__"] = "";
+ Subst["__SUMMARY__"] = summary;
+ mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.accepted");
+ utils.send_mail(mail_message)
+ self.announce(short_summary, 1)
+
+ ## Helper stuff for DebBugs Version Tracking
+ if Cnf.Find("Dir::Queue::BTSVersionTrack"):
+ # ??? once queue/* is cleared on *.d.o and/or reprocessed
+ # the conditionalization on dsc["bts changelog"] should be
+ # dropped.
+
+ # Write out the version history from the changelog
+ if changes["architecture"].has_key("source") and \
+ dsc.has_key("bts changelog"):
+
+ temp_filename = utils.temp_filename(Cnf["Dir::Queue::BTSVersionTrack"],
+ dotprefix=1, perms=0644);
+ version_history = utils.open_file(temp_filename, 'w');
+ version_history.write(dsc["bts changelog"]);
+ version_history.close();
+ filename = "%s/%s" % (Cnf["Dir::Queue::BTSVersionTrack"],
+ changes_file[:-8]+".versions");
+ os.rename(temp_filename, filename);
+
+ # Write out the binary -> source mapping.
+ temp_filename = utils.temp_filename(Cnf["Dir::Queue::BTSVersionTrack"],
+ dotprefix=1, perms=0644);
+ debinfo = utils.open_file(temp_filename, 'w');
+ for file in file_keys:
+ f = files[file];
+ if f["type"] == "deb":
+ line = " ".join([f["package"], f["version"],
+ f["architecture"], f["source package"],
+ f["source version"]]);
+ debinfo.write(line+"\n");
+ debinfo.close();
+ filename = "%s/%s" % (Cnf["Dir::Queue::BTSVersionTrack"],
+ changes_file[:-8]+".debinfo");
+ os.rename(temp_filename, filename);
+
+ self.queue_build("accepted", Cnf["Dir::Queue::Accepted"])
+
+ ###########################################################################
+
+ def queue_build (self, queue, path):
+ Cnf = self.Cnf
+ Subst = self.Subst
+ files = self.pkg.files
+ changes = self.pkg.changes
+ changes_file = self.pkg.changes_file
+ dsc = self.pkg.dsc
+ file_keys = files.keys()
+
+ ## Special support to enable clean auto-building of queued packages
+ queue_id = db_access.get_or_set_queue_id(queue)
+
+ self.projectB.query("BEGIN WORK");
+ for suite in changes["distribution"].keys():
+ if suite not in Cnf.ValueList("Dinstall::QueueBuildSuites"):
+ continue;
+ suite_id = db_access.get_suite_id(suite);
+ dest_dir = Cnf["Dir::QueueBuild"];
+ if Cnf.FindB("Dinstall::SecurityQueueBuild"):
+ dest_dir = os.path.join(dest_dir, suite);
+ for file in file_keys:
+ src = os.path.join(path, file);
+ dest = os.path.join(dest_dir, file);
+ if Cnf.FindB("Dinstall::SecurityQueueBuild"):
+ # Copy it since the original won't be readable by www-data
+ utils.copy(src, dest);
+ else:
+ # Create a symlink to it
+ os.symlink(src, dest);
+ # Add it to the list of packages for later processing by apt-ftparchive
+ self.projectB.query("INSERT INTO queue_build (suite, queue, filename, in_queue) VALUES (%s, %s, '%s', 't')" % (suite_id, queue_id, dest));
+ # If the .orig.tar.gz is in the pool, create a symlink to
+ # it (if one doesn't already exist)
+ if self.pkg.orig_tar_id:
+ # Determine the .orig.tar.gz file name
+ for dsc_file in self.pkg.dsc_files.keys():
+ if dsc_file.endswith(".orig.tar.gz"):
+ filename = dsc_file;
+ dest = os.path.join(dest_dir, filename);
+ # If it doesn't exist, create a symlink
+ if not os.path.exists(dest):
+ # Find the .orig.tar.gz in the pool
+ q = self.projectB.query("SELECT l.path, f.filename from location l, files f WHERE f.id = %s and f.location = l.id" % (self.pkg.orig_tar_id));
+ ql = q.getresult();
+ if not ql:
+ utils.fubar("[INTERNAL ERROR] Couldn't find id %s in files table." % (self.pkg.orig_tar_id));
+ src = os.path.join(ql[0][0], ql[0][1]);
+ os.symlink(src, dest);
+ # Add it to the list of packages for later processing by apt-ftparchive
+ self.projectB.query("INSERT INTO queue_build (suite, queue, filename, in_queue) VALUES (%s, %s, '%s', 't')" % (suite_id, queue_id, dest));
+ # if it does, update things to ensure it's not removed prematurely
+ else:
+ self.projectB.query("UPDATE queue_build SET in_queue = 't', last_used = NULL WHERE filename = '%s' AND suite = %s" % (dest, suite_id));
+
+ self.projectB.query("COMMIT WORK");
+
+ ###########################################################################
+
+ def check_override (self):
+ Subst = self.Subst;
+ changes = self.pkg.changes;
+ files = self.pkg.files;
+ Cnf = self.Cnf;
+
+ # Abandon the check if:
+ # a) it's a non-sourceful upload
+ # b) override disparity checks have been disabled
+ # c) we're not sending mail
+ if not changes["architecture"].has_key("source") or \
+ not Cnf.FindB("Dinstall::OverrideDisparityCheck") or \
+ Cnf["Dinstall::Options::No-Mail"]:
+ return;
+
+ summary = "";
+ file_keys = files.keys();
+ file_keys.sort();
+ for file in file_keys:
+ if not files[file].has_key("new") and files[file]["type"] == "deb":
+ section = files[file]["section"];
+ override_section = files[file]["override section"];
+ if section.lower() != override_section.lower() and section != "-":
+ # Ignore this; it's a common mistake and not worth whining about
+ if section.lower() == "non-us/main" and override_section.lower() == "non-us":
+ continue;
+ summary += "%s: package says section is %s, override says %s.\n" % (file, section, override_section);
+ priority = files[file]["priority"];
+ override_priority = files[file]["override priority"];
+ if priority != override_priority and priority != "-":
+ summary += "%s: package says priority is %s, override says %s.\n" % (file, priority, override_priority);
+
+ if summary == "":
+ return;
+
+ Subst["__SUMMARY__"] = summary;
+ mail_message = utils.TemplateSubst(Subst,self.Cnf["Dir::Templates"]+"/jennifer.override-disparity");
+ utils.send_mail(mail_message);
+
+ ###########################################################################
+
+ def force_reject (self, files):
+ """Forcefully move files from the current directory to the
+ reject directory. If any file already exists in the reject
+ directory it will be moved to the morgue to make way for
+ the new file."""
+
+ Cnf = self.Cnf
+
+ for file in files:
+ # Skip any files which don't exist or which we don't have permission to copy.
+ if os.access(file,os.R_OK) == 0:
+ continue;
+ dest_file = os.path.join(Cnf["Dir::Queue::Reject"], file);
+ try:
+ dest_fd = os.open(dest_file, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
+ except OSError, e:
+ # File exists? Let's try and move it to the morgue
+ if errno.errorcode[e.errno] == 'EEXIST':
+ morgue_file = os.path.join(Cnf["Dir::Morgue"],Cnf["Dir::MorgueReject"],file);
+ try:
+ morgue_file = utils.find_next_free(morgue_file);
+ except utils.tried_too_hard_exc:
+ # Something's either gone badly Pete Tong, or
+ # someone is trying to exploit us.
+ utils.warn("**WARNING** failed to move %s from the reject directory to the morgue." % (file));
+ return;
+ utils.move(dest_file, morgue_file, perms=0660);
+ try:
+ dest_fd = os.open(dest_file, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
+ except OSError, e:
+ # Likewise
+ utils.warn("**WARNING** failed to claim %s in the reject directory." % (file));
+ return;
+ else:
+ raise;
+ # If we got here, we own the destination file, so we can
+ # safely overwrite it.
+ utils.move(file, dest_file, 1, perms=0660);
+ os.close(dest_fd)
+
+ ###########################################################################
+
+ def do_reject (self, manual = 0, reject_message = ""):
+ # If we weren't given a manual rejection message, spawn an
+ # editor so the user can add one in...
+ if manual and not reject_message:
+ temp_filename = utils.temp_filename();
+ editor = os.environ.get("EDITOR","vi")
+ answer = 'E';
+ while answer == 'E':
+ os.system("%s %s" % (editor, temp_filename))
+ temp_fh = utils.open_file(temp_filename);
+ reject_message = "".join(temp_fh.readlines());
+ temp_fh.close();
+ print "Reject message:";
+ print utils.prefix_multi_line_string(reject_message," ",include_blank_lines=1);
+ prompt = "[R]eject, Edit, Abandon, Quit ?"
+ answer = "XXX";
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = re_default_answer.search(prompt);
+ if answer == "":
+ answer = m.group(1);
+ answer = answer[:1].upper();
+ os.unlink(temp_filename);
+ if answer == 'A':
+ return 1;
+ elif answer == 'Q':
+ sys.exit(0);
+
+ print "Rejecting.\n"
+
+ Cnf = self.Cnf;
+ Subst = self.Subst;
+ pkg = self.pkg;
+
+ reason_filename = pkg.changes_file[:-8] + ".reason";
+ reason_filename = Cnf["Dir::Queue::Reject"] + '/' + reason_filename;
+
+ # Move all the files into the reject directory
+ reject_files = pkg.files.keys() + [pkg.changes_file];
+ self.force_reject(reject_files);
+
+ # If we fail here someone is probably trying to exploit the race
+ # so let's just raise an exception ...
+ if os.path.exists(reason_filename):
+ os.unlink(reason_filename);
+ reason_fd = os.open(reason_filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
+
+ if not manual:
+ Subst["__REJECTOR_ADDRESS__"] = Cnf["Dinstall::MyEmailAddress"];
+ Subst["__MANUAL_REJECT_MESSAGE__"] = "";
+ Subst["__CC__"] = "X-Katie-Rejection: automatic (moo)";
+ os.write(reason_fd, reject_message);
+ reject_mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/katie.rejected");
+ else:
+ # Build up the rejection email
+ user_email_address = utils.whoami() + " <%s>" % (Cnf["Dinstall::MyAdminAddress"]);
+
+ Subst["__REJECTOR_ADDRESS__"] = user_email_address;
+ Subst["__MANUAL_REJECT_MESSAGE__"] = reject_message;
+ Subst["__CC__"] = "Cc: " + Cnf["Dinstall::MyEmailAddress"];
+ reject_mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/katie.rejected");
+ # Write the rejection email out as the <foo>.reason file
+ os.write(reason_fd, reject_mail_message);
+
+ os.close(reason_fd)
+
+ # Send the rejection mail if appropriate
+ if not Cnf["Dinstall::Options::No-Mail"]:
+ utils.send_mail(reject_mail_message);
+
+ self.Logger.log(["rejected", pkg.changes_file]);
+ return 0;
+
+ ################################################################################
+
+ # Ensure that source exists somewhere in the archive for the binary
+ # upload being processed.
+ #
+ # (1) exact match => 1.0-3
+ # (2) Bin-only NMU => 1.0-3+b1 , 1.0-3.1+b1
+
+ def source_exists (self, package, source_version, suites = ["any"]):
+ okay = 1
+ for suite in suites:
+ if suite == "any":
+ que = "SELECT s.version FROM source s WHERE s.source = '%s'" % \
+ (package)
+ else:
+ # source must exist in suite X, or in some other suite that's
+ # mapped to X, recursively... silent-maps are counted too,
+ # unreleased-maps aren't.
+ maps = self.Cnf.ValueList("SuiteMappings")[:]
+ maps.reverse()
+ maps = [ m.split() for m in maps ]
+ maps = [ (x[1], x[2]) for x in maps
+ if x[0] == "map" or x[0] == "silent-map" ]
+ s = [suite]
+ for x in maps:
+ if x[1] in s and x[0] not in s:
+ s.append(x[0])
+
+ que = "SELECT s.version FROM source s JOIN src_associations sa ON (s.id = sa.source) JOIN suite su ON (sa.suite = su.id) WHERE s.source = '%s' AND (%s)" % (package, string.join(["su.suite_name = '%s'" % a for a in s], " OR "));
+ q = self.projectB.query(que)
+
+ # Reduce the query results to a list of version numbers
+ ql = map(lambda x: x[0], q.getresult());
+
+ # Try (1)
+ if source_version in ql:
+ continue
+
+ # Try (2)
+ orig_source_version = re_bin_only_nmu.sub('', source_version)
+ if orig_source_version in ql:
+ continue
+
+ # No source found...
+ okay = 0
+ break
+ return okay
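The bin-only NMU fallback in try (2) strips the binary-NMU suffix from the version before re-checking the result list. The real `re_bin_only_nmu` pattern is defined elsewhere in katie, so the `+bN` suffix form below is an assumption; a minimal Python 3 sketch:

```python
import re

# Assumed pattern for the binary-NMU suffix; the actual re_bin_only_nmu
# used by katie is defined elsewhere and may differ.
re_bin_only_nmu = re.compile(r"\+b\d+$")

def orig_source_version(version):
    # "1.0-3+b1" was built from source version "1.0-3";
    # versions without a +bN suffix pass through unchanged.
    return re_bin_only_nmu.sub("", version)
```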
+
+ ################################################################################
+
+ def in_override_p (self, package, component, suite, binary_type, file):
+ files = self.pkg.files;
+
+ if binary_type == "": # must be source
+ type = "dsc";
+ else:
+ type = binary_type;
+
+ # Override suite name; used for example with proposed-updates
+ if self.Cnf.Find("Suite::%s::OverrideSuite" % (suite)) != "":
+ suite = self.Cnf["Suite::%s::OverrideSuite" % (suite)];
+
+ # Avoid <undef> on unknown distributions
+ suite_id = db_access.get_suite_id(suite);
+ if suite_id == -1:
+ return None;
+ component_id = db_access.get_component_id(component);
+ type_id = db_access.get_override_type_id(type);
+
+ # FIXME: nasty non-US specific hack
+ if component.lower().startswith("non-us/"):
+ component = component[7:];
+
+ q = self.projectB.query("SELECT s.section, p.priority FROM override o, section s, priority p WHERE package = '%s' AND suite = %s AND component = %s AND type = %s AND o.section = s.id AND o.priority = p.id"
+ % (package, suite_id, component_id, type_id));
+ result = q.getresult();
+ # If checking for a source package fall back on the binary override type
+ if type == "dsc" and not result:
+ deb_type_id = db_access.get_override_type_id("deb");
+ udeb_type_id = db_access.get_override_type_id("udeb");
+ q = self.projectB.query("SELECT s.section, p.priority FROM override o, section s, priority p WHERE package = '%s' AND suite = %s AND component = %s AND (type = %s OR type = %s) AND o.section = s.id AND o.priority = p.id"
+ % (package, suite_id, component_id, deb_type_id, udeb_type_id));
+ result = q.getresult();
+
+ # Remember the section and priority so we can check them later if appropriate
+ if result:
+ files[file]["override section"] = result[0][0];
+ files[file]["override priority"] = result[0][1];
+
+ return result;
+
+ ################################################################################
+
+ def reject (self, str, prefix="Rejected: "):
+ if str:
+ # Unlike other rejects we add new lines first to avoid trailing
+ # new lines when this message is passed back up to a caller.
+ if self.reject_message:
+ self.reject_message += "\n";
+ self.reject_message += prefix + str;
+
+ ################################################################################
+
+ def get_anyversion(self, query_result, suite):
+ anyversion=None
+ anysuite = [suite] + self.Cnf.ValueList("Suite::%s::VersionChecks::Enhances" % (suite))
+ for (v, s) in query_result:
+ if s in [ string.lower(x) for x in anysuite ]:
+ if not anyversion or apt_pkg.VersionCompare(anyversion, v) <= 0:
+ anyversion=v
+ return anyversion
+
+ ################################################################################
+
+ def cross_suite_version_check(self, query_result, file, new_version):
+ """Ensure versions are newer than existing packages in target
+ suites and that cross-suite version checking rules as
+ set out in the conf file are satisfied."""
+
+ # Check versions for each target suite
+ for target_suite in self.pkg.changes["distribution"].keys():
+ must_be_newer_than = map(string.lower, self.Cnf.ValueList("Suite::%s::VersionChecks::MustBeNewerThan" % (target_suite)));
+ must_be_older_than = map(string.lower, self.Cnf.ValueList("Suite::%s::VersionChecks::MustBeOlderThan" % (target_suite)));
+ # Enforce "must be newer than target suite" even if conffile omits it
+ if target_suite not in must_be_newer_than:
+ must_be_newer_than.append(target_suite);
+ for entry in query_result:
+ existent_version = entry[0];
+ suite = entry[1];
+ if suite in must_be_newer_than and \
+ apt_pkg.VersionCompare(new_version, existent_version) < 1:
+ self.reject("%s: old version (%s) in %s >= new version (%s) targeted at %s." % (file, existent_version, suite, new_version, target_suite));
+ if suite in must_be_older_than and \
+ apt_pkg.VersionCompare(new_version, existent_version) > -1:
+ ch = self.pkg.changes
+ cansave = 0
+ if ch.get('distribution-version', {}).has_key(suite):
+ # we really use the other suite, ignoring the conflicting one ...
+ addsuite = ch["distribution-version"][suite]
+
+ add_version = self.get_anyversion(query_result, addsuite)
+ target_version = self.get_anyversion(query_result, target_suite)
+
+ if not add_version:
+ # not add_version can only happen if we map to a suite
+ # that doesn't enhance the suite we're propup'ing from.
+ # so "propup-ver x a b c; map a d" is a problem only if
+ # d doesn't enhance a.
+ #
+ # i think we could always propagate in this case, rather
+ # than complaining. either way, this isn't a REJECT issue
+ #
+ # And - we really should complain to the dorks who configured dak
+ self.reject("%s is mapped to, but not enhanced by %s - adding anyway" % (suite, addsuite), "Warning: ")
+ self.pkg.changes.setdefault("propdistribution", {})
+ self.pkg.changes["propdistribution"][addsuite] = 1
+ cansave = 1
+ elif not target_version:
+ # not target_version is true when the package is NEW
+ # we could just stick with the "...old version..." REJECT
+ # for this, I think.
+ self.reject("Won't propagate NEW packages.")
+ elif apt_pkg.VersionCompare(new_version, add_version) < 0:
+ # propagation would be redundant. no need to reject though.
+ self.reject("ignoring version conflict: %s: old version (%s) in %s <= new version (%s) targeted at %s." % (file, existent_version, suite, new_version, target_suite), "Warning: ")
+ cansave = 1
+ elif apt_pkg.VersionCompare(new_version, add_version) > 0 and \
+ apt_pkg.VersionCompare(add_version, target_version) >= 0:
+ # propagate!!
+ self.reject("Propagating upload to %s" % (addsuite), "Warning: ")
+ self.pkg.changes.setdefault("propdistribution", {})
+ self.pkg.changes["propdistribution"][addsuite] = 1
+ cansave = 1
+
+ if not cansave:
+ self.reject("%s: old version (%s) in %s <= new version (%s) targeted at %s." % (file, existent_version, suite, new_version, target_suite))
+
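The "must be newer than" half of cross_suite_version_check reduces to a plain comparison against every existing copy in the flagged suites. A simplified, self-contained Python 3 sketch, with a caller-supplied comparator standing in for apt_pkg.VersionCompare:

```python
def check_must_be_newer(new_version, existing, must_be_newer_than, cmp_versions):
    """Sketch of the first rule above: the upload must be strictly newer
    than every existing copy in the 'must be newer than' suites.
    existing is a list of (version, suite) rows; cmp_versions returns
    <0, 0 or >0 like apt_pkg.VersionCompare."""
    rejects = []
    for version, suite in existing:
        if suite in must_be_newer_than and cmp_versions(new_version, version) < 1:
            rejects.append("old version (%s) in %s >= new version (%s)"
                           % (version, suite, new_version))
    return rejects
```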
+ ################################################################################
+
+ def check_binary_against_db(self, file):
+ self.reject_message = "";
+ files = self.pkg.files;
+
+ # Ensure version is sane
+ q = self.projectB.query("""
+SELECT b.version, su.suite_name FROM binaries b, bin_associations ba, suite su,
+ architecture a
+ WHERE b.package = '%s' AND (a.arch_string = '%s' OR a.arch_string = 'all')
+ AND ba.bin = b.id AND ba.suite = su.id AND b.architecture = a.id"""
+ % (files[file]["package"],
+ files[file]["architecture"]));
+ self.cross_suite_version_check(q.getresult(), file, files[file]["version"]);
+
+ # Check for any existing copies of the file
+ q = self.projectB.query("""
+SELECT b.id FROM binaries b, architecture a
+ WHERE b.package = '%s' AND b.version = '%s' AND a.arch_string = '%s'
+ AND a.id = b.architecture"""
+ % (files[file]["package"],
+ files[file]["version"],
+ files[file]["architecture"]))
+ if q.getresult():
+ self.reject("%s: can not overwrite existing copy already in the archive." % (file));
+
+ return self.reject_message;
+
+ ################################################################################
+
+ def check_source_against_db(self, file):
+ self.reject_message = "";
+ dsc = self.pkg.dsc;
+
+ # Ensure version is sane
+ q = self.projectB.query("""
+SELECT s.version, su.suite_name FROM source s, src_associations sa, suite su
+ WHERE s.source = '%s' AND sa.source = s.id AND sa.suite = su.id""" % (dsc.get("source")));
+ self.cross_suite_version_check(q.getresult(), file, dsc.get("version"));
+
+ return self.reject_message;
+
+ ################################################################################
+
+ # **WARNING**
+ # NB: this function can remove entries from the 'files' index [if
+ # the .orig.tar.gz is a duplicate of the one in the archive]; if
+ # you're iterating over 'files' and call this function as part of
+ # the loop, be sure to add a check to the top of the loop to
+ # ensure you haven't just tried to dereference the deleted entry.
+ # **WARNING**
+
+ def check_dsc_against_db(self, file):
+ self.reject_message = "";
+ files = self.pkg.files;
+ dsc_files = self.pkg.dsc_files;
+ legacy_source_untouchable = self.pkg.legacy_source_untouchable;
+ self.pkg.orig_tar_gz = None;
+
+ # Try and find all files mentioned in the .dsc. This has
+ # to work harder to cope with the multiple possible
+ # locations of an .orig.tar.gz.
+ for dsc_file in dsc_files.keys():
+ found = None;
+ if files.has_key(dsc_file):
+ actual_md5 = files[dsc_file]["md5sum"];
+ actual_size = int(files[dsc_file]["size"]);
+ found = "%s in incoming" % (dsc_file)
+ # Check the file does not already exist in the archive
+ q = self.projectB.query("SELECT f.size, f.md5sum, l.path, f.filename FROM files f, location l WHERE f.filename LIKE '%%%s%%' AND l.id = f.location" % (dsc_file));
+ ql = q.getresult();
+ # Strip out anything that isn't '%s' or '/%s$'; build a filtered
+ # list, since calling remove() while iterating over ql skips entries.
+ ql = [ i for i in ql if i[3] == dsc_file or i[3][-(len(dsc_file)+1):] == '/'+dsc_file ];
+
+ # "[katie] has not broken them. [katie] has fixed a
+ # brokenness. Your crappy hack exploited a bug in
+ # the old dinstall.
+ #
+ # "(Come on! I thought it was always obvious that
+ # one just doesn't release different files with
+ # the same name and version.)"
+ # -- ajk@ on d-devel@l.d.o
+
+ if ql:
+ # Ignore exact matches for .orig.tar.gz
+ match = 0;
+ if dsc_file.endswith(".orig.tar.gz"):
+ for i in ql:
+ if files.has_key(dsc_file) and \
+ int(files[dsc_file]["size"]) == int(i[0]) and \
+ files[dsc_file]["md5sum"] == i[1]:
+ self.reject("ignoring %s, since it's already in the archive." % (dsc_file), "Warning: ");
+ del files[dsc_file];
+ self.pkg.orig_tar_gz = i[2] + i[3];
+ match = 1;
+
+ if not match:
+ self.reject("can not overwrite existing copy of '%s' already in the archive." % (dsc_file));
+ elif dsc_file.endswith(".orig.tar.gz"):
+ # Check in the pool
+ q = self.projectB.query("SELECT l.path, f.filename, l.type, f.id, l.id FROM files f, location l WHERE f.filename LIKE '%%%s%%' AND l.id = f.location" % (dsc_file));
+ ql = q.getresult();
+ # Strip out anything that isn't '%s' or '/%s$'; build a filtered
+ # list, since calling remove() while iterating over ql skips entries.
+ ql = [ i for i in ql if i[1] == dsc_file or i[1][-(len(dsc_file)+1):] == '/'+dsc_file ];
+
+ if ql:
+ # Unfortunately, we may get more than one match here if,
+ # for example, the package was in potato but had an -sa
+ # upload in woody. So we need to choose the right one.
+
+ x = ql[0]; # default to something sane in case we don't match any or have only one
+
+ if len(ql) > 1:
+ for i in ql:
+ old_file = i[0] + i[1];
+ old_file_fh = utils.open_file(old_file)
+ actual_md5 = apt_pkg.md5sum(old_file_fh);
+ old_file_fh.close()
+ actual_size = os.stat(old_file)[stat.ST_SIZE];
+ if actual_md5 == dsc_files[dsc_file]["md5sum"] and actual_size == int(dsc_files[dsc_file]["size"]):
+ x = i;
+ else:
+ legacy_source_untouchable[i[3]] = "";
+
+ old_file = x[0] + x[1];
+ old_file_fh = utils.open_file(old_file)
+ actual_md5 = apt_pkg.md5sum(old_file_fh);
+ old_file_fh.close()
+ actual_size = os.stat(old_file)[stat.ST_SIZE];
+ found = old_file;
+ suite_type = x[2];
+ dsc_files[dsc_file]["files id"] = x[3]; # need this for updating dsc_files in install()
+ # See install() in katie...
+ self.pkg.orig_tar_id = x[3];
+ self.pkg.orig_tar_gz = old_file;
+ if suite_type == "legacy" or suite_type == "legacy-mixed":
+ self.pkg.orig_tar_location = "legacy";
+ else:
+ self.pkg.orig_tar_location = x[4];
+ else:
+ # Not there? Check the queue directories...
+
+ in_unchecked = os.path.join(self.Cnf["Dir::Queue::Unchecked"],dsc_file);
+ # See process_it() in jennifer for explanation of this
+ if os.path.exists(in_unchecked):
+ return (self.reject_message, in_unchecked);
+ else:
+ for dir in [ "Accepted", "New", "Byhand" ]:
+ in_otherdir = os.path.join(self.Cnf["Dir::Queue::%s" % (dir)],dsc_file);
+ if os.path.exists(in_otherdir):
+ in_otherdir_fh = utils.open_file(in_otherdir)
+ actual_md5 = apt_pkg.md5sum(in_otherdir_fh);
+ in_otherdir_fh.close()
+ actual_size = os.stat(in_otherdir)[stat.ST_SIZE];
+ found = in_otherdir;
+ self.pkg.orig_tar_gz = in_otherdir;
+
+ if not found:
+ self.reject("%s refers to %s, but I can't find it in the queue or in the pool." % (file, dsc_file));
+ self.pkg.orig_tar_gz = -1;
+ continue;
+ else:
+ self.reject("%s refers to %s, but I can't find it in the queue." % (file, dsc_file));
+ continue;
+ if actual_md5 != dsc_files[dsc_file]["md5sum"]:
+ self.reject("md5sum for %s doesn't match %s." % (found, file));
+ if actual_size != int(dsc_files[dsc_file]["size"]):
+ self.reject("size for %s doesn't match %s." % (found, file));
+
+ return (self.reject_message, None);
+
+ def do_query(self, q):
+ sys.stderr.write("query: \"%s\" ... " % (q));
+ before = time.time();
+ r = self.projectB.query(q);
+ time_diff = time.time()-before;
+ sys.stderr.write("took %.3f seconds.\n" % (time_diff));
+ return r;
--- /dev/null
+#!/usr/bin/env python
+
+# Utility functions
+# Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
+# $Id: utils.py,v 1.73 2005-03-18 05:24:38 troup Exp $
+
+################################################################################
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import codecs, commands, email.Header, os, pwd, re, select, socket, shutil, \
+ string, sys, tempfile, traceback;
+import apt_pkg;
+import db_access;
+
+################################################################################
+
+re_comments = re.compile(r"\#.*")
+re_no_epoch = re.compile(r"^\d+\:")
+re_no_revision = re.compile(r"-[^-]+$")
+re_arch_from_filename = re.compile(r"/binary-[^/]+/")
+re_extract_src_version = re.compile (r"(\S+)\s*\((.*)\)")
+re_isadeb = re.compile (r"(.+?)_(.+?)_(.+)\.u?deb$");
+re_issource = re.compile (r"(.+)_(.+?)\.(orig\.tar\.gz|diff\.gz|tar\.gz|dsc)$");
+
+re_single_line_field = re.compile(r"^(\S*)\s*:\s*(.*)");
+re_multi_line_field = re.compile(r"^\s(.*)");
+re_taint_free = re.compile(r"^[-+~/\.\w]+$");
+
+re_parse_maintainer = re.compile(r"^\s*(\S.*\S)\s*\<([^\>]+)\>");
+
+changes_parse_error_exc = "Can't parse line in .changes file";
+invalid_dsc_format_exc = "Invalid .dsc file";
+nk_format_exc = "Unknown Format: in .changes file";
+no_files_exc = "No Files: field in .dsc or .changes file.";
+cant_open_exc = "Can't open file";
+unknown_hostname_exc = "Unknown hostname";
+cant_overwrite_exc = "Permission denied; can't overwrite existent file."
+file_exists_exc = "Destination file exists";
+sendmail_failed_exc = "Sendmail invocation failed";
+tried_too_hard_exc = "Tried too hard to find a free filename.";
+
+default_config = "/etc/katie/katie.conf";
+default_apt_config = "/etc/katie/apt.conf";
+
+################################################################################
+
+class Error(Exception):
+ """Base class for exceptions in this module."""
+ pass;
+
+class ParseMaintError(Error):
+ """Exception raised for errors in parsing a maintainer field.
+
+ Attributes:
+ message -- explanation of the error
+ """
+
+ def __init__(self, message):
+ self.args = message,;
+ self.message = message;
+
+################################################################################
+
+def open_file(filename, mode='r'):
+ try:
+ f = open(filename, mode);
+ except IOError:
+ raise cant_open_exc, filename;
+ return f
+
+################################################################################
+
+def our_raw_input(prompt=""):
+ if prompt:
+ sys.stdout.write(prompt);
+ sys.stdout.flush();
+ try:
+ ret = raw_input();
+ return ret;
+ except EOFError:
+ sys.stderr.write("\nUser interrupt (^D).\n");
+ raise SystemExit;
+
+################################################################################
+
+def str_isnum (s):
+ for c in s:
+ if c not in string.digits:
+ return 0;
+ return 1;
+
+################################################################################
+
+def extract_component_from_section(section):
+ component = "";
+
+ if section.find('/') != -1:
+ component = section.split('/')[0];
+ if component.lower() == "non-us" and section.find('/') != -1:
+ s = component + '/' + section.split('/')[1];
+ if Cnf.has_key("Component::%s" % s): # Avoid e.g. non-US/libs
+ component = s;
+
+ if section.lower() == "non-us":
+ component = "non-US/main";
+
+ # non-US prefix is case insensitive
+ if component.lower()[:6] == "non-us":
+ component = "non-US"+component[6:];
+
+ # Expand default component
+ if component == "":
+ if Cnf.has_key("Component::%s" % section):
+ component = section;
+ else:
+ component = "main";
+ elif component == "non-US":
+ component = "non-US/main";
+
+ return (section, component);
+
+################################################################################
+
+def parse_changes(filename, signing_rules=0):
+ """Parses a changes file and returns a dictionary where each field is a
+key. The mandatory first argument is the filename of the .changes
+file.
+
+signing_rules is an optional argument:
+
+ o If signing_rules == -1, no signature is required.
+ o If signing_rules == 0 (the default), a signature is required.
+ o If signing_rules == 1, it turns on the same strict format checking
+ as dpkg-source.
+
+The rules for (signing_rules == 1)-mode are:
+
+ o The PGP header consists of "-----BEGIN PGP SIGNED MESSAGE-----"
+ followed by any PGP header data and must end with a blank line.
+
+ o The data section must end with a blank line and must be followed by
+ "-----BEGIN PGP SIGNATURE-----".
+"""
+
+ error = "";
+ changes = {};
+
+ changes_in = open_file(filename);
+ lines = changes_in.readlines();
+
+ if not lines:
+ raise changes_parse_error_exc, "[Empty changes file]";
+
+ # Reindex by line number so we can easily verify the format of
+ # .dsc files...
+ index = 0;
+ indexed_lines = {};
+ for line in lines:
+ index += 1;
+ indexed_lines[index] = line[:-1];
+
+ inside_signature = 0;
+
+ num_of_lines = len(indexed_lines.keys());
+ index = 0;
+ first = -1;
+ while index < num_of_lines:
+ index += 1;
+ line = indexed_lines[index];
+ if line == "":
+ if signing_rules == 1:
+ index += 1;
+ if index > num_of_lines:
+ raise invalid_dsc_format_exc, index;
+ line = indexed_lines[index];
+ if not line.startswith("-----BEGIN PGP SIGNATURE"):
+ raise invalid_dsc_format_exc, index;
+ inside_signature = 0;
+ break;
+ else:
+ continue;
+ if line.startswith("-----BEGIN PGP SIGNATURE"):
+ break;
+ if line.startswith("-----BEGIN PGP SIGNED MESSAGE"):
+ inside_signature = 1;
+ if signing_rules == 1:
+ while index < num_of_lines and line != "":
+ index += 1;
+ line = indexed_lines[index];
+ continue;
+ # If we're not inside the signed data, don't process anything
+ if signing_rules >= 0 and not inside_signature:
+ continue;
+ slf = re_single_line_field.match(line);
+ if slf:
+ field = slf.groups()[0].lower();
+ changes[field] = slf.groups()[1];
+ first = 1;
+ continue;
+ if line == " .":
+ changes[field] += '\n';
+ continue;
+ mlf = re_multi_line_field.match(line);
+ if mlf:
+ if first == -1:
+ raise changes_parse_error_exc, "'%s'\n [Multi-line field continuing on from nothing?]" % (line);
+ if first == 1 and changes[field] != "":
+ changes[field] += '\n';
+ first = 0;
+ changes[field] += mlf.groups()[0] + '\n';
+ continue;
+ error += line;
+
+ if signing_rules == 1 and inside_signature:
+ raise invalid_dsc_format_exc, index;
+
+ changes_in.close();
+ changes["filecontents"] = "".join(lines);
+
+ if error:
+ raise changes_parse_error_exc, error;
+
+ return changes;
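The heart of parse_changes is the pair of field regexes defined near the top of utils.py. A minimal Python 3 sketch of the unsigned path (signing_rules == -1), without the signature handling, " ." blank-line markers, or error accumulation:

```python
import re

# Same patterns as re_single_line_field / re_multi_line_field in utils.py.
re_single_line_field = re.compile(r"^(\S*)\s*:\s*(.*)")
re_multi_line_field = re.compile(r"^\s(.*)")

def parse_fields(text):
    """Sketch of the RFC822-style field parse used by parse_changes:
    'Field: value' starts a field, a leading space continues it."""
    changes = {}
    field = None
    for line in text.splitlines():
        slf = re_single_line_field.match(line)
        if slf:
            field = slf.group(1).lower()
            changes[field] = slf.group(2)
            continue
        mlf = re_multi_line_field.match(line)
        if mlf and field:
            changes[field] += "\n" + mlf.group(1)
    return changes
```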
+
+################################################################################
+
+# Dropped support for 1.4 and ``buggy dchanges 3.4'' (?!) compared to di.pl
+
+def build_file_list(changes, is_a_dsc=0):
+ files = {};
+
+ # Make sure we have a Files: field to parse...
+ if not changes.has_key("files"):
+ raise no_files_exc;
+
+ # Make sure we recognise the format of the Files: field
+ format = changes.get("format", "");
+ if format != "":
+ format = float(format);
+ if not is_a_dsc and (format < 1.5 or format > 2.0):
+ raise nk_format_exc, format;
+
+ # Parse each entry/line:
+ for i in changes["files"].split('\n'):
+ if not i:
+ break;
+ s = i.split();
+ section = priority = "";
+ try:
+ if is_a_dsc:
+ (md5, size, name) = s;
+ else:
+ (md5, size, section, priority, name) = s;
+ except ValueError:
+ raise changes_parse_error_exc, i;
+
+ if section == "":
+ section = "-";
+ if priority == "":
+ priority = "-";
+
+ (section, component) = extract_component_from_section(section);
+
+ files[name] = Dict(md5sum=md5, size=size, section=section,
+ priority=priority, component=component);
+
+ return files
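Each Files: entry is a whitespace-split line: .changes entries carry five fields, .dsc entries three. A Python 3 sketch of the per-line parse, leaving out the component extraction:

```python
def parse_files_line(line, is_a_dsc=False):
    """Sketch of one Files: entry: .changes lines are
    'md5 size section priority name', .dsc lines just 'md5 size name'.
    Sizes stay strings here, as in build_file_list; callers int() them."""
    s = line.split()
    if is_a_dsc:
        (md5, size, name) = s
        section = priority = "-"
    else:
        (md5, size, section, priority, name) = s
    return {"name": name, "md5sum": md5, "size": size,
            "section": section, "priority": priority}
```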
+
+################################################################################
+
+def force_to_utf8(s):
+ """Forces a string to UTF-8. If the string isn't already UTF-8,
+it's assumed to be ISO-8859-1."""
+ try:
+ unicode(s, 'utf-8');
+ return s;
+ except UnicodeError:
+ latin1_s = unicode(s,'iso8859-1');
+ return latin1_s.encode('utf-8');
+
+def rfc2047_encode(s):
+ """Encodes a (header) string per RFC2047 if necessary. If the
+string is neither ASCII nor UTF-8, it's assumed to be ISO-8859-1."""
+ try:
+ codecs.lookup('ascii')[1](s)
+ return s;
+ except UnicodeError:
+ pass;
+ try:
+ codecs.lookup('utf-8')[1](s)
+ h = email.Header.Header(s, 'utf-8', 998);
+ return str(h);
+ except UnicodeError:
+ h = email.Header.Header(s, 'iso-8859-1', 998);
+ return str(h);
+
+################################################################################
+
+# <Culus> 'The standard sucks, but my tool is supposed to interoperate
+# with it. I know - I'll fix the suckage and make things
+# incompatible!'
+
+def fix_maintainer (maintainer):
+ """Parses a Maintainer or Changed-By field and returns:
+ (1) an RFC822 compatible version,
+ (2) an RFC2047 compatible version,
+ (3) the name
+ (4) the email
+
+The name is forced to UTF-8 for both (1) and (3). If the name field
+contains '.' or ',' (as allowed by Debian policy), (1) and (2) are
+switched to 'email (name)' format."""
+ maintainer = maintainer.strip()
+ if not maintainer:
+ return ('', '', '', '');
+
+ if maintainer.find("<") == -1:
+ email = maintainer;
+ name = "";
+ elif (maintainer[0] == "<" and maintainer[-1:] == ">"):
+ email = maintainer[1:-1];
+ name = "";
+ else:
+ m = re_parse_maintainer.match(maintainer);
+ if not m:
+ raise ParseMaintError, "Doesn't parse as a valid Maintainer field."
+ name = m.group(1);
+ email = m.group(2);
+
+ # Get an RFC2047 compliant version of the name
+ rfc2047_name = rfc2047_encode(name);
+
+ # Force the name to be UTF-8
+ name = force_to_utf8(name);
+
+ if name.find(',') != -1 or name.find('.') != -1:
+ rfc822_maint = "%s (%s)" % (email, name);
+ rfc2047_maint = "%s (%s)" % (email, rfc2047_name);
+ else:
+ rfc822_maint = "%s <%s>" % (name, email);
+ rfc2047_maint = "%s <%s>" % (rfc2047_name, email);
+
+ if email.find("@") == -1 and email.find("buildd_") != 0:
+ raise ParseMaintError, "No @ found in email address part."
+
+ return (rfc822_maint, rfc2047_maint, name, email);
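The '.' / ',' switch exists because RFC822 would otherwise require quoting such names; flipping to "email (name)" sidesteps that. A Python 3 sketch of just the RFC822 branch, using the same parsing regex:

```python
import re

# Same pattern as re_parse_maintainer in utils.py.
re_parse_maintainer = re.compile(r"^\s*(\S.*\S)\s*\<([^\>]+)\>")

def rfc822_maint(maintainer):
    """Sketch of fix_maintainer's RFC822 output: names containing
    '.' or ',' are flipped to 'email (name)' format."""
    m = re_parse_maintainer.match(maintainer)
    name, email = m.group(1), m.group(2)
    if "," in name or "." in name:
        return "%s (%s)" % (email, name)
    return "%s <%s>" % (name, email)
```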
+
+################################################################################
+
+# sendmail wrapper, takes _either_ a message string or a file as arguments
+def send_mail (message, filename=""):
+ # If we've been passed a string dump it into a temporary file
+ if message:
+ filename = tempfile.mktemp();
+ fd = os.open(filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0700);
+ os.write (fd, message);
+ os.close (fd);
+
+ # Invoke sendmail
+ (result, output) = commands.getstatusoutput("%s < %s" % (Cnf["Dinstall::SendmailCommand"], filename));
+ if (result != 0):
+ raise sendmail_failed_exc, output;
+
+ # Clean up any temporary files
+ if message:
+ os.unlink (filename);
+
+################################################################################
+
+def poolify (source, component):
+ if component:
+ component += '/';
+ # FIXME: this is nasty
+ component = component.lower().replace("non-us/", "non-US/");
+ if source[:3] == "lib":
+ return component + source[:4] + '/' + source + '/'
+ else:
+ return component + source[:1] + '/' + source + '/'
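poolify's layout is the standard Debian pool hashing: a one-letter prefix directory, except lib* sources which use four characters. A Python 3 sketch without the non-US special case:

```python
def poolify(source, component=""):
    """Sketch of poolify: pool paths hash on the first letter of the
    source name, except lib* packages which use the first four chars."""
    if component:
        component += "/"
    prefix = source[:4] if source.startswith("lib") else source[:1]
    return component + prefix + "/" + source + "/"
```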
+
+################################################################################
+
+def move (src, dest, overwrite = 0, perms = 0664):
+ if os.path.exists(dest) and os.path.isdir(dest):
+ dest_dir = dest;
+ else:
+ dest_dir = os.path.dirname(dest);
+ if not os.path.exists(dest_dir):
+ umask = os.umask(00000);
+ os.makedirs(dest_dir, 02775);
+ os.umask(umask);
+ #print "Moving %s to %s..." % (src, dest);
+ if os.path.exists(dest) and os.path.isdir(dest):
+ dest += '/' + os.path.basename(src);
+ # Don't overwrite unless forced to
+ if os.path.exists(dest):
+ if not overwrite:
+ fubar("Can't move %s to %s - file already exists." % (src, dest));
+ else:
+ if not os.access(dest, os.W_OK):
+ fubar("Can't move %s to %s - can't write to existing file." % (src, dest));
+ shutil.copy2(src, dest);
+ os.chmod(dest, perms);
+ os.unlink(src);
+
+def copy (src, dest, overwrite = 0, perms = 0664):
+ if os.path.exists(dest) and os.path.isdir(dest):
+ dest_dir = dest;
+ else:
+ dest_dir = os.path.dirname(dest);
+ if not os.path.exists(dest_dir):
+ umask = os.umask(00000);
+ os.makedirs(dest_dir, 02775);
+ os.umask(umask);
+ #print "Copying %s to %s..." % (src, dest);
+ if os.path.exists(dest) and os.path.isdir(dest):
+ dest += '/' + os.path.basename(src);
+ # Don't overwrite unless forced to
+ if os.path.exists(dest):
+ if not overwrite:
+ raise file_exists_exc
+ else:
+ if not os.access(dest, os.W_OK):
+ raise cant_overwrite_exc
+ shutil.copy2(src, dest);
+ os.chmod(dest, perms);
+
+################################################################################
+
+def where_am_i ():
+ res = socket.gethostbyaddr(socket.gethostname());
+ database_hostname = Cnf.get("Config::" + res[0] + "::DatabaseHostname");
+ if database_hostname:
+ return database_hostname;
+ else:
+ return res[0];
+
+def which_conf_file ():
+ res = socket.gethostbyaddr(socket.gethostname());
+ if Cnf.get("Config::" + res[0] + "::KatieConfig"):
+ return Cnf["Config::" + res[0] + "::KatieConfig"]
+ else:
+ return default_config;
+
+def which_apt_conf_file ():
+ res = socket.gethostbyaddr(socket.gethostname());
+ if Cnf.get("Config::" + res[0] + "::AptConfig"):
+ return Cnf["Config::" + res[0] + "::AptConfig"]
+ else:
+ return default_apt_config;
+
+################################################################################
+
+# Escape characters which have meaning to SQL's regex comparison operator ('~')
+# (woefully incomplete)
+
+def regex_safe (s):
+ s = s.replace('+', '\\\\+');
+ s = s.replace('.', '\\\\.');
+ return s
+
+################################################################################
+
+# Perform a substitution on a template
+def TemplateSubst(map, filename):
+ file = open_file(filename);
+ template = file.read();
+ for x in map.keys():
+ template = template.replace(x,map[x]);
+ file.close();
+ return template;
+
+################################################################################
+
+def fubar(msg, exit_code=1):
+ sys.stderr.write("E: %s\n" % (msg));
+ sys.exit(exit_code);
+
+def warn(msg):
+ sys.stderr.write("W: %s\n" % (msg));
+
+################################################################################
+
+# Returns the user name with a laughable attempt at rfc822 conformance
+# (read: removing stray periods).
+def whoami ():
+ return pwd.getpwuid(os.getuid())[4].split(',')[0].replace('.', '');
+
+################################################################################
+
+def size_type (c):
+ t = " B";
+ if c > 10240:
+ c = c / 1024;
+ t = " KB";
+ if c > 10240:
+ c = c / 1024;
+ t = " MB";
+ return ("%d%s" % (c, t))
+
+################################################################################
+
+def cc_fix_changes (changes):
+ o = changes.get("architecture", "");
+ if o:
+ del changes["architecture"];
+ changes["architecture"] = {};
+ for j in o.split():
+ changes["architecture"][j] = 1;
+
+# Sort by source name, source version, 'have source', and then by filename
+def changes_compare (a, b):
+ try:
+ a_changes = parse_changes(a);
+ except:
+ return -1;
+
+ try:
+ b_changes = parse_changes(b);
+ except:
+ return 1;
+
+ cc_fix_changes (a_changes);
+ cc_fix_changes (b_changes);
+
+ # Sort by source name
+ a_source = a_changes.get("source");
+ b_source = b_changes.get("source");
+ q = cmp (a_source, b_source);
+ if q:
+ return q;
+
+ # Sort by source version
+ a_version = a_changes.get("version", "0");
+ b_version = b_changes.get("version", "0");
+ q = apt_pkg.VersionCompare(a_version, b_version);
+ if q:
+ return q;
+
+ # Sort by 'have source'
+ a_has_source = a_changes["architecture"].get("source");
+ b_has_source = b_changes["architecture"].get("source");
+ if a_has_source and not b_has_source:
+ return -1;
+ elif b_has_source and not a_has_source:
+ return 1;
+
+ # Fall back to sort by filename
+ return cmp(a, b);
+
+################################################################################
+
+def find_next_free (dest, too_many=100):
+ extra = 0;
+ orig_dest = dest;
+ while os.path.exists(dest) and extra < too_many:
+ dest = orig_dest + '.' + repr(extra);
+ extra += 1;
+ if extra >= too_many:
+ raise tried_too_hard_exc;
+ return dest;
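find_next_free probes `dest`, `dest.0`, `dest.1`, … until an unused name turns up, giving up after `too_many` attempts. A self-contained Python 3 sketch, with a plain RuntimeError standing in for the module's `tried_too_hard_exc`:

```python
import os
import tempfile

def find_next_free(dest, too_many=100):
    extra = 0
    orig_dest = dest
    while os.path.exists(dest) and extra < too_many:
        dest = orig_dest + '.' + repr(extra)
        extra += 1
    if extra >= too_many:
        raise RuntimeError("tried too hard to find a free filename")  # stand-in
    return dest

d = tempfile.mkdtemp()
target = os.path.join(d, "foo.changes")
open(target, "w").close()       # occupy the base name
free = find_next_free(target)   # ends with 'foo.changes.0'
```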
+
+################################################################################
+
+def result_join (original, sep = '\t'):
+ list = [];
+ for i in xrange(len(original)):
+ if original[i] == None:
+ list.append("");
+ else:
+ list.append(original[i]);
+ return sep.join(list);
+
+################################################################################
+
+def prefix_multi_line_string(str, prefix, include_blank_lines=0):
+ out = "";
+ for line in str.split('\n'):
+ line = line.strip();
+ if line or include_blank_lines:
+ out += "%s%s\n" % (prefix, line);
+ # Strip trailing new line
+ if out:
+ out = out[:-1];
+ return out;
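prefix_multi_line_string strips each line, optionally drops blank ones, prepends the prefix, and trims the trailing newline; check_signature below uses it to indent gpgv's output under a ` [GPG ...]` label. The same logic in Python 3:

```python
def prefix_multi_line_string(s, prefix, include_blank_lines=False):
    out = ""
    for line in s.split('\n'):
        line = line.strip()
        if line or include_blank_lines:
            out += "%s%s\n" % (prefix, line)
    # Strip the trailing newline, as the original does
    return out[:-1] if out else out

quoted = prefix_multi_line_string("foo\n\n  bar", "> ")  # '> foo\n> bar'
```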
+
+################################################################################
+
+def validate_changes_file_arg(filename, require_changes=1):
+ """'filename' is either a .changes or .katie file. If 'filename' is a
+.katie file, it's changed to be the corresponding .changes file. The
+function then checks if the .changes file a) exists and b) is
+readable and returns the .changes filename if so. If there's a
+problem, the next action depends on the 'require_changes'
+argument:
+
+ o If 'require_changes' == -1, errors are ignored and the .changes
+ filename is returned.
+ o If 'require_changes' == 0, a warning is given and 'None' is returned.
+ o If 'require_changes' == 1, a fatal error is raised.
+"""
+ error = None;
+
+ orig_filename = filename
+ if filename.endswith(".katie"):
+ filename = filename[:-6]+".changes";
+
+ if not filename.endswith(".changes"):
+ error = "invalid file type; not a changes file";
+ else:
+ if not os.access(filename,os.R_OK):
+ if os.path.exists(filename):
+ error = "permission denied";
+ else:
+ error = "file not found";
+
+ if error:
+ if require_changes == 1:
+ fubar("%s: %s." % (orig_filename, error));
+ elif require_changes == 0:
+ warn("Skipping %s - %s" % (orig_filename, error));
+ return None;
+ else: # We only care about the .katie file
+ return filename;
+ else:
+ return filename;
+
+################################################################################
+
+def real_arch(arch):
+ return (arch != "source" and arch != "all");
+
+################################################################################
+
+def join_with_commas_and(list):
+ if len(list) == 0: return "nothing";
+ if len(list) == 1: return list[0];
+ return ", ".join(list[:-1]) + " and " + list[-1];
+
+################################################################################
+
+def pp_deps (deps):
+ pp_deps = [];
+ for atom in deps:
+ (pkg, version, constraint) = atom;
+ if constraint:
+ pp_dep = "%s (%s %s)" % (pkg, constraint, version);
+ else:
+ pp_dep = pkg;
+ pp_deps.append(pp_dep);
+ return " |".join(pp_deps);
+
+################################################################################
+
+def get_conf():
+ return Cnf;
+
+################################################################################
+
+# Handle -a, -c and -s arguments; returns them as SQL constraints
+def parse_args(Options):
+ # Process suite
+ if Options["Suite"]:
+ suite_ids_list = [];
+ for suite in split_args(Options["Suite"]):
+ suite_id = db_access.get_suite_id(suite);
+ if suite_id == -1:
+ warn("suite '%s' not recognised." % (suite));
+ else:
+ suite_ids_list.append(suite_id);
+ if suite_ids_list:
+ con_suites = "AND su.id IN (%s)" % ", ".join(map(str, suite_ids_list));
+ else:
+ fubar("No valid suite given.");
+ else:
+ con_suites = "";
+
+ # Process component
+ if Options["Component"]:
+ component_ids_list = [];
+ for component in split_args(Options["Component"]):
+ component_id = db_access.get_component_id(component);
+ if component_id == -1:
+ warn("component '%s' not recognised." % (component));
+ else:
+ component_ids_list.append(component_id);
+ if component_ids_list:
+ con_components = "AND c.id IN (%s)" % ", ".join(map(str, component_ids_list));
+ else:
+ fubar("No valid component given.");
+ else:
+ con_components = "";
+
+ # Process architecture
+ con_architectures = "";
+ if Options["Architecture"]:
+ arch_ids_list = [];
+ check_source = 0;
+ for architecture in split_args(Options["Architecture"]):
+ if architecture == "source":
+ check_source = 1;
+ else:
+ architecture_id = db_access.get_architecture_id(architecture);
+ if architecture_id == -1:
+ warn("architecture '%s' not recognised." % (architecture));
+ else:
+ arch_ids_list.append(architecture_id);
+ if arch_ids_list:
+ con_architectures = "AND a.id IN (%s)" % ", ".join(map(str, arch_ids_list));
+ else:
+ if not check_source:
+ fubar("No valid architecture given.");
+ else:
+ check_source = 1;
+
+ return (con_suites, con_architectures, con_components, check_source);
+
+################################################################################
+
+# Inspired(tm) by Bryn Keller's print_exc_plus (See
+# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52215)
+
+def print_exc():
+ tb = sys.exc_info()[2];
+ while tb.tb_next:
+ tb = tb.tb_next;
+ stack = [];
+ frame = tb.tb_frame;
+ while frame:
+ stack.append(frame);
+ frame = frame.f_back;
+ stack.reverse();
+ traceback.print_exc();
+ for frame in stack:
+ print "\nFrame %s in %s at line %s" % (frame.f_code.co_name,
+ frame.f_code.co_filename,
+ frame.f_lineno);
+ for key, value in frame.f_locals.items():
+ print "\t%20s = " % key,;
+ try:
+ print value;
+ except:
+ print "<unable to print>";
+
+################################################################################
+
+def try_with_debug(function):
+ try:
+ function();
+ except SystemExit:
+ raise;
+ except:
+ print_exc();
+
+################################################################################
+
+# Function for use in sorting lists of architectures.
+# Sorts normally except that 'source' dominates all others.
+
+def arch_compare_sw (a, b):
+ if a == "source" and b == "source":
+ return 0;
+ elif a == "source":
+ return -1;
+ elif b == "source":
+ return 1;
+
+ return cmp (a, b);
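arch_compare_sw is a cmp-style comparator (madison below passes it straight to `list.sort`). In Python 3, where `cmp` and comparator sorting are gone, the equivalent needs `functools.cmp_to_key`, and `cmp(a, b)` can be emulated as a difference of booleans:

```python
from functools import cmp_to_key

def arch_compare_sw(a, b):
    # 'source' sorts before every real architecture
    if a == "source" and b == "source":
        return 0
    elif a == "source":
        return -1
    elif b == "source":
        return 1
    return (a > b) - (a < b)  # Python 3 stand-in for cmp(a, b)

arches = sorted(["m68k", "source", "i386"], key=cmp_to_key(arch_compare_sw))
# -> ['source', 'i386', 'm68k']
```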
+
+################################################################################
+
+# Split command line arguments which can be separated by either commas
+# or whitespace. If dwim is set, it will complain about a string ending
+# in a comma, since this usually means someone did 'madison -a i386, m68k
+# foo' or similar, and the resulting confusion from 'm68k' being treated
+# as a separate argument is undesirable.
+
+def split_args (s, dwim=1):
+ if s.find(",") == -1:
+ return s.split();
+ else:
+ if s[-1:] == "," and dwim:
+ fubar("split_args: found trailing comma, spurious space maybe?");
+ return s.split(",");
+
+################################################################################
+
+def Dict(**dict): return dict
+
+########################################
+
+# Our very own version of commands.getstatusoutput(), hacked to support
+# gpgv's status fd.
+def gpgv_get_status_output(cmd, status_read, status_write):
+ cmd = ['/bin/sh', '-c', cmd];
+ p2cread, p2cwrite = os.pipe();
+ c2pread, c2pwrite = os.pipe();
+ errout, errin = os.pipe();
+ pid = os.fork();
+ if pid == 0:
+ # Child
+ os.close(0);
+ os.close(1);
+ os.dup(p2cread);
+ os.dup(c2pwrite);
+ os.close(2);
+ os.dup(errin);
+ for i in range(3, 256):
+ if i != status_write:
+ try:
+ os.close(i);
+ except:
+ pass;
+ try:
+ os.execvp(cmd[0], cmd);
+ finally:
+ os._exit(1);
+
+ # Parent
+ os.close(p2cread)
+ os.dup2(c2pread, c2pwrite);
+ os.dup2(errout, errin);
+
+ output = status = "";
+ while 1:
+ i, o, e = select.select([c2pwrite, errin, status_read], [], []);
+ more_data = [];
+ for fd in i:
+ r = os.read(fd, 8196);
+ if len(r) > 0:
+ more_data.append(fd);
+ if fd == c2pwrite or fd == errin:
+ output += r;
+ elif fd == status_read:
+ status += r;
+ else:
+ fubar("Unexpected file descriptor [%s] returned from select\n" % (fd));
+ if not more_data:
+ pid, exit_status = os.waitpid(pid, 0)
+ try:
+ os.close(status_write);
+ os.close(status_read);
+ os.close(c2pread);
+ os.close(c2pwrite);
+ os.close(p2cwrite);
+ os.close(errin);
+ os.close(errout);
+ except:
+ pass;
+ break;
+
+ return output, status, exit_status;
+
+############################################################
+
+
+def check_signature (sig_filename, reject, data_filename="", keyrings=None):
+ """Check the signature of a file and return the fingerprint if the
+signature is valid or 'None' if it's not. The first argument is the
+filename whose signature should be checked. The second argument is a
+reject function and is called when an error is found. The reject()
+function must allow for two arguments: the first is the error message,
+the second is an optional prefix string. It's possible for reject()
+to be called more than once during an invocation of check_signature().
+The third argument is optional and is the name of the file the
+detached signature applies to. The fourth argument is optional and is
+a *list* of keyrings to use.
+"""
+
+ # Ensure the filename contains no shell meta-characters or other badness
+ if not re_taint_free.match(sig_filename):
+ reject("!!WARNING!! tainted signature filename: '%s'." % (sig_filename));
+ return None;
+
+ if data_filename and not re_taint_free.match(data_filename):
+ reject("!!WARNING!! tainted data filename: '%s'." % (data_filename));
+ return None;
+
+ if not keyrings:
+ keyrings = (Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"])
+
+ # Build the command line
+ status_read, status_write = os.pipe();
+ cmd = "gpgv --status-fd %s" % (status_write);
+ for keyring in keyrings:
+ cmd += " --keyring %s" % (keyring);
+ cmd += " %s %s" % (sig_filename, data_filename);
+ # Invoke gpgv on the file
+ (output, status, exit_status) = gpgv_get_status_output(cmd, status_read, status_write);
+
+ # Process the status-fd output
+ keywords = {};
+ bad = internal_error = "";
+ for line in status.split('\n'):
+ line = line.strip();
+ if line == "":
+ continue;
+ split = line.split();
+ if len(split) < 2:
+ internal_error += "gpgv status line is malformed (< 2 atoms) ['%s'].\n" % (line);
+ continue;
+ (gnupg, keyword) = split[:2];
+ if gnupg != "[GNUPG:]":
+ internal_error += "gpgv status line is malformed (incorrect prefix '%s').\n" % (gnupg);
+ continue;
+ args = split[2:];
+ if keywords.has_key(keyword) and (keyword != "NODATA" and keyword != "SIGEXPIRED"):
+ internal_error += "found duplicate status token ('%s').\n" % (keyword);
+ continue;
+ else:
+ keywords[keyword] = args;
+
+ # If we failed to parse the status-fd output, let's just whine and bail now
+ if internal_error:
+ reject("internal error while performing signature check on %s." % (sig_filename));
+ reject(internal_error, "");
+ reject("Please report the above errors to the Archive maintainers by replying to this mail.", "");
+ return None;
+
+ # Now check for obviously bad things in the processed output
+ if keywords.has_key("SIGEXPIRED"):
+ reject("The key used to sign %s has expired." % (sig_filename));
+ bad = 1;
+ if keywords.has_key("KEYREVOKED"):
+ reject("The key used to sign %s has been revoked." % (sig_filename));
+ bad = 1;
+ if keywords.has_key("BADSIG"):
+ reject("bad signature on %s." % (sig_filename));
+ bad = 1;
+ if keywords.has_key("ERRSIG") and not keywords.has_key("NO_PUBKEY"):
+ reject("failed to check signature on %s." % (sig_filename));
+ bad = 1;
+ if keywords.has_key("NO_PUBKEY"):
+ args = keywords["NO_PUBKEY"];
+ if len(args) >= 1:
+ key = args[0];
+ reject("The key (0x%s) used to sign %s wasn't found in the keyring(s)." % (key, sig_filename));
+ bad = 1;
+ if keywords.has_key("BADARMOR"):
+ reject("ASCII armour of signature was corrupt in %s." % (sig_filename));
+ bad = 1;
+ if keywords.has_key("NODATA"):
+ reject("no signature found in %s." % (sig_filename));
+ bad = 1;
+
+ if bad:
+ return None;
+
+ # Next check gpgv exited with a zero return code
+ if exit_status:
+ reject("gpgv failed while checking %s." % (sig_filename));
+ if status.strip():
+ reject(prefix_multi_line_string(status, " [GPG status-fd output:] "), "");
+ else:
+ reject(prefix_multi_line_string(output, " [GPG output:] "), "");
+ return None;
+
+ # Sanity check the good stuff we expect
+ if not keywords.has_key("VALIDSIG"):
+ reject("signature on %s does not appear to be valid [No VALIDSIG]." % (sig_filename));
+ bad = 1;
+ else:
+ args = keywords["VALIDSIG"];
+ if len(args) < 1:
+ reject("internal error while checking signature on %s." % (sig_filename));
+ bad = 1;
+ else:
+ fingerprint = args[0];
+ if not keywords.has_key("GOODSIG"):
+ reject("signature on %s does not appear to be valid [No GOODSIG]." % (sig_filename));
+ bad = 1;
+ if not keywords.has_key("SIG_ID"):
+ reject("signature on %s does not appear to be valid [No SIG_ID]." % (sig_filename));
+ bad = 1;
+
+ # Finally ensure there's not something we don't recognise
+ known_keywords = Dict(VALIDSIG="",SIG_ID="",GOODSIG="",BADSIG="",ERRSIG="",
+ SIGEXPIRED="",KEYREVOKED="",NO_PUBKEY="",BADARMOR="",
+ NODATA="");
+
+ for keyword in keywords.keys():
+ if not known_keywords.has_key(keyword):
+ reject("found unknown status token '%s' from gpgv with args '%r' in %s." % (keyword, keywords[keyword], sig_filename));
+ bad = 1;
+
+ if bad:
+ return None;
+ else:
+ return fingerprint;
+
+################################################################################
+
+# Inspired(tm) by http://www.zopelabs.com/cookbook/1022242603
+
+def wrap(paragraph, max_length, prefix=""):
+ line = "";
+ s = "";
+ have_started = 0;
+ words = paragraph.split();
+
+ for word in words:
+ word_size = len(word);
+ if word_size > max_length:
+ if have_started:
+ s += line + '\n' + prefix;
+ s += word + '\n' + prefix;
+ else:
+ if have_started:
+ new_length = len(line) + word_size + 1;
+ if new_length > max_length:
+ s += line + '\n' + prefix;
+ line = word;
+ else:
+ line += ' ' + word;
+ else:
+ line = word;
+ have_started = 1;
+
+ if have_started:
+ s += line;
+
+ return s;
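wrap is a greedy word-wrapper: words accumulate on a line until adding one would exceed max_length, over-long words are flushed onto a line of their own, and each line break is followed by prefix. The same algorithm in Python 3:

```python
def wrap(paragraph, max_length, prefix=""):
    line = ""
    s = ""
    have_started = False
    for word in paragraph.split():
        word_size = len(word)
        if word_size > max_length:
            # Over-long words get a line of their own
            if have_started:
                s += line + '\n' + prefix
            s += word + '\n' + prefix
        else:
            if have_started:
                new_length = len(line) + word_size + 1
                if new_length > max_length:
                    s += line + '\n' + prefix
                    line = word
                else:
                    line += ' ' + word
            else:
                line = word
        have_started = True
    if have_started:
        s += line
    return s

# wrap("aa bb cc", 5)       -> 'aa bb\ncc'
# wrap("aa bb cc", 5, "  ") -> 'aa bb\n  cc'
```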
+
+################################################################################
+
+# Relativize an absolute symlink from 'src' -> 'dest' relative to 'root'.
+# Returns fixed 'src'
+def clean_symlink (src, dest, root):
+ src = src.replace(root, '', 1);
+ dest = dest.replace(root, '', 1);
+ dest = os.path.dirname(dest);
+ new_src = '../' * len(dest.split('/'));
+ return new_src + src;
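clean_symlink relativizes a link by stripping root from both paths, then climbing up one `../` per directory component of the root-relative destination. A worked Python 3 example (the paths are illustrative, not a real archive layout):

```python
import os

def clean_symlink(src, dest, root):
    src = src.replace(root, '', 1)
    dest = dest.replace(root, '', 1)
    dest = os.path.dirname(dest)
    new_src = '../' * len(dest.split('/'))
    return new_src + src

rel = clean_symlink("/archive/pool/main/a/apt/apt_0.6.dsc",
                    "/archive/dists/sid/apt_0.6.dsc",
                    "/archive/")
# 'dists/sid' has two components, so the result climbs two levels:
# '../../pool/main/a/apt/apt_0.6.dsc'
```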
+
+################################################################################
+
+def temp_filename(directory=None, dotprefix=None, perms=0700):
+ """Return a secure and unique filename by pre-creating it.
+If 'directory' is non-null, it will be the directory the file is pre-created in.
+If 'dotprefix' is non-null, the filename will be prefixed with a '.'."""
+
+ if directory:
+ old_tempdir = tempfile.tempdir;
+ tempfile.tempdir = directory;
+
+ filename = tempfile.mktemp();
+
+ if dotprefix:
+ filename = "%s/.%s" % (os.path.dirname(filename), os.path.basename(filename));
+ fd = os.open(filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, perms);
+ os.close(fd);
+
+ if directory:
+ tempfile.tempdir = old_tempdir;
+
+ return filename;
+
+################################################################################
+
+apt_pkg.init();
+
+Cnf = apt_pkg.newConfiguration();
+apt_pkg.ReadConfigFileISC(Cnf,default_config);
+
+if which_conf_file() != default_config:
+ apt_pkg.ReadConfigFileISC(Cnf,which_conf_file());
+
+################################################################################
--- /dev/null
+#!/usr/bin/env python
+
+# Display information about package(s) (suite, version, etc.)
+# Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
+# $Id: madison,v 1.33 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <aj> ooo, elmo has "special powers"
+# <neuro> ooo, does he have lasers that shoot out of his eyes?
+# <aj> dunno
+# <aj> maybe he can turn invisible? that'd sure help with improved transparency!
+
+################################################################################
+
+import os, pg, sys;
+import utils, db_access;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: madison [OPTION] PACKAGE[...]
+Display information about PACKAGE(s).
+
+ -a, --architecture=ARCH only show info for ARCH(s)
+ -b, --binary-type=TYPE only show info for binary TYPE
+ -c, --component=COMPONENT only show info for COMPONENT(s)
+ -g, --greaterorequal show buildd 'dep-wait pkg >= {highest version}' info
+ -G, --greaterthan show buildd 'dep-wait pkg >> {highest version}' info
+ -h, --help show this help and exit
+ -r, --regex treat PACKAGE as a regex
+ -s, --suite=SUITE only show info for this suite
+ -S, --source-and-binary show info for the binary children of source pkgs
+
+ARCH, COMPONENT and SUITE can be comma (or space) separated lists, e.g.
+ --architecture=m68k,i386"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def main ():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('a', "architecture", "Madison::Options::Architecture", "HasArg"),
+ ('b', "binarytype", "Madison::Options::BinaryType", "HasArg"),
+ ('c', "component", "Madison::Options::Component", "HasArg"),
+ ('f', "format", "Madison::Options::Format", "HasArg"),
+ ('g', "greaterorequal", "Madison::Options::GreaterOrEqual"),
+ ('G', "greaterthan", "Madison::Options::GreaterThan"),
+ ('r', "regex", "Madison::Options::Regex"),
+ ('s', "suite", "Madison::Options::Suite", "HasArg"),
+ ('S', "source-and-binary", "Madison::Options::Source-And-Binary"),
+ ('h', "help", "Madison::Options::Help")];
+ for i in [ "architecture", "binarytype", "component", "format",
+ "greaterorequal", "greaterthan", "regex", "suite",
+ "source-and-binary", "help" ]:
+ if not Cnf.has_key("Madison::Options::%s" % (i)):
+ Cnf["Madison::Options::%s" % (i)] = "";
+
+ packages = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Madison::Options")
+
+ if Options["Help"]:
+ usage();
+ if not packages:
+ utils.fubar("need at least one package name as an argument.");
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ # If cron.daily is running, warn the user that our output might seem strange
+ if os.path.exists(os.path.join(Cnf["Dir::Root"], "Archive_Maintenance_In_Progress")):
+ utils.warn("Archive maintenance is in progress; database inconsistencies are possible.");
+
+ # Handle buildd maintenance helper options
+ if Options["GreaterOrEqual"] or Options["GreaterThan"]:
+ if Options["GreaterOrEqual"] and Options["GreaterThan"]:
+ utils.fubar("-g/--greaterorequal and -G/--greaterthan are mutually exclusive.");
+ if not Options["Suite"]:
+ Options["Suite"] = "unstable";
+
+ # Parse -a/--architecture, -c/--component and -s/--suite
+ (con_suites, con_architectures, con_components, check_source) = \
+ utils.parse_args(Options);
+
+ if Options["BinaryType"]:
+ if Options["BinaryType"] != "udeb" and Options["BinaryType"] != "deb":
+ utils.fubar("Invalid binary type. 'udeb' and 'deb' recognised.");
+ con_bintype = "AND b.type = '%s'" % (Options["BinaryType"]);
+ # REMOVE ME TRAMP
+ if Options["BinaryType"] == "udeb":
+ check_source = 0;
+ else:
+ con_bintype = "";
+
+ if Options["Regex"]:
+ comparison_operator = "~";
+ else:
+ comparison_operator = "=";
+
+ if Options["Source-And-Binary"]:
+ new_packages = [];
+ for package in packages:
+ q = projectB.query("SELECT DISTINCT b.package FROM binaries b, bin_associations ba, suite su, source s WHERE b.source = s.id AND su.id = ba.suite AND b.id = ba.bin AND s.source %s '%s' %s" % (comparison_operator, package, con_suites));
+ new_packages.extend(map(lambda x: x[0], q.getresult()));
+ if package not in new_packages:
+ new_packages.append(package);
+ packages = new_packages;
+
+ results = 0;
+ for package in packages:
+ q = projectB.query("""
+SELECT b.package, b.version, a.arch_string, su.suite_name, c.name, m.name
+ FROM binaries b, architecture a, suite su, bin_associations ba,
+ files f, location l, component c, maintainer m
+ WHERE b.package %s '%s' AND a.id = b.architecture AND su.id = ba.suite
+ AND b.id = ba.bin AND b.file = f.id AND f.location = l.id
+ AND l.component = c.id AND b.maintainer = m.id %s %s %s
+""" % (comparison_operator, package, con_suites, con_architectures, con_bintype));
+ ql = q.getresult();
+ if check_source:
+ q = projectB.query("""
+SELECT s.source, s.version, 'source', su.suite_name, c.name, m.name
+ FROM source s, suite su, src_associations sa, files f, location l,
+ component c, maintainer m
+ WHERE s.source %s '%s' AND su.id = sa.suite AND s.id = sa.source
+ AND s.file = f.id AND f.location = l.id AND l.component = c.id
+ AND s.maintainer = m.id %s
+""" % (comparison_operator, package, con_suites));
+ ql.extend(q.getresult());
+ d = {};
+ highver = {};
+ for i in ql:
+ results += 1;
+ (pkg, version, architecture, suite, component, maintainer) = i;
+ if component != "main":
+ suite = "%s/%s" % (suite, component);
+ if not d.has_key(pkg):
+ d[pkg] = {};
+ highver.setdefault(pkg,"");
+ if not d[pkg].has_key(version):
+ d[pkg][version] = {};
+ if apt_pkg.VersionCompare(version, highver[pkg]) > 0:
+ highver[pkg] = version;
+ if not d[pkg][version].has_key(suite):
+ d[pkg][version][suite] = [];
+ d[pkg][version][suite].append(architecture);
+
+ packages = d.keys();
+ packages.sort();
+ for pkg in packages:
+ versions = d[pkg].keys();
+ versions.sort(apt_pkg.VersionCompare);
+ for version in versions:
+ suites = d[pkg][version].keys();
+ suites.sort();
+ for suite in suites:
+ arches = d[pkg][version][suite];
+ arches.sort(utils.arch_compare_sw);
+ if Options["Format"] == "": #normal
+ sys.stdout.write("%10s | %10s | %13s | " % (pkg, version, suite));
+ sys.stdout.write(", ".join(arches));
+ sys.stdout.write('\n');
+ elif Options["Format"] == "heidi":
+ for arch in arches:
+ sys.stdout.write("%s %s %s\n" % (pkg, version, arch));
+ if Options["GreaterOrEqual"]:
+ print "\n%s (>= %s)" % (pkg, highver[pkg])
+ if Options["GreaterThan"]:
+ print "\n%s (>> %s)" % (pkg, highver[pkg])
+
+ if not results:
+ sys.exit(1);
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Generate Maintainers file used by e.g. the Debian Bug Tracking System
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: charisma,v 1.18 2004-06-17 15:02:02 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# ``As opposed to "Linux sucks. Respect my academic authoritah, damn
+# you!" or whatever all this hot air amounts to.''
+# -- ajt@ in _that_ thread on debian-devel@
+
+################################################################################
+
+import pg, sys;
+import db_access, utils;
+import apt_pkg;
+
+################################################################################
+
+projectB = None
+Cnf = None
+maintainer_from_source_cache = {}
+packages = {}
+fixed_maintainer_cache = {}
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: charisma [OPTION] EXTRA_FILE[...]
+Generate an index of packages <=> Maintainers.
+
+ -h, --help show this help and exit
+"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def fix_maintainer (maintainer):
+ global fixed_maintainer_cache;
+
+ if not fixed_maintainer_cache.has_key(maintainer):
+ fixed_maintainer_cache[maintainer] = utils.fix_maintainer(maintainer)[0]
+
+ return fixed_maintainer_cache[maintainer]
+
+def get_maintainer (maintainer):
+ return fix_maintainer(db_access.get_maintainer(maintainer));
+
+def get_maintainer_from_source (source_id):
+ global maintainer_from_source_cache
+
+ if not maintainer_from_source_cache.has_key(source_id):
+ q = projectB.query("SELECT m.name FROM maintainer m, source s WHERE s.id = %s and s.maintainer = m.id" % (source_id));
+ maintainer = q.getresult()[0][0]
+ maintainer_from_source_cache[source_id] = fix_maintainer(maintainer)
+
+ return maintainer_from_source_cache[source_id]
+
+################################################################################
+
+def main():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('h',"help","Charisma::Options::Help")];
+ if not Cnf.has_key("Charisma::Options::Help"):
+ Cnf["Charisma::Options::Help"] = "";
+
+ extra_files = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Charisma::Options");
+
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ for suite in Cnf.SubTree("Suite").List():
+ suite = suite.lower();
+ suite_priority = int(Cnf["Suite::%s::Priority" % (suite)]);
+
+ # Source packages
+ q = projectB.query("SELECT s.source, s.version, m.name FROM src_associations sa, source s, suite su, maintainer m WHERE su.suite_name = '%s' AND sa.suite = su.id AND sa.source = s.id AND m.id = s.maintainer" % (suite))
+ sources = q.getresult();
+ for source in sources:
+ package = source[0];
+ version = source[1];
+ maintainer = fix_maintainer(source[2]);
+ if packages.has_key(package):
+ if packages[package]["priority"] <= suite_priority:
+ if apt_pkg.VersionCompare(packages[package]["version"], version) < 0:
+ packages[package] = { "maintainer": maintainer, "priority": suite_priority, "version": version };
+ else:
+ packages[package] = { "maintainer": maintainer, "priority": suite_priority, "version": version };
+
+ # Binary packages
+ q = projectB.query("SELECT b.package, b.source, b.maintainer, b.version FROM bin_associations ba, binaries b, suite s WHERE s.suite_name = '%s' AND ba.suite = s.id AND ba.bin = b.id" % (suite));
+ binaries = q.getresult();
+ for binary in binaries:
+ package = binary[0];
+ source_id = binary[1];
+ version = binary[3];
+ # Use the source maintainer first, falling back to the binary maintainer only as a last resort
+ if source_id:
+ maintainer = get_maintainer_from_source(source_id);
+ else:
+ maintainer = get_maintainer(binary[2]);
+ if packages.has_key(package):
+ if packages[package]["priority"] <= suite_priority:
+ if apt_pkg.VersionCompare(packages[package]["version"], version) < 0:
+ packages[package] = { "maintainer": maintainer, "priority": suite_priority, "version": version };
+ else:
+ packages[package] = { "maintainer": maintainer, "priority": suite_priority, "version": version };
+
+ # Process any additional Maintainer files (e.g. from non-US or pseudo packages)
+ for filename in extra_files:
+ file = utils.open_file(filename);
+ for line in file.readlines():
+ line = utils.re_comments.sub('', line).strip();
+ if line == "":
+ continue;
+ split = line.split();
+ lhs = split[0];
+ maintainer = fix_maintainer(" ".join(split[1:]));
+ if lhs.find('~') != -1:
+ (package, version) = lhs.split('~');
+ else:
+ package = lhs;
+ version = '*';
+ # A version of '*' overwhelms all real version numbers
+ if not packages.has_key(package) or version == '*' \
+ or apt_pkg.VersionCompare(packages[package]["version"], version) < 0:
+ packages[package] = { "maintainer": maintainer, "version": version };
+ file.close();
+
+ package_keys = packages.keys()
+ package_keys.sort()
+ for package in package_keys:
+ lhs = "~".join([package, packages[package]["version"]]);
+ print "%-30s %s" % (lhs, packages[package]["maintainer"]);
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Output override files for apt-ftparchive and indices/
+# Copyright (C) 2000, 2001, 2002, 2004 James Troup <james@nocrew.org>
+# $Id: denise,v 1.18 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# This is separate because it's horribly Debian-specific and I don't
+# want that kind of horribleness in the otherwise generic natalie. It
+# does duplicate code though.
+
+################################################################################
+
+import pg, sys;
+import utils, db_access;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+override = {}
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: denise
+Outputs the override tables to text files.
+
+ -h, --help show this help and exit."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def do_list(output_file, suite, component, otype):
+ global override;
+
+ suite_id = db_access.get_suite_id(suite);
+ if suite_id == -1:
+ utils.fubar("Suite '%s' not recognised." % (suite));
+
+ component_id = db_access.get_component_id(component);
+ if component_id == -1:
+ utils.fubar("Component '%s' not recognised." % (component));
+
+ otype_id = db_access.get_override_type_id(otype);
+ if otype_id == -1:
+ utils.fubar("Type '%s' not recognised. (Valid types are deb, udeb and dsc)" % (otype));
+
+ override.setdefault(suite, {});
+ override[suite].setdefault(component, {});
+ override[suite][component].setdefault(otype, {});
+
+ if otype == "dsc":
+ q = projectB.query("SELECT o.package, s.section, o.maintainer FROM override o, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s AND o.section = s.id ORDER BY s.section, o.package" % (suite_id, component_id, otype_id));
+ for i in q.getresult():
+ override[suite][component][otype][i[0]] = i;
+ output_file.write(utils.result_join(i)+'\n');
+ else:
+ q = projectB.query("SELECT o.package, p.priority, s.section, o.maintainer, p.level FROM override o, priority p, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s AND o.priority = p.id AND o.section = s.id ORDER BY s.section, p.level, o.package" % (suite_id, component_id, otype_id));
+ for i in q.getresult():
+ i = i[:-1]; # Strip the priority level
+ override[suite][component][otype][i[0]] = i;
+ output_file.write(utils.result_join(i)+'\n');
+
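do_list above streams each override row through utils.result_join before writing it out. As a rough sketch of the resulting file format (assuming result_join simply joins stringified fields with tabs, which is a guess at its behaviour, and with hypothetical row data):

```python
# Hypothetical stand-in for utils.result_join: tab-join stringified fields.
def result_join(row, sep="\t"):
    return sep.join(str(field) for field in row)

# Illustrative override rows: (package, section, maintainer)
rows = [("adduser", "admin", "Debian Adduser Team"),
        ("base-files", "admin", "Santiago Vila")]
lines = [result_join(r) for r in rows]
```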
+################################################################################
+
+def main ():
+ global Cnf, projectB, override;
+
+ Cnf = utils.get_conf()
+ Arguments = [('h',"help","Denise::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Denise::Options::%s" % (i)):
+ Cnf["Denise::Options::%s" % (i)] = "";
+ apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Denise::Options")
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ for suite in Cnf.SubTree("Cindy::OverrideSuites").List():
+ if Cnf.has_key("Suite::%s::Untouchable" % suite) and Cnf["Suite::%s::Untouchable" % suite] != 0:
+ continue
+ suite = suite.lower()
+
+ sys.stderr.write("Processing %s...\n" % (suite));
+ override_suite = Cnf["Suite::%s::OverrideCodeName" % (suite)];
+ for component in Cnf.SubTree("Component").List():
+ if component == "mixed":
+ continue; # Ick
+ for otype in Cnf.ValueList("OverrideType"):
+ if otype == "deb":
+ suffix = "";
+ elif otype == "udeb":
+ if component != "main":
+ continue; # Ick2
+ suffix = ".debian-installer";
+ elif otype == "dsc":
+ suffix = ".src";
+ filename = "%s/override.%s.%s%s" % (Cnf["Dir::Override"], override_suite, component.replace("non-US/", ""), suffix);
+ output_file = utils.open_file(filename, 'w');
+ do_list(output_file, suite, component, otype);
+ output_file.close();
+
+################################################################################
+
+if __name__ == '__main__':
+ main();
--- /dev/null
+#!/usr/bin/env python
+
+# Generate file lists used by apt-ftparchive to generate Packages and Sources files
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: jenna,v 1.29 2004-11-27 17:58:47 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <elmo> I'm doing it in python btw.. nothing against your monster
+# SQL, but the python wins in terms of speed and readiblity
+# <aj> bah
+# <aj> you suck!!!!!
+# <elmo> sorry :(
+# <aj> you are not!!!
+# <aj> you mock my SQL!!!!
+# <elmo> you want have contest of skillz??????
+# <aj> all your skillz are belong to my sql!!!!
+# <elmo> yo momma are belong to my python!!!!
+# <aj> yo momma was SQLin' like a pig last night!
+
+################################################################################
+
+import copy, os, pg, string, sys;
+import apt_pkg;
+import claire, db_access, logging, utils;
+
+################################################################################
+
+projectB = None;
+Cnf = None;
+Logger = None;
+Options = None;
+
+################################################################################
+
+def Dict(**dict): return dict
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: jenna [OPTION]
+Write out file lists suitable for use with apt-ftparchive.
+
+ -a, --architecture=ARCH only write file lists for this architecture
+ -c, --component=COMPONENT only write file lists for this component
+ -h, --help show this help and exit
+ -n, --no-delete don't delete older versions
+ -s, --suite=SUITE only write file lists for this suite
+
+ARCH, COMPONENT and SUITE can be space-separated lists, e.g.
+ --architecture=\"m68k i386\"""";
+ sys.exit(exit_code);
+
+################################################################################
+
+def version_cmp(a, b):
+ return -apt_pkg.VersionCompare(a[0], b[0]);
+
+#####################################################
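version_cmp negates apt_pkg.VersionCompare so that list.sort() yields newest-first order. Under Python 3, where sort() no longer accepts a raw comparison function, the same trick needs functools.cmp_to_key; a sketch with a simplified numeric comparator standing in for apt_pkg:

```python
from functools import cmp_to_key

def fake_version_compare(a, b):
    # stand-in for apt_pkg.VersionCompare; real Debian version
    # comparison is far more subtle than this numeric tuple compare
    ta = tuple(int(x) for x in a.split("."))
    tb = tuple(int(x) for x in b.split("."))
    return (ta > tb) - (ta < tb)

def version_cmp(a, b):
    # negate so that sorting yields descending (newest-first) order
    return -fake_version_compare(a[0], b[0])

# (version, unique_id) pairs, as used by jenna's cleanup code
versions = [("1.2", 10), ("2.0", 11), ("1.10", 12)]
versions.sort(key=cmp_to_key(version_cmp))
```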
+
+def delete_packages(delete_versions, pkg, dominant_arch, suite,
+ dominant_version, delete_table, delete_col, packages):
+ suite_id = db_access.get_suite_id(suite);
+ for version in delete_versions:
+ delete_unique_id = version[1];
+ if not packages.has_key(delete_unique_id):
+ continue;
+ delete_version = version[0];
+ delete_id = packages[delete_unique_id]["id"];
+ delete_arch = packages[delete_unique_id]["arch"];
+ if not Cnf.Find("Suite::%s::Untouchable" % (suite)):
+ if Options["No-Delete"]:
+ print "Would delete %s_%s_%s in %s in favour of %s_%s" % (pkg, delete_arch, delete_version, suite, dominant_version, dominant_arch);
+ else:
+ Logger.log(["dominated", pkg, delete_arch, delete_version, dominant_version, dominant_arch]);
+ projectB.query("DELETE FROM %s WHERE suite = %s AND %s = %s" % (delete_table, suite_id, delete_col, delete_id));
+ del packages[delete_unique_id];
+ else:
+ if Options["No-Delete"]:
+ print "Would delete %s_%s_%s in favour of %s_%s, but %s is untouchable" % (pkg, delete_arch, delete_version, dominant_version, dominant_arch, suite);
+ else:
+ Logger.log(["dominated but untouchable", pkg, delete_arch, delete_version, dominant_version, dominant_arch]);
+
+#####################################################
+
+# Per-suite & package: resolve arch: all vs. arch: any; assumes only one arch: all
+def resolve_arch_all_vs_any(versions, packages):
+ arch_all_version = None;
+ arch_any_versions = copy.copy(versions);
+ for i in arch_any_versions:
+ unique_id = i[1];
+ arch = packages[unique_id]["arch"];
+ if arch == "all":
+ arch_all_versions = [i];
+ arch_all_version = i[0];
+ arch_any_versions.remove(i);
+ # Sort arch: any versions into descending order
+ arch_any_versions.sort(version_cmp);
+ highest_arch_any_version = arch_any_versions[0][0];
+
+ pkg = packages[unique_id]["pkg"];
+ suite = packages[unique_id]["suite"];
+ delete_table = "bin_associations";
+ delete_col = "bin";
+
+ if apt_pkg.VersionCompare(highest_arch_any_version, arch_all_version) < 1:
+ # arch: all dominates
+ delete_packages(arch_any_versions, pkg, "all", suite,
+ arch_all_version, delete_table, delete_col, packages);
+ else:
+ # arch: any dominates
+ delete_packages(arch_all_versions, pkg, "any", suite,
+ highest_arch_any_version, delete_table, delete_col,
+ packages);
+
+#####################################################
+
+# Per-suite&pkg&arch: resolve duplicate versions
+def remove_duplicate_versions(versions, packages):
+ # Sort versions into descending order
+ versions.sort(version_cmp);
+ dominant_versions = versions[0];
+ dominated_versions = versions[1:];
+ (dominant_version, dominant_unique_id) = dominant_versions;
+ pkg = packages[dominant_unique_id]["pkg"];
+ arch = packages[dominant_unique_id]["arch"];
+ suite = packages[dominant_unique_id]["suite"];
+ if arch == "source":
+ delete_table = "src_associations";
+ delete_col = "source";
+ else: # !source
+ delete_table = "bin_associations";
+ delete_col = "bin";
+ # Remove all but the highest
+ delete_packages(dominated_versions, pkg, arch, suite,
+ dominant_version, delete_table, delete_col, packages);
+ return [dominant_versions];
+
+################################################################################
+
+def cleanup(packages):
+ # Build up the index used by the clean up functions
+ d = {};
+ for unique_id in packages.keys():
+ suite = packages[unique_id]["suite"];
+ pkg = packages[unique_id]["pkg"];
+ arch = packages[unique_id]["arch"];
+ version = packages[unique_id]["version"];
+ d.setdefault(suite, {});
+ d[suite].setdefault(pkg, {});
+ d[suite][pkg].setdefault(arch, []);
+ d[suite][pkg][arch].append([version, unique_id]);
+ # Clean up old versions
+ for suite in d.keys():
+ for pkg in d[suite].keys():
+ for arch in d[suite][pkg].keys():
+ versions = d[suite][pkg][arch];
+ if len(versions) > 1:
+ d[suite][pkg][arch] = remove_duplicate_versions(versions, packages);
+
+ # Arch: all -> any and vice versa
+ for suite in d.keys():
+ for pkg in d[suite].keys():
+ arches = d[suite][pkg];
+ # If we don't have any arch: all, we've nothing to do
+ if not arches.has_key("all"):
+ continue;
+ # Check to see if we have arch: all and arch: !all (ignoring source)
+ num_arches = len(arches.keys());
+ if arches.has_key("source"):
+ num_arches -= 1;
+ # If we do, remove the duplicates
+ if num_arches > 1:
+ versions = [];
+ for arch in arches.keys():
+ if arch != "source":
+ versions.extend(d[suite][pkg][arch]);
+ resolve_arch_all_vs_any(versions, packages);
+
+################################################################################
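cleanup() builds its suite → package → arch index with chained setdefault() calls before pruning dominated versions. The same indexing pattern in isolation, with illustrative package data:

```python
# Illustrative packages dict, shaped like jenna's (unique_id -> attributes)
packages = {
    1: {"suite": "unstable", "pkg": "foo", "arch": "i386", "version": "1.0"},
    2: {"suite": "unstable", "pkg": "foo", "arch": "i386", "version": "1.1"},
    3: {"suite": "unstable", "pkg": "foo", "arch": "all",  "version": "1.1"},
}

d = {}
for unique_id, p in packages.items():
    # each setdefault creates the nested container only on first sight
    d.setdefault(p["suite"], {}) \
     .setdefault(p["pkg"], {}) \
     .setdefault(p["arch"], []) \
     .append([p["version"], unique_id])
```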
+
+def write_legacy_mixed_filelist(suite, list, packages, dislocated_files):
+ # Work out the filename
+ filename = os.path.join(Cnf["Dir::Lists"], "%s_-_all.list" % (suite));
+ output = utils.open_file(filename, "w");
+ # Generate the final list of files
+ files = {};
+ for id in list:
+ path = packages[id]["path"];
+ filename = packages[id]["filename"];
+ file_id = packages[id]["file_id"];
+ if suite == "stable" and dislocated_files.has_key(file_id):
+ filename = dislocated_files[file_id];
+ else:
+ filename = path + filename;
+ if files.has_key(filename):
+ utils.warn("%s (in %s) is duplicated." % (filename, suite));
+ else:
+ files[filename] = "";
+ # Sort the files since apt-ftparchive doesn't
+ keys = files.keys();
+ keys.sort();
+ # Write the list of files out
+ for file in keys:
+ output.write(file+'\n')
+ output.close();
+
+############################################################
+
+def write_filelist(suite, component, arch, type, list, packages, dislocated_files):
+ # Work out the filename
+ if arch != "source":
+ if type == "udeb":
+ arch = "debian-installer_binary-%s" % (arch);
+ elif type == "deb":
+ arch = "binary-%s" % (arch);
+ filename = os.path.join(Cnf["Dir::Lists"], "%s_%s_%s.list" % (suite, component, arch));
+ output = utils.open_file(filename, "w");
+ # Generate the final list of files
+ files = {};
+ for id in list:
+ path = packages[id]["path"];
+ filename = packages[id]["filename"];
+ file_id = packages[id]["file_id"];
+ pkg = packages[id]["pkg"];
+ if suite == "stable" and dislocated_files.has_key(file_id):
+ filename = dislocated_files[file_id];
+ else:
+ filename = path + filename;
+ if files.has_key(pkg):
+ utils.warn("%s (in %s/%s, %s) is duplicated." % (pkg, suite, component, filename));
+ else:
+ files[pkg] = filename;
+ # Sort the files since apt-ftparchive doesn't
+ pkgs = files.keys();
+ pkgs.sort();
+ # Write the list of files out
+ for pkg in pkgs:
+ output.write(files[pkg]+'\n')
+ output.close();
+
+################################################################################
+
+def write_filelists(packages, dislocated_files):
+ # Build up the index to iterate over
+ d = {};
+ for unique_id in packages.keys():
+ suite = packages[unique_id]["suite"];
+ component = packages[unique_id]["component"];
+ arch = packages[unique_id]["arch"];
+ type = packages[unique_id]["type"];
+ d.setdefault(suite, {});
+ d[suite].setdefault(component, {});
+ d[suite][component].setdefault(arch, {});
+ d[suite][component][arch].setdefault(type, []);
+ d[suite][component][arch][type].append(unique_id);
+ # Flesh out the index
+ if not Options["Suite"]:
+ suites = Cnf.SubTree("Suite").List();
+ else:
+ suites = utils.split_args(Options["Suite"]);
+ for suite in map(string.lower, suites):
+ d.setdefault(suite, {});
+ if not Options["Component"]:
+ components = Cnf.ValueList("Suite::%s::Components" % (suite));
+ else:
+ components = utils.split_args(Options["Component"]);
+ udeb_components = Cnf.ValueList("Suite::%s::UdebComponents" % (suite));
+ for component in components:
+ d[suite].setdefault(component, {});
+ if component in udeb_components:
+ binary_types = [ "deb", "udeb" ];
+ else:
+ binary_types = [ "deb" ];
+ if not Options["Architecture"]:
+ architectures = Cnf.ValueList("Suite::%s::Architectures" % (suite));
+ else:
+ architectures = utils.split_args(Options["Architecture"]);
+ for arch in map(string.lower, architectures):
+ d[suite][component].setdefault(arch, {});
+ if arch == "source":
+ types = [ "dsc" ];
+ else:
+ types = binary_types;
+ for type in types:
+ d[suite][component][arch].setdefault(type, []);
+ # Then walk it
+ for suite in d.keys():
+ if Cnf.has_key("Suite::%s::Components" % (suite)):
+ for component in d[suite].keys():
+ for arch in d[suite][component].keys():
+ if arch == "all":
+ continue;
+ for type in d[suite][component][arch].keys():
+ list = d[suite][component][arch][type];
+ # If it's a binary, we need to add in the arch: all debs too
+ if arch != "source":
+ archall_suite = Cnf.get("Jenna::ArchAllMap::%s" % (suite));
+ if archall_suite:
+ list.extend(d[archall_suite][component]["all"][type]);
+ elif d[suite][component].has_key("all") and \
+ d[suite][component]["all"].has_key(type):
+ list.extend(d[suite][component]["all"][type]);
+ write_filelist(suite, component, arch, type, list,
+ packages, dislocated_files);
+ else: # legacy-mixed suite
+ list = [];
+ for component in d[suite].keys():
+ for arch in d[suite][component].keys():
+ for type in d[suite][component][arch].keys():
+ list.extend(d[suite][component][arch][type]);
+ write_legacy_mixed_filelist(suite, list, packages, dislocated_files);
+
+################################################################################
+
+# Want to use stable dislocation support: True or false?
+def stable_dislocation_p():
+ # If the support is not explicitly enabled, assume it's disabled
+ if not Cnf.FindB("Dinstall::StableDislocationSupport"):
+ return 0;
+ # If we don't have a stable suite, obviously a no-op
+ if not Cnf.has_key("Suite::Stable"):
+ return 0;
+ # If the suite(s) weren't explicitly listed, all suites are done
+ if not Options["Suite"]:
+ return 1;
+ # Otherwise, look in what suites the user specified
+ suites = utils.split_args(Options["Suite"]);
+
+ if "stable" in suites:
+ return 1;
+ else:
+ return 0;
+
+################################################################################
+
+def do_da_do_da():
+ # If we're only doing a subset of suites, ensure we do enough to
+ # be able to do arch: all mapping.
+ if Options["Suite"]:
+ suites = utils.split_args(Options["Suite"]);
+ for suite in suites:
+ archall_suite = Cnf.get("Jenna::ArchAllMap::%s" % (suite));
+ if archall_suite and archall_suite not in suites:
+ utils.warn("Adding %s as %s maps Arch: all from it." % (archall_suite, suite));
+ suites.append(archall_suite);
+ Options["Suite"] = ",".join(suites);
+
+ (con_suites, con_architectures, con_components, check_source) = \
+ utils.parse_args(Options);
+
+ if stable_dislocation_p():
+ dislocated_files = claire.find_dislocated_stable(Cnf, projectB);
+ else:
+ dislocated_files = {};
+
+ query = """
+SELECT b.id, b.package, a.arch_string, b.version, l.path, f.filename, c.name,
+ f.id, su.suite_name, b.type
+ FROM binaries b, bin_associations ba, architecture a, files f, location l,
+ component c, suite su
+ WHERE b.id = ba.bin AND b.file = f.id AND b.architecture = a.id
+ AND f.location = l.id AND l.component = c.id AND ba.suite = su.id
+ %s %s %s""" % (con_suites, con_architectures, con_components);
+ if check_source:
+ query += """
+UNION
+SELECT s.id, s.source, 'source', s.version, l.path, f.filename, c.name, f.id,
+ su.suite_name, 'dsc'
+ FROM source s, src_associations sa, files f, location l, component c, suite su
+ WHERE s.id = sa.source AND s.file = f.id AND f.location = l.id
+ AND l.component = c.id AND sa.suite = su.id %s %s""" % (con_suites, con_components);
+ q = projectB.query(query);
+ ql = q.getresult();
+ # Build up the main index of packages
+ packages = {};
+ unique_id = 0;
+ for i in ql:
+ (id, pkg, arch, version, path, filename, component, file_id, suite, type) = i;
+ # 'id' comes from either 'binaries' or 'source', so it's not unique
+ unique_id += 1;
+ packages[unique_id] = Dict(id=id, pkg=pkg, arch=arch, version=version,
+ path=path, filename=filename,
+ component=component, file_id=file_id,
+ suite=suite, type = type);
+ cleanup(packages);
+ write_filelists(packages, dislocated_files);
+
+################################################################################
+
+def main():
+ global Cnf, projectB, Options, Logger;
+
+ Cnf = utils.get_conf();
+ Arguments = [('a', "architecture", "Jenna::Options::Architecture", "HasArg"),
+ ('c', "component", "Jenna::Options::Component", "HasArg"),
+ ('h', "help", "Jenna::Options::Help"),
+ ('n', "no-delete", "Jenna::Options::No-Delete"),
+ ('s', "suite", "Jenna::Options::Suite", "HasArg")];
+ for i in ["architecture", "component", "help", "no-delete", "suite" ]:
+ if not Cnf.has_key("Jenna::Options::%s" % (i)):
+ Cnf["Jenna::Options::%s" % (i)] = "";
+ apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Jenna::Options");
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+ Logger = logging.Logger(Cnf, "jenna");
+ do_da_do_da();
+ Logger.close();
+
+#########################################################################################
+
+if __name__ == '__main__':
+ main();
--- /dev/null
+#!/usr/bin/env python
+
+# Prepare and maintain partial trees by architecture
+# Copyright (C) 2004 Daniel Silverstone <dsilvers@digital-scurf.org>
+# $Id: billie,v 1.4 2004-11-27 16:06:42 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+
+###############################################################################
+## <kinnison> So Martin, do you have a quote for me yet?
+## <tbm> Make something damned stupid up and attribute it to me, that's okay
+###############################################################################
+
+import pg, pwd, sys;
+import utils, db_access;
+import apt_pkg, logging;
+
+from stat import S_ISDIR, S_ISLNK, S_ISREG;
+import os;
+import cPickle;
+
+## Master path is the main repository
+#MASTER_PATH = "/org/ftp.debian.org/scratch/dsilvers/master";
+
+MASTER_PATH = "***Configure Billie::FTPPath Please***";
+TREE_ROOT = "***Configure Billie::TreeRootPath Please***";
+TREE_DB_ROOT = "***Configure Billie::TreeDatabasePath Please***";
+trees = []
+
+###############################################################################
+# A BillieTarget is a representation of a target. It is a set of archs, a path
+# and whether or not the target includes source.
+##################
+
+class BillieTarget:
+ def __init__(self, name, archs, source):
+ self.name = name;
+ self.root = "%s/%s" % (TREE_ROOT,name);
+ self.archs = archs.split(",");
+ self.source = source;
+ self.dbpath = "%s/%s.db" % (TREE_DB_ROOT,name);
+ self.db = BillieDB();
+ if os.path.exists( self.dbpath ):
+ self.db.load_from_file( self.dbpath );
+
+ ## Save the db back to disk
+ def save_db(self):
+ self.db.save_to_file( self.dbpath );
+
+ ## Returns true if it's a poolish match
+ def poolish_match(self, path):
+ for a in self.archs:
+ if path.endswith( "_%s.deb" % (a) ):
+ return 1;
+ if path.endswith( "_%s.udeb" % (a) ):
+ return 1;
+ if self.source:
+ if (path.endswith( ".tar.gz" ) or
+ path.endswith( ".diff.gz" ) or
+ path.endswith( ".dsc" )):
+ return 1;
+ return 0;
+
+ ## Returns false if it's a bad match dists-wise
+ def distish_match(self,path):
+ for a in self.archs:
+ if path.endswith("/Contents-%s.gz" % (a)):
+ return 1;
+ if path.find("/binary-%s/" % (a)) != -1:
+ return 1;
+ if path.find("/installer-%s/" % (a)) != -1:
+ return 1;
+ if path.find("/source/") != -1:
+ if self.source:
+ return 1;
+ else:
+ return 0;
+ if path.find("/Contents-") != -1:
+ return 0;
+ if path.find("/binary-") != -1:
+ return 0;
+ if path.find("/installer-") != -1:
+ return 0;
+ return 1;
+
+##############################################################################
+# The applicable function is basically a predicate. Given a path and a
+# target object its job is to decide if the path conforms for the
+# target and thus is wanted.
+#
+# 'verbatim' is a list of files which are copied regardless;
+# it should be loaded from a config file eventually.
+##################
+
+verbatim = [
+ ];
+
+verbprefix = [
+ "/tools/",
+ "/README",
+ "/doc/"
+ ];
+
+def applicable(path, target):
+ if path.startswith("/pool/"):
+ return target.poolish_match(path);
+ if (path.startswith("/dists/") or
+ path.startswith("/project/experimental/")):
+ return target.distish_match(path);
+ if path in verbatim:
+ return 1;
+ for prefix in verbprefix:
+ if path.startswith(prefix):
+ return 1;
+ return 0;
+
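applicable() is a predicate combining pool-style suffix matches with a verbatim whitelist. A condensed, self-contained sketch of the same idea (the dists/ handling is omitted, and the archs/want_source parameters are a simplification standing in for the BillieTarget object):

```python
VERBATIM = set()                                # files copied regardless
VERBPREFIX = ("/tools/", "/README", "/doc/")    # prefixes copied regardless

def applicable(path, archs, want_source):
    """Decide whether a mirror path is wanted for a partial tree."""
    if path.startswith("/pool/"):
        # poolish match: keep arch-specific binaries and, optionally, source
        if any(path.endswith("_%s.deb" % a) or path.endswith("_%s.udeb" % a)
               for a in archs):
            return True
        return want_source and path.endswith((".dsc", ".diff.gz", ".tar.gz"))
    if path in VERBATIM:
        return True
    return path.startswith(VERBPREFIX)
```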
+
+##############################################################################
+# A BillieDir is a representation of a tree.
+# It distinguishes files, dirs and links
+# Dirs are dicts of (name, BillieDir)
+# Files are dicts of (name, inode)
+# Links are dicts of (name, target)
+##############
+
+class BillieDir:
+ def __init__(self):
+ self.dirs = {};
+ self.files = {};
+ self.links = {};
+
+##############################################################################
+# A BillieDB is a container for a BillieDir...
+##############
+
+class BillieDB:
+ ## Initialise a BillieDB as containing nothing
+ def __init__(self):
+ self.root = BillieDir();
+
+ def _internal_recurse(self, path):
+ bdir = BillieDir();
+ dl = os.listdir( path );
+ dl.sort();
+ dirs = [];
+ for ln in dl:
+ lnl = os.lstat( "%s/%s" % (path, ln) );
+ if S_ISDIR(lnl[0]):
+ dirs.append(ln);
+ elif S_ISLNK(lnl[0]):
+ bdir.links[ln] = os.readlink( "%s/%s" % (path, ln) );
+ elif S_ISREG(lnl[0]):
+ bdir.files[ln] = lnl[1];
+ else:
+ utils.fubar( "Confused by %s/%s -- not a dir, link or file" %
+ ( path, ln ) );
+ for d in dirs:
+ bdir.dirs[d] = self._internal_recurse( "%s/%s" % (path,d) );
+
+ return bdir;
+
+ ## Recurse through a given path, building up the tree accordingly
+ def init_from_dir(self, dirp):
+ self.root = self._internal_recurse( dirp );
+
+ ## Load this BillieDB from file
+ def load_from_file(self, fname):
+ f = open(fname, "r");
+ self.root = cPickle.load(f);
+ f.close();
+
+ ## Save this BillieDB to a file
+ def save_to_file(self, fname):
+ f = open(fname, "w");
+ cPickle.dump( self.root, f, 1 );
+ f.close();
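BillieDB persists its whole tree with cPickle. A sketch of the same round-trip under Python 3's pickle module (binary file modes are required there, unlike the text-mode open() calls above; the nested-dict tree here is a stand-in for a BillieDir):

```python
import os
import pickle
import tempfile

# Illustrative tree shaped like a BillieDir: dirs/files/links per level
tree = {"dirs": {"doc": {"dirs": {}, "files": {"README": 42}, "links": {}}},
        "files": {},
        "links": {"latest": "doc"}}

fd, fname = tempfile.mkstemp()
os.close(fd)
with open(fname, "wb") as f:        # binary mode, unlike the py2 original
    pickle.dump(tree, f, protocol=2)
with open(fname, "rb") as f:
    loaded = pickle.load(f)
os.remove(fname)
```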
+
+
+##############################################################################
+# Helper functions for the tree syncing...
+##################
+
+def _pth(a,b):
+ return "%s/%s" % (a,b);
+
+def do_mkdir(targ,path):
+ if not os.path.exists( _pth(targ.root, path) ):
+ os.makedirs( _pth(targ.root, path) );
+
+def do_mkdir_f(targ,path):
+ do_mkdir(targ, os.path.dirname(path));
+
+def do_link(targ,path):
+ do_mkdir_f(targ,path);
+ os.link( _pth(MASTER_PATH, path),
+ _pth(targ.root, path));
+
+def do_symlink(targ,path,link):
+ do_mkdir_f(targ,path);
+ os.symlink( link, _pth(targ.root, path) );
+
+def do_unlink(targ,path):
+ os.unlink( _pth(targ.root, path) );
+
+def do_unlink_dir(targ,path):
+ os.system( "rm -Rf '%s'" % _pth(targ.root, path) );
+
+##############################################################################
+# Reconciling a target with the sourcedb
+################
+
+def _internal_reconcile( path, srcdir, targdir, targ ):
+ # Remove any links in targdir which aren't in srcdir
+ # Or which aren't applicable
+ rm = []
+ for k in targdir.links.keys():
+ if applicable( _pth(path, k), targ ):
+ if not srcdir.links.has_key(k):
+ rm.append(k);
+ else:
+ rm.append(k);
+ for k in rm:
+ #print "-L-", _pth(path,k)
+ do_unlink(targ, _pth(path,k))
+ del targdir.links[k];
+
+ # Remove any files in targdir which aren't in srcdir
+ # Or which aren't applicable
+ rm = []
+ for k in targdir.files.keys():
+ if applicable( _pth(path, k), targ ):
+ if not srcdir.files.has_key(k):
+ rm.append(k);
+ else:
+ rm.append(k);
+ for k in rm:
+ #print "-F-", _pth(path,k)
+ do_unlink(targ, _pth(path,k))
+ del targdir.files[k];
+
+ # Remove any dirs in targdir which aren't in srcdir
+ rm = []
+ for k in targdir.dirs.keys():
+ if not srcdir.dirs.has_key(k):
+ rm.append(k);
+ for k in rm:
+ #print "-D-", _pth(path,k)
+ do_unlink_dir(targ, _pth(path,k))
+ del targdir.dirs[k];
+
+ # Add/update files
+ for k in srcdir.files.keys():
+ if applicable( _pth(path,k), targ ):
+ if not targdir.files.has_key(k):
+ #print "+F+", _pth(path,k)
+ do_link( targ, _pth(path,k) );
+ targdir.files[k] = srcdir.files[k];
+ else:
+ if targdir.files[k] != srcdir.files[k]:
+ #print "*F*", _pth(path,k);
+ do_unlink( targ, _pth(path,k) );
+ do_link( targ, _pth(path,k) );
+ targdir.files[k] = srcdir.files[k];
+
+ # Add/update links
+ for k in srcdir.links.keys():
+ if applicable( _pth(path,k), targ ):
+ if not targdir.links.has_key(k):
+ targdir.links[k] = srcdir.links[k];
+ #print "+L+",_pth(path,k), "->", srcdir.links[k]
+ do_symlink( targ, _pth(path,k), targdir.links[k] );
+ else:
+ if targdir.links[k] != srcdir.links[k]:
+ do_unlink( targ, _pth(path,k) );
+ targdir.links[k] = srcdir.links[k];
+ #print "*L*", _pth(path,k), "to ->", srcdir.links[k]
+ do_symlink( targ, _pth(path,k), targdir.links[k] );
+
+ # Do dirs
+ for k in srcdir.dirs.keys():
+ if not targdir.dirs.has_key(k):
+ targdir.dirs[k] = BillieDir();
+ #print "+D+", _pth(path,k)
+ _internal_reconcile( _pth(path,k), srcdir.dirs[k],
+ targdir.dirs[k], targ );
+
+
+def reconcile_target_db( src, targ ):
+ _internal_reconcile( "", src.root, targ.db.root, targ );
+
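_internal_reconcile interleaves four passes: remove stale links, files and dirs, then add or refresh whatever the source has. Its core bookkeeping, reduced to pure set arithmetic over one directory level of name → inode maps:

```python
def plan_sync(src_files, targ_files):
    """Return (to_remove, to_link, to_relink) for one directory level."""
    to_remove = sorted(k for k in targ_files if k not in src_files)
    to_link = sorted(k for k in src_files if k not in targ_files)
    # same name, different inode: unlink and hardlink afresh
    to_relink = sorted(k for k in src_files
                       if k in targ_files and targ_files[k] != src_files[k])
    return to_remove, to_link, to_relink

src = {"a.deb": 100, "b.deb": 201, "c.deb": 300}
targ = {"a.deb": 100, "b.deb": 200, "z.deb": 999}
removed, linked, relinked = plan_sync(src, targ)
```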
+###############################################################################
+
+def load_config():
+ global MASTER_PATH
+ global TREE_ROOT
+ global TREE_DB_ROOT
+ global trees
+
+ MASTER_PATH = Cnf["Billie::FTPPath"];
+ TREE_ROOT = Cnf["Billie::TreeRootPath"];
+ TREE_DB_ROOT = Cnf["Billie::TreeDatabasePath"];
+
+ for a in Cnf.ValueList("Billie::BasicTrees"):
+ trees.append( BillieTarget( a, "%s,all" % a, 1 ) )
+
+ for n in Cnf.SubTree("Billie::CombinationTrees").List():
+ archs = Cnf.ValueList("Billie::CombinationTrees::%s" % n)
+ source = 0
+ if "source" in archs:
+ source = 1
+ archs.remove("source")
+ archs = ",".join(archs)
+ trees.append( BillieTarget( n, archs, source ) );
+
+def do_list ():
+ print "Master path",MASTER_PATH
+ print "Trees at",TREE_ROOT
+ print "DBs at",TREE_DB_ROOT
+
+ for tree in trees:
+ print tree.name,"contains",", ".join(tree.archs),
+ if tree.source:
+ print " [source]"
+ else:
+ print ""
+
+def do_help ():
+ print """Usage: billie [OPTIONS]
+Generate hardlink trees of certain architectures
+
+ -h, --help show this help and exit
+ -l, --list list the configuration and exit
+"""
+
+
+def main ():
+ global Cnf
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('h',"help","Billie::Options::Help"),
+ ('l',"list","Billie::Options::List"),
+ ];
+
+ arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Cnf["Billie::Options::cake"] = "";
+ Options = Cnf.SubTree("Billie::Options")
+
+ print "Loading configuration..."
+ load_config();
+ print "Loaded."
+
+ if Options.has_key("Help"):
+ do_help();
+ return;
+ if Options.has_key("List"):
+ do_list();
+ return;
+
+
+ src = BillieDB()
+ print "Scanning", MASTER_PATH
+ src.init_from_dir(MASTER_PATH)
+ print "Scanned"
+
+ for tree in trees:
+ print "Reconciling tree:",tree.name
+ reconcile_target_db( src, tree );
+ print "Saving updated DB...",
+ tree.save_db();
+ print "Done"
+
+##############################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+#!/usr/bin/env python
+
+# Microscopic modification and query tool for overrides in projectb
+# Copyright (C) 2004 Daniel Silverstone <dsilvers@digital-scurf.org>
+# $Id: alicia,v 1.6 2004-11-27 17:58:13 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+
+################################################################################
+## So line up your soldiers and she'll shoot them all down
+## Coz Alisha Rules The World
+## You think you found a dream, then it shatters and it seems,
+## That Alisha Rules The World
+################################################################################
+
+import pg, sys;
+import utils, db_access;
+import apt_pkg, logging;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+################################################################################
+
+# Shamelessly stolen from melanie. Should probably end up in utils.py
+def game_over():
+ answer = utils.our_raw_input("Continue (y/N)? ").lower();
+ if answer != "y":
+ print "Aborted."
+ sys.exit(1);
+
+
+def usage (exit_code=0):
+ print """Usage: alicia [OPTIONS] package [section] [priority]
+Make microchanges or microqueries of the overrides
+
+ -h, --help show this help and exit
+ -d, --done=BUG# send priority/section change as closure to bug#
+ -n, --no-action don't do anything
+ -s, --suite specify the suite to use
+"""
+ sys.exit(exit_code)
+
+def main ():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('h',"help","Alicia::Options::Help"),
+ ('d',"done","Alicia::Options::Done", "HasArg"),
+ ('n',"no-action","Alicia::Options::No-Action"),
+ ('s',"suite","Alicia::Options::Suite", "HasArg"),
+ ];
+ for i in ["help", "no-action"]:
+ if not Cnf.has_key("Alicia::Options::%s" % (i)):
+ Cnf["Alicia::Options::%s" % (i)] = "";
+ if not Cnf.has_key("Alicia::Options::Suite"):
+ Cnf["Alicia::Options::Suite"] = "unstable";
+
+ arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Alicia::Options")
+
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ if not arguments:
+ utils.fubar("package name is a required argument.");
+
+ package = arguments.pop(0);
+ suite = Options["Suite"]
+ if arguments and len(arguments) > 2:
+ utils.fubar("Too many arguments");
+
+ if arguments and len(arguments) == 1:
+ # Determine if the argument is a priority or a section...
+ arg = arguments.pop();
+ q = projectB.query("""
+ SELECT ( SELECT COUNT(*) FROM section WHERE section=%s ) AS secs,
+ ( SELECT COUNT(*) FROM priority WHERE priority=%s ) AS prios
+ """ % ( pg._quote(arg,"str"), pg._quote(arg,"str")));
+ r = q.getresult();
+ if r[0][0] == 1:
+ arguments = (arg,".");
+ elif r[0][1] == 1:
+ arguments = (".",arg);
+ else:
+ utils.fubar("%s is not a valid section or priority" % (arg));
+
+
+ # Retrieve current section/priority...
+ q = projectB.query("""
+ SELECT priority.priority AS prio, section.section AS sect
+ FROM override, priority, section, suite
+ WHERE override.priority = priority.id
+ AND override.section = section.id
+ AND override.package = %s
+ AND override.suite = suite.id
+ AND suite.suite_name = %s
+ """ % (pg._quote(package,"str"), pg._quote(suite,"str")));
+
+ if q.ntuples() == 0:
+ utils.fubar("Unable to find package %s" % (package));
+ if q.ntuples() > 1:
+ utils.fubar("%s is ambiguous. Matches %d packages" % (package,q.ntuples()));
+
+ r = q.getresult();
+ oldsection = r[0][1];
+ oldpriority = r[0][0];
+
+ if not arguments:
+ print "%s is in section '%s' at priority '%s'" % (
+ package,oldsection,oldpriority);
+ sys.exit(0);
+
+ # At this point, we have a new section and priority... check they're valid...
+ newsection, newpriority = arguments;
+
+ if newsection == ".":
+ newsection = oldsection;
+ if newpriority == ".":
+ newpriority = oldpriority;
+
+ q = projectB.query("SELECT id FROM section WHERE section=%s" % (
+ pg._quote(newsection,"str")));
+
+ if q.ntuples() == 0:
+ utils.fubar("Supplied section %s is invalid" % (newsection));
+ newsecid = q.getresult()[0][0];
+
+ q = projectB.query("SELECT id FROM priority WHERE priority=%s" % (
+ pg._quote(newpriority,"str")));
+
+ if q.ntuples() == 0:
+ utils.fubar("Supplied priority %s is invalid" % (newpriority));
+ newprioid = q.getresult()[0][0];
+
+ if newpriority == oldpriority and newsection == oldsection:
+ print "I: Doing nothing"
+ sys.exit(0);
+
+ # If we're in no-action mode
+ if Options["No-Action"]:
+ if newpriority != oldpriority:
+ print "I: Would change priority from %s to %s" % (oldpriority,newpriority);
+ if newsection != oldsection:
+ print "I: Would change section from %s to %s" % (oldsection,newsection);
+ if Options.has_key("Done"):
+ print "I: Would also close bug(s): %s" % (Options["Done"]);
+
+ sys.exit(0);
+
+ if newpriority != oldpriority:
+ print "I: Will change priority from %s to %s" % (oldpriority,newpriority);
+ if newsection != oldsection:
+ print "I: Will change section from %s to %s" % (oldsection,newsection);
+
+ if not Options.has_key("Done"):
+ pass;
+ #utils.warn("No bugs to close have been specified. No one will know you have done this.");
+ else:
+ print "I: Will close bug(s): %s" % (Options["Done"]);
+
+ game_over();
+
+ Logger = logging.Logger(Cnf, "alicia");
+
+ projectB.query("BEGIN WORK");
+ # We're in "do it" mode, we have something to do... do it
+ if newpriority != oldpriority:
+ q = projectB.query("""
+ UPDATE override
+ SET priority=%d
+ WHERE package=%s
+ AND suite = (SELECT id FROM suite WHERE suite_name=%s)""" % (
+ newprioid,
+ pg._quote(package,"str"),
+ pg._quote(suite,"str") ));
+ Logger.log(["changed priority",package,oldpriority,newpriority]);
+
+ if newsection != oldsection:
+ q = projectB.query("""
+ UPDATE override
+ SET section=%d
+ WHERE package=%s
+ AND suite = (SELECT id FROM suite WHERE suite_name=%s)""" % (
+ newsecid,
+ pg._quote(package,"str"),
+ pg._quote(suite,"str") ));
+ Logger.log(["changed section",package,oldsection,newsection]);
+ projectB.query("COMMIT WORK");
+
+ if Options.has_key("Done"):
+ Subst = {};
+ Subst["__ALICIA_ADDRESS__"] = Cnf["Alicia::MyEmailAddress"];
+ Subst["__BUG_SERVER__"] = Cnf["Dinstall::BugServer"];
+ bcc = [];
+ if Cnf.Find("Dinstall::Bcc") != "":
+ bcc.append(Cnf["Dinstall::Bcc"]);
+ if Cnf.Find("Alicia::Bcc") != "":
+ bcc.append(Cnf["Alicia::Bcc"]);
+ if bcc:
+ Subst["__BCC__"] = "Bcc: " + ", ".join(bcc);
+ else:
+ Subst["__BCC__"] = "X-Filler: 42";
+ Subst["__CC__"] = "X-Katie: alicia $Revision: 1.6 $";
+ Subst["__ADMIN_ADDRESS__"] = Cnf["Dinstall::MyAdminAddress"];
+ Subst["__DISTRO__"] = Cnf["Dinstall::MyDistribution"];
+ Subst["__WHOAMI__"] = utils.whoami();
+
+ summary = "Concerning package %s...\n" % (package);
+ summary += "Operating on the %s suite\n" % (suite);
+ if newpriority != oldpriority:
+ summary += "Changed priority from %s to %s\n" % (oldpriority,newpriority);
+ if newsection != oldsection:
+ summary += "Changed section from %s to %s\n" % (oldsection,newsection);
+ Subst["__SUMMARY__"] = summary;
+
+ for bug in utils.split_args(Options["Done"]):
+ Subst["__BUG_NUMBER__"] = bug;
+ mail_message = utils.TemplateSubst(
+ Subst,Cnf["Dir::Templates"]+"/alicia.bug-close");
+ utils.send_mail(mail_message);
+ Logger.log(["closed bug",bug]);
+
+ Logger.close();
+
+ print "Done";
+
+#################################################################################
+
+if __name__ == '__main__':
+ main()
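The single-argument disambiguation in alicia (one combined query counting matches in the `section` and `priority` tables) can be sketched against a throwaway database. This is a minimal Python 3 illustration of the decision logic only; the table and column names follow the script, and the sqlite backend merely stands in for projectb:

```python
import sqlite3

def classify(arg, db):
    # Count matches in each table, as alicia's combined SELECT does.
    secs = db.execute("SELECT COUNT(*) FROM section WHERE section = ?", (arg,)).fetchone()[0]
    prios = db.execute("SELECT COUNT(*) FROM priority WHERE priority = ?", (arg,)).fetchone()[0]
    if secs == 1:
        return (arg, ".")   # names a section; "." means keep the old priority
    if prios == 1:
        return (".", arg)   # names a priority; "." means keep the old section
    raise ValueError("%s is not a valid section or priority" % arg)

# Throwaway schema, for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE section (section TEXT)")
db.execute("CREATE TABLE priority (priority TEXT)")
db.executemany("INSERT INTO section VALUES (?)", [("net",), ("utils",)])
db.executemany("INSERT INTO priority VALUES (?)", [("optional",), ("extra",)])
```

The `(value, ".")` / `(".", value)` tuples match the placeholder convention alicia uses for "leave this field unchanged".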
--- /dev/null
+#!/usr/bin/env python
+
+# Poolify (move packages from "legacy" type locations to pool locations)
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: catherine,v 1.19 2004-03-11 00:20:51 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# "Welcome to where time stands still,
+# No one leaves and no one will."
+# - Sanitarium - Metallica / Master of the puppets
+
+################################################################################
+
+import os, pg, re, stat, sys;
+import utils, db_access;
+import apt_pkg, apt_inst;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+re_isadeb = re.compile (r"(.+?)_(.+?)(_(.+))?\.u?deb$");
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: catherine [OPTIONS]
+Migrate packages from legacy locations into the pool.
+
+ -l, --limit=AMOUNT only migrate AMOUNT Kb of packages
+ -n, --no-action don't do anything
+ -v, --verbose explain what is being done
+ -h, --help show this help and exit"""
+
+ sys.exit(exit_code)
+
+################################################################################
+
+# Q is a python-postgresql query result set and must have the
+# following four columns:
+# o files.id (as 'files_id')
+# o files.filename
+# o location.path
+# o component.name (as 'component')
+#
+# limit is a value in bytes or -1 for no limit (use with care!)
+# verbose and no_action are booleans
+
+def poolize (q, limit, verbose, no_action):
+ poolized_size = 0L;
+ poolized_count = 0;
+
+ # Walk the query results
+ qd = q.dictresult();
+ for qid in qd:
+ legacy_filename = qid["path"]+qid["filename"];
+ size = os.stat(legacy_filename)[stat.ST_SIZE];
+ if (poolized_size + size) > limit and limit >= 0:
+ utils.warn("Hit %s limit." % (utils.size_type(limit)));
+ break;
+ poolized_size += size;
+ poolized_count += 1;
+ base_filename = os.path.basename(legacy_filename);
+ destination_filename = base_filename;
+ # Work out the source package name
+ if re_isadeb.match(base_filename):
+ control = apt_pkg.ParseSection(apt_inst.debExtractControl(utils.open_file(legacy_filename)))
+ package = control.Find("Package", "");
+ source = control.Find("Source", package);
+ if source.find("(") != -1:
+ m = utils.re_extract_src_version.match(source)
+ source = m.group(1)
+ # If it's a binary, we need to also rename the file to include the architecture
+ version = control.Find("Version", "");
+ architecture = control.Find("Architecture", "");
+ if package == "" or version == "" or architecture == "":
+ utils.fubar("%s: couldn't determine required information to rename .deb file." % (legacy_filename));
+ version = utils.re_no_epoch.sub('', version);
+ destination_filename = "%s_%s_%s.deb" % (package, version, architecture);
+ else:
+ m = utils.re_issource.match(base_filename)
+ if m:
+ source = m.group(1);
+ else:
+ utils.fubar("expansion of source filename '%s' failed." % (legacy_filename));
+ # Work out the component name
+ component = qid["component"];
+ if component == "":
+ q = projectB.query("SELECT DISTINCT(c.name) FROM override o, component c WHERE o.package = '%s' AND o.component = c.id;" % (source));
+ ql = q.getresult();
+ if not ql:
+ utils.fubar("No override match for '%s' so I can't work out the component." % (source));
+ if len(ql) > 1:
+ utils.fubar("Multiple override matches for '%s' so I can't work out the component." % (source));
+ component = ql[0][0];
+ # Work out the new location
+ q = projectB.query("SELECT l.id FROM location l, component c WHERE c.name = '%s' AND c.id = l.component AND l.type = 'pool';" % (component));
+ ql = q.getresult();
+ if len(ql) != 1:
+ utils.fubar("couldn't determine location ID for '%s'. [query returned %d matches, not 1 as expected]" % (source, len(ql)));
+ location_id = ql[0][0];
+ # First move the files to the new location
+ pool_location = utils.poolify (source, component);
+ pool_filename = pool_location + destination_filename;
+ destination = Cnf["Dir::Pool"] + pool_location + destination_filename;
+ if os.path.exists(destination):
+ utils.fubar("'%s' already exists in the pool; serious FUBARity." % (legacy_filename));
+ if verbose:
+ print "Moving: %s -> %s" % (legacy_filename, destination);
+ if not no_action:
+ utils.move(legacy_filename, destination);
+ # Then update the DB's files table
+ if verbose:
+ print "SQL: UPDATE files SET filename = '%s', location = '%s' WHERE id = '%s'" % (pool_filename, location_id, qid["files_id"]);
+ if not no_action:
+ q = projectB.query("UPDATE files SET filename = '%s', location = '%s' WHERE id = '%s'" % (pool_filename, location_id, qid["files_id"]));
+
+ sys.stderr.write("Poolized %s in %s files.\n" % (utils.size_type(poolized_size), poolized_count));
+
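The rename step in poolize (building `package_version_architecture.deb` from the control data) hinges on two small transformations: stripping any epoch from the version, and recovering the source package name from a `Source: foo (1.2-3)` field. A hedged sketch of both, with regexes written to the assumed shape of `utils.re_no_epoch` and `utils.re_extract_src_version`:

```python
import re

re_no_epoch = re.compile(r"^\d+:")                # assumed equivalent of utils.re_no_epoch
re_extract_src = re.compile(r"(\S+)\s*\((.*)\)")  # assumed equivalent of utils.re_extract_src_version

def deb_destination(package, version, architecture):
    # Drop the epoch: it never appears in a .deb filename.
    version = re_no_epoch.sub("", version)
    return "%s_%s_%s.deb" % (package, version, architecture)

def source_name(source_field, package):
    # "Source: foo (1:2.3-4)" carries both source name and version.
    if "(" in source_field:
        return re_extract_src.match(source_field).group(1)
    return source_field or package
```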
+################################################################################
+
+def main ():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+
+ for i in ["help", "limit", "no-action", "verbose" ]:
+ if not Cnf.has_key("Catherine::Options::%s" % (i)):
+ Cnf["Catherine::Options::%s" % (i)] = "";
+
+
+ Arguments = [('h',"help","Catherine::Options::Help"),
+ ('l',"limit", "Catherine::Options::Limit", "HasArg"),
+ ('n',"no-action","Catherine::Options::No-Action"),
+ ('v',"verbose","Catherine::Options::Verbose")];
+
+ apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Catherine::Options")
+
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ if not Options["Limit"]:
+ limit = -1;
+ else:
+ limit = int(Options["Limit"]) * 1024;
+
+ # -n/--no-action implies -v/--verbose
+ if Options["No-Action"]:
+ Options["Verbose"] = "true";
+
+ # Sanity check the limit argument
+ if limit > 0 and limit < 1024:
+ utils.fubar("-l/--limit takes an argument with a value in kilobytes.");
+
+ # Grab a list of all files not already in the pool
+ q = projectB.query("""
+SELECT l.path, f.filename, f.id as files_id, c.name as component
+ FROM files f, location l, component c WHERE
+ NOT EXISTS (SELECT 1 FROM location l WHERE l.type = 'pool' AND f.location = l.id)
+ AND NOT (f.filename ~ '^potato') AND f.location = l.id AND l.component = c.id
+UNION SELECT l.path, f.filename, f.id as files_id, null as component
+ FROM files f, location l WHERE
+ NOT EXISTS (SELECT 1 FROM location l WHERE l.type = 'pool' AND f.location = l.id)
+ AND NOT (f.filename ~ '^potato') AND f.location = l.id AND NOT EXISTS
+ (SELECT 1 FROM location l WHERE l.component IS NOT NULL AND f.location = l.id);""");
+
+ poolize(q, limit, Options["Verbose"], Options["No-Action"]);
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
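catherine leans on `utils.poolify` for the destination directory. The pool convention it encodes, per the published Debian archive layout, shards sources on their first letter, with `lib*` packages sharded on four characters; a minimal sketch (the real helper's exact return format may differ):

```python
def poolify(source, component):
    # pool/<component>/<shard>/<source>/ -- lib* packages shard on 4 chars
    if source.startswith("lib"):
        shard = source[:4]
    else:
        shard = source[:1]
    return "pool/%s/%s/%s/" % (component, shard, source)
```

So `hello` in main lands under `pool/main/h/hello/`, while `libxml2` lands under `pool/main/libx/libxml2/`.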
--- /dev/null
+#!/usr/bin/env python
+
+# Installs Debian packages from queue/accepted into the pool
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: kelly,v 1.18 2005-12-17 10:57:03 rmurray Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+###############################################################################
+
+# Cartman: "I'm trying to make the best of a bad situation, I don't
+# need to hear crap from a bunch of hippy freaks living in
+# denial. Screw you guys, I'm going home."
+#
+# Kyle: "But Cartman, we're trying to..."
+#
+# Cartman: "uhh.. screw you guys... home."
+
+###############################################################################
+
+import errno, fcntl, os, sys, time, re;
+import apt_pkg;
+import db_access, katie, logging, utils;
+
+###############################################################################
+
+# Globals
+kelly_version = "$Revision: 1.18 $";
+
+Cnf = None;
+Options = None;
+Logger = None;
+Urgency_Logger = None;
+projectB = None;
+Katie = None;
+pkg = None;
+
+reject_message = "";
+changes = None;
+dsc = None;
+dsc_files = None;
+files = None;
+Subst = None;
+
+install_count = 0;
+install_bytes = 0.0;
+
+installing_to_stable = 0;
+
+###############################################################################
+
+# FIXME: this should go away to some Debian specific file
+# FIXME: should die if file already exists
+
+class Urgency_Log:
+ "Urgency Logger object"
+ def __init__ (self, Cnf):
+ "Initialize a new Urgency Logger object"
+ self.Cnf = Cnf;
+ self.timestamp = time.strftime("%Y%m%d%H%M%S");
+ # Create the log directory if it doesn't exist
+ self.log_dir = Cnf["Dir::UrgencyLog"];
+ if not os.path.exists(self.log_dir):
+ umask = os.umask(00000);
+ os.makedirs(self.log_dir, 02775);
+ os.umask(umask);
+ # Open the logfile
+ self.log_filename = "%s/.install-urgencies-%s.new" % (self.log_dir, self.timestamp);
+ self.log_file = utils.open_file(self.log_filename, 'w');
+ self.writes = 0;
+
+ def log (self, source, version, urgency):
+ "Log an event"
+ self.log_file.write(" ".join([source, version, urgency])+'\n');
+ self.log_file.flush();
+ self.writes += 1;
+
+ def close (self):
+ "Close a Logger object"
+ self.log_file.flush();
+ self.log_file.close();
+ if self.writes:
+ new_filename = "%s/install-urgencies-%s" % (self.log_dir, self.timestamp);
+ utils.move(self.log_filename, new_filename);
+ else:
+ os.unlink(self.log_filename);
+
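The Urgency_Log pattern above (write to a hidden `.new` file, publish it under its final name on close, or delete it if nothing was logged) generalises beyond urgencies. A self-contained sketch of the same spool discipline, using a temporary directory rather than a configured one:

```python
import os, tempfile

class SpoolLog:
    """Write lines to <dir>/.<name>.new; rename to <dir>/<name> on close
    only if at least one line was written, otherwise unlink it."""
    def __init__(self, directory, name):
        self.final = os.path.join(directory, name)
        self.tmp = os.path.join(directory, "." + name + ".new")
        self.fh = open(self.tmp, "w")
        self.writes = 0
    def log(self, *fields):
        self.fh.write(" ".join(fields) + "\n")
        self.writes += 1
    def close(self):
        self.fh.close()
        if self.writes:
            os.rename(self.tmp, self.final)  # publish
        else:
            os.unlink(self.tmp)              # nothing happened; leave no trace

d = tempfile.mkdtemp()
log = SpoolLog(d, "install-urgencies")
log.log("hello", "2.10-3", "low")
log.close()
```

Consumers only ever see complete files; a half-written log can never be picked up under the final name.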
+###############################################################################
+
+def reject (str, prefix="Rejected: "):
+ global reject_message;
+ if str:
+ reject_message += prefix + str + "\n";
+
+ # Recheck anything that relies on the database, since that's not
+# frozen between accept and our run time.
+
+def check():
+ propogate={}
+ nopropogate={}
+ for file in files.keys():
+ # The .orig.tar.gz can disappear out from under us if it's a
+ # duplicate of one in the archive.
+ if not files.has_key(file):
+ continue;
+ # Check that the source still exists
+ if files[file]["type"] == "deb":
+ source_version = files[file]["source version"];
+ source_package = files[file]["source package"];
+ if not changes["architecture"].has_key("source") \
+ and not Katie.source_exists(source_package, source_version, changes["distribution"].keys()):
+ reject("no source found for %s %s (%s)." % (source_package, source_version, file));
+
+ # Version and file overwrite checks
+ if not installing_to_stable:
+ if files[file]["type"] == "deb":
+ reject(Katie.check_binary_against_db(file), "");
+ elif files[file]["type"] == "dsc":
+ reject(Katie.check_source_against_db(file), "");
+ (reject_msg, is_in_incoming) = Katie.check_dsc_against_db(file);
+ reject(reject_msg, "");
+
+ # propagate if it is in the override tables:
+ if changes.has_key("propdistribution"):
+ for suite in changes["propdistribution"].keys():
+ if Katie.in_override_p(files[file]["package"], files[file]["component"], suite, files[file].get("dbtype",""), file):
+ propogate[suite] = 1
+ else:
+ nopropogate[suite] = 1
+
+ for suite in propogate.keys():
+ if suite in nopropogate:
+ continue
+ changes["distribution"][suite] = 1
+
+ for file in files.keys():
+ # Check the package is still in the override tables
+ for suite in changes["distribution"].keys():
+ if not Katie.in_override_p(files[file]["package"], files[file]["component"], suite, files[file].get("dbtype",""), file):
+ reject("%s is NEW for %s." % (file, suite));
+
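The propagation rule in check() amounts to set arithmetic: a prop-distribution suite is added to the upload's distributions only if every file has an override entry there. A hedged sketch of that decision, with a plain `in_override` callback standing in for `Katie.in_override_p`:

```python
def propagated_suites(files, prop_suites, in_override):
    # in_override(package, suite) -> bool stands in for Katie.in_override_p
    yes, no = set(), set()
    for f in files:
        for suite in prop_suites:
            if in_override(f["package"], suite):
                yes.add(suite)
            else:
                no.add(suite)
    # One file without an override vetoes the whole suite.
    return yes - no

files = [{"package": "foo"}, {"package": "foo-doc"}]
table = {("foo", "stable"), ("foo-doc", "stable"), ("foo", "testing")}
suites = propagated_suites(files, ["stable", "testing"],
                           lambda p, s: (p, s) in table)
```

Here "testing" is vetoed because foo-doc has no override there, so only "stable" propagates.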
+###############################################################################
+
+def init():
+ global Cnf, Options, Katie, projectB, changes, dsc, dsc_files, files, pkg, Subst;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('a',"automatic","Dinstall::Options::Automatic"),
+ ('h',"help","Dinstall::Options::Help"),
+ ('n',"no-action","Dinstall::Options::No-Action"),
+ ('p',"no-lock", "Dinstall::Options::No-Lock"),
+ ('s',"no-mail", "Dinstall::Options::No-Mail"),
+ ('V',"version","Dinstall::Options::Version")];
+
+ for i in ["automatic", "help", "no-action", "no-lock", "no-mail", "version"]:
+ if not Cnf.has_key("Dinstall::Options::%s" % (i)):
+ Cnf["Dinstall::Options::%s" % (i)] = "";
+
+ changes_files = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Dinstall::Options")
+
+ if Options["Help"]:
+ usage();
+
+ if Options["Version"]:
+ print "kelly %s" % (kelly_version);
+ sys.exit(0);
+
+ Katie = katie.Katie(Cnf);
+ projectB = Katie.projectB;
+
+ changes = Katie.pkg.changes;
+ dsc = Katie.pkg.dsc;
+ dsc_files = Katie.pkg.dsc_files;
+ files = Katie.pkg.files;
+ pkg = Katie.pkg;
+ Subst = Katie.Subst;
+
+ return changes_files;
+
+###############################################################################
+
+def usage (exit_code=0):
+ print """Usage: kelly [OPTION]... [CHANGES]...
+ -a, --automatic automatic run
+ -h, --help show this help and exit.
+ -n, --no-action don't do anything
+ -p, --no-lock don't check lockfile !! for cron.daily only !!
+ -s, --no-mail don't send any mail
+ -V, --version display the version number and exit"""
+ sys.exit(exit_code)
+
+###############################################################################
+
+def action ():
+ (summary, short_summary) = Katie.build_summaries();
+
+ (prompt, answer) = ("", "XXX")
+ if Options["No-Action"] or Options["Automatic"]:
+ answer = 'S'
+
+ if reject_message.find("Rejected") != -1:
+ print "REJECT\n" + reject_message,
+ prompt = "[R]eject, Skip, Quit ?";
+ if Options["Automatic"]:
+ answer = 'R';
+ else:
+ print "INSTALL to " + ", ".join(changes["distribution"].keys())
+ print reject_message + summary,
+ prompt = "[I]nstall, Skip, Quit ?";
+ if Options["Automatic"]:
+ answer = 'I';
+
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.match(prompt);
+ if answer == "":
+ answer = m.group(1);
+ answer = answer[:1].upper();
+
+ if answer == 'R':
+ do_reject ();
+ elif answer == 'I':
+ if not installing_to_stable:
+ install();
+ else:
+ stable_install(summary, short_summary);
+ elif answer == 'Q':
+ sys.exit(0)
+
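The prompt loop above relies on `katie.re_default_answer` to pull the bracketed letter out of strings like `"[I]nstall, Skip, Quit ?"` when the operator just presses return. The regex is presumably along these lines; a minimal sketch of the fallback logic:

```python
import re

re_default_answer = re.compile(r"\[(.)\]")  # assumed shape of katie.re_default_answer

def resolve(prompt, answer):
    # Empty input falls back to the bracketed default;
    # otherwise compare on the first letter, uppercased.
    if answer == "":
        answer = re_default_answer.search(prompt).group(1)
    return answer[:1].upper()
```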
+###############################################################################
+
+# Our reject is not really a reject, but an unaccept, but since a) the
+# code for that is non-trivial (reopen bugs, unannounce etc.), b) this
+ # should be extremely rare, for now we'll go with whining at our admin
+# folks...
+
+def do_reject ():
+ Subst["__REJECTOR_ADDRESS__"] = Cnf["Dinstall::MyEmailAddress"];
+ Subst["__REJECT_MESSAGE__"] = reject_message;
+ Subst["__CC__"] = "Cc: " + Cnf["Dinstall::MyEmailAddress"];
+ reject_mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/kelly.unaccept");
+
+ # Write the rejection email out as the <foo>.reason file
+ reason_filename = os.path.basename(pkg.changes_file[:-8]) + ".reason";
+ reject_filename = Cnf["Dir::Queue::Reject"] + '/' + reason_filename;
+ # If we fail here someone is probably trying to exploit the race
+ # so let's just raise an exception ...
+ if os.path.exists(reject_filename):
+ os.unlink(reject_filename);
+ fd = os.open(reject_filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
+ os.write(fd, reject_mail_message);
+ os.close(fd);
+
+ utils.send_mail(reject_mail_message);
+ Logger.log(["unaccepted", pkg.changes_file]);
+
+###############################################################################
+
+def install ():
+ global install_count, install_bytes;
+
+ print "Installing."
+
+ Logger.log(["installing changes",pkg.changes_file]);
+
+ # Begin a transaction; if we bomb out anywhere between here and the COMMIT WORK below, the DB will not be changed.
+ projectB.query("BEGIN WORK");
+
+ # Add the .dsc file to the DB
+ for file in files.keys():
+ if files[file]["type"] == "dsc":
+ package = dsc["source"]
+ version = dsc["version"] # NB: not files[file]["version"], that has no epoch
+ maintainer = dsc["maintainer"]
+ maintainer = maintainer.replace("'", "\\'")
+ maintainer_id = db_access.get_or_set_maintainer_id(maintainer);
+ fingerprint_id = db_access.get_or_set_fingerprint_id(dsc["fingerprint"]);
+ install_date = time.strftime("%Y-%m-%d");
+ filename = files[file]["pool name"] + file;
+ dsc_component = files[file]["component"];
+ dsc_location_id = files[file]["location id"];
+ if not files[file].has_key("files id") or not files[file]["files id"]:
+ files[file]["files id"] = db_access.set_files_id (filename, files[file]["size"], files[file]["md5sum"], dsc_location_id)
+ projectB.query("INSERT INTO source (source, version, maintainer, file, install_date, sig_fpr) VALUES ('%s', '%s', %d, %d, '%s', %s)"
+ % (package, version, maintainer_id, files[file]["files id"], install_date, fingerprint_id));
+
+ for suite in changes["distribution"].keys():
+ suite_id = db_access.get_suite_id(suite);
+ projectB.query("INSERT INTO src_associations (suite, source) VALUES (%d, currval('source_id_seq'))" % (suite_id))
+
+ # Add the source files to the DB (files and dsc_files)
+ projectB.query("INSERT INTO dsc_files (source, file) VALUES (currval('source_id_seq'), %d)" % (files[file]["files id"]));
+ for dsc_file in dsc_files.keys():
+ filename = files[file]["pool name"] + dsc_file;
+ # If the .orig.tar.gz is already in the pool, its
+ # files id is stored in dsc_files by check_dsc().
+ files_id = dsc_files[dsc_file].get("files id", None);
+ if files_id == None:
+ files_id = db_access.get_files_id(filename, dsc_files[dsc_file]["size"], dsc_files[dsc_file]["md5sum"], dsc_location_id);
+ # FIXME: needs to check for -1/-2 and or handle exception
+ if files_id == None:
+ files_id = db_access.set_files_id (filename, dsc_files[dsc_file]["size"], dsc_files[dsc_file]["md5sum"], dsc_location_id);
+ projectB.query("INSERT INTO dsc_files (source, file) VALUES (currval('source_id_seq'), %d)" % (files_id));
+
+ # Add the .deb files to the DB
+ for file in files.keys():
+ if files[file]["type"] == "deb":
+ package = files[file]["package"]
+ version = files[file]["version"]
+ maintainer = files[file]["maintainer"]
+ maintainer = maintainer.replace("'", "\\'")
+ maintainer_id = db_access.get_or_set_maintainer_id(maintainer);
+ fingerprint_id = db_access.get_or_set_fingerprint_id(changes["fingerprint"]);
+ architecture = files[file]["architecture"]
+ architecture_id = db_access.get_architecture_id (architecture);
+ type = files[file]["dbtype"];
+ source = files[file]["source package"]
+ source_version = files[file]["source version"];
+ filename = files[file]["pool name"] + file;
+ if not files[file].has_key("location id") or not files[file]["location id"]:
+ files[file]["location id"] = db_access.get_location_id(Cnf["Dir::Pool"],files[file]["component"],utils.where_am_i());
+ if not files[file].has_key("files id") or not files[file]["files id"]:
+ files[file]["files id"] = db_access.set_files_id (filename, files[file]["size"], files[file]["md5sum"], files[file]["location id"])
+ source_id = db_access.get_source_id (source, source_version);
+ if source_id:
+ projectB.query("INSERT INTO binaries (package, version, maintainer, source, architecture, file, type, sig_fpr) VALUES ('%s', '%s', %d, %d, %d, %d, '%s', %d)"
+ % (package, version, maintainer_id, source_id, architecture_id, files[file]["files id"], type, fingerprint_id));
+ else:
+ projectB.query("INSERT INTO binaries (package, version, maintainer, architecture, file, type, sig_fpr) VALUES ('%s', '%s', %d, %d, %d, '%s', %d)"
+ % (package, version, maintainer_id, architecture_id, files[file]["files id"], type, fingerprint_id));
+ for suite in changes["distribution"].keys():
+ suite_id = db_access.get_suite_id(suite);
+ projectB.query("INSERT INTO bin_associations (suite, bin) VALUES (%d, currval('binaries_id_seq'))" % (suite_id));
+
+ # If the .orig.tar.gz is in a legacy directory we need to poolify
+ # it, so that apt-get source (and anything else that goes by the
+ # "Directory:" field in the Sources.gz file) works.
+ orig_tar_id = Katie.pkg.orig_tar_id;
+ orig_tar_location = Katie.pkg.orig_tar_location;
+ legacy_source_untouchable = Katie.pkg.legacy_source_untouchable;
+ if orig_tar_id and orig_tar_location == "legacy":
+ q = projectB.query("SELECT DISTINCT ON (f.id) l.path, f.filename, f.id as files_id, df.source, df.id as dsc_files_id, f.size, f.md5sum FROM files f, dsc_files df, location l WHERE df.source IN (SELECT source FROM dsc_files WHERE file = %s) AND f.id = df.file AND l.id = f.location AND (l.type = 'legacy' OR l.type = 'legacy-mixed')" % (orig_tar_id));
+ qd = q.dictresult();
+ for qid in qd:
+ # Is this an old upload superseded by a newer -sa upload? (See check_dsc() for details)
+ if legacy_source_untouchable.has_key(qid["files_id"]):
+ continue;
+ # First move the files to the new location
+ legacy_filename = qid["path"] + qid["filename"];
+ pool_location = utils.poolify (changes["source"], files[file]["component"]);
+ pool_filename = pool_location + os.path.basename(qid["filename"]);
+ destination = Cnf["Dir::Pool"] + pool_location
+ utils.move(legacy_filename, destination);
+ # Then update the DB's files table
+ q = projectB.query("UPDATE files SET filename = '%s', location = '%s' WHERE id = '%s'" % (pool_filename, dsc_location_id, qid["files_id"]));
+
+ # If this is a sourceful diff only upload that is moving non-legacy
+ # cross-component we need to copy the .orig.tar.gz into the new
+ # component too for the same reasons as above.
+ #
+ if changes["architecture"].has_key("source") and orig_tar_id and \
+ orig_tar_location != "legacy" and orig_tar_location != dsc_location_id:
+ q = projectB.query("SELECT l.path, f.filename, f.size, f.md5sum FROM files f, location l WHERE f.id = %s AND f.location = l.id" % (orig_tar_id));
+ ql = q.getresult()[0];
+ old_filename = ql[0] + ql[1];
+ file_size = ql[2];
+ file_md5sum = ql[3];
+ new_filename = utils.poolify(changes["source"], dsc_component) + os.path.basename(old_filename);
+ new_files_id = db_access.get_files_id(new_filename, file_size, file_md5sum, dsc_location_id);
+ if new_files_id == None:
+ utils.copy(old_filename, Cnf["Dir::Pool"] + new_filename);
+ new_files_id = db_access.set_files_id(new_filename, file_size, file_md5sum, dsc_location_id);
+ projectB.query("UPDATE dsc_files SET file = %s WHERE source = %s AND file = %s" % (new_files_id, source_id, orig_tar_id));
+
+ # Install the files into the pool
+ for file in files.keys():
+ destination = Cnf["Dir::Pool"] + files[file]["pool name"] + file;
+ utils.move(file, destination);
+ Logger.log(["installed", file, files[file]["type"], files[file]["size"], files[file]["architecture"]]);
+ install_bytes += float(files[file]["size"]);
+
+ # Copy the .changes file across for suites which need it.
+ copy_changes = {};
+ copy_katie = {};
+ for suite in changes["distribution"].keys():
+ if Cnf.has_key("Suite::%s::CopyChanges" % (suite)):
+ copy_changes[Cnf["Suite::%s::CopyChanges" % (suite)]] = "";
+ # and the .katie file...
+ if Cnf.has_key("Suite::%s::CopyKatie" % (suite)):
+ copy_katie[Cnf["Suite::%s::CopyKatie" % (suite)]] = "";
+ for dest in copy_changes.keys():
+ utils.copy(pkg.changes_file, Cnf["Dir::Root"] + dest);
+ for dest in copy_katie.keys():
+ utils.copy(Katie.pkg.changes_file[:-8]+".katie", dest);
+
+ projectB.query("COMMIT WORK");
+
+ # Move the .changes into the 'done' directory
+ utils.move (pkg.changes_file,
+ os.path.join(Cnf["Dir::Queue::Done"], os.path.basename(pkg.changes_file)));
+
+ # Remove the .katie file
+ os.unlink(Katie.pkg.changes_file[:-8]+".katie");
+
+ if changes["architecture"].has_key("source") and Urgency_Logger:
+ Urgency_Logger.log(dsc["source"], dsc["version"], changes["urgency"]);
+
+ # Undo the work done in katie.py(accept) to help auto-building
+ # from accepted.
+ projectB.query("BEGIN WORK");
+ for suite in changes["distribution"].keys():
+ if suite not in Cnf.ValueList("Dinstall::QueueBuildSuites"):
+ continue;
+ now_date = time.strftime("%Y-%m-%d %H:%M");
+ suite_id = db_access.get_suite_id(suite);
+ dest_dir = Cnf["Dir::QueueBuild"];
+ if Cnf.FindB("Dinstall::SecurityQueueBuild"):
+ dest_dir = os.path.join(dest_dir, suite);
+ for file in files.keys():
+ dest = os.path.join(dest_dir, file);
+ # Remove it from the list of packages for later processing by apt-ftparchive
+ projectB.query("UPDATE queue_build SET in_queue = 'f', last_used = '%s' WHERE filename = '%s' AND suite = %s" % (now_date, dest, suite_id));
+ if not Cnf.FindB("Dinstall::SecurityQueueBuild"):
+ # Update the symlink to point to the new location in the pool
+ pool_location = utils.poolify (changes["source"], files[file]["component"]);
+ src = os.path.join(Cnf["Dir::Pool"], pool_location, os.path.basename(file));
+ if os.path.islink(dest):
+ os.unlink(dest);
+ os.symlink(src, dest);
+ # Update last_used on any non-upload .orig.tar.gz symlink
+ if orig_tar_id:
+ # Determine the .orig.tar.gz file name
+ for dsc_file in dsc_files.keys():
+ if dsc_file.endswith(".orig.tar.gz"):
+ orig_tar_gz = os.path.join(dest_dir, dsc_file);
+ # Remove it from the list of packages for later processing by apt-ftparchive
+ projectB.query("UPDATE queue_build SET in_queue = 'f', last_used = '%s' WHERE filename = '%s' AND suite = %s" % (now_date, orig_tar_gz, suite_id));
+ projectB.query("COMMIT WORK");
+
+ # Finally...
+ install_count += 1;
+
+################################################################################
+
+def stable_install (summary, short_summary):
+ global install_count;
+
+ print "Installing to stable.";
+
+ # Begin a transaction; if we bomb out anywhere between here and
+ # the COMMIT WORK below, the DB won't be changed.
+ projectB.query("BEGIN WORK");
+
+ # Add the source to stable (and remove it from proposed-updates)
+ for file in files.keys():
+ if files[file]["type"] == "dsc":
+ package = dsc["source"];
+ version = dsc["version"]; # NB: not files[file]["version"], that has no epoch
+ q = projectB.query("SELECT id FROM source WHERE source = '%s' AND version = '%s'" % (package, version))
+ ql = q.getresult();
+ if not ql:
+ utils.fubar("[INTERNAL ERROR] couldn't find '%s' (%s) in source table." % (package, version));
+ source_id = ql[0][0];
+ suite_id = db_access.get_suite_id('proposed-updates');
+ projectB.query("DELETE FROM src_associations WHERE suite = '%s' AND source = '%s'" % (suite_id, source_id));
+ suite_id = db_access.get_suite_id('stable');
+ projectB.query("INSERT INTO src_associations (suite, source) VALUES ('%s', '%s')" % (suite_id, source_id));
+
+ # Add the binaries to stable (and remove them from proposed-updates)
+ for file in files.keys():
+ if files[file]["type"] == "deb":
+ binNMU = 0
+ package = files[file]["package"];
+ version = files[file]["version"];
+ architecture = files[file]["architecture"];
+ q = projectB.query("SELECT b.id FROM binaries b, architecture a WHERE b.package = '%s' AND b.version = '%s' AND (a.arch_string = '%s' OR a.arch_string = 'all') AND b.architecture = a.id" % (package, version, architecture));
+ ql = q.getresult();
+ if not ql:
+ suite_id = db_access.get_suite_id('proposed-updates');
+ que = "SELECT b.version FROM binaries b JOIN bin_associations ba ON (b.id = ba.bin) JOIN suite su ON (ba.suite = su.id) WHERE b.package = '%s' AND (ba.suite = '%s')" % (package, suite_id);
+ q = projectB.query(que)
+
+ # Reduce the query results to a list of version numbers
+ ql = map(lambda x: x[0], q.getresult());
+ if not ql:
+ utils.fubar("[INTERNAL ERROR] couldn't find '%s' (%s for %s architecture) in binaries table." % (package, version, architecture));
+ else:
+ for x in ql:
+ if re.match(r"^%s((\.0)?\.|\+b)\d+$" % re.escape(version), x):
+ binNMU = 1
+ break
+ if not binNMU:
+ binary_id = ql[0][0];
+ suite_id = db_access.get_suite_id('proposed-updates');
+ projectB.query("DELETE FROM bin_associations WHERE suite = '%s' AND bin = '%s'" % (suite_id, binary_id));
+ suite_id = db_access.get_suite_id('stable');
+ projectB.query("INSERT INTO bin_associations (suite, bin) VALUES ('%s', '%s')" % (suite_id, binary_id));
+ else:
+ del files[file]
+
+ projectB.query("COMMIT WORK");
+
+ utils.move (pkg.changes_file, Cnf["Dir::Morgue"] + '/katie/' + os.path.basename(pkg.changes_file));
+
+ ## Update the Stable ChangeLog file
+ new_changelog_filename = Cnf["Dir::Root"] + Cnf["Suite::Stable::ChangeLogBase"] + ".ChangeLog";
+ changelog_filename = Cnf["Dir::Root"] + Cnf["Suite::Stable::ChangeLogBase"] + "ChangeLog";
+ if os.path.exists(new_changelog_filename):
+ os.unlink (new_changelog_filename);
+
+ new_changelog = utils.open_file(new_changelog_filename, 'w');
+ for file in files.keys():
+ if files[file]["type"] == "deb":
+ new_changelog.write("stable/%s/binary-%s/%s\n" % (files[file]["component"], files[file]["architecture"], file));
+ elif utils.re_issource.match(file):
+ new_changelog.write("stable/%s/source/%s\n" % (files[file]["component"], file));
+ else:
+ new_changelog.write("%s\n" % (file));
+ chop_changes = katie.re_fdnic.sub("\n", changes["changes"]);
+ new_changelog.write(chop_changes + '\n\n');
+ if os.access(changelog_filename, os.R_OK) != 0:
+ changelog = utils.open_file(changelog_filename);
+ new_changelog.write(changelog.read());
+ new_changelog.close();
+ if os.access(changelog_filename, os.R_OK) != 0:
+ os.unlink(changelog_filename);
+ utils.move(new_changelog_filename, changelog_filename);
+
+ install_count += 1;
+
+ if not Options["No-Mail"] and changes["architecture"].has_key("source"):
+ Subst["__SUITE__"] = " into stable";
+ Subst["__SUMMARY__"] = summary;
+ mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/kelly.installed");
+ utils.send_mail(mail_message);
+ Katie.announce(short_summary, 1)
+
+ # Finally remove the .katie file
+ katie_file = os.path.join(Cnf["Suite::Proposed-Updates::CopyKatie"], os.path.basename(Katie.pkg.changes_file[:-8]+".katie"));
+ os.unlink(katie_file);
+
+################################################################################
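The binNMU check in stable_install above hinges on a version-suffix convention: a binary-only rebuild carries the source version plus a `+bN` (or, historically, `.0.N`) suffix. A standalone modern-Python sketch of that test (illustrative only, not the kelly code itself):

```python
import re

def is_binnmu(base_version, candidate):
    """True if candidate looks like a binary-only rebuild of base_version,
    i.e. the base version plus a '+bN' or '(.0).N' suffix."""
    pattern = r"^%s((\.0)?\.\d+|\+b\d+)$" % re.escape(base_version)
    return re.match(pattern, candidate) is not None

print(is_binnmu("1.2-3", "1.2-3+b1"))   # True
print(is_binnmu("1.2-3", "1.2-3.0.1"))  # True
print(is_binnmu("1.2-3", "1.2-3"))      # False
```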
+
+def process_it (changes_file):
+ global reject_message;
+
+ reject_message = "";
+
+ # Absolutize the filename to avoid the requirement of being in the
+ # same directory as the .changes file.
+ pkg.changes_file = os.path.abspath(changes_file);
+
+ # And since handling of installs to stable munges the CWD,
+ # save and restore it.
+ pkg.directory = os.getcwd();
+
+ if installing_to_stable:
+ old = Katie.pkg.changes_file;
+ Katie.pkg.changes_file = os.path.basename(old);
+ os.chdir(Cnf["Suite::Proposed-Updates::CopyKatie"]);
+
+ Katie.init_vars();
+ Katie.update_vars();
+ Katie.update_subst();
+
+ if installing_to_stable:
+ Katie.pkg.changes_file = old;
+
+ check();
+ action();
+
+ # Restore CWD
+ os.chdir(pkg.directory);
+
+###############################################################################
+
+def main():
+ global projectB, Logger, Urgency_Logger, installing_to_stable;
+
+ changes_files = init();
+
+ # -n/--dry-run invalidates some other options which would involve things happening
+ if Options["No-Action"]:
+ Options["Automatic"] = "";
+
+ # Check that we aren't going to clash with the daily cron job
+
+ if not Options["No-Action"] and os.path.exists("%s/Archive_Maintenance_In_Progress" % (Cnf["Dir::Root"])) and not Options["No-Lock"]:
+ utils.fubar("Archive maintenance in progress. Try again later.");
+
+ # If running from within proposed-updates, assume an install to stable
+ if os.getcwd().find('proposed-updates') != -1:
+ installing_to_stable = 1;
+
+ # Obtain lock if not in no-action mode and initialize the log
+ if not Options["No-Action"]:
+ lock_fd = os.open(Cnf["Dinstall::LockFile"], os.O_RDWR | os.O_CREAT);
+ try:
+ fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB);
+ except IOError, e:
+ if errno.errorcode[e.errno] == 'EACCES' or errno.errorcode[e.errno] == 'EAGAIN':
+ utils.fubar("Couldn't obtain lock; assuming another kelly is already running.");
+ else:
+ raise;
+ Logger = Katie.Logger = logging.Logger(Cnf, "kelly");
+ if not installing_to_stable and Cnf.get("Dir::UrgencyLog"):
+ Urgency_Logger = Urgency_Log(Cnf);
+
+ # Initialize the substitution template mapping global
+ bcc = "X-Katie: %s" % (kelly_version);
+ if Cnf.has_key("Dinstall::Bcc"):
+ Subst["__BCC__"] = bcc + "\nBcc: %s" % (Cnf["Dinstall::Bcc"]);
+ else:
+ Subst["__BCC__"] = bcc;
+
+ # Sort the .changes files so that we process sourceful ones first
+ changes_files.sort(utils.changes_compare);
+
+ # Process the changes files
+ for changes_file in changes_files:
+ print "\n" + changes_file;
+ process_it (changes_file);
+
+ if install_count:
+ sets = "set"
+ if install_count > 1:
+ sets = "sets"
+ sys.stderr.write("Installed %d package %s, %s.\n" % (install_count, sets, utils.size_type(int(install_bytes))));
+ Logger.log(["total",install_count,install_bytes]);
+
+ if not Options["No-Action"]:
+ Logger.close();
+ if Urgency_Logger:
+ Urgency_Logger.close();
+
+###############################################################################
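main() guards against concurrent runs by taking an exclusive, non-blocking fcntl lock on Dinstall::LockFile. The same pattern in isolation (a modern-Python sketch; the helper name and path are illustrative):

```python
import errno, fcntl, os, tempfile

def try_lock(path):
    """Take an exclusive non-blocking lock; return None if another
    process already holds it, else the open file descriptor."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError as e:
        if e.errno in (errno.EACCES, errno.EAGAIN):
            os.close(fd)
            return None  # another instance is running
        raise
    return fd

lock_path = os.path.join(tempfile.mkdtemp(), "kelly.lock")
print(try_lock(lock_path) is not None)  # True
```

Note that POSIX record locks are per-process, which is why the original bails out with fubar rather than retrying: a second kelly in another process gets EAGAIN immediately.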
+
+if __name__ == '__main__':
+ main();
--- /dev/null
+#!/usr/bin/env python
+
+# Handles NEW and BYHAND packages
+# Copyright (C) 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
+# $Id: lisa,v 1.31 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# 23:12|<aj> I will not hush!
+# 23:12|<elmo> :>
+# 23:12|<aj> Where there is injustice in the world, I shall be there!
+# 23:13|<aj> I shall not be silenced!
+# 23:13|<aj> The world shall know!
+# 23:13|<aj> The world *must* know!
+# 23:13|<elmo> oh dear, he's gone back to powerpuff girls... ;-)
+# 23:13|<aj> yay powerpuff girls!!
+# 23:13|<aj> buttercup's my favourite, who's yours?
+# 23:14|<aj> you're backing away from the keyboard right now aren't you?
+# 23:14|<aj> *AREN'T YOU*?!
+# 23:15|<aj> I will not be treated like this.
+# 23:15|<aj> I shall have my revenge.
+# 23:15|<aj> I SHALL!!!
+
+################################################################################
+
+import copy, errno, os, readline, stat, sys, time;
+import apt_pkg, apt_inst;
+import db_access, fernanda, katie, logging, utils;
+
+# Globals
+lisa_version = "$Revision: 1.31 $";
+
+Cnf = None;
+Options = None;
+Katie = None;
+projectB = None;
+Logger = None;
+
+Priorities = None;
+Sections = None;
+
+reject_message = "";
+
+################################################################################
+################################################################################
+################################################################################
+
+def reject (str, prefix="Rejected: "):
+ global reject_message;
+ if str:
+ reject_message += prefix + str + "\n";
+
+def recheck():
+ global reject_message;
+ files = Katie.pkg.files;
+ reject_message = "";
+
+ for file in files.keys():
+ # The .orig.tar.gz can disappear out from under us if it's a
+ # duplicate of one in the archive.
+ if not files.has_key(file):
+ continue;
+ # Check that the source still exists
+ if files[file]["type"] == "deb":
+ source_version = files[file]["source version"];
+ source_package = files[file]["source package"];
+ if not Katie.pkg.changes["architecture"].has_key("source") \
+ and not Katie.source_exists(source_package, source_version, Katie.pkg.changes["distribution"].keys()):
+ source_epochless_version = utils.re_no_epoch.sub('', source_version);
+ dsc_filename = "%s_%s.dsc" % (source_package, source_epochless_version);
+ if not os.path.exists(Cnf["Dir::Queue::Accepted"] + '/' + dsc_filename):
+ reject("no source found for %s %s (%s)." % (source_package, source_version, file));
+
+ # Version and file overwrite checks
+ if files[file]["type"] == "deb":
+ reject(Katie.check_binary_against_db(file));
+ elif files[file]["type"] == "dsc":
+ reject(Katie.check_source_against_db(file));
+ (reject_msg, is_in_incoming) = Katie.check_dsc_against_db(file);
+ reject(reject_msg);
+
+ if reject_message:
+ answer = "XXX";
+ if Options["No-Action"] or Options["Automatic"]:
+ answer = 'S'
+
+ print "REJECT\n" + reject_message,;
+ prompt = "[R]eject, Skip, Quit ?";
+
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.match(prompt);
+ if answer == "":
+ answer = m.group(1);
+ answer = answer[:1].upper();
+
+ if answer == 'R':
+ Katie.do_reject(0, reject_message);
+ os.unlink(Katie.pkg.changes_file[:-8]+".katie");
+ return 0;
+ elif answer == 'S':
+ return 0;
+ elif answer == 'Q':
+ sys.exit(0);
+
+ return 1;
+
+################################################################################
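recheck's prompt loop follows a convention used throughout these tools: the bracketed letter in a prompt such as "[R]eject, Skip, Quit ?" is the default, and katie.re_default_answer extracts it when the operator just presses Enter. A self-contained sketch of that convention (the regex and helper here are illustrative, not the actual katie ones):

```python
import re

# The bracketed letter marks the default, e.g. 'R' in "[R]eject, Skip, Quit ?".
re_default_answer = re.compile(r"\[(\w)\]")

def resolve_answer(prompt, raw):
    """Return the chosen one-letter answer, upper-cased, falling back
    to the prompt's bracketed default on empty input."""
    answer = raw.strip()
    if answer == "":
        answer = re_default_answer.search(prompt).group(1)
    return answer[:1].upper()

print(resolve_answer("[R]eject, Skip, Quit ?", ""))      # R
print(resolve_answer("[R]eject, Skip, Quit ?", "quit"))  # Q
```

The real loops additionally re-prompt until the letter appears in the prompt string; that validation is omitted here.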
+
+def determine_new (changes, files):
+ new = {};
+
+ # Build up a list of potentially new things
+ for file in files.keys():
+ f = files[file];
+ # Skip byhand elements
+ if f["type"] == "byhand":
+ continue;
+ pkg = f["package"];
+ priority = f["priority"];
+ section = f["section"];
+ # FIXME: unhardcode
+ if section == "non-US/main":
+ section = "non-US";
+ type = get_type(f);
+ component = f["component"];
+
+ if type == "dsc":
+ priority = "source";
+ if not new.has_key(pkg):
+ new[pkg] = {};
+ new[pkg]["priority"] = priority;
+ new[pkg]["section"] = section;
+ new[pkg]["type"] = type;
+ new[pkg]["component"] = component;
+ new[pkg]["files"] = [];
+ else:
+ old_type = new[pkg]["type"];
+ if old_type != type:
+ # source gets trumped by deb or udeb
+ if old_type == "dsc":
+ new[pkg]["priority"] = priority;
+ new[pkg]["section"] = section;
+ new[pkg]["type"] = type;
+ new[pkg]["component"] = component;
+ new[pkg]["files"].append(file);
+ if f.has_key("othercomponents"):
+ new[pkg]["othercomponents"] = f["othercomponents"];
+
+ for suite in changes["suite"].keys():
+ suite_id = db_access.get_suite_id(suite);
+ for pkg in new.keys():
+ component_id = db_access.get_component_id(new[pkg]["component"]);
+ type_id = db_access.get_override_type_id(new[pkg]["type"]);
+ q = projectB.query("SELECT package FROM override WHERE package = '%s' AND suite = %s AND component = %s AND type = %s" % (pkg, suite_id, component_id, type_id));
+ ql = q.getresult();
+ if ql:
+ for file in new[pkg]["files"]:
+ if files[file].has_key("new"):
+ del files[file]["new"];
+ del new[pkg];
+
+ if changes["suite"].has_key("stable"):
+ print "WARNING: overrides will be added for stable!";
+ if changes["suite"].has_key("oldstable"):
+ print "WARNING: overrides will be added for OLDstable!";
+ for pkg in new.keys():
+ if new[pkg].has_key("othercomponents"):
+ print "WARNING: %s already present in %s distribution." % (pkg, new[pkg]["othercomponents"]);
+
+ return new;
+
+################################################################################
+
+def indiv_sg_compare (a, b):
+ """Sort by version (newest first), 'have source', and
+ finally by filename."""
+ # Sort by source version
+ q = apt_pkg.VersionCompare(a["version"], b["version"]);
+ if q:
+ return -q;
+
+ # Sort by 'have source'
+ a_has_source = a["architecture"].get("source");
+ b_has_source = b["architecture"].get("source");
+ if a_has_source and not b_has_source:
+ return -1;
+ elif b_has_source and not a_has_source:
+ return 1;
+
+ return cmp(a["filename"], b["filename"]);
+
+############################################################
+
+def sg_compare (a, b):
+ """Sort by have note, time of oldest upload."""
+ a = a[1];
+ b = b[1];
+ # Sort by have note
+ a_note_state = a["note_state"];
+ b_note_state = b["note_state"];
+ if a_note_state < b_note_state:
+ return -1;
+ elif a_note_state > b_note_state:
+ return 1;
+
+ # Sort by time of oldest upload
+ return cmp(a["oldest"], b["oldest"]);
+
+def sort_changes(changes_files):
+ """Sort into source groups, then sort each source group by version,
+ have source, filename. Finally, sort the source groups by have
+ note, time of oldest upload of each source upload."""
+ if len(changes_files) == 1:
+ return changes_files;
+
+ sorted_list = [];
+ cache = {};
+ # Read in all the .changes files
+ for filename in changes_files:
+ try:
+ Katie.pkg.changes_file = filename;
+ Katie.init_vars();
+ Katie.update_vars();
+ cache[filename] = copy.copy(Katie.pkg.changes);
+ cache[filename]["filename"] = filename;
+ except:
+ sorted_list.append(filename);
+ break;
+ # Divide the .changes into per-source groups
+ per_source = {};
+ for filename in cache.keys():
+ source = cache[filename]["source"];
+ if not per_source.has_key(source):
+ per_source[source] = {};
+ per_source[source]["list"] = [];
+ per_source[source]["list"].append(cache[filename]);
+ # Determine oldest time and have note status for each source group
+ for source in per_source.keys():
+ source_list = per_source[source]["list"];
+ first = source_list[0];
+ oldest = os.stat(first["filename"])[stat.ST_MTIME];
+ have_note = 0;
+ for d in per_source[source]["list"]:
+ mtime = os.stat(d["filename"])[stat.ST_MTIME];
+ if mtime < oldest:
+ oldest = mtime;
+ have_note += (d.has_key("lisa note"));
+ per_source[source]["oldest"] = oldest;
+ if not have_note:
+ per_source[source]["note_state"] = 0; # none
+ elif have_note < len(source_list):
+ per_source[source]["note_state"] = 1; # some
+ else:
+ per_source[source]["note_state"] = 2; # all
+ per_source[source]["list"].sort(indiv_sg_compare);
+ per_source_items = per_source.items();
+ per_source_items.sort(sg_compare);
+ for i in per_source_items:
+ for j in i[1]["list"]:
+ sorted_list.append(j["filename"]);
+ return sorted_list;
+
+################################################################################
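sort_changes orders the per-source groups by note state and then by the oldest upload time, via the cmp-style sg_compare above. With modern Python the same ordering is expressed more directly as a sort key (a sketch; the field names follow the dictionaries built above):

```python
def group_key(item):
    # item is a (source, info) pair; order groups by note state,
    # then by the mtime of the oldest upload in the group.
    info = item[1]
    return (info["note_state"], info["oldest"])

groups = [("b", {"note_state": 2, "oldest": 10}),
          ("a", {"note_state": 0, "oldest": 99}),
          ("c", {"note_state": 0, "oldest": 5})]
groups.sort(key=group_key)
print([g[0] for g in groups])  # ['c', 'a', 'b']
```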
+
+class Section_Completer:
+ def __init__ (self):
+ self.sections = [];
+ q = projectB.query("SELECT section FROM section");
+ for i in q.getresult():
+ self.sections.append(i[0]);
+
+ def complete(self, text, state):
+ if state == 0:
+ self.matches = [];
+ n = len(text);
+ for word in self.sections:
+ if word[:n] == text:
+ self.matches.append(word);
+ try:
+ return self.matches[state]
+ except IndexError:
+ return None
+
+############################################################
+
+class Priority_Completer:
+ def __init__ (self):
+ self.priorities = [];
+ q = projectB.query("SELECT priority FROM priority");
+ for i in q.getresult():
+ self.priorities.append(i[0]);
+
+ def complete(self, text, state):
+ if state == 0:
+ self.matches = [];
+ n = len(text);
+ for word in self.priorities:
+ if word[:n] == text:
+ self.matches.append(word);
+ try:
+ return self.matches[state]
+ except IndexError:
+ return None
+
+################################################################################
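Both completer classes implement the standard readline completer protocol: on state 0, compute the full match list for the typed prefix; on each subsequent call return matches[state] until an IndexError is converted to None. The same protocol reduced to one generic class (a sketch, not part of lisa):

```python
class ListCompleter:
    """readline-style prefix completer over a fixed word list."""
    def __init__(self, words):
        self.words = words

    def complete(self, text, state):
        if state == 0:
            # First call for this prefix: build the match list.
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            return self.matches[state]
        except IndexError:
            return None  # tells readline there are no more matches

c = ListCompleter(["admin", "base", "devel"])
print(c.complete("a", 0))  # 'admin'
print(c.complete("a", 1))  # None
```

It would be wired up exactly as in init() below: readline.set_completer(c.complete) followed by readline.parse_and_bind("tab: complete").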
+
+def check_valid (new):
+ for pkg in new.keys():
+ section = new[pkg]["section"];
+ priority = new[pkg]["priority"];
+ type = new[pkg]["type"];
+ new[pkg]["section id"] = db_access.get_section_id(section);
+ new[pkg]["priority id"] = db_access.get_priority_id(new[pkg]["priority"]);
+ # Sanity checks
+ if (section == "debian-installer" and type != "udeb") or \
+ (section != "debian-installer" and type == "udeb"):
+ new[pkg]["section id"] = -1;
+ if (priority == "source" and type != "dsc") or \
+ (priority != "source" and type == "dsc"):
+ new[pkg]["priority id"] = -1;
+
+################################################################################
+
+def print_new (new, indexed, file=sys.stdout):
+ check_valid(new);
+ broken = 0;
+ index = 0;
+ for pkg in new.keys():
+ index += 1;
+ section = new[pkg]["section"];
+ priority = new[pkg]["priority"];
+ if new[pkg]["section id"] == -1:
+ section += "[!]";
+ broken = 1;
+ if new[pkg]["priority id"] == -1:
+ priority += "[!]";
+ broken = 1;
+ if indexed:
+ line = "(%s): %-20s %-20s %-20s" % (index, pkg, priority, section);
+ else:
+ line = "%-20s %-20s %-20s" % (pkg, priority, section);
+ line = line.strip()+'\n';
+ file.write(line);
+ note = Katie.pkg.changes.get("lisa note");
+ if note:
+ print "*"*75;
+ print note;
+ print "*"*75;
+ return broken, note;
+
+################################################################################
+
+def get_type (f):
+ # Determine the type
+ if f.has_key("dbtype"):
+ type = f["dbtype"];
+ elif f["type"] == "orig.tar.gz" or f["type"] == "tar.gz" or f["type"] == "diff.gz" or f["type"] == "dsc":
+ type = "dsc";
+ else:
+ utils.fubar("invalid type (%s) for new. Dazed, confused and sure as heck not continuing." % (f["type"]));
+
+ # Validate the override type
+ type_id = db_access.get_override_type_id(type);
+ if type_id == -1:
+ utils.fubar("invalid type (%s) for new. Say wha?" % (type));
+
+ return type;
+
+################################################################################
+
+def index_range (index):
+ if index == 1:
+ return "1";
+ else:
+ return "1-%s" % (index);
+
+################################################################################
+################################################################################
+
+def edit_new (new):
+ # Write the current data to a temporary file
+ temp_filename = utils.temp_filename();
+ temp_file = utils.open_file(temp_filename, 'w');
+ print_new (new, 0, temp_file);
+ temp_file.close();
+ # Spawn an editor on that file
+ editor = os.environ.get("EDITOR","vi")
+ result = os.system("%s %s" % (editor, temp_filename))
+ if result != 0:
+ utils.fubar ("%s invocation failed for %s." % (editor, temp_filename), result)
+ # Read the edited data back in
+ temp_file = utils.open_file(temp_filename);
+ lines = temp_file.readlines();
+ temp_file.close();
+ os.unlink(temp_filename);
+ # Parse the new data
+ for line in lines:
+ line = line.strip();
+ if line == "":
+ continue;
+ s = line.split();
+ # Pad the list if necessary
+ s[len(s):3] = [None] * (3-len(s));
+ (pkg, priority, section) = s[:3];
+ if not new.has_key(pkg):
+ utils.warn("Ignoring unknown package '%s'" % (pkg));
+ else:
+ # Strip off any invalid markers; print_new will re-add them.
+ if section.endswith("[!]"):
+ section = section[:-3];
+ if priority.endswith("[!]"):
+ priority = priority[:-3];
+ for file in new[pkg]["files"]:
+ Katie.pkg.files[file]["section"] = section;
+ Katie.pkg.files[file]["priority"] = priority;
+ new[pkg]["section"] = section;
+ new[pkg]["priority"] = priority;
+
+################################################################################
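edit_new pads each whitespace-split line to exactly three fields with a slice assignment, so a line that omits priority or section yields None for the missing entries instead of raising on unpacking. That idiom in isolation (modern-Python sketch; the helper name is illustrative):

```python
def pad3(line):
    """Split a line into (pkg, priority, section), padding with None."""
    s = line.split()
    s[len(s):3] = [None] * (3 - len(s))  # no-op once there are 3+ fields
    return s[:3]

print(pad3("pkg optional utils"))  # ['pkg', 'optional', 'utils']
print(pad3("pkg"))                 # ['pkg', None, None]
```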
+
+def edit_index (new, index):
+ priority = new[index]["priority"]
+ section = new[index]["section"]
+ type = new[index]["type"];
+ done = 0
+ while not done:
+ print "\t".join([index, priority, section]);
+
+ answer = "XXX";
+ if type != "dsc":
+ prompt = "[B]oth, Priority, Section, Done ? ";
+ else:
+ prompt = "[S]ection, Done ? ";
+ edit_priority = edit_section = 0;
+
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.match(prompt)
+ if answer == "":
+ answer = m.group(1)
+ answer = answer[:1].upper()
+
+ if answer == 'P':
+ edit_priority = 1;
+ elif answer == 'S':
+ edit_section = 1;
+ elif answer == 'B':
+ edit_priority = edit_section = 1;
+ elif answer == 'D':
+ done = 1;
+
+ # Edit the priority
+ if edit_priority:
+ readline.set_completer(Priorities.complete);
+ got_priority = 0;
+ while not got_priority:
+ new_priority = utils.our_raw_input("New priority: ").strip();
+ if new_priority not in Priorities.priorities:
+ print "E: '%s' is not a valid priority, try again." % (new_priority);
+ else:
+ got_priority = 1;
+ priority = new_priority;
+
+ # Edit the section
+ if edit_section:
+ readline.set_completer(Sections.complete);
+ got_section = 0;
+ while not got_section:
+ new_section = utils.our_raw_input("New section: ").strip();
+ if new_section not in Sections.sections:
+ print "E: '%s' is not a valid section, try again." % (new_section);
+ else:
+ got_section = 1;
+ section = new_section;
+
+ # Reset the readline completer
+ readline.set_completer(None);
+
+ for file in new[index]["files"]:
+ Katie.pkg.files[file]["section"] = section;
+ Katie.pkg.files[file]["priority"] = priority;
+ new[index]["priority"] = priority;
+ new[index]["section"] = section;
+ return new;
+
+################################################################################
+
+def edit_overrides (new):
+ print;
+ done = 0
+ while not done:
+ print_new (new, 1);
+ new_index = {};
+ index = 0;
+ for i in new.keys():
+ index += 1;
+ new_index[index] = i;
+
+ prompt = "(%s) edit override <n>, Editor, Done ? " % (index_range(index));
+
+ got_answer = 0
+ while not got_answer:
+ answer = utils.our_raw_input(prompt);
+ if not utils.str_isnum(answer):
+ answer = answer[:1].upper();
+ if answer == "E" or answer == "D":
+ got_answer = 1;
+ elif katie.re_isanum.match (answer):
+ answer = int(answer);
+ if (answer < 1) or (answer > index):
+ print "%s is not a valid index (%s). Please retry." % (answer, index_range(index));
+ else:
+ got_answer = 1;
+
+ if answer == 'E':
+ edit_new(new);
+ elif answer == 'D':
+ done = 1;
+ else:
+ edit_index (new, new_index[answer]);
+
+ return new;
+
+################################################################################
+
+def edit_note(note):
+ # Write the current data to a temporary file
+ temp_filename = utils.temp_filename();
+ temp_file = utils.open_file(temp_filename, 'w');
+ temp_file.write(note);
+ temp_file.close();
+ editor = os.environ.get("EDITOR","vi")
+ answer = 'E';
+ while answer == 'E':
+ os.system("%s %s" % (editor, temp_filename))
+ temp_file = utils.open_file(temp_filename);
+ note = temp_file.read().rstrip();
+ temp_file.close();
+ print "Note:";
+ print utils.prefix_multi_line_string(note," ");
+ prompt = "[D]one, Edit, Abandon, Quit ?"
+ answer = "XXX";
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.search(prompt);
+ if answer == "":
+ answer = m.group(1);
+ answer = answer[:1].upper();
+ os.unlink(temp_filename);
+ if answer == 'A':
+ return;
+ elif answer == 'Q':
+ sys.exit(0);
+ Katie.pkg.changes["lisa note"] = note;
+ Katie.dump_vars(Cnf["Dir::Queue::New"]);
+
+################################################################################
+
+def check_pkg ():
+ try:
+ less_fd = os.popen("less -R -", 'w', 0);
+ stdout_fd = sys.stdout;
+ try:
+ sys.stdout = less_fd;
+ fernanda.display_changes(Katie.pkg.changes_file);
+ files = Katie.pkg.files;
+ for file in files.keys():
+ if files[file].has_key("new"):
+ type = files[file]["type"];
+ if type == "deb":
+ fernanda.check_deb(file);
+ elif type == "dsc":
+ fernanda.check_dsc(file);
+ finally:
+ sys.stdout = stdout_fd;
+ except IOError, e:
+ if errno.errorcode[e.errno] == 'EPIPE':
+ utils.warn("[fernanda] Caught EPIPE; skipping.");
+ pass;
+ else:
+ raise;
+ except KeyboardInterrupt:
+ utils.warn("[fernanda] Caught C-c; skipping.");
+ pass;
+
+################################################################################
+
+## FIXME: horribly Debian specific
+
+def do_bxa_notification():
+ files = Katie.pkg.files;
+ summary = "";
+ for file in files.keys():
+ if files[file]["type"] == "deb":
+ control = apt_pkg.ParseSection(apt_inst.debExtractControl(utils.open_file(file)));
+ summary += "\n";
+ summary += "Package: %s\n" % (control.Find("Package"));
+ summary += "Description: %s\n" % (control.Find("Description"));
+ Katie.Subst["__BINARY_DESCRIPTIONS__"] = summary;
+ bxa_mail = utils.TemplateSubst(Katie.Subst,Cnf["Dir::Templates"]+"/lisa.bxa_notification");
+ utils.send_mail(bxa_mail);
+
+################################################################################
+
+def add_overrides (new):
+ changes = Katie.pkg.changes;
+ files = Katie.pkg.files;
+
+ projectB.query("BEGIN WORK");
+ for suite in changes["suite"].keys():
+ suite_id = db_access.get_suite_id(suite);
+ for pkg in new.keys():
+ component_id = db_access.get_component_id(new[pkg]["component"]);
+ type_id = db_access.get_override_type_id(new[pkg]["type"]);
+ priority_id = new[pkg]["priority id"];
+ section_id = new[pkg]["section id"];
+ projectB.query("INSERT INTO override (suite, component, type, package, priority, section, maintainer) VALUES (%s, %s, %s, '%s', %s, %s, '')" % (suite_id, component_id, type_id, pkg, priority_id, section_id));
+ for file in new[pkg]["files"]:
+ if files[file].has_key("new"):
+ del files[file]["new"];
+ del new[pkg];
+
+ projectB.query("COMMIT WORK");
+
+ if Cnf.FindB("Dinstall::BXANotify"):
+ do_bxa_notification();
+
+################################################################################
+
+def prod_maintainer ():
+ # Here we prepare an editor and get them ready to prod...
+ temp_filename = utils.temp_filename();
+ editor = os.environ.get("EDITOR","vi")
+ answer = 'E';
+ while answer == 'E':
+ os.system("%s %s" % (editor, temp_filename))
+ file = utils.open_file(temp_filename);
+ prod_message = "".join(file.readlines());
+ file.close();
+ print "Prod message:";
+ print utils.prefix_multi_line_string(prod_message," ",include_blank_lines=1);
+ prompt = "[P]rod, Edit, Abandon, Quit ?"
+ answer = "XXX";
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.search(prompt);
+ if answer == "":
+ answer = m.group(1);
+ answer = answer[:1].upper();
+ os.unlink(temp_filename);
+ if answer == 'A':
+ return;
+ elif answer == 'Q':
+ sys.exit(0);
+ # Otherwise, do the prodding...
+ user_email_address = utils.whoami() + " <%s>" % (
+ Cnf["Dinstall::MyAdminAddress"]);
+
+ Subst = Katie.Subst;
+
+ Subst["__FROM_ADDRESS__"] = user_email_address;
+ Subst["__PROD_MESSAGE__"] = prod_message;
+ Subst["__CC__"] = "Cc: " + Cnf["Dinstall::MyEmailAddress"];
+
+ prod_mail_message = utils.TemplateSubst(
+ Subst,Cnf["Dir::Templates"]+"/lisa.prod");
+
+ # Send the prod mail if appropriate
+ if not Cnf["Dinstall::Options::No-Mail"]:
+ utils.send_mail(prod_mail_message);
+
+ print "Sent prodding message";
+
+################################################################################
+
+def do_new():
+ print "NEW\n";
+ files = Katie.pkg.files;
+ changes = Katie.pkg.changes;
+
+ # Make a copy of distribution we can happily trample on
+ changes["suite"] = copy.copy(changes["distribution"]);
+
+ # Fix up the list of target suites
+ for suite in changes["suite"].keys():
+ override = Cnf.Find("Suite::%s::OverrideSuite" % (suite));
+ if override:
+ del changes["suite"][suite];
+ changes["suite"][override] = 1;
+ # Validate suites
+ for suite in changes["suite"].keys():
+ suite_id = db_access.get_suite_id(suite);
+ if suite_id == -1:
+ utils.fubar("%s has invalid suite '%s' (possibly overridden). say wha?" % (changes, suite));
+
+ # The main NEW processing loop
+ done = 0;
+ while not done:
+ # Find out what's new
+ new = determine_new(changes, files);
+
+ if not new:
+ break;
+
+ answer = "XXX";
+ if Options["No-Action"] or Options["Automatic"]:
+ answer = 'S';
+
+ (broken, note) = print_new(new, 0);
+ prompt = "";
+
+ if not broken and not note:
+ prompt = "Add overrides, ";
+ if broken:
+ print "W: [!] marked entries must be fixed before package can be processed.";
+ if note:
+ print "W: note must be removed before package can be processed.";
+ prompt += "Remove note, ";
+
+ prompt += "Edit overrides, Check, Manual reject, Note edit, Prod, [S]kip, Quit ?";
+
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.search(prompt);
+ if answer == "":
+ answer = m.group(1)
+ answer = answer[:1].upper()
+
+ if answer == 'A':
+ done = add_overrides (new);
+ elif answer == 'C':
+ check_pkg();
+ elif answer == 'E':
+ new = edit_overrides (new);
+ elif answer == 'M':
+ aborted = Katie.do_reject(1, Options["Manual-Reject"]);
+ if not aborted:
+ os.unlink(Katie.pkg.changes_file[:-8]+".katie");
+ done = 1;
+ elif answer == 'N':
+ edit_note(changes.get("lisa note", ""));
+ elif answer == 'P':
+ prod_maintainer();
+ elif answer == 'R':
+ confirm = utils.our_raw_input("Really clear note (y/N)? ").lower();
+ if confirm == "y":
+ del changes["lisa note"];
+ elif answer == 'S':
+ done = 1;
+ elif answer == 'Q':
+ sys.exit(0)
+
+################################################################################
+################################################################################
+################################################################################
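The interactive loops above (and the one in do_byhand() below) share one pattern: `katie.re_default_answer` pulls the default reply out of the bracketed letter in the prompt string, e.g. `[S]kip`. A minimal standalone sketch of that pattern, with the regex reproduced here as an assumption about `katie.re_default_answer`'s shape rather than taken from the katie module itself:

```python
import re

# Assumed stand-in for katie.re_default_answer: the default answer is
# the single letter shown in square brackets, e.g. the "S" in "[S]kip".
re_default_answer = re.compile(r"\[(.)\]")

def resolve_answer(prompt, raw):
    """Mirror the prompt loops in do_new()/do_byhand(): an empty reply
    selects the bracketed default; otherwise the first character of the
    reply, upper-cased, is the answer."""
    answer = raw
    if answer == "":
        m = re_default_answer.search(prompt)
        if m:
            answer = m.group(1)
    return answer[:1].upper()
```

The caller then loops `while prompt.find(answer) == -1`, so any reply whose upper-cased first letter does not appear in the prompt re-asks the question.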
+
+def usage (exit_code=0):
+ print """Usage: lisa [OPTION]... [CHANGES]...
+ -a, --automatic automatic run
+ -h, --help show this help and exit.
+ -m, --manual-reject=MSG manual reject with `msg'
+ -n, --no-action don't do anything
+ -V, --version display the version number and exit"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def init():
+ global Cnf, Options, Logger, Katie, projectB, Sections, Priorities;
+
+ Cnf = utils.get_conf();
+
+ Arguments = [('a',"automatic","Lisa::Options::Automatic"),
+ ('h',"help","Lisa::Options::Help"),
+ ('m',"manual-reject","Lisa::Options::Manual-Reject", "HasArg"),
+ ('n',"no-action","Lisa::Options::No-Action"),
+ ('V',"version","Lisa::Options::Version")];
+
+ for i in ["automatic", "help", "manual-reject", "no-action", "version"]:
+ if not Cnf.has_key("Lisa::Options::%s" % (i)):
+ Cnf["Lisa::Options::%s" % (i)] = "";
+
+ changes_files = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Lisa::Options")
+
+ if Options["Help"]:
+ usage();
+
+ if Options["Version"]:
+ print "lisa %s" % (lisa_version);
+ sys.exit(0);
+
+ Katie = katie.Katie(Cnf);
+
+ if not Options["No-Action"]:
+ Logger = Katie.Logger = logging.Logger(Cnf, "lisa");
+
+ projectB = Katie.projectB;
+
+ Sections = Section_Completer();
+ Priorities = Priority_Completer();
+ readline.parse_and_bind("tab: complete");
+
+ return changes_files;
+
+################################################################################
+
+def do_byhand():
+ done = 0;
+ while not done:
+ files = Katie.pkg.files;
+ will_install = 1;
+ byhand = [];
+
+ for file in files.keys():
+ if files[file]["type"] == "byhand":
+ if os.path.exists(file):
+ print "W: %s still present; please process byhand components and try again." % (file);
+ will_install = 0;
+ else:
+ byhand.append(file);
+
+ answer = "XXXX";
+ if Options["No-Action"]:
+ answer = "S";
+ if will_install:
+ if Options["Automatic"] and not Options["No-Action"]:
+ answer = 'A';
+ prompt = "[A]ccept, Manual reject, Skip, Quit ?";
+ else:
+ prompt = "Manual reject, [S]kip, Quit ?";
+
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.search(prompt);
+ if answer == "":
+ answer = m.group(1);
+ answer = answer[:1].upper();
+
+ if answer == 'A':
+ done = 1;
+ for file in byhand:
+ del files[file];
+ elif answer == 'M':
+ Katie.do_reject(1, Options["Manual-Reject"]);
+ os.unlink(Katie.pkg.changes_file[:-8]+".katie");
+ done = 1;
+ elif answer == 'S':
+ done = 1;
+ elif answer == 'Q':
+ sys.exit(0);
+
+################################################################################
+
+def do_accept():
+ print "ACCEPT";
+ if not Options["No-Action"]:
+ retry = 0;
+ while retry < 10:
+ try:
+ lock_fd = os.open(Cnf["Lisa::AcceptedLockFile"], os.O_RDONLY | os.O_CREAT | os.O_EXCL);
+ retry = 10;
+ except OSError, e:
+ if errno.errorcode[e.errno] == 'EACCES' or errno.errorcode[e.errno] == 'EEXIST':
+ retry += 1;
+ if (retry >= 10):
+ utils.fubar("Couldn't obtain lock; assuming jennifer is already running.");
+ else:
+ print("Unable to get accepted lock (try %d of 10)" % retry);
+ time.sleep(60);
+ else:
+ raise;
+ (summary, short_summary) = Katie.build_summaries();
+ Katie.accept(summary, short_summary);
+ os.unlink(Katie.pkg.changes_file[:-8]+".katie");
+ os.unlink(Cnf["Lisa::AcceptedLockFile"]);
+
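do_accept() above serialises against a concurrently running jennifer by creating a lock file with `O_CREAT | O_EXCL`: the open is atomic, so it fails with EEXIST if another process already holds the lock, and the code retries for up to ten minutes before giving up. A self-contained sketch of that scheme in modern-Python syntax (the `tries` and `delay` parameters are illustrative, not from the original):

```python
import errno
import os
import time

def acquire_lock(path, tries=10, delay=60):
    """Try to create `path` atomically; True on success, False if the
    lock stayed held for all `tries` attempts, as in do_accept()."""
    for attempt in range(1, tries + 1):
        try:
            fd = os.open(path, os.O_RDONLY | os.O_CREAT | os.O_EXCL)
            os.close(fd)
            return True          # lock file created; we own the lock
        except OSError as e:
            if e.errno not in (errno.EEXIST, errno.EACCES):
                raise            # unexpected error: propagate
            if attempt == tries:
                return False     # presume the other process is still running
            time.sleep(delay)

def release_lock(path):
    # The lock is released simply by unlinking the file.
    os.unlink(path)
```

Note the original keeps the raised file descriptor around only implicitly; releasing the lock is just `os.unlink()` on the lock file, exactly as do_accept() does with `Lisa::AcceptedLockFile`.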
+def check_status(files):
+ new = byhand = 0;
+ for file in files.keys():
+ if files[file]["type"] == "byhand":
+ byhand = 1;
+ elif files[file].has_key("new"):
+ new = 1;
+ return (new, byhand);
+
+def do_pkg(changes_file):
+ Katie.pkg.changes_file = changes_file;
+ Katie.init_vars();
+ Katie.update_vars();
+ Katie.update_subst();
+ files = Katie.pkg.files;
+
+ if not recheck():
+ return;
+
+ (new, byhand) = check_status(files);
+ if new or byhand:
+ if new:
+ do_new();
+ if byhand:
+ do_byhand();
+ (new, byhand) = check_status(files);
+
+ if not new and not byhand:
+ do_accept();
+
+################################################################################
+
+def end():
+ accept_count = Katie.accept_count;
+ accept_bytes = Katie.accept_bytes;
+
+ if accept_count:
+ sets = "set"
+ if accept_count > 1:
+ sets = "sets"
+ sys.stderr.write("Accepted %d package %s, %s.\n" % (accept_count, sets, utils.size_type(int(accept_bytes))));
+ Logger.log(["total",accept_count,accept_bytes]);
+
+ if not Options["No-Action"]:
+ Logger.close();
+
+################################################################################
+
+def main():
+ changes_files = init();
+ if len(changes_files) > 50:
+ sys.stderr.write("Sorting changes...\n");
+ changes_files = sort_changes(changes_files);
+
+ # Kill me now? **FIXME**
+ Cnf["Dinstall::Options::No-Mail"] = "";
+ bcc = "X-Katie: lisa %s" % (lisa_version);
+ if Cnf.has_key("Dinstall::Bcc"):
+ Katie.Subst["__BCC__"] = bcc + "\nBcc: %s" % (Cnf["Dinstall::Bcc"]);
+ else:
+ Katie.Subst["__BCC__"] = bcc;
+
+ for changes_file in changes_files:
+ changes_file = utils.validate_changes_file_arg(changes_file, 0);
+ if not changes_file:
+ continue;
+ print "\n" + changes_file;
+ do_pkg (changes_file);
+
+ end();
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+#!/usr/bin/env python
+
+# Checks Debian packages from Incoming
+# Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
+# $Id: jennifer,v 1.65 2005-12-05 05:35:47 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+# Originally based on dinstall by Guy Maor <maor@debian.org>
+
+################################################################################
+
+# Computer games don't affect kids. I mean if Pacman affected our generation as
+# kids, we'd all run around in a darkened room munching pills and listening to
+# repetitive music.
+# -- Unknown
+
+################################################################################
+
+import commands, errno, fcntl, os, re, shutil, stat, sys, time, tempfile, traceback;
+import apt_inst, apt_pkg;
+import db_access, katie, logging, utils;
+
+from types import *;
+
+################################################################################
+
+re_valid_version = re.compile(r"^([0-9]+:)?[0-9A-Za-z\.\-\+:]+$");
+re_valid_pkg_name = re.compile(r"^[\dA-Za-z][\dA-Za-z\+\-\.]+$");
+re_changelog_versions = re.compile(r"^\w[-+0-9a-z.]+ \([^\(\) \t]+\)");
+re_strip_revision = re.compile(r"-([^-]+)$");
+
+################################################################################
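The four regexes above do the heavy lifting for package- and version-name validation in check_files() and check_dsc() below. A quick self-contained sketch exercising the two validators (the patterns are reproduced verbatim from the definitions above):

```python
import re

# Reproduced from the module-level definitions so the examples run standalone.
re_valid_version = re.compile(r"^([0-9]+:)?[0-9A-Za-z\.\-\+:]+$")
re_valid_pkg_name = re.compile(r"^[\dA-Za-z][\dA-Za-z\+\-\.]+$")

def is_valid_version(v):
    # Optional numeric epoch ("1:"), then upstream version and revision.
    return re_valid_version.match(v) is not None

def is_valid_pkg_name(n):
    # Must start alphanumeric and be at least two characters long.
    return re_valid_pkg_name.match(n) is not None
```

For example, `1:2.0.4-1` passes the version check while anything containing whitespace fails, and a package name may contain `+`, `-` and `.` but not lead with them.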
+
+# Globals
+jennifer_version = "$Revision: 1.65 $";
+
+Cnf = None;
+Options = None;
+Logger = None;
+Katie = None;
+
+reprocess = 0;
+in_holding = {};
+
+# Aliases to the real vars in the Katie class; hysterical raisins.
+reject_message = "";
+changes = {};
+dsc = {};
+dsc_files = {};
+files = {};
+pkg = {};
+
+###############################################################################
+
+def init():
+ global Cnf, Options, Katie, changes, dsc, dsc_files, files, pkg;
+
+ apt_pkg.init();
+
+ Cnf = apt_pkg.newConfiguration();
+ apt_pkg.ReadConfigFileISC(Cnf,utils.which_conf_file());
+
+ Arguments = [('a',"automatic","Dinstall::Options::Automatic"),
+ ('h',"help","Dinstall::Options::Help"),
+ ('n',"no-action","Dinstall::Options::No-Action"),
+ ('p',"no-lock", "Dinstall::Options::No-Lock"),
+ ('s',"no-mail", "Dinstall::Options::No-Mail"),
+ ('V',"version","Dinstall::Options::Version")];
+
+ for i in ["automatic", "help", "no-action", "no-lock", "no-mail",
+ "override-distribution", "version"]:
+ Cnf["Dinstall::Options::%s" % (i)] = "";
+
+ changes_files = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Dinstall::Options")
+
+ if Options["Help"]:
+ usage();
+ elif Options["Version"]:
+ print "jennifer %s" % (jennifer_version);
+ sys.exit(0);
+
+ Katie = katie.Katie(Cnf);
+
+ changes = Katie.pkg.changes;
+ dsc = Katie.pkg.dsc;
+ dsc_files = Katie.pkg.dsc_files;
+ files = Katie.pkg.files;
+ pkg = Katie.pkg;
+
+ return changes_files;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: dinstall [OPTION]... [CHANGES]...
+ -a, --automatic automatic run
+ -h, --help show this help and exit.
+ -n, --no-action don't do anything
+ -p, --no-lock don't check lockfile !! for cron.daily only !!
+ -s, --no-mail don't send any mail
+ -V, --version display the version number and exit"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def reject (str, prefix="Rejected: "):
+ global reject_message;
+ if str:
+ reject_message += prefix + str + "\n";
+
+################################################################################
+
+def copy_to_holding(filename):
+ global in_holding;
+
+ base_filename = os.path.basename(filename);
+
+ dest = Cnf["Dir::Queue::Holding"] + '/' + base_filename;
+ try:
+ fd = os.open(dest, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0640);
+ os.close(fd);
+ except OSError, e:
+ # Shouldn't happen, but will if, for example, someone lists a
+ # file twice in the .changes.
+ if errno.errorcode[e.errno] == 'EEXIST':
+ reject("%s: already exists in holding area; can not overwrite." % (base_filename));
+ return;
+ raise;
+
+ try:
+ shutil.copy(filename, dest);
+ except IOError, e:
+ # In either case (ENOENT or EACCES) we want to remove the
+ # O_CREAT | O_EXCLed ghost file, so add the file to the list
+ # of 'in holding' even if it's not the real file.
+ if errno.errorcode[e.errno] == 'ENOENT':
+ reject("%s: can not copy to holding area: file not found." % (base_filename));
+ os.unlink(dest);
+ return;
+ elif errno.errorcode[e.errno] == 'EACCES':
+ reject("%s: can not copy to holding area: read permission denied." % (base_filename));
+ os.unlink(dest);
+ return;
+ raise;
+
+ in_holding[base_filename] = "";
+
+################################################################################
+
+def clean_holding():
+ global in_holding;
+
+ cwd = os.getcwd();
+ os.chdir(Cnf["Dir::Queue::Holding"]);
+ for file in in_holding.keys():
+ if os.path.exists(file):
+ if file.find('/') != -1:
+ utils.fubar("WTF? clean_holding() got a file ('%s') with / in it!" % (file));
+ else:
+ os.unlink(file);
+ in_holding = {};
+ os.chdir(cwd);
+
+################################################################################
+
+def check_changes():
+ filename = pkg.changes_file;
+
+ # Parse the .changes field into a dictionary
+ try:
+ changes.update(utils.parse_changes(filename));
+ except utils.cant_open_exc:
+ reject("%s: can't read file." % (filename));
+ return 0;
+ except utils.changes_parse_error_exc, line:
+ reject("%s: parse error, can't grok: %s." % (filename, line));
+ return 0;
+
+ # Parse the Files field from the .changes into another dictionary
+ try:
+ files.update(utils.build_file_list(changes));
+ except utils.changes_parse_error_exc, line:
+ reject("%s: parse error, can't grok: %s." % (filename, line));
+ except utils.nk_format_exc, format:
+ reject("%s: unknown format '%s'." % (filename, format));
+ return 0;
+
+ # Check for mandatory fields
+ for i in ("source", "binary", "architecture", "version", "distribution",
+ "maintainer", "files", "changes", "description"):
+ if not changes.has_key(i):
+ reject("%s: Missing mandatory field `%s'." % (filename, i));
+ return 0 # Avoid <undef> errors during later tests
+
+ # Split multi-value fields into a lower-level dictionary
+ for i in ("architecture", "distribution", "binary", "closes"):
+ o = changes.get(i, "")
+ if o != "":
+ del changes[i]
+ changes[i] = {}
+ for j in o.split():
+ changes[i][j] = 1
+
+ # Fix the Maintainer: field to be RFC822/2047 compatible
+ try:
+ (changes["maintainer822"], changes["maintainer2047"],
+ changes["maintainername"], changes["maintaineremail"]) = \
+ utils.fix_maintainer (changes["maintainer"]);
+ except utils.ParseMaintError, msg:
+ reject("%s: Maintainer field ('%s') failed to parse: %s" \
+ % (filename, changes["maintainer"], msg));
+
+ # ...likewise for the Changed-By: field if it exists.
+ try:
+ (changes["changedby822"], changes["changedby2047"],
+ changes["changedbyname"], changes["changedbyemail"]) = \
+ utils.fix_maintainer (changes.get("changed-by", ""));
+ except utils.ParseMaintError, msg:
+ (changes["changedby822"], changes["changedby2047"],
+ changes["changedbyname"], changes["changedbyemail"]) = \
+ ("", "", "", "")
+ reject("%s: Changed-By field ('%s') failed to parse: %s" \
+ % (filename, changes["changed-by"], msg));
+
+ # Ensure all the values in Closes: are numbers
+ if changes.has_key("closes"):
+ for i in changes["closes"].keys():
+ if katie.re_isanum.match (i) == None:
+ reject("%s: `%s' from Closes field isn't a number." % (filename, i));
+
+
+ # chopversion = no epoch; chopversion2 = no epoch and no revision (e.g. for .orig.tar.gz comparison)
+ changes["chopversion"] = utils.re_no_epoch.sub('', changes["version"])
+ changes["chopversion2"] = utils.re_no_revision.sub('', changes["chopversion"])
+
+ # Check there isn't already a changes file of the same name in one
+ # of the queue directories.
+ base_filename = os.path.basename(filename);
+ for dir in [ "Accepted", "Byhand", "Done", "New" ]:
+ if os.path.exists(Cnf["Dir::Queue::%s" % (dir) ]+'/'+base_filename):
+ reject("%s: a file with this name already exists in the %s directory." % (base_filename, dir));
+
+ # Check the .changes is non-empty
+ if not files:
+ reject("%s: nothing to do (Files field is empty)." % (base_filename))
+ return 0;
+
+ return 1;
+
+################################################################################
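check_changes() above converts each multi-value field (Architecture, Distribution, Binary, Closes) from a whitespace-separated string into a dict used as a set, which is why the rest of jennifer tests membership with `has_key()`. A standalone sketch of that split:

```python
def split_multi_values(changes,
                       fields=("architecture", "distribution", "binary", "closes")):
    """Sketch of the multi-value split in check_changes(): each listed
    field becomes a dict keyed by its whitespace-separated tokens,
    e.g. "source i386" -> {"source": 1, "i386": 1}."""
    for i in fields:
        o = changes.get(i, "")
        if o != "":
            changes[i] = dict((j, 1) for j in o.split())
    return changes
```

After this transformation, checks like "does this upload include source?" reduce to a dictionary lookup on `changes["architecture"]`.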
+
+def check_distributions():
+ "Check and map the Distribution field of a .changes file."
+
+ # Handle suite mappings
+ for map in Cnf.ValueList("SuiteMappings"):
+ args = map.split();
+ type = args[0];
+ if type == "map" or type == "silent-map":
+ (source, dest) = args[1:3];
+ if changes["distribution"].has_key(source):
+ del changes["distribution"][source]
+ changes["distribution"][dest] = 1;
+ if type != "silent-map":
+ reject("Mapping %s to %s." % (source, dest),"");
+ if changes.has_key("distribution-version"):
+ if changes["distribution-version"].has_key(source):
+ changes["distribution-version"][source]=dest
+ elif type == "map-unreleased":
+ (source, dest) = args[1:3];
+ if changes["distribution"].has_key(source):
+ for arch in changes["architecture"].keys():
+ if arch not in Cnf.ValueList("Suite::%s::Architectures" % (source)):
+ reject("Mapping %s to %s for unreleased architecture %s." % (source, dest, arch),"");
+ del changes["distribution"][source];
+ changes["distribution"][dest] = 1;
+ break;
+ elif type == "ignore":
+ suite = args[1];
+ if changes["distribution"].has_key(suite):
+ del changes["distribution"][suite];
+ reject("Ignoring %s as a target suite." % (suite), "Warning: ");
+ elif type == "reject":
+ suite = args[1];
+ if changes["distribution"].has_key(suite):
+ reject("Uploads to %s are not accepted." % (suite));
+ elif type == "propup-version":
+ # give these as "uploaded-to(non-mapped) suites-to-add-when-upload-obsoletes"
+ #
+ # changes["distribution-version"] looks like: {'testing': 'testing-proposed-updates'}
+ if changes["distribution"].has_key(args[1]):
+ changes.setdefault("distribution-version", {})
+ for suite in args[2:]: changes["distribution-version"][suite]=suite
+
+ # Ensure there is (still) a target distribution
+ if changes["distribution"].keys() == []:
+ reject("no valid distribution.");
+
+ # Ensure target distributions exist
+ for suite in changes["distribution"].keys():
+ if not Cnf.has_key("Suite::%s" % (suite)):
+ reject("Unknown distribution `%s'." % (suite));
+
+################################################################################
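The SuiteMappings handling in check_distributions() above dispatches on the first word of each mapping entry (`map`, `silent-map`, `map-unreleased`, `ignore`, `reject`, `propup-version`). A minimal sketch of just the plain `map`/`silent-map` case, operating on the `{suite: 1}` dict built from the Distribution field (the mapping-string format is as described above; the function name is illustrative):

```python
def apply_suite_mapping(distribution, mapping):
    """Apply one "map SRC DEST" entry to a {suite: 1} distribution dict,
    as the "map"/"silent-map" branch of check_distributions() does."""
    args = mapping.split()
    if args[0] in ("map", "silent-map"):
        source, dest = args[1:3]
        if source in distribution:
            # Retarget the upload: drop the source suite, add the destination.
            del distribution[source]
            distribution[dest] = 1
    return distribution
```

In the real code a non-silent `map` additionally emits a "Mapping X to Y." note via reject() with an empty prefix, so the uploader is told about the retargeting.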
+
+def check_deb_ar(filename, control):
+ """Sanity check the ar of a .deb, i.e. that there is:
+
+ o debian-binary
+ o control.tar.gz
+ o data.tar.gz or data.tar.bz2
+
+in that order, and nothing else. If the third member is a
+data.tar.bz2, an additional check is performed for the required
+Pre-Depends on dpkg (>= 1.10.24)."""
+ cmd = "ar t %s" % (filename)
+ (result, output) = commands.getstatusoutput(cmd)
+ if result != 0:
+ reject("%s: 'ar t' invocation failed." % (filename))
+ reject(utils.prefix_multi_line_string(output, " [ar output:] "), "")
+ chunks = output.split('\n')
+ if len(chunks) != 3:
+ reject("%s: found %d chunks, expected 3." % (filename, len(chunks)))
+ # Can't index the three members below with the wrong count.
+ return
+ if chunks[0] != "debian-binary":
+ reject("%s: first chunk is '%s', expected 'debian-binary'." % (filename, chunks[0]))
+ if chunks[1] != "control.tar.gz":
+ reject("%s: second chunk is '%s', expected 'control.tar.gz'." % (filename, chunks[1]))
+ if chunks[2] == "data.tar.bz2":
+ # Packages using bzip2 compression must have a Pre-Depends on dpkg >= 1.10.24.
+ found_needed_predep = 0
+ for parsed_dep in apt_pkg.ParseDepends(control.Find("Pre-Depends", "")):
+ for atom in parsed_dep:
+ (dep, version, constraint) = atom
+ if dep != "dpkg" or (constraint != ">=" and constraint != ">>") or \
+ len(parsed_dep) > 1: # or'ed deps don't count
+ continue
+ if (constraint == ">=" and apt_pkg.VersionCompare(version, "1.10.24") < 0) or \
+ (constraint == ">>" and apt_pkg.VersionCompare(version, "1.10.23") < 0):
+ continue
+ found_needed_predep = 1
+ if not found_needed_predep:
+ reject("%s: uses bzip2 compression, but doesn't Pre-Depend on dpkg (>= 1.10.24)" % (filename))
+ elif chunks[2] != "data.tar.gz":
+ reject("%s: third chunk is '%s', expected 'data.tar.gz' or 'data.tar.bz2'." % (filename, chunks[2]))
+
+################################################################################
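The member checks in check_deb_ar() above can be separated from the `ar t` invocation, which makes them easy to test. A pure-function sketch operating on a pre-computed list of ar member names (one per line of `ar t` output) instead of shelling out; the function name and return convention are illustrative:

```python
def deb_ar_problems(members):
    """Return a list of problem strings for a .deb's ar member listing,
    mirroring the order checks in check_deb_ar(): debian-binary, then
    control.tar.gz, then data.tar.gz or data.tar.bz2, and nothing else."""
    if len(members) != 3:
        return ["found %d members, expected 3" % len(members)]
    problems = []
    if members[0] != "debian-binary":
        problems.append("first member is '%s', expected 'debian-binary'" % members[0])
    if members[1] != "control.tar.gz":
        problems.append("second member is '%s', expected 'control.tar.gz'" % members[1])
    if members[2] not in ("data.tar.gz", "data.tar.bz2"):
        problems.append("third member is '%s', expected 'data.tar.gz' or 'data.tar.bz2'"
                        % members[2])
    return problems
```

This sketch omits the bzip2 Pre-Depends check, which needs the parsed control stanza as well as the member list.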
+
+def check_files():
+ global reprocess
+
+ archive = utils.where_am_i();
+ file_keys = files.keys();
+
+ # if reprocess is 2 we've already done this and we're checking
+ # things again for the new .orig.tar.gz.
+ # [Yes, I'm fully aware of how disgusting this is]
+ if not Options["No-Action"] and reprocess < 2:
+ cwd = os.getcwd();
+ os.chdir(pkg.directory);
+ for file in file_keys:
+ copy_to_holding(file);
+ os.chdir(cwd);
+
+ # Check there isn't already a .changes or .katie file of the same name in
+ # the proposed-updates "CopyChanges" or "CopyKatie" storage directories.
+ # [NB: this check must be done post-suite mapping]
+ base_filename = os.path.basename(pkg.changes_file);
+ katie_filename = base_filename[:-8]+".katie"
+ for suite in changes["distribution"].keys():
+ copychanges = "Suite::%s::CopyChanges" % (suite);
+ if Cnf.has_key(copychanges) and \
+ os.path.exists(Cnf[copychanges]+"/"+base_filename):
+ reject("%s: a file with this name already exists in %s" \
+ % (base_filename, Cnf[copychanges]));
+
+ copykatie = "Suite::%s::CopyKatie" % (suite);
+ if Cnf.has_key(copykatie) and \
+ os.path.exists(Cnf[copykatie]+"/"+katie_filename):
+ reject("%s: a file with this name already exists in %s" \
+ % (katie_filename, Cnf[copykatie]));
+
+ reprocess = 0;
+ has_binaries = 0;
+ has_source = 0;
+
+ for file in file_keys:
+ # Ensure the file does not already exist in one of the accepted directories
+ for dir in [ "Accepted", "Byhand", "New" ]:
+ if os.path.exists(Cnf["Dir::Queue::%s" % (dir) ]+'/'+file):
+ reject("%s file already exists in the %s directory." % (file, dir));
+ if not utils.re_taint_free.match(file):
+ reject("!!WARNING!! tainted filename: '%s'." % (file));
+ # Check the file is readable
+ if os.access(file,os.R_OK) == 0:
+ # When running in -n, copy_to_holding() won't have
+ # generated the reject_message, so we need to.
+ if Options["No-Action"]:
+ if os.path.exists(file):
+ reject("Can't read `%s'. [permission denied]" % (file));
+ else:
+ reject("Can't read `%s'. [file not found]" % (file));
+ files[file]["type"] = "unreadable";
+ continue;
+ # If it's byhand skip remaining checks
+ if files[file]["section"] == "byhand" or files[file]["section"] == "raw-installer":
+ files[file]["byhand"] = 1;
+ files[file]["type"] = "byhand";
+ # Checks for a binary package...
+ elif utils.re_isadeb.match(file):
+ has_binaries = 1;
+ files[file]["type"] = "deb";
+
+ # Extract package control information
+ deb_file = utils.open_file(file);
+ try:
+ control = apt_pkg.ParseSection(apt_inst.debExtractControl(deb_file));
+ except:
+ reject("%s: debExtractControl() raised %s." % (file, sys.exc_type));
+ deb_file.close();
+ # Can't continue, none of the checks on control would work.
+ continue;
+ deb_file.close();
+
+ # Check for mandatory fields
+ broken_control = 0;
+ for field in [ "Package", "Architecture", "Version" ]:
+ if control.Find(field) == None:
+ reject("%s: No %s field in control." % (file, field));
+ broken_control = 1;
+ if broken_control:
+ # Can't continue without the mandatory fields
+ continue;
+
+ # Ensure the package name matches the one given in the .changes
+ if not changes["binary"].has_key(control.Find("Package", "")):
+ reject("%s: control file lists name as `%s', which isn't in changes file." % (file, control.Find("Package", "")));
+
+ # Validate the package field
+ package = control.Find("Package");
+ if not re_valid_pkg_name.match(package):
+ reject("%s: invalid package name '%s'." % (file, package));
+
+ # Validate the version field
+ version = control.Find("Version");
+ if not re_valid_version.match(version):
+ reject("%s: invalid version number '%s'." % (file, version));
+
+ # Ensure the architecture of the .deb is one we know about.
+ default_suite = Cnf.get("Dinstall::DefaultSuite", "Unstable")
+ architecture = control.Find("Architecture");
+ if architecture not in Cnf.ValueList("Suite::%s::Architectures" % (default_suite)):
+ reject("Unknown architecture '%s'." % (architecture));
+
+ # Ensure the architecture of the .deb is one of the ones
+ # listed in the .changes.
+ if not changes["architecture"].has_key(architecture):
+ reject("%s: control file lists arch as `%s', which isn't in changes file." % (file, architecture));
+
+ # Sanity-check the Depends field
+ depends = control.Find("Depends");
+ if depends == '':
+ reject("%s: Depends field is empty." % (file));
+
+ # Check the section & priority match those given in the .changes (non-fatal)
+ if control.Find("Section") and files[file]["section"] != "" and files[file]["section"] != control.Find("Section"):
+ reject("%s control file lists section as `%s', but changes file has `%s'." % (file, control.Find("Section", ""), files[file]["section"]), "Warning: ");
+ if control.Find("Priority") and files[file]["priority"] != "" and files[file]["priority"] != control.Find("Priority"):
+ reject("%s control file lists priority as `%s', but changes file has `%s'." % (file, control.Find("Priority", ""), files[file]["priority"]),"Warning: ");
+
+ files[file]["package"] = package;
+ files[file]["architecture"] = architecture;
+ files[file]["version"] = version;
+ files[file]["maintainer"] = control.Find("Maintainer", "");
+ if file.endswith(".udeb"):
+ files[file]["dbtype"] = "udeb";
+ elif file.endswith(".deb"):
+ files[file]["dbtype"] = "deb";
+ else:
+ reject("%s is neither a .deb nor a .udeb." % (file));
+ files[file]["source"] = control.Find("Source", files[file]["package"]);
+ # Get the source version
+ source = files[file]["source"];
+ source_version = "";
+ if source.find("(") != -1:
+ m = utils.re_extract_src_version.match(source);
+ source = m.group(1);
+ source_version = m.group(2);
+ if not source_version:
+ source_version = files[file]["version"];
+ files[file]["source package"] = source;
+ files[file]["source version"] = source_version;
+
+ # Ensure the filename matches the contents of the .deb
+ m = utils.re_isadeb.match(file);
+ # package name
+ file_package = m.group(1);
+ if files[file]["package"] != file_package:
+ reject("%s: package part of filename (%s) does not match package name in the %s (%s)." % (file, file_package, files[file]["dbtype"], files[file]["package"]));
+ epochless_version = utils.re_no_epoch.sub('', control.Find("Version"));
+ # version
+ file_version = m.group(2);
+ if epochless_version != file_version:
+ reject("%s: version part of filename (%s) does not match package version in the %s (%s)." % (file, file_version, files[file]["dbtype"], epochless_version));
+ # architecture
+ file_architecture = m.group(3);
+ if files[file]["architecture"] != file_architecture:
+ reject("%s: architecture part of filename (%s) does not match package architecture in the %s (%s)." % (file, file_architecture, files[file]["dbtype"], files[file]["architecture"]));
+
+ # Check for existent source
+ source_version = files[file]["source version"];
+ source_package = files[file]["source package"];
+ if changes["architecture"].has_key("source"):
+ if source_version != changes["version"]:
+ reject("source version (%s) for %s doesn't match changes version %s." % (source_version, file, changes["version"]));
+ else:
+ # Check in the SQL database
+ if not Katie.source_exists(source_package, source_version, changes["distribution"].keys()):
+ # Check in one of the other directories
+ source_epochless_version = utils.re_no_epoch.sub('', source_version);
+ dsc_filename = "%s_%s.dsc" % (source_package, source_epochless_version);
+ if os.path.exists(Cnf["Dir::Queue::Byhand"] + '/' + dsc_filename):
+ files[file]["byhand"] = 1;
+ elif os.path.exists(Cnf["Dir::Queue::New"] + '/' + dsc_filename):
+ files[file]["new"] = 1;
+ elif not os.path.exists(Cnf["Dir::Queue::Accepted"] + '/' + dsc_filename):
+ reject("no source found for %s %s (%s)." % (source_package, source_version, file));
+ # Check the version and for file overwrites
+ reject(Katie.check_binary_against_db(file),"");
+
+ check_deb_ar(file, control)
+
+ # Checks for a source package...
+ else:
+ m = utils.re_issource.match(file);
+ if m:
+ has_source = 1;
+ files[file]["package"] = m.group(1);
+ files[file]["version"] = m.group(2);
+ files[file]["type"] = m.group(3);
+
+ # Ensure the source package name matches the Source field in the .changes
+ if changes["source"] != files[file]["package"]:
+ reject("%s: changes file doesn't say %s for Source" % (file, files[file]["package"]));
+
+ # Ensure the source version matches the version in the .changes file
+ if files[file]["type"] == "orig.tar.gz":
+ changes_version = changes["chopversion2"];
+ else:
+ changes_version = changes["chopversion"];
+ if changes_version != files[file]["version"]:
+ reject("%s: should be %s according to changes file." % (file, changes_version));
+
+ # Ensure the .changes lists source in the Architecture field
+ if not changes["architecture"].has_key("source"):
+ reject("%s: changes file doesn't list `source' in Architecture field." % (file));
+
+ # Check the signature of a .dsc file
+ if files[file]["type"] == "dsc":
+ dsc["fingerprint"] = utils.check_signature(file, reject);
+
+ files[file]["architecture"] = "source";
+
+ # Not a binary or source package? Assume byhand...
+ else:
+ files[file]["byhand"] = 1;
+ files[file]["type"] = "byhand";
+
+ # Per-suite file checks
+ files[file]["oldfiles"] = {};
+ for suite in changes["distribution"].keys():
+ # Skip byhand
+ if files[file].has_key("byhand"):
+ continue;
+
+ # Handle component mappings
+ for map in Cnf.ValueList("ComponentMappings"):
+ (source, dest) = map.split();
+ if files[file]["component"] == source:
+ files[file]["original component"] = source;
+ files[file]["component"] = dest;
+
+ # Ensure the component is valid for the target suite
+ if Cnf.has_key("Suite::%s::Components" % (suite)) and \
+ files[file]["component"] not in Cnf.ValueList("Suite::%s::Components" % (suite)):
+ reject("unknown component `%s' for suite `%s'." % (files[file]["component"], suite));
+ continue;
+
+ # Validate the component
+ component = files[file]["component"];
+ component_id = db_access.get_component_id(component);
+ if component_id == -1:
+ reject("file '%s' has unknown component '%s'." % (file, component));
+ continue;
+
+ # See if the package is NEW
+ if not Katie.in_override_p(files[file]["package"], files[file]["component"], suite, files[file].get("dbtype",""), file):
+ files[file]["new"] = 1;
+
+ # Validate the priority
+ if files[file]["priority"].find('/') != -1:
+ reject("file '%s' has invalid priority '%s' [contains '/']." % (file, files[file]["priority"]));
+
+ # Determine the location
+ location = Cnf["Dir::Pool"];
+ location_id = db_access.get_location_id (location, component, archive);
+ if location_id == -1:
+ reject("[INTERNAL ERROR] couldn't determine location (Component: %s, Archive: %s)" % (component, archive));
+ files[file]["location id"] = location_id;
+
+ # Check the md5sum & size against existing files (if any)
+ files[file]["pool name"] = utils.poolify (changes["source"], files[file]["component"]);
+ files_id = db_access.get_files_id(files[file]["pool name"] + file, files[file]["size"], files[file]["md5sum"], files[file]["location id"]);
+ if files_id == -1:
+ reject("INTERNAL ERROR, get_files_id() returned multiple matches for %s." % (file));
+ elif files_id == -2:
+ reject("md5sum and/or size mismatch on existing copy of %s." % (file));
+ files[file]["files id"] = files_id
+
+ # Check for packages that have moved from one component to another
+ q = Katie.projectB.query("""
+SELECT c.name FROM binaries b, bin_associations ba, suite s, location l,
+ component c, architecture a, files f
+ WHERE b.package = '%s' AND s.suite_name = '%s'
+ AND (a.arch_string = '%s' OR a.arch_string = 'all')
+ AND ba.bin = b.id AND ba.suite = s.id AND b.architecture = a.id
+ AND f.location = l.id AND l.component = c.id AND b.file = f.id"""
+ % (files[file]["package"], suite,
+ files[file]["architecture"]));
+ ql = q.getresult();
+ if ql:
+ files[file]["othercomponents"] = ql[0][0];
+
+ # If the .changes file says it has source, it must have source.
+ if changes["architecture"].has_key("source"):
+ if not has_source:
+ reject("no source found although Architecture line in changes mentions source.");
+
+ if not has_binaries and Cnf.FindB("Dinstall::Reject::NoSourceOnly"):
+ reject("source only uploads are not supported.");
+
+###############################################################################
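Among the per-file bookkeeping in check_files() above, the Source-field handling is worth isolating: a binary's Source field may be either a bare name or `name (version)` (as in binNMUs), and a bare name inherits the binary package's own version. A standalone sketch, with the regex given here as an assumption about the shape of `utils.re_extract_src_version` rather than copied from utils:

```python
import re

# Assumed shape of utils.re_extract_src_version: "name (version)".
re_extract_src_version = re.compile(r"(\S+)\s*\((.*)\)")

def split_source_field(source, binary_version):
    """Return (source_package, source_version) as check_files() computes
    them: split a "name (version)" Source field, or fall back to the
    binary's version when no explicit source version is given."""
    m = re_extract_src_version.match(source)
    if m:
        return m.group(1), m.group(2)
    return source, binary_version
```

The resulting pair is what the later "Check for existent source" block queries the database with.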
+
+def check_dsc():
+ global reprocess;
+
+ # Ensure there is source to check
+ if not changes["architecture"].has_key("source"):
+ return 1;
+
+ # Find the .dsc
+ dsc_filename = None;
+ for file in files.keys():
+ if files[file]["type"] == "dsc":
+ if dsc_filename:
+ reject("can not process a .changes file with multiple .dsc's.");
+ return 0;
+ else:
+ dsc_filename = file;
+
+ # If there isn't one, the upload is broken: a source upload must include a .dsc
+ if not dsc_filename:
+ reject("source uploads must contain a dsc file");
+ return 0;
+
+ # Parse the .dsc file
+ try:
+ dsc.update(utils.parse_changes(dsc_filename, signing_rules=1));
+ except utils.cant_open_exc:
+ # if not -n copy_to_holding() will have done this for us...
+ if Options["No-Action"]:
+ reject("%s: can't read file." % (dsc_filename));
+ except utils.changes_parse_error_exc, line:
+ reject("%s: parse error, can't grok: %s." % (dsc_filename, line));
+ except utils.invalid_dsc_format_exc, line:
+ reject("%s: syntax error on line %s." % (dsc_filename, line));
+ # Build up the file list of files mentioned by the .dsc
+ try:
+ dsc_files.update(utils.build_file_list(dsc, is_a_dsc=1));
+ except utils.no_files_exc:
+ reject("%s: no Files: field." % (dsc_filename));
+ return 0;
+ except utils.changes_parse_error_exc, line:
+ reject("%s: parse error, can't grok: %s." % (dsc_filename, line));
+ return 0;
+
+ # Enforce mandatory fields
+ for i in ("format", "source", "version", "binary", "maintainer", "architecture", "files"):
+ if not dsc.has_key(i):
+ reject("%s: missing mandatory field `%s'." % (dsc_filename, i));
+ return 0;
+
+ # Validate the source and version fields
+ if not re_valid_pkg_name.match(dsc["source"]):
+ reject("%s: invalid source name '%s'." % (dsc_filename, dsc["source"]));
+ if not re_valid_version.match(dsc["version"]):
+ reject("%s: invalid version number '%s'." % (dsc_filename, dsc["version"]));
+
+ # Bumping the version number of the .dsc breaks extraction by stable's
+ # dpkg-source. So let's not do that...
+ if dsc["format"] != "1.0":
+ reject("%s: incompatible 'Format' version produced by a broken version of dpkg-dev 1.9.1{3,4}." % (dsc_filename));
+
+ # Validate the Maintainer field
+ try:
+ utils.fix_maintainer (dsc["maintainer"]);
+ except utils.ParseMaintError, msg:
+ reject("%s: Maintainer field ('%s') failed to parse: %s" \
+ % (dsc_filename, dsc["maintainer"], msg));
+
+ # Validate the build-depends field(s)
+ for field_name in [ "build-depends", "build-depends-indep" ]:
+ field = dsc.get(field_name);
+ if field:
+ # Check for broken dpkg-dev lossage...
+ if field.startswith("ARRAY"):
+ reject("%s: invalid %s field produced by a broken version of dpkg-dev (1.10.11)" % (dsc_filename, field_name.title()));
+
+ # Have apt try to parse them...
+ try:
+ apt_pkg.ParseSrcDepends(field);
+ except:
+ reject("%s: invalid %s field (can not be parsed by apt)." % (dsc_filename, field_name.title()));
+
+ # Ensure the version number in the .dsc matches the version number in the .changes
+ epochless_dsc_version = utils.re_no_epoch.sub('', dsc["version"]);
+ changes_version = files[dsc_filename]["version"];
+ if epochless_dsc_version != changes_version:
+ reject("version ('%s') in .dsc does not match version ('%s') in .changes." % (epochless_dsc_version, changes_version));
+
+ # Ensure there is a .tar.gz in the .dsc file
+ has_tar = 0;
+ for f in dsc_files.keys():
+ m = utils.re_issource.match(f);
+ if not m:
+ reject("%s: %s in Files field not recognised as source." % (dsc_filename, f));
+ continue;
+ type = m.group(3);
+ if type == "orig.tar.gz" or type == "tar.gz":
+ has_tar = 1;
+ if not has_tar:
+ reject("%s: no .tar.gz or .orig.tar.gz in 'Files' field." % (dsc_filename));
+
+ # Ensure source is newer than existing source in target suites
+ reject(Katie.check_source_against_db(dsc_filename),"");
+
+ (reject_msg, is_in_incoming) = Katie.check_dsc_against_db(dsc_filename);
+ reject(reject_msg, "");
+ if is_in_incoming:
+ if not Options["No-Action"]:
+ copy_to_holding(is_in_incoming);
+ orig_tar_gz = os.path.basename(is_in_incoming);
+ files[orig_tar_gz] = {};
+ files[orig_tar_gz]["size"] = os.stat(orig_tar_gz)[stat.ST_SIZE];
+ files[orig_tar_gz]["md5sum"] = dsc_files[orig_tar_gz]["md5sum"];
+ files[orig_tar_gz]["section"] = files[dsc_filename]["section"];
+ files[orig_tar_gz]["priority"] = files[dsc_filename]["priority"];
+ files[orig_tar_gz]["component"] = files[dsc_filename]["component"];
+ files[orig_tar_gz]["type"] = "orig.tar.gz";
+ reprocess = 2;
+
+ return 1;
+
+################################################################################
+
+def get_changelog_versions(source_dir):
+ """Extracts the source package and (optionally) grabs the
+ version history out of debian/changelog for the BTS."""
+
+ # Find the .dsc (again)
+ dsc_filename = None;
+ for file in files.keys():
+ if files[file]["type"] == "dsc":
+ dsc_filename = file;
+
+ # If there isn't one, we have nothing to do. (We have reject()ed the upload already)
+ if not dsc_filename:
+ return;
+
+ # Create a symlink mirror of the source files in our temporary directory
+ for f in files.keys():
+ m = utils.re_issource.match(f);
+ if m:
+ src = os.path.join(source_dir, f);
+ # If a file is missing for whatever reason, give up.
+ if not os.path.exists(src):
+ return;
+ type = m.group(3);
+ if type == "orig.tar.gz" and pkg.orig_tar_gz:
+ continue;
+ dest = os.path.join(os.getcwd(), f);
+ os.symlink(src, dest);
+
+ # If the orig.tar.gz is not a part of the upload, create a symlink to the
+ # existing copy.
+ if pkg.orig_tar_gz:
+ dest = os.path.join(os.getcwd(), os.path.basename(pkg.orig_tar_gz));
+ os.symlink(pkg.orig_tar_gz, dest);
+
+ # Extract the source
+ cmd = "dpkg-source -sn -x %s" % (dsc_filename);
+ (result, output) = commands.getstatusoutput(cmd);
+ if (result != 0):
+ reject("'dpkg-source -x' failed for %s [return code: %s]." % (dsc_filename, result));
+ reject(utils.prefix_multi_line_string(output, " [dpkg-source output:] "), "");
+ return;
+
+ if not Cnf.Find("Dir::Queue::BTSVersionTrack"):
+ return;
+
+ # Get the upstream version
+ upstr_version = utils.re_no_epoch.sub('', dsc["version"]);
+ if re_strip_revision.search(upstr_version):
+ upstr_version = re_strip_revision.sub('', upstr_version);
+
+ # Ensure the changelog file exists
+ changelog_filename = "%s-%s/debian/changelog" % (dsc["source"], upstr_version);
+ if not os.path.exists(changelog_filename):
+ reject("%s: debian/changelog not found in extracted source." % (dsc_filename));
+ return;
+
+ # Parse the changelog
+ dsc["bts changelog"] = "";
+ changelog_file = utils.open_file(changelog_filename);
+ for line in changelog_file.readlines():
+ m = re_changelog_versions.match(line);
+ if m:
+ dsc["bts changelog"] += line;
+ changelog_file.close();
+
+ # Check we found at least one revision in the changelog
+ if not dsc["bts changelog"]:
+ reject("%s: changelog format not recognised (empty version tree)." % (dsc_filename));
+
+########################################
+
+def check_source():
+ # Bail out if:
+ # a) there's no source
+ # or b) reprocess is 2 - we will do this check next time when orig.tar.gz is in 'files'
+ # or c) the orig.tar.gz is MIA
+ if not changes["architecture"].has_key("source") or reprocess == 2 \
+ or pkg.orig_tar_gz == -1:
+ return;
+
+ # Create a temporary directory to extract the source into
+ if Options["No-Action"]:
+ tmpdir = tempfile.mktemp();
+ else:
+ # We're in queue/holding and can create a random directory.
+ tmpdir = "%s" % (os.getpid());
+ os.mkdir(tmpdir);
+
+ # Move into the temporary directory
+ cwd = os.getcwd();
+ os.chdir(tmpdir);
+
+ # Get the changelog version history
+ get_changelog_versions(cwd);
+
+ # Move back and cleanup the temporary tree
+ os.chdir(cwd);
+ try:
+ shutil.rmtree(tmpdir);
+ except OSError, e:
+ if errno.errorcode[e.errno] != 'EACCES':
+ utils.fubar("%s: couldn't remove tmp dir for source tree." % (dsc["source"]));
+
+ reject("%s: source tree could not be cleanly removed." % (dsc["source"]));
+ # We probably have u-r or u-w directories so chmod everything
+ # and try again.
+ cmd = "chmod -R u+rwx %s" % (tmpdir)
+ result = os.system(cmd)
+ if result != 0:
+ utils.fubar("'%s' failed with result %s." % (cmd, result));
+ shutil.rmtree(tmpdir);
+ except:
+ utils.fubar("%s: couldn't remove tmp dir for source tree." % (dsc["source"]));
+
+################################################################################
+
+# FIXME: should be a debian specific check called from a hook
+
+def check_urgency ():
+ if changes["architecture"].has_key("source"):
+ if not changes.has_key("urgency"):
+ changes["urgency"] = Cnf["Urgency::Default"];
+ if changes["urgency"] not in Cnf.ValueList("Urgency::Valid"):
+ reject("%s is not a valid urgency; it will be treated as %s by testing." % (changes["urgency"], Cnf["Urgency::Default"]), "Warning: ");
+ changes["urgency"] = Cnf["Urgency::Default"];
+ changes["urgency"] = changes["urgency"].lower();
+
+################################################################################
+
+def check_md5sums ():
+ for file in files.keys():
+ try:
+ file_handle = utils.open_file(file);
+ except utils.cant_open_exc:
+ continue;
+
+ # Check md5sum
+ if apt_pkg.md5sum(file_handle) != files[file]["md5sum"]:
+ reject("%s: md5sum check failed." % (file));
+ file_handle.close();
+ # Check size
+ actual_size = os.stat(file)[stat.ST_SIZE];
+ size = int(files[file]["size"]);
+ if size != actual_size:
+ reject("%s: actual file size (%s) does not match size (%s) in .changes"
+ % (file, actual_size, size));
+
+ for file in dsc_files.keys():
+ try:
+ file_handle = utils.open_file(file);
+ except utils.cant_open_exc:
+ continue;
+
+ # Check md5sum
+ if apt_pkg.md5sum(file_handle) != dsc_files[file]["md5sum"]:
+ reject("%s: md5sum check failed." % (file));
+ file_handle.close();
+ # Check size
+ actual_size = os.stat(file)[stat.ST_SIZE];
+ size = int(dsc_files[file]["size"]);
+ if size != actual_size:
+ reject("%s: actual file size (%s) does not match size (%s) in .dsc"
+ % (file, actual_size, size));
+
+################################################################################
+
+# Sanity check the time stamps of files inside debs.
+# [Files in the near future cause ugly warnings and extreme time
+# travel can cause errors on extraction]
+
+def check_timestamps():
+ class Tar:
+ def __init__(self, future_cutoff, past_cutoff):
+ self.reset();
+ self.future_cutoff = future_cutoff;
+ self.past_cutoff = past_cutoff;
+
+ def reset(self):
+ self.future_files = {};
+ self.ancient_files = {};
+
+ def callback(self, Kind,Name,Link,Mode,UID,GID,Size,MTime,Major,Minor):
+ if MTime > self.future_cutoff:
+ self.future_files[Name] = MTime;
+ if MTime < self.past_cutoff:
+ self.ancient_files[Name] = MTime;
+ ####
+
+ future_cutoff = time.time() + int(Cnf["Dinstall::FutureTimeTravelGrace"]);
+ past_cutoff = time.mktime(time.strptime(Cnf["Dinstall::PastCutoffYear"],"%Y"));
+ tar = Tar(future_cutoff, past_cutoff);
+ for filename in files.keys():
+ if files[filename]["type"] == "deb":
+ tar.reset();
+ try:
+ deb_file = utils.open_file(filename);
+ apt_inst.debExtract(deb_file,tar.callback,"control.tar.gz");
+ deb_file.seek(0);
+ try:
+ apt_inst.debExtract(deb_file,tar.callback,"data.tar.gz")
+ except SystemError, e:
+ # If we can't find a data.tar.gz, look for data.tar.bz2 instead.
+ if not re.match(r"Cannot f[ui]nd chunk data.tar.gz$", str(e)):
+ raise
+ deb_file.seek(0)
+ apt_inst.debExtract(deb_file,tar.callback,"data.tar.bz2")
+ deb_file.close();
+ #
+ future_files = tar.future_files.keys();
+ if future_files:
+ num_future_files = len(future_files);
+ future_file = future_files[0];
+ future_date = tar.future_files[future_file];
+ reject("%s: has %s file(s) with a time stamp too far into the future (e.g. %s [%s])."
+ % (filename, num_future_files, future_file,
+ time.ctime(future_date)));
+ #
+ ancient_files = tar.ancient_files.keys();
+ if ancient_files:
+ num_ancient_files = len(ancient_files);
+ ancient_file = ancient_files[0];
+ ancient_date = tar.ancient_files[ancient_file];
+ reject("%s: has %s file(s) with a time stamp too ancient (e.g. %s [%s])."
+ % (filename, num_ancient_files, ancient_file,
+ time.ctime(ancient_date)));
+ except:
+ reject("%s: deb contents timestamp check failed [%s: %s]" % (filename, sys.exc_type, sys.exc_value));
+
+################################################################################
+################################################################################
+
+# If any file of an upload has a recent mtime then chances are good
+# the file is still being uploaded.
+
+def upload_too_new():
+ too_new = 0;
+ # Move back to the original directory to get accurate time stamps
+ cwd = os.getcwd();
+ os.chdir(pkg.directory);
+ file_list = pkg.files.keys();
+ file_list.extend(pkg.dsc_files.keys());
+ file_list.append(pkg.changes_file);
+ for file in file_list:
+ try:
+ last_modified = time.time()-os.path.getmtime(file);
+ if last_modified < int(Cnf["Dinstall::SkipTime"]):
+ too_new = 1;
+ break;
+ except:
+ pass;
+ os.chdir(cwd);
+ return too_new;
+
+################################################################################
+
+def action ():
+ # changes["distribution"] may not exist in corner cases
+ # (e.g. unreadable changes files)
+ if not changes.has_key("distribution") or not isinstance(changes["distribution"], DictType):
+ changes["distribution"] = {};
+
+ (summary, short_summary) = Katie.build_summaries();
+
+ # q-unapproved hax0ring
+ queue_info = {
+ "New": { "is": is_new, "process": acknowledge_new },
+ "Byhand" : { "is": is_byhand, "process": do_byhand },
+ "Unembargo" : { "is": is_unembargo, "process": queue_unembargo },
+ "Embargo" : { "is": is_embargo, "process": queue_embargo },
+ }
+ queues = [ "New", "Byhand" ]
+ if Cnf.FindB("Dinstall::SecurityQueueHandling"):
+ queues += [ "Unembargo", "Embargo" ]
+
+ (prompt, answer) = ("", "XXX")
+ if Options["No-Action"] or Options["Automatic"]:
+ answer = 'S'
+
+ queuekey = ''
+
+ if reject_message.find("Rejected") != -1:
+ if upload_too_new():
+ print "SKIP (too new)\n" + reject_message,;
+ prompt = "[S]kip, Quit ?";
+ else:
+ print "REJECT\n" + reject_message,;
+ prompt = "[R]eject, Skip, Quit ?";
+ if Options["Automatic"]:
+ answer = 'R';
+ else:
+ queue = None
+ for q in queues:
+ if queue_info[q]["is"]():
+ queue = q
+ break
+ if queue:
+ print "%s for %s\n%s%s" % (
+ queue.upper(), ", ".join(changes["distribution"].keys()),
+ reject_message, summary),
+ queuekey = queue[0].upper()
+ if queuekey in "RQSA":
+ queuekey = "D"
+ prompt = "[D]ivert, Skip, Quit ?"
+ else:
+ prompt = "[%s]%s, Skip, Quit ?" % (queuekey, queue[1:].lower())
+ if Options["Automatic"]:
+ answer = queuekey
+ else:
+ print "ACCEPT\n" + reject_message + summary,;
+ prompt = "[A]ccept, Skip, Quit ?";
+ if Options["Automatic"]:
+ answer = 'A';
+
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.match(prompt);
+ if answer == "":
+ answer = m.group(1);
+ answer = answer[:1].upper();
+
+ if answer == 'R':
+ os.chdir (pkg.directory);
+ Katie.do_reject(0, reject_message);
+ elif answer == 'A':
+ accept(summary, short_summary);
+ remove_from_unchecked()
+ elif answer == queuekey:
+ queue_info[queue]["process"](summary)
+ remove_from_unchecked()
+ elif answer == 'Q':
+ sys.exit(0)
+
+def remove_from_unchecked():
+ os.chdir (pkg.directory);
+ for file in files.keys():
+ os.unlink(file);
+ os.unlink(pkg.changes_file);
+
+################################################################################
+
+def accept (summary, short_summary):
+ Katie.accept(summary, short_summary);
+ Katie.check_override();
+
+################################################################################
+
+def move_to_dir (dest, perms=0660, changesperms=0664):
+ utils.move (pkg.changes_file, dest, perms=changesperms);
+ file_keys = files.keys();
+ for file in file_keys:
+ utils.move (file, dest, perms=perms);
+
+################################################################################
+
+def is_unembargo ():
+ q = Katie.projectB.query(
+ "SELECT package FROM disembargo WHERE package = '%s' AND version = '%s'" %
+ (changes["source"], changes["version"]))
+ ql = q.getresult()
+ if ql:
+ return 1
+
+ if pkg.directory == Cnf["Dir::Queue::Disembargo"].rstrip("/"):
+ if changes["architecture"].has_key("source"):
+ if Options["No-Action"]: return 1
+
+ Katie.projectB.query(
+ "INSERT INTO disembargo (package, version) VALUES ('%s', '%s')" %
+ (changes["source"], changes["version"]))
+ return 1
+
+ return 0
+
+def queue_unembargo (summary):
+ print "Moving to UNEMBARGOED holding area."
+ Logger.log(["Moving to unembargoed", pkg.changes_file]);
+
+ Katie.dump_vars(Cnf["Dir::Queue::Unembargoed"]);
+ move_to_dir(Cnf["Dir::Queue::Unembargoed"])
+ Katie.queue_build("unembargoed", Cnf["Dir::Queue::Unembargoed"])
+
+ # Check for override disparities
+ Katie.Subst["__SUMMARY__"] = summary;
+ Katie.check_override();
+
+################################################################################
+
+def is_embargo ():
+ return 0
+
+def queue_embargo (summary):
+ print "Moving to EMBARGOED holding area."
+ Logger.log(["Moving to embargoed", pkg.changes_file]);
+
+ Katie.dump_vars(Cnf["Dir::Queue::Embargoed"]);
+ move_to_dir(Cnf["Dir::Queue::Embargoed"])
+ Katie.queue_build("embargoed", Cnf["Dir::Queue::Embargoed"])
+
+ # Check for override disparities
+ Katie.Subst["__SUMMARY__"] = summary;
+ Katie.check_override();
+
+################################################################################
+
+def is_byhand ():
+ for file in files.keys():
+ if files[file].has_key("byhand"):
+ return 1
+ return 0
+
+def do_byhand (summary):
+ print "Moving to BYHAND holding area."
+ Logger.log(["Moving to byhand", pkg.changes_file]);
+
+ Katie.dump_vars(Cnf["Dir::Queue::Byhand"]);
+ move_to_dir(Cnf["Dir::Queue::Byhand"])
+
+ # Check for override disparities
+ Katie.Subst["__SUMMARY__"] = summary;
+ Katie.check_override();
+
+################################################################################
+
+def is_new ():
+ for file in files.keys():
+ if files[file].has_key("new"):
+ return 1
+ return 0
+
+def acknowledge_new (summary):
+ Subst = Katie.Subst;
+
+ print "Moving to NEW holding area."
+ Logger.log(["Moving to new", pkg.changes_file]);
+
+ Katie.dump_vars(Cnf["Dir::Queue::New"]);
+ move_to_dir(Cnf["Dir::Queue::New"])
+
+ if not Options["No-Mail"]:
+ print "Sending new ack.";
+ Subst["__SUMMARY__"] = summary;
+ new_ack_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.new");
+ utils.send_mail(new_ack_message);
+
+################################################################################
+
+# reprocess is necessary for the case of foo_1.2-1 and foo_1.2-2 in
+# Incoming. -1 will reference the .orig.tar.gz, but -2 will not.
+# Katie.check_dsc_against_db() can find the .orig.tar.gz but it will
+ # not have processed it during its checks of -2. If -1 has been
+# deleted or otherwise not checked by jennifer, the .orig.tar.gz will
+# not have been checked at all. To get round this, we force the
+# .orig.tar.gz into the .changes structure and reprocess the .changes
+# file.
+
+def process_it (changes_file):
+ global reprocess, reject_message;
+
+ # Reset some globals
+ reprocess = 1;
+ Katie.init_vars();
+ # Some defaults in case we can't fully process the .changes file
+ changes["maintainer2047"] = Cnf["Dinstall::MyEmailAddress"];
+ changes["changedby2047"] = Cnf["Dinstall::MyEmailAddress"];
+ reject_message = "";
+
+ # Absolutize the filename to avoid the requirement of being in the
+ # same directory as the .changes file.
+ pkg.changes_file = os.path.abspath(changes_file);
+
+ # Remember where we are so we can come back after cd-ing into the
+ # holding directory.
+ pkg.directory = os.getcwd();
+
+ try:
+ # If this is the Real Thing(tm), copy things into a private
+ # holding directory first to avoid replaceable file races.
+ if not Options["No-Action"]:
+ os.chdir(Cnf["Dir::Queue::Holding"]);
+ copy_to_holding(pkg.changes_file);
+ # Relativize the filename so we use the copy in holding
+ # rather than the original...
+ pkg.changes_file = os.path.basename(pkg.changes_file);
+ changes["fingerprint"] = utils.check_signature(pkg.changes_file, reject);
+ if changes["fingerprint"]:
+ valid_changes_p = check_changes();
+ else:
+ valid_changes_p = 0;
+ if valid_changes_p:
+ while reprocess:
+ check_distributions();
+ check_files();
+ valid_dsc_p = check_dsc();
+ if valid_dsc_p:
+ check_source();
+ check_md5sums();
+ check_urgency();
+ check_timestamps();
+ Katie.update_subst(reject_message);
+ action();
+ except SystemExit:
+ raise;
+ except:
+ print "ERROR";
+ traceback.print_exc(file=sys.stderr);
+
+ # Restore previous WD
+ os.chdir(pkg.directory);
+
+###############################################################################
+
+def main():
+ global Cnf, Options, Logger;
+
+ changes_files = init();
+
+ # -n/--dry-run invalidates some other options which would involve things happening
+ if Options["No-Action"]:
+ Options["Automatic"] = "";
+
+ # Ensure all the arguments we were given are .changes files
+ # Iterate over a copy, since we remove entries from changes_files as we go
+ for file in changes_files[:]:
+ if not file.endswith(".changes"):
+ utils.warn("Ignoring '%s' because it's not a .changes file." % (file));
+ changes_files.remove(file);
+
+ if changes_files == []:
+ utils.fubar("Need at least one .changes file as an argument.");
+
+ # Check that we aren't going to clash with the daily cron job
+
+ if not Options["No-Action"] and os.path.exists("%s/daily.lock" % (Cnf["Dir::Lock"])) and not Options["No-Lock"]:
+ utils.fubar("Archive maintenance in progress. Try again later.");
+
+ # Obtain lock if not in no-action mode and initialize the log
+
+ if not Options["No-Action"]:
+ lock_fd = os.open(Cnf["Dinstall::LockFile"], os.O_RDWR | os.O_CREAT);
+ try:
+ fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB);
+ except IOError, e:
+ if errno.errorcode[e.errno] == 'EACCES' or errno.errorcode[e.errno] == 'EAGAIN':
+ utils.fubar("Couldn't obtain lock; assuming another jennifer is already running.");
+ else:
+ raise;
+ Logger = Katie.Logger = logging.Logger(Cnf, "jennifer");
+
+ # debian-{devel-,}-changes@lists.debian.org toggles write access based on this header
+ bcc = "X-Katie: %s" % (jennifer_version);
+ if Cnf.has_key("Dinstall::Bcc"):
+ Katie.Subst["__BCC__"] = bcc + "\nBcc: %s" % (Cnf["Dinstall::Bcc"]);
+ else:
+ Katie.Subst["__BCC__"] = bcc;
+
+
+ # Sort the .changes files so that we process sourceful ones first
+ changes_files.sort(utils.changes_compare);
+
+ # Process the changes files
+ for changes_file in changes_files:
+ print "\n" + changes_file;
+ try:
+ process_it (changes_file);
+ finally:
+ if not Options["No-Action"]:
+ clean_holding();
+
+ accept_count = Katie.accept_count;
+ accept_bytes = Katie.accept_bytes;
+ if accept_count:
+ sets = "set"
+ if accept_count > 1:
+ sets = "sets";
+ print "Accepted %d package %s, %s." % (accept_count, sets, utils.size_type(int(accept_bytes)));
+ Logger.log(["total",accept_count,accept_bytes]);
+
+ if not Options["No-Action"]:
+ Logger.close();
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Produces a report on NEW and BYHAND packages
+# Copyright (C) 2001, 2002, 2003, 2005 James Troup <james@nocrew.org>
+# $Id: helena,v 1.6 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <o-o> XP runs GCC, XFREE86, SSH etc etc,.,, I feel almost like linux....
+# <o-o> I am very confident that I can replicate any Linux application on XP
+# <willy> o-o: *boggle*
+# <o-o> building from source.
+# <o-o> Viiru: I already run GIMP under XP
+# <willy> o-o: why do you capitalise the names of all pieces of software?
+# <o-o> willy: because I want the EMPHASIZE them....
+# <o-o> grr s/the/to/
+# <willy> o-o: it makes you look like ZIPPY the PINHEAD
+# <o-o> willy: no idea what you are talking about.
+# <willy> o-o: do some research
+# <o-o> willy: for what reason?
+
+################################################################################
+
+import copy, glob, os, stat, sys, time;
+import apt_pkg;
+import katie, utils;
+import encodings.utf_8, encodings.latin_1, string;
+
+Cnf = None;
+Katie = None;
+direction = [];
+row_number = 0;
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: helena
+Prints a report of packages in queue directories (usually new and byhand).
+
+ -h, --help show this help and exit.
+ -n, --new produce html-output
+ -s, --sort=key sort output according to key, see below.
+ -a, --age=key if using sort by age, how should time be treated?
+ If not given a default of hours will be used.
+
+ Sorting Keys: ao=age, oldest first. an=age, newest first.
+ na=name, ascending nd=name, descending
+ nf=notes, first nl=notes, last
+
+ Age Keys: m=minutes, h=hours, d=days, w=weeks, o=months, y=years
+
+"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def plural(x):
+ if x > 1:
+ return "s";
+ else:
+ return "";
+
+################################################################################
+
+def time_pp(x):
+ if x < 60:
+ unit="second";
+ elif x < 3600:
+ x /= 60;
+ unit="minute";
+ elif x < 86400:
+ x /= 3600;
+ unit="hour";
+ elif x < 604800:
+ x /= 86400;
+ unit="day";
+ elif x < 2419200:
+ x /= 604800;
+ unit="week";
+ elif x < 29030400:
+ x /= 2419200;
+ unit="month";
+ else:
+ x /= 29030400;
+ unit="year";
+ x = int(x);
+ return "%s %s%s" % (x, unit, plural(x));
+
+################################################################################
+
+def sg_compare (a, b):
+ """Sort by note state, then by time of oldest upload."""
+ a = a[1];
+ b = b[1];
+ # Sort by note state
+ a_note_state = a["note_state"];
+ b_note_state = b["note_state"];
+ if a_note_state < b_note_state:
+ return -1;
+ elif a_note_state > b_note_state:
+ return 1;
+
+ # Sort by time of oldest upload
+ return cmp(a["oldest"], b["oldest"]);
+
+############################################################
+
+def sortfunc(a,b):
+ for sorting in direction:
+ (sortkey, way, time) = sorting;
+ ret = 0
+ if time == "m":
+ x=int(a[sortkey]/60)
+ y=int(b[sortkey]/60)
+ elif time == "h":
+ x=int(a[sortkey]/3600)
+ y=int(b[sortkey]/3600)
+ elif time == "d":
+ x=int(a[sortkey]/86400)
+ y=int(b[sortkey]/86400)
+ elif time == "w":
+ x=int(a[sortkey]/604800)
+ y=int(b[sortkey]/604800)
+ elif time == "o":
+ x=int(a[sortkey]/2419200)
+ y=int(b[sortkey]/2419200)
+ elif time == "y":
+ x=int(a[sortkey]/29030400)
+ y=int(b[sortkey]/29030400)
+ else:
+ x=a[sortkey]
+ y=b[sortkey]
+ if x < y:
+ ret = -1
+ elif x > y:
+ ret = 1
+ if ret != 0:
+ if way < 0:
+ ret = ret*-1
+ return ret
+ return 0
+
+############################################################
+
+def header():
+ print """<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+ <html><head><meta http-equiv="Content-Type" content="text/html; charset=iso8859-1">
+ <title>Debian NEW and BYHAND Packages</title>
+ <link type="text/css" rel="stylesheet" href="style.css">
+ <link rel="shortcut icon" href="http://www.debian.org/favicon.ico">
+ </head>
+ <body>
+ <div align="center">
+ <a href="http://www.debian.org/">
+ <img src="http://www.debian.org/logos/openlogo-nd-50.png" border="0" hspace="0" vspace="0" alt=""></a>
+ <a href="http://www.debian.org/">
+ <img src="http://www.debian.org/Pics/debian.png" border="0" hspace="0" vspace="0" alt="Debian Project"></a>
+ </div>
+ <br />
+ <table class="reddy" width="100%">
+ <tr>
+ <td class="reddy">
+ <img src="http://www.debian.org/Pics/red-upperleft.png" align="left" border="0" hspace="0" vspace="0"
+ alt="" width="15" height="16"></td>
+ <td rowspan="2" class="reddy">Debian NEW and BYHAND Packages</td>
+ <td class="reddy">
+ <img src="http://www.debian.org/Pics/red-upperright.png" align="right" border="0" hspace="0" vspace="0"
+ alt="" width="16" height="16"></td>
+ </tr>
+ <tr>
+ <td class="reddy">
+ <img src="http://www.debian.org/Pics/red-lowerleft.png" align="left" border="0" hspace="0" vspace="0"
+ alt="" width="16" height="16"></td>
+ <td class="reddy">
+ <img src="http://www.debian.org/Pics/red-lowerright.png" align="right" border="0" hspace="0" vspace="0"
+ alt="" width="15" height="16"></td>
+ </tr>
+ </table>
+ """
+
+def footer():
+ print "<p class=\"validate\">Timestamp: %s (UTC)</p>" % (time.strftime("%d.%m.%Y / %H:%M:%S", time.gmtime()))
+ print "<hr><p>Hint: Age is that of the youngest upload of the package, if there is more than one version.</p>"
+ print "<p>You may want to look at <a href=\"http://ftp-master.debian.org/REJECT-FAQ.html\">the REJECT-FAQ</a> for possible reasons why one of the above packages may get rejected.</p>"
+ print """<a href="http://validator.w3.org/check?uri=referer">
+ <img border="0" src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!" height="31" width="88"></a>
+ <a href="http://jigsaw.w3.org/css-validator/check/referer">
+ <img border="0" src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"
+ height="31" width="88"></a>
+ """
+ print "</body></html>"
+
+def table_header(type):
+ print "<h1>Summary for: %s</h1>" % (type)
+ print """<center><table border="0">
+ <tr>
+ <th align="center">Package</th>
+ <th align="center">Version</th>
+ <th align="center">Arch</th>
+ <th align="center">Distribution</th>
+ <th align="center">Age</th>
+ <th align="center">Maintainer</th>
+ <th align="center">Closes</th>
+ </tr>
+ """
+
+def table_footer(type, source_count, total_count):
+ print "</table></center><br>\n"
+ print "<p class=\"validate\">Package count in <b>%s</b>: <i>%s</i>\n" % (type, source_count)
+ print "<br>Total Package count: <i>%s</i></p>\n" % (total_count)
+
+def force_to_latin(s):
+ """Forces a UTF-8 string to Latin-1."""
+ u_s = unicode(s, 'utf-8');
+ return u_s.encode('iso8859-1', 'replace');
+
+
+def table_row(source, version, arch, last_mod, maint, distribution, closes):
+
+ global row_number;
+
+ if row_number % 2 != 0:
+ print "<tr class=\"even\">"
+ else:
+ print "<tr class=\"odd\">"
+
+ tdclass = "sid"
+ for dist in distribution:
+ if dist == "experimental":
+ tdclass = "exp";
+ print "<td valign=\"top\" class=\"%s\">%s</td>" % (tdclass, source);
+ print "<td valign=\"top\" class=\"%s\">" % (tdclass)
+ for vers in version.split():
+ print "%s<br>" % (vers);
+ print "</td><td valign=\"top\" class=\"%s\">%s</td><td valign=\"top\" class=\"%s\">" % (tdclass, arch, tdclass);
+ for dist in distribution:
+ print "%s<br>" % (dist);
+ print "</td><td valign=\"top\" class=\"%s\">%s</td>" % (tdclass, last_mod);
+ (name, mail) = maint.split(":");
+ name = force_to_latin(name);
+
+ print "<td valign=\"top\" class=\"%s\"><a href=\"http://qa.debian.org/developer.php?login=%s\">%s</a></td>" % (tdclass, mail, name);
+ print "<td valign=\"top\" class=\"%s\">" % (tdclass)
+ for close in closes:
+ print "<a href=\"http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=%s\">#%s</a><br>" % (close, close);
+ print "</td></tr>";
+ row_number+=1;
+
+############################################################
+
+def process_changes_files(changes_files, type):
+ msg = "";
+ cache = {};
+ # Read in all the .changes files
+ for filename in changes_files:
+ try:
+ Katie.pkg.changes_file = filename;
+ Katie.init_vars();
+ Katie.update_vars();
+ cache[filename] = copy.copy(Katie.pkg.changes);
+ cache[filename]["filename"] = filename;
+ except:
+ break;
+ # Divide the .changes into per-source groups
+ per_source = {};
+ for filename in cache.keys():
+ source = cache[filename]["source"];
+ if not per_source.has_key(source):
+ per_source[source] = {};
+ per_source[source]["list"] = [];
+ per_source[source]["list"].append(cache[filename]);
+ # Determine oldest time and have note status for each source group
+ for source in per_source.keys():
+ source_list = per_source[source]["list"];
+ first = source_list[0];
+ oldest = os.stat(first["filename"])[stat.ST_MTIME];
+ have_note = 0;
+ for d in per_source[source]["list"]:
+ mtime = os.stat(d["filename"])[stat.ST_MTIME];
+ if Cnf.has_key("Helena::Options::New"):
+ if mtime > oldest:
+ oldest = mtime;
+ else:
+ if mtime < oldest:
+ oldest = mtime;
+ have_note += (d.has_key("lisa note"));
+ per_source[source]["oldest"] = oldest;
+ if not have_note:
+ per_source[source]["note_state"] = 0; # none
+ elif have_note < len(source_list):
+ per_source[source]["note_state"] = 1; # some
+ else:
+ per_source[source]["note_state"] = 2; # all
+ per_source_items = per_source.items();
+ per_source_items.sort(sg_compare);
+
+ entries = [];
+ max_source_len = 0;
+ max_version_len = 0;
+ max_arch_len = 0;
+ maintainer = {};
+ maint="";
+ distribution="";
+ closes="";
+ source_exists="";
+ for i in per_source_items:
+ last_modified = time.time()-i[1]["oldest"];
+ source = i[1]["list"][0]["source"];
+ if len(source) > max_source_len:
+ max_source_len = len(source);
+ arches = {};
+ versions = {};
+ for j in i[1]["list"]:
+ if Cnf.has_key("Helena::Options::New"):
+ try:
+ (maintainer["maintainer822"], maintainer["maintainer2047"],
+ maintainer["maintainername"], maintainer["maintaineremail"]) = \
+ utils.fix_maintainer (j["maintainer"]);
+ except utils.ParseMaintError, msg:
+ print "Problem while parsing maintainer address: %s" % (msg);
+ maintainer["maintainername"] = "Unknown";
+ maintainer["maintaineremail"] = "Unknown";
+ maint="%s:%s" % (maintainer["maintainername"], maintainer["maintaineremail"]);
+ distribution=j["distribution"].keys();
+ closes=j["closes"].keys();
+ for arch in j["architecture"].keys():
+ arches[arch] = "";
+ version = j["version"];
+ versions[version] = "";
+ arches_list = arches.keys();
+ arches_list.sort(utils.arch_compare_sw);
+ arch_list = " ".join(arches_list);
+ version_list = " ".join(versions.keys());
+ if len(version_list) > max_version_len:
+ max_version_len = len(version_list);
+ if len(arch_list) > max_arch_len:
+ max_arch_len = len(arch_list);
+ if i[1]["note_state"]:
+ note = " | [N]";
+ else:
+ note = "";
+ entries.append([source, version_list, arch_list, note, last_modified, maint, distribution, closes]);
+
+ # A direction entry consists of [field index, direction, time-format], where
+ # time-format says how last_modified should be rendered. That's all.
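+ # For example (illustrative sketch, not code from the original): with
+ # Helena::Options::Sort="ao" and Helena::Options::Age="h", direction ends
+ # up as [[4, -1, "h"]]: sort on entry field 4 (last_modified), oldest
+ # first, with ages rendered in hours.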
+
+ # Look for the options for sort and then do the sort.
+ age = "h"
+ if Cnf.has_key("Helena::Options::Age"):
+ age = Cnf["Helena::Options::Age"]
+ if Cnf.has_key("Helena::Options::New"):
+ # If we produce html we always have oldest first.
+ direction.append([4,-1,"ao"]);
+ else:
+ if Cnf.has_key("Helena::Options::Sort"):
+ for i in Cnf["Helena::Options::Sort"].split(","):
+ if i == "ao":
+ # Age, oldest first.
+ direction.append([4,-1,age]);
+ elif i == "an":
+ # Age, newest first.
+ direction.append([4,1,age]);
+ elif i == "na":
+ # Name, Ascending.
+ direction.append([0,1,0]);
+ elif i == "nd":
+ # Name, Descending.
+ direction.append([0,-1,0]);
+ elif i == "nl":
+ # Notes last.
+ direction.append([3,1,0]);
+ elif i == "nf":
+ # Notes first.
+ direction.append([3,-1,0]);
+ entries.sort(lambda x, y: sortfunc(x, y))
+ # Yes, in theory you can pass several sort options on the command line, but
+ # there is currently no sorting function that sensibly combines them: if you
+ # combine options, only the last one takes effect at the moment.
+ # Will be enhanced in the future.
+
+ if Cnf.has_key("Helena::Options::New"):
+ direction.append([4,1,"ao"]);
+ entries.sort(lambda x, y: sortfunc(x, y))
+ # Output for a html file. First table header. then table_footer.
+ # Any line between them is then a <tr> printed from subroutine table_row.
+ if len(entries) > 0:
+ table_header(type.upper());
+ for entry in entries:
+ (source, version_list, arch_list, note, last_modified, maint, distribution, closes) = entry;
+ table_row(source, version_list, arch_list, time_pp(last_modified), maint, distribution, closes);
+ total_count = len(changes_files);
+ source_count = len(per_source_items);
+ table_footer(type.upper(), source_count, total_count);
+ else:
+ # The "normal" output without any formatting.
+ format="%%-%ds | %%-%ds | %%-%ds%%s | %%s old\n" % (max_source_len, max_version_len, max_arch_len)
+
+ msg = "";
+ for entry in entries:
+ (source, version_list, arch_list, note, last_modified, undef, undef, undef) = entry;
+ msg += format % (source, version_list, arch_list, note, time_pp(last_modified));
+
+ if msg:
+ total_count = len(changes_files);
+ source_count = len(per_source_items);
+ print type.upper();
+ print "-"*len(type);
+ print
+ print msg;
+ print "%s %s source package%s / %s %s package%s in total." % (source_count, type, plural(source_count), total_count, type, plural(total_count));
+ print
+
+
+################################################################################
+
+def main():
+ global Cnf, Katie;
+
+ Cnf = utils.get_conf();
+ Arguments = [('h',"help","Helena::Options::Help"),
+ ('n',"new","Helena::Options::New"),
+ ('s',"sort","Helena::Options::Sort", "HasArg"),
+ ('a',"age","Helena::Options::Age", "HasArg")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Helena::Options::%s" % (i)):
+ Cnf["Helena::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Helena::Options")
+ if Options["Help"]:
+ usage();
+
+ Katie = katie.Katie(Cnf);
+
+ if Cnf.has_key("Helena::Options::New"):
+ header();
+
+ directories = Cnf.ValueList("Helena::Directories");
+ if not directories:
+ directories = [ "byhand", "new" ];
+
+ for directory in directories:
+ changes_files = glob.glob("%s/*.changes" % (Cnf["Dir::Queue::%s" % (directory)]));
+ process_changes_files(changes_files, directory);
+
+ if Cnf.has_key("Helena::Options::New"):
+ footer();
+
+################################################################################
+
+if __name__ == '__main__':
+ main();
--- /dev/null
+#!/usr/bin/env python
+
+# Manually reject packages for proposed-updates
+# Copyright (C) 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: lauren,v 1.4 2004-04-01 17:13:11 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, pg, sys;
+import db_access, katie, logging, utils;
+import apt_pkg;
+
+################################################################################
+
+# Globals
+lauren_version = "$Revision: 1.4 $";
+
+Cnf = None;
+Options = None;
+projectB = None;
+Katie = None;
+Logger = None;
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: lauren .CHANGES[...]
+Manually reject the .CHANGES file(s).
+
+ -h, --help show this help and exit.
+ -m, --message=MSG use this message for the rejection.
+ -s, --no-mail don't send any mail."""
+ sys.exit(exit_code)
+
+################################################################################
+
+def main():
+ global Cnf, Logger, Options, projectB, Katie;
+
+ Cnf = utils.get_conf();
+ Arguments = [('h',"help","Lauren::Options::Help"),
+ ('m',"manual-reject","Lauren::Options::Manual-Reject", "HasArg"),
+ ('s',"no-mail", "Lauren::Options::No-Mail")];
+ for i in [ "help", "manual-reject", "no-mail" ]:
+ if not Cnf.has_key("Lauren::Options::%s" % (i)):
+ Cnf["Lauren::Options::%s" % (i)] = "";
+
+ arguments = apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Lauren::Options");
+ if Options["Help"]:
+ usage();
+ if not arguments:
+ utils.fubar("need at least one .changes filename as an argument.");
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ Katie = katie.Katie(Cnf);
+ Logger = Katie.Logger = logging.Logger(Cnf, "lauren");
+
+ bcc = "X-Katie: lauren %s" % (lauren_version);
+ if Cnf.has_key("Dinstall::Bcc"):
+ Katie.Subst["__BCC__"] = bcc + "\nBcc: %s" % (Cnf["Dinstall::Bcc"]);
+ else:
+ Katie.Subst["__BCC__"] = bcc;
+
+ for arg in arguments:
+ arg = utils.validate_changes_file_arg(arg);
+ Katie.pkg.changes_file = arg;
+ Katie.init_vars();
+ cwd = os.getcwd();
+ os.chdir(Cnf["Suite::Proposed-Updates::CopyKatie"]);
+ Katie.update_vars();
+ os.chdir(cwd);
+ Katie.update_subst();
+
+ print arg
+ done = 0;
+ prompt = "Manual reject, [S]kip, Quit ?";
+ while not done:
+ answer = "XXX";
+
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.search(prompt);
+ if answer == "":
+ answer = m.group(1)
+ answer = answer[:1].upper()
+
+ if answer == 'M':
+ aborted = reject(Options["Manual-Reject"]);
+ if not aborted:
+ done = 1;
+ elif answer == 'S':
+ done = 1;
+ elif answer == 'Q':
+ sys.exit(0)
+
+ Logger.close();
+
+################################################################################
+
+def reject (reject_message = ""):
+ files = Katie.pkg.files;
+ dsc = Katie.pkg.dsc;
+ changes_file = Katie.pkg.changes_file;
+
+ # If we weren't given a manual rejection message, spawn an editor
+ # so the user can add one in...
+ if not reject_message:
+ temp_filename = utils.temp_filename();
+ editor = os.environ.get("EDITOR","vi")
+ answer = 'E';
+ while answer == 'E':
+ os.system("%s %s" % (editor, temp_filename))
+ file = utils.open_file(temp_filename);
+ reject_message = "".join(file.readlines());
+ file.close();
+ print "Reject message:";
+ print utils.prefix_multi_line_string(reject_message," ", include_blank_lines=1);
+ prompt = "[R]eject, Edit, Abandon, Quit ?"
+ answer = "XXX";
+ while prompt.find(answer) == -1:
+ answer = utils.our_raw_input(prompt);
+ m = katie.re_default_answer.search(prompt);
+ if answer == "":
+ answer = m.group(1);
+ answer = answer[:1].upper();
+ os.unlink(temp_filename);
+ if answer == 'A':
+ return 1;
+ elif answer == 'Q':
+ sys.exit(0);
+
+ print "Rejecting.\n"
+
+ # Reject the .changes file
+ Katie.force_reject([changes_file]);
+
+ # Setup the .reason file
+ reason_filename = changes_file[:-8] + ".reason";
+ reject_filename = Cnf["Dir::Queue::Reject"] + '/' + reason_filename;
+
+ # If we fail here someone is probably trying to exploit the race
+ # so let's just raise an exception ...
+ if os.path.exists(reject_filename):
+ os.unlink(reject_filename);
+ reject_fd = os.open(reject_filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
+
+ # Build up the rejection email
+ user_email_address = utils.whoami() + " <%s>" % (Cnf["Dinstall::MyAdminAddress"]);
+
+ Katie.Subst["__REJECTOR_ADDRESS__"] = user_email_address;
+ Katie.Subst["__MANUAL_REJECT_MESSAGE__"] = reject_message;
+ Katie.Subst["__STABLE_REJECTOR__"] = Cnf["Lauren::StableRejector"];
+ Katie.Subst["__MORE_INFO_URL__"] = Cnf["Lauren::MoreInfoURL"];
+ Katie.Subst["__CC__"] = "Cc: " + Cnf["Dinstall::MyEmailAddress"];
+ reject_mail_message = utils.TemplateSubst(Katie.Subst,Cnf["Dir::Templates"]+"/lauren.stable-rejected");
+
+ # Write the rejection email out as the <foo>.reason file
+ os.write(reject_fd, reject_mail_message);
+ os.close(reject_fd);
+
+ # Remove the packages from proposed-updates
+ suite_id = db_access.get_suite_id('proposed-updates');
+
+ projectB.query("BEGIN WORK");
+ # Remove files from proposed-updates suite
+ for file in files.keys():
+ if files[file]["type"] == "dsc":
+ package = dsc["source"];
+ version = dsc["version"]; # NB: not files[file]["version"], that has no epoch
+ q = projectB.query("SELECT id FROM source WHERE source = '%s' AND version = '%s'" % (package, version));
+ ql = q.getresult();
+ if not ql:
+ utils.fubar("reject: Couldn't find %s_%s in source table." % (package, version));
+ source_id = ql[0][0];
+ projectB.query("DELETE FROM src_associations WHERE suite = '%s' AND source = '%s'" % (suite_id, source_id));
+ elif files[file]["type"] == "deb":
+ package = files[file]["package"];
+ version = files[file]["version"];
+ architecture = files[file]["architecture"];
+ q = projectB.query("SELECT b.id FROM binaries b, architecture a WHERE b.package = '%s' AND b.version = '%s' AND (a.arch_string = '%s' OR a.arch_string = 'all') AND b.architecture = a.id" % (package, version, architecture));
+ ql = q.getresult();
+
+ # Horrible hack to work around partial replacement of
+ # packages with newer versions (from different source
+ # packages). This, obviously, should instead check for a
+ # newer version of the package and only do the
+ # warn&continue thing if it finds one.
+ if not ql:
+ utils.warn("reject: Couldn't find %s_%s_%s in binaries table." % (package, version, architecture));
+ else:
+ binary_id = ql[0][0];
+ projectB.query("DELETE FROM bin_associations WHERE suite = '%s' AND bin = '%s'" % (suite_id, binary_id));
+ projectB.query("COMMIT WORK");
+
+ # Send the rejection mail if appropriate
+ if not Options["No-Mail"]:
+ utils.send_mail(reject_mail_message);
+
+ # Finally remove the .katie file
+ katie_file = os.path.join(Cnf["Suite::Proposed-Updates::CopyKatie"], os.path.basename(changes_file[:-8]+".katie"));
+ os.unlink(katie_file);
+
+ Logger.log(["rejected", changes_file]);
+ return 0;
+
+################################################################################
+
+if __name__ == '__main__':
+ main();
--- /dev/null
+#!/usr/bin/env python
+
+# General purpose package removal tool for ftpmaster
+# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: melanie,v 1.44 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# o OpenBSD team wants to get changes incorporated into IPF. Darren no
+# respond.
+# o Ask again -> No respond. Darren coder supreme.
+# o OpenBSD decide to make changes, but only in OpenBSD source
+# tree. Darren hears, gets angry! Decides: "LICENSE NO ALLOW!"
+# o Insert Flame War.
+# o OpenBSD team decide to switch to different packet filter under BSD
+# license. Because Project Goal: Every user should be able to make
+# changes to source tree. IPF license bad!!
+# o Darren try get back: says, NetBSD, FreeBSD allowed! MUAHAHAHAH!!!
+# o Theo say: no care, pf much better than ipf!
+# o Darren changes mind: changes license. But OpenBSD will not change
+# back to ipf. Darren even much more bitter.
+# o Darren so bitterbitter. Decides: I'LL GET BACK BY FORKING OPENBSD AND
+# RELEASING MY OWN VERSION. HEHEHEHEHE.
+
+# http://slashdot.org/comments.pl?sid=26697&cid=2883271
+
+################################################################################
+
+import commands, os, pg, re, sys;
+import utils, db_access;
+import apt_pkg, apt_inst;
+
+################################################################################
+
+re_strip_source_version = re.compile (r'\s+.*$');
+re_build_dep_arch = re.compile(r"\[[^]]+\]");
+
+################################################################################
+
+Cnf = None;
+Options = None;
+projectB = None;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: melanie [OPTIONS] PACKAGE[...]
+Remove PACKAGE(s) from suite(s).
+
+ -a, --architecture=ARCH only act on this architecture
+ -b, --binary remove binaries only
+ -c, --component=COMPONENT act on this component
+ -C, --carbon-copy=EMAIL send a CC of removal message to EMAIL
+ -d, --done=BUG# send removal message as closure to bug#
+ -h, --help show this help and exit
+ -m, --reason=MSG reason for removal
+ -n, --no-action don't do anything
+ -p, --partial don't affect override files
+ -R, --rdep-check check reverse dependencies
+ -s, --suite=SUITE act on this suite
+ -S, --source-only remove source only
+
+ARCH, BUG#, COMPONENT and SUITE can be comma (or space) separated lists, e.g.
+ --architecture=m68k,i386"""
+
+ sys.exit(exit_code)
+
+################################################################################
+
+# "Hudson: What that's great, that's just fucking great man, now what
+# the fuck are we supposed to do? We're in some real pretty shit now
+# man...That's it man, game over man, game over, man! Game over! What
+# the fuck are we gonna do now? What are we gonna do?"
+
+def game_over():
+ answer = utils.our_raw_input("Continue (y/N)? ").lower();
+ if answer != "y":
+ print "Aborted."
+ sys.exit(1);
+
+################################################################################
+
+def reverse_depends_check(removals, suites):
+ print "Checking reverse dependencies..."
+ components = Cnf.ValueList("Suite::%s::Components" % suites[0])
+ dep_problem = 0
+ p2c = {};
+ for architecture in Cnf.ValueList("Suite::%s::Architectures" % suites[0]):
+ if architecture in ["source", "all"]:
+ continue
+ deps = {};
+ virtual_packages = {};
+ for component in components:
+ filename = "%s/dists/%s/%s/binary-%s/Packages.gz" % (Cnf["Dir::Root"], suites[0], component, architecture)
+ # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
+ temp_filename = utils.temp_filename();
+ (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
+ if (result != 0):
+ utils.fubar("Gunzip invocation failed!\n%s\n" % (output), result);
+ packages = utils.open_file(temp_filename);
+ Packages = apt_pkg.ParseTagFile(packages)
+ while Packages.Step():
+ package = Packages.Section.Find("Package")
+ depends = Packages.Section.Find("Depends")
+ if depends:
+ deps[package] = depends
+ provides = Packages.Section.Find("Provides")
+ # Maintain a counter for each virtual package. If a
+ # Provides: exists, set the counter to 0 and count all
+ # provides by a package not in the list for removal.
+ # If the counter stays 0 at the end, we know that only
+ # the to-be-removed packages provided this virtual
+ # package.
+ if provides:
+ for virtual_pkg in provides.split(","):
+ virtual_pkg = virtual_pkg.strip()
+ if virtual_pkg == package: continue
+ if not virtual_packages.has_key(virtual_pkg):
+ virtual_packages[virtual_pkg] = 0
+ if package not in removals:
+ virtual_packages[virtual_pkg] += 1
+ p2c[package] = component;
+ packages.close()
+ os.unlink(temp_filename);
+
+ # If a virtual package is only provided by the to-be-removed
+ # packages, treat the virtual package as to-be-removed too.
+ for virtual_pkg in virtual_packages.keys():
+ if virtual_packages[virtual_pkg] == 0:
+ removals.append(virtual_pkg)
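+ # Example (sketch, not from the original): if packages "foo" and "bar"
+ # both Provide: mail-transport-agent and both are in removals, the
+ # counter for "mail-transport-agent" stays 0, so that virtual package is
+ # appended to removals and reverse dependencies on it are flagged too.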
+
+ # Check binary dependencies (Depends)
+ for package in deps.keys():
+ if package in removals: continue
+ parsed_dep = []
+ try:
+ parsed_dep += apt_pkg.ParseDepends(deps[package])
+ except ValueError, e:
+ print "Error for package %s: %s" % (package, e)
+ for dep in parsed_dep:
+ # Check for partial breakage. If a package has a ORed
+ # dependency, there is only a dependency problem if all
+ # packages in the ORed depends will be removed.
+ unsat = 0
+ for dep_package, _, _ in dep:
+ if dep_package in removals:
+ unsat += 1
+ if unsat == len(dep):
+ component = p2c[package];
+ if component != "main":
+ what = "%s/%s" % (package, component);
+ else:
+ what = "** %s" % (package);
+ print "%s has an unsatisfied dependency on %s: %s" % (what, architecture, utils.pp_deps(dep));
+ dep_problem = 1
+
+ # Check source dependencies (Build-Depends and Build-Depends-Indep)
+ for component in components:
+ filename = "%s/dists/%s/%s/source/Sources.gz" % (Cnf["Dir::Root"], suites[0], component)
+ # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
+ temp_filename = utils.temp_filename();
+ result, output = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename))
+ if result != 0:
+ sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output))
+ sys.exit(result)
+ sources = utils.open_file(temp_filename, "r")
+ Sources = apt_pkg.ParseTagFile(sources)
+ while Sources.Step():
+ source = Sources.Section.Find("Package")
+ if source in removals: continue
+ parsed_dep = []
+ for build_dep_type in ["Build-Depends", "Build-Depends-Indep"]:
+ build_dep = Sources.Section.get(build_dep_type)
+ if build_dep:
+ # Remove [arch] information since we want to see breakage on all arches
+ build_dep = re_build_dep_arch.sub("", build_dep)
+ try:
+ parsed_dep += apt_pkg.ParseDepends(build_dep)
+ except ValueError, e:
+ print "Error for source %s: %s" % (source, e)
+ for dep in parsed_dep:
+ unsat = 0
+ for dep_package, _, _ in dep:
+ if dep_package in removals:
+ unsat += 1
+ if unsat == len(dep):
+ if component != "main":
+ source = "%s/%s" % (source, component);
+ else:
+ source = "** %s" % (source);
+ print "%s has an unsatisfied build-dependency: %s" % (source, utils.pp_deps(dep))
+ dep_problem = 1
+ sources.close()
+ os.unlink(temp_filename)
+
+ if dep_problem:
+ print "Dependency problem found."
+ if not Options["No-Action"]:
+ game_over()
+ else:
+ print "No dependency problem found."
+ print
+
+################################################################################
+
+def main ():
+ global Cnf, Options, projectB;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('h',"help","Melanie::Options::Help"),
+ ('a',"architecture","Melanie::Options::Architecture", "HasArg"),
+ ('b',"binary", "Melanie::Options::Binary-Only"),
+ ('c',"component", "Melanie::Options::Component", "HasArg"),
+ ('C',"carbon-copy", "Melanie::Options::Carbon-Copy", "HasArg"), # Bugs to Cc
+ ('d',"done","Melanie::Options::Done", "HasArg"), # Bugs fixed
+ ('R',"rdep-check", "Melanie::Options::Rdep-Check"),
+ ('m',"reason", "Melanie::Options::Reason", "HasArg"), # Hysterical raisins; -m is old-dinstall option for rejection reason
+ ('n',"no-action","Melanie::Options::No-Action"),
+ ('p',"partial", "Melanie::Options::Partial"),
+ ('s',"suite","Melanie::Options::Suite", "HasArg"),
+ ('S',"source-only", "Melanie::Options::Source-Only"),
+ ];
+
+ for i in [ "architecture", "binary-only", "carbon-copy", "component",
+ "done", "help", "no-action", "partial", "rdep-check", "reason",
+ "source-only" ]:
+ if not Cnf.has_key("Melanie::Options::%s" % (i)):
+ Cnf["Melanie::Options::%s" % (i)] = "";
+ if not Cnf.has_key("Melanie::Options::Suite"):
+ Cnf["Melanie::Options::Suite"] = "unstable";
+
+ arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Melanie::Options")
+
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+ db_access.init(Cnf, projectB);
+
+ # Sanity check options
+ if not arguments:
+ utils.fubar("need at least one package name as an argument.");
+ if Options["Architecture"] and Options["Source-Only"]:
+ utils.fubar("can't use -a/--architecture and -S/--source-only options simultaneously.");
+ if Options["Binary-Only"] and Options["Source-Only"]:
+ utils.fubar("can't use -b/--binary-only and -S/--source-only options simultaneously.");
+ if Options.has_key("Carbon-Copy") and not Options.has_key("Done"):
+ utils.fubar("can't use -C/--carbon-copy without also using -d/--done option.");
+ if Options["Architecture"] and not Options["Partial"]:
+ utils.warn("-a/--architecture implies -p/--partial.");
+ Options["Partial"] = "true";
+
+ # Force the admin to tell someone if we're not doing a rene-led removal
+ # (or closing a bug, which counts as telling someone).
+ if not Options["No-Action"] and not Options["Carbon-Copy"] \
+ and not Options["Done"] and Options["Reason"].find("[rene]") == -1:
+ utils.fubar("Need a -C/--carbon-copy if not closing a bug and not doing a rene-led removal.");
+
+ # Process -C/--carbon-copy
+ #
+ # Accept 3 types of arguments (space separated):
+ # 1) a number - assumed to be a bug number, i.e. nnnnn@bugs.debian.org
+ # 2) the keyword 'package' - cc's $package@packages.debian.org for every argument
+ # 3) contains a '@' - assumed to be an email address, used unmodified
+ #
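+ # For example (illustrative; the address is made up): given
+ # "-C 123456 package joe@example.org", melanie would Cc
+ # 123456@bugs.debian.org, <package>@packages.debian.org for every
+ # PACKAGE argument, and joe@example.org.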
+ carbon_copy = [];
+ for copy_to in utils.split_args(Options.get("Carbon-Copy")):
+ if utils.str_isnum(copy_to):
+ carbon_copy.append(copy_to + "@" + Cnf["Dinstall::BugServer"]);
+ elif copy_to == 'package':
+ for package in arguments:
+ carbon_copy.append(package + "@" + Cnf["Dinstall::PackagesServer"]);
+ if Cnf.has_key("Dinstall::TrackingServer"):
+ carbon_copy.append(package + "@" + Cnf["Dinstall::TrackingServer"]);
+ elif '@' in copy_to:
+ carbon_copy.append(copy_to);
+ else:
+ utils.fubar("Invalid -C/--carbon-copy argument '%s'; not a bug number, 'package' or email address." % (copy_to));
+
+ if Options["Binary-Only"]:
+ field = "b.package";
+ else:
+ field = "s.source";
+ con_packages = "AND %s IN (%s)" % (field, ", ".join(map(repr, arguments)));
+
+ (con_suites, con_architectures, con_components, check_source) = \
+ utils.parse_args(Options);
+
+ # Additional suite checks
+ suite_ids_list = [];
+ suites = utils.split_args(Options["Suite"]);
+ suites_list = utils.join_with_commas_and(suites);
+ if not Options["No-Action"]:
+ for suite in suites:
+ suite_id = db_access.get_suite_id(suite);
+ if suite_id != -1:
+ suite_ids_list.append(suite_id);
+ if suite == "stable":
+ print "**WARNING** About to remove from the stable suite!"
+ print "This should only be done just prior to a (point) release and not at"
+ print "any other time."
+ game_over();
+ elif suite == "testing":
+ print "**WARNING** About to remove from the testing suite!"
+ print "There's no need to do this normally as removals from unstable will"
+ print "propagate to testing automagically."
+ game_over();
+
+ # Additional architecture checks
+ if Options["Architecture"] and check_source:
+ utils.warn("'source' in -a/--architecture makes no sense and is ignored.");
+
+ # Additional component processing
+ over_con_components = con_components.replace("c.id", "component");
+
+ print "Working...",
+ sys.stdout.flush();
+ to_remove = [];
+ maintainers = {};
+
+ # We have 3 modes of package selection: binary-only, source-only
+ # and source+binary. The first two are trivial and obvious; the
+ # latter is a nasty mess, but very nice from a UI perspective so
+ # we try to support it.
+
+ if Options["Binary-Only"]:
+ # Binary-only
+ q = projectB.query("SELECT b.package, b.version, a.arch_string, b.id, b.maintainer FROM binaries b, bin_associations ba, architecture a, suite su, files f, location l, component c WHERE ba.bin = b.id AND ba.suite = su.id AND b.architecture = a.id AND b.file = f.id AND f.location = l.id AND l.component = c.id %s %s %s %s" % (con_packages, con_suites, con_components, con_architectures));
+ for i in q.getresult():
+ to_remove.append(i);
+ else:
+ # Source-only
+ source_packages = {};
+ q = projectB.query("SELECT l.path, f.filename, s.source, s.version, 'source', s.id, s.maintainer FROM source s, src_associations sa, suite su, files f, location l, component c WHERE sa.source = s.id AND sa.suite = su.id AND s.file = f.id AND f.location = l.id AND l.component = c.id %s %s %s" % (con_packages, con_suites, con_components));
+ for i in q.getresult():
+ source_packages[i[2]] = i[:2];
+ to_remove.append(i[2:]);
+ if not Options["Source-Only"]:
+ # Source + Binary
+ binary_packages = {};
+ # First get a list of binary package names we suspect are linked to the source
+ q = projectB.query("SELECT DISTINCT b.package FROM binaries b, source s, src_associations sa, suite su, files f, location l, component c WHERE b.source = s.id AND sa.source = s.id AND sa.suite = su.id AND s.file = f.id AND f.location = l.id AND l.component = c.id %s %s %s" % (con_packages, con_suites, con_components));
+ for i in q.getresult():
+ binary_packages[i[0]] = "";
+ # Then parse each .dsc that we found earlier to see what binary packages it thinks it produces
+ for i in source_packages.keys():
+ filename = "/".join(source_packages[i]);
+ try:
+ dsc = utils.parse_changes(filename);
+ except utils.cant_open_exc:
+ utils.warn("couldn't open '%s'." % (filename));
+ continue;
+ for package in dsc.get("binary").split(','):
+ package = package.strip();
+ binary_packages[package] = "";
+ # Then for each binary package: find any version in
+ # unstable, check the Source: field in the deb matches our
+ # source package and if so add it to the list of packages
+ # to be removed.
+ for package in binary_packages.keys():
+ q = projectB.query("SELECT l.path, f.filename, b.package, b.version, a.arch_string, b.id, b.maintainer FROM binaries b, bin_associations ba, architecture a, suite su, files f, location l, component c WHERE ba.bin = b.id AND ba.suite = su.id AND b.architecture = a.id AND b.file = f.id AND f.location = l.id AND l.component = c.id %s %s %s AND b.package = '%s'" % (con_suites, con_components, con_architectures, package));
+ for i in q.getresult():
+ filename = "/".join(i[:2]);
+ control = apt_pkg.ParseSection(apt_inst.debExtractControl(utils.open_file(filename)))
+ source = control.Find("Source", control.Find("Package"));
+ source = re_strip_source_version.sub('', source);
+ if source_packages.has_key(source):
+ to_remove.append(i[2:]);
+ print "done."
+
+ if not to_remove:
+ print "Nothing to do."
+ sys.exit(0);
+
+ # If we don't have a reason; spawn an editor so the user can add one
+ # Write the rejection email out as the <foo>.reason file
+ if not Options["Reason"] and not Options["No-Action"]:
+ temp_filename = utils.temp_filename();
+ editor = os.environ.get("EDITOR","vi")
+ result = os.system("%s %s" % (editor, temp_filename))
+ if result != 0:
+ utils.fubar ("%s invocation failed for `%s'!" % (editor, temp_filename), result)
+ temp_file = utils.open_file(temp_filename);
+ for line in temp_file.readlines():
+ Options["Reason"] += line;
+ temp_file.close();
+ os.unlink(temp_filename);
+
+ # Generate the summary of what's to be removed
+ d = {};
+ for i in to_remove:
+ package = i[0];
+ version = i[1];
+ architecture = i[2];
+ maintainer = i[4];
+ maintainers[maintainer] = "";
+ if not d.has_key(package):
+ d[package] = {};
+ if not d[package].has_key(version):
+ d[package][version] = [];
+ if architecture not in d[package][version]:
+ d[package][version].append(architecture);
+
+ maintainer_list = [];
+ for maintainer_id in maintainers.keys():
+ maintainer_list.append(db_access.get_maintainer(maintainer_id));
+ summary = "";
+ removals = d.keys();
+ removals.sort();
+ for package in removals:
+ versions = d[package].keys();
+ versions.sort(apt_pkg.VersionCompare);
+ for version in versions:
+ d[package][version].sort(utils.arch_compare_sw);
+ summary += "%10s | %10s | %s\n" % (package, version, ", ".join(d[package][version]));
+ print "Will remove the following packages from %s:" % (suites_list);
+ print
+ print summary
+ print "Maintainer: %s" % ", ".join(maintainer_list)
+ if Options["Done"]:
+ print "Will also close bugs: "+Options["Done"];
+ if carbon_copy:
+ print "Will also send CCs to: " + ", ".join(carbon_copy)
+ print
+ print "------------------- Reason -------------------"
+ print Options["Reason"];
+ print "----------------------------------------------"
+ print
+
+ if Options["Rdep-Check"]:
+ reverse_depends_check(removals, suites);
+
+ # If -n/--no-action, drop out here
+ if Options["No-Action"]:
+ sys.exit(0);
+
+ print "Going to remove the packages now."
+ game_over();
+
+ whoami = utils.whoami();
+ date = commands.getoutput('date -R');
+
+ # Log first; if it all falls apart I want a record that we at least tried.
+ logfile = utils.open_file(Cnf["Melanie::LogFile"], 'a');
+ logfile.write("=========================================================================\n");
+ logfile.write("[Date: %s] [ftpmaster: %s]\n" % (date, whoami));
+ logfile.write("Removed the following packages from %s:\n\n%s" % (suites_list, summary));
+ if Options["Done"]:
+ logfile.write("Closed bugs: %s\n" % (Options["Done"]));
+ logfile.write("\n------------------- Reason -------------------\n%s\n" % (Options["Reason"]));
+ logfile.write("----------------------------------------------\n");
+ logfile.flush();
+
+ dsc_type_id = db_access.get_override_type_id('dsc');
+ deb_type_id = db_access.get_override_type_id('deb');
+
+ # Do the actual deletion
+ print "Deleting...",
+ sys.stdout.flush();
+ projectB.query("BEGIN WORK");
+ for i in to_remove:
+ package = i[0];
+ architecture = i[2];
+ package_id = i[3];
+ for suite_id in suite_ids_list:
+ if architecture == "source":
+ projectB.query("DELETE FROM src_associations WHERE source = %s AND suite = %s" % (package_id, suite_id));
+ #print "DELETE FROM src_associations WHERE source = %s AND suite = %s" % (package_id, suite_id);
+ else:
+ projectB.query("DELETE FROM bin_associations WHERE bin = %s AND suite = %s" % (package_id, suite_id));
+ #print "DELETE FROM bin_associations WHERE bin = %s AND suite = %s" % (package_id, suite_id);
+ # Delete from the override file
+ if not Options["Partial"]:
+ if architecture == "source":
+ type_id = dsc_type_id;
+ else:
+ type_id = deb_type_id;
+ projectB.query("DELETE FROM override WHERE package = '%s' AND type = %s AND suite = %s %s" % (package, type_id, suite_id, over_con_components));
+ projectB.query("COMMIT WORK");
+ print "done."
+
+ # Send the bug closing messages
+ if Options["Done"]:
+ Subst = {};
+ Subst["__MELANIE_ADDRESS__"] = Cnf["Melanie::MyEmailAddress"];
+ Subst["__BUG_SERVER__"] = Cnf["Dinstall::BugServer"];
+ bcc = [];
+ if Cnf.Find("Dinstall::Bcc") != "":
+ bcc.append(Cnf["Dinstall::Bcc"]);
+ if Cnf.Find("Melanie::Bcc") != "":
+ bcc.append(Cnf["Melanie::Bcc"]);
+ if bcc:
+ Subst["__BCC__"] = "Bcc: " + ", ".join(bcc);
+ else:
+ Subst["__BCC__"] = "X-Filler: 42";
+ Subst["__CC__"] = "X-Katie: melanie $Revision: 1.44 $";
+ if carbon_copy:
+ Subst["__CC__"] += "\nCc: " + ", ".join(carbon_copy);
+ Subst["__SUITE_LIST__"] = suites_list;
+ Subst["__SUMMARY__"] = summary;
+ Subst["__ADMIN_ADDRESS__"] = Cnf["Dinstall::MyAdminAddress"];
+ Subst["__DISTRO__"] = Cnf["Dinstall::MyDistribution"];
+ Subst["__WHOAMI__"] = whoami;
+ whereami = utils.where_am_i();
+ Archive = Cnf.SubTree("Archive::%s" % (whereami));
+ Subst["__MASTER_ARCHIVE__"] = Archive["OriginServer"];
+ Subst["__PRIMARY_MIRROR__"] = Archive["PrimaryMirror"];
+ for bug in utils.split_args(Options["Done"]):
+ Subst["__BUG_NUMBER__"] = bug;
+ mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/melanie.bug-close");
+ utils.send_mail(mail_message);
+
+ logfile.write("=========================================================================\n");
+ logfile.close();
+
+#######################################################################################
+
+if __name__ == '__main__':
+ main()
+
--- /dev/null
+#!/usr/bin/env python
+
+# Wrapper for Debian Security team
+# Copyright (C) 2002, 2003, 2004 James Troup <james@nocrew.org>
+# $Id: amber,v 1.11 2005-11-26 07:52:06 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+
+################################################################################
+
+# <aj> neuro: <usual question>?
+# <neuro> aj: PPG: the movie! july 3!
+# <aj> _PHWOAR_!!!!!
+# <aj> (you think you can distract me, and you're right)
+# <aj> urls?!
+# <aj> promo videos?!
+# <aj> where, where!?
+
+################################################################################
+
+import commands, os, pwd, re, sys, time;
+import apt_pkg;
+import katie, utils;
+
+################################################################################
+
+Cnf = None;
+Options = None;
+Katie = None;
+
+re_taint_free = re.compile(r"^['/;\-\+\.\s\w]+$");
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: amber ADV_NUMBER CHANGES_FILE[...]
+Install CHANGES_FILE(s) as security advisory ADV_NUMBER
+
+ -h, --help show this help and exit
+ -n, --no-action don't do anything
+
+"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def do_upload(changes_files):
+ file_list = "";
+ suites = {};
+ component_mapping = {};
+ for component in Cnf.SubTree("Amber::ComponentMappings").List():
+ component_mapping[component] = Cnf["Amber::ComponentMappings::%s" % (component)];
+ uploads = {}; # uploads[uri] = file_list;
+ changesfiles = {}; # changesfiles[uri] = file_list;
+ package_list = {} # package_list[source_name][version];
+ changes_files.sort(utils.changes_compare);
+ for changes_file in changes_files:
+ changes_file = utils.validate_changes_file_arg(changes_file);
+ # Reset variables
+ components = {};
+ upload_uris = {};
+ file_list = [];
+ Katie.init_vars();
+ # Parse the .katie file for the .changes file
+ Katie.pkg.changes_file = changes_file;
+ Katie.update_vars();
+ files = Katie.pkg.files;
+ changes = Katie.pkg.changes;
+ dsc = Katie.pkg.dsc;
+ # We have the changes; skip it if it's amd64, so we don't upload it to ftp-master
+ if changes["architecture"].has_key("amd64"):
+ print "Not uploading amd64 part to ftp-master\n";
+ continue
+ if changes["distribution"].has_key("oldstable-security"):
+ print "Not uploading oldstable-security changes to ftp-master\n";
+ continue
+ # Build the file list for this .changes file
+ for file in files.keys():
+ poolname = os.path.join(Cnf["Dir::Root"], Cnf["Dir::PoolRoot"],
+ utils.poolify(changes["source"], files[file]["component"]),
+ file);
+ file_list.append(poolname);
+ orig_component = files[file].get("original component", files[file]["component"]);
+ components[orig_component] = "";
+ # Determine the upload uri for this .changes file
+ for component in components.keys():
+ upload_uri = component_mapping.get(component);
+ if upload_uri:
+ upload_uris[upload_uri] = "";
+ num_upload_uris = len(upload_uris.keys());
+ if num_upload_uris == 0:
+ utils.fubar("%s: No valid upload URI found from components (%s)."
+ % (changes_file, ", ".join(components.keys())));
+ elif num_upload_uris > 1:
+ utils.fubar("%s: more than one upload URI (%s) from components (%s)."
+ % (changes_file, ", ".join(upload_uris.keys()),
+ ", ".join(components.keys())));
+ upload_uri = upload_uris.keys()[0];
+ # Update the file list for the upload uri
+ if not uploads.has_key(upload_uri):
+ uploads[upload_uri] = [];
+ uploads[upload_uri].extend(file_list);
+ # Update the changes list for the upload uri
+ if not changesfiles.has_key(upload_uri):
+ changesfiles[upload_uri] = [];
+ changesfiles[upload_uri].append(changes_file);
+ # Remember the suites and source name/version
+ for suite in changes["distribution"].keys():
+ suites[suite] = "";
+ # Remember the source name and version
+ if changes["architecture"].has_key("source") and \
+ changes["distribution"].has_key("testing"):
+ if not package_list.has_key(dsc["source"]):
+ package_list[dsc["source"]] = {};
+ package_list[dsc["source"]][dsc["version"]] = "";
+
+ if not Options["No-Action"]:
+ answer = yes_no("Upload the files to the main archive (Y/n)?");
+ if answer != "y":
+ return;
+
+ for uri in uploads.keys():
+ uploads[uri].extend(changesfiles[uri]);
+ (host, path) = uri.split(":");
+ file_list = " ".join(uploads[uri]);
+ print "Uploading files to %s..." % (host);
+ spawn("lftp -c 'open %s; cd %s; put %s'" % (host, path, file_list));
+
+ if not Options["No-Action"]:
+ filename = "%s/testing-processed" % (Cnf["Dir::Log"]);
+ file = utils.open_file(filename, 'a');
+ for source in package_list.keys():
+ for version in package_list[source].keys():
+ file.write(" ".join([source, version])+'\n');
+ file.close();
+
+######################################################################
+# This function was originally written by aj and NIHishly merged into
+# amber by me.
+
+def make_advisory(advisory_nr, changes_files):
+ adv_packages = [];
+ updated_pkgs = {}; # updated_pkgs[distro][arch][file] = {path,md5,size}
+
+ for arg in changes_files:
+ arg = utils.validate_changes_file_arg(arg);
+ Katie.pkg.changes_file = arg;
+ Katie.init_vars();
+ Katie.update_vars();
+
+ src = Katie.pkg.changes["source"];
+ if src not in adv_packages:
+ adv_packages += [src];
+
+ suites = Katie.pkg.changes["distribution"].keys();
+ for suite in suites:
+ if not updated_pkgs.has_key(suite):
+ updated_pkgs[suite] = {};
+
+ files = Katie.pkg.files;
+ for file in files.keys():
+ arch = files[file]["architecture"];
+ md5 = files[file]["md5sum"];
+ size = files[file]["size"];
+ poolname = Cnf["Dir::PoolRoot"] + \
+ utils.poolify(src, files[file]["component"]);
+ if arch == "source" and file.endswith(".dsc"):
+ dscpoolname = poolname;
+ for suite in suites:
+ if not updated_pkgs[suite].has_key(arch):
+ updated_pkgs[suite][arch] = {}
+ updated_pkgs[suite][arch][file] = {
+ "md5": md5, "size": size,
+ "poolname": poolname };
+
+ dsc_files = Katie.pkg.dsc_files;
+ for file in dsc_files.keys():
+ arch = "source"
+ if not dsc_files[file].has_key("files id"):
+ continue;
+
+ # otherwise, it's already in the pool and needs to be
+ # listed specially
+ md5 = dsc_files[file]["md5sum"];
+ size = dsc_files[file]["size"];
+ for suite in suites:
+ if not updated_pkgs[suite].has_key(arch):
+ updated_pkgs[suite][arch] = {};
+ updated_pkgs[suite][arch][file] = {
+ "md5": md5, "size": size,
+ "poolname": dscpoolname };
+
+ if os.environ.has_key("SUDO_UID"):
+ whoami = long(os.environ["SUDO_UID"]);
+ else:
+ whoami = os.getuid();
+ whoamifull = pwd.getpwuid(whoami);
+ username = whoamifull[4].split(",")[0];
+
+ Subst = {
+ "__ADVISORY__": advisory_nr,
+ "__WHOAMI__": username,
+ "__DATE__": time.strftime("%B %d, %Y", time.gmtime(time.time())),
+ "__PACKAGE__": ", ".join(adv_packages),
+ "__KATIE_ADDRESS__": Cnf["Dinstall::MyEmailAddress"]
+ };
+
+ if Cnf.has_key("Dinstall::Bcc"):
+ Subst["__BCC__"] = "Bcc: %s" % (Cnf["Dinstall::Bcc"]);
+
+ adv = "";
+ archive = Cnf["Archive::%s::PrimaryMirror" % (utils.where_am_i())];
+ for suite in updated_pkgs.keys():
+ suite_header = "%s %s (%s)" % (Cnf["Dinstall::MyDistribution"],
+ Cnf["Suite::%s::Version" % suite], suite);
+ adv += "%s\n%s\n\n" % (suite_header, "-"*len(suite_header));
+
+ arches = Cnf.ValueList("Suite::%s::Architectures" % suite);
+ if "source" in arches:
+ arches.remove("source");
+ if "all" in arches:
+ arches.remove("all");
+ arches.sort();
+
+ adv += " %s was released for %s.\n\n" % (
+ suite.capitalize(), utils.join_with_commas_and(arches));
+
+ for a in ["source", "all"] + arches:
+ if not updated_pkgs[suite].has_key(a):
+ continue;
+
+ if a == "source":
+ adv += " Source archives:\n\n";
+ elif a == "all":
+ adv += " Architecture independent packages:\n\n";
+ else:
+ adv += " %s architecture (%s)\n\n" % (a,
+ Cnf["Architectures::%s" % a]);
+
+ for file in updated_pkgs[suite][a].keys():
+ adv += " http://%s/%s%s\n" % (
+ archive, updated_pkgs[suite][a][file]["poolname"], file);
+ adv += " Size/MD5 checksum: %8s %s\n" % (
+ updated_pkgs[suite][a][file]["size"],
+ updated_pkgs[suite][a][file]["md5"]);
+ adv += "\n";
+ adv = adv.rstrip();
+
+ Subst["__ADVISORY_TEXT__"] = adv;
+
+ adv = utils.TemplateSubst(Subst, Cnf["Dir::Templates"]+"/amber.advisory");
+ if not Options["No-Action"]:
+ utils.send_mail (adv);
+ else:
+ print "[<Would send template advisory mail>]";
+
+######################################################################
+
+def init():
+ global Cnf, Katie, Options;
+
+ apt_pkg.init();
+ Cnf = utils.get_conf();
+
+ Arguments = [('h', "help", "Amber::Options::Help"),
+ ('n', "no-action", "Amber::Options::No-Action")];
+
+ for i in [ "help", "no-action" ]:
+ Cnf["Amber::Options::%s" % (i)] = "";
+
+ arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Amber::Options")
+ Katie = katie.Katie(Cnf);
+
+ if Options["Help"]:
+ usage(0);
+
+ if not arguments:
+ usage(1);
+
+ advisory_number = arguments[0];
+ changes_files = arguments[1:];
+ if advisory_number.endswith(".changes"):
+ utils.warn("first argument must be the advisory number.");
+ usage(1);
+ for file in changes_files:
+ file = utils.validate_changes_file_arg(file);
+ return (advisory_number, changes_files);
+
+######################################################################
+
+def yes_no(prompt):
+ while 1:
+ answer = utils.our_raw_input(prompt+" ").lower();
+ if answer == "y" or answer == "n":
+ break;
+ else:
+ print "Invalid answer; please try again.";
+ return answer;
+
+######################################################################
+
+def spawn(command):
+ if not re_taint_free.match(command):
+ utils.fubar("Invalid character in \"%s\"." % (command));
+
+ if Options["No-Action"]:
+ print "[%s]" % (command);
+ else:
+ (result, output) = commands.getstatusoutput(command);
+ if (result != 0):
+ utils.fubar("Invocation of '%s' failed:\n%s\n" % (command, output), result);
+
+######################################################################
+
+
+def main():
+ (advisory_number, changes_files) = init();
+
+ if not Options["No-Action"]:
+ print "About to install the following files: "
+ for file in changes_files:
+ print " %s" % (file);
+ answer = yes_no("Continue (Y/n)?");
+ if answer == "n":
+ sys.exit(0);
+
+ os.chdir(Cnf["Dir::Queue::Accepted"]);
+ print "Installing packages into the archive...";
+ spawn("%s/kelly -pa %s" % (Cnf["Dir::Katie"], " ".join(changes_files)));
+ os.chdir(Cnf["Dir::Katie"]);
+ print "Updating file lists for apt-ftparchive...";
+ spawn("./jenna");
+ print "Updating Packages and Sources files...";
+ spawn("apt-ftparchive generate %s" % (utils.which_apt_conf_file()));
+ print "Updating Release files...";
+ spawn("./ziyi");
+
+ if not Options["No-Action"]:
+ os.chdir(Cnf["Dir::Queue::Done"]);
+ else:
+ os.chdir(Cnf["Dir::Queue::Accepted"]);
+ print "Generating template advisory...";
+ make_advisory(advisory_number, changes_files);
+
+ # Trigger security mirrors
+ spawn("sudo -u archvsync /home/archvsync/signal_security");
+
+ do_upload(changes_files);
+
+################################################################################
+
+if __name__ == '__main__':
+ main();
+
+################################################################################
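A note on the pattern above: amber's `spawn()` refuses to shell out unless the entire command matches a whitelist regex (`re_taint_free`). A minimal Python 3 sketch of the same idea, with `subprocess` standing in for the long-deprecated `commands` module (`spawn` here and its `no_action` flag mirror amber's behaviour but are an illustrative reconstruction, not amber's actual code):

```python
import re
import subprocess

# Whitelist of characters considered safe to pass to a shell,
# mirroring amber's re_taint_free check. Anything outside this
# set (e.g. $, backticks, parentheses) is rejected outright.
RE_TAINT_FREE = re.compile(r"^['/;\-\+\.\s\w]+$")

def spawn(command, no_action=False):
    """Run `command` through the shell only if it is taint-free."""
    if not RE_TAINT_FREE.match(command):
        raise ValueError("Invalid character in %r." % command)
    if no_action:
        # Dry-run mode: show what would be executed, as amber does.
        print("[%s]" % command)
        return 0
    return subprocess.call(command, shell=True)
```

Whitelisting the whole command string is a blunt instrument; modern code would pass an argument list and avoid the shell entirely, but the whitelist suits amber's case where the command is assembled from config values.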
--- /dev/null
+#!/usr/bin/env python
+
+# Launch dak functionality
+# Copyright (c) 2005 Anthony Towns <ajt@debian.org>
+# $Id: dak,v 1.1 2005-11-17 08:47:31 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# well I don't know where you're from but in AMERICA, there's a little
+# thing called "abstinent until proven guilty."
+# -- http://harrietmiers.blogspot.com/2005/10/wow-i-feel-loved.html
+
+# (if James had a blog, I bet I could find a funny quote in it to use!)
+
+################################################################################
+
+import sys
+
+################################################################################
+
+# maps a command name to a module name
+functionality = [
+ ("ls", "Show which suites packages are in",
+ ("madison", "main"), ["madison"]),
+ ("rm", "Remove packages from suites", "melanie"),
+
+ ("decode-dot-dak", "Display contents of a .katie file", "ashley"),
+ ("override", "Query/change the overrides", "alicia"),
+
+ ("install", "Install a package from accepted (security only)",
+ "amber"), # XXX - hmm (ajt)
+ ("reject-proposed-updates", "Manually reject from proposed-updates", "lauren"),
+ ("process-new", "Process NEW and BYHAND packages", "lisa"),
+
+ ("control-overrides", "Manipulate/list override entries in bulk",
+ "natalie"),
+ ("control-suite", "Manipulate suites in bulk", "heidi"),
+
+ ("stats", "Generate stats pr0n", "saffron"),
+ ("cruft-report", "Check for obsolete or duplicated packages",
+ "rene"),
+ ("queue-report", "Produce a report on NEW and BYHAND packages",
+ "helena"),
+ ("compare-suites", "Show fixable discrepancies between suites",
+ "andrea"),
+
+ ("check-archive", "Archive sanity checks", "tea"),
+ ("check-overrides", "Override cruft checks", "cindy"),
+ ("check-proposed-updates", "Dependency checking for proposed-updates",
+ "jeri"),
+
+ ("examine-package", "Show information useful for NEW processing",
+ "fernanda"),
+
+ ("init-db", "Update the database to match the conf file",
+ "alyson"),
+ ("init-dirs", "Initial setup of the archive", "rose"),
+ ("import-archive", "Populate SQL database based from an archive tree",
+ "neve"),
+
+ ("poolize", "Move packages from dists/ to pool/", "catherine"),
+ ("symlink-dists", "Generate compatibility symlinks from dists/",
+ "claire"),
+
+ ("process-unchecked", "Process packages in queue/unchecked", "jennifer"),
+
+ ("process-accepted", "Install packages into the pool", "kelly"),
+ ("generate-releases", "Generate Release files", "ziyi"),
+ ("generate-index-diffs", "Generate .diff/Index files", "tiffani"),
+
+ ("make-suite-file-list",
+ "Generate lists of packages per suite for apt-ftparchive", "jenna"),
+ ("make-maintainers", "Generates Maintainers file for BTS etc",
+ "charisma"),
+ ("make-overrides", "Generates override files", "denise"),
+
+ ("mirror-split", "Split the pool/ by architecture groups",
+ "billie"),
+
+ ("clean-proposed-updates", "Remove obsolete .changes from proposed-updates",
+ "halle"),
+ ("clean-queues", "Clean cruft from incoming", "shania"),
+ ("clean-suites",
+ "Clean unused/superseded packages from the archive", "rhona"),
+
+ ("split-done", "Split queue/done into a date-based hierarchy",
+ "nina"),
+
+ ("import-ldap-fingerprints",
+ "Syncs fingerprint and uid tables with Debian LDAP db", "emilie"),
+ ("import-users-from-passwd",
+ "Sync PostgreSQL users with passwd file", "julia"),
+ ("find-null-maintainers",
+ "Check for users with no packages in the archive", "rosamund"),
+]
+
+names = {}
+for f in functionality:
+ if isinstance(f[2], str):
+ names[f[2]] = names[f[0]] = (f[2], "main")
+ else:
+ names[f[0]] = f[2]
+ for a in f[3]: names[a] = f[2]
+
+################################################################################
+
+def main():
+ if len(sys.argv) == 0:
+ print "err, argc == 0? how is that possible?"
+ sys.exit(1);
+ elif len(sys.argv) == 1 or (len(sys.argv) == 2 and sys.argv[1] == "--help"):
+ print "Sub commands:"
+ for f in functionality:
+ print " %-23s %s" % (f[0], f[1])
+ sys.exit(0);
+ else:
+ # should set PATH based on sys.argv[0] maybe
+ # possibly should set names based on sys.argv[0] too
+ sys.path = [sys.path[0]+"/py-symlinks"] + sys.path
+
+ cmdname = sys.argv[0]
+ cmdname = cmdname[cmdname.rfind("/")+1:]
+ if cmdname in names:
+ pass # invoke directly
+ else:
+ cmdname = sys.argv[1]
+ sys.argv = [sys.argv[0] + " " + sys.argv[1]] + sys.argv[2:]
+ if cmdname not in names:
+ match = []
+ for f in names:
+ if f.startswith(cmdname):
+ match.append(f)
+ if len(match) == 1:
+ cmdname = match[0]
+ elif len(match) > 1:
+ print "ambiguous command: %s" % ", ".join(match)
+ sys.exit(1);
+ else:
+ print "unknown command \"%s\"" % (cmdname)
+ sys.exit(1);
+
+ func = names[cmdname]
+ x = __import__(func[0])
+ x.__getattribute__(func[1])()
+
+if __name__ == "__main__":
+ main()
+
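The dak dispatcher above resolves a sub-command first by exact name, then by unique prefix, erroring on ambiguous or unknown input. That lookup can be sketched as a standalone helper (`resolve_command` is a hypothetical name; dak does this inline in `main()`):

```python
def resolve_command(cmdname, names):
    """Resolve a sub-command the way dak's main() does: exact match
    first, then a unique-prefix match. Ambiguous or unknown names
    raise; `names` maps command name -> implementation."""
    if cmdname in names:
        return cmdname
    matches = [n for n in names if n.startswith(cmdname)]
    if len(matches) == 1:
        return matches[0]
    if matches:
        raise ValueError("ambiguous command: %s" % ", ".join(sorted(matches)))
    raise ValueError('unknown command "%s"' % cmdname)
```

Prefix matching keeps frequent commands short to type (`dak q` for `queue-report`, say) while still failing loudly when two commands share the typed prefix.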
--- /dev/null
+#!/usr/bin/env python
+
+# Copyright (C) 2004, 2005 James Troup <james@nocrew.org>
+# $Id: nina,v 1.2 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import glob, os, stat, time;
+import utils;
+
+################################################################################
+
+def main():
+ Cnf = utils.get_conf()
+ count = 0;
+ os.chdir(Cnf["Dir::Queue::Done"])
+ files = glob.glob("%s/*" % (Cnf["Dir::Queue::Done"]));
+ for filename in files:
+ if os.path.isfile(filename):
+ mtime = time.gmtime(os.stat(filename)[stat.ST_MTIME]);
+ dirname = time.strftime("%Y/%m/%d", mtime);
+ if not os.path.exists(dirname):
+ print "Creating: %s" % (dirname);
+ os.makedirs(dirname);
+ dest = dirname + '/' + os.path.basename(filename);
+ if os.path.exists(dest):
+ utils.fubar("%s already exists." % (dest));
+ print "Move: %s -> %s" % (filename, dest);
+ os.rename(filename, dest);
+ count = count + 1;
+ print "Moved %d files." % (count);
+
+############################################################
+
+if __name__ == '__main__':
+ main()
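nina's move loop can be sketched in modern Python 3 as follows; `split_done` is an illustrative name, and `os.makedirs(..., exist_ok=True)` replaces the explicit `os.path.exists()` check in the Python 2 original:

```python
import os
import time

def split_done(done_dir):
    """Move each regular file in done_dir into a YYYY/MM/DD
    subdirectory derived from its mtime, as nina does for
    queue/done. Returns the number of files moved."""
    moved = 0
    for name in sorted(os.listdir(done_dir)):
        path = os.path.join(done_dir, name)
        if not os.path.isfile(path):
            continue
        # Bucket by modification time, in UTC, like the original.
        mtime = time.gmtime(os.stat(path).st_mtime)
        subdir = os.path.join(done_dir, time.strftime("%Y/%m/%d", mtime))
        os.makedirs(subdir, exist_ok=True)
        dest = os.path.join(subdir, name)
        if os.path.exists(dest):
            raise RuntimeError("%s already exists." % dest)
        os.rename(path, dest)
        moved += 1
    return moved
```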
--- /dev/null
+#!/usr/bin/env python
+
+# Various statistical pr0nography fun and games
+# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
+# $Id: saffron,v 1.3 2005-11-15 09:50:32 ajt Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# <aj> can we change the standards instead?
+# <neuro> standards?
+# <aj> whatever we're not conforming to
+# <aj> if there's no written standard, why don't we declare linux as
+# the defacto standard
+# <aj> go us!
+
+# [aj's attempt to avoid ABI changes for released architecture(s)]
+
+################################################################################
+
+import pg, sys;
+import utils;
+import apt_pkg;
+
+################################################################################
+
+Cnf = None;
+projectB = None;
+
+################################################################################
+
+def usage(exit_code=0):
+ print """Usage: saffron STAT
+Print various stats.
+
+ -h, --help show this help and exit.
+
+The following STAT modes are available:
+
+ arch-space - displays space used by each architecture
+ pkg-nums - displays the number of packages by suite/architecture
+ daily-install - displays daily install stats suitable for graphing
+"""
+ sys.exit(exit_code)
+
+################################################################################
+
+def per_arch_space_use():
+ q = projectB.query("""
+SELECT a.arch_string as Architecture, sum(f.size)
+ FROM files f, binaries b, architecture a
+ WHERE a.id=b.architecture AND f.id=b.file
+ GROUP BY a.arch_string""");
+ print q;
+ q = projectB.query("SELECT sum(size) FROM files WHERE filename ~ '[.](diff[.]gz|tar[.]gz|dsc)$'");
+ print q;
+
+################################################################################
+
+def daily_install_stats():
+ stats = {};
+ file = utils.open_file("2001-11");
+ for line in file.readlines():
+ split = line.strip().split('~');
+ program = split[1];
+ if program != "katie":
+ continue;
+ action = split[2];
+ if action != "installing changes" and action != "installed":
+ continue;
+ date = split[0][:8];
+ if not stats.has_key(date):
+ stats[date] = {};
+ stats[date]["packages"] = 0;
+ stats[date]["size"] = 0.0;
+ if action == "installing changes":
+ stats[date]["packages"] += 1;
+ elif action == "installed":
+ stats[date]["size"] += float(split[5]);
+
+ dates = stats.keys();
+ dates.sort();
+ for date in dates:
+ packages = stats[date]["packages"]
+ size = int(stats[date]["size"] / 1024.0 / 1024.0)
+ print "%s %s %s" % (date, packages, size);
+
+################################################################################
+
+def longest(list):
+ longest = 0;
+ for i in list:
+ l = len(i);
+ if l > longest:
+ longest = l;
+ return longest;
+
+def suite_sort(a, b):
+ if Cnf.has_key("Suite::%s::Priority" % (a)):
+ a_priority = int(Cnf["Suite::%s::Priority" % (a)]);
+ else:
+ a_priority = 0;
+ if Cnf.has_key("Suite::%s::Priority" % (b)):
+ b_priority = int(Cnf["Suite::%s::Priority" % (b)]);
+ else:
+ b_priority = 0;
+ return cmp(a_priority, b_priority);
+
+def output_format(suite):
+ output_suite = [];
+ for word in suite.split("-"):
+ output_suite.append(word[0]);
+ return "-".join(output_suite);
+
+# Obvious query with GROUP BY and mapped names -> 50 seconds
+# GROUP BY but ids instead of suite/architecture names -> 28 seconds
+# Simple query -> 14 seconds
+# Simple query into large dictionary + processing -> 21 seconds
+# Simple query into large pre-created dictionary + processing -> 18 seconds
+
+def number_of_packages():
+ arches = {};
+ arch_ids = {};
+ suites = {};
+ suite_ids = {};
+ d = {};
+ # Build up suite mapping
+ q = projectB.query("SELECT id, suite_name FROM suite");
+ suite_ql = q.getresult();
+ for i in suite_ql:
+ (id, name) = i;
+ suites[id] = name;
+ suite_ids[name] = id;
+ # Build up architecture mapping
+ q = projectB.query("SELECT id, arch_string FROM architecture");
+ for i in q.getresult():
+ (id, name) = i;
+ arches[id] = name;
+ arch_ids[name] = id;
+ # Pre-create the dictionary
+ for suite_id in suites.keys():
+ d[suite_id] = {};
+ for arch_id in arches.keys():
+ d[suite_id][arch_id] = 0;
+ # Get the raw data for binaries
+ q = projectB.query("""
+SELECT ba.suite, b.architecture
+ FROM binaries b, bin_associations ba
+ WHERE b.id = ba.bin""");
+ # Simulate 'GROUP BY suite, architecture' with a dictionary
+ for i in q.getresult():
+ (suite_id, arch_id) = i;
+ d[suite_id][arch_id] = d[suite_id][arch_id] + 1;
+ # Get the raw data for source
+ arch_id = arch_ids["source"];
+ q = projectB.query("""
+SELECT suite, count(suite) FROM src_associations GROUP BY suite;""");
+ for i in q.getresult():
+ (suite_id, count) = i;
+ d[suite_id][arch_id] = d[suite_id][arch_id] + count;
+ ## Print the results
+ # Setup
+ suite_list = suites.values();
+ suite_list.sort(suite_sort);
+ suite_id_list = [];
+ suite_arches = {};
+ for suite in suite_list:
+ suite_id = suite_ids[suite];
+ suite_arches[suite_id] = {};
+ for arch in Cnf.ValueList("Suite::%s::Architectures" % (suite)):
+ suite_arches[suite_id][arch] = "";
+ suite_id_list.append(suite_id);
+ output_list = map(lambda x: output_format(x), suite_list);
+ longest_suite = longest(output_list);
+ arch_list = arches.values();
+ arch_list.sort();
+ longest_arch = longest(arch_list);
+ # Header
+ output = (" "*longest_arch) + " |"
+ for suite in output_list:
+ output = output + suite.center(longest_suite)+" |";
+ output = output + "\n"+(len(output)*"-")+"\n";
+ # per-arch data
+ for arch in arch_list:
+ arch_id = arch_ids[arch];
+ output = output + arch.center(longest_arch)+" |";
+ for suite_id in suite_id_list:
+ if suite_arches[suite_id].has_key(arch):
+ count = repr(d[suite_id][arch_id]);
+ else:
+ count = "-";
+ output = output + count.rjust(longest_suite)+" |";
+ output = output + "\n";
+ print output;
+
+################################################################################
+
+def main ():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf();
+ Arguments = [('h',"help","Saffron::Options::Help")];
+ for i in [ "help" ]:
+ if not Cnf.has_key("Saffron::Options::%s" % (i)):
+ Cnf["Saffron::Options::%s" % (i)] = "";
+
+ args = apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
+
+ Options = Cnf.SubTree("Saffron::Options")
+ if Options["Help"]:
+ usage();
+
+ if len(args) < 1:
+ utils.warn("saffron requires at least one argument");
+ usage(1);
+ elif len(args) > 1:
+ utils.warn("saffron accepts only one argument");
+ usage(1);
+ mode = args[0].lower();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+
+ if mode == "arch-space":
+ per_arch_space_use();
+ elif mode == "pkg-nums":
+ number_of_packages();
+ elif mode == "daily-install":
+ daily_install_stats();
+ else:
+ utils.warn("unknown mode '%s'" % (mode));
+ usage(1);
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+
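The timing notes in saffron's `number_of_packages()` above explain why it simulates `GROUP BY suite, architecture` client-side: a simple un-grouped query streamed into a pre-created counter matrix beat the grouped SQL query on that hardware. The counting step can be sketched like this (`group_count` is an illustrative name):

```python
def group_count(rows, suites, arches):
    """Simulate SQL 'GROUP BY suite, architecture' in Python:
    pre-create a suite x arch matrix of zeros, then bump a counter
    for each (suite_id, arch_id) row from the un-grouped query.
    Pre-creating the matrix avoids per-row key-existence checks."""
    counts = {s: {a: 0 for a in arches} for s in suites}
    for suite_id, arch_id in rows:
        counts[suite_id][arch_id] += 1
    return counts
```

Whether this beats the database's own GROUP BY is workload-dependent; saffron's comments record its measurements on a 2005-era PostgreSQL, and the trade-off should be re-measured before copying the pattern.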
--- /dev/null
+#!/usr/bin/env python
+
+# 'Fix' stable to make debian-cd and dpkg -BORGiE users happy
+# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
+# $Id: claire.py,v 1.19 2003-09-07 13:52:11 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# _ _ ____
+# | \ | | __ )_
+# | \| | _ (_)
+# | |\ | |_) | This has been obsoleted since the release of woody.
+# |_| \_|____(_)
+#
+
+################################################################################
+
+import os, pg, re, sys;
+import utils, db_access;
+import apt_pkg;
+
+################################################################################
+
+re_strip_section_prefix = re.compile(r'.*/');
+
+Cnf = None;
+projectB = None;
+
+################################################################################
+
+def usage (exit_code=0):
+ print """Usage: claire [OPTIONS]
+Create compatibility symlinks from legacy locations to the pool.
+
+ -v, --verbose explain what is being done
+ -h, --help show this help and exit"""
+
+ sys.exit(exit_code)
+
+################################################################################
+
+def fix_component_section (component, section):
+ if component == "":
+ component = utils.extract_component_from_section(section)[1];
+
+ # FIXME: ugly hacks to work around override brain damage
+ section = re_strip_section_prefix.sub('', section);
+ section = section.lower().replace('non-us', '');
+ if section == "main" or section == "contrib" or section == "non-free":
+ section = '';
+ if section != '':
+ section += '/';
+
+ return (component, section);
+
+################################################################################
+
+def find_dislocated_stable(Cnf, projectB):
+ dislocated_files = {}
+
+ codename = Cnf["Suite::Stable::Codename"];
+
+ # Source
+ q = projectB.query("""
+SELECT DISTINCT ON (f.id) c.name, sec.section, l.path, f.filename, f.id
+ FROM component c, override o, section sec, source s, files f, location l,
+ dsc_files df, suite su, src_associations sa, files f2, location l2
+ WHERE su.suite_name = 'stable' AND sa.suite = su.id AND sa.source = s.id
+ AND f2.id = s.file AND f2.location = l2.id AND df.source = s.id
+ AND f.id = df.file AND f.location = l.id AND o.package = s.source
+ AND sec.id = o.section AND NOT (f.filename ~ '^%s/')
+ AND l.component = c.id AND o.suite = su.id
+""" % (codename));
+# Only needed if you have files in legacy-mixed locations
+# UNION SELECT DISTINCT ON (f.id) null, sec.section, l.path, f.filename, f.id
+# FROM component c, override o, section sec, source s, files f, location l,
+# dsc_files df, suite su, src_associations sa, files f2, location l2
+# WHERE su.suite_name = 'stable' AND sa.suite = su.id AND sa.source = s.id
+# AND f2.id = s.file AND f2.location = l2.id AND df.source = s.id
+# AND f.id = df.file AND f.location = l.id AND o.package = s.source
+# AND sec.id = o.section AND NOT (f.filename ~ '^%s/') AND o.suite = su.id
+# AND NOT EXISTS (SELECT 1 FROM location l WHERE l.component IS NOT NULL AND f.location = l.id);
+ for i in q.getresult():
+ (component, section) = fix_component_section(i[0], i[1]);
+ if Cnf.FindB("Dinstall::LegacyStableHasNoSections"):
+ section="";
+ dest = "%sdists/%s/%s/source/%s%s" % (Cnf["Dir::Root"], codename, component, section, os.path.basename(i[3]));
+ if not os.path.exists(dest):
+ src = i[2]+i[3];
+ src = utils.clean_symlink(src, dest, Cnf["Dir::Root"]);
+ if Cnf.Find("Claire::Options::Verbose"):
+ print src+' -> '+dest
+ os.symlink(src, dest);
+ dislocated_files[i[4]] = dest;
+
+ # Binary
+ architectures = filter(utils.real_arch, Cnf.ValueList("Suite::Stable::Architectures"));
+ q = projectB.query("""
+SELECT DISTINCT ON (f.id) c.name, a.arch_string, sec.section, b.package,
+ b.version, l.path, f.filename, f.id
+ FROM architecture a, bin_associations ba, binaries b, component c, files f,
+ location l, override o, section sec, suite su
+ WHERE su.suite_name = 'stable' AND ba.suite = su.id AND ba.bin = b.id
+ AND f.id = b.file AND f.location = l.id AND o.package = b.package
+ AND sec.id = o.section AND NOT (f.filename ~ '^%s/')
+ AND b.architecture = a.id AND l.component = c.id AND o.suite = su.id""" %
+ (codename));
+# Only needed if you have files in legacy-mixed locations
+# UNION SELECT DISTINCT ON (f.id) null, a.arch_string, sec.section, b.package,
+# b.version, l.path, f.filename, f.id
+# FROM architecture a, bin_associations ba, binaries b, component c, files f,
+# location l, override o, section sec, suite su
+# WHERE su.suite_name = 'stable' AND ba.suite = su.id AND ba.bin = b.id
+# AND f.id = b.file AND f.location = l.id AND o.package = b.package
+# AND sec.id = o.section AND NOT (f.filename ~ '^%s/')
+# AND b.architecture = a.id AND o.suite = su.id AND NOT EXISTS
+# (SELECT 1 FROM location l WHERE l.component IS NOT NULL AND f.location = l.id);
+ for i in q.getresult():
+ (component, section) = fix_component_section(i[0], i[2]);
+ if Cnf.FindB("Dinstall::LegacyStableHasNoSections"):
+ section="";
+ architecture = i[1];
+ package = i[3];
+ version = utils.re_no_epoch.sub('', i[4]);
+ src = i[5]+i[6];
+
+ dest = "%sdists/%s/%s/binary-%s/%s%s_%s.deb" % (Cnf["Dir::Root"], codename, component, architecture, section, package, version);
+ src = utils.clean_symlink(src, dest, Cnf["Dir::Root"]);
+ if not os.path.exists(dest):
+ if Cnf.Find("Claire::Options::Verbose"):
+ print src+' -> '+dest;
+ os.symlink(src, dest);
+ dislocated_files[i[7]] = dest;
+ # Add per-arch symlinks for arch: all debs
+ if architecture == "all":
+ for arch in architectures:
+ dest = "%sdists/%s/%s/binary-%s/%s%s_%s.deb" % (Cnf["Dir::Root"], codename, component, arch, section, package, version);
+ if not os.path.exists(dest):
+ if Cnf.Find("Claire::Options::Verbose"):
+ print src+' -> '+dest
+ os.symlink(src, dest);
+
+ return dislocated_files
+
+################################################################################
+
+def main ():
+ global Cnf, projectB;
+
+ Cnf = utils.get_conf()
+
+ Arguments = [('h',"help","Claire::Options::Help"),
+ ('v',"verbose","Claire::Options::Verbose")];
+ for i in ["help", "verbose" ]:
+ if not Cnf.has_key("Claire::Options::%s" % (i)):
+ Cnf["Claire::Options::%s" % (i)] = "";
+
+ apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
+ Options = Cnf.SubTree("Claire::Options")
+
+ if Options["Help"]:
+ usage();
+
+ projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
+
+ db_access.init(Cnf, projectB);
+
+ find_dislocated_stable(Cnf, projectB);
+
+################################################################################
+
+if __name__ == '__main__':
+ main();
+
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+Format: 1.0
+Source: amaya
+Version: 3.2.1-1
+Binary: amaya
+Maintainer: Steve Dunham <dunham@debian.org>
+Architecture: any
+Standards-Version: 2.4.0.0
+Files:
+ 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
+ da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
+
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.2 (GNU/Linux)
+Comment: For info see http://www.gnupg.org
+
+iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
+rhYnRmVuNMa8oYSvL4hl/Yw=
+=EFAA
+-----END PGP SIGNATURE-----
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+Format: 1.0
+Source: amaya
+Version: 3.2.1-1
+Binary: amaya
+Maintainer: Steve Dunham <dunham@debian.org>
+Architecture: any
+Standards-Version: 2.4.0.0
+Files:
+ 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
+ da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.2 (GNU/Linux)
+Comment: For info see http://www.gnupg.org
+
+iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
+rhYnRmVuNMa8oYSvL4hl/Yw=
+=EFAA
+-----END PGP SIGNATURE-----
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+Format: 1.0
+Source: amaya
+Version: 3.2.1-1
+Binary: amaya
+Maintainer: Steve Dunham <dunham@debian.org>
+Architecture: any
+Standards-Version: 2.4.0.0
+Files:
+ 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
+ da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
+
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.2 (GNU/Linux)
+Comment: For info see http://www.gnupg.org
+
+iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
+rhYnRmVuNMa8oYSvL4hl/Yw=
+=EFAA
+-----END PGP SIGNATURE-----
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+Format: 1.0
+Source: amaya
+Version: 3.2.1-1
+Binary: amaya
+Maintainer: Steve Dunham <dunham@debian.org>
+Architecture: any
+Standards-Version: 2.4.0.0
+Files:
+ 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
+ da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.2 (GNU/Linux)
+Comment: For info see http://www.gnupg.org
+iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
+rhYnRmVuNMa8oYSvL4hl/Yw=
+=EFAA
+-----END PGP SIGNATURE-----
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+Format: 1.0
+Source: amaya
+Version: 3.2.1-1
+Binary: amaya
+Maintainer: Steve Dunham <dunham@debian.org>
+Architecture: any
+Standards-Version: 2.4.0.0
+Files:
+ 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
+ da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
+
+
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.2 (GNU/Linux)
+Comment: For info see http://www.gnupg.org
+
+iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
+rhYnRmVuNMa8oYSvL4hl/Yw=
+=EFAA
+-----END PGP SIGNATURE-----
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+
+Format: 1.0
+Source: amaya
+Version: 3.2.1-1
+Binary: amaya
+Maintainer: Steve Dunham <dunham@debian.org>
+Architecture: any
+Standards-Version: 2.4.0.0
+Files:
+ 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
+ da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
+
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.2 (GNU/Linux)
+Comment: For info see http://www.gnupg.org
+
+iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
+rhYnRmVuNMa8oYSvL4hl/Yw=
+=EFAA
+-----END PGP SIGNATURE-----
--- /dev/null
+#!/usr/bin/env python
+
+# Check utils.parse_changes()'s .dsc file validation
+# Copyright (C) 2000 James Troup <james@nocrew.org>
+# $Id: test.py,v 1.1 2001-01-28 09:06:44 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, sys
+
+sys.path.append(os.path.abspath('../../'));
+
+import utils
+
+################################################################################
+
+def fail(message):
+ sys.stderr.write("%s\n" % (message));
+ sys.exit(1);
+
+################################################################################
+
+def main ():
+ # Valid .dsc
+ utils.parse_changes('1.dsc',1);
+
+ # Missing blank line before signature body
+ try:
+ utils.parse_changes('2.dsc',1);
+ except utils.invalid_dsc_format_exc, line:
+ if line != 14:
+ fail("Incorrect line number ('%s') for test #2." % (line));
+ else:
+ fail("Test #2 wasn't recognised as invalid.");
+
+ # Missing blank line after signature header
+ try:
+ utils.parse_changes('3.dsc',1);
+ except utils.invalid_dsc_format_exc, line:
+ if line != 14:
+ fail("Incorrect line number ('%s') for test #3." % (line));
+ else:
+ fail("Test #3 wasn't recognised as invalid.");
+
+ # No blank lines at all
+ try:
+ utils.parse_changes('4.dsc',1);
+ except utils.invalid_dsc_format_exc, line:
+ if line != 19:
+ fail("Incorrect line number ('%s') for test #4." % (line));
+ else:
+ fail("Test #4 wasn't recognised as invalid.");
+
+ # Extra blank line before signature body
+ try:
+ utils.parse_changes('5.dsc',1);
+ except utils.invalid_dsc_format_exc, line:
+ if line != 15:
+ fail("Incorrect line number ('%s') for test #5." % (line));
+ else:
+ fail("Test #5 wasn't recognised as invalid.");
+
+ # Extra blank line after signature header
+ try:
+ utils.parse_changes('6.dsc',1);
+ except utils.invalid_dsc_format_exc, line:
+ if line != 5:
+ fail("Incorrect line number ('%s') for test #6." % (line));
+ else:
+ fail("Test #6 wasn't recognised as invalid.");
+
+ # Valid .dsc ; ignoring errors
+ utils.parse_changes('1.dsc', 0);
+
+ # Invalid .dsc ; ignoring errors
+ utils.parse_changes('2.dsc', 0);
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+#!/usr/bin/env python
+
+# Check utils.parse_changes()'s handling of empty files
+# Copyright (C) 2000 James Troup <james@nocrew.org>
+# $Id: test.py,v 1.1 2001-03-02 02:31:07 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, sys
+
+sys.path.append(os.path.abspath('../../'));
+
+import utils
+
+################################################################################
+
+def fail(message):
+ sys.stderr.write("%s\n" % (message));
+ sys.exit(1);
+
+################################################################################
+
+def main ():
+ # Empty .changes file; should raise a 'parse error' exception.
+ try:
+ utils.parse_changes('empty.changes', 0)
+ except utils.changes_parse_error_exc, line:
+ if line != "[Empty changes file]":
+ fail("Returned exception with unexpected error message `%s'." % (line));
+ else:
+ fail("Didn't raise a 'parse error' exception for a zero-length .changes file.");
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+
+Format: 1.7
+Date: Fri, 20 Apr 2001 02:47:21 -0400
+Source: krb5
+Binary: krb5-kdc krb5-doc krb5-rsh-server libkrb5-dev libkrb53 krb5-ftpd
+ krb5-clients krb5-user libkadm54 krb5-telnetd krb5-admin-server
+Architecture: m68k
+Version: 1.2.2-4
+Distribution: unstable
+Urgency: low
+Maintainer: buildd m68k user account <buildd@ax.westfalen.de>
+Changed-By: Sam Hartman <hartmans@debian.org>
+Description:
+ krb5-admin-server - Mit Kerberos master server (kadmind)
+ krb5-clients - Secure replacements for ftp, telnet and rsh using MIT Kerberos
+ krb5-ftpd - Secure FTP server supporting MIT Kerberos
+ krb5-kdc - Mit Kerberos key server (KDC)
+ krb5-rsh-server - Secure replacements for rshd and rlogind using MIT Kerberos
+ krb5-telnetd - Secure telnet server supporting MIT Kerberos
+ krb5-user - Basic programs to authenticate using MIT Kerberos
+ libkadm54 - MIT Kerberos administration runtime libraries
+ libkrb5-dev - Headers and development libraries for MIT Kerberos
+ libkrb53 - MIT Kerberos runtime libraries
+Closes: 94407
+Changes:
+ krb5 (1.2.2-4) unstable; urgency=low
+ .
+ * Fix shared libraries to build with gcc not ld to properly include
+ -lgcc symbols, closes: #94407
+Files:
+ 563dac1cdd3ba922f9301fe074fbfc80 65836 non-us/main optional libkadm54_1.2.2-4_m68k.deb
+ bb620f589c17ab0ebea1aa6e10ca52ad 272198 non-us/main optional libkrb53_1.2.2-4_m68k.deb
+ 40af6e64b3030a179e0de25bd95c95e9 143264 non-us/main optional krb5-user_1.2.2-4_m68k.deb
+ ffe4e5e7b2cab162dc608d56278276cf 141870 non-us/main optional krb5-clients_1.2.2-4_m68k.deb
+ 4fe01d1acb4b82ce0b8b72652a9a15ae 54592 non-us/main optional krb5-rsh-server_1.2.2-4_m68k.deb
+ b3c8c617ea72008a33b869b75d2485bf 41292 non-us/main optional krb5-ftpd_1.2.2-4_m68k.deb
+ 5908f8f60fe536d7bfc1ef3fdd9d74cc 42090 non-us/main optional krb5-telnetd_1.2.2-4_m68k.deb
+ 650ea769009a312396e56503d0059ebc 160236 non-us/main optional krb5-kdc_1.2.2-4_m68k.deb
+ 399c9de4e9d7d0b0f5626793808a4391 160392 non-us/main optional krb5-admin-server_1.2.2-4_m68k.deb
+ 6f962fe530c3187e986268b4e4d27de9 398662 non-us/main optional libkrb5-dev_1.2.2-4_m68k.deb
+
+-----BEGIN PGP SIGNATURE-----
+Version: 2.6.3i
+Charset: noconv
+
+iQCVAwUBOvVPPm547I3m3eHJAQHyaQP+M7RXVEqZ2/xHiPzaPcZRJ4q7o0zbMaU8
+qG/Mi6kuR1EhRNMjMH4Cp6ctbhRDHK5FR/8v7UkOd+ETDAhiw7eqJnLC60EZxZ/H
+CiOs8JklAXDERkQ3i7EYybv46Gxx91pIs2nE4xVKnG16d/wFELWMBLY6skF1B2/g
+zZju3cuFCCE=
+=Vm59
+-----END PGP SIGNATURE-----
+
+
--- /dev/null
+#!/usr/bin/env python
+
+# Check utils.parse_changes()'s handling of multi-line fields
+# Copyright (C) 2000 James Troup <james@nocrew.org>
+# $Id: test.py,v 1.2 2002-10-16 02:47:32 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+# The deal here is that for the first 6 months of katie's
+# implementation it has been misparsing multi-line fields in .changes
+# files; specifically multi-line fields where there _is_ data on the
+# first line. So, for example:
+
+# Foo: bar baz
+# bat bant
+
+# Became "foo: bar bazbat bant" rather than "foo: bar baz\nbat bant"
+
+################################################################################
+
+import os, sys
+
+sys.path.append(os.path.abspath('../../'));
+
+import utils
+
+################################################################################
+
+def fail(message):
+ sys.stderr.write("%s\n" % (message));
+ sys.exit(1);
+
+################################################################################
+
+def main ():
+ # Valid .changes file with a multi-line Binary: field
+ try:
+ changes = utils.parse_changes('krb5_1.2.2-4_m68k.changes', 0)
+ except utils.changes_parse_error_exc, line:
+ fail("parse_changes() returned an exception with error message `%s'." % (line));
+
+ o = changes.get("binary", "")
+ if o != "":
+ del changes["binary"]
+ changes["binary"] = {}
+ for j in o.split():
+ changes["binary"][j] = 1
+
+ if not changes["binary"].has_key("krb5-ftpd"):
+ fail("parse_changes() is broken; 'krb5-ftpd' is not in the Binary: dictionary.");
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+#!/usr/bin/env python
+
+# Check utils.extract_component_from_section()
+# Copyright (C) 2000 James Troup <james@nocrew.org>
+# $Id: test.py,v 1.3 2002-10-16 02:47:32 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, sys;
+
+sys.path.append(os.path.abspath('../../'));
+
+import utils
+
+################################################################################
+
+def fail(message):
+ sys.stderr.write("%s\n" % (message));
+ sys.exit(1);
+
+################################################################################
+
+# prefix: non-US
+# component: main, contrib, non-free
+# section: games, admin, libs, [...]
+
+# [1] Order is as above.
+# [2] Prefix is optional for the default archive, but mandatory when
+# uploads are going anywhere else.
+# [3] Default component is main and may be omitted.
+# [4] Section is optional.
+# [5] Prefix is case insensitive
+# [6] Everything else is case sensitive.
+
+def test(input, output):
+ result = utils.extract_component_from_section(input);
+ if result != output:
+ fail ("%s -> %r [should have been %r]" % (input, result, output));
+
+def main ():
+ # Err, whoops? should probably be "utils", "main"...
+ input = "main/utils"; output = ("main/utils", "main");
+ test (input, output);
+
+
+ # Validate #3
+ input = "utils"; output = ("utils", "main");
+ test (input, output);
+
+ input = "non-free/libs"; output = ("non-free/libs", "non-free");
+ test (input, output);
+
+ input = "contrib/net"; output = ("contrib/net", "contrib");
+ test (input, output);
+
+
+ # Validate #3 with a prefix
+ input = "non-US"; output = ("non-US", "non-US/main");
+ test (input, output);
+
+
+ # Validate #4
+ input = "main"; output = ("main", "main");
+ test (input, output);
+
+ input = "contrib"; output = ("contrib", "contrib");
+ test (input, output);
+
+ input = "non-free"; output = ("non-free", "non-free");
+ test (input, output);
+
+
+ # Validate #4 with a prefix
+ input = "non-US/main"; output = ("non-US/main", "non-US/main");
+ test (input, output);
+
+ input = "non-US/contrib"; output = ("non-US/contrib", "non-US/contrib");
+ test (input, output);
+
+ input = "non-US/non-free"; output = ("non-US/non-free", "non-US/non-free");
+ test (input, output);
+
+
+ # Validate #5
+ input = "non-us"; output = ("non-us", "non-US/main");
+ test (input, output);
+
+ input = "non-us/contrib"; output = ("non-us/contrib", "non-US/contrib");
+ test (input, output);
+
+
+ # Validate #6 (section)
+ input = "utIls"; output = ("utIls", "main");
+ test (input, output);
+
+ # Others..
+ input = "non-US/libs"; output = ("non-US/libs", "non-US/main");
+ test (input, output);
+ input = "non-US/main/libs"; output = ("non-US/main/libs", "non-US/main");
+ test (input, output);
+ input = "non-US/contrib/libs"; output = ("non-US/contrib/libs", "non-US/contrib");
+ test (input, output);
+ input = "non-US/non-free/libs"; output = ("non-US/non-free/libs", "non-US/non-free");
+ test (input, output);
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+Format: 1.7
+Date: Tue, 9 Sep 2003 01:16:01 +0100
+Source: gawk
+Binary: gawk
+Architecture: source i386
+Version: 1:3.1.3-2
+Distribution: unstable
+Urgency: low
+Maintainer: James Troup <james@nocrew.org>
+Changed-By: James Troup <james@nocrew.org>
+Description:
+ gawk - GNU awk, a pattern scanning and processing language
+Closes: 204699 204701
+Changes:
+ gawk (1:3.1.3-2) unstable; urgency=low
+ .
+ * debian/control (Standards-Version): bump to 3.6.1.0.
+ .
+ * 02_fix-ascii.dpatch: new patch from upstream to fix [[:ascii:]].
+ Thanks to <vle@gmx.net> for reporting the bug and forwarding it
+ upstream. Closes: #204701
+ .
+ * 03_fix-high-char-ranges.dpatch: new patch from upstream to fix
+ [\x80-\xff]. Thanks to <vle@gmx.net> for reporting the bug and
+ forwarding it upstream. Closes: #204699
+Files:
+ 0e6542c48bcc9d9586fc8ebe4e7242a4 561 interpreters optional gawk_3.1.3-2.dsc
+ 50a29dce4a2c6e2ac38069eb7c41d9c4 8302 interpreters optional gawk_3.1.3-2.diff.gz
+ 5a255c7b421ac699804212e10205f22d 871114 interpreters optional gawk_3.1.3-2_i386.deb
+
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.6 (GNU/Linux)
+
+iEYEARECAAYFAj9dHWsACgkQgD/uEicUG7DUnACglndvU4LCA0/k36Qp873N0Sau
+fCwAoMdgIOUBcUfMqXvVnxdW03ev5bNB
+=O7Gh
+-----END PGP SIGNATURE-----
+You: have been 0wned
--- /dev/null
+You: have been 0wned
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+Format: 1.7
+Date: Tue, 9 Sep 2003 01:16:01 +0100
+Source: gawk
+Binary: gawk
+Architecture: source i386
+Version: 1:3.1.3-2
+Distribution: unstable
+Urgency: low
+Maintainer: James Troup <james@nocrew.org>
+Changed-By: James Troup <james@nocrew.org>
+Description:
+ gawk - GNU awk, a pattern scanning and processing language
+Closes: 204699 204701
+Changes:
+ gawk (1:3.1.3-2) unstable; urgency=low
+ .
+ * debian/control (Standards-Version): bump to 3.6.1.0.
+ .
+ * 02_fix-ascii.dpatch: new patch from upstream to fix [[:ascii:]].
+ Thanks to <vle@gmx.net> for reporting the bug and forwarding it
+ upstream. Closes: #204701
+ .
+ * 03_fix-high-char-ranges.dpatch: new patch from upstream to fix
+ [\x80-\xff]. Thanks to <vle@gmx.net> for reporting the bug and
+ forwarding it upstream. Closes: #204699
+Files:
+ 0e6542c48bcc9d9586fc8ebe4e7242a4 561 interpreters optional gawk_3.1.3-2.dsc
+ 50a29dce4a2c6e2ac38069eb7c41d9c4 8302 interpreters optional gawk_3.1.3-2.diff.gz
+ 5a255c7b421ac699804212e10205f22d 871114 interpreters optional gawk_3.1.3-2_i386.deb
+
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.6 (GNU/Linux)
+
+iEYEARECAAYFAj9dHWsACgkQgD/uEicUG7DUnACglndvU4LCA0/k36Qp873N0Sau
+fCwAoMdgIOUBcUfMqXvVnxdW03ev5bNB
+=O7Gh
+-----END PGP SIGNATURE-----
--- /dev/null
+#!/usr/bin/env python
+
+# Check utils.parse_changes() correctly ignores data outside the signed area
+# Copyright (C) 2004 James Troup <james@nocrew.org>
+# $Id: test.py,v 1.3 2004-03-11 00:22:19 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, sys
+
+sys.path.append(os.path.abspath('../../'));
+
+import utils
+
+################################################################################
+
+def fail(message):
+ sys.stderr.write("%s\n" % (message));
+ sys.exit(1);
+
+################################################################################
+
+def main ():
+ for file in [ "valid", "bogus-pre", "bogus-post" ]:
+ for strict_whitespace in [ 0, 1 ]:
+ try:
+ changes = utils.parse_changes("%s.changes" % (file), strict_whitespace)
+ except utils.changes_parse_error_exc, line:
+ fail("%s[%s]: parse_changes() returned an exception with error message `%s'." % (file, strict_whitespace, line));
+ oh_dear = changes.get("you");
+ if oh_dear:
+ fail("%s[%s]: parsed and accepted unsigned data!" % (file, strict_whitespace));
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+-----BEGIN PGP SIGNED MESSAGE-----
+Hash: SHA1
+
+Format: 1.7
+Date: Tue, 9 Sep 2003 01:16:01 +0100
+Source: gawk
+Binary: gawk
+Architecture: source i386
+Version: 1:3.1.3-2
+Distribution: unstable
+Urgency: low
+Maintainer: James Troup <james@nocrew.org>
+Changed-By: James Troup <james@nocrew.org>
+Description:
+ gawk - GNU awk, a pattern scanning and processing language
+Closes: 204699 204701
+Changes:
+ gawk (1:3.1.3-2) unstable; urgency=low
+ .
+ * debian/control (Standards-Version): bump to 3.6.1.0.
+ .
+ * 02_fix-ascii.dpatch: new patch from upstream to fix [[:ascii:]].
+ Thanks to <vle@gmx.net> for reporting the bug and forwarding it
+ upstream. Closes: #204701
+ .
+ * 03_fix-high-char-ranges.dpatch: new patch from upstream to fix
+ [\x80-\xff]. Thanks to <vle@gmx.net> for reporting the bug and
+ forwarding it upstream. Closes: #204699
+Files:
+ 0e6542c48bcc9d9586fc8ebe4e7242a4 561 interpreters optional gawk_3.1.3-2.dsc
+ 50a29dce4a2c6e2ac38069eb7c41d9c4 8302 interpreters optional gawk_3.1.3-2.diff.gz
+ 5a255c7b421ac699804212e10205f22d 871114 interpreters optional gawk_3.1.3-2_i386.deb
+
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.6 (GNU/Linux)
+
+iEYEARECAAYFAj9dHWsACgkQgD/uEicUG7DUnACglndvU4LCA0/k36Qp873N0Sau
+fCwAoMdgIOUBcUfMqXvVnxdW03ev5bNB
+=O7Gh
+-----END PGP SIGNATURE-----
--- /dev/null
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Test utils.fix_maintainer()
+# Copyright (C) 2004 James Troup <james@nocrew.org>
+# $Id: test.py,v 1.2 2004-06-23 23:11:51 troup Exp $
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+################################################################################
+
+import os, sys
+
+sys.path.append(os.path.abspath('../../'));
+
+import utils
+
+################################################################################
+
+def fail(message):
+ sys.stderr.write("%s\n" % (message));
+ sys.exit(1);
+
+################################################################################
+
+def check_valid(s, xa, xb, xc, xd):
+ (a, b, c, d) = utils.fix_maintainer(s)
+ if a != xa:
+ fail("rfc822_maint: %s (returned) != %s (expected) [From: '%s']" % (a, xa, s));
+ if b != xb:
+ fail("rfc2047_maint: %s (returned) != %s (expected) [From: '%s']" % (b, xb, s));
+ if c != xc:
+ fail("name: %s (returned) != %s (expected) [From: '%s']" % (c, xc, s));
+ if d != xd:
+ fail("email: %s (returned) != %s (expected) [From: '%s']" % (d, xd, s));
+
+def check_invalid(s):
+ try:
+ utils.fix_maintainer(s);
+ fail("%s was parsed successfully but is expected to be invalid." % (s));
+ except utils.ParseMaintError, unused:
+ pass;
+
+def main ():
+ # Check Valid UTF-8 maintainer field
+ s = "Noèl Köthe <noel@debian.org>"
+ xa = "Noèl Köthe <noel@debian.org>"
+ xb = "=?utf-8?b?Tm/DqGwgS8O2dGhl?= <noel@debian.org>"
+ xc = "Noèl Köthe"
+ xd = "noel@debian.org"
+ check_valid(s, xa, xb, xc, xd);
+
+ # Check valid ISO-8859-1 maintainer field
+ s = "Noèl Köthe <noel@debian.org>"
+ xa = "Noèl Köthe <noel@debian.org>"
+ xb = "=?iso-8859-1?q?No=E8l_K=F6the?= <noel@debian.org>"
+ xc = "Noèl Köthe"
+ xd = "noel@debian.org"
+ check_valid(s, xa, xb, xc, xd);
+
+ # Check valid ASCII maintainer field
+ s = "James Troup <james@nocrew.org>"
+ xa = "James Troup <james@nocrew.org>"
+ xb = "James Troup <james@nocrew.org>"
+ xc = "James Troup"
+ xd = "james@nocrew.org"
+ check_valid(s, xa, xb, xc, xd);
+
+ # Check "Debian vs RFC822" fixup of names with '.' or ',' in them
+ s = "James J. Troup <james@nocrew.org>"
+ xa = "james@nocrew.org (James J. Troup)"
+ xb = "james@nocrew.org (James J. Troup)"
+ xc = "James J. Troup"
+ xd = "james@nocrew.org"
+ check_valid(s, xa, xb, xc, xd);
+ s = "James J, Troup <james@nocrew.org>"
+ xa = "james@nocrew.org (James J, Troup)"
+ xb = "james@nocrew.org (James J, Troup)"
+ xc = "James J, Troup"
+ xd = "james@nocrew.org"
+ check_valid(s, xa, xb, xc, xd);
+
+ # Check just-email form
+ s = "james@nocrew.org"
+ xa = " <james@nocrew.org>"
+ xb = " <james@nocrew.org>"
+ xc = ""
+ xd = "james@nocrew.org"
+ check_valid(s, xa, xb, xc, xd);
+
+ # Check bracketed just-email form
+ s = "<james@nocrew.org>"
+ xa = " <james@nocrew.org>"
+ xb = " <james@nocrew.org>"
+ xc = ""
+ xd = "james@nocrew.org"
+ check_valid(s, xa, xb, xc, xd);
+
+ # Check Krazy quoted-string local part email address
+ s = "Cris van Pelt <\"Cris van Pelt\"@tribe.eu.org>"
+ xa = "Cris van Pelt <\"Cris van Pelt\"@tribe.eu.org>"
+ xb = "Cris van Pelt <\"Cris van Pelt\"@tribe.eu.org>"
+ xc = "Cris van Pelt"
+ xd = "\"Cris van Pelt\"@tribe.eu.org"
+ check_valid(s, xa, xb, xc, xd);
+
+ # Check empty string
+ s = xa = xb = xc = xd = "";
+ check_valid(s, xa, xb, xc, xd);
+
+ # Check for missing email address
+ check_invalid("James Troup");
+ # Check for invalid email address
+ check_invalid("James Troup <james@nocrew.org");
+
+################################################################################
+
+if __name__ == '__main__':
+ main()
+++ /dev/null
-#!/usr/bin/env python
-
-# DB access functions
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: db_access.py,v 1.18 2005-12-05 05:08:10 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import sys, time, types;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-suite_id_cache = {};
-section_id_cache = {};
-priority_id_cache = {};
-override_type_id_cache = {};
-architecture_id_cache = {};
-archive_id_cache = {};
-component_id_cache = {};
-location_id_cache = {};
-maintainer_id_cache = {};
-source_id_cache = {};
-files_id_cache = {};
-maintainer_cache = {};
-fingerprint_id_cache = {};
-queue_id_cache = {};
-uid_id_cache = {};
-
-################################################################################
-
-def init (config, sql):
- global Cnf, projectB
-
- Cnf = config;
- projectB = sql;
-
-
-def do_query(q):
- sys.stderr.write("query: \"%s\" ... " % (q));
- before = time.time();
- r = projectB.query(q);
- time_diff = time.time()-before;
- sys.stderr.write("took %.3f seconds.\n" % (time_diff));
- if type(r) is int:
- sys.stderr.write("int result: %s\n" % (r));
- elif type(r) is types.NoneType:
- sys.stderr.write("result: None\n");
- else:
- sys.stderr.write("pgresult: %s\n" % (r.getresult()));
- return r;
-
-################################################################################
-
-def get_suite_id (suite):
- global suite_id_cache
-
- if suite_id_cache.has_key(suite):
- return suite_id_cache[suite]
-
- q = projectB.query("SELECT id FROM suite WHERE suite_name = '%s'" % (suite))
- ql = q.getresult();
- if not ql:
- return -1;
-
- suite_id = ql[0][0];
- suite_id_cache[suite] = suite_id
-
- return suite_id
-
-def get_section_id (section):
- global section_id_cache
-
- if section_id_cache.has_key(section):
- return section_id_cache[section]
-
- q = projectB.query("SELECT id FROM section WHERE section = '%s'" % (section))
- ql = q.getresult();
- if not ql:
- return -1;
-
- section_id = ql[0][0];
- section_id_cache[section] = section_id
-
- return section_id
-
-def get_priority_id (priority):
- global priority_id_cache
-
- if priority_id_cache.has_key(priority):
- return priority_id_cache[priority]
-
- q = projectB.query("SELECT id FROM priority WHERE priority = '%s'" % (priority))
- ql = q.getresult();
- if not ql:
- return -1;
-
- priority_id = ql[0][0];
- priority_id_cache[priority] = priority_id
-
- return priority_id
-
-def get_override_type_id (type):
- global override_type_id_cache;
-
- if override_type_id_cache.has_key(type):
- return override_type_id_cache[type];
-
- q = projectB.query("SELECT id FROM override_type WHERE type = '%s'" % (type));
- ql = q.getresult();
- if not ql:
- return -1;
-
- override_type_id = ql[0][0];
- override_type_id_cache[type] = override_type_id;
-
- return override_type_id;
-
-def get_architecture_id (architecture):
- global architecture_id_cache;
-
- if architecture_id_cache.has_key(architecture):
- return architecture_id_cache[architecture];
-
- q = projectB.query("SELECT id FROM architecture WHERE arch_string = '%s'" % (architecture))
- ql = q.getresult();
- if not ql:
- return -1;
-
- architecture_id = ql[0][0];
- architecture_id_cache[architecture] = architecture_id;
-
- return architecture_id;
-
-def get_archive_id (archive):
- global archive_id_cache
-
- archive = archive.lower();
-
- if archive_id_cache.has_key(archive):
- return archive_id_cache[archive]
-
- q = projectB.query("SELECT id FROM archive WHERE lower(name) = '%s'" % (archive));
- ql = q.getresult();
- if not ql:
- return -1;
-
- archive_id = ql[0][0]
- archive_id_cache[archive] = archive_id
-
- return archive_id
-
-def get_component_id (component):
- global component_id_cache
-
- component = component.lower();
-
- if component_id_cache.has_key(component):
- return component_id_cache[component]
-
- q = projectB.query("SELECT id FROM component WHERE lower(name) = '%s'" % (component))
- ql = q.getresult();
- if not ql:
- return -1;
-
- component_id = ql[0][0];
- component_id_cache[component] = component_id
-
- return component_id
-
-def get_location_id (location, component, archive):
- global location_id_cache
-
- cache_key = location + '~' + component + '~' + archive
- if location_id_cache.has_key(cache_key):
- return location_id_cache[cache_key]
-
- archive_id = get_archive_id (archive)
- if component != "":
- component_id = get_component_id (component)
- if component_id != -1:
- q = projectB.query("SELECT id FROM location WHERE path = '%s' AND component = %d AND archive = %d" % (location, component_id, archive_id))
- else:
- q = projectB.query("SELECT id FROM location WHERE path = '%s' AND archive = %d" % (location, archive_id))
- ql = q.getresult();
- if not ql:
- return -1;
-
- location_id = ql[0][0]
- location_id_cache[cache_key] = location_id
-
- return location_id
-
-def get_source_id (source, version):
- global source_id_cache
-
- cache_key = source + '~' + version + '~'
- if source_id_cache.has_key(cache_key):
- return source_id_cache[cache_key]
-
- q = projectB.query("SELECT id FROM source s WHERE s.source = '%s' AND s.version = '%s'" % (source, version))
-
- if not q.getresult():
- return None
-
- source_id = q.getresult()[0][0]
- source_id_cache[cache_key] = source_id
-
- return source_id
-
-################################################################################
-
-def get_or_set_maintainer_id (maintainer):
- global maintainer_id_cache
-
- if maintainer_id_cache.has_key(maintainer):
- return maintainer_id_cache[maintainer]
-
- q = projectB.query("SELECT id FROM maintainer WHERE name = '%s'" % (maintainer))
- if not q.getresult():
- projectB.query("INSERT INTO maintainer (name) VALUES ('%s')" % (maintainer))
- q = projectB.query("SELECT id FROM maintainer WHERE name = '%s'" % (maintainer))
- maintainer_id = q.getresult()[0][0]
- maintainer_id_cache[maintainer] = maintainer_id
-
- return maintainer_id
-
-################################################################################
-
-def get_or_set_uid_id (uid):
- global uid_id_cache;
-
- if uid_id_cache.has_key(uid):
- return uid_id_cache[uid];
-
- q = projectB.query("SELECT id FROM uid WHERE uid = '%s'" % (uid))
- if not q.getresult():
- projectB.query("INSERT INTO uid (uid) VALUES ('%s')" % (uid));
- q = projectB.query("SELECT id FROM uid WHERE uid = '%s'" % (uid));
- uid_id = q.getresult()[0][0];
- uid_id_cache[uid] = uid_id;
-
- return uid_id;
-
-################################################################################
-
-def get_or_set_fingerprint_id (fingerprint):
- global fingerprint_id_cache;
-
- if fingerprint_id_cache.has_key(fingerprint):
- return fingerprint_id_cache[fingerprint]
-
- q = projectB.query("SELECT id FROM fingerprint WHERE fingerprint = '%s'" % (fingerprint));
- if not q.getresult():
- projectB.query("INSERT INTO fingerprint (fingerprint) VALUES ('%s')" % (fingerprint));
- q = projectB.query("SELECT id FROM fingerprint WHERE fingerprint = '%s'" % (fingerprint));
- fingerprint_id = q.getresult()[0][0];
- fingerprint_id_cache[fingerprint] = fingerprint_id;
-
- return fingerprint_id;
-
-################################################################################
-
-def get_files_id (filename, size, md5sum, location_id):
- global files_id_cache
-
- cache_key = "%s~%d" % (filename, location_id);
-
- if files_id_cache.has_key(cache_key):
- return files_id_cache[cache_key]
-
- size = int(size);
- q = projectB.query("SELECT id, size, md5sum FROM files WHERE filename = '%s' AND location = %d" % (filename, location_id));
- ql = q.getresult();
- if ql:
- if len(ql) != 1:
- return -1;
- ql = ql[0];
- orig_size = int(ql[1]);
- orig_md5sum = ql[2];
- if orig_size != size or orig_md5sum != md5sum:
- return -2;
- files_id_cache[cache_key] = ql[0];
- return files_id_cache[cache_key]
- else:
- return None
-
-################################################################################
-
-def get_or_set_queue_id (queue):
- global queue_id_cache
-
- if queue_id_cache.has_key(queue):
- return queue_id_cache[queue]
-
- q = projectB.query("SELECT id FROM queue WHERE queue_name = '%s'" % (queue))
- if not q.getresult():
- projectB.query("INSERT INTO queue (queue_name) VALUES ('%s')" % (queue))
- q = projectB.query("SELECT id FROM queue WHERE queue_name = '%s'" % (queue))
- queue_id = q.getresult()[0][0]
- queue_id_cache[queue] = queue_id
-
- return queue_id
-
-################################################################################
-
-def set_files_id (filename, size, md5sum, location_id):
- global files_id_cache
-
- projectB.query("INSERT INTO files (filename, size, md5sum, location) VALUES ('%s', %d, '%s', %d)" % (filename, long(size), md5sum, location_id));
-
- return get_files_id (filename, size, md5sum, location_id);
-
- ### currval has issues with postgresql 7.1.3 when the table is big;
- ### it was taking ~3 seconds to return on auric which is very Not
- ### Cool(tm).
- ##
- ##q = projectB.query("SELECT id FROM files WHERE id = currval('files_id_seq')");
- ##ql = q.getresult()[0];
- ##cache_key = "%s~%d" % (filename, location_id);
- ##files_id_cache[cache_key] = ql[0]
- ##return files_id_cache[cache_key];
-
-################################################################################
-
-def get_maintainer (maintainer_id):
- global maintainer_cache;
-
- if not maintainer_cache.has_key(maintainer_id):
- q = projectB.query("SELECT name FROM maintainer WHERE id = %s" % (maintainer_id));
- maintainer_cache[maintainer_id] = q.getresult()[0][0];
-
- return maintainer_cache[maintainer_id];
-
-################################################################################
+++ /dev/null
-#!/usr/bin/env python
-
-# Output override files for apt-ftparchive and indices/
-# Copyright (C) 2000, 2001, 2002, 2004 James Troup <james@nocrew.org>
-# $Id: denise,v 1.18 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# This is separate because it's horribly Debian-specific and I don't
-# want that kind of horribleness in the otherwise generic natalie. It
-# does duplicate code, though.
-
-################################################################################
-
-import pg, sys;
-import utils, db_access;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-override = {}
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: denise
-Outputs the override tables to text files.
-
- -h, --help show this help and exit."""
- sys.exit(exit_code)
-
-################################################################################
-
-def do_list(output_file, suite, component, otype):
- global override;
-
- suite_id = db_access.get_suite_id(suite);
- if suite_id == -1:
- utils.fubar("Suite '%s' not recognised." % (suite));
-
- component_id = db_access.get_component_id(component);
- if component_id == -1:
- utils.fubar("Component '%s' not recognised." % (component));
-
- otype_id = db_access.get_override_type_id(otype);
- if otype_id == -1:
- utils.fubar("Type '%s' not recognised. (Valid types are deb, udeb and dsc)" % (otype));
-
- override.setdefault(suite, {});
- override[suite].setdefault(component, {});
- override[suite][component].setdefault(otype, {});
-
- if otype == "dsc":
- q = projectB.query("SELECT o.package, s.section, o.maintainer FROM override o, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s AND o.section = s.id ORDER BY s.section, o.package" % (suite_id, component_id, otype_id));
- for i in q.getresult():
- override[suite][component][otype][i[0]] = i;
- output_file.write(utils.result_join(i)+'\n');
- else:
- q = projectB.query("SELECT o.package, p.priority, s.section, o.maintainer, p.level FROM override o, priority p, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s AND o.priority = p.id AND o.section = s.id ORDER BY s.section, p.level, o.package" % (suite_id, component_id, otype_id));
- for i in q.getresult():
- i = i[:-1]; # Strip the priority level
- override[suite][component][otype][i[0]] = i;
- output_file.write(utils.result_join(i)+'\n');
-
-################################################################################
-
-def main ():
- global Cnf, projectB, override;
-
- Cnf = utils.get_conf()
- Arguments = [('h',"help","Denise::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Denise::Options::%s" % (i)):
- Cnf["Denise::Options::%s" % (i)] = "";
- apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Denise::Options")
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- for suite in Cnf.SubTree("Cindy::OverrideSuites").List():
- if Cnf.has_key("Suite::%s::Untouchable" % suite) and Cnf["Suite::%s::Untouchable" % suite] != 0:
- continue
- suite = suite.lower()
-
- sys.stderr.write("Processing %s...\n" % (suite));
- override_suite = Cnf["Suite::%s::OverrideCodeName" % (suite)];
- for component in Cnf.SubTree("Component").List():
- if component == "mixed":
- continue; # Ick
- for otype in Cnf.ValueList("OverrideType"):
- if otype == "deb":
- suffix = "";
- elif otype == "udeb":
- if component != "main":
- continue; # Ick2
- suffix = ".debian-installer";
- elif otype == "dsc":
- suffix = ".src";
- filename = "%s/override.%s.%s%s" % (Cnf["Dir::Override"], override_suite, component.replace("non-US/", ""), suffix);
- output_file = utils.open_file(filename, 'w');
- do_list(output_file, suite, component, otype);
- output_file.close();
-
-################################################################################
-
-if __name__ == '__main__':
- main();
+++ /dev/null
-#!/usr/bin/perl
-
-@t = ('BYHAND', 'CONFIRM', 'NEW', 'REJECT', 'INSTALL', 'SKIP');
-
-$/="";
-IR: while (<>) {
- for $i (1..$#t) {
- if (/^$t[$i]/m) {
- $data[$i] .= "$_\n";
- $cnt[$i]++;
-
- if ($t[$i] eq "NEW") {
- ($dist) = (/^NEW to (.*)/m);
- while (/^\(new\) ([^_]*)_\S* (\S*) (\S*)$/mg) {
- ($n,$p,$s) = ($1,$2,$3);
- $p = "?" if (!$p);
- $s = "?" if (!$s);
- $s = "non-free/$p" if ($dist=~'non-free' && $s!~'non-free');
- $s = "contrib/$p" if ($dist=~'contrib' && $s!~'contrib');
- $l = length($n)>15 ? 30-length($n) : 15;
- for $d (split(/, /,$dist)) {
- $d.='-contrib' if ($s =~ 'contrib' && $d!~'contrib');
- $d.='-non-free' if ($s =~ 'non-free' && $d!~'non-free');
- $override{$d} .= sprintf("%-15s %-${l}s %s\n", $n, $p, $s)
- if (!$over{$n});
- $over{$n} = 1;
- }
- }
- }
-
- next IR;
- }
- }
- $data[0] .= "$_\n";
- $cnt[0]++;
-}
-
-for $i (0..$#t) {
- print "-"x40, "\n$cnt[$i] $t[$i]\n", "-"x40, "\n$data[$i]" if $cnt[$i];
-}
-print "-"x40, "\nOVERRIDE ADDITIONS\n", "-"x40,"\n";
-for $d (sort keys %override) {
- print "-"x5," $d\n$override{$d}\n\n";
-}
--- /dev/null
+The katie software is based in large part on 'dinstall' by Guy Maor.
+The original 'katie' script was pretty much a line-by-line
+reimplementation of the Perl 'dinstall' in Python.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+[Alphabetical Order]
+
+Adam Heath <doogie@debian.org>
+Anthony Towns <ajt@debian.org>
+Antti-Juhani Kaijanaho <ajk@debian.org>
+Ben Collins <bcollins@debian.org>
+Brendan O'Dea <bod@debian.org>
+Daniel Jacobowitz <dan@debian.org>
+Daniel Silverstone <dsilvers@debian.org>
+Drake Diedrich <dld@debian.org>
+Guy Maor <maor@debian.org>
+Jason Gunthorpe <jgg@debian.org>
+Joey Hess <joeyh@debian.org>
+Mark Brown <broonie@debian.org>
+Martin Michlmayr <tbm@debian.org>
+Michael Beattie <mjb@debian.org>
+Randall Donald <rdonald@debian.org>
+Ryan Murray <rmurray@debian.org>
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+Special thanks go to Jason and AJ; without their patient help, none of
+this would have been possible.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
--- /dev/null
+ TODO
+ ====
+
+[NB: I use this as a thought record/scribble, not everything on here
+ makes sense and/or is actually ever going to get done, so IIWY I
+ wouldn't use it as gospel for the future of katie or as a TODO
+ list for random hacking.]
+
+================================================================================
+
+Others
+------
+
+ o cindy should remove the src-only override when a binary+source override
+ exists
+
+ o reject on > or < in a version constraint
+
+23:07 < aba> elmo: and, how about enhancing rene to spot half-dropped
+ binaries on one arch (i.e. package used to build A and B, but B is
+ no longer built on some archs)?
+
+ o tabnanny the source
+
+ o drop map-unreleased
+
+ o check email only portions of addresses match too, iff the names
+ don't, helps with the "James Troup <james@nocrew.org>"
+ vs. "<james@nocrew.org>" case.
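
A rough sketch of that looser comparison (a hypothetical helper, not katie's actual API): fall back to comparing only the email portions when the full addresses differ.

```python
from email.utils import parseaddr

def addresses_match(a, b):
    # Exact match first; otherwise compare only the bare email parts,
    # so "James Troup <james@nocrew.org>" matches "<james@nocrew.org>".
    if a == b:
        return True
    return parseaddr(a)[1] == parseaddr(b)[1] != ""
```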
+
+ o ensure .dsc section/prio match .changes section/prio
+
+ o rhona's kind of crap when asked to remove a lot of files (e.g. 2k
+ or so).
+
+ o we don't handle the case where an identical orig.tar.gz is
+ mentioned in the .changes, but not in unchecked; but should we
+ care?
+
+ o madison could do better sanity checking for -g/-G (e.g. not more
+ than one suite, etc.)
+
+ o use python2.2-tarfile (once it's in stable?) to check orig.tar.gz
+ timestamps too.
+
+ o need to decide on whether we're trying for most errors at once.. if
+ so (probably) then make sure code doesn't assume variables exist and
+ either way do something about checking error code of check_dsc and
+ later functions so we skip later checks if they're bailing.
+
+ o the .katie stuff is fundamentally braindamaged, it's not versioned
+ so there's no way to change the format, yay me. need to fix.
+ probably by putting a version var as the first thing and checking
+ that.. auto-upgrade at least from original format would be good.
+ might also be a good idea to put everything in one big dict after
+ that?
+
+ o [?, wishlist, distant future] RFC2047-ing should be extended to
+ all headers of mails sent out.
+
+ o reject sparc64 binaries in a non '*64*' package.
+
+ o katie.py(source_exists): a) we take arguments as parameters that
+ we could figure out for ourselves (we're part of the Katie class
+ after all), b) we have this 3rd argument which defaults to "any"
+ but could in fact be dropped since no one uses it like that.
+
+ o jennifer: doesn't handle bin-only NMUs of stuff still in NEW,
+ BYHAND or ACCEPTED (but not the pool) - not a big deal, upload can
+ be retried once the source is in the archive, but still.
+
+ o security global mail overrides should special case buildd stuff so
+ that buildds get ACCEPTED mails (or maybe amber (?)), that way
+ upload-security doesn't grow boundlessly.
+
+ o amber should upload sourceful packages first, otherwise with big
+ packages (e.g. X) and esp. when source is !i386, half the arches
+ can be uploaded without source, get copied into queue/unaccepted
+ and promptly rejected.
+
+ o rene's NVIU check doesn't catch cases where source package changed
+ name, should check binaries too. [debian-devel@l.d.o, 2004-02-03]
+
+ o cnf[melanie::logfile] is misnamed...
+
+<aj> i'd be kinda inclined to go with insisting the .changes file take
+ the form ---- BEGIN PGP MESSAGE --- <non -- BEGIN/END lines> --
+ BEGIN PGP SIG -- END PGP MESSAGE -- with no lines before or after,
+ and rejecting .changes that didn't match that
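
A minimal sketch of that strictness check, assuming the standard OpenPGP cleartext markers (the function name is invented; the real check would live in jennifer):

```python
def is_strictly_clearsigned(text):
    # Exactly one clearsigned block, with no lines before or after it.
    lines = text.splitlines()
    return (len(lines) >= 4
            and lines[0] == "-----BEGIN PGP SIGNED MESSAGE-----"
            and lines[-1] == "-----END PGP SIGNATURE-----"
            and lines[1:-1].count("-----BEGIN PGP SIGNATURE-----") == 1
            and not any(l.startswith("-----BEGIN PGP SIGNED")
                        for l in lines[1:]))
```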
+
+ o rene should check for source packages not building any binaries
+
+ o heidi should have a diff mode that accepts diff output!
+
+ o halle doesn't deal with melanie'd packages, partial replacements
+ etc. and more.
+
+ o lauren, the tramp, blindly deletes with no check that the delete
+ failed which it might well given we only look for package/version,
+ not package/version _in p-u_. duh.
+
+ o melanie should remove obsolete changes when removing from p-u, or
+ at least warn. or halle should handle it.
+
+ o need a testsuite _badly_
+
+ o lisa should have a Bitch-Then-Accept option
+
+ o jennifer crashes if run as a user in -n mode when orig.tar.gz is
+ in queue/new...
+
+<elmo_home> [<random>maybe I should reject debian packages with a non-Debian origin or bugs field</>]
+<Kamion> [<random>agreed; dunno what origin does but non-Debian bugs fields would be bad]
+
+ o rhona should make use of select..except select, temporary tables
+ etc. rather than looping and calling SQL every time so we can do
+ suite removal sanely (see potato-removal document)
+
+ o melanie will happily include packages in the Cc list that aren't
+ being removed...
+
+ o melanie doesn't remove udebs when removing the source they build from
+
+ o check_dsc_against_db's "delete an entry from files while you're
+ not looking" habit is Evil and Bad.
+
+ o lisa allows you to edit the section and change the component, but
+ really shouldn't.
+
+ o melanie needs to, when not sending bug close mails, promote Cc: to
+ To: and send the mail anyways.
+
+ o the lockfile (Archive_Maintenance_In_Progress) should probably be in a conf file
+
+ o madison should cross-check the b.source field and if it's not null
+ and s.name linked from it != the source given in
+ -S/--source-and-binary ignore.
+
+ o lauren sucks; she should a) only spam d-i for sourceful
+ rejections, b) sort stuff so she rejects sourceful stuff first. the
+ non-sourceful should probably get a form mail, c) automate the
+ non-sourceful stuff (see b).
+
+ o jennifer should do q-d stuff for faster AA [ryan]
+
+ o split the morgue into source and binary so binaries can be purged first!
+
+ o per-architecture priorities for things like different arch'es
+ gcc's, silly BSD libftw, palo, etc.
+
+ o use postgres 7.2's built-in stat features to figure out how indices are used etc.
+
+ o neve shouldn't be using location, she should run down suites instead
+
+ o halle needs to know about udebs
+
+ o by default hamstring katie's mail sending so that she won't send
+ anything until someone edits a script; she's been used far too
+ much to send spam atm :(
+
+ o $ftpdir/indices isn't created by rose because it's not in katie.conf
+
+ o sanity check depends/recommends/suggests too? in fact for any
+ empty field?
+
+[minor] kelly's copychanges, copykatie handling sucks, the per-suite
+ thing is static for all packages, so work out in advance dummy.
+
+[madison] # filenames ?
+[madison] # maintainer, component, install date (source only?), fingerprint?
+
+ o UrgencyLog stuff should minimize its bombing out(?)
+ o Log stuff should open the log file
+
+ o helena should footnote the actual notes, and also * the versions
+ with notes so we can see new versions since being noted...
+
+ o helena should have alternative sorting options, including reverse
+ and with or without differentiation.
+
+ o julia should sync debadmin and ftpmaster (?)
+
+ o <drow> Can't read file.:
+ /org/security.debian.org/queue/accepted/accepted/apache-perl_1.3.9-14.1-1.21.20000309-1_sparc.katie.
+ You assume that the filenames are relative to accepted/, might want
+ to doc or fix that.
+
+ o <neuro> the orig was in NEW, the changes that caused it to be NEW
+ were pulled out in -2, and we end up with no orig in the archive
+ :(
+
+ o SecurityQueueBuild doesn't handle the case of foo_3.3woody1
+ with a new .orig.tar.gz followed by a foo_3.3potato1 with the same
+ .orig.tar.gz; jennifer sees it and copes, but the AA code doesn't
+ and can't really easily know so the potato AA dir is left with no
+ .orig.tar.gz copy. doh.
+
+ o orig.tar.gz in accepted not handled properly (?)
+
+ o amber doesn't include .orig.tar.gz but it should
+
+ o permissions (paranoia, group write, etc.) configurability and overhaul
+
+ o remember duplicate copyrights in lisa and skip them, per package
+
+ o <M>ove option for lisa byhand processing
+
+ o rene could do with overrides
+
+ o db_access.get_location_id should handle the lack of archive_id properly
+
+ o the whole versioncmp thing should be documented
+
+ o lisa doesn't do the right thing with -2 and -1 uploads, as you can
+ end up with the .orig.tar.gz not in the pool
+
+ o lisa exits if you check twice (aj)
+
+ o lisa doesn't trap signals from fernanda properly
+
+ o queued and/or perl on sparc stable sucks - reimplement it.
+
+ o aj's bin nmu changes
+
+ o Lisa:
+ * priority >> optional
+ * arch != {any,all}
+ * build-depends wrong (via andrea)
+ * suid
+ * conflicts
+ * notification/stats to admin daily
+ o trap fernanda exiting
+ o distinguish binary only versus others (neuro)
+
+ o cache changes parsed from ordering (careful tho: would be caching
+ .changes from world writable incoming, not holding)
+
+ o katie doesn't recognise binonlyNMUs correctly in terms of telling
+ who their source is; source-must-exist does, but the info is not
+ propagated down.
+
+ o Fix BTS vs. katie sync issues by queueing(via BSMTP) BTS mail so
+ that it can be released on demand (e.g. ETRN to exim).
+
+ o maintainers file needs overrides
+
+ [ change override.maintainer to override.maintainer-from +
+ override.maintainer-to and have them reference the maintainers
+ table. Then fix charisma to use them and write some scripting
+ to handle the Santiago situation. ]
+
+ o Validate Depends (et al.) [it should match \(\s*(<<|<|<=|=|>=|>|>>)\s*<VERSIONREGEXP>\)]
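
A sketch of that validation as it might look in Python (the version regexp below is a simplification; Debian's real version grammar is richer):

```python
import re

# Simplified stand-in for <VERSIONREGEXP>: digit, then version chars.
VERSION = r"[0-9][A-Za-z0-9.+:~-]*"
# Longer relations first so "<<" isn't consumed as "<".
RELATION = re.compile(r"\(\s*(<<|<=|<|=|>=|>>|>)\s*%s\s*\)$" % VERSION)

def valid_constraint(c):
    # True iff c looks like "(<relation> <version>)".
    return RELATION.match(c) is not None
```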
+
+ o Clean up DONE; archive to tar file every 2 weeks, update tar tvzf INDEX file.
+
+ o testing-updates suite: if binary-only and version << version in
+ unstable and source-ver ~= source-ver in testing; then map
+ unstable -> testing-updates ?
+
+ o hooks or configurability for debian specific checks (e.g. check_urgency, auto-building support)
+
+ o morgue needs auto-cleaning (?)
+
+ o saffron: two modes, all included, separate
+ o saffron: add non-US
+ o saffron: add ability to control components, architectures, archives, suites
+ o saffron: add key to expand header
+
+================================================================================
+
+queue/approved
+--------------
+
+ o What to do with multi-suite uploads? Presumably hold in unapproved
+ and warn? Or what? Can't accept just for unstable or reject just
+ from stable.
+
+ o Whenever we check for anything in accepted we also need to check in
+ unapproved.
+
+ o non-sourceful uploads should go straight through if they have
+ source in accepted or the archive.
+
+ o security uploads on auric should be pre-approved.
+
+================================================================================
+
+Less Urgent
+-----------
+
+ o change utils.copy to try rename() first
+
+ o [hard, long term] unchecked -> accepted should go into the db, not
+ a suite, but similar. this would allow katie to get even faster,
+ make madison more useful, decomplexify specialacceptedautobuild
+ and generally be more sane. may even be helpful to have e.g. new
+ in the DB, so that we avoid corner cases like the .orig.tar.gz
+ disappearing 'cos the package has been entirely removed but was
+ still on stayofexecution when it entered new.
+
+ o Logging [mostly done] (todo: rhona (hard), .. ?)
+
+ o jennifer: the tar extractor class doesn't need to be redone for each package
+
+ o reverse of source-must-exist; i.e. binary-for-source-must-not-exist
+ o REJECT reminders in shania.
+ o fernanda should check for conflicts and warn about them vis-a-vis priority [rmurray]
+ o store a list of removed/files versions; also compare against them.
+ [but be careful about scalability]
+
+ o fernanda: print_copyright should be a lot more intelligent
+ @ handle copyright.gz
+ @ handle copyright.ja and copyright
+ @ handle (detect at least) symlinks to another package's doc directory
+ @ handle and/or fall back on source files (?)
+
+ o To incorporate from utils:
+ @ unreject
+
+ o auto-purge out-of-date stuff from non-free/contrib so that testing and stuff works
+ o doogie's binary -> source index
+ o jt's web stuff, matt's changelog stuff (overlap)
+
+ o [Hard] Need to merge non-non-US and non-US DBs.
+
+ o experimental needs to auto clean (relative to unstable) [partial: rene warns about this]
+
+ o Do a checkpc(1)-a-like which sanitizes config files.
+ o fix parse_changes()/build_file_list() to sanity check filenames
+ o safety check and/or rename debs so they match what they should be
+
+ o Improve andrea.
+ o Need to optimize all the queries by using EXPLAIN and building some indexes.
+ [postgresql 7.2 will help here]
+ o Need to enclose all the setting SQL stuff in transactions (mostly done).
+ o Need to finish alyson (a way to sync katie.conf and the DB)
+ o Need the ability to rebuild all other tables from dists _or_ pools (in the event of disaster) (?)
+ o Make the --help and --version options do stuff for all scripts
+
+ o charisma can't handle whitespace-only lines (for the moment, this is feature)
+
+ o generic way of saying isabinary and isadsc. (?)
+
+ o s/distribution/suite/g
+
+ o cron.weekly:
+ @ weekly postins to d-c (?)
+ @ backup of report (?)
+ @ backup of changes.tgz (?)
+
+ o --help doesn't work without /etc/katie/katie.conf (or similar) at
+ least existing.
+
+ o rename andrea (clashes with existing andrea)...
+
+ * Harder:
+
+ o interrupting or stracing jennifer causes exception errors from apt_inst calls
+ o dependency checking (esp. stable) (partially done)
+ o override checking sucks; it needs to track changes made by the
+ maintainer and pass them on to ftpmaster instead of warning the
+ maintainer.
+ o need to do proper rfc822 escaping of from lines (as opposed to s/\.//g)
+ o Revisit linking of binary->source in install() in katie.
+ o Fix component handling in overrides (aj)
+ o Fix lack of entries in source overrides (aj)
+ o direport misreports things as section 'devel' (? we don't use direport)
+ o vrfy check of every Maintainer+Changed-By address; valid for 3 months.
+ o binary-all should be done on a per-source, per-architecture package
+ basis to avoid, e.g. the perl-modules problem.
+ o a source-missing-diff check: if the version has a - in it, and it
+ is sourceful, it needs orig and diff, e.g. if someone uploads
+ esound_0.2.22-6, and it is sourceful, and there is no diff ->
+ REJECT (version has a dash, therefore not debian native.)
+ o check linking of .tar.gz's to .dsc's.. see proftpd 1.2.1 as an example
+ o archive needs to be md5sum'ed regularly, but it takes too long to
+ do it all in one go; make it progressive or weekly.
+ o katie/jenna/rhona/whatever needs to clear out .changes
+ files from p-u when removing stuff superseded by newer versions.
+ [but for now we have halle]
+ o test sig checking stuff in test/ (stupid thing is not modularized due to global abuse)
+ o when encountering suspicious things (e.g. file tainting) do something more drastic
+
+ * Easy:
+
+ o suite mapping and component mapping are parsed per changes file,
+ they should probably be stored in a dictionary created at startup.
+ o don't stat/md5sum files you have entries for in the DB, moron
+ boy (Katie.check_source_blah_blah)
+ o promote changes["changes"] to mandatory in katie.py(dump_vars)
+ after a month or so (or once all .katie files in the queue
+ contain it).
+ o melanie should behave better with -a and without -b; see
+ gcc-defaults removal for an example.
+ o Reject on misconfigured kernel-package uploads
+ o utils.extract_component_from_section: main/utils -> main/utils, main rather than utils, main
+ o Fix jennifer to warn if run when not in incoming or p-u
+ o katie should validate multi-suite uploads; only possible valid one
+ is "stable unstable"
+ o cron.daily* should change umask (aj sucks)
+ o Rene doesn't look at debian-installer but should.
+ o Rene needs to check for binary-less source packages.
+ o Rene could accept a suite argument (?)
+ o byhand stuff should send notification
+ o catherine should update db; move files, not the other way around [neuro]
+ o melanie should update the stable changelog [joey]
+ o update tagdb.dia
+
+ * Bizarre/uncertain:
+
+ o drop rather dubious currval stuff (?)
+ o rationalize os.path.join() usage
+ o Rene also doesn't seem to warn about missing binary packages (??)
+ o logging: hostname + pid ?
+ o ANAIS should be done in katie (?)
+ o Add an 'add' ability to melanie (? separate prog maybe)
+ o Replicate old dinstall report stuff (? needed ?)
+ o Handle the case of 1:1.1 which would overwrite 1.1 (?)
+ o maybe drop -r/--regex in madison, make it the default and
+ implement -e/--exact (a la joey's "elmo")
+ o dsc files are not checked for existence/perms (only an issue if
+ they're in the .dsc, but not the .changes.. possible?)
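The "1:1.1 which would overwrite 1.1" item above is an epoch problem: the epoch counts for version comparison but is omitted from filenames, so two distinct versions can map to the same file. A minimal pure-Python sketch of the collision check (hypothetical helper names; the real tools would use apt_pkg's version comparison):

```python
def split_epoch(version):
    """Split a Debian version into (epoch, rest); epoch defaults to 0."""
    if ":" in version:
        epoch, rest = version.split(":", 1)
        return int(epoch), rest
    return 0, version

def would_collide(v1, v2):
    """True if two versions differ only by epoch, so their filenames
    (which omit the epoch) would clash in the pool."""
    e1, r1 = split_epoch(v1)
    e2, r2 = split_epoch(v2)
    return r1 == r2 and e1 != e2
```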
+
+ * Cleanups & misc:
+
+ o db_access' get_files needs to use exceptions not this None, > 0, < 0 return val BS (?)
+ o The untouchable flag doesn't stop new packages being added to ``untouchable'' suites
+
+================================================================================
+
+Packaging
+---------
+
+ o Fix stuff to look in sensible places for libs and config file in debian package (?)
+
+================================================================================
+
+ --help manpage
+-----------------------------------------------------------------------------
+alyson X
+amber X
+andrea X
+ashley X
+catherine X X
+charisma X X
+cindy X X
+claire X
+denise X
+fernanda X
+halle X
+heidi X X
+helena X
+jenna X
+jennifer X
+jeri X
+julia X X
+kelly X X
+lisa X X
+madison X X
+melanie X X
+natalie X X
+neve X
+rene X
+rose X
+rhona X X
+saffron X
+shania X
+tea X
+ziyi X
+
+
+================================================================================
+
+Random useful-at-some-point SQL
+-------------------------------
+
+UPDATE files SET last_used = '1980-01-01'
+ FROM binaries WHERE binaries.architecture = <x>
+ AND binaries.file = files.id;
+
+DELETE FROM bin_associations
+ WHERE EXISTS (SELECT id FROM binaries
+ WHERE architecture = <x>
+ AND id = bin_associations.bin);
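The queries above take a literal architecture id in place of `<x>` (left as-is). A hedged sketch of running the same "expire files for an architecture" update with a bound parameter instead of string interpolation, using an in-memory SQLite stand-in for the real PostgreSQL schema (the table subset here is illustrative only):

```python
import sqlite3

# Tiny stand-in schema for projectB's files/binaries tables (illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE files (id INTEGER PRIMARY KEY, last_used TEXT);
    CREATE TABLE binaries (id INTEGER PRIMARY KEY, architecture INTEGER,
                           file INTEGER REFERENCES files(id));
    INSERT INTO files VALUES (1, NULL), (2, NULL);
    INSERT INTO binaries VALUES (10, 42, 1), (11, 7, 2);
""")
# Parameterised variant of the UPDATE ... FROM above: expire every file
# referenced by a binary of the given architecture.
conn.execute("""
    UPDATE files SET last_used = '1980-01-01'
     WHERE id IN (SELECT file FROM binaries WHERE architecture = ?)
""", (42,))
rows = conn.execute("SELECT id, last_used FROM files ORDER BY id").fetchall()
```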
+
+================================================================================
+++ /dev/null
-#!/usr/bin/env python
-
-# Sync fingerprint and uid tables with a debian.org LDAP DB
-# Copyright (C) 2003, 2004 James Troup <james@nocrew.org>
-# $Id: emilie,v 1.3 2004-11-27 13:25:35 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <elmo> ping@debian.org ?
-# <aj> missing@ ? wtfru@ ?
-# <elmo> giggle
-# <elmo> I like wtfru
-# <aj> all you have to do is retrofit wtfru into an acronym and no one
-# could possibly be offended!
-# <elmo> aj: worried terriers for russian unity ?
-# <aj> uhhh
-# <aj> ooookkkaaaaay
-# <elmo> wthru is a little less offensive maybe? but myabe that's
-# just because I read h as heck, not hell
-# <elmo> ho hum
-# <aj> (surely the "f" stands for "freedom" though...)
-# <elmo> where the freedom are you?
-# <aj> 'xactly
-# <elmo> or worried terriers freed (of) russian unilateralism ?
-# <aj> freedom -- it's the "foo" of the 21st century
-# <aj> oo, how about "wat@" as in wherefore art thou?
-# <neuro> or worried attack terriers
-# <aj> Waning Trysts Feared - Return? Unavailable?
-# <aj> (i find all these terriers more worrying, than worried)
-# <neuro> worrying attack terriers, then
-
-################################################################################
-
-import commands, ldap, pg, re, sys, time;
-import apt_pkg;
-import db_access, utils;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-re_gpg_fingerprint = re.compile(r"^\s+Key fingerprint = (.*)$", re.MULTILINE);
-re_debian_address = re.compile(r"^.*<(.*)@debian\.org>$", re.MULTILINE);
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: emilie
-Syncs fingerprint and uid tables with a debian.org LDAP DB
-
- -h, --help show this help and exit."""
- sys.exit(exit_code)
-
-################################################################################
-
-def get_ldap_value(entry, value):
- ret = entry.get(value);
- if not ret:
- return "";
- else:
- # FIXME: what about > 0 ?
- return ret[0];
-
-def main():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
- Arguments = [('h',"help","Emilie::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Emilie::Options::%s" % (i)):
- Cnf["Emilie::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Emilie::Options")
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- #before = time.time();
- #sys.stderr.write("[Getting info from the LDAP server...");
- LDAPDn = Cnf["Emilie::LDAPDn"];
- LDAPServer = Cnf["Emilie::LDAPServer"];
- l = ldap.open(LDAPServer);
- l.simple_bind_s("","");
- Attrs = l.search_s(LDAPDn, ldap.SCOPE_ONELEVEL,
- "(&(keyfingerprint=*)(gidnumber=%s))" % (Cnf["Julia::ValidGID"]),
- ["uid", "keyfingerprint"]);
- #sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
-
-
- projectB.query("BEGIN WORK");
-
-
- # Sync LDAP with DB
- db_fin_uid = {};
- ldap_fin_uid_id = {};
- q = projectB.query("""
-SELECT f.fingerprint, f.id, u.uid FROM fingerprint f, uid u WHERE f.uid = u.id
- UNION SELECT f.fingerprint, f.id, null FROM fingerprint f where f.uid is null""");
- for i in q.getresult():
- (fingerprint, fingerprint_id, uid) = i;
- db_fin_uid[fingerprint] = (uid, fingerprint_id);
-
- for i in Attrs:
- entry = i[1];
- fingerprints = entry["keyFingerPrint"];
- uid = entry["uid"][0];
- uid_id = db_access.get_or_set_uid_id(uid);
- for fingerprint in fingerprints:
- ldap_fin_uid_id[fingerprint] = (uid, uid_id);
- if db_fin_uid.has_key(fingerprint):
- (existing_uid, fingerprint_id) = db_fin_uid[fingerprint];
- if not existing_uid:
- q = projectB.query("UPDATE fingerprint SET uid = %s WHERE id = %s" % (uid_id, fingerprint_id));
- print "Assigning %s to 0x%s." % (uid, fingerprint);
- else:
- if existing_uid != uid:
- utils.fubar("%s has %s in LDAP, but projectB says it should be %s." % (uid, fingerprint, existing_uid));
-
- # Try to update people who sign with non-primary key
- q = projectB.query("SELECT fingerprint, id FROM fingerprint WHERE uid is null");
- for i in q.getresult():
- (fingerprint, fingerprint_id) = i;
- cmd = "gpg --no-default-keyring --keyring=%s --keyring=%s --fingerprint %s" \
- % (Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"],
- fingerprint);
- (result, output) = commands.getstatusoutput(cmd);
- if result == 0:
- m = re_gpg_fingerprint.search(output);
- if not m:
- print output
- utils.fubar("0x%s: No fingerprint found in gpg output but it returned 0?\n%s" % (fingerprint, utils.prefix_multi_line_string(output, " [GPG output:] ")));
- primary_key = m.group(1);
- primary_key = primary_key.replace(" ","");
- if not ldap_fin_uid_id.has_key(primary_key):
- utils.fubar("0x%s (from 0x%s): no UID found in LDAP" % (primary_key, fingerprint));
- (uid, uid_id) = ldap_fin_uid_id[primary_key];
- q = projectB.query("UPDATE fingerprint SET uid = %s WHERE id = %s" % (uid_id, fingerprint_id));
- print "Assigning %s to 0x%s." % (uid, fingerprint);
- else:
- extra_keyrings = "";
- for keyring in Cnf.ValueList("Emilie::ExtraKeyrings"):
- extra_keyrings += " --keyring=%s" % (keyring);
- cmd = "gpg --keyring=%s --keyring=%s %s --list-key %s" \
- % (Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"],
- extra_keyrings, fingerprint);
- (result, output) = commands.getstatusoutput(cmd);
- if result != 0:
- cmd = "gpg --keyserver=%s --allow-non-selfsigned-uid --recv-key %s" % (Cnf["Emilie::KeyServer"], fingerprint);
- (result, output) = commands.getstatusoutput(cmd);
- if result != 0:
- print "0x%s: NOT found on keyserver." % (fingerprint);
- print cmd
- print result
- print output
- continue;
- else:
- cmd = "gpg --list-key %s" % (fingerprint);
- (result, output) = commands.getstatusoutput(cmd);
- if result != 0:
- print "0x%s: --list-key returned error after --recv-key didn't." % (fingerprint);
- print cmd
- print result
- print output
- continue;
- m = re_debian_address.search(output);
- if m:
- guess_uid = m.group(1);
- else:
- guess_uid = "???";
- name = " ".join(output.split('\n')[0].split()[3:]);
- print "0x%s -> %s -> %s" % (fingerprint, name, guess_uid);
- # FIXME: make me optionally non-interactive
- # FIXME: default to the guessed ID
- uid = None;
- while not uid:
- uid = utils.our_raw_input("Map to which UID ? ");
- Attrs = l.search_s(LDAPDn,ldap.SCOPE_ONELEVEL,"(uid=%s)" % (uid), ["cn","mn","sn"])
- if not Attrs:
- print "That UID doesn't exist in LDAP!"
- uid = None;
- else:
- entry = Attrs[0][1];
- name = " ".join([get_ldap_value(entry, "cn"),
- get_ldap_value(entry, "mn"),
- get_ldap_value(entry, "sn")]);
- prompt = "Map to %s - %s (y/N) ? " % (uid, name.replace("  ", " "));
- yn = utils.our_raw_input(prompt).lower();
- if yn == "y":
- uid_id = db_access.get_or_set_uid_id(uid);
- projectB.query("UPDATE fingerprint SET uid = %s WHERE id = %s" % (uid_id, fingerprint_id));
- print "Assigning %s to 0x%s." % (uid, fingerprint);
- else:
- uid = None;
- projectB.query("COMMIT WORK");
-
-############################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-// Example /etc/katie/katie.conf
-
-Config
-{
- // FQDN hostname
- auric.debian.org
- {
-
- // Optional hostname as it appears in the database (if it differs
- // from the FQDN hostname).
- DatbaseHostname "ftp-master";
-
- // Optional filename of katie's config file; if not present, this
- // file is assumed to contain katie config info.
- KatieConfig "/org/ftp.debian.org/katie/katie.conf";
-
- // Optional filename of apt-ftparchive's config file; if not
- // present, the file is assumed to be 'apt.conf' in the same
- // directory as this file.
- AptConfig "/org/ftp.debian.org/katie/apt.conf";
- }
-
-}
+++ /dev/null
-#!/usr/bin/env python
-
-# Script to automate some parts of checking NEW packages
-# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
-# $Id: fernanda.py,v 1.10 2003-11-10 23:01:17 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <Omnic> elmo wrote docs?!!?!?!?!?!?!
-# <aj> as if he wasn't scary enough before!!
-# * aj imagines a little red furry toy sitting hunched over a computer
-# tapping furiously and giggling to himself
-# <aj> eventually he stops, and his heads slowly spins around and you
-# see this really evil grin and then he sees you, and picks up a
-# knife from beside the keyboard and throws it at you, and as you
-# breathe your last breath, he starts giggling again
-# <aj> but i should be telling this to my psychiatrist, not you guys,
-# right? :)
-
-################################################################################
-
-import errno, os, re, sys
-import utils
-import apt_pkg, apt_inst
-import pg, db_access
-
-################################################################################
-
-re_package = re.compile(r"^(.+?)_.*");
-re_doc_directory = re.compile(r".*/doc/([^/]*).*");
-
-re_contrib = re.compile('^contrib/')
-re_nonfree = re.compile('^non\-free/')
-
-re_arch = re.compile("Architecture: .*")
-re_builddep = re.compile("Build-Depends: .*")
-re_builddepind = re.compile("Build-Depends-Indep: .*")
-
-re_localhost = re.compile("localhost\.localdomain")
-re_version = re.compile('^(.*)\((.*)\)')
-
-re_newlinespace = re.compile('\n')
-re_spacestrip = re.compile('(\s)')
-
-################################################################################
-
-# Colour definitions
-
-# Main
-main_colour = "\033[36m";
-# Contrib
-contrib_colour = "\033[33m";
-# Non-Free
-nonfree_colour = "\033[31m";
-# Arch
-arch_colour = "\033[32m";
-# End
-end_colour = "\033[0m";
-# Bold
-bold_colour = "\033[1m";
-# Bad maintainer
-maintainer_colour = arch_colour;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-Cnf = utils.get_conf()
-projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]))
-db_access.init(Cnf, projectB);
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: fernanda [PACKAGE]...
-Check NEW package(s).
-
- -h, --help show this help and exit
-
-PACKAGE can be a .changes, .dsc, .deb or .udeb filename."""
-
- sys.exit(exit_code)
-
-################################################################################
-
-def get_depends_parts(depend) :
- v_match = re_version.match(depend)
- if v_match:
- d_parts = { 'name' : v_match.group(1), 'version' : v_match.group(2) }
- else :
- d_parts = { 'name' : depend , 'version' : '' }
- return d_parts
-
-def get_or_list(depend) :
- or_list = depend.split("|");
- return or_list
-
-def get_comma_list(depend) :
- dep_list = depend.split(",");
- return dep_list
-
-def split_depends (d_str) :
- # creates a list of lists of dictionaries of depends (package,version relation)
-
- d_str = re_spacestrip.sub('',d_str);
- depends_tree = [];
- # first split the depends string on the comma delimiter
- dep_list = get_comma_list(d_str);
- d = 0;
- while d < len(dep_list):
- # put depends into their own list
- depends_tree.append([dep_list[d]]);
- d += 1;
- d = 0;
- while d < len(depends_tree):
- k = 0;
- # split up Or'd depends into a multi-item list
- depends_tree[d] = get_or_list(depends_tree[d][0]);
- while k < len(depends_tree[d]):
- # split depends into {package, version relation}
- depends_tree[d][k] = get_depends_parts(depends_tree[d][k]);
- k += 1;
- d += 1;
- return depends_tree;
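For reference, split_depends() can be exercised standalone; a self-contained sketch with the two module regexes it relies on inlined (same parsing as above, trimmed to the essentials):

```python
import re

re_spacestrip = re.compile(r'(\s)')
re_version = re.compile(r'^(.*)\((.*)\)')

def split_depends(d_str):
    """Parse 'a (>= 1), b | c' into a list (AND groups) of lists (OR
    alternatives) of {'name': ..., 'version': ...} dicts."""
    d_str = re_spacestrip.sub('', d_str)          # drop all whitespace
    tree = []
    for clause in d_str.split(','):               # comma-separated AND groups
        alternatives = []
        for alt in clause.split('|'):             # pipe-separated OR choices
            m = re_version.match(alt)
            if m:
                alternatives.append({'name': m.group(1), 'version': m.group(2)})
            else:
                alternatives.append({'name': alt, 'version': ''})
        tree.append(alternatives)
    return tree
```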
-
-def read_control (filename):
- recommends = [];
- depends = [];
- section = '';
- maintainer = '';
- arch = '';
-
- deb_file = utils.open_file(filename);
- try:
- extracts = apt_inst.debExtractControl(deb_file);
- control = apt_pkg.ParseSection(extracts);
- except:
- print "can't parse control info";
- control = '';
-
- deb_file.close();
-
- if control == '':
- # bail out early; '' has no keys() and there is nothing to report
- return (control, [], section, depends, recommends, arch, maintainer);
-
- control_keys = control.keys();
-
- if control.has_key("Depends"):
- depends_str = control.Find("Depends");
- # create list of dependency lists
- depends = split_depends(depends_str);
-
- if control.has_key("Recommends"):
- recommends_str = control.Find("Recommends");
- recommends = split_depends(recommends_str);
-
- if control.has_key("Section"):
- section_str = control.Find("Section");
-
- c_match = re_contrib.search(section_str)
- nf_match = re_nonfree.search(section_str)
- if c_match :
- # contrib colour
- section = contrib_colour + section_str + end_colour
- elif nf_match :
- # non-free colour
- section = nonfree_colour + section_str + end_colour
- else :
- # main
- section = main_colour + section_str + end_colour
- if control.has_key("Architecture"):
- arch_str = control.Find("Architecture")
- arch = arch_colour + arch_str + end_colour
-
- if control.has_key("Maintainer"):
- maintainer = control.Find("Maintainer")
- localhost = re_localhost.search(maintainer)
- if localhost:
- #highlight bad email
- maintainer = maintainer_colour + maintainer + end_colour;
-
- return (control, control_keys, section, depends, recommends, arch, maintainer)
-
-def read_dsc (dsc_filename):
- dsc = {};
-
- dsc_file = utils.open_file(dsc_filename);
- try:
- dsc = utils.parse_changes(dsc_filename);
- except:
- print "can't parse .dsc file"
- dsc_file.close();
-
- filecontents = strip_pgp_signature(dsc_filename);
-
- if dsc.has_key("build-depends"):
- builddep = split_depends(dsc["build-depends"]);
- builddepstr = create_depends_string(builddep);
- filecontents = re_builddep.sub("Build-Depends: "+builddepstr, filecontents);
-
- if dsc.has_key("build-depends-indep"):
- builddepindstr = create_depends_string(split_depends(dsc["build-depends-indep"]));
- filecontents = re_builddepind.sub("Build-Depends-Indep: "+builddepindstr, filecontents);
-
- if dsc.has_key("architecture") :
- if (dsc["architecture"] != "any"):
- newarch = arch_colour + dsc["architecture"] + end_colour;
- filecontents = re_arch.sub("Architecture: " + newarch, filecontents);
-
- return filecontents;
-
-def create_depends_string (depends_tree):
- # just look up unstable for now. possibly pull from .changes later
- suite = "unstable";
- result = "";
- comma_count = 1;
- for l in depends_tree:
- if (comma_count >= 2):
- result += ", ";
- or_count = 1
- for d in l:
- if (or_count >= 2 ):
- result += " | "
- # doesn't do version lookup yet.
-
- q = projectB.query("SELECT DISTINCT(b.package), b.version, c.name, su.suite_name FROM binaries b, files fi, location l, component c, bin_associations ba, suite su WHERE b.package='%s' AND b.file = fi.id AND fi.location = l.id AND l.component = c.id AND ba.bin=b.id AND ba.suite = su.id AND su.suite_name='%s' ORDER BY b.version desc" % (d['name'], suite));
- ql = q.getresult();
- if ql:
- i = ql[0];
-
- if i[2] == "contrib":
- result += contrib_colour + d['name'];
- elif i[2] == "non-free":
- result += nonfree_colour + d['name'];
- else :
- result += main_colour + d['name'];
-
- if d['version'] != '' :
- result += " (%s)" % (d['version']);
- result += end_colour;
- else:
- result += bold_colour + d['name'];
- if d['version'] != '' :
- result += " (%s)" % (d['version']);
- result += end_colour;
- or_count += 1;
- comma_count += 1;
- return result;
-
-def output_deb_info(filename):
- (control, control_keys, section, depends, recommends, arch, maintainer) = read_control(filename);
-
- if control == '':
- print "no control info"
- else:
- for key in control_keys :
- output = " " + key + ": "
- if key == 'Depends':
- output += create_depends_string(depends);
- elif key == 'Recommends':
- output += create_depends_string(recommends);
- elif key == 'Section':
- output += section;
- elif key == 'Architecture':
- output += arch;
- elif key == 'Maintainer':
- output += maintainer;
- elif key == 'Description':
- desc = control.Find(key);
- desc = re_newlinespace.sub('\n ', desc);
- output += desc;
- else:
- output += control.Find(key);
- print output;
-
-def do_command (command, filename):
- o = os.popen("%s %s" % (command, filename));
- print o.read();
-
-def print_copyright (deb_filename):
- package = re_package.sub(r'\1', deb_filename);
- o = os.popen("ar p %s data.tar.gz | tar tzvf - | egrep 'usr(/share)?/doc/[^/]*/copyright' | awk '{ print $6 }' | head -n 1" % (deb_filename));
- copyright = o.read()[:-1];
-
- if copyright == "":
- print "WARNING: No copyright found, please check package manually."
- return;
-
- doc_directory = re_doc_directory.sub(r'\1', copyright);
- if package != doc_directory:
- print "WARNING: wrong doc directory (expected %s, got %s)." % (package, doc_directory);
- return;
-
- o = os.popen("ar p %s data.tar.gz | tar xzOf - %s" % (deb_filename, copyright));
- print o.read();
-
-def check_dsc (dsc_filename):
- print "---- .dsc file for %s ----" % (dsc_filename);
- (dsc) = read_dsc(dsc_filename)
- print dsc
-
-def check_deb (deb_filename):
- filename = os.path.basename(deb_filename);
-
- if filename.endswith(".udeb"):
- is_a_udeb = 1;
- else:
- is_a_udeb = 0;
-
- print "---- control file for %s ----" % (filename);
- #do_command ("dpkg -I", deb_filename);
- output_deb_info(deb_filename)
-
- if is_a_udeb:
- print "---- skipping lintian check for µdeb ----";
- print ;
- else:
- print "---- lintian check for %s ----" % (filename);
- do_command ("lintian", deb_filename);
- print "---- linda check for %s ----" % (filename);
- do_command ("linda", deb_filename);
-
- print "---- contents of %s ----" % (filename);
- do_command ("dpkg -c", deb_filename);
-
- if is_a_udeb:
- print "---- skipping copyright for µdeb ----";
- else:
- print "---- copyright of %s ----" % (filename);
- print_copyright(deb_filename);
-
- print "---- file listing of %s ----" % (filename);
- do_command ("ls -l", deb_filename);
-
-# Read a file, strip the signature and return the modified contents as
-# a string.
-def strip_pgp_signature (filename):
- file = utils.open_file (filename);
- contents = "";
- inside_signature = 0;
- skip_next = 0;
- for line in file.readlines():
- if line[:-1] == "":
- continue;
- if line.startswith("-----END PGP SIGNATURE"):
- # check the end marker before the inside_signature skip,
- # otherwise the flag is never cleared
- inside_signature = 0;
- continue;
- if inside_signature:
- continue;
- if skip_next:
- skip_next = 0;
- continue;
- if line.startswith("-----BEGIN PGP SIGNED MESSAGE"):
- skip_next = 1;
- continue;
- if line.startswith("-----BEGIN PGP SIGNATURE"):
- inside_signature = 1;
- continue;
- contents += line;
- file.close();
- return contents;
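A string-based sketch of the same stripping logic, convenient for exercising without a file on disk (hypothetical function name; same blank-line and marker handling as strip_pgp_signature above):

```python
def strip_pgp_signature_text(text):
    """Strip a clearsign wrapper: drop the BEGIN PGP SIGNED MESSAGE header
    and its following Hash: line, the whole signature block, and blank
    lines; return the remaining content."""
    out = []
    inside_signature = False
    skip_next = False
    for line in text.splitlines(True):
        if line.rstrip("\n") == "":
            continue
        if inside_signature:
            if line.startswith("-----END PGP SIGNATURE"):
                inside_signature = False
            continue
        if skip_next:
            skip_next = False
            continue
        if line.startswith("-----BEGIN PGP SIGNED MESSAGE"):
            skip_next = True          # also swallow the Hash: line
            continue
        if line.startswith("-----BEGIN PGP SIGNATURE"):
            inside_signature = True
            continue
        out.append(line)
    return "".join(out)

sample = ("-----BEGIN PGP SIGNED MESSAGE-----\n"
          "Hash: SHA1\n\n"
          "Source: foo\nVersion: 1.0\n\n"
          "-----BEGIN PGP SIGNATURE-----\n\nabc\n"
          "-----END PGP SIGNATURE-----\n")
cleaned = strip_pgp_signature_text(sample)
```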
-
-# Display the .changes [without the signature]
-def display_changes (changes_filename):
- print "---- .changes file for %s ----" % (changes_filename);
- print strip_pgp_signature(changes_filename);
-
-def check_changes (changes_filename):
- display_changes(changes_filename);
-
- changes = utils.parse_changes (changes_filename);
- files = utils.build_file_list(changes);
- for file in files.keys():
- if file.endswith(".deb") or file.endswith(".udeb"):
- check_deb(file);
- if file.endswith(".dsc"):
- check_dsc(file);
- # else: => byhand
-
-def main ():
- global Cnf, projectB, db_files, waste, excluded;
-
-# Cnf = utils.get_conf()
-
- Arguments = [('h',"help","Fernanda::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Fernanda::Options::%s" % (i)):
- Cnf["Fernanda::Options::%s" % (i)] = "";
-
- args = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Fernanda::Options")
-
- if Options["Help"]:
- usage();
-
- stdout_fd = sys.stdout;
-
- for file in args:
- try:
- # Pipe output for each argument through less
- less_fd = os.popen("less -R -", 'w', 0);
- # -R added to display raw control chars for colour
- sys.stdout = less_fd;
-
- try:
- if file.endswith(".changes"):
- check_changes(file);
- elif file.endswith(".deb") or file.endswith(".udeb"):
- check_deb(file);
- elif file.endswith(".dsc"):
- check_dsc(file);
- else:
- utils.fubar("Unrecognised file type: '%s'." % (file));
- finally:
- # Reset stdout here so future less invocations aren't FUBAR
- less_fd.close();
- sys.stdout = stdout_fd;
- except IOError, e:
- if errno.errorcode[e.errno] == 'EPIPE':
- utils.warn("[fernanda] Caught EPIPE; skipping.");
- pass;
- else:
- raise;
- except KeyboardInterrupt:
- utils.warn("[fernanda] Caught C-c; skipping.");
- pass;
-
-#######################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Remove obsolete .changes files from proposed-updates
-# Copyright (C) 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: halle,v 1.13 2005-12-17 10:57:03 rmurray Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, pg, re, sys;
-import utils, db_access;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-Options = None;
-pu = {};
-
- re_isdeb = re.compile (r"^(.+)_(.+?)_(.+?)\.u?deb$");
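The filename pattern splits package_version_architecture out of a (u)deb name; a standalone sketch (with the dot escaped so it only matches a literal ".deb"/".udeb" suffix):

```python
import re

# Standalone copy of halle's filename pattern:
# <package>_<version>_<architecture>.deb or .udeb
re_isdeb = re.compile(r"^(.+)_(.+?)_(.+?)\.u?deb$")

m = re_isdeb.match("esound_0.2.22-6_i386.deb")
pkg, version, arch = m.groups()
```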
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: halle [OPTION] <CHANGES FILE | ADMIN FILE>[...]
-Remove obsolete changes files from proposed-updates.
-
- -v, --verbose be more verbose about what is being done
- -h, --help show this help and exit
-
-Need either changes files or an admin.txt file with a '.joey' suffix."""
- sys.exit(exit_code)
-
-################################################################################
-
-def check_changes (filename):
- try:
- changes = utils.parse_changes(filename);
- files = utils.build_file_list(changes);
- except:
- utils.warn("Couldn't read changes file '%s'." % (filename));
- return;
- num_files = len(files.keys());
- for file in files.keys():
- if utils.re_isadeb.match(file):
- m = re_isdeb.match(file);
- pkg = m.group(1);
- version = m.group(2);
- arch = m.group(3);
- if Options["debug"]:
- print "BINARY: %s ==> %s_%s_%s" % (file, pkg, version, arch);
- else:
- m = utils.re_issource.match(file)
- if m:
- pkg = m.group(1);
- version = m.group(2);
- type = m.group(3);
- if type != "dsc":
- del files[file];
- num_files -= 1;
- continue;
- arch = "source";
- if Options["debug"]:
- print "SOURCE: %s ==> %s_%s_%s" % (file, pkg, version, arch);
- else:
- utils.fubar("unknown type, fix me");
- if not pu.has_key(pkg):
- # FIXME
- utils.warn("%s doesn't seem to exist in p-u?? (from %s [%s])" % (pkg, file, filename));
- continue;
- if not pu[pkg].has_key(arch):
- # FIXME
- utils.warn("%s doesn't seem to exist for %s in p-u?? (from %s [%s])" % (pkg, arch, file, filename));
- continue;
- pu_version = utils.re_no_epoch.sub('', pu[pkg][arch]);
- if pu_version == version:
- if Options["verbose"]:
- print "%s: ok" % (file);
- else:
- if Options["verbose"]:
- print "%s: superseded, removing. [%s]" % (file, pu_version);
- del files[file];
-
- new_num_files = len(files.keys());
- if new_num_files == 0:
- print "%s: no files left, superseded by %s" % (filename, pu_version);
- dest = Cnf["Dir::Morgue"] + "/misc/";
- utils.move(filename, dest);
- elif new_num_files < num_files:
- print "%s: lost files, MWAAP." % (filename);
- else:
- if Options["verbose"]:
- print "%s: ok" % (filename);
-
-################################################################################
-
-def check_joey (filename):
- file = utils.open_file(filename);
-
- cwd = os.getcwd();
- os.chdir("%s/dists/proposed-updates" % (Cnf["Dir::Root"]));
-
- for line in file.readlines():
- line = line.rstrip();
- if line.find('install') != -1:
- split_line = line.split();
- if len(split_line) != 2:
- utils.fubar("Parse error (not exactly 2 elements): %s" % (line));
- install_type = split_line[0];
- if install_type not in [ "install", "install-u", "sync-install" ]:
- utils.fubar("Unknown install type ('%s') from: %s" % (install_type, line));
- changes_filename = split_line[1]
- if Options["debug"]:
- print "Processing %s..." % (changes_filename);
- check_changes(changes_filename);
-
- os.chdir(cwd);
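The '.joey' admin-file lines that check_joey() consumes have the form "<install-type> <changes-file>"; a standalone sketch of the per-line parse (hypothetical helper name, mirroring the checks above):

```python
def parse_joey_line(line):
    """Parse one line of a '.joey' admin file: return
    (install_type, changes_filename) for install lines, None otherwise."""
    line = line.rstrip()
    if 'install' not in line:
        return None
    parts = line.split()
    if len(parts) != 2:
        raise ValueError("Parse error (not exactly 2 elements): %s" % line)
    install_type, changes_filename = parts
    if install_type not in ("install", "install-u", "sync-install"):
        raise ValueError("Unknown install type ('%s')" % install_type)
    return install_type, changes_filename
```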
-
-################################################################################
-
-def init_pu ():
- global pu;
-
- q = projectB.query("""
-SELECT b.package, b.version, a.arch_string
- FROM bin_associations ba, binaries b, suite su, architecture a
- WHERE b.id = ba.bin AND ba.suite = su.id
- AND su.suite_name = 'proposed-updates' AND a.id = b.architecture
-UNION SELECT s.source, s.version, 'source'
- FROM src_associations sa, source s, suite su
- WHERE s.id = sa.source AND sa.suite = su.id
- AND su.suite_name = 'proposed-updates'
-ORDER BY package, version, arch_string;
-""");
- ql = q.getresult();
- for i in ql:
- pkg = i[0];
- version = i[1];
- arch = i[2];
- if not pu.has_key(pkg):
- pu[pkg] = {};
- pu[pkg][arch] = version;
-
-def main ():
- global Cnf, projectB, Options;
-
- Cnf = utils.get_conf()
-
- Arguments = [('d', "debug", "Halle::Options::Debug"),
- ('v',"verbose","Halle::Options::Verbose"),
- ('h',"help","Halle::Options::Help")];
- for i in [ "debug", "verbose", "help" ]:
- if not Cnf.has_key("Halle::Options::%s" % (i)):
- Cnf["Halle::Options::%s" % (i)] = "";
-
- arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Halle::Options")
-
- if Options["Help"]:
- usage(0);
- if not arguments:
- utils.fubar("need at least one .changes or .joey file as an argument.");
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- init_pu();
-
- for file in arguments:
- if file.endswith(".changes"):
- check_changes(file);
- elif file.endswith(".joey"):
- check_joey(file);
- else:
- utils.fubar("Unrecognised file type: '%s'." % (file));
-
-#######################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Manipulate suite tags
-# Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
-# $Id: heidi,v 1.19 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-#######################################################################################
-
-# 8to6Guy: "Wow, Bob, You look rough!"
-# BTAF: "Mbblpmn..."
-# BTAF <.oO>: "You moron! This is what you get for staying up all night drinking vodka and salad dressing!"
-# BTAF <.oO>: "This coffee I.V. drip is barely even keeping me awake! I need something with more kick! But what?"
-# BTAF: "OMIGOD! I OVERDOSED ON HEROIN"
-# CoWorker#n: "Give him air!!"
-# CoWorker#n+1: "We need a syringe full of adrenaline!"
-# CoWorker#n+2: "Stab him in the heart!"
-# BTAF: "*YES!*"
-# CoWorker#n+3: "Bob's been overdosing quite a bit lately..."
-# CoWorker#n+4: "Third time this week."
-
-# -- http://www.angryflower.com/8to6.gif
-
-#######################################################################################
-
-# Adds or removes packages from a suite. Takes the list of files
-# either from stdin or as a command line argument. The special
-# action "set" will reset the suite (!) and add all packages from scratch.
-
-#######################################################################################
-
-import pg, sys;
-import apt_pkg;
-import utils, db_access, logging;
-
-#######################################################################################
-
-Cnf = None;
-projectB = None;
-Logger = None;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: heidi [OPTIONS] [FILE]
-Display or alter the contents of a suite using FILE(s), or stdin.
-
- -a, --add=SUITE add to SUITE
- -h, --help show this help and exit
- -l, --list=SUITE list the contents of SUITE
- -r, --remove=SUITE remove from SUITE
- -s, --set=SUITE set SUITE"""
-
- sys.exit(exit_code)
-
-#######################################################################################
-
-def get_id (package, version, architecture):
- if architecture == "source":
- q = projectB.query("SELECT id FROM source WHERE source = '%s' AND version = '%s'" % (package, version))
- else:
- q = projectB.query("SELECT b.id FROM binaries b, architecture a WHERE b.package = '%s' AND b.version = '%s' AND (a.arch_string = '%s' OR a.arch_string = 'all') AND b.architecture = a.id" % (package, version, architecture))
-
- ql = q.getresult();
- if not ql:
- utils.warn("Couldn't find '%s~%s~%s'." % (package, version, architecture));
- return None;
- if len(ql) > 1:
- utils.warn("Found more than one match for '%s~%s~%s'." % (package, version, architecture));
- return None;
- id = ql[0][0];
- return id;
-
-#######################################################################################
-
-def set_suite (file, suite_id):
- lines = file.readlines();
-
- projectB.query("BEGIN WORK");
-
- # Build up a dictionary of what is currently in the suite
- current = {};
- q = projectB.query("SELECT b.package, b.version, a.arch_string, ba.id FROM binaries b, bin_associations ba, architecture a WHERE ba.suite = %s AND ba.bin = b.id AND b.architecture = a.id" % (suite_id));
- ql = q.getresult();
- for i in ql:
- key = " ".join(i[:3]);
- current[key] = i[3];
- q = projectB.query("SELECT s.source, s.version, sa.id FROM source s, src_associations sa WHERE sa.suite = %s AND sa.source = s.id" % (suite_id));
- ql = q.getresult();
- for i in ql:
- key = " ".join(i[:2]) + " source";
- current[key] = i[2];
-
- # Build up a dictionary of what should be in the suite
- desired = {};
- for line in lines:
- split_line = line.strip().split();
- if len(split_line) != 3:
- utils.warn("'%s' does not break into 'package version architecture'." % (line[:-1]));
- continue;
- key = " ".join(split_line);
- desired[key] = "";
-
- # Check to see which packages need to be removed, and remove them
- for key in current.keys():
- if not desired.has_key(key):
- (package, version, architecture) = key.split();
- id = current[key];
- if architecture == "source":
- q = projectB.query("DELETE FROM src_associations WHERE id = %s" % (id));
- else:
- q = projectB.query("DELETE FROM bin_associations WHERE id = %s" % (id));
- Logger.log(["removed",key,id]);
-
- # Check to see which packages need to be added, and add them
- for key in desired.keys():
- if not current.has_key(key):
- (package, version, architecture) = key.split();
- id = get_id (package, version, architecture);
- if not id:
- continue;
- if architecture == "source":
- q = projectB.query("INSERT INTO src_associations (suite, source) VALUES (%s, %s)" % (suite_id, id));
- else:
- q = projectB.query("INSERT INTO bin_associations (suite, bin) VALUES (%s, %s)" % (suite_id, id));
- Logger.log(["added",key,id]);
-
- projectB.query("COMMIT WORK");
-
-#######################################################################################
-
-def process_file (file, suite, action):
-
- suite_id = db_access.get_suite_id(suite);
-
- if action == "set":
- set_suite (file, suite_id);
- return;
-
- lines = file.readlines();
-
- projectB.query("BEGIN WORK");
-
- for line in lines:
- split_line = line.strip().split();
- if len(split_line) != 3:
- utils.warn("'%s' does not break into 'package version architecture'." % (line[:-1]));
- continue;
-
- (package, version, architecture) = split_line;
-
- id = get_id(package, version, architecture);
- if not id:
- continue;
-
- if architecture == "source":
- # Find the existing association's ID, if any
- q = projectB.query("SELECT id FROM src_associations WHERE suite = %s and source = %s" % (suite_id, id));
- ql = q.getresult();
- if not ql:
- association_id = None;
- else:
- association_id = ql[0][0];
- # Take action
- if action == "add":
- if association_id:
- utils.warn("'%s~%s~%s' already exists in suite %s." % (package, version, architecture, suite));
- continue;
- else:
- q = projectB.query("INSERT INTO src_associations (suite, source) VALUES (%s, %s)" % (suite_id, id));
- elif action == "remove":
- if association_id is None:
- utils.warn("'%s~%s~%s' doesn't exist in suite %s." % (package, version, architecture, suite));
- continue;
- else:
- q = projectB.query("DELETE FROM src_associations WHERE id = %s" % (association_id));
- else:
- # Find the existing association's ID, if any
- q = projectB.query("SELECT id FROM bin_associations WHERE suite = %s and bin = %s" % (suite_id, id));
- ql = q.getresult();
- if not ql:
- association_id = None;
- else:
- association_id = ql[0][0];
- # Take action
- if action == "add":
- if association_id:
- utils.warn("'%s~%s~%s' already exists in suite %s." % (package, version, architecture, suite));
- continue;
- else:
- q = projectB.query("INSERT INTO bin_associations (suite, bin) VALUES (%s, %s)" % (suite_id, id));
- elif action == "remove":
- if association_id is None:
- utils.warn("'%s~%s~%s' doesn't exist in suite %s." % (package, version, architecture, suite));
- continue;
- else:
- q = projectB.query("DELETE FROM bin_associations WHERE id = %s" % (association_id));
-
- projectB.query("COMMIT WORK");
-
-#######################################################################################
-
-def get_list (suite):
- suite_id = db_access.get_suite_id(suite);
- # List binaries
- q = projectB.query("SELECT b.package, b.version, a.arch_string FROM binaries b, bin_associations ba, architecture a WHERE ba.suite = %s AND ba.bin = b.id AND b.architecture = a.id" % (suite_id));
- ql = q.getresult();
- for i in ql:
- print " ".join(i);
-
- # List source
- q = projectB.query("SELECT s.source, s.version FROM source s, src_associations sa WHERE sa.suite = %s AND sa.source = s.id" % (suite_id));
- ql = q.getresult();
- for i in ql:
- print " ".join(i) + " source";
-
-#######################################################################################
-
-def main ():
- global Cnf, projectB, Logger;
-
- Cnf = utils.get_conf()
-
- Arguments = [('a',"add","Heidi::Options::Add", "HasArg"),
- ('h',"help","Heidi::Options::Help"),
- ('l',"list","Heidi::Options::List","HasArg"),
- ('r',"remove", "Heidi::Options::Remove", "HasArg"),
- ('s',"set", "Heidi::Options::Set", "HasArg")];
-
- for i in ["add", "help", "list", "remove", "set", "version" ]:
- if not Cnf.has_key("Heidi::Options::%s" % (i)):
- Cnf["Heidi::Options::%s" % (i)] = "";
-
- file_list = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Heidi::Options")
-
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"],int(Cnf["DB::Port"]));
-
- db_access.init(Cnf, projectB);
-
- action = None;
-
- for i in ("add", "list", "remove", "set"):
- if Cnf["Heidi::Options::%s" % (i)] != "":
- suite = Cnf["Heidi::Options::%s" % (i)];
- if db_access.get_suite_id(suite) == -1:
- utils.fubar("Unknown suite '%s'." %(suite));
- else:
- if action:
- utils.fubar("Can only perform one action at a time.");
- action = i;
-
- # Need an action...
- if action == None:
- utils.fubar("No action specified.");
-
- # Safety/Sanity check
- if action == "set" and suite != "testing":
- utils.fubar("Will not reset a suite other than testing.");
-
- if action == "list":
- get_list(suite);
- else:
- Logger = logging.Logger(Cnf, "heidi");
- if file_list:
- for file in file_list:
- process_file(utils.open_file(file), suite, action);
- else:
- process_file(sys.stdin, suite, action);
- Logger.close();
-
-#######################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Produces a report on NEW and BYHAND packages
-# Copyright (C) 2001, 2002, 2003, 2005 James Troup <james@nocrew.org>
-# $Id: helena,v 1.6 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <o-o> XP runs GCC, XFREE86, SSH etc etc,.,, I feel almost like linux....
-# <o-o> I am very confident that I can replicate any Linux application on XP
-# <willy> o-o: *boggle*
-# <o-o> building from source.
-# <o-o> Viiru: I already run GIMP under XP
-# <willy> o-o: why do you capitalise the names of all pieces of software?
-# <o-o> willy: because I want the EMPHASIZE them....
-# <o-o> grr s/the/to/
-# <willy> o-o: it makes you look like ZIPPY the PINHEAD
-# <o-o> willy: no idea what you are talking about.
-# <willy> o-o: do some research
-# <o-o> willy: for what reason?
-
-################################################################################
-
-import copy, glob, os, stat, sys, time;
-import apt_pkg;
-import katie, utils;
-import encodings.utf_8, encodings.latin_1, string;
-
-Cnf = None;
-Katie = None;
-direction = [];
-row_number = 0;
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: helena
-Prints a report of packages in queue directories (usually new and byhand).
-
- -h, --help show this help and exit.
- -n, --new produce HTML output
- -s, --sort=key sort output according to key, see below.
- -a, --age=key if sorting by age, how time should be treated.
- If not given, a default of hours is used.
-
- Sorting Keys: ao=age, oldest first. an=age, newest first.
- na=name, ascending nd=name, descending
- nf=notes, first nl=notes, last
-
- Age Keys: m=minutes, h=hours, d=days, w=weeks, o=months, y=years
-
-"""
- sys.exit(exit_code)
-
-################################################################################
-
-def plural(x):
- if x != 1:
- return "s";
- else:
- return "";
-
-################################################################################
-
-def time_pp(x):
- if x < 60:
- unit="second";
- elif x < 3600:
- x /= 60;
- unit="minute";
- elif x < 86400:
- x /= 3600;
- unit="hour";
- elif x < 604800:
- x /= 86400;
- unit="day";
- elif x < 2419200:
- x /= 604800;
- unit="week";
- elif x < 29030400:
- x /= 2419200;
- unit="month";
- else:
- x /= 29030400;
- unit="year";
- x = int(x);
- return "%s %s%s" % (x, unit, plural(x));
-
-################################################################################
-
-def sg_compare (a, b):
- """Sort by note state, then by time of oldest upload."""
- a = a[1];
- b = b[1];
- # Sort by note state
- a_note_state = a["note_state"];
- b_note_state = b["note_state"];
- if a_note_state < b_note_state:
- return -1;
- elif a_note_state > b_note_state:
- return 1;
-
- # Sort by time of oldest upload
- return cmp(a["oldest"], b["oldest"]);
-
-############################################################
-
-def sortfunc(a,b):
- for sorting in direction:
- (sortkey, way, time) = sorting;
- ret = 0
- if time == "m":
- x=int(a[sortkey]/60)
- y=int(b[sortkey]/60)
- elif time == "h":
- x=int(a[sortkey]/3600)
- y=int(b[sortkey]/3600)
- elif time == "d":
- x=int(a[sortkey]/86400)
- y=int(b[sortkey]/86400)
- elif time == "w":
- x=int(a[sortkey]/604800)
- y=int(b[sortkey]/604800)
- elif time == "o":
- x=int(a[sortkey]/2419200)
- y=int(b[sortkey]/2419200)
- elif time == "y":
- x=int(a[sortkey]/29030400)
- y=int(b[sortkey]/29030400)
- else:
- x=a[sortkey]
- y=b[sortkey]
- if x < y:
- ret = -1
- elif x > y:
- ret = 1
- if ret != 0:
- if way < 0:
- ret = ret*-1
- return ret
- return 0
-
-############################################################
-
-def header():
- print """<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
- <html><head><meta http-equiv="Content-Type" content="text/html; charset=iso8859-1">
- <title>Debian NEW and BYHAND Packages</title>
- <link type="text/css" rel="stylesheet" href="style.css">
- <link rel="shortcut icon" href="http://www.debian.org/favicon.ico">
- </head>
- <body>
- <div align="center">
- <a href="http://www.debian.org/">
- <img src="http://www.debian.org/logos/openlogo-nd-50.png" border="0" hspace="0" vspace="0" alt=""></a>
- <a href="http://www.debian.org/">
- <img src="http://www.debian.org/Pics/debian.png" border="0" hspace="0" vspace="0" alt="Debian Project"></a>
- </div>
- <br />
- <table class="reddy" width="100%">
- <tr>
- <td class="reddy">
- <img src="http://www.debian.org/Pics/red-upperleft.png" align="left" border="0" hspace="0" vspace="0"
- alt="" width="15" height="16"></td>
- <td rowspan="2" class="reddy">Debian NEW and BYHAND Packages</td>
- <td class="reddy">
- <img src="http://www.debian.org/Pics/red-upperright.png" align="right" border="0" hspace="0" vspace="0"
- alt="" width="16" height="16"></td>
- </tr>
- <tr>
- <td class="reddy">
- <img src="http://www.debian.org/Pics/red-lowerleft.png" align="left" border="0" hspace="0" vspace="0"
- alt="" width="16" height="16"></td>
- <td class="reddy">
- <img src="http://www.debian.org/Pics/red-lowerright.png" align="right" border="0" hspace="0" vspace="0"
- alt="" width="15" height="16"></td>
- </tr>
- </table>
- """
-
-def footer():
- print "<p class=\"validate\">Timestamp: %s (UTC)</p>" % (time.strftime("%d.%m.%Y / %H:%M:%S", time.gmtime()))
- print "<hr><p>Hint: Age is the age of the youngest upload of the package, if there is more than one version.</p>"
- print "<p>You may want to look at <a href=\"http://ftp-master.debian.org/REJECT-FAQ.html\">the REJECT-FAQ</a> for possible reasons why one of the above packages may get rejected.</p>"
- print """<a href="http://validator.w3.org/check?uri=referer">
- <img border="0" src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!" height="31" width="88"></a>
- <a href="http://jigsaw.w3.org/css-validator/check/referer">
- <img border="0" src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"
- height="31" width="88"></a>
- """
- print "</body></html>"
-
-def table_header(type):
- print "<h1>Summary for: %s</h1>" % (type)
- print """<center><table border="0">
- <tr>
- <th align="center">Package</th>
- <th align="center">Version</th>
- <th align="center">Arch</th>
- <th align="center">Distribution</th>
- <th align="center">Age</th>
- <th align="center">Maintainer</th>
- <th align="center">Closes</th>
- </tr>
- """
-
-def table_footer(type, source_count, total_count):
- print "</table></center><br>\n"
- print "<p class=\"validate\">Package count in <b>%s</b>: <i>%s</i>\n" % (type, source_count)
- print "<br>Total Package count: <i>%s</i></p>\n" % (total_count)
-
-def force_to_latin(s):
- """Forces a string to Latin-1."""
- latin1_s = unicode(s,'utf-8');
- return latin1_s.encode('iso8859-1', 'replace');
-
-
-def table_row(source, version, arch, last_mod, maint, distribution, closes):
-
- global row_number;
-
- if row_number % 2 != 0:
- print "<tr class=\"even\">"
- else:
- print "<tr class=\"odd\">"
-
- tdclass = "sid"
- for dist in distribution:
- if dist == "experimental":
- tdclass = "exp";
- print "<td valign=\"top\" class=\"%s\">%s</td>" % (tdclass, source);
- print "<td valign=\"top\" class=\"%s\">" % (tdclass)
- for vers in version.split():
- print "%s<br>" % (vers);
- print "</td><td valign=\"top\" class=\"%s\">%s</td><td valign=\"top\" class=\"%s\">" % (tdclass, arch, tdclass);
- for dist in distribution:
- print "%s<br>" % (dist);
- print "</td><td valign=\"top\" class=\"%s\">%s</td>" % (tdclass, last_mod);
- (name, mail) = maint.split(":");
- name = force_to_latin(name);
-
- print "<td valign=\"top\" class=\"%s\"><a href=\"http://qa.debian.org/developer.php?login=%s\">%s</a></td>" % (tdclass, mail, name);
- print "<td valign=\"top\" class=\"%s\">" % (tdclass)
- for close in closes:
- print "<a href=\"http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=%s\">#%s</a><br>" % (close, close);
- print "</td></tr>";
- row_number+=1;
-
-############################################################
-
-def process_changes_files(changes_files, type):
- msg = "";
- cache = {};
- # Read in all the .changes files
- for filename in changes_files:
- try:
- Katie.pkg.changes_file = filename;
- Katie.init_vars();
- Katie.update_vars();
- cache[filename] = copy.copy(Katie.pkg.changes);
- cache[filename]["filename"] = filename;
- except:
- break;
- # Divide the .changes into per-source groups
- per_source = {};
- for filename in cache.keys():
- source = cache[filename]["source"];
- if not per_source.has_key(source):
- per_source[source] = {};
- per_source[source]["list"] = [];
- per_source[source]["list"].append(cache[filename]);
- # Determine the oldest time and note status for each source group
- for source in per_source.keys():
- source_list = per_source[source]["list"];
- first = source_list[0];
- oldest = os.stat(first["filename"])[stat.ST_MTIME];
- have_note = 0;
- for d in per_source[source]["list"]:
- mtime = os.stat(d["filename"])[stat.ST_MTIME];
- if Cnf.has_key("Helena::Options::New"):
- if mtime > oldest:
- oldest = mtime;
- else:
- if mtime < oldest:
- oldest = mtime;
- have_note += (d.has_key("lisa note"));
- per_source[source]["oldest"] = oldest;
- if not have_note:
- per_source[source]["note_state"] = 0; # none
- elif have_note < len(source_list):
- per_source[source]["note_state"] = 1; # some
- else:
- per_source[source]["note_state"] = 2; # all
- per_source_items = per_source.items();
- per_source_items.sort(sg_compare);
-
- entries = [];
- max_source_len = 0;
- max_version_len = 0;
- max_arch_len = 0;
- maintainer = {};
- maint="";
- distribution="";
- closes="";
- source_exists="";
- for i in per_source_items:
- last_modified = time.time()-i[1]["oldest"];
- source = i[1]["list"][0]["source"];
- if len(source) > max_source_len:
- max_source_len = len(source);
- arches = {};
- versions = {};
- for j in i[1]["list"]:
- if Cnf.has_key("Helena::Options::New"):
- try:
- (maintainer["maintainer822"], maintainer["maintainer2047"],
- maintainer["maintainername"], maintainer["maintaineremail"]) = \
- utils.fix_maintainer (j["maintainer"]);
- except utils.ParseMaintError, msg:
- print "Problems while parsing maintainer address\n";
- maintainer["maintainername"] = "Unknown";
- maintainer["maintaineremail"] = "Unknown";
- maint="%s:%s" % (maintainer["maintainername"], maintainer["maintaineremail"]);
- distribution=j["distribution"].keys();
- closes=j["closes"].keys();
- for arch in j["architecture"].keys():
- arches[arch] = "";
- version = j["version"];
- versions[version] = "";
- arches_list = arches.keys();
- arches_list.sort(utils.arch_compare_sw);
- arch_list = " ".join(arches_list);
- version_list = " ".join(versions.keys());
- if len(version_list) > max_version_len:
- max_version_len = len(version_list);
- if len(arch_list) > max_arch_len:
- max_arch_len = len(arch_list);
- if i[1]["note_state"]:
- note = " | [N]";
- else:
- note = "";
- entries.append([source, version_list, arch_list, note, last_modified, maint, distribution, closes]);
-
- # A direction entry consists of "which field, which direction, time-consider",
- # where time-consider says how last_modified should be treated.
-
- # Look for the options for sort and then do the sort.
- age = "h"
- if Cnf.has_key("Helena::Options::Age"):
- age = Cnf["Helena::Options::Age"]
- if Cnf.has_key("Helena::Options::New"):
- # If we produce html we always have oldest first.
- direction.append([4,-1,"ao"]);
- else:
- if Cnf.has_key("Helena::Options::Sort"):
- for i in Cnf["Helena::Options::Sort"].split(","):
- if i == "ao":
- # Age, oldest first.
- direction.append([4,-1,age]);
- elif i == "an":
- # Age, newest first.
- direction.append([4,1,age]);
- elif i == "na":
- # Name, Ascending.
- direction.append([0,1,0]);
- elif i == "nd":
- # Name, Descending.
- direction.append([0,-1,0]);
- elif i == "nl":
- # Notes last.
- direction.append([3,1,0]);
- elif i == "nf":
- # Notes first.
- direction.append([3,-1,0]);
- entries.sort(lambda x, y: sortfunc(x, y))
- # In theory you can give several sort options on the command line, but there
- # is no good sorting function yet that handles all the corner cases of
- # combining them; if you combine options, effectively only the last one
- # takes effect. This will be enhanced in the future.
-
- if Cnf.has_key("Helena::Options::New"):
- direction.append([4,1,"ao"]);
- entries.sort(lambda x, y: sortfunc(x, y))
- # Output for a html file. First table header. then table_footer.
- # Any line between them is then a <tr> printed from subroutine table_row.
- if len(entries) > 0:
- table_header(type.upper());
- for entry in entries:
- (source, version_list, arch_list, note, last_modified, maint, distribution, closes) = entry;
- table_row(source, version_list, arch_list, time_pp(last_modified), maint, distribution, closes);
- total_count = len(changes_files);
- source_count = len(per_source_items);
- table_footer(type.upper(), source_count, total_count);
- else:
- # The "normal" output without any formatting.
- format="%%-%ds | %%-%ds | %%-%ds%%s | %%s old\n" % (max_source_len, max_version_len, max_arch_len)
-
- msg = "";
- for entry in entries:
- (source, version_list, arch_list, note, last_modified, undef, undef, undef) = entry;
- msg += format % (source, version_list, arch_list, note, time_pp(last_modified));
-
- if msg:
- total_count = len(changes_files);
- source_count = len(per_source_items);
- print type.upper();
- print "-"*len(type);
- print
- print msg;
- print "%s %s source package%s / %s %s package%s in total." % (source_count, type, plural(source_count), total_count, type, plural(total_count));
- print
-
-
-################################################################################
-
-def main():
- global Cnf, Katie;
-
- Cnf = utils.get_conf();
- Arguments = [('h',"help","Helena::Options::Help"),
- ('n',"new","Helena::Options::New"),
- ('s',"sort","Helena::Options::Sort", "HasArg"),
- ('a',"age","Helena::Options::Age", "HasArg")];
- for i in [ "help" ]:
- if not Cnf.has_key("Helena::Options::%s" % (i)):
- Cnf["Helena::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Helena::Options")
- if Options["Help"]:
- usage();
-
- Katie = katie.Katie(Cnf);
-
- if Cnf.has_key("Helena::Options::New"):
- header();
-
- directories = Cnf.ValueList("Helena::Directories");
- if not directories:
- directories = [ "byhand", "new" ];
-
- for directory in directories:
- changes_files = glob.glob("%s/*.changes" % (Cnf["Dir::Queue::%s" % (directory)]));
- process_changes_files(changes_files, directory);
-
- if Cnf.has_key("Helena::Options::New"):
- footer();
-
-################################################################################
-
-if __name__ == '__main__':
- main();
+++ /dev/null
-DROP DATABASE projectb;
-CREATE DATABASE projectb WITH ENCODING = 'SQL_ASCII';
-
-\c projectb
-
-CREATE TABLE archive (
- id SERIAL PRIMARY KEY,
- name TEXT UNIQUE NOT NULL,
- origin_server TEXT,
- description TEXT
-);
-
-CREATE TABLE component (
- id SERIAL PRIMARY KEY,
- name TEXT UNIQUE NOT NULL,
- description TEXT,
- meets_dfsg BOOLEAN
-);
-
-CREATE TABLE architecture (
- id SERIAL PRIMARY KEY,
- arch_string TEXT UNIQUE NOT NULL,
- description TEXT
-);
-
-CREATE TABLE maintainer (
- id SERIAL PRIMARY KEY,
- name TEXT UNIQUE NOT NULL
-);
-
-CREATE TABLE uid (
- id SERIAL PRIMARY KEY,
- uid TEXT UNIQUE NOT NULL
-);
-
-CREATE TABLE fingerprint (
- id SERIAL PRIMARY KEY,
- fingerprint TEXT UNIQUE NOT NULL,
- uid INT4 REFERENCES uid
-);
-
-CREATE TABLE location (
- id SERIAL PRIMARY KEY,
- path TEXT NOT NULL,
- component INT4 REFERENCES component,
- archive INT4 REFERENCES archive,
- type TEXT NOT NULL
-);
-
--- No references below here to allow sane population; added post-population
-
-CREATE TABLE files (
- id SERIAL PRIMARY KEY,
- filename TEXT NOT NULL,
- size INT8 NOT NULL,
- md5sum TEXT NOT NULL,
- location INT4 NOT NULL, -- REFERENCES location
- last_used TIMESTAMP,
- unique (filename, location)
-);
-
-CREATE TABLE source (
- id SERIAL PRIMARY KEY,
- source TEXT NOT NULL,
- version TEXT NOT NULL,
- maintainer INT4 NOT NULL, -- REFERENCES maintainer
- file INT4 UNIQUE NOT NULL, -- REFERENCES files
- install_date TIMESTAMP NOT NULL,
- sig_fpr INT4 NOT NULL, -- REFERENCES fingerprint
- unique (source, version)
-);
-
-CREATE TABLE dsc_files (
- id SERIAL PRIMARY KEY,
- source INT4 NOT NULL, -- REFERENCES source,
- file INT4 NOT NULL, -- REFERENCES files
- unique (source, file)
-);
-
-CREATE TABLE binaries (
- id SERIAL PRIMARY KEY,
- package TEXT NOT NULL,
- version TEXT NOT NULL,
- maintainer INT4 NOT NULL, -- REFERENCES maintainer
- source INT4, -- REFERENCES source,
- architecture INT4 NOT NULL, -- REFERENCES architecture
- file INT4 UNIQUE NOT NULL, -- REFERENCES files,
- type TEXT NOT NULL,
--- joeyh@ doesn't want .udebs and .debs with the same name, which is why the unique () doesn't mention type
- sig_fpr INT4 NOT NULL, -- REFERENCES fingerprint
- unique (package, version, architecture)
-);
-
-CREATE TABLE suite (
- id SERIAL PRIMARY KEY,
- suite_name TEXT NOT NULL,
- version TEXT,
- origin TEXT,
- label TEXT,
- policy_engine TEXT,
- description TEXT
-);
-
-CREATE TABLE queue (
- id SERIAL PRIMARY KEY,
- queue_name TEXT NOT NULL
-);
-
-CREATE TABLE suite_architectures (
- suite INT4 NOT NULL, -- REFERENCES suite
- architecture INT4 NOT NULL, -- REFERENCES architecture
- unique (suite, architecture)
-);
-
-CREATE TABLE bin_associations (
- id SERIAL PRIMARY KEY,
- suite INT4 NOT NULL, -- REFERENCES suite
- bin INT4 NOT NULL, -- REFERENCES binaries
- unique (suite, bin)
-);
-
-CREATE TABLE src_associations (
- id SERIAL PRIMARY KEY,
- suite INT4 NOT NULL, -- REFERENCES suite
- source INT4 NOT NULL, -- REFERENCES source
- unique (suite, source)
-);
-
-CREATE TABLE section (
- id SERIAL PRIMARY KEY,
- section TEXT UNIQUE NOT NULL
-);
-
-CREATE TABLE priority (
- id SERIAL PRIMARY KEY,
- priority TEXT UNIQUE NOT NULL,
- level INT4 UNIQUE NOT NULL
-);
-
-CREATE TABLE override_type (
- id SERIAL PRIMARY KEY,
- type TEXT UNIQUE NOT NULL
-);
-
-CREATE TABLE override (
- package TEXT NOT NULL,
- suite INT4 NOT NULL, -- references suite
- component INT4 NOT NULL, -- references component
- priority INT4, -- references priority
- section INT4 NOT NULL, -- references section
- type INT4 NOT NULL, -- references override_type
- maintainer TEXT,
- unique (suite, component, package, type)
-);
-
-CREATE TABLE queue_build (
- suite INT4 NOT NULL, -- references suite
- queue INT4 NOT NULL, -- references queue
- filename TEXT NOT NULL,
- in_queue BOOLEAN NOT NULL,
- last_used TIMESTAMP
-);
-
--- Critical indexes
-
-CREATE INDEX bin_associations_bin ON bin_associations (bin);
-CREATE INDEX src_associations_source ON src_associations (source);
-CREATE INDEX source_maintainer ON source (maintainer);
-CREATE INDEX binaries_maintainer ON binaries (maintainer);
-CREATE INDEX binaries_fingerprint on binaries (sig_fpr);
-CREATE INDEX source_fingerprint on source (sig_fpr);
-CREATE INDEX dsc_files_file ON dsc_files (file);
+++ /dev/null
-
-CREATE TABLE disembargo (
- package TEXT NOT NULL,
- version TEXT NOT NULL
-);
-
-GRANT ALL ON disembargo TO GROUP ftpmaster;
-GRANT SELECT ON disembargo TO PUBLIC;
+++ /dev/null
-#!/usr/bin/env python
-
-# Generate file lists used by apt-ftparchive to generate Packages and Sources files
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: jenna,v 1.29 2004-11-27 17:58:47 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <elmo> I'm doing it in python btw.. nothing against your monster
-# SQL, but the python wins in terms of speed and readiblity
-# <aj> bah
-# <aj> you suck!!!!!
-# <elmo> sorry :(
-# <aj> you are not!!!
-# <aj> you mock my SQL!!!!
-# <elmo> you want have contest of skillz??????
-# <aj> all your skillz are belong to my sql!!!!
-# <elmo> yo momma are belong to my python!!!!
-# <aj> yo momma was SQLin' like a pig last night!
-
-################################################################################
-
-import copy, os, pg, string, sys;
-import apt_pkg;
-import claire, db_access, logging, utils;
-
-################################################################################
-
-projectB = None;
-Cnf = None;
-Logger = None;
-Options = None;
-
-################################################################################
-
-def Dict(**dict): return dict
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: jenna [OPTION]
-Write out file lists suitable for use with apt-ftparchive.
-
- -a, --architecture=ARCH only write file lists for this architecture
- -c, --component=COMPONENT only write file lists for this component
- -h, --help show this help and exit
- -n, --no-delete don't delete older versions
- -s, --suite=SUITE only write file lists for this suite
-
-ARCH, COMPONENT and SUITE can be space separated lists, e.g.
- --architecture=\"m68k i386\"""";
- sys.exit(exit_code);
-
-################################################################################
-
-def version_cmp(a, b):
- return -apt_pkg.VersionCompare(a[0], b[0]);
-
-#####################################################
-
-def delete_packages(delete_versions, pkg, dominant_arch, suite,
- dominant_version, delete_table, delete_col, packages):
- suite_id = db_access.get_suite_id(suite);
- for version in delete_versions:
- delete_unique_id = version[1];
- if not packages.has_key(delete_unique_id):
- continue;
- delete_version = version[0];
- delete_id = packages[delete_unique_id]["id"];
- delete_arch = packages[delete_unique_id]["arch"];
- if not Cnf.Find("Suite::%s::Untouchable" % (suite)):
- if Options["No-Delete"]:
- print "Would delete %s_%s_%s in %s in favour of %s_%s" % (pkg, delete_arch, delete_version, suite, dominant_version, dominant_arch);
- else:
- Logger.log(["dominated", pkg, delete_arch, delete_version, dominant_version, dominant_arch]);
- projectB.query("DELETE FROM %s WHERE suite = %s AND %s = %s" % (delete_table, suite_id, delete_col, delete_id));
- del packages[delete_unique_id];
- else:
- if Options["No-Delete"]:
- print "Would delete %s_%s_%s in favour of %s_%s, but %s is untouchable" % (pkg, delete_arch, delete_version, dominant_version, dominant_arch, suite);
- else:
- Logger.log(["dominated but untouchable", pkg, delete_arch, delete_version, dominant_version, dominant_arch]);
-
-#####################################################
-
-# Per-suite & pkg: resolve arch: all vs. arch: any (assumes only one arch: all)
-def resolve_arch_all_vs_any(versions, packages):
- arch_all_version = None;
- arch_any_versions = copy.copy(versions);
- for i in arch_any_versions:
- unique_id = i[1];
- arch = packages[unique_id]["arch"];
- if arch == "all":
- arch_all_versions = [i];
- arch_all_version = i[0];
- arch_any_versions.remove(i);
- # Sort arch: any versions into descending order
- arch_any_versions.sort(version_cmp);
- highest_arch_any_version = arch_any_versions[0][0];
-
- pkg = packages[unique_id]["pkg"];
- suite = packages[unique_id]["suite"];
- delete_table = "bin_associations";
- delete_col = "bin";
-
- if apt_pkg.VersionCompare(highest_arch_any_version, arch_all_version) < 1:
- # arch: all dominates
- delete_packages(arch_any_versions, pkg, "all", suite,
- arch_all_version, delete_table, delete_col, packages);
- else:
- # arch: any dominates
- delete_packages(arch_all_versions, pkg, "any", suite,
- highest_arch_any_version, delete_table, delete_col,
- packages);
-
-#####################################################
-
-# Per-suite&pkg&arch: resolve duplicate versions
-def remove_duplicate_versions(versions, packages):
- # Sort versions into descending order
- versions.sort(version_cmp);
- dominant_versions = versions[0];
- dominated_versions = versions[1:];
- (dominant_version, dominant_unique_id) = dominant_versions;
- pkg = packages[dominant_unique_id]["pkg"];
- arch = packages[dominant_unique_id]["arch"];
- suite = packages[dominant_unique_id]["suite"];
- if arch == "source":
- delete_table = "src_associations";
- delete_col = "source";
- else: # !source
- delete_table = "bin_associations";
- delete_col = "bin";
- # Remove all but the highest
- delete_packages(dominated_versions, pkg, arch, suite,
- dominant_version, delete_table, delete_col, packages);
- return [dominant_versions];
-
-################################################################################
-
-def cleanup(packages):
- # Build up the index used by the clean up functions
- d = {};
- for unique_id in packages.keys():
- suite = packages[unique_id]["suite"];
- pkg = packages[unique_id]["pkg"];
- arch = packages[unique_id]["arch"];
- version = packages[unique_id]["version"];
- d.setdefault(suite, {});
- d[suite].setdefault(pkg, {});
- d[suite][pkg].setdefault(arch, []);
- d[suite][pkg][arch].append([version, unique_id]);
- # Clean up old versions
- for suite in d.keys():
- for pkg in d[suite].keys():
- for arch in d[suite][pkg].keys():
- versions = d[suite][pkg][arch];
- if len(versions) > 1:
- d[suite][pkg][arch] = remove_duplicate_versions(versions, packages);
-
- # Arch: all -> any and vice versa
- for suite in d.keys():
- for pkg in d[suite].keys():
- arches = d[suite][pkg];
- # If we don't have any arch: all, we've nothing to do
- if not arches.has_key("all"):
- continue;
- # Check to see if we have arch: all and arch: !all (ignoring source)
- num_arches = len(arches.keys());
- if arches.has_key("source"):
- num_arches -= 1;
- # If we do, remove the duplicates
- if num_arches > 1:
- versions = [];
- for arch in arches.keys():
- if arch != "source":
- versions.extend(d[suite][pkg][arch]);
- resolve_arch_all_vs_any(versions, packages);
-
-################################################################################
-
-def write_legacy_mixed_filelist(suite, list, packages, dislocated_files):
- # Work out the filename
- filename = os.path.join(Cnf["Dir::Lists"], "%s_-_all.list" % (suite));
- output = utils.open_file(filename, "w");
- # Generate the final list of files
- files = {};
- for id in list:
- path = packages[id]["path"];
- filename = packages[id]["filename"];
- file_id = packages[id]["file_id"];
- if suite == "stable" and dislocated_files.has_key(file_id):
- filename = dislocated_files[file_id];
- else:
- filename = path + filename;
- if files.has_key(filename):
- utils.warn("%s (in %s) is duplicated." % (filename, suite));
- else:
- files[filename] = "";
- # Sort the files since apt-ftparchive doesn't
- keys = files.keys();
- keys.sort();
- # Write the list of files out
- for file in keys:
- output.write(file+'\n')
- output.close();
-
-############################################################
-
-def write_filelist(suite, component, arch, type, list, packages, dislocated_files):
- # Work out the filename
- if arch != "source":
- if type == "udeb":
- arch = "debian-installer_binary-%s" % (arch);
- elif type == "deb":
- arch = "binary-%s" % (arch);
- filename = os.path.join(Cnf["Dir::Lists"], "%s_%s_%s.list" % (suite, component, arch));
- output = utils.open_file(filename, "w");
- # Generate the final list of files
- files = {};
- for id in list:
- path = packages[id]["path"];
- filename = packages[id]["filename"];
- file_id = packages[id]["file_id"];
- pkg = packages[id]["pkg"];
- if suite == "stable" and dislocated_files.has_key(file_id):
- filename = dislocated_files[file_id];
- else:
- filename = path + filename;
- if files.has_key(pkg):
- utils.warn("%s (in %s/%s, %s) is duplicated." % (pkg, suite, component, filename));
- else:
- files[pkg] = filename;
- # Sort the files since apt-ftparchive doesn't
- pkgs = files.keys();
- pkgs.sort();
- # Write the list of files out
- for pkg in pkgs:
- output.write(files[pkg]+'\n')
- output.close();
-
-################################################################################
-
-def write_filelists(packages, dislocated_files):
- # Build up the index to iterate over
- d = {};
- for unique_id in packages.keys():
- suite = packages[unique_id]["suite"];
- component = packages[unique_id]["component"];
- arch = packages[unique_id]["arch"];
- type = packages[unique_id]["type"];
- d.setdefault(suite, {});
- d[suite].setdefault(component, {});
- d[suite][component].setdefault(arch, {});
- d[suite][component][arch].setdefault(type, []);
- d[suite][component][arch][type].append(unique_id);
- # Flesh out the index
- if not Options["Suite"]:
- suites = Cnf.SubTree("Suite").List();
- else:
- suites = utils.split_args(Options["Suite"]);
- for suite in map(string.lower, suites):
- d.setdefault(suite, {});
- if not Options["Component"]:
- components = Cnf.ValueList("Suite::%s::Components" % (suite));
- else:
- components = utils.split_args(Options["Component"]);
- udeb_components = Cnf.ValueList("Suite::%s::UdebComponents" % (suite));
- for component in components:
- d[suite].setdefault(component, {});
- if component in udeb_components:
- binary_types = [ "deb", "udeb" ];
- else:
- binary_types = [ "deb" ];
- if not Options["Architecture"]:
- architectures = Cnf.ValueList("Suite::%s::Architectures" % (suite));
- else:
- architectures = utils.split_args(Options["Architecture"]);
- for arch in map(string.lower, architectures):
- d[suite][component].setdefault(arch, {});
- if arch == "source":
- types = [ "dsc" ];
- else:
- types = binary_types;
- for type in types:
- d[suite][component][arch].setdefault(type, []);
- # Then walk it
- for suite in d.keys():
- if Cnf.has_key("Suite::%s::Components" % (suite)):
- for component in d[suite].keys():
- for arch in d[suite][component].keys():
- if arch == "all":
- continue;
- for type in d[suite][component][arch].keys():
- list = d[suite][component][arch][type];
- # If it's a binary, we need to add in the arch: all debs too
- if arch != "source":
- archall_suite = Cnf.get("Jenna::ArchAllMap::%s" % (suite));
- if archall_suite:
- list.extend(d[archall_suite][component]["all"][type]);
- elif d[suite][component].has_key("all") and \
- d[suite][component]["all"].has_key(type):
- list.extend(d[suite][component]["all"][type]);
- write_filelist(suite, component, arch, type, list,
- packages, dislocated_files);
- else: # legacy-mixed suite
- list = [];
- for component in d[suite].keys():
- for arch in d[suite][component].keys():
- for type in d[suite][component][arch].keys():
- list.extend(d[suite][component][arch][type]);
- write_legacy_mixed_filelist(suite, list, packages, dislocated_files);
-
-################################################################################
-
-# Want to use stable dislocation support: True or false?
-def stable_dislocation_p():
- # If the support is not explicitly enabled, assume it's disabled
- if not Cnf.FindB("Dinstall::StableDislocationSupport"):
- return 0;
- # If we don't have a stable suite, obviously a no-op
- if not Cnf.has_key("Suite::Stable"):
- return 0;
- # If the suite(s) weren't explicitly listed, all suites are done
- if not Options["Suite"]:
- return 1;
- # Otherwise, look in what suites the user specified
- suites = utils.split_args(Options["Suite"]);
-
- if "stable" in suites:
- return 1;
- else:
- return 0;
-
-################################################################################
-
-def do_da_do_da():
- # If we're only doing a subset of suites, ensure we do enough to
- # be able to do arch: all mapping.
- if Options["Suite"]:
- suites = utils.split_args(Options["Suite"]);
- for suite in suites:
- archall_suite = Cnf.get("Jenna::ArchAllMap::%s" % (suite));
- if archall_suite and archall_suite not in suites:
- utils.warn("Adding %s as %s maps Arch: all from it." % (archall_suite, suite));
- suites.append(archall_suite);
- Options["Suite"] = ",".join(suites);
-
- (con_suites, con_architectures, con_components, check_source) = \
- utils.parse_args(Options);
-
- if stable_dislocation_p():
- dislocated_files = claire.find_dislocated_stable(Cnf, projectB);
- else:
- dislocated_files = {};
-
- query = """
-SELECT b.id, b.package, a.arch_string, b.version, l.path, f.filename, c.name,
- f.id, su.suite_name, b.type
- FROM binaries b, bin_associations ba, architecture a, files f, location l,
- component c, suite su
- WHERE b.id = ba.bin AND b.file = f.id AND b.architecture = a.id
- AND f.location = l.id AND l.component = c.id AND ba.suite = su.id
- %s %s %s""" % (con_suites, con_architectures, con_components);
- if check_source:
- query += """
-UNION
-SELECT s.id, s.source, 'source', s.version, l.path, f.filename, c.name, f.id,
- su.suite_name, 'dsc'
- FROM source s, src_associations sa, files f, location l, component c, suite su
- WHERE s.id = sa.source AND s.file = f.id AND f.location = l.id
- AND l.component = c.id AND sa.suite = su.id %s %s""" % (con_suites, con_components);
- q = projectB.query(query);
- ql = q.getresult();
- # Build up the main index of packages
- packages = {};
- unique_id = 0;
- for i in ql:
- (id, pkg, arch, version, path, filename, component, file_id, suite, type) = i;
- # 'id' comes from either 'binaries' or 'source', so it's not unique
- unique_id += 1;
- packages[unique_id] = Dict(id=id, pkg=pkg, arch=arch, version=version,
- path=path, filename=filename,
- component=component, file_id=file_id,
- suite=suite, type = type);
- cleanup(packages);
- write_filelists(packages, dislocated_files);
-
-################################################################################
-
-def main():
- global Cnf, projectB, Options, Logger;
-
- Cnf = utils.get_conf();
- Arguments = [('a', "architecture", "Jenna::Options::Architecture", "HasArg"),
- ('c', "component", "Jenna::Options::Component", "HasArg"),
- ('h', "help", "Jenna::Options::Help"),
- ('n', "no-delete", "Jenna::Options::No-Delete"),
- ('s', "suite", "Jenna::Options::Suite", "HasArg")];
- for i in ["architecture", "component", "help", "no-delete", "suite" ]:
- if not Cnf.has_key("Jenna::Options::%s" % (i)):
- Cnf["Jenna::Options::%s" % (i)] = "";
- apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Jenna::Options");
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
- Logger = logging.Logger(Cnf, "jenna");
- do_da_do_da();
- Logger.close();
-
-#########################################################################################
-
-if __name__ == '__main__':
- main();
+++ /dev/null
-#!/usr/bin/env python
-
-# Checks Debian packages from Incoming
-# Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
-# $Id: jennifer,v 1.65 2005-12-05 05:35:47 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-# Originally based on dinstall by Guy Maor <maor@debian.org>
-
-################################################################################
-
-# Computer games don't affect kids. I mean if Pacman affected our generation as
-# kids, we'd all run around in a darkened room munching pills and listening to
-# repetitive music.
-# -- Unknown
-
-################################################################################
-
-import commands, errno, fcntl, os, re, shutil, stat, sys, time, tempfile, traceback;
-import apt_inst, apt_pkg;
-import db_access, katie, logging, utils;
-
-from types import *;
-
-################################################################################
-
-re_valid_version = re.compile(r"^([0-9]+:)?[0-9A-Za-z\.\-\+:]+$");
-re_valid_pkg_name = re.compile(r"^[\dA-Za-z][\dA-Za-z\+\-\.]+$");
-re_changelog_versions = re.compile(r"^\w[-+0-9a-z.]+ \([^\(\) \t]+\)");
-re_strip_revision = re.compile(r"-([^-]+)$");
-
-################################################################################
-
-# Globals
-jennifer_version = "$Revision: 1.65 $";
-
-Cnf = None;
-Options = None;
-Logger = None;
-Katie = None;
-
-reprocess = 0;
-in_holding = {};
-
-# Aliases to the real vars in the Katie class; hysterical raisins.
-reject_message = "";
-changes = {};
-dsc = {};
-dsc_files = {};
-files = {};
-pkg = {};
-
-###############################################################################
-
-def init():
- global Cnf, Options, Katie, changes, dsc, dsc_files, files, pkg;
-
- apt_pkg.init();
-
- Cnf = apt_pkg.newConfiguration();
- apt_pkg.ReadConfigFileISC(Cnf,utils.which_conf_file());
-
- Arguments = [('a',"automatic","Dinstall::Options::Automatic"),
- ('h',"help","Dinstall::Options::Help"),
- ('n',"no-action","Dinstall::Options::No-Action"),
- ('p',"no-lock", "Dinstall::Options::No-Lock"),
- ('s',"no-mail", "Dinstall::Options::No-Mail"),
- ('V',"version","Dinstall::Options::Version")];
-
- for i in ["automatic", "help", "no-action", "no-lock", "no-mail",
- "override-distribution", "version"]:
- Cnf["Dinstall::Options::%s" % (i)] = "";
-
- changes_files = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Dinstall::Options")
-
- if Options["Help"]:
- usage();
- elif Options["Version"]:
- print "jennifer %s" % (jennifer_version);
- sys.exit(0);
-
- Katie = katie.Katie(Cnf);
-
- changes = Katie.pkg.changes;
- dsc = Katie.pkg.dsc;
- dsc_files = Katie.pkg.dsc_files;
- files = Katie.pkg.files;
- pkg = Katie.pkg;
-
- return changes_files;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: dinstall [OPTION]... [CHANGES]...
- -a, --automatic automatic run
- -h, --help show this help and exit.
- -n, --no-action don't do anything
- -p, --no-lock don't check lockfile !! for cron.daily only !!
- -s, --no-mail don't send any mail
- -V, --version display the version number and exit"""
- sys.exit(exit_code)
-
-################################################################################
-
-def reject (str, prefix="Rejected: "):
- global reject_message;
- if str:
- reject_message += prefix + str + "\n";
-
-################################################################################
-
-def copy_to_holding(filename):
- global in_holding;
-
- base_filename = os.path.basename(filename);
-
- dest = Cnf["Dir::Queue::Holding"] + '/' + base_filename;
- try:
- fd = os.open(dest, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0640);
- os.close(fd);
- except OSError, e:
- # Shouldn't happen, but will if, for example, someone lists a
- # file twice in the .changes.
- if errno.errorcode[e.errno] == 'EEXIST':
- reject("%s: already exists in holding area; can not overwrite." % (base_filename));
- return;
- raise;
-
- try:
- shutil.copy(filename, dest);
- except IOError, e:
- # In either case (ENOENT or EACCES) we want to remove the
- # O_CREAT | O_EXCLed ghost file, so add the file to the list
- # of 'in holding' even if it's not the real file.
- if errno.errorcode[e.errno] == 'ENOENT':
- reject("%s: can not copy to holding area: file not found." % (base_filename));
- os.unlink(dest);
- return;
- elif errno.errorcode[e.errno] == 'EACCES':
- reject("%s: can not copy to holding area: read permission denied." % (base_filename));
- os.unlink(dest);
- return;
- raise;
-
- in_holding[base_filename] = "";
-
-################################################################################
-
-def clean_holding():
- global in_holding;
-
- cwd = os.getcwd();
- os.chdir(Cnf["Dir::Queue::Holding"]);
- for file in in_holding.keys():
- if os.path.exists(file):
- if file.find('/') != -1:
- utils.fubar("WTF? clean_holding() got a file ('%s') with / in it!" % (file));
- else:
- os.unlink(file);
- in_holding = {};
- os.chdir(cwd);
-
-################################################################################
-
-def check_changes():
- filename = pkg.changes_file;
-
- # Parse the .changes field into a dictionary
- try:
- changes.update(utils.parse_changes(filename));
- except utils.cant_open_exc:
- reject("%s: can't read file." % (filename));
- return 0;
- except utils.changes_parse_error_exc, line:
- reject("%s: parse error, can't grok: %s." % (filename, line));
- return 0;
-
- # Parse the Files field from the .changes into another dictionary
- try:
- files.update(utils.build_file_list(changes));
- except utils.changes_parse_error_exc, line:
- reject("%s: parse error, can't grok: %s." % (filename, line));
- except utils.nk_format_exc, format:
- reject("%s: unknown format '%s'." % (filename, format));
- return 0;
-
- # Check for mandatory fields
- for i in ("source", "binary", "architecture", "version", "distribution",
- "maintainer", "files", "changes", "description"):
- if not changes.has_key(i):
- reject("%s: Missing mandatory field `%s'." % (filename, i));
- return 0 # Avoid <undef> errors during later tests
-
- # Split multi-value fields into a lower-level dictionary
- for i in ("architecture", "distribution", "binary", "closes"):
- o = changes.get(i, "")
- if o != "":
- del changes[i]
- changes[i] = {}
- for j in o.split():
- changes[i][j] = 1
-
- # Fix the Maintainer: field to be RFC822/2047 compatible
- try:
- (changes["maintainer822"], changes["maintainer2047"],
- changes["maintainername"], changes["maintaineremail"]) = \
- utils.fix_maintainer (changes["maintainer"]);
- except utils.ParseMaintError, msg:
- reject("%s: Maintainer field ('%s') failed to parse: %s" \
- % (filename, changes["maintainer"], msg));
-
- # ...likewise for the Changed-By: field if it exists.
- try:
- (changes["changedby822"], changes["changedby2047"],
- changes["changedbyname"], changes["changedbyemail"]) = \
- utils.fix_maintainer (changes.get("changed-by", ""));
- except utils.ParseMaintError, msg:
- (changes["changedby822"], changes["changedby2047"],
- changes["changedbyname"], changes["changedbyemail"]) = \
- ("", "", "", "")
- reject("%s: Changed-By field ('%s') failed to parse: %s" \
- % (filename, changes["changed-by"], msg));
-
- # Ensure all the values in Closes: are numbers
- if changes.has_key("closes"):
- for i in changes["closes"].keys():
- if katie.re_isanum.match (i) == None:
- reject("%s: `%s' from Closes field isn't a number." % (filename, i));
-
-
- # chopversion = no epoch; chopversion2 = no epoch and no revision (e.g. for .orig.tar.gz comparison)
- changes["chopversion"] = utils.re_no_epoch.sub('', changes["version"])
- changes["chopversion2"] = utils.re_no_revision.sub('', changes["chopversion"])
-
- # Check there isn't already a changes file of the same name in one
- # of the queue directories.
- base_filename = os.path.basename(filename);
- for dir in [ "Accepted", "Byhand", "Done", "New" ]:
- if os.path.exists(Cnf["Dir::Queue::%s" % (dir) ]+'/'+base_filename):
- reject("%s: a file with this name already exists in the %s directory." % (base_filename, dir));
-
- # Check the .changes is non-empty
- if not files:
- reject("%s: nothing to do (Files field is empty)." % (base_filename))
- return 0;
-
- return 1;
-
-################################################################################
-
-def check_distributions():
- "Check and map the Distribution field of a .changes file."
-
- # Handle suite mappings
- for map in Cnf.ValueList("SuiteMappings"):
- args = map.split();
- type = args[0];
- if type == "map" or type == "silent-map":
- (source, dest) = args[1:3];
- if changes["distribution"].has_key(source):
- del changes["distribution"][source]
- changes["distribution"][dest] = 1;
- if type != "silent-map":
- reject("Mapping %s to %s." % (source, dest),"");
- if changes.has_key("distribution-version"):
- if changes["distribution-version"].has_key(source):
- changes["distribution-version"][source]=dest
- elif type == "map-unreleased":
- (source, dest) = args[1:3];
- if changes["distribution"].has_key(source):
- for arch in changes["architecture"].keys():
- if arch not in Cnf.ValueList("Suite::%s::Architectures" % (source)):
- reject("Mapping %s to %s for unreleased architecture %s." % (source, dest, arch),"");
- del changes["distribution"][source];
- changes["distribution"][dest] = 1;
- break;
- elif type == "ignore":
- suite = args[1];
- if changes["distribution"].has_key(suite):
- del changes["distribution"][suite];
- reject("Ignoring %s as a target suite." % (suite), "Warning: ");
- elif type == "reject":
- suite = args[1];
- if changes["distribution"].has_key(suite):
- reject("Uploads to %s are not accepted." % (suite));
- elif type == "propup-version":
- # give these as "uploaded-to(non-mapped) suites-to-add-when-upload-obsoletes"
- #
- # changes["distribution-version"] looks like: {'testing': 'testing-proposed-updates'}
- if changes["distribution"].has_key(args[1]):
- changes.setdefault("distribution-version", {})
- for suite in args[2:]: changes["distribution-version"][suite]=suite
-
- # Ensure there is (still) a target distribution
- if changes["distribution"].keys() == []:
- reject("no valid distribution.");
-
- # Ensure target distributions exist
- for suite in changes["distribution"].keys():
- if not Cnf.has_key("Suite::%s" % (suite)):
- reject("Unknown distribution `%s'." % (suite));
-
-################################################################################
-
-def check_deb_ar(filename, control):
- """Sanity check the ar of a .deb, i.e. that there is:
-
- o debian-binary
- o control.tar.gz
- o data.tar.gz or data.tar.bz2
-
-in that order, and nothing else. If the third member is a
-data.tar.bz2, an additional check is performed for the required
-Pre-Depends on dpkg (>= 1.10.24)."""
- cmd = "ar t %s" % (filename)
- (result, output) = commands.getstatusoutput(cmd)
- if result != 0:
- reject("%s: 'ar t' invocation failed." % (filename))
- reject(utils.prefix_multi_line_string(output, " [ar output:] "), "")
- chunks = output.split('\n')
- if len(chunks) != 3:
- reject("%s: found %d chunks, expected 3." % (filename, len(chunks)))
- if chunks[0] != "debian-binary":
- reject("%s: first chunk is '%s', expected 'debian-binary'." % (filename, chunks[0]))
- if chunks[1] != "control.tar.gz":
- reject("%s: second chunk is '%s', expected 'control.tar.gz'." % (filename, chunks[1]))
- if chunks[2] == "data.tar.bz2":
- # Packages using bzip2 compression must have a Pre-Depends on dpkg >= 1.10.24.
- found_needed_predep = 0
- for parsed_dep in apt_pkg.ParseDepends(control.Find("Pre-Depends", "")):
- for atom in parsed_dep:
- (dep, version, constraint) = atom
- if dep != "dpkg" or (constraint != ">=" and constraint != ">>") or \
- len(parsed_dep) > 1: # or'ed deps don't count
- continue
- if (constraint == ">=" and apt_pkg.VersionCompare(version, "1.10.24") < 0) or \
- (constraint == ">>" and apt_pkg.VersionCompare(version, "1.10.23") < 0):
- continue
- found_needed_predep = 1
- if not found_needed_predep:
- reject("%s: uses bzip2 compression, but doesn't Pre-Depend on dpkg (>= 1.10.24)" % (filename))
- elif chunks[2] != "data.tar.gz":
- reject("%s: third chunk is '%s', expected 'data.tar.gz' or 'data.tar.bz2'." % (filename, chunks[2]))
-
-################################################################################
-
-def check_files():
- global reprocess
-
- archive = utils.where_am_i();
- file_keys = files.keys();
-
- # if reprocess is 2 we've already done this and we're checking
- # things again for the new .orig.tar.gz.
- # [Yes, I'm fully aware of how disgusting this is]
- if not Options["No-Action"] and reprocess < 2:
- cwd = os.getcwd();
- os.chdir(pkg.directory);
- for file in file_keys:
- copy_to_holding(file);
- os.chdir(cwd);
-
- # Check there isn't already a .changes or .katie file of the same name in
- # the proposed-updates "CopyChanges" or "CopyKatie" storage directories.
- # [NB: this check must be done post-suite mapping]
- base_filename = os.path.basename(pkg.changes_file);
- katie_filename = base_filename[:-8]+".katie"
- for suite in changes["distribution"].keys():
- copychanges = "Suite::%s::CopyChanges" % (suite);
- if Cnf.has_key(copychanges) and \
- os.path.exists(Cnf[copychanges]+"/"+base_filename):
- reject("%s: a file with this name already exists in %s" \
- % (base_filename, Cnf[copychanges]));
-
- copykatie = "Suite::%s::CopyKatie" % (suite);
- if Cnf.has_key(copykatie) and \
- os.path.exists(Cnf[copykatie]+"/"+katie_filename):
- reject("%s: a file with this name already exists in %s" \
- % (katie_filename, Cnf[copykatie]));
-
- reprocess = 0;
- has_binaries = 0;
- has_source = 0;
-
- for file in file_keys:
- # Ensure the file does not already exist in one of the accepted directories
- for dir in [ "Accepted", "Byhand", "New" ]:
- if os.path.exists(Cnf["Dir::Queue::%s" % (dir) ]+'/'+file):
- reject("%s file already exists in the %s directory." % (file, dir));
- if not utils.re_taint_free.match(file):
- reject("!!WARNING!! tainted filename: '%s'." % (file));
- # Check the file is readable
- if os.access(file,os.R_OK) == 0:
- # When running in -n, copy_to_holding() won't have
- # generated the reject_message, so we need to.
- if Options["No-Action"]:
- if os.path.exists(file):
- reject("Can't read `%s'. [permission denied]" % (file));
- else:
- reject("Can't read `%s'. [file not found]" % (file));
- files[file]["type"] = "unreadable";
- continue;
- # If it's byhand skip remaining checks
- if files[file]["section"] == "byhand" or files[file]["section"] == "raw-installer":
- files[file]["byhand"] = 1;
- files[file]["type"] = "byhand";
- # Checks for a binary package...
- elif utils.re_isadeb.match(file):
- has_binaries = 1;
- files[file]["type"] = "deb";
-
- # Extract package control information
- deb_file = utils.open_file(file);
- try:
- control = apt_pkg.ParseSection(apt_inst.debExtractControl(deb_file));
- except:
- reject("%s: debExtractControl() raised %s." % (file, sys.exc_type));
- deb_file.close();
- # Can't continue, none of the checks on control would work.
- continue;
- deb_file.close();
-
- # Check for mandatory fields; without them the remaining checks
- # can't run, so skip to the next file if any is absent.  (A bare
- # 'continue' inside the field loop would only skip to the next
- # field, not the next file.)
- missing_mandatory_field = 0;
- for field in [ "Package", "Architecture", "Version" ]:
- if control.Find(field) == None:
- reject("%s: No %s field in control." % (file, field));
- missing_mandatory_field = 1;
- if missing_mandatory_field:
- continue;
-
- # Ensure the package name matches the one given in the .changes
- if not changes["binary"].has_key(control.Find("Package", "")):
- reject("%s: control file lists name as `%s', which isn't in changes file." % (file, control.Find("Package", "")));
-
- # Validate the package field
- package = control.Find("Package");
- if not re_valid_pkg_name.match(package):
- reject("%s: invalid package name '%s'." % (file, package));
-
- # Validate the version field
- version = control.Find("Version");
- if not re_valid_version.match(version):
- reject("%s: invalid version number '%s'." % (file, version));
-
- # Ensure the architecture of the .deb is one we know about.
- default_suite = Cnf.get("Dinstall::DefaultSuite", "Unstable")
- architecture = control.Find("Architecture");
- if architecture not in Cnf.ValueList("Suite::%s::Architectures" % (default_suite)):
- reject("Unknown architecture '%s'." % (architecture));
-
- # Ensure the architecture of the .deb is one of the ones
- # listed in the .changes.
- if not changes["architecture"].has_key(architecture):
- reject("%s: control file lists arch as `%s', which isn't in changes file." % (file, architecture));
-
- # Sanity-check the Depends field
- depends = control.Find("Depends");
- if depends == '':
- reject("%s: Depends field is empty." % (file));
-
- # Check the section & priority match those given in the .changes (non-fatal)
- if control.Find("Section") and files[file]["section"] != "" and files[file]["section"] != control.Find("Section"):
- reject("%s control file lists section as `%s', but changes file has `%s'." % (file, control.Find("Section", ""), files[file]["section"]), "Warning: ");
- if control.Find("Priority") and files[file]["priority"] != "" and files[file]["priority"] != control.Find("Priority"):
- reject("%s control file lists priority as `%s', but changes file has `%s'." % (file, control.Find("Priority", ""), files[file]["priority"]),"Warning: ");
-
- files[file]["package"] = package;
- files[file]["architecture"] = architecture;
- files[file]["version"] = version;
- files[file]["maintainer"] = control.Find("Maintainer", "");
- if file.endswith(".udeb"):
- files[file]["dbtype"] = "udeb";
- elif file.endswith(".deb"):
- files[file]["dbtype"] = "deb";
- else:
- reject("%s is neither a .deb nor a .udeb." % (file));
- files[file]["source"] = control.Find("Source", files[file]["package"]);
- # Get the source version
- source = files[file]["source"];
- source_version = "";
- if source.find("(") != -1:
- m = utils.re_extract_src_version.match(source);
- source = m.group(1);
- source_version = m.group(2);
- if not source_version:
- source_version = files[file]["version"];
- files[file]["source package"] = source;
- files[file]["source version"] = source_version;
-
- # Ensure the filename matches the contents of the .deb
- m = utils.re_isadeb.match(file);
- # package name
- file_package = m.group(1);
- if files[file]["package"] != file_package:
- reject("%s: package part of filename (%s) does not match package name in the %s (%s)." % (file, file_package, files[file]["dbtype"], files[file]["package"]));
- epochless_version = utils.re_no_epoch.sub('', control.Find("Version"));
- # version
- file_version = m.group(2);
- if epochless_version != file_version:
- reject("%s: version part of filename (%s) does not match package version in the %s (%s)." % (file, file_version, files[file]["dbtype"], epochless_version));
- # architecture
- file_architecture = m.group(3);
- if files[file]["architecture"] != file_architecture:
- reject("%s: architecture part of filename (%s) does not match package architecture in the %s (%s)." % (file, file_architecture, files[file]["dbtype"], files[file]["architecture"]));
-
- # Check for existing source
- source_version = files[file]["source version"];
- source_package = files[file]["source package"];
- if changes["architecture"].has_key("source"):
- if source_version != changes["version"]:
- reject("source version (%s) for %s doesn't match changes version %s." % (source_version, file, changes["version"]));
- else:
- # Check in the SQL database
- if not Katie.source_exists(source_package, source_version, changes["distribution"].keys()):
- # Check in one of the other directories
- source_epochless_version = utils.re_no_epoch.sub('', source_version);
- dsc_filename = "%s_%s.dsc" % (source_package, source_epochless_version);
- if os.path.exists(Cnf["Dir::Queue::Byhand"] + '/' + dsc_filename):
- files[file]["byhand"] = 1;
- elif os.path.exists(Cnf["Dir::Queue::New"] + '/' + dsc_filename):
- files[file]["new"] = 1;
- elif not os.path.exists(Cnf["Dir::Queue::Accepted"] + '/' + dsc_filename):
- reject("no source found for %s %s (%s)." % (source_package, source_version, file));
- # Check the version, and check for file overwrites
- reject(Katie.check_binary_against_db(file),"");
-
- check_deb_ar(file, control)
-
- # Checks for a source package...
- else:
- m = utils.re_issource.match(file);
- if m:
- has_source = 1;
- files[file]["package"] = m.group(1);
- files[file]["version"] = m.group(2);
- files[file]["type"] = m.group(3);
-
- # Ensure the source package name matches the Source field in the .changes
- if changes["source"] != files[file]["package"]:
- reject("%s: changes file doesn't say %s for Source" % (file, files[file]["package"]));
-
- # Ensure the source version matches the version in the .changes file
- if files[file]["type"] == "orig.tar.gz":
- changes_version = changes["chopversion2"];
- else:
- changes_version = changes["chopversion"];
- if changes_version != files[file]["version"]:
- reject("%s: should be %s according to changes file." % (file, changes_version));
-
- # Ensure the .changes lists source in the Architecture field
- if not changes["architecture"].has_key("source"):
- reject("%s: changes file doesn't list `source' in Architecture field." % (file));
-
- # Check the signature of a .dsc file
- if files[file]["type"] == "dsc":
- dsc["fingerprint"] = utils.check_signature(file, reject);
-
- files[file]["architecture"] = "source";
-
- # Not a binary or source package? Assume byhand...
- else:
- files[file]["byhand"] = 1;
- files[file]["type"] = "byhand";
-
- # Per-suite file checks
- files[file]["oldfiles"] = {};
- for suite in changes["distribution"].keys():
- # Skip byhand
- if files[file].has_key("byhand"):
- continue;
-
- # Handle component mappings
- for map in Cnf.ValueList("ComponentMappings"):
- (source, dest) = map.split();
- if files[file]["component"] == source:
- files[file]["original component"] = source;
- files[file]["component"] = dest;
-
- # Ensure the component is valid for the target suite
- if Cnf.has_key("Suite::%s::Components" % (suite)) and \
- files[file]["component"] not in Cnf.ValueList("Suite::%s::Components" % (suite)):
- reject("unknown component `%s' for suite `%s'." % (files[file]["component"], suite));
- continue;
-
- # Validate the component
- component = files[file]["component"];
- component_id = db_access.get_component_id(component);
- if component_id == -1:
- reject("file '%s' has unknown component '%s'." % (file, component));
- continue;
-
- # See if the package is NEW
- if not Katie.in_override_p(files[file]["package"], files[file]["component"], suite, files[file].get("dbtype",""), file):
- files[file]["new"] = 1;
-
- # Validate the priority
- if files[file]["priority"].find('/') != -1:
- reject("file '%s' has invalid priority '%s' [contains '/']." % (file, files[file]["priority"]));
-
- # Determine the location
- location = Cnf["Dir::Pool"];
- location_id = db_access.get_location_id (location, component, archive);
- if location_id == -1:
- reject("[INTERNAL ERROR] couldn't determine location (Component: %s, Archive: %s)" % (component, archive));
- files[file]["location id"] = location_id;
-
- # Check the md5sum & size against existing files (if any)
- files[file]["pool name"] = utils.poolify (changes["source"], files[file]["component"]);
- files_id = db_access.get_files_id(files[file]["pool name"] + file, files[file]["size"], files[file]["md5sum"], files[file]["location id"]);
- if files_id == -1:
- reject("INTERNAL ERROR, get_files_id() returned multiple matches for %s." % (file));
- elif files_id == -2:
- reject("md5sum and/or size mismatch on existing copy of %s." % (file));
- files[file]["files id"] = files_id
-
- # Check for packages that have moved from one component to another
- q = Katie.projectB.query("""
-SELECT c.name FROM binaries b, bin_associations ba, suite s, location l,
- component c, architecture a, files f
- WHERE b.package = '%s' AND s.suite_name = '%s'
- AND (a.arch_string = '%s' OR a.arch_string = 'all')
- AND ba.bin = b.id AND ba.suite = s.id AND b.architecture = a.id
- AND f.location = l.id AND l.component = c.id AND b.file = f.id"""
- % (files[file]["package"], suite,
- files[file]["architecture"]));
- ql = q.getresult();
- if ql:
- files[file]["othercomponents"] = ql[0][0];
-
- # If the .changes file says it has source, it must have source.
- if changes["architecture"].has_key("source"):
- if not has_source:
- reject("no source found but Architecture line in changes mentions source.");
-
- if not has_binaries and Cnf.FindB("Dinstall::Reject::NoSourceOnly"):
- reject("source only uploads are not supported.");
-
-###############################################################################
-
-def check_dsc():
- global reprocess;
-
- # Ensure there is source to check
- if not changes["architecture"].has_key("source"):
- return 1;
-
- # Find the .dsc
- dsc_filename = None;
- for file in files.keys():
- if files[file]["type"] == "dsc":
- if dsc_filename:
- reject("can not process a .changes file with multiple .dsc's.");
- return 0;
- else:
- dsc_filename = file;
-
- # If there isn't one, we have nothing to do. (We have reject()ed the upload already)
- if not dsc_filename:
- reject("source uploads must contain a dsc file");
- return 0;
-
- # Parse the .dsc file
- try:
- dsc.update(utils.parse_changes(dsc_filename, signing_rules=1));
- except utils.cant_open_exc:
- # if not -n copy_to_holding() will have done this for us...
- if Options["No-Action"]:
- reject("%s: can't read file." % (dsc_filename));
- except utils.changes_parse_error_exc, line:
- reject("%s: parse error, can't grok: %s." % (dsc_filename, line));
- except utils.invalid_dsc_format_exc, line:
- reject("%s: syntax error on line %s." % (dsc_filename, line));
- # Build up the file list of files mentioned by the .dsc
- try:
- dsc_files.update(utils.build_file_list(dsc, is_a_dsc=1));
- except utils.no_files_exc:
- reject("%s: no Files: field." % (dsc_filename));
- return 0;
- except utils.changes_parse_error_exc, line:
- reject("%s: parse error, can't grok: %s." % (dsc_filename, line));
- return 0;
-
- # Enforce mandatory fields
- for i in ("format", "source", "version", "binary", "maintainer", "architecture", "files"):
- if not dsc.has_key(i):
- reject("%s: missing mandatory field `%s'." % (dsc_filename, i));
- return 0;
-
- # Validate the source and version fields
- if not re_valid_pkg_name.match(dsc["source"]):
- reject("%s: invalid source name '%s'." % (dsc_filename, dsc["source"]));
- if not re_valid_version.match(dsc["version"]):
- reject("%s: invalid version number '%s'." % (dsc_filename, dsc["version"]));
-
- # Bumping the version number of the .dsc breaks extraction by stable's
- # dpkg-source. So let's not do that...
- if dsc["format"] != "1.0":
- reject("%s: incompatible 'Format' version produced by a broken version of dpkg-dev 1.9.1{3,4}." % (dsc_filename));
-
- # Validate the Maintainer field
- try:
- utils.fix_maintainer (dsc["maintainer"]);
- except utils.ParseMaintError, msg:
- reject("%s: Maintainer field ('%s') failed to parse: %s" \
- % (dsc_filename, dsc["maintainer"], msg));
-
- # Validate the build-depends field(s)
- for field_name in [ "build-depends", "build-depends-indep" ]:
- field = dsc.get(field_name);
- if field:
- # Check for broken dpkg-dev lossage...
- if field.startswith("ARRAY"):
- reject("%s: invalid %s field produced by a broken version of dpkg-dev (1.10.11)" % (dsc_filename, field_name.title()));
-
- # Have apt try to parse them...
- try:
- apt_pkg.ParseSrcDepends(field);
- except:
- reject("%s: invalid %s field (can not be parsed by apt)." % (dsc_filename, field_name.title()));
- pass;
-
- # Ensure the version number in the .dsc matches the version number in the .changes
- epochless_dsc_version = utils.re_no_epoch.sub('', dsc["version"]);
- changes_version = files[dsc_filename]["version"];
- if epochless_dsc_version != files[dsc_filename]["version"]:
- reject("version ('%s') in .dsc does not match version ('%s') in .changes." % (epochless_dsc_version, changes_version));
-
- # Ensure there is a .tar.gz in the .dsc file
- has_tar = 0;
- for f in dsc_files.keys():
- m = utils.re_issource.match(f);
- if not m:
- reject("%s: %s in Files field not recognised as source." % (dsc_filename, f));
- type = m.group(3);
- if type == "orig.tar.gz" or type == "tar.gz":
- has_tar = 1;
- if not has_tar:
- reject("%s: no .tar.gz or .orig.tar.gz in 'Files' field." % (dsc_filename));
-
- # Ensure source is newer than existing source in target suites
- reject(Katie.check_source_against_db(dsc_filename),"");
-
- (reject_msg, is_in_incoming) = Katie.check_dsc_against_db(dsc_filename);
- reject(reject_msg, "");
- if is_in_incoming:
- if not Options["No-Action"]:
- copy_to_holding(is_in_incoming);
- orig_tar_gz = os.path.basename(is_in_incoming);
- files[orig_tar_gz] = {};
- files[orig_tar_gz]["size"] = os.stat(orig_tar_gz)[stat.ST_SIZE];
- files[orig_tar_gz]["md5sum"] = dsc_files[orig_tar_gz]["md5sum"];
- files[orig_tar_gz]["section"] = files[dsc_filename]["section"];
- files[orig_tar_gz]["priority"] = files[dsc_filename]["priority"];
- files[orig_tar_gz]["component"] = files[dsc_filename]["component"];
- files[orig_tar_gz]["type"] = "orig.tar.gz";
- reprocess = 2;
-
- return 1;
-
-################################################################################
-
-def get_changelog_versions(source_dir):
- """Extracts a the source package and (optionally) grabs the
- version history out of debian/changelog for the BTS."""
-
- # Find the .dsc (again)
- dsc_filename = None;
- for file in files.keys():
- if files[file]["type"] == "dsc":
- dsc_filename = file;
-
- # If there isn't one, we have nothing to do. (We have reject()ed the upload already)
- if not dsc_filename:
- return;
-
- # Create a symlink mirror of the source files in our temporary directory
- for f in files.keys():
- m = utils.re_issource.match(f);
- if m:
- src = os.path.join(source_dir, f);
- # If a file is missing for whatever reason, give up.
- if not os.path.exists(src):
- return;
- type = m.group(3);
- if type == "orig.tar.gz" and pkg.orig_tar_gz:
- continue;
- dest = os.path.join(os.getcwd(), f);
- os.symlink(src, dest);
-
- # If the orig.tar.gz is not a part of the upload, create a symlink to the
- # existing copy.
- if pkg.orig_tar_gz:
- dest = os.path.join(os.getcwd(), os.path.basename(pkg.orig_tar_gz));
- os.symlink(pkg.orig_tar_gz, dest);
-
- # Extract the source
- cmd = "dpkg-source -sn -x %s" % (dsc_filename);
- (result, output) = commands.getstatusoutput(cmd);
- if (result != 0):
- reject("'dpkg-source -x' failed for %s [return code: %s]." % (dsc_filename, result));
- reject(utils.prefix_multi_line_string(output, " [dpkg-source output:] "), "");
- return;
-
- if not Cnf.Find("Dir::Queue::BTSVersionTrack"):
- return;
-
- # Get the upstream version
- upstr_version = utils.re_no_epoch.sub('', dsc["version"]);
- if re_strip_revision.search(upstr_version):
- upstr_version = re_strip_revision.sub('', upstr_version);
-
- # Ensure the changelog file exists
- changelog_filename = "%s-%s/debian/changelog" % (dsc["source"], upstr_version);
- if not os.path.exists(changelog_filename):
- reject("%s: debian/changelog not found in extracted source." % (dsc_filename));
- return;
-
- # Parse the changelog
- dsc["bts changelog"] = "";
- changelog_file = utils.open_file(changelog_filename);
- for line in changelog_file.readlines():
- m = re_changelog_versions.match(line);
- if m:
- dsc["bts changelog"] += line;
- changelog_file.close();
-
- # Check we found at least one revision in the changelog
- if not dsc["bts changelog"]:
- reject("%s: changelog format not recognised (empty version tree)." % (dsc_filename));
-
-########################################
-
-def check_source():
- # Bail out if:
- # a) there's no source
- # or b) reprocess is 2 - we will do this check next time when orig.tar.gz is in 'files'
- # or c) the orig.tar.gz is MIA
- if not changes["architecture"].has_key("source") or reprocess == 2 \
- or pkg.orig_tar_gz == -1:
- return;
-
- # Create a temporary directory to extract the source into
- if Options["No-Action"]:
- tmpdir = tempfile.mktemp();
- else:
- # We're in queue/holding and can create a random directory.
- tmpdir = "%s" % (os.getpid());
- os.mkdir(tmpdir);
-
- # Move into the temporary directory
- cwd = os.getcwd();
- os.chdir(tmpdir);
-
- # Get the changelog version history
- get_changelog_versions(cwd);
-
- # Move back and cleanup the temporary tree
- os.chdir(cwd);
- try:
- shutil.rmtree(tmpdir);
- except OSError, e:
- if errno.errorcode[e.errno] != 'EACCES':
- utils.fubar("%s: couldn't remove tmp dir for source tree." % (dsc["source"]));
-
- reject("%s: source tree could not be cleanly removed." % (dsc["source"]));
- # We probably have u-r or u-w directories so chmod everything
- # and try again.
- cmd = "chmod -R u+rwx %s" % (tmpdir)
- result = os.system(cmd)
- if result != 0:
- utils.fubar("'%s' failed with result %s." % (cmd, result));
- shutil.rmtree(tmpdir);
- except:
- utils.fubar("%s: couldn't remove tmp dir for source tree." % (dsc["source"]));
-
-################################################################################
-
-# FIXME: should be a debian specific check called from a hook
-
-def check_urgency ():
- if changes["architecture"].has_key("source"):
- if not changes.has_key("urgency"):
- changes["urgency"] = Cnf["Urgency::Default"];
- if changes["urgency"] not in Cnf.ValueList("Urgency::Valid"):
- reject("%s is not a valid urgency; it will be treated as %s by testing." % (changes["urgency"], Cnf["Urgency::Default"]), "Warning: ");
- changes["urgency"] = Cnf["Urgency::Default"];
- changes["urgency"] = changes["urgency"].lower();
-
-################################################################################
-
-def check_md5sums ():
- for file in files.keys():
- try:
- file_handle = utils.open_file(file);
- except utils.cant_open_exc:
- continue;
-
- # Check md5sum
- if apt_pkg.md5sum(file_handle) != files[file]["md5sum"]:
- reject("%s: md5sum check failed." % (file));
- file_handle.close();
- # Check size
- actual_size = os.stat(file)[stat.ST_SIZE];
- size = int(files[file]["size"]);
- if size != actual_size:
- reject("%s: actual file size (%s) does not match size (%s) in .changes"
- % (file, actual_size, size));
-
- for file in dsc_files.keys():
- try:
- file_handle = utils.open_file(file);
- except utils.cant_open_exc:
- continue;
-
- # Check md5sum
- if apt_pkg.md5sum(file_handle) != dsc_files[file]["md5sum"]:
- reject("%s: md5sum check failed." % (file));
- file_handle.close();
- # Check size
- actual_size = os.stat(file)[stat.ST_SIZE];
- size = int(dsc_files[file]["size"]);
- if size != actual_size:
- reject("%s: actual file size (%s) does not match size (%s) in .dsc"
- % (file, actual_size, size));
-
-################################################################################
-
-# Sanity check the time stamps of files inside debs.
-# [Files in the near future cause ugly warnings and extreme time
-# travel can cause errors on extraction]
-
-def check_timestamps():
- class Tar:
- def __init__(self, future_cutoff, past_cutoff):
- self.reset();
- self.future_cutoff = future_cutoff;
- self.past_cutoff = past_cutoff;
-
- def reset(self):
- self.future_files = {};
- self.ancient_files = {};
-
- def callback(self, Kind,Name,Link,Mode,UID,GID,Size,MTime,Major,Minor):
- if MTime > self.future_cutoff:
- self.future_files[Name] = MTime;
- if MTime < self.past_cutoff:
- self.ancient_files[Name] = MTime;
- ####
-
- future_cutoff = time.time() + int(Cnf["Dinstall::FutureTimeTravelGrace"]);
- past_cutoff = time.mktime(time.strptime(Cnf["Dinstall::PastCutoffYear"],"%Y"));
- tar = Tar(future_cutoff, past_cutoff);
- for filename in files.keys():
- if files[filename]["type"] == "deb":
- tar.reset();
- try:
- deb_file = utils.open_file(filename);
- apt_inst.debExtract(deb_file,tar.callback,"control.tar.gz");
- deb_file.seek(0);
- try:
- apt_inst.debExtract(deb_file,tar.callback,"data.tar.gz")
- except SystemError, e:
- # If we can't find a data.tar.gz, look for data.tar.bz2 instead.
- if not re.match(r"Cannot f[ui]nd chunk data.tar.gz$", str(e)):
- raise
- deb_file.seek(0)
- apt_inst.debExtract(deb_file,tar.callback,"data.tar.bz2")
- deb_file.close();
- #
- future_files = tar.future_files.keys();
- if future_files:
- num_future_files = len(future_files);
- future_file = future_files[0];
- future_date = tar.future_files[future_file];
- reject("%s: has %s file(s) with a time stamp too far into the future (e.g. %s [%s])."
- % (filename, num_future_files, future_file,
- time.ctime(future_date)));
- #
- ancient_files = tar.ancient_files.keys();
- if ancient_files:
- num_ancient_files = len(ancient_files);
- ancient_file = ancient_files[0];
- ancient_date = tar.ancient_files[ancient_file];
- reject("%s: has %s file(s) with a time stamp too ancient (e.g. %s [%s])."
- % (filename, num_ancient_files, ancient_file,
- time.ctime(ancient_date)));
- except:
- reject("%s: deb contents timestamp check failed [%s: %s]" % (filename, sys.exc_type, sys.exc_value));
-
-################################################################################
-################################################################################
-
-# If any file of an upload has a recent mtime then chances are good
-# the file is still being uploaded.
-
-def upload_too_new():
- too_new = 0;
- # Move back to the original directory to get accurate time stamps
- cwd = os.getcwd();
- os.chdir(pkg.directory);
- file_list = pkg.files.keys();
- file_list.extend(pkg.dsc_files.keys());
- file_list.append(pkg.changes_file);
- for file in file_list:
- try:
- last_modified = time.time()-os.path.getmtime(file);
- if last_modified < int(Cnf["Dinstall::SkipTime"]):
- too_new = 1;
- break;
- except:
- pass;
- os.chdir(cwd);
- return too_new;
-
-################################################################################
-
-def action ():
- # changes["distribution"] may not exist in corner cases
- # (e.g. unreadable changes files)
- if not changes.has_key("distribution") or not isinstance(changes["distribution"], DictType):
- changes["distribution"] = {};
-
- (summary, short_summary) = Katie.build_summaries();
-
- # q-unapproved hax0ring
- queue_info = {
- "New": { "is": is_new, "process": acknowledge_new },
- "Byhand" : { "is": is_byhand, "process": do_byhand },
- "Unembargo" : { "is": is_unembargo, "process": queue_unembargo },
- "Embargo" : { "is": is_embargo, "process": queue_embargo },
- }
- queues = [ "New", "Byhand" ]
- if Cnf.FindB("Dinstall::SecurityQueueHandling"):
- queues += [ "Unembargo", "Embargo" ]
-
- (prompt, answer) = ("", "XXX")
- if Options["No-Action"] or Options["Automatic"]:
- answer = 'S'
-
- queuekey = ''
-
- if reject_message.find("Rejected") != -1:
- if upload_too_new():
- print "SKIP (too new)\n" + reject_message,;
- prompt = "[S]kip, Quit ?";
- else:
- print "REJECT\n" + reject_message,;
- prompt = "[R]eject, Skip, Quit ?";
- if Options["Automatic"]:
- answer = 'R';
- else:
- queue = None
- for q in queues:
- if queue_info[q]["is"]():
- queue = q
- break
- if queue:
- print "%s for %s\n%s%s" % (
- queue.upper(), ", ".join(changes["distribution"].keys()),
- reject_message, summary),
- queuekey = queue[0].upper()
- if queuekey in "RQSA":
- queuekey = "D"
- prompt = "[D]ivert, Skip, Quit ?"
- else:
- prompt = "[%s]%s, Skip, Quit ?" % (queuekey, queue[1:].lower())
- if Options["Automatic"]:
- answer = queuekey
- else:
- print "ACCEPT\n" + reject_message + summary,;
- prompt = "[A]ccept, Skip, Quit ?";
- if Options["Automatic"]:
- answer = 'A';
-
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.match(prompt);
- if answer == "":
- answer = m.group(1);
- answer = answer[:1].upper();
-
- if answer == 'R':
- os.chdir (pkg.directory);
- Katie.do_reject(0, reject_message);
- elif answer == 'A':
- accept(summary, short_summary);
- remove_from_unchecked()
- elif answer == queuekey:
- queue_info[queue]["process"](summary)
- remove_from_unchecked()
- elif answer == 'Q':
- sys.exit(0)
-
-def remove_from_unchecked():
- os.chdir (pkg.directory);
- for file in files.keys():
- os.unlink(file);
- os.unlink(pkg.changes_file);
-
-################################################################################
-
-def accept (summary, short_summary):
- Katie.accept(summary, short_summary);
- Katie.check_override();
-
-################################################################################
-
-def move_to_dir (dest, perms=0660, changesperms=0664):
- utils.move (pkg.changes_file, dest, perms=changesperms);
- file_keys = files.keys();
- for file in file_keys:
- utils.move (file, dest, perms=perms);
-
-################################################################################
-
-def is_unembargo ():
- q = Katie.projectB.query(
- "SELECT package FROM disembargo WHERE package = '%s' AND version = '%s'" %
- (changes["source"], changes["version"]))
- ql = q.getresult()
- if ql:
- return 1
-
- if pkg.directory == Cnf["Dir::Queue::Disembargo"].rstrip("/"):
- if changes["architecture"].has_key("source"):
- if Options["No-Action"]: return 1
-
- Katie.projectB.query(
- "INSERT INTO disembargo (package, version) VALUES ('%s', '%s')" %
- (changes["source"], changes["version"]))
- return 1
-
- return 0
-
-def queue_unembargo (summary):
- print "Moving to UNEMBARGOED holding area."
- Logger.log(["Moving to unembargoed", pkg.changes_file]);
-
- Katie.dump_vars(Cnf["Dir::Queue::Unembargoed"]);
- move_to_dir(Cnf["Dir::Queue::Unembargoed"])
- Katie.queue_build("unembargoed", Cnf["Dir::Queue::Unembargoed"])
-
- # Check for override disparities
- Katie.Subst["__SUMMARY__"] = summary;
- Katie.check_override();
-
-################################################################################
-
-def is_embargo ():
- return 0
-
-def queue_embargo (summary):
- print "Moving to EMBARGOED holding area."
- Logger.log(["Moving to embargoed", pkg.changes_file]);
-
- Katie.dump_vars(Cnf["Dir::Queue::Embargoed"]);
- move_to_dir(Cnf["Dir::Queue::Embargoed"])
- Katie.queue_build("embargoed", Cnf["Dir::Queue::Embargoed"])
-
- # Check for override disparities
- Katie.Subst["__SUMMARY__"] = summary;
- Katie.check_override();
-
-################################################################################
-
-def is_byhand ():
- for file in files.keys():
- if files[file].has_key("byhand"):
- return 1
- return 0
-
-def do_byhand (summary):
- print "Moving to BYHAND holding area."
- Logger.log(["Moving to byhand", pkg.changes_file]);
-
- Katie.dump_vars(Cnf["Dir::Queue::Byhand"]);
- move_to_dir(Cnf["Dir::Queue::Byhand"])
-
- # Check for override disparities
- Katie.Subst["__SUMMARY__"] = summary;
- Katie.check_override();
-
-################################################################################
-
-def is_new ():
- for file in files.keys():
- if files[file].has_key("new"):
- return 1
- return 0
-
-def acknowledge_new (summary):
- Subst = Katie.Subst;
-
- print "Moving to NEW holding area."
- Logger.log(["Moving to new", pkg.changes_file]);
-
- Katie.dump_vars(Cnf["Dir::Queue::New"]);
- move_to_dir(Cnf["Dir::Queue::New"])
-
- if not Options["No-Mail"]:
- print "Sending new ack.";
- Subst["__SUMMARY__"] = summary;
- new_ack_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.new");
- utils.send_mail(new_ack_message);
-
-################################################################################
-
-# reprocess is necessary for the case of foo_1.2-1 and foo_1.2-2 in
-# Incoming. -1 will reference the .orig.tar.gz, but -2 will not.
-# Katie.check_dsc_against_db() can find the .orig.tar.gz but it will
- # not have processed it during its checks of -2. If -1 has been
-# deleted or otherwise not checked by jennifer, the .orig.tar.gz will
-# not have been checked at all. To get round this, we force the
-# .orig.tar.gz into the .changes structure and reprocess the .changes
-# file.
-
-def process_it (changes_file):
- global reprocess, reject_message;
-
- # Reset some globals
- reprocess = 1;
- Katie.init_vars();
- # Some defaults in case we can't fully process the .changes file
- changes["maintainer2047"] = Cnf["Dinstall::MyEmailAddress"];
- changes["changedby2047"] = Cnf["Dinstall::MyEmailAddress"];
- reject_message = "";
-
- # Absolutize the filename to avoid the requirement of being in the
- # same directory as the .changes file.
- pkg.changes_file = os.path.abspath(changes_file);
-
- # Remember where we are so we can come back after cd-ing into the
- # holding directory.
- pkg.directory = os.getcwd();
-
- try:
- # If this is the Real Thing(tm), copy things into a private
- # holding directory first to avoid replaceable file races.
- if not Options["No-Action"]:
- os.chdir(Cnf["Dir::Queue::Holding"]);
- copy_to_holding(pkg.changes_file);
- # Relativize the filename so we use the copy in holding
- # rather than the original...
- pkg.changes_file = os.path.basename(pkg.changes_file);
- changes["fingerprint"] = utils.check_signature(pkg.changes_file, reject);
- if changes["fingerprint"]:
- valid_changes_p = check_changes();
- else:
- valid_changes_p = 0;
- if valid_changes_p:
- while reprocess:
- check_distributions();
- check_files();
- valid_dsc_p = check_dsc();
- if valid_dsc_p:
- check_source();
- check_md5sums();
- check_urgency();
- check_timestamps();
- Katie.update_subst(reject_message);
- action();
- except SystemExit:
- raise;
- except:
- print "ERROR";
- traceback.print_exc(file=sys.stderr);
- pass;
-
- # Restore previous WD
- os.chdir(pkg.directory);
-
-###############################################################################
-
-def main():
- global Cnf, Options, Logger;
-
- changes_files = init();
-
- # -n/--dry-run invalidates some other options which would involve things happening
- if Options["No-Action"]:
- Options["Automatic"] = "";
-
- # Ensure all the arguments we were given are .changes files
- for file in changes_files:
- if not file.endswith(".changes"):
- utils.warn("Ignoring '%s' because it's not a .changes file." % (file));
- changes_files.remove(file);
-
- if changes_files == []:
- utils.fubar("Need at least one .changes file as an argument.");
-
- # Check that we aren't going to clash with the daily cron job
-
- if not Options["No-Action"] and os.path.exists("%s/daily.lock" % (Cnf["Dir::Lock"])) and not Options["No-Lock"]:
- utils.fubar("Archive maintenance in progress. Try again later.");
-
- # Obtain lock if not in no-action mode and initialize the log
-
- if not Options["No-Action"]:
- lock_fd = os.open(Cnf["Dinstall::LockFile"], os.O_RDWR | os.O_CREAT);
- try:
- fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB);
- except IOError, e:
- if errno.errorcode[e.errno] == 'EACCES' or errno.errorcode[e.errno] == 'EAGAIN':
- utils.fubar("Couldn't obtain lock; assuming another jennifer is already running.");
- else:
- raise;
- Logger = Katie.Logger = logging.Logger(Cnf, "jennifer");
-
- # debian-{devel-,}-changes@lists.debian.org toggles write access based on this header
- bcc = "X-Katie: %s" % (jennifer_version);
- if Cnf.has_key("Dinstall::Bcc"):
- Katie.Subst["__BCC__"] = bcc + "\nBcc: %s" % (Cnf["Dinstall::Bcc"]);
- else:
- Katie.Subst["__BCC__"] = bcc;
-
-
- # Sort the .changes files so that we process sourceful ones first
- changes_files.sort(utils.changes_compare);
-
- # Process the changes files
- for changes_file in changes_files:
- print "\n" + changes_file;
- try:
- process_it (changes_file);
- finally:
- if not Options["No-Action"]:
- clean_holding();
-
- accept_count = Katie.accept_count;
- accept_bytes = Katie.accept_bytes;
- if accept_count:
- sets = "set"
- if accept_count > 1:
- sets = "sets";
- print "Accepted %d package %s, %s." % (accept_count, sets, utils.size_type(int(accept_bytes)));
- Logger.log(["total",accept_count,accept_bytes]);
-
- if not Options["No-Action"]:
- Logger.close();
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Dependency check proposed-updates
-# Copyright (C) 2001, 2002, 2004 James Troup <james@nocrew.org>
-# $Id: jeri,v 1.15 2005-02-08 22:43:45 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# | > amd64 is more mature than even some released architectures
-# |
-# | This might be true of the architecture, unfortunately it seems to be the
-# | exact opposite for most of the people involved with it.
-#
-# <1089213290.24029.6.camel@descent.netsplit.com>
-
-################################################################################
-
-import pg, sys, os;
-import utils, db_access
-import apt_pkg, apt_inst;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-Options = None;
-stable = {};
-stable_virtual = {};
-architectures = None;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: jeri [OPTION] <CHANGES FILE | DEB FILE | ADMIN FILE>[...]
-(Very) Basic dependency checking for proposed-updates.
-
- -q, --quiet be quieter about what is being done
- -v, --verbose be more verbose about what is being done
- -h, --help show this help and exit
-
-Need either .changes files, .deb files or an admin file with a '.joey' suffix."""
- sys.exit(exit_code)
-
-################################################################################
-
-def d_test (dict, key, positive, negative):
- if not dict:
- return negative;
- if dict.has_key(key):
- return positive;
- else:
- return negative;
-
-################################################################################
-
-def check_dep (depends, dep_type, check_archs, filename, files):
- pkg_unsat = 0;
- for arch in check_archs:
- for parsed_dep in apt_pkg.ParseDepends(depends):
- unsat = [];
- for atom in parsed_dep:
- (dep, version, constraint) = atom;
- # As a real package?
- if stable.has_key(dep):
- if stable[dep].has_key(arch):
- if apt_pkg.CheckDep(stable[dep][arch], constraint, version):
- if Options["debug"]:
- print "Found %s as a real package." % (utils.pp_deps(parsed_dep));
- unsat = 0;
- break;
- # As a virtual?
- if stable_virtual.has_key(dep):
- if stable_virtual[dep].has_key(arch):
- if not constraint and not version:
- if Options["debug"]:
- print "Found %s as a virtual package." % (utils.pp_deps(parsed_dep));
- unsat = 0;
- break;
- # As part of the same .changes?
- epochless_version = utils.re_no_epoch.sub('', version)
- dep_filename = "%s_%s_%s.deb" % (dep, epochless_version, arch);
- if files.has_key(dep_filename):
- if Options["debug"]:
- print "Found %s in the same upload." % (utils.pp_deps(parsed_dep));
- unsat = 0;
- break;
- # Not found...
- # [FIXME: must be a better way ... ]
- error = "%s not found. [Real: " % (utils.pp_deps(parsed_dep))
- if stable.has_key(dep):
- if stable[dep].has_key(arch):
- error += "%s:%s:%s" % (dep, arch, stable[dep][arch]);
- else:
- error += "%s:-:-" % (dep);
- else:
- error += "-:-:-";
- error += ", Virtual: ";
- if stable_virtual.has_key(dep):
- if stable_virtual[dep].has_key(arch):
- error += "%s:%s" % (dep, arch);
- else:
- error += "%s:-" % (dep);
- else:
- error += "-:-";
- error += ", Upload: ";
- if files.has_key(dep_filename):
- error += "yes";
- else:
- error += "no";
- error += "]";
- unsat.append(error);
-
- if unsat:
- sys.stderr.write("MWAAP! %s: '%s' %s cannot be satisfied:\n" % (filename, utils.pp_deps(parsed_dep), dep_type));
- for error in unsat:
- sys.stderr.write(" %s\n" % (error));
- pkg_unsat = 1;
-
- return pkg_unsat;
-
-def check_package(filename, files):
- try:
- control = apt_pkg.ParseSection(apt_inst.debExtractControl(utils.open_file(filename)));
- except:
- utils.warn("%s: debExtractControl() raised %s." % (filename, sys.exc_type));
- return 1;
- Depends = control.Find("Depends");
- Pre_Depends = control.Find("Pre-Depends");
- #Recommends = control.Find("Recommends");
- pkg_arch = control.Find("Architecture");
- base_file = os.path.basename(filename);
- if pkg_arch == "all":
- check_archs = architectures;
- else:
- check_archs = [pkg_arch];
-
- pkg_unsat = 0;
- if Pre_Depends:
- pkg_unsat += check_dep(Pre_Depends, "pre-dependency", check_archs, base_file, files);
-
- if Depends:
- pkg_unsat += check_dep(Depends, "dependency", check_archs, base_file, files);
- #if Recommends:
- #pkg_unsat += check_dep(Recommends, "recommendation", check_archs, base_file, files);
-
- return pkg_unsat;
-
-################################################################################
-
-def pass_fail (filename, result):
- if not Options["quiet"]:
- print "%s:" % (os.path.basename(filename)),
- if result:
- print "FAIL";
- else:
- print "ok";
-
-################################################################################
-
-def check_changes (filename):
- try:
- changes = utils.parse_changes(filename);
- files = utils.build_file_list(changes);
- except:
- utils.warn("Error parsing changes file '%s'" % (filename));
- return;
-
- result = 0;
-
- # Move to the pool directory
- cwd = os.getcwd();
- file = files.keys()[0];
- pool_dir = Cnf["Dir::Pool"] + '/' + utils.poolify(changes["source"], files[file]["component"]);
- os.chdir(pool_dir);
-
- changes_result = 0;
- for file in files.keys():
- if file.endswith(".deb"):
- result = check_package(file, files);
- if Options["verbose"]:
- pass_fail(file, result);
- changes_result += result;
-
- pass_fail (filename, changes_result);
-
- # Move back
- os.chdir(cwd);
-
-################################################################################
-
-def check_deb (filename):
- result = check_package(filename, {});
- pass_fail(filename, result);
-
-
-################################################################################
-
-def check_joey (filename):
- file = utils.open_file(filename);
-
- cwd = os.getcwd();
- os.chdir("%s/dists/proposed-updates" % (Cnf["Dir::Root"]));
-
- for line in file.readlines():
- line = line.rstrip();
- if line.find('install') != -1:
- split_line = line.split();
- if len(split_line) != 2:
- utils.fubar("Parse error (not exactly 2 elements): %s" % (line));
- install_type = split_line[0];
- if install_type not in [ "install", "install-u", "sync-install" ]:
- utils.fubar("Unknown install type ('%s') from: %s" % (install_type, line));
- changes_filename = split_line[1]
- if Options["debug"]:
- print "Processing %s..." % (changes_filename);
- check_changes(changes_filename);
- file.close();
-
- os.chdir(cwd);
-
-################################################################################
-
-def parse_packages():
- global stable, stable_virtual, architectures;
-
- # Parse the Packages files (since it's a sub-second operation on auric)
- suite = "stable";
- stable = {};
- components = Cnf.ValueList("Suite::%s::Components" % (suite));
- architectures = filter(utils.real_arch, Cnf.ValueList("Suite::%s::Architectures" % (suite)));
- for component in components:
- for architecture in architectures:
- filename = "%s/dists/%s/%s/binary-%s/Packages" % (Cnf["Dir::Root"], suite, component, architecture);
- packages = utils.open_file(filename, 'r');
- Packages = apt_pkg.ParseTagFile(packages);
- while Packages.Step():
- package = Packages.Section.Find('Package');
- version = Packages.Section.Find('Version');
- provides = Packages.Section.Find('Provides');
- if not stable.has_key(package):
- stable[package] = {};
- stable[package][architecture] = version;
- if provides:
- for virtual_pkg in provides.split(","):
- virtual_pkg = virtual_pkg.strip();
- if not stable_virtual.has_key(virtual_pkg):
- stable_virtual[virtual_pkg] = {};
- stable_virtual[virtual_pkg][architecture] = "NA";
- packages.close()
-
-################################################################################
-
-def main ():
- global Cnf, projectB, Options;
-
- Cnf = utils.get_conf()
-
- Arguments = [('d', "debug", "Jeri::Options::Debug"),
- ('q', "quiet", "Jeri::Options::Quiet"),
- ('v', "verbose", "Jeri::Options::Verbose"),
- ('h', "help", "Jeri::Options::Help")];
- for i in [ "debug", "quiet", "verbose", "help" ]:
- if not Cnf.has_key("Jeri::Options::%s" % (i)):
- Cnf["Jeri::Options::%s" % (i)] = "";
-
- arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Jeri::Options")
-
- if Options["Help"]:
- usage(0);
- if not arguments:
- utils.fubar("need at least one package name as an argument.");
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- print "Parsing packages files...",
- parse_packages();
- print "done.";
-
- for file in arguments:
- if file.endswith(".changes"):
- check_changes(file);
- elif file.endswith(".deb"):
- check_deb(file);
- elif file.endswith(".joey"):
- check_joey(file);
- else:
- utils.fubar("Unrecognised file type: '%s'." % (file));
-
-################################################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-#!/usr/bin/env python
-
-# Sync PostgreSQL users with system users
-# Copyright (C) 2001, 2002 James Troup <james@nocrew.org>
-# $Id: julia,v 1.9 2003-01-02 18:12:50 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <aj> ARRRGGGHHH
-# <aj> what's wrong with me!?!?!?
-# <aj> i was just nice to some mormon doorknockers!!!
-# <Omnic> AJ?!?!
-# <aj> i know!!!!!
-# <Omnic> I'm gonna have to kick your ass when you come over
-# <Culus> aj: GET THE HELL OUT OF THE CABAL! :P
-
-################################################################################
-
-import pg, pwd, sys;
-import utils;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: julia [OPTION]...
-Sync PostgreSQL's users with system users.
-
- -h, --help show this help and exit
- -n, --no-action don't do anything
- -q, --quiet be quiet about what is being done
- -v, --verbose explain what is being done"""
- sys.exit(exit_code)
-
-################################################################################
-
-def main ():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
-
- Arguments = [('n', "no-action", "Julia::Options::No-Action"),
- ('q', "quiet", "Julia::Options::Quiet"),
- ('v', "verbose", "Julia::Options::Verbose"),
- ('h', "help", "Julia::Options::Help")];
- for i in [ "no-action", "quiet", "verbose", "help" ]:
- if not Cnf.has_key("Julia::Options::%s" % (i)):
- Cnf["Julia::Options::%s" % (i)] = "";
-
- arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Julia::Options")
-
- if Options["Help"]:
- usage();
- elif arguments:
- utils.warn("julia takes no non-option arguments.");
- usage(1);
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- valid_gid = int(Cnf.get("Julia::ValidGID","0"));
-
- passwd_unames = {};
- for entry in pwd.getpwall():
- uname = entry[0];
- gid = entry[3];
- if valid_gid and gid != valid_gid:
- if Options["Verbose"]:
- print "Skipping %s (GID %s != Valid GID %s)." % (uname, gid, valid_gid);
- continue;
- passwd_unames[uname] = "";
-
- postgres_unames = {};
- q = projectB.query("SELECT usename FROM pg_user");
- ql = q.getresult();
- for i in ql:
- uname = i[0];
- postgres_unames[uname] = "";
-
- known_postgres_unames = {};
- for i in Cnf.get("Julia::KnownPostgres","").split(","):
- uname = i.strip();
- known_postgres_unames[uname] = "";
-
- keys = postgres_unames.keys()
- keys.sort();
- for uname in keys:
- if not passwd_unames.has_key(uname) and not known_postgres_unames.has_key(uname):
- print "W: %s is in Postgres but not the passwd file or list of known Postgres users." % (uname);
-
- keys = passwd_unames.keys()
- keys.sort();
- for uname in keys:
- if not postgres_unames.has_key(uname):
- if not Options["Quiet"]:
- print "Creating %s user in Postgres." % (uname);
- if not Options["No-Action"]:
- q = projectB.query('CREATE USER "%s"' % (uname));
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-Dinstall
-{
- PGPKeyring "/org/keyring.debian.org/keyrings/debian-keyring.pgp";
- GPGKeyring "/org/keyring.debian.org/keyrings/debian-keyring.gpg";
- SigningKeyring "/org/ftp.debian.org/s3kr1t/dot-gnupg/secring.gpg";
- SigningPubKeyring "/org/ftp.debian.org/s3kr1t/dot-gnupg/pubring.gpg";
- SigningKeyIds "4F368D5D";
- SendmailCommand "/usr/sbin/sendmail -odq -oi -t";
- MyEmailAddress "Debian Installer <installer@ftp-master.debian.org>";
- MyAdminAddress "ftpmaster@debian.org";
- MyHost "debian.org"; // used for generating user@my_host addresses in e.g. manual_reject()
- MyDistribution "Debian"; // Used in emails
- BugServer "bugs.debian.org";
- PackagesServer "packages.debian.org";
- TrackingServer "packages.qa.debian.org";
- LockFile "/org/ftp.debian.org/lock/dinstall.lock";
- Bcc "archive@ftp-master.debian.org";
- GroupOverrideFilename "override.group-maint";
- FutureTimeTravelGrace 28800; // 8 hours
- PastCutoffYear "1984";
- SkipTime 300;
- BXANotify "true";
- CloseBugs "true";
- OverrideDisparityCheck "true";
- StableDislocationSupport "false";
- DefaultSuite "unstable";
- QueueBuildSuites
- {
- unstable;
- };
- Reject
- {
- NoSourceOnly "true";
- };
-};
-
-Tiffani
-{
- Options
- {
- TempDir "/org/ftp.debian.org/tiffani";
- MaxDiffs { Default 90; };
- };
-};
-
-Alicia
-{
- MyEmailAddress "Debian FTP Masters <ftpmaster@ftp-master.debian.org>";
-};
-
-Billie
-{
- FTPPath "/org/ftp.debian.org/ftp";
- TreeRootPath "/org/ftp.debian.org/scratch/dsilvers/treeroots";
- TreeDatabasePath "/org/ftp.debian.org/scratch/dsilvers/treedbs";
- BasicTrees { alpha; arm; hppa; hurd-i386; i386; ia64; mips; mipsel; powerpc; s390; sparc; m68k; };
- CombinationTrees
- {
- popular { i386; powerpc; all; source; };
- source { source; };
- everything { source; all; alpha; arm; hppa; hurd-i386; i386; ia64; mips; mipsel; powerpc; s390; sparc; m68k; };
- };
-};
-
-Julia
-{
- ValidGID "800";
- // Comma separated list of users who are in Postgres but not the passwd file
- KnownPostgres "postgres,katie";
-};
-
-Shania
-{
- Options
- {
- Days 14;
- };
- MorgueSubDir "shania";
-};
-
-Natalie
-{
- Options
- {
- Component "main";
- Suite "unstable";
- Type "deb";
- };
-
- ComponentPosition "prefix"; // Whether the component is prepended or appended to the section name
-};
-
-Melanie
-{
- Options
- {
- Suite "unstable";
- };
-
- MyEmailAddress "Debian Archive Maintenance <ftpmaster@ftp-master.debian.org>";
- LogFile "/org/ftp.debian.org/web/removals.txt";
- Bcc "removed-packages@qa.debian.org";
-};
-
-Neve
-{
- ExportDir "/org/ftp.debian.org/katie/neve-files/";
-};
-
-Lauren
-{
- StableRejector "Martin (Joey) Schulze <joey@debian.org>";
- MoreInfoURL "http://people.debian.org/~joey/3.1r2/";
-};
-
-Emilie
-{
- LDAPDn "ou=users,dc=debian,dc=org";
- LDAPServer "db.debian.org";
- ExtraKeyrings
- {
- "/org/keyring.debian.org/keyrings/removed-keys.pgp";
- "/org/keyring.debian.org/keyrings/removed-keys.gpg";
- "/org/keyring.debian.org/keyrings/extra-keys.pgp";
- };
- KeyServer "wwwkeys.eu.pgp.net";
-};
-
-Rhona
-{
- // How long (in seconds) dead packages are left before being killed
- StayOfExecution 129600; // 1.5 days
- QueueBuildStayOfExecution 86400; // 24 hours
- MorgueSubDir "rhona";
-};
-
-Lisa
-{
- AcceptedLockFile "/org/ftp.debian.org/lock/unchecked.lock";
-};
-
-Cindy
-{
- OverrideSuites
- {
- Stable
- {
- Process "0";
- };
-
- Testing
- {
- Process "1";
- OriginSuite "Unstable";
- };
-
- Unstable
- {
- Process "1";
- };
- };
-};
-
-Suite
-{
- Oldstable
- {
- Components
- {
- main;
- contrib;
- non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "debian-changes@lists.debian.org";
- Version "3.0r6";
- Origin "Debian";
- Description "Debian 3.0r6 Released 31 May 2005";
- CodeName "woody";
- OverrideCodeName "woody";
- Priority "1";
- Untouchable "1";
- };
-
- Stable
- {
- Components
- {
- main;
- contrib;
- non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "debian-changes@lists.debian.org";
- Version "3.1r1";
- Origin "Debian";
- Description "Debian 3.1r1 Released 17 December 2005";
- CodeName "sarge";
- OverrideCodeName "sarge";
- Priority "3";
- Untouchable "1";
- ChangeLogBase "dists/stable/";
- UdebComponents
- {
- main;
- };
- };
-
- Proposed-Updates
- {
- Components
- {
- main;
- contrib;
- non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "debian-changes@lists.debian.org";
- CopyChanges "dists/proposed-updates/";
- CopyKatie "/org/ftp.debian.org/queue/proposed-updates/";
- Version "3.1-updates";
- Origin "Debian";
- Description "Debian 3.1 Proposed Updates - Not Released";
- CodeName "proposed-updates";
- OverrideCodeName "sarge";
- OverrideSuite "stable";
- Priority "4";
- VersionChecks
- {
- MustBeNewerThan
- {
- Stable;
- };
- MustBeOlderThan
- {
- Testing;
- Unstable;
- Experimental;
- };
- Enhances
- {
- Stable;
- };
- };
- UdebComponents
- {
- main;
- };
- };
-
- Testing
- {
- Components
- {
- main;
- contrib;
- non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "debian-testing-changes@lists.debian.org";
- Origin "Debian";
- Description "Debian Testing distribution - Not Released";
- CodeName "etch";
- OverrideCodeName "etch";
- Priority "5";
- UdebComponents
- {
- main;
- };
- };
-
- Testing-Proposed-Updates
- {
- Components
- {
- main;
- contrib;
- non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "debian-testing-changes@lists.debian.org";
- Origin "Debian";
- Description "Debian Testing distribution updates - Not Released";
- CodeName "testing-proposed-updates";
- OverrideCodeName "etch";
- OverrideSuite "testing";
- Priority "6";
- VersionChecks
- {
- MustBeNewerThan
- {
- Stable;
- Proposed-Updates;
- Testing;
- };
- MustBeOlderThan
- {
- Unstable;
- Experimental;
- };
- Enhances
- {
- Testing;
- };
- };
- UdebComponents
- {
- main;
- };
- };
-
- Unstable
- {
- Components
- {
- main;
- contrib;
- non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- hurd-i386;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sh;
- sparc;
- };
- Announce "debian-devel-changes@lists.debian.org";
- Origin "Debian";
- Description "Debian Unstable - Not Released";
- CodeName "sid";
- OverrideCodeName "sid";
- Priority "7";
- VersionChecks
- {
- MustBeNewerThan
- {
- Stable;
- Proposed-Updates;
- Testing;
- Testing-Proposed-Updates;
- };
- };
- UdebComponents
- {
- main;
- };
- };
-
- Experimental
- {
- Components
- {
- main;
- contrib;
- non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- hurd-i386;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sh;
- sparc;
- };
- Announce "debian-devel-changes@lists.debian.org";
- Origin "Debian";
- Description "Experimental packages - not released; use at your own risk.";
- CodeName "experimental";
- NotAutomatic "yes";
- OverrideCodeName "sid";
- OverrideSuite "unstable";
- Priority "0";
- Tree "project/experimental";
- VersionChecks
- {
- MustBeNewerThan
- {
- Stable;
- Proposed-Updates;
- Testing;
- Testing-Proposed-Updates;
- Unstable;
- };
- };
-
- };
-
-};
-
-SuiteMappings
-{
- "propup-version stable-security testing testing-proposed-updates unstable";
- "propup-version testing-security unstable";
- "map stable proposed-updates";
- "map stable-security proposed-updates";
- "map-unreleased stable unstable";
- "map-unreleased proposed-updates unstable";
- "map testing testing-proposed-updates";
- "map testing-security testing-proposed-updates";
- "map-unreleased testing unstable";
- "map-unreleased testing-proposed-updates unstable";
-};
-
-Dir
-{
- Root "/org/ftp.debian.org/ftp/";
- Pool "/org/ftp.debian.org/ftp/pool/";
- Templates "/org/ftp.debian.org/katie/templates/";
- PoolRoot "pool/";
- Lists "/org/ftp.debian.org/database/dists/";
- Log "/org/ftp.debian.org/log/";
- Lock "/org/ftp.debian.org/lock";
- Morgue "/org/ftp.debian.org/morgue/";
- MorgueReject "reject";
- Override "/org/ftp.debian.org/scripts/override/";
- QueueBuild "/org/incoming.debian.org/buildd/";
- UrgencyLog "/org/ftp.debian.org/testing/urgencies/";
- Queue
- {
- Accepted "/org/ftp.debian.org/queue/accepted/";
- Byhand "/org/ftp.debian.org/queue/byhand/";
- Done "/org/ftp.debian.org/queue/done/";
- Holding "/org/ftp.debian.org/queue/holding/";
- New "/org/ftp.debian.org/queue/new/";
- Reject "/org/ftp.debian.org/queue/reject/";
- Unchecked "/org/ftp.debian.org/queue/unchecked/";
- BTSVersionTrack "/org/ftp.debian.org/queue/bts_version_track/";
- };
-};
-
-DB
-{
- Name "projectb";
- Host "";
- Port -1;
-
- NonUSName "projectb";
- NonUSHost "non-US.debian.org";
- NonUSPort -1;
- NonUSUser "auric";
- NonUSPassword "moo";
-};
-
-Architectures
-{
- source "Source";
- all "Architecture Independent";
- alpha "DEC Alpha";
- hurd-i386 "Intel ia32 running the HURD";
- hppa "HP PA RISC";
- arm "ARM";
- i386 "Intel ia32";
- ia64 "Intel ia64";
- m68k "Motorola Mc680x0";
- mips "MIPS (Big Endian)";
- mipsel "MIPS (Little Endian)";
- powerpc "PowerPC";
- s390 "IBM S/390";
- sh "Hitachi SuperH";
- sparc "Sun SPARC/UltraSPARC";
-};
-
-Archive
-{
- ftp-master
- {
- OriginServer "ftp-master.debian.org";
- PrimaryMirror "ftp.debian.org";
- Description "Master Archive for the Debian project";
- };
-};
-
-Component
-{
- main
- {
- Description "Main";
- MeetsDFSG "true";
- };
-
- contrib
- {
- Description "Contrib";
- MeetsDFSG "true";
- };
-
- non-free
- {
- Description "Software that fails to meet the DFSG";
- MeetsDFSG "false";
- };
-
- mixed // **NB:** only used for overrides; not yet used in other code
- {
- Description "Legacy Mixed";
- MeetsDFSG "false";
- };
-};
-
-Section
-{
- admin;
- base;
- comm;
- debian-installer;
- devel;
- doc;
- editors;
- embedded;
- electronics;
- games;
- gnome;
- graphics;
- hamradio;
- interpreters;
- kde;
- libdevel;
- libs;
- mail;
- math;
- misc;
- net;
- news;
- oldlibs;
- otherosfs;
- perl;
- python;
- science;
- shells;
- sound;
- tex;
- text;
- utils;
- web;
- x11;
-};
-
-Priority
-{
- required 1;
- important 2;
- standard 3;
- optional 4;
- extra 5;
- source 0; // i.e. unused
-};
-
-OverrideType
-{
- deb;
- udeb;
- dsc;
-};
-
-Location
-{
-
- // Pool locations on ftp-master.debian.org
- /org/ftp.debian.org/ftp/pool/
- {
- Archive "ftp-master";
- Type "pool";
- };
-
-};
-
-Urgency
-{
- Default "low";
- Valid
- {
- low;
- medium;
- high;
- emergency;
- critical;
- };
-};
+++ /dev/null
-Dinstall
-{
- PGPKeyring "/org/keyring.debian.org/keyrings/debian-keyring.pgp";
- GPGKeyring "/org/keyring.debian.org/keyrings/debian-keyring.gpg";
- SigningKeyring "/org/non-us.debian.org/s3kr1t/dot-gnupg/secring.gpg";
- SigningPubKeyring "/org/non-us.debian.org/s3kr1t/dot-gnupg/pubring.gpg";
- SigningKeyIds "1DB114E0";
- SendmailCommand "/usr/sbin/sendmail -odq -oi -t";
- MyEmailAddress "Debian Installer <installer@ftp-master.debian.org>";
- MyAdminAddress "ftpmaster@debian.org";
- MyHost "debian.org"; // used for generating user@my_host addresses in e.g. manual_reject()
- MyDistribution "Debian"; // Used in emails
- BugServer "bugs.debian.org";
- PackagesServer "packages.debian.org";
- TrackingServer "packages.qa.debian.org";
- LockFile "/org/non-us.debian.org/katie/lock";
- Bcc "archive@ftp-master.debian.org";
- GroupOverrideFilename "override.group-maint";
- FutureTimeTravelGrace 28800; // 8 hours
- PastCutoffYear "1984";
- SkipTime 300;
- CloseBugs "true";
- SuiteSuffix "non-US";
- OverrideDisparityCheck "true";
- StableDislocationSupport "false";
- Reject
- {
- NoSourceOnly "true";
- };
-};
-
-Lauren
-{
- StableRejector "Martin (Joey) Schulze <joey@debian.org>";
- MoreInfoURL "http://people.debian.org/~joey/3.0r4/";
-};
-
-Julia
-{
- ValidGID "800";
- // Comma separated list of users who are in Postgres but not the passwd file
- KnownPostgres "udmsearch,postgres,www-data,katie,auric";
-};
-
-Shania
-{
- Options
- {
- Days 14;
- };
- MorgueSubDir "shania";
-};
-
-
-Catherine
-{
- Options
- {
- Limit 10240;
- };
-};
-
-Natalie
-{
- Options
- {
- Component "non-US/main";
- Suite "unstable";
- Type "deb";
- };
- ComponentPosition "suffix"; // Whether the component is prepended or appended to the section name
-};
-
-Melanie
-{
- Options
- {
- Suite "unstable";
- };
- MyEmailAddress "Debian Archive Maintenance <ftpmaster@ftp-master.debian.org>";
- LogFile "/home/troup/public_html/removals.txt";
- Bcc "removed-packages@qa.debian.org";
-};
-
-Neve
-{
- ExportDir "/org/non-us.debian.org/katie/neve-files/";
-};
-
-Rhona
-{
- // How long (in seconds) dead packages are left before being killed
- StayOfExecution 129600; // 1.5 days
- MorgueSubDir "rhona";
-};
-
-Suite
-{
-
- Stable
- {
- Components
- {
- non-US/main;
- non-US/contrib;
- non-US/non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "debian-changes@lists.debian.org";
- Version "3.0r4";
- Origin "Debian";
- Description "Debian 3.0r4 Released 31st December 2004";
- CodeName "woody";
- OverrideCodeName "woody";
- Priority "3";
- Untouchable "1";
- ChangeLogBase "dists/stable/non-US/";
- };
-
- Proposed-Updates
- {
- Components
- {
- non-US/main;
- non-US/contrib;
- non-US/non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "debian-changes@lists.debian.org";
- CopyChanges "dists/proposed-updates/";
- CopyKatie "/org/non-us.debian.org/queue/proposed-updates/";
- Version "3.0-updates";
- Origin "Debian";
- Description "Debian 3.0 Proposed Updates - Not Released";
- CodeName "proposed-updates";
- OverrideCodeName "woody";
- OverrideSuite "stable";
- Priority "4";
- VersionChecks
- {
- MustBeNewerThan
- {
- Stable;
- };
- MustBeOlderThan
- {
- Unstable;
- Experimental;
- };
- };
- };
-
- Testing
- {
- Components
- {
- non-US/main;
- non-US/contrib;
- non-US/non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Origin "Debian";
- Description "Debian Testing distribution - Not Released";
- CodeName "sarge";
- OverrideCodeName "sarge";
- Priority "5";
- };
-
- Testing-Proposed-Updates
- {
- Components
- {
- non-US/main;
- non-US/contrib;
- non-US/non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Origin "Debian";
- Description "Debian Testing distribution updates - Not Released";
- CodeName "testing-proposed-updates";
- OverrideCodeName "sarge";
- OverrideSuite "unstable";
- Priority "6";
- VersionChecks
- {
- MustBeNewerThan
- {
- Stable;
- Proposed-Updates;
- Testing;
- };
- MustBeOlderThan
- {
- Unstable;
- Experimental;
- };
- };
- };
-
- Unstable
- {
- Components
- {
- non-US/main;
- non-US/contrib;
- non-US/non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- hurd-i386;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sh;
- sparc;
- };
- Announce "debian-devel-changes@lists.debian.org";
- Origin "Debian";
- Description "Debian Unstable - Not Released";
- CodeName "sid";
- OverrideCodeName "sid";
- Priority "7";
- VersionChecks
- {
- MustBeNewerThan
- {
- Stable;
- Proposed-Updates;
- Testing;
- Testing-Proposed-Updates;
- };
- };
- };
-
-};
-
-SuiteMappings
-{
- // JT - temp measure
- "map testing-security proposed-updates";
-
- "map stable proposed-updates";
- "map stable-security proposed-updates";
- "map-unreleased stable unstable";
- "map-unreleased proposed-updates unstable";
- "map testing testing-proposed-updates";
- //"map testing-security testing-proposed-updates";
- "map-unreleased testing unstable";
- "map-unreleased testing-proposed-updates unstable";
-};
-
-Dir
-{
- Root "/org/non-us.debian.org/ftp/";
- Pool "/org/non-us.debian.org/ftp/pool/";
- PoolRoot "pool/";
- Templates "/org/non-us.debian.org/katie/templates/";
- Override "/org/non-us.debian.org/scripts/override/";
- Lists "/org/non-us.debian.org/database/dists/";
- Log "/org/non-us.debian.org/log/";
- Morgue "/org/non-us.debian.org/morgue/";
- MorgueReject "reject";
- UrgencyLog "/org/non-us.debian.org/testing/";
- Queue
- {
- Accepted "/org/non-us.debian.org/queue/accepted/";
- Byhand "/org/non-us.debian.org/queue/byhand/";
- Done "/org/non-us.debian.org/queue/done/";
- Holding "/org/non-us.debian.org/queue/holding/";
- New "/org/non-us.debian.org/queue/new/";
- Reject "/org/non-us.debian.org/queue/reject/";
- Unchecked "/org/non-us.debian.org/queue/unchecked/";
- };
-};
-
-DB
-{
- Name "projectb";
- Host "";
- Port -1;
-};
-
-Architectures
-{
-
- source "Source";
- all "Architecture Independent";
- alpha "DEC Alpha";
- hurd-i386 "Intel ia32 running the HURD";
- hppa "HP PA RISC";
- arm "ARM";
- i386 "Intel ia32";
- ia64 "Intel ia64";
- m68k "Motorola Mc680x0";
- mips "MIPS (Big Endian)";
- mipsel "MIPS (Little Endian)";
- powerpc "PowerPC";
- s390 "IBM S/390";
- sh "Hitachi SuperH";
- sparc "Sun SPARC/UltraSPARC";
-
-};
-
-Archive
-{
-
- non-US
- {
- OriginServer "non-us.debian.org";
- PrimaryMirror "non-us.debian.org";
- Description "Non-US Archive for the Debian project";
- };
-
-};
-
-Component
-{
-
- non-US/main
- {
- Description "Main (non-US)";
- MeetsDFSG "true";
- };
-
- non-US/contrib
- {
- Description "Contrib (non-US)";
- MeetsDFSG "true";
- };
-
- non-US/non-free
- {
- Description "Software that fails to meet the DFSG (non-US)";
- MeetsDFSG "false";
- };
-
-};
-
-Section
-{
-
- non-US;
-
-};
-
-Priority
-{
-
- required 1;
- important 2;
- standard 3;
- optional 4;
- extra 5;
- source 0; // i.e. unused
-
-};
-
-OverrideType
-{
-
- deb;
- udeb;
- dsc;
-
-};
-
-Location
-{
- /org/non-us.debian.org/ftp/dists/
- {
- Archive "non-US";
- Type "legacy";
- };
-
- /org/non-us.debian.org/ftp/dists/old-proposed-updates/
- {
- Archive "non-US";
- Type "legacy-mixed";
- };
-
- /org/non-us.debian.org/ftp/pool/
- {
- Archive "non-US";
- Suites
- {
- OldStable;
- Stable;
- Proposed-Updates;
- Testing;
- Testing-Proposed-Updates;
- Unstable;
- };
- Type "pool";
- };
-
-};
-
-Urgency
-{
- Default "low";
- Valid
- {
- low;
- medium;
- high;
- emergency;
- critical;
- };
-};
+++ /dev/null
-Dinstall
-{
- PGPKeyring "/org/keyring.debian.org/keyrings/debian-keyring.pgp";
- GPGKeyring "/org/keyring.debian.org/keyrings/debian-keyring.gpg";
- SigningKeyring "/org/non-us.debian.org/s3kr1t/dot-gnupg/secring.gpg";
- SigningPubKeyring "/org/non-us.debian.org/s3kr1t/dot-gnupg/pubring.gpg";
- SigningKeyIds "4F368D5D";
- SendmailCommand "/usr/sbin/sendmail -odq -oi -t";
- MyEmailAddress "Debian Installer <installer@ftp-master.debian.org>";
- MyAdminAddress "ftpmaster@debian.org";
- MyHost "debian.org"; // used for generating user@my_host addresses in e.g. manual_reject()
- MyDistribution "Debian"; // Used in emails
- BugServer "bugs.debian.org";
- PackagesServer "packages.debian.org";
- LockFile "/org/security.debian.org/katie/lock";
- Bcc "archive@ftp-master.debian.org";
- // GroupOverrideFilename "override.group-maint";
- FutureTimeTravelGrace 28800; // 8 hours
- PastCutoffYear "1984";
- SkipTime 300;
- CloseBugs "false";
- OverrideDisparityCheck "false";
- BXANotify "false";
- QueueBuildSuites
- {
- oldstable;
- stable;
- testing;
- };
- SecurityQueueHandling "true";
- SecurityQueueBuild "true";
- DefaultSuite "Testing";
- SuiteSuffix "updates";
- OverrideMaintainer "katie@security.debian.org";
- StableDislocationSupport "false";
- LegacyStableHasNoSections "false";
-};
-
-Julia
-{
- ValidGID "800";
- // Comma separated list of users who are in Postgres but not the passwd file
- KnownPostgres "postgres,katie,www-data,udmsearch";
-};
-
-Helena
-{
- Directories
- {
- byhand;
- new;
- accepted;
- };
-};
-
-Shania
-{
- Options
- {
- Days 14;
- };
- MorgueSubDir "shania";
-};
-
-Melanie
-{
- Options
- {
- Suite "unstable";
- };
-
- MyEmailAddress "Debian Archive Maintenance <ftpmaster@ftp-master.debian.org>";
- LogFile "/org/security.debian.org/katie-log/removals.txt";
-};
-
-Neve
-{
- ExportDir "/org/security.debian.org/katie/neve-files/";
-};
-
-Rhona
-{
- // How long (in seconds) dead packages are left before being killed
- StayOfExecution 129600; // 1.5 days
- QueueBuildStayOfExecution 86400; // 24 hours
- MorgueSubDir "rhona";
- OverrideFilename "override.source-only";
-};
-
-Amber
-{
- ComponentMappings
- {
- main "ftp-master.debian.org:/pub/UploadQueue";
- contrib "ftp-master.debian.org:/pub/UploadQueue";
- non-free "ftp-master.debian.org:/pub/UploadQueue";
- non-US/main "non-us.debian.org:/pub/UploadQueue";
- non-US/contrib "non-us.debian.org:/pub/UploadQueue";
- non-US/non-free "non-us.debian.org:/pub/UploadQueue";
- };
-};
-
-Suite
-{
- // Priority determines which suite is used for the Maintainers file
- // as generated by charisma (highest wins).
-
- Oldstable
- {
- Components
- {
- updates/main;
- updates/contrib;
- updates/non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "katie@security.debian.org";
- Version "3.0";
- Origin "Debian";
- Label "Debian-Security";
- Description "Debian 3.0 Security Updates";
- CodeName "woody";
- OverrideCodeName "woody";
- CopyKatie "/org/security.debian.org/queue/done/";
- };
-
- Stable
- {
- Components
- {
- updates/main;
- updates/contrib;
- updates/non-free;
- };
- Architectures
- {
- source;
- all;
- alpha;
- amd64;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "katie@security.debian.org";
- Version "3.1";
- Origin "Debian";
- Label "Debian-Security";
- Description "Debian 3.1 Security Updates";
- CodeName "sarge";
- OverrideCodeName "sarge";
- CopyKatie "/org/security.debian.org/queue/done/";
- };
-
- Testing
- {
- Components
- {
- updates/main;
- updates/contrib;
- updates/non-free;
- };
- Architectures
- {
- source;
- all;
- amd64;
- alpha;
- arm;
- hppa;
- i386;
- ia64;
- m68k;
- mips;
- mipsel;
- powerpc;
- s390;
- sparc;
- };
- Announce "katie@security.debian.org";
- Version "x.y";
- Origin "Debian";
- Label "Debian-Security";
- Description "Debian x.y Security Updates";
- CodeName "etch";
- OverrideCodeName "etch";
- CopyKatie "/org/security.debian.org/queue/done/";
- };
-
-};
-
-SuiteMappings
-{
- "silent-map oldstable-security oldstable";
- "silent-map stable-security stable";
- // JT - FIXME, hackorama
- // "silent-map testing-security stable";
- "silent-map testing-security testing";
-};
-
-Dir
-{
- Root "/org/security.debian.org/ftp/";
- Pool "/org/security.debian.org/ftp/pool/";
- Katie "/org/security.debian.org/katie/";
- Templates "/org/security.debian.org/katie/templates/";
- PoolRoot "pool/";
- Override "/org/security.debian.org/override/";
- Lock "/org/security.debian.org/lock/";
- Lists "/org/security.debian.org/katie-database/dists/";
- Log "/org/security.debian.org/katie-log/";
- Morgue "/org/security.debian.org/morgue/";
- MorgueReject "reject";
- Override "/org/security.debian.org/scripts/override/";
- QueueBuild "/org/security.debian.org/buildd/";
- Queue
- {
- Accepted "/org/security.debian.org/queue/accepted/";
- Byhand "/org/security.debian.org/queue/byhand/";
- Done "/org/security.debian.org/queue/done/";
- Holding "/org/security.debian.org/queue/holding/";
- New "/org/security.debian.org/queue/new/";
- Reject "/org/security.debian.org/queue/reject/";
- Unchecked "/org/security.debian.org/queue/unchecked/";
-
- Embargoed "/org/security.debian.org/queue/embargoed/";
- Unembargoed "/org/security.debian.org/queue/unembargoed/";
- Disembargo "/org/security.debian.org/queue/unchecked-disembargo/";
- };
-};
-
-DB
-{
- Name "obscurity";
- Host "";
- Port -1;
-
-};
-
-Architectures
-{
-
- source "Source";
- all "Architecture Independent";
- alpha "DEC Alpha";
- hppa "HP PA RISC";
- arm "ARM";
- i386 "Intel ia32";
- ia64 "Intel ia64";
- m68k "Motorola Mc680x0";
- mips "MIPS (Big Endian)";
- mipsel "MIPS (Little Endian)";
- powerpc "PowerPC";
- s390 "IBM S/390";
- sparc "Sun SPARC/UltraSPARC";
- amd64 "AMD x86_64 (AMD64)";
-
-};
-
-Archive
-{
-
- security
- {
- OriginServer "security.debian.org";
- PrimaryMirror "security.debian.org";
- Description "Security Updates for the Debian project";
- };
-
-};
-
-Component
-{
-
- updates/main
- {
- Description "Main (updates)";
- MeetsDFSG "true";
- };
-
- updates/contrib
- {
- Description "Contrib (updates)";
- MeetsDFSG "true";
- };
-
- updates/non-free
- {
- Description "Software that fails to meet the DFSG";
- MeetsDFSG "false";
- };
-
-};
-
-ComponentMappings
-{
- "main updates/main";
- "contrib updates/contrib";
- "non-free updates/non-free";
- "non-US/main updates/main";
- "non-US/contrib updates/contrib";
- "non-US/non-free updates/non-free";
-};
-
-Section
-{
- admin;
- base;
- comm;
- debian-installer;
- devel;
- doc;
- editors;
- electronics;
- embedded;
- games;
- gnome;
- graphics;
- hamradio;
- interpreters;
- kde;
- libdevel;
- libs;
- mail;
- math;
- misc;
- net;
- news;
- oldlibs;
- otherosfs;
- perl;
- python;
- science;
- shells;
- sound;
- tex;
- text;
- utils;
- web;
- x11;
- non-US;
-};
-
-Priority
-{
- required 1;
- important 2;
- standard 3;
- optional 4;
- extra 5;
- source 0; // i.e. unused
-};
-
-OverrideType
-{
- deb;
- udeb;
- dsc;
-};
-
-Location
-{
- /org/security.debian.org/ftp/dists/
- {
- Archive "security";
- Type "legacy";
- };
-
- /org/security.debian.org/ftp/pool/
- {
- Archive "security";
- Suites
- {
- Oldstable;
- Stable;
- Testing;
- };
- Type "pool";
- };
-};
-
-Urgency
-{
- Default "low";
- Valid
- {
- low;
- medium;
- high;
- emergency;
- critical;
- };
-};
+++ /dev/null
-#!/usr/bin/env python
-
-# Utility functions for katie
-# Copyright (C) 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
-# $Id: katie.py,v 1.59 2005-12-17 10:57:03 rmurray Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-###############################################################################
-
-import cPickle, errno, os, pg, re, stat, string, sys, time;
-import utils, db_access;
-import apt_inst, apt_pkg;
-
-from types import *;
-
-###############################################################################
-
-re_isanum = re.compile (r"^\d+$");
-re_default_answer = re.compile(r"\[(.*)\]");
-re_fdnic = re.compile(r"\n\n");
-re_bin_only_nmu = re.compile(r"\+b\d+$");
-###############################################################################
-
-# Convenience wrapper to carry around all the package information in one place
-
-class Pkg:
- def __init__(self, **kwds):
- self.__dict__.update(kwds);
-
- def update(self, **kwds):
- self.__dict__.update(kwds);
-
-###############################################################################
-
-class nmu_p:
- # Read in the group maintainer override file
- def __init__ (self, Cnf):
- self.group_maint = {};
- self.Cnf = Cnf;
- if Cnf.get("Dinstall::GroupOverrideFilename"):
- filename = Cnf["Dir::Override"] + Cnf["Dinstall::GroupOverrideFilename"];
- file = utils.open_file(filename);
- for line in file.readlines():
- line = utils.re_comments.sub('', line).lower().strip();
- if line != "":
- self.group_maint[line] = 1;
- file.close();
-
- def is_an_nmu (self, pkg):
- Cnf = self.Cnf;
- changes = pkg.changes;
- dsc = pkg.dsc;
-
- i = utils.fix_maintainer (dsc.get("maintainer",
- Cnf["Dinstall::MyEmailAddress"]).lower());
- (dsc_rfc822, dsc_rfc2047, dsc_name, dsc_email) = i;
- # changes["changedbyname"] == dsc_name is probably never true, but better safe than sorry
- if dsc_name == changes["maintainername"].lower() and \
- (changes["changedby822"] == "" or changes["changedbyname"].lower() == dsc_name):
- return 0;
-
- if dsc.has_key("uploaders"):
- uploaders = dsc["uploaders"].lower().split(",");
- uploadernames = {};
- for i in uploaders:
- (rfc822, rfc2047, name, email) = utils.fix_maintainer (i.strip());
- uploadernames[name] = "";
- if uploadernames.has_key(changes["changedbyname"].lower()):
- return 0;
-
- # Some group-maintained packages (e.g. Debian QA) are never NMUs
- if self.group_maint.has_key(changes["maintaineremail"].lower()):
- return 0;
-
- return 1;
-
-###############################################################################
-
-class Katie:
-
- def __init__(self, Cnf):
- self.Cnf = Cnf;
- # Read in the group-maint override file
- self.nmu = nmu_p(Cnf);
- self.accept_count = 0;
- self.accept_bytes = 0L;
- self.pkg = Pkg(changes = {}, dsc = {}, dsc_files = {}, files = {},
- legacy_source_untouchable = {});
-
- # Initialize the substitution template mapping global
- Subst = self.Subst = {};
- Subst["__ADMIN_ADDRESS__"] = Cnf["Dinstall::MyAdminAddress"];
- Subst["__BUG_SERVER__"] = Cnf["Dinstall::BugServer"];
- Subst["__DISTRO__"] = Cnf["Dinstall::MyDistribution"];
- Subst["__KATIE_ADDRESS__"] = Cnf["Dinstall::MyEmailAddress"];
-
- self.projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, self.projectB);
-
- ###########################################################################
-
- def init_vars (self):
- for i in [ "changes", "dsc", "files", "dsc_files", "legacy_source_untouchable" ]:
- exec "self.pkg.%s.clear();" % (i);
- self.pkg.orig_tar_id = None;
- self.pkg.orig_tar_location = "";
- self.pkg.orig_tar_gz = None;
-
- ###########################################################################
-
- def update_vars (self):
- dump_filename = self.pkg.changes_file[:-8]+".katie";
- dump_file = utils.open_file(dump_filename);
- p = cPickle.Unpickler(dump_file);
- for i in [ "changes", "dsc", "files", "dsc_files", "legacy_source_untouchable" ]:
- exec "self.pkg.%s.update(p.load());" % (i);
- for i in [ "orig_tar_id", "orig_tar_location" ]:
- exec "self.pkg.%s = p.load();" % (i);
- dump_file.close();
-
- ###########################################################################
-
- # This could just dump the dictionaries as is, but I'd like to avoid
- # this so there's some idea of what katie & lisa use from jennifer
-
- def dump_vars(self, dest_dir):
- for i in [ "changes", "dsc", "files", "dsc_files",
- "legacy_source_untouchable", "orig_tar_id", "orig_tar_location" ]:
- exec "%s = self.pkg.%s;" % (i,i);
- dump_filename = os.path.join(dest_dir,self.pkg.changes_file[:-8] + ".katie");
- dump_file = utils.open_file(dump_filename, 'w');
- try:
- os.chmod(dump_filename, 0660);
- except OSError, e:
- if errno.errorcode[e.errno] == 'EPERM':
- perms = stat.S_IMODE(os.stat(dump_filename)[stat.ST_MODE]);
- if perms & stat.S_IROTH:
- utils.fubar("%s is world readable and chmod failed." % (dump_filename));
- else:
- raise;
-
- p = cPickle.Pickler(dump_file, 1);
- for i in [ "d_changes", "d_dsc", "d_files", "d_dsc_files" ]:
- exec "%s = {}" % i;
- ## files
- for file in files.keys():
- d_files[file] = {};
- for i in [ "package", "version", "architecture", "type", "size",
- "md5sum", "component", "location id", "source package",
- "source version", "maintainer", "dbtype", "files id",
- "new", "section", "priority", "othercomponents",
- "pool name", "original component" ]:
- if files[file].has_key(i):
- d_files[file][i] = files[file][i];
- ## changes
- # Mandatory changes fields
- for i in [ "distribution", "source", "architecture", "version",
- "maintainer", "urgency", "fingerprint", "changedby822",
- "changedby2047", "changedbyname", "maintainer822",
- "maintainer2047", "maintainername", "maintaineremail",
- "closes", "changes" ]:
- d_changes[i] = changes[i];
- # Optional changes fields
- for i in [ "changed-by", "filecontents", "format", "lisa note", "distribution-version" ]:
- if changes.has_key(i):
- d_changes[i] = changes[i];
- ## dsc
- for i in [ "source", "version", "maintainer", "fingerprint",
- "uploaders", "bts changelog" ]:
- if dsc.has_key(i):
- d_dsc[i] = dsc[i];
- ## dsc_files
- for file in dsc_files.keys():
- d_dsc_files[file] = {};
- # Mandatory dsc_files fields
- for i in [ "size", "md5sum" ]:
- d_dsc_files[file][i] = dsc_files[file][i];
- # Optional dsc_files fields
- for i in [ "files id" ]:
- if dsc_files[file].has_key(i):
- d_dsc_files[file][i] = dsc_files[file][i];
-
- for i in [ d_changes, d_dsc, d_files, d_dsc_files,
- legacy_source_untouchable, orig_tar_id, orig_tar_location ]:
- p.dump(i);
- dump_file.close();
-
- ###########################################################################
-
- # Set up the per-package template substitution mappings
-
- def update_subst (self, reject_message = ""):
- Subst = self.Subst;
- changes = self.pkg.changes;
- # If jennifer crashed out in the right place, architecture may still be a string.
- if not changes.has_key("architecture") or not isinstance(changes["architecture"], DictType):
- changes["architecture"] = { "Unknown" : "" };
- # and maintainer2047 may not exist.
- if not changes.has_key("maintainer2047"):
- changes["maintainer2047"] = self.Cnf["Dinstall::MyEmailAddress"];
-
- Subst["__ARCHITECTURE__"] = " ".join(changes["architecture"].keys());
- Subst["__CHANGES_FILENAME__"] = os.path.basename(self.pkg.changes_file);
- Subst["__FILE_CONTENTS__"] = changes.get("filecontents", "");
-
- # For source uploads the Changed-By field wins; otherwise Maintainer wins.
- if changes["architecture"].has_key("source") and changes["changedby822"] != "" and (changes["changedby822"] != changes["maintainer822"]):
- Subst["__MAINTAINER_FROM__"] = changes["changedby2047"];
- Subst["__MAINTAINER_TO__"] = "%s, %s" % (changes["changedby2047"],
- changes["maintainer2047"]);
- Subst["__MAINTAINER__"] = changes.get("changed-by", "Unknown");
- else:
- Subst["__MAINTAINER_FROM__"] = changes["maintainer2047"];
- Subst["__MAINTAINER_TO__"] = changes["maintainer2047"];
- Subst["__MAINTAINER__"] = changes.get("maintainer", "Unknown");
- if self.Cnf.has_key("Dinstall::TrackingServer") and changes.has_key("source"):
- Subst["__MAINTAINER_TO__"] += "\nBcc: %s@%s" % (changes["source"], self.Cnf["Dinstall::TrackingServer"])
-
- # Apply any global override of the Maintainer field
- if self.Cnf.get("Dinstall::OverrideMaintainer"):
- Subst["__MAINTAINER_TO__"] = self.Cnf["Dinstall::OverrideMaintainer"];
- Subst["__MAINTAINER_FROM__"] = self.Cnf["Dinstall::OverrideMaintainer"];
-
- Subst["__REJECT_MESSAGE__"] = reject_message;
- Subst["__SOURCE__"] = changes.get("source", "Unknown");
- Subst["__VERSION__"] = changes.get("version", "Unknown");
-
- ###########################################################################
-
- def build_summaries(self):
- changes = self.pkg.changes;
- files = self.pkg.files;
-
- byhand = summary = new = "";
-
- # changes["distribution"] may not exist in corner cases
- # (e.g. unreadable changes files)
- if not changes.has_key("distribution") or not isinstance(changes["distribution"], DictType):
- changes["distribution"] = {};
-
- file_keys = files.keys();
- file_keys.sort();
- for file in file_keys:
- if files[file].has_key("byhand"):
- byhand = 1
- summary += file + " byhand\n"
- elif files[file].has_key("new"):
- new = 1
- summary += "(new) %s %s %s\n" % (file, files[file]["priority"], files[file]["section"])
- if files[file].has_key("othercomponents"):
- summary += "WARNING: Already present in %s distribution.\n" % (files[file]["othercomponents"])
- if files[file]["type"] == "deb":
- deb_fh = utils.open_file(file)
- summary += apt_pkg.ParseSection(apt_inst.debExtractControl(deb_fh))["Description"] + '\n';
- deb_fh.close()
- else:
- files[file]["pool name"] = utils.poolify (changes.get("source",""), files[file]["component"])
- destination = self.Cnf["Dir::PoolRoot"] + files[file]["pool name"] + file
- summary += file + "\n to " + destination + "\n"
-
- short_summary = summary;
-
- # This is for direport's benefit...
- f = re_fdnic.sub("\n .\n", changes.get("changes",""));
-
- if byhand or new:
- summary += "Changes: " + f;
-
- summary += self.announce(short_summary, 0)
-
- return (summary, short_summary);
-
- ###########################################################################
-
- def close_bugs (self, summary, action):
- changes = self.pkg.changes;
- Subst = self.Subst;
- Cnf = self.Cnf;
-
- bugs = changes["closes"].keys();
-
- if not bugs:
- return summary;
-
- bugs.sort();
- if not self.nmu.is_an_nmu(self.pkg):
- if changes["distribution"].has_key("experimental"):
- # tag bugs as fixed-in-experimental for uploads to experimental
- summary += "Setting bugs to severity fixed: ";
- control_message = "";
- for bug in bugs:
- summary += "%s " % (bug);
- control_message += "tag %s + fixed-in-experimental\n" % (bug);
- if action and control_message != "":
- Subst["__CONTROL_MESSAGE__"] = control_message;
- mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.bug-experimental-fixed");
- utils.send_mail (mail_message);
- if action:
- self.Logger.log(["setting bugs to fixed"]+bugs);
-
-
- else:
- summary += "Closing bugs: ";
- for bug in bugs:
- summary += "%s " % (bug);
- if action:
- Subst["__BUG_NUMBER__"] = bug;
- if changes["distribution"].has_key("stable"):
- Subst["__STABLE_WARNING__"] = """
-Note that this package is not part of the released stable Debian
-distribution. It may have dependencies on other unreleased software,
-or other instabilities. Please take care if you wish to install it.
-The update will eventually make its way into the next released Debian
-distribution.""";
- else:
- Subst["__STABLE_WARNING__"] = "";
- mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.bug-close");
- utils.send_mail (mail_message);
- if action:
- self.Logger.log(["closing bugs"]+bugs);
-
- else: # NMU
- summary += "Setting bugs to severity fixed: ";
- control_message = "";
- for bug in bugs:
- summary += "%s " % (bug);
- control_message += "tag %s + fixed\n" % (bug);
- if action and control_message != "":
- Subst["__CONTROL_MESSAGE__"] = control_message;
- mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.bug-nmu-fixed");
- utils.send_mail (mail_message);
- if action:
- self.Logger.log(["setting bugs to fixed"]+bugs);
- summary += "\n";
- return summary;
-
- ###########################################################################
-
- def announce (self, short_summary, action):
- Subst = self.Subst;
- Cnf = self.Cnf;
- changes = self.pkg.changes;
-
- # Only do announcements for source uploads with a recent dpkg-dev installed
- if float(changes.get("format", 0)) < 1.6 or not changes["architecture"].has_key("source"):
- return "";
-
- lists_done = {};
- summary = "";
- Subst["__SHORT_SUMMARY__"] = short_summary;
-
- for dist in changes["distribution"].keys():
- list = Cnf.Find("Suite::%s::Announce" % (dist));
- if list == "" or lists_done.has_key(list):
- continue;
- lists_done[list] = 1;
- summary += "Announcing to %s\n" % (list);
-
- if action:
- Subst["__ANNOUNCE_LIST_ADDRESS__"] = list;
- if Cnf.get("Dinstall::TrackingServer") and changes["architecture"].has_key("source"):
- Subst["__ANNOUNCE_LIST_ADDRESS__"] = Subst["__ANNOUNCE_LIST_ADDRESS__"] + "\nBcc: %s@%s" % (changes["source"], Cnf["Dinstall::TrackingServer"]);
- mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.announce");
- utils.send_mail (mail_message);
-
- if Cnf.FindB("Dinstall::CloseBugs"):
- summary = self.close_bugs(summary, action);
-
- return summary;
-
- ###########################################################################
-
- def accept (self, summary, short_summary):
- Cnf = self.Cnf;
- Subst = self.Subst;
- files = self.pkg.files;
- changes = self.pkg.changes;
- changes_file = self.pkg.changes_file;
- dsc = self.pkg.dsc;
-
- print "Accepting."
- self.Logger.log(["Accepting changes",changes_file]);
-
- self.dump_vars(Cnf["Dir::Queue::Accepted"]);
-
- # Move all the files into the accepted directory
- utils.move(changes_file, Cnf["Dir::Queue::Accepted"]);
- file_keys = files.keys();
- for file in file_keys:
- utils.move(file, Cnf["Dir::Queue::Accepted"]);
- self.accept_bytes += float(files[file]["size"])
- self.accept_count += 1;
-
- # Send accept mail, announce to lists, close bugs and check for
- # override disparities
- if not Cnf["Dinstall::Options::No-Mail"]:
- Subst["__SUITE__"] = "";
- Subst["__SUMMARY__"] = summary;
- mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/jennifer.accepted");
- utils.send_mail(mail_message)
- self.announce(short_summary, 1)
-
-
- ## Helper stuff for DebBugs Version Tracking
- if Cnf.Find("Dir::Queue::BTSVersionTrack"):
- # ??? once queue/* is cleared on *.d.o and/or reprocessed
- # the conditionalization on dsc["bts changelog"] should be
- # dropped.
-
- # Write out the version history from the changelog
- if changes["architecture"].has_key("source") and \
- dsc.has_key("bts changelog"):
-
- temp_filename = utils.temp_filename(Cnf["Dir::Queue::BTSVersionTrack"],
- dotprefix=1, perms=0644);
- version_history = utils.open_file(temp_filename, 'w');
- version_history.write(dsc["bts changelog"]);
- version_history.close();
- filename = "%s/%s" % (Cnf["Dir::Queue::BTSVersionTrack"],
- changes_file[:-8]+".versions");
- os.rename(temp_filename, filename);
-
- # Write out the binary -> source mapping.
- temp_filename = utils.temp_filename(Cnf["Dir::Queue::BTSVersionTrack"],
- dotprefix=1, perms=0644);
- debinfo = utils.open_file(temp_filename, 'w');
- for file in file_keys:
- f = files[file];
- if f["type"] == "deb":
- line = " ".join([f["package"], f["version"],
- f["architecture"], f["source package"],
- f["source version"]]);
- debinfo.write(line+"\n");
- debinfo.close();
- filename = "%s/%s" % (Cnf["Dir::Queue::BTSVersionTrack"],
- changes_file[:-8]+".debinfo");
- os.rename(temp_filename, filename);
-
- self.queue_build("accepted", Cnf["Dir::Queue::Accepted"])
-
- ###########################################################################
-
- def queue_build (self, queue, path):
- Cnf = self.Cnf
- Subst = self.Subst
- files = self.pkg.files
- changes = self.pkg.changes
- changes_file = self.pkg.changes_file
- dsc = self.pkg.dsc
- file_keys = files.keys()
-
- ## Special support to enable clean auto-building of queued packages
- queue_id = db_access.get_or_set_queue_id(queue)
-
- self.projectB.query("BEGIN WORK");
- for suite in changes["distribution"].keys():
- if suite not in Cnf.ValueList("Dinstall::QueueBuildSuites"):
- continue;
- suite_id = db_access.get_suite_id(suite);
- dest_dir = Cnf["Dir::QueueBuild"];
- if Cnf.FindB("Dinstall::SecurityQueueBuild"):
- dest_dir = os.path.join(dest_dir, suite);
- for file in file_keys:
- src = os.path.join(path, file);
- dest = os.path.join(dest_dir, file);
- if Cnf.FindB("Dinstall::SecurityQueueBuild"):
- # Copy it since the original won't be readable by www-data
- utils.copy(src, dest);
- else:
- # Create a symlink to it
- os.symlink(src, dest);
- # Add it to the list of packages for later processing by apt-ftparchive
- self.projectB.query("INSERT INTO queue_build (suite, queue, filename, in_queue) VALUES (%s, %s, '%s', 't')" % (suite_id, queue_id, dest));
- # If the .orig.tar.gz is in the pool, create a symlink to
- # it (if one doesn't already exist)
- if self.pkg.orig_tar_id:
- # Determine the .orig.tar.gz file name
- for dsc_file in self.pkg.dsc_files.keys():
- if dsc_file.endswith(".orig.tar.gz"):
- filename = dsc_file;
- dest = os.path.join(dest_dir, filename);
- # If it doesn't exist, create a symlink
- if not os.path.exists(dest):
- # Find the .orig.tar.gz in the pool
- q = self.projectB.query("SELECT l.path, f.filename from location l, files f WHERE f.id = %s and f.location = l.id" % (self.pkg.orig_tar_id));
- ql = q.getresult();
- if not ql:
- utils.fubar("[INTERNAL ERROR] Couldn't find id %s in files table." % (self.pkg.orig_tar_id));
- src = os.path.join(ql[0][0], ql[0][1]);
- os.symlink(src, dest);
- # Add it to the list of packages for later processing by apt-ftparchive
- self.projectB.query("INSERT INTO queue_build (suite, queue, filename, in_queue) VALUES (%s, %s, '%s', 't')" % (suite_id, queue_id, dest));
- # if it does, update things to ensure it's not removed prematurely
- else:
- self.projectB.query("UPDATE queue_build SET in_queue = 't', last_used = NULL WHERE filename = '%s' AND suite = %s" % (dest, suite_id));
-
- self.projectB.query("COMMIT WORK");
-
- ###########################################################################
-
- def check_override (self):
- Subst = self.Subst;
- changes = self.pkg.changes;
- files = self.pkg.files;
- Cnf = self.Cnf;
-
- # Abandon the check if:
- # a) it's a non-sourceful upload
- # b) override disparity checks have been disabled
- # c) we're not sending mail
- if not changes["architecture"].has_key("source") or \
- not Cnf.FindB("Dinstall::OverrideDisparityCheck") or \
- Cnf["Dinstall::Options::No-Mail"]:
- return;
-
- summary = "";
- file_keys = files.keys();
- file_keys.sort();
- for file in file_keys:
- if not files[file].has_key("new") and files[file]["type"] == "deb":
- section = files[file]["section"];
- override_section = files[file]["override section"];
- if section.lower() != override_section.lower() and section != "-":
- # Ignore this; it's a common mistake and not worth whining about
- if section.lower() == "non-us/main" and override_section.lower() == "non-us":
- continue;
- summary += "%s: package says section is %s, override says %s.\n" % (file, section, override_section);
- priority = files[file]["priority"];
- override_priority = files[file]["override priority"];
- if priority != override_priority and priority != "-":
- summary += "%s: package says priority is %s, override says %s.\n" % (file, priority, override_priority);
-
- if summary == "":
- return;
-
- Subst["__SUMMARY__"] = summary;
- mail_message = utils.TemplateSubst(Subst,self.Cnf["Dir::Templates"]+"/jennifer.override-disparity");
- utils.send_mail(mail_message);
-
- ###########################################################################
-
- def force_reject (self, files):
- """Forcefully move files from the current directory to the
- reject directory. If any file already exists in the reject
- directory it will be moved to the morgue to make way for
- the new file."""
-
- Cnf = self.Cnf
-
- for file in files:
- # Skip any files which don't exist or which we don't have permission to copy.
- if os.access(file,os.R_OK) == 0:
- continue;
- dest_file = os.path.join(Cnf["Dir::Queue::Reject"], file);
- try:
- dest_fd = os.open(dest_file, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
- except OSError, e:
- # File exists? Let's try and move it to the morgue
- if errno.errorcode[e.errno] == 'EEXIST':
- morgue_file = os.path.join(Cnf["Dir::Morgue"],Cnf["Dir::MorgueReject"],file);
- try:
- morgue_file = utils.find_next_free(morgue_file);
- except utils.tried_too_hard_exc:
- # Something's either gone badly Pete Tong, or
- # someone is trying to exploit us.
- utils.warn("**WARNING** failed to move %s from the reject directory to the morgue." % (file));
- return;
- utils.move(dest_file, morgue_file, perms=0660);
- try:
- dest_fd = os.open(dest_file, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
- except OSError, e:
- # Likewise
- utils.warn("**WARNING** failed to claim %s in the reject directory." % (file));
- return;
- else:
- raise;
- # If we got here, we own the destination file, so we can
- # safely overwrite it.
- utils.move(file, dest_file, 1, perms=0660);
- os.close(dest_fd)
-
- ###########################################################################
-
- def do_reject (self, manual = 0, reject_message = ""):
- # If we weren't given a manual rejection message, spawn an
- # editor so the user can add one in...
- if manual and not reject_message:
- temp_filename = utils.temp_filename();
- editor = os.environ.get("EDITOR","vi")
- answer = 'E';
- while answer == 'E':
- os.system("%s %s" % (editor, temp_filename))
- temp_fh = utils.open_file(temp_filename);
- reject_message = "".join(temp_fh.readlines());
- temp_fh.close();
- print "Reject message:";
- print utils.prefix_multi_line_string(reject_message," ",include_blank_lines=1);
- prompt = "[R]eject, Edit, Abandon, Quit ?"
- answer = "XXX";
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = re_default_answer.search(prompt);
- if answer == "":
- answer = m.group(1);
- answer = answer[:1].upper();
- os.unlink(temp_filename);
- if answer == 'A':
- return 1;
- elif answer == 'Q':
- sys.exit(0);
-
- print "Rejecting.\n"
-
- Cnf = self.Cnf;
- Subst = self.Subst;
- pkg = self.pkg;
-
- reason_filename = pkg.changes_file[:-8] + ".reason";
- reason_filename = Cnf["Dir::Queue::Reject"] + '/' + reason_filename;
-
- # Move all the files into the reject directory
- reject_files = pkg.files.keys() + [pkg.changes_file];
- self.force_reject(reject_files);
-
- # If we fail here, someone is probably trying to exploit the race
- # so let's just raise an exception ...
- if os.path.exists(reason_filename):
- os.unlink(reason_filename);
- reason_fd = os.open(reason_filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
-
- if not manual:
- Subst["__REJECTOR_ADDRESS__"] = Cnf["Dinstall::MyEmailAddress"];
- Subst["__MANUAL_REJECT_MESSAGE__"] = "";
- Subst["__CC__"] = "X-Katie-Rejection: automatic (moo)";
- os.write(reason_fd, reject_message);
- reject_mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/katie.rejected");
- else:
- # Build up the rejection email
- user_email_address = utils.whoami() + " <%s>" % (Cnf["Dinstall::MyAdminAddress"]);
-
- Subst["__REJECTOR_ADDRESS__"] = user_email_address;
- Subst["__MANUAL_REJECT_MESSAGE__"] = reject_message;
- Subst["__CC__"] = "Cc: " + Cnf["Dinstall::MyEmailAddress"];
- reject_mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/katie.rejected");
- # Write the rejection email out as the <foo>.reason file
- os.write(reason_fd, reject_mail_message);
-
- os.close(reason_fd)
-
- # Send the rejection mail if appropriate
- if not Cnf["Dinstall::Options::No-Mail"]:
- utils.send_mail(reject_mail_message);
-
- self.Logger.log(["rejected", pkg.changes_file]);
- return 0;
-
- ################################################################################
-
- # Ensure that source exists somewhere in the archive for the binary
- # upload being processed.
- #
- # (1) exact match => 1.0-3
- # (2) Bin-only NMU => 1.0-3+b1 , 1.0-3.1+b1
-
- def source_exists (self, package, source_version, suites = ["any"]):
- okay = 1
- for suite in suites:
- if suite == "any":
- que = "SELECT s.version FROM source s WHERE s.source = '%s'" % \
- (package)
- else:
- # source must exist in suite X, or in some other suite that's
- # mapped to X, recursively... silent-maps are counted too,
- # unreleased-maps aren't.
- maps = self.Cnf.ValueList("SuiteMappings")[:]
- maps.reverse()
- maps = [ m.split() for m in maps ]
- maps = [ (x[1], x[2]) for x in maps
- if x[0] == "map" or x[0] == "silent-map" ]
- s = [suite]
- for x in maps:
- if x[1] in s and x[0] not in s:
- s.append(x[0])
-
- que = "SELECT s.version FROM source s JOIN src_associations sa ON (s.id = sa.source) JOIN suite su ON (sa.suite = su.id) WHERE s.source = '%s' AND (%s)" % (package, string.join(["su.suite_name = '%s'" % a for a in s], " OR "));
- q = self.projectB.query(que)
-
- # Reduce the query results to a list of version numbers
- ql = map(lambda x: x[0], q.getresult());
-
- # Try (1)
- if source_version in ql:
- continue
-
- # Try (2)
- orig_source_version = re_bin_only_nmu.sub('', source_version)
- if orig_source_version in ql:
- continue
-
- # No source found...
- okay = 0
- break
- return okay
-
- ################################################################################
-
- def in_override_p (self, package, component, suite, binary_type, file):
- files = self.pkg.files;
-
- if binary_type == "": # must be source
- type = "dsc";
- else:
- type = binary_type;
-
- # Override suite name; used for example with proposed-updates
- if self.Cnf.Find("Suite::%s::OverrideSuite" % (suite)) != "":
- suite = self.Cnf["Suite::%s::OverrideSuite" % (suite)];
-
- # Avoid <undef> on unknown distributions
- suite_id = db_access.get_suite_id(suite);
- if suite_id == -1:
- return None;
- component_id = db_access.get_component_id(component);
- type_id = db_access.get_override_type_id(type);
-
- # FIXME: nasty non-US specific hack
- if component.lower().startswith("non-us/"):
- component = component[7:];
-
- q = self.projectB.query("SELECT s.section, p.priority FROM override o, section s, priority p WHERE package = '%s' AND suite = %s AND component = %s AND type = %s AND o.section = s.id AND o.priority = p.id"
- % (package, suite_id, component_id, type_id));
- result = q.getresult();
- # If checking for a source package, fall back on the binary override type
- if type == "dsc" and not result:
- deb_type_id = db_access.get_override_type_id("deb");
- udeb_type_id = db_access.get_override_type_id("udeb");
- q = self.projectB.query("SELECT s.section, p.priority FROM override o, section s, priority p WHERE package = '%s' AND suite = %s AND component = %s AND (type = %s OR type = %s) AND o.section = s.id AND o.priority = p.id"
- % (package, suite_id, component_id, deb_type_id, udeb_type_id));
- result = q.getresult();
-
- # Remember the section and priority so we can check them later if appropriate
- if result:
- files[file]["override section"] = result[0][0];
- files[file]["override priority"] = result[0][1];
-
- return result;
-
- ################################################################################
-
- def reject (self, str, prefix="Rejected: "):
- if str:
- # Unlike other rejects we add new lines first to avoid trailing
- # new lines when this message is passed back up to a caller.
- if self.reject_message:
- self.reject_message += "\n";
- self.reject_message += prefix + str;
-
- ################################################################################
-
- def get_anyversion(self, query_result, suite):
- anyversion=None
- anysuite = [suite] + self.Cnf.ValueList("Suite::%s::VersionChecks::Enhances" % (suite))
- for (v, s) in query_result:
- if s in [ string.lower(x) for x in anysuite ]:
- if not anyversion or apt_pkg.VersionCompare(anyversion, v) <= 0:
- anyversion=v
- return anyversion
-
- ################################################################################
-
- def cross_suite_version_check(self, query_result, file, new_version):
- """Ensure versions are newer than existing packages in target
- suites and that cross-suite version checking rules as
- set out in the conf file are satisfied."""
-
- # Check versions for each target suite
- for target_suite in self.pkg.changes["distribution"].keys():
- must_be_newer_than = map(string.lower, self.Cnf.ValueList("Suite::%s::VersionChecks::MustBeNewerThan" % (target_suite)));
- must_be_older_than = map(string.lower, self.Cnf.ValueList("Suite::%s::VersionChecks::MustBeOlderThan" % (target_suite)));
- # Enforce "must be newer than target suite" even if conffile omits it
- if target_suite not in must_be_newer_than:
- must_be_newer_than.append(target_suite);
- for entry in query_result:
- existent_version = entry[0];
- suite = entry[1];
- if suite in must_be_newer_than and \
- apt_pkg.VersionCompare(new_version, existent_version) < 1:
- self.reject("%s: old version (%s) in %s >= new version (%s) targeted at %s." % (file, existent_version, suite, new_version, target_suite));
- if suite in must_be_older_than and \
- apt_pkg.VersionCompare(new_version, existent_version) > -1:
- ch = self.pkg.changes
- cansave = 0
- if ch.get('distribution-version', {}).has_key(suite):
- # we really use the other suite, ignoring the conflicting one ...
- addsuite = ch["distribution-version"][suite]
-
- add_version = self.get_anyversion(query_result, addsuite)
- target_version = self.get_anyversion(query_result, target_suite)
-
- if not add_version:
- # not add_version can only happen if we map to a suite
- # that doesn't enhance the suite we're propup'ing from.
- # so "propup-ver x a b c; map a d" is a problem only if
- # d doesn't enhance a.
- #
- # I think we could always propagate in this case, rather
- # than complaining. Either way, this isn't a REJECT issue
- #
- # And - we really should complain to the dorks who configured dak
- self.reject("%s is mapped to, but not enhanced by %s - adding anyway" % (suite, addsuite), "Warning: ")
- self.pkg.changes.setdefault("propdistribution", {})
- self.pkg.changes["propdistribution"][addsuite] = 1
- cansave = 1
- elif not target_version:
- # not target_version is true when the package is NEW
- # we could just stick with the "...old version..." REJECT
- # for this, I think.
- self.reject("Won't propagate NEW packages.")
- elif apt_pkg.VersionCompare(new_version, add_version) < 0:
- # propagation would be redundant. no need to reject though.
- self.reject("ignoring versionconflict: %s: old version (%s) in %s <= new version (%s) targeted at %s." % (file, existent_version, suite, new_version, target_suite), "Warning: ")
- cansave = 1
- elif apt_pkg.VersionCompare(new_version, add_version) > 0 and \
- apt_pkg.VersionCompare(add_version, target_version) >= 0:
- # propagate!!
- self.reject("Propagating upload to %s" % (addsuite), "Warning: ")
- self.pkg.changes.setdefault("propdistribution", {})
- self.pkg.changes["propdistribution"][addsuite] = 1
- cansave = 1
-
- if not cansave:
- self.reject("%s: old version (%s) in %s <= new version (%s) targeted at %s." % (file, existent_version, suite, new_version, target_suite))
-
- ################################################################################
-
- def check_binary_against_db(self, file):
- self.reject_message = "";
- files = self.pkg.files;
-
- # Ensure version is sane
- q = self.projectB.query("""
-SELECT b.version, su.suite_name FROM binaries b, bin_associations ba, suite su,
- architecture a
- WHERE b.package = '%s' AND (a.arch_string = '%s' OR a.arch_string = 'all')
- AND ba.bin = b.id AND ba.suite = su.id AND b.architecture = a.id"""
- % (files[file]["package"],
- files[file]["architecture"]));
- self.cross_suite_version_check(q.getresult(), file, files[file]["version"]);
-
- # Check for any existing copies of the file
- q = self.projectB.query("""
-SELECT b.id FROM binaries b, architecture a
- WHERE b.package = '%s' AND b.version = '%s' AND a.arch_string = '%s'
- AND a.id = b.architecture"""
- % (files[file]["package"],
- files[file]["version"],
- files[file]["architecture"]))
- if q.getresult():
- self.reject("%s: can not overwrite existing copy already in the archive." % (file));
-
- return self.reject_message;
-
- ################################################################################
-
- def check_source_against_db(self, file):
- self.reject_message = "";
- dsc = self.pkg.dsc;
-
- # Ensure version is sane
- q = self.projectB.query("""
-SELECT s.version, su.suite_name FROM source s, src_associations sa, suite su
- WHERE s.source = '%s' AND sa.source = s.id AND sa.suite = su.id""" % (dsc.get("source")));
- self.cross_suite_version_check(q.getresult(), file, dsc.get("version"));
-
- return self.reject_message;
-
- ################################################################################
-
- # **WARNING**
- # NB: this function can remove entries from the 'files' index [if
- # the .orig.tar.gz is a duplicate of the one in the archive]; if
- # you're iterating over 'files' and call this function as part of
- # the loop, be sure to add a check to the top of the loop to
- # ensure you haven't just tried to dereference the deleted entry.
- # **WARNING**
-
- def check_dsc_against_db(self, file):
- self.reject_message = "";
- files = self.pkg.files;
- dsc_files = self.pkg.dsc_files;
- legacy_source_untouchable = self.pkg.legacy_source_untouchable;
- self.pkg.orig_tar_gz = None;
-
- # Try and find all files mentioned in the .dsc. This has
- # to work harder to cope with the multiple possible
- # locations of an .orig.tar.gz.
- for dsc_file in dsc_files.keys():
- found = None;
- if files.has_key(dsc_file):
- actual_md5 = files[dsc_file]["md5sum"];
- actual_size = int(files[dsc_file]["size"]);
- found = "%s in incoming" % (dsc_file)
- # Check the file does not already exist in the archive
- q = self.projectB.query("SELECT f.size, f.md5sum, l.path, f.filename FROM files f, location l WHERE f.filename LIKE '%%%s%%' AND l.id = f.location" % (dsc_file));
- ql = q.getresult();
- # Strip out anything that isn't '%s' or '/%s$'
- for i in ql:
- if i[3] != dsc_file and i[3][-(len(dsc_file)+1):] != '/'+dsc_file:
- ql.remove(i);
-
- # "[katie] has not broken them. [katie] has fixed a
- # brokenness. Your crappy hack exploited a bug in
- # the old dinstall."
- #
- # "(Come on! I thought it was always obvious that
- # one just doesn't release different files with
- # the same name and version.)"
- # -- ajk@ on d-devel@l.d.o
-
- if ql:
- # Ignore exact matches for .orig.tar.gz
- match = 0;
- if dsc_file.endswith(".orig.tar.gz"):
- for i in ql:
- if files.has_key(dsc_file) and \
- int(files[dsc_file]["size"]) == int(i[0]) and \
- files[dsc_file]["md5sum"] == i[1]:
- self.reject("ignoring %s, since it's already in the archive." % (dsc_file), "Warning: ");
- del files[dsc_file];
- self.pkg.orig_tar_gz = i[2] + i[3];
- match = 1;
-
- if not match:
- self.reject("can not overwrite existing copy of '%s' already in the archive." % (dsc_file));
- elif dsc_file.endswith(".orig.tar.gz"):
- # Check in the pool
- q = self.projectB.query("SELECT l.path, f.filename, l.type, f.id, l.id FROM files f, location l WHERE f.filename LIKE '%%%s%%' AND l.id = f.location" % (dsc_file));
- ql = q.getresult();
- # Strip out anything that isn't '%s' or '/%s$'
- for i in ql:
- if i[1] != dsc_file and i[1][-(len(dsc_file)+1):] != '/'+dsc_file:
- ql.remove(i);
-
- if ql:
- # Unfortunately, we may get more than one match here if,
- # for example, the package was in potato but had an -sa
- # upload in woody. So we need to choose the right one.
-
- x = ql[0]; # default to something sane in case we don't match any or have only one
-
- if len(ql) > 1:
- for i in ql:
- old_file = i[0] + i[1];
- old_file_fh = utils.open_file(old_file)
- actual_md5 = apt_pkg.md5sum(old_file_fh);
- old_file_fh.close()
- actual_size = os.stat(old_file)[stat.ST_SIZE];
- if actual_md5 == dsc_files[dsc_file]["md5sum"] and actual_size == int(dsc_files[dsc_file]["size"]):
- x = i;
- else:
- legacy_source_untouchable[i[3]] = "";
-
- old_file = x[0] + x[1];
- old_file_fh = utils.open_file(old_file)
- actual_md5 = apt_pkg.md5sum(old_file_fh);
- old_file_fh.close()
- actual_size = os.stat(old_file)[stat.ST_SIZE];
- found = old_file;
- suite_type = x[2];
- dsc_files[dsc_file]["files id"] = x[3]; # need this for updating dsc_files in install()
- # See install() in katie...
- self.pkg.orig_tar_id = x[3];
- self.pkg.orig_tar_gz = old_file;
- if suite_type == "legacy" or suite_type == "legacy-mixed":
- self.pkg.orig_tar_location = "legacy";
- else:
- self.pkg.orig_tar_location = x[4];
- else:
- # Not there? Check the queue directories...
-
- in_unchecked = os.path.join(self.Cnf["Dir::Queue::Unchecked"],dsc_file);
- # See process_it() in jennifer for explanation of this
- if os.path.exists(in_unchecked):
- return (self.reject_message, in_unchecked);
- else:
- for dir in [ "Accepted", "New", "Byhand" ]:
- in_otherdir = os.path.join(self.Cnf["Dir::Queue::%s" % (dir)],dsc_file);
- if os.path.exists(in_otherdir):
- in_otherdir_fh = utils.open_file(in_otherdir)
- actual_md5 = apt_pkg.md5sum(in_otherdir_fh);
- in_otherdir_fh.close()
- actual_size = os.stat(in_otherdir)[stat.ST_SIZE];
- found = in_otherdir;
- self.pkg.orig_tar_gz = in_otherdir;
-
- if not found:
- self.reject("%s refers to %s, but I can't find it in the queue or in the pool." % (file, dsc_file));
- self.pkg.orig_tar_gz = -1;
- continue;
- else:
- self.reject("%s refers to %s, but I can't find it in the queue." % (file, dsc_file));
- continue;
- if actual_md5 != dsc_files[dsc_file]["md5sum"]:
- self.reject("md5sum for %s doesn't match %s." % (found, file));
- if actual_size != int(dsc_files[dsc_file]["size"]):
- self.reject("size for %s doesn't match %s." % (found, file));
-
- return (self.reject_message, None);
-
- def do_query(self, q):
- sys.stderr.write("query: \"%s\" ... " % (q));
- before = time.time();
- r = self.projectB.query(q);
- time_diff = time.time()-before;
- sys.stderr.write("took %.3f seconds.\n" % (time_diff));
- return r;
+++ /dev/null
-#!/usr/bin/env python
-
-# Installs Debian packages from queue/accepted into the pool
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: kelly,v 1.18 2005-12-17 10:57:03 rmurray Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-###############################################################################
-
-# Cartman: "I'm trying to make the best of a bad situation, I don't
-# need to hear crap from a bunch of hippy freaks living in
-# denial. Screw you guys, I'm going home."
-#
-# Kyle: "But Cartman, we're trying to..."
-#
-# Cartman: "uhh.. screw you guys... home."
-
-###############################################################################
-
-import errno, fcntl, os, sys, time, re;
-import apt_pkg;
-import db_access, katie, logging, utils;
-
-###############################################################################
-
-# Globals
-kelly_version = "$Revision: 1.18 $";
-
-Cnf = None;
-Options = None;
-Logger = None;
-Urgency_Logger = None;
-projectB = None;
-Katie = None;
-pkg = None;
-
-reject_message = "";
-changes = None;
-dsc = None;
-dsc_files = None;
-files = None;
-Subst = None;
-
-install_count = 0;
-install_bytes = 0.0;
-
-installing_to_stable = 0;
-
-###############################################################################
-
-# FIXME: this should go away to some Debian-specific file
-# FIXME: should die if file already exists
-
-class Urgency_Log:
- "Urgency Logger object"
- def __init__ (self, Cnf):
- "Initialize a new Urgency Logger object"
- self.Cnf = Cnf;
- self.timestamp = time.strftime("%Y%m%d%H%M%S");
- # Create the log directory if it doesn't exist
- self.log_dir = Cnf["Dir::UrgencyLog"];
- if not os.path.exists(self.log_dir):
- umask = os.umask(00000);
- os.makedirs(self.log_dir, 02775);
- # Open the logfile
- self.log_filename = "%s/.install-urgencies-%s.new" % (self.log_dir, self.timestamp);
- self.log_file = utils.open_file(self.log_filename, 'w');
- self.writes = 0;
-
- def log (self, source, version, urgency):
- "Log an event"
- self.log_file.write(" ".join([source, version, urgency])+'\n');
- self.log_file.flush();
- self.writes += 1;
-
- def close (self):
- "Close a Logger object"
- self.log_file.flush();
- self.log_file.close();
- if self.writes:
- new_filename = "%s/install-urgencies-%s" % (self.log_dir, self.timestamp);
- utils.move(self.log_filename, new_filename);
- else:
- os.unlink(self.log_filename);
-
-###############################################################################
-
-def reject (str, prefix="Rejected: "):
- global reject_message;
- if str:
- reject_message += prefix + str + "\n";
-
-# Recheck anything that relies on the database, since that's not
-# frozen between accept and our run time.
-
-def check():
- propogate={}
- nopropogate={}
- for file in files.keys():
- # The .orig.tar.gz can disappear out from under us if it's a
- # duplicate of one in the archive.
- if not files.has_key(file):
- continue;
- # Check that the source still exists
- if files[file]["type"] == "deb":
- source_version = files[file]["source version"];
- source_package = files[file]["source package"];
- if not changes["architecture"].has_key("source") \
- and not Katie.source_exists(source_package, source_version, changes["distribution"].keys()):
- reject("no source found for %s %s (%s)." % (source_package, source_version, file));
-
- # Version and file overwrite checks
- if not installing_to_stable:
- if files[file]["type"] == "deb":
- reject(Katie.check_binary_against_db(file), "");
- elif files[file]["type"] == "dsc":
- reject(Katie.check_source_against_db(file), "");
- (reject_msg, is_in_incoming) = Katie.check_dsc_against_db(file);
- reject(reject_msg, "");
-
- # propagate in the case it is in the override tables:
- if changes.has_key("propdistribution"):
- for suite in changes["propdistribution"].keys():
- if Katie.in_override_p(files[file]["package"], files[file]["component"], suite, files[file].get("dbtype",""), file):
- propogate[suite] = 1
- else:
- nopropogate[suite] = 1
-
- for suite in propogate.keys():
- if suite in nopropogate:
- continue
- changes["distribution"][suite] = 1
-
- for file in files.keys():
- # Check the package is still in the override tables
- for suite in changes["distribution"].keys():
- if not Katie.in_override_p(files[file]["package"], files[file]["component"], suite, files[file].get("dbtype",""), file):
- reject("%s is NEW for %s." % (file, suite));
-
-###############################################################################
-
-def init():
- global Cnf, Options, Katie, projectB, changes, dsc, dsc_files, files, pkg, Subst;
-
- Cnf = utils.get_conf()
-
- Arguments = [('a',"automatic","Dinstall::Options::Automatic"),
- ('h',"help","Dinstall::Options::Help"),
- ('n',"no-action","Dinstall::Options::No-Action"),
- ('p',"no-lock", "Dinstall::Options::No-Lock"),
- ('s',"no-mail", "Dinstall::Options::No-Mail"),
- ('V',"version","Dinstall::Options::Version")];
-
- for i in ["automatic", "help", "no-action", "no-lock", "no-mail", "version"]:
- if not Cnf.has_key("Dinstall::Options::%s" % (i)):
- Cnf["Dinstall::Options::%s" % (i)] = "";
-
- changes_files = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Dinstall::Options")
-
- if Options["Help"]:
- usage();
-
- if Options["Version"]:
- print "kelly %s" % (kelly_version);
- sys.exit(0);
-
- Katie = katie.Katie(Cnf);
- projectB = Katie.projectB;
-
- changes = Katie.pkg.changes;
- dsc = Katie.pkg.dsc;
- dsc_files = Katie.pkg.dsc_files;
- files = Katie.pkg.files;
- pkg = Katie.pkg;
- Subst = Katie.Subst;
-
- return changes_files;
-
-###############################################################################
-
-def usage (exit_code=0):
- print """Usage: kelly [OPTION]... [CHANGES]...
- -a, --automatic automatic run
- -h, --help show this help and exit.
- -n, --no-action don't do anything
- -p, --no-lock don't check lockfile !! for cron.daily only !!
- -s, --no-mail don't send any mail
- -V, --version display the version number and exit"""
- sys.exit(exit_code)
-
-###############################################################################
-
-def action ():
- (summary, short_summary) = Katie.build_summaries();
-
- (prompt, answer) = ("", "XXX")
- if Options["No-Action"] or Options["Automatic"]:
- answer = 'S'
-
- if reject_message.find("Rejected") != -1:
- print "REJECT\n" + reject_message,;
- prompt = "[R]eject, Skip, Quit ?";
- if Options["Automatic"]:
- answer = 'R';
- else:
- print "INSTALL to " + ", ".join(changes["distribution"].keys())
- print reject_message + summary,;
- prompt = "[I]nstall, Skip, Quit ?";
- if Options["Automatic"]:
- answer = 'I';
-
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.match(prompt);
- if answer == "":
- answer = m.group(1);
- answer = answer[:1].upper();
-
- if answer == 'R':
- do_reject ();
- elif answer == 'I':
- if not installing_to_stable:
- install();
- else:
- stable_install(summary, short_summary);
- elif answer == 'Q':
- sys.exit(0)
-
-###############################################################################
-
-# Our reject is not really a reject, but an unaccept; however, since
-# a) the code for that is non-trivial (reopen bugs, unannounce etc.)
-# and b) this should be extremely rare, for now we'll go with whining
-# at our admin folks...
-
-def do_reject ():
- Subst["__REJECTOR_ADDRESS__"] = Cnf["Dinstall::MyEmailAddress"];
- Subst["__REJECT_MESSAGE__"] = reject_message;
- Subst["__CC__"] = "Cc: " + Cnf["Dinstall::MyEmailAddress"];
- reject_mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/kelly.unaccept");
-
- # Write the rejection email out as the <foo>.reason file
- reason_filename = os.path.basename(pkg.changes_file[:-8]) + ".reason";
- reject_filename = Cnf["Dir::Queue::Reject"] + '/' + reason_filename;
- # If we fail here someone is probably trying to exploit the race
- # so let's just raise an exception ...
- if os.path.exists(reject_filename):
- os.unlink(reject_filename);
- fd = os.open(reject_filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
- os.write(fd, reject_mail_message);
- os.close(fd);
-
- utils.send_mail(reject_mail_message);
- Logger.log(["unaccepted", pkg.changes_file]);
-
-###############################################################################
-
-def install ():
- global install_count, install_bytes;
-
- print "Installing."
-
- Logger.log(["installing changes",pkg.changes_file]);
-
- # Begin a transaction; if we bomb out anywhere between here and the COMMIT WORK below, the DB will not be changed.
- projectB.query("BEGIN WORK");
-
- # Add the .dsc file to the DB
- for file in files.keys():
- if files[file]["type"] == "dsc":
- package = dsc["source"]
- version = dsc["version"] # NB: not files[file]["version"], that has no epoch
- maintainer = dsc["maintainer"]
- maintainer = maintainer.replace("'", "\\'")
- maintainer_id = db_access.get_or_set_maintainer_id(maintainer);
- fingerprint_id = db_access.get_or_set_fingerprint_id(dsc["fingerprint"]);
- install_date = time.strftime("%Y-%m-%d");
- filename = files[file]["pool name"] + file;
- dsc_component = files[file]["component"];
- dsc_location_id = files[file]["location id"];
- if not files[file].has_key("files id") or not files[file]["files id"]:
- files[file]["files id"] = db_access.set_files_id (filename, files[file]["size"], files[file]["md5sum"], dsc_location_id)
- projectB.query("INSERT INTO source (source, version, maintainer, file, install_date, sig_fpr) VALUES ('%s', '%s', %d, %d, '%s', %s)"
- % (package, version, maintainer_id, files[file]["files id"], install_date, fingerprint_id));
-
- for suite in changes["distribution"].keys():
- suite_id = db_access.get_suite_id(suite);
- projectB.query("INSERT INTO src_associations (suite, source) VALUES (%d, currval('source_id_seq'))" % (suite_id))
-
- # Add the source files to the DB (files and dsc_files)
- projectB.query("INSERT INTO dsc_files (source, file) VALUES (currval('source_id_seq'), %d)" % (files[file]["files id"]));
- for dsc_file in dsc_files.keys():
- filename = files[file]["pool name"] + dsc_file;
- # If the .orig.tar.gz is already in the pool, its
- # files id is stored in dsc_files by check_dsc().
- files_id = dsc_files[dsc_file].get("files id", None);
- if files_id == None:
- files_id = db_access.get_files_id(filename, dsc_files[dsc_file]["size"], dsc_files[dsc_file]["md5sum"], dsc_location_id);
- # FIXME: needs to check for -1/-2 and/or handle exceptions
- if files_id == None:
- files_id = db_access.set_files_id (filename, dsc_files[dsc_file]["size"], dsc_files[dsc_file]["md5sum"], dsc_location_id);
- projectB.query("INSERT INTO dsc_files (source, file) VALUES (currval('source_id_seq'), %d)" % (files_id));
-
- # Add the .deb files to the DB
- for file in files.keys():
- if files[file]["type"] == "deb":
- package = files[file]["package"]
- version = files[file]["version"]
- maintainer = files[file]["maintainer"]
- maintainer = maintainer.replace("'", "\\'")
- maintainer_id = db_access.get_or_set_maintainer_id(maintainer);
- fingerprint_id = db_access.get_or_set_fingerprint_id(changes["fingerprint"]);
- architecture = files[file]["architecture"]
- architecture_id = db_access.get_architecture_id (architecture);
- type = files[file]["dbtype"];
- source = files[file]["source package"]
- source_version = files[file]["source version"];
- filename = files[file]["pool name"] + file;
- if not files[file].has_key("location id") or not files[file]["location id"]:
- files[file]["location id"] = db_access.get_location_id(Cnf["Dir::Pool"],files[file]["component"],utils.where_am_i());
- if not files[file].has_key("files id") or not files[file]["files id"]:
- files[file]["files id"] = db_access.set_files_id (filename, files[file]["size"], files[file]["md5sum"], files[file]["location id"])
- source_id = db_access.get_source_id (source, source_version);
- if source_id:
- projectB.query("INSERT INTO binaries (package, version, maintainer, source, architecture, file, type, sig_fpr) VALUES ('%s', '%s', %d, %d, %d, %d, '%s', %d)"
- % (package, version, maintainer_id, source_id, architecture_id, files[file]["files id"], type, fingerprint_id));
- else:
- projectB.query("INSERT INTO binaries (package, version, maintainer, architecture, file, type, sig_fpr) VALUES ('%s', '%s', %d, %d, %d, '%s', %d)"
- % (package, version, maintainer_id, architecture_id, files[file]["files id"], type, fingerprint_id));
- for suite in changes["distribution"].keys():
- suite_id = db_access.get_suite_id(suite);
- projectB.query("INSERT INTO bin_associations (suite, bin) VALUES (%d, currval('binaries_id_seq'))" % (suite_id));
-
- # If the .orig.tar.gz is in a legacy directory we need to poolify
- # it, so that apt-get source (and anything else that goes by the
- # "Directory:" field in the Sources.gz file) works.
- orig_tar_id = Katie.pkg.orig_tar_id;
- orig_tar_location = Katie.pkg.orig_tar_location;
- legacy_source_untouchable = Katie.pkg.legacy_source_untouchable;
- if orig_tar_id and orig_tar_location == "legacy":
- q = projectB.query("SELECT DISTINCT ON (f.id) l.path, f.filename, f.id as files_id, df.source, df.id as dsc_files_id, f.size, f.md5sum FROM files f, dsc_files df, location l WHERE df.source IN (SELECT source FROM dsc_files WHERE file = %s) AND f.id = df.file AND l.id = f.location AND (l.type = 'legacy' OR l.type = 'legacy-mixed')" % (orig_tar_id));
- qd = q.dictresult();
- for qid in qd:
- # Is this an old upload superseded by a newer -sa upload? (See check_dsc() for details)
- if legacy_source_untouchable.has_key(qid["files_id"]):
- continue;
- # First move the files to the new location
- legacy_filename = qid["path"] + qid["filename"];
- pool_location = utils.poolify (changes["source"], files[file]["component"]);
- pool_filename = pool_location + os.path.basename(qid["filename"]);
- destination = Cnf["Dir::Pool"] + pool_location
- utils.move(legacy_filename, destination);
- # Then update the DB's files table
- q = projectB.query("UPDATE files SET filename = '%s', location = '%s' WHERE id = '%s'" % (pool_filename, dsc_location_id, qid["files_id"]));
-
- # If this is a sourceful, diff-only upload that is moving
- # cross-component (non-legacy), we need to copy the .orig.tar.gz
- # into the new component too, for the same reasons as above.
- #
- if changes["architecture"].has_key("source") and orig_tar_id and \
- orig_tar_location != "legacy" and orig_tar_location != dsc_location_id:
- q = projectB.query("SELECT l.path, f.filename, f.size, f.md5sum FROM files f, location l WHERE f.id = %s AND f.location = l.id" % (orig_tar_id));
- ql = q.getresult()[0];
- old_filename = ql[0] + ql[1];
- file_size = ql[2];
- file_md5sum = ql[3];
- new_filename = utils.poolify(changes["source"], dsc_component) + os.path.basename(old_filename);
- new_files_id = db_access.get_files_id(new_filename, file_size, file_md5sum, dsc_location_id);
- if new_files_id == None:
- utils.copy(old_filename, Cnf["Dir::Pool"] + new_filename);
- new_files_id = db_access.set_files_id(new_filename, file_size, file_md5sum, dsc_location_id);
- projectB.query("UPDATE dsc_files SET file = %s WHERE source = %s AND file = %s" % (new_files_id, source_id, orig_tar_id));
-
- # Install the files into the pool
- for file in files.keys():
- destination = Cnf["Dir::Pool"] + files[file]["pool name"] + file;
- utils.move(file, destination);
- Logger.log(["installed", file, files[file]["type"], files[file]["size"], files[file]["architecture"]]);
- install_bytes += float(files[file]["size"]);
-
- # Copy the .changes file across for suites which need it.
- copy_changes = {};
- copy_katie = {};
- for suite in changes["distribution"].keys():
- if Cnf.has_key("Suite::%s::CopyChanges" % (suite)):
- copy_changes[Cnf["Suite::%s::CopyChanges" % (suite)]] = "";
- # and the .katie file...
- if Cnf.has_key("Suite::%s::CopyKatie" % (suite)):
- copy_katie[Cnf["Suite::%s::CopyKatie" % (suite)]] = "";
- for dest in copy_changes.keys():
- utils.copy(pkg.changes_file, Cnf["Dir::Root"] + dest);
- for dest in copy_katie.keys():
- utils.copy(Katie.pkg.changes_file[:-8]+".katie", dest);
-
- projectB.query("COMMIT WORK");
-
- # Move the .changes into the 'done' directory
- utils.move (pkg.changes_file,
- os.path.join(Cnf["Dir::Queue::Done"], os.path.basename(pkg.changes_file)));
-
- # Remove the .katie file
- os.unlink(Katie.pkg.changes_file[:-8]+".katie");
-
- if changes["architecture"].has_key("source") and Urgency_Logger:
- Urgency_Logger.log(dsc["source"], dsc["version"], changes["urgency"]);
-
- # Undo the work done in katie.py(accept) to help auto-building
- # from accepted.
- projectB.query("BEGIN WORK");
- for suite in changes["distribution"].keys():
- if suite not in Cnf.ValueList("Dinstall::QueueBuildSuites"):
- continue;
- now_date = time.strftime("%Y-%m-%d %H:%M");
- suite_id = db_access.get_suite_id(suite);
- dest_dir = Cnf["Dir::QueueBuild"];
- if Cnf.FindB("Dinstall::SecurityQueueBuild"):
- dest_dir = os.path.join(dest_dir, suite);
- for file in files.keys():
- dest = os.path.join(dest_dir, file);
- # Remove it from the list of packages for later processing by apt-ftparchive
- projectB.query("UPDATE queue_build SET in_queue = 'f', last_used = '%s' WHERE filename = '%s' AND suite = %s" % (now_date, dest, suite_id));
- if not Cnf.FindB("Dinstall::SecurityQueueBuild"):
- # Update the symlink to point to the new location in the pool
- pool_location = utils.poolify (changes["source"], files[file]["component"]);
- src = os.path.join(Cnf["Dir::Pool"], pool_location, os.path.basename(file));
- if os.path.islink(dest):
- os.unlink(dest);
- os.symlink(src, dest);
- # Update last_used on any non-upload .orig.tar.gz symlink
- if orig_tar_id:
- # Determine the .orig.tar.gz file name
- for dsc_file in dsc_files.keys():
- if dsc_file.endswith(".orig.tar.gz"):
- orig_tar_gz = os.path.join(dest_dir, dsc_file);
- # Remove it from the list of packages for later processing by apt-ftparchive
- projectB.query("UPDATE queue_build SET in_queue = 'f', last_used = '%s' WHERE filename = '%s' AND suite = %s" % (now_date, orig_tar_gz, suite_id));
- projectB.query("COMMIT WORK");
-
- # Finally...
- install_count += 1;
-
-################################################################################
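The symlink bookkeeping in install() above (re-pointing queue-build entries from the accepted copy to the new pool location) can be sketched standalone; the directory layout below is illustrative, not dak's real Dir::Pool / Dir::QueueBuild configuration:

```python
import os
import tempfile

# Hypothetical layout standing in for Dir::Pool and Dir::QueueBuild.
root = tempfile.mkdtemp()
pool = os.path.join(root, "pool", "main", "h", "hello")
queue = os.path.join(root, "queue-build")
os.makedirs(pool)
os.makedirs(queue)

# The package now lives in the pool; the queue-build entry still points
# at the (about to vanish) accepted copy.
src = os.path.join(pool, "hello_1.0_i386.deb")
open(src, "w").close()
dest = os.path.join(queue, "hello_1.0_i386.deb")
os.symlink(os.path.join(root, "accepted", "hello_1.0_i386.deb"), dest)

# Re-point the symlink at the pool location, as install() does.
if os.path.islink(dest):
    os.unlink(dest)
os.symlink(src, dest)
assert os.path.realpath(dest) == os.path.realpath(src)
```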
-
-def stable_install (summary, short_summary):
- global install_count;
-
- print "Installing to stable.";
-
- # Begin a transaction; if we bomb out anywhere between here and
- # the COMMIT WORK below, the DB won't be changed.
- projectB.query("BEGIN WORK");
-
- # Add the source to stable (and remove it from proposed-updates)
- for file in files.keys():
- if files[file]["type"] == "dsc":
- package = dsc["source"];
- version = dsc["version"]; # NB: not files[file]["version"], that has no epoch
- q = projectB.query("SELECT id FROM source WHERE source = '%s' AND version = '%s'" % (package, version))
- ql = q.getresult();
- if not ql:
- utils.fubar("[INTERNAL ERROR] couldn't find '%s' (%s) in source table." % (package, version));
- source_id = ql[0][0];
- suite_id = db_access.get_suite_id('proposed-updates');
- projectB.query("DELETE FROM src_associations WHERE suite = '%s' AND source = '%s'" % (suite_id, source_id));
- suite_id = db_access.get_suite_id('stable');
- projectB.query("INSERT INTO src_associations (suite, source) VALUES ('%s', '%s')" % (suite_id, source_id));
-
- # Add the binaries to stable (and remove it/them from proposed-updates)
- for file in files.keys():
- if files[file]["type"] == "deb":
- binNMU = 0
- package = files[file]["package"];
- version = files[file]["version"];
- architecture = files[file]["architecture"];
- q = projectB.query("SELECT b.id FROM binaries b, architecture a WHERE b.package = '%s' AND b.version = '%s' AND (a.arch_string = '%s' OR a.arch_string = 'all') AND b.architecture = a.id" % (package, version, architecture));
- ql = q.getresult();
- if not ql:
- suite_id = db_access.get_suite_id('proposed-updates');
- que = "SELECT b.version FROM binaries b JOIN bin_associations ba ON (b.id = ba.bin) JOIN suite su ON (ba.suite = su.id) WHERE b.package = '%s' AND (ba.suite = '%s')" % (package, suite_id);
- q = projectB.query(que)
-
- # Reduce the query results to a list of version numbers
- ql = map(lambda x: x[0], q.getresult());
- if not ql:
- utils.fubar("[INTERNAL ERROR] couldn't find '%s' (%s for %s architecture) in binaries table." % (package, version, architecture));
- else:
- for x in ql:
- if re.match(re.compile(r"%s((\.0)?\.|\+b)\d+$" % re.escape(version)),x):
- binNMU = 1
- break
- if not binNMU:
- binary_id = ql[0][0];
- suite_id = db_access.get_suite_id('proposed-updates');
- projectB.query("DELETE FROM bin_associations WHERE suite = '%s' AND bin = '%s'" % (suite_id, binary_id));
- suite_id = db_access.get_suite_id('stable');
- projectB.query("INSERT INTO bin_associations (suite, bin) VALUES ('%s', '%s')" % (suite_id, binary_id));
- else:
- del files[file]
-
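The binNMU handling in stable_install() hinges on a version-suffix regex; a standalone sketch of its intent, with the alternation explicitly grouped against the base version, is:

```python
import re

def is_binnmu(base_version, candidate):
    # A binNMU rebuild carries the base version with ".N" / ".0.N"
    # (old style) or "+bN" (current style) appended.
    pattern = r"%s((\.0)?\.|\+b)\d+$" % re.escape(base_version)
    return re.match(pattern, candidate) is not None

assert is_binnmu("1.2-3", "1.2-3+b1")
assert is_binnmu("1.2-3", "1.2-3.0.1")
assert not is_binnmu("1.2-3", "1.2-4")
assert not is_binnmu("1.2-3", "1.2-3")
```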
- projectB.query("COMMIT WORK");
-
- utils.move (pkg.changes_file, Cnf["Dir::Morgue"] + '/katie/' + os.path.basename(pkg.changes_file));
-
- ## Update the Stable ChangeLog file
- new_changelog_filename = Cnf["Dir::Root"] + Cnf["Suite::Stable::ChangeLogBase"] + ".ChangeLog";
- changelog_filename = Cnf["Dir::Root"] + Cnf["Suite::Stable::ChangeLogBase"] + "ChangeLog";
- if os.path.exists(new_changelog_filename):
- os.unlink (new_changelog_filename);
-
- new_changelog = utils.open_file(new_changelog_filename, 'w');
- for file in files.keys():
- if files[file]["type"] == "deb":
- new_changelog.write("stable/%s/binary-%s/%s\n" % (files[file]["component"], files[file]["architecture"], file));
- elif utils.re_issource.match(file):
- new_changelog.write("stable/%s/source/%s\n" % (files[file]["component"], file));
- else:
- new_changelog.write("%s\n" % (file));
- chop_changes = katie.re_fdnic.sub("\n", changes["changes"]);
- new_changelog.write(chop_changes + '\n\n');
- if os.access(changelog_filename, os.R_OK) != 0:
- changelog = utils.open_file(changelog_filename);
- new_changelog.write(changelog.read());
- new_changelog.close();
- if os.access(changelog_filename, os.R_OK) != 0:
- os.unlink(changelog_filename);
- utils.move(new_changelog_filename, changelog_filename);
-
- install_count += 1;
-
- if not Options["No-Mail"] and changes["architecture"].has_key("source"):
- Subst["__SUITE__"] = " into stable";
- Subst["__SUMMARY__"] = summary;
- mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/kelly.installed");
- utils.send_mail(mail_message);
- Katie.announce(short_summary, 1)
-
- # Finally remove the .katie file
- katie_file = os.path.join(Cnf["Suite::Proposed-Updates::CopyKatie"], os.path.basename(Katie.pkg.changes_file[:-8]+".katie"));
- os.unlink(katie_file);
-
-################################################################################
-
-def process_it (changes_file):
- global reject_message;
-
- reject_message = "";
-
- # Absolutize the filename to avoid the requirement of being in the
- # same directory as the .changes file.
- pkg.changes_file = os.path.abspath(changes_file);
-
- # And since handling of installs to stable munges the CWD,
- # save and restore it.
- pkg.directory = os.getcwd();
-
- if installing_to_stable:
- old = Katie.pkg.changes_file;
- Katie.pkg.changes_file = os.path.basename(old);
- os.chdir(Cnf["Suite::Proposed-Updates::CopyKatie"]);
-
- Katie.init_vars();
- Katie.update_vars();
- Katie.update_subst();
-
- if installing_to_stable:
- Katie.pkg.changes_file = old;
-
- check();
- action();
-
- # Restore CWD
- os.chdir(pkg.directory);
-
-###############################################################################
-
-def main():
- global projectB, Logger, Urgency_Logger, installing_to_stable;
-
- changes_files = init();
-
- # -n/--dry-run invalidates some other options which would otherwise cause things to happen
- if Options["No-Action"]:
- Options["Automatic"] = "";
-
- # Check that we aren't going to clash with the daily cron job
-
- if not Options["No-Action"] and os.path.exists("%s/Archive_Maintenance_In_Progress" % (Cnf["Dir::Root"])) and not Options["No-Lock"]:
- utils.fubar("Archive maintenance in progress. Try again later.");
-
- # If running from within proposed-updates; assume an install to stable
- if os.getcwd().find('proposed-updates') != -1:
- installing_to_stable = 1;
-
- # Obtain lock if not in no-action mode and initialize the log
- if not Options["No-Action"]:
- lock_fd = os.open(Cnf["Dinstall::LockFile"], os.O_RDWR | os.O_CREAT);
- try:
- fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB);
- except IOError, e:
- if errno.errorcode[e.errno] == 'EACCES' or errno.errorcode[e.errno] == 'EAGAIN':
- utils.fubar("Couldn't obtain lock; assuming another kelly is already running.");
- else:
- raise;
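The lock acquisition in main() can be exercised on its own. Note that fcntl/lockf record locks are held per process, so demonstrating contention requires a child process; the lock path here is illustrative, not Dinstall::LockFile:

```python
import errno
import fcntl
import os
import subprocess
import sys
import tempfile

lock_path = os.path.join(tempfile.mkdtemp(), "kelly.lock")
lock_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT)
fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # we now hold the lock

# A second process trying the same non-blocking lock should fail with
# EACCES or EAGAIN, which is how kelly detects a running instance.
child = (
    "import errno, fcntl, os, sys\n"
    "fd = os.open(sys.argv[1], os.O_RDWR)\n"
    "try:\n"
    "    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\n"
    "except OSError as e:\n"
    "    sys.exit(0 if e.errno in (errno.EACCES, errno.EAGAIN) else 2)\n"
    "sys.exit(1)\n"
)
rc = subprocess.call([sys.executable, "-c", child, lock_path])
assert rc == 0  # the second would-be kelly backed off
```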
- Logger = Katie.Logger = logging.Logger(Cnf, "kelly");
- if not installing_to_stable and Cnf.get("Dir::UrgencyLog"):
- Urgency_Logger = Urgency_Log(Cnf);
-
- # Initialize the substitution template mapping global
- bcc = "X-Katie: %s" % (kelly_version);
- if Cnf.has_key("Dinstall::Bcc"):
- Subst["__BCC__"] = bcc + "\nBcc: %s" % (Cnf["Dinstall::Bcc"]);
- else:
- Subst["__BCC__"] = bcc;
-
- # Sort the .changes files so that we process sourceful ones first
- changes_files.sort(utils.changes_compare);
-
- # Process the changes files
- for changes_file in changes_files:
- print "\n" + changes_file;
- process_it (changes_file);
-
- if install_count:
- sets = "set"
- if install_count > 1:
- sets = "sets"
- sys.stderr.write("Installed %d package %s, %s.\n" % (install_count, sets, utils.size_type(int(install_bytes))));
- Logger.log(["total",install_count,install_bytes]);
-
- if not Options["No-Action"]:
- Logger.close();
- if Urgency_Logger:
- Urgency_Logger.close();
-
-###############################################################################
-
-if __name__ == '__main__':
- main();
+++ /dev/null
-#!/usr/bin/env python
-
-# Manually reject packages for proposed-updates
-# Copyright (C) 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: lauren,v 1.4 2004-04-01 17:13:11 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, pg, sys;
-import db_access, katie, logging, utils;
-import apt_pkg;
-
-################################################################################
-
-# Globals
-lauren_version = "$Revision: 1.4 $";
-
-Cnf = None;
-Options = None;
-projectB = None;
-Katie = None;
-Logger = None;
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: lauren .CHANGES[...]
-Manually reject the .CHANGES file(s).
-
- -h, --help show this help and exit.
- -m, --message=MSG use this message for the rejection.
- -s, --no-mail don't send any mail."""
- sys.exit(exit_code)
-
-################################################################################
-
-def main():
- global Cnf, Logger, Options, projectB, Katie;
-
- Cnf = utils.get_conf();
- Arguments = [('h',"help","Lauren::Options::Help"),
- ('m',"manual-reject","Lauren::Options::Manual-Reject", "HasArg"),
- ('s',"no-mail", "Lauren::Options::No-Mail")];
- for i in [ "help", "manual-reject", "no-mail" ]:
- if not Cnf.has_key("Lauren::Options::%s" % (i)):
- Cnf["Lauren::Options::%s" % (i)] = "";
-
- arguments = apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Lauren::Options");
- if Options["Help"]:
- usage();
- if not arguments:
- utils.fubar("need at least one .changes filename as an argument.");
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- Katie = katie.Katie(Cnf);
- Logger = Katie.Logger = logging.Logger(Cnf, "lauren");
-
- bcc = "X-Katie: lauren %s" % (lauren_version);
- if Cnf.has_key("Dinstall::Bcc"):
- Katie.Subst["__BCC__"] = bcc + "\nBcc: %s" % (Cnf["Dinstall::Bcc"]);
- else:
- Katie.Subst["__BCC__"] = bcc;
-
- for arg in arguments:
- arg = utils.validate_changes_file_arg(arg);
- Katie.pkg.changes_file = arg;
- Katie.init_vars();
- cwd = os.getcwd();
- os.chdir(Cnf["Suite::Proposed-Updates::CopyKatie"]);
- Katie.update_vars();
- os.chdir(cwd);
- Katie.update_subst();
-
- print arg
- done = 0;
- prompt = "Manual reject, [S]kip, Quit ?";
- while not done:
- answer = "XXX";
-
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.search(prompt);
- if answer == "":
- answer = m.group(1)
- answer = answer[:1].upper()
-
- if answer == 'M':
- aborted = reject(Options["Manual-Reject"]);
- if not aborted:
- done = 1;
- elif answer == 'S':
- done = 1;
- elif answer == 'Q':
- sys.exit(0)
-
- Logger.close();
-
-################################################################################
-
-def reject (reject_message = ""):
- files = Katie.pkg.files;
- dsc = Katie.pkg.dsc;
- changes_file = Katie.pkg.changes_file;
-
- # If we weren't given a manual rejection message, spawn an editor
- # so the user can add one in...
- if not reject_message:
- temp_filename = utils.temp_filename();
- editor = os.environ.get("EDITOR","vi")
- answer = 'E';
- while answer == 'E':
- os.system("%s %s" % (editor, temp_filename))
- file = utils.open_file(temp_filename);
- reject_message = "".join(file.readlines());
- file.close();
- print "Reject message:";
- print utils.prefix_multi_line_string(reject_message," ", include_blank_lines=1);
- prompt = "[R]eject, Edit, Abandon, Quit ?"
- answer = "XXX";
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.search(prompt);
- if answer == "":
- answer = m.group(1);
- answer = answer[:1].upper();
- os.unlink(temp_filename);
- if answer == 'A':
- return 1;
- elif answer == 'Q':
- sys.exit(0);
-
- print "Rejecting.\n"
-
- # Reject the .changes file
- Katie.force_reject([changes_file]);
-
- # Setup the .reason file
- reason_filename = changes_file[:-8] + ".reason";
- reject_filename = Cnf["Dir::Queue::Reject"] + '/' + reason_filename;
-
- # If we fail here someone is probably trying to exploit the race
- # so let's just raise an exception ...
- if os.path.exists(reject_filename):
- os.unlink(reject_filename);
- reject_fd = os.open(reject_filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0644);
-
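The unlink-then-exclusive-create pattern used for the `.reason` file above can be sketched standalone; the path is illustrative:

```python
import errno
import os
import tempfile

reason_path = os.path.join(tempfile.mkdtemp(), "foo.reason")

# O_CREAT|O_EXCL fails with EEXIST if the path reappears between the
# unlink() and the open(), instead of following a planted symlink.
if os.path.exists(reason_path):
    os.unlink(reason_path)
fd = os.open(reason_path, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o644)
os.write(fd, b"rejection text\n")
os.close(fd)

# A second exclusive create of the same path must fail.
try:
    os.open(reason_path, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o644)
    raise AssertionError("O_EXCL create unexpectedly succeeded")
except OSError as e:
    assert e.errno == errno.EEXIST
```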
- # Build up the rejection email
- user_email_address = utils.whoami() + " <%s>" % (Cnf["Dinstall::MyAdminAddress"]);
-
- Katie.Subst["__REJECTOR_ADDRESS__"] = user_email_address;
- Katie.Subst["__MANUAL_REJECT_MESSAGE__"] = reject_message;
- Katie.Subst["__STABLE_REJECTOR__"] = Cnf["Lauren::StableRejector"];
- Katie.Subst["__MORE_INFO_URL__"] = Cnf["Lauren::MoreInfoURL"];
- Katie.Subst["__CC__"] = "Cc: " + Cnf["Dinstall::MyEmailAddress"];
- reject_mail_message = utils.TemplateSubst(Katie.Subst,Cnf["Dir::Templates"]+"/lauren.stable-rejected");
-
- # Write the rejection email out as the <foo>.reason file
- os.write(reject_fd, reject_mail_message);
- os.close(reject_fd);
-
- # Remove the packages from proposed-updates
- suite_id = db_access.get_suite_id('proposed-updates');
-
- projectB.query("BEGIN WORK");
- # Remove files from proposed-updates suite
- for file in files.keys():
- if files[file]["type"] == "dsc":
- package = dsc["source"];
- version = dsc["version"]; # NB: not files[file]["version"], that has no epoch
- q = projectB.query("SELECT id FROM source WHERE source = '%s' AND version = '%s'" % (package, version));
- ql = q.getresult();
- if not ql:
- utils.fubar("reject: Couldn't find %s_%s in source table." % (package, version));
- source_id = ql[0][0];
- projectB.query("DELETE FROM src_associations WHERE suite = '%s' AND source = '%s'" % (suite_id, source_id));
- elif files[file]["type"] == "deb":
- package = files[file]["package"];
- version = files[file]["version"];
- architecture = files[file]["architecture"];
- q = projectB.query("SELECT b.id FROM binaries b, architecture a WHERE b.package = '%s' AND b.version = '%s' AND (a.arch_string = '%s' OR a.arch_string = 'all') AND b.architecture = a.id" % (package, version, architecture));
- ql = q.getresult();
-
- # Horrible hack to work around partial replacement of
- # packages with newer versions (from different source
- # packages). This, obviously, should instead check for a
- # newer version of the package and only do the
- # warn&continue thing if it finds one.
- if not ql:
- utils.warn("reject: Couldn't find %s_%s_%s in binaries table." % (package, version, architecture));
- else:
- binary_id = ql[0][0];
- projectB.query("DELETE FROM bin_associations WHERE suite = '%s' AND bin = '%s'" % (suite_id, binary_id));
- projectB.query("COMMIT WORK");
-
- # Send the rejection mail if appropriate
- if not Options["No-Mail"]:
- utils.send_mail(reject_mail_message);
-
- # Finally remove the .katie file
- katie_file = os.path.join(Cnf["Suite::Proposed-Updates::CopyKatie"], os.path.basename(changes_file[:-8]+".katie"));
- os.unlink(katie_file);
-
- Logger.log(["rejected", changes_file]);
- return 0;
-
-################################################################################
-
-if __name__ == '__main__':
- main();
+++ /dev/null
-#!/usr/bin/env python
-
-# Handles NEW and BYHAND packages
-# Copyright (C) 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
-# $Id: lisa,v 1.31 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# 23:12|<aj> I will not hush!
-# 23:12|<elmo> :>
-# 23:12|<aj> Where there is injustice in the world, I shall be there!
-# 23:13|<aj> I shall not be silenced!
-# 23:13|<aj> The world shall know!
-# 23:13|<aj> The world *must* know!
-# 23:13|<elmo> oh dear, he's gone back to powerpuff girls... ;-)
-# 23:13|<aj> yay powerpuff girls!!
-# 23:13|<aj> buttercup's my favourite, who's yours?
-# 23:14|<aj> you're backing away from the keyboard right now aren't you?
-# 23:14|<aj> *AREN'T YOU*?!
-# 23:15|<aj> I will not be treated like this.
-# 23:15|<aj> I shall have my revenge.
-# 23:15|<aj> I SHALL!!!
-
-################################################################################
-
-import copy, errno, os, readline, stat, sys, time;
-import apt_pkg, apt_inst;
-import db_access, fernanda, katie, logging, utils;
-
-# Globals
-lisa_version = "$Revision: 1.31 $";
-
-Cnf = None;
-Options = None;
-Katie = None;
-projectB = None;
-Logger = None;
-
-Priorities = None;
-Sections = None;
-
-reject_message = "";
-
-################################################################################
-################################################################################
-################################################################################
-
-def reject (str, prefix="Rejected: "):
- global reject_message;
- if str:
- reject_message += prefix + str + "\n";
-
-def recheck():
- global reject_message;
- files = Katie.pkg.files;
- reject_message = "";
-
- for file in files.keys():
- # The .orig.tar.gz can disappear out from under us if it's a
- # duplicate of one in the archive.
- if not files.has_key(file):
- continue;
- # Check that the source still exists
- if files[file]["type"] == "deb":
- source_version = files[file]["source version"];
- source_package = files[file]["source package"];
- if not Katie.pkg.changes["architecture"].has_key("source") \
- and not Katie.source_exists(source_package, source_version, Katie.pkg.changes["distribution"].keys()):
- source_epochless_version = utils.re_no_epoch.sub('', source_version);
- dsc_filename = "%s_%s.dsc" % (source_package, source_epochless_version);
- if not os.path.exists(Cnf["Dir::Queue::Accepted"] + '/' + dsc_filename):
- reject("no source found for %s %s (%s)." % (source_package, source_version, file));
-
- # Version and file overwrite checks
- if files[file]["type"] == "deb":
- reject(Katie.check_binary_against_db(file));
- elif files[file]["type"] == "dsc":
- reject(Katie.check_source_against_db(file));
- (reject_msg, is_in_incoming) = Katie.check_dsc_against_db(file);
- reject(reject_msg);
-
- if reject_message:
- answer = "XXX";
- if Options["No-Action"] or Options["Automatic"]:
- answer = 'S'
-
- print "REJECT\n" + reject_message,;
- prompt = "[R]eject, Skip, Quit ?";
-
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.match(prompt);
- if answer == "":
- answer = m.group(1);
- answer = answer[:1].upper();
-
- if answer == 'R':
- Katie.do_reject(0, reject_message);
- os.unlink(Katie.pkg.changes_file[:-8]+".katie");
- return 0;
- elif answer == 'S':
- return 0;
- elif answer == 'Q':
- sys.exit(0);
-
- return 1;
-
-################################################################################
-
-def determine_new (changes, files):
- new = {};
-
- # Build up a list of potentially new things
- for file in files.keys():
- f = files[file];
- # Skip byhand elements
- if f["type"] == "byhand":
- continue;
- pkg = f["package"];
- priority = f["priority"];
- section = f["section"];
- # FIXME: unhardcode
- if section == "non-US/main":
- section = "non-US";
- type = get_type(f);
- component = f["component"];
-
- if type == "dsc":
- priority = "source";
- if not new.has_key(pkg):
- new[pkg] = {};
- new[pkg]["priority"] = priority;
- new[pkg]["section"] = section;
- new[pkg]["type"] = type;
- new[pkg]["component"] = component;
- new[pkg]["files"] = [];
- else:
- old_type = new[pkg]["type"];
- if old_type != type:
- # source gets trumped by deb or udeb
- if old_type == "dsc":
- new[pkg]["priority"] = priority;
- new[pkg]["section"] = section;
- new[pkg]["type"] = type;
- new[pkg]["component"] = component;
- new[pkg]["files"].append(file);
- if f.has_key("othercomponents"):
- new[pkg]["othercomponents"] = f["othercomponents"];
-
- for suite in changes["suite"].keys():
- suite_id = db_access.get_suite_id(suite);
- for pkg in new.keys():
- component_id = db_access.get_component_id(new[pkg]["component"]);
- type_id = db_access.get_override_type_id(new[pkg]["type"]);
- q = projectB.query("SELECT package FROM override WHERE package = '%s' AND suite = %s AND component = %s AND type = %s" % (pkg, suite_id, component_id, type_id));
- ql = q.getresult();
- if ql:
- for file in new[pkg]["files"]:
- if files[file].has_key("new"):
- del files[file]["new"];
- del new[pkg];
-
- if changes["suite"].has_key("stable"):
- print "WARNING: overrides will be added for stable!";
- if changes["suite"].has_key("oldstable"):
- print "WARNING: overrides will be added for OLDstable!";
- for pkg in new.keys():
- if new[pkg].has_key("othercomponents"):
- print "WARNING: %s already present in %s distribution." % (pkg, new[pkg]["othercomponents"]);
-
- return new;
-
-################################################################################
-
-def indiv_sg_compare (a, b):
- """Sort by source version, 'have source', and finally by
- filename."""
- # Sort by source version
- q = apt_pkg.VersionCompare(a["version"], b["version"]);
- if q:
- return -q;
-
- # Sort by 'have source'
- a_has_source = a["architecture"].get("source");
- b_has_source = b["architecture"].get("source");
- if a_has_source and not b_has_source:
- return -1;
- elif b_has_source and not a_has_source:
- return 1;
-
- return cmp(a["filename"], b["filename"]);
-
-############################################################
-
-def sg_compare (a, b):
- """Sort by have note, time of oldest upload."""
- a = a[1];
- b = b[1];
- # Sort by have note
- a_note_state = a["note_state"];
- b_note_state = b["note_state"];
- if a_note_state < b_note_state:
- return -1;
- elif a_note_state > b_note_state:
- return 1;
-
- # Sort by time of oldest upload
- return cmp(a["oldest"], b["oldest"]);
-
-def sort_changes(changes_files):
- """Sort into source groups, then sort each source group by version,
- have source, filename. Finally, sort the source groups by have
- note, time of oldest upload of each source upload."""
- if len(changes_files) == 1:
- return changes_files;
-
- sorted_list = [];
- cache = {};
- # Read in all the .changes files
- for filename in changes_files:
- try:
- Katie.pkg.changes_file = filename;
- Katie.init_vars();
- Katie.update_vars();
- cache[filename] = copy.copy(Katie.pkg.changes);
- cache[filename]["filename"] = filename;
- except:
- sorted_list.append(filename);
- break;
- # Divide the .changes into per-source groups
- per_source = {};
- for filename in cache.keys():
- source = cache[filename]["source"];
- if not per_source.has_key(source):
- per_source[source] = {};
- per_source[source]["list"] = [];
- per_source[source]["list"].append(cache[filename]);
- # Determine oldest time and have note status for each source group
- for source in per_source.keys():
- source_list = per_source[source]["list"];
- first = source_list[0];
- oldest = os.stat(first["filename"])[stat.ST_MTIME];
- have_note = 0;
- for d in per_source[source]["list"]:
- mtime = os.stat(d["filename"])[stat.ST_MTIME];
- if mtime < oldest:
- oldest = mtime;
- have_note += (d.has_key("lisa note"));
- per_source[source]["oldest"] = oldest;
- if not have_note:
- per_source[source]["note_state"] = 0; # none
- elif have_note < len(source_list):
- per_source[source]["note_state"] = 1; # some
- else:
- per_source[source]["note_state"] = 2; # all
- per_source[source]["list"].sort(indiv_sg_compare);
- per_source_items = per_source.items();
- per_source_items.sort(sg_compare);
- for i in per_source_items:
- for j in i[1]["list"]:
- sorted_list.append(j["filename"]);
- return sorted_list;
-
-################################################################################
-
-class Section_Completer:
- def __init__ (self):
- self.sections = [];
- q = projectB.query("SELECT section FROM section");
- for i in q.getresult():
- self.sections.append(i[0]);
-
- def complete(self, text, state):
- if state == 0:
- self.matches = [];
- n = len(text);
- for word in self.sections:
- if word[:n] == text:
- self.matches.append(word);
- try:
- return self.matches[state]
- except IndexError:
- return None
-
-############################################################
-
-class Priority_Completer:
- def __init__ (self):
- self.priorities = [];
- q = projectB.query("SELECT priority FROM priority");
- for i in q.getresult():
- self.priorities.append(i[0]);
-
- def complete(self, text, state):
- if state == 0:
- self.matches = [];
- n = len(text);
- for word in self.priorities:
- if word[:n] == text:
- self.matches.append(word);
- try:
- return self.matches[state]
- except IndexError:
- return None
-
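The two completer classes above share one shape: cache the matches on `state == 0`, then hand back one match per call until IndexError. A minimal standalone version over a fixed word list (instead of a database query) looks like:

```python
import readline

class ListCompleter:
    # Same protocol as the Section/Priority completers: readline calls
    # complete(text, state) repeatedly with state = 0, 1, 2, ...
    def __init__(self, words):
        self.words = words
        self.matches = []

    def complete(self, text, state):
        if state == 0:
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            return self.matches[state]
        except IndexError:
            return None

sections = ListCompleter(["admin", "devel", "doc"])
readline.set_completer(sections.complete)
readline.parse_and_bind("tab: complete")
assert sections.complete("d", 0) == "devel"
assert sections.complete("d", 1) == "doc"
assert sections.complete("d", 2) is None
```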
-################################################################################
-
-def check_valid (new):
- for pkg in new.keys():
- section = new[pkg]["section"];
- priority = new[pkg]["priority"];
- type = new[pkg]["type"];
- new[pkg]["section id"] = db_access.get_section_id(section);
- new[pkg]["priority id"] = db_access.get_priority_id(new[pkg]["priority"]);
- # Sanity checks
- if (section == "debian-installer" and type != "udeb") or \
- (section != "debian-installer" and type == "udeb"):
- new[pkg]["section id"] = -1;
- if (priority == "source" and type != "dsc") or \
- (priority != "source" and type == "dsc"):
- new[pkg]["priority id"] = -1;
-
-################################################################################
-
-def print_new (new, indexed, file=sys.stdout):
- check_valid(new);
- broken = 0;
- index = 0;
- for pkg in new.keys():
- index += 1;
- section = new[pkg]["section"];
- priority = new[pkg]["priority"];
- if new[pkg]["section id"] == -1:
- section += "[!]";
- broken = 1;
- if new[pkg]["priority id"] == -1:
- priority += "[!]";
- broken = 1;
- if indexed:
- line = "(%s): %-20s %-20s %-20s" % (index, pkg, priority, section);
- else:
- line = "%-20s %-20s %-20s" % (pkg, priority, section);
- line = line.strip()+'\n';
- file.write(line);
- note = Katie.pkg.changes.get("lisa note");
- if note:
- print "*"*75;
- print note;
- print "*"*75;
- return broken, note;
-
-################################################################################
-
-def get_type (f):
- # Determine the type
- if f.has_key("dbtype"):
- type = f["dbtype"];
- elif f["type"] == "orig.tar.gz" or f["type"] == "tar.gz" or f["type"] == "diff.gz" or f["type"] == "dsc":
- type = "dsc";
- else:
- utils.fubar("invalid type (%s) for new. Dazed, confused and sure as heck not continuing." % (f["type"]));
-
- # Validate the override type
- type_id = db_access.get_override_type_id(type);
- if type_id == -1:
- utils.fubar("invalid type (%s) for new. Say wha?" % (type));
-
- return type;
-
-################################################################################
-
-def index_range (index):
- if index == 1:
- return "1";
- else:
- return "1-%s" % (index);
-
-################################################################################
-################################################################################
-
-def edit_new (new):
- # Write the current data to a temporary file
- temp_filename = utils.temp_filename();
- temp_file = utils.open_file(temp_filename, 'w');
- print_new (new, 0, temp_file);
- temp_file.close();
- # Spawn an editor on that file
- editor = os.environ.get("EDITOR","vi")
- result = os.system("%s %s" % (editor, temp_filename))
- if result != 0:
- utils.fubar ("%s invocation failed for %s." % (editor, temp_filename), result)
- # Read the edited data back in
- temp_file = utils.open_file(temp_filename);
- lines = temp_file.readlines();
- temp_file.close();
- os.unlink(temp_filename);
- # Parse the new data
- for line in lines:
- line = line.strip();
- if line == "":
- continue;
- s = line.split();
- # Pad the list if necessary
- s[len(s):3] = [None] * (3-len(s));
- (pkg, priority, section) = s[:3];
- if not new.has_key(pkg):
- utils.warn("Ignoring unknown package '%s'" % (pkg));
- else:
- # Strip off any invalid markers; print_new will re-add them.
- if section.endswith("[!]"):
- section = section[:-3];
- if priority.endswith("[!]"):
- priority = priority[:-3];
- for file in new[pkg]["files"]:
- Katie.pkg.files[file]["section"] = section;
- Katie.pkg.files[file]["priority"] = priority;
- new[pkg]["section"] = section;
- new[pkg]["priority"] = priority;
-
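The slice assignment used when parsing edited override lines is terse enough to deserve a note: it pads a short whitespace-split line out to exactly three fields so the unpack never fails.

```python
# Pad a two-field override line to (package, priority, section).
s = "mypkg optional".split()
s[len(s):3] = [None] * (3 - len(s))
pkg, priority, section = s[:3]
assert (pkg, priority, section) == ("mypkg", "optional", None)
```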
-################################################################################
-
-def edit_index (new, index):
- priority = new[index]["priority"]
- section = new[index]["section"]
- type = new[index]["type"];
- done = 0
- while not done:
- print "\t".join([index, priority, section]);
-
- answer = "XXX";
- if type != "dsc":
- prompt = "[B]oth, Priority, Section, Done ? ";
- else:
- prompt = "[S]ection, Done ? ";
- edit_priority = edit_section = 0;
-
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.match(prompt)
- if answer == "":
- answer = m.group(1)
- answer = answer[:1].upper()
-
- if answer == 'P':
- edit_priority = 1;
- elif answer == 'S':
- edit_section = 1;
- elif answer == 'B':
- edit_priority = edit_section = 1;
- elif answer == 'D':
- done = 1;
-
- # Edit the priority
- if edit_priority:
- readline.set_completer(Priorities.complete);
- got_priority = 0;
- while not got_priority:
- new_priority = utils.our_raw_input("New priority: ").strip();
- if new_priority not in Priorities.priorities:
- print "E: '%s' is not a valid priority, try again." % (new_priority);
- else:
- got_priority = 1;
- priority = new_priority;
-
- # Edit the section
- if edit_section:
- readline.set_completer(Sections.complete);
- got_section = 0;
- while not got_section:
- new_section = utils.our_raw_input("New section: ").strip();
- if new_section not in Sections.sections:
- print "E: '%s' is not a valid section, try again." % (new_section);
- else:
- got_section = 1;
- section = new_section;
-
- # Reset the readline completer
- readline.set_completer(None);
-
- for file in new[index]["files"]:
- Katie.pkg.files[file]["section"] = section;
- Katie.pkg.files[file]["priority"] = priority;
- new[index]["priority"] = priority;
- new[index]["section"] = section;
- return new;
-
-################################################################################
-
-def edit_overrides (new):
- print;
- done = 0
- while not done:
- print_new (new, 1);
- new_index = {};
- index = 0;
- for i in new.keys():
- index += 1;
- new_index[index] = i;
-
- prompt = "(%s) edit override <n>, Editor, Done ? " % (index_range(index));
-
- got_answer = 0
- while not got_answer:
- answer = utils.our_raw_input(prompt);
- if not utils.str_isnum(answer):
- answer = answer[:1].upper();
- if answer == "E" or answer == "D":
- got_answer = 1;
- elif katie.re_isanum.match (answer):
- answer = int(answer);
- if (answer < 1) or (answer > index):
- print "%s is not a valid index (%s). Please retry." % (answer, index_range(index));
- else:
- got_answer = 1;
-
- if answer == 'E':
- edit_new(new);
- elif answer == 'D':
- done = 1;
- else:
- edit_index (new, new_index[answer]);
-
- return new;
-
-################################################################################
-
-def edit_note(note):
- # Write the current data to a temporary file
- temp_filename = utils.temp_filename();
- temp_file = utils.open_file(temp_filename, 'w');
- temp_file.write(note);
- temp_file.close();
- editor = os.environ.get("EDITOR","vi")
- answer = 'E';
- while answer == 'E':
- os.system("%s %s" % (editor, temp_filename))
- temp_file = utils.open_file(temp_filename);
- note = temp_file.read().rstrip();
- temp_file.close();
- print "Note:";
- print utils.prefix_multi_line_string(note," ");
- prompt = "[D]one, Edit, Abandon, Quit ?"
- answer = "XXX";
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.search(prompt);
- if answer == "":
- answer = m.group(1);
- answer = answer[:1].upper();
- os.unlink(temp_filename);
- if answer == 'A':
- return;
- elif answer == 'Q':
- sys.exit(0);
- Katie.pkg.changes["lisa note"] = note;
- Katie.dump_vars(Cnf["Dir::Queue::New"]);
-
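`edit_note` follows a common pattern: dump the current text to a temporary file, hand it to `$EDITOR`, re-read it, then confirm. A sketch of the same round-trip using only the stdlib (Python 3; `utils.temp_filename` replaced by `tempfile`):

```python
import os
import subprocess
import tempfile


def edit_in_editor(initial_text):
    """Write text to a temp file, open $EDITOR on it, return the edited text."""
    fd, path = tempfile.mkstemp(suffix=".txt")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(initial_text)
        editor = os.environ.get("EDITOR", "vi")
        subprocess.call([editor, path])
        with open(path) as f:
            return f.read().rstrip()
    finally:
        os.unlink(path)
```

The original loops back into the editor while the user answers `E`; this sketch shows only the single write/edit/read cycle.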
-################################################################################
-
-def check_pkg ():
- try:
- less_fd = os.popen("less -R -", 'w', 0);
- stdout_fd = sys.stdout;
- try:
- sys.stdout = less_fd;
- fernanda.display_changes(Katie.pkg.changes_file);
- files = Katie.pkg.files;
- for file in files.keys():
- if files[file].has_key("new"):
- type = files[file]["type"];
- if type == "deb":
- fernanda.check_deb(file);
- elif type == "dsc":
- fernanda.check_dsc(file);
- finally:
- sys.stdout = stdout_fd;
- except IOError, e:
- if errno.errorcode[e.errno] == 'EPIPE':
- utils.warn("[fernanda] Caught EPIPE; skipping.");
- pass;
- else:
- raise;
- except KeyboardInterrupt:
- utils.warn("[fernanda] Caught C-c; skipping.");
- pass;
-
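`check_pkg` pipes its report through `less` by temporarily pointing `sys.stdout` at the pager's stdin. A condensed sketch of that redirect-into-pager trick (Python 3; function name hypothetical):

```python
import subprocess
import sys


def page_output(render, pager=("less", "-R")):
    """Run render() with stdout piped into a pager, as check_pkg does."""
    proc = subprocess.Popen(pager, stdin=subprocess.PIPE, text=True)
    saved = sys.stdout
    try:
        # Everything render() prints now flows into the pager's stdin.
        sys.stdout = proc.stdin
        render()
    finally:
        # Always restore stdout, even if render() raises (e.g. EPIPE
        # when the user quits less before output finishes).
        sys.stdout = saved
        proc.stdin.close()
        proc.wait()
```

The `finally` block mirrors the original's care to restore `sys.stdout` before handling EPIPE or C-c.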
-################################################################################
-
-## FIXME: horribly Debian specific
-
-def do_bxa_notification():
- files = Katie.pkg.files;
- summary = "";
- for file in files.keys():
- if files[file]["type"] == "deb":
- control = apt_pkg.ParseSection(apt_inst.debExtractControl(utils.open_file(file)));
- summary += "\n";
- summary += "Package: %s\n" % (control.Find("Package"));
- summary += "Description: %s\n" % (control.Find("Description"));
- Katie.Subst["__BINARY_DESCRIPTIONS__"] = summary;
- bxa_mail = utils.TemplateSubst(Katie.Subst,Cnf["Dir::Templates"]+"/lisa.bxa_notification");
- utils.send_mail(bxa_mail);
-
-################################################################################
-
-def add_overrides (new):
- changes = Katie.pkg.changes;
- files = Katie.pkg.files;
-
- projectB.query("BEGIN WORK");
- for suite in changes["suite"].keys():
- suite_id = db_access.get_suite_id(suite);
- for pkg in new.keys():
- component_id = db_access.get_component_id(new[pkg]["component"]);
- type_id = db_access.get_override_type_id(new[pkg]["type"]);
- priority_id = new[pkg]["priority id"];
- section_id = new[pkg]["section id"];
- projectB.query("INSERT INTO override (suite, component, type, package, priority, section, maintainer) VALUES (%s, %s, %s, '%s', %s, %s, '')" % (suite_id, component_id, type_id, pkg, priority_id, section_id));
- for file in new[pkg]["files"]:
- if files[file].has_key("new"):
- del files[file]["new"];
- del new[pkg];
-
- projectB.query("COMMIT WORK");
-
- if Cnf.FindB("Dinstall::BXANotify"):
- do_bxa_notification();
-
-################################################################################
-
-def prod_maintainer ():
- # Here we prepare an editor and get them ready to prod...
- temp_filename = utils.temp_filename();
- editor = os.environ.get("EDITOR","vi")
- answer = 'E';
- while answer == 'E':
- os.system("%s %s" % (editor, temp_filename))
- file = utils.open_file(temp_filename);
- prod_message = "".join(file.readlines());
- file.close();
- print "Prod message:";
- print utils.prefix_multi_line_string(prod_message," ",include_blank_lines=1);
- prompt = "[P]rod, Edit, Abandon, Quit ?"
- answer = "XXX";
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.search(prompt);
- if answer == "":
- answer = m.group(1);
- answer = answer[:1].upper();
- os.unlink(temp_filename);
- if answer == 'A':
- return;
- elif answer == 'Q':
- sys.exit(0);
- # Otherwise, do the prodding...
- user_email_address = utils.whoami() + " <%s>" % (
- Cnf["Dinstall::MyAdminAddress"]);
-
- Subst = Katie.Subst;
-
- Subst["__FROM_ADDRESS__"] = user_email_address;
- Subst["__PROD_MESSAGE__"] = prod_message;
- Subst["__CC__"] = "Cc: " + Cnf["Dinstall::MyEmailAddress"];
-
- prod_mail_message = utils.TemplateSubst(
- Subst,Cnf["Dir::Templates"]+"/lisa.prod");
-
- # Send the prod mail if appropriate
- if not Cnf["Dinstall::Options::No-Mail"]:
- utils.send_mail(prod_mail_message);
-
- print "Sent prodding message";
-
-################################################################################
-
-def do_new():
- print "NEW\n";
- files = Katie.pkg.files;
- changes = Katie.pkg.changes;
-
- # Make a copy of distribution we can happily trample on
- changes["suite"] = copy.copy(changes["distribution"]);
-
- # Fix up the list of target suites
- for suite in changes["suite"].keys():
- override = Cnf.Find("Suite::%s::OverrideSuite" % (suite));
- if override:
- del changes["suite"][suite];
- changes["suite"][override] = 1;
- # Validate suites
- for suite in changes["suite"].keys():
- suite_id = db_access.get_suite_id(suite);
- if suite_id == -1:
- utils.fubar("%s has invalid suite '%s' (possibly overridden). say wha?" % (changes, suite));
-
- # The main NEW processing loop
- done = 0;
- while not done:
- # Find out what's new
- new = determine_new(changes, files);
-
- if not new:
- break;
-
- answer = "XXX";
- if Options["No-Action"] or Options["Automatic"]:
- answer = 'S';
-
- (broken, note) = print_new(new, 0);
- prompt = "";
-
- if not broken and not note:
- prompt = "Add overrides, ";
- if broken:
- print "W: [!] marked entries must be fixed before package can be processed.";
- if note:
- print "W: note must be removed before package can be processed.";
- prompt += "Remove note, ";
-
- prompt += "Edit overrides, Check, Manual reject, Note edit, Prod, [S]kip, Quit ?";
-
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.search(prompt);
- if answer == "":
- answer = m.group(1)
- answer = answer[:1].upper()
-
- if answer == 'A':
- done = add_overrides (new);
- elif answer == 'C':
- check_pkg();
- elif answer == 'E':
- new = edit_overrides (new);
- elif answer == 'M':
- aborted = Katie.do_reject(1, Options["Manual-Reject"]);
- if not aborted:
- os.unlink(Katie.pkg.changes_file[:-8]+".katie");
- done = 1;
- elif answer == 'N':
- edit_note(changes.get("lisa note", ""));
- elif answer == 'P':
- prod_maintainer();
- elif answer == 'R':
- confirm = utils.our_raw_input("Really clear note (y/N)? ").lower();
- if confirm == "y":
- del changes["lisa note"];
- elif answer == 'S':
- done = 1;
- elif answer == 'Q':
- sys.exit(0)
-
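The `while prompt.find(answer) == -1` loops above all share one idiom: the default answer is the bracketed letter in the prompt, extracted with `katie.re_default_answer` when the user just hits return. A self-contained sketch of that idiom (the regex shown is an assumption modelled on prompts like `[S]kip, Quit ?`; the real pattern lives in katie.py):

```python
import re

# Assumed shape of katie.re_default_answer: the first bracketed
# letter in the prompt is the default choice.
re_default_answer = re.compile(r"\[(.)\]")


def resolve_answer(prompt, raw):
    """Empty input picks the bracketed default; otherwise take the first letter."""
    if raw == "":
        m = re_default_answer.search(prompt)
        raw = m.group(1)
    return raw[:1].upper()
```

The caller then loops until the resolved letter actually appears in the prompt string, which is what rejects unknown answers.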
-################################################################################
-################################################################################
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: lisa [OPTION]... [CHANGES]...
- -a, --automatic automatic run
- -h, --help show this help and exit.
- -m, --manual-reject=MSG manual reject with `msg'
- -n, --no-action don't do anything
- -V, --version display the version number and exit"""
- sys.exit(exit_code)
-
-################################################################################
-
-def init():
- global Cnf, Options, Logger, Katie, projectB, Sections, Priorities;
-
- Cnf = utils.get_conf();
-
- Arguments = [('a',"automatic","Lisa::Options::Automatic"),
- ('h',"help","Lisa::Options::Help"),
- ('m',"manual-reject","Lisa::Options::Manual-Reject", "HasArg"),
- ('n',"no-action","Lisa::Options::No-Action"),
- ('V',"version","Lisa::Options::Version")];
-
- for i in ["automatic", "help", "manual-reject", "no-action", "version"]:
- if not Cnf.has_key("Lisa::Options::%s" % (i)):
- Cnf["Lisa::Options::%s" % (i)] = "";
-
- changes_files = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Lisa::Options")
-
- if Options["Help"]:
- usage();
-
- if Options["Version"]:
- print "lisa %s" % (lisa_version);
- sys.exit(0);
-
- Katie = katie.Katie(Cnf);
-
- if not Options["No-Action"]:
- Logger = Katie.Logger = logging.Logger(Cnf, "lisa");
-
- projectB = Katie.projectB;
-
- Sections = Section_Completer();
- Priorities = Priority_Completer();
- readline.parse_and_bind("tab: complete");
-
- return changes_files;
-
-################################################################################
-
-def do_byhand():
- done = 0;
- while not done:
- files = Katie.pkg.files;
- will_install = 1;
- byhand = [];
-
- for file in files.keys():
- if files[file]["type"] == "byhand":
- if os.path.exists(file):
- print "W: %s still present; please process byhand components and try again." % (file);
- will_install = 0;
- else:
- byhand.append(file);
-
- answer = "XXXX";
- if Options["No-Action"]:
- answer = "S";
- if will_install:
- if Options["Automatic"] and not Options["No-Action"]:
- answer = 'A';
- prompt = "[A]ccept, Manual reject, Skip, Quit ?";
- else:
- prompt = "Manual reject, [S]kip, Quit ?";
-
- while prompt.find(answer) == -1:
- answer = utils.our_raw_input(prompt);
- m = katie.re_default_answer.search(prompt);
- if answer == "":
- answer = m.group(1);
- answer = answer[:1].upper();
-
- if answer == 'A':
- done = 1;
- for file in byhand:
- del files[file];
- elif answer == 'M':
- Katie.do_reject(1, Options["Manual-Reject"]);
- os.unlink(Katie.pkg.changes_file[:-8]+".katie");
- done = 1;
- elif answer == 'S':
- done = 1;
- elif answer == 'Q':
- sys.exit(0);
-
-################################################################################
-
-def do_accept():
- print "ACCEPT";
- if not Options["No-Action"]:
- retry = 0;
- while retry < 10:
- try:
- lock_fd = os.open(Cnf["Lisa::AcceptedLockFile"], os.O_RDONLY | os.O_CREAT | os.O_EXCL);
- retry = 10;
- except OSError, e:
- if errno.errorcode[e.errno] == 'EACCES' or errno.errorcode[e.errno] == 'EEXIST':
- retry += 1;
- if (retry >= 10):
- utils.fubar("Couldn't obtain lock; assuming jennifer is already running.");
- else:
- print("Unable to get accepted lock (try %d of 10)" % retry);
- time.sleep(60);
- else:
- raise;
- (summary, short_summary) = Katie.build_summaries();
- Katie.accept(summary, short_summary);
- os.unlink(Katie.pkg.changes_file[:-8]+".katie");
- os.unlink(Cnf["Lisa::AcceptedLockFile"]);
-
-def check_status(files):
- new = byhand = 0;
- for file in files.keys():
- if files[file]["type"] == "byhand":
- byhand = 1;
- elif files[file].has_key("new"):
- new = 1;
- return (new, byhand);
-
-def do_pkg(changes_file):
- Katie.pkg.changes_file = changes_file;
- Katie.init_vars();
- Katie.update_vars();
- Katie.update_subst();
- files = Katie.pkg.files;
-
- if not recheck():
- return;
-
- (new, byhand) = check_status(files);
- if new or byhand:
- if new:
- do_new();
- if byhand:
- do_byhand();
- (new, byhand) = check_status(files);
-
- if not new and not byhand:
- do_accept();
-
-################################################################################
-
-def end():
- accept_count = Katie.accept_count;
- accept_bytes = Katie.accept_bytes;
-
- if accept_count:
- sets = "set"
- if accept_count > 1:
- sets = "sets"
- sys.stderr.write("Accepted %d package %s, %s.\n" % (accept_count, sets, utils.size_type(int(accept_bytes))));
- Logger.log(["total",accept_count,accept_bytes]);
-
- if not Options["No-Action"]:
- Logger.close();
-
-################################################################################
-
-def main():
- changes_files = init();
- if len(changes_files) > 50:
- sys.stderr.write("Sorting changes...\n");
- changes_files = sort_changes(changes_files);
-
- # Kill me now? **FIXME**
- Cnf["Dinstall::Options::No-Mail"] = "";
- bcc = "X-Katie: lisa %s" % (lisa_version);
- if Cnf.has_key("Dinstall::Bcc"):
- Katie.Subst["__BCC__"] = bcc + "\nBcc: %s" % (Cnf["Dinstall::Bcc"]);
- else:
- Katie.Subst["__BCC__"] = bcc;
-
- for changes_file in changes_files:
- changes_file = utils.validate_changes_file_arg(changes_file, 0);
- if not changes_file:
- continue;
- print "\n" + changes_file;
- do_pkg (changes_file);
-
- end();
-
-################################################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-#!/usr/bin/env python
-
-# Logging functions
-# Copyright (C) 2001, 2002 James Troup <james@nocrew.org>
-# $Id: logging.py,v 1.4 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, pwd, time, sys;
-import utils;
-
-################################################################################
-
-class Logger:
- "Logger object"
- Cnf = None;
- logfile = None;
- program = None;
-
- def __init__ (self, Cnf, program, debug=0):
- "Initialize a new Logger object"
- self.Cnf = Cnf;
- self.program = program;
- # Create the log directory if it doesn't exist
- logdir = Cnf["Dir::Log"];
- if not os.path.exists(logdir):
- umask = os.umask(00000);
- os.makedirs(logdir, 02775);
- # Open the logfile
- logfilename = "%s/%s" % (logdir, time.strftime("%Y-%m"));
- logfile = None
- if debug:
- logfile = sys.stderr
- else:
- logfile = utils.open_file(logfilename, 'a');
- self.logfile = logfile;
- # Log the start of the program
- user = pwd.getpwuid(os.getuid())[0];
- self.log(["program start", user]);
-
- def log (self, details):
- "Log an event"
- # Prepend the timestamp and program name
- details.insert(0, self.program);
- timestamp = time.strftime("%Y%m%d%H%M%S");
- details.insert(0, timestamp);
- # Force the contents of the list to be string.join-able
- details = map(str, details);
- # Write out the log as a single pipe-separated line
- self.logfile.write("|".join(details)+'\n');
- # Flush the output to enable tail-ing
- self.logfile.flush();
-
- def close (self):
- "Close a Logger object"
- self.log(["program end"]);
- self.logfile.flush();
- self.logfile.close();
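`Logger.log` produces pipe-delimited records: a `%Y%m%d%H%M%S` timestamp, the program name, then the event fields, all coerced to strings. The record layout can be sketched as a pure function (Python 3; the `now` parameter is an addition for testability):

```python
import time


def format_log_line(program, details, now=None):
    """Mirror Logger.log: prepend timestamp and program, join with '|'."""
    timestamp = time.strftime("%Y%m%d%H%M%S", now or time.localtime())
    # Coerce every field to str so non-string details (counts, sizes)
    # are join-able, as Logger.log does with map(str, details).
    fields = [timestamp, program] + [str(d) for d in details]
    return "|".join(fields) + "\n"
```

Flushing after every write, as the class above does, is what makes the monthly log file `tail -f`-able.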
+++ /dev/null
-#!/usr/bin/env python
-
-# Display information about package(s) (suite, version, etc.)
-# Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
-# $Id: madison,v 1.33 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <aj> ooo, elmo has "special powers"
-# <neuro> ooo, does he have lasers that shoot out of his eyes?
-# <aj> dunno
-# <aj> maybe he can turn invisible? that'd sure help with improved transparency!
-
-################################################################################
-
-import os, pg, sys;
-import utils, db_access;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: madison [OPTION] PACKAGE[...]
-Display information about PACKAGE(s).
-
- -a, --architecture=ARCH only show info for ARCH(s)
- -b, --binary-type=TYPE only show info for binary TYPE
- -c, --component=COMPONENT only show info for COMPONENT(s)
- -g, --greaterorequal show buildd 'dep-wait pkg >= {highest version}' info
- -G, --greaterthan show buildd 'dep-wait pkg >> {highest version}' info
- -h, --help show this help and exit
- -r, --regex treat PACKAGE as a regex
- -s, --suite=SUITE only show info for this suite
- -S, --source-and-binary show info for the binary children of source pkgs
-
-ARCH, COMPONENT and SUITE can be comma (or space) separated lists, e.g.
- --architecture=m68k,i386"""
- sys.exit(exit_code)
-
-################################################################################
-
-def main ():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
-
- Arguments = [('a', "architecture", "Madison::Options::Architecture", "HasArg"),
- ('b', "binarytype", "Madison::Options::BinaryType", "HasArg"),
- ('c', "component", "Madison::Options::Component", "HasArg"),
- ('f', "format", "Madison::Options::Format", "HasArg"),
- ('g', "greaterorequal", "Madison::Options::GreaterOrEqual"),
- ('G', "greaterthan", "Madison::Options::GreaterThan"),
- ('r', "regex", "Madison::Options::Regex"),
- ('s', "suite", "Madison::Options::Suite", "HasArg"),
- ('S', "source-and-binary", "Madison::Options::Source-And-Binary"),
- ('h', "help", "Madison::Options::Help")];
- for i in [ "architecture", "binarytype", "component", "format",
- "greaterorequal", "greaterthan", "regex", "suite",
- "source-and-binary", "help" ]:
- if not Cnf.has_key("Madison::Options::%s" % (i)):
- Cnf["Madison::Options::%s" % (i)] = "";
-
- packages = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Madison::Options")
-
- if Options["Help"]:
- usage();
- if not packages:
- utils.fubar("need at least one package name as an argument.");
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- # If cron.daily is running, warn the user that our output might seem strange
- if os.path.exists(os.path.join(Cnf["Dir::Root"], "Archive_Maintenance_In_Progress")):
- utils.warn("Archive maintenance is in progress; database inconsistencies are possible.");
-
- # Handle buildd maintenance helper options
- if Options["GreaterOrEqual"] or Options["GreaterThan"]:
- if Options["GreaterOrEqual"] and Options["GreaterThan"]:
- utils.fubar("-g/--greaterorequal and -G/--greaterthan are mutually exclusive.");
- if not Options["Suite"]:
- Options["Suite"] = "unstable";
-
- # Parse -a/--architecture, -c/--component and -s/--suite
- (con_suites, con_architectures, con_components, check_source) = \
- utils.parse_args(Options);
-
- if Options["BinaryType"]:
- if Options["BinaryType"] != "udeb" and Options["BinaryType"] != "deb":
- utils.fubar("Invalid binary type; only 'udeb' and 'deb' are recognised.");
- con_bintype = "AND b.type = '%s'" % (Options["BinaryType"]);
- # REMOVE ME TRAMP
- if Options["BinaryType"] == "udeb":
- check_source = 0;
- else:
- con_bintype = "";
-
- if Options["Regex"]:
- comparison_operator = "~";
- else:
- comparison_operator = "=";
-
- if Options["Source-And-Binary"]:
- new_packages = [];
- for package in packages:
- q = projectB.query("SELECT DISTINCT b.package FROM binaries b, bin_associations ba, suite su, source s WHERE b.source = s.id AND su.id = ba.suite AND b.id = ba.bin AND s.source %s '%s' %s" % (comparison_operator, package, con_suites));
- new_packages.extend(map(lambda x: x[0], q.getresult()));
- if package not in new_packages:
- new_packages.append(package);
- packages = new_packages;
-
- results = 0;
- for package in packages:
- q = projectB.query("""
-SELECT b.package, b.version, a.arch_string, su.suite_name, c.name, m.name
- FROM binaries b, architecture a, suite su, bin_associations ba,
- files f, location l, component c, maintainer m
- WHERE b.package %s '%s' AND a.id = b.architecture AND su.id = ba.suite
- AND b.id = ba.bin AND b.file = f.id AND f.location = l.id
- AND l.component = c.id AND b.maintainer = m.id %s %s %s
-""" % (comparison_operator, package, con_suites, con_architectures, con_bintype));
- ql = q.getresult();
- if check_source:
- q = projectB.query("""
-SELECT s.source, s.version, 'source', su.suite_name, c.name, m.name
- FROM source s, suite su, src_associations sa, files f, location l,
- component c, maintainer m
- WHERE s.source %s '%s' AND su.id = sa.suite AND s.id = sa.source
- AND s.file = f.id AND f.location = l.id AND l.component = c.id
- AND s.maintainer = m.id %s
-""" % (comparison_operator, package, con_suites));
- ql.extend(q.getresult());
- d = {};
- highver = {};
- for i in ql:
- results += 1;
- (pkg, version, architecture, suite, component, maintainer) = i;
- if component != "main":
- suite = "%s/%s" % (suite, component);
- if not d.has_key(pkg):
- d[pkg] = {};
- highver.setdefault(pkg,"");
- if not d[pkg].has_key(version):
- d[pkg][version] = {};
- if apt_pkg.VersionCompare(version, highver[pkg]) > 0:
- highver[pkg] = version;
- if not d[pkg][version].has_key(suite):
- d[pkg][version][suite] = [];
- d[pkg][version][suite].append(architecture);
-
- packages = d.keys();
- packages.sort();
- for pkg in packages:
- versions = d[pkg].keys();
- versions.sort(apt_pkg.VersionCompare);
- for version in versions:
- suites = d[pkg][version].keys();
- suites.sort();
- for suite in suites:
- arches = d[pkg][version][suite];
- arches.sort(utils.arch_compare_sw);
- if Options["Format"] == "": #normal
- sys.stdout.write("%10s | %10s | %13s | " % (pkg, version, suite));
- sys.stdout.write(", ".join(arches));
- sys.stdout.write('\n');
- elif Options["Format"] == "heidi":
- for arch in arches:
- sys.stdout.write("%s %s %s\n" % (pkg, version, arch));
- if Options["GreaterOrEqual"]:
- print "\n%s (>= %s)" % (pkg, highver[pkg])
- if Options["GreaterThan"]:
- print "\n%s (>> %s)" % (pkg, highver[pkg])
-
- if not results:
- sys.exit(1);
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
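The nested dictionary madison builds (package → version → suite → architecture list) drives its table output. A condensed sketch of that accumulation and rendering (plain Python 3; `apt_pkg.VersionCompare` ordering and `utils.arch_compare_sw` are replaced by plain sorts for the illustration):

```python
def group_rows(rows):
    """Group (pkg, version, suite, arch) rows the way madison does."""
    d = {}
    for pkg, version, suite, arch in rows:
        d.setdefault(pkg, {}).setdefault(version, {}).setdefault(suite, []).append(arch)
    return d


def render(d):
    """Emit madison's default 'pkg | version | suite | arches' table."""
    lines = []
    for pkg in sorted(d):
        for version in sorted(d[pkg]):
            for suite in sorted(d[pkg][version]):
                arches = ", ".join(sorted(d[pkg][version][suite]))
                lines.append("%10s | %10s | %13s | %s" % (pkg, version, suite, arches))
    return lines
```

The real tool additionally folds non-main components into the suite column (`suite/component`) and tracks the highest version per package for the `-g`/`-G` buildd helpers.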
+++ /dev/null
-#!/usr/bin/env python
-
-# General purpose package removal tool for ftpmaster
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: melanie,v 1.44 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# o OpenBSD team wants to get changes incorporated into IPF. Darren no
-# respond.
-# o Ask again -> No respond. Darren coder supreme.
-# o OpenBSD decide to make changes, but only in OpenBSD source
-# tree. Darren hears, gets angry! Decides: "LICENSE NO ALLOW!"
-# o Insert Flame War.
-# o OpenBSD team decide to switch to different packet filter under BSD
-# license. Because Project Goal: Every user should be able to make
-# changes to source tree. IPF license bad!!
-# o Darren try get back: says, NetBSD, FreeBSD allowed! MUAHAHAHAH!!!
-# o Theo say: no care, pf much better than ipf!
-# o Darren changes mind: changes license. But OpenBSD will not change
-# back to ipf. Darren even much more bitter.
-# o Darren so bitterbitter. Decides: I'LL GET BACK BY FORKING OPENBSD AND
-# RELEASING MY OWN VERSION. HEHEHEHEHE.
-
-# http://slashdot.org/comments.pl?sid=26697&cid=2883271
-
-################################################################################
-
-import commands, os, pg, re, sys;
-import utils, db_access;
-import apt_pkg, apt_inst;
-
-################################################################################
-
-re_strip_source_version = re.compile (r'\s+.*$');
-re_build_dep_arch = re.compile(r"\[[^]]+\]");
-
-################################################################################
-
-Cnf = None;
-Options = None;
-projectB = None;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: melanie [OPTIONS] PACKAGE[...]
-Remove PACKAGE(s) from suite(s).
-
- -a, --architecture=ARCH only act on this architecture
- -b, --binary remove binaries only
- -c, --component=COMPONENT act on this component
- -C, --carbon-copy=EMAIL send a CC of removal message to EMAIL
- -d, --done=BUG# send removal message as closure to bug#
- -h, --help show this help and exit
- -m, --reason=MSG reason for removal
- -n, --no-action don't do anything
- -p, --partial don't affect override files
- -R, --rdep-check check reverse dependencies
- -s, --suite=SUITE act on this suite
- -S, --source-only remove source only
-
-ARCH, BUG#, COMPONENT and SUITE can be comma (or space) separated lists, e.g.
- --architecture=m68k,i386"""
-
- sys.exit(exit_code)
-
-################################################################################
-
-# "Hudson: What that's great, that's just fucking great man, now what
-# the fuck are we supposed to do? We're in some real pretty shit now
-# man...That's it man, game over man, game over, man! Game over! What
-# the fuck are we gonna do now? What are we gonna do?"
-
-def game_over():
- answer = utils.our_raw_input("Continue (y/N)? ").lower();
- if answer != "y":
- print "Aborted."
- sys.exit(1);
-
-################################################################################
-
-def reverse_depends_check(removals, suites):
- print "Checking reverse dependencies..."
- components = Cnf.ValueList("Suite::%s::Components" % suites[0])
- dep_problem = 0
- p2c = {};
- for architecture in Cnf.ValueList("Suite::%s::Architectures" % suites[0]):
- if architecture in ["source", "all"]:
- continue
- deps = {};
- virtual_packages = {};
- for component in components:
- filename = "%s/dists/%s/%s/binary-%s/Packages.gz" % (Cnf["Dir::Root"], suites[0], component, architecture)
- # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
- temp_filename = utils.temp_filename();
- (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
- if (result != 0):
- utils.fubar("Gunzip invocation failed!\n%s\n" % (output), result);
- packages = utils.open_file(temp_filename);
- Packages = apt_pkg.ParseTagFile(packages)
- while Packages.Step():
- package = Packages.Section.Find("Package")
- depends = Packages.Section.Find("Depends")
- if depends:
- deps[package] = depends
- provides = Packages.Section.Find("Provides")
- # Maintain a counter for each virtual package. If a
- # Provides: exists, set the counter to 0 and count all
- # provides by a package not in the list for removal.
- # If the counter stays 0 at the end, we know that only
- # the to-be-removed packages provided this virtual
- # package.
- if provides:
- for virtual_pkg in provides.split(","):
- virtual_pkg = virtual_pkg.strip()
- if virtual_pkg == package: continue
- if not virtual_packages.has_key(virtual_pkg):
- virtual_packages[virtual_pkg] = 0
- if package not in removals:
- virtual_packages[virtual_pkg] += 1
- p2c[package] = component;
- packages.close()
- os.unlink(temp_filename);
-
- # If a virtual package is only provided by the to-be-removed
- # packages, treat the virtual package as to-be-removed too.
- for virtual_pkg in virtual_packages.keys():
- if virtual_packages[virtual_pkg] == 0:
- removals.append(virtual_pkg)
-
- # Check binary dependencies (Depends)
- for package in deps.keys():
- if package in removals: continue
- parsed_dep = []
- try:
- parsed_dep += apt_pkg.ParseDepends(deps[package])
- except ValueError, e:
- print "Error for package %s: %s" % (package, e)
- for dep in parsed_dep:
- # Check for partial breakage. If a package has an ORed
- # dependency, there is only a dependency problem if all of
- # the alternatives in the ORed group are to be removed.
- unsat = 0
- for dep_package, _, _ in dep:
- if dep_package in removals:
- unsat += 1
- if unsat == len(dep):
- component = p2c[package];
- if component != "main":
- what = "%s/%s" % (package, component);
- else:
- what = "** %s" % (package);
- print "%s has an unsatisfied dependency on %s: %s" % (what, architecture, utils.pp_deps(dep));
- dep_problem = 1
-
- # Check source dependencies (Build-Depends and Build-Depends-Indep)
- for component in components:
- filename = "%s/dists/%s/%s/source/Sources.gz" % (Cnf["Dir::Root"], suites[0], component)
- # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
- temp_filename = utils.temp_filename();
- result, output = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename))
- if result != 0:
- sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output))
- sys.exit(result)
- sources = utils.open_file(temp_filename, "r")
- Sources = apt_pkg.ParseTagFile(sources)
- while Sources.Step():
- source = Sources.Section.Find("Package")
- if source in removals: continue
- parsed_dep = []
- for build_dep_type in ["Build-Depends", "Build-Depends-Indep"]:
- build_dep = Sources.Section.get(build_dep_type)
- if build_dep:
- # Remove [arch] information since we want to see breakage on all arches
- build_dep = re_build_dep_arch.sub("", build_dep)
- try:
- parsed_dep += apt_pkg.ParseDepends(build_dep)
- except ValueError, e:
- print "Error for source %s: %s" % (source, e)
- for dep in parsed_dep:
- unsat = 0
- for dep_package, _, _ in dep:
- if dep_package in removals:
- unsat += 1
- if unsat == len(dep):
- if component != "main":
- source = "%s/%s" % (source, component);
- else:
- source = "** %s" % (source);
- print "%s has an unsatisfied build-dependency: %s" % (source, utils.pp_deps(dep))
- dep_problem = 1
- sources.close()
- os.unlink(temp_filename)
-
- if dep_problem:
- print "Dependency problem found."
- if not Options["No-Action"]:
- game_over()
- else:
- print "No dependency problem found."
- print
-
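The partial-breakage rule above (an ORed dependency only breaks when every alternative is being removed) is the heart of the check. A minimal sketch, with apt_pkg's parsed form reduced to a list of alternative package names (function name hypothetical):

```python
def dep_is_broken(ored_alternatives, removals):
    """True only if every alternative in the ORed group is being removed."""
    removals = set(removals)
    # Count satisfied-by-removal alternatives; the dependency is only
    # unsatisfiable when that count covers the whole ORed group.
    unsat = sum(1 for pkg in ored_alternatives if pkg in removals)
    return unsat == len(ored_alternatives)
```

This is the same `unsat == len(dep)` test applied to both Depends and Build-Depends above.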
-################################################################################
-
-def main ():
- global Cnf, Options, projectB;
-
- Cnf = utils.get_conf()
-
- Arguments = [('h',"help","Melanie::Options::Help"),
- ('a',"architecture","Melanie::Options::Architecture", "HasArg"),
- ('b',"binary", "Melanie::Options::Binary-Only"),
- ('c',"component", "Melanie::Options::Component", "HasArg"),
- ('C',"carbon-copy", "Melanie::Options::Carbon-Copy", "HasArg"), # Bugs to Cc
- ('d',"done","Melanie::Options::Done", "HasArg"), # Bugs fixed
- ('R',"rdep-check", "Melanie::Options::Rdep-Check"),
- ('m',"reason", "Melanie::Options::Reason", "HasArg"), # Hysterical raisins; -m is old-dinstall option for rejection reason
- ('n',"no-action","Melanie::Options::No-Action"),
- ('p',"partial", "Melanie::Options::Partial"),
- ('s',"suite","Melanie::Options::Suite", "HasArg"),
- ('S',"source-only", "Melanie::Options::Source-Only"),
- ];
-
- for i in [ "architecture", "binary-only", "carbon-copy", "component",
- "done", "help", "no-action", "partial", "rdep-check", "reason",
- "source-only" ]:
- if not Cnf.has_key("Melanie::Options::%s" % (i)):
- Cnf["Melanie::Options::%s" % (i)] = "";
- if not Cnf.has_key("Melanie::Options::Suite"):
- Cnf["Melanie::Options::Suite"] = "unstable";
-
- arguments = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Melanie::Options")
-
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- # Sanity check options
- if not arguments:
- utils.fubar("need at least one package name as an argument.");
- if Options["Architecture"] and Options["Source-Only"]:
- utils.fubar("can't use -a/--architecture and -S/--source-only options simultaneously.");
- if Options["Binary-Only"] and Options["Source-Only"]:
- utils.fubar("can't use -b/--binary-only and -S/--source-only options simultaneously.");
- if Options.has_key("Carbon-Copy") and not Options.has_key("Done"):
- utils.fubar("can't use -C/--carbon-copy without also using -d/--done option.");
- if Options["Architecture"] and not Options["Partial"]:
- utils.warn("-a/--architecture implies -p/--partial.");
- Options["Partial"] = "true";
-
- # Force the admin to tell someone if we're not doing a rene-led removal
- # (or closing a bug, which counts as telling someone).
- if not Options["No-Action"] and not Options["Carbon-Copy"] \
- and not Options["Done"] and Options["Reason"].find("[rene]") == -1:
- utils.fubar("Need a -C/--carbon-copy if not closing a bug and not doing a rene-led removal.");
-
- # Process -C/--carbon-copy
- #
- # Accept 3 types of arguments (space separated):
- # 1) a number - assumed to be a bug number, i.e. nnnnn@bugs.debian.org
- # 2) the keyword 'package' - cc's $package@packages.debian.org for every argument
- # 3) contains a '@' - assumed to be an email address, used unmodified
- #
- carbon_copy = [];
- for copy_to in utils.split_args(Options.get("Carbon-Copy")):
- if utils.str_isnum(copy_to):
- carbon_copy.append(copy_to + "@" + Cnf["Dinstall::BugServer"]);
- elif copy_to == 'package':
- for package in arguments:
- carbon_copy.append(package + "@" + Cnf["Dinstall::PackagesServer"]);
- if Cnf.has_key("Dinstall::TrackingServer"):
- carbon_copy.append(package + "@" + Cnf["Dinstall::TrackingServer"]);
- elif '@' in copy_to:
- carbon_copy.append(copy_to);
- else:
- utils.fubar("Invalid -C/--carbon-copy argument '%s'; not a bug number, 'package' or email address." % (copy_to));
-
- if Options["Binary-Only"]:
- field = "b.package";
- else:
- field = "s.source";
- con_packages = "AND %s IN (%s)" % (field, ", ".join(map(repr, arguments)));
-
- (con_suites, con_architectures, con_components, check_source) = \
- utils.parse_args(Options);
-
- # Additional suite checks
- suite_ids_list = [];
- suites = utils.split_args(Options["Suite"]);
- suites_list = utils.join_with_commas_and(suites);
- if not Options["No-Action"]:
- for suite in suites:
- suite_id = db_access.get_suite_id(suite);
- if suite_id != -1:
- suite_ids_list.append(suite_id);
- if suite == "stable":
- print "**WARNING** About to remove from the stable suite!"
- print "This should only be done just prior to a (point) release and not at"
- print "any other time."
- game_over();
- elif suite == "testing":
- print "**WARNING** About to remove from the testing suite!"
- print "There's no need to do this normally as removals from unstable will"
- print "propagate to testing automagically."
- game_over();
-
- # Additional architecture checks
- if Options["Architecture"] and check_source:
- utils.warn("'source' in -a/--architecture makes no sense and is ignored.");
-
- # Additional component processing
- over_con_components = con_components.replace("c.id", "component");
-
- print "Working...",
- sys.stdout.flush();
- to_remove = [];
- maintainers = {};
-
- # We have 3 modes of package selection: binary-only, source-only
- # and source+binary. The first two are trivial and obvious; the
- # latter is a nasty mess, but very nice from a UI perspective so
- # we try to support it.
-
- if Options["Binary-Only"]:
- # Binary-only
- q = projectB.query("SELECT b.package, b.version, a.arch_string, b.id, b.maintainer FROM binaries b, bin_associations ba, architecture a, suite su, files f, location l, component c WHERE ba.bin = b.id AND ba.suite = su.id AND b.architecture = a.id AND b.file = f.id AND f.location = l.id AND l.component = c.id %s %s %s %s" % (con_packages, con_suites, con_components, con_architectures));
- for i in q.getresult():
- to_remove.append(i);
- else:
- # Source-only
- source_packages = {};
- q = projectB.query("SELECT l.path, f.filename, s.source, s.version, 'source', s.id, s.maintainer FROM source s, src_associations sa, suite su, files f, location l, component c WHERE sa.source = s.id AND sa.suite = su.id AND s.file = f.id AND f.location = l.id AND l.component = c.id %s %s %s" % (con_packages, con_suites, con_components));
- for i in q.getresult():
- source_packages[i[2]] = i[:2];
- to_remove.append(i[2:]);
- if not Options["Source-Only"]:
- # Source + Binary
- binary_packages = {};
- # First get a list of binary package names we suspect are linked to the source
- q = projectB.query("SELECT DISTINCT b.package FROM binaries b, source s, src_associations sa, suite su, files f, location l, component c WHERE b.source = s.id AND sa.source = s.id AND sa.suite = su.id AND s.file = f.id AND f.location = l.id AND l.component = c.id %s %s %s" % (con_packages, con_suites, con_components));
- for i in q.getresult():
- binary_packages[i[0]] = "";
- # Then parse each .dsc that we found earlier to see what binary packages it thinks it produces
- for i in source_packages.keys():
- filename = "/".join(source_packages[i]);
- try:
- dsc = utils.parse_changes(filename);
- except utils.cant_open_exc:
- utils.warn("couldn't open '%s'." % (filename));
- continue;
- for package in dsc.get("binary").split(','):
- package = package.strip();
- binary_packages[package] = "";
- # Then for each binary package: find any version in
- # unstable, check the Source: field in the deb matches our
- # source package and if so add it to the list of packages
- # to be removed.
- for package in binary_packages.keys():
- q = projectB.query("SELECT l.path, f.filename, b.package, b.version, a.arch_string, b.id, b.maintainer FROM binaries b, bin_associations ba, architecture a, suite su, files f, location l, component c WHERE ba.bin = b.id AND ba.suite = su.id AND b.architecture = a.id AND b.file = f.id AND f.location = l.id AND l.component = c.id %s %s %s AND b.package = '%s'" % (con_suites, con_components, con_architectures, package));
- for i in q.getresult():
- filename = "/".join(i[:2]);
- control = apt_pkg.ParseSection(apt_inst.debExtractControl(utils.open_file(filename)))
- source = control.Find("Source", control.Find("Package"));
- source = re_strip_source_version.sub('', source);
- if source_packages.has_key(source):
- to_remove.append(i[2:]);
- print "done."
-
- if not to_remove:
- print "Nothing to do."
- sys.exit(0);
-
- # If we don't have a reason; spawn an editor so the user can add one
- # Write the rejection email out as the <foo>.reason file
- if not Options["Reason"] and not Options["No-Action"]:
- temp_filename = utils.temp_filename();
- editor = os.environ.get("EDITOR","vi")
- result = os.system("%s %s" % (editor, temp_filename))
- if result != 0:
- utils.fubar("%s invocation failed for `%s'!" % (editor, temp_filename), result)
- temp_file = utils.open_file(temp_filename);
- for line in temp_file.readlines():
- Options["Reason"] += line;
- temp_file.close();
- os.unlink(temp_filename);
-
- # Generate the summary of what's to be removed
- d = {};
- for i in to_remove:
- package = i[0];
- version = i[1];
- architecture = i[2];
- maintainer = i[4];
- maintainers[maintainer] = "";
- if not d.has_key(package):
- d[package] = {};
- if not d[package].has_key(version):
- d[package][version] = [];
- if architecture not in d[package][version]:
- d[package][version].append(architecture);
-
- maintainer_list = [];
- for maintainer_id in maintainers.keys():
- maintainer_list.append(db_access.get_maintainer(maintainer_id));
- summary = "";
- removals = d.keys();
- removals.sort();
- for package in removals:
- versions = d[package].keys();
- versions.sort(apt_pkg.VersionCompare);
- for version in versions:
- d[package][version].sort(utils.arch_compare_sw);
- summary += "%10s | %10s | %s\n" % (package, version, ", ".join(d[package][version]));
- print "Will remove the following packages from %s:" % (suites_list);
- print
- print summary
- print "Maintainer: %s" % ", ".join(maintainer_list)
- if Options["Done"]:
- print "Will also close bugs: "+Options["Done"];
- if carbon_copy:
- print "Will also send CCs to: " + ", ".join(carbon_copy)
- print
- print "------------------- Reason -------------------"
- print Options["Reason"];
- print "----------------------------------------------"
- print
-
- if Options["Rdep-Check"]:
- reverse_depends_check(removals, suites);
-
- # If -n/--no-action, drop out here
- if Options["No-Action"]:
- sys.exit(0);
-
- print "Going to remove the packages now."
- game_over();
-
- whoami = utils.whoami();
- date = commands.getoutput('date -R');
-
- # Log first; if it all falls apart I want a record that we at least tried.
- logfile = utils.open_file(Cnf["Melanie::LogFile"], 'a');
- logfile.write("=========================================================================\n");
- logfile.write("[Date: %s] [ftpmaster: %s]\n" % (date, whoami));
- logfile.write("Removed the following packages from %s:\n\n%s" % (suites_list, summary));
- if Options["Done"]:
- logfile.write("Closed bugs: %s\n" % (Options["Done"]));
- logfile.write("\n------------------- Reason -------------------\n%s\n" % (Options["Reason"]));
- logfile.write("----------------------------------------------\n");
- logfile.flush();
-
- dsc_type_id = db_access.get_override_type_id('dsc');
- deb_type_id = db_access.get_override_type_id('deb');
-
- # Do the actual deletion
- print "Deleting...",
- sys.stdout.flush();
- projectB.query("BEGIN WORK");
- for i in to_remove:
- package = i[0];
- architecture = i[2];
- package_id = i[3];
- for suite_id in suite_ids_list:
- if architecture == "source":
- projectB.query("DELETE FROM src_associations WHERE source = %s AND suite = %s" % (package_id, suite_id));
- #print "DELETE FROM src_associations WHERE source = %s AND suite = %s" % (package_id, suite_id);
- else:
- projectB.query("DELETE FROM bin_associations WHERE bin = %s AND suite = %s" % (package_id, suite_id));
- #print "DELETE FROM bin_associations WHERE bin = %s AND suite = %s" % (package_id, suite_id);
- # Delete from the override file
- if not Options["Partial"]:
- if architecture == "source":
- type_id = dsc_type_id;
- else:
- type_id = deb_type_id;
- projectB.query("DELETE FROM override WHERE package = '%s' AND type = %s AND suite = %s %s" % (package, type_id, suite_id, over_con_components));
- projectB.query("COMMIT WORK");
- print "done."
-
- # Send the bug closing messages
- if Options["Done"]:
- Subst = {};
- Subst["__MELANIE_ADDRESS__"] = Cnf["Melanie::MyEmailAddress"];
- Subst["__BUG_SERVER__"] = Cnf["Dinstall::BugServer"];
- bcc = [];
- if Cnf.Find("Dinstall::Bcc") != "":
- bcc.append(Cnf["Dinstall::Bcc"]);
- if Cnf.Find("Melanie::Bcc") != "":
- bcc.append(Cnf["Melanie::Bcc"]);
- if bcc:
- Subst["__BCC__"] = "Bcc: " + ", ".join(bcc);
- else:
- Subst["__BCC__"] = "X-Filler: 42";
- Subst["__CC__"] = "X-Katie: melanie $Revision: 1.44 $";
- if carbon_copy:
- Subst["__CC__"] += "\nCc: " + ", ".join(carbon_copy);
- Subst["__SUITE_LIST__"] = suites_list;
- Subst["__SUMMARY__"] = summary;
- Subst["__ADMIN_ADDRESS__"] = Cnf["Dinstall::MyAdminAddress"];
- Subst["__DISTRO__"] = Cnf["Dinstall::MyDistribution"];
- Subst["__WHOAMI__"] = whoami;
- whereami = utils.where_am_i();
- Archive = Cnf.SubTree("Archive::%s" % (whereami));
- Subst["__MASTER_ARCHIVE__"] = Archive["OriginServer"];
- Subst["__PRIMARY_MIRROR__"] = Archive["PrimaryMirror"];
- for bug in utils.split_args(Options["Done"]):
- Subst["__BUG_NUMBER__"] = bug;
- mail_message = utils.TemplateSubst(Subst,Cnf["Dir::Templates"]+"/melanie.bug-close");
- utils.send_mail(mail_message);
-
- logfile.write("=========================================================================\n");
- logfile.close();
-
-#######################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-#!/bin/sh
-# Update the md5sums file
-# $Id: mkchecksums,v 1.2 2000-12-20 08:15:35 troup Exp $
-
-set -e
-. $SCRIPTVARS
-
-dsynclist=$dbdir/dsync.list
-md5list=$indices/md5sums
-
-echo -n "Creating md5 / dsync index file ... "
-
-cd "$ftpdir"
-dsync-flist -q generate $dsynclist --exclude $dsynclist --md5
-dsync-flist -q md5sums $dsynclist | gzip -9n > ${md5list}.gz
-dsync-flist -q link-dups $dsynclist || true
+++ /dev/null
-#!/bin/sh
-# Update the ls-lR.
-# $Id: mklslar,v 1.3 2001-09-24 21:47:54 rmurray Exp $
-
-set -e
-. $SCRIPTVARS
-
-cd $ftpdir
-
-filename=ls-lR
-
-echo "Removing any core files ..."
-find -type f -name core -print0 | xargs -0r rm -v
-
-echo "Checking permissions on files in the FTP tree ..."
-find -type f \( \! -perm -444 -o -perm +002 \) -ls
-find -type d \( \! -perm -555 -o -perm +002 \) -ls
-
-echo "Checking symlinks ..."
-symlinks -rd .
-
-echo "Creating recursive directory listing ... "
-rm -f .$filename.new
-TZ=UTC ls -lR | grep -v Archive_Maintenance_In_Progress > .$filename.new
-
-if [ -r $filename ] ; then
- mv -f $filename $filename.old
- mv -f .$filename.new $filename
- rm -f $filename.patch.gz
- diff -u $filename.old $filename | gzip -9cfn - >$filename.patch.gz
- rm -f $filename.old
-else
- mv -f .$filename.new $filename
-fi
-
-gzip -9cfN $filename >$filename.gz
+++ /dev/null
-#! /bin/sh
-# $Id: mkmaintainers,v 1.3 2004-02-27 20:09:51 troup Exp $
-
-echo
-echo -n 'Creating Maintainers index ... '
-
-set -e
-. $SCRIPTVARS
-cd $masterdir
-
-nonusmaint="$masterdir/Maintainers_Versions-non-US"
-
-
-if wget -T15 -q -O Maintainers_Versions-non-US.gz http://non-us.debian.org/indices-non-US/Maintainers_Versions.gz; then
- rm -f $nonusmaint
- gunzip -c ${nonusmaint}.gz > $nonusmaint
- rm -f ${nonusmaint}.gz
-fi
-
-cd $indices
-$masterdir/charisma $nonusmaint $masterdir/pseudo-packages.maintainers | sed -e "s/~[^ ]*\([ ]\)/\1/" | awk '{printf "%-20s ", $1; for (i=2; i<=NF; i++) printf "%s ", $i; printf "\n";}' > .new-maintainers
-
-set +e
-cmp .new-maintainers Maintainers >/dev/null
-rc=$?
-set -e
-if [ $rc = 1 ] || [ ! -f Maintainers ] ; then
- echo -n "installing Maintainers ... "
- mv -f .new-maintainers Maintainers
- gzip -9v <Maintainers >.new-maintainers.gz
- mv -f .new-maintainers.gz Maintainers.gz
-elif [ $rc = 0 ] ; then
- echo '(same as before)'
- rm -f .new-maintainers
-else
- echo cmp returned $rc
- false
-fi
+++ /dev/null
-#!/usr/bin/env python
-
-# Manipulate override files
-# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
-# $Id: natalie,v 1.7 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# On 30 Nov 1998, James Troup wrote:
-#
-# > James Troup<2> <troup2@debian.org>
-# >
-# > James is a clone of James; he's going to take over the world.
-# > After he gets some sleep.
-#
-# Could you clone other things too? Sheep? Llamas? Giant mutant turnips?
-#
-# Your clone will need some help to take over the world, maybe clone up an
-# army of penguins and threaten to unleash them on the world, forcing
-# governments to sway to the new James' will!
-#
-# Yes, I can envision a day when James' duplicate decides to take a horrific
-# vengance on the James that spawned him and unleashes his fury in the form
-# of thousands upon thousands of chickens that look just like Captin Blue
-# Eye! Oh the horror.
-#
-# Now you'll have to were name tags to people can tell you apart, unless of
-# course the new clone is truely evil in which case he should be easy to
-# identify!
-#
-# Jason
-# Chicken. Black. Helicopters.
-# Be afraid.
-
-# <Pine.LNX.3.96.981130011300.30365Z-100000@wakko>
-
-################################################################################
-
-import pg, sys, time;
-import utils, db_access, logging;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-Logger = None;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: natalie.py [OPTIONS]
- -h, --help print this help and exit
-
- -c, --component=CMPT list/set overrides by component
- (contrib,*main,non-free)
- -s, --suite=SUITE list/set overrides by suite
- (experimental,stable,testing,*unstable)
- -t, --type=TYPE list/set overrides by type
- (*deb,dsc,udeb)
-
- -a, --add add overrides (changes and deletions are ignored)
- -S, --set set overrides
- -l, --list list overrides
-
- -q, --quiet be less verbose
-
- starred (*) values are default"""
- sys.exit(exit_code)
-
-################################################################################
-
-def process_file (file, suite, component, type, action):
- suite_id = db_access.get_suite_id(suite);
- if suite_id == -1:
- utils.fubar("Suite '%s' not recognised." % (suite));
-
- component_id = db_access.get_component_id(component);
- if component_id == -1:
- utils.fubar("Component '%s' not recognised." % (component));
-
- type_id = db_access.get_override_type_id(type);
- if type_id == -1:
- utils.fubar("Type '%s' not recognised. (Valid types are deb, udeb and dsc.)" % (type));
-
- # --set is done mostly internally for performance reasons; most
- # invocations of --set will be updates and making people wait 2-3
- # minutes while 6000 select+inserts are run needlessly isn't cool.
-
- original = {};
- new = {};
- c_skipped = 0;
- c_added = 0;
- c_updated = 0;
- c_removed = 0;
- c_error = 0;
-
- q = projectB.query("SELECT o.package, o.priority, o.section, o.maintainer, p.priority, s.section FROM override o, priority p, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s and o.priority = p.id and o.section = s.id"
- % (suite_id, component_id, type_id));
- for i in q.getresult():
- original[i[0]] = i[1:];
-
- start_time = time.time();
- projectB.query("BEGIN WORK");
- for line in file.readlines():
- line = utils.re_comments.sub('', line).strip();
- if line == "":
- continue;
-
- maintainer_override = None;
- if type == "dsc":
- split_line = line.split(None, 2);
- if len(split_line) == 2:
- (package, section) = split_line;
- elif len(split_line) == 3:
- (package, section, maintainer_override) = split_line;
- else:
- utils.warn("'%s' does not break into 'package section [maintainer-override]'." % (line));
- c_error += 1;
- continue;
- priority = "source";
- else: # binary or udeb
- split_line = line.split(None, 3);
- if len(split_line) == 3:
- (package, priority, section) = split_line;
- elif len(split_line) == 4:
- (package, priority, section, maintainer_override) = split_line;
- else:
- utils.warn("'%s' does not break into 'package priority section [maintainer-override]'." % (line));
- c_error += 1;
- continue;
-
- section_id = db_access.get_section_id(section);
- if section_id == -1:
- utils.warn("'%s' is not a valid section. ['%s' in suite %s, component %s]." % (section, package, suite, component));
- c_error += 1;
- continue;
- priority_id = db_access.get_priority_id(priority);
- if priority_id == -1:
- utils.warn("'%s' is not a valid priority. ['%s' in suite %s, component %s]." % (priority, package, suite, component));
- c_error += 1;
- continue;
-
- if new.has_key(package):
- utils.warn("Can't insert duplicate entry for '%s'; ignoring all but the first. [suite %s, component %s]" % (package, suite, component));
- c_error += 1;
- continue;
- new[package] = "";
- if original.has_key(package):
- (old_priority_id, old_section_id, old_maintainer_override, old_priority, old_section) = original[package];
- if action == "add" or old_priority_id == priority_id and \
- old_section_id == section_id and \
- ((old_maintainer_override == maintainer_override) or \
- (old_maintainer_override == "" and maintainer_override == None)):
- # If it's unchanged or we're in 'add only' mode, ignore it
- c_skipped += 1;
- continue;
- else:
- # If it's changed, delete the old one so we can
- # reinsert it with the new information
- c_updated += 1;
- projectB.query("DELETE FROM override WHERE suite = %s AND component = %s AND package = '%s' AND type = %s"
- % (suite_id, component_id, package, type_id));
- # Log changes
- if old_priority_id != priority_id:
- Logger.log(["changed priority",package,old_priority,priority]);
- if old_section_id != section_id:
- Logger.log(["changed section",package,old_section,section]);
- if old_maintainer_override != maintainer_override:
- Logger.log(["changed maintainer override",package,old_maintainer_override,maintainer_override]);
- update_p = 1;
- else:
- c_added += 1;
- update_p = 0;
-
- if maintainer_override:
- projectB.query("INSERT INTO override (suite, component, type, package, priority, section, maintainer) VALUES (%s, %s, %s, '%s', %s, %s, '%s')"
- % (suite_id, component_id, type_id, package, priority_id, section_id, maintainer_override));
- else:
- projectB.query("INSERT INTO override (suite, component, type, package, priority, section,maintainer) VALUES (%s, %s, %s, '%s', %s, %s, '')"
- % (suite_id, component_id, type_id, package, priority_id, section_id));
-
- if not update_p:
- Logger.log(["new override",suite,component,type,package,priority,section,maintainer_override]);
-
- if not action == "add":
- # Delete any packages which were removed
- for package in original.keys():
- if not new.has_key(package):
- projectB.query("DELETE FROM override WHERE suite = %s AND component = %s AND package = '%s' AND type = %s"
- % (suite_id, component_id, package, type_id));
- c_removed += 1;
- Logger.log(["removed override",suite,component,type,package]);
-
- projectB.query("COMMIT WORK");
- if not Cnf["Natalie::Options::Quiet"]:
- print "Done in %d seconds. [Updated = %d, Added = %d, Removed = %d, Skipped = %d, Errors = %d]" % (int(time.time()-start_time), c_updated, c_added, c_removed, c_skipped, c_error);
- Logger.log(["set complete",c_updated, c_added, c_removed, c_skipped, c_error]);
-
-################################################################################
-
-def list(suite, component, type):
- suite_id = db_access.get_suite_id(suite);
- if suite_id == -1:
- utils.fubar("Suite '%s' not recognised." % (suite));
-
- component_id = db_access.get_component_id(component);
- if component_id == -1:
- utils.fubar("Component '%s' not recognised." % (component));
-
- type_id = db_access.get_override_type_id(type);
- if type_id == -1:
- utils.fubar("Type '%s' not recognised. (Valid types are deb, udeb and dsc)" % (type));
-
- if type == "dsc":
- q = projectB.query("SELECT o.package, s.section, o.maintainer FROM override o, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s AND o.section = s.id ORDER BY s.section, o.package" % (suite_id, component_id, type_id));
- for i in q.getresult():
- print utils.result_join(i);
- else:
- q = projectB.query("SELECT o.package, p.priority, s.section, o.maintainer, p.level FROM override o, priority p, section s WHERE o.suite = %s AND o.component = %s AND o.type = %s AND o.priority = p.id AND o.section = s.id ORDER BY s.section, p.level, o.package" % (suite_id, component_id, type_id));
- for i in q.getresult():
- print utils.result_join(i[:-1]);
-
-################################################################################
-
-def main ():
- global Cnf, projectB, Logger;
-
- Cnf = utils.get_conf();
- Arguments = [('a', "add", "Natalie::Options::Add"),
- ('c', "component", "Natalie::Options::Component", "HasArg"),
- ('h', "help", "Natalie::Options::Help"),
- ('l', "list", "Natalie::Options::List"),
- ('q', "quiet", "Natalie::Options::Quiet"),
- ('s', "suite", "Natalie::Options::Suite", "HasArg"),
- ('S', "set", "Natalie::Options::Set"),
- ('t', "type", "Natalie::Options::Type", "HasArg")];
-
- # Default arguments
- for i in [ "add", "help", "list", "quiet", "set" ]:
- if not Cnf.has_key("Natalie::Options::%s" % (i)):
- Cnf["Natalie::Options::%s" % (i)] = "";
- if not Cnf.has_key("Natalie::Options::Component"):
- Cnf["Natalie::Options::Component"] = "main";
- if not Cnf.has_key("Natalie::Options::Suite"):
- Cnf["Natalie::Options::Suite"] = "unstable";
- if not Cnf.has_key("Natalie::Options::Type"):
- Cnf["Natalie::Options::Type"] = "deb";
-
- file_list = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
-
- if Cnf["Natalie::Options::Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- action = None;
- for i in [ "add", "list", "set" ]:
- if Cnf["Natalie::Options::%s" % (i)]:
- if action:
- utils.fubar("Cannot perform more than one action at once.");
- action = i;
-
- (suite, component, type) = (Cnf["Natalie::Options::Suite"], Cnf["Natalie::Options::Component"], Cnf["Natalie::Options::Type"])
-
- if action == "list":
- list(suite, component, type);
- else:
- Logger = logging.Logger(Cnf, "natalie");
- if file_list:
- for file in file_list:
- process_file(utils.open_file(file), suite, component, type, action);
- else:
- process_file(sys.stdin, suite, component, type, action);
- Logger.close();
-
-#######################################################################################
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Populate the DB
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: neve,v 1.20 2004-06-17 14:59:57 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-###############################################################################
-
-# 04:36|<aj> elmo: you're making me waste 5 seconds per architecture!!!!!! YOU BASTARD!!!!!
-
-###############################################################################
-
-# This code is a horrible mess for two reasons:
-
-# (o) For Debian's usage, it's doing something like 160k INSERTs,
-# even on auric, that makes the program unusable unless we get
- # involved in all sorts of silly optimization games (local dicts to avoid
-# redundant SELECTS, using COPY FROM rather than INSERTS etc.)
-
-# (o) It's very site specific, because I don't expect to use this
-# script again in a hurry, and I don't want to spend any more time
-# on it than absolutely necessary.
-
-###############################################################################
-
-import commands, os, pg, re, sys, time;
-import apt_pkg;
-import db_access, utils;
-
-###############################################################################
-
-re_arch_from_filename = re.compile(r"binary-[^/]+")
-
-###############################################################################
-
-Cnf = None;
-projectB = None;
-files_id_cache = {};
-source_cache = {};
-arch_all_cache = {};
-binary_cache = {};
-location_path_cache = {};
-#
-files_id_serial = 0;
-source_id_serial = 0;
-src_associations_id_serial = 0;
-dsc_files_id_serial = 0;
-files_query_cache = None;
-source_query_cache = None;
-src_associations_query_cache = None;
-dsc_files_query_cache = None;
-orig_tar_gz_cache = {};
-#
-binaries_id_serial = 0;
-binaries_query_cache = None;
-bin_associations_id_serial = 0;
-bin_associations_query_cache = None;
-#
-source_cache_for_binaries = {};
-reject_message = "";
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: neve
-Initializes a projectB database from an existing archive
-
- # -a, --action actually perform the initialization
- -h, --help show this help and exit."""
- sys.exit(exit_code)
-
-###############################################################################
-
-def reject (str, prefix="Rejected: "):
- global reject_message;
- if str:
- reject_message += prefix + str + "\n";
-
-###############################################################################
-
-def check_signature (filename):
- if not utils.re_taint_free.match(os.path.basename(filename)):
- reject("!!WARNING!! tainted filename: '%s'." % (filename));
- return None;
-
- status_read, status_write = os.pipe();
- cmd = "gpgv --status-fd %s --keyring %s --keyring %s %s" \
- % (status_write, Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"], filename);
- (output, status, exit_status) = utils.gpgv_get_status_output(cmd, status_read, status_write);
-
- # Process the status-fd output
- keywords = {};
- bad = internal_error = "";
- for line in status.split('\n'):
- line = line.strip();
- if line == "":
- continue;
- split = line.split();
- if len(split) < 2:
- internal_error += "gpgv status line is malformed (< 2 atoms) ['%s'].\n" % (line);
- continue;
- (gnupg, keyword) = split[:2];
- if gnupg != "[GNUPG:]":
- internal_error += "gpgv status line is malformed (incorrect prefix '%s').\n" % (gnupg);
- continue;
- args = split[2:];
- if keywords.has_key(keyword) and keyword != "NODATA" and keyword != "SIGEXPIRED":
- internal_error += "found duplicate status token ('%s').\n" % (keyword);
- continue;
- else:
- keywords[keyword] = args;
-
- # If we failed to parse the status-fd output, let's just whine and bail now
- if internal_error:
- reject("internal error while performing signature check on %s." % (filename));
- reject(internal_error, "");
- reject("Please report the above errors to the Archive maintainers by replying to this mail.", "");
- return None;
-
- # Now check for obviously bad things in the processed output
- if keywords.has_key("SIGEXPIRED"):
- utils.warn("%s: signing key has expired." % (filename));
- if keywords.has_key("KEYREVOKED"):
- reject("key used to sign %s has been revoked." % (filename));
- bad = 1;
- if keywords.has_key("BADSIG"):
- reject("bad signature on %s." % (filename));
- bad = 1;
- if keywords.has_key("ERRSIG") and not keywords.has_key("NO_PUBKEY"):
- reject("failed to check signature on %s." % (filename));
- bad = 1;
- if keywords.has_key("NO_PUBKEY"):
- args = keywords["NO_PUBKEY"];
- if len(args) < 1:
- reject("internal error while checking signature on %s." % (filename));
- bad = 1;
- else:
- fingerprint = args[0];
- if keywords.has_key("BADARMOR"):
- reject("ascii armour of signature was corrupt in %s." % (filename));
- bad = 1;
- if keywords.has_key("NODATA"):
- utils.warn("no signature found for %s." % (filename));
- return "NOSIG";
- #reject("no signature found in %s." % (filename));
- #bad = 1;
-
- if bad:
- return None;
-
- # Next, check that gpgv exited with a zero return code
- if exit_status and not keywords.has_key("NO_PUBKEY"):
- reject("gpgv failed while checking %s." % (filename));
- if status.strip():
- reject(utils.prefix_multi_line_string(status, " [GPG status-fd output:] "), "");
- else:
- reject(utils.prefix_multi_line_string(output, " [GPG output:] "), "");
- return None;
-
- # Sanity check the good stuff we expect
- if not keywords.has_key("VALIDSIG"):
- if not keywords.has_key("NO_PUBKEY"):
- reject("signature on %s does not appear to be valid [No VALIDSIG]." % (filename));
- bad = 1;
- else:
- args = keywords["VALIDSIG"];
- if len(args) < 1:
- reject("internal error while checking signature on %s." % (filename));
- bad = 1;
- else:
- fingerprint = args[0];
- if not keywords.has_key("GOODSIG") and not keywords.has_key("NO_PUBKEY"):
- reject("signature on %s does not appear to be valid [No GOODSIG]." % (filename));
- bad = 1;
- if not keywords.has_key("SIG_ID") and not keywords.has_key("NO_PUBKEY"):
- reject("signature on %s does not appear to be valid [No SIG_ID]." % (filename));
- bad = 1;
-
- # Finally, ensure there's nothing left that we don't recognise
- known_keywords = utils.Dict(VALIDSIG="",SIG_ID="",GOODSIG="",BADSIG="",ERRSIG="",
- SIGEXPIRED="",KEYREVOKED="",NO_PUBKEY="",BADARMOR="",
- NODATA="");
-
- for keyword in keywords.keys():
- if not known_keywords.has_key(keyword):
- reject("found unknown status token '%s' from gpgv with args '%r' in %s." % (keyword, keywords[keyword], filename));
- bad = 1;
-
- if bad:
- return None;
- else:
- return fingerprint;
-
-################################################################################
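As a reading aid for the deleted check_signature above, here is a Python 3 sketch of its status-fd parsing loop; the function name parse_gpg_status and its two-value return are illustrative, not part of the original code.

```python
def parse_gpg_status(status_text):
    # Parse gpgv --status-fd output into {keyword: args}, mirroring the
    # loop in check_signature: every valid line looks like
    #   [GNUPG:] KEYWORD arg1 arg2 ...
    # NODATA and SIGEXPIRED may legitimately appear more than once;
    # any other duplicate keyword is treated as a parse error.
    keywords = {}
    errors = []
    for line in status_text.splitlines():
        line = line.strip()
        if not line:
            continue
        parts = line.split()
        if len(parts) < 2 or parts[0] != "[GNUPG:]":
            errors.append(line)
            continue
        keyword, args = parts[1], parts[2:]
        if keyword in keywords and keyword not in ("NODATA", "SIGEXPIRED"):
            errors.append("duplicate status token '%s'" % keyword)
            continue
        keywords[keyword] = args
    return keywords, errors
```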
-
-# Prepare a filename or directory (s) for use as file.filename by stripping
-# the longest suffix of the location (sub) that is a prefix of it.
-def poolify (s, sub):
- for i in xrange(len(sub)):
- if sub[i:] == s[0:len(sub)-i]:
- return s[len(sub)-i:];
- return s;
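poolify's suffix/prefix overlap is easier to see with a concrete path; this Python 3 re-statement (the name poolify_sketch and the example paths are mine) behaves the same way:

```python
def poolify_sketch(s, sub):
    # Strip the longest suffix of the location `sub` that is also a
    # prefix of `s`, exactly as poolify above does; if none overlaps,
    # return `s` unchanged.
    for i in range(len(sub)):
        if sub[i:] == s[0:len(sub) - i]:
            return s[len(sub) - i:]
    return s
```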
-
-def update_archives ():
- projectB.query("DELETE FROM archive")
- for archive in Cnf.SubTree("Archive").List():
- SubSec = Cnf.SubTree("Archive::%s" % (archive));
- projectB.query("INSERT INTO archive (name, origin_server, description) VALUES ('%s', '%s', '%s')"
- % (archive, SubSec["OriginServer"], SubSec["Description"]));
-
-def update_components ():
- projectB.query("DELETE FROM component")
- for component in Cnf.SubTree("Component").List():
- SubSec = Cnf.SubTree("Component::%s" % (component));
- projectB.query("INSERT INTO component (name, description, meets_dfsg) VALUES ('%s', '%s', '%s')" %
- (component, SubSec["Description"], SubSec["MeetsDFSG"]));
-
-def update_locations ():
- projectB.query("DELETE FROM location")
- for location in Cnf.SubTree("Location").List():
- SubSec = Cnf.SubTree("Location::%s" % (location));
- archive_id = db_access.get_archive_id(SubSec["archive"]);
- type = SubSec.Find("type");
- if type == "legacy-mixed":
- projectB.query("INSERT INTO location (path, archive, type) VALUES ('%s', %d, '%s')" % (location, archive_id, SubSec["type"]));
- else:
- for component in Cnf.SubTree("Component").List():
- component_id = db_access.get_component_id(component);
- projectB.query("INSERT INTO location (path, component, archive, type) VALUES ('%s', %d, %d, '%s')" %
- (location, component_id, archive_id, SubSec["type"]));
-
-def update_architectures ():
- projectB.query("DELETE FROM architecture")
- for arch in Cnf.SubTree("Architectures").List():
- projectB.query("INSERT INTO architecture (arch_string, description) VALUES ('%s', '%s')" % (arch, Cnf["Architectures::%s" % (arch)]))
-
-def update_suites ():
- projectB.query("DELETE FROM suite")
- for suite in Cnf.SubTree("Suite").List():
- SubSec = Cnf.SubTree("Suite::%s" %(suite))
- projectB.query("INSERT INTO suite (suite_name) VALUES ('%s')" % suite.lower());
- for i in ("Version", "Origin", "Description"):
- if SubSec.has_key(i):
- projectB.query("UPDATE suite SET %s = '%s' WHERE suite_name = '%s'" % (i.lower(), SubSec[i], suite.lower()))
- for architecture in Cnf.ValueList("Suite::%s::Architectures" % (suite)):
- architecture_id = db_access.get_architecture_id (architecture);
- projectB.query("INSERT INTO suite_architectures (suite, architecture) VALUES (currval('suite_id_seq'), %d)" % (architecture_id));
-
-def update_override_type():
- projectB.query("DELETE FROM override_type");
- for type in Cnf.ValueList("OverrideType"):
- projectB.query("INSERT INTO override_type (type) VALUES ('%s')" % (type));
-
-def update_priority():
- projectB.query("DELETE FROM priority");
- for priority in Cnf.SubTree("Priority").List():
- projectB.query("INSERT INTO priority (priority, level) VALUES ('%s', %s)" % (priority, Cnf["Priority::%s" % (priority)]));
-
-def update_section():
- projectB.query("DELETE FROM section");
- for component in Cnf.SubTree("Component").List():
- if Cnf["Natalie::ComponentPosition"] == "prefix":
- suffix = "";
- if component != 'main':
- prefix = component + '/';
- else:
- prefix = "";
- else:
- prefix = "";
- component = component.replace("non-US/", "");
- if component != 'main':
- suffix = '/' + component;
- else:
- suffix = "";
- for section in Cnf.ValueList("Section"):
- projectB.query("INSERT INTO section (section) VALUES ('%s%s%s')" % (prefix, section, suffix));
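The prefix/suffix branching in update_section can be summarised with a small helper; the name section_names and its argument shape are mine, not the original's, but the branch logic is the same: component as prefix ("contrib/net") or suffix ("net/contrib"), with main getting neither and non-US/ stripped only in suffix mode.

```python
def section_names(component, sections, position):
    # Mirror update_section: in "prefix" mode the component is prepended
    # (except for main); otherwise it is appended after stripping any
    # "non-US/" prefix (again except for main).
    if position == "prefix":
        prefix = component + '/' if component != 'main' else ''
        suffix = ''
    else:
        component = component.replace("non-US/", "")
        suffix = '/' + component if component != 'main' else ''
        prefix = ''
    return [prefix + section + suffix for section in sections]
```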
-
-def get_location_path(directory):
- global location_path_cache;
-
- if location_path_cache.has_key(directory):
- return location_path_cache[directory];
-
- q = projectB.query("SELECT DISTINCT path FROM location WHERE path ~ '%s'" % (directory));
- try:
- path = q.getresult()[0][0];
- except:
- utils.fubar("[neve] get_location_path(): Couldn't get path for %s" % (directory));
- location_path_cache[directory] = path;
- return path;
-
-################################################################################
-
-def get_or_set_files_id (filename, size, md5sum, location_id):
- global files_id_cache, files_id_serial, files_query_cache;
-
- cache_key = "~".join((filename, size, md5sum, repr(location_id)));
- if not files_id_cache.has_key(cache_key):
- files_id_serial += 1
- files_query_cache.write("%d\t%s\t%s\t%s\t%d\t\\N\n" % (files_id_serial, filename, size, md5sum, location_id));
- files_id_cache[cache_key] = files_id_serial
-
- return files_id_cache[cache_key]
-
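get_or_set_files_id relies on an in-memory cache plus a hand-allocated serial so that rows can later be bulk-loaded with COPY instead of row-by-row INSERTs; a self-contained Python 3 sketch of that pattern (the class name SerialCache is hypothetical):

```python
class SerialCache:
    # Cache-plus-serial pattern from get_or_set_files_id: the first
    # sighting of a key allocates a new id and emits one tab-separated
    # COPY row (\N being COPY's NULL); later sightings return the
    # cached id without emitting anything.
    def __init__(self):
        self.cache = {}
        self.serial = 0
        self.rows = []

    def get_or_set(self, filename, size, md5sum, location_id):
        key = "~".join((filename, size, md5sum, repr(location_id)))
        if key not in self.cache:
            self.serial += 1
            self.rows.append("%d\t%s\t%s\t%s\t%d\t\\N"
                             % (self.serial, filename, size, md5sum, location_id))
            self.cache[key] = self.serial
        return self.cache[key]
```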
-###############################################################################
-
-def process_sources (filename, suite, component, archive):
- global source_cache, source_query_cache, src_associations_query_cache, dsc_files_query_cache, source_id_serial, src_associations_id_serial, dsc_files_id_serial, source_cache_for_binaries, orig_tar_gz_cache, reject_message;
-
- suite = suite.lower();
- suite_id = db_access.get_suite_id(suite);
- try:
- file = utils.open_file (filename);
- except utils.cant_open_exc:
- utils.warn("can't open '%s'" % (filename));
- return;
- Scanner = apt_pkg.ParseTagFile(file);
- while Scanner.Step() != 0:
- package = Scanner.Section["package"];
- version = Scanner.Section["version"];
- directory = Scanner.Section["directory"];
- dsc_file = os.path.join(Cnf["Dir::Root"], directory, "%s_%s.dsc" % (package, utils.re_no_epoch.sub('', version)));
- # Sometimes the Directory path is a lie; check in the pool
- if not os.path.exists(dsc_file):
- if directory.split('/')[0] == "dists":
- directory = Cnf["Dir::PoolRoot"] + utils.poolify(package, component);
- dsc_file = os.path.join(Cnf["Dir::Root"], directory, "%s_%s.dsc" % (package, utils.re_no_epoch.sub('', version)));
- if not os.path.exists(dsc_file):
- utils.fubar("%s not found." % (dsc_file));
- install_date = time.strftime("%Y-%m-%d", time.localtime(os.path.getmtime(dsc_file)));
- fingerprint = check_signature(dsc_file);
- fingerprint_id = db_access.get_or_set_fingerprint_id(fingerprint);
- if reject_message:
- utils.fubar("%s: %s" % (dsc_file, reject_message));
- maintainer = Scanner.Section["maintainer"]
- maintainer = maintainer.replace("'", "\\'");
- maintainer_id = db_access.get_or_set_maintainer_id(maintainer);
- location = get_location_path(directory.split('/')[0]);
- location_id = db_access.get_location_id (location, component, archive);
- if not directory.endswith("/"):
- directory += '/';
- directory = poolify (directory, location);
- if directory != "" and not directory.endswith("/"):
- directory += '/';
- no_epoch_version = utils.re_no_epoch.sub('', version);
- # Add all files referenced by the .dsc to the files table
- ids = [];
- for line in Scanner.Section["files"].split('\n'):
- id = None;
- (md5sum, size, filename) = line.strip().split();
- # Don't duplicate .orig.tar.gz's
- if filename.endswith(".orig.tar.gz"):
- cache_key = "%s~%s~%s" % (filename, size, md5sum);
- if orig_tar_gz_cache.has_key(cache_key):
- id = orig_tar_gz_cache[cache_key];
- else:
- id = get_or_set_files_id (directory + filename, size, md5sum, location_id);
- orig_tar_gz_cache[cache_key] = id;
- else:
- id = get_or_set_files_id (directory + filename, size, md5sum, location_id);
- ids.append(id);
- # If this is the .dsc itself; save the ID for later.
- if filename.endswith(".dsc"):
- files_id = id;
- filename = directory + package + '_' + no_epoch_version + '.dsc'
- cache_key = "%s~%s" % (package, version);
- if not source_cache.has_key(cache_key):
- nasty_key = "%s~%s" % (package, version)
- source_id_serial += 1;
- if not source_cache_for_binaries.has_key(nasty_key):
- source_cache_for_binaries[nasty_key] = source_id_serial;
- tmp_source_id = source_id_serial;
- source_cache[cache_key] = source_id_serial;
- source_query_cache.write("%d\t%s\t%s\t%d\t%d\t%s\t%s\n" % (source_id_serial, package, version, maintainer_id, files_id, install_date, fingerprint_id))
- for id in ids:
- dsc_files_id_serial += 1;
- dsc_files_query_cache.write("%d\t%d\t%d\n" % (dsc_files_id_serial, tmp_source_id,id));
- else:
- tmp_source_id = source_cache[cache_key];
-
- src_associations_id_serial += 1;
- src_associations_query_cache.write("%d\t%d\t%d\n" % (src_associations_id_serial, suite_id, tmp_source_id))
-
- file.close();
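process_sources splits each line of a stanza's Files field into (md5sum, size, name); the per-line split on its own, as a Python 3 sketch (the helper name parse_files_field is mine):

```python
def parse_files_field(files_field):
    # Each line of a Sources/.dsc Files field is "md5sum size filename",
    # the same three-way split process_sources performs per line.
    entries = []
    for line in files_field.strip().split('\n'):
        md5sum, size, name = line.strip().split()
        entries.append((md5sum, int(size), name))
    return entries
```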
-
-###############################################################################
-
-def process_packages (filename, suite, component, archive):
- global arch_all_cache, binary_cache, binaries_id_serial, binaries_query_cache, bin_associations_id_serial, bin_associations_query_cache, reject_message;
-
- count_total = 0;
- count_bad = 0;
- suite = suite.lower();
- suite_id = db_access.get_suite_id(suite);
- try:
- file = utils.open_file (filename);
- except utils.cant_open_exc:
- utils.warn("can't open '%s'" % (filename));
- return;
- Scanner = apt_pkg.ParseTagFile(file);
- while Scanner.Step() != 0:
- package = Scanner.Section["package"]
- version = Scanner.Section["version"]
- maintainer = Scanner.Section["maintainer"]
- maintainer = maintainer.replace("'", "\\'")
- maintainer_id = db_access.get_or_set_maintainer_id(maintainer);
- architecture = Scanner.Section["architecture"]
- architecture_id = db_access.get_architecture_id (architecture);
- fingerprint = "NOSIG";
- fingerprint_id = db_access.get_or_set_fingerprint_id(fingerprint);
- if not Scanner.Section.has_key("source"):
- source = package
- else:
- source = Scanner.Section["source"]
- source_version = ""
- if source.find("(") != -1:
- m = utils.re_extract_src_version.match(source)
- source = m.group(1)
- source_version = m.group(2)
- if not source_version:
- source_version = version
- filename = Scanner.Section["filename"]
- location = get_location_path(filename.split('/')[0]);
- location_id = db_access.get_location_id (location, component, archive)
- filename = poolify (filename, location)
- if architecture == "all":
- filename = re_arch_from_filename.sub("binary-all", filename);
- cache_key = "%s~%s" % (source, source_version);
- source_id = source_cache_for_binaries.get(cache_key, None);
- size = Scanner.Section["size"];
- md5sum = Scanner.Section["md5sum"];
- files_id = get_or_set_files_id (filename, size, md5sum, location_id);
- type = "deb"; # FIXME
- cache_key = "%s~%s~%s~%d~%d~%d~%d" % (package, version, repr(source_id), architecture_id, location_id, files_id, suite_id);
- if not arch_all_cache.has_key(cache_key):
- arch_all_cache[cache_key] = 1;
- cache_key = "%s~%s~%s~%d" % (package, version, repr(source_id), architecture_id);
- if not binary_cache.has_key(cache_key):
- if not source_id:
- source_id = "\N";
- count_bad += 1;
- else:
- source_id = repr(source_id);
- binaries_id_serial += 1;
- binaries_query_cache.write("%d\t%s\t%s\t%d\t%s\t%d\t%d\t%s\t%s\n" % (binaries_id_serial, package, version, maintainer_id, source_id, architecture_id, files_id, type, fingerprint_id));
- binary_cache[cache_key] = binaries_id_serial;
- tmp_binaries_id = binaries_id_serial;
- else:
- tmp_binaries_id = binary_cache[cache_key];
-
- bin_associations_id_serial += 1;
- bin_associations_query_cache.write("%d\t%d\t%d\n" % (bin_associations_id_serial, suite_id, tmp_binaries_id));
- count_total += 1;
-
- file.close();
- if count_bad != 0:
- print "%d binary packages processed; %d with no source match which is %.2f%%" % (count_total, count_bad, (float(count_bad)/count_total)*100);
- else:
- print "%d binary packages processed; 0 with no source match which is 0%%" % (count_total);
-
-###############################################################################
-
-def do_sources(sources, suite, component, server):
- temp_filename = utils.temp_filename();
- (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (sources, temp_filename));
- if (result != 0):
- utils.fubar("Gunzip invocation failed!\n%s" % (output), result);
- print 'Processing '+sources+'...';
- process_sources (temp_filename, suite, component, server);
- os.unlink(temp_filename);
-
-###############################################################################
-
-def do_da_do_da ():
- global Cnf, projectB, query_cache, files_query_cache, source_query_cache, src_associations_query_cache, dsc_files_query_cache, bin_associations_query_cache, binaries_query_cache;
-
- Cnf = utils.get_conf();
- Arguments = [('a', "action", "Neve::Options::Action"),
- ('h', "help", "Neve::Options::Help")];
- for i in [ "action", "help" ]:
- if not Cnf.has_key("Neve::Options::%s" % (i)):
- Cnf["Neve::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Neve::Options")
- if Options["Help"]:
- usage();
-
- if not Options["Action"]:
- utils.warn("""no -a/--action given; not doing anything.
-Please read the documentation before running this script.
-""");
- usage(1);
-
- print "Re-Creating DB..."
- (result, output) = commands.getstatusoutput("psql -f init_pool.sql template1");
- if (result != 0):
- utils.fubar("psql invocation failed!\n", result);
- print output;
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
-
- db_access.init (Cnf, projectB);
-
- print "Adding static tables from conf file..."
- projectB.query("BEGIN WORK");
- update_architectures();
- update_components();
- update_archives();
- update_locations();
- update_suites();
- update_override_type();
- update_priority();
- update_section();
- projectB.query("COMMIT WORK");
-
- files_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"files","w");
- source_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"source","w");
- src_associations_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"src_associations","w");
- dsc_files_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"dsc_files","w");
- binaries_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"binaries","w");
- bin_associations_query_cache = utils.open_file(Cnf["Neve::ExportDir"]+"bin_associations","w");
-
- projectB.query("BEGIN WORK");
- # Process Sources files to populate `source' and friends
- for location in Cnf.SubTree("Location").List():
- SubSec = Cnf.SubTree("Location::%s" % (location));
- server = SubSec["Archive"];
- type = Cnf.Find("Location::%s::Type" % (location));
- if type == "legacy-mixed":
- sources = location + 'Sources.gz';
- suite = Cnf.Find("Location::%s::Suite" % (location));
- do_sources(sources, suite, "", server);
- elif type == "legacy" or type == "pool":
- for suite in Cnf.ValueList("Location::%s::Suites" % (location)):
- for component in Cnf.SubTree("Component").List():
- sources = Cnf["Dir::Root"] + "dists/" + Cnf["Suite::%s::CodeName" % (suite)] + '/' + component + '/source/' + 'Sources.gz';
- do_sources(sources, suite, component, server);
- else:
- utils.fubar("Unknown location type ('%s')." % (type));
-
- # Process Packages files to populate `binaries' and friends
-
- for location in Cnf.SubTree("Location").List():
- SubSec = Cnf.SubTree("Location::%s" % (location));
- server = SubSec["Archive"];
- type = Cnf.Find("Location::%s::Type" % (location));
- if type == "legacy-mixed":
- packages = location + 'Packages';
- suite = Cnf.Find("Location::%s::Suite" % (location));
- print 'Processing '+location+'...';
- process_packages (packages, suite, "", server);
- elif type == "legacy" or type == "pool":
- for suite in Cnf.ValueList("Location::%s::Suites" % (location)):
- for component in Cnf.SubTree("Component").List():
- architectures = filter(utils.real_arch,
- Cnf.ValueList("Suite::%s::Architectures" % (suite)));
- for architecture in architectures:
- packages = Cnf["Dir::Root"] + "dists/" + Cnf["Suite::%s::CodeName" % (suite)] + '/' + component + '/binary-' + architecture + '/Packages'
- print 'Processing '+packages+'...';
- process_packages (packages, suite, component, server);
-
- files_query_cache.close();
- source_query_cache.close();
- src_associations_query_cache.close();
- dsc_files_query_cache.close();
- binaries_query_cache.close();
- bin_associations_query_cache.close();
- print "Writing data to `files' table...";
- projectB.query("COPY files FROM '%s'" % (Cnf["Neve::ExportDir"]+"files"));
- print "Writing data to `source' table...";
- projectB.query("COPY source FROM '%s'" % (Cnf["Neve::ExportDir"]+"source"));
- print "Writing data to `src_associations' table...";
- projectB.query("COPY src_associations FROM '%s'" % (Cnf["Neve::ExportDir"]+"src_associations"));
- print "Writing data to `dsc_files' table...";
- projectB.query("COPY dsc_files FROM '%s'" % (Cnf["Neve::ExportDir"]+"dsc_files"));
- print "Writing data to `binaries' table...";
- projectB.query("COPY binaries FROM '%s'" % (Cnf["Neve::ExportDir"]+"binaries"));
- print "Writing data to `bin_associations' table...";
- projectB.query("COPY bin_associations FROM '%s'" % (Cnf["Neve::ExportDir"]+"bin_associations"));
- print "Committing...";
- projectB.query("COMMIT WORK");
-
- # Add the constraints and otherwise generally clean up the database.
- # See add_constraints.sql for more details...
-
- print "Running add_constraints.sql...";
- (result, output) = commands.getstatusoutput("psql %s < add_constraints.sql" % (Cnf["DB::Name"]));
- print output
- if (result != 0):
- utils.fubar("psql invocation failed!\n%s" % (output), result);
-
- return;
-
-################################################################################
-
-def main():
- utils.try_with_debug(do_da_do_da);
-
-################################################################################
-
-if __name__ == '__main__':
- main();
+++ /dev/null
-#!/usr/bin/env python
-
-# Copyright (C) 2004, 2005 James Troup <james@nocrew.org>
-# $Id: nina,v 1.2 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import glob, os, stat, time;
-import utils;
-
-################################################################################
-
-def main():
- Cnf = utils.get_conf()
- count = 0;
- os.chdir(Cnf["Dir::Queue::Done"])
- files = glob.glob("%s/*" % (Cnf["Dir::Queue::Done"]));
- for filename in files:
- if os.path.isfile(filename):
- mtime = time.gmtime(os.stat(filename)[stat.ST_MTIME]);
- dirname = time.strftime("%Y/%m/%d", mtime);
- if not os.path.exists(dirname):
- print "Creating: %s" % (dirname);
- os.makedirs(dirname);
- dest = dirname + '/' + os.path.basename(filename);
- if os.path.exists(dest):
- utils.fubar("%s already exists." % (dest));
- print "Move: %s -> %s" % (filename, dest);
- os.rename(filename, dest);
- count = count + 1;
- print "Moved %d files." % (count);
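nina buckets Done/ files by their mtime into YYYY/MM/DD directories; the date computation on its own, as a Python 3 sketch (the helper name dated_subdir is mine):

```python
import time

def dated_subdir(mtime_epoch):
    # Same YYYY/MM/DD layout nina's main() derives from each file's
    # mtime (in UTC, matching the time.gmtime call above).
    return time.strftime("%Y/%m/%d", time.gmtime(mtime_epoch))
```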
-
-############################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-base Base system (baseX_Y.tgz) general bugs
-install Installation system
-installation Installation system
-cdrom Installation system
-boot-floppy Installation system
-spam Spam (reassign spam to here so we can complain about it)
-press Press release issues
-kernel Problems with the Linux kernel, or that shipped with Debian
-project Problems related to project administration
-general General problems (e.g. "many manpages are mode 755")
-slink-cd Slink CD
-potato-cd Potato CD
-listarchives Problems with the WWW mailing list archives
-qa.debian.org The Quality Assurance group
-ftp.debian.org Problems with the FTP site
-www.debian.org Problems with the WWW site
-bugs.debian.org The bug tracking system, @bugs.debian.org
-nonus.debian.org Problems with the non-US FTP site
-lists.debian.org The mailing lists, debian-*@lists.debian.org
-wnpp Work-Needing and Prospective Packages list
-cdimage.debian.org CD Image issues
-tech-ctte The Debian Technical Committee (see the Constitution)
-mirrors Problems with the official mirrors
-security.debian.org The Debian Security Team
-installation-reports Reports of installation problems with stable & testing
-upgrade-reports Reports of upgrade problems for stable & testing
-release-notes Problems with the Release Notes
+++ /dev/null
-base Anthony Towns <debootstrap@packages.debian.org>
-install Debian Install Team <debian-boot@lists.debian.org>
-installation Debian Install Team <debian-boot@lists.debian.org>
-cdrom Debian CD-ROM Team <debian-cd@lists.debian.org>
-boot-floppy Debian Install Team <debian-boot@lists.debian.org>
-press press@debian.org
-bugs.debian.org Debian Bug Tracking Team <owner@bugs.debian.org>
-ftp.debian.org James Troup and others <ftpmaster@ftp-master.debian.org>
-qa.debian.org debian-qa@lists.debian.org
-nonus.debian.org Michael Beattie and others <ftpmaster@debian.org>
-www.debian.org Debian WWW Team <debian-www@lists.debian.org>
-mirrors Debian Mirrors Team <mirrors@debian.org>
-listarchives Debian List Archive Team <listarchives@debian.org>
-project debian-project@lists.debian.org
-general debian-devel@lists.debian.org
-kernel Debian Kernel Team <debian-kernel@lists.debian.org>
-lists.debian.org Debian Listmaster Team <listmaster@lists.debian.org>
-spam spam@debian.org
-slink-cd Steve McIntyre <stevem@chiark.greenend.org.uk>
-potato-cd Steve McIntyre <stevem@chiark.greenend.org.uk>
-wnpp wnpp@debian.org
-cdimage.debian.org Debian CD-ROM Team <debian-cd@lists.debian.org>
-tech-ctte Technical Committee <debian-ctte@lists.debian.org>
-security.debian.org Debian Security Team <team@security.debian.org>
-installation-reports Debian Install Team <debian-boot@lists.debian.org>
-upgrade-reports Debian Testing Group <debian-testing@lists.debian.org>
-release-notes Debian Documentation Team <debian-doc@lists.debian.org>
+++ /dev/null
-#!/usr/bin/env python
-
-# Check for obsolete binary packages
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: rene,v 1.23 2005-04-16 09:19:20 rmurray Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# ``If you're claiming that's a "problem" that needs to be "fixed",
-# you might as well write some letters to God about how unfair entropy
-# is while you're at it.'' -- 20020802143104.GA5628@azure.humbug.org.au
-
-## TODO: fix NBS looping for version, implement Dubious NBS, fix up output of duplicate source package stuff, improve experimental ?, add support for non-US ?, add overrides, avoid ANAIS for duplicated packages
-
-################################################################################
-
-import commands, pg, os, string, sys, time;
-import utils, db_access;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-suite_id = None;
-no_longer_in_suite = {}; # Really should be static to add_nbs, but I'm lazy
-
-source_binaries = {};
-source_versions = {};
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: rene
-Check for obsolete or duplicated packages.
-
- -h, --help show this help and exit.
- -m, --mode=MODE choose the MODE to run in (full or daily).
- -s, --suite=SUITE check suite SUITE."""
- sys.exit(exit_code)
-
-################################################################################
-
-def add_nbs(nbs_d, source, version, package):
- # Ensure the package is still in the suite (someone may have already removed it)
- if no_longer_in_suite.has_key(package):
- return;
- else:
- q = projectB.query("SELECT b.id FROM binaries b, bin_associations ba WHERE ba.bin = b.id AND ba.suite = %s AND b.package = '%s' LIMIT 1" % (suite_id, package));
- if not q.getresult():
- no_longer_in_suite[package] = "";
- return;
-
- nbs_d.setdefault(source, {})
- nbs_d[source].setdefault(version, {})
- nbs_d[source][version][package] = "";
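add_nbs builds a three-level source &gt; version &gt; package mapping; in Python 3 the chained setdefault calls collapse to one line (the helper name record_nbs is mine):

```python
def record_nbs(nbs_d, source, version, package):
    # Equivalent to the nested setdefault calls in add_nbs above:
    # nbs_d[source][version][package] = "", creating intermediate
    # dicts as needed.
    nbs_d.setdefault(source, {}).setdefault(version, {})[package] = ""
```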
-
-################################################################################
-
-# Check for packages built on architectures they shouldn't be.
-def do_anais(architecture, binaries_list, source):
- if architecture == "any" or architecture == "all":
- return "";
-
- anais_output = "";
- architectures = {};
- for arch in architecture.split():
- architectures[arch.strip()] = "";
- for binary in binaries_list:
- q = projectB.query("SELECT a.arch_string, b.version FROM binaries b, bin_associations ba, architecture a WHERE ba.suite = %s AND ba.bin = b.id AND b.architecture = a.id AND b.package = '%s'" % (suite_id, binary));
- ql = q.getresult();
- versions = [];
- for i in ql:
- arch = i[0];
- version = i[1];
- if architectures.has_key(arch):
- versions.append(version);
- versions.sort(apt_pkg.VersionCompare);
- if versions:
- latest_version = versions.pop()
- else:
- latest_version = None;
- # Check for 'invalid' architectures
- versions_d = {}
- for i in ql:
- arch = i[0];
- version = i[1];
- if not architectures.has_key(arch):
- versions_d.setdefault(version, [])
- versions_d[version].append(arch)
-
- if versions_d != {}:
- anais_output += "\n (*) %s_%s [%s]: %s\n" % (binary, latest_version, source, architecture);
- versions = versions_d.keys();
- versions.sort(apt_pkg.VersionCompare);
- for version in versions:
- arches = versions_d[version];
- arches.sort();
- anais_output += " o %s: %s\n" % (version, ", ".join(arches));
- return anais_output;
-
-################################################################################
-
-def do_nviu():
- experimental_id = db_access.get_suite_id("experimental");
- if experimental_id == -1:
- return;
- # Check for packages in experimental obsoleted by versions in unstable
- q = projectB.query("""
-SELECT s.source, s.version AS experimental, s2.version AS unstable
- FROM src_associations sa, source s, source s2, src_associations sa2
- WHERE sa.suite = %s AND sa2.suite = %d AND sa.source = s.id
- AND sa2.source = s2.id AND s.source = s2.source
- AND versioncmp(s.version, s2.version) < 0""" % (experimental_id,
- db_access.get_suite_id("unstable")));
- ql = q.getresult();
- if ql:
- nviu_to_remove = [];
- print "Newer version in unstable";
- print "-------------------------";
- print ;
- for i in ql:
- (source, experimental_version, unstable_version) = i;
- print " o %s (%s, %s)" % (source, experimental_version, unstable_version);
- nviu_to_remove.append(source);
- print
- print "Suggested command:"
- print " melanie -m \"[rene] NVIU\" -s experimental %s" % (" ".join(nviu_to_remove));
- print
-
-################################################################################
-
-def do_nbs(real_nbs):
- output = "Not Built from Source\n";
- output += "---------------------\n\n";
-
- nbs_to_remove = [];
- nbs_keys = real_nbs.keys();
- nbs_keys.sort();
- for source in nbs_keys:
- output += " * %s_%s builds: %s\n" % (source,
- source_versions.get(source, "??"),
- source_binaries.get(source, "(source does not exist)"));
- output += " but no longer builds:\n"
- versions = real_nbs[source].keys();
- versions.sort(apt_pkg.VersionCompare);
- for version in versions:
- packages = real_nbs[source][version].keys();
- packages.sort();
- for pkg in packages:
- nbs_to_remove.append(pkg);
- output += " o %s: %s\n" % (version, ", ".join(packages));
-
- output += "\n";
-
- if nbs_to_remove:
- print output;
-
- print "Suggested command:"
- print " melanie -m \"[rene] NBS\" -b %s" % (" ".join(nbs_to_remove));
- print
-
-################################################################################
-
-def do_dubious_nbs(dubious_nbs):
- print "Dubious NBS";
- print "-----------";
- print ;
-
- dubious_nbs_keys = dubious_nbs.keys();
- dubious_nbs_keys.sort();
- for source in dubious_nbs_keys:
- print " * %s_%s builds: %s" % (source,
- source_versions.get(source, "??"),
- source_binaries.get(source, "(source does not exist)"));
- print " won't admit to building:"
- versions = dubious_nbs[source].keys();
- versions.sort(apt_pkg.VersionCompare);
- for version in versions:
- packages = dubious_nbs[source][version].keys();
- packages.sort();
- print " o %s: %s" % (version, ", ".join(packages));
-
- print ;
-
-################################################################################
-
-def do_obsolete_source(duplicate_bins, bin2source):
- obsolete = {}
- for key in duplicate_bins.keys():
- (source_a, source_b) = key.split('~')
- for source in [ source_a, source_b ]:
- if not obsolete.has_key(source):
- if not source_binaries.has_key(source):
- # Source has already been removed
- continue;
- else:
- obsolete[source] = map(string.strip,
- source_binaries[source].split(','))
- for binary in duplicate_bins[key]:
- if bin2source.has_key(binary) and bin2source[binary]["source"] == source:
- continue
- if binary in obsolete[source]:
- obsolete[source].remove(binary)
-
- to_remove = []
- output = "Obsolete source package\n"
- output += "-----------------------\n\n"
- obsolete_keys = obsolete.keys()
- obsolete_keys.sort()
- for source in obsolete_keys:
- if not obsolete[source]:
- to_remove.append(source)
- output += " * %s (%s)\n" % (source, source_versions[source])
- for binary in map(string.strip, source_binaries[source].split(',')):
- if bin2source.has_key(binary):
- output += " o %s (%s) is built by %s.\n" \
- % (binary, bin2source[binary]["version"],
- bin2source[binary]["source"])
- else:
- output += " o %s is not built.\n" % binary
- output += "\n"
-
- if to_remove:
- print output;
-
- print "Suggested command:"
- print " melanie -S -p -m \"[rene] obsolete source package\" %s" % (" ".join(to_remove));
- print
-
-################################################################################
-
-def main ():
- global Cnf, projectB, suite_id, source_binaries, source_versions;
-
- Cnf = utils.get_conf();
-
- Arguments = [('h',"help","Rene::Options::Help"),
- ('m',"mode","Rene::Options::Mode", "HasArg"),
- ('s',"suite","Rene::Options::Suite","HasArg")];
- for i in [ "help" ]:
- if not Cnf.has_key("Rene::Options::%s" % (i)):
- Cnf["Rene::Options::%s" % (i)] = "";
- Cnf["Rene::Options::Suite"] = Cnf["Dinstall::DefaultSuite"];
-
- if not Cnf.has_key("Rene::Options::Mode"):
- Cnf["Rene::Options::Mode"] = "daily";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Rene::Options")
- if Options["Help"]:
- usage();
-
- # Set up checks based on mode
- if Options["Mode"] == "daily":
- checks = [ "nbs", "nviu", "obsolete source" ];
- elif Options["Mode"] == "full":
- checks = [ "nbs", "nviu", "obsolete source", "dubious nbs", "bnb", "bms", "anais" ];
- else:
- utils.warn("%s is not a recognised mode - only 'full' or 'daily' are understood." % (Options["Mode"]));
- usage(1);
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- bin_pkgs = {};
- src_pkgs = {};
- bin2source = {}
- bins_in_suite = {};
- nbs = {};
- source_versions = {};
-
- anais_output = "";
- duplicate_bins = {};
-
- suite = Options["Suite"]
- suite_id = db_access.get_suite_id(suite);
-
- bin_not_built = {};
-
- if "bnb" in checks:
- # Initialize a large hash table of all binary packages
- before = time.time();
- sys.stderr.write("[Getting a list of binary packages in %s..." % (suite));
- q = projectB.query("SELECT distinct b.package FROM binaries b, bin_associations ba WHERE ba.suite = %s AND ba.bin = b.id" % (suite_id));
- ql = q.getresult();
- sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
- for i in ql:
- bins_in_suite[i[0]] = "";
-
- # Checks based on the Sources files
- components = Cnf.ValueList("Suite::%s::Components" % (suite));
- for component in components:
- filename = "%s/dists/%s/%s/source/Sources.gz" % (Cnf["Dir::Root"], suite, component);
- # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
- temp_filename = utils.temp_filename();
- (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
- if (result != 0):
- sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output));
- sys.exit(result);
- sources = utils.open_file(temp_filename);
- Sources = apt_pkg.ParseTagFile(sources);
- while Sources.Step():
- source = Sources.Section.Find('Package');
- source_version = Sources.Section.Find('Version');
- architecture = Sources.Section.Find('Architecture');
- binaries = Sources.Section.Find('Binary');
- binaries_list = map(string.strip, binaries.split(','));
-
- if "bnb" in checks:
- # Check for binaries not built on any architecture.
- for binary in binaries_list:
- if not bins_in_suite.has_key(binary):
- bin_not_built.setdefault(source, {})
- bin_not_built[source][binary] = "";
-
- if "anais" in checks:
- anais_output += do_anais(architecture, binaries_list, source);
-
- # Check for duplicated packages and build indices for checking "no source" later
- source_index = component + '/' + source;
- if src_pkgs.has_key(source):
- print " %s is a duplicated source package (%s and %s)" % (source, source_index, src_pkgs[source]);
- src_pkgs[source] = source_index;
- for binary in binaries_list:
- if bin_pkgs.has_key(binary):
- key_list = [ source, bin_pkgs[binary] ]
- key_list.sort()
- key = '~'.join(key_list)
- duplicate_bins.setdefault(key, [])
- duplicate_bins[key].append(binary);
- bin_pkgs[binary] = source;
- source_binaries[source] = binaries;
- source_versions[source] = source_version;
-
- sources.close();
- os.unlink(temp_filename);
-
- # Checks based on the Packages files
- for component in components + ['main/debian-installer']:
- architectures = filter(utils.real_arch, Cnf.ValueList("Suite::%s::Architectures" % (suite)));
- for architecture in architectures:
- filename = "%s/dists/%s/%s/binary-%s/Packages.gz" % (Cnf["Dir::Root"], suite, component, architecture);
- # apt_pkg.ParseTagFile needs a real file handle
- temp_filename = utils.temp_filename();
- (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
- if (result != 0):
- sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output));
- sys.exit(result);
- packages = utils.open_file(temp_filename);
- Packages = apt_pkg.ParseTagFile(packages);
- while Packages.Step():
- package = Packages.Section.Find('Package');
- source = Packages.Section.Find('Source', "");
- version = Packages.Section.Find('Version');
- if source == "":
- source = package;
- if bin2source.has_key(package) and \
- apt_pkg.VersionCompare(version, bin2source[package]["version"]) > 0:
- bin2source[package]["version"] = version
- bin2source[package]["source"] = source
- else:
- bin2source[package] = {}
- bin2source[package]["version"] = version
- bin2source[package]["source"] = source
- if source.find("(") != -1:
- m = utils.re_extract_src_version.match(source);
- source = m.group(1);
- version = m.group(2);
- if not bin_pkgs.has_key(package):
- nbs.setdefault(source,{})
- nbs[source].setdefault(package, {})
- nbs[source][package][version] = "";
- else:
- previous_source = bin_pkgs[package]
- if previous_source != source:
- key_list = [ source, previous_source ]
- key_list.sort()
- key = '~'.join(key_list)
- duplicate_bins.setdefault(key, [])
- if package not in duplicate_bins[key]:
- duplicate_bins[key].append(package)
- packages.close();
- os.unlink(temp_filename);
-
- if "obsolete source" in checks:
- do_obsolete_source(duplicate_bins, bin2source)
-
- # Distinguish dubious (version numbers match) and 'real' NBS (they don't)
- dubious_nbs = {};
- real_nbs = {};
- for source in nbs.keys():
- for package in nbs[source].keys():
- versions = nbs[source][package].keys();
- versions.sort(apt_pkg.VersionCompare);
- latest_version = versions.pop();
- source_version = source_versions.get(source,"0");
- if apt_pkg.VersionCompare(latest_version, source_version) == 0:
- add_nbs(dubious_nbs, source, latest_version, package);
- else:
- add_nbs(real_nbs, source, latest_version, package);
-
- if "nviu" in checks:
- do_nviu();
-
- if "nbs" in checks:
- do_nbs(real_nbs);
-
- ###
-
- if Options["Mode"] == "full":
- print "="*75
- print
-
- if "bnb" in checks:
- print "Unbuilt binary packages";
- print "-----------------------";
- print
- keys = bin_not_built.keys();
- keys.sort();
- for source in keys:
- binaries = bin_not_built[source].keys();
- binaries.sort();
- print " o %s: %s" % (source, ", ".join(binaries));
- print ;
-
- if "bms" in checks:
- print "Built from multiple source packages";
- print "-----------------------------------";
- print ;
- keys = duplicate_bins.keys();
- keys.sort();
- for key in keys:
- (source_a, source_b) = key.split("~");
- print " o %s & %s => %s" % (source_a, source_b, ", ".join(duplicate_bins[key]));
- print ;
-
- if "anais" in checks:
- print "Architecture Not Allowed In Source";
- print "----------------------------------";
- print anais_output;
- print ;
-
- if "dubious nbs" in checks:
- do_dubious_nbs(dubious_nbs);
-
-
-################################################################################
-
-if __name__ == '__main__':
- main()
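The deleted `rene` above splits orphaned binaries into "dubious" NBS (the latest orphaned version still matches the source version) and "real" NBS (it does not) before suggesting removals. A minimal standalone sketch of that classification, with hypothetical names and a pluggable comparator standing in for `apt_pkg.VersionCompare`:

```python
from functools import cmp_to_key

def classify_nbs(nbs, source_versions, version_compare):
    """nbs maps source -> package -> list of orphaned binary versions.

    Returns (dubious, real): a binary lands in dubious when its newest
    orphaned version equals the source's current version (likely just
    awaiting decrufting), and in real NBS otherwise. Both results use
    the same source -> version -> [packages] shape rene prints from.
    """
    dubious, real = {}, {}
    for source, packages in nbs.items():
        src_ver = source_versions.get(source, "0")
        for package, versions in packages.items():
            # Pick the newest version under the supplied ordering.
            latest = sorted(versions, key=cmp_to_key(version_compare))[-1]
            target = dubious if version_compare(latest, src_ver) == 0 else real
            target.setdefault(source, {}).setdefault(latest, []).append(package)
    return dubious, real
```

With a naive lexical comparator this reproduces the split rene's `main` performs with `apt_pkg.VersionCompare` before calling `do_nbs` and `do_dubious_nbs`.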
+++ /dev/null
-#!/usr/bin/env python
-
-# rhona, cleans up unassociated binary and source packages
-# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
-# $Id: rhona,v 1.29 2005-11-25 06:59:45 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# 07:05|<elmo> well.. *shrug*.. no, probably not.. but to fix it,
-# | we're going to have to implement reference counting
-# | through dependencies.. do we really want to go down
-# | that road?
-#
-# 07:05|<Culus> elmo: Augh! <brain jumps out of skull>
-
-################################################################################
-
-import os, pg, stat, sys, time
-import apt_pkg
-import utils
-
-################################################################################
-
-projectB = None;
-Cnf = None;
-Options = None;
-now_date = None; # mark newly "deleted" things as deleted "now"
-delete_date = None; # delete things marked "deleted" earlier than this
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: rhona [OPTIONS]
-Clean old packages from suites.
-
- -n, --no-action don't do anything
- -h, --help show this help and exit"""
- sys.exit(exit_code)
-
-################################################################################
-
-def check_binaries():
- global delete_date, now_date;
-
- print "Checking for orphaned binary packages..."
-
- # Get the list of binary packages not in a suite and mark them for
- # deletion.
- q = projectB.query("""
-SELECT b.file FROM binaries b, files f
- WHERE f.last_used IS NULL AND b.file = f.id
- AND NOT EXISTS (SELECT 1 FROM bin_associations ba WHERE ba.bin = b.id)""");
- ql = q.getresult();
-
- projectB.query("BEGIN WORK");
- for i in ql:
- file_id = i[0];
- projectB.query("UPDATE files SET last_used = '%s' WHERE id = %s AND last_used IS NULL" % (now_date, file_id))
- projectB.query("COMMIT WORK");
-
- # Check for any binaries which are marked for eventual deletion
- # but are now used again.
- q = projectB.query("""
-SELECT b.file FROM binaries b, files f
- WHERE f.last_used IS NOT NULL AND f.id = b.file
- AND EXISTS (SELECT 1 FROM bin_associations ba WHERE ba.bin = b.id)""");
- ql = q.getresult();
-
- projectB.query("BEGIN WORK");
- for i in ql:
- file_id = i[0];
- projectB.query("UPDATE files SET last_used = NULL WHERE id = %s" % (file_id));
- projectB.query("COMMIT WORK");
-
-########################################
-
-def check_sources():
- global delete_date, now_date;
-
- print "Checking for orphaned source packages..."
-
- # Get the list of source packages not in a suite and not used by
- # any binaries.
- q = projectB.query("""
-SELECT s.id, s.file FROM source s, files f
- WHERE f.last_used IS NULL AND s.file = f.id
- AND NOT EXISTS (SELECT 1 FROM src_associations sa WHERE sa.source = s.id)
- AND NOT EXISTS (SELECT 1 FROM binaries b WHERE b.source = s.id)""");
-
- #### XXX: this should ignore cases where the files for the binary b
- #### have been marked for deletion (so the delay between bins go
- #### byebye and sources go byebye is 0 instead of StayOfExecution)
-
- ql = q.getresult();
-
- projectB.query("BEGIN WORK");
- for i in ql:
- source_id = i[0];
- dsc_file_id = i[1];
-
- # Mark the .dsc file for deletion
- projectB.query("UPDATE files SET last_used = '%s' WHERE id = %s AND last_used IS NULL" % (now_date, dsc_file_id))
- # Mark all other files references by .dsc too if they're not used by anyone else
- x = projectB.query("SELECT f.id FROM files f, dsc_files d WHERE d.source = %s AND d.file = f.id" % (source_id));
- for j in x.getresult():
- file_id = j[0];
- y = projectB.query("SELECT id FROM dsc_files d WHERE d.file = %s" % (file_id));
- if len(y.getresult()) == 1:
- projectB.query("UPDATE files SET last_used = '%s' WHERE id = %s AND last_used IS NULL" % (now_date, file_id));
- projectB.query("COMMIT WORK");
-
- # Check for any sources which are marked for deletion but which
- # are now used again.
-
- q = projectB.query("""
-SELECT f.id FROM source s, files f, dsc_files df
- WHERE f.last_used IS NOT NULL AND s.id = df.source AND df.file = f.id
- AND ((EXISTS (SELECT 1 FROM src_associations sa WHERE sa.source = s.id))
- OR (EXISTS (SELECT 1 FROM binaries b WHERE b.source = s.id)))""");
-
- #### XXX: this should also handle deleted binaries specially (ie, not
- #### reinstate sources because of them)
-
- ql = q.getresult();
- # Could be done in SQL; but left this way for hysterical raisins
- # [and freedom to innovate don'cha know?]
- projectB.query("BEGIN WORK");
- for i in ql:
- file_id = i[0];
- projectB.query("UPDATE files SET last_used = NULL WHERE id = %s" % (file_id));
- projectB.query("COMMIT WORK");
-
-########################################
-
-def check_files():
- global delete_date, now_date;
-
- # FIXME: this is evil; nothing should ever be in this state. if
- # they are, it's a bug and the files should not be auto-deleted.
-
- return;
-
- print "Checking for unused files..."
- q = projectB.query("""
-SELECT id FROM files f
- WHERE NOT EXISTS (SELECT 1 FROM binaries b WHERE b.file = f.id)
- AND NOT EXISTS (SELECT 1 FROM dsc_files df WHERE df.file = f.id)""");
-
- projectB.query("BEGIN WORK");
- for i in q.getresult():
- file_id = i[0];
- projectB.query("UPDATE files SET last_used = '%s' WHERE id = %s" % (now_date, file_id));
- projectB.query("COMMIT WORK");
-
-def clean_binaries():
- global delete_date, now_date;
-
- # We do this here so that the binaries we remove will have their
- # source also removed (if possible).
-
- # XXX: why doesn't this remove the files here as well? I don't think it
- # buys anything keeping this separate
- print "Cleaning binaries from the DB..."
- if not Options["No-Action"]:
- before = time.time();
- sys.stdout.write("[Deleting from binaries table... ");
- sys.stderr.write("DELETE FROM binaries WHERE EXISTS (SELECT 1 FROM files WHERE binaries.file = files.id AND files.last_used <= '%s')\n" % (delete_date));
- projectB.query("DELETE FROM binaries WHERE EXISTS (SELECT 1 FROM files WHERE binaries.file = files.id AND files.last_used <= '%s')" % (delete_date));
- sys.stdout.write("done. (%d seconds)]\n" % (int(time.time()-before)));
-
-########################################
-
-def clean():
- global delete_date, now_date;
- count = 0;
- size = 0;
-
- print "Cleaning out packages..."
-
- date = time.strftime("%Y-%m-%d");
- dest = Cnf["Dir::Morgue"] + '/' + Cnf["Rhona::MorgueSubDir"] + '/' + date;
- if not os.path.exists(dest):
- os.mkdir(dest);
-
- # Delete from source
- if not Options["No-Action"]:
- before = time.time();
- sys.stdout.write("[Deleting from source table... ");
- projectB.query("DELETE FROM dsc_files WHERE EXISTS (SELECT 1 FROM source s, files f, dsc_files df WHERE f.last_used <= '%s' AND s.file = f.id AND s.id = df.source AND df.id = dsc_files.id)" % (delete_date));
- projectB.query("DELETE FROM source WHERE EXISTS (SELECT 1 FROM files WHERE source.file = files.id AND files.last_used <= '%s')" % (delete_date));
- sys.stdout.write("done. (%d seconds)]\n" % (int(time.time()-before)));
-
- # Delete files from the pool
- q = projectB.query("SELECT l.path, f.filename FROM location l, files f WHERE f.last_used <= '%s' AND l.id = f.location" % (delete_date));
- for i in q.getresult():
- filename = i[0] + i[1];
- if not os.path.exists(filename):
- utils.warn("can not find '%s'." % (filename));
- continue;
- if os.path.isfile(filename):
- if os.path.islink(filename):
- count += 1;
- if Options["No-Action"]:
- print "Removing symlink %s..." % (filename);
- else:
- os.unlink(filename);
- else:
- size += os.stat(filename)[stat.ST_SIZE];
- count += 1;
-
- dest_filename = dest + '/' + os.path.basename(filename);
- # If the destination file exists; try to find another filename to use
- if os.path.exists(dest_filename):
- dest_filename = utils.find_next_free(dest_filename);
-
- if Options["No-Action"]:
- print "Cleaning %s -> %s ..." % (filename, dest_filename);
- else:
- utils.move(filename, dest_filename);
- else:
- utils.fubar("%s is neither symlink nor file?!" % (filename));
-
- # Delete from the 'files' table
- if not Options["No-Action"]:
- before = time.time();
- sys.stdout.write("[Deleting from files table... ");
- projectB.query("DELETE FROM files WHERE last_used <= '%s'" % (delete_date));
- sys.stdout.write("done. (%d seconds)]\n" % (int(time.time()-before)));
- if count > 0:
- sys.stderr.write("Cleaned %d files, %s.\n" % (count, utils.size_type(size)));
-
-################################################################################
-
-def clean_maintainers():
- print "Cleaning out unused Maintainer entries..."
-
- q = projectB.query("""
-SELECT m.id FROM maintainer m
- WHERE NOT EXISTS (SELECT 1 FROM binaries b WHERE b.maintainer = m.id)
- AND NOT EXISTS (SELECT 1 FROM source s WHERE s.maintainer = m.id)""");
- ql = q.getresult();
-
- count = 0;
- projectB.query("BEGIN WORK");
- for i in ql:
- maintainer_id = i[0];
- if not Options["No-Action"]:
- projectB.query("DELETE FROM maintainer WHERE id = %s" % (maintainer_id));
- count += 1;
- projectB.query("COMMIT WORK");
-
- if count > 0:
- sys.stderr.write("Cleared out %d maintainer entries.\n" % (count));
-
-################################################################################
-
-def clean_fingerprints():
- print "Cleaning out unused fingerprint entries..."
-
- q = projectB.query("""
-SELECT f.id FROM fingerprint f
- WHERE NOT EXISTS (SELECT 1 FROM binaries b WHERE b.sig_fpr = f.id)
- AND NOT EXISTS (SELECT 1 FROM source s WHERE s.sig_fpr = f.id)""");
- ql = q.getresult();
-
- count = 0;
- projectB.query("BEGIN WORK");
- for i in ql:
- fingerprint_id = i[0];
- if not Options["No-Action"]:
- projectB.query("DELETE FROM fingerprint WHERE id = %s" % (fingerprint_id));
- count += 1;
- projectB.query("COMMIT WORK");
-
- if count > 0:
- sys.stderr.write("Cleared out %d fingerprint entries.\n" % (count));
-
-################################################################################
-
-def clean_queue_build():
- global now_date;
-
- if not Cnf.ValueList("Dinstall::QueueBuildSuites") or Options["No-Action"]:
- return;
-
- print "Cleaning out queue build symlinks..."
-
- our_delete_date = time.strftime("%Y-%m-%d %H:%M", time.localtime(time.time()-int(Cnf["Rhona::QueueBuildStayOfExecution"])));
- count = 0;
-
- q = projectB.query("SELECT filename FROM queue_build WHERE last_used <= '%s'" % (our_delete_date));
- for i in q.getresult():
- filename = i[0];
- if not os.path.exists(filename):
- utils.warn("%s (from queue_build) doesn't exist." % (filename));
- continue;
- if not Cnf.FindB("Dinstall::SecurityQueueBuild") and not os.path.islink(filename):
- utils.fubar("%s (from queue_build) should be a symlink but isn't." % (filename));
- os.unlink(filename);
- count += 1;
- projectB.query("DELETE FROM queue_build WHERE last_used <= '%s'" % (our_delete_date));
-
- if count:
- sys.stderr.write("Cleaned %d queue_build files.\n" % (count));
-
-################################################################################
-
-def main():
- global Cnf, Options, projectB, delete_date, now_date;
-
- Cnf = utils.get_conf()
- for i in ["Help", "No-Action" ]:
- if not Cnf.has_key("Rhona::Options::%s" % (i)):
- Cnf["Rhona::Options::%s" % (i)] = "";
-
- Arguments = [('h',"help","Rhona::Options::Help"),
- ('n',"no-action","Rhona::Options::No-Action")];
-
- apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Rhona::Options")
-
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
-
- now_date = time.strftime("%Y-%m-%d %H:%M");
- delete_date = time.strftime("%Y-%m-%d %H:%M", time.localtime(time.time()-int(Cnf["Rhona::StayOfExecution"])));
-
- check_binaries();
- clean_binaries();
- check_sources();
- check_files();
- clean();
- clean_maintainers();
- clean_fingerprints();
- clean_queue_build();
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
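`rhona` above never deletes an unreferenced file immediately: it first stamps `files.last_used` with the current date, clears the stamp again if the file gains a reference (a "reprieve"), and only physically removes files whose stamp is older than the `Rhona::StayOfExecution` grace period. A sketch of that mark-then-expire scheme over plain dicts, with hypothetical field names in place of the SQL tables:

```python
import time

STAY_OF_EXECUTION = 129600  # seconds; hypothetical value -- rhona reads
                            # Rhona::StayOfExecution from its config

def mark_unused(files, now=None):
    """Phase one: stamp unreferenced files with a deletion date
    (UPDATE files SET last_used = now) and reprieve any stamped
    file that is referenced again (SET last_used = NULL)."""
    now = time.time() if now is None else now
    for f in files:
        if f["refcount"] == 0 and f["last_used"] is None:
            f["last_used"] = now
        elif f["refcount"] > 0 and f["last_used"] is not None:
            f["last_used"] = None

def expired(files, now=None):
    """Phase two: only files whose grace period has fully elapsed
    are candidates for the morgue / DELETE FROM files."""
    now = time.time() if now is None else now
    return [f for f in files
            if f["last_used"] is not None
            and now - f["last_used"] >= STAY_OF_EXECUTION]
```

The gap between the two phases is what gives a mistakenly-orphaned package time to be rescued before `clean()` moves it to the morgue.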
+++ /dev/null
-#!/usr/bin/env python
-
-# Check for users with no packages in the archive
-# Copyright (C) 2003 James Troup <james@nocrew.org>
-# $Id: rosamund,v 1.1 2003-09-07 13:48:51 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import ldap, pg, sys, time;
-import apt_pkg;
-import utils;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: rosamund
-Checks for users with no packages in the archive
-
- -h, --help show this help and exit."""
- sys.exit(exit_code)
-
-################################################################################
-
-def get_ldap_value(entry, value):
- ret = entry.get(value);
- if not ret:
- return "";
- else:
- # FIXME: what about > 0 ?
- return ret[0];
-
-def main():
- global Cnf, projectB;
-
- Cnf = utils.get_conf()
- Arguments = [('h',"help","Rosamund::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Rosamund::Options::%s" % (i)):
- Cnf["Rosamund::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Rosamund::Options")
- if Options["Help"]:
- usage();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
-
- before = time.time();
- sys.stderr.write("[Getting info from the LDAP server...");
- LDAPDn = Cnf["Emilie::LDAPDn"];
- LDAPServer = Cnf["Emilie::LDAPServer"];
- l = ldap.open(LDAPServer);
- l.simple_bind_s("","");
- Attrs = l.search_s(LDAPDn, ldap.SCOPE_ONELEVEL,
- "(&(keyfingerprint=*)(gidnumber=%s))" % (Cnf["Julia::ValidGID"]),
- ["uid", "cn", "mn", "sn", "createtimestamp"]);
- sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
-
-
- db_uid = {};
- db_unstable_uid = {};
-
- before = time.time();
- sys.stderr.write("[Getting UID info for entire archive...");
- q = projectB.query("SELECT DISTINCT u.uid FROM uid u, fingerprint f WHERE f.uid = u.id;");
- sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
- for i in q.getresult():
- db_uid[i[0]] = "";
-
- before = time.time();
- sys.stderr.write("[Getting UID info for unstable...");
- q = projectB.query("""
-SELECT DISTINCT u.uid FROM suite su, src_associations sa, source s, fingerprint f, uid u
- WHERE f.uid = u.id AND sa.source = s.id AND sa.suite = su.id
- AND su.suite_name = 'unstable' AND s.sig_fpr = f.id
-UNION
-SELECT DISTINCT u.uid FROM suite su, bin_associations ba, binaries b, fingerprint f, uid u
- WHERE f.uid = u.id AND ba.bin = b.id AND ba.suite = su.id
- AND su.suite_name = 'unstable' AND b.sig_fpr = f.id""");
- sys.stderr.write("done. (%d seconds)]\n" % (int(time.time()-before)));
- for i in q.getresult():
- db_unstable_uid[i[0]] = "";
-
- now = time.time();
-
- for i in Attrs:
- entry = i[1];
- uid = entry["uid"][0];
- created = time.mktime(time.strptime(entry["createtimestamp"][0][:8], '%Y%m%d'));
- diff = now - created;
- # 31536000 is 1 year in seconds, i.e. 60 * 60 * 24 * 365
- if diff < 31536000 / 2:
- when = "Less than 6 months ago";
- elif diff < 31536000:
- when = "Less than 1 year ago";
- elif diff < 31536000 * 1.5:
- when = "Less than 18 months ago";
- elif diff < 31536000 * 2:
- when = "Less than 2 years ago";
- elif diff < 31536000 * 3:
- when = "Less than 3 years ago";
- else:
- when = "More than 3 years ago";
- name = " ".join([get_ldap_value(entry, "cn"),
- get_ldap_value(entry, "mn"),
- get_ldap_value(entry, "sn")]);
- if not db_uid.has_key(uid):
- print "NONE %s (%s) %s" % (uid, name, when);
- else:
- if not db_unstable_uid.has_key(uid):
- print "NOT_UNSTABLE %s (%s) %s" % (uid, name, when);
-
-############################################################
-
-if __name__ == '__main__':
- main()
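`rosamund` above labels each LDAP account by age using half-year/year thresholds derived from the `createtimestamp`. The bucketing logic, extracted as a pure function (same thresholds and strings as the deleted script; the function name is ours):

```python
YEAR = 31536000  # 1 year in seconds, i.e. 60 * 60 * 24 * 365

def account_age_bucket(created, now):
    """Label an account's age the way rosamund annotates its output."""
    diff = now - created
    if diff < YEAR / 2:
        return "Less than 6 months ago"
    elif diff < YEAR:
        return "Less than 1 year ago"
    elif diff < YEAR * 1.5:
        return "Less than 18 months ago"
    elif diff < YEAR * 2:
        return "Less than 2 years ago"
    elif diff < YEAR * 3:
        return "Less than 3 years ago"
    return "More than 3 years ago"
```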
+++ /dev/null
-#!/usr/bin/env python
-
-# Initial setup of an archive
-# Copyright (C) 2002, 2004 James Troup <james@nocrew.org>
-# $Id: rose,v 1.4 2004-03-11 00:20:51 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, sys;
-import utils;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-AptCnf = None;
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: rose
-Creates directories for an archive based on katie.conf configuration file.
-
- -h, --help show this help and exit."""
- sys.exit(exit_code)
-
-################################################################################
-
-def do_dir(target, config_name):
- if os.path.exists(target):
- if not os.path.isdir(target):
- utils.fubar("%s (%s) is not a directory." % (target, config_name));
- else:
- print "Creating %s ..." % (target);
- os.makedirs(target);
-
-def process_file(config, config_name):
- if config.has_key(config_name):
- target = os.path.dirname(config[config_name]);
- do_dir(target, config_name);
-
-def process_tree(config, tree):
- for entry in config.SubTree(tree).List():
- entry = entry.lower();
- if tree == "Dir":
- if entry in [ "poolroot", "queue" , "morguereject" ]:
- continue;
- config_name = "%s::%s" % (tree, entry);
- target = config[config_name];
- do_dir(target, config_name);
-
-def process_morguesubdir(subdir):
- config_name = "%s::MorgueSubDir" % (subdir);
- if Cnf.has_key(config_name):
- target = os.path.join(Cnf["Dir::Morgue"], Cnf[config_name]);
- do_dir(target, config_name);
-
-######################################################################
-
-def create_directories():
- # Process directories from apt.conf
- process_tree(Cnf, "Dir");
- process_tree(Cnf, "Dir::Queue");
- for file in [ "Dinstall::LockFile", "Melanie::LogFile", "Neve::ExportDir" ]:
- process_file(Cnf, file);
- for subdir in [ "Shania", "Rhona" ]:
- process_morguesubdir(subdir);
-
- # Process directories from apt.conf
- process_tree(AptCnf, "Dir");
- for tree in AptCnf.SubTree("Tree").List():
- config_name = "Tree::%s" % (tree);
- tree_dir = os.path.join(Cnf["Dir::Root"], tree);
- do_dir(tree_dir, tree);
- for file in [ "FileList", "SourceFileList" ]:
- process_file(AptCnf, "%s::%s" % (config_name, file));
- for component in AptCnf["%s::Sections" % (config_name)].split():
- for architecture in AptCnf["%s::Architectures" % (config_name)].split():
- if architecture != "source":
- architecture = "binary-"+architecture;
- target = os.path.join(tree_dir,component,architecture);
- do_dir(target, "%s, %s, %s" % (tree, component, architecture));
-
-
-################################################################################
-
-def main ():
- global AptCnf, Cnf, projectB;
-
- Cnf = utils.get_conf()
- Arguments = [('h',"help","Rose::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Rose::Options::%s" % (i)):
- Cnf["Rose::Options::%s" % (i)] = "";
-
- apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Rose::Options")
- if Options["Help"]:
- usage();
-
- AptCnf = apt_pkg.newConfiguration();
- apt_pkg.ReadConfigFileISC(AptCnf,utils.which_apt_conf_file());
-
- create_directories();
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
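The core of `rose` above is `do_dir`: create a directory tree if it is missing, but refuse to proceed if the path exists as something other than a directory. A self-contained sketch of that contract (raising instead of calling `utils.fubar`, which exits with an error):

```python
import os
import tempfile

def do_dir(target, config_name):
    """Idempotently create target; fail loudly if it exists but is
    not a directory, as rose's do_dir does via utils.fubar."""
    if os.path.exists(target):
        if not os.path.isdir(target):
            raise SystemExit("%s (%s) is not a directory." % (target, config_name))
    else:
        print("Creating %s ..." % target)
        os.makedirs(target)
```

Because the check-then-create is idempotent, rose can safely be rerun over a partially built archive layout.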
+++ /dev/null
-#!/usr/bin/env python
-
-# Various statistical pr0nography fun and games
-# Copyright (C) 2000, 2001, 2002, 2003 James Troup <james@nocrew.org>
-# $Id: saffron,v 1.3 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <aj> can we change the standards instead?
-# <neuro> standards?
-# <aj> whatever we're not conforming to
-# <aj> if there's no written standard, why don't we declare linux as
-# the defacto standard
-# <aj> go us!
-
-# [aj's attempt to avoid ABI changes for released architecture(s)]
-
-################################################################################
-
-import pg, sys;
-import utils;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: saffron STAT
-Print various stats.
-
- -h, --help show this help and exit.
-
-The following STAT modes are available:
-
- arch-space - displays space used by each architecture
- pkg-nums - displays the number of packages by suite/architecture
- daily-install - displays daily install stats suitable for graphing
-"""
- sys.exit(exit_code)
-
-################################################################################
-
-def per_arch_space_use():
- q = projectB.query("""
-SELECT a.arch_string as Architecture, sum(f.size)
- FROM files f, binaries b, architecture a
- WHERE a.id=b.architecture AND f.id=b.file
- GROUP BY a.arch_string""");
- print q;
- q = projectB.query("SELECT sum(size) FROM files WHERE filename ~ '\\.(diff\\.gz|tar\\.gz|dsc)$'");
- print q;
-
-################################################################################
-
-def daily_install_stats():
- stats = {};
- file = utils.open_file("2001-11");
- for line in file.readlines():
- split = line.strip().split('~');
- program = split[1];
- if program != "katie":
- continue;
- action = split[2];
- if action != "installing changes" and action != "installed":
- continue;
- date = split[0][:8];
- if not stats.has_key(date):
- stats[date] = {};
- stats[date]["packages"] = 0;
- stats[date]["size"] = 0.0;
- if action == "installing changes":
- stats[date]["packages"] += 1;
- elif action == "installed":
- stats[date]["size"] += float(split[5]);
-
- dates = stats.keys();
- dates.sort();
- for date in dates:
- packages = stats[date]["packages"]
- size = int(stats[date]["size"] / 1024.0 / 1024.0)
- print "%s %s %s" % (date, packages, size);
-
-################################################################################
-
-def longest(list):
- longest = 0;
- for i in list:
- l = len(i);
- if l > longest:
- longest = l;
- return longest;
-
-def suite_sort(a, b):
- if Cnf.has_key("Suite::%s::Priority" % (a)):
- a_priority = int(Cnf["Suite::%s::Priority" % (a)]);
- else:
- a_priority = 0;
- if Cnf.has_key("Suite::%s::Priority" % (b)):
- b_priority = int(Cnf["Suite::%s::Priority" % (b)]);
- else:
- b_priority = 0;
- return cmp(a_priority, b_priority);
-
-def output_format(suite):
- output_suite = [];
- for word in suite.split("-"):
- output_suite.append(word[0]);
- return "-".join(output_suite);
-
-# Obvious query with GROUP BY and mapped names -> 50 seconds
-# GROUP BY but ids instead of suite/architecture names -> 28 seconds
-# Simple query -> 14 seconds
-# Simple query into large dictionary + processing -> 21 seconds
-# Simple query into large pre-created dictionary + processing -> 18 seconds
-
-def number_of_packages():
- arches = {};
- arch_ids = {};
- suites = {};
- suite_ids = {};
- d = {};
- # Build up suite mapping
- q = projectB.query("SELECT id, suite_name FROM suite");
- suite_ql = q.getresult();
- for i in suite_ql:
- (id, name) = i;
- suites[id] = name;
- suite_ids[name] = id;
- # Build up architecture mapping
- q = projectB.query("SELECT id, arch_string FROM architecture");
- for i in q.getresult():
- (id, name) = i;
- arches[id] = name;
- arch_ids[name] = id;
- # Pre-create the dictionary
- for suite_id in suites.keys():
- d[suite_id] = {};
- for arch_id in arches.keys():
- d[suite_id][arch_id] = 0;
- # Get the raw data for binaries
- q = projectB.query("""
-SELECT ba.suite, b.architecture
- FROM binaries b, bin_associations ba
- WHERE b.id = ba.bin""");
-# Simulate 'GROUP BY suite, architecture' with a dictionary
- for i in q.getresult():
- (suite_id, arch_id) = i;
- d[suite_id][arch_id] = d[suite_id][arch_id] + 1;
- # Get the raw data for source
- arch_id = arch_ids["source"];
- q = projectB.query("""
-SELECT suite, count(suite) FROM src_associations GROUP BY suite;""");
- for i in q.getresult():
- (suite_id, count) = i;
- d[suite_id][arch_id] = d[suite_id][arch_id] + count;
- ## Print the results
- # Setup
- suite_list = suites.values();
- suite_list.sort(suite_sort);
- suite_id_list = [];
- suite_arches = {};
- for suite in suite_list:
- suite_id = suite_ids[suite];
- suite_arches[suite_id] = {};
- for arch in Cnf.ValueList("Suite::%s::Architectures" % (suite)):
- suite_arches[suite_id][arch] = "";
- suite_id_list.append(suite_id);
- output_list = map(lambda x: output_format(x), suite_list);
- longest_suite = longest(output_list);
- arch_list = arches.values();
- arch_list.sort();
- longest_arch = longest(arch_list);
- # Header
- output = (" "*longest_arch) + " |"
- for suite in output_list:
- output = output + suite.center(longest_suite)+" |";
- output = output + "\n"+(len(output)*"-")+"\n";
- # per-arch data
- arch_list = arches.values();
- arch_list.sort();
- longest_arch = longest(arch_list);
- for arch in arch_list:
- arch_id = arch_ids[arch];
- output = output + arch.center(longest_arch)+" |";
- for suite_id in suite_id_list:
- if suite_arches[suite_id].has_key(arch):
- count = repr(d[suite_id][arch_id]);
- else:
- count = "-";
- output = output + count.rjust(longest_suite)+" |";
- output = output + "\n";
- print output;
-
-################################################################################
-
-def main ():
- global Cnf, projectB;
-
- Cnf = utils.get_conf();
- Arguments = [('h',"help","Saffron::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Saffron::Options::%s" % (i)):
- Cnf["Saffron::Options::%s" % (i)] = "";
-
- args = apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Saffron::Options")
- if Options["Help"]:
- usage();
-
- if len(args) < 1:
- utils.warn("saffron requires at least one argument");
- usage(1);
- elif len(args) > 1:
- utils.warn("saffron accepts only one argument");
- usage(1);
- mode = args[0].lower();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
-
- if mode == "arch-space":
- per_arch_space_use();
- elif mode == "pkg-nums":
- number_of_packages();
- elif mode == "daily-install":
- daily_install_stats();
- else:
- utils.warn("unknown mode '%s'" % (mode));
- usage(1);
-
-################################################################################
-
-if __name__ == '__main__':
- main()
-
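The timing notes inside saffron above (50s with GROUP BY on names, 18s with a simple query into a pre-created dictionary) motivate the counting loop in number_of_packages(). A minimal modern-Python sketch of that "pre-created dictionary" tally, using invented (suite, arch) id pairs rather than real projectb rows:

```python
# Sketch of the client-side GROUP BY replacement used by number_of_packages():
# fetch raw (suite, arch) pairs and tally them in a pre-built nested dict.

def count_pairs(rows, suite_ids, arch_ids):
    # Pre-create every (suite, arch) cell so missing combinations read as 0
    # and the inner loop never has to test for key existence.
    counts = {}
    for s in suite_ids:
        counts[s] = {}
        for a in arch_ids:
            counts[s][a] = 0
    for suite_id, arch_id in rows:
        counts[suite_id][arch_id] += 1
    return counts

# Invented sample data, not real archive rows:
rows = [(1, 10), (1, 10), (1, 11), (2, 10)]
c = count_pairs(rows, [1, 2], [10, 11])
# c[1][10] == 2, c[1][11] == 1, c[2][11] == 0
```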
+++ /dev/null
-x <- read.table("x.1",row.names=1,col.names=c("Packages", "Sizes"))
-y <- t(x)
-png(file="x4.png")
-barplot(y, beside=TRUE, col = c("red", "green"), main="Daily dinstall run size", legend = colnames(x), xlab="Date", ylab="Packages/Size (Mb)")
-axis(4)
-dev.off()
--- /dev/null
+#! /bin/sh
+# $Id: copyoverrides,v 1.2 2001-01-10 06:01:07 troup Exp $
+
+set -e
+. $SCRIPTVARS
+echo 'Copying override files into public view ...'
+
+for f in $copyoverrides ; do
+ cd $overridedir
+ chmod g+w override.$f
+
+ cd $indices
+ rm -f .newover-$f.gz
+ pc="`gzip 2>&1 -9nv <$overridedir/override.$f >.newover-$f.gz`"
+ set +e
+ nf=override.$f.gz
+ cmp -s .newover-$f.gz $nf
+ rc=$?
+ set -e
+ if [ $rc = 0 ]; then
+ rm -f .newover-$f.gz
+ elif [ $rc = 1 -o ! -f $nf ]; then
+ echo " installing new $nf $pc"
+ mv -f .newover-$f.gz $nf
+ chmod g+w $nf
+ else
+ echo $rc $pc
+ exit 1
+ fi
+done
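The install-if-changed dance in copyoverrides (gzip -n so the output is byte-reproducible, cmp to detect no-ops, install only when different) can be sketched in Python; the file names below are illustrative, not the real override paths:

```python
# Sketch of copyoverrides' compress-compare-install pattern.
# gzip with mtime=0 omits the timestamp from the header, like `gzip -n`,
# so identical input always yields identical compressed bytes.
import gzip
import os
import tempfile

def install_if_changed(src, dst):
    with open(src, "rb") as f:
        new = gzip.compress(f.read(), compresslevel=9, mtime=0)
    try:
        with open(dst, "rb") as f:
            if f.read() == new:
                return False          # same as before; skip the install
    except FileNotFoundError:
        pass                          # no previous copy; install below
    tmp = dst + ".new"
    with open(tmp, "wb") as f:
        f.write(new)
    os.replace(tmp, dst)              # atomic install, like the mv -f
    return True

# Tiny demonstration on a throwaway file:
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "override.main")
dst = os.path.join(workdir, "override.main.gz")
with open(src, "wb") as f:
    f.write(b"package priority section\n")
first = install_if_changed(src, dst)    # installs
second = install_if_changed(src, dst)   # unchanged, skipped
```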
--- /dev/null
+#!/bin/sh
+# Update the md5sums file
+# $Id: mkchecksums,v 1.2 2000-12-20 08:15:35 troup Exp $
+
+set -e
+. $SCRIPTVARS
+
+dsynclist=$dbdir/dsync.list
+md5list=$indices/md5sums
+
+echo -n "Creating md5 / dsync index file ... "
+
+cd "$ftpdir"
+dsync-flist -q generate $dsynclist --exclude $dsynclist --md5
+dsync-flist -q md5sums $dsynclist | gzip -9n > ${md5list}.gz
+dsync-flist -q link-dups $dsynclist || true
--- /dev/null
+#!/bin/sh
+# Update the ls-lR.
+# $Id: mklslar,v 1.3 2001-09-24 21:47:54 rmurray Exp $
+
+set -e
+. $SCRIPTVARS
+
+cd $ftpdir
+
+filename=ls-lR
+
+echo "Removing any core files ..."
+find -type f -name core -print0 | xargs -0r rm -v
+
+echo "Checking permissions on files in the FTP tree ..."
+find -type f \( \! -perm -444 -o -perm +002 \) -ls
+find -type d \( \! -perm -555 -o -perm +002 \) -ls
+
+echo "Checking symlinks ..."
+symlinks -rd .
+
+echo "Creating recursive directory listing ... "
+rm -f .$filename.new
+TZ=UTC ls -lR | grep -v Archive_Maintenance_In_Progress > .$filename.new
+
+if [ -r $filename ] ; then
+ mv -f $filename $filename.old
+ mv -f .$filename.new $filename
+ rm -f $filename.patch.gz
+ diff -u $filename.old $filename | gzip -9cfn - >$filename.patch.gz
+ rm -f $filename.old
+else
+ mv -f .$filename.new $filename
+fi
+
+gzip -9cfN $filename >$filename.gz
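mklslar keeps the previous listing around just long enough to publish a compressed unified diff next to the full ls-lR, so mirrors can patch instead of refetching. A small Python sketch of the same idea using difflib, with toy listing lines:

```python
# Sketch of the ls-lR plus ls-lR.patch idea: diff the old listing against
# the new one in unified format, ready to be gzipped and published.
import difflib

def make_patch(old_lines, new_lines, name="ls-lR"):
    return "".join(difflib.unified_diff(
        old_lines, new_lines,
        fromfile=name + ".old", tofile=name))

# Toy listings standing in for real `ls -lR` output:
old = ["total 2\n", "-rw-r--r-- 1 ftp ftp 10 a.deb\n"]
new = ["total 3\n", "-rw-r--r-- 1 ftp ftp 10 a.deb\n",
       "-rw-r--r-- 1 ftp ftp 12 b.deb\n"]
patch = make_patch(old, new)
```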
--- /dev/null
+#! /bin/sh
+# $Id: mkmaintainers,v 1.3 2004-02-27 20:09:51 troup Exp $
+
+echo
+echo -n 'Creating Maintainers index ... '
+
+set -e
+. $SCRIPTVARS
+cd $masterdir
+
+nonusmaint="$masterdir/Maintainers_Versions-non-US"
+
+
+if wget -T15 -q -O Maintainers_Versions-non-US.gz http://non-us.debian.org/indices-non-US/Maintainers_Versions.gz; then
+ rm -f $nonusmaint
+ gunzip -c ${nonusmaint}.gz > $nonusmaint
+ rm -f ${nonusmaint}.gz
+fi
+
+cd $indices
+$masterdir/charisma $nonusmaint $masterdir/pseudo-packages.maintainers | sed -e "s/~[^ ]*\([ ]\)/\1/" | awk '{printf "%-20s ", $1; for (i=2; i<=NF; i++) printf "%s ", $i; printf "\n";}' > .new-maintainers
+
+set +e
+cmp .new-maintainers Maintainers >/dev/null
+rc=$?
+set -e
+if [ $rc = 1 ] || [ ! -f Maintainers ] ; then
+ echo -n "installing Maintainers ... "
+ mv -f .new-maintainers Maintainers
+ gzip -9v <Maintainers >.new-maintainers.gz
+ mv -f .new-maintainers.gz Maintainers.gz
+elif [ $rc = 0 ] ; then
+ echo '(same as before)'
+ rm -f .new-maintainers
+else
+ echo cmp returned $rc
+ false
+fi
--- /dev/null
+x <- read.table("x.1",row.names=1,col.names=c("Packages", "Sizes"))
+y <- t(x)
+png(file="x4.png")
+barplot(y, beside=TRUE, col = c("red", "green"), main="Daily dinstall run size", legend = colnames(x), xlab="Date", ylab="Packages/Size (Mb)")
+axis(4)
+dev.off()
--- /dev/null
+#!/bin/bash
+#
+# Updates wanna-build databases after the archive maintenance
+# finishes
+#
+# Files:
+# Sources-* == upstream fetched file
+# Sources.* == uncompressed, concat'd version
+PATH="/bin:/usr/bin"
+# testing must be before unstable so late uploads don't build for testing needlessly
+DISTS="oldstable-security stable-security testing-security stable testing unstable"
+STATS_DISTS="unstable testing stable"
+SECTIONS="main contrib non-free"
+ARCHS_oldstable="m68k arm sparc alpha powerpc i386 mips mipsel ia64 hppa s390"
+ARCHS_stable="$ARCHS_oldstable"
+ARCHS_testing="$ARCHS_stable"
+ARCHS_unstable="$ARCHS_testing hurd-i386"
+TMPDIR="/org/wanna-build/tmp"
+WGETOPT="-q -t2 -w0 -T10"
+CURLOPT="-q -s -S -f -y 5 -K /org/wanna-build/trigger.curlrc"
+LOCKFILE="/org/wanna-build/tmp/DB_Maintenance_In_Progress"
+
+DAY=`date +%w`
+
+if lockfile -! -l 3600 $LOCKFILE; then
+ echo "Cannot lock $LOCKFILE"
+ exit 1
+fi
+
+cleanup() {
+ rm -f "$LOCKFILE"
+}
+trap cleanup 0
+
+echo Updating wanna-build databases...
+umask 027
+
+if [ "$DAY" = "0" ]; then
+ savelog -c 26 -p /org/wanna-build/db/merge.log
+fi
+
+exec >> /org/wanna-build/db/merge.log 2>&1
+
+echo -------------------------------------------------------------------------
+echo "merge triggered `date`"
+
+cd $TMPDIR
+
+#
+# Make one big Packages and Sources file.
+#
+for d in $DISTS ; do
+ dist=`echo $d | sed s/-.*$//`
+ case "$dist" in
+ oldstable)
+ ARCHS="$ARCHS_oldstable"
+ ;;
+ stable)
+ ARCHS="$ARCHS_stable"
+ ;;
+ testing)
+ ARCHS="$ARCHS_testing"
+ ;;
+ *)
+ ARCHS="$ARCHS_unstable"
+ ;;
+ esac
+ rm -f Sources.$d
+ if [ "$d" = "unstable" ]; then
+ gzip -dc /org/incoming.debian.org/buildd/Sources.gz >> Sources.$d
+ fi
+ for a in $ARCHS ; do
+ rm -f Packages.$d.$a quinn-$d.$a
+ if [ "$d" = "unstable" ]; then
+ gzip -dc /org/incoming.debian.org/buildd/Packages.gz >> Packages.$d.$a
+ fi
+ done
+
+ for s in $SECTIONS ; do
+ if echo $d | grep -qv -- -security; then
+ rm -f Sources.gz
+ gzip -dc /org/ftp.debian.org/ftp/dists/$d/$s/source/Sources.gz >> Sources.$d
+ if [ "$d" = "testing" -o "$d" = "stable" ]; then
+ gzip -dc /org/ftp.debian.org/ftp/dists/$d-proposed-updates/$s/source/Sources.gz >> Sources.$d
+ fi
+
+ rm -f Packages.gz
+ for a in $ARCHS ; do
+ gzip -dc /org/ftp.debian.org/ftp/dists/$d/$s/binary-$a/Packages.gz >> Packages.$d.$a
+ if [ "$d" = "testing" -o "$d" = "stable" ]; then
+ gzip -dc /org/ftp.debian.org/ftp/dists/$d-proposed-updates/$s/binary-$a/Packages.gz >> Packages.$d.$a
+ fi
+ if [ "$d" = "unstable" -a "$s" = "main" ]; then
+ gzip -dc /org/ftp.debian.org/ftp/dists/$d/$s/debian-installer/binary-$a/Packages.gz >> Packages.$d.$a
+ fi
+ done
+ else
+ rm -f Sources.gz
+ if wget $WGETOPT http://security.debian.org/debian-security/dists/$dist/updates/$s/source/Sources.gz; then
+ mv Sources.gz Sources-$d.$s.gz
+ fi
+ gzip -dc Sources-$d.$s.gz >> Sources.$d
+ if [ "$s" = "main" ]; then
+ if curl $CURLOPT http://security.debian.org/buildd/$dist/Sources.gz -o Sources.gz; then
+ mv Sources.gz Sources-$d.accepted.gz
+ fi
+ gzip -dc Sources-$d.accepted.gz >> Sources.$d
+ if curl $CURLOPT http://security.debian.org/buildd/$dist/Packages.gz -o Packages.gz; then
+ mv Packages.gz Packages.$d.accepted.gz
+ fi
+ fi
+ rm -f Packages.gz
+ for a in $ARCHS ; do
+ if wget $WGETOPT http://security.debian.org/debian-security/dists/$dist/updates/$s/binary-$a/Packages.gz; then
+ mv Packages.gz Packages.$d.$s.$a.gz
+ fi
+ gzip -dc Packages.$d.$s.$a.gz >> Packages.$d.$a
+ if [ "$s" = "main" ]; then
+ gzip -dc Packages.$d.accepted.gz >> Packages.$d.$a
+ fi
+ done
+ fi
+ done
+
+ for a in $ARCHS ; do
+ if [ "$d" = "unstable" -o ! -e "quinn-unstable.$a-old" ]; then
+ quinn-diff -A $a -a /org/buildd.debian.org/web/quinn-diff/Packages-arch-specific -s Sources.$d -p Packages.$d.$a >> quinn-$d.$a
+ else
+ if echo $d | grep -qv -- -security; then
+ quinn-diff -A $a -a /org/buildd.debian.org/web/quinn-diff/Packages-arch-specific -s Sources.$d -p Packages.$d.$a | fgrep -v -f quinn-unstable.$a-old | grep ":out-of-date\]$" >> quinn-$d.$a
+ sed -e 's/\[\w*:\w*]$//' quinn-$d-security.$a > quinn-$d-security.$a.grep
+ grep -vf quinn-$d-security.$a.grep quinn-$d.$a > quinn-$d.$a.grep
+ mv quinn-$d.$a.grep quinn-$d.$a
+ rm quinn-$d-security.$a.grep
+ else
+ quinn-diff -A $a -a /org/buildd.debian.org/web/quinn-diff/Packages-arch-specific -s Sources.$d -p Packages.$d.$a >> quinn-$d.$a
+ fi
+ fi
+ done
+done
+
+umask 002
+for a in $ARCHS_unstable ; do
+ wanna-build --create-maintenance-lock --database=$a/build-db
+
+ for d in $DISTS ; do
+ dist=`echo $d | sed s/-.*$//`
+ case "$dist" in
+ oldstable)
+ if echo $ARCHS_oldstable | grep -q -v "\b$a\b"; then
+ continue
+ fi
+ ;;
+ stable)
+ if echo $ARCHS_stable | grep -q -v "\b$a\b"; then
+ continue
+ fi
+ ;;
+ testing)
+ if echo $ARCHS_testing | grep -q -v "\b$a\b"; then
+ continue
+ fi
+ ;;
+ *)
+ if echo $ARCHS_unstable | grep -q -v "\b$a\b"; then
+ continue
+ fi
+ ;;
+ esac
+ perl -pi -e 's#^(non-free)/.*$##msg' quinn-$d.$a
+ wanna-build --merge-all --arch=$a --dist=$d --database=$a/build-db Packages.$d.$a quinn-$d.$a Sources.$d
+ mv Packages.$d.$a Packages.$d.$a-old
+ mv quinn-$d.$a quinn-$d.$a-old
+ done
+ if [ "$DAY" = "0" ]; then
+ savelog -p -c 26 /org/wanna-build/db/$a/transactions.log
+ fi
+ wanna-build --remove-maintenance-lock --database=$a/build-db
+done
+umask 022
+for d in $DISTS; do
+ mv Sources.$d Sources.$d-old
+done
+
+echo "merge ended `date`"
+/org/wanna-build/bin/wb-graph >> /org/wanna-build/etc/graph-data
+/org/wanna-build/bin/wb-graph -p >> /org/wanna-build/etc/graph2-data
+rm -f "$LOCKFILE"
+trap - 0
+/org/buildd.debian.org/bin/makegraph
+for a in $ARCHS_stable; do
+ echo Last Updated: `date -u` > /org/buildd.debian.org/web/stats/$a.txt
+ for d in $STATS_DISTS; do
+ /org/wanna-build/bin/wanna-build-statistics --database=$a/build-db --dist=$d >> /org/buildd.debian.org/web/stats/$a.txt
+ done
+done
--- /dev/null
+#!/bin/sh -e
+
+. vars
+
+export TERM=linux
+
+destdir=$ftpdir/doc
+urlbase=http://www.debian.org/Bugs/
+
+cd $destdir
+
+convert () {
+ src=$1; dst=$2
+ rm -f .new-$dst
+ echo Generating $dst from http://www.debian.org/Bugs/$src ...
+ lynx -nolist -dump $urlbase$src | sed -e 's/^ *$//' | perl -00 -ne 'exit if /Back to the Debian Project homepage/; print unless ($.==1 || $.==2 || $.==3 || /^\s*Other BTS pages:$/m)' >.new-$dst
+ if cmp -s .new-$dst $dst ; then rm -f .new-$dst
+ else mv -f .new-$dst $dst
+ fi
+}
+
+convert Reporting.html bug-reporting.txt
+convert Access.html bug-log-access.txt
+convert server-request.html bug-log-mailserver.txt
+convert Developer.html bug-maint-info.txt
+convert server-control.html bug-maint-mailcontrol.txt
+convert server-refcard.html bug-mailserver-refcard.txt
--- /dev/null
+#!/bin/sh
+#
+# Fetches latest copy of mailing-lists.txt
+# Michael Beattie <mjb@debian.org>
+
+. vars
+
+cd $ftpdir/doc
+
+echo Updating archive version of mailing-lists.txt
+wget -t1 -T20 -q -N http://www.debian.org/misc/mailing-lists.txt || \
+ echo "Some error occurred..."
+
--- /dev/null
+#!/bin/sh
+#
+# Very, very hackish script... don't laugh.
+# Michael Beattie <mjb@debian.org>
+
+. vars
+
+prog=$scriptdir/mirrorlist/mirror_list.pl
+masterlist=$scriptdir/mirrorlist/Mirrors.masterlist
+
+test ! -f $HOME/.cvspass && \
+ echo ":pserver:anonymous@cvs.debian.org:/cvs/webwml A" > $HOME/.cvspass
+grep -q "cvs.debian.org:/cvs/webwml" ~/.cvspass || \
+ echo ":pserver:anonymous@cvs.debian.org:/cvs/webwml A" >> $HOME/.cvspass
+
+cd $(dirname $masterlist)
+cvs update
+
+if [ ! -f $ftpdir/README.mirrors.html -o $masterlist -nt $ftpdir/README.mirrors.html ] ; then
+ rm -f $ftpdir/README.mirrors.html $ftpdir/README.mirrors.txt
+ $prog -m $masterlist -t html > $ftpdir/README.mirrors.html
+ $prog -m $masterlist -t text > $ftpdir/README.mirrors.txt
+ if [ ! -f $ftpdir/README.non-US -o $masterlist -nt $ftpdir/README.non-US ] ; then
+ rm -f $ftpdir/README.non-US
+ $prog -m $masterlist -t nonus > $ftpdir/README.non-US
+ install -m 664 $ftpdir/README.non-US $webdir
+ fi
+ echo Updated archive version of mirrors file
+fi
--- /dev/null
+#!/bin/sh
+#
+# Fetches an up-to-date copy of README.non-US for pandora
+# Michael Beattie <mjb@debian.org>
+
+. vars-non-US
+
+cd $ftpdir
+
+echo Updating non-US version of README.non-US
+wget -t1 -T20 -q -N http://ftp-master.debian.org/README.non-US || \
+ echo "Some error occurred..."
+
--- /dev/null
+-- Fix up after population of the database...
+
+-- First of all, re-add the constraints (takes ~1:30 on auric)
+
+ALTER TABLE files ADD CONSTRAINT files_location FOREIGN KEY (location) REFERENCES location(id) MATCH FULL;
+
+ALTER TABLE source ADD CONSTRAINT source_maintainer FOREIGN KEY (maintainer) REFERENCES maintainer(id) MATCH FULL;
+ALTER TABLE source ADD CONSTRAINT source_file FOREIGN KEY (file) REFERENCES files(id) MATCH FULL;
+ALTER TABLE source ADD CONSTRAINT source_sig_fpr FOREIGN KEY (sig_fpr) REFERENCES fingerprint(id) MATCH FULL;
+
+ALTER TABLE dsc_files ADD CONSTRAINT dsc_files_source FOREIGN KEY (source) REFERENCES source(id) MATCH FULL;
+ALTER TABLE dsc_files ADD CONSTRAINT dsc_files_file FOREIGN KEY (file) REFERENCES files(id) MATCH FULL;
+
+ALTER TABLE binaries ADD CONSTRAINT binaries_maintainer FOREIGN KEY (maintainer) REFERENCES maintainer(id) MATCH FULL;
+ALTER TABLE binaries ADD CONSTRAINT binaries_source FOREIGN KEY (source) REFERENCES source(id) MATCH FULL;
+ALTER TABLE binaries ADD CONSTRAINT binaries_architecture FOREIGN KEY (architecture) REFERENCES architecture(id) MATCH FULL;
+ALTER TABLE binaries ADD CONSTRAINT binaries_file FOREIGN KEY (file) REFERENCES files(id) MATCH FULL;
+ALTER TABLE binaries ADD CONSTRAINT binaries_sig_fpr FOREIGN KEY (sig_fpr) REFERENCES fingerprint(id) MATCH FULL;
+
+ALTER TABLE suite_architectures ADD CONSTRAINT suite_architectures_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
+ALTER TABLE suite_architectures ADD CONSTRAINT suite_architectures_architecture FOREIGN KEY (architecture) REFERENCES architecture(id) MATCH FULL;
+
+ALTER TABLE bin_associations ADD CONSTRAINT bin_associations_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
+ALTER TABLE bin_associations ADD CONSTRAINT bin_associations_bin FOREIGN KEY (bin) REFERENCES binaries(id) MATCH FULL;
+
+ALTER TABLE src_associations ADD CONSTRAINT src_associations_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
+ALTER TABLE src_associations ADD CONSTRAINT src_associations_source FOREIGN KEY (source) REFERENCES source(id) MATCH FULL;
+
+ALTER TABLE override ADD CONSTRAINT override_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
+ALTER TABLE override ADD CONSTRAINT override_component FOREIGN KEY (component) REFERENCES component(id) MATCH FULL;
+ALTER TABLE override ADD CONSTRAINT override_priority FOREIGN KEY (priority) REFERENCES priority(id) MATCH FULL;
+ALTER TABLE override ADD CONSTRAINT override_section FOREIGN KEY (section) REFERENCES section(id) MATCH FULL;
+ALTER TABLE override ADD CONSTRAINT override_type FOREIGN KEY (type) REFERENCES override_type(id) MATCH FULL;
+
+ALTER TABLE queue_build ADD CONSTRAINT queue_build_suite FOREIGN KEY (suite) REFERENCES suite(id) MATCH FULL;
+ALTER TABLE queue_build ADD CONSTRAINT queue_build_queue FOREIGN KEY (queue) REFERENCES queue(id) MATCH FULL;
+
+-- Then correct all the id SERIAL PRIMARY KEY columns...
+
+CREATE FUNCTION files_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM files'
+ LANGUAGE 'sql';
+CREATE FUNCTION source_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM source'
+ LANGUAGE 'sql';
+CREATE FUNCTION src_associations_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM src_associations'
+ LANGUAGE 'sql';
+CREATE FUNCTION dsc_files_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM dsc_files'
+ LANGUAGE 'sql';
+CREATE FUNCTION binaries_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM binaries'
+ LANGUAGE 'sql';
+CREATE FUNCTION bin_associations_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM bin_associations'
+ LANGUAGE 'sql';
+CREATE FUNCTION section_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM section'
+ LANGUAGE 'sql';
+CREATE FUNCTION priority_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM priority'
+ LANGUAGE 'sql';
+CREATE FUNCTION override_type_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM override_type'
+ LANGUAGE 'sql';
+CREATE FUNCTION maintainer_id_max() RETURNS INT4
+ AS 'SELECT max(id) FROM maintainer'
+ LANGUAGE 'sql';
+
+SELECT setval('files_id_seq', files_id_max());
+SELECT setval('source_id_seq', source_id_max());
+SELECT setval('src_associations_id_seq', src_associations_id_max());
+SELECT setval('dsc_files_id_seq', dsc_files_id_max());
+SELECT setval('binaries_id_seq', binaries_id_max());
+SELECT setval('bin_associations_id_seq', bin_associations_id_max());
+SELECT setval('section_id_seq', section_id_max());
+SELECT setval('priority_id_seq', priority_id_max());
+SELECT setval('override_type_id_seq', override_type_id_max());
+SELECT setval('maintainer_id_seq', maintainer_id_max());
+
+-- Vacuum the tables for efficiency
+
+VACUUM archive;
+VACUUM component;
+VACUUM architecture;
+VACUUM maintainer;
+VACUUM location;
+VACUUM files;
+VACUUM source;
+VACUUM dsc_files;
+VACUUM binaries;
+VACUUM suite;
+VACUUM suite_architectures;
+VACUUM bin_associations;
+VACUUM src_associations;
+VACUUM section;
+VACUUM priority;
+VACUUM override_type;
+VACUUM override;
+
+-- FIXME: has to be a better way to do this
+GRANT ALL ON architecture, architecture_id_seq, archive,
+ archive_id_seq, bin_associations, bin_associations_id_seq, binaries,
+ binaries_id_seq, component, component_id_seq, dsc_files,
+ dsc_files_id_seq, files, files_id_seq, fingerprint,
+ fingerprint_id_seq, location, location_id_seq, maintainer,
+ maintainer_id_seq, override, override_type, override_type_id_seq,
+ priority, priority_id_seq, section, section_id_seq, source,
+ source_id_seq, src_associations, src_associations_id_seq, suite,
+ suite_architectures, suite_id_seq, queue_build, uid,
+ uid_id_seq TO GROUP ftpmaster;
+
+-- Read only access to user 'nobody'
+GRANT SELECT ON architecture, architecture_id_seq, archive,
+ archive_id_seq, bin_associations, bin_associations_id_seq, binaries,
+ binaries_id_seq, component, component_id_seq, dsc_files,
+ dsc_files_id_seq, files, files_id_seq, fingerprint,
+ fingerprint_id_seq, location, location_id_seq, maintainer,
+ maintainer_id_seq, override, override_type, override_type_id_seq,
+ priority, priority_id_seq, section, section_id_seq, source,
+ source_id_seq, src_associations, src_associations_id_seq, suite,
+ suite_architectures, suite_id_seq, queue_build, uid,
+ uid_id_seq TO PUBLIC;
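The CREATE FUNCTION / setval pairs above exist only because a bulk COPY load leaves the SERIAL sequences behind their tables. On any non-ancient PostgreSQL the same reset can be written as one statement per table, no helper function needed; a sketch that generates those statements (the coalesce(..., 1) guard for empty tables is an addition, not in the original):

```python
# Generate one-liner sequence resets equivalent to the function-based
# setval calls above; table names are taken from the fix-up script.
tables = ["files", "source", "src_associations", "dsc_files",
          "binaries", "bin_associations", "section", "priority",
          "override_type", "maintainer"]

def setval_sql(table):
    # coalesce guards the empty-table case, where max(id) is NULL
    return ("SELECT setval('%s_id_seq', "
            "coalesce((SELECT max(id) FROM %s), 1));" % (table, table))

statements = [setval_sql(t) for t in tables]
```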
--- /dev/null
+DROP DATABASE projectb;
+CREATE DATABASE projectb WITH ENCODING = 'SQL_ASCII';
+
+\c projectb
+
+CREATE TABLE archive (
+ id SERIAL PRIMARY KEY,
+ name TEXT UNIQUE NOT NULL,
+ origin_server TEXT,
+ description TEXT
+);
+
+CREATE TABLE component (
+ id SERIAL PRIMARY KEY,
+ name TEXT UNIQUE NOT NULL,
+ description TEXT,
+ meets_dfsg BOOLEAN
+);
+
+CREATE TABLE architecture (
+ id SERIAL PRIMARY KEY,
+ arch_string TEXT UNIQUE NOT NULL,
+ description TEXT
+);
+
+CREATE TABLE maintainer (
+ id SERIAL PRIMARY KEY,
+ name TEXT UNIQUE NOT NULL
+);
+
+CREATE TABLE uid (
+ id SERIAL PRIMARY KEY,
+ uid TEXT UNIQUE NOT NULL
+);
+
+CREATE TABLE fingerprint (
+ id SERIAL PRIMARY KEY,
+ fingerprint TEXT UNIQUE NOT NULL,
+ uid INT4 REFERENCES uid
+);
+
+CREATE TABLE location (
+ id SERIAL PRIMARY KEY,
+ path TEXT NOT NULL,
+ component INT4 REFERENCES component,
+ archive INT4 REFERENCES archive,
+ type TEXT NOT NULL
+);
+
+-- No references below here to allow sane population; added post-population
+
+CREATE TABLE files (
+ id SERIAL PRIMARY KEY,
+ filename TEXT NOT NULL,
+ size INT8 NOT NULL,
+ md5sum TEXT NOT NULL,
+ location INT4 NOT NULL, -- REFERENCES location
+ last_used TIMESTAMP,
+ unique (filename, location)
+);
+
+CREATE TABLE source (
+ id SERIAL PRIMARY KEY,
+ source TEXT NOT NULL,
+ version TEXT NOT NULL,
+ maintainer INT4 NOT NULL, -- REFERENCES maintainer
+ file INT4 UNIQUE NOT NULL, -- REFERENCES files
+ install_date TIMESTAMP NOT NULL,
+ sig_fpr INT4 NOT NULL, -- REFERENCES fingerprint
+ unique (source, version)
+);
+
+CREATE TABLE dsc_files (
+ id SERIAL PRIMARY KEY,
+ source INT4 NOT NULL, -- REFERENCES source,
+ file INT4 NOT NULL, -- REFERENCES files
+ unique (source, file)
+);
+
+CREATE TABLE binaries (
+ id SERIAL PRIMARY KEY,
+ package TEXT NOT NULL,
+ version TEXT NOT NULL,
+ maintainer INT4 NOT NULL, -- REFERENCES maintainer
+ source INT4, -- REFERENCES source,
+ architecture INT4 NOT NULL, -- REFERENCES architecture
+ file INT4 UNIQUE NOT NULL, -- REFERENCES files,
+ type TEXT NOT NULL,
+-- joeyh@ doesn't want .udebs and .debs with the same name, which is why the unique () doesn't mention type
+ sig_fpr INT4 NOT NULL, -- REFERENCES fingerprint
+ unique (package, version, architecture)
+);
+
+CREATE TABLE suite (
+ id SERIAL PRIMARY KEY,
+ suite_name TEXT NOT NULL,
+ version TEXT,
+ origin TEXT,
+ label TEXT,
+ policy_engine TEXT,
+ description TEXT
+);
+
+CREATE TABLE queue (
+ id SERIAL PRIMARY KEY,
+ queue_name TEXT NOT NULL
+);
+
+CREATE TABLE suite_architectures (
+ suite INT4 NOT NULL, -- REFERENCES suite
+ architecture INT4 NOT NULL, -- REFERENCES architecture
+ unique (suite, architecture)
+);
+
+CREATE TABLE bin_associations (
+ id SERIAL PRIMARY KEY,
+ suite INT4 NOT NULL, -- REFERENCES suite
+ bin INT4 NOT NULL, -- REFERENCES binaries
+ unique (suite, bin)
+);
+
+CREATE TABLE src_associations (
+ id SERIAL PRIMARY KEY,
+ suite INT4 NOT NULL, -- REFERENCES suite
+ source INT4 NOT NULL, -- REFERENCES source
+ unique (suite, source)
+);
+
+CREATE TABLE section (
+ id SERIAL PRIMARY KEY,
+ section TEXT UNIQUE NOT NULL
+);
+
+CREATE TABLE priority (
+ id SERIAL PRIMARY KEY,
+ priority TEXT UNIQUE NOT NULL,
+ level INT4 UNIQUE NOT NULL
+);
+
+CREATE TABLE override_type (
+ id SERIAL PRIMARY KEY,
+ type TEXT UNIQUE NOT NULL
+);
+
+CREATE TABLE override (
+ package TEXT NOT NULL,
+ suite INT4 NOT NULL, -- references suite
+ component INT4 NOT NULL, -- references component
+ priority INT4, -- references priority
+ section INT4 NOT NULL, -- references section
+ type INT4 NOT NULL, -- references override_type
+ maintainer TEXT,
+ unique (suite, component, package, type)
+);
+
+CREATE TABLE queue_build (
+ suite INT4 NOT NULL, -- references suite
+ queue INT4 NOT NULL, -- references queue
+ filename TEXT NOT NULL,
+ in_queue BOOLEAN NOT NULL,
+ last_used TIMESTAMP
+);
+
+-- Critical indexes
+
+CREATE INDEX bin_associations_bin ON bin_associations (bin);
+CREATE INDEX src_associations_source ON src_associations (source);
+CREATE INDEX source_maintainer ON source (maintainer);
+CREATE INDEX binaries_maintainer ON binaries (maintainer);
+CREATE INDEX binaries_fingerprint on binaries (sig_fpr);
+CREATE INDEX source_fingerprint on source (sig_fpr);
+CREATE INDEX dsc_files_file ON dsc_files (file);
--- /dev/null
+
+CREATE TABLE disembargo (
+ package TEXT NOT NULL,
+ version TEXT NOT NULL
+);
+
+GRANT ALL ON disembargo TO GROUP ftpmaster;
+GRANT SELECT ON disembargo TO PUBLIC;
+++ /dev/null
-#!/usr/bin/env python
-
-# Clean incoming of old unused files
-# Copyright (C) 2000, 2001, 2002 James Troup <james@nocrew.org>
-# $Id: shania,v 1.18 2005-03-06 21:51:51 rmurray Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# <aj> Bdale, a ham-er, and the leader,
-# <aj> Willy, a GCC maintainer,
-# <aj> Lamont-work, 'cause he's the top uploader....
-# <aj> Penguin Puff' save the day!
-# <aj> Porting code, trying to build the world,
-# <aj> Here they come just in time...
-# <aj> The Penguin Puff' Guys!
-# <aj> [repeat]
-# <aj> Penguin Puff'!
-# <aj> willy: btw, if you don't maintain gcc you need to start, since
-# the lyrics fit really well that way
-
-################################################################################
-
-import os, stat, sys, time;
-import utils;
-import apt_pkg;
-
-################################################################################
-
-Cnf = None;
-Options = None;
-del_dir = None;
-delete_date = None;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: shania [OPTIONS]
-Clean out incoming directories.
-
- -d, --days=DAYS remove anything older than DAYS old
- -i, --incoming=INCOMING the incoming directory to clean
- -n, --no-action don't do anything
- -v, --verbose explain what is being done
- -h, --help show this help and exit"""
-
- sys.exit(exit_code)
-
-################################################################################
-
-def init ():
- global delete_date, del_dir;
-
- delete_date = int(time.time())-(int(Options["Days"])*86400);
-
- # Ensure a directory exists to remove files to
- if not Options["No-Action"]:
- date = time.strftime("%Y-%m-%d");
- del_dir = Cnf["Dir::Morgue"] + '/' + Cnf["Shania::MorgueSubDir"] + '/' + date;
- if not os.path.exists(del_dir):
- os.makedirs(del_dir, 02775);
- if not os.path.isdir(del_dir):
- utils.fubar("%s must be a directory." % (del_dir));
-
- # Move to the directory to clean
- incoming = Options["Incoming"];
- if incoming == "":
- incoming = Cnf["Dir::Queue::Unchecked"];
- os.chdir(incoming);
-
-# Move a file to the morgue
-def remove (file):
- if os.access(file, os.R_OK):
- dest_filename = del_dir + '/' + os.path.basename(file);
- # If the destination file exists; try to find another filename to use
- if os.path.exists(dest_filename):
- dest_filename = utils.find_next_free(dest_filename, 10);
- utils.move(file, dest_filename, 0660);
- else:
- utils.warn("skipping '%s', permission denied." % (os.path.basename(file)));
-
-# Removes any old files.
-# [Used for Incoming/REJECT]
-#
-def flush_old ():
- for file in os.listdir('.'):
- if os.path.isfile(file):
- if os.stat(file)[stat.ST_MTIME] < delete_date:
- if Options["No-Action"]:
- print "I: Would delete '%s'." % (os.path.basename(file));
- else:
- if Options["Verbose"]:
- print "Removing '%s' (to '%s')." % (os.path.basename(file), del_dir);
- remove(file);
- else:
- if Options["Verbose"]:
- print "Skipping, too new, '%s'." % (os.path.basename(file));
-
-# Removes any files which are old orphans (not associated with a valid .changes file).
-# [Used for Incoming]
-#
-def flush_orphans ():
- all_files = {};
- changes_files = [];
-
- # Build up the list of all files in the directory
- for i in os.listdir('.'):
- if os.path.isfile(i):
- all_files[i] = 1;
- if i.endswith(".changes"):
- changes_files.append(i);
-
- # Process all .changes and .dsc files.
- for changes_filename in changes_files:
- try:
- changes = utils.parse_changes(changes_filename);
- files = utils.build_file_list(changes);
- except:
- utils.warn("error processing '%s'; skipping it. [Got %s]" % (changes_filename, sys.exc_type));
- continue;
-
- dsc_files = {};
- for file in files.keys():
- if file.endswith(".dsc"):
- try:
- dsc = utils.parse_changes(file);
- dsc_files = utils.build_file_list(dsc, is_a_dsc=1);
- except:
- utils.warn("error processing '%s'; skipping it. [Got %s]" % (file, sys.exc_type));
- continue;
-
- # Ensure all the files we've seen aren't deleted
- keys = [];
- for i in (files.keys(), dsc_files.keys(), [changes_filename]):
- keys.extend(i);
- for key in keys:
- if all_files.has_key(key):
- if Options["Verbose"]:
- print "Skipping, has parents, '%s'." % (key);
- del all_files[key];
-
- # Anything left at this stage is not referenced by a .changes (or
- # a .dsc) and should be deleted if old enough.
- for file in all_files.keys():
- if os.stat(file)[stat.ST_MTIME] < delete_date:
- if Options["No-Action"]:
- print "I: Would delete '%s'." % (os.path.basename(file));
- else:
- if Options["Verbose"]:
- print "Removing '%s' (to '%s')." % (os.path.basename(file), del_dir);
- remove(file);
- else:
- if Options["Verbose"]:
- print "Skipping, too new, '%s'." % (os.path.basename(file));
-
-################################################################################
-
-def main ():
- global Cnf, Options;
-
- Cnf = utils.get_conf()
-
- for i in ["Help", "Incoming", "No-Action", "Verbose" ]:
- if not Cnf.has_key("Shania::Options::%s" % (i)):
- Cnf["Shania::Options::%s" % (i)] = "";
- if not Cnf.has_key("Shania::Options::Days"):
- Cnf["Shania::Options::Days"] = "14";
-
- Arguments = [('h',"help","Shania::Options::Help"),
- ('d',"days","Shania::Options::Days", "IntLevel"),
- ('i',"incoming","Shania::Options::Incoming", "HasArg"),
- ('n',"no-action","Shania::Options::No-Action"),
- ('v',"verbose","Shania::Options::Verbose")];
-
- apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Shania::Options")
-
- if Options["Help"]:
- usage();
-
- init();
-
- if Options["Verbose"]:
- print "Processing incoming..."
- flush_orphans();
-
- reject = Cnf["Dir::Queue::Reject"]
- if os.path.exists(reject) and os.path.isdir(reject):
- if Options["Verbose"]:
- print "Processing incoming/REJECT..."
- os.chdir(reject);
- flush_old();
-
-#######################################################################################
-
-if __name__ == '__main__':
- main();
+++ /dev/null
-#!/usr/bin/env python
-
-# Launch dak functionality
-# Copyright (c) 2005 Anthony Towns <ajt@debian.org>
-# $Id: dak,v 1.1 2005-11-17 08:47:31 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# well I don't know where you're from but in AMERICA, there's a little
-# thing called "abstinent until proven guilty."
-# -- http://harrietmiers.blogspot.com/2005/10/wow-i-feel-loved.html
-
-# (if James had a blog, I bet I could find a funny quote in it to use!)
-
-################################################################################
-
-import sys
-
-################################################################################
-
-# maps a command name to a module name
-functionality = [
- ("ls", "Show which suites packages are in",
- ("madison", "main"), ["madison"]),
- ("rm", "Remove packages from suites", "melanie"),
-
- ("decode-dot-dak", "Display contents of a .katie file", "ashley"),
- ("override", "Query/change the overrides", "alicia"),
-
- ("install", "Install a package from accepted (security only)",
- "amber"), # XXX - hmm (ajt)
- ("reject-proposed-updates", "Manually reject from proposed-updates", "lauren"),
- ("process-new", "Process NEW and BYHAND packages", "lisa"),
-
- ("control-overrides", "Manipulate/list override entries in bulk",
- "natalie"),
- ("control-suite", "Manipulate suites in bulk", "heidi"),
-
- ("stats", "Generate stats pr0n", "saffron"),
- ("cruft-report", "Check for obsolete or duplicated packages",
- "rene"),
- ("queue-report", "Produce a report on NEW and BYHAND packages",
- "helena"),
- ("compare-suites", "Show fixable discrepancies between suites",
- "andrea"),
-
- ("check-archive", "Archive sanity checks", "tea"),
- ("check-overrides", "Override cruft checks", "cindy"),
- ("check-proposed-updates", "Dependency checking for proposed-updates",
- "jeri"),
-
- ("examine-package", "Show information useful for NEW processing",
- "fernanda"),
-
- ("init-db", "Update the database to match the conf file",
- "alyson"),
- ("init-dirs", "Initial setup of the archive", "rose"),
- ("import-archive", "Populate the SQL database from an archive tree",
- "neve"),
-
- ("poolize", "Move packages from dists/ to pool/", "catherine"),
- ("symlink-dists", "Generate compatibility symlinks from dists/",
- "claire"),
-
- ("process-unchecked", "Process packages in queue/unchecked", "jennifer"),
-
- ("process-accepted", "Install packages into the pool", "kelly"),
- ("generate-releases", "Generate Release files", "ziyi"),
- ("generate-index-diffs", "Generate .diff/Index files", "tiffani"),
-
- ("make-suite-file-list",
- "Generate lists of packages per suite for apt-ftparchive", "jenna"),
- ("make-maintainers", "Generate Maintainers file for BTS etc",
- "charisma"),
- ("make-overrides", "Generate override files", "denise"),
-
- ("mirror-split", "Split the pool/ by architecture groups",
- "billie"),
-
- ("clean-proposed-updates", "Remove obsolete .changes from proposed-updates",
- "halle"),
- ("clean-queues", "Clean cruft from incoming", "shania"),
- ("clean-suites",
- "Clean unused/superseded packages from the archive", "rhona"),
-
- ("split-done", "Split queue/done into a date-based hierarchy",
- "nina"),
-
- ("import-ldap-fingerprints",
- "Syncs fingerprint and uid tables with Debian LDAP db", "emilie"),
- ("import-users-from-passwd",
- "Sync PostgreSQL users with passwd file", "julia"),
- ("find-null-maintainers",
- "Check for users with no packages in the archive", "rosamund"),
-]
-
-names = {}
-for f in functionality:
- if isinstance(f[2], str):
- names[f[2]] = names[f[0]] = (f[2], "main")
- else:
- names[f[0]] = f[2]
- for a in f[3]: names[a] = f[2]
-
-################################################################################
-
-def main():
- if len(sys.argv) == 0:
- print "err, argc == 0? how is that possible?"
- sys.exit(1);
- elif len(sys.argv) == 1 or (len(sys.argv) == 2 and sys.argv[1] == "--help"):
- print "Sub commands:"
- for f in functionality:
- print " %-23s %s" % (f[0], f[1])
- sys.exit(0);
- else:
- # should set PATH based on sys.argv[0] maybe
- # possibly should set names based on sys.argv[0] too
- sys.path = [sys.path[0]+"/py-symlinks"] + sys.path
-
- cmdname = sys.argv[0]
- cmdname = cmdname[cmdname.rfind("/")+1:]
- if cmdname in names:
- pass # invoke directly
- else:
- cmdname = sys.argv[1]
- sys.argv = [sys.argv[0] + " " + sys.argv[1]] + sys.argv[2:]
- if cmdname not in names:
- match = []
- for f in names:
- if f.startswith(cmdname):
- match.append(f)
- if len(match) == 1:
- cmdname = match[0]
- elif len(match) > 1:
- print "ambiguous command: %s" % ", ".join(match)
- sys.exit(1);
- else:
- print "unknown command \"%s\"" % (cmdname)
- sys.exit(1);
-
- func = names[cmdname]
- x = __import__(func[0])
- x.__getattribute__(func[1])()
-
-if __name__ == "__main__":
- main()
-
+++ /dev/null
-/* Wrapper round apt's version compare functions for PostgreSQL. */
-/* Copyright (C) 2001, James Troup <james@nocrew.org> */
-
-/* This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- */
-
-/* NB: do not try to use the VERSION-1 calling conventions for
- C-Language functions; it works on i386 but segfaults the postgres
- child backend on Sparc. */
-
-#include <apt-pkg/debversion.h>
-
-extern "C"
-{
-
-#include <postgres.h>
-
- int versioncmp(text *A, text *B);
-
- int
- versioncmp (text *A, text *B)
- {
- int result, txt_size;
- char *a, *b;
-
- txt_size = VARSIZE(A)-VARHDRSZ;
- a = (char *) palloc(txt_size+1);
- memcpy(a, VARDATA(A), txt_size);
- a[txt_size] = '\0';
-
- txt_size = VARSIZE(B)-VARHDRSZ;
- b = (char *) palloc(txt_size+1);
- memcpy(b, VARDATA(B), txt_size);
- b[txt_size] = '\0';
-
- result = debVS.CmpVersion (a, b);
-
- pfree (a);
- pfree (b);
-
- return (result);
- }
-
-}
--- /dev/null
+#!/usr/bin/make -f
+
+CXXFLAGS = -I/usr/include/postgresql/ -I/usr/include/postgresql/server/ -fPIC -Wall
+CFLAGS = -fPIC -Wall
+LDFLAGS = -fPIC
+LIBS = -lapt-pkg
+
+C++ = g++
+
+all: sql-aptvc.so
+
+sql-aptvc.o: sql-aptvc.cpp
+sql-aptvc.so: sql-aptvc.o
+ $(C++) $(LDFLAGS) $(LIBS) -shared -o $@ $<
+clean:
+ rm -f sql-aptvc.so sql-aptvc.o
+
--- /dev/null
+/* Wrapper round apt's version compare functions for PostgreSQL. */
+/* Copyright (C) 2001, James Troup <james@nocrew.org> */
+
+/* This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+/* NB: do not try to use the VERSION-1 calling conventions for
+ C-Language functions; it works on i386 but segfaults the postgres
+ child backend on Sparc. */
+
+#include <apt-pkg/debversion.h>
+
+extern "C"
+{
+
+#include <postgres.h>
+
+ int versioncmp(text *A, text *B);
+
+ int
+ versioncmp (text *A, text *B)
+ {
+ int result, txt_size;
+ char *a, *b;
+
+ txt_size = VARSIZE(A)-VARHDRSZ;
+ a = (char *) palloc(txt_size+1);
+ memcpy(a, VARDATA(A), txt_size);
+ a[txt_size] = '\0';
+
+ txt_size = VARSIZE(B)-VARHDRSZ;
+ b = (char *) palloc(txt_size+1);
+ memcpy(b, VARDATA(B), txt_size);
+ b[txt_size] = '\0';
+
+ result = debVS.CmpVersion (a, b);
+
+ pfree (a);
+ pfree (b);
+
+ return (result);
+ }
+
+}
+++ /dev/null
-#!/usr/bin/env python
-
-# Various different sanity checks
-# Copyright (C) 2000, 2001, 2002, 2003, 2004 James Troup <james@nocrew.org>
-# $Id: tea,v 1.31 2004-11-27 18:03:11 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# And, lo, a great and menacing voice rose from the depths, and with
-# great wrath and vehemence it's voice boomed across the
-# land... ``hehehehehehe... that *tickles*''
-# -- aj on IRC
-
-################################################################################
-
-import commands, os, pg, stat, string, sys, time;
-import db_access, utils;
-import apt_pkg, apt_inst;
-
-################################################################################
-
-Cnf = None;
-projectB = None;
-db_files = {};
-waste = 0.0;
-excluded = {};
-current_file = None;
-future_files = {};
-current_time = time.time();
-
-################################################################################
-
-def usage(exit_code=0):
- print """Usage: tea MODE
-Run various sanity checks of the archive and/or database.
-
- -h, --help show this help and exit.
-
-The following MODEs are available:
-
- md5sums - validate the md5sums stored in the database
- files - check files in the database against what's in the archive
- dsc-syntax - validate the syntax of .dsc files in the archive
- missing-overrides - check for missing overrides
- source-in-one-dir - ensure the source for each package is in one directory
- timestamps - check for future timestamps in .deb's
- tar-gz-in-dsc - ensure each .dsc lists a .tar.gz file
- validate-indices - ensure files mentioned in Packages & Sources exist
- files-not-symlinks - check files in the database aren't symlinks
- validate-builddeps - validate build-dependencies of .dsc files in the archive
-"""
- sys.exit(exit_code)
-
-################################################################################
-
-def process_dir (unused, dirname, filenames):
- global waste, db_files, excluded;
-
- if dirname.find('/disks-') != -1 or dirname.find('upgrade-') != -1:
- return;
- # hack; can't handle .changes files
- if dirname.find('proposed-updates') != -1:
- return;
- for name in filenames:
- filename = os.path.abspath(dirname+'/'+name);
- filename = filename.replace('potato-proposed-updates', 'proposed-updates');
- if os.path.isfile(filename) and not os.path.islink(filename) and not db_files.has_key(filename) and not excluded.has_key(filename):
- waste += os.stat(filename)[stat.ST_SIZE];
- print filename
-
-################################################################################
-
-def check_files():
- global db_files;
-
- print "Building list of database files...";
- q = projectB.query("SELECT l.path, f.filename FROM files f, location l WHERE f.location = l.id")
- ql = q.getresult();
-
- db_files.clear();
- for i in ql:
- filename = os.path.abspath(i[0] + i[1]);
- db_files[filename] = "";
- if os.access(filename, os.R_OK) == 0:
- utils.warn("'%s' doesn't exist." % (filename));
-
- filename = Cnf["Dir::Override"]+'override.unreferenced';
- if os.path.exists(filename):
- file = utils.open_file(filename);
- for filename in file.readlines():
- filename = filename[:-1];
- excluded[filename] = "";
-
- print "Checking against existing files...";
-
- os.path.walk(Cnf["Dir::Root"]+'pool/', process_dir, None);
-
- print
- print "%s wasted..." % (utils.size_type(waste));
-
-################################################################################
-
-def check_dscs():
- count = 0;
- suite = 'unstable';
- for component in Cnf.SubTree("Component").List():
- if component == "mixed":
- continue;
- component = component.lower();
- list_filename = '%s%s_%s_source.list' % (Cnf["Dir::Lists"], suite, component);
- list_file = utils.open_file(list_filename);
- for line in list_file.readlines():
- file = line[:-1];
- try:
- utils.parse_changes(file, signing_rules=1);
- except utils.invalid_dsc_format_exc, line:
- utils.warn("syntax error in .dsc file '%s', line %s." % (file, line));
- count += 1;
-
- if count:
- utils.warn("Found %s invalid .dsc files." % (count));
-
-################################################################################
-
-def check_override():
- for suite in [ "stable", "unstable" ]:
- print suite
- print "-"*len(suite)
- print
- suite_id = db_access.get_suite_id(suite);
- q = projectB.query("""
-SELECT DISTINCT b.package FROM binaries b, bin_associations ba
- WHERE b.id = ba.bin AND ba.suite = %s AND NOT EXISTS
- (SELECT 1 FROM override o WHERE o.suite = %s AND o.package = b.package)"""
- % (suite_id, suite_id));
- print q
- q = projectB.query("""
-SELECT DISTINCT s.source FROM source s, src_associations sa
- WHERE s.id = sa.source AND sa.suite = %s AND NOT EXISTS
- (SELECT 1 FROM override o WHERE o.suite = %s and o.package = s.source)"""
- % (suite_id, suite_id));
- print q
-
-################################################################################
-
-# Ensure that the source files for any given package are all in one
-# directory so that 'apt-get source' works...
-
-def check_source_in_one_dir():
- # Not the most enterprising method, but hey...
- broken_count = 0;
- q = projectB.query("SELECT id FROM source;");
- for i in q.getresult():
- source_id = i[0];
- q2 = projectB.query("""
-SELECT l.path, f.filename FROM files f, dsc_files df, location l WHERE df.source = %s AND f.id = df.file AND l.id = f.location"""
- % (source_id));
- first_path = "";
- first_filename = "";
- broken = 0;
- for j in q2.getresult():
- filename = j[0] + j[1];
- path = os.path.dirname(filename);
- if first_path == "":
- first_path = path;
- first_filename = filename;
- elif first_path != path:
- symlink = path + '/' + os.path.basename(first_filename);
- if not os.path.exists(symlink):
- broken = 1;
- print "WOAH, we got a live one here... %s [%s] {%s}" % (filename, source_id, symlink);
- if broken:
- broken_count += 1;
- print "Found %d source packages where the source is not all in one directory." % (broken_count);
-
-################################################################################
-
-def check_md5sums():
- print "Getting file information from database...";
- q = projectB.query("SELECT l.path, f.filename, f.md5sum, f.size FROM files f, location l WHERE f.location = l.id")
- ql = q.getresult();
-
- print "Checking file md5sums & sizes...";
- for i in ql:
- filename = os.path.abspath(i[0] + i[1]);
- db_md5sum = i[2];
- db_size = int(i[3]);
- try:
- file = utils.open_file(filename);
- except:
- utils.warn("can't open '%s'." % (filename));
- continue;
- md5sum = apt_pkg.md5sum(file);
- size = os.stat(filename)[stat.ST_SIZE];
- if md5sum != db_md5sum:
- utils.warn("**WARNING** md5sum mismatch for '%s' ('%s' [current] vs. '%s' [db])." % (filename, md5sum, db_md5sum));
- if size != db_size:
- utils.warn("**WARNING** size mismatch for '%s' ('%s' [current] vs. '%s' [db])." % (filename, size, db_size));
-
- print "Done."
-
-################################################################################
-#
-# Check all files for timestamps in the future; common on hardware
-# (e.g. alpha) whose clocks default to far-future dates.
-
-def Ent(Kind,Name,Link,Mode,UID,GID,Size,MTime,Major,Minor):
- global future_files;
-
- if MTime > current_time:
- future_files[current_file] = MTime;
- print "%s: %s '%s','%s',%u,%u,%u,%u,%u,%u,%u" % (current_file, Kind,Name,Link,Mode,UID,GID,Size, MTime, Major, Minor);
-
-def check_timestamps():
- global current_file;
-
- q = projectB.query("SELECT l.path, f.filename FROM files f, location l WHERE f.location = l.id AND f.filename ~ '\\.deb$'")
- ql = q.getresult();
- db_files.clear();
- count = 0;
- for i in ql:
- filename = os.path.abspath(i[0] + i[1]);
- if os.access(filename, os.R_OK):
- file = utils.open_file(filename);
- current_file = filename;
- sys.stderr.write("Processing %s.\n" % (filename));
- apt_inst.debExtract(file,Ent,"control.tar.gz");
- file.seek(0);
- apt_inst.debExtract(file,Ent,"data.tar.gz");
- count += 1;
- print "Checked %d files (out of %d)." % (count, len(db_files.keys()));
-
-################################################################################
-
-def check_missing_tar_gz_in_dsc():
- count = 0;
-
- print "Building list of database files...";
- q = projectB.query("SELECT l.path, f.filename FROM files f, location l WHERE f.location = l.id AND f.filename ~ '\\.dsc$'");
- ql = q.getresult();
- if ql:
- print "Checking %d files..." % len(ql);
- else:
- print "No files to check."
- for i in ql:
- filename = os.path.abspath(i[0] + i[1]);
- try:
- # NB: don't enforce .dsc syntax
- dsc = utils.parse_changes(filename);
- except:
- utils.fubar("error parsing .dsc file '%s'." % (filename));
- dsc_files = utils.build_file_list(dsc, is_a_dsc=1);
- has_tar = 0;
- for file in dsc_files.keys():
- m = utils.re_issource.match(file);
- if not m:
- utils.fubar("%s not recognised as source." % (file));
- type = m.group(3);
- if type == "orig.tar.gz" or type == "tar.gz":
- has_tar = 1;
- if not has_tar:
- utils.warn("%s has no .tar.gz in its .dsc file." % (filename));
- count += 1;
-
- if count:
- utils.warn("Found %s invalid .dsc files." % (count));
-
-
-################################################################################
-
-def validate_sources(suite, component):
- filename = "%s/dists/%s/%s/source/Sources.gz" % (Cnf["Dir::Root"], suite, component);
- print "Processing %s..." % (filename);
- # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
- temp_filename = utils.temp_filename();
- (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
- if (result != 0):
- sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output));
- sys.exit(result);
- sources = utils.open_file(temp_filename);
- Sources = apt_pkg.ParseTagFile(sources);
- while Sources.Step():
- source = Sources.Section.Find('Package');
- directory = Sources.Section.Find('Directory');
- files = Sources.Section.Find('Files');
- for i in files.split('\n'):
- (md5, size, name) = i.split();
- filename = "%s/%s/%s" % (Cnf["Dir::Root"], directory, name);
- if not os.path.exists(filename):
- if directory.find("potato") == -1:
- print "W: %s missing." % (filename);
- else:
- pool_location = utils.poolify (source, component);
- pool_filename = "%s/%s/%s" % (Cnf["Dir::Pool"], pool_location, name);
- if not os.path.exists(pool_filename):
- print "E: %s missing (%s)." % (filename, pool_filename);
- else:
- # Create symlink
- pool_filename = os.path.normpath(pool_filename);
- filename = os.path.normpath(filename);
- src = utils.clean_symlink(pool_filename, filename, Cnf["Dir::Root"]);
- print "Symlinking: %s -> %s" % (filename, src);
- #os.symlink(src, filename);
- sources.close();
- os.unlink(temp_filename);
-
-########################################
-
-def validate_packages(suite, component, architecture):
- filename = "%s/dists/%s/%s/binary-%s/Packages.gz" \
- % (Cnf["Dir::Root"], suite, component, architecture);
- print "Processing %s..." % (filename);
- # apt_pkg.ParseTagFile needs a real file handle and can't handle a GzipFile instance...
- temp_filename = utils.temp_filename();
- (result, output) = commands.getstatusoutput("gunzip -c %s > %s" % (filename, temp_filename));
- if (result != 0):
- sys.stderr.write("Gunzip invocation failed!\n%s\n" % (output));
- sys.exit(result);
- packages = utils.open_file(temp_filename);
- Packages = apt_pkg.ParseTagFile(packages);
- while Packages.Step():
- filename = "%s/%s" % (Cnf["Dir::Root"], Packages.Section.Find('Filename'));
- if not os.path.exists(filename):
- print "W: %s missing." % (filename);
- packages.close();
- os.unlink(temp_filename);
-
-########################################
-
-def check_indices_files_exist():
- for suite in [ "stable", "testing", "unstable" ]:
- for component in Cnf.ValueList("Suite::%s::Components" % (suite)):
- architectures = Cnf.ValueList("Suite::%s::Architectures" % (suite));
- for arch in map(string.lower, architectures):
- if arch == "source":
- validate_sources(suite, component);
- elif arch == "all":
- continue;
- else:
- validate_packages(suite, component, arch);
-
-################################################################################
-
-def check_files_not_symlinks():
- print "Building list of database files... ",;
- before = time.time();
- q = projectB.query("SELECT l.path, f.filename, f.id FROM files f, location l WHERE f.location = l.id")
- print "done. (%d seconds)" % (int(time.time()-before));
- q_files = q.getresult();
-
-# locations = {};
-# q = projectB.query("SELECT l.path, c.name, l.id FROM location l, component c WHERE l.component = c.id");
-# for i in q.getresult():
-# path = os.path.normpath(i[0] + i[1]);
-# locations[path] = (i[0], i[2]);
-
-# q = projectB.query("BEGIN WORK");
- for i in q_files:
- filename = os.path.normpath(i[0] + i[1]);
-# file_id = i[2];
- if os.access(filename, os.R_OK) == 0:
- utils.warn("%s: doesn't exist." % (filename));
- else:
- if os.path.islink(filename):
- utils.warn("%s: is a symlink." % (filename));
- # You probably don't want to use the rest of this...
-# print "%s: is a symlink." % (filename);
-# dest = os.readlink(filename);
-# if not os.path.isabs(dest):
-# dest = os.path.normpath(os.path.join(os.path.dirname(filename), dest));
-# print "--> %s" % (dest);
-# # Determine suitable location ID
-# # [in what must be the suckiest way possible?]
-# location_id = None;
-# for path in locations.keys():
-# if dest.find(path) == 0:
-# (location, location_id) = locations[path];
-# break;
-# if not location_id:
-# utils.fubar("Can't find location for %s (%s)." % (dest, filename));
-# new_filename = dest.replace(location, "");
-# q = projectB.query("UPDATE files SET filename = '%s', location = %s WHERE id = %s" % (new_filename, location_id, file_id));
-# q = projectB.query("COMMIT WORK");
-
-################################################################################
-
-def chk_bd_process_dir (unused, dirname, filenames):
- for name in filenames:
- if not name.endswith(".dsc"):
- continue;
- filename = os.path.abspath(dirname+'/'+name);
- dsc = utils.parse_changes(filename);
- for field_name in [ "build-depends", "build-depends-indep" ]:
- field = dsc.get(field_name);
- if field:
- try:
- apt_pkg.ParseSrcDepends(field);
- except:
- print "E: [%s] %s: %s" % (filename, field_name, field);
- pass;
-
-################################################################################
-
-def check_build_depends():
- os.path.walk(Cnf["Dir::Root"], chk_bd_process_dir, None);
-
-################################################################################
-
-def main ():
- global Cnf, projectB, db_files, waste, excluded;
-
- Cnf = utils.get_conf();
- Arguments = [('h',"help","Tea::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Tea::Options::%s" % (i)):
- Cnf["Tea::Options::%s" % (i)] = "";
-
- args = apt_pkg.ParseCommandLine(Cnf, Arguments, sys.argv);
-
- Options = Cnf.SubTree("Tea::Options")
- if Options["Help"]:
- usage();
-
- if len(args) < 1:
- utils.warn("tea requires at least one argument");
- usage(1);
- elif len(args) > 1:
- utils.warn("tea accepts only one argument");
- usage(1);
- mode = args[0].lower();
-
- projectB = pg.connect(Cnf["DB::Name"], Cnf["DB::Host"], int(Cnf["DB::Port"]));
- db_access.init(Cnf, projectB);
-
- if mode == "md5sums":
- check_md5sums();
- elif mode == "files":
- check_files();
- elif mode == "dsc-syntax":
- check_dscs();
- elif mode == "missing-overrides":
- check_override();
- elif mode == "source-in-one-dir":
- check_source_in_one_dir();
- elif mode == "timestamps":
- check_timestamps();
- elif mode == "tar-gz-in-dsc":
- check_missing_tar_gz_in_dsc();
- elif mode == "validate-indices":
- check_indices_files_exist();
- elif mode == "files-not-symlinks":
- check_files_not_symlinks();
- elif mode == "validate-builddeps":
- check_build_depends();
- else:
- utils.warn("unknown mode '%s'" % (mode));
- usage(1);
-
-################################################################################
-
-if __name__ == '__main__':
- main();
-
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-
-Format: 1.0
-Source: amaya
-Version: 3.2.1-1
-Binary: amaya
-Maintainer: Steve Dunham <dunham@debian.org>
-Architecture: any
-Standards-Version: 2.4.0.0
-Files:
- 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
- da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
-
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.2 (GNU/Linux)
-Comment: For info see http://www.gnupg.org
-
-iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
-rhYnRmVuNMa8oYSvL4hl/Yw=
-=EFAA
------END PGP SIGNATURE-----
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-
-Format: 1.0
-Source: amaya
-Version: 3.2.1-1
-Binary: amaya
-Maintainer: Steve Dunham <dunham@debian.org>
-Architecture: any
-Standards-Version: 2.4.0.0
-Files:
- 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
- da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.2 (GNU/Linux)
-Comment: For info see http://www.gnupg.org
-
-iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
-rhYnRmVuNMa8oYSvL4hl/Yw=
-=EFAA
------END PGP SIGNATURE-----
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-Format: 1.0
-Source: amaya
-Version: 3.2.1-1
-Binary: amaya
-Maintainer: Steve Dunham <dunham@debian.org>
-Architecture: any
-Standards-Version: 2.4.0.0
-Files:
- 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
- da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
-
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.2 (GNU/Linux)
-Comment: For info see http://www.gnupg.org
-
-iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
-rhYnRmVuNMa8oYSvL4hl/Yw=
-=EFAA
------END PGP SIGNATURE-----
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-Format: 1.0
-Source: amaya
-Version: 3.2.1-1
-Binary: amaya
-Maintainer: Steve Dunham <dunham@debian.org>
-Architecture: any
-Standards-Version: 2.4.0.0
-Files:
- 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
- da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.2 (GNU/Linux)
-Comment: For info see http://www.gnupg.org
-iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
-rhYnRmVuNMa8oYSvL4hl/Yw=
-=EFAA
------END PGP SIGNATURE-----
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-
-Format: 1.0
-Source: amaya
-Version: 3.2.1-1
-Binary: amaya
-Maintainer: Steve Dunham <dunham@debian.org>
-Architecture: any
-Standards-Version: 2.4.0.0
-Files:
- 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
- da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
-
-
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.2 (GNU/Linux)
-Comment: For info see http://www.gnupg.org
-
-iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
-rhYnRmVuNMa8oYSvL4hl/Yw=
-=EFAA
------END PGP SIGNATURE-----
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-
-
-Format: 1.0
-Source: amaya
-Version: 3.2.1-1
-Binary: amaya
-Maintainer: Steve Dunham <dunham@debian.org>
-Architecture: any
-Standards-Version: 2.4.0.0
-Files:
- 07f95f92b7cb0f12f7cf65ee5c5fbde2 4532418 amaya_3.2.1.orig.tar.gz
- da06b390946745d9efaf9e7df8e05092 4817 amaya_3.2.1-1.diff.gz
-
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.2 (GNU/Linux)
-Comment: For info see http://www.gnupg.org
-
-iD8DBQE5j091iPgEjVqvb1kRAvFtAJ0asUAaac6ebfR3YeaH16HjL7F3GwCfV+AQ
-rhYnRmVuNMa8oYSvL4hl/Yw=
-=EFAA
------END PGP SIGNATURE-----
+++ /dev/null
-#!/usr/bin/env python
-
-# Check utils.parse_changes()'s .dsc file validation
-# Copyright (C) 2000 James Troup <james@nocrew.org>
-# $Id: test.py,v 1.1 2001-01-28 09:06:44 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, sys
-
-sys.path.append(os.path.abspath('../../'));
-
-import utils
-
-################################################################################
-
-def fail(message):
- sys.stderr.write("%s\n" % (message));
- sys.exit(1);
-
-################################################################################
-
-def main ():
- # Valid .dsc
- utils.parse_changes('1.dsc',1);
-
- # Missing blank line before signature body
- try:
- utils.parse_changes('2.dsc',1);
- except utils.invalid_dsc_format_exc, line:
- if line != 14:
- fail("Incorrect line number ('%s') for test #2." % (line));
- else:
- fail("Test #2 wasn't recognised as invalid.");
-
- # Missing blank line after signature header
- try:
- utils.parse_changes('3.dsc',1);
- except utils.invalid_dsc_format_exc, line:
- if line != 14:
- fail("Incorrect line number ('%s') for test #3." % (line));
- else:
- fail("Test #3 wasn't recognised as invalid.");
-
- # No blank lines at all
- try:
- utils.parse_changes('4.dsc',1);
- except utils.invalid_dsc_format_exc, line:
- if line != 19:
- fail("Incorrect line number ('%s') for test #4." % (line));
- else:
- fail("Test #4 wasn't recognised as invalid.");
-
- # Extra blank line before signature body
- try:
- utils.parse_changes('5.dsc',1);
- except utils.invalid_dsc_format_exc, line:
- if line != 15:
- fail("Incorrect line number ('%s') for test #5." % (line));
- else:
- fail("Test #5 wasn't recognised as invalid.");
-
- # Extra blank line after signature header
- try:
- utils.parse_changes('6.dsc',1);
- except utils.invalid_dsc_format_exc, line:
- if line != 5:
- fail("Incorrect line number ('%s') for test #6." % (line));
- else:
- fail("Test #6 wasn't recognised as invalid.");
-
- # Valid .dsc ; ignoring errors
- utils.parse_changes('1.dsc', 0);
-
- # Invalid .dsc ; ignoring errors
- utils.parse_changes('2.dsc', 0);
-
-################################################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-#!/usr/bin/env python
-
-# Check utils.parse_changes()'s handling of empty files
-# Copyright (C) 2000 James Troup <james@nocrew.org>
-# $Id: test.py,v 1.1 2001-03-02 02:31:07 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, sys
-
-sys.path.append(os.path.abspath('../../'));
-
-import utils
-
-################################################################################
-
-def fail(message):
- sys.stderr.write("%s\n" % (message));
- sys.exit(1);
-
-################################################################################
-
-def main ():
- # Empty .changes file; should raise a 'parse error' exception.
- try:
- utils.parse_changes('empty.changes', 0)
- except utils.changes_parse_error_exc, line:
- if line != "[Empty changes file]":
- fail("Returned exception with unexpected error message `%s'." % (line));
- else:
- fail("Didn't raise a 'parse error' exception for a zero-length .changes file.");
-
-################################################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-
-Format: 1.7
-Date: Fri, 20 Apr 2001 02:47:21 -0400
-Source: krb5
-Binary: krb5-kdc krb5-doc krb5-rsh-server libkrb5-dev libkrb53 krb5-ftpd
- krb5-clients krb5-user libkadm54 krb5-telnetd krb5-admin-server
-Architecture: m68k
-Version: 1.2.2-4
-Distribution: unstable
-Urgency: low
-Maintainer: buildd m68k user account <buildd@ax.westfalen.de>
-Changed-By: Sam Hartman <hartmans@debian.org>
-Description:
- krb5-admin-server - Mit Kerberos master server (kadmind)
- krb5-clients - Secure replacements for ftp, telnet and rsh using MIT Kerberos
- krb5-ftpd - Secure FTP server supporting MIT Kerberos
- krb5-kdc - Mit Kerberos key server (KDC)
- krb5-rsh-server - Secure replacements for rshd and rlogind using MIT Kerberos
- krb5-telnetd - Secure telnet server supporting MIT Kerberos
- krb5-user - Basic programs to authenticate using MIT Kerberos
- libkadm54 - MIT Kerberos administration runtime libraries
- libkrb5-dev - Headers and development libraries for MIT Kerberos
- libkrb53 - MIT Kerberos runtime libraries
-Closes: 94407
-Changes:
- krb5 (1.2.2-4) unstable; urgency=low
- .
- * Fix shared libraries to build with gcc not ld to properly include
- -lgcc symbols, closes: #94407
-Files:
- 563dac1cdd3ba922f9301fe074fbfc80 65836 non-us/main optional libkadm54_1.2.2-4_m68k.deb
- bb620f589c17ab0ebea1aa6e10ca52ad 272198 non-us/main optional libkrb53_1.2.2-4_m68k.deb
- 40af6e64b3030a179e0de25bd95c95e9 143264 non-us/main optional krb5-user_1.2.2-4_m68k.deb
- ffe4e5e7b2cab162dc608d56278276cf 141870 non-us/main optional krb5-clients_1.2.2-4_m68k.deb
- 4fe01d1acb4b82ce0b8b72652a9a15ae 54592 non-us/main optional krb5-rsh-server_1.2.2-4_m68k.deb
- b3c8c617ea72008a33b869b75d2485bf 41292 non-us/main optional krb5-ftpd_1.2.2-4_m68k.deb
- 5908f8f60fe536d7bfc1ef3fdd9d74cc 42090 non-us/main optional krb5-telnetd_1.2.2-4_m68k.deb
- 650ea769009a312396e56503d0059ebc 160236 non-us/main optional krb5-kdc_1.2.2-4_m68k.deb
- 399c9de4e9d7d0b0f5626793808a4391 160392 non-us/main optional krb5-admin-server_1.2.2-4_m68k.deb
- 6f962fe530c3187e986268b4e4d27de9 398662 non-us/main optional libkrb5-dev_1.2.2-4_m68k.deb
-
------BEGIN PGP SIGNATURE-----
-Version: 2.6.3i
-Charset: noconv
-
-iQCVAwUBOvVPPm547I3m3eHJAQHyaQP+M7RXVEqZ2/xHiPzaPcZRJ4q7o0zbMaU8
-qG/Mi6kuR1EhRNMjMH4Cp6ctbhRDHK5FR/8v7UkOd+ETDAhiw7eqJnLC60EZxZ/H
-CiOs8JklAXDERkQ3i7EYybv46Gxx91pIs2nE4xVKnG16d/wFELWMBLY6skF1B2/g
-zZju3cuFCCE=
-=Vm59
------END PGP SIGNATURE-----
-
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Check utils.parse_changes()'s handling of multi-line fields
-# Copyright (C) 2000 James Troup <james@nocrew.org>
-# $Id: test.py,v 1.2 2002-10-16 02:47:32 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-# The deal here is that for the first 6 months of its existence katie
-# misparsed multi-line fields in .changes files; specifically,
-# multi-line fields where there _is_ data on the first line. So, for
-# example:
-
-# Foo: bar baz
-# bat bant
-
-# Became "foo: bar bazbat bant" rather than "foo: bar baz\nbat bant"
-
-################################################################################
-
-import os, sys
-
-sys.path.append(os.path.abspath('../../'));
-
-import utils
-
-################################################################################
-
-def fail(message):
- sys.stderr.write("%s\n" % (message));
- sys.exit(1);
-
-################################################################################
-
-def main ():
- # Valid .changes file with a multi-line Binary: field
- try:
- changes = utils.parse_changes('krb5_1.2.2-4_m68k.changes', 0)
- except utils.changes_parse_error_exc, line:
- fail("parse_changes() returned an exception with error message `%s'." % (line));
-
- o = changes.get("binary", "")
- if o != "":
- del changes["binary"]
- changes["binary"] = {}
- for j in o.split():
- changes["binary"][j] = 1
-
- if not changes["binary"].has_key("krb5-ftpd"):
- fail("parse_changes() is broken; 'krb5-ftpd' is not in the Binary: dictionary.");
-
-################################################################################
-
-if __name__ == '__main__':
- main()
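The join behaviour this test guards against can be sketched stand-alone (a modern illustration with a hypothetical `parse_stanza` helper; this is not katie's `utils.parse_changes()`):

```python
# Sketch of multi-line field handling: continuation lines must be joined
# with "\n", not naively concatenated (the bug described above turned
# "bar baz\nbat bant" into "bar bazbat bant").
def parse_stanza(text):
    fields = {}
    key = None
    for line in text.splitlines():
        if line[:1] in (" ", "\t") and key:
            # Continuation line: preserve the line break.
            fields[key] += "\n" + line.strip()
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.lower()
            fields[key] = value.strip()
    return fields

stanza = "Source: krb5\nBinary: krb5-kdc krb5-doc\n krb5-clients krb5-ftpd\n"
binaries = set(parse_stanza(stanza)["binary"].split())
```

With the `\n` join, "krb5-doc" and "krb5-clients" stay separate words, so a membership check like the one above succeeds.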
+++ /dev/null
-#!/usr/bin/env python
-
-# Check utils.extract_component_from_section()
-# Copyright (C) 2000 James Troup <james@nocrew.org>
-# $Id: test.py,v 1.3 2002-10-16 02:47:32 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, sys;
-
-sys.path.append(os.path.abspath('../../'));
-
-import utils
-
-################################################################################
-
-def fail(message):
- sys.stderr.write("%s\n" % (message));
- sys.exit(1);
-
-################################################################################
-
-# prefix: non-US
-# component: main, contrib, non-free
-# section: games, admin, libs, [...]
-
-# [1] Order is as above.
-# [2] Prefix is optional for the default archive, but mandatory when
-# uploads are going anywhere else.
-# [3] Default component is main and may be omitted.
-# [4] Section is optional.
-# [5] Prefix is case insensitive
-# [6] Everything else is case sensitive.
-
-def test(input, output):
- result = utils.extract_component_from_section(input);
- if result != output:
- fail ("%s -> %r [should have been %r]" % (input, result, output));
-
-def main ():
- # Err, whoops? should probably be "utils", "main"...
- input = "main/utils"; output = ("main/utils", "main");
- test (input, output);
-
-
- # Validate #3
- input = "utils"; output = ("utils", "main");
- test (input, output);
-
- input = "non-free/libs"; output = ("non-free/libs", "non-free");
- test (input, output);
-
- input = "contrib/net"; output = ("contrib/net", "contrib");
- test (input, output);
-
-
- # Validate #3 with a prefix
- input = "non-US"; output = ("non-US", "non-US/main");
- test (input, output);
-
-
- # Validate #4
- input = "main"; output = ("main", "main");
- test (input, output);
-
- input = "contrib"; output = ("contrib", "contrib");
- test (input, output);
-
- input = "non-free"; output = ("non-free", "non-free");
- test (input, output);
-
-
- # Validate #4 with a prefix
- input = "non-US/main"; output = ("non-US/main", "non-US/main");
- test (input, output);
-
- input = "non-US/contrib"; output = ("non-US/contrib", "non-US/contrib");
- test (input, output);
-
- input = "non-US/non-free"; output = ("non-US/non-free", "non-US/non-free");
- test (input, output);
-
-
- # Validate #5
- input = "non-us"; output = ("non-us", "non-US/main");
- test (input, output);
-
- input = "non-us/contrib"; output = ("non-us/contrib", "non-US/contrib");
- test (input, output);
-
-
- # Validate #6 (section)
- input = "utIls"; output = ("utIls", "main");
- test (input, output);
-
- # Others..
- input = "non-US/libs"; output = ("non-US/libs", "non-US/main");
- test (input, output);
- input = "non-US/main/libs"; output = ("non-US/main/libs", "non-US/main");
- test (input, output);
- input = "non-US/contrib/libs"; output = ("non-US/contrib/libs", "non-US/contrib");
- test (input, output);
- input = "non-US/non-free/libs"; output = ("non-US/non-free/libs", "non-US/non-free");
- test (input, output);
-
-################################################################################
-
-if __name__ == '__main__':
- main()
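The numbered rules above can be condensed into a stand-alone sketch (an illustration of the expected input/output mapping only, not dak's actual `extract_component_from_section`):

```python
# Sketch of rules [1]-[6]: optional case-insensitive non-US prefix,
# component defaulting to main, everything else case-sensitive.
COMPONENTS = ("main", "contrib", "non-free")

def extract_component(section):
    parts = section.split("/")
    prefix = ""
    if parts[0].lower() == "non-us":     # rule [5]: prefix is case-insensitive
        prefix = "non-US/"
        parts = parts[1:]
    # rules [3] and [6]: component defaults to main; matching is case-sensitive
    component = parts[0] if parts and parts[0] in COMPONENTS else "main"
    return (section, prefix + component)
```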
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-
-Format: 1.7
-Date: Tue, 9 Sep 2003 01:16:01 +0100
-Source: gawk
-Binary: gawk
-Architecture: source i386
-Version: 1:3.1.3-2
-Distribution: unstable
-Urgency: low
-Maintainer: James Troup <james@nocrew.org>
-Changed-By: James Troup <james@nocrew.org>
-Description:
- gawk - GNU awk, a pattern scanning and processing language
-Closes: 204699 204701
-Changes:
- gawk (1:3.1.3-2) unstable; urgency=low
- .
- * debian/control (Standards-Version): bump to 3.6.1.0.
- .
- * 02_fix-ascii.dpatch: new patch from upstream to fix [[:ascii:]].
- Thanks to <vle@gmx.net> for reporting the bug and forwarding it
- upstream. Closes: #204701
- .
- * 03_fix-high-char-ranges.dpatch: new patch from upstream to fix
- [\x80-\xff]. Thanks to <vle@gmx.net> for reporting the bug and
- forwarding it upstream. Closes: #204699
-Files:
- 0e6542c48bcc9d9586fc8ebe4e7242a4 561 interpreters optional gawk_3.1.3-2.dsc
- 50a29dce4a2c6e2ac38069eb7c41d9c4 8302 interpreters optional gawk_3.1.3-2.diff.gz
- 5a255c7b421ac699804212e10205f22d 871114 interpreters optional gawk_3.1.3-2_i386.deb
-
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.6 (GNU/Linux)
-
-iEYEARECAAYFAj9dHWsACgkQgD/uEicUG7DUnACglndvU4LCA0/k36Qp873N0Sau
-fCwAoMdgIOUBcUfMqXvVnxdW03ev5bNB
-=O7Gh
------END PGP SIGNATURE-----
-You: have been 0wned
+++ /dev/null
-You: have been 0wned
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-
-Format: 1.7
-Date: Tue, 9 Sep 2003 01:16:01 +0100
-Source: gawk
-Binary: gawk
-Architecture: source i386
-Version: 1:3.1.3-2
-Distribution: unstable
-Urgency: low
-Maintainer: James Troup <james@nocrew.org>
-Changed-By: James Troup <james@nocrew.org>
-Description:
- gawk - GNU awk, a pattern scanning and processing language
-Closes: 204699 204701
-Changes:
- gawk (1:3.1.3-2) unstable; urgency=low
- .
- * debian/control (Standards-Version): bump to 3.6.1.0.
- .
- * 02_fix-ascii.dpatch: new patch from upstream to fix [[:ascii:]].
- Thanks to <vle@gmx.net> for reporting the bug and forwarding it
- upstream. Closes: #204701
- .
- * 03_fix-high-char-ranges.dpatch: new patch from upstream to fix
- [\x80-\xff]. Thanks to <vle@gmx.net> for reporting the bug and
- forwarding it upstream. Closes: #204699
-Files:
- 0e6542c48bcc9d9586fc8ebe4e7242a4 561 interpreters optional gawk_3.1.3-2.dsc
- 50a29dce4a2c6e2ac38069eb7c41d9c4 8302 interpreters optional gawk_3.1.3-2.diff.gz
- 5a255c7b421ac699804212e10205f22d 871114 interpreters optional gawk_3.1.3-2_i386.deb
-
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.6 (GNU/Linux)
-
-iEYEARECAAYFAj9dHWsACgkQgD/uEicUG7DUnACglndvU4LCA0/k36Qp873N0Sau
-fCwAoMdgIOUBcUfMqXvVnxdW03ev5bNB
-=O7Gh
------END PGP SIGNATURE-----
+++ /dev/null
-#!/usr/bin/env python
-
-# Check utils.parse_changes() correctly ignores data outside the signed area
-# Copyright (C) 2004 James Troup <james@nocrew.org>
-# $Id: test.py,v 1.3 2004-03-11 00:22:19 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, sys
-
-sys.path.append(os.path.abspath('../../'));
-
-import utils
-
-################################################################################
-
-def fail(message):
- sys.stderr.write("%s\n" % (message));
- sys.exit(1);
-
-################################################################################
-
-def main ():
- for file in [ "valid", "bogus-pre", "bogus-post" ]:
- for strict_whitespace in [ 0, 1 ]:
- try:
- changes = utils.parse_changes("%s.changes" % (file), strict_whitespace)
- except utils.changes_parse_error_exc, line:
- fail("%s[%s]: parse_changes() returned an exception with error message `%s'." % (file, strict_whitespace, line));
- oh_dear = changes.get("you");
- if oh_dear:
- fail("%s[%s]: parsed and accepted unsigned data!" % (file, strict_whitespace));
-
-################################################################################
-
-if __name__ == '__main__':
- main()
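The behaviour being tested (dropping the bogus-pre/bogus-post payloads) can be sketched independently of parse_changes, using a hypothetical `signed_body` helper:

```python
# Sketch: keep only the clearsigned body of a .changes file, discarding
# anything before -----BEGIN PGP SIGNED MESSAGE----- or after
# -----BEGIN PGP SIGNATURE----- (e.g. the "You: have been 0wned" lines
# in the bogus-pre and bogus-post fixtures).
def signed_body(text):
    lines = text.splitlines()
    try:
        start = lines.index("-----BEGIN PGP SIGNED MESSAGE-----")
        end = lines.index("-----BEGIN PGP SIGNATURE-----")
    except ValueError:
        return text          # not clearsigned: return unchanged
    body = lines[start + 1:end]
    # Drop the Hash: armor header and its trailing blank line.
    while body and (body[0].startswith("Hash:") or body[0] == ""):
        body.pop(0)
    return "\n".join(body)
```

Note this sketch only trims the unsigned regions; it does not verify the signature itself.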
+++ /dev/null
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA1
-
-Format: 1.7
-Date: Tue, 9 Sep 2003 01:16:01 +0100
-Source: gawk
-Binary: gawk
-Architecture: source i386
-Version: 1:3.1.3-2
-Distribution: unstable
-Urgency: low
-Maintainer: James Troup <james@nocrew.org>
-Changed-By: James Troup <james@nocrew.org>
-Description:
- gawk - GNU awk, a pattern scanning and processing language
-Closes: 204699 204701
-Changes:
- gawk (1:3.1.3-2) unstable; urgency=low
- .
- * debian/control (Standards-Version): bump to 3.6.1.0.
- .
- * 02_fix-ascii.dpatch: new patch from upstream to fix [[:ascii:]].
- Thanks to <vle@gmx.net> for reporting the bug and forwarding it
- upstream. Closes: #204701
- .
- * 03_fix-high-char-ranges.dpatch: new patch from upstream to fix
- [\x80-\xff]. Thanks to <vle@gmx.net> for reporting the bug and
- forwarding it upstream. Closes: #204699
-Files:
- 0e6542c48bcc9d9586fc8ebe4e7242a4 561 interpreters optional gawk_3.1.3-2.dsc
- 50a29dce4a2c6e2ac38069eb7c41d9c4 8302 interpreters optional gawk_3.1.3-2.diff.gz
- 5a255c7b421ac699804212e10205f22d 871114 interpreters optional gawk_3.1.3-2_i386.deb
-
------BEGIN PGP SIGNATURE-----
-Version: GnuPG v1.0.6 (GNU/Linux)
-
-iEYEARECAAYFAj9dHWsACgkQgD/uEicUG7DUnACglndvU4LCA0/k36Qp873N0Sau
-fCwAoMdgIOUBcUfMqXvVnxdW03ev5bNB
-=O7Gh
------END PGP SIGNATURE-----
+++ /dev/null
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-# Test utils.fix_maintainer()
-# Copyright (C) 2004 James Troup <james@nocrew.org>
-# $Id: test.py,v 1.2 2004-06-23 23:11:51 troup Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import os, sys
-
-sys.path.append(os.path.abspath('../../'));
-
-import utils
-
-################################################################################
-
-def fail(message):
- sys.stderr.write("%s\n" % (message));
- sys.exit(1);
-
-################################################################################
-
-def check_valid(s, xa, xb, xc, xd):
- (a, b, c, d) = utils.fix_maintainer(s)
- if a != xa:
- fail("rfc822_maint: %s (returned) != %s (expected) [From: '%s']" % (a, xa, s));
- if b != xb:
- fail("rfc2047_maint: %s (returned) != %s (expected) [From: '%s']" % (b, xb, s));
- if c != xc:
- fail("name: %s (returned) != %s (expected) [From: '%s']" % (c, xc, s));
- if d != xd:
- fail("email: %s (returned) != %s (expected) [From: '%s']" % (d, xd, s));
-
-def check_invalid(s):
- try:
- utils.fix_maintainer(s);
- fail("%s was parsed successfully but is expected to be invalid." % (s));
- except utils.ParseMaintError, unused:
- pass;
-
-def main ():
- # Check Valid UTF-8 maintainer field
- s = "Noèl Köthe <noel@debian.org>"
- xa = "Noèl Köthe <noel@debian.org>"
- xb = "=?utf-8?b?Tm/DqGwgS8O2dGhl?= <noel@debian.org>"
- xc = "Noèl Köthe"
- xd = "noel@debian.org"
- check_valid(s, xa, xb, xc, xd);
-
- # Check valid ISO-8859-1 maintainer field
- s = "Noèl Köthe <noel@debian.org>"
- xa = "Noèl Köthe <noel@debian.org>"
- xb = "=?iso-8859-1?q?No=E8l_K=F6the?= <noel@debian.org>"
- xc = "Noèl Köthe"
- xd = "noel@debian.org"
- check_valid(s, xa, xb, xc, xd);
-
- # Check valid ASCII maintainer field
- s = "James Troup <james@nocrew.org>"
- xa = "James Troup <james@nocrew.org>"
- xb = "James Troup <james@nocrew.org>"
- xc = "James Troup"
- xd = "james@nocrew.org"
- check_valid(s, xa, xb, xc, xd);
-
- # Check "Debian vs RFC822" fixup of names with '.' or ',' in them
- s = "James J. Troup <james@nocrew.org>"
- xa = "james@nocrew.org (James J. Troup)"
- xb = "james@nocrew.org (James J. Troup)"
- xc = "James J. Troup"
- xd = "james@nocrew.org"
- check_valid(s, xa, xb, xc, xd);
- s = "James J, Troup <james@nocrew.org>"
- xa = "james@nocrew.org (James J, Troup)"
- xb = "james@nocrew.org (James J, Troup)"
- xc = "James J, Troup"
- xd = "james@nocrew.org"
- check_valid(s, xa, xb, xc, xd);
-
- # Check just-email form
- s = "james@nocrew.org"
- xa = " <james@nocrew.org>"
- xb = " <james@nocrew.org>"
- xc = ""
- xd = "james@nocrew.org"
- check_valid(s, xa, xb, xc, xd);
-
- # Check bracketed just-email form
- s = "<james@nocrew.org>"
- xa = " <james@nocrew.org>"
- xb = " <james@nocrew.org>"
- xc = ""
- xd = "james@nocrew.org"
- check_valid(s, xa, xb, xc, xd);
-
- # Check Krazy quoted-string local part email address
- s = "Cris van Pelt <\"Cris van Pelt\"@tribe.eu.org>"
- xa = "Cris van Pelt <\"Cris van Pelt\"@tribe.eu.org>"
- xb = "Cris van Pelt <\"Cris van Pelt\"@tribe.eu.org>"
- xc = "Cris van Pelt"
- xd = "\"Cris van Pelt\"@tribe.eu.org"
- check_valid(s, xa, xb, xc, xd);
-
- # Check empty string
- s = xa = xb = xc = xd = "";
- check_valid(s, xa, xb, xc, xd);
-
- # Check for missing email address
- check_invalid("James Troup");
- # Check for invalid email address
- check_invalid("James Troup <james@nocrew.org");
-
-################################################################################
-
-if __name__ == '__main__':
- main()
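The '.'/',' fixup exercised above can be sketched on its own (a hypothetical `rfc822_fix` helper covering just that one rule, not the full four-tuple `fix_maintainer`):

```python
import re

# Sketch of the "Debian vs RFC822" rule tested above: a display name
# containing '.' or ',' cannot appear unquoted before <email>, so it is
# rewritten into the "email (Name)" comment form instead.
def rfc822_fix(maintainer):
    m = re.match(r"(.*\S)\s+<(.+)>$", maintainer)
    if not m:
        return maintainer
    name, email = m.groups()
    if "." in name or "," in name:
        return "%s (%s)" % (email, name)
    return maintainer
```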
+++ /dev/null
-#!/usr/bin/env python
-
-###########################################################
-# generates partial package updates list
-
-# idea and basic implementation by Anthony, some changes by Andreas
-# parts are stolen from ziyi
-#
-# Copyright (C) 2004-5 Anthony Towns <aj@azure.humbug.org.au>
-# Copyright (C) 2004-5 Andreas Barth <aba@not.so.argh.org>
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-
-# < elmo> bah, don't bother me with annoying facts
-# < elmo> I was on a roll
-
-
-################################################################################
-
-import sys, os, tempfile
-import apt_pkg
-import utils
-
-################################################################################
-
-projectB = None;
-Cnf = None;
-Logger = None;
-Options = None;
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: tiffani [OPTIONS] [suites]
-Write out ed-style diffs to Packages/Source lists
-
- -h, --help show this help and exit
- -c give the canonical path of the file
- -p name for the patch (defaults to current time)
- -r root directory to process (overrides Dir::Root)
- -d temporary directory to use
- -m maximum number of diffs to retain
- -n take no action
- """
- sys.exit(exit_code);
-
-
-def tryunlink(file):
- try:
- os.unlink(file)
- except OSError:
- print "warning: removing of %s denied" % (file)
-
-def smartstat(file):
- for ext in ["", ".gz", ".bz2"]:
- if os.path.isfile(file + ext):
- return (ext, os.stat(file + ext))
- return (None, None)
-
-def smartlink(f, t):
- if os.path.isfile(f):
- os.link(f,t)
- elif os.path.isfile("%s.gz" % (f)):
- os.system("gzip -d < %s.gz > %s" % (f, t))
- elif os.path.isfile("%s.bz2" % (f)):
- os.system("bzip2 -d < %s.bz2 > %s" % (f, t))
- else:
- print "missing: %s" % (f)
- raise IOError, f
-
-def smartopen(file):
- if os.path.isfile(file):
- f = open(file, "r")
- elif os.path.isfile("%s.gz" % file):
- f = create_temp_file(os.popen("zcat %s.gz" % file, "r"))
- elif os.path.isfile("%s.bz2" % file):
- f = create_temp_file(os.popen("bzcat %s.bz2" % file, "r"))
- else:
- f = None
- return f
-
-def pipe_file(f, t):
- f.seek(0)
- while 1:
- l = f.read()
- if not l: break
- t.write(l)
- t.close()
-
-class Updates:
- def __init__(self, readpath = None, max = 14):
- self.can_path = None
- self.history = {}
- self.history_order = []
- self.max = max
- self.readpath = readpath
- self.filesizesha1 = None
-
- if readpath:
- try:
- f = open(readpath + "/Index")
- x = f.readline()
-
- def read_hashs(ind, f, self, x=x):
- while 1:
- x = f.readline()
- if not x or x[0] != " ": break
- l = x.split()
- if not self.history.has_key(l[2]):
- self.history[l[2]] = [None,None]
- self.history_order.append(l[2])
- self.history[l[2]][ind] = (l[0], int(l[1]))
- return x
-
- while x:
- l = x.split()
-
- if len(l) == 0:
- x = f.readline()
- continue
-
- if l[0] == "SHA1-History:":
- x = read_hashs(0,f,self)
- continue
-
- if l[0] == "SHA1-Patches:":
- x = read_hashs(1,f,self)
- continue
-
- if l[0] == "Canonical-Name:" or l[0]=="Canonical-Path:":
- self.can_path = l[1]
-
- if l[0] == "SHA1-Current:" and len(l) == 3:
- self.filesizesha1 = (l[1], int(l[2]))
-
- x = f.readline()
-
- except IOError:
- pass
-
- def dump(self, out=sys.stdout):
- if self.can_path:
- out.write("Canonical-Path: %s\n" % (self.can_path))
-
- if self.filesizesha1:
- out.write("SHA1-Current: %s %7d\n" % (self.filesizesha1))
-
- hs = self.history
- l = self.history_order[:]
-
- cnt = len(l)
- if cnt > self.max:
- for h in l[:cnt-self.max]:
- tryunlink("%s/%s.gz" % (self.readpath, h))
- del hs[h]
- l = l[cnt-self.max:]
- self.history_order = l[:]
-
- out.write("SHA1-History:\n")
- for h in l:
- out.write(" %s %7d %s\n" % (hs[h][0][0], hs[h][0][1], h))
- out.write("SHA1-Patches:\n")
- for h in l:
- out.write(" %s %7d %s\n" % (hs[h][1][0], hs[h][1][1], h))
-
-def create_temp_file(r):
- f = tempfile.TemporaryFile()
- while 1:
- x = r.readline()
- if not x: break
- f.write(x)
- r.close()
- del x,r
- f.flush()
- f.seek(0)
- return f
-
-def sizesha1(f):
- size = os.fstat(f.fileno())[6]
- f.seek(0)
- sha1sum = apt_pkg.sha1sum(f)
- return (sha1sum, size)
-
-def genchanges(Options, outdir, oldfile, origfile, maxdiffs = 14):
- if Options.has_key("NoAct"):
- print "not doing anything"
- return
-
- patchname = Options["PatchName"]
-
- # origfile = /path/to/Packages
- # oldfile = ./Packages
- # newfile = ./Packages.tmp
- # difffile = outdir/patchname
- # index => outdir/Index
-
- # (outdir, oldfile, origfile) = argv
-
- newfile = oldfile + ".new"
- difffile = "%s/%s" % (outdir, patchname)
-
- upd = Updates(outdir, int(maxdiffs))
- (oldext, oldstat) = smartstat(oldfile)
- (origext, origstat) = smartstat(origfile)
- if not origstat:
- print "%s doesn't exist" % (origfile)
- return
- if not oldstat:
- print "initial run"
- os.link(origfile + origext, oldfile + origext)
- return
-
- if oldstat[1:3] == origstat[1:3]:
- print "hardlink unbroken, assuming unchanged"
- return
-
- oldf = smartopen(oldfile)
- oldsizesha1 = sizesha1(oldf)
-
- # should probably early exit if either of these checks fail
- # alternatively (optionally?) could just trim the patch history
-
- if upd.filesizesha1:
- if upd.filesizesha1 != oldsizesha1:
- print "old file seems to have changed! %s %s => %s %s" % (upd.filesizesha1 + oldsizesha1)
-
- # XXX this should be usable now
- #
- #for d in upd.history.keys():
- # df = smartopen("%s/%s" % (outdir,d))
- # act_sha1size = sizesha1(df)
- # df.close()
- # exp_sha1size = upd.history[d][1]
- # if act_sha1size != exp_sha1size:
- # print "patch file %s seems to have changed! %s %s => %s %s" % \
- # (d,) + exp_sha1size + act_sha1size
-
- if Options.has_key("CanonicalPath"): upd.can_path=Options["CanonicalPath"]
-
- if os.path.exists(newfile): os.unlink(newfile)
- smartlink(origfile, newfile)
- newf = open(newfile, "r")
- newsizesha1 = sizesha1(newf)
- newf.close()
-
- if newsizesha1 == oldsizesha1:
- os.unlink(newfile)
- oldf.close()
- print "file unchanged, not generating diff"
- else:
- if not os.path.isdir(outdir): os.mkdir(outdir)
- print "generating diff"
- w = os.popen("diff --ed - %s | gzip -c -9 > %s.gz" %
- (newfile, difffile), "w")
- pipe_file(oldf, w)
- oldf.close()
-
- difff = smartopen(difffile)
- difsizesha1 = sizesha1(difff)
- difff.close()
-
- upd.history[patchname] = (oldsizesha1, difsizesha1)
- upd.history_order.append(patchname)
-
- upd.filesizesha1 = newsizesha1
-
- os.unlink(oldfile + oldext)
- os.link(origfile + origext, oldfile + origext)
- os.unlink(newfile)
-
- f = open(outdir + "/Index", "w")
- upd.dump(f)
- f.close()
-
-
-def main():
- global Cnf, Options, Logger
-
- os.umask(0002);
-
- Cnf = utils.get_conf();
- Arguments = [ ('h', "help", "Tiffani::Options::Help"),
- ('c', None, "Tiffani::Options::CanonicalPath", "hasArg"),
- ('p', "patchname", "Tiffani::Options::PatchName", "hasArg"),
- ('r', "rootdir", "Tiffani::Options::RootDir", "hasArg"),
- ('d', "tmpdir", "Tiffani::Options::TempDir", "hasArg"),
- ('m', "maxdiffs", "Tiffani::Options::MaxDiffs", "hasArg"),
- ('n', "n-act", "Tiffani::Options::NoAct"),
- ];
- suites = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv);
- Options = Cnf.SubTree("Tiffani::Options");
- if Options.has_key("Help"): usage();
-
- maxdiffs = Options.get("MaxDiffs::Default", "14")
- maxpackages = Options.get("MaxDiffs::Packages", maxdiffs)
- maxcontents = Options.get("MaxDiffs::Contents", maxdiffs)
- maxsources = Options.get("MaxDiffs::Sources", maxdiffs)
-
- if not Options.has_key("PatchName"):
- format = "%Y-%m-%d-%H%M.%S"
- i,o = os.popen2("date +%s" % (format))
- i.close()
- Options["PatchName"] = o.readline()[:-1]
- o.close()
-
- AptCnf = apt_pkg.newConfiguration()
- apt_pkg.ReadConfigFileISC(AptCnf,utils.which_apt_conf_file())
-
- if Options.has_key("RootDir"): Cnf["Dir::Root"] = Options["RootDir"]
-
- if not suites:
- suites = Cnf.SubTree("Suite").List()
-
- for suite in suites:
- if suite == "Experimental": continue
-
- print "Processing: " + suite
- SuiteBlock = Cnf.SubTree("Suite::" + suite)
-
- if SuiteBlock.has_key("Untouchable"):
- print "Skipping: " + suite + " (untouchable)"
- continue
-
- suite = suite.lower()
-
- architectures = SuiteBlock.ValueList("Architectures")
-
- if SuiteBlock.has_key("Components"):
- components = SuiteBlock.ValueList("Components")
- else:
- components = []
-
- suite_suffix = Cnf.Find("Dinstall::SuiteSuffix");
- if components and suite_suffix:
- longsuite = suite + "/" + suite_suffix;
- else:
- longsuite = suite;
-
- tree = SuiteBlock.get("Tree", "dists/%s" % (longsuite))
-
- if AptCnf.has_key("tree::%s" % (tree)):
- sections = AptCnf["tree::%s::Sections" % (tree)].split()
- elif AptCnf.has_key("bindirectory::%s" % (tree)):
- sections = AptCnf["bindirectory::%s::Sections" % (tree)].split()
- else:
- aptcnf_filename = os.path.basename(utils.which_apt_conf_file());
- print "ALERT: suite %s not in %s, nor untouchable!" % (suite, aptcnf_filename);
- continue
-
- for architecture in architectures:
- if architecture == "all":
- continue
-
- if architecture != "source":
- # Process Contents
- file = "%s/Contents-%s" % (Cnf["Dir::Root"] + tree,
- architecture)
- storename = "%s/%s_contents_%s" % (Options["TempDir"], suite, architecture)
- print "running contents for %s %s : " % (suite, architecture),
- genchanges(Options, file + ".diff", storename, file, \
- Cnf.get("Suite::%s::Tiffani::MaxDiffs::Contents" % (suite), maxcontents))
-
- # use sections instead of components since katie.conf
- # treats "foo/bar main" as suite "foo", suitesuffix "bar" and
- # component "bar/main". suck.
-
- for component in sections:
- if architecture == "source":
- longarch = architecture
- packages = "Sources"
- maxsuite = maxsources
- else:
- longarch = "binary-%s"% (architecture)
- packages = "Packages"
- maxsuite = maxpackages
-
- file = "%s/%s/%s/%s" % (Cnf["Dir::Root"] + tree,
- component, longarch, packages)
- storename = "%s/%s_%s_%s" % (Options["TempDir"], suite, component, architecture)
- print "running for %s %s %s : " % (suite, component, architecture),
- genchanges(Options, file + ".diff", storename, file, \
- Cnf.get("Suite::%s::Tiffani::MaxDiffs::%s" % (suite, packages), maxsuite))
-
-################################################################################
-
-if __name__ == '__main__':
- main()
+++ /dev/null
-#!/bin/sh -e
-
-. vars
-
-export TERM=linux
-
-destdir=$ftpdir/doc
-urlbase=http://www.debian.org/Bugs/
-
-cd $destdir
-
-convert () {
- src=$1; dst=$2
- rm -f .new-$dst
- echo Generating $dst from http://www.debian.org/Bugs/$src ...
- lynx -nolist -dump $urlbase$src | sed -e 's/^ *$//' | perl -00 -ne 'exit if /Back to the Debian Project homepage/; print unless ($.==1 || $.==2 || $.==3 || /^\s*Other BTS pages:$/m)' >.new-$dst
- if cmp -s .new-$dst $dst ; then rm -f .new-$dst
- else mv -f .new-$dst $dst
- fi
-}
-
-convert Reporting.html bug-reporting.txt
-convert Access.html bug-log-access.txt
-convert server-request.html bug-log-mailserver.txt
-convert Developer.html bug-maint-info.txt
-convert server-control.html bug-maint-mailcontrol.txt
-convert server-refcard.html bug-mailserver-refcard.txt
+++ /dev/null
-#!/bin/sh
-#
-# Fetches the latest copy of mailing-lists.txt
-# Michael Beattie <mjb@debian.org>
-
-. vars
-
-cd $ftpdir/doc
-
-echo Updating archive version of mailing-lists.txt
-wget -t1 -T20 -q -N http://www.debian.org/misc/mailing-lists.txt || \
- echo "Some error occurred..."
-
+++ /dev/null
-#!/bin/sh
-#
-# Very, very hackish script... don't laugh.
-# Michael Beattie <mjb@debian.org>
-
-. vars
-
-prog=$scriptdir/mirrorlist/mirror_list.pl
-masterlist=$scriptdir/mirrorlist/Mirrors.masterlist
-
-test ! -f $HOME/.cvspass && \
- echo ":pserver:anonymous@cvs.debian.org:/cvs/webwml A" > $HOME/.cvspass
-grep -q "cvs.debian.org:/cvs/webwml" ~/.cvspass || \
- echo ":pserver:anonymous@cvs.debian.org:/cvs/webwml A" >> $HOME/.cvspass
-
-cd $(dirname $masterlist)
-cvs update
-
-if [ ! -f $ftpdir/README.mirrors.html -o $masterlist -nt $ftpdir/README.mirrors.html ] ; then
- rm -f $ftpdir/README.mirrors.html $ftpdir/README.mirrors.txt
- $prog -m $masterlist -t html > $ftpdir/README.mirrors.html
- $prog -m $masterlist -t text > $ftpdir/README.mirrors.txt
- if [ ! -f $ftpdir/README.non-US -o $masterlist -nt $ftpdir/README.non-US ] ; then
- rm -f $ftpdir/README.non-US
- $prog -m $masterlist -t nonus > $ftpdir/README.non-US
- install -m 664 $ftpdir/README.non-US $webdir
- fi
- echo Updated archive version of mirrors file
-fi
+++ /dev/null
-#!/bin/sh
-#
-# Fetches an up-to-date copy of README.non-US for pandora
-# Michael Beattie <mjb@debian.org>
-
-. vars-non-US
-
-cd $ftpdir
-
-echo Updating non-US version of README.non-US
-wget -t1 -T20 -q -N http://ftp-master.debian.org/README.non-US || \
- echo "Some error occurred..."
-
+++ /dev/null
-#!/usr/bin/env python
-
-# Utility functions
-# Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005 James Troup <james@nocrew.org>
-# $Id: utils.py,v 1.73 2005-03-18 05:24:38 troup Exp $
-
-################################################################################
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-################################################################################
-
-import codecs, commands, email.Header, os, pwd, re, select, socket, shutil, \
- string, sys, tempfile, traceback;
-import apt_pkg;
-import db_access;
-
-################################################################################
-
-re_comments = re.compile(r"\#.*")
-re_no_epoch = re.compile(r"^\d+\:")
-re_no_revision = re.compile(r"-[^-]+$")
-re_arch_from_filename = re.compile(r"/binary-[^/]+/")
-re_extract_src_version = re.compile (r"(\S+)\s*\((.*)\)")
-re_isadeb = re.compile (r"(.+?)_(.+?)_(.+)\.u?deb$");
-re_issource = re.compile (r"(.+)_(.+?)\.(orig\.tar\.gz|diff\.gz|tar\.gz|dsc)$");
-
-re_single_line_field = re.compile(r"^(\S*)\s*:\s*(.*)");
-re_multi_line_field = re.compile(r"^\s(.*)");
-re_taint_free = re.compile(r"^[-+~/\.\w]+$");
-
-re_parse_maintainer = re.compile(r"^\s*(\S.*\S)\s*\<([^\>]+)\>");
-
-changes_parse_error_exc = "Can't parse line in .changes file";
-invalid_dsc_format_exc = "Invalid .dsc file";
-nk_format_exc = "Unknown Format: in .changes file";
-no_files_exc = "No Files: field in .dsc or .changes file.";
-cant_open_exc = "Can't open file";
-unknown_hostname_exc = "Unknown hostname";
-cant_overwrite_exc = "Permission denied; can't overwrite existing file."
-file_exists_exc = "Destination file exists";
-sendmail_failed_exc = "Sendmail invocation failed";
-tried_too_hard_exc = "Tried too hard to find a free filename.";
-
-default_config = "/etc/katie/katie.conf";
-default_apt_config = "/etc/katie/apt.conf";
-
-################################################################################
-
-class Error(Exception):
- """Base class for exceptions in this module."""
- pass;
-
-class ParseMaintError(Error):
- """Exception raised for errors in parsing a maintainer field.
-
- Attributes:
- message -- explanation of the error
- """
-
- def __init__(self, message):
- self.args = message,;
- self.message = message;
-
-################################################################################
-
-def open_file(filename, mode='r'):
- try:
- f = open(filename, mode);
- except IOError:
- raise cant_open_exc, filename;
- return f
-
-################################################################################
-
-def our_raw_input(prompt=""):
- if prompt:
- sys.stdout.write(prompt);
- sys.stdout.flush();
- try:
- ret = raw_input();
- return ret;
- except EOFError:
- sys.stderr.write("\nUser interrupt (^D).\n");
- raise SystemExit;
-
-################################################################################
-
-def str_isnum (s):
- for c in s:
- if c not in string.digits:
- return 0;
- return 1;
-
-################################################################################
-
-def extract_component_from_section(section):
- component = "";
-
- if section.find('/') != -1:
- component = section.split('/')[0];
- if component.lower() == "non-us" and section.find('/') != -1:
- s = component + '/' + section.split('/')[1];
- if Cnf.has_key("Component::%s" % s): # Avoid e.g. non-US/libs
- component = s;
-
- if section.lower() == "non-us":
- component = "non-US/main";
-
- # non-US prefix is case insensitive
- if component.lower()[:6] == "non-us":
- component = "non-US"+component[6:];
-
- # Expand default component
- if component == "":
- if Cnf.has_key("Component::%s" % section):
- component = section;
- else:
- component = "main";
- elif component == "non-US":
- component = "non-US/main";
-
- return (section, component);
-
-################################################################################
-
-def parse_changes(filename, signing_rules=0):
- """Parses a changes file and returns a dictionary where each field is a
-key. The mandatory first argument is the filename of the .changes
-file.
-
-signing_rules is an optional argument:
-
- o If signing_rules == -1, no signature is required.
- o If signing_rules == 0 (the default), a signature is required.
- o If signing_rules == 1, it turns on the same strict format checking
- as dpkg-source.
-
-The rules for (signing_rules == 1)-mode are:
-
- o The PGP header consists of "-----BEGIN PGP SIGNED MESSAGE-----"
- followed by any PGP header data and must end with a blank line.
-
- o The data section must end with a blank line and must be followed by
- "-----BEGIN PGP SIGNATURE-----".
-"""
-
- error = "";
- changes = {};
-
- changes_in = open_file(filename);
- lines = changes_in.readlines();
-
- if not lines:
- raise changes_parse_error_exc, "[Empty changes file]";
-
- # Reindex by line number so we can easily verify the format of
- # .dsc files...
- index = 0;
- indexed_lines = {};
- for line in lines:
- index += 1;
- indexed_lines[index] = line[:-1];
-
- inside_signature = 0;
-
- num_of_lines = len(indexed_lines.keys());
- index = 0;
- first = -1;
- while index < num_of_lines:
- index += 1;
- line = indexed_lines[index];
- if line == "":
- if signing_rules == 1:
- index += 1;
- if index > num_of_lines:
- raise invalid_dsc_format_exc, index;
- line = indexed_lines[index];
- if not line.startswith("-----BEGIN PGP SIGNATURE"):
- raise invalid_dsc_format_exc, index;
- inside_signature = 0;
- break;
- else:
- continue;
- if line.startswith("-----BEGIN PGP SIGNATURE"):
- break;
- if line.startswith("-----BEGIN PGP SIGNED MESSAGE"):
- inside_signature = 1;
- if signing_rules == 1:
- while index < num_of_lines and line != "":
- index += 1;
- line = indexed_lines[index];
- continue;
- # If we're not inside the signed data, don't process anything
- if signing_rules >= 0 and not inside_signature:
- continue;
- slf = re_single_line_field.match(line);
- if slf:
- field = slf.groups()[0].lower();
- changes[field] = slf.groups()[1];
- first = 1;
- continue;
- if line == " .":
- changes[field] += '\n';
- continue;
- mlf = re_multi_line_field.match(line);
- if mlf:
- if first == -1:
- raise changes_parse_error_exc, "'%s'\n [Multi-line field continuing on from nothing?]" % (line);
- if first == 1 and changes[field] != "":
- changes[field] += '\n';
- first = 0;
- changes[field] += mlf.groups()[0] + '\n';
- continue;
- error += line;
-
- if signing_rules == 1 and inside_signature:
- raise invalid_dsc_format_exc, index;
-
- changes_in.close();
- changes["filecontents"] = "".join(lines);
-
- if error:
- raise changes_parse_error_exc, error;
-
- return changes;
-
-################################################################################
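The field grammar parse_changes implements above — an RFC822-style "Field: value" line opens a field, a line starting with whitespace continues it — can be sketched in modern Python. This is a minimal illustration of only the field parsing, without the PGP-signature handling, the " ." blank-line convention, or the error reporting the real function does:

```python
import re

re_single_line_field = re.compile(r"^(\S*)\s*:\s*(.*)")
re_multi_line_field = re.compile(r"^\s(.*)")

def parse_fields(lines):
    # Minimal sketch: "Field: value" starts a field, an indented line continues it
    fields = {}
    field = None
    for line in lines:
        slf = re_single_line_field.match(line)
        if slf:
            field = slf.group(1).lower()
            fields[field] = slf.group(2)
            continue
        mlf = re_multi_line_field.match(line)
        if mlf and field is not None:
            fields[field] += "\n" + mlf.group(1)
    return fields
```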
-
-# Dropped support for 1.4 and ``buggy dchanges 3.4'' (?!) compared to di.pl
-
-def build_file_list(changes, is_a_dsc=0):
- files = {};
-
- # Make sure we have a Files: field to parse...
- if not changes.has_key("files"):
- raise no_files_exc;
-
- # Make sure we recognise the format of the Files: field
- format = changes.get("format", "");
- if format != "":
- format = float(format);
- if not is_a_dsc and (format < 1.5 or format > 2.0):
- raise nk_format_exc, format;
-
- # Parse each entry/line:
- for i in changes["files"].split('\n'):
- if not i:
- break;
- s = i.split();
- section = priority = "";
- try:
- if is_a_dsc:
- (md5, size, name) = s;
- else:
- (md5, size, section, priority, name) = s;
- except ValueError:
- raise changes_parse_error_exc, i;
-
- if section == "":
- section = "-";
- if priority == "":
- priority = "-";
-
- (section, component) = extract_component_from_section(section);
-
- files[name] = Dict(md5sum=md5, size=size, section=section,
- priority=priority, component=component);
-
- return files
-
-################################################################################
-
-def force_to_utf8(s):
- """Forces a string to UTF-8. If the string isn't already UTF-8,
-it's assumed to be ISO-8859-1."""
- try:
- unicode(s, 'utf-8');
- return s;
- except UnicodeError:
- latin1_s = unicode(s,'iso8859-1');
- return latin1_s.encode('utf-8');
-
-def rfc2047_encode(s):
- """Encodes a (header) string per RFC2047 if necessary. If the
-string is neither ASCII nor UTF-8, it's assumed to be ISO-8859-1."""
- try:
- codecs.lookup('ascii')[1](s)
- return s;
- except UnicodeError:
- pass;
- try:
- codecs.lookup('utf-8')[1](s)
- h = email.Header.Header(s, 'utf-8', 998);
- return str(h);
- except UnicodeError:
- h = email.Header.Header(s, 'iso-8859-1', 998);
- return str(h);
-
-################################################################################
-
-# <Culus> 'The standard sucks, but my tool is supposed to interoperate
-# with it. I know - I'll fix the suckage and make things
-# incompatible!'
-
-def fix_maintainer (maintainer):
- """Parses a Maintainer or Changed-By field and returns:
- (1) an RFC822 compatible version,
- (2) an RFC2047 compatible version,
- (3) the name
- (4) the email
-
-The name is forced to UTF-8 for both (1) and (3). If the name field
-contains '.' or ',' (as allowed by Debian policy), (1) and (2) are
-switched to 'email (name)' format."""
- maintainer = maintainer.strip()
- if not maintainer:
- return ('', '', '', '');
-
- if maintainer.find("<") == -1:
- email = maintainer;
- name = "";
- elif (maintainer[0] == "<" and maintainer[-1:] == ">"):
- email = maintainer[1:-1];
- name = "";
- else:
- m = re_parse_maintainer.match(maintainer);
- if not m:
- raise ParseMaintError, "Doesn't parse as a valid Maintainer field."
- name = m.group(1);
- email = m.group(2);
-
- # Get an RFC2047 compliant version of the name
- rfc2047_name = rfc2047_encode(name);
-
- # Force the name to be UTF-8
- name = force_to_utf8(name);
-
- if name.find(',') != -1 or name.find('.') != -1:
- rfc822_maint = "%s (%s)" % (email, name);
- rfc2047_maint = "%s (%s)" % (email, rfc2047_name);
- else:
- rfc822_maint = "%s <%s>" % (name, email);
- rfc2047_maint = "%s <%s>" % (rfc2047_name, email);
-
- if email.find("@") == -1 and email.find("buildd_") != 0:
- raise ParseMaintError, "No @ found in email address part."
-
- return (rfc822_maint, rfc2047_maint, name, email);
-
-################################################################################
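The 'email (name)' switch described in fix_maintainer's docstring exists because a name containing '.' or ',' breaks naive RFC822 parsing of the usual "name <email>" form. A minimal sketch of just that formatting decision (the real function also does RFC2047 encoding and UTF-8 forcing; the names below are illustrative):

```python
def format_maintainer(name, email):
    # A '.' or ',' in the name would confuse RFC822 parsers reading
    # "name <email>", so such names move to the comment position instead
    if "," in name or "." in name:
        return "%s (%s)" % (email, name)
    return "%s <%s>" % (name, email)
```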
-
-# sendmail wrapper, takes _either_ a message string or a file as arguments
-def send_mail (message, filename=""):
- # If we've been passed a string dump it into a temporary file
- if message:
- filename = tempfile.mktemp();
- fd = os.open(filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, 0700);
- os.write (fd, message);
- os.close (fd);
-
- # Invoke sendmail
- (result, output) = commands.getstatusoutput("%s < %s" % (Cnf["Dinstall::SendmailCommand"], filename));
- if (result != 0):
- raise sendmail_failed_exc, output;
-
- # Clean up any temporary files
- if message:
- os.unlink (filename);
-
-################################################################################
-
-def poolify (source, component):
- if component:
- component += '/';
- # FIXME: this is nasty
- component = component.lower().replace("non-us/", "non-US/");
- if source[:3] == "lib":
- return component + source[:4] + '/' + source + '/'
- else:
- return component + source[:1] + '/' + source + '/'
-
-################################################################################
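poolify above encodes the standard Debian pool layout: "lib*" sources are grouped under a four-character prefix, everything else under the first letter. Ignoring the non-US special case, the mapping works out to (a sketch in modern Python):

```python
def poolify(source, component=""):
    # "libfoo" -> libf/libfoo/, "bash" -> b/bash/ (non-US handling omitted)
    if component:
        component += "/"
    prefix = source[:4] if source.startswith("lib") else source[:1]
    return component + prefix + "/" + source + "/"
```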
-
-def move (src, dest, overwrite = 0, perms = 0664):
- if os.path.exists(dest) and os.path.isdir(dest):
- dest_dir = dest;
- else:
- dest_dir = os.path.dirname(dest);
- if not os.path.exists(dest_dir):
- umask = os.umask(00000);
- os.makedirs(dest_dir, 02775);
- os.umask(umask);
- #print "Moving %s to %s..." % (src, dest);
- if os.path.exists(dest) and os.path.isdir(dest):
- dest += '/' + os.path.basename(src);
- # Don't overwrite unless forced to
- if os.path.exists(dest):
- if not overwrite:
- fubar("Can't move %s to %s - file already exists." % (src, dest));
- else:
- if not os.access(dest, os.W_OK):
- fubar("Can't move %s to %s - can't write to existing file." % (src, dest));
- shutil.copy2(src, dest);
- os.chmod(dest, perms);
- os.unlink(src);
-
-def copy (src, dest, overwrite = 0, perms = 0664):
- if os.path.exists(dest) and os.path.isdir(dest):
- dest_dir = dest;
- else:
- dest_dir = os.path.dirname(dest);
- if not os.path.exists(dest_dir):
- umask = os.umask(00000);
- os.makedirs(dest_dir, 02775);
- os.umask(umask);
- #print "Copying %s to %s..." % (src, dest);
- if os.path.exists(dest) and os.path.isdir(dest):
- dest += '/' + os.path.basename(src);
- # Don't overwrite unless forced to
- if os.path.exists(dest):
- if not overwrite:
- raise file_exists_exc
- else:
- if not os.access(dest, os.W_OK):
- raise cant_overwrite_exc
- shutil.copy2(src, dest);
- os.chmod(dest, perms);
-
-################################################################################
-
-def where_am_i ():
- res = socket.gethostbyaddr(socket.gethostname());
- database_hostname = Cnf.get("Config::" + res[0] + "::DatabaseHostname");
- if database_hostname:
- return database_hostname;
- else:
- return res[0];
-
-def which_conf_file ():
- res = socket.gethostbyaddr(socket.gethostname());
- if Cnf.get("Config::" + res[0] + "::KatieConfig"):
- return Cnf["Config::" + res[0] + "::KatieConfig"]
- else:
- return default_config;
-
-def which_apt_conf_file ():
- res = socket.gethostbyaddr(socket.gethostname());
- if Cnf.get("Config::" + res[0] + "::AptConfig"):
- return Cnf["Config::" + res[0] + "::AptConfig"]
- else:
- return default_apt_config;
-
-################################################################################
-
-# Escape characters which have meaning to SQL's regex comparison operator ('~')
-# (woefully incomplete)
-
-def regex_safe (s):
- s = s.replace('+', '\\\\+');
- s = s.replace('.', '\\\\.');
- return s
-
-################################################################################
-
-# Perform a substitution on a template
-def TemplateSubst(map, filename):
- file = open_file(filename);
- template = file.read();
- for x in map.keys():
- template = template.replace(x,map[x]);
- file.close();
- return template;
-
-################################################################################
-
-def fubar(msg, exit_code=1):
- sys.stderr.write("E: %s\n" % (msg));
- sys.exit(exit_code);
-
-def warn(msg):
- sys.stderr.write("W: %s\n" % (msg));
-
-################################################################################
-
-# Returns the user name with a laughable attempt at rfc822 conformance
-# (read: removing stray periods).
-def whoami ():
- return pwd.getpwuid(os.getuid())[4].split(',')[0].replace('.', '');
-
-################################################################################
-
-def size_type (c):
- t = " B";
- if c > 10240:
- c = c / 1024;
- t = " KB";
- if c > 10240:
- c = c / 1024;
- t = " MB";
- return ("%d%s" % (c, t))
-
-################################################################################
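size_type switches units only once the value exceeds 10240 of the previous unit, so reported sizes stay in a readable range per unit. The same logic in modern Python, using floor division:

```python
def size_type(c):
    # Stay in bytes up to 10240 B, then KB up to 10240 KB, then MB
    t = " B"
    if c > 10240:
        c = c // 1024
        t = " KB"
    if c > 10240:
        c = c // 1024
        t = " MB"
    return "%d%s" % (c, t)
```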
-
-def cc_fix_changes (changes):
- o = changes.get("architecture", "");
- if o:
- del changes["architecture"];
- changes["architecture"] = {};
- for j in o.split():
- changes["architecture"][j] = 1;
-
-# Sort by source name, source version, 'have source', and then by filename
-def changes_compare (a, b):
- try:
- a_changes = parse_changes(a);
- except:
- return -1;
-
- try:
- b_changes = parse_changes(b);
- except:
- return 1;
-
- cc_fix_changes (a_changes);
- cc_fix_changes (b_changes);
-
- # Sort by source name
- a_source = a_changes.get("source");
- b_source = b_changes.get("source");
- q = cmp (a_source, b_source);
- if q:
- return q;
-
- # Sort by source version
- a_version = a_changes.get("version", "0");
- b_version = b_changes.get("version", "0");
- q = apt_pkg.VersionCompare(a_version, b_version);
- if q:
- return q;
-
- # Sort by 'have source'
- a_has_source = a_changes["architecture"].get("source");
- b_has_source = b_changes["architecture"].get("source");
- if a_has_source and not b_has_source:
- return -1;
- elif b_has_source and not a_has_source:
- return 1;
-
- # Fall back to sort by filename
- return cmp(a, b);
-
-################################################################################
-
-def find_next_free (dest, too_many=100):
- extra = 0;
- orig_dest = dest;
- while os.path.exists(dest) and extra < too_many:
- dest = orig_dest + '.' + repr(extra);
- extra += 1;
- if extra >= too_many:
- raise tried_too_hard_exc;
- return dest;
-
-################################################################################
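find_next_free probes dest, then dest.0, dest.1, and so on until an unused name turns up, giving up after too_many attempts. A modern sketch, with the old string exception replaced by a RuntimeError:

```python
import os

def find_next_free(dest, too_many=100):
    # Try dest, then dest.0, dest.1, ... until one doesn't exist yet
    orig_dest = dest
    extra = 0
    while os.path.exists(dest) and extra < too_many:
        dest = "%s.%d" % (orig_dest, extra)
        extra += 1
    if extra >= too_many:
        raise RuntimeError("Tried too hard to find a free filename.")
    return dest
```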
-
-def result_join (original, sep = '\t'):
- list = [];
- for i in xrange(len(original)):
- if original[i] == None:
- list.append("");
- else:
- list.append(original[i]);
- return sep.join(list);
-
-################################################################################
-
-def prefix_multi_line_string(str, prefix, include_blank_lines=0):
- out = "";
- for line in str.split('\n'):
- line = line.strip();
- if line or include_blank_lines:
- out += "%s%s\n" % (prefix, line);
- # Strip trailing new line
- if out:
- out = out[:-1];
- return out;
-
-################################################################################
-
-def validate_changes_file_arg(filename, require_changes=1):
- """'filename' is either a .changes or .katie file. If 'filename' is a
-.katie file, it's changed to be the corresponding .changes file. The
-function then checks if the .changes file a) exists and b) is
-readable and returns the .changes filename if so. If there's a
-problem, the next action depends on the 'require_changes'
-argument:
-
- o If 'require_changes' == -1, errors are ignored and the .changes
- filename is returned.
- o If 'require_changes' == 0, a warning is given and 'None' is returned.
- o If 'require_changes' == 1, a fatal error is raised.
-"""
- error = None;
-
- orig_filename = filename
- if filename.endswith(".katie"):
- filename = filename[:-6]+".changes";
-
- if not filename.endswith(".changes"):
- error = "invalid file type; not a changes file";
- else:
- if not os.access(filename,os.R_OK):
- if os.path.exists(filename):
- error = "permission denied";
- else:
- error = "file not found";
-
- if error:
- if require_changes == 1:
- fubar("%s: %s." % (orig_filename, error));
- elif require_changes == 0:
- warn("Skipping %s - %s" % (orig_filename, error));
- return None;
- else: # We only care about the .katie file
- return filename;
- else:
- return filename;
-
-################################################################################
-
-def real_arch(arch):
- return (arch != "source" and arch != "all");
-
-################################################################################
-
-def join_with_commas_and(list):
- if len(list) == 0: return "nothing";
- if len(list) == 1: return list[0];
- return ", ".join(list[:-1]) + " and " + list[-1];
-
-################################################################################
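join_with_commas_and produces the human-readable list phrasing used in generated mails and reports; the same behaviour in modern Python:

```python
def join_with_commas_and(lst):
    # [] -> "nothing", ["a"] -> "a", ["a", "b", "c"] -> "a, b and c"
    if len(lst) == 0:
        return "nothing"
    if len(lst) == 1:
        return lst[0]
    return ", ".join(lst[:-1]) + " and " + lst[-1]
```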
-
-def pp_deps (deps):
- pp_deps = [];
- for atom in deps:
- (pkg, version, constraint) = atom;
- if constraint:
- pp_dep = "%s (%s %s)" % (pkg, constraint, version);
- else:
- pp_dep = pkg;
- pp_deps.append(pp_dep);
- return " |".join(pp_deps);
-
-################################################################################
-
-def get_conf():
- return Cnf;
-
-################################################################################
-
-# Handle -a, -c and -s arguments; returns them as SQL constraints
-def parse_args(Options):
- # Process suite
- if Options["Suite"]:
- suite_ids_list = [];
- for suite in split_args(Options["Suite"]):
- suite_id = db_access.get_suite_id(suite);
- if suite_id == -1:
- warn("suite '%s' not recognised." % (suite));
- else:
- suite_ids_list.append(suite_id);
- if suite_ids_list:
- con_suites = "AND su.id IN (%s)" % ", ".join(map(str, suite_ids_list));
- else:
- fubar("No valid suite given.");
- else:
- con_suites = "";
-
- # Process component
- if Options["Component"]:
- component_ids_list = [];
- for component in split_args(Options["Component"]):
- component_id = db_access.get_component_id(component);
- if component_id == -1:
- warn("component '%s' not recognised." % (component));
- else:
- component_ids_list.append(component_id);
- if component_ids_list:
- con_components = "AND c.id IN (%s)" % ", ".join(map(str, component_ids_list));
- else:
- fubar("No valid component given.");
- else:
- con_components = "";
-
- # Process architecture
- con_architectures = "";
- if Options["Architecture"]:
- arch_ids_list = [];
- check_source = 0;
- for architecture in split_args(Options["Architecture"]):
- if architecture == "source":
- check_source = 1;
- else:
- architecture_id = db_access.get_architecture_id(architecture);
- if architecture_id == -1:
- warn("architecture '%s' not recognised." % (architecture));
- else:
- arch_ids_list.append(architecture_id);
- if arch_ids_list:
- con_architectures = "AND a.id IN (%s)" % ", ".join(map(str, arch_ids_list));
- else:
- if not check_source:
- fubar("No valid architecture given.");
- else:
- check_source = 1;
-
- return (con_suites, con_architectures, con_components, check_source);
-
-################################################################################
-
-# Inspired(tm) by Bryn Keller's print_exc_plus (See
-# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52215)
-
-def print_exc():
- tb = sys.exc_info()[2];
- while tb.tb_next:
- tb = tb.tb_next;
- stack = [];
- frame = tb.tb_frame;
- while frame:
- stack.append(frame);
- frame = frame.f_back;
- stack.reverse();
- traceback.print_exc();
- for frame in stack:
- print "\nFrame %s in %s at line %s" % (frame.f_code.co_name,
- frame.f_code.co_filename,
- frame.f_lineno);
- for key, value in frame.f_locals.items():
- print "\t%20s = " % key,;
- try:
- print value;
- except:
- print "<unable to print>";
-
-################################################################################
-
-def try_with_debug(function):
- try:
- function();
- except SystemExit:
- raise;
- except:
- print_exc();
-
-################################################################################
-
-# Function for use in sorting lists of architectures.
-# Sorts normally except that 'source' dominates all others.
-
-def arch_compare_sw (a, b):
- if a == "source" and b == "source":
- return 0;
- elif a == "source":
- return -1;
- elif b == "source":
- return 1;
-
- return cmp (a, b);
-
-################################################################################
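arch_compare_sw is a cmp-style comparator; in modern Python the same "source first, then alphabetical" ordering is more naturally expressed as a sort key:

```python
def arch_sort_key(arch):
    # False sorts before True, so "source" comes first; ties break alphabetically
    return (arch != "source", arch)
```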
-
-# Split command line arguments which can be separated by either commas
-# or whitespace. If dwim is set, it will complain about a string ending
-# in a comma, since this usually means someone did 'madison -a i386, m68k
-# foo' or something and the inevitable confusion resulting from 'm68k'
-# being treated as an argument is undesirable.
-
-def split_args (s, dwim=1):
- if s.find(",") == -1:
- return s.split();
- else:
- if s[-1:] == "," and dwim:
- fubar("split_args: found trailing comma, spurious space maybe?");
- return s.split(",");
-
-################################################################################
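split_args' comma-versus-whitespace behaviour, including the trailing-comma check described in the comment above, can be sketched as follows (a ValueError stands in for the original fubar() call):

```python
def split_args(s, dwim=True):
    # Comma-separated wins if any comma is present; a trailing comma is
    # usually a typo like "-a i386, m68k", so complain rather than guess
    if "," not in s:
        return s.split()
    if dwim and s.endswith(","):
        raise ValueError("split_args: found trailing comma, spurious space maybe?")
    return s.split(",")
```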
-
-def Dict(**dict): return dict
-
-########################################
-
-# Our very own version of commands.getstatusoutput(), hacked to support
-# gpgv's status fd.
-def gpgv_get_status_output(cmd, status_read, status_write):
- cmd = ['/bin/sh', '-c', cmd];
- p2cread, p2cwrite = os.pipe();
- c2pread, c2pwrite = os.pipe();
- errout, errin = os.pipe();
- pid = os.fork();
- if pid == 0:
- # Child
- os.close(0);
- os.close(1);
- os.dup(p2cread);
- os.dup(c2pwrite);
- os.close(2);
- os.dup(errin);
- for i in range(3, 256):
- if i != status_write:
- try:
- os.close(i);
- except:
- pass;
- try:
- os.execvp(cmd[0], cmd);
- finally:
- os._exit(1);
-
- # Parent
- os.close(p2cread)
- os.dup2(c2pread, c2pwrite);
- os.dup2(errout, errin);
-
- output = status = "";
- while 1:
- i, o, e = select.select([c2pwrite, errin, status_read], [], []);
- more_data = [];
- for fd in i:
- r = os.read(fd, 8196);
- if len(r) > 0:
- more_data.append(fd);
- if fd == c2pwrite or fd == errin:
- output += r;
- elif fd == status_read:
- status += r;
- else:
- fubar("Unexpected file descriptor [%s] returned from select\n" % (fd));
- if not more_data:
- pid, exit_status = os.waitpid(pid, 0)
- try:
- os.close(status_write);
- os.close(status_read);
- os.close(c2pread);
- os.close(c2pwrite);
- os.close(p2cwrite);
- os.close(errin);
- os.close(errout);
- except:
- pass;
- break;
-
- return output, status, exit_status;
-
-############################################################
-
-
-def check_signature (sig_filename, reject, data_filename="", keyrings=None):
- """Check the signature of a file and return the fingerprint if the
-signature is valid or 'None' if it's not. The first argument is the
-filename whose signature should be checked. The second argument is a
-reject function and is called when an error is found. The reject()
-function must allow for two arguments: the first is the error message,
-the second is an optional prefix string. It's possible for reject()
-to be called more than once during an invocation of check_signature().
-The third argument is optional and is the name of the file the
-detached signature applies to. The fourth argument is optional and is
-a *list* of keyrings to use.
-"""
-
- # Ensure the filename contains no shell meta-characters or other badness
- if not re_taint_free.match(sig_filename):
- reject("!!WARNING!! tainted signature filename: '%s'." % (sig_filename));
- return None;
-
- if data_filename and not re_taint_free.match(data_filename):
- reject("!!WARNING!! tainted data filename: '%s'." % (data_filename));
- return None;
-
- if not keyrings:
- keyrings = (Cnf["Dinstall::PGPKeyring"], Cnf["Dinstall::GPGKeyring"])
-
- # Build the command line
- status_read, status_write = os.pipe();
- cmd = "gpgv --status-fd %s" % (status_write);
- for keyring in keyrings:
- cmd += " --keyring %s" % (keyring);
- cmd += " %s %s" % (sig_filename, data_filename);
- # Invoke gpgv on the file
- (output, status, exit_status) = gpgv_get_status_output(cmd, status_read, status_write);
-
- # Process the status-fd output
- keywords = {};
- bad = internal_error = "";
- for line in status.split('\n'):
- line = line.strip();
- if line == "":
- continue;
- split = line.split();
- if len(split) < 2:
- internal_error += "gpgv status line is malformed (< 2 atoms) ['%s'].\n" % (line);
- continue;
- (gnupg, keyword) = split[:2];
- if gnupg != "[GNUPG:]":
- internal_error += "gpgv status line is malformed (incorrect prefix '%s').\n" % (gnupg);
- continue;
- args = split[2:];
- if keywords.has_key(keyword) and (keyword != "NODATA" and keyword != "SIGEXPIRED"):
- internal_error += "found duplicate status token ('%s').\n" % (keyword);
- continue;
- else:
- keywords[keyword] = args;
-
- # If we failed to parse the status-fd output, let's just whine and bail now
- if internal_error:
- reject("internal error while performing signature check on %s." % (sig_filename));
- reject(internal_error, "");
- reject("Please report the above errors to the Archive maintainers by replying to this mail.", "");
- return None;
-
- # Now check for obviously bad things in the processed output
- if keywords.has_key("SIGEXPIRED"):
- reject("The key used to sign %s has expired." % (sig_filename));
- bad = 1;
- if keywords.has_key("KEYREVOKED"):
- reject("The key used to sign %s has been revoked." % (sig_filename));
- bad = 1;
- if keywords.has_key("BADSIG"):
- reject("bad signature on %s." % (sig_filename));
- bad = 1;
- if keywords.has_key("ERRSIG") and not keywords.has_key("NO_PUBKEY"):
- reject("failed to check signature on %s." % (sig_filename));
- bad = 1;
- if keywords.has_key("NO_PUBKEY"):
- args = keywords["NO_PUBKEY"];
- if len(args) >= 1:
- key = args[0];
- reject("The key (0x%s) used to sign %s wasn't found in the keyring(s)." % (key, sig_filename));
- bad = 1;
- if keywords.has_key("BADARMOR"):
- reject("ASCII armour of signature was corrupt in %s." % (sig_filename));
- bad = 1;
- if keywords.has_key("NODATA"):
- reject("no signature found in %s." % (sig_filename));
- bad = 1;
-
- if bad:
- return None;
-
- # Next check gpgv exited with a zero return code
- if exit_status:
- reject("gpgv failed while checking %s." % (sig_filename));
- if status.strip():
- reject(prefix_multi_line_string(status, " [GPG status-fd output:] "), "");
- else:
- reject(prefix_multi_line_string(output, " [GPG output:] "), "");
- return None;
-
- # Sanity check the good stuff we expect
- if not keywords.has_key("VALIDSIG"):
- reject("signature on %s does not appear to be valid [No VALIDSIG]." % (sig_filename));
- bad = 1;
- else:
- args = keywords["VALIDSIG"];
- if len(args) < 1:
- reject("internal error while checking signature on %s." % (sig_filename));
- bad = 1;
- else:
- fingerprint = args[0];
- if not keywords.has_key("GOODSIG"):
- reject("signature on %s does not appear to be valid [No GOODSIG]." % (sig_filename));
- bad = 1;
- if not keywords.has_key("SIG_ID"):
- reject("signature on %s does not appear to be valid [No SIG_ID]." % (sig_filename));
- bad = 1;
-
- # Finally ensure there's not something we don't recognise
- known_keywords = Dict(VALIDSIG="",SIG_ID="",GOODSIG="",BADSIG="",ERRSIG="",
- SIGEXPIRED="",KEYREVOKED="",NO_PUBKEY="",BADARMOR="",
- NODATA="");
-
- for keyword in keywords.keys():
- if not known_keywords.has_key(keyword):
- reject("found unknown status token '%s' from gpgv with args '%r' in %s." % (keyword, keywords[keyword], sig_filename));
- bad = 1;
-
- if bad:
- return None;
- else:
- return fingerprint;
-
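The status-fd protocol that check_signature() parses is line-oriented; a minimal standalone sketch of just the parsing step (hypothetical helper name, not part of katie) looks like this:

```python
def parse_gpg_status(status_text):
    """Parse gpgv --status-fd output ("[GNUPG:] KEYWORD args...") into a
    {keyword: args} dict, mirroring the loop in check_signature()."""
    keywords = {}
    for line in status_text.splitlines():
        parts = line.split()
        if len(parts) < 2 or parts[0] != "[GNUPG:]":
            continue  # the real code rejects malformed lines instead
        keyword, args = parts[1], parts[2:]
        # NODATA and SIGEXPIRED may legitimately repeat; any other
        # duplicate token is treated as an internal error upstream
        if keyword in keywords and keyword not in ("NODATA", "SIGEXPIRED"):
            raise ValueError("duplicate status token %r" % keyword)
        keywords[keyword] = args
    return keywords
```

check_signature() then inspects the resulting dict for BADSIG, NO_PUBKEY, VALIDSIG and the other tokens before trusting the fingerprint.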
-################################################################################
-
-# Inspired(tm) by http://www.zopelabs.com/cookbook/1022242603
-
-def wrap(paragraph, max_length, prefix=""):
- line = "";
- s = "";
- have_started = 0;
- words = paragraph.split();
-
- for word in words:
- word_size = len(word);
- if word_size > max_length:
- if have_started:
- s += line + '\n' + prefix;
- s += word + '\n' + prefix;
- else:
- if have_started:
- new_length = len(line) + word_size + 1;
- if new_length > max_length:
- s += line + '\n' + prefix;
- line = word;
- else:
- line += ' ' + word;
- else:
- line = word;
- have_started = 1;
-
- if have_started:
- s += line;
-
- return s;
-
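wrap() above is a hand-rolled greedy word wrapper; on modern Pythons the stdlib textwrap module covers the same ground. A rough (not byte-for-byte) equivalent, assuming over-long words should be kept whole as wrap() keeps them:

```python
import textwrap

def wrap_like(paragraph, max_length, prefix=""):
    # Greedy fill; continuation lines get the prefix, which maps to
    # textwrap's subsequent_indent. break_long_words/break_on_hyphens
    # are disabled so oversized words land on their own line, as above.
    return textwrap.fill(paragraph, width=max_length,
                         subsequent_indent=prefix,
                         break_long_words=False,
                         break_on_hyphens=False)
```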
-################################################################################
-
-# Relativize an absolute symlink from 'src' -> 'dest' relative to 'root'.
-# Returns fixed 'src'
-def clean_symlink (src, dest, root):
- src = src.replace(root, '', 1);
- dest = dest.replace(root, '', 1);
- dest = os.path.dirname(dest);
- new_src = '../' * len(dest.split('/'));
- return new_src + src;
-
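The relativization arithmetic in clean_symlink() — strip the root, count the directory components of dest, prepend that many `../` — is easy to exercise with a standalone copy (same body, Python 3 syntax):

```python
import os

def clean_symlink(src, dest, root):
    # Relativize absolute symlink target 'src' for a link at 'dest',
    # both under 'root': one '../' per directory level of dest's dirname.
    src = src.replace(root, '', 1)
    dest = dest.replace(root, '', 1)
    dest = os.path.dirname(dest)
    new_src = '../' * len(dest.split('/'))
    return new_src + src
```

For root-stripped paths with no `.`/`..` components, `os.path.relpath(src, os.path.dirname(dest))` gives the same answer.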
-################################################################################
-
-def temp_filename(directory=None, dotprefix=None, perms=0700):
- """Return a secure and unique filename by pre-creating it.
-If 'directory' is non-null, it will be the directory the file is pre-created in.
-If 'dotprefix' is non-null, the filename will be prefixed with a '.'."""
-
- if directory:
- old_tempdir = tempfile.tempdir;
- tempfile.tempdir = directory;
-
- filename = tempfile.mktemp();
-
- if dotprefix:
- filename = "%s/.%s" % (os.path.dirname(filename), os.path.basename(filename));
- fd = os.open(filename, os.O_RDWR|os.O_CREAT|os.O_EXCL, perms);
- os.close(fd);
-
- if directory:
- tempfile.tempdir = old_tempdir;
-
- return filename;
-
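temp_filename() works around the mktemp() race by immediately pre-creating the file with O_EXCL; tempfile.mkstemp does both steps atomically. A modern sketch with the same calling convention (assuming the leading dot is all that matters about the generated name):

```python
import os
import tempfile

def temp_filename(directory=None, dotprefix=None, perms=0o700):
    # mkstemp creates the file atomically (O_CREAT|O_EXCL), avoiding
    # the race that a bare mktemp() has; 'prefix' covers the dot-file
    # case instead of renaming after the fact.
    prefix = ".tmp" if dotprefix else "tmp"
    fd, filename = tempfile.mkstemp(prefix=prefix, dir=directory)
    os.chmod(filename, perms)
    os.close(fd)
    return filename
```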
-################################################################################
-
-apt_pkg.init();
-
-Cnf = apt_pkg.newConfiguration();
-apt_pkg.ReadConfigFileISC(Cnf,default_config);
-
-if which_conf_file() != default_config:
- apt_pkg.ReadConfigFileISC(Cnf,which_conf_file());
-
-################################################################################
+++ /dev/null
-# locations used by many scripts
-
-base=/org/ftp.debian.org
-ftpdir=$base/ftp
-webdir=$base/web
-indices=$ftpdir/indices
-archs="alpha arm hppa hurd-i386 i386 ia64 m68k mips mipsel powerpc s390 sh sparc"
-
-scriptdir=$base/scripts
-masterdir=$base/katie/
-dbdir=$base/database/
-lockdir=$base/lock/
-overridedir=$scriptdir/override
-extoverridedir=$scriptdir/external-overrides
-
-queuedir=$base/queue/
-unchecked=$queuedir/unchecked/
-accepted=$queuedir/accepted/
-incoming=$base/incoming
-
-ftpgroup=debadmin
-
-copyoverrides="etch.contrib etch.contrib.src etch.main etch.main.src etch.non-free etch.non-free.src etch.extra.main etch.extra.non-free etch.extra.contrib etch.main.debian-installer woody.contrib woody.contrib.src woody.main woody.main.src woody.non-free woody.non-free.src sarge.contrib sarge.contrib.src sarge.main sarge.main.src sarge.non-free sarge.non-free.src sid.contrib sid.contrib.src sid.main sid.main.debian-installer sid.main.src sid.non-free sid.non-free.src sid.extra.contrib sid.extra.main sid.extra.non-free woody.extra.contrib woody.extra.main woody.extra.non-free sarge.extra.contrib sarge.extra.main sarge.extra.non-free"
-
-PATH=$masterdir:$PATH
-umask 022
-
+++ /dev/null
-# locations used by many scripts
-
-nonushome=/org/non-us.debian.org
-ftpdir=$nonushome/ftp
-indices=$ftpdir/indices-non-US
-archs="alpha arm hppa hurd-i386 i386 ia64 m68k mips mipsel powerpc s390 sh sparc"
-
-masterdir=$nonushome/katie
-overridedir=$nonushome/scripts/override
-dbdir=$nonushome/database/
-queuedir=$nonushome/queue/
-unchecked=$queuedir/unchecked/
-accepted=$queuedir/accepted/
-incoming=$nonushome/incoming/
-
-packagesfiles=packagesfiles-non-US
-sourcesfiles=sourcesfiles-non-US
-contentsfiles=contentsfiles-non-US
-
-copyoverrides="potato.contrib potato.contrib.src potato.main potato.main.src potato.non-free potato.non-free.src woody.contrib woody.contrib.src woody.main woody.main.src woody.non-free woody.non-free.src sarge.contrib sarge.contrib.src sarge.main sarge.main.src sarge.non-free sarge.non-free.src sid.contrib sid.contrib.src sid.main sid.main.src sid.non-free sid.non-free.src"
-
-PATH=$masterdir:$PATH
-umask 022
+++ /dev/null
-# locations used by many scripts
-
-base=/org/security.debian.org
-ftpdir=$base/ftp/
-masterdir=$base/katie/
-overridedir=$base/override
-queuedir=$base/queue/
-unchecked=$queuedir/unchecked/
-accepted=$queuedir/accepted/
-done=$queuedir/done/
-
-uploadhost=ftp-master.debian.org
-uploaddir=/pub/UploadQueue/
-
-components="main non-free contrib"
-suites="oldstable stable testing"
-override_types="deb dsc udeb"
-
-PATH=$masterdir:$PATH
-umask 022
-
+++ /dev/null
-#!/bin/bash
-#
-# Updates wanna-build databases after the archive maintenance
-# finishes
-#
-# Files:
-# Sources-* == upstream fetched file
-# Sources.* == uncompressed, concat'd version
-PATH="/bin:/usr/bin"
-#testing must be before unstable so late uploads don't build for testing needlessly
-DISTS="oldstable-security stable-security testing-security stable testing unstable"
-STATS_DISTS="unstable testing stable"
-SECTIONS="main contrib non-free"
-ARCHS_oldstable="m68k arm sparc alpha powerpc i386 mips mipsel ia64 hppa s390"
-ARCHS_stable="$ARCHS_oldstable"
-ARCHS_testing="$ARCHS_stable"
-ARCHS_unstable="$ARCHS_testing hurd-i386"
-TMPDIR="/org/wanna-build/tmp"
-WGETOPT="-q -t2 -w0 -T10"
-CURLOPT="-q -s -S -f -y 5 -K /org/wanna-build/trigger.curlrc"
-LOCKFILE="/org/wanna-build/tmp/DB_Maintenance_In_Progress"
-
-DAY=`date +%w`
-
-if lockfile -! -l 3600 $LOCKFILE; then
- echo "Cannot lock $LOCKFILE"
- exit 1
-fi
-
-cleanup() {
- rm -f "$LOCKFILE"
-}
-trap cleanup 0
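The lockfile(1)/trap pairing above can also be written with flock(1) from util-linux, which releases the lock automatically when the script's file descriptor closes; a hypothetical equivalent (lock path moved under $TMPDIR purely for illustration):

```shell
# flock holds the lock on fd 9 for the lifetime of the script, so no
# cleanup trap is needed; -w 3600 bounds how long we wait for the lock.
LOCKFILE="${TMPDIR:-/tmp}/DB_Maintenance_In_Progress.$$"
exec 9>"$LOCKFILE"
if ! flock -w 3600 9; then
    echo "Cannot lock $LOCKFILE"
    exit 1
fi
```

Note the semantics differ slightly: lockfile's -l 3600 is a staleness timeout on the lock file itself, while flock's -w is a wait timeout on acquiring the lock.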
-
-echo Updating wanna-build databases...
-umask 027
-
-if [ "$DAY" = "0" ]; then
- savelog -c 26 -p /org/wanna-build/db/merge.log
-fi
-
-exec >> /org/wanna-build/db/merge.log 2>&1
-
-echo -------------------------------------------------------------------------
-echo "merge triggered `date`"
-
-cd $TMPDIR
-
-#
-# Make one big Packages and Sources file.
-#
-for d in $DISTS ; do
- dist=`echo $d | sed s/-.*$//`
- case "$dist" in
- oldstable)
- ARCHS="$ARCHS_oldstable"
- ;;
- stable)
- ARCHS="$ARCHS_stable"
- ;;
- testing)
- ARCHS="$ARCHS_testing"
- ;;
- *)
- ARCHS="$ARCHS_unstable"
- ;;
- esac
- rm -f Sources.$d
- if [ "$d" = "unstable" ]; then
- gzip -dc /org/incoming.debian.org/buildd/Sources.gz >> Sources.$d
- fi
- for a in $ARCHS ; do
- rm -f Packages.$d.$a quinn-$d.$a
- if [ "$d" = "unstable" ]; then
- gzip -dc /org/incoming.debian.org/buildd/Packages.gz >> Packages.$d.$a
- fi
- done
-
- for s in $SECTIONS ; do
- if echo $d | grep -qv -- -security; then
- rm -f Sources.gz
- gzip -dc /org/ftp.debian.org/ftp/dists/$d/$s/source/Sources.gz >> Sources.$d
- if [ "$d" = "testing" -o "$d" = "stable" ]; then
- gzip -dc /org/ftp.debian.org/ftp/dists/$d-proposed-updates/$s/source/Sources.gz >> Sources.$d
- fi
-
- rm -f Packages.gz
- for a in $ARCHS ; do
- gzip -dc /org/ftp.debian.org/ftp/dists/$d/$s/binary-$a/Packages.gz >> Packages.$d.$a
- if [ "$d" = "testing" -o "$d" = "stable" ]; then
- gzip -dc /org/ftp.debian.org/ftp/dists/$d-proposed-updates/$s/binary-$a/Packages.gz >> Packages.$d.$a
- fi
- if [ "$d" = "unstable" -a "$s" = "main" ]; then
- gzip -dc /org/ftp.debian.org/ftp/dists/$d/$s/debian-installer/binary-$a/Packages.gz >> Packages.$d.$a
- fi
- done
- else
- rm -f Sources.gz
- if wget $WGETOPT http://security.debian.org/debian-security/dists/$dist/updates/$s/source/Sources.gz; then
- mv Sources.gz Sources-$d.$s.gz
- fi
- gzip -dc Sources-$d.$s.gz >> Sources.$d
- if [ "$s" = "main" ]; then
- if curl $CURLOPT http://security.debian.org/buildd/$dist/Sources.gz -o Sources.gz; then
- mv Sources.gz Sources-$d.accepted.gz
- fi
- gzip -dc Sources-$d.accepted.gz >> Sources.$d
- if curl $CURLOPT http://security.debian.org/buildd/$dist/Packages.gz -o Packages.gz; then
- mv Packages.gz Packages.$d.accepted.gz
- fi
- fi
- rm -f Packages.gz
- for a in $ARCHS ; do
- if wget $WGETOPT http://security.debian.org/debian-security/dists/$dist/updates/$s/binary-$a/Packages.gz; then
- mv Packages.gz Packages.$d.$s.$a.gz
- fi
- gzip -dc Packages.$d.$s.$a.gz >> Packages.$d.$a
- if [ "$s" = "main" ]; then
- gzip -dc Packages.$d.accepted.gz >> Packages.$d.$a
- fi
- done
- fi
- done
-
- for a in $ARCHS ; do
- if [ "$d" = "unstable" -o ! -e "quinn-unstable.$a-old" ]; then
- quinn-diff -A $a -a /org/buildd.debian.org/web/quinn-diff/Packages-arch-specific -s Sources.$d -p Packages.$d.$a >> quinn-$d.$a
- else
- if echo $d | grep -qv -- -security; then
- quinn-diff -A $a -a /org/buildd.debian.org/web/quinn-diff/Packages-arch-specific -s Sources.$d -p Packages.$d.$a | fgrep -v -f quinn-unstable.$a-old | grep ":out-of-date\]$" >> quinn-$d.$a
- sed -e 's/\[\w*:\w*]$//' quinn-$d-security.$a > quinn-$d-security.$a.grep
- grep -vf quinn-$d-security.$a.grep quinn-$d.$a > quinn-$d.$a.grep
- mv quinn-$d.$a.grep quinn-$d.$a
- rm quinn-$d-security.$a.grep
- else
- quinn-diff -A $a -a /org/buildd.debian.org/web/quinn-diff/Packages-arch-specific -s Sources.$d -p Packages.$d.$a >> quinn-$d.$a
- fi
- fi
- done
-done
-
-umask 002
-for a in $ARCHS_unstable ; do
- wanna-build --create-maintenance-lock --database=$a/build-db
-
- for d in $DISTS ; do
- dist=`echo $d | sed s/-.*$//`
- case "$dist" in
- oldstable)
- if echo $ARCHS_oldstable | grep -q -v "\b$a\b"; then
- continue
- fi
- ;;
- stable)
- if echo $ARCHS_stable | grep -q -v "\b$a\b"; then
- continue
- fi
- ;;
- testing)
- if echo $ARCHS_testing | grep -q -v "\b$a\b"; then
- continue
- fi
- ;;
- *)
- if echo $ARCHS_unstable | grep -q -v "\b$a\b"; then
- continue
- fi
- ;;
- esac
- perl -pi -e 's#^(non-free)/.*$##msg' quinn-$d.$a
- wanna-build --merge-all --arch=$a --dist=$d --database=$a/build-db Packages.$d.$a quinn-$d.$a Sources.$d
- mv Packages.$d.$a Packages.$d.$a-old
- mv quinn-$d.$a quinn-$d.$a-old
- done
- if [ "$DAY" = "0" ]; then
- savelog -p -c 26 /org/wanna-build/db/$a/transactions.log
- fi
- wanna-build --remove-maintenance-lock --database=$a/build-db
-done
-umask 022
-for d in $DISTS; do
- mv Sources.$d Sources.$d-old
-done
-
-echo "merge ended `date`"
-/org/wanna-build/bin/wb-graph >> /org/wanna-build/etc/graph-data
-/org/wanna-build/bin/wb-graph -p >> /org/wanna-build/etc/graph2-data
-rm -f "$LOCKFILE"
-trap - 0
-/org/buildd.debian.org/bin/makegraph
-for a in $ARCHS_stable; do
- echo Last Updated: `date -u` > /org/buildd.debian.org/web/stats/$a.txt
- for d in $STATS_DISTS; do
- /org/wanna-build/bin/wanna-build-statistics --database=$a/build-db --dist=$d >> /org/buildd.debian.org/web/stats/$a.txt
- done
-done
+++ /dev/null
-#!/usr/bin/env python
-
-# Create all the Release files
-
-# Copyright (C) 2001, 2002 Anthony Towns <ajt@debian.org>
-# $Id: ziyi,v 1.27 2005-11-15 09:50:32 ajt Exp $
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-# ``Bored now''
-
-################################################################################
-
-import sys, os, popen2, tempfile, stat, time
-import utils
-import apt_pkg
-
-################################################################################
-
-Cnf = None
-projectB = None
-out = None
-AptCnf = None
-
-################################################################################
-
-def usage (exit_code=0):
- print """Usage: ziyi [OPTION]... [SUITE]...
-Generate Release files (for SUITE).
-
- -h, --help show this help and exit
-
-If no SUITE is given, Release files are generated for all suites."""
-
- sys.exit(exit_code)
-
-################################################################################
-
-def add_tiffani (files, path, indexstem):
- index = "%s.diff/Index" % (indexstem)
- filepath = "%s/%s" % (path, index)
- if os.path.exists(filepath):
- #print "ALERT: there was a tiffani file %s" % (filepath)
- files.append(index)
-
-def compressnames (tree,type,file):
- compress = AptCnf.get("%s::%s::Compress" % (tree,type), AptCnf.get("Default::%s::Compress" % (type), ". gzip"))
- result = []
- cl = compress.split()
- uncompress = ("." not in cl)
- for mode in compress.split():
- if mode == ".":
- result.append(file)
- elif mode == "gzip":
- if uncompress:
- result.append("<zcat/.gz>" + file)
- uncompress = 0
- result.append(file + ".gz")
- elif mode == "bzip2":
- if uncompress:
- result.append("<bzcat/.bz2>" + file)
- uncompress = 0
- result.append(file + ".bz2")
- return result
-
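compressnames() reads its mode list from the AptCnf globals; this standalone rendition (compress modes passed in directly, a simplification for illustration) shows the "<zcat/.gz>" pseudo-name convention used when only the compressed file is kept on disk:

```python
def compressnames(file, compress=". gzip"):
    # For each compression mode, emit the filename to list in Release.
    # When the uncompressed form is not kept ("." absent from the mode
    # list), emit a "<zcat/.gz>file" pseudo-name once, telling the
    # checksummer downstream to decompress on the fly.
    result = []
    cl = compress.split()
    uncompress = ("." not in cl)
    for mode in cl:
        if mode == ".":
            result.append(file)
        elif mode == "gzip":
            if uncompress:
                result.append("<zcat/.gz>" + file)
                uncompress = 0
            result.append(file + ".gz")
        elif mode == "bzip2":
            if uncompress:
                result.append("<bzcat/.bz2>" + file)
                uncompress = 0
            result.append(file + ".bz2")
    return result
```

print_md5sha_files() later splits the pseudo-name back apart on "<", "/" and ">" to recover the decompressor and extension.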
-def create_temp_file (cmd):
- f = tempfile.TemporaryFile()
- r = popen2.popen2(cmd)
- r[1].close()
- r = r[0]
- size = 0
- while 1:
- x = r.readline()
- if not x:
- r.close()
- del x,r
- break
- f.write(x)
- size += len(x)
- f.flush()
- f.seek(0)
- return (size, f)
-
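create_temp_file() above relies on the long-gone popen2 module; with subprocess the same spool-to-temp-file contract takes a few lines (a sketch, assuming shell-style cmd strings as in the original):

```python
import os
import subprocess
import tempfile

def create_temp_file(cmd):
    # Run cmd via the shell, spool its stdout into an anonymous
    # temporary file, and return (size, file-rewound-to-start),
    # matching the popen2-based original's contract.
    f = tempfile.TemporaryFile()
    subprocess.run(cmd, shell=True, stdout=f, check=True)
    size = os.fstat(f.fileno()).st_size
    f.seek(0)
    return (size, f)
```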
-def print_md5sha_files (tree, files, hashop):
- path = Cnf["Dir::Root"] + tree + "/"
- for name in files:
- try:
- if name[0] == "<":
- j = name.index("/")
- k = name.index(">")
- (cat, ext, name) = (name[1:j], name[j+1:k], name[k+1:])
- (size, file_handle) = create_temp_file("%s %s%s%s" %
- (cat, path, name, ext))
- else:
- size = os.stat(path + name)[stat.ST_SIZE]
- file_handle = utils.open_file(path + name)
- except utils.cant_open_exc:
- print "ALERT: Couldn't open " + path + name
- else:
- hash = hashop(file_handle)
- file_handle.close()
- out.write(" %s %8d %s\n" % (hash, size, name))
-
-def print_md5_files (tree, files):
- print_md5sha_files (tree, files, apt_pkg.md5sum)
-
-def print_sha1_files (tree, files):
- print_md5sha_files (tree, files, apt_pkg.sha1sum)
-
-################################################################################
-
-def main ():
- global Cnf, AptCnf, projectB, out
- out = sys.stdout;
-
- Cnf = utils.get_conf()
-
- Arguments = [('h',"help","Ziyi::Options::Help")];
- for i in [ "help" ]:
- if not Cnf.has_key("Ziyi::Options::%s" % (i)):
- Cnf["Ziyi::Options::%s" % (i)] = "";
-
- suites = apt_pkg.ParseCommandLine(Cnf,Arguments,sys.argv)
- Options = Cnf.SubTree("Ziyi::Options")
-
- if Options["Help"]:
- usage();
-
- AptCnf = apt_pkg.newConfiguration()
- apt_pkg.ReadConfigFileISC(AptCnf,utils.which_apt_conf_file())
-
- if not suites:
- suites = Cnf.SubTree("Suite").List()
-
- for suite in suites:
- print "Processing: " + suite
- SuiteBlock = Cnf.SubTree("Suite::" + suite)
-
- if SuiteBlock.has_key("Untouchable"):
- print "Skipping: " + suite + " (untouchable)"
- continue
-
- suite = suite.lower()
-
- origin = SuiteBlock["Origin"]
- label = SuiteBlock.get("Label", origin)
- version = SuiteBlock.get("Version", "")
- codename = SuiteBlock.get("CodeName", "")
-
- if SuiteBlock.has_key("NotAutomatic"):
- notautomatic = "yes"
- else:
- notautomatic = ""
-
- if SuiteBlock.has_key("Components"):
- components = SuiteBlock.ValueList("Components")
- else:
- components = []
-
- suite_suffix = Cnf.Find("Dinstall::SuiteSuffix");
- if components and suite_suffix:
- longsuite = suite + "/" + suite_suffix;
- else:
- longsuite = suite;
-
- tree = SuiteBlock.get("Tree", "dists/%s" % (longsuite))
-
- if AptCnf.has_key("tree::%s" % (tree)):
- pass
- elif AptCnf.has_key("bindirectory::%s" % (tree)):
- pass
- else:
- aptcnf_filename = os.path.basename(utils.which_apt_conf_file());
- print "ALERT: suite %s not in %s, nor untouchable!" % (suite, aptcnf_filename);
- continue
-
- print Cnf["Dir::Root"] + tree + "/Release"
- out = open(Cnf["Dir::Root"] + tree + "/Release", "w")
-
- out.write("Origin: %s\n" % (origin))
- out.write("Label: %s\n" % (label))
- out.write("Suite: %s\n" % (suite))
- if version != "":
- out.write("Version: %s\n" % (version))
- if codename != "":
- out.write("Codename: %s\n" % (codename))
- out.write("Date: %s\n" % (time.strftime("%a, %d %b %Y %H:%M:%S UTC", time.gmtime(time.time()))))
- if notautomatic != "":
- out.write("NotAutomatic: %s\n" % (notautomatic))
- out.write("Architectures: %s\n" % (" ".join(filter(utils.real_arch, SuiteBlock.ValueList("Architectures")))))
- if components:
- out.write("Components: %s\n" % (" ".join(components)))
-
- out.write("Description: %s\n" % (SuiteBlock["Description"]))
-
- files = []
-
- if AptCnf.has_key("tree::%s" % (tree)):
- for sec in AptCnf["tree::%s::Sections" % (tree)].split():
- for arch in AptCnf["tree::%s::Architectures" % (tree)].split():
- if arch == "source":
- filepath = "%s/%s/Sources" % (sec, arch)
- for file in compressnames("tree::%s" % (tree), "Sources", filepath):
- files.append(file)
- add_tiffani(files, Cnf["Dir::Root"] + tree, filepath)
- else:
- disks = "%s/disks-%s" % (sec, arch)
- diskspath = Cnf["Dir::Root"]+tree+"/"+disks
- if os.path.exists(diskspath):
- for dir in os.listdir(diskspath):
- if os.path.exists("%s/%s/md5sum.txt" % (diskspath, dir)):
- files.append("%s/%s/md5sum.txt" % (disks, dir))
-
- filepath = "%s/binary-%s/Packages" % (sec, arch)
- for file in compressnames("tree::%s" % (tree), "Packages", filepath):
- files.append(file)
- add_tiffani(files, Cnf["Dir::Root"] + tree, filepath)
-
- if arch == "source":
- rel = "%s/%s/Release" % (sec, arch)
- else:
- rel = "%s/binary-%s/Release" % (sec, arch)
- relpath = Cnf["Dir::Root"]+tree+"/"+rel
-
- try:
- release = open(relpath, "w")
- #release = open(longsuite.replace("/","_") + "_" + arch + "_" + sec + "_Release", "w")
- except IOError:
- utils.fubar("Couldn't write to " + relpath);
-
- release.write("Archive: %s\n" % (suite))
- if version != "":
- release.write("Version: %s\n" % (version))
- if suite_suffix:
- release.write("Component: %s/%s\n" % (suite_suffix,sec));
- else:
- release.write("Component: %s\n" % (sec));
- release.write("Origin: %s\n" % (origin))
- release.write("Label: %s\n" % (label))
- if notautomatic != "":
- release.write("NotAutomatic: %s\n" % (notautomatic))
- release.write("Architecture: %s\n" % (arch))
- release.close()
- files.append(rel)
-
- if AptCnf.has_key("tree::%s/main" % (tree)):
- sec = AptCnf["tree::%s/main::Sections" % (tree)].split()[0]
- if sec != "debian-installer":
- print "ALERT: weird non debian-installer section in %s" % (tree)
-
- for arch in AptCnf["tree::%s/main::Architectures" % (tree)].split():
- if arch != "source": # always true
- for file in compressnames("tree::%s/main" % (tree), "Packages", "main/%s/binary-%s/Packages" % (sec, arch)):
- files.append(file)
- elif AptCnf.has_key("tree::%s::FakeDI" % (tree)):
- usetree = AptCnf["tree::%s::FakeDI" % (tree)]
- sec = AptCnf["tree::%s/main::Sections" % (usetree)].split()[0]
- if sec != "debian-installer":
- print "ALERT: weird non debian-installer section in %s" % (usetree)
-
- for arch in AptCnf["tree::%s/main::Architectures" % (usetree)].split():
- if arch != "source": # always true
- for file in compressnames("tree::%s/main" % (usetree), "Packages", "main/%s/binary-%s/Packages" % (sec, arch)):
- files.append(file)
-
- elif AptCnf.has_key("bindirectory::%s" % (tree)):
- for file in compressnames("bindirectory::%s" % (tree), "Packages", AptCnf["bindirectory::%s::Packages" % (tree)]):
- files.append(file.replace(tree+"/","",1))
- for file in compressnames("bindirectory::%s" % (tree), "Sources", AptCnf["bindirectory::%s::Sources" % (tree)]):
- files.append(file.replace(tree+"/","",1))
- else:
- print "ALERT: no tree/bindirectory for %s" % (tree)
-
- out.write("MD5Sum:\n")
- print_md5_files(tree, files)
- out.write("SHA1:\n")
- print_sha1_files(tree, files)
-
- out.close()
- if Cnf.has_key("Dinstall::SigningKeyring"):
- keyring = "--secret-keyring \"%s\"" % Cnf["Dinstall::SigningKeyring"]
- if Cnf.has_key("Dinstall::SigningPubKeyring"):
- keyring += " --keyring \"%s\"" % Cnf["Dinstall::SigningPubKeyring"]
-
- arguments = "--no-options --batch --no-tty --armour"
- if Cnf.has_key("Dinstall::SigningKeyIds"):
- signkeyids = Cnf["Dinstall::SigningKeyIds"].split()
- else:
- signkeyids = [""]
-
- dest = Cnf["Dir::Root"] + tree + "/Release.gpg"
- if os.path.exists(dest):
- os.unlink(dest)
-
- for keyid in signkeyids:
- if keyid != "": defkeyid = "--default-key %s" % keyid
- else: defkeyid = ""
- os.system("gpg %s %s %s --detach-sign <%s >>%s" %
- (keyring, defkeyid, arguments,
- Cnf["Dir::Root"] + tree + "/Release", dest))
-
-#######################################################################################
-
-if __name__ == '__main__':
- main()
-