r272 - in /debtorrent/branches/unique: ./ DebTorrent/ DebTorrent/BT1/ debian/

camrdale-guest at users.alioth.debian.org
Sun Aug 19 09:33:45 UTC 2007


Author: camrdale-guest
Date: Sun Aug 19 09:33:45 2007
New Revision: 272

URL: http://svn.debian.org/wsvn/debtorrent/?sc=1&rev=272
Log:
Merged revisions 226-271 via svnmerge from 
svn+ssh://camrdale-guest@svn.debian.org/svn/debtorrent/debtorrent/trunk

................
  r228 | camrdale-guest | 2007-08-13 09:52:20 -0700 (Mon, 13 Aug 2007) | 1 line
  
  Fix a bug that caused restarts to fail when downloaded files have been modified.
................
  r238 | camrdale-guest | 2007-08-14 12:18:16 -0700 (Tue, 14 Aug 2007) | 1 line
  
  Fix a bug in deleting old cached data.
................
  r255 | camrdale-guest | 2007-08-15 11:04:53 -0700 (Wed, 15 Aug 2007) | 1 line
  
  Fix a small tracker bug.
................
  r258 | camrdale-guest | 2007-08-16 10:29:38 -0700 (Thu, 16 Aug 2007) | 29 lines
  
  Merged revisions 202-242,244-257 via svnmerge from 
  svn+ssh://camrdale-guest@svn.debian.org/svn/debtorrent/debtorrent/branches/http1.1
  
  ........
    r202 | camrdale-guest | 2007-08-05 22:11:35 -0700 (Sun, 05 Aug 2007) | 1 line
    
    Switch the AptListener to queue requests by file name and then connection to allow for multiple requests per HTTP connection.
  ........
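
The queueing change in r202 above is essentially a data-structure change: pending requests move from one entry per connection to a dictionary keyed by file name whose values are dictionaries keyed by connection. A minimal sketch of that shape (illustrative names only; the real structure is AptListener.request_queue in the diff below):

    # request_queue: file name -> {connection -> request info}
    request_queue = {}

    def enqueue(file, connection, info):
        """Queue a request, allowing several files per HTTP connection."""
        queue = request_queue.setdefault(file, {})
        if connection in queue:
            return False      # a duplicate request for this file on this connection
        queue[connection] = info
        return True

    def dequeue(file, connection):
        """Drop a finished or abandoned request, removing empty file queues."""
        request_queue[file].pop(connection, None)
        if not request_queue[file]:
            del request_queue[file]
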
    r203 | camrdale-guest | 2007-08-06 16:17:57 -0700 (Mon, 06 Aug 2007) | 1 line
    
    Upgrade the HTTP server to support HTTP/1.1 connections, including persistent connections and pipelining.
  ........
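
For r203, persistent connections mean the server must not close the socket after a single response, and pipelining means several requests can arrive before any response has been sent, while responses must still go back in request order. A hedged sketch of one way to keep that ordering (not the actual HTTPHandler code; the HTTPRequest objects threaded through the diff below play a similar role):

    from collections import deque

    class PipelinedResponder(object):
        """Queue responses so they are flushed in the order requests arrived."""

        def __init__(self, send):
            self.send = send          # callable that writes one response to the socket
            self.pending = deque()    # [request, response-or-None] in arrival order

        def request_arrived(self, request):
            self.pending.append([request, None])

        def response_ready(self, request, response):
            for entry in self.pending:
                if entry[0] is request:
                    entry[1] = response
                    break
            # Only flush from the head, so later responses wait for earlier ones.
            while self.pending and self.pending[0][1] is not None:
                self.send(self.pending.popleft()[1])
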
    r247 | camrdale-guest | 2007-08-14 18:23:35 -0700 (Tue, 14 Aug 2007) | 1 line
    
    Introduce protocol tracking in anticipation of new protocols.
  ........
    r249 | camrdale-guest | 2007-08-14 20:14:09 -0700 (Tue, 14 Aug 2007) | 1 line
    
    Add support for the DEBTORRENT protocol.
  ........
    r251 | camrdale-guest | 2007-08-15 00:02:30 -0700 (Wed, 15 Aug 2007) | 1 line
    
    Return DEBTORRENT requests in any order.
  ........
    r257 | camrdale-guest | 2007-08-16 02:59:54 -0700 (Thu, 16 Aug 2007) | 1 line
    
    Update the HTTP cache to not thread too many requests at a time.
  ........
................
  r260 | camrdale-guest | 2007-08-16 19:21:10 -0700 (Thu, 16 Aug 2007) | 1 line
  
  Prevent HTTP sockets from being closed due to timeout when they are just waiting for a long download.
................
  r261 | camrdale-guest | 2007-08-16 21:22:48 -0700 (Thu, 16 Aug 2007) | 1 line
  
  Make the Packages decompression and torrent creation threaded.
................
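
The usual shape of the r261 change is to run the blocking decompression in a worker thread and hand the result back to the single-threaded event loop. This is only a sketch: `schedule` here stands in for whatever queues a callable on the main loop, much like the rawserver.add_task callback passed to TorrentCreator in the diff below.

    import threading
    from bz2 import decompress
    from gzip import GzipFile
    from io import BytesIO

    def decompress_packages_threaded(path, data, callback, schedule):
        """Decompress a Packages file off the main thread, then schedule callback."""
        def worker():
            if path.endswith('.gz'):
                text = GzipFile(fileobj=BytesIO(data)).read()
            elif path.endswith('.bz2'):
                text = decompress(data)
            else:
                text = data
            schedule(lambda: callback(path, text))   # hand the result back to the event loop
        threading.Thread(target=worker).start()
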
  r262 | camrdale-guest | 2007-08-17 00:07:24 -0700 (Fri, 17 Aug 2007) | 1 line
  
  Make sure the FileSelector priorities aren't updated before the initialization is complete.
................
  r264 | camrdale-guest | 2007-08-17 12:54:11 -0700 (Fri, 17 Aug 2007) | 1 line
  
  On startup when saved state is not available, scan the directory for already downloaded files.
................
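
The idea behind r264: with no saved state, a file already on disk whose size matches the metainfo is worth hash-checking instead of re-downloading. A minimal sketch under that assumption (not the actual DebTorrent startup code):

    import os

    def find_candidate_files(download_dir, torrent_files):
        """torrent_files: list of (relative path, expected size) from the metainfo."""
        candidates = []
        for name, size in torrent_files:
            path = os.path.join(download_dir, name)
            if os.path.isfile(path) and os.path.getsize(path) == size:
                candidates.append(name)   # looks complete; still needs a hash check
        return candidates
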
  r265 | camrdale-guest | 2007-08-17 17:39:04 -0700 (Fri, 17 Aug 2007) | 1 line
  
  Only connect to unique peers from the tracker that are not already connected.
................
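
The deduplication for r265 is visible in the Rerequester diff further down: peers from a tracker response are collected into a dictionary keyed by (ip, port), so repeated entries collapse before any connections are attempted. In miniature:

    def unique_peers(peers):
        """Collapse duplicate (ip, port) entries from a tracker response."""
        seen = {}
        for (ip, port), peer_id, crypto in peers:
            seen[(ip, port)] = (peer_id, crypto)
        return [(dns, peer_id, crypto) for dns, (peer_id, crypto) in seen.items()]
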
  r266 | camrdale-guest | 2007-08-17 19:06:35 -0700 (Fri, 17 Aug 2007) | 1 line
  
  Remove some unnecessary code that was supporting old Python versions.
................
  r267 | camrdale-guest | 2007-08-17 20:12:43 -0700 (Fri, 17 Aug 2007) | 1 line
  
  Update version and changelog for upcoming release.
................
  r268 | camrdale-guest | 2007-08-18 16:45:45 -0700 (Sat, 18 Aug 2007) | 1 line
  
  Fix a tracker bug that caused all torrents' peers to be returned for every request.
................
  r269 | camrdale-guest | 2007-08-18 19:14:33 -0700 (Sat, 18 Aug 2007) | 1 line
  
  Clean up the options and add an init and config script for the tracker.
................
  r270 | camrdale-guest | 2007-08-19 00:33:13 -0700 (Sun, 19 Aug 2007) | 1 line
  
  Run the programs in a loop, exiting only if an interrupt is received.
................
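
A wrapper like the one r270 describes usually looks as follows (a sketch assuming the client exposes a run-once entry point; the actual debtorrent-client and debtorrent-tracker scripts differ in detail):

    import logging, time

    def run_until_interrupted(run_once):
        """Keep restarting run_once(); exit only on an interrupt (Ctrl-C/SIGINT)."""
        while True:
            try:
                run_once()
            except KeyboardInterrupt:
                logging.info('interrupt received, exiting')
                break
            except Exception:
                logging.exception('unexpected failure, restarting')
                time.sleep(1)   # avoid a tight restart loop
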
  r271 | camrdale-guest | 2007-08-19 01:08:31 -0700 (Sun, 19 Aug 2007) | 1 line
  
  Change the default tracker port to 6969.
................

Added:
    debtorrent/branches/unique/debian/debtorrent-tracker.default
      - copied unchanged from r271, debtorrent/trunk/debian/debtorrent-tracker.default
    debtorrent/branches/unique/debian/debtorrent-tracker.init
      - copied unchanged from r271, debtorrent/trunk/debian/debtorrent-tracker.init
    debtorrent/branches/unique/debtorrent-tracker.conf
      - copied unchanged from r271, debtorrent/trunk/debtorrent-tracker.conf
Modified:
    debtorrent/branches/unique/   (props changed)
    debtorrent/branches/unique/DebTorrent/BT1/AptListener.py
    debtorrent/branches/unique/DebTorrent/BT1/Choker.py
    debtorrent/branches/unique/DebTorrent/BT1/Connecter.py
    debtorrent/branches/unique/DebTorrent/BT1/Downloader.py
    debtorrent/branches/unique/DebTorrent/BT1/DownloaderFeedback.py
    debtorrent/branches/unique/DebTorrent/BT1/Encrypter.py
    debtorrent/branches/unique/DebTorrent/BT1/FileSelector.py
    debtorrent/branches/unique/DebTorrent/BT1/HTTPDownloader.py
    debtorrent/branches/unique/DebTorrent/BT1/NatCheck.py
    debtorrent/branches/unique/DebTorrent/BT1/PiecePicker.py
    debtorrent/branches/unique/DebTorrent/BT1/Rerequester.py
    debtorrent/branches/unique/DebTorrent/BT1/Statistics.py
    debtorrent/branches/unique/DebTorrent/BT1/Storage.py
    debtorrent/branches/unique/DebTorrent/BT1/StorageWrapper.py
    debtorrent/branches/unique/DebTorrent/BT1/StreamCheck.py
    debtorrent/branches/unique/DebTorrent/BT1/T2T.py
    debtorrent/branches/unique/DebTorrent/BT1/Uploader.py
    debtorrent/branches/unique/DebTorrent/BT1/makemetafile.py
    debtorrent/branches/unique/DebTorrent/BT1/track.py
    debtorrent/branches/unique/DebTorrent/BTcrypto.py
    debtorrent/branches/unique/DebTorrent/ConfigDir.py
    debtorrent/branches/unique/DebTorrent/HTTPCache.py
    debtorrent/branches/unique/DebTorrent/HTTPHandler.py
    debtorrent/branches/unique/DebTorrent/RateLimiter.py
    debtorrent/branches/unique/DebTorrent/RateMeasure.py
    debtorrent/branches/unique/DebTorrent/RawServer.py
    debtorrent/branches/unique/DebTorrent/ServerPortHandler.py
    debtorrent/branches/unique/DebTorrent/SocketHandler.py
    debtorrent/branches/unique/DebTorrent/__init__.py
    debtorrent/branches/unique/DebTorrent/bencode.py
    debtorrent/branches/unique/DebTorrent/bitfield.py
    debtorrent/branches/unique/DebTorrent/download_bt1.py
    debtorrent/branches/unique/DebTorrent/inifile.py
    debtorrent/branches/unique/DebTorrent/iprangeparse.py
    debtorrent/branches/unique/DebTorrent/launchmanycore.py
    debtorrent/branches/unique/DebTorrent/parsedir.py
    debtorrent/branches/unique/DebTorrent/piecebuffer.py
    debtorrent/branches/unique/DebTorrent/subnetparse.py
    debtorrent/branches/unique/DebTorrent/torrentlistparse.py
    debtorrent/branches/unique/TODO
    debtorrent/branches/unique/btcompletedir.py
    debtorrent/branches/unique/btmakemetafile.py
    debtorrent/branches/unique/btshowmetainfo.py
    debtorrent/branches/unique/debian/changelog
    debtorrent/branches/unique/debian/control
    debtorrent/branches/unique/debian/debtorrent-client.init
    debtorrent/branches/unique/debian/debtorrent-client.sgml
    debtorrent/branches/unique/debian/debtorrent-tracker.sgml
    debtorrent/branches/unique/debian/debtorrent.install
    debtorrent/branches/unique/debian/rules
    debtorrent/branches/unique/debtorrent-client.py
    debtorrent/branches/unique/debtorrent-tracker.py
    debtorrent/branches/unique/setup.py
    debtorrent/branches/unique/test.py

Propchange: debtorrent/branches/unique/
------------------------------------------------------------------------------
--- svnmerge-integrated (original)
+++ svnmerge-integrated Sun Aug 19 09:33:45 2007
@@ -1,1 +1,1 @@
-/debtorrent/trunk:1-225
+/debtorrent/trunk:1-271

Modified: debtorrent/branches/unique/DebTorrent/BT1/AptListener.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/AptListener.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/AptListener.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/AptListener.py Sun Aug 19 09:33:45 2007
@@ -22,13 +22,11 @@
 from urlparse import urlparse
 from os.path import join
 from cStringIO import StringIO
-from gzip import GzipFile
-from bz2 import decompress
 from time import time, gmtime, strftime
 from DebTorrent.clock import clock
 from sha import sha
 from binascii import a2b_hex, b2a_hex
-from makemetafile import getpieces, getsubpieces, getordering, uniconvert, convert_all
+from makemetafile import TorrentCreator
 from DebTorrent.HTTPCache import HTTPCache
 from btformats import check_message
 import os, logging
@@ -94,11 +92,13 @@
         are lists of L{DebTorrent.HTTPHandler.HTTPConnection} objects which are the
         requests that are pending for that path.
     @type request_queue: C{dictionary}
-    @ivar request_queue: the pending HTTP get requests that are waiting for download.
-        Keys are L{DebTorrent.HTTPHandler.HTTPConnection} objects, values are
-        (L{DebTorrent.download_bt1.BT1Download}, C{int}, C{list} of C{int}, C{float})
-        which are the torrent downloader, file index, list of pieces needed, and 
-        the time of the original request.
+    @ivar request_queue: the pending HTTP package requests that are waiting for download.
+        Keys are the file names (including mirror) requested, values are dictionaries
+        with keys of L{DebTorrent.HTTPHandler.HTTPConnection} objects and values of
+        (L{DebTorrent.download_bt1.BT1Download}, C{int},
+        L{DebTorrent.HTTPHandler.HTTPRequest}, C{list} of C{int}, C{float})
+        which are the torrent downloader, file index, HTTP request object to answer, 
+        list of pieces needed, and the time of the original request.
     
     """
 
@@ -149,26 +149,34 @@
         self.request_queue = {}
         rawserver.add_task(self.process_queue, 1)
         
-    def enqueue_request(self, connection, downloader, file_num, pieces_needed):
+    def enqueue_request(self, connection, file, downloader, file_num, httpreq, pieces_needed):
         """Add a new download request to the queue of those waiting for pieces.
         
         @type connection: L{DebTorrent.HTTPHandler.HTTPConnection}
         @param connection: the connection the request came in on
+        @type file: C{string}
+        @param file: the file to download, starting with the mirror name
         @type downloader: L{DebTorrent.download_bt1.BT1Download}
         @param downloader: the torrent download that has the file
         @type file_num: C{int}
         @param file_num: the index of the file in the torrent
+        @type httpreq: L{DebTorrent.HTTPHandler.HTTPRequest}
+        @param httpreq: the HTTP request object to answer (for queueing)
         @type pieces_needed: C{list} of C{int}
         @param pieces_needed: the list of pieces in the torrent that still 
             need to download
         
         """
         
-        assert not self.request_queue.has_key(connection)
-        
-        logger.info('queueing request as file '+str(file_num)+' needs pieces: '+str(pieces_needed))
-
-        self.request_queue[connection] = (downloader, file_num, pieces_needed, clock())
+        # Get the file's queue and check it for this connection
+        queue = self.request_queue.setdefault(file, {})
+        if connection in queue:
+            logger.error('Received multiple requests for the same file on one connection')
+            return
+
+        logger.info('queueing request as file '+file+' needs pieces: '+str(pieces_needed))
+
+        queue[connection] = (downloader, file_num, httpreq, pieces_needed, clock())
         
     def process_queue(self):
         """Process the queue of waiting requests."""
@@ -177,29 +185,32 @@
         self.rawserver.add_task(self.process_queue, 1)
         
         closed_conns = []
-        for c, v in self.request_queue.items():
-            # Check for a closed connection
-            if c.closed:
-                closed_conns.append(c)
-                logger.warning('connection closed while request queued for file '+str(v[1]))
-                continue
-                
-            # Remove the downloaded pieces from the list of needed ones
-            for piece in list(v[2]):
-                if v[0].storagewrapper.do_I_have(piece):
-                    logger.debug('queued request for file '+str(v[1])+' got piece '+str(piece))
-                    v[2].remove(piece)
+        for file, queue in self.request_queue.items():
+            for c, v in queue.items():
+                # Check for a closed connection
+                if c.closed:
+                    closed_conns.append((file, c))
+                    logger.warning('connection closed while request queued for file '+file)
+                    continue
                     
-            # If no more pieces are needed, return the answer and remove the request
-            if not v[2]:
-                logger.info('queued request for file '+str(v[1])+' is complete')
-                del self.request_queue[c]
-                v[0].storagewrapper.set_file_readonly(v[1])
-                self.answer_package(c, v[0], v[1])
-
-        # Remove closed connections from the queue
-        for c in closed_conns:
-            del self.request_queue[c]
+                # Remove the downloaded pieces from the list of needed ones
+                for piece in list(v[3]):
+                    if v[0].storagewrapper.do_I_have(piece):
+                        logger.debug('queued request for file '+file+' got piece '+str(piece))
+                        v[3].remove(piece)
+                        
+                # If no more pieces are needed, return the answer and remove the request
+                if not v[3]:
+                    logger.info('queued request for file '+file+' is complete')
+                    closed_conns.append((file, c))
+                    v[0].storagewrapper.set_file_readonly(v[1])
+                    self.answer_package(c, file, v[0], v[1], v[2])
+
+        # Remove closed/finished connections from the queue
+        for (file, c) in closed_conns:
+            self.request_queue[file].pop(c)
+            if not self.request_queue[file]:
+                self.request_queue.pop(file)
 
 
     def get_infopage(self):
@@ -308,7 +319,7 @@
         return (200, 'OK', {'Server': VERSION, 'Content-Type': 'text/html; charset=iso-8859-1'}, """<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">\n<html><head><title>Meow</title>\n</head>\n<body style="color: rgb(255, 255, 255); background-color: rgb(0, 0, 0);">\n<div><big style="font-weight: bold;"><big><big><span style="font-family: arial,helvetica,sans-serif;">I&nbsp;IZ&nbsp;TAKIN&nbsp;BRAKE</span></big></big></big><br></div>\n<pre><b><tt>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .-o=o-.<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ,&nbsp; /=o=o=o=\ .--.<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; _|\|=o=O=o=O=|&nbsp;&nbsp;&nbsp; \<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; __.'&nbsp; a`\=o=o=o=(`\&nbsp;&nbsp; /<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; '.&nbsp;&nbsp; a 4/`|.-""'`\ \ ;'`)&nbsp;&nbsp; .---.<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \&nbsp;&nbsp; .'&nbsp; /&nbsp;&nbsp; .--'&nbsp; |_.'&nbsp;&nbsp; / .-._)<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; `)&nbsp; _.'&nbsp;&nbsp; /&nbsp;&nbsp;&nbsp;&nbsp; /`-.__.' /<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; `'-.____;&nbsp;&nbsp;&nbsp;&nbsp; /'-.___.-'<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; `\"""`</tt></b></pre>\n<div><big style="font-weight: bold;"><big><big><span style="font-family: arial,helvetica,sans-serif;">FRM&nbsp;GETIN&nbsp;UR&nbsp;PACKAGES</span></big></big></big><br></div>\n</body>\n</html>""")
 
 
-    def get_cached(self, connection, path, headers):
+    def get_cached(self, connection, path, headers, httpreq):
         """Proxy the (possibly cached) download of a file from a mirror.
         
         @type connection: L{DebTorrent.HTTPHandler.HTTPConnection}
@@ -317,6 +328,8 @@
         @param path: the path of the file to download, starting with the mirror name
         @type headers: C{dictionary}
         @param headers: the headers from the request
+        @type httpreq: L{DebTorrent.HTTPHandler.HTTPRequest}
+        @param httpreq: the HTTP request object to answer (for queueing)
         @rtype: (C{int}, C{string}, C{dictionary}, C{string})
         @return: the HTTP status code, status message, headers, and downloaded file
             (or None if the file is being downloaded)
@@ -336,10 +349,10 @@
             if r[0] not in (200, 304):
                 # Get Debs from the debtorrent download, others are straight download
                 if path[-1][-4:] == '.deb':
-                    return self.get_package(connection, path)
+                    return self.get_package(connection, path, httpreq)
                 else:
                     # Save the connection info and start downloading the file
-                    self.cache_waiting.setdefault('/'.join(path), []).append(connection)
+                    self.cache_waiting.setdefault('/'.join(path), []).append((connection, httpreq))
                     self.Cache.download_get(path, self.get_cached_callback)
                     return None
             
@@ -348,9 +361,11 @@
                     # Oops, we do need the cached file after all to start the torrent
                     logger.info('retrieving the cached Packages file to start the torrent')
                     r2 = self.Cache.cache_get(path)
-                    self.got_Packages(path, r2[3])
+                    TorrentCreator(path, r2[3], self.start_torrent, 
+                                   self.rawserver.add_task, self.config['separate_all'])
                 else:
-                    self.got_Packages(path, r[3])                    
+                    TorrentCreator(path, r[3], self.start_torrent, 
+                                   self.rawserver.add_task, self.config['separate_all'])
 
             return r
         
@@ -382,23 +397,26 @@
 
         # If it's a torrent file, start it
         if r[0] == 200 and path[-1] in ('Packages', 'Packages.gz', 'Packages.bz2'):
-            self.got_Packages(path, r[3])
-
-        for connection in connections:
+            TorrentCreator(path, r[3], self.start_torrent,
+                           self.rawserver.add_task, self.config['separate_all'])
+
+        for (connection, httpreq) in connections:
             # Check to make sure the requester is still waiting
             if connection.closed:
                 logger.warning('Retrieved the file, but the requester is gone: '+'/'.join(path))
                 continue
             
-            connection.answer(r)
-            
-    def get_package(self, connection, path):
+            connection.answer(r, httpreq)
+            
+    def get_package(self, connection, path, httpreq):
         """Download a package file from a torrent.
         
         @type connection: L{DebTorrent.HTTPHandler.HTTPConnection}
         @param connection: the connection the request came in on
         @type path: C{list} of C{string}
         @param path: the path of the file to download, starting with the mirror name
+        @type httpreq: L{DebTorrent.HTTPHandler.HTTPRequest}
+        @param httpreq: the HTTP request object to answer (for queueing)
         @rtype: (C{int}, C{string}, C{dictionary}, C{string})
         @return: the HTTP status code, status message, headers, and package data
             (or None if the package is to be downloaded)
@@ -425,7 +443,9 @@
             if not d.storagewrapper.do_I_have(piece):
                 pieces_needed.append(piece)
             elif not pieces_needed:
-                data = data + d.storagewrapper.get_piece(piece, 0, -1).getarray().tostring()
+                piecebuf = d.storagewrapper.get_piece(piece, 0, -1)
+                data += piecebuf.getarray().tostring()
+                piecebuf.release()
         
         if not pieces_needed:
             return (200, 'OK', {'Server': VERSION, 'Content-Type': 'text/plain'}, data)
@@ -437,20 +457,24 @@
         d.fileselector.set_priority(f, 1)
         
         # Add the connection to the list of those needing responses
-        self.enqueue_request(connection, d, f, pieces_needed)
+        self.enqueue_request(connection, '/'.join(path), d, f, httpreq, pieces_needed)
         
         return None
         
     
-    def answer_package(self, connection, d, f):
+    def answer_package(self, connection, file, d, f, httpreq):
         """Send the newly downloaded package file to the requester.
         
         @type connection: L{DebTorrent.HTTPHandler.HTTPConnection}
         @param connection: the connection the request came in on
+        @type file: C{string}
+        @param file: the file to download, starting with the mirror name
         @type d: L{DebTorrent.download_bt1.BT1Download}
         @param d: the torrent download that has the file
         @type f: C{int}
         @param f: the index of the file in the torrent
+        @type httpreq: L{DebTorrent.HTTPHandler.HTTPRequest}
+        @param httpreq: the HTTP request object to answer (for queueing)
         
         """
 
@@ -466,77 +490,24 @@
             if not d.storagewrapper.do_I_have(piece):
                 pieces_needed.append(piece)
             elif not pieces_needed:
-                data = data + d.storagewrapper.get_piece(piece, 0, -1).getarray().tostring()
+                piecebuf = d.storagewrapper.get_piece(piece, 0, -1)
+                data += piecebuf.getarray().tostring()
+                piecebuf.release()
         
         if not pieces_needed:
-            connection.answer((200, 'OK', {'Server': VERSION, 'Content-Type': 'text/plain'}, data))
+            connection.answer((200, 'OK', {'Server': VERSION, 'Content-Type': 'text/plain'}, data), httpreq)
             return
 
         # Something strange has happened, requeue it
         logger.warning('requeuing request for file '+str(f)+' as it still needs pieces: '+str(pieces_needed))
-        self.enqueue_request(connection, d, f, pieces_needed)
-        
-    
-    def got_Packages(self, path, data):
-        """Process a downloaded Packages file and start a torrent.
-        
-        @type path: C{list} of C{string}
-        @param path: the path of the file to download, starting with the mirror name
-        @type data: C{string}
-        @param data: the downloaded Packages file
-        
-        """
-
-        try:
-            # Decompress the data
-            if path[-1].endswith('.gz'):
-                compressed = StringIO(data)
-                f = GzipFile(fileobj = compressed)
-                data = f.read()
-            elif path[-1].endswith('.bz2'):
-                data = decompress(data)
-            
-            name = '_'.join(path[:-1])
-
-            assert data[:8] == "Package:"
-            h = data.split('\n')
-        except:
-            logger.warning('Packages file is not in the correct format')
-            return 
-
-        try:
-            sub_pieces = getsubpieces('_'.join(path))
-            
-            (piece_ordering, ordering_headers) = getordering('_'.join(path))
-            if self.config['separate_all']:
-                (piece_ordering_all, ordering_all_headers) = getordering('_'.join(path), all = True)
-            else:
-                piece_ordering_all = {}
-                ordering_all_headers = {}
-        
-            (info, info_all) = getpieces(h, separate_all = self.config['separate_all'],
-                                         sub_pieces = sub_pieces,
-                                         piece_ordering = piece_ordering,
-                                         piece_ordering_all = piece_ordering_all,
-                                         num_pieces = int(ordering_headers.get('NextPiece', 0)),
-                                         num_all_pieces = int(ordering_all_headers.get('NextPiece', 0)))
-        except:
-            logger.exception('Failed to create torrent for: %s', name)
-            return
-        
-        if info and self.config['separate_all'] in (0, 2, 3):
-            self.start_torrent(info, ordering_headers, name, path)
-            
-        if info_all and self.config['separate_all'] in (1, 3):
-            self.start_torrent(info_all, ordering_all_headers, convert_all(name), path)
-
-    def start_torrent(self, info, headers, name, path):
+        self.enqueue_request(connection, file, d, f, httpreq, pieces_needed)
+        
+    
+    def start_torrent(self, response, name, path):
         """Start the torrent running.
         
-        @type info: C{dictionary}
-        @param info: the info dictionary to use for the torrent
-        @type headers: C{dictionary}
-        @param headers: the headers from the torrent file
+        @type response: C{dictionary}
+        @param response: the metainfo dictionary to use for the torrent
         @type name: C{string}
         @param name: the name to use for the torrent
         @type path: C{list} of C{string}
@@ -544,32 +515,12 @@
         
         """
         
-        response = {'info': info,
-                    'announce': self.config['default_tracker'],
-                    'name': uniconvert(name)}
-
-        if "Tracker" in headers:
-            response['announce'] = headers["Tracker"].strip()
-            del headers["Tracker"]
-        if "Torrent" in headers:
-            response['identifier'] = a2b_hex(headers["Torrent"].strip())
-            del headers["Torrent"]
-        for header, value in headers.items():
-            response[header] = value.strip()
-
+        if 'announce' not in response or not response['announce']:
+            response['announce'] = self.config['default_tracker']
+        
         if self.config["force_tracker"]:
             response['announce'] = self.config["force_tracker"]
             
-        if path.count('dists'):
-            mirror = 'http://' + '/'.join(path[:path.index('dists')]) + '/'
-            response.setdefault('deb_mirrors', []).append(mirror)
-        
-        try:
-            check_message(response)
-        except:
-            logger.exception('Poorly formatted torrent, not starting')
-            return
-        
         infohash = sha(bencode(response['info'])).digest()
         
         a = {}
@@ -644,7 +595,7 @@
                 response)
 
 
-    def get(self, connection, path, headers):
+    def get(self, connection, path, headers, httpreq):
         """Respond to a GET request.
         
         Process a GET request from APT/browser/other. Process the request,
@@ -657,6 +608,8 @@
         @param path: the URL being requested
         @type headers: C{dictionary}
         @param headers: the headers from the request
+        @type httpreq: L{DebTorrent.HTTPHandler.HTTPRequest}
+        @param httpreq: the HTTP request object to answer (for queueing)
         @rtype: (C{int}, C{string}, C{dictionary}, C{string})
         @return: the HTTP status code, status message, headers, and message body
         
@@ -728,7 +681,7 @@
             if 'Packages.diff' in path:
                 return (404, 'Not Found', {'Server': VERSION, 'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
             
-            return self.get_cached(connection, path, headers)
+            return self.get_cached(connection, path, headers, httpreq)
             
         except ValueError, e:
             logger.exception('Bad request from: '+ip)

Modified: debtorrent/branches/unique/DebTorrent/BT1/Choker.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/Choker.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/Choker.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/Choker.py Sun Aug 19 09:33:45 2007
@@ -8,11 +8,6 @@
 
 from random import randrange, shuffle
 from DebTorrent.clock import clock
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 class Choker:
     """Manages the choking and unchoking of other downloaders.

Modified: debtorrent/branches/unique/DebTorrent/BT1/Connecter.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/Connecter.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/Connecter.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/Connecter.py Sun Aug 19 09:33:45 2007
@@ -32,36 +32,10 @@
 from DebTorrent.bitfield import Bitfield
 from DebTorrent.clock import clock
 from binascii import b2a_hex
+import struct
 import logging
 
-try:
-    True
-except:
-    True = 1
-    False = 0
-
 logger = logging.getLogger('DebTorrent.BT1.Connecter')
-
-def toint(s):
-    """Convert four-byte big endian representation to a long.
-    
-    @type s: C{string}
-    @param s: the string to convert
-    
-    """
-    
-    return long(b2a_hex(s), 16)
-
-def tobinary(i):
-    """Convert an integer to a four-byte big endian representation.
-    
-    @type i: C{int}
-    @param i: the integer to convert
-    
-    """
-    
-    return (chr(i >> 24) + chr((i >> 16) & 0xFF) + 
-        chr((i >> 8) & 0xFF) + chr(i & 0xFF))
 
 CHOKE = chr(0)
 UNCHOKE = chr(1)
@@ -229,8 +203,7 @@
         
         """
         
-        self._send_message(REQUEST + tobinary(index) + 
-            tobinary(begin) + tobinary(length))
+        self._send_message(REQUEST + struct.pack('>iii', index, begin, length))
         logger.debug(self.get_ip()+': sent request '+str(index)+', '+str(begin)+'-'+str(begin+length))
 
     def send_cancel(self, index, begin, length):
@@ -247,8 +220,7 @@
         
         """
         
-        self._send_message(CANCEL + tobinary(index) + 
-            tobinary(begin) + tobinary(length))
+        self._send_message(CANCEL + struct.pack('>iii', index, begin, length))
         logger.debug(self.get_ip()+': sent cancel '+str(index)+', '+str(begin)+'-'+str(begin+length))
 
     def send_bitfield(self, bitfield):
@@ -269,7 +241,7 @@
         
         """
         
-        self._send_message(HAVE + tobinary(index))
+        self._send_message(HAVE + struct.pack('>i', index))
 
     def send_keepalive(self):
         """Send a keepalive message."""
@@ -287,7 +259,7 @@
             logger.debug(self.get_ip()+': SENDING MESSAGE '+str(ord(s[0]))+' ('+str(len(s))+')')
         else:
             logger.debug(self.get_ip()+': SENDING MESSAGE keepalive (0)')
-        s = tobinary(len(s))+s
+        s = struct.pack('>i', len(s))+s
         if self.partial_message:
             self.outqueue.append(s)
         else:
@@ -311,8 +283,8 @@
                 return 0
             index, begin, piece = s
             self.partial_message = ''.join((
-                            tobinary(len(piece) + 9), PIECE,
-                            tobinary(index), tobinary(begin), piece.tostring() ))
+                            struct.pack('>i', len(piece) + 9), PIECE,
+                            struct.pack('>ii', index, begin), piece.tostring() ))
             logger.debug(self.get_ip()+': sending chunk '+str(index)+', '+str(begin)+'-'+str(begin+len(piece)))
 
         if bytes < len(self.partial_message):
@@ -324,7 +296,7 @@
         self.partial_message = None
         if self.send_choke_queued:
             self.send_choke_queued = False
-            self.outqueue.append(tobinary(1)+CHOKE)
+            self.outqueue.append(struct.pack('>i', 1)+CHOKE)
             self.upload.choke_sent()
             self.just_unchoked = 0
         q.extend(self.outqueue)
@@ -567,7 +539,7 @@
                 logger.warning(c.get_ip()+': bad message length, closing connection')
                 connection.close()
                 return
-            i = toint(message[1:])
+            i = struct.unpack('>i', message[1:])[0]
             if i >= self.numpieces:
                 logger.debug(c.get_ip()+': dropping an unknown piece number')
                 return
@@ -587,36 +559,36 @@
                 logger.warning(c.get_ip()+': bad message length, closing connection')
                 connection.close()
                 return
-            i = toint(message[1:5])
+            i = struct.unpack('>i', message[1:5])[0]
             if i >= self.numpieces:
                 logger.warning(c.get_ip()+': bad piece number, closing connection')
                 connection.close()
                 return
-            c.got_request(i, toint(message[5:9]), 
-                toint(message[9:]))
+            c.got_request(i, struct.unpack('>i', message[5:9])[0], 
+                struct.unpack('>i', message[9:])[0])
         elif t == CANCEL:
             if len(message) != 13:
                 logger.warning(c.get_ip()+': bad message length, closing connection')
                 connection.close()
                 return
-            i = toint(message[1:5])
+            i = struct.unpack('>i', message[1:5])[0]
             if i >= self.numpieces:
                 logger.warning(c.get_ip()+': bad piece number, closing connection')
                 connection.close()
                 return
-            c.upload.got_cancel(i, toint(message[5:9]), 
-                toint(message[9:]))
+            c.upload.got_cancel(i, struct.unpack('>i', message[5:9])[0], 
+                struct.unpack('>i', message[9:])[0])
         elif t == PIECE:
             if len(message) <= 9:
                 logger.warning(c.get_ip()+': bad message length, closing connection')
                 connection.close()
                 return
-            i = toint(message[1:5])
+            i = struct.unpack('>i', message[1:5])[0]
             if i >= self.numpieces:
                 logger.warning(c.get_ip()+': bad piece number, closing connection')
                 connection.close()
                 return
-            if c.download.got_piece(i, toint(message[5:9]), message[9:]):
+            if c.download.got_piece(i, struct.unpack('>i', message[5:9])[0], message[9:]):
                 self.got_piece(i)
         else:
             logger.warning(c.get_ip()+': unknown message type, closing connection')
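
The toint()/tobinary() helpers removed from Connecter.py above were hand-rolled conversions between integers and four-byte big-endian strings; struct.pack and struct.unpack with the '>i' format are the standard-library equivalents (and '>h' covers the two-byte case used in Encrypter.py). For example:

    import struct

    assert struct.pack('>i', 1000) == b'\x00\x00\x03\xe8'        # replaces tobinary(1000)
    assert struct.unpack('>i', b'\x00\x00\x03\xe8')[0] == 1000   # replaces toint(...)
    assert struct.unpack('>h', struct.pack('>h', 513))[0] == 513  # two-byte form from Encrypter.py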

Modified: debtorrent/branches/unique/DebTorrent/BT1/Downloader.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/Downloader.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/Downloader.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/Downloader.py Sun Aug 19 09:33:45 2007
@@ -18,11 +18,6 @@
 from random import shuffle
 from DebTorrent.clock import clock
 import logging
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.BT1.Downloader')
 

Modified: debtorrent/branches/unique/DebTorrent/BT1/DownloaderFeedback.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/DownloaderFeedback.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/DownloaderFeedback.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/DownloaderFeedback.py Sun Aug 19 09:33:45 2007
@@ -15,12 +15,6 @@
 from cStringIO import StringIO
 from urllib import quote
 from threading import Event
-
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 INIT_STATE = (('R','R+'),('L','L+'))
 

Modified: debtorrent/branches/unique/DebTorrent/BT1/Encrypter.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/Encrypter.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/Encrypter.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/Encrypter.py Sun Aug 19 09:33:45 2007
@@ -13,11 +13,6 @@
     outstanding before new connections to initiate get queued
 @type option_pattern: C{string}
 @var option_pattern: the supported options to send to all peers
-@type hexchars: C{string}
-@var hexchars: the hex characters
-@type hexmap: C{list} of C{string}
-@var hexmap: a mapping from the first 256 integers to their 2-byte hex
-    representation
 @type incompletecounter: L{IncompleteCounter}
 @var incompletecounter: the counter to use to track the number of incomplete
     connections outstanding
@@ -26,19 +21,12 @@
 
 from cStringIO import StringIO
 from binascii import b2a_hex
+import struct
 from socket import error as socketerror
-from urllib import quote
 from DebTorrent.BTcrypto import Crypto
-from DebTorrent.__init__ import protocol_name
+from DebTorrent.__init__ import protocol_name, make_readable
 import logging
 
-try:
-    True
-except:
-    True = 1
-    False = 0
-    bool = lambda x: not not x
-
 logger = logging.getLogger('DebTorrent.BT1.Encrypter')
 
 DEBUG = False
@@ -47,67 +35,6 @@
 
 option_pattern = chr(0)*8
 
-def toint(s):
-    """Convert a 2-byte big-endian string representation back to an integer.
-    
-    @type s: C{string}
-    @param s: the 2 byte big-endian string representation to convert
-    @rtype: C{long}
-    @return: the integer
-    
-    """
-    
-    return long(b2a_hex(s), 16)
-
-def tobinary16(i):
-    """Convert an integer to a 2-byte big-endian string representation.
-    
-    @type i: C{int}
-    @param i: the integer to convert
-    @rtype: C{string}
-    @return: the 2 byte big-endian string representation
-    
-    """
-    
-    return chr((i >> 8) & 0xFF) + chr(i & 0xFF)
-
-hexchars = '0123456789ABCDEF'
-hexmap = []
-for i in xrange(256):
-    hexmap.append(hexchars[(i&0xF0)/16]+hexchars[i&0x0F])
-
-def tohex(s):
-    """Convert a string of characters to it's hex representation.
-    
-    @type s: C{string}
-    @param s: the string to convert
-    @rtype: C{string}
-    @return: the resulting hex string
-    
-    """
-    
-    r = []
-    for c in s:
-        r.append(hexmap[ord(c)])
-    return ''.join(r)
-
-def make_readable(s):
-    """Convert a string to be human-readable.
-    
-    @type s: C{string}
-    @param s: the string to convert
-    @rtype: C{string}
-    @return: the resulting hex string, or the original string if it was already
-        readable
-    
-    """
-    
-    if not s:
-        return ''
-    if quote(s).find('%') >= 0:
-        return tohex(s)
-    return '"'+s+'"'
-   
 
 class IncompleteCounter:
     """Keep track of the number of oustanding incomplete connections.
@@ -154,6 +81,8 @@
     @ivar connection: the low-level connection to the peer
     @type connecter: L{Connecter.Connecter}
     @ivar connecter: the Connecter instance to use
+    @type dns: (C{string}, C{int})
+    @ivar dns: the IP address and port to connect to
     @type id: C{string}
     @ivar id: the peer ID of the peer
     @type locally_initiated: C{boolean}
@@ -211,7 +140,7 @@
 
     """
     
-    def __init__(self, Encoder, connection, id,
+    def __init__(self, Encoder, connection, dns, id,
                  ext_handshake=False, encrypted = None, options = None):
         """Initialize the instance and start handling the connection.
         
@@ -219,14 +148,17 @@
         @param Encoder: the collection of all connections
         @type connection: L{DebTorrent.SocketHandler.SingleSocket}
         @param connection: the low-level connection to the peer
+        @type dns: (C{string}, C{int})
+        @param dns: the IP address and port to connect to
         @type id: C{string}
         @param id: the peer ID of the peer to connect to (will be None if 
             the connection is being initiated locally)
         @type ext_handshake: C{boolean}
         @param ext_handshake: whether the connection has already been
             handshaked by another module (optional, defaults to False)
-        @type encrypted: C{DebTorrent.BT1Crypto.Crypto}
-        @param encrypted: the already created Crypto instance, if the connection
+        @type encrypted: C{int} or C{DebTorrent.BTcrypto.Crypto}
+        @param encrypted: the type of encryption the connection supports
+            (0 for none), or the already created Crypto instance, if the connection
             was externally handshaked (optional, defaults to creating a new one)
         @type options: C{string}
         @param options: the options read from the externally handshaked
@@ -237,6 +169,7 @@
         self.Encoder = Encoder
         self.connection = connection
         self.connecter = Encoder.connecter
+        self.dns = dns
         self.id = id
         self.locally_initiated = (id != None)
         self.readable_id = make_readable(id)
@@ -460,7 +393,7 @@
                       + self.encrypter.encrypt(
                             ('\x00'*8)            # VC
                           + cryptmode             # acceptable crypto modes
-                          + tobinary16(len(padc))
+                          + struct.pack('>h', len(padc))
                           + padc                  # PadC
                           + '\x00\x00' ) )        # no initial payload data
             self._max_search = 520
@@ -541,7 +474,7 @@
         if s[:8] != ('\x00'*8):             # check VC
             logger.info('Dropped the encrypted connection due to a bad VC: '+self.readable_id)
             return None
-        self.cryptmode = toint(s[8:12]) % 4
+        self.cryptmode = struct.unpack('>i', s[8:12])[0] % 4
         if self.cryptmode == 0:
             logger.info('Dropped the encrypted connection due to a bad crypt mode: '+self.readable_id)
             return None                     # no encryption selected
@@ -578,7 +511,7 @@
         padd = self.encrypter.padding()
         self.write( ('\x00'*8)            # VC
                   + cryptmode             # encryption mode
-                  + tobinary16(len(padd))
+                  + struct.pack('>h', len(padd))
                   + padd )                # PadD
         if ialen:
             return ialen, self.read_crypto_ia
@@ -652,7 +585,7 @@
         
         """
         
-        self.cryptmode = toint(s[:4]) % 4
+        self.cryptmode = struct.unpack('>i',s[:4])[0] % 4
         if self.cryptmode == 1:             # only header encryption
             if self.Encoder.config['crypto_only']:
                 logger.info('Dropped the header-only encrypted connection: '+self.readable_id)
@@ -779,7 +712,7 @@
                 return None
         self.complete = self.Encoder.got_id(self)
         if not self.complete:
-            logger.warning('Connection disallowed for security: '+self.readable_id)
+            logger.warning('Connection to %r disallowed for security: '+self.readable_id, self.dns)
             return None
         if self.locally_initiated:
             self.write(self.Encoder.my_id)
@@ -801,7 +734,7 @@
         
         """
         
-        l = toint(s)
+        l = struct.unpack('>i', s)[0]
         if l > self.Encoder.max_len:
             logger.warning('Dropped the connection due to bad length message: '+self.readable_id)
             return None
@@ -836,8 +769,8 @@
 
     def _auto_close(self):
         """Close the connection if the handshake is not yet complete."""
-        if not self.complete:
-            logger.warning('Connection dropped due to handshake taking too long: '+self.readable_id)
+        if not self.complete and not self.closed:
+            logger.warning('Connection to %r dropped due to handshake taking too long: '+self.readable_id, self.dns)
             self.close()
 
     def close(self):
@@ -1199,7 +1132,7 @@
             logger.info('Not connecting due to too many connections: '+str(len(self.connections))+' >= '+str(self.max_connections))
             return True
         if id == self.my_id:
-            logger.info('Not connecting due to it being my ID: '+id)
+            logger.info('Not connecting due to it being my ID: '+dns[0])
             return True
         if not self.check_ip(ip=dns[0]):
             logger.info('Not connecting due to the IP being banned: '+dns[0])
@@ -1220,10 +1153,12 @@
             if self.config['security'] and ip != 'unknown' and ip == dns[0]:
                 logger.info('Not connecting due to a matching IP: '+ip)
                 return True
+            if dns == v.dns:
+                logger.info('Not connecting due to already being connected: %r', dns)
         try:
             logger.debug('initiating connection to: '+str(dns)+', '+str(id)+', '+str(encrypted))
             c = self.raw_server.start_connection(dns)
-            con = Connection(self, c, id, encrypted = encrypted)
+            con = Connection(self, c, dns, id, encrypted = encrypted)
             self.connections[c] = con
             c.set_handler(con)
         except socketerror:
@@ -1320,7 +1255,9 @@
                             str(len(self.connections))+' >= '+str(self.max_connections))
             connection.close()
             return False
-        con = Connection(self, connection, None)
+        dns = connection.getpeername()
+        logger.info("Reveived a connection from: %r", dns)
+        con = Connection(self, connection, dns, None)
         self.connections[connection] = con
         connection.set_handler(con)
         return True
@@ -1357,7 +1294,9 @@
             logger.info('Not allowing external connection due to the IP being banned: '+dns[0])
             connection.close()
             return False
-        con = Connection(self, connection, None,
+        dns = connection.getpeername()
+        logger.info("Received an externally handled connection from: %r", dns)
+        con = Connection(self, connection, dns, None,
                 ext_handshake = True, encrypted = encrypted, options = options)
         self.connections[connection] = con
         connection.set_handler(con)

Modified: debtorrent/branches/unique/DebTorrent/BT1/FileSelector.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/FileSelector.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/FileSelector.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/FileSelector.py Sun Aug 19 09:33:45 2007
@@ -13,11 +13,6 @@
 
 from random import shuffle
 import logging
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.BT1.FileSelector')
 
@@ -46,6 +41,8 @@
          0 -- highest priority
          1 -- medium priority
          2 -- lowest priority
+    @type init_priority: C{list} of C{int}
+    @ivar init_priority: the initial unpickled priority of each file in the download
     @type new_priority: C{list} of C{int}
     @ivar new_priority: the new priority of each file in the download
     @type new_partials: C{list} of C{int}
@@ -64,8 +61,6 @@
         newly enabled piece download
     @type rerequestfunc: C{method}
     @ivar rerequestfunc: method to call to request more peers
-    @type new_piece_priority: C{list} of C{int}
-    @ivar new_piece_priority: the new priority for each piece in the download
     
     """
     
@@ -99,11 +94,15 @@
         self.failfunc = failfunc
         self.downloader = None
         self.picker = picker
+        self.cancelfunc = None
+        self.requestmorefunc = None
+        self.rerequestfunc = None
 
         storage.set_bufferdir(bufferdir)
         
         self.numfiles = len(files)
         self.priority = [-1] * self.numfiles
+        self.init_priority = None
         self.new_priority = None
         self.new_partials = None
         self.filepieces = []
@@ -141,8 +140,8 @@
         
 
 
-    def init_priority(self, new_priority):
-        """Initialize the priorities of all the files.
+    def init_priorities(self, init_priority):
+        """Initialize the priorities of all the files from the unpickled state.
         
         @type new_priority: C{list} of C{int}
         @param new_priority: the new file priorities
@@ -152,8 +151,8 @@
         """
         
         try:
-            assert len(new_priority) == self.numfiles
-            for v in new_priority:
+            assert len(init_priority) == self.numfiles
+            for v in init_priority:
                 assert type(v) in (type(0),type(0L))
                 assert v >= -1
                 assert v <= 2
@@ -162,12 +161,12 @@
             return False
         try:
             for f in xrange(self.numfiles):
-                if new_priority[f] < 0:
+                if init_priority[f] < 0:
                     self.storage.disable_file(f)
                 elif self.files[f][1] > 0:
                     self.storage.enable_file(f)
             self.storage.reset_file_status()
-            self.new_priority = new_priority
+            self.init_priority = init_priority
         except (IOError, OSError), e:
             self.failfunc("can't open partial file for "
                           + self.files[f][0] + ': ' + str(e))
@@ -216,19 +215,17 @@
                     else:
                         new_priority.append(saved_files.get(file, -1))
 
-                if not self.init_priority(new_priority):
+                if not self.init_priorities(new_priority):
                     return
             except:
                 logger.exception('Error unpickling file priority cache')
                 return
             
         pieces = self.storage.unpickle(d)
-        if not pieces:  # don't bother, nothing restoreable
-            return
-        new_piece_priority = self._get_piece_priority_list(self.new_priority)
-        self.storagewrapper.reblock([i == -1 for i in new_piece_priority])
+        init_piece_priority = self._get_piece_priority_list(self.init_priority)
+        self.storagewrapper.reblock([i == -1 for i in init_piece_priority])
         self.new_partials = self.storagewrapper.unpickle(d, pieces)
-        self.piece_priority = self._initialize_piece_priority(self.new_priority)
+        self.piece_priority = self._initialize_piece_priority(self.init_priority)
 
 
     def tie_in(self, cancelfunc, requestmorefunc, rerequestfunc):
@@ -248,52 +245,21 @@
         self.requestmorefunc = requestmorefunc
         self.rerequestfunc = rerequestfunc
 
-        if self.new_priority:
-            self.priority = self.new_priority
-            self.new_priority = None
-            self.new_piece_priority = self._set_piece_priority(self.priority)
-
+        # Set up the unpickled initial priorities
+        if self.init_priority:
+            self.priority = self.init_priority
+            self.init_priority = None
+
+        # Set up the unpickled list of partially completed pieces
         if self.new_partials:
             shuffle(self.new_partials)
             for p in self.new_partials:
                 self.picker.requested(p)
         self.new_partials = None
         
-
-    def _initialize_files_disabled(self, old_priority, new_priority):
-        """Initialize the disabled files on startup.
-        
-        @type old_priority: C{list} of C{int}
-        @param old_priority: the old file priorities
-        @type new_priority: C{list} of C{int}
-        @param new_priority: the new file priorities
-        @rtype: C{boolean}
-        @return: whether the initialization was successful
-        
-        """
-        
-        old_disabled = [p == -1 for p in old_priority]
-        new_disabled = [p == -1 for p in new_priority]
-        files_updated = False        
-        try:
-            for f in xrange(self.numfiles):
-                if new_disabled[f] and not old_disabled[f]:
-                    self.storage.disable_file(f)
-                    files_updated = True
-                if old_disabled[f] and not new_disabled[f] and self.files[f][1] > 0:
-                    self.storage.enable_file(f)
-                    files_updated = True
-        except (IOError, OSError), e:
-            if new_disabled[f]:
-                msg = "can't open partial file for "
-            else:
-                msg = 'unable to open '
-            self.failfunc(msg + self.files[f][0] + ': ' + str(e))
-            return False
-        if files_updated:
-            self.storage.reset_file_status()
-        return True        
-
+        # Schedule the processing of any early arrivals of new priorities
+        if self.new_priority:
+            self.sched(self.set_priorities_now)
 
     def _set_files_disabled(self, old_priority, new_priority):
         """Disable files based on a new priority setting.
@@ -428,23 +394,6 @@
         return new_piece_priority        
 
 
-    def initialize_priorities_now(self, new_priority = None):
-        """Initialize the priorities on startup.
-        
-        @type new_priority: C{list} of C{int}
-        @param new_priority: the new file priorities
-            (optional, defaults to not initializing anything)
-        
-        """
-
-        if not new_priority:
-            return
-        old_priority = self.priority
-        self.priority = new_priority
-        if not self._initialize_files_disabled(old_priority, new_priority):
-            return
-        self.piece_priority = self._initialize_piece_priority(new_priority)
-
     def set_priorities_now(self, new_priority = None):
         """Set the new priorities.
         
@@ -474,7 +423,9 @@
         """
 
         self.new_priority = new_priority
-        self.sched(self.set_priorities_now)
+        # Only set the new priorities if the tie_in initialization is complete
+        if self.requestmorefunc:
+            self.sched(self.set_priorities_now)
         
     def set_priority(self, f, p):
         """Set the priority of a single file.
@@ -500,7 +451,9 @@
         
         priority = self.new_priority
         if not priority:
-            priority = self.priority    # potential race condition
+            priority = self.init_priority
+            if not priority:
+                priority = self.priority    # potential race condition
         return [i for i in priority]
 
     def __setitem__(self, index, val):
@@ -528,7 +481,10 @@
         try:
             return self.new_priority[index]
         except:
-            return self.priority[index]
+            try:
+                return self.init_priority[index]
+            except:
+                return self.priority[index]
 
 
     def finish(self):

Modified: debtorrent/branches/unique/DebTorrent/BT1/HTTPDownloader.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/HTTPDownloader.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/HTTPDownloader.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/HTTPDownloader.py Sun Aug 19 09:33:45 2007
@@ -23,11 +23,6 @@
 from threading import Thread
 import logging
 from DebTorrent.__init__ import product_name,version_short
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.BT1.HTTPDownloader')
 

Modified: debtorrent/branches/unique/DebTorrent/BT1/NatCheck.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/NatCheck.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/NatCheck.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/NatCheck.py Sun Aug 19 09:33:45 2007
@@ -16,12 +16,7 @@
 from DebTorrent.BTcrypto import Crypto, CRYPTO_OK
 from DebTorrent.__init__ import protocol_name
 from binascii import b2a_hex
-
-try:
-    True
-except:
-    True = 1
-    False = 0
+import struct
 
 CHECK_PEER_ID_ENCRYPTED = True
 
@@ -203,7 +198,7 @@
                   + self.encrypter.encrypt(
                         ('\x00'*8)            # VC
                       + cryptmode             # acceptable crypto modes
-                      + tobinary16(len(padc))
+                      + struct.pack('>h', len(padc))
                       + padc                  # PadC
                       + '\x00\x00' ) )        # no initial payload data
         self._max_search = 520
@@ -265,7 +260,7 @@
         
         """
         
-        self.cryptmode = toint(s[:4]) % 4
+        self.cryptmode = struct.unpack('>i',s[:4])[0] % 4
         if self.cryptmode != 2:
             logger.info('Dropped the encrypted connection due to an unknown crypt mode: '+self.readable_id)
             return None                     # unknown encryption
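
The struct calls above replace the module's hand-rolled tobinary16/toint helpers with the standard library. A minimal sketch of the equivalence, using the same format codes as the change (standard Python behaviour, not code from this commit):

    import struct

    # '>h' packs a big-endian 16-bit value, as used for the PadC length field
    assert struct.pack('>h', 512) == '\x02\x00'

    # '>i' unpacks a big-endian 32-bit value, as used for the crypto mode word
    assert struct.unpack('>i', '\x00\x00\x00\x02')[0] == 2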

Modified: debtorrent/branches/unique/DebTorrent/BT1/PiecePicker.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/PiecePicker.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/PiecePicker.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/PiecePicker.py Sun Aug 19 09:33:45 2007
@@ -14,11 +14,6 @@
 from random import randrange, shuffle
 from DebTorrent.clock import clock
 import logging
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.BT1.Rerequester')
 

Modified: debtorrent/branches/unique/DebTorrent/BT1/Rerequester.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/Rerequester.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/Rerequester.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/Rerequester.py Sun Aug 19 09:33:45 2007
@@ -37,12 +37,6 @@
     def getpid():
         return 1
     
-try:
-    True
-except:
-    True = 1
-    False = 0
-
 logger = logging.getLogger('DebTorrent.BT1.Rerequester')
 
 mapbase64 = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.-'
@@ -656,7 +650,7 @@
         self.last = r.get('last')
 #        ps = len(r['peers']) + self.howmany()
         p = r['peers']
-        peers = []
+        new_peers = {}
         if type(p) == type(''):
             lenpeers = len(p)/6
         else:
@@ -672,12 +666,17 @@
             for x in xrange(0, len(p), 6):
                 ip = '.'.join([str(ord(i)) for i in p[x:x+4]])
                 port = (ord(p[x+4]) << 8) | ord(p[x+5])
-                peers.append(((ip, port), 0, cflags[int(x/6)]))
+                new_peers[(ip, port)] = (0, cflags[int(x/6)])
         else:
             for i in xrange(len(p)):
                 x = p[i]
-                peers.append(((x['ip'].strip(), x['port']),
-                              x.get('peer id',0), cflags[i]))
+                new_peers[(x['ip'].strip(), x['port'])] = (x.get('peer id',0),
+                                                           cflags[i])
+        
+        # Now build the list of peers that are unique in the list
+        peers = []
+        for dns, (id, crypto) in new_peers.items():
+            peers.append((dns, id, crypto))
         logger.info('received from tracker: '+str(peers))
         ps = len(peers) + self.howmany()
         if ps < self.maxpeers:
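
The dictionary keyed by (ip, port) is what removes duplicate peers from a tracker response before they are passed on. A small self-contained sketch of the same idea, with made-up peer entries:

    # collapse duplicate (ip, port) entries, keeping the last id/crypto flag seen
    raw = [(('10.0.0.1', 6881), 'peer-a', 0),
           (('10.0.0.2', 6881), 'peer-b', 1),
           (('10.0.0.1', 6881), 'peer-a', 0)]    # duplicate of the first entry
    unique = {}
    for dns, peer_id, crypto in raw:
        unique[dns] = (peer_id, crypto)
    peers = [(dns, peer_id, crypto) for dns, (peer_id, crypto) in unique.items()]
    assert len(peers) == 2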

Modified: debtorrent/branches/unique/DebTorrent/BT1/Statistics.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/Statistics.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/Statistics.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/Statistics.py Sun Aug 19 09:33:45 2007
@@ -7,11 +7,6 @@
 """Generate statistics for the swarm."""
 
 from threading import Event
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 class Statistics_Response:
     """Empty class to add arbitrary variables to."""

Modified: debtorrent/branches/unique/DebTorrent/BT1/Storage.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/Storage.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/Storage.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/Storage.py Sun Aug 19 09:33:45 2007
@@ -28,18 +28,9 @@
 from os.path import exists, getsize, getmtime, basename, split
 from os import makedirs
 import logging
-try:
-    from os import fsync
-except ImportError:
-    fsync = lambda x: None
+from os import fsync
 from bisect import bisect
     
-try:
-    True
-except:
-    True = 1
-    False = 0
-
 logger = logging.getLogger('DebTorrent.BT1.Storage')
 
 MAXREADSIZE = 32768
@@ -133,7 +124,7 @@
     """
     
     def __init__(self, files, piece_lengths, doneflag, config,
-                 disabled_files = None):
+                 enabled_files = None):
         """Initializes the Storage.
         
         Initializes some variables, and calculates defaults for others,
@@ -147,8 +138,8 @@
         @param doneflag: the flag that indicates when the program is to be shutdown
         @type config: C{dictionary}
         @param config: the configuration information
-        @type disabled_files: C{list} of C{boolean}
-        @param disabled_files: list of true for the files that are disabled
+        @type enabled_files: C{list} of C{boolean}
+        @param enabled_files: list of true for the files that are enabled
             (optional, default is all files disabled)
         
         """
@@ -179,8 +170,8 @@
         self.lock_while_reading = config.get('lock_while_reading', False)
         self.lock = Lock()
 
-        if not disabled_files:
-            disabled_files = [True] * len(files)
+        if not enabled_files:
+            enabled_files = [False] * len(files)
 
         for i in xrange(len(files)):
             file, length = files[i]
@@ -223,9 +214,7 @@
                     cur_piece -= 1
                     piece_total -= self.piece_lengths[cur_piece]
                 self.file_pieces.append((start_piece, end_piece))
-                if disabled_files[i]:
-                    l = 0
-                else:
+                if enabled_files[i]:
                     if exists(file):
                         l = getsize(file)
                         if l > length:
@@ -242,6 +231,8 @@
                         h.close()
                     self.mtimes[file] = getmtime(file)
                     self.tops[file] = l
+                else:
+                    l = 0
                 self.sizes[file] = length
                 so_far += l
 
@@ -529,8 +520,6 @@
         for l in self.working_ranges:
             self.ranges.extend(l)
         self.begins = [i[0] for i in self.ranges]
-        logger.debug('new file ranges: '+str(self.ranges))
-        logger.debug('new file begins: '+str(self.begins))
 
     def get_file_range(self, index):
         """Get the file name and range that corresponds to this piece.
@@ -924,7 +913,8 @@
                         if valid_pieces.has_key(p):
                             del valid_pieces[p]
         except:
-            logger.exception('Error unpickling data cache')
+            if 'files' in data:
+                logger.exception('Error unpickling data cache')
             return []
 
         logger.info('Final list of valid pieces: '+str(valid_pieces.keys()))

Modified: debtorrent/branches/unique/DebTorrent/BT1/StorageWrapper.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/StorageWrapper.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/StorageWrapper.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/StorageWrapper.py Sun Aug 19 09:33:45 2007
@@ -18,17 +18,7 @@
 from DebTorrent.clock import clock
 from random import randrange
 import logging
-try:
-    True
-except:
-    True = 1
-    False = 0
-try:
-    from bisect import insort
-except:
-    def insort(l, item):
-        l.append(item)
-        l.sort()
+from bisect import insort
 
 logger = logging.getLogger('DebTorrent.BT1.StorageWrapper')
 
@@ -1397,8 +1387,9 @@
         @param begin: the offset within the piece of the request
         @type length: C{int}
         @param length: the length of the request
-        @rtype: C{string}
-        @return: the requested data, or None if there was a problem
+        @rtype: L{DebTorrent.piecebuffer.SingleBuffer} or C{string}
+        @return: the requested data (in a piecebuffer if the request was for
+            the entire piece), or None if there was a problem
         
         """
         
@@ -1446,7 +1437,7 @@
         @type flush_first: C{boolean}
         @param flush_first: whether to flush the files before reading the data
             (optional, default is not to flush)
-        @rtype: C{string}
+        @rtype: L{DebTorrent.piecebuffer.SingleBuffer}
         @return: the requested data, or None if there was a problem
         
         """
@@ -1741,7 +1732,8 @@
 
             assert amount_obtained + amount_inactive == self.amount_desired
         except:
-            logger.exception('Error unpickling data cache')
+            if 'pieces' in data:
+                logger.exception('Error unpickling data cache')
             return []   # invalid data, discard everything
 
         self.have = have

Modified: debtorrent/branches/unique/DebTorrent/BT1/StreamCheck.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/StreamCheck.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/StreamCheck.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/StreamCheck.py Sun Aug 19 09:33:45 2007
@@ -15,46 +15,13 @@
 from binascii import b2a_hex
 from socket import error as socketerror
 from urllib import quote
-from DebTorrent.__init__ import protocol_name
+from DebTorrent.__init__ import protocol_name, make_readable
 import Connecter
 import logging
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.BT1.StreamCheck')
 
 option_pattern = chr(0)*8
-
-def toint(s):
-    return long(b2a_hex(s), 16)
-
-def tobinary(i):
-    return (chr(i >> 24) + chr((i >> 16) & 0xFF) + 
-        chr((i >> 8) & 0xFF) + chr(i & 0xFF))
-
-hexchars = '0123456789ABCDEF'
-hexmap = []
-for i in xrange(256):
-    hexmap.append(hexchars[(i&0xF0)/16]+hexchars[i&0x0F])
-
-def tohex(s):
-    r = []
-    for c in s:
-        r.append(hexmap[ord(c)])
-    return ''.join(r)
-
-def make_readable(s):
-    if not s:
-        return ''
-    if quote(s).find('%') >= 0:
-        return tohex(s)
-    return '"'+s+'"'
-   
-def toint(s):
-    return long(b2a_hex(s), 16)
 
 # header, reserved, download id, my id, [length, message]
 
@@ -83,7 +50,7 @@
         return 20, self.read_download_id
 
     def read_download_id(self, s):
-        logger.debug(str(self.no)+' download ID ' + tohex(s))
+        logger.debug(str(self.no)+' download ID ' + b2a_hex(s))
         return 20, self.read_peer_id
 
     def read_peer_id(self, s):
@@ -91,7 +58,7 @@
         return 4, self.read_len
 
     def read_len(self, s):
-        l = toint(s)
+        l = struct.unpack('>i',s)[0]
         if l > 2 ** 23:
             logger.warning(str(self.no)+' BAD LENGTH: '+str(l)+' ('+s+')')
         return l, self.read_message
@@ -106,21 +73,16 @@
             if len(s) != 13:
                 logger.warning(str(self.no)+' BAD REQUEST SIZE: '+str(len(s)))
                 return 4, self.read_len
-            index = toint(s[1:5])
-            begin = toint(s[5:9])
-            length = toint(s[9:])
+            index, begin, length = struct.unpack('>iii',s[1:])
             logger.info(str(self.no)+' Request: '+str(index)+': '+str(begin)+'-'+str(begin)+'+'+str(length))
         elif m == Connecter.CANCEL:
             if len(s) != 13:
                 logger.warning(str(self.no)+' BAD CANCEL SIZE: '+str(len(s)))
                 return 4, self.read_len
-            index = toint(s[1:5])
-            begin = toint(s[5:9])
-            length = toint(s[9:])
+            index, begin, length = struct.unpack('>iii',s[1:])
             logger.info(str(self.no)+' Cancel: '+str(index)+': '+str(begin)+'-'+str(begin)+'+'+str(length))
         elif m == Connecter.PIECE:
-            index = toint(s[1:5])
-            begin = toint(s[5:9])
+            index, begin = struct.unpack('>ii',s[1:9])
             length = len(s)-9
             logger.info(str(self.no)+' Piece: '+str(index)+': '+str(begin)+'-'+str(begin)+'+'+str(length))
         else:

Modified: debtorrent/branches/unique/DebTorrent/BT1/T2T.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/T2T.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/T2T.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/T2T.py Sun Aug 19 09:33:45 2007
@@ -22,11 +22,6 @@
 from string import lower
 import sys, logging
 import __init__
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.BT1.T2T')
 

Modified: debtorrent/branches/unique/DebTorrent/BT1/Uploader.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/Uploader.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/Uploader.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/Uploader.py Sun Aug 19 09:33:45 2007
@@ -7,12 +7,6 @@
 """Manage uploading to a single peer."""
 
 from DebTorrent.CurrentRateMeasure import Measure
-
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 class Upload:
     """Manage uploading to a single peer.

Modified: debtorrent/branches/unique/DebTorrent/BT1/makemetafile.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/makemetafile.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/makemetafile.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/makemetafile.py Sun Aug 19 09:33:45 2007
@@ -24,12 +24,13 @@
 from copy import copy
 from string import strip
 from DebTorrent.bencode import bencode
-from btformats import check_info
-from threading import Event
+from btformats import check_info, check_message
+from threading import Event, Thread
 from time import time
 from traceback import print_exc
 from DebTorrent.zurllib import urlopen
 import gzip
+from bz2 import decompress
 from StringIO import StringIO
 from re import subn
 import binascii, logging
@@ -771,3 +772,149 @@
                 make_meta_file(i, params, progress = vc)
         except ValueError:
             print_exc()
+
+class TorrentCreator:
+    """Create a torrent metainfo from a downloaded Packages file (threaded).
+    
+    """
+    
+    def __init__(self, path, data, callback, sched, separate_all = 0):
+        """Process a downloaded Packages file and start the torrent making thread.
+        
+        @type path: C{list} of C{string}
+        @param path: the path of the file to download, starting with the mirror name
+        @type data: C{string}
+        @param data: the downloaded Packages file
+        @type callback: C{method}
+        @param callback: the method to call with the torrent when it has been created
+        @type sched: C{method}
+        @param sched: the method to call to schedule future invocation of a function
+        @type separate_all: C{boolean}
+        @param separate_all: whether to separate the architecture:all packages into
+            a separate torrent (optional, defaults to False)
+        
+        """
+
+        self.path = path
+        self.data = data
+        self.callback = callback
+        self.sched = sched
+        self.separate_all = separate_all
+        self.name = '_'.join(self.path[:-1])
+        self.mirror = ''
+        if self.path.count('dists'):
+            self.mirror = 'http://' + '/'.join(self.path[:self.path.index('dists')]) + '/'
+        
+        self.responses = []
+
+        # Create and start the thread to create the torrent metainfo
+        logger.debug('starting thread to create torrent for: '+self.name)
+        rq = Thread(target = self._create, name = 'TorrentCreator('+self.name+')')
+        rq.setDaemon(False)
+        rq.start()
+    
+    def _create(self):
+        """Process a downloaded Packages file and start a torrent."""
+
+        h = []
+        try:
+            # Decompress the data
+            if self.path[-1].endswith('.gz'):
+                compressed = StringIO(self.data)
+                f = gzip.GzipFile(fileobj = compressed)
+                self.data = f.read()
+            elif self.path[-1].endswith('.bz2'):
+                self.data = decompress(self.data)
+            
+            assert self.data[:8] == "Package:"
+            h = self.data.split('\n')
+            self.data = ''
+        except:
+            logger.warning('Packages file is not in the correct format: '+'/'.join(self.path))
+            self.data = ''
+            del h[:]
+            self.sched(self._finished)
+            return
+
+        logger.debug('Packages file successfully decompressed')
+
+        try:
+            sub_pieces = getsubpieces('_'.join(self.path))
+            
+            (piece_ordering, ordering_headers) = getordering('_'.join(self.path))
+            if self.separate_all:
+                (piece_ordering_all, ordering_all_headers) = getordering('_'.join(self.path), all = True)
+            else:
+                piece_ordering_all = {}
+                ordering_all_headers = {}
+        
+            (info, info_all) = getpieces(h, separate_all = self.separate_all,
+                                         sub_pieces = sub_pieces,
+                                         piece_ordering = piece_ordering,
+                                         piece_ordering_all = piece_ordering_all,
+                                         num_pieces = int(ordering_headers.get('NextPiece', 0)),
+                                         num_all_pieces = int(ordering_all_headers.get('NextPiece', 0)))
+            del h[:]
+        except:
+            logger.exception('Failed to create torrent for: %s', self.name)
+            del h[:]
+            self.sched(self._finished)
+            return
+
+        name = self.name
+        if info and self.separate_all in (0, 2, 3):
+            response = self._create_response(info, ordering_headers, name)
+            if response:
+                self.responses.append((response, name))
+
+        name = convert_all(self.name)
+        if info_all and self.separate_all in (1, 3):
+            response = self._create_response(info_all, ordering_all_headers, name)
+            if response:
+                self.responses.append((response, name))
+        
+        self.sched(self._finished)
+
+    def _create_response(self, info, headers, name):
+        """Create a response from an info dictionary and some torrent headers.
+        
+        @type info: C{dictionary}
+        @param info: the info dictionary to use for the torrent
+        @type headers: C{dictionary}
+        @param headers: the headers from the torrent file
+        @type name: C{string}
+        @param name: the name to use for the torrent
+        @rtype: C{dictionary}
+        @return: the metainfo dictionary of the torrent, or None if there was
+            a problem
+
+        """
+        
+        response = {'info': info,
+                    'name': uniconvert(name)}
+
+        if "Tracker" in headers:
+            response['announce'] = headers["Tracker"].strip()
+        if "Torrent" in headers:
+            response['identifier'] = binascii.a2b_hex(headers["Torrent"].strip())
+        for header, value in headers.items():
+            response[header] = value.strip()
+
+        if self.mirror:
+            response.setdefault('deb_mirrors', []).append(self.mirror)
+        
+        try:
+            check_message(response)
+        except:
+            logger.exception('Poorly formatted torrent, not starting: '+name)
+            return None
+        
+        return response
+
+    def _finished(self):
+        """Wrap up the creation and call the callback function."""
+        
+        for (response, name) in self.responses:
+            self.callback(response, name, self.path)
+        
+        del self.responses[:]
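
A rough usage sketch for the new class follows. The callback and scheduler signatures are taken from the docstrings above; the import path, mirror path and Packages data are illustrative stand-ins, not part of this commit:

    from DebTorrent.BT1.makemetafile import TorrentCreator

    def torrent_ready(response, name, path):
        # the real callback hands the metainfo off to start a new download
        print 'created torrent', name, 'for', '/'.join(path)

    def run_in_main_loop(func):
        # stand-in scheduler; the real sched queues func on the RawServer loop
        func()

    packages_data = 'Package: hello\nVersion: 2.1.1-4\n'   # decompressed Packages text
    TorrentCreator(['ftp.debian.org', 'debian', 'dists', 'sid', 'main',
                    'binary-i386', 'Packages'],
                   packages_data, torrent_ready, run_in_main_loop)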

Modified: debtorrent/branches/unique/DebTorrent/BT1/track.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BT1/track.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BT1/track.py (original)
+++ debtorrent/branches/unique/DebTorrent/BT1/track.py Sun Aug 19 09:33:45 2007
@@ -43,7 +43,7 @@
 from random import shuffle, seed, randrange
 from sha import sha
 from types import StringType, IntType, LongType, ListType, DictType
-from binascii import b2a_hex, a2b_hex, a2b_base64
+from binascii import b2a_hex, a2b_hex
 from string import lower
 import sys, os, logging
 import signal
@@ -51,17 +51,25 @@
 import DebTorrent.__init__
 from DebTorrent.__init__ import version, createPeerID
 from DebTorrent.ConfigDir import ConfigDir
-try:
-    True
-except:
-    True = 1
-    False = 0
-    bool = lambda x: not not x
 
 logger = logging.getLogger('DebTorrent.BT1.track')
 
 defaults = [
-    ('port', 80, "Port to listen on."),
+    # Not in the config file
+    ('configfile', '', 'the configuration file to use, if not specified then ' +
+        'a file in /etc/debtorrent will be used, followed by ' +
+        'a file in the .DebTorrent directory in the user\'s home directory'),
+    # Locations
+    ('cache_dir', '', 'the directory to use to get/store cache files, if not ' + 
+        'specified then a .DebTorrent directory in the user\'s home directory ' +
+        'will be used'),
+    ('save_state_interval', 5 * 60, 'seconds between saving state to a file'),
+    ('log_dir', '',
+        'directory to write the logfiles to (default is to use the cache directory)'),
+    ('log_level', 10,
+        'level to write the logfiles at, varies from 10 (debug) to 50 (critical)'),
+    # Connections
+    ('port', 6969, "port to listen on"),
     ('bind', '', 'comma-separated list of ips/hostnames to bind to locally'),
 #    ('ipv6_enabled', autodetect_ipv6(),
     ('ipv6_enabled', 0,
@@ -69,21 +77,45 @@
     ('ipv6_binds_v4', autodetect_socket_style(),
         'set if an IPv6 server socket will also field IPv4 connections'),
     ('socket_timeout', 15, 'timeout for closing connections'),
-    ('save_state_interval', 5 * 60, 'seconds between saving state to a file'),
-    ('timeout_downloaders_interval', 45 * 60, 'seconds between expiring downloaders'),
-    ('reannounce_interval', 30 * 60, 'seconds downloaders should wait between reannouncements'),
-    ('response_size', 50, 'number of peers to send in an info message'),
     ('timeout_check_interval', 5,
         'time to wait between checking if any connections have timed out'),
-    ('nat_check', 3,
-        "how many times to check if a downloader is behind a NAT (0 = don't check)"),
-    ('min_time_between_log_flushes', 3.0,
-        'minimum time it must have been since the last flush to do another one'),
-    ('min_time_between_cache_refreshes', 600.0,
-        'minimum time in seconds before a cache is considered stale and is flushed'),
+    # Allowed Torrents and Peers
     ('allowed_dir', '', 'only allow downloads for .dtorrents in this dir'),
     ('allowed_list', '', 'only allow downloads for hashes in this list (hex format, one per line)'),
     ('allowed_controls', 0, 'allow special keys in torrents in the allowed_dir to affect tracker access'),
+    ('allowed_ips', '', 'only allow connections from IPs specified in the given file; '+
+             'file contains subnet data in the format: aa.bb.cc.dd/len'),
+    ('banned_ips', '', "don't allow connections from IPs specified in the given file; "+
+             'file contains IP range data in the format: xxx:xxx:ip1-ip2'),
+    ('parse_dir_interval', 60, 'seconds between reloading of allowed_dir or allowed_file ' +
+             'and allowed_ips and banned_ips lists'),
+    # Peer Requests
+    ('compact_reqd', 1, "only allow peers that accept a compact response"),
+    ('reannounce_interval', 30 * 60, 'seconds downloaders should wait between reannouncements'),
+    ('response_size', 50, 'number of peers to send in an info message'),
+    ('nat_check', 3,
+        "how many times to check if a downloader is behind a NAT (0 = don't check)"),
+    ('timeout_downloaders_interval', 45 * 60, 'seconds between expiring downloaders'),
+    ('min_time_between_cache_refreshes', 600.0,
+        'minimum time in seconds before a cache is considered stale and is flushed'),
+    ('only_local_override_ip', 2, "ignore the ip GET parameter from machines which aren't on local network IPs " +
+             "(0 = never, 1 = always, 2 = ignore if NAT checking is not enabled)"),
+    ('dedicated_seed_id', '', 'allows tracker to monitor dedicated seed(s) and flag torrents as seeded'),
+    # Non-Peer Requests
+    ('show_infopage', 1, "whether to display an info page when the tracker's root dir is loaded"),
+    ('infopage_redirect', '', 'a URL to redirect the info page to'),
+    ('favicon', '', 'file containing x-icon data to return when browser requests favicon.ico'),
+    ('show_names', 1, 'whether to display names from allowed dir'),
+    ('allow_get', 0, 'use with allowed_dir; adds a /file?hash={hash} url that allows users to download the torrent file'),
+    ('keep_dead', 0, 'keep dead torrents after they expire (so they still show up on your scrape and web page)'),
+    ('scrape_allowed', 'full', 'scrape access allowed (can be none, specific or full)'),
+    # Request Logging
+    ('min_time_between_log_flushes', 3.0,
+        'minimum time it must have been since the last flush to do another one'),
+    ('hupmonitor', 0, 'whether to reopen the log file upon receipt of HUP signal'),
+    ('log_nat_checks', 0,
+        "whether to add entries to the log for nat-check results"),
+    # Multi-Tracker
     ('multitracker_enabled', 0, 'whether to enable multitracker operation'),
     ('multitracker_allowed', 'autodetect', 'whether to allow incoming tracker announces (can be none, autodetect or all)'),
     ('multitracker_reannounce_interval', 2 * 60, 'seconds between outgoing tracker announces'),
@@ -91,33 +123,8 @@
     ('aggregate_forward', '', 'format: <url>[,<password>] - if set, forwards all non-multitracker to this url with this optional password'),
     ('aggregator', '0', 'whether to act as a data aggregator rather than a tracker.  If enabled, may be 1, or <password>; ' +
              'if password is set, then an incoming password is required for access'),
-    ('hupmonitor', 0, 'whether to reopen the log file upon receipt of HUP signal'),
     ('http_timeout', 60, 
-        'number of seconds to wait before assuming that an http connection has timed out'),
-    ('parse_dir_interval', 60, 'seconds between reloading of allowed_dir or allowed_file ' +
-             'and allowed_ips and banned_ips lists'),
-    ('show_infopage', 1, "whether to display an info page when the tracker's root dir is loaded"),
-    ('infopage_redirect', '', 'a URL to redirect the info page to'),
-    ('show_names', 1, 'whether to display names from allowed dir'),
-    ('favicon', '', 'file containing x-icon data to return when browser requests favicon.ico'),
-    ('allowed_ips', '', 'only allow connections from IPs specified in the given file; '+
-             'file contains subnet data in the format: aa.bb.cc.dd/len'),
-    ('banned_ips', '', "don't allow connections from IPs specified in the given file; "+
-             'file contains IP range data in the format: xxx:xxx:ip1-ip2'),
-    ('only_local_override_ip', 2, "ignore the ip GET parameter from machines which aren't on local network IPs " +
-             "(0 = never, 1 = always, 2 = ignore if NAT checking is not enabled)"),
-    ('cache_dir', '', 'the directory to use to get/store cache files, if not ' + 
-        'specified then a .DebTorrent directory in the user\'s home directory ' +
-        'will be used'),
-    ('log_dir', '',
-        'directory to write the logfiles to (default is to use the cache directory)'),
-    ('log_level', 30,
-        'level to write the logfiles at, varies from 10 (debug) to 50 (critical)'),
-    ('allow_get', 0, 'use with allowed_dir; adds a /file?hash={hash} url that allows users to download the torrent file'),
-    ('keep_dead', 0, 'keep dead torrents after they expire (so they still show up on your /scrape and web page)'),
-    ('scrape_allowed', 'full', 'scrape access allowed (can be none, specific or full)'),
-    ('dedicated_seed_id', '', 'allows tracker to monitor dedicated seed(s) and flag torrents as seeded'),
-    ('compact_reqd', 1, "only allow peers that accept a compact response"),
+        'number of seconds to wait before assuming that an http connection to another tracker has timed out'),
   ]
 
 def statefiletemplate(x):
@@ -452,10 +459,9 @@
         self.becache = {}
 
         if config['compact_reqd']:
-            x = 3
+            self.cache_default_len = 3
         else:
-            x = 5
-        self.cache_default = [({},{}) for i in xrange(x)]
+            self.cache_default_len = 5
         for infohash, ds in self.downloads.items():
             self.seedcount[infohash] = 0
             for x,y in ds.items():
@@ -775,6 +781,8 @@
         if not self.allowed.has_key(hash):
             logger.warning('Request for unknown torrent file: '+b2a_hex(hash))
             return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
+    def cache_default(self):
+        return [({},{}) for i in xrange(self.cache_default_len)]
         fname = self.allowed[hash]['file']
         fpath = self.allowed[hash]['path']
         return (200, 'OK', {'Content-Type': 'application/x-debtorrent',
@@ -984,7 +992,8 @@
                         l = self.becache[infohash]
                         y = not peer['left']
                         for x in l:
-                            del x[y][myid]
+                            if myid in x[y]:
+                                del x[y][myid]
                     if natted >= 0:
                         del peer['nat'] # restart NAT testing
                 if natted and natted < self.natcheck:
@@ -1042,7 +1051,7 @@
             cache = self.cached_t.setdefault(infohash, None)
             if ( not cache or len(cache[1]) < rsize
                  or cache[0] + self.config['min_time_between_cache_refreshes'] < clock() ):
-                bc = self.becache.setdefault(infohash,self.cache_default)
+                bc = self.becache.setdefault(infohash,self.cache_default())
                 cache = [ clock(), bc[0][0].values() + bc[0][1].values() ]
                 self.cached_t[infohash] = cache
                 shuffle(cache[1])
@@ -1057,7 +1066,7 @@
             data['peers'] = []
             return data
 
-        bc = self.becache.setdefault(infohash,self.cache_default)
+        bc = self.becache.setdefault(infohash,self.cache_default())
         len_l = len(bc[2][0])
         len_s = len(bc[2][1])
         if not (len_l+len_s):   # caches are empty!
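
Replacing the single cache_default list with a cache_default() factory reads like a fix for a shared mutable default: setdefault() stores the very same object for every infohash that misses, so one torrent's cached peers could leak into another's. A tiny standalone illustration of the difference (not tracker code):

    # one shared default object: both keys end up holding the same dict
    shared = {}
    cache = {}
    cache.setdefault('torrent-a', shared)['peer'] = 1
    cache.setdefault('torrent-b', shared)
    assert cache['torrent-b'] == {'peer': 1}    # 'b' sees 'a's entry

    # a fresh default per call keeps the cached entries independent
    cache = {}
    cache.setdefault('torrent-a', {})['peer'] = 1
    cache.setdefault('torrent-b', {})
    assert cache['torrent-b'] == {}
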
@@ -1125,7 +1134,7 @@
         return data
 
 
-    def get(self, connection, path, headers):
+    def get(self, connection, path, headers, httpreq):
         """Respond to a GET request to the tracker.
         
         Process a GET request from a peer/tracker/browser. Process the request,
@@ -1138,6 +1147,8 @@
         @param path: the URL being requested
         @type headers: C{dictionary}
         @param headers: the headers from the request
+        @type httpreq: L{DebTorrent.HTTPHandler.HTTPRequest}
+        @param httpreq: not used, since the tracker does not use HTTP/1.1
         @rtype: (C{int}, C{string}, C{dictionary}, C{string})
         @return: the HTTP status code, status message, headers, and message body
         
@@ -1297,7 +1308,7 @@
         """
         
         seed = not peer['left']
-        bc = self.becache.setdefault(infohash,self.cache_default)
+        bc = self.becache.setdefault(infohash,self.cache_default())
         cp = compact_peer_info(ip, port)
         reqc = peer['requirecrypto']
         bc[2][seed][peerid] = (cp,chr(reqc))
@@ -1467,13 +1478,16 @@
     
     @type params: C{list}
     @param params: the command line arguments to the tracker
+    @rtype: C{boolean}
+    @return: whether the server should be restarted
     
     """
     
+    restart = False
     configdefaults = {}
     try:
         # Load the configuration data
-        configdir = ConfigDir('debtorrent-client')
+        configdir = ConfigDir('debtorrent-tracker')
         defaultsToIgnore = ['configfile']
         configdir.setDefaults(defaults,defaultsToIgnore)
         configdefaults = configdir.loadConfig(params)
@@ -1507,21 +1521,31 @@
         logger.error(formatDefinitions(defaults, 80))
         logging.shutdown()
         sys.exit(1)
-
-    r = RawServer(Event(), config['timeout_check_interval'],
-                  config['socket_timeout'], ipv6_enable = config['ipv6_enabled'])
-    
-    t = Tracker(config, r, configdir)
-    
-    r.bind(config['port'], config['bind'],
-           reuse = True, ipv6_socket_style = config['ipv6_binds_v4'])
-
-    r.listen_forever(HTTPHandler(t.get, config['min_time_between_log_flushes'],
-                                 logfile, config['hupmonitor']))
-    
-    t.save_state()
+    except:
+        logger.exception('unhandled exception')
+
+    try:
+        r = RawServer(Event(), config['timeout_check_interval'],
+                      config['socket_timeout'], ipv6_enable = config['ipv6_enabled'])
+        
+        t = Tracker(config, r, configdir)
+        
+        r.bind(config['port'], config['bind'],
+               reuse = True, ipv6_socket_style = config['ipv6_binds_v4'])
+    
+        restart = r.listen_forever(HTTPHandler(t.get,
+                                config['min_time_between_log_flushes'],
+                                logfile, config['hupmonitor']))
+        
+        t.save_state()
+
+        r.shutdown()
+    except:
+        logger.exception('unhandled exception')
+
     logger.info('Shutting down')
     logging.shutdown()
+    return restart
 
 def size_format(s):
     """Format a byte size for reading by the user.

Modified: debtorrent/branches/unique/DebTorrent/BTcrypto.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/BTcrypto.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/BTcrypto.py (original)
+++ debtorrent/branches/unique/DebTorrent/BTcrypto.py Sun Aug 19 09:33:45 2007
@@ -27,12 +27,6 @@
     urandom = lambda x: ''.join([chr(randint(0,255)) for i in xrange(x)])
 from sha import sha
 
-try:
-    True
-except:
-    True = 1
-    False = 0
-    
 try:
     from Crypto.Cipher import ARC4
     CRYPTO_OK = True

Modified: debtorrent/branches/unique/DebTorrent/ConfigDir.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/ConfigDir.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/ConfigDir.py (original)
+++ debtorrent/branches/unique/DebTorrent/ConfigDir.py Sun Aug 19 09:33:45 2007
@@ -8,12 +8,6 @@
 
 @type DIRNAME: C{string}
 @var DIRNAME: the directory name to use for storing config files
-@type hexchars: C{string}
-@var hexchars: the 16 hex characters, in order
-@type hexmap: C{list}
-@var hexmap: a mapping from characters to the hex repesentation of the character
-@type revmap: C{dictionary}
-@var revmap: the reverse of L{hexmap}
 
 """
 
@@ -24,51 +18,10 @@
 from __init__ import product_name, version_short
 import sys,os
 from time import time, strftime
-
-try:
-    True
-except:
-    True = 1
-    False = 0
+from binascii import b2a_hex, a2b_hex
 
 DIRNAME = '.'+product_name
 MASTER_CONFIG = '/etc/debtorrent'
-
-hexchars = '0123456789abcdef'
-hexmap = []
-revmap = {}
-for i in xrange(256):
-    x = hexchars[(i&0xF0)/16]+hexchars[i&0x0F]
-    hexmap.append(x)
-    revmap[x] = chr(i)
-
-def tohex(s):
-    """Convert a string to hex representation.
-    
-    @type s: C{string}
-    @param s: the string to convert
-    @rtype: C{string}
-    @return: the converted string
-    
-    """
-    
-    r = []
-    for c in s:
-        r.append(hexmap[ord(c)])
-    return ''.join(r)
-
-def unhex(s):
-    """Convert a hex representation back to a string.
-    
-    @type s: C{string}
-    @param s: the hex representation of a string
-    @rtype: C{string}
-    @return: the original string
-    
-    """
-    
-    r = [ revmap[s[x:x+2]] for x in xrange(0, len(s), 2) ]
-    return ''.join(r)
 
 def copyfile(oldpath, newpath):
     """Simple file copy, all in RAM.
@@ -408,7 +361,7 @@
                 f, garbage = f.split('.')
             except:
                 pass
-            d[unhex(f)] = 1
+            d[a2b_hex(f)] = 1
         return d.keys()
 
     def getTorrentVariations(self, t):
@@ -421,7 +374,7 @@
         
         """
         
-        t = tohex(t)
+        t = b2a_hex(t)
         d = []
         for f in os.listdir(self.dir_torrentcache):
             f = os.path.basename(f)
@@ -448,7 +401,7 @@
         
         if v == -1:
             v = max(self.getTorrentVariations(t))   # potential exception
-        t = tohex(t)
+        t = b2a_hex(t)
         if v:
             t += '.'+str(v)
         try:
@@ -504,7 +457,7 @@
                 v = max(self.getTorrentVariations(t))+1
             except:
                 v = 0
-        t = tohex(t)
+        t = b2a_hex(t)
         if v:
             t += '.'+str(v)
         try:
@@ -533,7 +486,7 @@
         
         if self.TorrentDataBuffer.has_key(t):
             return self.TorrentDataBuffer[t]
-        t = os.path.join(self.dir_datacache,tohex(t))
+        t = os.path.join(self.dir_datacache,b2a_hex(t))
         if not os.path.exists(t):
             return None
         try:
@@ -562,7 +515,7 @@
 
         self.TorrentDataBuffer[t] = data
         try:
-            f = open(os.path.join(self.dir_datacache,tohex(t)),'wb')
+            f = open(os.path.join(self.dir_datacache,b2a_hex(t)),'wb')
             f.write(bencode(data))
             success = True
         except:
@@ -584,7 +537,7 @@
         """
 
         try:
-            os.remove(os.path.join(self.dir_datacache,tohex(t)))
+            os.remove(os.path.join(self.dir_datacache,b2a_hex(t)))
         except:
             pass
 
@@ -598,7 +551,7 @@
         
         """
 
-        return os.path.join(self.dir_piececache,tohex(t))
+        return os.path.join(self.dir_piececache,b2a_hex(t))
 
 
     ###### EXPIRATION HANDLING ######
@@ -630,7 +583,7 @@
             except:
                 pass
             try:
-                f = unhex(f)
+                f = a2b_hex(f)
                 assert len(f) == 20
             except:
                 continue
@@ -645,7 +598,7 @@
         for f in os.listdir(self.dir_datacache):
             p = os.path.join(self.dir_datacache,f)
             try:
-                f = unhex(os.path.basename(f))
+                f = a2b_hex(os.path.basename(f))
                 assert len(f) == 20
             except:
                 continue
@@ -659,7 +612,7 @@
         for f in os.listdir(self.dir_piececache):
             p = os.path.join(self.dir_piececache,f)
             try:
-                f = unhex(os.path.basename(f))
+                f = a2b_hex(os.path.basename(f))
                 assert len(f) == 20
             except:
                 continue
@@ -674,7 +627,7 @@
             names.setdefault(f,[]).append(p)
 
         for k,v in times.items():
-            if max(v) < exptime and not k in still_active:
+            if max(v) < exptime and not k in still_active and k in names:
                 for f in names[k]:
                     try:
                         os.remove(f)
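
The binascii pair is a drop-in for the removed tohex()/unhex() helpers, which also produced lowercase hex, so existing cache file names keep the same form. For example:

    from binascii import a2b_hex, b2a_hex

    info_hash = '\x01\x02\xfe\xff' * 5                 # a 20-byte id, like a torrent hash
    assert b2a_hex(info_hash) == '0102feff' * 5        # lowercase, filesystem-safe name
    assert a2b_hex(b2a_hex(info_hash)) == info_hash    # round-trips exactly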

Modified: debtorrent/branches/unique/DebTorrent/HTTPCache.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/HTTPCache.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/HTTPCache.py (original)
+++ debtorrent/branches/unique/DebTorrent/HTTPCache.py Sun Aug 19 09:33:45 2007
@@ -13,63 +13,45 @@
 @var VERSION: the UserAgent identifier sent to all sites
 @type alas: C{string}
 @var alas: the message to send when the data is not found
+@type TIMEOUT: C{float}
+@var TIMEOUT: the number of seconds after which an idle connection is closed
 
 """
 
-from httplib import HTTPConnection
+from httplib import HTTPConnection, BadStatusLine
+from socket import gaierror
 from threading import Thread
 from DebTorrent.__init__ import product_name,version_short
+from clock import clock
 from os.path import join, split, getmtime, getsize, exists
 from os import utime, makedirs, listdir
 from time import strftime, strptime, gmtime
 from calendar import timegm
 import logging
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.HTTPCache')
 
 time_format = '%a, %d %b %Y %H:%M:%S'
 VERSION = product_name+'/'+version_short
 alas = 'your file may exist elsewhere in the universe\nbut alas, not here\n'
+TIMEOUT = 60.0
 
 class CacheRequest:
-    """Download a file needed for the HTTP download cache.
-    
-    @type handler: L{HTTPCache}
-    @ivar handler: the cache manager for the download
+    """A new request to send to the server for the cache.
+    
     @type path: C{list} of C{string}
     @ivar path: the server and path to download
-    @type server: C{string}
-    @ivar server: the webserver address and port to connect to 
-    @type url: C{string}
-    @ivar url: the URL to request from the site
     @type func: C{method}
     @ivar func: the method to call when the download completes
-    @type connection: C{HTTPConnection}
-    @ivar connection: the connection to the HTTP server
-    @type headers: C{dictionary}
-    @ivar headres: the HTTP headers to send in the request, and the headers
-        returned by the response
-    @type active: C{boolean}
-    @ivar active: whether there is a download underway
-    @type received_data: C{string}
-    @ivar received_data: the data returned from the server
-    @type connection_status: C{int}
-    @ivar connection_status: the status code returned by the server
-    @type connection_response: C{string}
-    @ivar connection_response: the status message returned by the server
+    @type response: (C{int}, C{string}, C{dictionary}, C{string})
+    @ivar response: the HTTP status code, status message, headers, and
+        downloaded data
     
     """
     
-    def __init__(self, handler, path, func):
+    def __init__(self, path, func):
         """Initialize the instance.
         
-        @type handler: L{HTTPCache}
-        @param handler: the cache manager for the download
         @type path: C{list} of C{string}
         @param path: the server and path to download
         @type func: C{method}
@@ -77,24 +59,124 @@
         
         """
         
+        self.path = path
+        self.func = func
+        self.response = None
+        
+    def save_response(self, r):
+        """Save a returned response from the server.
+        
+        @type r: C{httplib.HTTPResponse}
+        @param r: the response from the server
+        
+        """
+        
+        self.response = (r.status, r.reason, dict(r.getheaders()), r.read())
+        
+    def error(self, error_msg):
+        """Save an error response.
+        
+        @type error_msg: C{string}
+        @param error_msg: the error that occurred
+        
+        """
+        
+        self.response = (502, 'Bad Gateway', {},
+                         'error accessing http server: '+error_msg)
+
+class CacheConnection:
+    """Download files needed for the HTTP download cache from a single server.
+    
+    @type handler: L{HTTPCache}
+    @ivar handler: the cache manager for the download
+    @type server: C{string}
+    @ivar server: the webserver address and port to connect to 
+    @type request: L{CacheRequest}
+    @ivar request: the request currently in progress
+    @type request_queue: C{list} of L{CacheRequest}
+    @ivar request_queue: the waiting requests
+    @type connection: C{HTTPConnection}
+    @ivar connection: the connection to the HTTP server
+    @type url: C{string}
+    @ivar url: the URL to request from the site
+    @type headers: C{dictionary}
+    @ivar headers: the HTTP headers to send in the request
+    @type active: C{boolean}
+    @ivar active: whether there is a download underway
+    @type closed: C{boolean}
+    @ivar closed: whether the connection has been closed
+    @type last_action: C{float}
+    @ivar last_action: the last time an action occurred
+    
+    """
+    
+    def __init__(self, handler, server):
+        """Initialize the instance.
+        
+        @type handler: L{HTTPCache}
+        @param handler: the cache manager for the download
+        @type server: C{string}
+        @param server: the server name to send the requests to
+        
+        """
+        
         self.handler = handler
-        self.path = path
-        self.server = path[0]
-        self.url = '/' + '/'.join(path[1:])
-        self.func = func
+        self.server = server
+        self.request = None
+        self.request_queue = []
+        self.headers = {'User-Agent': VERSION}
+        self.active = False
+        self.closed = False
+        self.last_action = clock()
+
         try:
             self.connection = HTTPConnection(self.server)
         except:
-            logger.exception('cannot connect to http seed: '+self.server)
+            logger.exception('cannot connect to http server: '+self.server)
+            self.close()
+
+    def queue(self, path, func):
+        """Queue a download for later starting.
+        
+        @type path: C{list} of C{string}
+        @param path: the server and path to download
+        @type func: C{method}
+        @param func: the method to call when the download completes
+        @rtype: C{boolean}
+        @return: whether the download was successfully queued
+        
+        """
+        
+        assert path[0] == self.server
+        if self.closed:
+            return False
+
+        logger.debug('queueing request for '+'/'.join(path))
+        self.request_queue.append(CacheRequest(path, func))
+        self._run_queue()
+
+        return True
+        
+    def _run_queue(self):
+        """Start the next element in the queue downloading."""
+        
+        # Check if one is already running
+        if self.active or self.closed:
             return
         
-        self.headers = {'User-Agent': VERSION}
-        self.active = False
+        # If the queue is empty, then we are done
+        if not self.request_queue:
+            self.handler.rawserver.add_task(self.auto_close, int(TIMEOUT)+1)
+            return
+        
+        self.active = True
+        self.last_action = clock()
+        self.request = self.request_queue.pop(0)
+        self.url = '/' + '/'.join(self.request.path[1:])
         logger.debug('starting thread to download '+self.url)
-        rq = Thread(target = self._request, name = 'HTTPCache.CacheRequest._request')
+        rq = Thread(target = self._request, name = 'CacheRequest('+self.server+')')
         rq.setDaemon(False)
         rq.start()
-        self.active = True
 
     def _request(self):
         """Do the request."""
@@ -104,30 +186,77 @@
         
         try:
             logger.debug('sending request GET '+self.url+', '+str(self.headers))
-            self.connection.request('GET',self.url, None, self.headers)
-            
-            r = self.connection.getresponse()
+            self.connection.request('GET', self.url, None, self.headers)
+            
+            # Check for closed persistent connection due to server timeout
+            try:
+                r = self.connection.getresponse()
+            except BadStatusLine:
+                # Reopen the connection to get a new socket
+                logger.debug('persistent connection closed, attempting to reopen')
+                self.connection.close()
+                self.connection.connect()
+                logger.debug('sending request GET '+self.url+', '+str(self.headers))
+                self.connection.request('GET',self.url, None, self.headers)
+                r = self.connection.getresponse()
                 
             logger.debug('got response '+str(r.status)+', '+r.reason+', '+str(r.getheaders()))
-            self.connection_status = r.status
-            self.connection_response = r.reason
-            self.headers = dict(r.getheaders())
-            self.received_data = r.read()
+            self.request.save_response(r)
+        except gaierror, e:
+            logger.warning('could not contact http server '+self.server+': '+str(e))
+            self.request.error('could not contact http server '+self.server+': '+str(e))
         except Exception, e:
             logger.exception('error accessing http server')
-            self.connection_status = 500
-            self.connection_response = 'Internal Server Error'
-            self.headers = {}
-            self.received_data = 'error accessing http server: '+str(e)
+            self.request.error(str(e))
+        self.last_action = clock()
         self.handler.rawserver.add_task(self.request_finished)
 
     def request_finished(self):
         """Process the completed request."""
+        
+        # Save the result
+        request = self.request
+        self.request = None
+        
+        # Start the next queued item running
+        self.active = False
+        self._run_queue()
+        
+        # Return the result
+        self.handler.download_complete(request.path, request.func,
+                                       request.response)
+        
+    def auto_close(self):
+        """Close the connection if it has been idle."""
+        if (not self.active and not self.closed and not self.request and 
+            not self.request_queue and (clock() - self.last_action) >= TIMEOUT):
+            self.close()
+    
+    def close(self):
+        """Close the connection."""
+        logger.info('Closing the connection to: '+self.server)
+        self.closed = True
         self.connection.close()
-        self.active = False
-        self.handler.download_complete(self, self.path, self.func, 
-                   (self.connection_status, self.connection_response, 
-                    self.headers, self.received_data))
+        
+        # Process the current request
+        if self.request:
+            if not self.request.response:
+                self.request.error('connection closed prematurely')
+            self.handler.download_complete(self.request.path,
+                                           self.request.func,
+                                           self.request.response)
+            self.request = None
+        
+        # Process any waiting requests
+        for request in self.request_queue:
+            if not request.response:
+                request.error('connection closed prematurely')
+            self.handler.download_complete(request.path, request.func,
+                                           request.response)
+        del self.request_queue[:]
+        
+        # Remove the connection to the server
+        self.handler.remove(self, self.server)
 
 
 class HTTPCache:
@@ -135,8 +264,9 @@
     
     @type rawserver: L{Debtorrent.RawServer.RawServer}
     @ivar rawserver: the server
-    @type downloads: C{list} of L{CacheRequest}
-    @ivar downloads: the list of all current downloads for the cache
+    @type downloads: C{dictionary}
+    @ivar downloads: the current downloads, keys are the server names, values
+        are the L{CacheConnection} objects used to download from the server
     @type cachedir: C{string}
     @ivar cachedir: the directory to save cache files in
     
@@ -153,7 +283,7 @@
         """
         
         self.rawserver = rawserver
-        self.downloads = []
+        self.downloads = {}
         self.cachedir = cachedir
 
     def download_get(self, path, func):
@@ -166,18 +296,35 @@
         
         """
         
-        logger.info('Starting a downloader for: http://'+'/'.join(path))
-        self.downloads.append(CacheRequest(self, path, func))
-
-    def download_complete(self, d, path, func, r):
-        """Remove a completed download from the list and process the data.
-        
-        Once a download has been completed, remove the downloader from the 
-        list and save the downloaded file in the file system. Then return the
-        data to the callback function. 
-        
-        @type d: L{CacheRequest}
-        @param d: the cache request that is completed
+        if path[0] not in self.downloads:
+            logger.info('Opening a connection to server: '+path[0])
+            self.downloads[path[0]] = CacheConnection(self, path[0])
+
+        if not self.downloads[path[0]].queue(path, func):
+            func(path, (500, 'Internal Server Error', 
+                        {'Server': VERSION, 
+                         'Content-Type': 'text/html; charset=iso-8859-1'},
+                        'Server could not be contacted'))
+
+    def remove(self, d, server):
+        """Remove a completed download connection.
+        
+        @type d: L{CacheConnection}
+        @param d: the server connection that is no longer needed
+        @type server: C{string}
+        @param server: the server the connection was to
+        
+        """
+        
+        assert self.downloads[server] == d
+        del self.downloads[server]
+
+    def download_complete(self, path, func, r):
+        """Process the returned data from a request.
+        
+        Once a download has been completed, save the downloaded file in the
+        file system. Then return the data to the callback function.
+        
         @type path: C{list} of C{string}
         @param path: the server and path that was downloaded
         @type func: C{method}
@@ -188,7 +335,6 @@
         """
         
         logger.info('download completed for: http://'+'/'.join(path))
-        self.downloads.remove(d)
 
         file = self.get_filename(path)
         headers = {'Server': VERSION}
@@ -210,7 +356,7 @@
                 times = (mtime, mtime)
                 utime(file, times)
             except:
-                pass
+                logger.exception('Failed to set the cache time for the file')
 
         # Use the headers we want
         if exists(file):
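
One pattern worth noting from the CacheConnection changes above: when a server silently drops a persistent connection, httplib raises BadStatusLine on the next response, and _request recovers by reconnecting and resending once. A minimal standalone version of that retry, with a placeholder host:

    from httplib import HTTPConnection, BadStatusLine

    def get_with_retry(conn, url, headers):
        # resend once if the keep-alive socket was closed by the server
        try:
            conn.request('GET', url, None, headers)
            return conn.getresponse()
        except BadStatusLine:
            conn.close()
            conn.connect()
            conn.request('GET', url, None, headers)
            return conn.getresponse()

    conn = HTTPConnection('ftp.debian.org')            # placeholder server
    response = get_with_retry(conn, '/debian/dists/sid/Release',
                              {'User-Agent': 'example'})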

Modified: debtorrent/branches/unique/DebTorrent/HTTPHandler.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/HTTPHandler.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/HTTPHandler.py (original)
+++ debtorrent/branches/unique/DebTorrent/HTTPHandler.py Sun Aug 19 09:33:45 2007
@@ -21,11 +21,6 @@
 from clock import clock
 from gzip import GzipFile
 import signal, logging
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.HTTPHandler')
 
@@ -33,6 +28,8 @@
 
 months = [None, 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
     'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
+
+DEBTORRENT_PROTOCOL = "0.1"
 
 def isotime(secs = None):
     """Create an ISO formatted string of the time.
@@ -48,6 +45,89 @@
     if secs == None:
         secs = time.time()
     return time.strftime('%Y-%m-%d %H:%M UTC', time.gmtime(secs))
+
+class HTTPRequest:
+    """A single request on an HTTP connection.
+    
+    Handles one of possibly many HTTP GET or HEAD requests from a client using
+    HTTP/1.1.
+    
+    @type header: C{string}
+    @ivar header: the first header line received from the request
+    @type command: C{string}
+    @ivar command: the requested command ('GET' or 'HEAD')
+    @type path: C{string}
+    @ivar path: the requested path to get
+    @type encoding: C{string}
+    @ivar encoding: the encoding to use when sending the response
+    @type headers: C{dictionary}
+    @ivar headers: the headers received with the request
+    @type answer: (C{int}, C{string}, C{dictionary}, C{string})
+    @ivar answer: the HTTP status code, status message, headers, and package
+        data, or None if the answer is not yet available
+    
+    """
+    
+    def __init__(self, header, command, path, encoding, headers):
+        """Initialize the instance.
+        
+        @type header: C{string}
+        @param header: the first header line received from the request
+        @type command: C{string}
+        @param command: the requested command ('GET' or 'HEAD')
+        @type path: C{string}
+        @param path: the requested path to get
+        @type encoding: C{string}
+        @param encoding: the encoding to use when sending the response
+        @type headers: C{dictionary}
+        @param headers: the headers received with the request
+        
+        """
+        
+        self.header = header
+        self.command = command
+        self.path = path
+        self.encoding = encoding
+        self.headers = headers
+        self.answer = None
+        
+    def save_answer(self, r):
+        """Save an answer, replacing the old one if it's better.
+        
+        @type r: (C{int}, C{string}, C{dictionary}, C{string})
+        @param r: the HTTP status code, status message, headers, and package data
+        
+        """
+        
+        # Queue the answer
+        if self.answer:
+            logger.error('An answer already exists for this request, keeping the better one')
+            # Better means lower code, or newer response if codes are the same
+            if r[0] <= self.answer[0]:
+                self.answer = r
+        else:
+            self.answer = r
+        
+    def has_answer(self):
+        """Determine whether an answer is available for the request.
+        
+        @rtype: C{boolean}
+        @return: whether the answer is available yet
+        
+        """
+        
+        return not not self.answer
+
+    def get_answer(self):
+        """Get the saved answer.
+        
+        @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+        @return: the HTTP status code, status message, headers, and package
+            data, or None if the answer is not yet available
+        
+        """
+        
+        return self.answer
 
 class HTTPConnection:
     """A single connection from an HTTP client.
@@ -60,6 +140,15 @@
     @ivar connection: the new connection that was created
     @type buf: C{string}
     @ivar buf: the buffered data received on the connection
+    @type requests: C{list} of L{HTTPRequest}
+    @ivar requests: the outstanding requests for paths
+    @type protocol: C{string}
+    @ivar protocol: the protocol used to make the request
+    @type version: (C{int}, C{int})
+    @ivar version: the protocol version of the request
+    @type close_connection: C{boolean}
+    @ivar close_connection: whether the connection will be closed after this
+        request
     @type closed: C{boolean}
     @ivar closed: whether the connection has been closed
     @type done: C{boolean}
@@ -72,8 +161,8 @@
     @ivar header: the first header line received from the request
     @type command: C{string}
     @ivar command: the requested command ('GET' or 'HEAD')
-    @type pre1: C{boolean}
-    @ivar pre1: whether the request is from a pre version 1.0 client
+    @type path: C{string}
+    @ivar path: the requested path to get
     @type headers: C{dictionary}
     @ivar headers: the headers received with the request
     @type encoding: C{string}
@@ -94,9 +183,14 @@
         self.handler = handler
         self.connection = connection
         self.buf = ''
+        self.requests = []
+        self.protocol = ''
+        self.version = None
+        self.close_connection = True
         self.closed = False
         self.done = False
         self.donereading = False
+        self.req_count = 0
         self.next_func = self.read_type
 
     def get_ip(self):
@@ -148,23 +242,61 @@
         
         """
         
+        self.req_count += 1
         self.header = data.strip()
         words = data.split()
         if len(words) == 3:
-            self.command, self.path, garbage = words
-            self.pre1 = False
+            # Must be HTTP 1.0 or greater
+            self.command, self.path, version = words
+
+            try:
+                # Extract the protocol from the request
+                self.protocol, base_version_number = version.split('/', 1)
+            except:
+                logger.error("Bad request protocol (%r)", version)
+                return None
+            
+            if self.handler.protocol >= "HTTP/1.1":
+                try:
+                    # Extract the version number from the request
+                    self.protocol, base_version_number = version.split('/', 1)
+                    version_number = base_version_number.split(".")
+                    if len(version_number) != 2:
+                        logger.error("Bad request version (%r)", version)
+                        return None
+                    self.version = int(version_number[0]), int(version_number[1])
+                except (ValueError, IndexError):
+                    logger.error("Bad request version (%r)", version)
+                    return None
+                
+                # Use persistent connections for DEBTORRENT/HTTP1.1
+                if (self.protocol == "DEBTORRENT" or 
+                    (self.protocol == "HTTP" and self.version >= (1, 1))):
+                    self.close_connection = False
+                    
+            elif self.protocol != "HTTP":
+                logger.error("Unsupported protocol (%r)", version)
+                return None
+            else:
+                self.version = (1, 0)
+            
         elif len(words) == 2:
+            # Old HTTP 0.9 connections don't include the version and only support GET
             self.command, self.path = words
-            self.pre1 = True
+            self.protocol = 'HTTP'
+            self.version = (0, 9)
             if self.command != 'GET':
                 logger.warning('connection closed, improper command: '+self.command)
                 return None
         else:
             logger.warning('connection closed, corrupt header line: '+data)
             return None
+        
         if self.command not in ('HEAD', 'GET'):
             logger.warning('connection closed, improper command: '+self.command)
             return None
+        
+        logger.info(str(self.req_count)+': '+self.protocol+' '+self.header)
         self.headers = {}
         return self.read_header
 
@@ -179,16 +311,51 @@
         """
         
         data = data.strip()
+        
+        # A blank line indicates the headers are done
         if data == '':
-            self.donereading = True
+            # Get the encoding to use for the answer
             if self.headers.get('accept-encoding','').find('gzip') > -1:
                 self.encoding = 'gzip'
             else:
                 self.encoding = 'identity'
-            r = self.handler.getfunc(self, self.path, self.headers)
+                
+            # Check for persistent connection headers
+            conntype = self.headers.get('Connection', "").lower()
+            if conntype == 'close':
+                self.close_connection = True
+            elif conntype == 'keep-alive' and self.handler.protocol >= "HTTP/1.1":
+                self.close_connection = False
+
+            # If this is not the last request
+            newrequest = None
+            if not self.close_connection or self.requests:
+                newrequest = HTTPRequest(self.header, self.command, self.path,
+                                         self.encoding, self.headers)
+                self.requests.append(newrequest)
+
+            # Call the function to process the request
+            r = self.handler.getfunc(self, self.path, self.headers, newrequest)
+
+            # Send the answer if available
             if r is not None:
-                self.answer(r)
-            return None
+                if newrequest:
+                    # Multiple requests, so queue it for possible sending
+                    self.answer(r, newrequest)
+                else:
+                    # It's the only request, so just send it
+                    self.send_answer(r, self.header, self.command, self.path,
+                                     self.encoding, self.headers)
+                
+            # Request complete, close or wait for more
+            if self.close_connection:
+                self.donereading = True
+                return None
+            else:
+                self.close_connection = True
+                return self.read_type
+            
+        # Process the header line
         try:
             i = data.index(':')
         except ValueError:
@@ -198,8 +365,52 @@
         logger.debug(data[:i].strip() + ": " + data[i+1:].strip())
         return self.read_header
 
-    def answer(self, (responsecode, responsestring, headers, data)):
-        """Send a response to the client on the connection and close it.
+    def answer(self, r, httpreq):
+        """Add a response to the queued responses and check if any are ready to send.
+        
+        @type r: (C{int}, C{string}, C{dictionary}, C{string})
+        @param r: the HTTP status code, status message, headers, and package data
+        @type httpreq: L{HTTPRequest}
+        @param httpreq: the request the answer is for
+        
+        """
+        
+        if self.closed:
+            logger.warning('connection closed before answer, dropping data')
+            return
+        
+        if not self.requests:
+            if httpreq is None:
+                # There's only one request allowed, so send the answer
+                self.send_answer(r, self.header, self.command, self.path,
+                                 self.encoding, self.headers)
+            else:
+                logger.error('got answer for unknown request')
+            return
+
+        if httpreq:
+            if httpreq not in self.requests:
+                logger.error('Got an answer for an unknown request')
+            else:
+                if self.protocol == "DEBTORRENT":
+                    # DEBTORRENT requests get sent immediately
+                    self.requests.remove(httpreq)
+                    self.send_answer(r, httpreq.header, httpreq.command,
+                                     httpreq.path, httpreq.encoding,
+                                     httpreq.headers)
+                else:
+                    httpreq.save_answer(r)
+
+        # Answer all possible requests
+        while self.requests and self.requests[0].has_answer():
+            httpreq = self.requests.pop(0)
+            r = httpreq.get_answer()
+            self.send_answer(r, httpreq.header, httpreq.command, httpreq.path,
+                             httpreq.encoding, httpreq.headers)
+
+    def send_answer(self, (responsecode, responsestring, headers, data),
+                    header, command, path, encoding, req_headers):
+        """Send out the complete request.
         
         @type responsecode: C{int}
         @param responsecode: the response code to send
@@ -209,53 +420,86 @@
         @param headers: the headers to send with the response
         @type data: C{string}
         @param data: the data to send with the response
-        
-        """
-        
-        if self.closed:
-            logger.warning('connection closed before anwswer, dropping data')
-            return
-        if self.encoding == 'gzip':
+        @type header: C{string}
+        @param header: the first header line received from the request
+        @type command: C{string}
+        @param command: the requested command ('GET' or 'HEAD')
+        @type path: C{string}
+        @param path: the requested path to get
+        @type encoding: C{string}
+        @param encoding: the encoding to use when sending the response
+        @type req_headers: C{dictionary}
+        @param req_headers: the headers received with the request
+        
+        """
+
+        # Encode the response data
+        if encoding == 'gzip':
             compressed = StringIO()
             gz = GzipFile(fileobj = compressed, mode = 'wb', compresslevel = 9)
             gz.write(data)
             gz.close()
             cdata = compressed.getvalue()
             if len(cdata) >= len(data):
-                self.encoding = 'identity'
+                encoding = 'identity'
             else:
                 logger.debug('Compressed: '+str(len(cdata))+'  Uncompressed: '+str(len(data)))
                 data = cdata
                 headers['Content-Encoding'] = 'gzip'
 
         # i'm abusing the identd field here, but this should be ok
-        if self.encoding == 'identity':
+        if encoding == 'identity':
             ident = '-'
         else:
             ident = self.encoding
         self.handler.write_log( self.connection.get_ip(), ident, '-',
-                                self.header, responsecode, len(data),
-                                self.headers.get('referer','-'),
-                                self.headers.get('user-agent','-') )
-        self.done = True
-        logger.info('sending response: '+str(responsecode)+' '+responsestring+
+                                header, responsecode, len(data),
+                                req_headers.get('referer', '-'),
+                                req_headers.get('user-agent', '-') )
+
+        logger.info('sending response: '+self.protocol+' '+str(responsecode)+' '+responsestring+
                     ' ('+str(len(data))+' bytes)')
+        
         r = StringIO()
-        r.write('HTTP/1.0 ' + str(responsecode) + ' ' + 
-            responsestring + '\r\n')
-        if not self.pre1:
+        
+        # Write the header line
+        if self.protocol == "HTTP":
+            r.write(self.handler.protocol + ' ' + str(responsecode) + ' ' + 
+                    responsestring + '\r\n')
+        elif self.protocol == "DEBTORRENT":
+            r.write('DEBTORRENT/'+DEBTORRENT_PROTOCOL+' '+path+' '+
+                    str(responsecode)+' '+responsestring+'\r\n')
+            
+        # Write the individual headers
+        if self.version >= (1, 0) or self.protocol != 'HTTP':
             headers['Content-Length'] = len(data)
             for key, value in headers.items():
                 r.write(key + ': ' + str(value) + '\r\n')
             r.write('\r\n')
-        if self.command != 'HEAD':
+            
+        # Don't write the body if only the headers are requested
+        if command != 'HEAD':
             r.write(data)
+            
         self.connection.write(r.getvalue())
-        if self.connection.is_flushed():
-            self.connection.shutdown(1)
-
+    
+    def close(self):
+        """Close the connection and drop all pending requests/answers."""
+        logger.debug('HTTP connection closed')
+        self.closed = True
+        del self.connection
+        self.next_func = None
+        for httpreq in self.requests:
+            if httpreq.has_answer():
+                logger.debug('Connection lost before answer could be sent: '+httpreq.path)
+        del self.requests[:]
+        
+        
 class HTTPHandler:
     """The handler for all new and existing HTTP connections.
+    
+    Supports HTTP/1.1 persistent connections with pipelining if the protocol
+    is set to 'HTTP/1.1'.
     
     @type connections: C{dictionary}
     @ivar connections: all the existing connections, keys are the connection 
@@ -270,10 +514,13 @@
     @ivar logfile: the file name to write the logs to
     @type log: C{file}
     @ivar log: the file to write the logs to
+    @type protocol: C{string}
+    @ivar protocol: the HTTP protocol version to use
     
     """
     
-    def __init__(self, getfunc, minflush, logfile = None, hupmonitor = None):
+    def __init__(self, getfunc, minflush, logfile = None, hupmonitor = None,
+                 protocol = 'HTTP/1.0'):
         """Initialize the instance.
         
         @type getfunc: C{method}
@@ -286,6 +533,9 @@
         @type hupmonitor: C{boolean}
         @param hupmonitor: whether to reopen the log file on a HUP signal
             (optional, default is False)
+        @type protocol: C{string}
+        @param protocol: the HTTP protocol version to use
+            (optional, defaults to HTTP/1.0)
         
         """
         
@@ -295,6 +545,7 @@
         self.lastflush = clock()
         self.logfile = None
         self.log = None
+        self.protocol = protocol
         if (logfile) and (logfile != '-'):
             try:
                 self.logfile = logfile
@@ -322,6 +573,7 @@
         
         """
         
+        logger.debug('new external connection')
         self.connections[connection] = HTTPConnection(self, connection)
 
     def connection_flushed(self, connection):
@@ -332,7 +584,9 @@
         
         """
         
+        logger.debug('connection flushed')
         if self.connections[connection].done:
+            logger.debug('connection shutdown')
             connection.shutdown(1)
 
     def connection_lost(self, connection):
@@ -343,10 +597,9 @@
         
         """
         
+        logger.debug('connection lost')
         ec = self.connections[connection]
-        ec.closed = True
-        del ec.connection
-        del ec.next_func
+        ec.close()
         del self.connections[connection]
 
     def data_came_in(self, connection, data):
@@ -361,6 +614,7 @@
         
         c = self.connections[connection]
         if not c.data_came_in(data) and not c.closed:
+            logger.debug('closing connection')
             c.connection.shutdown(1)
 
     def write_log(self, ip, ident, username, header,
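
The important property of the new answer()/send_answer() split above is
ordering: on an HTTP/1.1 connection, pipelined responses are held until every
earlier request has an answer and are then flushed front-to-back, while
DEBTORRENT answers are sent as soon as they arrive. A rough illustration of
the flush loop, with a simplified Request standing in for HTTPRequest:

    # Simplified stand-in for the pipelined answer queue in HTTPConnection.
    class Request:
        def __init__(self, path):
            self.path = path
            self.answer = None

        def has_answer(self):
            return self.answer is not None

    requests = [Request('/a'), Request('/b'), Request('/c')]

    def save_answer(req, r):
        req.answer = r
        # Flush from the front only: an answer for '/b' is held until
        # '/a' has one too, so pipelined responses keep request order.
        while requests and requests[0].has_answer():
            done = requests.pop(0)
            print 'sending', done.path, done.answer[0]

    save_answer(requests[1], (200, 'OK'))   # nothing is sent yet
    save_answer(requests[0], (200, 'OK'))   # now '/a' and then '/b' go out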

Modified: debtorrent/branches/unique/DebTorrent/RateLimiter.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/RateLimiter.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/RateLimiter.py (original)
+++ debtorrent/branches/unique/DebTorrent/RateLimiter.py Sun Aug 19 09:33:45 2007
@@ -51,16 +51,6 @@
 from CurrentRateMeasure import Measure
 from cStringIO import StringIO
 from math import sqrt
-
-try:
-    True
-except:
-    True = 1
-    False = 0
-try:
-    sum([1])
-except:
-    sum = lambda a: reduce(lambda x,y: x+y, a, 0)
 
 logger = logging.getLogger('DebTorrent.RateLimiter')
 

Modified: debtorrent/branches/unique/DebTorrent/RateMeasure.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/RateMeasure.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/RateMeasure.py (original)
+++ debtorrent/branches/unique/DebTorrent/RateMeasure.py Sun Aug 19 09:33:45 2007
@@ -12,11 +12,6 @@
 """
 
 from clock import clock
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 FACTOR = 0.999
 

Modified: debtorrent/branches/unique/DebTorrent/RawServer.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/RawServer.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/RawServer.py (original)
+++ debtorrent/branches/unique/DebTorrent/RawServer.py Sun Aug 19 09:33:45 2007
@@ -24,11 +24,6 @@
 from clock import clock
 from signal import signal, SIGINT, SIG_DFL
 import sys, logging
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.RawServer')
 
@@ -303,6 +298,8 @@
         
         @type handler: unknown
         @param handler: the default data handler to use to process data on connections
+        @rtype: C{boolean}
+        @return: whether the server should be restarted
         
         """
         
@@ -321,7 +318,7 @@
                         period = 0
                     events = self.sockethandler.do_poll(period)
                     if self.doneflag.isSet():
-                        return
+                        return True
                     while self.funcs and self.funcs[0][0] <= clock():
                         garbage1, func, id = self.funcs.pop(0)
                         if id in self.tasks_to_kill:
@@ -331,32 +328,32 @@
                             func()
                         except (SystemError, MemoryError), e:
                             logger.exception('Occurred while running '+func.__name__)
-                            return
+                            return True
                         except KeyboardInterrupt:
                             signal(SIGINT, SIG_DFL)
                             self.exception(True)
-                            return
+                            return False
                         except:
                             self.exception()
                     self.sockethandler.close_dead()
                     self.sockethandler.handle_events(events)
                     if self.doneflag.isSet():
-                        return
+                        return True
                     self.sockethandler.close_dead()
                 except (SystemError, MemoryError), e:
                     logger.exception('Occurred while processing queued functions')
-                    return
+                    return True
                 except error:
                     if self.doneflag.isSet():
-                        return
+                        return True
                 except KeyboardInterrupt:
                     signal(SIGINT, SIG_DFL)
                     self.exception(True)
-                    return
+                    return False
                 except:
                     self.exception()
                 if self.exccount > 10:
-                    return
+                    return True
         finally:
 #            self.sockethandler.shutdown()
             self.finished.set()

Modified: debtorrent/branches/unique/DebTorrent/ServerPortHandler.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/ServerPortHandler.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/ServerPortHandler.py (original)
+++ debtorrent/branches/unique/DebTorrent/ServerPortHandler.py Sun Aug 19 09:33:45 2007
@@ -17,14 +17,8 @@
 #from RawServer import RawServer
 from BTcrypto import Crypto
 from binascii import b2a_hex
+from BT1.Encrypter import protocol_name
 import logging
-try:
-    True
-except:
-    True = 1
-    False = 0
-
-from BT1.Encrypter import protocol_name
 
 logger = logging.getLogger('DebTorrent.ServerPortHandler')
 
@@ -591,13 +585,21 @@
         del self.singlerawservers[info_hash]
 
     def listen_forever(self):
-        """Call the master server's listen loop."""
-        self.rawserver.listen_forever(self)
+        """Call the master server's listen loop.
+        
+        @rtype: C{boolean}
+        @return: whether the server should be restarted
+        
+        """
+        
+        restart = self.rawserver.listen_forever(self)
         for srs in self.singlerawservers.values():
             srs.finished = True
             srs.running = False
             srs.doneflag.set()
         
+        return restart
+        
     ### RawServer handler functions ###
     # be wary of name collisions
 

Modified: debtorrent/branches/unique/DebTorrent/SocketHandler.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/SocketHandler.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/SocketHandler.py (original)
+++ debtorrent/branches/unique/DebTorrent/SocketHandler.py Sun Aug 19 09:33:45 2007
@@ -27,11 +27,6 @@
 from random import shuffle, randrange
 # from BT1.StreamCheck import StreamCheck
 # import inspect
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.SocketHandler')
 
@@ -56,12 +51,12 @@
     @ivar connected: whether this socket has received an event yet
     @type skipped: C{int}
     @ivar skipped: the number of consecutive writes to the socket that have failed
-    @type ip: C{string}
-    @ivar ip: the IP address to use if one can't be obtained from the socket
+    @type dns: (C{string}, C{int})
+    @ivar dns: the IP address and port to use if one can't be obtained from the socket
     
     """
     
-    def __init__(self, socket_handler, sock, handler, ip = None):
+    def __init__(self, socket_handler, sock, handler, dns = None):
         """
         
         @type socket_handler: L{SocketHandler}
@@ -86,12 +81,13 @@
         self.skipped = 0
 #        self.check = StreamCheck()
         try:
-            self.ip = self.socket.getpeername()[0]
+            self.dns = self.socket.getpeername()
         except:
-            if ip is None:
-                self.ip = 'unknown'
+            if dns is None:
+                self.dns = ('unknown', 0)
             else:
-                self.ip = ip
+                self.dns = dns
+        logger.debug('new socket: %r', self.dns)
         
     def get_ip(self, real=False):
         """Get the IP address of the socket.
@@ -107,14 +103,34 @@
         
         if real:
             try:
-                self.ip = self.socket.getpeername()[0]
+                self.dns = self.socket.getpeername()
             except:
                 pass
-        return self.ip
+        return self.dns[0]
+        
+    def getpeername(self, real=False):
+        """Get the IP address and port of the socket.
+        
+        @type real: C{boolean}
+        @param real: whether to try and get the IP address directly from the 
+            socket or trust the one supplied when the instance was created 
+            (optional, defaults to False)
+        @rtype: (C{string}, C{int})
+        @return: the IP address and port of the remote connection
+        
+        """
+        
+        if real:
+            try:
+                self.dns = self.socket.getpeername()
+            except:
+                pass
+        return self.dns
         
     def close(self):
         """Close the socket."""
         assert self.socket
+        logger.debug('close socket: %r', self.dns)
         self.connected = False
         sock = self.socket
         self.socket = None
@@ -132,6 +148,7 @@
         
         """
         
+        logger.debug('socket %r shutdown:'+str(val), self.dns)
         self.socket.shutdown(val)
 
     def is_flushed(self):
@@ -192,7 +209,7 @@
             if self.skipped >= 3:
                 dead = True
             if dead:
-                logger.debug('Socket is dead from write: '+self.ip)
+                logger.debug('Socket is dead from write: %r', self.dns)
                 self.socket_handler.dead_from_write.append(self)
                 return
         if self.buffer:
@@ -312,10 +329,7 @@
                 socktype = socket.AF_INET
             bind = bind.split(',')
             for addr in bind:
-                if sys.version_info < (2,2):
-                    addrinfos.append((socket.AF_INET, None, None, None, (addr, port)))
-                else:
-                    addrinfos.extend(socket.getaddrinfo(addr, port,
+                addrinfos.extend(socket.getaddrinfo(addr, port,
                                                socktype, socket.SOCK_STREAM))
         else:
             if self.ipv6_enable:
@@ -446,7 +460,7 @@
         except Exception, e:
             raise socket.error(str(e))
         self.poll.register(sock, POLLIN)
-        s = SingleSocket(self, sock, handler, dns[0])
+        s = SingleSocket(self, sock, handler, dns)
         self.single_sockets[sock.fileno()] = s
         return s
 
@@ -470,30 +484,27 @@
         
         if handler is None:
             handler = self.handler
-        if sys.version_info < (2,2):
-            s = self.start_connection_raw(dns,socket.AF_INET,handler)
+        if self.ipv6_enable:
+            socktype = socket.AF_UNSPEC
         else:
-            if self.ipv6_enable:
-                socktype = socket.AF_UNSPEC
-            else:
-                socktype = socket.AF_INET
-            try:
-                addrinfos = socket.getaddrinfo(dns[0], int(dns[1]),
-                                               socktype, socket.SOCK_STREAM)
-            except socket.error, e:
-                raise
-            except Exception, e:
-                raise socket.error(str(e))
-            if randomize:
-                shuffle(addrinfos)
-            for addrinfo in addrinfos:
-                try:
-                    s = self.start_connection_raw(addrinfo[4],addrinfo[0],handler)
-                    break
-                except:
-                    pass
-            else:
-                raise socket.error('unable to connect')
+            socktype = socket.AF_INET
+        try:
+            addrinfos = socket.getaddrinfo(dns[0], int(dns[1]),
+                                           socktype, socket.SOCK_STREAM)
+        except socket.error, e:
+            raise
+        except Exception, e:
+            raise socket.error(str(e))
+        if randomize:
+            shuffle(addrinfos)
+        for addrinfo in addrinfos:
+            try:
+                s = self.start_connection_raw(addrinfo[4],addrinfo[0],handler)
+                break
+            except:
+                pass
+        else:
+            raise socket.error('unable to connect')
         return s
 
 
@@ -551,6 +562,7 @@
                             self._close_socket(s)
                             continue
                 if (event & POLLOUT) and s.socket and not s.is_flushed():
+                    s.last_hit = clock()
                     s.try_write()
                     if s.is_flushed():
                         s.handler.connection_flushed(s)
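
The SingleSocket changes above simply keep the full (IP, port) pair reported
by getpeername() and fall back to the address supplied when the connection was
started if the socket cannot report one (for instance before the connect has
completed). A minimal illustration of that fallback with a bare socket rather
than the SingleSocket wrapper:

    import socket

    def peer_address(sock, fallback=('unknown', 0)):
        # Prefer the address the socket reports; fall back to the address
        # that was supplied when the connection was started.
        try:
            return sock.getpeername()
        except socket.error:
            return fallback

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print peer_address(s, ('192.0.2.1', 6881))   # not connected -> fallback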

Modified: debtorrent/branches/unique/DebTorrent/__init__.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/__init__.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/__init__.py (original)
+++ debtorrent/branches/unique/DebTorrent/__init__.py Sun Aug 19 09:33:45 2007
@@ -22,7 +22,7 @@
 """
 
 product_name = 'DebTorrent'
-version_short = 'T-0.1.3.1'
+version_short = 'T-0.1.4'
 
 version = version_short+' ('+product_name+')'
 report_email = 'debtorrent-devel at lists.alioth.debian.org'
@@ -31,6 +31,7 @@
 from sha import sha
 from time import time, clock
 from binascii import b2a_hex
+from urllib import quote
 import logging
 try:
     from os import getpid
@@ -88,6 +89,23 @@
         
 resetPeerIDs()
 
+def make_readable(s):
+    """Convert a string peer ID to be human-readable.
+    
+    @type s: C{string}
+    @param s: the string to convert
+    @rtype: C{string}
+    @return: the resulting hex string, or the original string if it was already
+        readable
+    
+    """
+    
+    if not s:
+        return ''
+    if quote(s).find('%') >= 0:
+        return b2a_hex(s)
+    return '"'+s+'"'
+
 def createPeerID(ins = '---'):
     """Generate a somewhat random peer ID
     
@@ -102,5 +120,5 @@
     
     assert type(ins) is StringType
     assert len(ins) == 3
-    logger.info('New peer ID: '+b2a_hex(_idprefix + ins + _idrandom[0]))
+    logger.info('New peer ID: '+make_readable(_idprefix + ins + _idrandom[0]))
     return _idprefix + ins + _idrandom[0]
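
The new make_readable() helper hex-encodes a peer ID only when URL-quoting it
would introduce escapes (i.e. it contains non-printable or reserved bytes);
otherwise it returns the ID wrapped in quotes. A quick usage sketch, using an
illustrative peer ID rather than a real DebTorrent one:

    from urllib import quote
    from binascii import b2a_hex

    def make_readable(s):
        if not s:
            return ''
        if quote(s).find('%') >= 0:
            return b2a_hex(s)
        return '"' + s + '"'

    print make_readable('-DT0104-abcdefghijkl')  # printable -> '"-DT0104-abcdefghijkl"'
    print make_readable('\x01\x02\x03')          # binary    -> '010203'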

Modified: debtorrent/branches/unique/DebTorrent/bencode.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/bencode.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/bencode.py (original)
+++ debtorrent/branches/unique/DebTorrent/bencode.py Sun Aug 19 09:33:45 2007
@@ -174,6 +174,7 @@
         r, l = decode_func[x[0]](x, 0)
 #    except (IndexError, KeyError):
     except (IndexError, KeyError, ValueError):
+        logger.exception('bad bencoded data')
         raise ValueError, "bad bencoded data"
     if not sloppy and l != len(x):
         raise ValueError, "bad bencoded data"

Modified: debtorrent/branches/unique/DebTorrent/bitfield.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/bitfield.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/bitfield.py (original)
+++ debtorrent/branches/unique/DebTorrent/bitfield.py Sun Aug 19 09:33:45 2007
@@ -17,18 +17,7 @@
 
 """
 
-try:
-    True
-except:
-    True = 1
-    False = 0
-    bool = lambda x: not not x
-
-try:
-    sum([1])
-    negsum = lambda a: len(a)-sum(a)
-except:
-    negsum = lambda a: reduce(lambda x,y: x+(not y), a, 0)
+negsum = lambda a: len(a)-sum(a)
     
 def _int_to_booleans(x):
     """Convert an integer to a list of booleans.

Modified: debtorrent/branches/unique/DebTorrent/download_bt1.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/download_bt1.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/download_bt1.py (original)
+++ debtorrent/branches/unique/DebTorrent/download_bt1.py Sun Aug 19 09:33:45 2007
@@ -38,7 +38,7 @@
 from ConfigDir import ConfigDir
 from bencode import bencode, bdecode
 from sha import sha
-from os import path, makedirs, listdir
+from os import path, makedirs, listdir, walk
 from parseargs import parseargs, formatDefinitions, defaultargs
 from socket import error as socketerror
 from random import seed
@@ -50,12 +50,6 @@
 from gzip import GzipFile
 from StringIO import StringIO
 import binascii, logging
-
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.download_bt1')
 
@@ -842,6 +836,30 @@
             self.errorfunc(reason)
         
 
+    def find_files(self):
+        """Search through the save directory to find files which already exist.
+        
+        @rtype: C{list} of C{boolean}
+        @return: a list of booleans indicating which entries in L{files} already exist
+        
+        """
+        
+        found_files = {}
+        for root, dirs, files in walk(self.filename):
+            for file in files:
+                found_files[path.join(root, file)] = 1
+        
+        if not found_files:
+            return None
+        
+        enabled_files = []
+        for file, length in self.files:
+            if file in found_files:
+                enabled_files.append(True)
+            else:
+                enabled_files.append(False)
+        return enabled_files
+        
     def initFiles(self, old_style = False):
         """Initialize the files for the download.
         
@@ -859,19 +877,29 @@
         if self.doneflag.isSet():
             return None
 
-        disabled_files = None
+        enabled_files = None
         data = self.appdataobj.getTorrentData(self.infohash)
         try:
             d = data['resume data']['priority']
             assert len(d) == len(self.files)
-            disabled_files = [x == -1 for x in d]
+            enabled_files = [x != -1 for x in d]
         except:
-            pass
+            if data:
+                logger.exception('pickled data is corrupt, manually finding and hash checking old files')
+            enabled_files = self.find_files()
+            if enabled_files:
+                priority = []
+                for found in enabled_files:
+                    if found:
+                        priority.append(1)
+                    else:
+                        priority.append(-1)
+                data = {'resume data': {'priority': priority}}
 
         try:
             try:
                 self.storage = Storage(self.files, self.piece_lengths,
-                                       self.doneflag, self.config, disabled_files)
+                                       self.doneflag, self.config, enabled_files)
             except IOError, e:
                 logger.exception('trouble accessing files')
                 self.errorfunc('trouble accessing files - ' + str(e))
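
The fallback added to initFiles() above amounts to: walk the download
directory, mark which of the torrent's files already exist on disk, and
synthesize a resume-data priority list (1 for present, -1 for absent) so a
missing or corrupt pickle does not force a full re-download. A condensed
sketch of that mapping, with files_and_lengths standing in for self.files:

    import os

    def enabled_from_disk(save_dir, files_and_lengths):
        # Which of the torrent's files are already present under save_dir?
        found = {}
        for root, dirs, names in os.walk(save_dir):
            for name in names:
                found[os.path.join(root, name)] = 1
        if not found:
            return None
        return [fname in found for fname, length in files_and_lengths]

    def priorities(enabled_files):
        # 1 = enabled (file exists), -1 = disabled, matching the
        # 'resume data'/'priority' format used above.
        result = []
        for found in enabled_files:
            if found:
                result.append(1)
            else:
                result.append(-1)
        return result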

Modified: debtorrent/branches/unique/DebTorrent/inifile.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/inifile.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/inifile.py (original)
+++ debtorrent/branches/unique/DebTorrent/inifile.py Sun Aug 19 09:33:45 2007
@@ -36,16 +36,7 @@
 from cStringIO import StringIO
 from types import DictType, StringType
 import logging
-try:
-    from types import BooleanType
-except ImportError:
-    BooleanType = None
-
-try:
-    True
-except:
-    True = 1
-    False = 0
+from types import BooleanType
 
 logger = logging.getLogger('DebTorrent.inifile')
 

Modified: debtorrent/branches/unique/DebTorrent/iprangeparse.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/iprangeparse.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/iprangeparse.py (original)
+++ debtorrent/branches/unique/DebTorrent/iprangeparse.py Sun Aug 19 09:33:45 2007
@@ -16,13 +16,6 @@
 
 from bisect import bisect, insort
 import logging
-
-try:
-    True
-except:
-    True = 1
-    False = 0
-    bool = lambda x: not not x
 
 logger = logging.getLogger('DebTorrent.iprangeparse')
 

Modified: debtorrent/branches/unique/DebTorrent/launchmanycore.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/launchmanycore.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/launchmanycore.py (original)
+++ debtorrent/branches/unique/DebTorrent/launchmanycore.py Sun Aug 19 09:33:45 2007
@@ -35,12 +35,6 @@
 import logging
 from DebTorrent.BT1.AptListener import AptListener
 from DebTorrent.HTTPHandler import HTTPHandler
-
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.launchmanycore')
 
@@ -336,55 +330,69 @@
         @param configdir: the configuration and cache directory manager
         
         """
-        
+
+        self.config = config
+        self.configdir = configdir
+
+        self.torrent_cache = {}
+        self.file_cache = {}
+        self.blocked_files = {}
+
+        self.torrent_list = []
+        self.downloads = {}
+        self.counter = 0
+        self.doneflag = Event()
+
+        self.hashcheck_queue = []
+        self.hashcheck_current = None
+        
+    def run(self):
+        """Run the mutliple downloads.
+        
+        @rtype: C{boolean}
+        @return: whether the server should be restarted
+        
+        """
+
+        restart = False
         try:
-            self.config = config
-            self.configdir = configdir
-
-            self.torrent_cache = {}
-            self.file_cache = {}
-            self.blocked_files = {}
-
-            self.torrent_list = []
-            self.downloads = {}
-            self.counter = 0
-            self.doneflag = Event()
-
-            self.hashcheck_queue = []
-            self.hashcheck_current = None
-            
-            self.rawserver = RawServer(self.doneflag, config['timeout_check_interval'],
-                              config['timeout'], ipv6_enable = config['ipv6_enabled'])
+            self.rawserver = RawServer(self.doneflag, self.config['timeout_check_interval'],
+                              self.config['timeout'], ipv6_enable = self.config['ipv6_enabled'])
 
             self.listen_port = self.rawserver.find_and_bind(
-                            config['minport'], config['maxport'], config['bind'],
-                            ipv6_socket_style = config['ipv6_binds_v4'],
-                            randomizer = config['random_port'])
-
-            if config['log_dir']:
-                logfile = os.path.join(config['log_dir'], 'apt-access.log')
+                            self.config['minport'], self.config['maxport'], self.config['bind'],
+                            ipv6_socket_style = self.config['ipv6_binds_v4'],
+                            randomizer = self.config['random_port'])
+
+            if self.config['log_dir']:
+                logfile = os.path.join(self.config['log_dir'], 'apt-access.log')
             else:
                 logfile = os.path.join(self.configdir.cache_dir, 'apt-access.log')
 
-            self.aptlistener = AptListener(self, config, self.rawserver)
-            self.rawserver.bind(config['apt_port'], config['bind'],
-                   reuse = True, ipv6_socket_style = config['ipv6_binds_v4'])
+            self.aptlistener = AptListener(self, self.config, self.rawserver)
+            self.rawserver.bind(self.config['apt_port'], self.config['bind'],
+                   reuse = True, ipv6_socket_style = self.config['ipv6_binds_v4'])
             self.rawserver.set_handler(HTTPHandler(self.aptlistener.get, 
-                                                   config['min_time_between_log_flushes'],
-                                                   logfile, config['hupmonitor']), 
-                                       config['apt_port'])
+                                                   self.config['min_time_between_log_flushes'],
+                                                   logfile, self.config['hupmonitor'],
+                                                   'HTTP/1.1'), 
+                                       self.config['apt_port'])
     
             self.ratelimiter = RateLimiter(self.rawserver.add_task,
-                                           config['upload_unit_size'])
-            self.ratelimiter.set_upload_rate(config['max_upload_rate'])
-
-            self.handler = MultiHandler(self.rawserver, self.doneflag, config)
+                                           self.config['upload_unit_size'])
+            self.ratelimiter.set_upload_rate(self.config['max_upload_rate'])
+
+            self.handler = MultiHandler(self.rawserver, self.doneflag, self.config)
             seed(createPeerID())
 
             # Restore the previous state of the downloads
-            self.unpickle(self.configdir.getState())
+            still_running = self.unpickle(self.configdir.getState())
             
-            self.handler.listen_forever()
+            # Expire any old cached files
+            self.configdir.deleteOldCacheData(self.config['expire_cache_data'],
+                                              still_running, True)
+            
+            restart = self.handler.listen_forever()
 
             # Save the current state of the downloads
             self.configdir.saveState(self.pickle())
@@ -398,6 +406,8 @@
 
         except:
             logger.exception('SYSTEM ERROR - EXCEPTION GENERATED')
+        
+        return restart
 
 
     def gather_stats(self):
@@ -787,14 +797,18 @@
         
         @type data: C{dictionary}
         @param data: the saved state of the previously running downloads
+        @rtype: C{list} of C{string}
+        @return: the list of torrent hashes that are still running
         
         """
         
         if data is None:
-            return
-        
+            return []
+        
+        still_running = []
         d = data['torrent cache']
         for hash in d:
+            still_running.append(hash)
             paused = d[hash].pop('paused', False)
             metainfo = self.configdir.getTorrent(hash)
             if metainfo:
@@ -802,3 +816,5 @@
                 self.add(hash, d[hash], False)
                 if paused:
                     self.downloads[hash].d.Pause()
+        
+        return still_running
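
With run() now propagating the restart flag from listen_forever(), the
startup script (not part of this diff) can simply loop until a clean
shutdown. A hypothetical driver loop; LaunchMany stands in for the class
whose __init__ and run() appear above, and the actual script may differ:

    def main(config, configdir):
        while True:
            launcher = LaunchMany(config, configdir)
            if not launcher.run():
                # False (e.g. after a keyboard interrupt): stop for good.
                break
            # True: listen_forever() asked for a restart, so build a
            # fresh instance and go around again.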

Modified: debtorrent/branches/unique/DebTorrent/parsedir.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/parsedir.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/parsedir.py (original)
+++ debtorrent/branches/unique/DebTorrent/parsedir.py Sun Aug 19 09:33:45 2007
@@ -16,12 +16,6 @@
 from os.path import exists, isfile
 from sha import sha
 import sys, os, logging
-
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.parsedir')
 

Modified: debtorrent/branches/unique/DebTorrent/piecebuffer.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/piecebuffer.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/piecebuffer.py (original)
+++ debtorrent/branches/unique/DebTorrent/piecebuffer.py Sun Aug 19 09:33:45 2007
@@ -19,11 +19,6 @@
 from threading import Lock
 import logging
 # import inspect
-try:
-    True
-except:
-    True = 1
-    False = 0
     
 logger = logging.getLogger('DebTorrent.piecebuffer')
 

Modified: debtorrent/branches/unique/DebTorrent/subnetparse.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/subnetparse.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/subnetparse.py (original)
+++ debtorrent/branches/unique/DebTorrent/subnetparse.py Sun Aug 19 09:33:45 2007
@@ -20,13 +20,6 @@
 
 from bisect import bisect, insort
 import logging
-
-try:
-    True
-except:
-    True = 1
-    False = 0
-    bool = lambda x: not not x
 
 logger = logging.getLogger('DebTorrent.subnetparse')
 

Modified: debtorrent/branches/unique/DebTorrent/torrentlistparse.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/DebTorrent/torrentlistparse.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/DebTorrent/torrentlistparse.py (original)
+++ debtorrent/branches/unique/DebTorrent/torrentlistparse.py Sun Aug 19 09:33:45 2007
@@ -13,12 +13,6 @@
 
 from binascii import unhexlify
 import logging
-
-try:
-    True
-except:
-    True = 1
-    False = 0
 
 logger = logging.getLogger('DebTorrent.torrentlistparse')
 

Modified: debtorrent/branches/unique/TODO
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/TODO?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/TODO (original)
+++ debtorrent/branches/unique/TODO Sun Aug 19 09:33:45 2007
@@ -38,15 +38,6 @@
 more efficient by adding callbacks to PiecePicker or StorageWrapper, so that
 when a piece comes in and passes the hash check, then the AptListener will
 process any queued requests for that piece.
-
-
-HTTPHandler should support HTTP/1.1 and persistent connections/pipelining
-
-Currently HTTPHandler is HTTP/1.0, and so doesn't support persistent 
-connections. These would be useful as APT could then pipeline multiple requests
-at a time to DebTorrent for processing. This would require something like the
-AptListener callbacks, as the connections would then have to support multiple 
-queued package requests.
 
 
 Different forms of HTTP Downloading may open too many connections

Modified: debtorrent/branches/unique/btcompletedir.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/btcompletedir.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/btcompletedir.py (original)
+++ debtorrent/branches/unique/btcompletedir.py Sun Aug 19 09:33:45 2007
@@ -21,8 +21,8 @@
     except:
         pass
 
-from sys import argv, version, exit
-assert version >= '2', "Install Python 2.0 or greater"
+from sys import argv, version_info, exit
+assert version_info >= (2,3), "Install Python 2.3 or greater"
 from os.path import split
 from DebTorrent.BT1.makemetafile import defaults, completedir, print_announcelist_details
 from DebTorrent.parseargs import parseargs, formatDefinitions

Modified: debtorrent/branches/unique/btmakemetafile.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/btmakemetafile.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/btmakemetafile.py (original)
+++ debtorrent/branches/unique/btmakemetafile.py Sun Aug 19 09:33:45 2007
@@ -23,9 +23,9 @@
     except:
         pass
 
-from sys import argv, version, exit
+from sys import argv, version, exit, version_info
 from os.path import split
-assert version >= '2', "Install Python 2.0 or greater"
+assert version_info >= (2,3), 'Requires Python 2.3 or better'
 from DebTorrent.BT1.makemetafile import make_meta_file, defaults, print_announcelist_details
 from DebTorrent.parseargs import parseargs, formatDefinitions
 import logging

Modified: debtorrent/branches/unique/btshowmetainfo.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/btshowmetainfo.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/btshowmetainfo.py (original)
+++ debtorrent/branches/unique/btshowmetainfo.py Sun Aug 19 09:33:45 2007
@@ -20,6 +20,8 @@
 
 logging.basicConfig()
 logger = logging.getLogger()
+
+assert version_info >= (2,3), 'Requires Python 2.3 or better'
 
 NAME, EXT = splitext(basename(argv[0]))
 VERSION = '20030621'

Modified: debtorrent/branches/unique/debian/changelog
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debian/changelog?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debian/changelog (original)
+++ debtorrent/branches/unique/debian/changelog Sun Aug 19 09:33:45 2007
@@ -1,3 +1,23 @@
+debtorrent (0.1.4) unstable; urgency=low
+
+  * APT communication supports HTTP/1.1 connections, including
+    persistent connections and pipelining
+  * Add support for the new debtorrent APT transport method
+    (see the new apt-transport-debtorrent package)
+  * Make the Packages decompression and torrent creation threaded
+  * Improve the startup initialization of files
+  * Add init and configuration files for the tracker
+  * bug fixes:
+    - restarts would fail when downloaded files have been modified
+    - deleting old cached data would fail
+    - small tracker bug causing exceptions
+    - prevent enabling files before the initialization is complete
+    - only connect to unique peers from the tracker that are not
+      already connected
+    - tracker would return all torrents' peers for every request
+
+ -- Cameron Dale <camrdale at gmail.com>  Sat, 17 Aug 2007 14:13:00 -0700
+
 debtorrent (0.1.3.1) unstable; urgency=low
 
   * First debian package release (Closes: #428005)

Modified: debtorrent/branches/unique/debian/control
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debian/control?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debian/control (original)
+++ debtorrent/branches/unique/debian/control Sun Aug 19 09:33:45 2007
@@ -12,7 +12,7 @@
 Architecture: all
 Depends: ${python:Depends}, adduser
 Suggests: python-psyco
-Recommends: python-crypto
+Recommends: python-crypto, apt-transport-debtorrent
 Provides: python-debtorrent
 Description: bittorrent proxy for downloading Debian packages
  DebTorrent is a proxy for downloading Debian packages files with APT.

Modified: debtorrent/branches/unique/debian/debtorrent-client.init
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debian/debtorrent-client.init?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debian/debtorrent-client.init (original)
+++ debtorrent/branches/unique/debian/debtorrent-client.init Sun Aug 19 09:33:45 2007
@@ -15,7 +15,7 @@
 #                    This provides the debtorrent daemon client.
 ### END INIT INFO
 
-# /etc/init.d/debtorrent: start and stop the debtorrent client daemon
+# /etc/init.d/debtorrent-client: start and stop the debtorrent client daemon
 
 DAEMON=/usr/bin/debtorrent-client
 NAME="debtorrent-client"

Modified: debtorrent/branches/unique/debian/debtorrent-client.sgml
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debian/debtorrent-client.sgml?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debian/debtorrent-client.sgml (original)
+++ debtorrent/branches/unique/debian/debtorrent-client.sgml Sun Aug 19 09:33:45 2007
@@ -75,7 +75,8 @@
 
     <para>These programs follow the usual &gnu; command line syntax,
       with long options starting with two dashes (`--').  A summary of
-      options is included below.</para>
+      options is included below. For more detail, see the configuration
+      file in /etc/debtorrent.</para>
     
   <refsect2>
     <title>CONFIG FILES</title>
@@ -135,7 +136,7 @@
         <listitem>
           <para>log messages that are greater than or equal to <replaceable>level</replaceable> to log files, 
             the basic log levels are 50 (critical), 40 (errors), 30 (warnings), 20 (info), and 10 (debug)
-            (the default is 30)</para>
+            (the default is 10)</para>
         </listitem>
       </varlistentry>
     </variablelist>

Modified: debtorrent/branches/unique/debian/debtorrent-tracker.sgml
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debian/debtorrent-tracker.sgml?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debian/debtorrent-tracker.sgml (original)
+++ debtorrent/branches/unique/debian/debtorrent-tracker.sgml Sun Aug 19 09:33:45 2007
@@ -63,27 +63,74 @@
       of the completion of each client, and communicates that information when
       requested to other clients.</para>
 
-    <para>There is one required option, --dfile, which specifies what <replaceable>file</replaceable> to store the recent downloader information.</para>
-
   </refsect1>
   <refsect1>
     <title>OPTIONS</title>
 
     <para>These programs follow the usual &gnu; command line syntax,
       with long options starting with two dashes (`--').  A summary of
-      options is included below.</para>
-
-    <variablelist>
-      <varlistentry>
-        <term><option>--dfile <replaceable>file</replaceable></option></term>
-        <listitem>
-          <para>the <replaceable>file</replaceable> to store the recent downloader information (required)</para>
-        </listitem>
-      </varlistentry>
+      options is included below. For more detail, see the configuration
+      file in /etc/debtorrent.</para>
+
+  <refsect2>
+    <title>CONFIG FILES</title>
+    <variablelist>
+      <varlistentry>
+        <term><option>--configfile <replaceable>filename</replaceable></option></term>
+         <listitem>
+          <para>the <replaceable>filename</replaceable> to use for the configuration file, if not specified then a file in
+            /etc/debtorrent will be used, followed by a file in the .DebTorrent directory in the user's home directory</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--save_options</option> 0|1</term>
+        <listitem>
+          <para>whether to save the current options as the new default configuration
+            for the current program (defaults to 0)</para>
+        </listitem>
+      </varlistentry>
+    </variablelist>
+  </refsect2>
+
+  <refsect2>
+    <title>LOCATIONS</title>
+    <variablelist>
+      <varlistentry>
+        <term><option>--cache_dir <replaceable>directory</replaceable></option></term>
+        <listitem>
+          <para>the local <replaceable>directory</replaceable> to save cache data in, if left blank then a .DebTorrent directory in the user's home directory will be used</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--save_dfile_interval <replaceable>seconds</replaceable></option></term>
+        <listitem>
+          <para>the number of <replaceable>seconds</replaceable> between saving the dfile (defaults to 300)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--log_dir <replaceable>directory</replaceable></option></term>
+        <listitem>
+          <para>the local <replaceable>directory</replaceable> to save log files in, if left blank then the cache directory will be used</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--log_level <replaceable>level</replaceable></option></term>
+        <listitem>
+          <para>log messages that are greater than or equal to <replaceable>level</replaceable> to log files, 
+            the basic log levels are 50 (critical), 40 (errors), 30 (warnings), 20 (info), and 10 (debug)
+            (the default is 10)</para>
+        </listitem>
+      </varlistentry>
+    </variablelist>
+  </refsect2>
+
+  <refsect2>
+    <title>CONNECTIONS</title>
+    <variablelist>
       <varlistentry>
         <term><option>--port <replaceable>port</replaceable></option></term>
         <listitem>
-          <para>the <replaceable>port</replaceable> to listen on (defaults to 80)</para>
+          <para>the <replaceable>port</replaceable> to listen on (defaults to 6969)</para>
         </listitem>
       </varlistentry>
       <varlistentry>
@@ -95,31 +142,84 @@
       <varlistentry>
         <term><option>--ipv6_enabled</option> 0|1</term>
         <listitem>
-          <para>whether to allow the tracker to connect to peers via IPv6 (defaults to 0)</para>
+          <para>whether to allow the client to connect to peers via IPv6 (defaults to 0)</para>
         </listitem>
       </varlistentry>
       <varlistentry>
         <term><option>--ipv6_binds_v4</option> 0|1</term>
         <listitem>
-          <para>whether an IPv6 server socket will also field IPv4 connections (defaults to 0)</para>
+          <para>set this if an IPv6 server socket will not also field IPv4 connections (defaults to 0)</para>
         </listitem>
       </varlistentry>
       <varlistentry>
         <term><option>--socket_timeout <replaceable>seconds</replaceable></option></term>
         <listitem>
-          <para>then number of <replaceable>seconds</replaceable> to use as a timeout for closing connections (defaults to 15)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--save_dfile_interval <replaceable>seconds</replaceable></option></term>
-        <listitem>
-          <para>the number of <replaceable>seconds</replaceable> between saving the dfile (defaults to 300)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--timeout_downloaders_interval <replaceable>seconds</replaceable></option></term>
-        <listitem>
-          <para>the number of <replaceable>seconds</replaceable> between expiring downloaders (defaults to 2700)</para>
+          <para>the number of <replaceable>seconds</replaceable> to wait before closing sockets on which nothing
+        has been received (defaults to 15)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--timeout_check_interval <replaceable>seconds</replaceable></option></term>
+        <listitem>
+          <para>the number of <replaceable>seconds</replaceable> to wait between checking if any connections have timed out (defaults to 5)</para>
+        </listitem>
+      </varlistentry>
+    </variablelist>
+  </refsect2>
+
+  <refsect2>
+    <title>ALLOWED TORRENTS AND PEERS</title>
+    <variablelist>
+      <varlistentry>
+        <term><option>--allowed_dir <replaceable>directory</replaceable></option></term>
+        <listitem>
+          <para>only allow downloads for torrents in this <replaceable>directory</replaceable> (defaults to '')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--allowed_list <replaceable>file</replaceable></option></term>
+        <listitem>
+          <para>only allow downloads for hashes in this <replaceable>file</replaceable> (hex format, one per
+	    line), cannot be used with allowed_dir (defaults to '')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--allowed_controls</option> 0|1</term>
+        <listitem>
+          <para>whether to allow special keys in torrents in the allowed_dir to affect tracker
+	    access (defaults to 0)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--allowed_ips <replaceable>file</replaceable></option></term>
+        <listitem>
+          <para>only allow connections from IPs specified in the given <replaceable>file</replaceable>, which
+        contains subnet data in the format: aa.bb.cc.dd/len (defaults to '')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--banned_ips <replaceable>file</replaceable></option></term>
+        <listitem>
+          <para>don't allow connections from IPs specified in the given <replaceable>file</replaceable>, which
+        contains IP range data in the format: xxx:xxx:ip1-ip2 (defaults to '')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--parse_dir_interval <replaceable>seconds</replaceable></option></term>
+        <listitem>
+          <para>number of <replaceable>seconds</replaceable> between reloading of allowed_dir (defaults to 60)</para>
+        </listitem>
+      </varlistentry>
+    </variablelist>
+  </refsect2>
+
+  <refsect2>
+    <title>PEER REQUESTS</title>
+    <variablelist>
+      <varlistentry>
+        <term><option>--compact_reqd</option> 0|1</term>
+        <listitem>
+          <para>whether to only allow peers that accept a compact response (defaults to 1)</para>
         </listitem>
       </varlistentry>
       <varlistentry>
@@ -136,13 +236,6 @@
         </listitem>
       </varlistentry>
       <varlistentry>
-        <term><option>--timeout_check_interval <replaceable>seconds</replaceable></option></term>
-        <listitem>
-          <para>the number of <replaceable>seconds</replaceable> to wait between checking if any connections have timed out
-	    (defaults to 5)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
         <term><option>--nat_check <replaceable>num</replaceable></option></term>
         <listitem>
           <para>check <replaceable>num</replaceable> times if a downloader is behind a NAT (0 = don't
@@ -150,16 +243,9 @@
         </listitem>
       </varlistentry>
       <varlistentry>
-        <term><option>--log_nat_checks</option> 0|1</term>
-        <listitem>
-          <para>whether to add entries to the log for nat-check results (defaults to 0)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--min_time_between_log_flushes <replaceable>seconds</replaceable></option></term>
-        <listitem>
-          <para>the minimum number of <replaceable>seconds</replaceable> it must have been since the last flush to do another one
-	    (defaults to 3.0)</para>
+        <term><option>--timeout_downloaders_interval <replaceable>seconds</replaceable></option></term>
+        <listitem>
+          <para>the number of <replaceable>seconds</replaceable> between expiring downloaders (defaults to 2700)</para>
         </listitem>
       </varlistentry>
       <varlistentry>
@@ -167,125 +253,6 @@
         <listitem>
           <para>the minimum number of <replaceable>seconds</replaceable> before a cache is considered stale and is
 	    flushed (defaults to 600.0)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--allowed_dir <replaceable>directory</replaceable></option></term>
-        <listitem>
-          <para>only allow downloads for torrents in this <replaceable>directory</replaceable> (defaults to '')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--allowed_list <replaceable>file</replaceable></option></term>
-        <listitem>
-          <para>only allow downloads for hashes in this <replaceable>file</replaceable> (hex format, one per
-	    line), cannot be used with allowed_dir (defaults to '')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--allowed_controls</option> 0|1</term>
-        <listitem>
-          <para>whether to allow special keys in torrents in the allowed_dir to affect tracker
-	    access (defaults to 0)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--multitracker_enabled</option> 0|1</term>
-        <listitem>
-          <para>whether to enable multitracker operation (defaults to 0)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--multitracker_allowed</option> autodetect|none|all</term>
-        <listitem>
-          <para>whether to allow incoming tracker announces (can be none, autodetect
-	    or all) (defaults to 'autodetect')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--multitracker_reannounce_interval <replaceable>seconds</replaceable></option></term>
-        <listitem>
-          <para>number of <replaceable>seconds</replaceable> between outgoing tracker announces (defaults to 120)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--multitracker_maxpeers <replaceable>num</replaceable></option></term>
-        <listitem>
-          <para>the <replaceable>num</replaceable> of peers to get in a tracker announce (defaults to 20)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--aggregate_forward <replaceable>url</replaceable>[,<replaceable>password</replaceable>]</option></term>
-        <listitem>
-          <para>if set, forwards all non-multitracker to
-	    this <replaceable>url</replaceable> with this optional <replaceable>password</replaceable> (defaults to '')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--aggregator</option> 0|1|<replaceable>password</replaceable></term>
-        <listitem>
-          <para>whether to act as a data aggregator rather than a tracker. If
-	    enabled, may be 1, or <replaceable>password</replaceable>; if <replaceable>password</replaceable> is set, then an
-	      incoming password is required for access (defaults to '0')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--hupmonitor</option> 0|1</term>
-        <listitem>
-          <para>whether to reopen the log file upon receipt of HUP signal (defaults to 0)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--http_timeout <replaceable>seconds</replaceable></option></term>
-        <listitem>
-          <para>number of <replaceable>seconds</replaceable> to wait before assuming that an http connection has
-	    timed out (defaults to 60)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--parse_dir_interval <replaceable>seconds</replaceable></option></term>
-        <listitem>
-          <para>number of <replaceable>seconds</replaceable> between reloading of allowed_dir (defaults to 60)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--show_infopage</option> 0|1</term>
-        <listitem>
-          <para>whether to display an info page when the tracker's root dir is loaded
-	    (defaults to 1)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--infopage_redirect <replaceable>URL</replaceable></option></term>
-        <listitem>
-          <para>redirect the info page to this <replaceable>URL</replaceable> (defaults to '')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--show_names</option> 0|1</term>
-        <listitem>
-          <para>whether to display names from allowed dir (defaults to 1)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--favicon <replaceable>filename</replaceable></option></term>
-        <listitem>
-          <para>the <replaceable>filename</replaceable> containing x-icon data to return when browser requests
-	    favicon.ico (defaults to '')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--allowed_ips <replaceable>file</replaceable></option></term>
-        <listitem>
-          <para>only allow connections from IPs specified in the given <replaceable>file</replaceable>, which
-	    contains subnet data in the format: aa.bb.cc.dd/len (defaults to '')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--banned_ips <replaceable>file</replaceable></option></term>
-        <listitem>
-          <para>don't allow connections from IPs specified in the given <replaceable>file</replaceable>, which
-	    contains IP range data in the format: xxx:xxx:ip1-ip2 (defaults to '')</para>
         </listitem>
       </varlistentry>
       <varlistentry>
@@ -295,32 +262,6 @@
 	    network IPs (0 = never, 1 = always, 2 = ignore if NAT checking is not
 	    enabled) (defaults to 2)</para>
         </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--logfile <replaceable>file</replaceable></option></term>
-        <listitem>
-          <para>write tracker logs to this <replaceable>file</replaceable>, use '-' for stdout (defaults to '-')</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--allow_get</option> 0|1</term>
-        <listitem>
-          <para>use with allowed_dir; adds a /file?hash=<replaceable>hash</replaceable> URL that allows users
-	    to download the torrent file (defaults to 0)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--keep_dead</option> 0|1</term>
-        <listitem>
-          <para>keep dead torrents after they expire (so they still show up on your
-	    /scrape and web page) (defaults to 0)</para>
-        </listitem>
-      </varlistentry>
-      <varlistentry>
-        <term><option>--scrape_allowed</option> full|specific|none</term>
-        <listitem>
-          <para>scrape access allowed (can be none, specific or full) (defaults to full)</para>
-	</listitem>
       </varlistentry>
       <varlistentry>
         <term><option>--dedicated_seed_id <replaceable>code</replaceable></option></term>
@@ -331,6 +272,138 @@
 	</listitem>
       </varlistentry>
     </variablelist>
+  </refsect2>
+
+  <refsect2>
+    <title>NON-PEER REQUESTS</title>
+    <variablelist>
+      <varlistentry>
+        <term><option>--show_infopage</option> 0|1</term>
+        <listitem>
+          <para>whether to display an info page when the tracker's root dir is loaded
+        (defaults to 1)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--infopage_redirect <replaceable>URL</replaceable></option></term>
+        <listitem>
+          <para>redirect the info page to this <replaceable>URL</replaceable> (defaults to '')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--favicon <replaceable>filename</replaceable></option></term>
+        <listitem>
+          <para>the <replaceable>filename</replaceable> containing x-icon data to return when browser requests
+        favicon.ico (defaults to '')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--show_names</option> 0|1</term>
+        <listitem>
+          <para>whether to display the name of the torrent on the infopage (defaults to 1)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--allow_get</option> 0|1</term>
+        <listitem>
+          <para>use with allowed_dir; adds a /file?hash=<replaceable>hash</replaceable> URL that allows users
+        to download the torrent file (defaults to 0)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--keep_dead</option> 0|1</term>
+        <listitem>
+          <para>keep dead torrents after they expire (so they still show up on your
+	    /scrape and web page) (defaults to 0)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--scrape_allowed</option> full|specific|none</term>
+        <listitem>
+          <para>scrape access allowed (can be none, specific or full) (defaults to full)</para>
+	</listitem>
+      </varlistentry>
+    </variablelist>
+  </refsect2>
+
+  <refsect2>
+    <title>REQUEST LOGGING</title>
+    <variablelist>
+      <varlistentry>
+        <term><option>--min_time_between_log_flushes <replaceable>seconds</replaceable></option></term>
+        <listitem>
+          <para>the minimum number of <replaceable>seconds</replaceable> that must have passed since the last log flush
+        before another one is done (defaults to 3.0)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--hupmonitor</option> 0|1</term>
+        <listitem>
+          <para>whether to reopen the log file upon receipt of HUP signal (defaults to 0)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--log_nat_checks</option> 0|1</term>
+        <listitem>
+          <para>whether to add entries to the log for nat-check results (defaults to 0)</para>
+        </listitem>
+      </varlistentry>
+    </variablelist>
+  </refsect2>
+
+  <refsect2>
+    <title>MULTI-TRACKER</title>
+    <variablelist>
+      <varlistentry>
+        <term><option>--multitracker_enabled</option> 0|1</term>
+        <listitem>
+          <para>whether to enable multitracker operation (defaults to 0)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--multitracker_allowed</option> autodetect|none|all</term>
+        <listitem>
+          <para>whether to allow incoming tracker announces (can be none, autodetect
+	    or all) (defaults to 'autodetect')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--multitracker_reannounce_interval <replaceable>seconds</replaceable></option></term>
+        <listitem>
+          <para>number of <replaceable>seconds</replaceable> between outgoing tracker announces (defaults to 120)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--multitracker_maxpeers <replaceable>num</replaceable></option></term>
+        <listitem>
+          <para>the <replaceable>num</replaceable> of peers to get in a tracker announce (defaults to 20)</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--aggregate_forward <replaceable>url</replaceable>[,<replaceable>password</replaceable>]</option></term>
+        <listitem>
+          <para>if set, forwards all non-multitracker requests to
+	    this <replaceable>url</replaceable> with this optional <replaceable>password</replaceable> (defaults to '')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--aggregator</option> 0|1|<replaceable>password</replaceable></term>
+        <listitem>
+          <para>whether to act as a data aggregator rather than a tracker. If
+	    enabled, may be 1, or <replaceable>password</replaceable>; if <replaceable>password</replaceable> is set, then an
+	      incoming password is required for access (defaults to '0')</para>
+        </listitem>
+      </varlistentry>
+      <varlistentry>
+        <term><option>--http_timeout <replaceable>seconds</replaceable></option></term>
+        <listitem>
+          <para>number of <replaceable>seconds</replaceable> to wait before assuming that an http connection has
+	    timed out (defaults to 60)</para>
+        </listitem>
+      </varlistentry>
+    </variablelist>
+  </refsect2>
+
   </refsect1>
 
   <refsect1>
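
The defaults documented above can be summarised as a small option table. The sketch below is illustrative only, assuming the usual (name, default, description) triple convention used for option defaults elsewhere in the codebase; the authoritative list lives in DebTorrent/BT1/track.py and may differ:

    # Illustrative only; defaults are taken from the man page text above.
    tracker_defaults = [
        ('port', 6969, 'the port to listen on'),
        ('save_dfile_interval', 300, 'seconds between saving the dfile'),
        ('socket_timeout', 15, 'seconds before closing idle sockets'),
        ('timeout_check_interval', 5, 'seconds between timeout checks'),
        ('timeout_downloaders_interval', 2700, 'seconds between expiring downloaders'),
        ('parse_dir_interval', 60, 'seconds between reloads of allowed_dir'),
        ('compact_reqd', 1, 'only allow peers that accept a compact response'),
        ('scrape_allowed', 'full', 'scrape access: none, specific or full'),
    ]

    # Look up a default by option name.
    defaults_by_name = dict([(name, value) for name, value, doc in tracker_defaults])
    print defaults_by_name['port']   # 6969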

Modified: debtorrent/branches/unique/debian/debtorrent.install
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debian/debtorrent.install?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debian/debtorrent.install (original)
+++ debtorrent/branches/unique/debian/debtorrent.install Sun Aug 19 09:33:45 2007
@@ -1,1 +1,2 @@
 debtorrent-client.conf etc/debtorrent
+debtorrent-tracker.conf etc/debtorrent

Modified: debtorrent/branches/unique/debian/rules
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debian/rules?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debian/rules (original)
+++ debtorrent/branches/unique/debian/rules Sun Aug 19 09:33:45 2007
@@ -53,7 +53,7 @@
 	
 	# Remove the .py from the end of each of these and move them out of
 	# the path
-	for i in test btmakemetafile btcompletedir btreannounce \
+	for i in test hippy btmakemetafile btcompletedir btreannounce \
 		btrename btshowmetainfo btcopyannounce btsetdebmirrors; \
 	do mv debian/debtorrent/usr/bin/$$i.py debian/debtorrent/usr/share/debtorrent/$$i; done
 
@@ -68,6 +68,7 @@
 	dh_fixperms
 	dh_pysupport
 	dh_installinit --name=debtorrent-client
+	dh_installinit --name=debtorrent-tracker
 	dh_installdeb
 	dh_shlibdeps
 	dh_gencontrol

Modified: debtorrent/branches/unique/debtorrent-client.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debtorrent-client.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debtorrent-client.py (original)
+++ debtorrent/branches/unique/debtorrent-client.py Sun Aug 19 09:33:45 2007
@@ -26,10 +26,12 @@
 from DebTorrent.launchmanycore import LaunchMany
 from DebTorrent.download_bt1 import defaults, get_usage
 from DebTorrent.parseargs import parseargs
-from sys import argv, exit
+from sys import argv, exit, version_info
 import os, logging
 from DebTorrent import version
 from DebTorrent.ConfigDir import ConfigDir
+
+assert version_info >= (2,3), 'Requires Python 2.3 or better'
 
 logger = logging.getLogger()
 
@@ -40,9 +42,12 @@
     
     @type params: C{list} of C{strings}
     @param params: a list of the command-line arguments given to the script
+    @rtype: C{boolean}
+    @return: whether the server should be restarted
     
     """
-    
+
+    restart = False
     configdefaults = {}
     try:
         # Load the configuration data
@@ -72,7 +77,6 @@
         # Continue
         if config['save_options']:
             configdir.saveConfig(config)
-        configdir.deleteOldCacheData(config['expire_cache_data'])
     except ValueError, e:
         logger.error('error: ' + str(e))
         logger.error("Usage: debtorrent-client.py <global options>")
@@ -83,11 +87,18 @@
         logger.exception('error: ' + str(e))
         logging.shutdown()
         exit(2)
+    except:
+        logger.exception('unhandled exception')
 
-    LaunchMany(config, configdir)
+    try:
+        many_launcher = LaunchMany(config, configdir)
+        restart = many_launcher.run()
+    except:
+        logger.exception('unhandled exception')
 
     logger.info('Shutting down')
     logging.shutdown()
+    return restart
 
 if __name__ == '__main__':
     if argv[1:2] == ['--version']:
@@ -101,4 +112,6 @@
         # pstats.Stats(p).strip_dirs().sort_stats('cumulative').print_stats()
         pstats.Stats(p).strip_dirs().sort_stats('time').print_stats()
     else:
-        run(argv[1:])
+        # Run the client in a loop, exiting when it says it shouldn't be restarted
+        while run(argv[1:]):
+            pass
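
The restart support added here comes down to a simple convention: the entry point returns True when the process should be started again in place. A stripped-down sketch of that loop (the run() helper below is illustrative, not the real function in debtorrent-client.py):

    import sys

    def run(params):
        """Start the client and return True if it should be restarted."""
        restart = False
        try:
            # ... load configuration, then launch the downloader; the real
            # code obtains the flag from LaunchMany(config, configdir).run()
            pass
        except:
            # the real code logs the traceback with logger.exception()
            restart = False
        return restart

    if __name__ == '__main__':
        # Keep re-running until the client reports that no restart is wanted.
        while run(sys.argv[1:]):
            pass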

Modified: debtorrent/branches/unique/debtorrent-tracker.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/debtorrent-tracker.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/debtorrent-tracker.py (original)
+++ debtorrent/branches/unique/debtorrent-tracker.py Sun Aug 19 09:33:45 2007
@@ -19,8 +19,10 @@
 
 PROFILE = 0
     
-from sys import argv
+from sys import argv, version_info
 from DebTorrent.BT1.track import track
+
+assert version_info >= (2,3), 'Requires Python 2.3 or better'
 
 if __name__ == '__main__':
     if PROFILE:
@@ -31,4 +33,6 @@
 #        pstats.Stats(p).strip_dirs().sort_stats('cumulative').print_stats()
         pstats.Stats(p).strip_dirs().sort_stats('time').print_stats()
     else:
-        track(argv[1:])
+        # Run the tracker in a loop, exiting when it says it shouldn't be restarted
+        while track(argv[1:]):
+            pass

Modified: debtorrent/branches/unique/setup.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/setup.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/setup.py (original)
+++ debtorrent/branches/unique/setup.py Sun Aug 19 09:33:45 2007
@@ -15,7 +15,7 @@
 """
 
 import sys
-assert sys.version >= '2', "Install Python 2.0 or greater"
+assert sys.version_info >= (2,3), "Install Python 2.3 or greater"
 from distutils.core import setup, Extension
 import DebTorrent
 
@@ -31,7 +31,7 @@
 
     scripts = ["btmakemetafile.py", "btcompletedir.py", "btreannounce.py", 
                "btrename.py", "btshowmetainfo.py", 'btcopyannounce.py',
-               'btsetdebmirrors.py', 'test.py',
+               'btsetdebmirrors.py', 'test.py', 'hippy.py',
                'debtorrent-client.py', 'debtorrent-tracker.py'
         ]
     )
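
The switch from sys.version to sys.version_info matters because version strings do not compare numerically, while tuples do. A small demonstration:

    import sys

    # Tuples compare element by element, so minor versions order correctly.
    assert (2, 3, 0) < (2, 10, 0)     # numeric: 3 < 10

    # Strings compare character by character, which gets this wrong.
    assert '2.3.0' > '2.10.0'         # lexicographic: '3' > '1'

    # Hence the new check expresses the 2.3 requirement reliably.
    assert sys.version_info >= (2, 3), "Install Python 2.3 or greater"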

Modified: debtorrent/branches/unique/test.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/branches/unique/test.py?rev=272&op=diff
==============================================================================
--- debtorrent/branches/unique/test.py (original)
+++ debtorrent/branches/unique/test.py Sun Aug 19 09:33:45 2007
@@ -173,7 +173,85 @@
               (1, ['update']),
               ]),
 
-        '7': ('Run this test multiple times to test restarting the downloader.',
+        '7': ('Test pipelining of multiple simultaneous downloads.',
+             {1: []},
+             {1: (1, [], {})},
+             [(1, ['update']), 
+              (1, ['install', 'aboot-base', 'aap-doc', 'ada-reference-manual',
+                   'aspectj-doc', 'fop-doc', 'jswat-doc', 'asis-doc',
+                   'bison-doc', 'crash-whitepaper', 'doc-iana',
+                   'bash-doc', 'apt-howto-common', 'autotools-dev',
+                   'aptitude-doc-en', 'armagetron-common', 'asr-manpages',
+                   'atomix-data', 'alcovebook-sgml-doc', 'alamin-doc',
+                   'aegis-doc', 'afbackup-common', 'airstrike-common',
+                   ]),
+              ]),
+
+        '8': ('Test pipelining of multiple simultaneous downloads with many peers.',
+             {1: []},
+             {1: (1, [], {}),
+              2: (1, [], {}),
+              3: (1, [], {}),
+              4: (1, [], {}),
+              5: (1, [], {}),
+              6: (1, [], {})},
+             [(1, ['update']), 
+              (1, ['install', 'aboot-base', 'aap-doc', 'ada-reference-manual',
+                   'aspectj-doc', 'fop-doc', 'jswat-doc', 'asis-doc',
+                   'bison-doc', 'crash-whitepaper', 'doc-iana',
+                   'bash-doc', 'apt-howto-common', 'autotools-dev',
+                   'aptitude-doc-en', 'armagetron-common', 'asr-manpages',
+                   'atomix-data', 'alcovebook-sgml-doc', 'alamin-doc',
+                   'aegis-doc', 'afbackup-common', 'airstrike-common',
+                   ]),
+              (2, ['update']), 
+              (2, ['install', 'aboot-base', 'aap-doc', 'ada-reference-manual',
+                   'aspectj-doc', 'fop-doc', 'jswat-doc', 'asis-doc',
+                   'bison-doc', 'crash-whitepaper', 'doc-iana',
+                   'bash-doc', 'apt-howto-common', 'autotools-dev',
+                   'aptitude-doc-en', 'armagetron-common', 'asr-manpages',
+                   'atomix-data', 'alcovebook-sgml-doc', 'alamin-doc',
+                   'aegis-doc', 'afbackup-common', 'airstrike-common',
+                   ]),
+              (3, ['update']), 
+              (3, ['install', 'aboot-base', 'aap-doc', 'ada-reference-manual',
+                   'aspectj-doc', 'fop-doc', 'jswat-doc', 'asis-doc',
+                   'bison-doc', 'crash-whitepaper', 'doc-iana',
+                   'bash-doc', 'apt-howto-common', 'autotools-dev',
+                   'aptitude-doc-en', 'armagetron-common', 'asr-manpages',
+                   'atomix-data', 'alcovebook-sgml-doc', 'alamin-doc',
+                   'aegis-doc', 'afbackup-common', 'airstrike-common',
+                   ]),
+              (4, ['update']), 
+              (4, ['install', 'aboot-base', 'aap-doc', 'ada-reference-manual',
+                   'aspectj-doc', 'fop-doc', 'jswat-doc', 'asis-doc',
+                   'bison-doc', 'crash-whitepaper', 'doc-iana',
+                   'bash-doc', 'apt-howto-common', 'autotools-dev',
+                   'aptitude-doc-en', 'armagetron-common', 'asr-manpages',
+                   'atomix-data', 'alcovebook-sgml-doc', 'alamin-doc',
+                   'aegis-doc', 'afbackup-common', 'airstrike-common',
+                   ]),
+              (5, ['update']), 
+              (5, ['install', 'aboot-base', 'aap-doc', 'ada-reference-manual',
+                   'aspectj-doc', 'fop-doc', 'jswat-doc', 'asis-doc',
+                   'bison-doc', 'crash-whitepaper', 'doc-iana',
+                   'bash-doc', 'apt-howto-common', 'autotools-dev',
+                   'aptitude-doc-en', 'armagetron-common', 'asr-manpages',
+                   'atomix-data', 'alcovebook-sgml-doc', 'alamin-doc',
+                   'aegis-doc', 'afbackup-common', 'airstrike-common',
+                   ]),
+              (6, ['update']), 
+              (6, ['install', 'aboot-base', 'aap-doc', 'ada-reference-manual',
+                   'aspectj-doc', 'fop-doc', 'jswat-doc', 'asis-doc',
+                   'bison-doc', 'crash-whitepaper', 'doc-iana',
+                   'bash-doc', 'apt-howto-common', 'autotools-dev',
+                   'aptitude-doc-en', 'armagetron-common', 'asr-manpages',
+                   'atomix-data', 'alcovebook-sgml-doc', 'alamin-doc',
+                   'aegis-doc', 'afbackup-common', 'airstrike-common',
+                   ]),
+              ]),
+
+        '9': ('Run this test multiple times to test restarting the downloader.',
              {1: []},
              {1: (1, [], {'clean': False})},
              [(1, ['update']), 
@@ -189,7 +267,7 @@
               (1, ['install', 'doc-iana']),
               ]),
 
-        '8': ('Test updating to a (possibly) out-of-date mirror.',
+        '10': ('Test updating to a (possibly) out-of-date mirror.',
              {1: []},
              {1: (1, [], {'mirror': 'debian.mirror.iweb.ca/debian'})},
              [(1, ['update']),
@@ -497,7 +575,7 @@
     if not exists(join([downloader_dir, 'etc', 'apt', 'sources.list'])):
         # Create apt's config files
         f = open(join([downloader_dir, 'etc', 'apt', 'sources.list']), 'w')
-        f.write('deb http://localhost:' + str(num_down) + '988/' + mirror + '/ unstable ' + suites + '\n')
+        f.write('deb debtorrent://localhost:' + str(num_down) + '988/' + mirror + '/ unstable ' + suites + '\n')
         f.close()
 
     if not exists(join([downloader_dir, 'etc', 'apt', 'apt.conf'])):
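
The sources.list entry the test writes now points apt at the local debtorrent client on port <num_down>988 using the debtorrent: scheme instead of plain http:. For illustration only (the mirror and suites values below are made up; the real ones come from the test's parameters):

    # Illustrative values only; the real test derives these from its setup.
    num_down = 1
    mirror = 'ftp.us.debian.org/debian'
    suites = 'main contrib non-free'

    line = ('deb debtorrent://localhost:' + str(num_down) + '988/' +
            mirror + '/ unstable ' + suites + '\n')
    print line,
    # deb debtorrent://localhost:1988/ftp.us.debian.org/debian/ unstable main contrib non-free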



