r103 - in /debtorrent/trunk: ./ DebTorrent/ DebTorrent/BT1/ docs/
camrdale-guest at users.alioth.debian.org
Wed Jun 13 21:32:06 UTC 2007
Author: camrdale-guest
Date: Wed Jun 13 21:32:05 2007
New Revision: 103
URL: http://svn.debian.org/wsvn/debtorrent/?sc=1&rev=103
Log:
Merged revisions 46-59,61-70,72-102 via svnmerge from
svn+ssh://camrdale-guest@svn.debian.org/svn/debtorrent/debtorrent/branches/http-listen
................
r46 | camrdale-guest | 2007-05-09 23:41:07 -0700 (Wed, 09 May 2007) | 1 line
Made downloaders listen on another port, with an HTTPHandler for it, and an AptListener copied from the tracker
................
r47 | camrdale-guest | 2007-05-10 16:48:08 -0700 (Thu, 10 May 2007) | 1 line
Make priority default to never (currently causes error due to bittornado bug)
................
r61 | camrdale-guest | 2007-05-23 23:01:53 -0700 (Wed, 23 May 2007) | 13 lines
Blocked revisions 51,53 via svnmerge
........
r51 | camrdale-guest | 2007-05-17 20:55:42 -0700 (Thu, 17 May 2007) | 1 line
Add ability to parse dpkg status for priorities of files to download (only for btdownloadheadless)
........
r53 | camrdale-guest | 2007-05-18 12:54:26 -0700 (Fri, 18 May 2007) | 2 lines
Update status parsing to work better with saveas options.
Add status parsing to btlaunchmany.
........
................
r72 | camrdale-guest | 2007-05-28 23:01:30 -0700 (Mon, 28 May 2007) | 1 line
Remove old docstring references to dpkg status stuff.
................
r73 | camrdale-guest | 2007-05-30 14:17:44 -0700 (Wed, 30 May 2007) | 2 lines
Completed changing of default to all files disabled.
Some modifications to DEBUG messages.
................
r74 | camrdale-guest | 2007-05-30 17:36:28 -0700 (Wed, 30 May 2007) | 1 line
More DEBUG messages
................
r75 | camrdale-guest | 2007-05-31 19:07:36 -0700 (Thu, 31 May 2007) | 1 line
Documented the HTTPDownloader before modifying it.
................
r76 | camrdale-guest | 2007-05-31 22:56:58 -0700 (Thu, 31 May 2007) | 1 line
First attempt at modifying HTTPDownloader to work with mirrors.
................
r77 | camrdale-guest | 2007-06-01 19:08:32 -0700 (Fri, 01 Jun 2007) | 1 line
Rename the http-seeds to deb_mirrors
................
r78 | camrdale-guest | 2007-06-01 19:10:56 -0700 (Fri, 01 Jun 2007) | 1 line
Moved btsethttpseeds to btsetdebmirrors
................
r79 | camrdale-guest | 2007-06-01 22:01:03 -0700 (Fri, 01 Jun 2007) | 1 line
More documentation.
................
r80 | camrdale-guest | 2007-06-01 22:22:54 -0700 (Fri, 01 Jun 2007) | 1 line
Remove unneeded tracker stuff from AptListener, and document the remaining.
................
r81 | camrdale-guest | 2007-06-01 22:24:27 -0700 (Fri, 01 Jun 2007) | 1 line
Set the ID keyword on AptListener.
................
r82 | camrdale-guest | 2007-06-01 22:45:31 -0700 (Fri, 01 Jun 2007) | 1 line
Fix some problems with the documentation.
................
r83 | camrdale-guest | 2007-06-01 22:50:00 -0700 (Fri, 01 Jun 2007) | 1 line
Add the new files to the epydoc config.
................
r84 | camrdale-guest | 2007-06-02 13:35:00 -0700 (Sat, 02 Jun 2007) | 1 line
More documentation.
................
r85 | camrdale-guest | 2007-06-03 22:05:08 -0700 (Sun, 03 Jun 2007) | 2 lines
Make AptListener proxy download all requested files.
Add AptListener configuration options.
................
r95 | camrdale-guest | 2007-06-08 19:55:23 -0700 (Fri, 08 Jun 2007) | 2 lines
Make backup HTTP downloading from deb_mirrors work.
Tested to work on a mirror, including with sub-package pieces (Yay).
................
r96 | camrdale-guest | 2007-06-11 00:05:55 -0700 (Mon, 11 Jun 2007) | 1 line
Some more documentation.
................
r97 | camrdale-guest | 2007-06-11 15:38:06 -0700 (Mon, 11 Jun 2007) | 1 line
First attempt at downloading packages through debtorrent (currently broken).
................
r98 | camrdale-guest | 2007-06-12 14:57:07 -0700 (Tue, 12 Jun 2007) | 1 line
Make the initial state all pieces uninteresting.
................
r99 | camrdale-guest | 2007-06-12 20:05:21 -0700 (Tue, 12 Jun 2007) | 2 lines
Getting package (.deb) files from torrents works.
Fixed info_page display (though it's boring).
................
r100 | camrdale-guest | 2007-06-12 23:33:07 -0700 (Tue, 12 Jun 2007) | 1 line
Fix resuming of downloads (broken due to default of all files disabled).
................
r101 | camrdale-guest | 2007-06-13 14:00:55 -0700 (Wed, 13 Jun 2007) | 2 lines
AptListener starts torrents when downloading Packages files.
btlaunchmany no longer scans a directory.
................
Added:
debtorrent/trunk/DebTorrent/BT1/AptListener.py (contents, props changed)
- copied, changed from r47, debtorrent/branches/http-listen/DebTorrent/BT1/AptListener.py
debtorrent/trunk/btsetdebmirrors.py
- copied unchanged from r101, debtorrent/branches/http-listen/btsetdebmirrors.py
Removed:
debtorrent/trunk/btsethttpseeds.py
Modified:
debtorrent/trunk/ (props changed)
debtorrent/trunk/DebTorrent/BT1/FileSelector.py
debtorrent/trunk/DebTorrent/BT1/HTTPDownloader.py
debtorrent/trunk/DebTorrent/BT1/PiecePicker.py
debtorrent/trunk/DebTorrent/BT1/Storage.py
debtorrent/trunk/DebTorrent/BT1/makemetafile.py
debtorrent/trunk/DebTorrent/BT1/track.py
debtorrent/trunk/DebTorrent/ConnChoice.py
debtorrent/trunk/DebTorrent/RateLimiter.py
debtorrent/trunk/DebTorrent/RawServer.py
debtorrent/trunk/DebTorrent/ServerPortHandler.py
debtorrent/trunk/DebTorrent/SocketHandler.py
debtorrent/trunk/DebTorrent/__init__.py
debtorrent/trunk/DebTorrent/download_bt1.py
debtorrent/trunk/DebTorrent/launchmanycore.py
debtorrent/trunk/DebTorrent/piecebuffer.py
debtorrent/trunk/DebTorrent/zurllib.py
debtorrent/trunk/btdownloadheadless.py
debtorrent/trunk/btshowmetainfo.py
debtorrent/trunk/docs/epydoc.config
debtorrent/trunk/setup.py
Propchange: debtorrent/trunk/
------------------------------------------------------------------------------
--- svn:ignore (original)
+++ svn:ignore Wed Jun 13 21:32:05 2007
@@ -1,1 +1,3 @@
*.pyc
+.project
+.pydevproject
Propchange: debtorrent/trunk/
------------------------------------------------------------------------------
svnmerge-blocked = /debtorrent/trunk:51,53
Propchange: debtorrent/trunk/
------------------------------------------------------------------------------
--- svnmerge-integrated (original)
+++ svnmerge-integrated Wed Jun 13 21:32:05 2007
@@ -1,1 +1,1 @@
-/debtorrent/branches/hippy:1-87 /debtorrent/branches/http-listen:1-45
+/debtorrent/branches/hippy:1-87 /debtorrent/branches/http-listen:1-102
Copied: debtorrent/trunk/DebTorrent/BT1/AptListener.py (from r47, debtorrent/branches/http-listen/DebTorrent/BT1/AptListener.py)
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/BT1/AptListener.py?rev=103&op=diff
==============================================================================
--- debtorrent/branches/http-listen/DebTorrent/BT1/AptListener.py (original)
+++ debtorrent/trunk/DebTorrent/BT1/AptListener.py Wed Jun 13 21:32:05 2007
@@ -1,15 +1,21 @@
# Written by Cameron Dale
# see LICENSE.txt for license information
-
+#
# $Id$
+
+"""Listen for download requests from Apt.
+
+ at type alas: C{string}
+ at var alas: the message to send when the data is not found
+ at type VERSION: C{string}
+ at var VERSION: the Server identifier sent to all sites
+
+"""
from DebTorrent.parseargs import parseargs, formatDefinitions
from DebTorrent.RawServer import RawServer, autodetect_ipv6, autodetect_socket_style
from DebTorrent.HTTPHandler import HTTPHandler, months, weekdays
from DebTorrent.parsedir import parsedir
-from NatCheck import NatCheck, CHECK_PEER_ID_ENCRYPTED
-from DebTorrent.BTcrypto import CRYPTO_OK
-from T2T import T2TList
from DebTorrent.subnetparse import IP_List, ipv6_to_ipv4, to_ipv4, is_valid_ip, is_ipv4
from DebTorrent.iprangeparse import IP_List as IP_Range_List
from DebTorrent.torrentlistparse import parsetorrentlist
@@ -21,6 +27,8 @@
from os import rename, getpid
from os.path import exists, isfile
from cStringIO import StringIO
+from gzip import GzipFile
+from bz2 import decompress
from traceback import print_exc
from time import time, gmtime, strftime, localtime
from DebTorrent.clock import clock
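The new `GzipFile` and `bz2` imports are what let the AptListener decompress Packages files fetched from a mirror. A minimal sketch of those two decompression paths (written in modern Python; `decompress_packages` is an illustrative helper, not a function from this commit):

```python
import bz2
import gzip
import io

def decompress_packages(data, path):
    """Return the plain-text contents of a (possibly compressed) Packages file.

    Picks a decompressor from the file extension, mirroring the
    GzipFile/bz2 imports added to AptListener.
    """
    if path.endswith('.gz'):
        # GzipFile wants a file-like object, hence the BytesIO wrapper
        return gzip.GzipFile(fileobj=io.BytesIO(data)).read()
    if path.endswith('.bz2'):
        # bz2 offers a one-shot decompress() for whole in-memory buffers
        return bz2.decompress(data)
    return data  # already uncompressed

original = b'Package: debtorrent\nVersion: 0.1\n'
compressed = gzip.compress(original)
assert decompress_packages(compressed, 'dists/sid/Packages.gz') == original
```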
@@ -29,76 +37,35 @@
from types import StringType, IntType, LongType, ListType, DictType
from binascii import b2a_hex, a2b_hex, a2b_base64
from string import lower
+from time import sleep
+from makemetafile import uniconvertl, uniconvert
+from os.path import split
import sys, os
import signal
import re
import DebTorrent.__init__
-from DebTorrent.__init__ import version, createPeerID
+from DebTorrent.__init__ import version, createPeerID, product_name, version_short
+
try:
True
except:
True = 1
False = 0
bool = lambda x: not not x
-
-defaults = [
- ('port', 80, "Port to listen on."),
- ('dfile', None, 'file to store recent downloader info in'),
- ('bind', '', 'comma-separated list of ips/hostnames to bind to locally'),
-# ('ipv6_enabled', autodetect_ipv6(),
- ('ipv6_enabled', 0,
- 'allow the client to connect to peers via IPv6'),
- ('ipv6_binds_v4', autodetect_socket_style(),
- 'set if an IPv6 server socket will also field IPv4 connections'),
- ('socket_timeout', 15, 'timeout for closing connections'),
- ('save_dfile_interval', 5 * 60, 'seconds between saving dfile'),
- ('timeout_downloaders_interval', 45 * 60, 'seconds between expiring downloaders'),
- ('reannounce_interval', 30 * 60, 'seconds downloaders should wait between reannouncements'),
- ('response_size', 50, 'number of peers to send in an info message'),
- ('timeout_check_interval', 5,
- 'time to wait between checking if any connections have timed out'),
- ('nat_check', 3,
- "how many times to check if a downloader is behind a NAT (0 = don't check)"),
- ('log_nat_checks', 0,
- "whether to add entries to the log for nat-check results"),
- ('min_time_between_log_flushes', 3.0,
- 'minimum time it must have been since the last flush to do another one'),
- ('min_time_between_cache_refreshes', 600.0,
- 'minimum time in seconds before a cache is considered stale and is flushed'),
- ('allowed_dir', '', 'only allow downloads for .dtorrents in this dir'),
- ('allowed_list', '', 'only allow downloads for hashes in this list (hex format, one per line)'),
- ('allowed_controls', 0, 'allow special keys in torrents in the allowed_dir to affect tracker access'),
- ('multitracker_enabled', 0, 'whether to enable multitracker operation'),
- ('multitracker_allowed', 'autodetect', 'whether to allow incoming tracker announces (can be none, autodetect or all)'),
- ('multitracker_reannounce_interval', 2 * 60, 'seconds between outgoing tracker announces'),
- ('multitracker_maxpeers', 20, 'number of peers to get in a tracker announce'),
- ('aggregate_forward', '', 'format: <url>[,<password>] - if set, forwards all non-multitracker to this url with this optional password'),
- ('aggregator', '0', 'whether to act as a data aggregator rather than a tracker. If enabled, may be 1, or <password>; ' +
- 'if password is set, then an incoming password is required for access'),
- ('hupmonitor', 0, 'whether to reopen the log file upon receipt of HUP signal'),
- ('http_timeout', 60,
- 'number of seconds to wait before assuming that an http connection has timed out'),
- ('parse_dir_interval', 60, 'seconds between reloading of allowed_dir or allowed_file ' +
- 'and allowed_ips and banned_ips lists'),
- ('show_infopage', 1, "whether to display an info page when the tracker's root dir is loaded"),
- ('infopage_redirect', '', 'a URL to redirect the info page to'),
- ('show_names', 1, 'whether to display names from allowed dir'),
- ('favicon', '', 'file containing x-icon data to return when browser requests favicon.ico'),
- ('allowed_ips', '', 'only allow connections from IPs specified in the given file; '+
- 'file contains subnet data in the format: aa.bb.cc.dd/len'),
- ('banned_ips', '', "don't allow connections from IPs specified in the given file; "+
- 'file contains IP range data in the format: xxx:xxx:ip1-ip2'),
- ('only_local_override_ip', 2, "ignore the ip GET parameter from machines which aren't on local network IPs " +
- "(0 = never, 1 = always, 2 = ignore if NAT checking is not enabled)"),
- ('logfile', '', 'file to write the tracker logs, use - for stdout (default)'),
- ('allow_get', 0, 'use with allowed_dir; adds a /file?hash={hash} url that allows users to download the torrent file'),
- ('keep_dead', 0, 'keep dead torrents after they expire (so they still show up on your /scrape and web page)'),
- ('scrape_allowed', 'full', 'scrape access allowed (can be none, specific or full)'),
- ('dedicated_seed_id', '', 'allows tracker to monitor dedicated seed(s) and flag torrents as seeded'),
- ('compact_reqd', 1, "only allow peers that accept a compact response"),
- ]
+
+DEBUG = True
+
+VERSION = product_name+'/'+version_short
def statefiletemplate(x):
+ """Check the saved state file for corruption.
+
+ @type x: C{dictionary}
+ @param x: the dictionary of information retrieved from the state file
+ @raise ValueError: if the state file info is corrupt
+
+ """
+
if type(x) != DictType:
raise ValueError
for cname, cinfo in x.items():
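statefiletemplate's strategy is to walk the unpickled state dictionary and raise ValueError on anything of an unexpected type. A condensed sketch of that pattern (modern Python, with a simplified hypothetical schema rather than the real state layout):

```python
def check_state(state):
    """Reject a corrupt saved-state dictionary, in the spirit of
    statefiletemplate: any wrong container or value type raises ValueError."""
    if not isinstance(state, dict):
        raise ValueError('state file is not a dictionary')
    for name, value in state.items():
        if not isinstance(name, str):
            raise ValueError('state key %r is not a string' % (name,))
        if not isinstance(value, (dict, list, int, str)):
            raise ValueError('unexpected type for state entry %r' % (name,))
    return True

assert check_state({'allowed': {}, 'completed': 0})
try:
    check_state(['not', 'a', 'dict'])
except ValueError:
    pass  # corruption detected, as intended
```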
@@ -153,68 +120,102 @@
alas = 'your file may exist elsewhere in the universe\nbut alas, not here\n'
-local_IPs = IP_List()
-local_IPs.set_intranet_addresses()
-
def isotime(secs = None):
+ """Create an ISO formatted string of the time.
+
+ @type secs: C{float}
+ @param secs: number of seconds since the epoch
+ (optional, default is to use the current time)
+ @rtype: C{string}
+ @return: the ISO formatted string representation of the time
+
+ """
+
if secs == None:
secs = time()
return strftime('%Y-%m-%d %H:%M UTC', gmtime(secs))
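For illustration, the isotime helper's formatting can be reproduced and exercised in modern Python (same format string as above; UTC at minute precision):

```python
from time import gmtime, strftime, time

def isotime(secs=None):
    # Mirrors the AptListener helper: default to now, format in UTC
    if secs is None:
        secs = time()
    return strftime('%Y-%m-%d %H:%M UTC', gmtime(secs))

assert isotime(0) == '1970-01-01 00:00 UTC'
```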
-http_via_filter = re.compile(' for ([0-9.]+)\Z')
-
-def _get_forwarded_ip(headers):
- header = headers.get('x-forwarded-for')
- if header:
- try:
- x,y = header.split(',')
- except:
- return header
- if is_valid_ip(x) and not local_IPs.includes(x):
- return x
- return y
- header = headers.get('client-ip')
- if header:
- return header
- header = headers.get('via')
- if header:
- x = http_via_filter.search(header)
- try:
- return x.group(1)
- except:
- pass
- header = headers.get('from')
- #if header:
- # return header
- #return None
- return header
-
-def get_forwarded_ip(headers):
- x = _get_forwarded_ip(headers)
- if not is_valid_ip(x) or local_IPs.includes(x):
- return None
- return x
-
-def compact_peer_info(ip, port):
- try:
- s = ( ''.join([chr(int(i)) for i in ip.split('.')])
- + chr((port & 0xFF00) >> 8) + chr(port & 0xFF) )
- if len(s) != 6:
- raise ValueError
- except:
- s = '' # not a valid IP, must be a domain name
- return s
-
class AptListener:
- def __init__(self, config, rawserver):
+ """Listen for Apt requests to download files.
+
+ @type handler: unknown
+ @ivar handler: the download handler to use
+ @type config: C{dictionary}
+ @ivar config: the configuration parameters
+ @type dfile: C{string}
+ @ivar dfile: the state file to use when saving the current state
+ @type parse_dir_interval: C{int}
+ @ivar parse_dir_interval: seconds between reloading of the allowed
+ directory or file, and the lists of allowed and banned IPs
+ @type favicon: C{string}
+ @ivar favicon: file containing x-icon data
+ @type rawserver: L{DebTorrent.RawServer}
+ @ivar rawserver: the server to use for scheduling
+ @type times: unknown
+ @ivar times: unknown
+ @type state: C{dictionary}
+ @ivar state: the current state information for the tracking
+ @type allowed_IPs: unknown
+ @ivar allowed_IPs: unknown
+ @type banned_IPs: unknown
+ @ivar banned_IPs: unknown
+ @type allowed_ip_mtime: unknown
+ @ivar allowed_ip_mtime: unknown
+ @type banned_ip_mtime: unknown
+ @ivar banned_ip_mtime: unknown
+ @type trackerid: unknown
+ @ivar trackerid: unknown
+ @type save_dfile_interval: C{int}
+ @ivar save_dfile_interval: seconds between saving the state file
+ @type show_names: C{boolean}
+ @ivar show_names: whether to display names from allowed dir
+ @type prevtime: unknown
+ @ivar prevtime: unknown
+ @type logfile: unknown
+ @ivar logfile: unknown
+ @type log: unknown
+ @ivar log: unknown
+ @type allow_get: unknown
+ @ivar allow_get: unknown
+ @type allowed: unknown
+ @ivar allowed: unknown
+ @type allowed_list_mtime: unknown
+ @ivar allowed_list_mtime: unknown
+ @type allowed_dir_files: unknown
+ @ivar allowed_dir_files: unknown
+ @type allowed_dir_blocked: unknown
+ @ivar allowed_dir_blocked: unknown
+ @type uq_broken: unknown
+ @ivar uq_broken: unknown
+ @type Filter: unknown
+ @ivar Filter: unknown
+ @type request_queue: C{dictionary}
+ @ivar request_queue: the pending HTTP get requests that are waiting for download.
+ Keys are L{DebTorrent.HTTPHandler.HTTPConnection} objects, values are
+ (L{DebTorrent.download_bt1.BT1Download}, C{int}, C{list} of C{int}, C{float})
+ which are the torrent downloader, file index, list of pieces needed, and
+ the time of the original request.
+
+ """
+
+ def __init__(self, handler, config, rawserver):
+ """Initialize the instance.
+
+ @type handler: unknown
+ @param handler: the download handler to use
+ @type config: C{dictionary}
+ @param config: the configuration parameters
+ @type rawserver: L{DebTorrent.RawServer}
+ @param rawserver: the server to use for scheduling
+
+ """
+
+ self.handler = handler
self.config = config
- return
- self.response_size = config['response_size']
self.dfile = config['dfile']
- self.natcheck = config['nat_check']
favicon = config['favicon']
- self.parse_dir_interval = config['parse_dir_interval']
+ self.parse_dir_interval = config['apt_parse_dir_interval']
self.favicon = None
if favicon:
try:
@@ -224,11 +225,8 @@
except:
print "**warning** specified favicon file -- %s -- does not exist." % favicon
self.rawserver = rawserver
- self.cached = {} # format: infohash: [[time1, l1, s1], [time2, l2, s2], ...]
- self.cached_t = {} # format: infohash: [time, cache]
self.times = {}
self.state = {}
- self.seedcount = {}
self.allowed_IPs = None
self.banned_IPs = None
@@ -237,14 +235,6 @@
self.banned_ip_mtime = 0
self.read_ip_lists()
- self.only_local_override_ip = config['only_local_override_ip']
- if self.only_local_override_ip == 2:
- self.only_local_override_ip = not config['nat_check']
-
- if CHECK_PEER_ID_ENCRYPTED and not CRYPTO_OK:
- print ('**warning** crypto library not installed,' +
- ' cannot completely verify encrypted peers')
-
if exists(self.dfile):
try:
h = open(self.dfile, 'rb')
@@ -257,56 +247,14 @@
self.state = tempstate
except:
print '**warning** statefile '+self.dfile+' corrupt; resetting'
- self.downloads = self.state.setdefault('peers', {})
- self.completed = self.state.setdefault('completed', {})
-
- self.becache = {}
- ''' format: infohash: [[l0, s0], [l1, s1], ...]
- l0,s0 = compact, not requirecrypto=1
- l1,s1 = compact, only supportcrypto=1
- l2,s2 = [compact, crypto_flag], all peers
- if --compact_reqd 0:
- l3,s3 = [ip,port,id]
- l4,l4 = [ip,port] nopeerid
- '''
- if config['compact_reqd']:
- x = 3
- else:
- x = 5
- self.cache_default = [({},{}) for i in xrange(x)]
- for infohash, ds in self.downloads.items():
- self.seedcount[infohash] = 0
- for x,y in ds.items():
- ip = y['ip']
- if ( (self.allowed_IPs and not self.allowed_IPs.includes(ip))
- or (self.banned_IPs and self.banned_IPs.includes(ip)) ):
- del ds[x]
- continue
- if not y['left']:
- self.seedcount[infohash] += 1
- if y.get('nat',-1):
- continue
- gip = y.get('given_ip')
- if is_valid_ip(gip) and (
- not self.only_local_override_ip or local_IPs.includes(ip) ):
- ip = gip
- self.natcheckOK(infohash,x,ip,y['port'],y)
-
- for x in self.downloads.keys():
- self.times[x] = {}
- for y in self.downloads[x].keys():
- self.times[x][y] = 0
self.trackerid = createPeerID('-T-')
seed(self.trackerid)
- self.reannounce_interval = config['reannounce_interval']
self.save_dfile_interval = config['save_dfile_interval']
self.show_names = config['show_names']
- rawserver.add_task(self.save_state, self.save_dfile_interval)
+ #rawserver.add_task(self.save_state, self.save_dfile_interval)
self.prevtime = clock()
- self.timeout_downloaders_interval = config['timeout_downloaders_interval']
- rawserver.add_task(self.expire_downloaders, self.timeout_downloaders_interval)
self.logfile = None
self.log = None
if (config['logfile']) and (config['logfile'] != '-'):
@@ -332,11 +280,6 @@
self.allow_get = config['allow_get']
- self.t2tlist = T2TList(config['multitracker_enabled'], self.trackerid,
- config['multitracker_reannounce_interval'],
- config['multitracker_maxpeers'], config['http_timeout'],
- self.rawserver)
-
if config['allowed_list']:
if config['allowed_dir']:
print '**warning** allowed_dir and allowed_list options cannot be used together'
@@ -346,9 +289,6 @@
self.allowed_list_mtime = 0
self.parse_allowed()
self.remove_from_state('allowed','allowed_dir_files')
- if config['multitracker_allowed'] == 'autodetect':
- config['multitracker_allowed'] = 'none'
- config['allowed_controls'] = 0
elif config['allowed_dir']:
self.allowed = self.state.setdefault('allowed',{})
@@ -360,70 +300,74 @@
else:
self.allowed = None
self.remove_from_state('allowed','allowed_dir_files', 'allowed_list')
- if config['multitracker_allowed'] == 'autodetect':
- config['multitracker_allowed'] = 'none'
- config['allowed_controls'] = 0
self.uq_broken = unquote('+') != ' '
- self.keep_dead = config['keep_dead']
self.Filter = Filter(rawserver.add_task)
- aggregator = config['aggregator']
- if aggregator == '0':
- self.is_aggregator = False
- self.aggregator_key = None
- else:
- self.is_aggregator = True
- if aggregator == '1':
- self.aggregator_key = None
- else:
- self.aggregator_key = aggregator
- self.natcheck = False
-
- send = config['aggregate_forward']
- if not send:
- self.aggregate_forward = None
- else:
- try:
- self.aggregate_forward, self.aggregate_password = send.split(',')
- except:
- self.aggregate_forward = send
- self.aggregate_password = None
-
- self.dedicated_seed_id = config['dedicated_seed_id']
- self.is_seeded = {}
-
- self.cachetime = 0
- self.cachetimeupdate()
-
- def cachetimeupdate(self):
- self.cachetime += 1 # raw clock, but more efficient for cache
- self.rawserver.add_task(self.cachetimeupdate,1)
-
- def aggregate_senddata(self, query):
- url = self.aggregate_forward+'?'+query
- if self.aggregate_password is not None:
- url += '&password='+self.aggregate_password
- rq = Thread(target = self._aggregate_senddata, args = [url])
- rq.setDaemon(False)
- rq.start()
-
- def _aggregate_senddata(self, url): # just send, don't attempt to error check,
- try: # discard any returned data
- h = urlopen(url)
- h.read()
- h.close()
- except:
- return
+ self.request_queue = {}
+ rawserver.add_task(self.process_queue, 1)
+
+ def enqueue_request(self, connection, downloader, file_num, pieces_needed):
+ """Add a new download request to the queue of those waiting for pieces.
+
+ @type connection: L{DebTorrent.HTTPHandler.HTTPConnection}
+    @param connection: the connection the request came in on
+ @type downloader: L{DebTorrent.download_bt1.BT1Download}
+ @param downloader: the torrent download that has the file
+ @type file_num: C{int}
+ @param file_num: the index of the file in the torrent
+ @type pieces_needed: C{list} of C{int}
+ @param pieces_needed: the list of pieces in the torrent that still
+ need to download
+
+ """
+
+ assert not self.request_queue.has_key(connection)
+
+ if DEBUG:
+ print 'queueing request as file', file_num, 'needs pieces:', pieces_needed
+
+ self.request_queue[connection] = (downloader, file_num, pieces_needed, clock())
+
+ def process_queue(self):
+ """Process the queue of waiting requests."""
+
+ # Schedule it again
+ self.rawserver.add_task(self.process_queue, 1)
+
+ for c, v in self.request_queue.items():
+ # Remove the downloaded pieces from the list of needed ones
+ for piece in list(v[2]):
+ if v[0].storagewrapper.do_I_have(piece):
+ if DEBUG:
+ print 'queued request for file', v[1], 'got piece', piece
+ v[2].remove(piece)
+
+ # If no more pieces are needed, return the answer and remove the request
+ if not v[2]:
+ if DEBUG:
+ print 'queued request for file', v[1], 'is complete'
+ del self.request_queue[c]
+ self.answer_package(c, v[0], v[1])
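The queueing scheme introduced here — enqueue_request records which pieces a connection still needs, and process_queue re-checks them on a timer, answering once the list empties — can be modelled in isolation (modern Python; `RequestQueue` and its callbacks are illustrative stand-ins, not the AptListener API):

```python
class RequestQueue:
    """Toy model of AptListener's pending-request queue: each entry tracks
    the torrent pieces a connection still needs; once a connection's list
    empties, the request is answered and removed."""

    def __init__(self, have_piece, answer):
        self.have_piece = have_piece  # piece -> bool, like storagewrapper.do_I_have
        self.answer = answer          # callback fired when a request completes
        self.queue = {}               # connection -> (file_num, pieces_needed)

    def enqueue(self, connection, file_num, pieces_needed):
        assert connection not in self.queue
        self.queue[connection] = (file_num, list(pieces_needed))

    def process(self):
        # Iterate over a copy so completed requests can be removed in-flight
        for conn, (file_num, needed) in list(self.queue.items()):
            # Drop pieces that have arrived since the last pass
            needed[:] = [p for p in needed if not self.have_piece(p)]
            if not needed:
                del self.queue[conn]
                self.answer(conn, file_num)

have = set()
answered = []
q = RequestQueue(have.__contains__, lambda c, f: answered.append((c, f)))
q.enqueue('conn1', 0, [3, 7])
q.process()                      # nothing downloaded yet
assert answered == []
have.update([3, 7])
q.process()                      # both pieces arrived, request answered
assert answered == [('conn1', 0)]
```

In the real code the rescheduling is done by rawserver.add_task rather than an explicit loop, but the bookkeeping is the same.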
def get_infopage(self):
+ """Format the info page to display for normal browsers.
+
+ Formats the currently downloading torrents into a table in human-readable
+ format to display in a browser window.
+
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+ @return: the HTTP status code, status message, headers, and message body
+
+ """
+
try:
if not self.config['show_infopage']:
- return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
+ return (404, 'Not Found', {'Server': VERSION, 'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
red = self.config['infopage_redirect']
if red:
- return (302, 'Found', {'Content-Type': 'text/html', 'Location': red},
+ return (302, 'Found', {'Server': VERSION, 'Content-Type': 'text/html', 'Location': red},
'<A HREF="'+red+'">Click Here</A>')
s = StringIO()
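Every handler in this file returns the same four-tuple of HTTP status code, status message, header dictionary (now carrying the `Server: VERSION` identifier added in this merge), and body. A small sketch of that convention (modern Python; the version constants are placeholders, not the values from DebTorrent.__init__):

```python
PRODUCT_NAME = 'DebTorrent'      # illustrative stand-ins for product_name
VERSION_SHORT = '0.1'            # and version_short from DebTorrent.__init__
VERSION = PRODUCT_NAME + '/' + VERSION_SHORT

def not_found(body='alas, not here\n'):
    """Build a response in the (status, message, headers, body) shape that
    AptListener methods like get_infopage hand back to the HTTPHandler."""
    return (404, 'Not Found',
            {'Server': VERSION,
             'Content-Type': 'text/plain',
             'Pragma': 'no-cache'},
            body)

status, message, headers, body = not_found()
assert status == 404 and headers['Server'] == 'DebTorrent/0.1'
```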
@@ -434,417 +378,326 @@
s.write('</head>\n<body>\n' \
'<h3>DebTorrent download info</h3>\n'\
'<ul>\n'
- '<li><strong>tracker version:</strong> %s</li>\n' \
- '<li><strong>server time:</strong> %s</li>\n' \
+ '<li><strong>client version:</strong> %s</li>\n' \
+ '<li><strong>client time:</strong> %s</li>\n' \
'</ul>\n' % (version, isotime()))
- if self.config['allowed_dir']:
- if self.show_names:
- names = [ (self.allowed[hash]['name'],hash)
- for hash in self.allowed.keys() ]
- else:
- names = [ (None,hash)
- for hash in self.allowed.keys() ]
- else:
- names = [ (None,hash) for hash in self.downloads.keys() ]
- if not names:
- s.write('<p>not tracking any files yet...</p>\n')
- else:
- names.sort()
- tn = 0
- tc = 0
- td = 0
- tt = 0 # Total transferred
- ts = 0 # Total size
- nf = 0 # Number of files displayed
- if self.config['allowed_dir'] and self.show_names:
- s.write('<table summary="files" border="1">\n' \
- '<tr><th>info hash</th><th>torrent name</th><th align="right">size</th><th align="right">complete</th><th align="right">downloading</th><th align="right">downloaded</th><th align="right">transferred</th></tr>\n')
- else:
- s.write('<table summary="files">\n' \
- '<tr><th>info hash</th><th align="right">complete</th><th align="right">downloading</th><th align="right">downloaded</th></tr>\n')
- for name,hash in names:
- l = self.downloads[hash]
- n = self.completed.get(hash, 0)
- tn = tn + n
- c = self.seedcount[hash]
- tc = tc + c
- d = len(l) - c
- td = td + d
- if self.config['allowed_dir'] and self.show_names:
- if self.allowed.has_key(hash):
- nf = nf + 1
- sz = self.allowed[hash]['length'] # size
- ts = ts + sz
- szt = sz * n # Transferred for this torrent
- tt = tt + szt
- if self.allow_get == 1:
- linkname = '<a href="/file?info_hash=' + quote(hash) + '">' + name + '</a>'
- else:
- linkname = name
- s.write('<tr><td><code>%s</code></td><td>%s</td><td align="right">%s</td><td align="right">%i</td><td align="right">%i</td><td align="right">%i</td><td align="right">%s</td></tr>\n' \
- % (b2a_hex(hash), linkname, size_format(sz), c, d, n, size_format(szt)))
- else:
- s.write('<tr><td><code>%s</code></td><td align="right"><code>%i</code></td><td align="right"><code>%i</code></td><td align="right"><code>%i</code></td></tr>\n' \
- % (b2a_hex(hash), c, d, n))
- if self.config['allowed_dir'] and self.show_names:
- s.write('<tr><td align="right" colspan="2">%i files</td><td align="right">%s</td><td align="right">%i</td><td align="right">%i</td><td align="right">%i</td><td align="right">%s</td></tr>\n'
- % (nf, size_format(ts), tc, td, tn, size_format(tt)))
- else:
- s.write('<tr><td align="right">%i files</td><td align="right">%i</td><td align="right">%i</td><td align="right">%i</td></tr>\n'
- % (nf, tc, td, tn))
- s.write('</table>\n' \
- '<ul>\n' \
- '<li><em>info hash:</em> SHA1 hash of the "info" section of the metainfo (*.dtorrent)</li>\n' \
- '<li><em>complete:</em> number of connected clients with the complete file</li>\n' \
- '<li><em>downloading:</em> number of connected clients still downloading</li>\n' \
- '<li><em>downloaded:</em> reported complete downloads</li>\n' \
- '<li><em>transferred:</em> torrent size * total downloaded (does not include partial transfers)</li>\n' \
- '</ul>\n')
+# if self.config['allowed_dir']:
+# if self.show_names:
+# names = [ (self.allowed[hash]['name'],hash)
+# for hash in self.allowed.keys() ]
+# else:
+# names = [ (None,hash)
+# for hash in self.allowed.keys() ]
+# else:
+# names = [ (None,hash) for hash in self.downloads.keys() ]
+# if not names:
+# s.write('<p>not downloading any files yet...</p>\n')
+# else:
+# names.sort()
+# tn = 0
+# tc = 0
+# td = 0
+# tt = 0 # Total transferred
+# ts = 0 # Total size
+# nf = 0 # Number of files displayed
+# if self.config['allowed_dir'] and self.show_names:
+# s.write('<table summary="files" border="1">\n' \
+# '<tr><th>info hash</th><th>torrent name</th><th align="right">size</th><th align="right">complete</th><th align="right">downloading</th><th align="right">downloaded</th><th align="right">transferred</th></tr>\n')
+# else:
+# s.write('<table summary="files">\n' \
+# '<tr><th>info hash</th><th align="right">complete</th><th align="right">downloading</th><th align="right">downloaded</th></tr>\n')
+# for name,hash in names:
+# l = self.downloads[hash]
+# n = self.completed.get(hash, 0)
+# tn = tn + n
+# c = self.seedcount[hash]
+# tc = tc + c
+# d = len(l) - c
+# td = td + d
+# if self.config['allowed_dir'] and self.show_names:
+# if self.allowed.has_key(hash):
+# nf = nf + 1
+# sz = self.allowed[hash]['length'] # size
+# ts = ts + sz
+# szt = sz * n # Transferred for this torrent
+# tt = tt + szt
+# if self.allow_get == 1:
+# linkname = '<a href="/file?info_hash=' + quote(hash) + '">' + name + '</a>'
+# else:
+# linkname = name
+# s.write('<tr><td><code>%s</code></td><td>%s</td><td align="right">%s</td><td align="right">%i</td><td align="right">%i</td><td align="right">%i</td><td align="right">%s</td></tr>\n' \
+# % (b2a_hex(hash), linkname, size_format(sz), c, d, n, size_format(szt)))
+# else:
+# s.write('<tr><td><code>%s</code></td><td align="right"><code>%i</code></td><td align="right"><code>%i</code></td><td align="right"><code>%i</code></td></tr>\n' \
+# % (b2a_hex(hash), c, d, n))
+# if self.config['allowed_dir'] and self.show_names:
+# s.write('<tr><td align="right" colspan="2">%i files</td><td align="right">%s</td><td align="right">%i</td><td align="right">%i</td><td align="right">%i</td><td align="right">%s</td></tr>\n'
+# % (nf, size_format(ts), tc, td, tn, size_format(tt)))
+# else:
+# s.write('<tr><td align="right">%i files</td><td align="right">%i</td><td align="right">%i</td><td align="right">%i</td></tr>\n'
+# % (nf, tc, td, tn))
+# s.write('</table>\n' \
+# '<ul>\n' \
+# '<li><em>info hash:</em> SHA1 hash of the "info" section of the metainfo (*.dtorrent)</li>\n' \
+# '<li><em>complete:</em> number of connected clients with the complete file</li>\n' \
+# '<li><em>downloading:</em> number of connected clients still downloading</li>\n' \
+# '<li><em>downloaded:</em> reported complete downloads</li>\n' \
+# '<li><em>transferred:</em> torrent size * total downloaded (does not include partial transfers)</li>\n' \
+# '</ul>\n')
s.write('</body>\n' \
'</html>\n')
- return (200, 'OK', {'Content-Type': 'text/html; charset=iso-8859-1'}, s.getvalue())
+ return (200, 'OK', {'Server': VERSION, 'Content-Type': 'text/html; charset=iso-8859-1'}, s.getvalue())
except:
print_exc()
- return (500, 'Internal Server Error', {'Content-Type': 'text/html; charset=iso-8859-1'}, 'Server Error')
-
-
- def scrapedata(self, hash, return_name = True):
- l = self.downloads[hash]
- n = self.completed.get(hash, 0)
- c = self.seedcount[hash]
- d = len(l) - c
- f = {'complete': c, 'incomplete': d, 'downloaded': n}
- if return_name and self.show_names and self.config['allowed_dir']:
- f['name'] = self.allowed[hash]['name']
- return (f)
-
- def get_scrape(self, paramslist):
- fs = {}
- if paramslist.has_key('info_hash'):
- if self.config['scrape_allowed'] not in ['specific', 'full']:
- return (400, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'failure reason':
- 'specific scrape function is not available with this tracker.'}))
- for hash in paramslist['info_hash']:
- if self.allowed is not None:
- if self.allowed.has_key(hash):
- fs[hash] = self.scrapedata(hash)
- else:
- if self.downloads.has_key(hash):
- fs[hash] = self.scrapedata(hash)
- else:
- if self.config['scrape_allowed'] != 'full':
- return (400, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'failure reason':
- 'full scrape function is not available with this tracker.'}))
- if self.allowed is not None:
- keys = self.allowed.keys()
+ return (500, 'Internal Server Error', {'Server': VERSION, 'Content-Type': 'text/html; charset=iso-8859-1'}, 'Server Error')
+
+
+ def get_meow(self):
+ return (200, 'OK', {'Server': VERSION, 'Content-Type': 'text/html; charset=iso-8859-1'}, """<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">\n<html><head><title>Meow</title>\n</head>\n<body style="color: rgb(255, 255, 255); background-color: rgb(0, 0, 0);">\n<div><big style="font-weight: bold;"><big><big><span style="font-family: arial,helvetica,sans-serif;">I IZ TAKIN BRAKE</span></big></big></big><br></div>\n<pre><b><tt> .-o=o-.<br> , /=o=o=o=\ .--.<br> _|\|=o=O=o=O=| \<br> __.' a`\=o=o=o=(`\ /<br> '. a 4/`|.-""'`\ \ ;'`) .---.<br> \ .' / .--' |_.' / .-._)<br> `) _.' / /`-.__.' /<br> `'-.____; /'-.___.-'<br> `\"""`</tt></b></pre>\n<div><big style="font-weight: bold;"><big><big><span style="font-family: arial,helvetica,sans-serif;">FRM GETIN UR PACKAGES</span></big></big></big><br></div>\n</body>\n</html>""")
+
+
+ def get_file(self, path):
+ """Proxy the download of a file from a mirror.
+
+ @type path: C{list} of C{string}
+ @param path: the path of the file to download, starting with the mirror name
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+        @return: the HTTP status code, status message, headers, and the
+            downloaded file data
+
+ """
+
+ try:
+ url = 'http://'
+ url += '/'.join(path)
+ if DEBUG:
+ print 'fetching:', url
+ f = urlopen(url)
+ headers = {}
+ for k,v in f.response.getheaders():
+ if k.lower() != 'content-length':
+ headers[k] = v
+ data = f.read()
+
+ return (200, 'OK', headers, data)
+
+ except IOError, e:
+ try:
+ (msg, status) = e
+ except:
+ status = 404
+ msg = 'Unknown error occurred'
+ return (status, 'Not Found', {'Server': VERSION, 'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, msg)
+
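The header handling in get_file above can be isolated into a small sketch (the helper name is hypothetical; the real method builds the dictionary inline while iterating the mirror's response headers):

```python
def filter_proxy_headers(header_items):
    """Copy the mirror's response headers, dropping Content-Length so the
    proxy's own HTTP layer sets the length of the body it forwards."""
    return {k: v for k, v in header_items if k.lower() != 'content-length'}
```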
+ def get_package(self, connection, path):
+ """Download a package file from a torrent.
+
+ @type connection: L{DebTorrent.HTTPHandler.HTTPConnection}
+        @param connection: the connection the request came in on
+ @type path: C{list} of C{string}
+ @param path: the path of the file to download, starting with the mirror name
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+        @return: the HTTP status code, status message, headers, and package
+            data (or None if the response is deferred until the needed
+            pieces have been downloaded)
+
+ """
+
+ # Find the file in one of the torrent downloads
+ d, f = self.handler.find_file(path[0], path[1:])
+
+ if d is None:
+ return (404, 'Not Found', {'Server': VERSION, 'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
+
+ # Check if the file has already been downloaded
+ data = ''
+ pieces_needed = []
+ start_piece, end_piece = d.fileselector.storage.file_pieces[f]
+ for piece in xrange(start_piece, end_piece+1):
+ if not d.storagewrapper.do_I_have(piece):
+ pieces_needed.append(piece)
+ elif not pieces_needed:
+ data = data + d.storagewrapper.get_piece(piece, 0, -1).getarray().tostring()
+
+ if not pieces_needed:
+ return (200, 'OK', {'Server': VERSION, 'Content-Type': 'text/plain'}, data)
+
+ # Check if the torrent is running/not paused
+ if d.doneflag.isSet():
+ return (404, 'Not Found', {'Server': VERSION, 'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
+
+ if not d.unpauseflag.isSet():
+ d.Unpause()
+
+ # Enable the download of the piece
+ d.fileselector.set_priority(f, 1)
+
+ # Add the connection to the list of those needing responses
+ self.enqueue_request(connection, d, f, pieces_needed)
+
+ return None
+
+
+ def answer_package(self, connection, d, f):
+ """Send the newly downloaded package file to the requester.
+
+ @type connection: L{DebTorrent.HTTPHandler.HTTPConnection}
+        @param connection: the connection the request came in on
+ @type d: L{DebTorrent.download_bt1.BT1Download}
+ @param d: the torrent download that has the file
+ @type f: C{int}
+ @param f: the index of the file in the torrent
+
+ """
+
+ # Check to make sure the requester is still waiting
+ if connection.closed:
+ return
+
+ # Check if the file has been downloaded
+ data = ''
+ pieces_needed = []
+ start_piece, end_piece = d.fileselector.storage.file_pieces[f]
+ for piece in xrange(start_piece, end_piece+1):
+ if not d.storagewrapper.do_I_have(piece):
+ pieces_needed.append(piece)
+ elif not pieces_needed:
+ data = data + d.storagewrapper.get_piece(piece, 0, -1).getarray().tostring()
+
+ if not pieces_needed:
+ connection.answer((200, 'OK', {'Server': VERSION, 'Content-Type': 'text/plain'}, data))
+ return
+
+ # Something strange has happened, requeue it
+ if DEBUG:
+ print 'request for', f, 'still needs pieces:', pieces_needed
+ self.enqueue_request(connection, d, f, pieces_needed)
+
+
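The piece-scanning loop shared by get_package and answer_package can be sketched on its own; do_i_have and get_piece are hypothetical stand-ins for the storagewrapper calls:

```python
def check_file_pieces(do_i_have, get_piece, start_piece, end_piece):
    """Accumulate the data of pieces already on disk, but only up to the
    first missing piece, and record every missing piece index."""
    data = []
    pieces_needed = []
    for piece in range(start_piece, end_piece + 1):
        if not do_i_have(piece):
            pieces_needed.append(piece)
        elif not pieces_needed:
            # only collect data while no gap has been found yet
            data.append(get_piece(piece))
    return b''.join(data), pieces_needed
```

When pieces_needed comes back empty, the file is complete and the joined data can be returned immediately; otherwise the request is queued until those pieces arrive.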
+ def get_Packages(self, path):
+ """Download a Packages file and start a torrent.
+
+ @type path: C{list} of C{string}
+ @param path: the path of the file to download, starting with the mirror name
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+ @return: the HTTP status code, status message, headers, and Packages file
+
+ """
+
+ # Download the Packages file
+ r = self.get_file(path)
+
+ if not r[0] == 200:
+ return r
+
+ try:
+ # Decompress the data
+ if path[-1].endswith('.gz'):
+ compressed = StringIO(r[3])
+ f = GzipFile(fileobj = compressed)
+ data = f.read()
+ elif path[-1].endswith('.bz2'):
+ data = decompress(r[3])
else:
- keys = self.downloads.keys()
- for hash in keys:
- fs[hash] = self.scrapedata(hash)
-
- return (200, 'OK', {'Content-Type': 'text/plain'}, bencode({'files': fs}))
-
-
- def get_file(self, hash):
- if not self.allow_get:
- return (400, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- 'get function is not available with this tracker.')
- if not self.allowed.has_key(hash):
- return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
- fname = self.allowed[hash]['file']
- fpath = self.allowed[hash]['path']
- return (200, 'OK', {'Content-Type': 'application/x-debtorrent',
- 'Content-Disposition': 'attachment; filename=' + fname},
- open(fpath, 'rb').read())
-
-
- def check_allowed(self, infohash, paramslist):
- if ( self.aggregator_key is not None
- and not ( paramslist.has_key('password')
- and paramslist['password'][0] == self.aggregator_key ) ):
- return (200, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'failure reason':
- 'Requested download is not authorized for use with this tracker.'}))
-
- if self.allowed is not None:
- if not self.allowed.has_key(infohash):
- return (200, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'failure reason':
- 'Requested download is not authorized for use with this tracker.'}))
- if self.config['allowed_controls']:
- if self.allowed[infohash].has_key('failure reason'):
- return (200, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'failure reason': self.allowed[infohash]['failure reason']}))
-
- if paramslist.has_key('tracker'):
- if ( self.config['multitracker_allowed'] == 'none' or # turned off
- paramslist['peer_id'][0] == self.trackerid ): # oops! contacted myself
- return (200, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'failure reason': 'disallowed'}))
-
- if ( self.config['multitracker_allowed'] == 'autodetect'
- and not self.allowed[infohash].has_key('announce-list') ):
- return (200, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'failure reason':
- 'Requested download is not authorized for multitracker use.'}))
-
- return None
-
-
- def add_data(self, infohash, event, ip, paramslist):
- peers = self.downloads.setdefault(infohash, {})
- ts = self.times.setdefault(infohash, {})
- self.completed.setdefault(infohash, 0)
- self.seedcount.setdefault(infohash, 0)
-
- def params(key, default = None, l = paramslist):
- if l.has_key(key):
- return l[key][0]
- return default
-
- myid = params('peer_id','')
- if len(myid) != 20:
- raise ValueError, 'id not of length 20'
- if event not in ['started', 'completed', 'stopped', 'snooped', None]:
- raise ValueError, 'invalid event'
- port = params('cryptoport')
- if port is None:
- port = params('port','')
- port = long(port)
- if port < 0 or port > 65535:
- raise ValueError, 'invalid port'
- left = long(params('left',''))
- if left < 0:
- raise ValueError, 'invalid amount left'
- uploaded = long(params('uploaded',''))
- downloaded = long(params('downloaded',''))
- if params('supportcrypto'):
- supportcrypto = 1
- try:
- s = int(params['requirecrypto'])
- chr(s)
- except:
- s = 0
- requirecrypto = s
- else:
- supportcrypto = 0
- requirecrypto = 0
-
- peer = peers.get(myid)
- islocal = local_IPs.includes(ip)
- mykey = params('key')
- if peer:
- auth = peer.get('key',-1) == mykey or peer.get('ip') == ip
-
- gip = params('ip')
- if is_valid_ip(gip) and (islocal or not self.only_local_override_ip):
- ip1 = gip
- else:
- ip1 = ip
-
- if params('numwant') is not None:
- rsize = min(int(params('numwant')),self.response_size)
- else:
- rsize = self.response_size
-
- if event == 'stopped':
- if peer:
- if auth:
- self.delete_peer(infohash,myid)
-
- elif not peer:
- ts[myid] = clock()
- peer = { 'ip': ip, 'port': port, 'left': left,
- 'supportcrypto': supportcrypto,
- 'requirecrypto': requirecrypto }
- if mykey:
- peer['key'] = mykey
- if gip:
- peer['given ip'] = gip
- if port:
- if not self.natcheck or islocal:
- peer['nat'] = 0
- self.natcheckOK(infohash,myid,ip1,port,peer)
- else:
- NatCheck(self.connectback_result,infohash,myid,ip1,port,
- self.rawserver,encrypted=requirecrypto)
- else:
- peer['nat'] = 2**30
- if event == 'completed':
- self.completed[infohash] += 1
- if not left:
- self.seedcount[infohash] += 1
-
- peers[myid] = peer
-
- else:
- if not auth:
- return rsize # return w/o changing stats
-
- ts[myid] = clock()
- if not left and peer['left']:
- self.completed[infohash] += 1
- self.seedcount[infohash] += 1
- if not peer.get('nat', -1):
- for bc in self.becache[infohash]:
- if bc[0].has_key(myid):
- bc[1][myid] = bc[0][myid]
- del bc[0][myid]
- elif left and not peer['left']:
- self.completed[infohash] -= 1
- self.seedcount[infohash] -= 1
- if not peer.get('nat', -1):
- for bc in self.becache[infohash]:
- if bc[1].has_key(myid):
- bc[0][myid] = bc[1][myid]
- del bc[1][myid]
- peer['left'] = left
-
- if port:
- recheck = False
- if ip != peer['ip']:
- peer['ip'] = ip
- recheck = True
- if gip != peer.get('given ip'):
- if gip:
- peer['given ip'] = gip
- elif peer.has_key('given ip'):
- del peer['given ip']
- recheck = True
-
- natted = peer.get('nat', -1)
- if recheck:
- if natted == 0:
- l = self.becache[infohash]
- y = not peer['left']
- for x in l:
- del x[y][myid]
- if natted >= 0:
- del peer['nat'] # restart NAT testing
- if natted and natted < self.natcheck:
- recheck = True
-
- if recheck:
- if not self.natcheck or islocal:
- peer['nat'] = 0
- self.natcheckOK(infohash,myid,ip1,port,peer)
- else:
- NatCheck(self.connectback_result,infohash,myid,ip1,port,
- self.rawserver,encrypted=requirecrypto)
-
- return rsize
-
-
- def peerlist(self, infohash, stopped, tracker, is_seed,
- return_type, rsize, supportcrypto):
- data = {} # return data
- seeds = self.seedcount[infohash]
- data['complete'] = seeds
- data['incomplete'] = len(self.downloads[infohash]) - seeds
-
- if ( self.config['allowed_controls']
- and self.allowed[infohash].has_key('warning message') ):
- data['warning message'] = self.allowed[infohash]['warning message']
-
- if tracker:
- data['interval'] = self.config['multitracker_reannounce_interval']
- if not rsize:
- return data
- cache = self.cached_t.setdefault(infohash, None)
- if ( not cache or len(cache[1]) < rsize
- or cache[0] + self.config['min_time_between_cache_refreshes'] < clock() ):
- bc = self.becache.setdefault(infohash,self.cache_default)
- cache = [ clock(), bc[0][0].values() + bc[0][1].values() ]
- self.cached_t[infohash] = cache
- shuffle(cache[1])
- cache = cache[1]
-
- data['peers'] = cache[-rsize:]
- del cache[-rsize:]
- return data
-
- data['interval'] = self.reannounce_interval
- if stopped or not rsize: # save some bandwidth
- data['peers'] = []
- return data
-
- bc = self.becache.setdefault(infohash,self.cache_default)
- len_l = len(bc[2][0])
- len_s = len(bc[2][1])
- if not (len_l+len_s): # caches are empty!
- data['peers'] = []
- return data
- l_get_size = int(float(rsize)*(len_l)/(len_l+len_s))
- cache = self.cached.setdefault(infohash,[None,None,None])[return_type]
- if cache and ( not cache[1]
- or (is_seed and len(cache[1]) < rsize)
- or len(cache[1]) < l_get_size
- or cache[0]+self.config['min_time_between_cache_refreshes'] < self.cachetime ):
- cache = None
- if not cache:
- peers = self.downloads[infohash]
- if self.config['compact_reqd']:
- vv = ([],[],[])
- else:
- vv = ([],[],[],[],[])
- for key, ip, port in self.t2tlist.harvest(infohash): # empty if disabled
- if not peers.has_key(key):
- cp = compact_peer_info(ip, port)
- vv[0].append(cp)
- vv[2].append((cp,'\x00'))
- if not self.config['compact_reqd']:
- vv[3].append({'ip': ip, 'port': port, 'peer id': key})
- vv[4].append({'ip': ip, 'port': port})
- cache = [ self.cachetime,
- bc[return_type][0].values()+vv[return_type],
- bc[return_type][1].values() ]
- shuffle(cache[1])
- shuffle(cache[2])
- self.cached[infohash][return_type] = cache
- for rr in xrange(len(self.cached[infohash])):
- if rr != return_type:
- try:
- self.cached[infohash][rr][1].extend(vv[rr])
- except:
- pass
- if len(cache[1]) < l_get_size:
- peerdata = cache[1]
- if not is_seed:
- peerdata.extend(cache[2])
- cache[1] = []
- cache[2] = []
- else:
- if not is_seed:
- peerdata = cache[2][l_get_size-rsize:]
- del cache[2][l_get_size-rsize:]
- rsize -= len(peerdata)
- else:
- peerdata = []
- if rsize:
- peerdata.extend(cache[1][-rsize:])
- del cache[1][-rsize:]
- if return_type == 0:
- data['peers'] = ''.join(peerdata)
- elif return_type == 1:
- data['crypto_flags'] = "0x01"*len(peerdata)
- data['peers'] = ''.join(peerdata)
- elif return_type == 2:
- data['crypto_flags'] = ''.join([p[1] for p in peerdata])
- data['peers'] = ''.join([p[0] for p in peerdata])
- else:
- data['peers'] = peerdata
- return data
-
-
+ data = r[3]
+
+ name = "dt_" + '_'.join(path)
+
+ assert data[:8] == "Package:"
+ h = data.split('\n')
+ except:
+ if DEBUG:
+ print 'ERROR: Packages file could not be converted to a torrent'
+ return r
+
+ pieces = []
+ lengths = []
+ fs = []
+
+ p = [None, None, None]
+ for line in h:
+ line = line.rstrip()
+ if line == "":
+ if (p[0] and p[1] and p[2]):
+ fpath = []
+ while p[1]:
+ p[1],d = split(p[1])
+ fpath.insert(0,d)
+ fs.append({'length': p[0], 'path': fpath})
+ lengths.append(p[0])
+ pieces.append(p[2])
+ p = [None, None, None]
+ if line[:9] == "Filename:":
+ p[1] = line[10:]
+ if line[:5] == "Size:":
+ p[0] = long(line[6:])
+ if line[:5] == "SHA1:":
+ p[2] = a2b_hex(line[6:])
+
+ response = {'info': {'pieces': ''.join(pieces),
+ 'piece lengths': lengths, 'files': fs },
+ 'announce': 'http://dttracker.debian.net:6969/announce',
+ 'name': name }
+
+ if path.count('dists'):
+ mirror = 'http://' + '/'.join(path[:path.index('dists')]) + '/'
+ response['deb_mirrors'] = [mirror]
+
+ infohash = sha(bencode(response['info'])).digest()
+
+ if self.handler.has_torrent(infohash):
+ return r
+
+ a = {}
+ a['path'] = '/'.join(path)
+ a['file'] = name
+ a['type'] = path[-1]
+ i = response['info']
+ l = 0
+ nf = 0
+ if i.has_key('length'):
+ l = i.get('length',0)
+ nf = 1
+ elif i.has_key('files'):
+ for li in i['files']:
+ nf += 1
+ if li.has_key('length'):
+ l += li['length']
+ a['numfiles'] = nf
+ a['length'] = l
+ a['name'] = name
+ def setkey(k, d = response, a = a):
+ if d.has_key(k):
+ a[k] = d[k]
+ setkey('failure reason')
+ setkey('warning message')
+ setkey('announce-list')
+ a['metainfo'] = response
+
+ self.handler.add(infohash, a)
+
+ return r
+
+
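The stanza parsing in get_Packages can be sketched as a standalone function (parse_packages is a hypothetical name, and str.split('/') stands in for the repeated os.path.split loop); like the original, it only flushes a stanza on a blank line:

```python
def parse_packages(text):
    """Collect (path components, size, hex SHA1) for each stanza that has
    Filename, Size and SHA1 fields."""
    results = []
    size = filename = sha1 = None
    for line in text.split('\n'):
        line = line.rstrip()
        if line == '':
            # blank line ends a stanza; keep it only if all fields were seen
            if filename and size and sha1:
                results.append((filename.split('/'), size, sha1))
            size = filename = sha1 = None
        elif line[:9] == 'Filename:':
            filename = line[10:]
        elif line[:5] == 'Size:':
            size = int(line[6:])
        elif line[:5] == 'SHA1:':
            sha1 = line[6:]
    return results
```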
def get(self, connection, path, headers):
- print 'URL: ' + path + '\n'
- print 'HEADERS: ',
- print headers,
- print
- return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
+ """Respond to a GET request.
+
+        Handle a GET request from APT, a browser, or another client,
+        calling the helper methods above as needed, and return the
+        response to send back to the requester.
+
+ @type connection: L{DebTorrent.HTTPHandler.HTTPConnection}
+        @param connection: the connection the request came in on
+ @type path: C{string}
+ @param path: the URL being requested
+ @type headers: C{dictionary}
+ @param headers: the headers from the request
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+ @return: the HTTP status code, status message, headers, and message body
+
+ """
+
+# return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
real_ip = connection.get_ip()
ip = real_ip
if is_ipv4(ip):
@@ -858,21 +711,26 @@
if ( (self.allowed_IPs and not self.allowed_IPs.includes(ip))
or (self.banned_IPs and self.banned_IPs.includes(ip)) ):
- return (400, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
+ return (400, 'Not Authorized', {'Server': VERSION, 'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
bencode({'failure reason':
- 'your IP is not allowed on this tracker'}))
-
- nip = get_forwarded_ip(headers)
- if nip and not self.only_local_override_ip:
- ip = nip
- try:
- ip = to_ipv4(ip)
- ipv4 = True
- except ValueError:
- ipv4 = False
+ 'your IP is not allowed on this proxy'}))
paramslist = {}
def params(key, default = None, l = paramslist):
+ """Get the user parameter, or the default.
+
+ @type key: C{string}
+ @param key: the parameter to get
+ @type default: C{string}
+ @param default: the default value to use if no parameter is set
+ (optional, defaults to None)
+ @type l: C{dictionary}
+ @param l: the user parameters (optional, defaults to L{paramslist})
+ @rtype: C{string}
+ @return: the parameter's value
+
+ """
+
if l.has_key(key):
return l[key][0]
return default
@@ -892,128 +750,35 @@
if path == '' or path == 'index.html':
return self.get_infopage()
- if (path == 'file'):
- return self.get_file(params('info_hash'))
- if path == 'favicon.ico' and self.favicon is not None:
- return (200, 'OK', {'Content-Type' : 'image/x-icon'}, self.favicon)
-
- # automated access from here on
-
- if path in ('scrape', 'scrape.php', 'tracker.php/scrape'):
- return self.get_scrape(paramslist)
-
- if not path in ('announce', 'announce.php', 'tracker.php/announce'):
- return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
-
- # main tracker function
-
- filtered = self.Filter.check(real_ip, paramslist, headers)
- if filtered:
- return (400, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'failure reason': filtered}))
-
- infohash = params('info_hash')
- if not infohash:
- raise ValueError, 'no info hash'
-
- notallowed = self.check_allowed(infohash, paramslist)
- if notallowed:
- return notallowed
-
- event = params('event')
-
- rsize = self.add_data(infohash, event, ip, paramslist)
-
+ if path == 'meow':
+ return self.get_meow()
+ if path == 'favicon.ico':
+ if self.favicon is not None:
+ return (200, 'OK', {'Server': VERSION, 'Content-Type' : 'image/x-icon'}, self.favicon)
+ else:
+ return (404, 'Not Found', {'Server': VERSION, 'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
+
+ # Process the rest as a proxy
+ path = path.split('/')
+
+ if 'Packages.diff' in path:
+ return (404, 'Not Found', {'Server': VERSION, 'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
+
+ if path[-1] in ('Packages', 'Packages.gz', 'Packages.bz2'):
+ return self.get_Packages(path)
+
+ if path[-1][-4:] == '.deb':
+ return self.get_package(connection, path)
+
+ return self.get_file(path)
+
except ValueError, e:
- return (400, 'Bad Request', {'Content-Type': 'text/plain'},
+ return (400, 'Bad Request', {'Server': VERSION, 'Content-Type': 'text/plain'},
'you sent me garbage - ' + str(e))
- if self.aggregate_forward and not paramslist.has_key('tracker'):
- self.aggregate_senddata(query)
-
- if self.is_aggregator: # don't return peer data here
- return (200, 'OK', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- bencode({'response': 'OK'}))
-
- if params('compact') and ipv4:
- if params('requirecrypto'):
- return_type = 1
- elif params('supportcrypto'):
- return_type = 2
- else:
- return_type = 0
- elif self.config['compact_reqd'] and ipv4:
- return (400, 'Bad Request', {'Content-Type': 'text/plain'},
- 'your client is outdated, please upgrade')
- elif params('no_peer_id'):
- return_type = 4
- else:
- return_type = 3
-
- data = self.peerlist(infohash, event=='stopped',
- params('tracker'), not params('left'),
- return_type, rsize, params('supportcrypto'))
-
- if paramslist.has_key('scrape'): # deprecated
- data['scrape'] = self.scrapedata(infohash, False)
-
- if self.dedicated_seed_id:
- if params('seed_id') == self.dedicated_seed_id and params('left') == 0:
- self.is_seeded[infohash] = True
- if params('check_seeded') and self.is_seeded.get(infohash):
- data['seeded'] = 1
-
- return (200, 'OK', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, bencode(data))
-
-
- def natcheckOK(self, infohash, peerid, ip, port, peer):
- seed = not peer['left']
- bc = self.becache.setdefault(infohash,self.cache_default)
- cp = compact_peer_info(ip, port)
- reqc = peer['requirecrypto']
- bc[2][seed][peerid] = (cp,chr(reqc))
- if peer['supportcrypto']:
- bc[1][seed][peerid] = cp
- if not reqc:
- bc[0][seed][peerid] = cp
- if not self.config['compact_reqd']:
- bc[3][seed][peerid] = Bencached(bencode({'ip': ip, 'port': port,
- 'peer id': peerid}))
- bc[4][seed][peerid] = Bencached(bencode({'ip': ip, 'port': port}))
-
-
- def natchecklog(self, peerid, ip, port, result):
- year, month, day, hour, minute, second, a, b, c = localtime(time())
- print '%s - %s [%02d/%3s/%04d:%02d:%02d:%02d] "!natcheck-%s:%i" %i 0 - -' % (
- ip, quote(peerid), day, months[month], year, hour, minute, second,
- ip, port, result)
-
- def connectback_result(self, result, downloadid, peerid, ip, port):
- record = self.downloads.get(downloadid,{}).get(peerid)
- if ( record is None
- or (record['ip'] != ip and record.get('given ip') != ip)
- or record['port'] != port ):
- if self.config['log_nat_checks']:
- self.natchecklog(peerid, ip, port, 404)
- return
- if self.config['log_nat_checks']:
- if result:
- x = 200
- else:
- x = 503
- self.natchecklog(peerid, ip, port, x)
- if not record.has_key('nat'):
- record['nat'] = int(not result)
- if result:
- self.natcheckOK(downloadid,peerid,ip,port,record)
- elif result and record['nat']:
- record['nat'] = 0
- self.natcheckOK(downloadid,peerid,ip,port,record)
- elif not result:
- record['nat'] += 1
-
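The proxy dispatch added to get() above amounts to a path classification, sketched here as a hypothetical helper (the real method calls the handlers directly):

```python
def classify_request(path):
    """Mirror of the dispatch logic in get()."""
    parts = path.split('/')
    if 'Packages.diff' in parts:
        return 'blocked'        # Packages diffs are not supported
    if parts[-1] in ('Packages', 'Packages.gz', 'Packages.bz2'):
        return 'packages-file'  # parsed into a torrent by get_Packages
    if parts[-1][-4:] == '.deb':
        return 'package'        # served from the torrent by get_package
    return 'file'               # anything else is proxied by get_file
```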
def remove_from_state(self, *l):
+ """Remove all the input parameter names from the current state."""
for s in l:
try:
del self.state[s]
@@ -1021,6 +786,7 @@
pass
def save_state(self):
+ """Save the state file to disk."""
self.rawserver.add_task(self.save_state, self.save_dfile_interval)
h = open(self.dfile, 'wb')
h.write(bencode(self.state))
@@ -1028,6 +794,7 @@
def parse_allowed(self):
+ """Periodically parse the directory and list for allowed torrents."""
self.rawserver.add_task(self.parse_allowed, self.parse_dir_interval)
if self.config['allowed_dir']:
@@ -1062,6 +829,7 @@
def read_ip_lists(self):
+ """Periodically parse the allowed and banned IPs lists."""
self.rawserver.add_task(self.read_ip_lists,self.parse_dir_interval)
f = self.config['allowed_ips']
@@ -1083,37 +851,16 @@
print '**warning** unable to read banned_IP list'
- def delete_peer(self, infohash, peerid):
- dls = self.downloads[infohash]
- peer = dls[peerid]
- if not peer['left']:
- self.seedcount[infohash] -= 1
- if not peer.get('nat',-1):
- l = self.becache[infohash]
- y = not peer['left']
- for x in l:
- if x[y].has_key(peerid):
- del x[y][peerid]
- del self.times[infohash][peerid]
- del dls[peerid]
-
- def expire_downloaders(self):
- for x in self.times.keys():
- for myid, t in self.times[x].items():
- if t < self.prevtime:
- self.delete_peer(x,myid)
- self.prevtime = clock()
- if (self.keep_dead != 1):
- for key, value in self.downloads.items():
- if len(value) == 0 and (
- self.allowed is None or not self.allowed.has_key(key) ):
- del self.times[key]
- del self.downloads[key]
- del self.seedcount[key]
- self.rawserver.add_task(self.expire_downloaders, self.timeout_downloaders_interval)
-
-
def size_format(s):
+ """Format a byte size for reading by the user.
+
+ @type s: C{long}
+ @param s: the number of bytes
+ @rtype: C{string}
+ @return: the formatted size with appropriate units
+
+ """
+
if (s < 1024):
r = str(s) + 'B'
elif (s < 1048576):
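The size_format diff is truncated above; a sketch of the whole function, with the branches and unit names past the visible ones assumed, looks like:

```python
def size_format(s):
    """Format a byte count for display; only the first two branches appear
    in the diff above, the rest are assumptions."""
    if s < 1024:
        return '%dB' % s
    elif s < 1048576:
        return '%.1fKiB' % (s / 1024.0)
    elif s < 1073741824:
        return '%.1fMiB' % (s / 1048576.0)
    return '%.1fGiB' % (s / 1073741824.0)
```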
Propchange: debtorrent/trunk/DebTorrent/BT1/AptListener.py
------------------------------------------------------------------------------
svn:keywords = ID
Modified: debtorrent/trunk/DebTorrent/BT1/FileSelector.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/BT1/FileSelector.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/BT1/FileSelector.py (original)
+++ debtorrent/trunk/DebTorrent/BT1/FileSelector.py Wed Jun 13 21:32:05 2007
@@ -15,19 +15,19 @@
class FileSelector:
def __init__(self, files, piece_lengths, bufferdir,
- storage, storagewrapper, sched, failfunc):
+ storage, storagewrapper, sched, picker, failfunc):
self.files = files
self.storage = storage
self.storagewrapper = storagewrapper
self.sched = sched
self.failfunc = failfunc
self.downloader = None
- self.picker = None
+ self.picker = picker
storage.set_bufferdir(bufferdir)
self.numfiles = len(files)
- self.priority = [1] * self.numfiles
+ self.priority = [-1] * self.numfiles
self.new_priority = None
self.new_partials = None
self.filepieces = []
@@ -51,7 +51,7 @@
pieces = range(start_piece,end_piece+1)
self.filepieces.append(tuple(pieces))
self.numpieces = len(piece_lengths)
- self.piece_priority = [1] * self.numpieces
+ self.piece_priority = [-1] * self.numpieces
@@ -66,13 +66,12 @@
# print_exc()
return False
try:
- files_updated = False
for f in xrange(self.numfiles):
if new_priority[f] < 0:
self.storage.disable_file(f)
- files_updated = True
- if files_updated:
- self.storage.reset_file_status()
+ else:
+ self.storage.enable_file(f)
+ self.storage.reset_file_status()
self.new_priority = new_priority
except (IOError, OSError), e:
self.failfunc("can't open partial file for "
@@ -97,10 +96,10 @@
new_piece_priority = self._get_piece_priority_list(self.new_priority)
self.storagewrapper.reblock([i == -1 for i in new_piece_priority])
self.new_partials = self.storagewrapper.unpickle(d, pieces)
-
-
- def tie_in(self, picker, cancelfunc, requestmorefunc, rerequestfunc):
- self.picker = picker
+ self.piece_priority = self._initialize_piece_priority(self.new_priority)
+
+
+ def tie_in(self, cancelfunc, requestmorefunc, rerequestfunc):
self.cancelfunc = cancelfunc
self.requestmorefunc = requestmorefunc
self.rerequestfunc = rerequestfunc
@@ -199,6 +198,18 @@
l[i] = min(l[i],file_priority_list[f])
return l
+
+ def _initialize_piece_priority(self, new_priority):
+ was_complete = self.storagewrapper.am_I_complete()
+ new_piece_priority = self._get_piece_priority_list(new_priority)
+ pieces = range(self.numpieces)
+ shuffle(pieces)
+ for piece in pieces:
+ self.picker.set_priority(piece,new_piece_priority[piece])
+ self.storagewrapper.reblock([i == -1 for i in new_piece_priority])
+
+ return new_piece_priority
+
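The file-to-piece priority mapping used by _initialize_piece_priority (via _get_piece_priority_list) can be sketched under the assumed semantics that -1 disables a file and each piece takes the lowest non-negative priority of any enabled file covering it:

```python
def piece_priority_list(file_priorities, file_piece_ranges, numpieces):
    """Assumed semantics: -1 disables a file, lower values mean higher
    priority, and a piece inherits the best (lowest non-negative) priority
    of any enabled file that overlaps it."""
    prios = [-1] * numpieces
    for f, prio in enumerate(file_priorities):
        if prio < 0:
            continue  # disabled files don't affect piece priorities
        start, end = file_piece_ranges[f]
        for piece in range(start, end + 1):
            prios[piece] = prio if prios[piece] == -1 else min(prios[piece], prio)
    return prios
```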
def _set_piece_priority(self, new_priority):
was_complete = self.storagewrapper.am_I_complete()
@@ -233,7 +244,7 @@
self.priority = new_priority
if not self._initialize_files_disabled(old_priority, new_priority):
return
-# self.piece_priority = self._set_piece_priority(new_priority)
+ self.piece_priority = self._initialize_piece_priority(new_priority)
def set_priorities_now(self, new_priority = None):
if not new_priority:
Modified: debtorrent/trunk/DebTorrent/BT1/HTTPDownloader.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/BT1/HTTPDownloader.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/BT1/HTTPDownloader.py (original)
+++ debtorrent/trunk/DebTorrent/BT1/HTTPDownloader.py Wed Jun 13 21:32:05 2007
@@ -1,8 +1,17 @@
# Written by John Hoffman
# Modified by Cameron Dale
# see LICENSE.txt for license information
-
+#
# $Id$
+
+"""Manage downloading pieces over HTTP.
+
+@type VERSION: C{string}
+@var VERSION: the UserAgent identifier sent to all sites
+@type haveall: L{haveComplete}
+@var haveall: instance of the seed's bitfield
+
+"""
from DebTorrent.CurrentRateMeasure import Measure
from random import randint
@@ -17,23 +26,95 @@
True = 1
False = 0
-EXPIRE_TIME = 60 * 60
+DEBUG = True
VERSION = product_name+'/'+version_short
class haveComplete:
+    """Dummy class similar to L{DebTorrent.bitfield.Bitfield}.
+
+ This class represents the HTTP seed's bitfield, which is always complete
+ and has every piece because it is a seed.
+
+ """
def complete(self):
+        """Dummy method that always returns True."""
return True
def __getitem__(self, x):
+        """Dummy method that always returns True."""
return True
haveall = haveComplete()
class SingleDownload:
+ """Control HTTP downloads from a single site.
+
+ @type downloader: L{HTTPDownloader}
+ @ivar downloader: the collection of all HTTP downloads
+ @type baseurl: C{string}
+ @ivar baseurl: the complete URL to append download info to
+ @type netloc: C{string}
+ @ivar netloc: the webserver address and port to connect to
+        (from the L{baseurl})
+ @type connection: C{HTTPConnection}
+ @ivar connection: the connection to the HTTP server
+ @type seedurl: C{string}
+ @ivar seedurl: the path component from the L{baseurl}
+ @type params: C{string}
+ @ivar params: the parameters component from the L{baseurl}
+ @type query: C{string}
+ @ivar query: the query component from the L{baseurl}
+ @type headers: C{dictionary}
+    @ivar headers: the HTTP headers to send in the request
+ @type measure: L{DebTorrent.CurrentRateMeasure.Measure}
+ @ivar measure: tracks the download rate from the site
+ @type index: C{int}
+ @ivar index: the piece index currently being downloaded
+ @type url: C{string}
+ @ivar url: the URL to request from the site
+ @type requests: C{list} of requests
+ @ivar requests: a list of the requests for a piece's ranges
+ @type request_size: C{int}
+ @ivar request_size: the total size of all requests
+ @type endflag: C{boolean}
+ @ivar endflag: unknown
+ @type error: C{string}
+ @ivar error: the error received from the server
+ @type retry_period: C{int}
+ @ivar retry_period: the time to wait before making another request
+ @type _retry_period: C{int}
+ @ivar _retry_period: the server-specified time to wait before making
+ another request
+ @type errorcount: C{int}
+ @ivar errorcount: the number of download errors that have occurred since
+ the last successful download from the site
+ @type goodseed: C{boolean}
+ @ivar goodseed: whether there has been a successful download from the seed
+ @type active: C{boolean}
+ @ivar active: whether there is a download underway
+ @type cancelled: C{boolean}
+ @ivar cancelled: whether the download has been cancelled
+ @type received_data: C{string}
+ @ivar received_data: the data returned from the most recent request
+ @type connection_status: C{int}
+ @ivar connection_status: the status code returned by the server for the
+ most recent request
+
+ """
+
def __init__(self, downloader, url):
+ """Initialize the instance.
+
+ @type downloader: L{HTTPDownloader}
+ @param downloader: the collection of all HTTP downloads
+ @type url: C{string}
+ @param url: the base URL to add download info to
+
+ """
+
self.downloader = downloader
self.baseurl = url
try:
- (scheme, self.netloc, path, pars, query, fragment) = urlparse(url)
+ (scheme, self.netloc, path, params, query, fragment) = urlparse(url)
except:
self.downloader.errorfunc('cannot parse http seed address: '+url)
return
@@ -46,13 +127,18 @@
self.downloader.errorfunc('cannot connect to http seed: '+url)
return
self.seedurl = path
- if pars:
- self.seedurl += ';'+pars
- self.seedurl += '?'
+ if path[-1:] != '/':
+ self.seedurl += '/'
+ if params:
+ self.params = ';'+params
+ else:
+ self.params = ''
if query:
- self.seedurl += query+'&'
- self.seedurl += 'info_hash='+quote(self.downloader.infohash)
-
+ self.query = '?'+query+'&'
+ else:
+ self.query = ''
+
+ self.headers = {'User-Agent': VERSION}
self.measure = Measure(downloader.max_rate_period)
self.index = None
self.url = ''
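The rewritten constructor above splits the base URL so that a file name can later be spliced in between the path and the query string. A minimal sketch of that decomposition in modern Python (the original uses Python 2's urlparse module; the mirror URL below is hypothetical):

```python
from urllib.parse import urlparse  # the original uses Python 2's urlparse module

def split_seed_url(url):
    # Mirror of the __init__ logic above: keep the path (with a trailing
    # slash) and stash the params and query components separately, so a
    # file name can be spliced in between them later.
    scheme, netloc, path, params, query, fragment = urlparse(url)
    seedurl = path if path.endswith('/') else path + '/'
    params = ';' + params if params else ''
    query = '?' + query + '&' if query else ''
    return netloc, seedurl, params, query

# Hypothetical mirror URL, for illustration only.
netloc, seedurl, params, query = split_seed_url('http://ftp.debian.org/debian?arch=i386')
# download() later builds the per-piece URL as seedurl + file + params + query:
url = seedurl + 'pool/main/a/apt/apt_0.7.2_i386.deb' + params + query
```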
@@ -69,6 +155,13 @@
self.resched(randint(2,10))
def resched(self, len = None):
+ """(Re)Schedule a download from the HTTP seed.
+
+ @type len: C{int}
+ @param len: the amount of time to wait before doing the download (seconds)
+
+ """
+
if len is None:
len = self.retry_period
if self.errorcount > 3:
@@ -76,12 +169,28 @@
self.downloader.rawserver.add_task(self.download, len)
def _want(self, index):
+ """Determine whether the piece is needed.
+
+ @type index: C{int}
+ @param index: the piece index
+ @rtype: C{boolean}
+ @return: whether the piece is needed
+
+ """
+
if self.endflag:
return self.downloader.storage.do_I_have_requests(index)
else:
return self.downloader.storage.is_unstarted(index)
def download(self):
+ """Start a request for a piece.
+
+ Finds a new piece to download from the picker, creates the URL for the
+ request, and then starts the request.
+
+ """
+
self.cancelled = False
if self.downloader.picker.am_I_complete():
self.downloader.downloads.remove(self)
@@ -95,16 +204,30 @@
self.endflag = True
self.resched()
else:
- self.url = ( self.seedurl+'&piece='+str(self.index) )
+ if DEBUG:
+ print 'HTTPDownloader: downloading piece', self.index
+ (start, end, length, file) = self.downloader.storage.storage.get_file_range(self.index)
+ filename = self.downloader.filenamefunc()
+ if len(filename) > 0 and file.startswith(filename):
+ file = file[1+len(filename):]
+ self.url = ( self.seedurl + file + self.params + self.query )
self._get_requests()
- if self.request_size < self.downloader.storage._piecelen(self.index):
- self.url += '&ranges='+self._request_ranges()
+ if self.headers.has_key('Range'):
+ del self.headers['Range']
+ if self.request_size < length:
+ self.headers['Range'] = 'bytes=' + self._request_ranges(start, end)
rq = Thread(target = self._request)
rq.setDaemon(False)
rq.start()
self.active = True
def _request(self):
+ """Do the request.
+
+ Send the request to the server and wait for the response. Then
+ process the response and save the result.
+
+ """
import encodings.ascii
import encodings.punycode
import encodings.idna
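The download() rewrite above replaces the old '&ranges=' query parameter with a standard HTTP Range header, cleared and rebuilt per request. A standalone sketch of that header handling (the function name and variables here are illustrative, not the class's actual attributes):

```python
def build_request_headers(headers, ranges, request_size, piece_length):
    # Drop any Range header left over from the previous piece.
    headers.pop('Range', None)
    # Only ask for partial content when the outstanding requests cover
    # less than the whole piece; otherwise fetch the full file (200).
    if request_size < piece_length:
        headers['Range'] = 'bytes=' + ranges
    return headers

hdrs = {'User-Agent': 'DebTorrent/0.1'}  # the real code sends VERSION here
build_request_headers(hdrs, '0-16383', 16384, 32768)
```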
@@ -112,13 +235,20 @@
self.error = None
self.received_data = None
try:
- self.connection.request('GET',self.url, None,
- {'User-Agent': VERSION})
+ if DEBUG:
+ print 'HTTPDownloader: sending request'
+ print 'GET', self.url, self.headers
+ self.connection.request('GET',self.url, None, self.headers)
r = self.connection.getresponse()
+ if DEBUG:
+ print 'HTTPDownloader: got response'
+ print r.status, r.reason, r.getheaders()
self.connection_status = r.status
self.received_data = r.read()
except Exception, e:
self.error = 'error accessing http seed: '+str(e)
+ if DEBUG:
+ print 'error accessing http seed: '+str(e)
try:
self.connection.close()
except:
@@ -130,6 +260,7 @@
self.downloader.rawserver.add_task(self.request_finished)
def request_finished(self):
+ """Process the completed request and schedule another."""
self.active = False
if self.error is not None:
if self.goodseed:
@@ -149,13 +280,23 @@
self.resched()
def _got_data(self):
+ """Process the returned data from the request.
+
+ Update the rate measures, pass the data to the storage, mark the piece
+ as complete.
+
+ @rtype: C{boolean}
+ @return: whether the data was good
+
+ """
+
if self.connection_status == 503: # seed is busy
try:
self.retry_period = max(int(self.received_data),5)
except:
pass
return False
- if self.connection_status != 200:
+ if self.connection_status not in [200, 206]:
self.errorcount += 1
return False
self._retry_period = 1
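_got_data() now accepts 206 Partial Content alongside 200, and still treats 503 as a busy seed whose response body may carry a retry delay. A self-contained sketch of that status handling, assuming the 503 body parses as a plain integer of seconds (the real method catches all parse errors, not just ValueError):

```python
def handle_status(status, body, retry_period):
    # 503: seed is busy; the body may carry a retry delay in seconds
    # (clamped to at least 5, as in _got_data()).
    if status == 503:
        try:
            retry_period = max(int(body), 5)
        except ValueError:
            pass
        return False, retry_period
    # Anything but 200 (whole file) or 206 (requested ranges) is a failure.
    if status not in (200, 206):
        return False, retry_period
    return True, retry_period
```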
@@ -179,6 +320,7 @@
return True
def _get_requests(self):
+ """Get the requests for a piece."""
self.requests = []
self.request_size = 0L
while self.downloader.storage.do_I_have_requests(self.index):
@@ -188,6 +330,13 @@
self.requests.sort()
def _fulfill_requests(self):
+ """Pass the downloaded data to the storage.
+
+ @rtype: C{boolean}
+ @return: whether the piece was successfully received (hash checked)
+
+ """
+
start = 0L
success = True
while self.requests:
@@ -200,11 +349,23 @@
return success
def _release_requests(self):
+ """Release any pending requests for piece ranges."""
for begin, length in self.requests:
self.downloader.storage.request_lost(self.index, begin, length)
self.requests = []
- def _request_ranges(self):
+ def _request_ranges(self, offset, end):
+ """Build a list of ranges to request from the site.
+
+ @type offset: C{long}
+ @param offset: the offset within the file that the piece starts at
+ @type end: C{long}
+ @param end: the offset within the file that the piece ends at
+ @rtype: C{string}
+ @return: the comma separated ranges to request
+
+ """
+
s = ''
begin, length = self.requests[0]
for begin1, length1 in self.requests[1:]:
@@ -214,18 +375,84 @@
else:
if s:
s += ','
- s += str(begin)+'-'+str(begin+length-1)
+ assert offset + begin + length <= end
+ s += str(offset + begin)+'-'+str(offset+begin+length-1)
begin, length = begin1, length1
if s:
s += ','
- s += str(begin)+'-'+str(begin+length-1)
+ assert offset + begin + length <= end
+ s += str(offset+begin)+'-'+str(offset+begin+length-1)
return s
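_request_ranges() above merges adjacent block requests before emitting the comma-separated byte ranges, shifted by the piece's offset within the file. A self-contained sketch of that coalescing (omitting the asserts against the piece's end offset):

```python
def request_ranges(requests, offset):
    # requests is a sorted list of (begin, length) pairs within the piece;
    # offset is where the piece starts in the file.
    parts = []
    begin, length = requests[0]
    for begin1, length1 in requests[1:]:
        if begin1 == begin + length:
            # Adjacent to the current run: merge rather than emit.
            length += length1
        else:
            parts.append('%d-%d' % (offset + begin, offset + begin + length - 1))
            begin, length = begin1, length1
    parts.append('%d-%d' % (offset + begin, offset + begin + length - 1))
    return ','.join(parts)
```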
class HTTPDownloader:
+ """Collection of all the HTTP downloads.
+
+ @type storage: L{StorageWrapper.StorageWrapper}
+ @ivar storage: the piece storage instance
+ @type picker: L{PiecePicker.PiecePicker}
+ @ivar picker: the piece choosing instance
+ @type rawserver: L{DebTorrent.RawServer.RawServer}
+ @ivar rawserver: the server
+ @type finflag: C{threading.Event}
+ @ivar finflag: the flag indicating when the download is complete
+ @type errorfunc: C{method}
+ @ivar errorfunc: the method to call when an error occurs
+ @type peerdownloader: L{Downloader.Downloader}
+ @ivar peerdownloader: the instance of the collection of normal downloaders
+ @type infohash: C{string}
+ @ivar infohash: the info hash
+ @type max_rate_period: C{float}
+ @ivar max_rate_period: the maximum amount of time that the current
+ rate estimate is taken to represent
+ @type gotpiecefunc: C{method}
+ @ivar gotpiecefunc: the method to call when a piece comes in
+ @type measurefunc: C{method}
+ @ivar measurefunc: the method to call to add downloaded data to the total
+ download rate measurement
+ @type filenamefunc: C{method}
+ @ivar filenamefunc: the method to call to determine the file name that
+ the download is being saved under
+ @type downloads: C{list} of L{SingleDownload}
+ @ivar downloads: the list of all current download connections to sites
+ @type seedsfound: C{int}
+ @ivar seedsfound: the number of seeds successfully downloaded from
+
+ """
+
def __init__(self, storage, picker, rawserver,
finflag, errorfunc, peerdownloader,
- max_rate_period, infohash, measurefunc, gotpiecefunc):
+ max_rate_period, infohash, measurefunc, gotpiecefunc,
+ filenamefunc):
+ """Initialize the instance.
+
+ @type storage: L{StorageWrapper.StorageWrapper}
+ @param storage: the piece storage instance
+ @type picker: L{PiecePicker.PiecePicker}
+ @param picker: the piece choosing instance
+ @type rawserver: L{DebTorrent.RawServer.RawServer}
+ @param rawserver: the server
+ @type finflag: C{threading.Event}
+ @param finflag: the flag indicating when the download is complete
+ @type errorfunc: C{method}
+ @param errorfunc: the method to call when an error occurs
+ @type peerdownloader: L{Downloader.Downloader}
+ @param peerdownloader: the instance of the collection of normal downloaders
+ @type max_rate_period: C{float}
+ @param max_rate_period: the maximum amount of time that the current
+ rate estimate is taken to represent
+ @type infohash: C{string}
+ @param infohash: the info hash
+ @type measurefunc: C{method}
+ @param measurefunc: the method to call to add downloaded data to the total
+ download rate measurement
+ @type gotpiecefunc: C{method}
+ @param gotpiecefunc: the method to call when a piece comes in
+ @type filenamefunc: C{method}
+ @param filenamefunc: the method to call to determine the save location
+
+ """
+
self.storage = storage
self.picker = picker
self.rawserver = rawserver
@@ -238,17 +465,45 @@
self.measurefunc = measurefunc
self.downloads = []
self.seedsfound = 0
+ self.filenamefunc = filenamefunc
def make_download(self, url):
+ """Create a new download from a site.
+
+ @type url: C{string}
+ @param url: the base URL to use for downloading from that site
+ @rtype: L{SingleDownload}
+ @return: the SingleDownload instance created
+
+ """
+
+ if DEBUG:
+ print 'Starting a deb_mirror downloader for:', url
self.downloads.append(SingleDownload(self, url))
return self.downloads[-1]
def get_downloads(self):
+ """Get the list of all current downloads.
+
+ @rtype: C{list} of L{SingleDownload}
+ @return: all current downloads from sites
+
+ """
+
if self.finflag.isSet():
return []
return self.downloads
def cancel_piece_download(self, pieces):
+ """Cancel any active downloads for the pieces.
+
+ @type pieces: C{list} of C{int}
+ @param pieces: the list of pieces to cancel downloads of
+
+ """
+
+ if DEBUG:
+ print 'Cancelling all HTTP downloads for pieces:', pieces
for d in self.downloads:
if d.active and d.index in pieces:
d.cancelled = True
Modified: debtorrent/trunk/DebTorrent/BT1/PiecePicker.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/BT1/PiecePicker.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/BT1/PiecePicker.py (original)
+++ debtorrent/trunk/DebTorrent/BT1/PiecePicker.py Wed Jun 13 21:32:05 2007
@@ -24,7 +24,7 @@
self.started = []
self.totalcount = 0
self.numhaves = [0] * numpieces
- self.priority = [1] * numpieces
+ self.priority = [-1] * numpieces
self.removed_partials = {}
self.crosscount = [numpieces]
self.crosscount2 = [numpieces]
@@ -46,7 +46,8 @@
self.pos_in_interests = [0] * self.numpieces
for i in xrange(self.numpieces):
self.pos_in_interests[interests[i]] = i
- self.interests.append(interests)
+ # Init all interest levels to empty
+ self.interests.append([])
def got_have(self, piece):
Modified: debtorrent/trunk/DebTorrent/BT1/Storage.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/BT1/Storage.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/BT1/Storage.py (original)
+++ debtorrent/trunk/DebTorrent/BT1/Storage.py Wed Jun 13 21:32:05 2007
@@ -77,6 +77,9 @@
end offset, offset within the file, and file name
@type file_pieces: C{list} of (C{int}, C{int})
@ivar file_pieces: for each file, the starting and ending piece of the file
+ @type piece_files: C{dictionary}
+ @ivar piece_files: for each piece, the starting and ending offset of the
+ piece in the file, the length of the file, and the file name
@type disabled_ranges: C{list} of C{tuple}
@ivar disabled_ranges: for each file, a tuple containing the working range,
shared pieces, and disabled range (see L{_get_disabled_ranges} for their
@@ -148,7 +151,7 @@
@param config: the configuration information
@type disabled_files: C{list} of C{boolean}
@param disabled_files: list of true for the files that are disabled
- (optional, default is no files disabled)
+ (optional, default is all files disabled)
@raise IOError: unknown
@raise ValueError: unknown
@@ -157,9 +160,10 @@
self.files = files
self.piece_lengths = piece_lengths
self.doneflag = doneflag
- self.disabled = [False] * len(files)
+ self.disabled = [True] * len(files)
self.file_ranges = []
self.file_pieces = []
+ self.piece_files = {}
self.disabled_ranges = []
self.working_ranges = []
numfiles = 0
@@ -180,7 +184,7 @@
self.lock = Lock()
if not disabled_files:
- disabled_files = [False] * len(files)
+ disabled_files = [True] * len(files)
for i in xrange(len(files)):
file, length = files[i]
@@ -197,14 +201,19 @@
else:
range = (total, total + length, 0, file)
self.file_ranges.append(range)
- self.working_ranges.append([range])
+ self.working_ranges.append([])
numfiles += 1
total += length
start_piece = cur_piece
+ cur_piece_offset = 0L
for cur_piece in xrange(start_piece,len(self.piece_lengths)+1):
if piece_total >= total:
break
+ self.piece_files[cur_piece] = (cur_piece_offset,
+ cur_piece_offset + self.piece_lengths[cur_piece],
+ length, file)
piece_total += self.piece_lengths[cur_piece]
+ cur_piece_offset += self.piece_lengths[cur_piece]
end_piece = cur_piece-1
if piece_total > total:
cur_piece -= 1
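The loop above populates the new piece_files map consumed by get_file_range(). A simplified, single-file sketch of the same bookkeeping, assuming (as the commit does elsewhere) that files end on piece boundaries:

```python
def map_pieces_to_file(piece_lengths, start_piece, file_length, filename):
    # For each piece making up the file, record its start and end offsets
    # within the file, plus the file's length and name, keyed by piece index.
    piece_files = {}
    offset = 0
    piece = start_piece
    while offset < file_length:
        plen = piece_lengths[piece]
        piece_files[piece] = (offset, offset + plen, file_length, filename)
        offset += plen
        piece += 1
    return piece_files

# Three pieces of 100, 100 and 50 bytes making up one 250-byte file:
pieces = map_pieces_to_file([100, 100, 50], 0, 250, 'apt_0.7.2_i386.deb')
```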
@@ -232,6 +241,8 @@
self.sizes[file] = length
so_far += l
+ if DEBUG:
+ print 'piece_files:', self.piece_files
self.total_length = total
self._reset_ranges()
@@ -527,8 +538,21 @@
self.ranges.extend(l)
self.begins = [i[0] for i in self.ranges]
if DEBUG:
- print str(self.ranges)
- print str(self.begins)
+ print 'file ranges:', str(self.ranges)
+ print 'file begins:', str(self.begins)
+
+ def get_file_range(self, index):
+ """Get the file name and range that corresponds to this piece.
+
+ @type index: C{int}
+ @param index: the piece index to get a file range for
+ @rtype: (C{long}, C{long}, C{long}, C{string})
+ @return: the start and end offsets of the piece in the file, the length
+ of the file, and the name of the file
+
+ """
+
+ return self.piece_files[index]
def _intervals(self, pos, amount):
"""Get the files that are within the range.
@@ -549,7 +573,7 @@
r = []
stop = pos + amount
- p = bisect(self.begins, pos) - 1
+ p = max(bisect(self.begins, pos) - 1,0)
while p < len(self.ranges):
begin, end, offset, file = self.ranges[p]
if begin >= stop:
@@ -708,8 +732,8 @@
update_pieces = []
if DEBUG:
- print str(working_range)
- print str(update_pieces)
+ print 'working range:', str(working_range)
+ print 'update pieces:', str(update_pieces)
r = (tuple(working_range), tuple(update_pieces), tuple(disabled_files))
self.disabled_ranges[f] = r
return r
@@ -735,6 +759,8 @@
if not self.disabled[f]:
return
+ if DEBUG:
+ print 'enabling file '+self.files[f][0]
self.disabled[f] = False
r = self.file_ranges[f]
if not r:
@@ -760,6 +786,8 @@
"""
if self.disabled[f]:
return
+ if DEBUG:
+ print 'disabling file '+self.files[f][0]
self.disabled[f] = True
r = self._get_disabled_ranges(f)
if not r:
@@ -837,8 +865,9 @@
if not self.files[i][1]: # length == 0
continue
if self.disabled[i]:
- for start, end, offset, file in self._get_disabled_ranges(i)[2]:
- pfiles.extend([basename(file),getsize(file),int(getmtime(file))])
+ # Removed due to files always ending on pieces
+ #for start, end, offset, file in self._get_disabled_ranges(i)[2]:
+ # pfiles.extend([basename(file),getsize(file),int(getmtime(file))])
continue
file = self.files[i][0]
files.extend([i,getsize(file),int(getmtime(file))])
@@ -886,7 +915,7 @@
valid_pieces[p] = 1
if DEBUG:
- print valid_pieces.keys()
+ print 'Saved list of valid pieces:', valid_pieces.keys()
def test(old, size, mtime):
"""Test that the file has not changed since the status save.
@@ -933,6 +962,6 @@
return []
if DEBUG:
- print valid_pieces.keys()
+ print 'Final list of valid pieces:', valid_pieces.keys()
return valid_pieces.keys()
Modified: debtorrent/trunk/DebTorrent/BT1/makemetafile.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/BT1/makemetafile.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/BT1/makemetafile.py (original)
+++ debtorrent/trunk/DebTorrent/BT1/makemetafile.py Wed Jun 13 21:32:05 2007
@@ -26,8 +26,8 @@
defaults = [
('announce_list', '',
'a list of announce URLs - explained below'),
- ('httpseeds', '',
- 'a list of http seed URLs - explained below'),
+ ('deb_mirrors', '',
+ 'a list of mirror URLs - explained below'),
('piece_size_pow2', 0,
"which power of 2 to set the piece size to (0 = automatic)"),
('comment', '',
@@ -59,7 +59,7 @@
print (' http://tracker1.com|http://backup1.com,http://backup2.com')
print (' (tries tracker 1 first, then tries between the 2 backups randomly)')
print ('')
- print (' httpseeds = optional list of http-seed URLs, in the format:')
+ print (' deb_mirrors = optional list of mirror URLs, in the format:')
print (' url[|url...]')
def uniconvertl(l, e):
@@ -139,10 +139,10 @@
l.append(tier.split(','))
data['announce-list'] = l
- if params.has_key('real_httpseeds'): # shortcut for progs calling in from outside
- data['httpseeds'] = params['real_httpseeds']
- elif params.has_key('httpseeds') and params['httpseeds']:
- data['httpseeds'] = params['httpseeds'].split('|')
+ if params.has_key('real_deb_mirrors'): # shortcut for progs calling in from outside
+ data['deb_mirrors'] = params['real_deb_mirrors']
+ elif params.has_key('deb_mirrors') and params['deb_mirrors']:
+ data['deb_mirrors'] = params['deb_mirrors'].split('|')
h.write(bencode(data))
h.close()
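The renamed parameter keeps the old httpseeds semantics: 'real_deb_mirrors' is a ready-made list for external callers, while 'deb_mirrors' is a '|'-separated string. Sketched as a standalone helper (hypothetical function name):

```python
def mirror_list(params):
    # External callers may pass a ready-made list; command-line users
    # pass a '|'-separated string, as in makemetafile above.
    if 'real_deb_mirrors' in params:
        return params['real_deb_mirrors']
    if params.get('deb_mirrors'):
        return params['deb_mirrors'].split('|')
    return None
```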
Modified: debtorrent/trunk/DebTorrent/BT1/track.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/BT1/track.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/BT1/track.py (original)
+++ debtorrent/trunk/DebTorrent/BT1/track.py Wed Jun 13 21:32:05 2007
@@ -1,8 +1,22 @@
# Written by Bram Cohen
# Modified by Cameron Dale
# see LICENSE.txt for license information
-
+#
# $Id$
+
+"""Tools to track a download swarm.
+
+@type defaults: C{list} of (C{string}, C{unknown}, C{string})
+@var defaults: the parameter names, default values, and descriptions
+@type alas: C{string}
+@var alas: the message to send when the data is not found
+@type local_IPs: L{DebTorrent.subnetparse.IP_List}
+@var local_IPs: the list of IP subnets that are considered local
+@type http_via_filter: C{Regular Expression}
+@var http_via_filter: the regular expression object to search 'via'
+ header information for the NAT IP address
+
+"""
from DebTorrent.parseargs import parseargs, formatDefinitions
from DebTorrent.RawServer import RawServer, autodetect_ipv6, autodetect_socket_style
@@ -100,6 +114,14 @@
]
def statefiletemplate(x):
+ """Check the saved state file for corruption.
+
+ @type x: C{dictionary}
+ @param x: the dictionary of information retrieved from the state file
+ @raise ValueError: if the state file info is corrupt
+
+ """
+
if type(x) != DictType:
raise ValueError
for cname, cinfo in x.items():
@@ -159,6 +181,16 @@
def isotime(secs = None):
+ """Create an ISO formatted string of the time.
+
+ @type secs: C{float}
+ @param secs: number of seconds since the epoch
+ (optional, default is to use the current time)
+ @rtype: C{string}
+ @return: the ISO formatted string representation of the time
+
+ """
+
if secs == None:
secs = time()
return strftime('%Y-%m-%d %H:%M UTC', gmtime(secs))
@@ -166,6 +198,15 @@
http_via_filter = re.compile(' for ([0-9.]+)\Z')
def _get_forwarded_ip(headers):
+ """Extract the unNATed IP address from the headers.
+
+ @type headers: C{dictionary}
+ @param headers: the headers received from the client
+ @rtype: C{string}
+ @return: the extracted IP address
+
+ """
+
header = headers.get('x-forwarded-for')
if header:
try:
@@ -192,12 +233,33 @@
return header
def get_forwarded_ip(headers):
+ """Extract the unNATed IP address from the headers.
+
+ @type headers: C{dictionary}
+ @param headers: the headers received from the client
+ @rtype: C{string}
+ @return: the extracted IP address (or None if one could not be extracted)
+
+ """
+
x = _get_forwarded_ip(headers)
if not is_valid_ip(x) or local_IPs.includes(x):
return None
return x
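get_forwarded_ip() wraps _get_forwarded_ip() with validity and locality checks. A deliberately simplified sketch of the X-Forwarded-For half of the extraction (the real function also consults other proxy headers and the 'via' filter above):

```python
def forwarded_ip(headers):
    # The first address in X-Forwarded-For is the original client;
    # any later entries are intermediate proxies.
    header = headers.get('x-forwarded-for')
    if header:
        return header.split(',')[0].strip()
    return None
```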
def compact_peer_info(ip, port):
+ """Create a compact representation of peer contact info.
+
+ @type ip: C{string}
+ @param ip: the IP address of the peer
+ @type port: C{int}
+ @param port: the port number to contact the peer on
+ @rtype: C{string}
+ @return: the compact representation (or the empty string if there is no
+ compact representation)
+
+ """
+
try:
s = ( ''.join([chr(int(i)) for i in ip.split('.')])
+ chr((port & 0xFF00) >> 8) + chr(port & 0xFF) )
@@ -208,7 +270,123 @@
return s
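compact_peer_info() packs a peer into the 6-byte compact form: four address octets followed by a big-endian port. The same logic in modern Python (the original returns a Python 2 str and swallows all exceptions, not just ValueError):

```python
def compact_peer_info(ip, port):
    # Four bytes of IPv4 address followed by two bytes of big-endian port;
    # an empty bytestring means there is no compact representation.
    try:
        s = (bytes(int(i) for i in ip.split('.'))
             + bytes([(port & 0xFF00) >> 8, port & 0xFF]))
        if len(s) != 6:
            return b''  # not a dotted quad
    except ValueError:
        return b''      # unparseable address or octet out of range
    return s
```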
class Tracker:
+ """Track a download swarm.
+
+ @type config: C{dictionary}
+ @ivar config: the configuration parameters
+ @type response_size: unknown
+ @ivar response_size: unknown
+ @type dfile: C{string}
+ @ivar dfile: the state file to use when saving the current state
+ @type natcheck: C{int}
+ @ivar natcheck: how many times to check if a downloader is behind a NAT
+ @type parse_dir_interval: C{int}
+ @ivar parse_dir_interval: seconds between reloading of the allowed
+ directory or file, and the lists of allowed and banned IPs
+ @type favicon: C{string}
+ @ivar favicon: file containing x-icon data
+ @type rawserver: L{DebTorrent.RawServer.RawServer}
+ @ivar rawserver: the server to use for scheduling
+ @type cached: unknown
+ @ivar cached: unknown
+ @type cached_t: unknown
+ @ivar cached_t: unknown
+ @type times: unknown
+ @ivar times: unknown
+ @type state: C{dictionary}
+ @ivar state: the current state information for the tracking
+ @type seedcount: unknown
+ @ivar seedcount: unknown
+ @type allowed_IPs: unknown
+ @ivar allowed_IPs: unknown
+ @type banned_IPs: unknown
+ @ivar banned_IPs: unknown
+ @type allowed_ip_mtime: unknown
+ @ivar allowed_ip_mtime: unknown
+ @type banned_ip_mtime: unknown
+ @ivar banned_ip_mtime: unknown
+ @type only_local_override_ip: C{boolean}
+ @ivar only_local_override_ip: whether to ignore the "ip" parameter from
+ machines which aren't on local network IPs
+ @type downloads: unknown
+ @ivar downloads: unknown
+ @type completed: unknown
+ @ivar completed: unknown
+ @type becache: C{list} of C{list} of C{dictionary}
+ @ivar becache: keys are the infohashes, values are the cached peer data.
+
+ peer set format::
+ [[l0, s0], [l1, s1], ...]
+ l,s = dictionaries of leechers and seeders (by peer ID)
+ l0,s0 = compact representation, don't require crypto
+ l1,s1 = compact representation, support crypto
+ l2,s2 = [compact representation, crypto_flag], for all peers
+ additionally, if --compact_reqd = 0:
+ l3,s3 = [ip,port,peerid] for all peers
+ l4,s4 = [ip,port] for all peers
+ @type cache_default: unknown
+ @ivar cache_default: unknown
+ @type trackerid: unknown
+ @ivar trackerid: unknown
+ @type reannounce_interval: C{int}
+ @ivar reannounce_interval: seconds downloaders should wait between reannouncements
+ @type save_dfile_interval: C{int}
+ @ivar save_dfile_interval: seconds between saving the state file
+ @type show_names: C{boolean}
+ @ivar show_names: whether to display names from allowed dir
+ @type prevtime: unknown
+ @ivar prevtime: unknown
+ @type timeout_downloaders_interval: C{int}
+ @ivar timeout_downloaders_interval: seconds between expiring downloaders
+ @type logfile: unknown
+ @ivar logfile: unknown
+ @type log: unknown
+ @ivar log: unknown
+ @type allow_get: unknown
+ @ivar allow_get: unknown
+ @type t2tlist: L{T2T.T2TList}
+ @ivar t2tlist: unknown
+ @type allowed: unknown
+ @ivar allowed: unknown
+ @type allowed_list_mtime: unknown
+ @ivar allowed_list_mtime: unknown
+ @type allowed_dir_files: unknown
+ @ivar allowed_dir_files: unknown
+ @type allowed_dir_blocked: unknown
+ @ivar allowed_dir_blocked: unknown
+ @type uq_broken: unknown
+ @ivar uq_broken: unknown
+ @type keep_dead: unknown
+ @ivar keep_dead: unknown
+ @type Filter: unknown
+ @ivar Filter: unknown
+ @type is_aggregator: unknown
+ @ivar is_aggregator: unknown
+ @type aggregator_key: unknown
+ @ivar aggregator_key: unknown
+ @type aggregate_forward: unknown
+ @ivar aggregate_forward: unknown
+ @type aggregate_password: unknown
+ @ivar aggregate_password: unknown
+ @type dedicated_seed_id: unknown
+ @ivar dedicated_seed_id: unknown
+ @type is_seeded: unknown
+ @ivar is_seeded: unknown
+ @type cachetime: unknown
+ @ivar cachetime: unknown
+
+ """
+
def __init__(self, config, rawserver):
+ """Initialize the instance.
+
+ @type config: C{dictionary}
+ @param config: the configuration parameters
+ @type rawserver: L{DebTorrent.RawServer.RawServer}
+ @param rawserver: the server to use for scheduling
+
+ """
+
self.config = config
self.response_size = config['response_size']
self.dfile = config['dfile']
@@ -261,14 +439,7 @@
self.completed = self.state.setdefault('completed', {})
self.becache = {}
- ''' format: infohash: [[l0, s0], [l1, s1], ...]
- l0,s0 = compact, not requirecrypto=1
- l1,s1 = compact, only supportcrypto=1
- l2,s2 = [compact, crypto_flag], all peers
- if --compact_reqd 0:
- l3,s3 = [ip,port,id]
- l4,l4 = [ip,port] nopeerid
- '''
+
if config['compact_reqd']:
x = 3
else:
@@ -320,6 +491,19 @@
if config['hupmonitor']:
def huphandler(signum, frame, self = self):
+ """Function to handle SIGHUP signals.
+
+ Reopens the log file when a SIGHUP is received.
+
+ @type signum: unknown
+ @param signum: ignored
+ @type frame: unknown
+ @param frame: ignored
+ @type self: L{Tracker}
+ @param self: the Tracker instance to reopen the log of
+
+ """
+
try:
self.log.close ()
self.log = open(self.logfile,'a')
@@ -397,10 +581,18 @@
self.cachetimeupdate()
def cachetimeupdate(self):
+ """Update the L{cachetime} every second."""
self.cachetime += 1 # raw clock, but more efficient for cache
self.rawserver.add_task(self.cachetimeupdate,1)
def aggregate_senddata(self, query):
+ """Fork sending data to a tracker aggregator.
+
+ @type query: C{string}
+ @param query: the query to send
+
+ """
+
url = self.aggregate_forward+'?'+query
if self.aggregate_password is not None:
url += '&password='+self.aggregate_password
@@ -408,8 +600,17 @@
rq.setDaemon(False)
rq.start()
- def _aggregate_senddata(self, url): # just send, don't attempt to error check,
- try: # discard any returned data
+ def _aggregate_senddata(self, url):
+ """Send a URL request to a tracker data aggregator.
+
+ Just send, don't attempt to error check, discard any returned data.
+
+ @type url: C{string}
+ @param url: the URL to request
+
+ """
+
+ try:
h = urlopen(url)
h.read()
h.close()
@@ -418,6 +619,16 @@
def get_infopage(self):
+ """Format the info page to display for normal browsers.
+
+ Formats the currently tracked torrents into a table in human-readable
+ format to display in a browser window.
+
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+ @return: the HTTP status code, status message, headers, and message body
+
+ """
+
try:
if not self.config['show_infopage']:
return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
@@ -510,6 +721,18 @@
def scrapedata(self, hash, return_name = True):
+ """Retrieve the scrape data for a single torrent.
+
+ @type hash: C{string}
+ @param hash: the infohash of the torrent to get scrape data for
+ @type return_name: C{boolean}
+ @param return_name: whether to return the name of the torrent
+ (optional, defaults to True)
+ @rtype: C{dictionary}
+ @return: the scrape data for the torrent
+
+ """
+
l = self.downloads[hash]
n = self.completed.get(hash, 0)
c = self.seedcount[hash]
@@ -520,6 +743,16 @@
return (f)
def get_scrape(self, paramslist):
+ """Get the scrape data for all the active torrents.
+
+ @type paramslist: C{dictionary}
+ @param paramslist: the query parameters from the GET request
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+ @return: the HTTP status code, status message, headers, and bencoded
+ message body
+
+ """
+
fs = {}
if paramslist.has_key('info_hash'):
if self.config['scrape_allowed'] not in ['specific', 'full']:
@@ -549,19 +782,41 @@
def get_file(self, hash):
- if not self.allow_get:
- return (400, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
- 'get function is not available with this tracker.')
- if not self.allowed.has_key(hash):
- return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
- fname = self.allowed[hash]['file']
- fpath = self.allowed[hash]['path']
- return (200, 'OK', {'Content-Type': 'application/x-debtorrent',
- 'Content-Disposition': 'attachment; filename=' + fname},
- open(fpath, 'rb').read())
+ """Get the metainfo file for a torrent.
+
+ @type hash: C{string}
+ @param hash: the infohash of the torrent to get the metainfo of
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+ @return: the HTTP status code, status message, headers, and bencoded
+ metainfo file
+
+ """
+
+ if not self.allow_get:
+ return (400, 'Not Authorized', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'},
+ 'get function is not available with this tracker.')
+ if not self.allowed.has_key(hash):
+ return (404, 'Not Found', {'Content-Type': 'text/plain', 'Pragma': 'no-cache'}, alas)
+ fname = self.allowed[hash]['file']
+ fpath = self.allowed[hash]['path']
+ return (200, 'OK', {'Content-Type': 'application/x-debtorrent',
+ 'Content-Disposition': 'attachment; filename=' + fname},
+ open(fpath, 'rb').read())
def check_allowed(self, infohash, paramslist):
+ """Determine whether the tracker is tracking this torrent.
+
+ @type infohash: C{string}
+ @param infohash: the infohash of the torrent to check
+ @type paramslist: C{dictionary}
+ @param paramslist: the query parameters from the GET request
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+ @return: the HTTP status code, status message, headers, and bencoded
+ message body if the request is not allowed, or None if it is
+
+ """
+
if ( self.aggregator_key is not None
and not ( paramslist.has_key('password')
and paramslist['password'][0] == self.aggregator_key ) ):
@@ -595,12 +850,42 @@
def add_data(self, infohash, event, ip, paramslist):
+ """Add the received data from the peer to the cache.
+
+ @type infohash: C{string}
+ @param infohash: the infohash of the torrent to add to
+ @type event: C{string}
+ @param event: the type of event being reported by the peer
+ @type ip: C{string}
+ @param ip: the IP address of the peer
+ @type paramslist: C{dictionary}
+ @param paramslist: the query parameters from the GET request
+ @rtype: C{int}
+ @return: the number of peers to return in a peer list
+ @raise ValueError: if the request from the peer is corrupt
+
+ """
+
peers = self.downloads.setdefault(infohash, {})
ts = self.times.setdefault(infohash, {})
self.completed.setdefault(infohash, 0)
self.seedcount.setdefault(infohash, 0)
def params(key, default = None, l = paramslist):
+ """Get the user parameter, or the default.
+
+ @type key: C{string}
+ @param key: the parameter to get
+ @type default: C{string}
+ @param default: the default value to use if no parameter is set
+ (optional, defaults to None)
+ @type l: C{dictionary}
+ @param l: the user parameters (optional, defaults to L{paramslist})
+ @rtype: C{string}
+ @return: the parameter's value
+
+ """
+
if l.has_key(key):
return l[key][0]
return default
@@ -740,6 +1025,28 @@
def peerlist(self, infohash, stopped, tracker, is_seed,
return_type, rsize, supportcrypto):
+ """Create a list of peers to return to the client.
+
+ @type infohash: C{string}
+ @param infohash: the infohash of the torrent to get peers from
+ @type stopped: C{boolean}
+ @param stopped: whether the peer has stopped
+ @type tracker: C{boolean}
+ @param tracker: whether the peer is a tracker
+ @type is_seed: C{boolean}
+ @param is_seed: whether the peer is currently seeding
+ @type return_type: C{int}
+ @param return_type: the format of the list to return (compact, ...)
+ @type rsize: C{int}
+ @param rsize: the number of peers to return
+ @type supportcrypto: C{boolean}
+ @param supportcrypto: whether the peer supports encrypted communication
+ (not used)
+ @rtype: C{dictionary}
+ @return: the info to return to the client
+
+ """
+
data = {} # return data
seeds = self.seedcount[infohash]
data['complete'] = seeds
@@ -840,6 +1147,23 @@
def get(self, connection, path, headers):
+ """Respond to a GET request to the tracker.
+
+ Process a GET request from a peer/tracker/browser, calling the helper
+ functions above as needed, and return the response to send to the
+ requester.
+
+ @type connection: unknown
+ @param connection: the connection the request came in on
+ @type path: C{string}
+ @param path: the URL being requested
+ @type headers: C{dictionary}
+ @param headers: the headers from the request
+ @rtype: (C{int}, C{string}, C{dictionary}, C{string})
+ @return: the HTTP status code, status message, headers, and message body
+
+ """
+
real_ip = connection.get_ip()
ip = real_ip
if is_ipv4(ip):
@@ -868,6 +1192,20 @@
paramslist = {}
def params(key, default = None, l = paramslist):
+ """Get the user parameter, or the default.
+
+ @type key: C{string}
+ @param key: the parameter to get
+ @type default: C{string}
+ @param default: the default value to use if no parameter is set
+ (optional, defaults to None)
+ @type l: C{dictionary}
+ @param l: the user parameters (optional, defaults to L{paramslist})
+ @rtype: C{string}
+ @return: the parameter's value
+
+ """
+
if l.has_key(key):
return l[key][0]
return default
@@ -962,6 +1300,21 @@
def natcheckOK(self, infohash, peerid, ip, port, peer):
+ """Add the unNATed peer to the cache.
+
+ @type infohash: C{string}
+ @param infohash: the infohash of the torrent the peer is in
+ @type peerid: C{string}
+ @param peerid: the peer ID of the peer
+ @type ip: C{string}
+ @param ip: the IP address of the peer
+ @type port: C{int}
+ @param port: the port to contact the peer on
+ @type peer: C{dictionary}
+ @param peer: various information about the peer
+
+ """
+
seed = not peer['left']
bc = self.becache.setdefault(infohash,self.cache_default)
cp = compact_peer_info(ip, port)
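`compact_peer_info` (defined elsewhere in the tracker code) packs a peer into the standard 6-byte compact form: four IPv4 address bytes followed by a big-endian two-byte port, per BEP 23. A Python 3 sketch of that encoding:

```python
import socket
import struct

def compact_peer_info(ip, port):
    """Pack an IPv4 address and port into the 6-byte compact peer form:
    4 address bytes followed by a big-endian 2-byte port."""
    return socket.inet_aton(ip) + struct.pack('>H', port)
```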
@@ -978,12 +1331,40 @@
def natchecklog(self, peerid, ip, port, result):
+ """Log the results of any NAT checks performed.
+
+ @type peerid: C{string}
+ @param peerid: the peer ID of the peer
+ @type ip: C{string}
+ @param ip: the IP address of the peer
+ @type port: C{int}
+ @param port: the port to contact the peer on
+ @type result: C{int}
+ @param result: the HTTP status code result of the NAT check
+
+ """
+
year, month, day, hour, minute, second, a, b, c = localtime(time())
print '%s - %s [%02d/%3s/%04d:%02d:%02d:%02d] "!natcheck-%s:%i" %i 0 - -' % (
ip, quote(peerid), day, months[month], year, hour, minute, second,
ip, port, result)
def connectback_result(self, result, downloadid, peerid, ip, port):
+ """Process a NAT check attempt and result.
+
+ @type result: C{boolean}
+ @param result: whether the NAT check was successful
+ @type downloadid: C{string}
+ @param downloadid: the infohash of the torrent the peer is in
+ @type peerid: C{string}
+ @param peerid: the peer ID of the peer
+ @type ip: C{string}
+ @param ip: the IP address of the peer
+ @type port: C{int}
+ @param port: the port to contact the peer on
+
+ """
+
record = self.downloads.get(downloadid,{}).get(peerid)
if ( record is None
or (record['ip'] != ip and record.get('given ip') != ip)
@@ -1009,6 +1390,7 @@
def remove_from_state(self, *l):
+ """Remove all the input parameter names from the current state."""
for s in l:
try:
del self.state[s]
@@ -1016,6 +1398,7 @@
pass
def save_state(self):
+ """Save the state file to disk."""
self.rawserver.add_task(self.save_state, self.save_dfile_interval)
h = open(self.dfile, 'wb')
h.write(bencode(self.state))
@@ -1023,6 +1406,7 @@
def parse_allowed(self):
+ """Periodically parse the directory and list for allowed torrents."""
self.rawserver.add_task(self.parse_allowed, self.parse_dir_interval)
if self.config['allowed_dir']:
@@ -1057,6 +1441,7 @@
def read_ip_lists(self):
+ """Periodically parse the allowed and banned IPs lists."""
self.rawserver.add_task(self.read_ip_lists,self.parse_dir_interval)
f = self.config['allowed_ips']
@@ -1079,6 +1464,15 @@
def delete_peer(self, infohash, peerid):
+ """Delete all cached data for the peer.
+
+ @type infohash: C{string}
+ @param infohash: the infohash of the torrent to delete the peer from
+ @type peerid: C{string}
+ @param peerid: the peer ID of the peer to delete
+
+ """
+
dls = self.downloads[infohash]
peer = dls[peerid]
if not peer['left']:
@@ -1093,6 +1487,7 @@
del dls[peerid]
def expire_downloaders(self):
+ """Periodically remove all old downloaders from the cached data."""
for x in self.times.keys():
for myid, t in self.times[x].items():
if t < self.prevtime:
@@ -1109,6 +1504,13 @@
def track(args):
+ """Start the server and tracker.
+
+ @type args: C{list}
+ @param args: the command line arguments to the tracker
+
+ """
+
if len(args) == 0:
print formatDefinitions(defaults, 80)
return
@@ -1128,6 +1530,15 @@
print '# Shutting down: ' + isotime()
def size_format(s):
+ """Format a byte size for reading by the user.
+
+ @type s: C{long}
+ @param s: the number of bytes
+ @rtype: C{string}
+ @return: the formatted size with appropriate units
+
+ """
+
if (s < 1024):
r = str(s) + 'B'
elif (s < 1048576):
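The diff truncates the remaining branches of `size_format`. A self-contained sketch of the same thresholding idea (the unit labels and precision here are assumptions, not the exact original output):

```python
def size_format(s):
    """Format a byte count with binary units for display to the user."""
    if s < 1024:
        return '%dB' % s
    elif s < 1048576:
        return '%.1fKiB' % (s / 1024.0)
    elif s < 1073741824:
        return '%.1fMiB' % (s / 1048576.0)
    return '%.1fGiB' % (s / 1073741824.0)
```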
Modified: debtorrent/trunk/DebTorrent/ConnChoice.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/ConnChoice.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/ConnChoice.py (original)
+++ debtorrent/trunk/DebTorrent/ConnChoice.py Wed Jun 13 21:32:05 2007
@@ -6,7 +6,7 @@
"""Sets the connection choices that are available.
@type connChoices: C{list} of C{dictionary}
- at var connChoiceList: Details for each type of connection. Includes limits
+ at var connChoices: Details for each type of connection. Includes limits
for each type on the upload rate and number of connections.
@type connChoiceList: C{list} of C{string}
@var connChoiceList: the names of the connections that are available
Modified: debtorrent/trunk/DebTorrent/RateLimiter.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/RateLimiter.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/RateLimiter.py (original)
+++ debtorrent/trunk/DebTorrent/RateLimiter.py Wed Jun 13 21:32:05 2007
@@ -112,7 +112,7 @@
def ping(self, delay):
if DEBUG:
- print delay
+ print 'ping delay:', delay
if not self.autoadjust:
return
self.pings.append(delay > PING_BOUNDARY)
Modified: debtorrent/trunk/DebTorrent/RawServer.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/RawServer.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/RawServer.py (original)
+++ debtorrent/trunk/DebTorrent/RawServer.py Wed Jun 13 21:32:05 2007
@@ -194,5 +194,8 @@
if not kbint: # don't report here if it's a keyboard interrupt
self.errorfunc(data.getvalue())
+ def set_handler(self, handler, port = None):
+ self.sockethandler.set_handler(handler, port)
+
def shutdown(self):
self.sockethandler.shutdown()
Modified: debtorrent/trunk/DebTorrent/ServerPortHandler.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/ServerPortHandler.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/ServerPortHandler.py (original)
+++ debtorrent/trunk/DebTorrent/ServerPortHandler.py Wed Jun 13 21:32:05 2007
@@ -1,8 +1,15 @@
# Written by John Hoffman
# Modified by Cameron Dale
# see LICENSE.txt for license information
-
+#
# $Id$
+
+"""Wrappers to handle multiple torrent downloads.
+
+ at type default_task_id: C{mutable}
+ at var default_task_id: the default task ID to use for scheduling all unspecified tasks
+
+"""
from cStringIO import StringIO
#from RawServer import RawServer
@@ -18,7 +25,47 @@
default_task_id = []
class SingleRawServer:
+ """Simplified Server to handle one of many torrents.
+
+ This class provides a wrapper around a master L{RawServer.RawServer}
+ instance, processing requests with the same interface and passing them
+ on to the master server.
+
+ @type info_hash: C{string}
+ @ivar info_hash: the torrent infohash this Server is responsible for
+ @type doneflag: C{threading.Event}
+ @ivar doneflag: flag to indicate this torrent is being shutdown
+ @type protocol: C{string}
+ @ivar protocol: the name of the communication protocol
+ @type multihandler: L{MultiHandler}
+ @ivar multihandler: the collection of all individual simplified servers
+ @type rawserver: L{RawServer.RawServer}
+ @ivar rawserver: the master Server instance
+ @type finished: C{boolean}
+ @ivar finished: whether this torrent has been shutdown
+ @type running: C{boolean}
+ @ivar running: whether this torrent has been started and is running
+ @type handler: unknown
+ @ivar handler: the data handler to use to process data received on the connection
+ @type taskqueue: C{list}
+ @ivar taskqueue: unknown
+
+ """
+
def __init__(self, info_hash, multihandler, doneflag, protocol):
+ """Initialize the instance.
+
+ @type info_hash: C{string}
+ @param info_hash: the torrent infohash this Server is responsible for
+ @type multihandler: L{MultiHandler}
+ @param multihandler: the collection of all individual simplified servers
+ @type doneflag: C{threading.Event}
+ @param doneflag: flag to indicate this torrent is being shutdown
+ @type protocol: C{string}
+ @param protocol: the name of the communication protocol
+
+ """
+
self.info_hash = info_hash
self.doneflag = doneflag
self.protocol = protocol
@@ -30,10 +77,12 @@
self.taskqueue = []
def shutdown(self):
+ """Tell the collection to shutdown this torrent."""
if not self.finished:
self.multihandler.shutdown_torrent(self.info_hash)
def _shutdown(self):
+ """Shutdown this torrent."""
if not self.finished:
self.finished = True
self.running = False
@@ -43,6 +92,20 @@
def _external_connection_made(self, c, options, already_read,
encrypted = None ):
+ """Processes a new socket connection to this torrent.
+
+ @type c: unknown
+ @param c: the new connection
+ @type options: unknown
+ @param options: the protocol options the connected peer supports
+ @type already_read: C{string}
+ @param already_read: the data that has already been read from the connection
+ @type encrypted: L{BTcrypto.Crypto}
+ @param encrypted: the Crypto instance to use to encrypt this connection's
+ communication (optional, defaults to None)
+
+ """
+
if self.running:
c.set_handler(self.handler)
self.handler.externally_handshaked_connection_made(
@@ -51,6 +114,17 @@
### RawServer functions ###
def add_task(self, func, delay=0, id = default_task_id):
+ """Passes a delayed call to a method on to the master Server.
+
+ @type func: C{method}
+ @param func: the method to call
+ @type delay: C{int}
+ @param delay: the number of seconds to delay before calling
+ @type id: C{mutable}
+ @param id: the ID of the task
+
+ """
+
if id is default_task_id:
id = self.info_hash
if not self.finished:
@@ -60,6 +134,18 @@
# pass # not handled here
def start_connection(self, dns, handler = None):
+ """Tell the master Server to start a new connection to a peer.
+
+ @type dns: (C{string}, C{int})
+ @param dns: the IP address and port number to contact the peer on
+ @type handler: unknown
+ @param handler: the data handler to use to process data on the connection
+ (optional, defaults to the L{handler})
+ @rtype: L{SocketHandler.SingleSocket}
+ @return: the new connection made to the peer
+
+ """
+
if not handler:
handler = self.handler
c = self.rawserver.start_connection(dns, handler)
@@ -69,19 +155,89 @@
# pass # don't call with this
def start_listening(self, handler):
+ """Start the Server listening (but not forever).
+
+ @type handler: unknown
+ @param handler: the default handler to call when data comes in
+ @rtype: C{method}
+ @return: the method to call to shutdown the torrent download
+
+ """
+
self.handler = handler
self.running = True
return self.shutdown # obviously, doesn't listen forever
def is_finished(self):
+ """Check if the torrent download has been shutdown.
+
+ @rtype: C{boolean}
+ @return: whether the torrent has been shutdown
+
+ """
+
return self.finished
def get_exception_flag(self):
+ """Get the master Server's exception flag.
+
+ @rtype: C{threading.Event}
+ @return: the flag used to indicate exceptions
+
+ """
+
return self.rawserver.get_exception_flag()
-class NewSocketHandler: # hand a new socket off where it belongs
+class NewSocketHandler:
+ """Read the handshake and hand a new socket connection off to where it belongs.
+
+ This class wraps some of the functionality of the
+ L{BT1.Encrypter.Connection} class. It will receive connections from
+ the Server, read the protocol handshake, assign them to the proper
+ torrent server, and pass the connection on to the Encrypter Connection.
+
+ @type multihandler: L{MultiHandler}
+ @ivar multihandler: the collection of all torrent Servers
+ @type connection: unknown
+ @ivar connection: the connection to handle
+ @type closed: C{boolean}
+ @ivar closed: whether the connection has been closed
+ @type buffer: C{string}
+ @ivar buffer: the buffer of unprocessed data received on the connection
+ @type complete: C{boolean}
+ @ivar complete: whether the handshake is complete
+ @type read: C{method}
+ @ivar read: the method to call to read data from the connection
+ @type write: C{method}
+ @ivar write: the method to call to write data to the connection
+ @type next_len: C{int}
+ @ivar next_len: the length of the protocol name header in the connection
+ @type next_func: C{method}
+ @ivar next_func: the method to call to read the protocol name from the connection
+ @type protocol: C{string}
+ @ivar protocol: the protocol name used by the connection
+ @type encrypted: C{boolean}
+ @ivar encrypted: whether the connection is encrypted
+ @type encrypter: L{BTcrypto.Crypto}
+ @ivar encrypter: the encrypter to use for the connection
+ @type _max_search: C{int}
+ @ivar _max_search: the number of remaining bytes to search for the pattern
+ @type options: C{string}
+ @ivar options: the protocol options read from the connection
+
+ """
+
def __init__(self, multihandler, connection):
+ """Initialize the instance.
+
+ @type multihandler: L{MultiHandler}
+ @param multihandler: the collection of all torrent Servers
+ @type connection: unknown
+ @param connection: the new connection to handle
+
+ """
+
self.multihandler = multihandler
self.connection = connection
connection.set_handler(self)
@@ -94,10 +250,12 @@
self.multihandler.rawserver.add_task(self._auto_close, 30)
def _auto_close(self):
+ """Automatically close the connection if it is not fully connected."""
if not self.complete:
self.close()
def close(self):
+ """Close the connection."""
if not self.closed:
self.connection.close()
self.closed = True
@@ -105,12 +263,32 @@
# copied from Encrypter and modified
def _read_header(self, s):
+ """Check if the protocol header matches.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+ @rtype: C{int}, C{method}
+ @return: the next length to read and method to call with the data
+ (or None if something went wrong)
+
+ """
+
if s == chr(len(protocol_name))+protocol_name:
self.protocol = protocol_name
return 8, self.read_options
return None
def read_header(self, s):
+ """Process the (possibly encrypted) protocol header from the connection.
+
+ @type s: C{string}
+ @param s: the data read from the conection
+ @rtype: C{int}, C{method}
+ @return: the next length to read and method to call with the data
+ (or None if something went wrong)
+
+ """
+
if self._read_header(s):
if self.multihandler.config['crypto_only']:
return None
@@ -123,12 +301,32 @@
return self.encrypter.keylength, self.read_crypto_header
def read_crypto_header(self, s):
+ """Start to read an encrypted header from the connection.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+ @rtype: C{int}, C{method}
+ @return: the next length to read and method to call with the data
+
+ """
+
self.encrypter.received_key(s)
self.write(self.encrypter.pubkey+self.encrypter.padding())
self._max_search = 520
return 0, self.read_crypto_block3a
def _search_for_pattern(self, s, pat):
+ """Search for a pattern in the initial connection data.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+ @type pat: C{string}
+ @param pat: the data to search for
+ @rtype: C{boolean}
+ @return: whether the pattern was found
+
+ """
+
p = s.find(pat)
if p < 0:
self._max_search -= len(s)+1-len(pat)
@@ -141,11 +339,32 @@
return True
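`_search_for_pattern` above scans streamed data for a fixed byte pattern while decrementing a byte budget (`_max_search`), and keeps a tail of `len(pat) - 1` bytes between calls so a pattern split across two reads is still found. A standalone Python 3 model of that idea (the class and its names are illustrative, not part of the original code):

```python
class PatternSearcher:
    """Find a fixed byte pattern in a stream delivered in arbitrary chunks.

    A search budget bounds how many bytes are scanned before giving up,
    and a tail of len(pattern) - 1 bytes is carried over between chunks
    so a pattern split across two reads is still detected.
    """

    def __init__(self, pattern, max_search=520):
        self.pattern = pattern
        self.budget = max_search
        self.tail = b''
        self.found = False

    def feed(self, chunk):
        """Scan one chunk; return the bytes after the pattern, or None."""
        data = self.tail + chunk
        p = data.find(self.pattern)
        if p >= 0:
            self.found = True
            return data[p + len(self.pattern):]
        self.budget -= len(chunk)
        if self.budget < 0:
            raise ValueError('pattern not found within search budget')
        # Keep a tail in case the pattern straddles the next chunk.
        self.tail = data[-(len(self.pattern) - 1):] if len(self.pattern) > 1 else b''
        return None
```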
def read_crypto_block3a(self, s):
+ """Find the block3a crypto information in the connection.
+
+ @type s: C{string}
+ @param s: the data read from the conection
+ @rtype: C{int}, C{method}
+ @return: the next length to read and method to call with the data
+
+ """
+
if not self._search_for_pattern(s,self.encrypter.block3a):
return -1, self.read_crypto_block3a # wait for more data
return 20, self.read_crypto_block3b
def read_crypto_block3b(self, s):
+ """Process the block3b crypto information in the connection.
+
+ Passes the connection off to the appropriate torrent's Server if the
+ correct block is found.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+ @rtype: C{boolean}
+ @return: whether the crypto block was found
+
+ """
+
for k in self.multihandler.singlerawservers.keys():
if self.encrypter.test_skey(s,k):
self.multihandler.singlerawservers[k]._external_connection_made(
@@ -155,10 +374,30 @@
return None
def read_options(self, s):
+ """Process the protocol options from the connection.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+ @rtype: C{int}, C{method}
+ @return: the next length to read and method to call with the data
+
+ """
+
self.options = s
return 20, self.read_download_id
def read_download_id(self, s):
+ """Read the torrent infohash from the connection.
+
+ Passes the connection off to the appropriate torrent's Server.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+ @rtype: C{boolean}
+ @return: whether a torrent was found to assign the connection to
+
+ """
+
if self.multihandler.singlerawservers.has_key(s):
if self.multihandler.singlerawservers[s].protocol == self.protocol:
self.multihandler.singlerawservers[s]._external_connection_made(
@@ -168,15 +407,56 @@
def read_dead(self, s):
+ """Do nothing.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+ @rtype: C{None}
+ @return: None
+
+ """
+
return None
def data_came_in(self, garbage, s):
+ """Process the read data from the connection.
+
+ @type garbage: unknown
+ @param garbage: thrown away
+ @type s: C{string}
+ @param s: the data read from the connection
+
+ """
+
self.read(s)
def _write_buffer(self, s):
+ """Add the read data from the connection back onto the start of the buffer.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+
+ """
+
self.buffer = s+self.buffer
def _read(self, s):
+ """Process the data read from the connection.
+
+ Processes incoming data on the connection. The data is buffered, then
+ the L{next_func} method is called with the L{next_len} amount of the
+ data. If it returns None, the connection is closed. If it returns True,
+ the connection handshake is complete and the connection is established.
+ Otherwise it returns C{int}, C{method}, which is the next length to read
+ and the method to call with the data. If the length is 0, it will read all
+ the available data. If the length is -1, it will wait for more data to
+ come in.
+
+ @type s: C{string}
+ @param s: the data read from the connection
+
+ """
+
self.buffer += s
while True:
if self.closed:
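The (length, handler) state machine described in the `_read` docstring can be modeled in a few lines. This Python 3 sketch is a simplification of `NewSocketHandler._read`: it handles fixed positive read lengths only, omitting the 0 ("read everything available") and -1 ("wait for more data") special cases:

```python
class StreamReader:
    """Minimal (length, handler) handshake state machine.

    Each handler consumes exactly next_len bytes and returns the next
    (length, handler) pair, True when the handshake is complete, or
    None to abort the connection.
    """

    def __init__(self, first_len, first_func):
        self.buffer = b''
        self.next_len, self.next_func = first_len, first_func
        self.complete = False

    def data_came_in(self, s):
        self.buffer += s
        while not self.complete and len(self.buffer) >= self.next_len:
            data = self.buffer[:self.next_len]
            self.buffer = self.buffer[self.next_len:]
            result = self.next_func(data)
            if result is None:
                raise ConnectionError('handshake failed')
            if result is True:
                self.complete = True
            else:
                self.next_len, self.next_func = result

# Hypothetical two-step handshake: a 1-byte length, then that many bytes.
received = []

def read_payload(data):
    received.append(data)
    return True

def read_len(data):
    return data[0], read_payload

r = StreamReader(1, read_len)
r.data_came_in(b'\x03ab')  # payload incomplete, nothing consumed yet
r.data_came_in(b'c')       # payload b'abc' delivered, handshake complete
```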
@@ -209,13 +489,56 @@
def connection_flushed(self, ss):
+ """Do nothing.
+
+ @type ss: unknown
+ @param ss: the connection that was flushed
+
+ """
+
pass
def connection_lost(self, ss):
+ """Close the lost connection.
+
+ @type ss: unknown
+ @param ss: the connection that was lost
+
+ """
+
self.closed = True
class MultiHandler:
+ """Collection of Servers/Port Handlers for multiple torrents.
+
+ @type rawserver: L{RawServer.RawServer}
+ @ivar rawserver: the master Server
+ @type masterdoneflag: C{threading.Event}
+ @ivar masterdoneflag: the flag to indicate stopping to the master Server
+ @type config: C{dictionary}
+ @ivar config: the configuration parameters
+ @type singlerawservers: C{dictionary}
+ @ivar singlerawservers: keys are torrent infohash strings, values are
+ individual L{SingleRawServer} for the torrents
+ @type connections: C{dictionary}
+ @ivar connections: unknown
+ @type taskqueues: C{dictionary}
+ @ivar taskqueues: unknown
+
+ """
+
def __init__(self, rawserver, doneflag, config):
+ """Initialize the instance.
+
+ @type rawserver: L{RawServer.RawServer}
+ @param rawserver: the master Server
+ @type doneflag: C{threading.Event}
+ @param doneflag: the flag to indicate stopping to the master Server
+ @type config: C{dictionary}
+ @param config: the configuration parameters
+
+ """
+
self.rawserver = rawserver
self.masterdoneflag = doneflag
self.config = config
@@ -224,15 +547,37 @@
self.taskqueues = {}
def newRawServer(self, info_hash, doneflag, protocol=protocol_name):
+ """Create a new Server for the torrent.
+
+ @type info_hash: C{string}
+ @param info_hash: the torrent's infohash
+ @type doneflag: C{threading.Event}
+ @param doneflag: the flag to indicate stopping to the new Server
+ @type protocol: C{string}
+ @param protocol: the name to use for the communication protocol
+ (optional, defaults to L{DebTorrent.protocol_name})
+ @rtype: L{SingleRawServer}
+ @return: the new Server that was created
+
+ """
+
new = SingleRawServer(info_hash, self, doneflag, protocol)
self.singlerawservers[info_hash] = new
return new
def shutdown_torrent(self, info_hash):
+ """Shutdown a single torrent's Server.
+
+ @type info_hash: C{string}
+ @param info_hash: the torrent's infohash
+
+ """
+
self.singlerawservers[info_hash]._shutdown()
del self.singlerawservers[info_hash]
def listen_forever(self):
+ """Call the master server's listen loop."""
self.rawserver.listen_forever(self)
for srs in self.singlerawservers.values():
srs.finished = True
@@ -243,4 +588,11 @@
# be wary of name collisions
def external_connection_made(self, ss):
+ """Handle a new incoming connection from the master Server.
+
+ @type ss: unknown
+ @param ss: unknown
+
+ """
+
NewSocketHandler(self, ss)
Modified: debtorrent/trunk/DebTorrent/SocketHandler.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/SocketHandler.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/SocketHandler.py (original)
+++ debtorrent/trunk/DebTorrent/SocketHandler.py Wed Jun 13 21:32:05 2007
@@ -135,6 +135,9 @@
self.max_connects = 1000
self.port_forwarded = None
self.servers = {}
+ self.interfaces = []
+ self.ports = []
+ self.handlers = {}
def scan_for_timeouts(self):
t = clock() - self.timeout
@@ -149,8 +152,9 @@
def bind(self, port, bind = '', reuse = False, ipv6_socket_style = 1, upnp = 0):
port = int(port)
addrinfos = []
- self.servers = {}
- self.interfaces = []
+ # Don't reinitialize to allow multiple binds
+ newservers = {}
+ newinterfaces = []
# if bind != "" treat it as a comma separated list and bind to all
# addresses (can be ips or hostnames) else bind to default ipv6 and
# ipv4 address
@@ -178,34 +182,39 @@
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.setblocking(0)
server.bind(addrinfo[4])
- self.servers[server.fileno()] = server
+ newservers[server.fileno()] = server
if bind:
- self.interfaces.append(server.getsockname()[0])
+ newinterfaces.append(server.getsockname()[0])
server.listen(64)
self.poll.register(server, POLLIN)
except socket.error, e:
- for server in self.servers.values():
+ for server in newservers.values():
try:
server.close()
except:
pass
- if self.ipv6_enable and ipv6_socket_style == 0 and self.servers:
+ if self.ipv6_enable and ipv6_socket_style == 0 and newservers:
raise socket.error('blocked port (may require ipv6_binds_v4 to be set)')
raise socket.error(str(e))
- if not self.servers:
+ if not newservers:
raise socket.error('unable to open server port')
if upnp:
if not UPnP_open_port(port):
- for server in self.servers.values():
+ for server in newservers.values():
try:
server.close()
except:
pass
- self.servers = None
- self.interfaces = None
+ newservers = None
+ newinterfaces = None
raise socket.error(UPnP_ERROR)
self.port_forwarded = port
- self.port = port
+ self.ports.append(port)
+ # Save the newly created items
+ for key,value in newservers.items():
+ self.servers[key] = value
+ for item in newinterfaces:
+ self.interfaces.append(item)
def find_and_bind(self, minport, maxport, bind = '', reuse = False,
ipv6_socket_style = 1, upnp = 0, randomizer = False):
@@ -231,8 +240,11 @@
raise socket.error(str(e))
- def set_handler(self, handler):
- self.handler = handler
+ def set_handler(self, handler, port = None):
+ if port is None:
+ self.handler = handler
+ else:
+ self.handlers[port] = handler
def start_connection_raw(self, dns, socktype = socket.AF_INET, handler = None):
@@ -296,12 +308,14 @@
print "lost server socket"
elif len(self.single_sockets) < self.max_connects:
try:
+ port = s.getsockname()[1]
+ handler = self.handlers.get(port, self.handler)
newsock, addr = s.accept()
newsock.setblocking(0)
- nss = SingleSocket(self, newsock, self.handler)
+ nss = SingleSocket(self, newsock, handler)
self.single_sockets[newsock.fileno()] = nss
self.poll.register(newsock, POLLIN)
- self.handler.external_connection_made(nss)
+ handler.external_connection_made(nss)
except socket.error:
self._sleep()
else:
@@ -358,7 +372,7 @@
def get_stats(self):
return { 'interfaces': self.interfaces,
- 'port': self.port,
+ 'port': self.ports,
'upnp': self.port_forwarded is not None }
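The `set_handler` change above keeps one default handler and an optional per-port override, which `external_connection_made` then selects by the accepting socket's local port. A minimal model of that lookup (the method names mirror the patch; everything else is illustrative):

```python
class PortDispatcher:
    """Route new connections to a handler registered for the listening
    port, falling back to a single default handler."""

    def __init__(self, default_handler):
        self.handler = default_handler
        self.handlers = {}

    def set_handler(self, handler, port=None):
        if port is None:
            self.handler = handler          # global default
        else:
            self.handlers[port] = handler   # override for one listening port

    def handler_for(self, port):
        """The handler to use for a connection accepted on this port."""
        return self.handlers.get(port, self.handler)
```

In the patch this is what lets the BitTorrent listener and the new apt listener share one `SocketHandler` while receiving different callbacks.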
Modified: debtorrent/trunk/DebTorrent/__init__.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/__init__.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/__init__.py (original)
+++ debtorrent/trunk/DebTorrent/__init__.py Wed Jun 13 21:32:05 2007
@@ -3,7 +3,7 @@
#
# $Id$
-"""The main package to implement the debtorrent protocol.
+"""The main package to implement the DebTorrent protocol.
This package, and its subpackage L{BT1}, contains all the modules needed
to implement the DebTorrent protocol.
@@ -12,6 +12,10 @@
@var product_name: the name given for the package
@type version_short: C{string}
@var version_short: the short version number
+ at type protocol_name: C{string}
+ at var protocol_name: the protocol name to use in handshaking
+ at type mapbase64: C{string}
+ at var mapbase64: the mapping from 6-bit numbers to string characters
"""
@@ -45,6 +49,7 @@
_idrandom = [None]
def resetPeerIDs():
+ """Reset the generation of peer IDs before generating a new random one."""
try:
f = open('/dev/urandom','rb')
x = f.read(20)
@@ -77,6 +82,17 @@
resetPeerIDs()
def createPeerID(ins = '---'):
+ """Generate a somewhat random peer ID
+
+ @type ins: C{string}
+ @param ins: the length 3 string to insert in the middle of the peer ID
+ between the prefix and the random part of the ID
+ (optional, defaults to '---')
+ @rtype: C{string}
+ @return: the peer ID to use
+
+ """
+
assert type(ins) is StringType
assert len(ins) == 3
return _idprefix + ins + _idrandom[0]
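A self-contained Python 3 sketch of the same scheme: a fixed client/version prefix, a caller-supplied 3-character insert, and a random suffix drawn from the base-64 alphabet. The 'DT01--' prefix and this alphabet ordering are assumptions for illustration; the real values come from `version_short` and `mapbase64` in this module:

```python
import os

mapbase64 = ('0123456789abcdefghijklmnopqrstuvwxyz'
             'ABCDEFGHIJKLMNOPQRSTUVWXYZ.-')
_idprefix = 'DT01--'  # hypothetical client/version prefix

def _random_id(length):
    """Map random bytes onto the 64-character alphabet."""
    return ''.join(mapbase64[b % 64] for b in os.urandom(length))

def createPeerID(ins='---'):
    """Build a 20-byte peer ID: prefix + 3-char insert + random suffix."""
    assert isinstance(ins, str) and len(ins) == 3
    return _idprefix + ins + _random_id(20 - len(_idprefix) - 3)
```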
Modified: debtorrent/trunk/DebTorrent/download_bt1.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/download_bt1.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/download_bt1.py (original)
+++ debtorrent/trunk/DebTorrent/download_bt1.py Wed Jun 13 21:32:05 2007
@@ -84,6 +84,9 @@
('maxport', 60000, 'maximum port to listen on'),
('random_port', 1, 'whether to choose randomly inside the port range ' +
'instead of counting up linearly'),
+ ('port', 9988, 'port to listen for apt on'),
+ ('min_time_between_log_flushes', 3.0,
+ 'minimum time it must have been since the last flush to do another one'),
('responsefile', '',
'file the server response was stored in, alternative to url'),
('url', '',
@@ -191,6 +194,30 @@
"minutes between automatic flushes to disk (0 = disabled)"),
('dedicated_seed_id', '',
"code to send to tracker identifying as a dedicated seed"),
+ ('dfile', '', 'file to store recent apt downloader info in'),
+ ('socket_timeout', 15, 'timeout for closing connections'),
+ ('save_dfile_interval', 5 * 60, 'seconds between saving dfile'),
+ ('min_time_between_log_flushes', 3.0,
+ 'minimum time it must have been since the last flush to do another one'),
+ ('min_time_between_cache_refreshes', 600.0,
+ 'minimum time in seconds before a cache is considered stale and is flushed'),
+ ('allowed_dir', '', 'only allow downloads for .dtorrents in this dir'),
+ ('allowed_list', '', 'only allow downloads for hashes in this list (hex format, one per line)'),
+ ('hupmonitor', 0, 'whether to reopen the log file upon receipt of HUP signal'),
+ ('http_timeout', 60,
+ 'number of seconds to wait before assuming that an http connection has timed out'),
+ ('apt_parse_dir_interval', 60, 'seconds between reloading of allowed_dir or allowed_file ' +
+ 'and allowed_ips and banned_ips lists for apt'),
+ ('show_infopage', 1, "whether to display an info page when the tracker's root dir is loaded"),
+ ('infopage_redirect', '', 'a URL to redirect the info page to'),
+ ('show_names', 1, 'whether to display names from allowed dir'),
+ ('favicon', '', 'file containing x-icon data to return when browser requests favicon.ico'),
+ ('allowed_ips', '', 'only allow connections from IPs specified in the given file; '+
+ 'file contains subnet data in the format: aa.bb.cc.dd/len'),
+ ('banned_ips', '', "don't allow connections from IPs specified in the given file; "+
+ 'file contains IP range data in the format: xxx:xxx:ip1-ip2'),
+ ('logfile', '', 'file to write the tracker logs, use - for stdout (default)'),
+ ('allow_get', 0, 'use with allowed_dir; adds a /file?hash={hash} url that allows users to download the torrent file'),
]
argslistheader = 'Arguments are:\n\n'
@@ -364,11 +391,6 @@
to be used
@type url: C{string}
@param url: the URL to download the metainfo file from
- @type status_to_download: C{int}
- @param status_to_download: determines which packages to download based on
- /var/lib/dpkg/status (0 = disabled [download all or use --priority],
- 1 = download updated versions of installed packages,
- 2 = download all installed packages)
@type errorfunc: C{function}
@param errorfunc: the function to use to print any error messages
@rtype: C{dictionary}
@@ -439,11 +461,6 @@
to be used
@type url: C{string}
@param url: the URL to download the metainfo file from
- @type status_to_download: C{int}
- @param status_to_download: determines which packages to download based on
- /var/lib/dpkg/status (0 = disabled [download all or use --priority],
- 1 = download updated versions of installed packages,
- 2 = download all installed packages)
@rtype: C{dictionary}
@return: the metainfo data
@@ -558,7 +575,7 @@
@ivar infohash: the hash of the info from the response data
@type myid: C{string}
@ivar myid: the peer ID to use
- @type rawserver: L{Rawserver.Rawserver}
+ @type rawserver: L{RawServer.RawServer}
@ivar rawserver: the server controlling the program
@type port: C{int}
@ivar port: the port being listened to
@@ -570,8 +587,6 @@
@ivar piece_lengths: the lengths of the pieces
@type len_pieces: C{int}
@ivar len_pieces: the number of pieces
- @type status_priority: C{dictionary}
- @ivar status_priority: the file priorities from the dpkg/status file
@type argslistheader: C{string}
@ivar argslistheader: the header to print before the default config
@type unpauseflag: C{threading.Event}
@@ -589,13 +604,13 @@
@type spewflag: C{threading.Event}
@ivar spewflag: unknown
@type superseedflag: C{threading.Event}
- @ivar superseedflag: unknown
- @type whenpaused: unknown
- @ivar whenpaused: unknown
+ @ivar superseedflag: indicates the upload is in super-seed mode
+ @type whenpaused: C{float}
+ @ivar whenpaused: the time when the download was paused
@type finflag: C{threading.Event}
- @ivar finflag: unknown
- @type rerequest: unknown
- @ivar rerequest: unknown
+ @ivar finflag: whether the download is complete
+ @type rerequest: L{BT1.Rerequester.Rerequester}
+ @ivar rerequest: the Rerequester instance to use to communicate with the tracker
@type tcp_ack_fudge: C{float}
@ivar tcp_ack_fudge: the fraction of TCP ACK download overhead to add to
upload rate calculations
@@ -604,13 +619,13 @@
@type appdataobj: L{ConfigDir.ConfigDir}
@ivar appdataobj: the configuration and cache directory manager
@type excflag: C{threading.Event}
- @ivar excflag: unknown
- @type failed: unknown
- @ivar failed: unknown
- @type checking: unknown
- @ivar checking: unknown
- @type started: unknown
- @ivar started: unknown
+ @ivar excflag: whether an exception has occurred
+ @type failed: C{boolean}
+ @ivar failed: whether the download failed
+ @type checking: C{boolean}
+ @ivar checking: whether the download is in the initialization phase
+ @type started: C{boolean}
+ @ivar started: whether the download has been started
@type picker: L{BT1.PiecePicker.PiecePicker}
@ivar picker: the PiecePicker instance
@type choker: L{BT1.Choker.Choker}
@@ -621,30 +636,30 @@
@ivar files: the full file names and lengths of all the files in the download
@type datalength: C{long}
@ivar datalength: the total length of the download
- @type priority: unknown
- @ivar priority: unknown
- @type storage: unknown
- @ivar storage: unknown
- @type upmeasure: unknown
- @ivar upmeasure: unknown
- @type downmeasure: unknown
- @ivar downmeasure: unknown
- @type ratelimiter: unknown
- @ivar ratelimiter: unknown
- @type ratemeasure: unknown
- @ivar ratemeasure: unknown
- @type ratemeasure_datarejected: unknown
- @ivar ratemeasure_datarejected: unknown
- @type connecter: unknown
- @ivar connecter: unknown
- @type encoder: unknown
- @ivar encoder: unknown
- @type encoder_ban: unknown
- @ivar encoder_ban: unknown
- @type httpdownloader: unknown
- @ivar httpdownloader: unknown
- @type statistics: unknown
- @ivar statistics: unknown
+ @type priority: C{list}
+ @ivar priority: the priorities to download the files at
+ @type storage: L{BT1.Storage.Storage}
+ @ivar storage: the file storage instance
+ @type upmeasure: L{CurrentRateMeasure.Measure}
+ @ivar upmeasure: the measure of the upload rate
+ @type downmeasure: L{CurrentRateMeasure.Measure}
+ @ivar downmeasure: the measure of the download rate
+ @type ratelimiter: L{RateLimiter.RateLimiter}
+ @ivar ratelimiter: the RateLimiter instance to limit the upload rate
+ @type ratemeasure: L{RateMeasure.RateMeasure}
+ @ivar ratemeasure: the RateMeasure instance
+ @type ratemeasure_datarejected: C{method}
+ @ivar ratemeasure_datarejected: the method to call when incoming data failed
+ @type connecter: L{BT1.Connecter.Connecter}
+ @ivar connecter: the Connecter instance to manage all the connections
+ @type encoder: L{BT1.Encrypter.Encoder}
+ @ivar encoder: the port listener for connections
+ @type encoder_ban: C{method}
+ @ivar encoder_ban: the method to call to ban an IP address
+ @type httpdownloader: L{BT1.HTTPDownloader.HTTPDownloader}
+ @ivar httpdownloader: the backup HTTP downloader
+ @type statistics: L{BT1.Statistics.Statistics}
+ @ivar statistics: the statistics gathering instance
"""
@@ -671,13 +686,10 @@
@param infohash: the hash of the info from the response data
@type id: C{string}
@param id: the peer ID to use
- @type rawserver: L{Rawserver.Rawserver}
+ @type rawserver: L{RawServer.RawServer}
@param rawserver: the server controlling the program
@type port: C{int}
@param port: the port being listened to
- @type status_priority: C{dictionary}
- @param status_priority: the file priorities, keys are file names,
- values are the priority to use (optional, defaults to download all)
@type appdataobj: L{ConfigDir.ConfigDir}
@param appdataobj: the configuration and cache directory manager
@@ -982,7 +994,7 @@
self.fileselector = FileSelector(self.files, self.piece_lengths,
self.appdataobj.getPieceDir(self.infohash),
self.storage, self.storagewrapper,
- self.rawserver.add_task,
+ self.rawserver.add_task, self.picker,
self._failed)
if data:
data = data.get('resume data')
@@ -1059,26 +1071,80 @@
self.encoder_ban(ip)
def _received_raw_data(self, x):
+ """Update the rate limiter when data comes in.
+
+ @type x: C{int}
+ @param x: the number of bytes that were received
+
+ """
+
if self.tcp_ack_fudge:
x = int(x*self.tcp_ack_fudge)
self.ratelimiter.adjust_sent(x)
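The fudge adjustment in `_received_raw_data` charges a fraction of each received byte against the upload rate limiter, since sending TCP ACKs for downloaded data consumes upstream bandwidth. A minimal sketch of that calculation under assumed names (`ack_overhead` is illustrative, not the real DebTorrent API):

```python
# Hedged sketch of the tcp_ack_fudge adjustment: a fraction of every
# received byte is charged to the upload rate limiter, because ACKing
# downloaded data costs upstream bandwidth.

def ack_overhead(bytes_received, tcp_ack_fudge):
    """Return the upload byte count to charge for ACKing a download."""
    if not tcp_ack_fudge:  # a fudge of 0 disables the adjustment
        return 0
    return int(bytes_received * tcp_ack_fudge)

print(ack_overhead(10000, 0.25))  # charges 2500 bytes against the upload rate
```

Real-world fudge values are small fractions; 0.25 above is only for easy arithmetic.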
def _received_data(self, x):
+ """Add received data to the rate measures.
+
+ @type x: C{int}
+ @param x: the number of bytes that were received
+
+ """
+
self.downmeasure.update_rate(x)
self.ratemeasure.data_came_in(x)
def _received_http_data(self, x):
+ """Add received HTTP data to the rate measures.
+
+ @type x: C{int}
+ @param x: the number of bytes that were received
+
+ """
+
self.downmeasure.update_rate(x)
self.ratemeasure.data_came_in(x)
self.downloader.external_data_received(x)
def _cancelfunc(self, pieces):
+ """Cancel the download of pieces.
+
+ @type pieces: C{list} of C{int}
+ @param pieces: the pieces to stop downloading
+
+ """
+
self.downloader.cancel_piece_download(pieces)
self.httpdownloader.cancel_piece_download(pieces)
+
def _reqmorefunc(self, pieces):
+ """Request to download the pieces.
+
+ @type pieces: C{list} of C{int}
+ @param pieces: the pieces to request
+
+ """
+
self.downloader.requeue_piece_download(pieces)
def startEngine(self, ratelimiter = None, statusfunc = None):
+ """Start various downloader engines.
+
+ Starts the upload and download L{CurrentRateMeasure.Measure},
+ L{RateLimiter.RateLimiter}, L{BT1.Downloader.Downloader},
+ L{BT1.Connecter.Connecter}, L{BT1.Encrypter.Encoder}, and
+ L{BT1.HTTPDownloader.HTTPDownloader}.
+
+ @type ratelimiter: L{RateLimiter.RateLimiter}
+ @param ratelimiter: the RateLimiter instance to use
+ (optional, defaults to starting a new one)
+ @type statusfunc: C{method}
+ @param statusfunc: the method to call to report the current status
+ (optional, defaults to L{statusfunc})
+ @rtype: C{boolean}
+ @return: whether the engines were started
+
+ """
+
if self.doneflag.isSet():
return False
if not statusfunc:
@@ -1129,14 +1195,14 @@
self.httpdownloader = HTTPDownloader(self.storagewrapper, self.picker,
self.rawserver, self.finflag, self.errorfunc, self.downloader,
self.config['max_rate_period'], self.infohash, self._received_http_data,
- self.connecter.got_piece)
- if self.response.has_key('httpseeds') and not self.finflag.isSet():
- for u in self.response['httpseeds']:
+ self.connecter.got_piece, self.getFilename)
+ if self.response.has_key('deb_mirrors') and not self.finflag.isSet():
+ for u in self.response['deb_mirrors']:
self.httpdownloader.make_download(u)
if self.selector_enabled:
- self.fileselector.tie_in(self.picker, self._cancelfunc,
- self._reqmorefunc, self.rerequest_ondownloadmore)
+ self.fileselector.tie_in(self._cancelfunc, self._reqmorefunc,
+ self.rerequest_ondownloadmore)
if self.priority:
self.fileselector.set_priorities_now(self.priority)
self.appdataobj.deleteTorrentData(self.infohash)
@@ -1150,23 +1216,43 @@
def rerequest_complete(self):
+ """Send the completed event to the tracker."""
if self.rerequest:
self.rerequest.announce(1)
def rerequest_stopped(self):
+ """Send the stopped event to the tracker."""
if self.rerequest:
self.rerequest.announce(2)
def rerequest_lastfailed(self):
+ """Check if the last tracker request failed.
+
+ @rtype: C{boolean}
+ @return: whether it failed (or False if there is no Rerequester)
+
+ """
+
if self.rerequest:
return self.rerequest.last_failed
return False
def rerequest_ondownloadmore(self):
+ """Try to trigger a tracker request."""
if self.rerequest:
self.rerequest.hit()
def startRerequester(self, seededfunc = None, force_rapid_update = False):
+ """Start the tracker requester.
+
+ @type seededfunc: C{method}
+ @param seededfunc: the method to call if the tracker reports the torrent is already seeded
+ @type force_rapid_update: C{boolean}
+ @param force_rapid_update: whether to do quick tracker updates when
+ requested (optional, defaults to False)
+
+ """
+
if self.response.has_key('announce-list'):
trackerlist = self.response['announce-list']
else:
@@ -1187,6 +1273,7 @@
def _init_stats(self):
+ """Start the statistics aggregater."""
self.statistics = Statistics(self.upmeasure, self.downmeasure,
self.connecter, self.httpdownloader, self.ratelimiter,
self.rerequest_lastfailed, self.filedatflag)
@@ -1196,6 +1283,13 @@
self.spewflag.set()
def autoStats(self, displayfunc = None):
+ """Start the statistics automatic displayer for the user.
+
+ @type displayfunc: C{method}
+ @param displayfunc: the method to call to print the current stats
+
+ """
+
if not displayfunc:
displayfunc = self.statusfunc
@@ -1207,6 +1301,13 @@
displayfunc, self.config['display_interval'])
def startStats(self):
+ """Start a statistics gatherer.
+
+ @rtype: C{method}
+ @return: the method to call to get the gathered statistics
+
+ """
+
self._init_stats()
d = DownloaderFeedback(self.choker, self.httpdownloader, self.rawserver.add_task,
self.upmeasure.get_rate, self.downmeasure.get_rate,
@@ -1216,10 +1317,26 @@
def getPortHandler(self):
+ """Get the object that is called when a connection comes in.
+
+ @rtype: L{BT1.Encrypter.Encoder}
+ @return: the object responsible for listening to a port
+
+ """
+
return self.encoder
def shutdown(self, torrentdata = {}):
+ """Shutdown the running download.
+
+ @type torrentdata: C{dictionary}
+ @param torrentdata: any data that needs to be saved (pickled)
+ @rtype: C{boolean}
+ @return: False if a failure or exception occurred
+
+ """
+
if self.checking or self.started:
self.storagewrapper.sync()
self.storage.close()
@@ -1237,77 +1354,232 @@
def setUploadRate(self, rate):
+ """Set a new maximum upload rate.
+
+ @type rate: C{float}
+ @param rate: the new upload rate (kB/s)
+
+ """
+
try:
def s(self = self, rate = rate):
+ """Worker function to actually set the rate.
+
+ @type self: L{BT1Download}
+ @param self: the BT1Download instance to set the rate of
+ (optional, defaults to the current instance)
+ @type rate: C{float}
+ @param rate: the new rate to set
+ (optional, defaults to the L{setUploadRate} rate)
+
+ """
+
self.config['max_upload_rate'] = rate
self.ratelimiter.set_upload_rate(rate)
+
self.rawserver.add_task(s)
except AttributeError:
pass
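The `def s(self = self, rate = rate)` pattern that recurs in these setters relies on default arguments to snapshot the current values before handing the worker to `rawserver.add_task`, so the callback runs correctly later on the server thread. A self-contained sketch of the idiom, with a plain list standing in for the task queue:

```python
# Minimal sketch of the worker-function idiom used throughout BT1Download:
# default arguments capture the current values at definition time, so the
# worker can safely run later. A list stands in for rawserver.add_task.

tasks = []

def add_task(func):
    tasks.append(func)

def set_upload_rate(config, rate):
    def s(config=config, rate=rate):  # bind the values now, run later
        config['max_upload_rate'] = rate
    add_task(s)

config = {'max_upload_rate': 20}
set_upload_rate(config, 25)
for task in tasks:  # the server loop eventually drains its queue
    task()
print(config['max_upload_rate'])  # 25
```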
def setConns(self, conns, conns2 = None):
+ """Set the number of connections limits.
+
+ @type conns: C{int}
+ @param conns: the number of uploads to fill out to with extra
+ optimistic unchokes
+ @type conns2: C{int}
+ @param conns2: the maximum number of uploads to allow at once
+ (optional, defaults to the value of L{conns})
+
+ """
+
if not conns2:
conns2 = conns
try:
def s(self = self, conns = conns, conns2 = conns2):
+ """Worker function to actually set the connection limits.
+
+ @type self: L{BT1Download}
+ @param self: the BT1Download instance to set the rate of
+ (optional, defaults to the current instance)
+ @type conns: C{int}
+ @param conns: the number of uploads to fill out to with extra
+ optimistic unchokes
+ @type conns2: C{int}
+ @param conns2: the maximum number of uploads to allow at once
+
+ """
+
self.config['min_uploads'] = conns
self.config['max_uploads'] = conns2
if (conns > 30):
self.config['max_initiate'] = conns + 10
+
self.rawserver.add_task(s)
except AttributeError:
pass
def setDownloadRate(self, rate):
+ """Set a new maximum download rate.
+
+ @type rate: C{float}
+ @param rate: the new download rate (kB/s)
+
+ """
+
try:
def s(self = self, rate = rate):
+ """Worker function to actually set the rate.
+
+ @type self: L{BT1Download}
+ @param self: the BT1Download instance to set the rate of
+ (optional, defaults to the current instance)
+ @type rate: C{float}
+ @param rate: the new rate to set
+ (optional, defaults to the L{setDownloadRate} rate)
+
+ """
+
self.config['max_download_rate'] = rate
self.downloader.set_download_rate(rate)
+
self.rawserver.add_task(s)
except AttributeError:
pass
def startConnection(self, ip, port, id):
+ """Start a new connection to a peer.
+
+ @type ip: C{string}
+ @param ip: the IP address of the peer
+ @type port: C{int}
+ @param port: the port to contact the peer on
+ @type id: C{string}
+ @param id: the peer's ID
+
+ """
+
self.encoder._start_connection((ip, port), id)
def _startConnection(self, ipandport, id):
+ """Start a new connection to a peer.
+
+ @type ipandport: (C{string}, C{int})
+ @param ipandport: the IP address and port to contact the peer on
+ @type id: C{string}
+ @param id: the peer's ID
+
+ """
+
self.encoder._start_connection(ipandport, id)
def setInitiate(self, initiate):
+ """Set the maximum number of connections to initiate.
+
+ @type initiate: C{int}
+ @param initiate: the new maximum
+
+ """
+
try:
def s(self = self, initiate = initiate):
+ """Worker function to actually set the maximum.
+
+ @type self: L{BT1Download}
+ @param self: the BT1Download instance to set the max connections of
+ (optional, defaults to the current instance)
+ @type initiate: C{int}
+ @param initiate: the new maximum
+
+ """
+
self.config['max_initiate'] = initiate
+
self.rawserver.add_task(s)
except AttributeError:
pass
def getConfig(self):
+ """Get the current configuration parameters.
+
+ @rtype: C{dictionary}
+ @return: the configuration parameters
+
+ """
+
return self.config
def getDefaults(self):
+ """Get the default configuration parameters.
+
+ @rtype: C{dictionary}
+ @return: the default configuration parameters
+
+ """
+
return defaultargs(defaults)
def getUsageText(self):
+ """Get the header only for the usage text (not used).
+
+ @rtype: C{string}
+ @return: the header of the usage text
+
+ """
+
return self.argslistheader
def reannounce(self, special = None):
+ """Reannounce to the tracker.
+
+ @type special: C{string}
+ @param special: the URL of the tracker to announce to
+ (optional, defaults to the tracker list from the metainfo file)
+
+ """
+
try:
def r(self = self, special = special):
+ """Worker function to actually do the announcing.
+
+ @type self: L{BT1Download}
+ @param self: the BT1Download instance to announce
+ (optional, defaults to the current instance)
+ @type special: C{string}
+ @param special: the URL of the tracker to announce to
+
+ """
+
if special is None:
self.rerequest.announce()
else:
self.rerequest.announce(specialurl = special)
+
self.rawserver.add_task(r)
except AttributeError:
pass
def getResponse(self):
+ """Get the response data from the metainfo file.
+
+ @rtype: C{dictionary}
+ @return: the response data (or None if there isn't any)
+
+ """
+
try:
return self.response
except:
return None
def Pause(self):
+ """Schedule the pausing of the download.
+
+ @rtype: C{boolean}
+ @return: whether the download pause was scheduled
+
+ """
+
if not self.storagewrapper:
return False
self.unpauseflag.clear()
@@ -1315,6 +1587,8 @@
return True
def onPause(self):
+ """Pause the download."""
+
self.whenpaused = clock()
if not self.downloader:
return
@@ -1323,10 +1597,12 @@
self.choker.pause(True)
def Unpause(self):
+ """Schedule the resuming of the download."""
self.unpauseflag.set()
self.rawserver.add_task(self.onUnpause)
def onUnpause(self):
+ """Resume the download."""
if not self.downloader:
return
self.downloader.pause(False)
@@ -1336,32 +1612,73 @@
self.rerequest.announce(3) # rerequest automatically if paused for >60 seconds
def set_super_seed(self):
+ """Schedule the change of the upload into super-seed mode."""
try:
self.superseedflag.set()
def s(self = self):
+ """Worker function to actually call the change to super-seed.
+
+ @type self: L{BT1Download}
+ @param self: the BT1Download instance to change to super-seed
+ (optional, defaults to the current instance)
+
+ """
if self.finflag.isSet():
self._set_super_seed()
+
self.rawserver.add_task(s)
except AttributeError:
pass
def _set_super_seed(self):
+ """Change the upload into super-seed mode."""
if not self.super_seeding_active:
self.super_seeding_active = True
self.errorfunc(' ** SUPER-SEED OPERATION ACTIVE **\n' +
' please set Max uploads so each peer gets 6-8 kB/s')
def s(self = self):
+ """Worker function to actually change to super-seed.
+
+ @type self: L{BT1Download}
+ @param self: the BT1Download instance to change to super-seed
+ (optional, defaults to the current instance)
+
+ """
+
self.downloader.set_super_seed()
self.choker.set_super_seed()
+
self.rawserver.add_task(s)
if self.finflag.isSet(): # mode started when already finished
def r(self = self):
+ """Worker function to actually do the announcing.
+
+ @type self: L{BT1Download}
+ @param self: the BT1Download instance to announce
+ (optional, defaults to the current instance)
+
+ """
+
self.rerequest.announce(3) # so after kicking everyone off, reannounce
+
self.rawserver.add_task(r)
def am_I_finished(self):
+ """Determine if the download is complete or still under way.
+
+ @rtype: C{boolean}
+ @return: whether the download is complete
+
+ """
+
return self.finflag.isSet()
def get_transfer_stats(self):
+ """Get the total amount of data transferred.
+
+ @rtype: (C{long}, C{long})
+ @return: the measured total upload and download bytes
+
+ """
+
return self.upmeasure.get_total(), self.downmeasure.get_total()
-
Modified: debtorrent/trunk/DebTorrent/launchmanycore.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/launchmanycore.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/launchmanycore.py (original)
+++ debtorrent/trunk/DebTorrent/launchmanycore.py Wed Jun 13 21:32:05 2007
@@ -1,10 +1,12 @@
#!/usr/bin/env python
-
+#
# Written by John Hoffman
# Modified by Cameron Dale
# see LICENSE.txt for license information
-
+#
# $Id$
+
+"""Manage the downloading of multiple torrents in one process."""
from DebTorrent import PSYCO
if PSYCO.psyco:
@@ -25,11 +27,13 @@
from socket import error as socketerror
from threading import Event
from sys import argv, exit
-import sys, os
+import sys, os, binascii
from clock import clock
from __init__ import createPeerID, mapbase64, version
from cStringIO import StringIO
from traceback import print_exc
+from DebTorrent.BT1.AptListener import AptListener
+from DebTorrent.HTTPHandler import HTTPHandler
try:
True
@@ -37,8 +41,21 @@
True = 1
False = 0
+DEBUG = True
def fmttime(n):
+ """Formats seconds into a human-readable time.
+
+ Formats a given number of seconds into a human-readable time appropriate
+ for display to the user.
+
+ @type n: C{int}
+ @param n: the number of seconds
+ @rtype: C{string}
+ @return: a displayable representation of the number of seconds
+
+ """
+
try:
n = int(n) # n may be None or too large
assert n < 5184000 # 60 days
@@ -49,7 +66,63 @@
return '%d:%02d:%02d' % (h, m, s)
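The diff hunk elides the middle of `fmttime` (the except clause and the hour/minute division), so the sketch below fills that in consistently with the visible lines; the fallback return value for bad input is an assumption, since the diff never shows it.

```python
def fmttime(n):
    """Format a number of seconds as h:mm:ss for display."""
    try:
        n = int(n)            # n may be None or too large
        assert n < 5184000    # 60 days
    except:
        return ''             # fallback value is assumed; the diff elides it
    m, s = divmod(n, 60)
    h, m = divmod(m, 60)
    return '%d:%02d:%02d' % (h, m, s)

print(fmttime(3725))  # 1:02:05
```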
class SingleDownload:
+ """Manage a single torrent download.
+
+ @type controller: L{LaunchMany}
+ @ivar controller: the manager for all torrent downloads
+ @type hash: C{string}
+ @ivar hash: the info hash of the torrent
+ @type response: C{dictionary}
+ @ivar response: the meta info for the torrent
+ @type config: C{dictionary}
+ @ivar config: the configuration parameters
+ @type doneflag: C{threading.Event}
+ @ivar doneflag: the flag that indicates when the torrent is to be shutdown
+ @type waiting: C{boolean}
+ @ivar waiting: whether the torrent is waiting to be hash checked
+ @type checking: C{boolean}
+ @ivar checking: whether the torrent is currently being hash checked
+ @type working: C{boolean}
+ @ivar working: whether the torrent download is currently running
+ @type seed: C{boolean}
+ @ivar seed: whether the download is complete and seeding
+ @type closed: C{boolean}
+ @ivar closed: whether the torrent has been shut down
+ @type status_msg: C{string}
+ @ivar status_msg: the current activity the torrent is engaged in
+ @type status_err: C{list} of C{string}
+ @ivar status_err: the list of errors that have occurred
+ @type status_errtime: C{int}
+ @ivar status_errtime: the time of the last error
+ @type status_done: C{float}
+ @ivar status_done: the fraction of the current activity that is complete
+ @type rawserver: L{ServerPortHandler.SingleRawServer}
+ @ivar rawserver: the simplified Server to use to handle this torrent
+ @type d: L{download_bt1.BT1Download}
+ @ivar d: the downloader for the torrent
+ @type _hashcheckfunc: C{method}
+ @ivar _hashcheckfunc: the method to call to hash check the torrent
+ @type statsfunc: C{method}
+ @ivar statsfunc: the method to call to get the statistics for the running download
+
+ """
+
def __init__(self, controller, hash, response, config, myid):
+ """Initialize the instance and start a new downloader.
+
+ @type controller: L{LaunchMany}
+ @param controller: the manager for all torrent downloads
+ @type hash: C{string}
+ @param hash: the info hash of the torrent
+ @type response: C{dictionary}
+ @param response: the meta info for the torrent
+ @type config: C{dictionary}
+ @param config: the configuration parameters
+ @type myid: C{string}
+ @param myid: the peer ID to use
+
+ """
+
self.controller = controller
self.hash = hash
self.response = response
@@ -75,6 +148,7 @@
self.d = d
def start(self):
+ """Initialize the new torrent download and schedule it for hash checking."""
if not self.d.saveAs(self.saveAs):
self._shutdown()
return
@@ -86,9 +160,31 @@
def saveAs(self, name, length, saveas, isdir):
+ """Determine the location to save the torrent in.
+
+ @type name: C{string}
+ @param name: the name from the torrent's metainfo
+ @type length: C{long}
+ @param length: the total length of the torrent download (not used)
+ @type saveas: C{string}
+ @param saveas: the user specified location to save to
+ @type isdir: C{boolean}
+ @param isdir: whether the torrent needs a directory
+ @rtype: C{string}
+ @return: the location to save the torrent in
+
+ """
+
return self.controller.saveAs(self.hash, name, saveas, isdir)
def hashcheck_start(self, donefunc):
+ """Start the hash checking of the torrent.
+
+ @type donefunc: C{method}
+ @param donefunc: the method to call when the hash checking is complete
+
+ """
+
if self.is_dead():
self._shutdown()
return
@@ -97,6 +193,7 @@
self._hashcheckfunc(donefunc)
def hashcheck_callback(self):
+ """Start the torrent running now that hash checking is complete."""
self.checking = False
if self.is_dead():
self._shutdown()
@@ -110,12 +207,27 @@
self.working = True
def is_dead(self):
+ """Check if the torrent download has been shutdown.
+
+ @rtype: C{boolean}
+ @return: whether the torrent download has been shutdown
+
+ """
+
return self.doneflag.isSet()
def _shutdown(self):
+ """Loudly shutdown the running torrent."""
self.shutdown(False)
def shutdown(self, quiet=True):
+ """Shutdown the running torrent.
+
+ @type quiet: C{boolean}
+ @param quiet: whether to announce the shutdown (optional, defaults to True)
+
+ """
+
if self.closed:
return
self.doneflag.set()
@@ -132,16 +244,36 @@
def display(self, activity = None, fractionDone = None):
- # really only used by StorageWrapper now
+ """Update the current activity's status for later display.
+
+ Really only used by StorageWrapper now.
+
+ @type activity: C{string}
+ @param activity: the activity currently under way
+ (optional, defaults to not changing the current activity)
+ @type fractionDone: C{float}
+ @param fractionDone: the fraction of the activity that is complete
+ (optional, defaults to not changing the current fraction done)
+
+ """
+
if activity:
self.status_msg = activity
if fractionDone is not None:
self.status_done = float(fractionDone)
def finished(self):
+ """Indicate that the download has completed."""
self.seed = True
def error(self, msg):
+ """Add a new error to the list of errors that have occurred.
+
+ @type msg: C{string}
+ @param msg: the error message
+
+ """
+
if self.doneflag.isSet():
self._shutdown()
self.status_err.append(msg)
@@ -149,7 +281,59 @@
class LaunchMany:
+ """Manage the collection of all single torrent downloads.
+
+ @type config: C{dictionary}
+ @ivar config: the configuration parameters
+ @type Output: unknown
+ @ivar Output: the displayer instance to use
+ @type torrent_dir: C{string}
+ @ivar torrent_dir: the directory to parse for torrent files
+ @type torrent_cache: C{dictionary}
+ @ivar torrent_cache: the cache of known torrents, keys are info hashes
+ @type file_cache: C{dictionary}
+ @ivar file_cache: the files found in the parsing of the torrent directory
+ @type blocked_files: C{dictionary}
+ @ivar blocked_files: the torrents in the torrent directory that will not be run
+ @type scan_period: C{int}
+ @ivar scan_period: the number of seconds between scans of L{torrent_dir}
+ @type stats_period: C{int}
+ @ivar stats_period: the number of seconds between printing the stats for the user
+ @type torrent_list: C{list} of C{string}
+ @ivar torrent_list: the list of known torrents' info hashes
+ @type downloads: C{dictionary}
+ @ivar downloads: the currently running downloaders, keys are info hashes
+ @type counter: C{int}
+ @ivar counter: the number of torrents that have been started so far
+ @type doneflag: C{threading.Event}
+ @ivar doneflag: flag to indicate all is to be stopped
+ @type hashcheck_queue: C{list} of C{string}
+ @ivar hashcheck_queue: the list of torrent info hashes waiting to be hash checked
+ @type hashcheck_current: C{string}
+ @ivar hashcheck_current: the info hash of the torrent currently being hash checked
+ @type rawserver: L{RawServer.RawServer}
+ @ivar rawserver: the Server instance to use for the downloads
+ @type listen_port: C{int}
+ @ivar listen_port: the port to listen on for incoming torrent connections
+ @type aptlistener: L{BT1.AptListener.AptListener}
+ @ivar aptlistener: the AptListener instance used to listen for incoming connections from Apt
+ @type ratelimiter: L{RateLimiter.RateLimiter}
+ @ivar ratelimiter: the limiter used to cap the maximum upload rate
+ @type handler: L{ServerPortHandler.MultiHandler}
+ @ivar handler: the multi torrent port listener used to handle connections
+
+ """
+
def __init__(self, config, Output):
+ """Initialize the instance.
+
+ @type config: C{dictionary}
+ @param config: the configuration parameters
+ @type Output: unknown
+ @param Output: the displayer instance to use
+
+ """
+
try:
self.config = config
self.Output = Output
@@ -188,13 +372,20 @@
self.failed("Couldn't listen - " + str(e))
return
+ self.aptlistener = AptListener(self, config, self.rawserver)
+ self.rawserver.bind(config['port'], config['bind'],
+ reuse = True, ipv6_socket_style = config['ipv6_binds_v4'])
+ self.rawserver.set_handler(HTTPHandler(self.aptlistener.get,
+ config['min_time_between_log_flushes']),
+ config['port'])
+
self.ratelimiter = RateLimiter(self.rawserver.add_task,
config['upload_unit_size'])
self.ratelimiter.set_upload_rate(config['max_upload_rate'])
self.handler = MultiHandler(self.rawserver, self.doneflag, config)
seed(createPeerID())
- self.rawserver.add_task(self.scan, 0)
+# self.rawserver.add_task(self.scan, 0)
self.rawserver.add_task(self.stats, 0)
self.handler.listen_forever()
@@ -213,6 +404,7 @@
def scan(self):
+ """Scan the torrent directory for changes."""
self.rawserver.add_task(self.scan, self.scan_period)
r = parsedir(self.torrent_dir, self.torrent_cache,
@@ -229,7 +421,8 @@
self.Output.message('added "'+data['path']+'"')
self.add(hash, data)
- def stats(self):
+ def stats(self):
+ """Call the Output display with the currently running torrents' statistics."""
self.rawserver.add_task(self.stats, self.stats_period)
data = []
for hash in self.torrent_list:
@@ -296,11 +489,28 @@
self.doneflag.set()
def remove(self, hash):
+ """Stop and remove a running torrent.
+
+ @type hash: C{string}
+ @param hash: the info hash of the torrent
+
+ """
+
self.torrent_list.remove(hash)
self.downloads[hash].shutdown()
del self.downloads[hash]
def add(self, hash, data):
+ """Start a new torrent running.
+
+ @type hash: C{string}
+ @param hash: the info hash of the torrent
+ @type data: C{dictionary}
+ @param data: various info about the torrent, including the metainfo
+
+ """
+
+ self.torrent_cache.setdefault(hash, data)
c = self.counter
self.counter += 1
x = ''
@@ -315,6 +525,21 @@
def saveAs(self, hash, name, saveas, isdir):
+ """Determine the location to save the torrent in.
+
+ @type hash: C{string}
+ @param hash: the info hash of the torrent
+ @type name: C{string}
+ @param name: the name from the torrent's metainfo
+ @type saveas: C{string}
+ @param saveas: the user specified location to save to
+ @type isdir: C{boolean}
+ @param isdir: whether the torrent needs a directory
+ @rtype: C{string}
+ @return: the location to save the torrent in
+
+ """
+
x = self.torrent_cache[hash]
style = self.config['saveas_style']
if style == 1 or style == 3:
@@ -346,17 +571,69 @@
return saveas
+ def find_file(self, mirror, path):
+ """Find which running torrent has the file.
+
+ Checks the metainfo of each torrent in the cache to find one that
+ has a file whose 'path' matches the given file's path.
+
+ @type mirror: C{string}
+ @param mirror: mirror name to find the download in
+ @type path: C{list} of C{string}
+ @param path: the path of the file to find
+ @rtype: L{download_bt1.BT1Download}, C{int}
+ @return: the running torrent that contains the file and the file's number
+ (or None if no running torrents contain the file)
+
+ """
+
+ file = '/'.join(path)
+ if DEBUG:
+ print 'Trying to find file:', file
+
+ # Check each torrent in the cache
+ for hash, data in self.torrent_cache.items():
+ # Make sure this torrent is from the mirror in question
+ # (TODO: later make this more certain by not prepending 'dt_' to the name)
+ if data['metainfo']['name'].find(mirror) == -1:
+ continue
+
+ file_num = -1
+ for f in data['metainfo']['info']['files']:
+ file_num += 1
+
+ # Check that the file ends with the desired file name (TODO: security risk?)
+ if file.endswith('/'.join(f['path'])):
+ if DEBUG:
+ print 'Found file in:', binascii.b2a_hex(hash)
+ return self.downloads[hash].d, file_num
+
+ if DEBUG:
+ print 'Failed to find file.'
+ return None, None
+
+
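The matching strategy in `find_file` can be sketched against a hypothetical, flattened cache (the real metainfo nests the file list under `['metainfo']['info']['files']`; the names and hashes here are illustrative): the first torrent whose name mentions the mirror and which holds a file whose path is a suffix of the requested path wins.

```python
# Sketch of find_file's suffix matching over a simplified torrent cache.

def find_file(torrent_cache, mirror, path):
    wanted = '/'.join(path)
    for infohash, data in torrent_cache.items():
        if mirror not in data['name']:   # wrong mirror, skip this torrent
            continue
        for file_num, f in enumerate(data['files']):
            # the requested path must end with the torrent file's path
            if wanted.endswith('/'.join(f['path'])):
                return infohash, file_num
    return None, None

cache = {'h1': {'name': 'dt_ftp.us.debian.org_dists_sid_main',
                'files': [{'path': ['pool', 'main', 'a', 'abc.deb']},
                          {'path': ['pool', 'main', 'x', 'xyz.deb']}]}}
print(find_file(cache, 'ftp.us.debian.org',
                ['debian', 'pool', 'main', 'x', 'xyz.deb']))  # ('h1', 1)
```

Matching on a path suffix is what the `(TODO: security risk?)` comment above is flagging: an attacker-chosen request path could match more than intended.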
def hashchecksched(self, hash = None):
+ """Schedule a new torrent for hash checking.
+
+ @type hash: C{string}
+ @param hash: the info hash of the torrent to schedule
+ (optional, default is to start the next torrent in the queue)
+
+ """
+
if hash:
self.hashcheck_queue.append(hash)
if not self.hashcheck_current:
self._hashcheck_start()
def _hashcheck_start(self):
+ """Start hash checking the next torrent in the queue."""
self.hashcheck_current = self.hashcheck_queue.pop(0)
self.downloads[self.hashcheck_current].hashcheck_start(self.hashcheck_callback)
def hashcheck_callback(self):
+ """Start another torrent's hash check now that the current one is complete."""
self.downloads[self.hashcheck_current].hashcheck_callback()
if self.hashcheck_queue:
self._hashcheck_start()
@@ -364,10 +641,36 @@
self.hashcheck_current = None
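The `hashchecksched`/`_hashcheck_start`/`hashcheck_callback` trio above serializes hash checking: only one torrent is checked at a time, and each completion callback starts the next queued check. A toy sketch of that queue discipline (an illustrative class, not the real LaunchMany):

```python
# Sketch of one-at-a-time hash-check scheduling: new hashes queue up, and
# each completion starts the next check.

class HashCheckQueue:
    def __init__(self):
        self.queue = []
        self.current = None
        self.order = []          # records the order checks ran in

    def schedule(self, hash):
        self.queue.append(hash)
        if not self.current:     # nothing running, start immediately
            self._start()

    def _start(self):
        self.current = self.queue.pop(0)
        self.order.append(self.current)

    def done(self):              # called when the current check finishes
        if self.queue:
            self._start()
        else:
            self.current = None

q = HashCheckQueue()
q.schedule('a'); q.schedule('b'); q.schedule('c')
q.done(); q.done()
print(q.order)  # ['a', 'b', 'c']
```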
def died(self, hash):
+ """Inform the Output that the torrent has died.
+
+ @type hash: C{string}
+ @param hash: the info hash of the torrent
+
+ """
+
if self.torrent_cache.has_key(hash):
self.Output.message('DIED: "'+self.torrent_cache[hash]['path']+'"')
+ def has_torrent(self, hash):
+ """Determine whether there is a downloader for the torrent.
+
+ @type hash: C{string}
+ @param hash: the info hash of the torrent
+ @rtype: C{boolean}
+ @return: whether the torrent is in the cache of known torrents
+
+ """
+
+ return self.torrent_cache.has_key(hash)
+
def was_stopped(self, hash):
+ """Remove the torrent from the hash check queue, even if it's already happening.
+
+ @type hash: C{string}
+ @param hash: the info hash of the torrent
+
+ """
+
try:
self.hashcheck_queue.remove(hash)
except:
@@ -378,7 +681,21 @@
self._hashcheck_start()
def failed(self, s):
+ """Indicate to the Output that a failure has occurred.
+
+ @type s: C{string}
+ @param s: the failure message
+
+ """
+
self.Output.message('FAILURE: '+s)
def exchandler(self, s):
+ """Indicate to the Output that an exception has occurred.
+
+ @type s: C{string}
+ @param s: the exception that occurred
+
+ """
+
self.Output.exception(s)
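The `hashchecksched`/`_hashcheck_start`/`hashcheck_callback` methods documented above serialize hash checks so that only one torrent is checked at a time, with `was_stopped` pulling a torrent back out of the queue. The queueing pattern can be sketched on its own (hypothetical class, not the DebTorrent API):

```python
class HashCheckQueue:
    """Run at most one hash check at a time; queue the rest."""

    def __init__(self, start_check):
        self.start_check = start_check  # callable(hash) that begins a check
        self.queue = []
        self.current = None

    def schedule(self, info_hash):
        # Enqueue, and start immediately if nothing is being checked
        self.queue.append(info_hash)
        if self.current is None:
            self._start_next()

    def _start_next(self):
        self.current = self.queue.pop(0)
        self.start_check(self.current)

    def check_done(self):
        # Called when the current check finishes; move on to the next
        if self.queue:
            self._start_next()
        else:
            self.current = None

    def was_stopped(self, info_hash):
        # Drop a torrent from the queue if its check hasn't started yet
        try:
            self.queue.remove(info_hash)
        except ValueError:
            pass

checked = []
q = HashCheckQueue(checked.append)
q.schedule('aaa'); q.schedule('bbb'); q.schedule('ccc')
q.was_stopped('bbb')   # 'bbb' never started, so it is simply dequeued
q.check_done()         # 'aaa' finished -> 'ccc' starts
q.check_done()
print(checked)         # -> ['aaa', 'ccc']
```

Serializing the checks this way keeps the disk-heavy hash verification of one torrent from competing with another's.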
Modified: debtorrent/trunk/DebTorrent/piecebuffer.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/piecebuffer.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/piecebuffer.py (original)
+++ debtorrent/trunk/DebTorrent/piecebuffer.py Wed Jun 13 21:32:05 2007
@@ -13,7 +13,7 @@
True = 1
False = 0
-DEBUG = True
+DEBUG = False
class SingleBuffer:
def __init__(self, pool):
@@ -22,7 +22,7 @@
def init(self):
if DEBUG:
- print self.count
+ print 'new/pooled buffer index:', self.count
'''
for x in xrange(6,1,-1):
try:
@@ -57,7 +57,7 @@
def release(self):
if DEBUG:
- print -self.count
+ print 'released buffer with index:', self.count
self.pool.release(self)
Modified: debtorrent/trunk/DebTorrent/zurllib.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/DebTorrent/zurllib.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/DebTorrent/zurllib.py (original)
+++ debtorrent/trunk/DebTorrent/zurllib.py Wed Jun 13 21:32:05 2007
@@ -1,8 +1,17 @@
# Written by John Hoffman
# Modified by Cameron Dale
# see LICENSE.txt for license information
+#
+# $Id$
-# $Id$
+"""A high-level fetcher for WWW data, similar to the urllib module.
+
+@type VERSION: C{string}
+@var VERSION: the User-Agent header to send on all connections
+@type MAX_REDIRECTS: C{int}
+@var MAX_REDIRECTS: the maximum number of redirects to follow
+
+"""
from httplib import HTTPConnection, HTTPSConnection, HTTPException
from urlparse import urlparse
@@ -17,16 +26,22 @@
MAX_REDIRECTS = 10
-class btHTTPcon(HTTPConnection): # attempt to add automatic connection timeout
+class btHTTPcon(HTTPConnection):
+ """Attempt to add automatic connection timeout to HTTPConnection."""
+
def connect(self):
+ """Redefine the connect to include a socket timeout."""
HTTPConnection.connect(self)
try:
self.sock.settimeout(30)
except:
pass
-class btHTTPScon(HTTPSConnection): # attempt to add automatic connection timeout
+class btHTTPScon(HTTPSConnection):
+ """Attempt to add automatic connection timeout to HTTPSConnection."""
+
def connect(self):
+ """Redefine the connect to include a socket timeout."""
HTTPSConnection.connect(self)
try:
self.sock.settimeout(30)
@@ -34,12 +49,41 @@
pass
class urlopen:
+ """Opens a URL for reading.
+
+ @type tries: C{int}
+ @ivar tries: the number of attempts to open it so far
+ @type error_return: C{dictionary}
+ @ivar error_return: the bencoded returned data if an error occurred
+ @type connection: L{btHTTPcon} or L{btHTTPScon}
+ @ivar connection: the connection to the server
+ @type response: C{httplib.HTTPResponse}
+ @ivar response: the response from the server
+
+ """
+
def __init__(self, url):
+ """Initialize the instance and call the open method.
+
+ @type url: C{string}
+ @param url: the URL to open
+
+ """
+
self.tries = 0
self._open(url.strip())
self.error_return = None
def _open(self, url):
+ """Open a connection and request the URL, saving the response.
+
+ @type url: C{string}
+ @param url: the URL to open
+ @raise IOError: if there was a problem with the URL, or if the
+ server returned an error
+
+ """
+
self.tries += 1
if self.tries > MAX_REDIRECTS:
raise IOError, ('http error', 500,
@@ -84,11 +128,24 @@
raise IOError, ('http error', status, self.response.reason)
def read(self):
+ """Read the response data from the previous request.
+
+ @rtype: C{string}
+ @return: the response, or the error if an error occurred
+
+ """
if self.error_return:
return self.error_return
return self._read()
def _read(self):
+ """Read the response data and maybe decompress it.
+
+ @rtype: C{string}
+ @return: the processed response data
+
+ """
+
data = self.response.read()
if self.response.getheader('Content-Encoding','').find('gzip') >= 0:
try:
@@ -100,4 +157,5 @@
return data
def close(self):
+ """Close the connection to the server."""
self.connection.close()
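The zurllib changes above document a fetcher that layers three things on top of httplib: a 30-second socket timeout (the `btHTTPcon`/`btHTTPScon` subclasses), a `MAX_REDIRECTS` cap, and transparent gzip decompression in `_read` that falls back to the raw bytes on a decoding error. The decompression step can be sketched in isolation (modern Python, illustrative names):

```python
import gzip
import io

def decode_body(data, content_encoding=''):
    """Decompress a response body if the server marked it as gzipped,
    falling back to the raw bytes on a decoding error, as zurllib does."""
    if 'gzip' in content_encoding:
        try:
            return gzip.GzipFile(fileobj=io.BytesIO(data)).read()
        except OSError:
            return data
    return data

body = gzip.compress(b'Packages index')
print(decode_body(body, 'gzip'))  # -> b'Packages index'
print(decode_body(b'plain', ''))  # -> b'plain'
```

The modern stdlib equivalent of the `settimeout(30)` override in the connection subclasses is simply the `timeout` parameter of `urllib.request.urlopen`, and urllib also enforces its own redirect limit.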
Modified: debtorrent/trunk/btdownloadheadless.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/btdownloadheadless.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/btdownloadheadless.py (original)
+++ debtorrent/trunk/btdownloadheadless.py Wed Jun 13 21:32:05 2007
@@ -37,6 +37,8 @@
from DebTorrent.clock import clock
from DebTorrent import createPeerID, version
from DebTorrent.ConfigDir import ConfigDir
+from DebTorrent.BT1.AptListener import AptListener
+from DebTorrent.HTTPHandler import HTTPHandler
assert sys.version >= '2', "Install Python 2.0 or greater"
try:
@@ -295,6 +297,13 @@
h.failed()
return
+ aptlistener = AptListener(config, rawserver)
+ rawserver.bind(config['port'], config['bind'],
+ reuse = True, ipv6_socket_style = config['ipv6_binds_v4'])
+ rawserver.set_handler(HTTPHandler(aptlistener.get,
+ config['min_time_between_log_flushes']),
+ config['port'])
+
response = get_response(config['responsefile'], config['url'], h.error)
if not response:
break
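The hunk above is the heart of the http-listen branch: the downloader itself now binds an HTTP port and hands requests to an `AptListener`, so apt can fetch packages from the running client. The same idea in a minimal stdlib sketch (illustrative only; DebTorrent uses its own RawServer/HTTPHandler event loop, and the handler class here is hypothetical):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AptRequestHandler(BaseHTTPRequestHandler):
    """Answer GET requests from apt, playing the role of AptListener.get."""
    def do_GET(self):
        body = ('requested: %s' % self.path).encode()
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(('127.0.0.1', 0), AptRequestHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/pool/main/a/abc.deb' % server.server_port
reply = urllib.request.urlopen(url).read()
server.shutdown()
print(reply)  # -> b'requested: /pool/main/a/abc.deb'
```

In the real client the listener shares the torrent event loop rather than running in a thread, which is why the diff registers the handler with `rawserver.set_handler` on the bound port instead of spawning a server.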
Modified: debtorrent/trunk/btshowmetainfo.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/btshowmetainfo.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/btshowmetainfo.py (original)
+++ debtorrent/trunk/btshowmetainfo.py Wed Jun 13 21:32:05 2007
@@ -67,14 +67,14 @@
for i in list:
liststring+=i
print 'announce-list.: %s' % liststring
- if metainfo.has_key('httpseeds'):
+ if metainfo.has_key('deb_mirrors'):
list = []
- for seed in metainfo['httpseeds']:
+ for seed in metainfo['deb_mirrors']:
list += [seed,'|']
del list[-1]
liststring = ''
for i in list:
liststring+=i
- print 'http seeds....: %s' % liststring
+ print 'mirror URLs....: %s' % liststring
if metainfo.has_key('comment'):
print 'comment.......: %s' % metainfo['comment']
Modified: debtorrent/trunk/docs/epydoc.config
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/docs/epydoc.config?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/docs/epydoc.config (original)
+++ debtorrent/trunk/docs/epydoc.config Wed Jun 13 21:32:05 2007
@@ -3,7 +3,7 @@
# The list of objects to document. Objects can be named using
# dotted names, module filenames, or package directory names.
# Alases for this option include "objects" and "values".
-modules: DebTorrent btdownloadheadless.py btlaunchmany.py bttrack.py btcompletedir.py btcopyannounce.py btmakemetafile.py btreannounce.py btrename.py btsethttpseeds.py btshowmetainfo.py
+modules: DebTorrent btdownloadheadless.py btlaunchmany.py bttrack.py btcompletedir.py btcopyannounce.py btmakemetafile.py btreannounce.py btrename.py btsetdebmirrors.py btshowmetainfo.py
# The type of output that should be generated. Should be one
# of: html, text, latex, dvi, ps, pdf.
@@ -131,7 +131,7 @@
# The name of one or more pstat files (generated by the profile
# or hotshot module). These are used to generate call graphs.
-pstat: docs/pstat/btdownloadheadless.pstat docs/pstat/bttrack.pstat
+pstat: docs/pstat/btdownloadheadless.pstat docs/pstat/bttrack.pstat docs/pstat/btlaunchmany.pstat
# Specify the font used to generate Graphviz graphs.
# (e.g., helvetica or times).
Modified: debtorrent/trunk/setup.py
URL: http://svn.debian.org/wsvn/debtorrent/debtorrent/trunk/setup.py?rev=103&op=diff
==============================================================================
--- debtorrent/trunk/setup.py (original)
+++ debtorrent/trunk/setup.py Wed Jun 13 21:32:05 2007
@@ -32,6 +32,6 @@
scripts = ["btdownloadheadless.py",
"bttrack.py", "btmakemetafile.py", "btlaunchmany.py", "btcompletedir.py",
"btreannounce.py", "btrename.py", "btshowmetainfo.py",
- 'btcopyannounce.py', 'btsethttpseeds.py',
+ 'btcopyannounce.py', 'btsetdebmirrors.py',
]
)