[DRE-commits] [ruby-elasticsearch-transport] 01/04: Imported Upstream version 1.0.4

Tim Potter tpot-guest at moszumanska.debian.org
Mon Sep 8 04:36:46 UTC 2014


This is an automated email from the git hooks/post-receive script.

tpot-guest pushed a commit to branch master
in repository ruby-elasticsearch-transport.

commit cf75159f95c0a429693e4a1f3951e4149069a568
Author: Tim Potter <tpot at hp.com>
Date:   Wed Aug 6 16:58:55 2014 +1000

    Imported Upstream version 1.0.4
---
 .gitignore                                         |  17 +
 Gemfile                                            |  16 +
 LICENSE.txt                                        |  13 +
 README.md                                          | 376 +++++++++++++++++
 Rakefile                                           |  80 ++++
 checksums.yaml.gz                                  | Bin 0 -> 424 bytes
 elasticsearch-transport.gemspec                    |  69 ++++
 lib/elasticsearch-transport.rb                     |   1 +
 lib/elasticsearch/transport.rb                     |  30 ++
 lib/elasticsearch/transport/client.rb              | 167 ++++++++
 lib/elasticsearch/transport/transport/base.rb      | 258 ++++++++++++
 .../transport/transport/connections/collection.rb  |  93 +++++
 .../transport/transport/connections/connection.rb  | 121 ++++++
 .../transport/transport/connections/selector.rb    |  63 +++
 lib/elasticsearch/transport/transport/errors.rb    |  73 ++++
 lib/elasticsearch/transport/transport/http/curb.rb |  80 ++++
 .../transport/transport/http/faraday.rb            |  60 +++
 lib/elasticsearch/transport/transport/response.rb  |  21 +
 .../transport/transport/serializer/multi_json.rb   |  36 ++
 lib/elasticsearch/transport/transport/sniffer.rb   |  46 +++
 lib/elasticsearch/transport/version.rb             |   5 +
 metadata.yml                                       | 407 ++++++++++++++++++
 test/integration/client_test.rb                    | 134 ++++++
 test/integration/transport_test.rb                 |  73 ++++
 test/profile/client_benchmark_test.rb              | 125 ++++++
 test/test_helper.rb                                |  74 ++++
 test/unit/client_test.rb                           | 204 +++++++++
 test/unit/connection_collection_test.rb            |  83 ++++
 test/unit/connection_selector_test.rb              |  64 +++
 test/unit/connection_test.rb                       | 100 +++++
 test/unit/response_test.rb                         |  15 +
 test/unit/serializer_test.rb                       |  16 +
 test/unit/sniffer_test.rb                          | 145 +++++++
 test/unit/transport_base_test.rb                   | 455 +++++++++++++++++++++
 test/unit/transport_curb_test.rb                   |  93 +++++
 test/unit/transport_faraday_test.rb                | 140 +++++++
 36 files changed, 3753 insertions(+)

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..d87d4be
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,17 @@
+*.gem
+*.rbc
+.bundle
+.config
+.yardoc
+Gemfile.lock
+InstalledFiles
+_yardoc
+coverage
+doc/
+lib/bundler/man
+pkg
+rdoc
+spec/reports
+test/tmp
+test/version_tmp
+tmp
diff --git a/Gemfile b/Gemfile
new file mode 100644
index 0000000..61bf628
--- /dev/null
+++ b/Gemfile
@@ -0,0 +1,16 @@
+source 'https://rubygems.org'
+
+# Specify your gem's dependencies in elasticsearch-transport.gemspec
+gemspec
+
+if File.exists? File.expand_path("../../elasticsearch-api/elasticsearch-api.gemspec", __FILE__)
+  gem 'elasticsearch-api', :path => File.expand_path("../../elasticsearch-api", __FILE__), :require => false
+end
+
+if File.exists? File.expand_path("../../elasticsearch-extensions", __FILE__)
+  gem 'elasticsearch-extensions', :path => File.expand_path("../../elasticsearch-extensions", __FILE__), :require => false
+end
+
+if File.exists? File.expand_path("../../elasticsearch/elasticsearch.gemspec", __FILE__)
+  gem 'elasticsearch', :path => File.expand_path("../../elasticsearch", __FILE__), :require => false
+end
diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100644
index 0000000..7b9fbe3
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,13 @@
+Copyright (c) 2013 Elasticsearch
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..e3acd2f
--- /dev/null
+++ b/README.md
@@ -0,0 +1,376 @@
+# Elasticsearch::Transport
+
+**This library is part of the [`elasticsearch-ruby`](https://github.com/elasticsearch/elasticsearch-ruby/) package;
+please refer to it, unless you want to use this library standalone.**
+
+----
+
+The `elasticsearch-transport` library provides a low-level Ruby client for connecting
+to an [Elasticsearch](http://elasticsearch.org) cluster.
+
+It handles connecting to multiple nodes in the cluster, rotating across connections,
+logging and tracing requests and responses, maintaining failed connections,
+discovering nodes in the cluster, and provides an abstraction for
+data serialization and transport.
+
+It does not handle calling the Elasticsearch API;
+see the [`elasticsearch-api`](https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-api) library.
+
+The library is compatible with Ruby 1.8.7 or higher and with Elasticsearch 0.90 and 1.0.
+
+Features overview:
+
+* Pluggable logging and tracing
+* Pluggable connection selection strategies (round-robin, random, custom)
+* Pluggable transport implementation, customizable and extendable
+* Pluggable serializer implementation
+* Request retries and dead connection handling
+* Node reloading (based on cluster state) on errors or on demand
+
+For optimal performance, you should use an HTTP library which supports persistent ("keep-alive") connections,
+e.g. [Patron](https://github.com/toland/patron) or [Typhoeus](https://github.com/typhoeus/typhoeus).
+Just `require 'patron'` or `require 'typhoeus'; require 'typhoeus/adapters/faraday'` in your code,
+and it will be automatically used; other automatically used libraries are
+[HTTPClient](https://rubygems.org/gems/httpclient) and
+[Net::HTTP::Persistent](https://rubygems.org/gems/net-http-persistent).
+
+For detailed information, see example configurations [below](#transport-implementations).
+
+## Installation
+
+Install the package from [Rubygems](https://rubygems.org):
+
+    gem install elasticsearch-transport
+
+To use an unreleased version, either add it to your `Gemfile` for [Bundler](http://gembundler.com):
+
+    gem 'elasticsearch-transport', git: 'git://github.com/elasticsearch/elasticsearch-ruby.git'
+
+or install it from a source code checkout:
+
+    git clone https://github.com/elasticsearch/elasticsearch-ruby.git
+    cd elasticsearch-ruby/elasticsearch-transport
+    bundle install
+    rake install
+
+## Example Usage
+
+In the simplest form, connect to Elasticsearch running on <http://localhost:9200>
+without any configuration:
+
+    require 'elasticsearch/transport'
+
+    client = Elasticsearch::Client.new
+    response = client.perform_request 'GET', '_cluster/health'
+    # => #<Elasticsearch::Transport::Transport::Response:0x007fc5d506ce38 @status=200, @body={ ... } >
+
+Full documentation is available at <http://rubydoc.info/gems/elasticsearch-transport>.
+
+## Configuration
+
+The client supports many configuration options for setting up and managing connections,
+configuring logging, customizing the transport library, etc.
+
+### Setting Hosts
+
+To connect to a specific Elasticsearch host:
+
+    Elasticsearch::Client.new host: 'search.myserver.com'
+
+To connect to a host on a specific port:
+
+    Elasticsearch::Client.new host: 'myhost:8080'
+
+To connect to multiple hosts:
+
+    Elasticsearch::Client.new hosts: ['myhost1', 'myhost2']
+
+Instead of Strings, you can pass host information as an array of Hashes:
+
+    Elasticsearch::Client.new hosts: [ { host: 'myhost1', port: 8080 }, { host: 'myhost2', port: 8080 } ]
+
+    Elasticsearch::Client.new hosts: [
+      { host: 'my-protected-host',
+        port: '443',
+        user: 'USERNAME',
+        password: 'PASSWORD',
+        scheme: 'https'
+      } ]
+
+Scheme, HTTP authentication credentials and URL prefixes are handled automatically:
+
+    Elasticsearch::Client.new url: 'https://username:password@api.server.org:4430/search'
+
+The client will automatically round-robin across the hosts
+(unless you select or implement a different [connection selector](#connection-selector)).
+
+### Logging
+
+To log requests and responses to standard output with the default logger (an instance of Ruby's {::Logger} class),
+set the `log` argument:
+
+    Elasticsearch::Client.new log: true
+
+To trace requests and responses in the _Curl_ format, set the `trace` argument:
+
+    Elasticsearch::Client.new trace: true
+
+You can customize the default logger or tracer:
+
+    client.transport.logger.formatter = proc { |s, d, p, m| "#{s}: #{m}\n" }
+    client.transport.logger.level = Logger::INFO
+
+Or, you can use a custom {::Logger} instance:
+
+    Elasticsearch::Client.new logger: Logger.new(STDERR)
+
+You can pass the client any conforming logger implementation:
+
+    require 'logging' # https://github.com/TwP/logging/
+
+    log = Logging.logger['elasticsearch']
+    log.add_appenders Logging.appenders.stdout
+    log.level = :info
+
+    client = Elasticsearch::Client.new logger: log
+
+### Randomizing Hosts
+
+If you pass multiple hosts to the client, it rotates across them in a round-robin fashion by default.
+When the same client runs in multiple processes (e.g. in a Ruby web server such as Thin),
+the processes might keep connecting to the same nodes "at once". To prevent this, you can randomize the hosts
+collection on initialization and reloading:
+
+    Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], randomize_hosts: true
+
+### Retrying on Failures
+
+When the client is initialized with multiple hosts, it makes sense to retry a failed request
+on a different host:
+
+    Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], retry_on_failure: true
+
+You can specify how many times the client should retry the request before it raises an exception:
+
+    Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], retry_on_failure: 5
+
+### Reloading Hosts
+
+By default, Elasticsearch dynamically discovers new nodes in the cluster. You can leverage this
+in the client by periodically checking for new nodes to spread the load.
+
+To retrieve and use the information from the
+[_Nodes Info API_](http://www.elasticsearch.org/guide/reference/api/admin-cluster-nodes-info/)
+on every 10,000th request:
+
+    Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], reload_connections: true
+
+You can pass a specific number of requests after which the reloading should be performed:
+
+    Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], reload_connections: 1_000
+
+To reload connections on failures, use:
+
+    Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], reload_on_failure: true
+
+By default, the reloading will time out if it does not finish within 1 second. To change the setting:
+
+    Elasticsearch::Client.new hosts: ['localhost:9200', 'localhost:9201'], sniffer_timeout: 3
+
+### Connection Selector
+
+By default, the client will rotate the connections in a round-robin fashion, using the
+{Elasticsearch::Transport::Transport::Connections::Selector::RoundRobin} strategy.
+
+You can implement your own strategy to customize the behaviour. For example,
+let's write a "rack aware" strategy which prefers nodes with a specific
+[attribute](https://github.com/elasticsearch/elasticsearch/blob/1.0/config/elasticsearch.yml#L81-L85).
+Only when these are unavailable will the strategy fall back to the other nodes:
+
+    class RackIdSelector
+      include Elasticsearch::Transport::Transport::Connections::Selector::Base
+
+      def select(options={})
+        connections.select do |c|
+          # Try selecting the nodes with a `rack_id:x1` attribute first
+          c.host[:attributes] && c.host[:attributes][:rack_id] == 'x1'
+        end.sample || connections.to_a.sample
+      end
+    end
+
+    Elasticsearch::Client.new hosts: ['x1.search.org', 'x2.search.org'], selector_class: RackIdSelector
+
+### Transport Implementations
+
+By default, the client will use the [_Faraday_](https://rubygems.org/gems/faraday) HTTP library
+as a transport implementation.
+
+It will auto-detect and use an _adapter_ for _Faraday_ based on gems loaded in your code,
+preferring HTTP clients with support for persistent connections.
+
+To use the [_Patron_](https://github.com/toland/patron) HTTP library, for example, just require it:
+
+    require 'patron'
+
+Then create a new client, and the _Patron_ gem will be used as the "driver":
+
+    client = Elasticsearch::Client.new
+
+    client.transport.connections.first.connection.builder.handlers
+    # => [Faraday::Adapter::Patron]
+
+    10.times do
+      client.nodes.stats(metric: 'http')['nodes'].values.each do |n|
+        puts "#{n['name']} : #{n['http']['total_opened']}"
+      end
+    end
+
+    # => Stiletoo : 24
+    # => Stiletoo : 24
+    # => Stiletoo : 24
+    # => ...
+
+To use a specific adapter for _Faraday_, pass it as the `adapter` argument:
+
+    client = Elasticsearch::Client.new adapter: :net_http_persistent
+
+    client.transport.connections.first.connection.builder.handlers
+    # => [Faraday::Adapter::NetHttpPersistent]
+
+To configure the _Faraday_ instance, pass a configuration block to the transport constructor:
+
+    require 'typhoeus'
+    require 'typhoeus/adapters/faraday'
+
+    transport_configuration = lambda do |f|
+      f.response :logger
+      f.adapter  :typhoeus
+    end
+
+    transport = Elasticsearch::Transport::Transport::HTTP::Faraday.new \
+      hosts: [ { host: 'localhost', port: '9200' } ],
+      &transport_configuration
+
+    # Pass the transport to the client
+    #
+    client = Elasticsearch::Client.new transport: transport
+
+To pass options to the
+[`Faraday::Connection`](https://github.com/lostisland/faraday/blob/master/lib/faraday/connection.rb)
+constructor, use the `transport_options` key:
+
+    client = Elasticsearch::Client.new transport_options: {
+      request: { open_timeout: 1 },
+      headers: { user_agent:   'MyApp' },
+      params:  { :format => 'yaml' }
+    }
+
+You can also use a bundled [_Curb_](https://rubygems.org/gems/curb) based transport implementation:
+
+    require 'curb'
+    require 'elasticsearch/transport/transport/http/curb'
+
+    client = Elasticsearch::Client.new transport_class: Elasticsearch::Transport::Transport::HTTP::Curb
+
+    client.transport.connections.first.connection
+    # => #<Curl::Easy http://localhost:9200/>
+
+It's possible to customize the _Curb_ instance by passing a block to the constructor as well
+(in this case, a lambda passed as the block):
+
+    transport = Elasticsearch::Transport::Transport::HTTP::Curb.new \
+      hosts: [ { host: 'localhost', port: '9200' } ],
+      & lambda { |c| c.verbose = true }
+
+    client = Elasticsearch::Client.new transport: transport
+
+Instead of passing the transport to the constructor, you can inject it at run time:
+
+    # Set up the transport
+    #
+    faraday_configuration = lambda do |f|
+      f.instance_variable_set :@ssl, { verify: false }
+      f.adapter :excon
+    end
+
+    faraday_client = Elasticsearch::Transport::Transport::HTTP::Faraday.new \
+      hosts: [ { host: 'my-protected-host',
+                 port: '443',
+                 user: 'USERNAME',
+                 password: 'PASSWORD',
+                 scheme: 'https'
+              }],
+      &faraday_configuration
+
+    # Create a default client
+    #
+    client = Elasticsearch::Client.new
+
+    # Inject the transport to the client
+    #
+    client.transport = faraday_client
+
+You can write your own transport implementation easily, by including the
+{Elasticsearch::Transport::Transport::Base} module, implementing the required contract,
+and passing it to the client as the `transport_class` parameter -- or injecting it directly.
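+
+For illustration, here is a minimal sketch of such a transport based on the standard `Net::HTTP`
+library (the class name `MyNetHttpTransport` is made up for this example, and a real implementation
+would need more careful error, timeout and header handling):
+
+    require 'net/http'
+
+    class MyNetHttpTransport
+      include Elasticsearch::Transport::Transport::Base
+
+      # Perform the request by invoking the base implementation with a block
+      # which executes the actual HTTP call and returns a Response object
+      def perform_request(method, path, params={}, body=nil)
+        super do |connection, url|
+          uri          = URI(url)
+          request      = Net::HTTP.const_get(method.capitalize).new(uri.request_uri)
+          request.body = __convert_to_json(body) if body
+          response     = connection.connection.request(request)
+
+          Elasticsearch::Transport::Transport::Response.new \
+            response.code.to_i,
+            response.body,
+            { 'content-type' => response['content-type'] }
+        end
+      end
+
+      # Build one connection (here, a `Net::HTTP` session) per configured host
+      def __build_connections
+        Elasticsearch::Transport::Transport::Connections::Collection.new \
+          :connections => hosts.map { |host|
+            host[:protocol]   = host[:scheme] || DEFAULT_PROTOCOL
+            host[:port]     ||= DEFAULT_PORT
+            Elasticsearch::Transport::Transport::Connections::Connection.new \
+              :host       => host,
+              :connection => Net::HTTP.new(host[:host], host[:port].to_i)
+          },
+          :selector => options[:selector]
+      end
+
+      # Errors which should mark a connection as dead
+      def host_unreachable_exceptions
+        [Errno::ECONNREFUSED, Errno::ECONNRESET]
+      end
+    end
+
+    client = Elasticsearch::Client.new transport_class: MyNetHttpTransport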
+
+### Serializer Implementations
+
+By default, the [MultiJSON](http://rubygems.org/gems/multi_json) library is used as the
+serializer implementation, and it will pick up the "right" adapter based on gems available.
+
+The serialization component is pluggable, though, so you can write your own by including the
+{Elasticsearch::Transport::Transport::Serializer::Base} module, implementing the required contract,
+and passing it to the client as the `serializer_class` or `serializer` parameter.
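+
+As a minimal sketch, a custom serializer only has to respond to `load` and `dump`
+(the class name `StdlibJsonSerializer` is made up for this example and simply delegates
+to the standard `json` library):
+
+    require 'json'
+
+    class StdlibJsonSerializer
+      include Elasticsearch::Transport::Transport::Serializer::Base
+
+      # Parse a JSON string (e.g. a response body) into a Ruby structure
+      def load(string, options={})
+        JSON.parse(string)
+      end
+
+      # Serialize a Ruby object (e.g. a request body) into a JSON string;
+      # pretty-printing options used by the tracer are ignored in this sketch
+      def dump(object, options={})
+        JSON.generate(object)
+      end
+    end
+
+    client = Elasticsearch::Client.new serializer_class: StdlibJsonSerializer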
+
+## Development and Community
+
+For local development, clone the repository and run `bundle install`. See `rake -T` for a list of
+available Rake tasks for running tests, generating documentation, starting a testing cluster, etc.
+
+Bug fixes and features must be covered by unit tests. Integration tests are written in Ruby 1.9 syntax.
+
+GitHub pull requests and issues are used for communication, bug reports and code contributions.
+
+## The Architecture
+
+* {Elasticsearch::Transport::Client} is composed of {Elasticsearch::Transport::Transport}
+
+* {Elasticsearch::Transport::Transport} is composed of {Elasticsearch::Transport::Transport::Connections},
+  and instances of a logger, tracer, serializer and sniffer.
+
+* The logger and tracer can be any object conforming to the Ruby logging interface,
+  i.e. an instance of [`Logger`](http://www.ruby-doc.org/stdlib-1.9.3/libdoc/logger/rdoc/Logger.html),
+  [_log4r_](https://rubygems.org/gems/log4r), [_logging_](https://github.com/TwP/logging/), etc.
+
+* The {Elasticsearch::Transport::Transport::Serializer::Base} implementations handle converting data for Elasticsearch
+  (e.g. to JSON). You can implement your own serializer.
+
+* {Elasticsearch::Transport::Transport::Sniffer} allows discovering nodes in the cluster and using them as connections.
+
+* {Elasticsearch::Transport::Transport::Connections::Collection} is composed of
+  {Elasticsearch::Transport::Transport::Connections::Connection} instances and a selector instance.
+
+* {Elasticsearch::Transport::Transport::Connections::Connection} contains the connection attributes such as hostname and port,
+  as well as the concrete persistent "session" connected to a specific node.
+
+* The {Elasticsearch::Transport::Transport::Connections::Selector::Base} implementations allow choosing connections
+  from the pool, e.g. in a round-robin or random fashion. You can implement your own selector strategy.
+
+## License
+
+This software is licensed under the Apache 2 license, quoted below.
+
+    Copyright (c) 2013 Elasticsearch <http://www.elasticsearch.org>
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
diff --git a/Rakefile b/Rakefile
new file mode 100644
index 0000000..5ed844a
--- /dev/null
+++ b/Rakefile
@@ -0,0 +1,80 @@
+require "bundler/gem_tasks"
+
+desc "Run unit tests"
+task :default => 'test:unit'
+task :test    => 'test:unit'
+
+# ----- Test tasks ------------------------------------------------------------
+
+require 'rake/testtask'
+namespace :test do
+  task :ci_reporter do
+    ENV['CI_REPORTS'] ||= 'tmp/reports'
+    if defined?(RUBY_VERSION) && RUBY_VERSION < '1.9'
+      require 'ci/reporter/rake/test_unit'
+      Rake::Task['ci:setup:testunit'].invoke
+    else
+      require 'ci/reporter/rake/minitest'
+      Rake::Task['ci:setup:minitest'].invoke
+    end
+  end
+
+  Rake::TestTask.new(:unit) do |test|
+    Rake::Task['test:ci_reporter'].invoke if ENV['CI']
+    test.libs << 'lib' << 'test'
+    test.test_files = FileList["test/unit/**/*_test.rb"]
+    # test.verbose = true
+    # test.warning = true
+  end
+
+  Rake::TestTask.new(:integration) do |test|
+    Rake::Task['test:ci_reporter'].invoke if ENV['CI']
+    test.libs << 'lib' << 'test'
+    test.test_files = FileList["test/integration/**/*_test.rb"]
+  end
+
+  Rake::TestTask.new(:all) do |test|
+    Rake::Task['test:ci_reporter'].invoke if ENV['CI']
+    test.libs << 'lib' << 'test'
+    test.test_files = FileList["test/unit/**/*_test.rb", "test/integration/**/*_test.rb"]
+  end
+
+  Rake::TestTask.new(:profile) do |test|
+    Rake::Task['test:ci_reporter'].invoke if ENV['CI']
+    test.libs << 'lib' << 'test'
+    test.test_files = FileList["test/profile/**/*_test.rb"]
+  end
+
+  namespace :cluster do
+    desc "Start Elasticsearch nodes for tests"
+    task :start do
+      $LOAD_PATH << File.expand_path('../lib', __FILE__) << File.expand_path('../test', __FILE__)
+      require 'elasticsearch/extensions/test/cluster'
+      Elasticsearch::Extensions::Test::Cluster.start
+    end
+
+    desc "Stop Elasticsearch nodes for tests"
+    task :stop do
+      $LOAD_PATH << File.expand_path('../lib', __FILE__) << File.expand_path('../test', __FILE__)
+      require 'elasticsearch/extensions/test/cluster'
+      Elasticsearch::Extensions::Test::Cluster.stop
+    end
+  end
+end
+
+# ----- Documentation tasks ---------------------------------------------------
+
+require 'yard'
+YARD::Rake::YardocTask.new(:doc) do |t|
+  t.options = %w| --embed-mixins --markup=markdown |
+end
+
+# ----- Code analysis tasks ---------------------------------------------------
+
+if defined?(RUBY_VERSION) && RUBY_VERSION > '1.9'
+  require 'cane/rake_task'
+  Cane::RakeTask.new(:quality) do |cane|
+    cane.abc_max = 15
+    cane.no_style = true
+  end
+end
diff --git a/checksums.yaml.gz b/checksums.yaml.gz
new file mode 100644
index 0000000..76d63be
Binary files /dev/null and b/checksums.yaml.gz differ
diff --git a/elasticsearch-transport.gemspec b/elasticsearch-transport.gemspec
new file mode 100644
index 0000000..ff79bc1
--- /dev/null
+++ b/elasticsearch-transport.gemspec
@@ -0,0 +1,69 @@
+# coding: utf-8
+lib = File.expand_path('../lib', __FILE__)
+$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+require 'elasticsearch/transport/version'
+
+Gem::Specification.new do |s|
+  s.name          = "elasticsearch-transport"
+  s.version       = Elasticsearch::Transport::VERSION
+  s.authors       = ["Karel Minarik"]
+  s.email         = ["karel.minarik at elasticsearch.org"]
+  s.summary       = "Ruby client for Elasticsearch."
+  s.homepage      = "https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-transport"
+  s.license       = "Apache 2"
+
+  s.files         = `git ls-files`.split($/)
+  s.executables   = s.files.grep(%r{^bin/}) { |f| File.basename(f) }
+  s.test_files    = s.files.grep(%r{^(test|spec|features)/})
+  s.require_paths = ["lib"]
+
+  s.extra_rdoc_files  = [ "README.md", "LICENSE.txt" ]
+  s.rdoc_options      = [ "--charset=UTF-8" ]
+
+  s.add_dependency "multi_json"
+  s.add_dependency "faraday"
+
+  if defined?(RUBY_VERSION) && RUBY_VERSION < '1.9'
+    s.add_dependency "system_timer"
+  end
+
+  s.add_development_dependency "bundler", "> 1"
+  s.add_development_dependency "rake"
+
+  if defined?(RUBY_VERSION) && RUBY_VERSION > '1.9'
+    s.add_development_dependency "elasticsearch-extensions"
+  end
+
+  s.add_development_dependency "ansi"
+  s.add_development_dependency "shoulda-context"
+  s.add_development_dependency "mocha"
+  s.add_development_dependency "turn"
+  s.add_development_dependency "yard"
+  s.add_development_dependency "pry"
+  s.add_development_dependency "ci_reporter"
+
+  # Gems for testing integrations
+  s.add_development_dependency "curb"   unless defined? JRUBY_VERSION
+  s.add_development_dependency "patron" unless defined? JRUBY_VERSION
+  s.add_development_dependency "typhoeus", '~> 0.6'
+
+  # Prevent unit test failures on Ruby 1.8
+  if defined?(RUBY_VERSION) && RUBY_VERSION < '1.9'
+    s.add_development_dependency "test-unit", '~> 2'
+    s.add_development_dependency "json"
+  end
+
+  if defined?(RUBY_VERSION) && RUBY_VERSION > '1.9'
+    s.add_development_dependency "minitest", "~> 4.0"
+    s.add_development_dependency "ruby-prof"    unless defined?(JRUBY_VERSION) || defined?(Rubinius)
+    s.add_development_dependency "require-prof" unless defined?(JRUBY_VERSION) || defined?(Rubinius)
+    s.add_development_dependency "simplecov"
+    s.add_development_dependency "simplecov-rcov"
+    s.add_development_dependency "cane"
+    s.add_development_dependency "coveralls"
+  end
+
+  s.description = <<-DESC.gsub(/^    /, '')
+    Ruby client for Elasticsearch. See the `elasticsearch` gem for full integration.
+  DESC
+end
diff --git a/lib/elasticsearch-transport.rb b/lib/elasticsearch-transport.rb
new file mode 100644
index 0000000..644a9f0
--- /dev/null
+++ b/lib/elasticsearch-transport.rb
@@ -0,0 +1 @@
+require 'elasticsearch/transport'
diff --git a/lib/elasticsearch/transport.rb b/lib/elasticsearch/transport.rb
new file mode 100644
index 0000000..5e5de9e
--- /dev/null
+++ b/lib/elasticsearch/transport.rb
@@ -0,0 +1,30 @@
+require "uri"
+require "time"
+require "timeout"
+require "multi_json"
+require "faraday"
+
+require "elasticsearch/transport/transport/serializer/multi_json"
+require "elasticsearch/transport/transport/sniffer"
+require "elasticsearch/transport/transport/response"
+require "elasticsearch/transport/transport/errors"
+require "elasticsearch/transport/transport/base"
+require "elasticsearch/transport/transport/connections/selector"
+require "elasticsearch/transport/transport/connections/connection"
+require "elasticsearch/transport/transport/connections/collection"
+require "elasticsearch/transport/transport/http/faraday"
+require "elasticsearch/transport/client"
+
+require "elasticsearch/transport/version"
+
+module Elasticsearch
+  module Client
+
+    # A convenience wrapper for {::Elasticsearch::Transport::Client#initialize}.
+    #
+    def new(arguments={})
+      Elasticsearch::Transport::Client.new(arguments)
+    end
+    extend self
+  end
+end
diff --git a/lib/elasticsearch/transport/client.rb b/lib/elasticsearch/transport/client.rb
new file mode 100644
index 0000000..889bc4f
--- /dev/null
+++ b/lib/elasticsearch/transport/client.rb
@@ -0,0 +1,167 @@
+module Elasticsearch
+  module Transport
+
+    # Handles communication with an Elasticsearch cluster.
+    #
+    # See {file:README.md README} for usage and code examples.
+    #
+    class Client
+      DEFAULT_TRANSPORT_CLASS  = Transport::HTTP::Faraday
+
+      DEFAULT_LOGGER = lambda do
+        require 'logger'
+        logger = Logger.new(STDERR)
+        logger.progname = 'elasticsearch'
+        logger.formatter = proc { |severity, datetime, progname, msg| "#{datetime}: #{msg}\n" }
+        logger
+      end
+
+      DEFAULT_TRACER = lambda do
+        require 'logger'
+        logger = Logger.new(STDERR)
+        logger.progname = 'elasticsearch.tracer'
+        logger.formatter = proc { |severity, datetime, progname, msg| "#{msg}\n" }
+        logger
+      end
+
+      # Returns the transport object.
+      #
+      # @see Elasticsearch::Transport::Transport::Base
+      # @see Elasticsearch::Transport::Transport::HTTP::Faraday
+      #
+      attr_accessor :transport
+
+      # Create a client connected to an Elasticsearch cluster.
+      #
+      # @option arguments [String,Array] :hosts Single host passed as a String or Hash, or multiple hosts
+      #                                         passed as an Array; `host` or `url` keys are also valid
+      #
+      # @option arguments [Boolean] :log   Use the default logger (disabled by default)
+      #
+      # @option arguments [Boolean] :trace Use the default tracer (disabled by default)
+      #
+      # @option arguments [Object] :logger An instance of a Logger-compatible object
+      #
+      # @option arguments [Object] :tracer An instance of a Logger-compatible object
+      #
+      # @option arguments [Number] :resurrect_after After how many seconds a dead connection should be tried again
+      #
+      # @option arguments [Boolean,Number] :reload_connections Reload connections after X requests (false by default)
+      #
+      # @option arguments [Boolean] :randomize_hosts   Shuffle connections on initialization and reload (false by default)
+      #
+      # @option arguments [Integer] :sniffer_timeout   Timeout for reloading connections in seconds (1 by default)
+      #
+      # @option arguments [Boolean,Number] :retry_on_failure   Retry X times when a request fails before raising an
+      #                                                        exception (false by default)
+      #
+      # @option arguments [Boolean] :reload_on_failure Reload connections after failure (false by default)
+      #
+      # @option arguments [Symbol] :adapter A specific adapter for Faraday (e.g. `:patron`)
+      #
+      # @option arguments [Hash] :transport_options Options to be passed to the `Faraday::Connection` constructor
+      #
+      # @option arguments [Constant] :transport_class  A specific transport class to use, will be initialized by
+      #                                                the client and passed hosts and all arguments
+      #
+      # @option arguments [Object] :transport A specific transport instance
+      #
+      # @option arguments [Constant] :serializer_class A specific serializer class to use, will be initialized by
+      #                                               the transport and passed the transport instance
+      #
+      # @option arguments [Object] :selector An instance of a selector strategy implemented with
+      #                                        {Elasticsearch::Transport::Transport::Connections::Selector::Base}.
+      #
+      def initialize(arguments={})
+        hosts            = arguments[:hosts] || arguments[:host] || arguments[:url]
+
+        arguments[:logger] ||= arguments[:log]   ? DEFAULT_LOGGER.call() : nil
+        arguments[:tracer] ||= arguments[:trace] ? DEFAULT_TRACER.call() : nil
+        arguments[:reload_connections] ||= false
+        arguments[:retry_on_failure]   ||= false
+        arguments[:reload_on_failure]  ||= false
+        arguments[:randomize_hosts]    ||= false
+        arguments[:transport_options]  ||= {}
+
+        transport_class  = arguments[:transport_class] || DEFAULT_TRANSPORT_CLASS
+
+        @transport       = arguments[:transport] || begin
+          if transport_class == Transport::HTTP::Faraday
+            transport_class.new(:hosts => __extract_hosts(hosts, arguments), :options => arguments) do |faraday|
+              faraday.adapter(arguments[:adapter] || __auto_detect_adapter)
+            end
+          else
+            transport_class.new(:hosts => __extract_hosts(hosts, arguments), :options => arguments)
+          end
+        end
+      end
+
+      # Performs a request through delegation to {#transport}.
+      #
+      def perform_request(method, path, params={}, body=nil)
+        transport.perform_request method, path, params, body
+      end
+
+      # Normalizes and returns hosts configuration.
+      #
+      # Arrayifies the `hosts_config` argument and extracts `host` and `port` info from strings.
+      # Performs shuffling when the `randomize_hosts` option is set.
+      #
+      # TODO: Refactor, so it's available in Elasticsearch::Transport::Base as well
+      #
+      # @return [Array<Hash>]
+      # @raise  [ArgumentError]
+      #
+      # @api private
+      #
+      def __extract_hosts(hosts_config=nil, options={})
+        hosts_config = hosts_config.nil? ? ['localhost'] : Array(hosts_config)
+
+        hosts = hosts_config.map do |host|
+          case host
+          when String
+            if host =~ /^[a-z]+\:\/\//
+              uri = URI.parse(host)
+              { :scheme => uri.scheme, :user => uri.user, :password => uri.password, :host => uri.host, :path => uri.path, :port => uri.port.to_s }
+            else
+              host, port = host.split(':')
+              { :host => host, :port => port }
+            end
+          when URI
+            { :scheme => host.scheme, :user => host.user, :password => host.password, :host => host.host, :path => host.path, :port => host.port.to_s }
+          when Hash
+            host
+          else
+            raise ArgumentError, "Please pass host as a String, URI or Hash -- #{host.class} given."
+          end
+        end
+
+        hosts.shuffle! if options[:randomize_hosts]
+        hosts
+      end
+
+      # Auto-detect the best adapter (HTTP "driver") available, based on libraries
+      # loaded by the user, preferring those with persistent connections
+      # ("keep-alive") by default
+      #
+      # @return [Symbol]
+      #
+      # @api private
+      #
+      def __auto_detect_adapter
+        case
+        when defined?(::Patron)
+          :patron
+        when defined?(::Typhoeus)
+          :typhoeus
+        when defined?(::HTTPClient)
+          :httpclient
+        when defined?(::Net::HTTP::Persistent)
+          :net_http_persistent
+        else
+          ::Faraday.default_adapter
+        end
+      end
+    end
+  end
+end
diff --git a/lib/elasticsearch/transport/transport/base.rb b/lib/elasticsearch/transport/transport/base.rb
new file mode 100644
index 0000000..582ebd3
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/base.rb
@@ -0,0 +1,258 @@
+module Elasticsearch
+  module Transport
+    module Transport
+
+      # @abstract Module with common functionality for transport implementations.
+      #
+      module Base
+        DEFAULT_PORT             = 9200
+        DEFAULT_PROTOCOL         = 'http'
+        DEFAULT_RELOAD_AFTER     = 10_000 # Requests
+        DEFAULT_RESURRECT_AFTER  = 60     # Seconds
+        DEFAULT_MAX_RETRIES      = 3      # Requests
+        DEFAULT_SERIALIZER_CLASS = Serializer::MultiJson
+
+        attr_reader   :hosts, :options, :connections, :counter, :last_request_at, :protocol
+        attr_accessor :serializer, :sniffer, :logger, :tracer, :reload_after, :resurrect_after, :max_retries
+
+        # Creates a new transport object.
+        #
+        # @param arguments [Hash] Settings and options for the transport
+        # @param block     [Proc] Lambda or Proc which can be evaluated in the context of the "session" object
+        #
+        # @option arguments [Array] :hosts   An Array of normalized hosts information
+        # @option arguments [Array] :options A Hash with options (usually passed by {Client})
+        #
+        # @see Client#initialize
+        #
+        def initialize(arguments={}, &block)
+          @hosts       = arguments[:hosts]   || []
+          @options     = arguments[:options] || {}
+          @block       = block
+          @connections = __build_connections
+
+          @serializer  = options[:serializer] || ( options[:serializer_class] ? options[:serializer_class].new(self) : DEFAULT_SERIALIZER_CLASS.new(self) )
+          @protocol    = options[:protocol] || DEFAULT_PROTOCOL
+
+          @logger      = options[:logger]
+          @tracer      = options[:tracer]
+
+          @sniffer     = options[:sniffer_class] ? options[:sniffer_class].new(self) : Sniffer.new(self)
+          @counter     = 0
+          @last_request_at = Time.now
+          @reload_after    = options[:reload_connections].is_a?(Fixnum) ? options[:reload_connections] : DEFAULT_RELOAD_AFTER
+          @resurrect_after = options[:resurrect_after] || DEFAULT_RESURRECT_AFTER
+          @max_retries     = options[:retry_on_failure].is_a?(Fixnum)   ? options[:retry_on_failure]   : DEFAULT_MAX_RETRIES
+        end
+
+        # Returns a connection from the connection pool by delegating to {Connections::Collection#get_connection}.
+        #
+        # Resurrects dead connection if the `resurrect_after` timeout has passed.
+        # Increments the counter and performs connection reloading if the `reload_connections` option is set.
+        #
+        # @return [Connections::Connection]
+        # @see    Connections::Collection#get_connection
+        #
+        def get_connection(options={})
+          resurrect_dead_connections! if Time.now > @last_request_at + @resurrect_after
+
+          connection = connections.get_connection(options)
+          @counter  += 1
+
+          reload_connections!         if @options[:reload_connections] && counter % reload_after == 0
+          connection
+        end
+
+        # Reloads and replaces the connection collection based on cluster information.
+        #
+        # @see Sniffer#hosts
+        #
+        def reload_connections!
+          hosts = sniffer.hosts
+          __rebuild_connections :hosts => hosts, :options => options
+          self
+        rescue SnifferTimeoutError
+          logger.error "[SnifferTimeoutError] Timeout when reloading connections." if logger
+          self
+        end
+
+        # Tries to "resurrect" all eligible dead connections.
+        #
+        # @see Connections::Connection#resurrect!
+        #
+        def resurrect_dead_connections!
+          connections.dead.each { |c| c.resurrect! }
+        end
+
+        # Replaces the connections collection.
+        #
+        # @api private
+        #
+        def __rebuild_connections(arguments={})
+          @hosts       = arguments[:hosts]    || []
+          @options     = arguments[:options]  || {}
+          @connections = __build_connections
+        end
+
+        # Log request and response information.
+        #
+        # @api private
+        #
+        def __log(method, path, params, body, url, response, json, took, duration)
+          logger.info  "#{method.to_s.upcase} #{url} " +
+                       "[status:#{response.status}, request:#{sprintf('%.3fs', duration)}, query:#{took}]"
+          logger.debug "> #{__convert_to_json(body)}" if body
+          logger.debug "< #{response.body}"
+        end
+
+        # Log failed request.
+        #
+        # @api private
+        def __log_failed(response)
+          logger.fatal "[#{response.status}] #{response.body}"
+        end
+
+        # Trace the request in the `curl` format.
+        #
+        # @api private
+        def __trace(method, path, params, body, url, response, json, took, duration)
+          trace_url  = "http://localhost:9200/#{path}?pretty" +
+                       ( params.empty? ? '' : "&#{::Faraday::Utils::ParamsHash[params].to_query}" )
+          trace_body = body ? " -d '#{__convert_to_json(body, :pretty => true)}'" : ''
+          tracer.info  "curl -X #{method.to_s.upcase} '#{trace_url}'#{trace_body}\n"
+          tracer.debug "# #{Time.now.iso8601} [#{response.status}] (#{format('%.3f', duration)}s)\n#"
+          tracer.debug json ? serializer.dump(json, :pretty => true).gsub(/^/, '# ').sub(/\}$/, "\n# }")+"\n" : "# #{response.body}\n"
+        end
+
+        # Raise an error specific to the HTTP response status, or a generic server error
+        #
+        # @api private
+        def __raise_transport_error(response)
+          error = ERRORS[response.status] || ServerError
+          raise error.new "[#{response.status}] #{response.body}"
+        end
+
+        # Converts any non-String object to JSON
+        #
+        # @api private
+        def __convert_to_json(o=nil, options={})
+          o = o.is_a?(String) ? o : serializer.dump(o, options)
+        end
+
+        # Returns a full URL based on information from host
+        #
+        # @param host [Hash] Host configuration passed in from {Client}
+        #
+        # @api private
+        def __full_url(host)
+          url  = "#{host[:protocol]}://"
+          url += "#{host[:user]}:#{host[:password]}@" if host[:user]
+          url += "#{host[:host]}:#{host[:port]}"
+          url += "#{host[:path]}" if host[:path]
+          url
+        end
+
+        # Performs a request to Elasticsearch, while handling logging, tracing, marking dead connections,
+        # retrying the request and reloading the connections.
+        #
+        # @abstract The transport implementation has to implement this method either in full,
+        #           or by invoking this method with a block. See {HTTP::Faraday#perform_request} for an example.
+        #
+        # @param method [String] Request method
+        # @param path   [String] The API endpoint
+        # @param params [Hash]   Request parameters (will be serialized by {Connections::Connection#full_url})
+        # @param body   [Hash]   Request body (will be serialized by the {#serializer})
+        # @param block  [Proc]   Code block to evaluate, passed from the implementation
+        #
+        # @return [Response]
+        # @raise  [NoMethodError] If no block is passed
+        # @raise  [ServerError]   If request failed on server
+        # @raise  [Error]         If no connection is available
+        #
+        def perform_request(method, path, params={}, body=nil, &block)
+          raise NoMethodError, "Implement this method in your transport class" unless block_given?
+          start = Time.now if logger || tracer
+          tries = 0
+
+          begin
+            tries     += 1
+            connection = get_connection or raise Error.new("Cannot get new connection from pool.")
+
+            if connection.connection.respond_to?(:params) && connection.connection.params.respond_to?(:to_hash)
+              params = connection.connection.params.merge(params.to_hash)
+            end
+
+            url        = connection.full_url(path, params)
+
+            response   = block.call(connection, url)
+
+            connection.healthy! if connection.failures > 0
+
+          rescue *host_unreachable_exceptions => e
+            logger.error "[#{e.class}] #{e.message} #{connection.host.inspect}" if logger
+
+            connection.dead!
+
+            if @options[:reload_on_failure] and tries < connections.all.size
+              logger.warn "[#{e.class}] Reloading connections (attempt #{tries} of #{connections.size})" if logger
+              reload_connections! and retry
+            end
+
+            if @options[:retry_on_failure]
+              logger.warn "[#{e.class}] Attempt #{tries} connecting to #{connection.host.inspect}" if logger
+              if tries <= max_retries
+                retry
+              else
+                logger.fatal "[#{e.class}] Cannot connect to #{connection.host.inspect} after #{tries} tries" if logger
+                raise e
+              end
+            else
+              raise e
+            end
+
+          rescue Exception => e
+            logger.fatal "[#{e.class}] #{e.message} (#{connection.host.inspect})" if logger
+            raise e
+          end
+
+          duration = Time.now-start if logger || tracer
+
+          if response.status.to_i >= 300
+            __log    method, path, params, body, url, response, nil, 'N/A', duration if logger
+            __trace  method, path, params, body, url, response, nil, 'N/A', duration if tracer
+            __log_failed response if logger
+            __raise_transport_error response
+          end
+
+          json     = serializer.load(response.body) if response.headers && response.headers["content-type"] =~ /json/
+          took     = (json['took'] ? sprintf('%.3fs', json['took']/1000.0) : 'n/a') rescue 'n/a' if logger || tracer
+
+          __log   method, path, params, body, url, response, json, took, duration if logger
+          __trace method, path, params, body, url, response, json, took, duration if tracer
+
+          Response.new response.status, json || response.body, response.headers
+        ensure
+          @last_request_at = Time.now
+        end
+
+        # @abstract Returns an Array of connection errors specific to the transport implementation.
+        #           See {HTTP::Faraday#host_unreachable_exceptions} for an example.
+        #
+        # @return [Array]
+        #
+        def host_unreachable_exceptions
+          [Errno::ECONNREFUSED]
+        end
+
+        # @abstract A transport implementation must implement this method.
+        #           See {HTTP::Faraday#__build_connections} for an example.
+        #
+        # @return [Connections::Collection]
+        # @api    private
+        def __build_connections
+          raise NoMethodError, "Implement this method in your class"
+        end
+      end
+    end
+  end
+end
diff --git a/lib/elasticsearch/transport/transport/connections/collection.rb b/lib/elasticsearch/transport/transport/connections/collection.rb
new file mode 100644
index 0000000..6d9147b
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/connections/collection.rb
@@ -0,0 +1,93 @@
+module Elasticsearch
+  module Transport
+    module Transport
+      module Connections
+
+        # Wraps the collection of connections for the transport object as an Enumerable object.
+        #
+        # @see Base#connections
+        # @see Selector::Base#select
+        # @see Connection
+        #
+        class Collection
+          include Enumerable
+
+          DEFAULT_SELECTOR = Selector::RoundRobin
+
+          attr_reader :selector
+
+          # @option arguments [Array]    :connections    An array of {Connection} objects.
+          # @option arguments [Constant] :selector_class The class to be used as a connection selector strategy.
+          # @option arguments [Object]   :selector       The selector strategy object.
+          #
+          def initialize(arguments={})
+            selector_class = arguments[:selector_class] || DEFAULT_SELECTOR
+            @connections   = arguments[:connections]    || []
+            @selector      = arguments[:selector]       || selector_class.new(arguments)
+          end
+
+          # Returns an Array of hosts information in this collection as Hashes.
+          #
+          # @return [Array]
+          #
+          def hosts
+            @connections.to_a.map { |c| c.host }
+          end
+
+          # Returns an Array of alive connections.
+          #
+          # @return [Array]
+          #
+          def connections
+            @connections.reject { |c| c.dead? }
+          end
+          alias :alive :connections
+
+          # Returns an Array of dead connections.
+          #
+          # @return [Array]
+          #
+          def dead
+            @connections.select { |c| c.dead? }
+          end
+
+          # Returns an Array of all connections, both dead and alive
+          #
+          # @return [Array]
+          #
+          def all
+            @connections
+          end
+
+          # Returns a connection.
+          #
+          # If there are no alive connections, resurrects the dead connection with the fewest failures.
+          # Delegates to selector's `#select` method to get the connection.
+          #
+          # @return [Connection]
+          #
+          def get_connection(options={})
+            if connections.empty? && dead_connection = dead.sort { |a,b| a.failures <=> b.failures }.first
+              dead_connection.alive!
+            end
+            selector.select(options)
+          end
+
+          def each(&block)
+            connections.each(&block)
+          end
+
+          def slice(*args)
+            connections.slice(*args)
+          end
+          alias :[] :slice
+
+          def size
+            connections.size
+          end
+        end
+
+      end
+    end
+  end
+end
diff --git a/lib/elasticsearch/transport/transport/connections/connection.rb b/lib/elasticsearch/transport/transport/connections/connection.rb
new file mode 100644
index 0000000..0efd0b3
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/connections/connection.rb
@@ -0,0 +1,121 @@
+module Elasticsearch
+  module Transport
+    module Transport
+      module Connections
+
+        # Wraps the connection information and logic.
+        #
+        # The Connection instance wraps the host information (hostname, port, attributes, etc),
+        # as well as the "session" (a transport client object, such as a {HTTP::Faraday} instance).
+        #
+        # It provides methods to construct and properly encode the URLs and paths for passing them
+        # to the transport client object.
+        #
+        # It provides methods to handle the connection lifecycle (dead, alive, healthy).
+        #
+        class Connection
+          DEFAULT_RESURRECT_TIMEOUT = 60
+
+          attr_reader :host, :connection, :options, :failures, :dead_since
+
+          # @option arguments [Hash]   :host       Host information (example: `{host: 'localhost', port: 9200}`)
+          # @option arguments [Object] :connection The transport-specific physical connection or "session"
+          # @option arguments [Hash]   :options    Options (usually passed in from transport)
+          #
+          def initialize(arguments={})
+            @host       = arguments[:host]
+            @connection = arguments[:connection]
+            @options    = arguments[:options] || {}
+
+            @options[:resurrect_timeout] ||= DEFAULT_RESURRECT_TIMEOUT
+            @failures = 0
+          end
+
+          # Returns the complete endpoint URL with host, port, path and serialized parameters.
+          #
+          # @return [String]
+          #
+          def full_url(path, params={})
+            url  = "#{host[:protocol]}://"
+            url += "#{host[:user]}:#{host[:password]}@" if host[:user]
+            url += "#{host[:host]}:#{host[:port]}"
+            url += "#{host[:path]}" if host[:path]
+            url += "/#{full_path(path, params)}"
+          end
+
+          # Returns the complete endpoint path with serialized parameters.
+          #
+          # @return [String]
+          #
+          def full_path(path, params={})
+            path + (params.empty? ? '' : "?#{::Faraday::Utils::ParamsHash[params].to_query}")
+          end
+
+          # Returns true when this connection has been marked dead, false otherwise.
+          #
+          # @return [Boolean]
+          #
+          def dead?
+            @dead || false
+          end
+
+          # Marks this connection as dead, incrementing the `failures` counter and
+          # storing the current time as `dead_since`.
+          #
+          # @return [self]
+          #
+          def dead!
+            @dead       = true
+            @failures  += 1
+            @dead_since = Time.now
+            self
+          end
+
+          # Marks this connection as alive, ie. it is eligible to be returned from the pool by the selector.
+          #
+          # @return [self]
+          #
+          def alive!
+            @dead     = false
+            self
+          end
+
+          # Marks this connection as healthy, ie. a request has been successfully performed with it.
+          #
+          # @return [self]
+          #
+          def healthy!
+            @dead     = false
+            @failures = 0
+            self
+          end
+
+          # Marks this connection as alive, if the required timeout has passed.
+          #
+          # @return [self,nil]
+          # @see    DEFAULT_RESURRECT_TIMEOUT
+          # @see    #resurrectable?
+          #
+          def resurrect!
+            alive! if resurrectable?
+          end
+
+          # Returns true if the connection is eligible to be resurrected as alive, false otherwise.
+          #
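+          # The required delay grows exponentially with the number of failures,
+          # i.e. `resurrect_timeout * 2 ** (failures - 1)` seconds after the connection was marked dead.
+          #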
+          # @return [Boolean]
+          #
+          def resurrectable?
+            Time.now > @dead_since + ( @options[:resurrect_timeout] * 2 ** (@failures-1) )
+          end
+
+          # @return [String]
+          #
+          def to_s
+            "<#{self.class.name} host: #{host} (#{dead? ? 'dead since ' + dead_since.to_s : 'alive'})>"
+          end
+        end
+
+      end
+    end
+  end
+end
diff --git a/lib/elasticsearch/transport/transport/connections/selector.rb b/lib/elasticsearch/transport/transport/connections/selector.rb
new file mode 100644
index 0000000..3fec180
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/connections/selector.rb
@@ -0,0 +1,63 @@
+module Elasticsearch
+  module Transport
+    module Transport
+      module Connections
+        module Selector
+
+          # @abstract Common functionality for connection selector implementations.
+          #
+          module Base
+            attr_reader :connections
+
+            # @option arguments [Connections::Collection] :connections Collection with connections.
+            #
+            def initialize(arguments={})
+              @connections = arguments[:connections]
+            end
+
+            # @abstract Selector strategies implement this method to
+            #           select and return a connection from the pool.
+            #
+            # @return [Connection]
+            #
+            def select(options={})
+              raise NoMethodError, "Implement this method in the selector implementation."
+            end
+          end
+
+          # "Random connection" selector strategy.
+          #
+          class Random
+            include Base
+
+            # Returns a random connection from the collection.
+            #
+            # @return [Connections::Connection]
+            #
+            def select(options={})
+              connections.to_a.send( defined?(RUBY_VERSION) && RUBY_VERSION > '1.9' ? :sample : :choice)
+            end
+          end
+
+          # "Round-robin" selector strategy (default).
+          #
+          class RoundRobin
+            include Base
+
+            # Returns the next connection from the collection, rotating them in round-robin fashion.
+            #
+            # @return [Connections::Connection]
+            #
+            def select(options={})
+              # On Ruby 1.9, Array#rotate could be used instead
+              @current = @current.nil? ? 0 : @current+1
+              @current = 0 if @current >= connections.size
+              connections[@current]
+            end
+          end
+
+        end
+      end
+    end
+  end
+end
diff --git a/lib/elasticsearch/transport/transport/errors.rb b/lib/elasticsearch/transport/transport/errors.rb
new file mode 100644
index 0000000..2a83e94
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/errors.rb
@@ -0,0 +1,73 @@
+module Elasticsearch
+  module Transport
+    module Transport
+
+      # Generic client error
+      #
+      class Error < StandardError; end
+
+      # Reloading connections timeout (1 sec by default)
+      #
+      class SnifferTimeoutError < Timeout::Error; end
+
+      # Elasticsearch server error (HTTP status 5xx)
+      #
+      class ServerError < StandardError; end
+
+      module Errors; end
+
+      HTTP_STATUSES = {
+        300 => 'MultipleChoices',
+        301 => 'MovedPermanently',
+        302 => 'Found',
+        303 => 'SeeOther',
+        304 => 'NotModified',
+        305 => 'UseProxy',
+        307 => 'TemporaryRedirect',
+        308 => 'PermanentRedirect',
+
+        400 => 'BadRequest',
+        401 => 'Unauthorized',
+        402 => 'PaymentRequired',
+        403 => 'Forbidden',
+        404 => 'NotFound',
+        405 => 'MethodNotAllowed',
+        406 => 'NotAcceptable',
+        407 => 'ProxyAuthenticationRequired',
+        408 => 'RequestTimeout',
+        409 => 'Conflict',
+        410 => 'Gone',
+        411 => 'LengthRequired',
+        412 => 'PreconditionFailed',
+        413 => 'RequestEntityTooLarge',
+        414 => 'RequestURITooLong',
+        415 => 'UnsupportedMediaType',
+        416 => 'RequestedRangeNotSatisfiable',
+        417 => 'ExpectationFailed',
+        418 => 'ImATeapot',
+        421 => 'TooManyConnectionsFromThisIP',
+        426 => 'UpgradeRequired',
+        450 => 'BlockedByWindowsParentalControls',
+        494 => 'RequestHeaderTooLarge',
+        497 => 'HTTPToHTTPS',
+        499 => 'ClientClosedRequest',
+
+        500 => 'InternalServerError',
+        501 => 'NotImplemented',
+        502 => 'BadGateway',
+        503 => 'ServiceUnavailable',
+        504 => 'GatewayTimeout',
+        505 => 'HTTPVersionNotSupported',
+        506 => 'VariantAlsoNegotiates',
+        510 => 'NotExtended'
+      }
+
+      ERRORS = HTTP_STATUSES.inject({}) do |sum, error|
+        status, name = error
+        sum[status] = Errors.const_set name, Class.new(ServerError)
+        sum
+      end
+
+    end
+  end
+end
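For reference, every generated class lives under the Errors namespace and inherits from ServerError, so callers can rescue a specific status or the whole family. A minimal sketch, assuming a `client` built as in the integration tests below (index and document names are illustrative):

    begin
      client.perform_request 'GET', 'myindex/mydoc/1'
    rescue Elasticsearch::Transport::Transport::Errors::NotFound
      # The index or document does not exist (HTTP 404)
    rescue Elasticsearch::Transport::Transport::ServerError => e
      # Any other status listed in HTTP_STATUSES
      raise e
    end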
diff --git a/lib/elasticsearch/transport/transport/http/curb.rb b/lib/elasticsearch/transport/transport/http/curb.rb
new file mode 100644
index 0000000..41b15b3
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/http/curb.rb
@@ -0,0 +1,80 @@
+module Elasticsearch
+  module Transport
+    module Transport
+      module HTTP
+
+        # Alternative HTTP transport implementation, using the [_Curb_](https://rubygems.org/gems/curb) client.
+        #
+        # @see Transport::Base
+        #
+        class Curb
+          include Base
+
+          # Performs the request by invoking {Transport::Base#perform_request} with a block.
+          #
+          # @return [Response]
+          # @see    Transport::Base#perform_request
+          #
+          def perform_request(method, path, params={}, body=nil)
+            super do |connection,url|
+              connection.connection.url = url
+
+              case method
+                when 'HEAD'
+                when 'GET', 'POST', 'PUT', 'DELETE'
+                  connection.connection.put_data = __convert_to_json(body) if body
+                else raise ArgumentError, "Unsupported HTTP method: #{method}"
+              end
+
+              connection.connection.http(method.to_sym)
+
+              headers = {}
+              headers['content-type'] = 'application/json' if connection.connection.header_str =~ /\/json/
+
+              Response.new connection.connection.response_code,
+                           connection.connection.body_str,
+                           headers
+            end
+          end
+
+          # Builds and returns a collection of connections.
+          #
+          # @return [Connections::Collection]
+          #
+          def __build_connections
+            Connections::Collection.new \
+              :connections => hosts.map { |host|
+                host[:protocol]   = host[:scheme] || DEFAULT_PROTOCOL
+                host[:port]     ||= DEFAULT_PORT
+
+                client = ::Curl::Easy.new
+                client.resolve_mode = :ipv4
+                client.headers      = {'User-Agent' => "Curb #{Curl::CURB_VERSION}"}
+                client.url          = __full_url(host)
+
+                if host[:user]
+                  client.http_auth_types = host[:auth_type] || :basic
+                  client.username = host[:user]
+                  client.password = host[:password]
+                end
+
+                client.instance_eval &@block if @block
+
+                Connections::Connection.new :host => host, :connection => client
+              },
+              :selector => options[:selector]
+          end
+
+          # Returns an array of implementation-specific connection errors.
+          #
+          # @return [Array]
+          #
+          def host_unreachable_exceptions
+            [::Curl::Err::HostResolutionError, ::Curl::Err::ConnectionFailedError]
+          end
+        end
+
+      end
+    end
+  end
+end
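For reference, the Curb transport is opt-in: require the curb gem and this file, then pass the class to the client, as the integration and profiling tests below do. A minimal sketch (host and port are illustrative):

    require 'curb'
    require 'elasticsearch/transport'
    require 'elasticsearch/transport/transport/http/curb'

    client = Elasticsearch::Transport::Client.new \
      :host            => 'localhost:9200',
      :transport_class => Elasticsearch::Transport::Transport::HTTP::Curb

    client.perform_request 'GET', '_cluster/health'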
diff --git a/lib/elasticsearch/transport/transport/http/faraday.rb b/lib/elasticsearch/transport/transport/http/faraday.rb
new file mode 100644
index 0000000..04800a2
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/http/faraday.rb
@@ -0,0 +1,60 @@
+module Elasticsearch
+  module Transport
+    module Transport
+      module HTTP
+
+        # The default transport implementation, using the [_Faraday_](https://rubygems.org/gems/faraday)
+        # library for abstracting the HTTP client.
+        #
+        # @see Transport::Base
+        #
+        class Faraday
+          include Base
+
+          # Performs the request by invoking {Transport::Base#perform_request} with a block.
+          #
+          # @return [Response]
+          # @see    Transport::Base#perform_request
+          #
+          def perform_request(method, path, params={}, body=nil)
+            super do |connection, url|
+              response = connection.connection.run_request \
+                method.downcase.to_sym,
+                url,
+                ( body ? __convert_to_json(body) : nil ),
+                {}
+              Response.new response.status, response.body, response.headers
+            end
+          end
+
+          # Builds and returns a collection of connections.
+          #
+          # @return [Connections::Collection]
+          #
+          def __build_connections
+            Connections::Collection.new \
+              :connections => hosts.map { |host|
+                host[:protocol]   = host[:scheme] || DEFAULT_PROTOCOL
+                host[:port]     ||= DEFAULT_PORT
+                url               = __full_url(host)
+
+                Connections::Connection.new \
+                  :host => host,
+                  :connection => ::Faraday::Connection.new(url, (options[:transport_options] || {}), &@block )
+              },
+              :selector_class => options[:selector_class],
+              :selector => options[:selector]
+          end
+
+          # Returns an array of implementation-specific connection errors.
+          #
+          # @return [Array]
+          #
+          def host_unreachable_exceptions
+            [::Faraday::Error::ConnectionFailed, ::Faraday::Error::TimeoutError]
+          end
+        end
+      end
+    end
+  end
+end
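For reference, the block given to the transport constructor is handed to ::Faraday::Connection.new, so middleware and the HTTP adapter can be customized when building the transport directly. A minimal sketch mirroring the integration test below (host and port are illustrative; assumes the typhoeus gem is installed):

    require 'typhoeus'
    require 'typhoeus/adapters/faraday'

    transport = Elasticsearch::Transport::Transport::HTTP::Faraday.new \
      :hosts => [ { :host => 'localhost', :port => 9200 } ] do |f|
        f.response :logger
        f.adapter  :typhoeus
      end

    client = Elasticsearch::Transport::Client.new :transport => transport
    client.perform_request 'GET', ''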
diff --git a/lib/elasticsearch/transport/transport/response.rb b/lib/elasticsearch/transport/transport/response.rb
new file mode 100644
index 0000000..0d689ee
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/response.rb
@@ -0,0 +1,21 @@
+module Elasticsearch
+  module Transport
+    module Transport
+
+      # Wraps the response from Elasticsearch.
+      #
+      class Response
+        attr_reader :status, :body, :headers
+
+        # @param status  [Integer] Response status code
+        # @param body    [String]  Response body
+        # @param headers [Hash]    Response headers
+        def initialize(status, body, headers={})
+          @status, @body, @headers = status, body, headers
+          @body = body.force_encoding('UTF-8') if body.respond_to?(:force_encoding)
+        end
+      end
+
+    end
+  end
+end
diff --git a/lib/elasticsearch/transport/transport/serializer/multi_json.rb b/lib/elasticsearch/transport/transport/serializer/multi_json.rb
new file mode 100644
index 0000000..1a2a5ba
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/serializer/multi_json.rb
@@ -0,0 +1,36 @@
+module Elasticsearch
+  module Transport
+    module Transport
+      module Serializer
+
+        # An abstract module serving as a base for serializer implementations
+        #
+        module Base
+          # @param transport [Object] The instance of transport which uses this serializer
+          #
+          def initialize(transport=nil)
+            @transport = transport
+          end
+        end
+
+        # A default JSON serializer (using [MultiJSON](http://rubygems.org/gems/multi_json))
+        #
+        class MultiJson
+          include Base
+
+          # De-serialize a Hash from JSON string
+          #
+          def load(string, options={})
+            ::MultiJson.load(string, options)
+          end
+
+          # Serialize a Hash to JSON string
+          #
+          def dump(object, options={})
+            ::MultiJson.dump(object, options)
+          end
+        end
+      end
+    end
+  end
+end
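For reference, the serializer can also be used standalone; a minimal sketch:

    serializer = Elasticsearch::Transport::Transport::Serializer::MultiJson.new

    serializer.dump('foo' => 'bar')    # => '{"foo":"bar"}'
    serializer.load('{"foo":"bar"}')   # => {"foo"=>"bar"}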
diff --git a/lib/elasticsearch/transport/transport/sniffer.rb b/lib/elasticsearch/transport/transport/sniffer.rb
new file mode 100644
index 0000000..d202b30
--- /dev/null
+++ b/lib/elasticsearch/transport/transport/sniffer.rb
@@ -0,0 +1,46 @@
+module Elasticsearch
+  module Transport
+    module Transport
+
+      # Handles node discovery ("sniffing").
+      #
+      class Sniffer
+        RE_URL  = /\/([^:]*):([0-9]+)\]/ # Use named groups on Ruby 1.9: /\/(?<host>[^:]*):(?<port>[0-9]+)\]/
+
+        attr_reader   :transport
+        attr_accessor :timeout
+
+        # @param transport [Object] A transport instance.
+        #
+        def initialize(transport)
+          @transport = transport
+          @timeout   = transport.options[:sniffer_timeout] || 1
+        end
+
+        # Retrieves the node list from the Elasticsearch
+        # [_Nodes Info API_](http://www.elasticsearch.org/guide/reference/api/admin-cluster-nodes-info/)
+        # and returns a normalized Array of host information suitable for passing to the transport.
+        #
+        # Shuffles the collection before returning it when the `randomize_hosts` option is set for transport.
+        #
+        # @return [Array<Hash>]
+        # @raise  [SnifferTimeoutError]
+        #
+        def hosts
+          Timeout::timeout(timeout, SnifferTimeoutError) do
+            nodes = transport.perform_request('GET', '_nodes/http').body
+            hosts = nodes['nodes'].map do |id,info|
+              if matches = info["#{transport.protocol}_address"].to_s.match(RE_URL)
+                # TODO: Implement lightweight "indifferent access" here
+                info.merge :host => matches[1], :port => matches[2], :id => id
+              end
+            end.compact
+
+            hosts.shuffle! if transport.options[:randomize_hosts]
+            hosts
+          end
+        end
+      end
+    end
+  end
+end
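For reference, RE_URL extracts the host and port from addresses such as "inet[/192.168.1.23:9200]" returned by the Nodes Info API (see the sniffer unit tests below). A sketch of the resulting host hashes, assuming a `client` instance built with this transport:

    sniffer = Elasticsearch::Transport::Transport::Sniffer.new(client.transport)
    sniffer.hosts
    # => [ { "name" => "Node 1", ..., :host => "192.168.1.23", :port => "9200", :id => "N1" } ]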
diff --git a/lib/elasticsearch/transport/version.rb b/lib/elasticsearch/transport/version.rb
new file mode 100644
index 0000000..0cdc034
--- /dev/null
+++ b/lib/elasticsearch/transport/version.rb
@@ -0,0 +1,5 @@
+module Elasticsearch
+  module Transport
+    VERSION = "1.0.4"
+  end
+end
diff --git a/metadata.yml b/metadata.yml
new file mode 100644
index 0000000..8d5e797
--- /dev/null
+++ b/metadata.yml
@@ -0,0 +1,407 @@
+--- !ruby/object:Gem::Specification
+name: elasticsearch-transport
+version: !ruby/object:Gem::Version
+  version: 1.0.4
+platform: ruby
+authors:
+- Karel Minarik
+autorequire: 
+bindir: bin
+cert_chain: []
+date: 2014-06-25 00:00:00.000000000 Z
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: multi_json
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: faraday
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: bundler
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>'
+      - !ruby/object:Gem::Version
+        version: '1'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>'
+      - !ruby/object:Gem::Version
+        version: '1'
+- !ruby/object:Gem::Dependency
+  name: rake
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: elasticsearch-extensions
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: ansi
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: shoulda-context
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: mocha
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: turn
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: yard
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: pry
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: ci_reporter
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: curb
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: patron
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: typhoeus
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+      - !ruby/object:Gem::Version
+        version: '0.6'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+      - !ruby/object:Gem::Version
+        version: '0.6'
+- !ruby/object:Gem::Dependency
+  name: minitest
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+      - !ruby/object:Gem::Version
+        version: '4.0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ~>
+      - !ruby/object:Gem::Version
+        version: '4.0'
+- !ruby/object:Gem::Dependency
+  name: ruby-prof
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: require-prof
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: simplecov
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: simplecov-rcov
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: cane
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: coveralls
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+description: ! 'Ruby client for Elasticsearch. See the `elasticsearch` gem for full
+  integration.
+
+'
+email:
+- karel.minarik at elasticsearch.org
+executables: []
+extensions: []
+extra_rdoc_files:
+- README.md
+- LICENSE.txt
+files:
+- .gitignore
+- Gemfile
+- LICENSE.txt
+- README.md
+- Rakefile
+- elasticsearch-transport.gemspec
+- lib/elasticsearch-transport.rb
+- lib/elasticsearch/transport.rb
+- lib/elasticsearch/transport/client.rb
+- lib/elasticsearch/transport/transport/base.rb
+- lib/elasticsearch/transport/transport/connections/collection.rb
+- lib/elasticsearch/transport/transport/connections/connection.rb
+- lib/elasticsearch/transport/transport/connections/selector.rb
+- lib/elasticsearch/transport/transport/errors.rb
+- lib/elasticsearch/transport/transport/http/curb.rb
+- lib/elasticsearch/transport/transport/http/faraday.rb
+- lib/elasticsearch/transport/transport/response.rb
+- lib/elasticsearch/transport/transport/serializer/multi_json.rb
+- lib/elasticsearch/transport/transport/sniffer.rb
+- lib/elasticsearch/transport/version.rb
+- test/integration/client_test.rb
+- test/integration/transport_test.rb
+- test/profile/client_benchmark_test.rb
+- test/test_helper.rb
+- test/unit/client_test.rb
+- test/unit/connection_collection_test.rb
+- test/unit/connection_selector_test.rb
+- test/unit/connection_test.rb
+- test/unit/response_test.rb
+- test/unit/serializer_test.rb
+- test/unit/sniffer_test.rb
+- test/unit/transport_base_test.rb
+- test/unit/transport_curb_test.rb
+- test/unit/transport_faraday_test.rb
+homepage: https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-transport
+licenses:
+- Apache 2
+metadata: {}
+post_install_message: 
+rdoc_options:
+- --charset=UTF-8
+require_paths:
+- lib
+required_ruby_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ! '>='
+    - !ruby/object:Gem::Version
+      version: '0'
+required_rubygems_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ! '>='
+    - !ruby/object:Gem::Version
+      version: '0'
+requirements: []
+rubyforge_project: 
+rubygems_version: 2.2.2
+signing_key: 
+specification_version: 4
+summary: Ruby client for Elasticsearch.
+test_files:
+- test/integration/client_test.rb
+- test/integration/transport_test.rb
+- test/profile/client_benchmark_test.rb
+- test/test_helper.rb
+- test/unit/client_test.rb
+- test/unit/connection_collection_test.rb
+- test/unit/connection_selector_test.rb
+- test/unit/connection_test.rb
+- test/unit/response_test.rb
+- test/unit/serializer_test.rb
+- test/unit/sniffer_test.rb
+- test/unit/transport_base_test.rb
+- test/unit/transport_curb_test.rb
+- test/unit/transport_faraday_test.rb
+has_rdoc: 
diff --git a/test/integration/client_test.rb b/test/integration/client_test.rb
new file mode 100644
index 0000000..3a5f765
--- /dev/null
+++ b/test/integration/client_test.rb
@@ -0,0 +1,134 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::ClientIntegrationTest < Elasticsearch::Test::IntegrationTestCase
+  startup do
+    Elasticsearch::Extensions::Test::Cluster.start(nodes: 2) if ENV['SERVER'] and not Elasticsearch::Extensions::Test::Cluster.running?
+  end
+
+  context "Elasticsearch client" do
+    teardown do
+      begin; Object.send(:remove_const, :Typhoeus); rescue NameError; end
+      begin; Object.send(:remove_const, :Patron); rescue NameError; end
+    end
+
+    setup do
+      @port = (ENV['TEST_CLUSTER_PORT'] || 9250).to_i
+      system "curl -X DELETE http://localhost:#{@port}/_all > /dev/null 2>&1"
+
+      @logger =  Logger.new(STDERR)
+      @logger.formatter = proc do |severity, datetime, progname, msg|
+        color = case severity
+          when /INFO/ then :green
+          when /ERROR|WARN|FATAL/ then :red
+          when /DEBUG/ then :cyan
+          else :white
+        end
+        ANSI.ansi(severity[0] + ' ', color, :faint) + ANSI.ansi(msg, :white, :faint) + "\n"
+      end
+
+      @client = Elasticsearch::Client.new host: "localhost:#{@port}"
+    end
+
+    should "connect to the cluster" do
+      assert_nothing_raised do
+        response = @client.perform_request 'GET', '_cluster/health'
+        assert_equal 2, response.body['number_of_nodes']
+      end
+    end
+
+    should "handle paths and URL parameters" do
+      @client.perform_request 'PUT', 'myindex/mydoc/1', {routing: 'XYZ'}, {foo: 'bar'}
+      @client.perform_request 'GET', '_cluster/health?wait_for_status=green', {}
+
+      response = @client.perform_request 'GET', 'myindex/mydoc/1?routing=XYZ'
+      assert_equal 200, response.status
+      assert_equal 'bar', response.body['_source']['foo']
+
+      assert_raise Elasticsearch::Transport::Transport::Errors::NotFound do
+        @client.perform_request 'GET', 'myindex/mydoc/1?routing=ABC'
+      end
+    end
+
+    context "with round robin selector" do
+      setup do
+        @client = Elasticsearch::Client.new \
+                    hosts:  ["localhost:#{@port}", "localhost:#{@port+1}" ],
+                    logger: (ENV['QUIET'] ? nil : @logger)
+      end
+
+      should "rotate nodes" do
+        # Hit node 1
+        response = @client.perform_request 'GET', '_nodes/_local'
+        assert_equal 'node-1', response.body['nodes'].to_a[0][1]['name']
+
+        # Hit node 2
+        response = @client.perform_request 'GET', '_nodes/_local'
+        assert_equal 'node-2', response.body['nodes'].to_a[0][1]['name']
+
+        # Hit node 1
+        response = @client.perform_request 'GET', '_nodes/_local'
+        assert_equal 'node-1', response.body['nodes'].to_a[0][1]['name']
+      end
+    end
+
+    context "with a sick node and retry on failure" do
+      setup do
+        @port = (ENV['TEST_CLUSTER_PORT'] || 9250).to_i
+        @client = Elasticsearch::Client.new \
+                    hosts: ["localhost:#{@port}", "foobar1"],
+                    logger: (ENV['QUIET'] ? nil : @logger),
+                    retry_on_failure: true
+      end
+
+      should "retry the request with next server" do
+        assert_nothing_raised do
+          5.times { @client.perform_request 'GET', '_nodes/_local' }
+        end
+      end
+
+      should "raise exception when it cannot get any healthy server" do
+        @client = Elasticsearch::Client.new \
+                  hosts: ["localhost:#{@port}", "foobar1", "foobar2", "foobar3"],
+                  logger: (ENV['QUIET'] ? nil : @logger),
+                  retry_on_failure: 1
+
+        assert_nothing_raised do
+          # First hit is OK
+          @client.perform_request 'GET', '_nodes/_local'
+        end
+
+        assert_raise Faraday::Error::ConnectionFailed do
+          # Second hit fails
+          @client.perform_request 'GET', '_nodes/_local'
+        end
+      end
+    end
+
+    context "with a sick node and reloading on failure" do
+      setup do
+        @client = Elasticsearch::Client.new \
+                  hosts: ["localhost:#{@port}", "foobar1", "foobar2"],
+                  logger: (ENV['QUIET'] ? nil : @logger),
+                  reload_on_failure: true
+      end
+
+      should "reload the connections" do
+        assert_equal 3, @client.transport.connections.size
+        assert_nothing_raised do
+          5.times { @client.perform_request 'GET', '_nodes/_local' }
+        end
+        assert_equal 2, @client.transport.connections.size
+      end
+    end
+
+    context "with Faraday adapters" do
+      should "automatically use the Patron client when loaded" do
+        require 'patron'
+        client = Elasticsearch::Transport::Client.new host: "localhost:#{@port}"
+
+        response = @client.perform_request 'GET', '_cluster/health'
+        assert_equal 200, response.status
+      end unless JRUBY
+    end
+  end
+end
diff --git a/test/integration/transport_test.rb b/test/integration/transport_test.rb
new file mode 100644
index 0000000..6dfa27c
--- /dev/null
+++ b/test/integration/transport_test.rb
@@ -0,0 +1,73 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::ClientIntegrationTest < Elasticsearch::Test::IntegrationTestCase
+  startup do
+    Elasticsearch::Extensions::Test::Cluster.start(nodes: 2) if ENV['SERVER'] and not Elasticsearch::Extensions::Test::Cluster.running?
+  end
+
+  context "Transport" do
+    setup do
+      @port = (ENV['TEST_CLUSTER_PORT'] || 9250).to_i
+      begin; Object.send(:remove_const, :Patron);   rescue NameError; end
+    end
+
+    should "allow to customize the Faraday adapter" do
+      require 'typhoeus'
+      require 'typhoeus/adapters/faraday'
+
+      transport = Elasticsearch::Transport::Transport::HTTP::Faraday.new \
+        :hosts => [ { :host => 'localhost', :port => @port } ] do |f|
+          f.response :logger
+          f.adapter  :typhoeus
+        end
+
+      client = Elasticsearch::Transport::Client.new transport: transport
+      client.perform_request 'GET', ''
+    end
+
+    should "allow to define connection parameters and pass them" do
+      transport = Elasticsearch::Transport::Transport::HTTP::Faraday.new \
+                    :hosts => [ { :host => 'localhost', :port => @port } ],
+                    :options => { :transport_options => {
+                                    :params => { :format => 'yaml' }
+                                  }
+                                }
+
+      client = Elasticsearch::Transport::Client.new transport: transport
+      response = client.perform_request 'GET', ''
+
+      assert response.body.start_with?("---\n"), "Response body should be YAML: #{response.body.inspect}"
+    end
+
+    should "use the Curb client" do
+      require 'curb'
+      require 'elasticsearch/transport/transport/http/curb'
+
+      transport = Elasticsearch::Transport::Transport::HTTP::Curb.new \
+        :hosts => [ { :host => 'localhost', :port => @port } ] do |curl|
+          curl.verbose = true
+        end
+
+      client = Elasticsearch::Transport::Client.new transport: transport
+      client.perform_request 'GET', ''
+    end unless JRUBY
+
+    should "deserialize JSON responses in the Curb client" do
+      require 'curb'
+      require 'elasticsearch/transport/transport/http/curb'
+
+      transport = Elasticsearch::Transport::Transport::HTTP::Curb.new \
+        :hosts => [ { :host => 'localhost', :port => @port } ] do |curl|
+          curl.verbose = true
+        end
+
+      client = Elasticsearch::Transport::Client.new transport: transport
+      response = client.perform_request 'GET', ''
+
+      assert_respond_to(response.body, :to_hash)
+      assert_equal 200, response.body['status']
+      assert_equal 'application/json', response.headers['content-type']
+    end unless JRUBY
+  end
+
+end
diff --git a/test/profile/client_benchmark_test.rb b/test/profile/client_benchmark_test.rb
new file mode 100644
index 0000000..94d2c4f
--- /dev/null
+++ b/test/profile/client_benchmark_test.rb
@@ -0,0 +1,125 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::ClientProfilingTest < Elasticsearch::Test::ProfilingTest
+  startup do
+    Elasticsearch::Extensions::Test::Cluster.start if ENV['SERVER'] and not Elasticsearch::Extensions::Test::Cluster.running?
+  end
+
+  context "Elasticsearch client benchmark" do
+    setup do
+      @port = (ENV['TEST_CLUSTER_PORT'] || 9250).to_i
+      client = Elasticsearch::Client.new host: "localhost:#{@port}", adapter: ::Faraday.default_adapter
+      client.perform_request 'DELETE', '/ruby_test_benchmark/' rescue nil
+      client.perform_request 'POST',   '/ruby_test_benchmark/', {index: {number_of_shards: 1, number_of_replicas: 0}}
+      100.times do client.perform_request 'POST',   '/ruby_test_benchmark_search/test/', {}, {foo: 'bar'}; end
+      client.perform_request 'POST',   '/ruby_test_benchmark_search/_refresh'
+    end
+    teardown do
+      client = Elasticsearch::Client.new host: "localhost:#{@port}"
+      client.perform_request 'DELETE', '/ruby_test_benchmark/'
+      client.perform_request 'DELETE', '/ruby_test_benchmark_search/'
+    end
+
+    context "with a single-node cluster and the default adapter" do
+      setup do
+        @client = Elasticsearch::Client.new hosts: "localhost:#{@port}", adapter: ::Faraday.default_adapter
+      end
+
+      measure "get the cluster info", count: 1_000 do
+        @client.perform_request 'GET', ''
+      end
+
+      measure "index a document" do
+        @client.perform_request 'POST', '/ruby_test_benchmark/test/', {}, {foo: 'bar'}
+      end
+
+      measure "search" do
+        @client.perform_request 'POST', '/ruby_test_benchmark_search/test/_search', {}, {query: {match: {foo: 'bar'}}}
+      end
+    end
+
+    context "with a two-node cluster and the default adapter" do
+      setup do
+        @client = Elasticsearch::Client.new hosts: ["localhost:#{@port}", "localhost:#{@port+1}"], adapter: ::Faraday.default_adapter
+      end
+
+      measure "get the cluster info", count: 1_000 do
+        @client.perform_request 'GET', ''
+      end
+
+      measure "index a document"do
+        @client.perform_request 'POST', '/ruby_test_benchmark/test/', {}, {foo: 'bar'}
+      end
+
+      measure "search" do
+        @client.perform_request 'POST', '/ruby_test_benchmark_search/test/_search', {}, {query: {match: {foo: 'bar'}}}
+      end
+    end
+
+    context "with a single-node cluster and the Curb client" do
+      setup do
+        require 'curb'
+        require 'elasticsearch/transport/transport/http/curb'
+        @client = Elasticsearch::Client.new host: "localhost:#{@port}",
+                                            transport_class: Elasticsearch::Transport::Transport::HTTP::Curb
+      end
+
+      measure "get the cluster info", count: 1_000 do
+        @client.perform_request 'GET', ''
+      end
+
+      measure "index a document" do
+        @client.perform_request 'POST', '/ruby_test_benchmark/test/', {}, {foo: 'bar'}
+      end
+
+      measure "search" do
+        @client.perform_request 'POST', '/ruby_test_benchmark_search/test/_search', {}, {query: {match: {foo: 'bar'}}}
+      end
+    end
+
+    context "with a single-node cluster and the Typhoeus client" do
+      require 'typhoeus'
+      require 'typhoeus/adapters/faraday'
+
+      setup do
+        transport = Elasticsearch::Transport::Transport::HTTP::Faraday.new \
+          :hosts => [ { :host => 'localhost', :port => @port } ] do |f|
+            f.adapter  :typhoeus
+          end
+
+        @client = Elasticsearch::Client.new transport: transport
+      end
+
+      measure "get the cluster info", count: 1_000 do
+        @client.perform_request 'GET', ''
+      end
+
+      measure "index a document" do
+        @client.perform_request 'POST', '/ruby_test_benchmark/test/', {}, {foo: 'bar'}
+      end
+
+      measure "search" do
+        @client.perform_request 'POST', '/ruby_test_benchmark_search/test/_search', {}, {query: {match: {foo: 'bar'}}}
+      end
+    end
+
+    context "with a single-node cluster and the Patron adapter" do
+      setup do
+        require 'patron'
+        @client = Elasticsearch::Client.new host: "localhost:#{@port}", adapter: :patron
+      end
+
+      measure "get the cluster info", count: 1_000 do
+        @client.perform_request 'GET', ''
+      end
+
+      measure "index a document" do
+        @client.perform_request 'POST', '/ruby_test_benchmark/test/', {}, {foo: 'bar'}
+      end
+
+      measure "search" do
+        @client.perform_request 'POST', '/ruby_test_benchmark_search/test/_search', {}, {query: {match: {foo: 'bar'}}}
+      end
+    end
+  end
+end
diff --git a/test/test_helper.rb b/test/test_helper.rb
new file mode 100644
index 0000000..0f87fb0
--- /dev/null
+++ b/test/test_helper.rb
@@ -0,0 +1,74 @@
+RUBY_1_8 = defined?(RUBY_VERSION) && RUBY_VERSION < '1.9'
+JRUBY    = defined?(JRUBY_VERSION)
+
+if RUBY_1_8 and not ENV['BUNDLE_GEMFILE']
+  require 'rubygems'
+  gem 'test-unit'
+end
+
+require 'rubygems' if RUBY_1_8
+
+if ENV['COVERAGE'] && ENV['CI'].nil? && !RUBY_1_8
+  require 'simplecov'
+  SimpleCov.start { add_filter "/test|test_/" }
+end
+
+if ENV['CI'] && !RUBY_1_8
+  require 'simplecov'
+  require 'simplecov-rcov'
+  SimpleCov.formatter = SimpleCov::Formatter::RcovFormatter
+  SimpleCov.start { add_filter "/test|test_" }
+end
+
+# Register `at_exit` handler for integration tests shutdown.
+# MUST be called before requiring `test/unit`.
+if defined?(RUBY_VERSION) && RUBY_VERSION > '1.9'
+  at_exit { Elasticsearch::Test::IntegrationTestCase.__run_at_exit_hooks }
+end
+
+require 'test/unit'
+require 'shoulda-context'
+require 'mocha/setup'
+require 'ansi/code'
+require 'turn' unless ENV["TM_FILEPATH"] || ENV["NOTURN"] || RUBY_1_8
+
+require 'require-prof' if ENV["REQUIRE_PROF"]
+require 'elasticsearch-transport'
+require 'logger'
+
+RequireProf.print_timing_infos if ENV["REQUIRE_PROF"]
+
+if defined?(RUBY_VERSION) && RUBY_VERSION > '1.9'
+  require 'elasticsearch/extensions/test/cluster'
+  require 'elasticsearch/extensions/test/startup_shutdown'
+  require 'elasticsearch/extensions/test/profiling' unless JRUBY
+end
+
+class Test::Unit::TestCase
+  def setup
+  end
+
+  def teardown
+  end
+end
+
+module Elasticsearch
+  module Test
+    class IntegrationTestCase < ::Test::Unit::TestCase
+      extend Elasticsearch::Extensions::Test::StartupShutdown
+
+      shutdown { Elasticsearch::Extensions::Test::Cluster.stop if ENV['SERVER'] && started? && Elasticsearch::Extensions::Test::Cluster.running? }
+      context "IntegrationTest" do; should "noop on Ruby 1.8" do; end; end if RUBY_1_8
+    end if defined?(RUBY_VERSION) && RUBY_VERSION > '1.9'
+  end
+
+  module Test
+    class ProfilingTest < ::Test::Unit::TestCase
+      extend Elasticsearch::Extensions::Test::StartupShutdown
+      extend Elasticsearch::Extensions::Test::Profiling
+
+      shutdown { Elasticsearch::Extensions::Test::Cluster.stop if ENV['SERVER'] && started? && Elasticsearch::Extensions::Test::Cluster.running? }
+      context "IntegrationTest" do; should "noop on Ruby 1.8" do; end; end if RUBY_1_8
+    end unless RUBY_1_8 || JRUBY
+  end
+end
diff --git a/test/unit/client_test.rb b/test/unit/client_test.rb
new file mode 100644
index 0000000..1daf030
--- /dev/null
+++ b/test/unit/client_test.rb
@@ -0,0 +1,204 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::ClientTest < Test::Unit::TestCase
+
+  class DummyTransport
+    def initialize(*); end
+  end
+
+  context "Client" do
+    setup do
+      Elasticsearch::Transport::Client::DEFAULT_TRANSPORT_CLASS.any_instance.stubs(:__build_connections)
+      @client = Elasticsearch::Transport::Client.new
+    end
+
+    should "be aliased as Elasticsearch::Client" do
+      assert_nothing_raised do
+        assert_instance_of(Elasticsearch::Transport::Client, Elasticsearch::Client.new)
+      end
+    end
+
+    should "have default transport" do
+      assert_instance_of Elasticsearch::Transport::Client::DEFAULT_TRANSPORT_CLASS, @client.transport
+    end
+
+    should "instantiate custom transport class" do
+      client = Elasticsearch::Transport::Client.new :transport_class => DummyTransport
+      assert_instance_of DummyTransport, client.transport
+    end
+
+    should "take custom transport instance" do
+      client = Elasticsearch::Transport::Client.new :transport => DummyTransport.new
+      assert_instance_of DummyTransport, client.transport
+    end
+
+    should "delegate performing requests to transport" do
+      assert_respond_to @client, :perform_request
+      @client.transport.expects(:perform_request)
+      @client.perform_request 'GET', '/'
+    end
+
+    should "have default logger for transport" do
+      client = Elasticsearch::Transport::Client.new :log => true
+      assert_respond_to client.transport.logger, :info
+    end
+
+    should "have default tracer for transport" do
+      client = Elasticsearch::Transport::Client.new :trace => true
+      assert_respond_to client.transport.tracer, :info
+    end
+
+    should "initialize the default transport class" do
+      Elasticsearch::Transport::Client::DEFAULT_TRANSPORT_CLASS.any_instance.
+        unstub(:__build_connections)
+
+      client = Elasticsearch::Client.new
+      assert_match /Faraday/, client.transport.connections.first.connection.headers['User-Agent']
+    end
+
+    context "when passed hosts" do
+      should "have localhost by default" do
+        c = Elasticsearch::Transport::Client.new
+        assert_equal 'localhost', c.transport.hosts.first[:host]
+      end
+
+      should "take :hosts, :host or :url" do
+        c1 = Elasticsearch::Transport::Client.new :hosts => ['foobar']
+        c2 = Elasticsearch::Transport::Client.new :host  => 'foobar'
+        c3 = Elasticsearch::Transport::Client.new :url   => 'foobar'
+        assert_equal 'foobar', c1.transport.hosts.first[:host]
+        assert_equal 'foobar', c2.transport.hosts.first[:host]
+        assert_equal 'foobar', c3.transport.hosts.first[:host]
+      end
+    end
+
+    context "extracting hosts" do
+      should "handle defaults" do
+        hosts = @client.__extract_hosts
+
+        assert_equal 'localhost', hosts[0][:host]
+        assert_nil                hosts[0][:port]
+      end
+
+      should "extract from string" do
+        hosts = @client.__extract_hosts 'myhost'
+
+        assert_equal 'myhost', hosts[0][:host]
+        assert_nil             hosts[0][:port]
+      end
+
+      should "extract from array" do
+        hosts = @client.__extract_hosts ['myhost']
+
+        assert_equal 'myhost', hosts[0][:host]
+      end
+
+      should "extract from array with multiple hosts" do
+        hosts = @client.__extract_hosts ['host1', 'host2']
+
+        assert_equal 'host1', hosts[0][:host]
+        assert_equal 'host2', hosts[1][:host]
+      end
+
+      should "extract from array with ports" do
+        hosts = @client.__extract_hosts ['host1:1000', 'host2:2000']
+
+        assert_equal 'host1', hosts[0][:host]
+        assert_equal '1000',  hosts[0][:port]
+
+        assert_equal 'host2', hosts[1][:host]
+        assert_equal '2000',  hosts[1][:port]
+      end
+
+      should "extract path" do
+        hosts = @client.__extract_hosts 'https://myhost:8080/api'
+
+        assert_equal '/api',  hosts[0][:path]
+      end
+
+      should "extract scheme (protocol)" do
+        hosts = @client.__extract_hosts 'https://myhost:8080'
+
+        assert_equal 'https',  hosts[0][:scheme]
+        assert_equal 'myhost', hosts[0][:host]
+        assert_equal '8080',   hosts[0][:port]
+      end
+
+      should "extract credentials" do
+        hosts = @client.__extract_hosts 'http://USERNAME:PASSWORD@myhost:8080'
+
+        assert_equal 'http',     hosts[0][:scheme]
+        assert_equal 'USERNAME', hosts[0][:user]
+        assert_equal 'PASSWORD', hosts[0][:password]
+        assert_equal 'myhost',   hosts[0][:host]
+        assert_equal '8080',     hosts[0][:port]
+      end
+
+      should "pass Hashes over" do
+        hosts = @client.__extract_hosts [{:host => 'myhost', :port => '1000', :foo => 'bar'}]
+
+        assert_equal 'myhost', hosts[0][:host]
+        assert_equal '1000',   hosts[0][:port]
+        assert_equal 'bar',    hosts[0][:foo]
+      end
+
+      should "use URL instance" do
+        require 'uri'
+        hosts = @client.__extract_hosts URI.parse('https://USERNAME:PASSWORD@myhost:4430')
+
+        assert_equal 'https',    hosts[0][:scheme]
+        assert_equal 'USERNAME', hosts[0][:user]
+        assert_equal 'PASSWORD', hosts[0][:password]
+        assert_equal 'myhost',   hosts[0][:host]
+        assert_equal '4430',     hosts[0][:port]
+      end
+
+      should "raise error for incompatible argument" do
+        assert_raise ArgumentError do
+          @client.__extract_hosts 123
+        end
+      end
+
+      should "randomize hosts" do
+        hosts = [ {:host => 'host1'}, {:host => 'host2'}, {:host => 'host3'}, {:host => 'host4'}, {:host => 'host5'}]
+
+        Array.any_instance.expects(:shuffle!).twice
+
+        @client.__extract_hosts(hosts, :randomize_hosts => true)
+        assert_same_elements hosts, @client.__extract_hosts(hosts, :randomize_hosts => true)
+      end
+    end
+
+    context "detecting adapter for Faraday" do
+      setup do
+        Elasticsearch::Transport::Client::DEFAULT_TRANSPORT_CLASS.any_instance.unstub(:__build_connections)
+        begin; Object.send(:remove_const, :Typhoeus); rescue NameError; end
+        begin; Object.send(:remove_const, :Patron);   rescue NameError; end
+      end
+
+      should "use the default adapter" do
+        c = Elasticsearch::Transport::Client.new
+        handlers = c.transport.connections.all.first.connection.builder.handlers
+
+        assert_includes handlers, Faraday::Adapter::NetHttp
+      end
+
+      should "use the adapter from arguments" do
+        c = Elasticsearch::Transport::Client.new :adapter => :typhoeus
+        handlers = c.transport.connections.all.first.connection.builder.handlers
+
+        assert_includes handlers, Faraday::Adapter::Typhoeus
+      end
+
+      should "detect the adapter" do
+        require 'patron'; load 'patron.rb'
+
+        c = Elasticsearch::Transport::Client.new
+        handlers = c.transport.connections.all.first.connection.builder.handlers
+
+        assert_includes handlers, Faraday::Adapter::Patron
+      end
+    end
+
+  end
+end
diff --git a/test/unit/connection_collection_test.rb b/test/unit/connection_collection_test.rb
new file mode 100644
index 0000000..8d4e361
--- /dev/null
+++ b/test/unit/connection_collection_test.rb
@@ -0,0 +1,83 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::Transport::Connections::CollectionTest < Test::Unit::TestCase
+  include Elasticsearch::Transport::Transport::Connections
+
+  context "Connection collection" do
+
+    should "have empty array as default connections array" do
+      assert_equal [], Collection.new.connections
+    end
+
+    should "have default selector class" do
+      assert_not_nil Collection.new.selector
+    end
+
+    should "initialize a custom selector class" do
+      c = Collection.new :selector_class => Selector::Random
+      assert_instance_of Selector::Random, c.selector
+    end
+
+    should "take a custom selector instance" do
+      c = Collection.new :selector => Selector::Random.new
+      assert_instance_of Selector::Random, c.selector
+    end
+
+    should "get connection from selector" do
+      c = Collection.new
+      c.selector.expects(:select).returns('OK')
+      assert_equal 'OK', c.get_connection
+    end
+
+    should "return an array of hosts" do
+      c = Collection.new :connections => [ Connection.new(:host => 'foo'), Connection.new(:host => 'bar') ]
+      assert_equal ['foo', 'bar'], c.hosts
+    end
+
+    should "be enumerable" do
+      c = Collection.new :connections => [ Connection.new(:host => 'foo'), Connection.new(:host => 'bar') ]
+
+      assert_equal ['FOO', 'BAR'], c.map { |i| i.host.upcase }
+      assert_equal 'foo',          c[0].host
+      assert_equal 'bar',          c[1].host
+      assert_equal  2,             c.size
+    end
+
+    context "with the dead pool" do
+      setup do
+        @collection = Collection.new :connections => [ Connection.new(:host => 'foo'), Connection.new(:host => 'bar') ]
+        @collection[1].dead!
+      end
+
+      should "not iterate over dead connections" do
+        assert_equal 1, @collection.size
+        assert_equal ['FOO'], @collection.map { |c| c.host.upcase }
+        assert_equal @collection.connections, @collection.alive
+      end
+
+      should "have dead connections collection" do
+        assert_equal 1, @collection.dead.size
+        assert_equal ['BAR'], @collection.dead.map { |c| c.host.upcase }
+      end
+
+      should "resurrect dead connection with least failures when no alive is available" do
+        c1 = Connection.new(:host => 'foo').dead!.dead!
+        c2 = Connection.new(:host => 'bar').dead!
+
+        @collection = Collection.new :connections => [ c1, c2 ]
+
+        assert_equal 0, @collection.size
+        assert_not_nil @collection.get_connection
+        assert_equal 1, @collection.size
+        assert_equal c2, @collection.first
+      end
+
+      should "return all connections" do
+        assert_equal 2, @collection.all.size
+      end
+
+    end
+
+  end
+
+end
diff --git a/test/unit/connection_selector_test.rb b/test/unit/connection_selector_test.rb
new file mode 100644
index 0000000..00a08f3
--- /dev/null
+++ b/test/unit/connection_selector_test.rb
@@ -0,0 +1,64 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::Transport::Connections::SelectorTest < Test::Unit::TestCase
+  include Elasticsearch::Transport::Transport::Connections::Selector
+
+  class DummyStrategySelector
+    include Elasticsearch::Transport::Transport::Connections::Selector::Base
+  end
+
+  class BackupStrategySelector
+    include Elasticsearch::Transport::Transport::Connections::Selector::Base
+
+    def select(options={})
+      connections.reject do |c|
+        c.host[:attributes] && c.host[:attributes][:backup]
+      end.send( defined?(RUBY_VERSION) && RUBY_VERSION > '1.9' ? :sample : :choice)
+    end
+  end
+
+  context "Connection selector" do
+
+    should "be initialized with connections" do
+      assert_equal [1, 2], Random.new(:connections => [1, 2]).connections
+    end
+
+    should "have the abstract select method" do
+      assert_raise(NoMethodError) { DummyStrategySelector.new.select }
+    end
+
+    context "in random strategy" do
+      setup do
+        @selector = Random.new :connections => ['A', 'B', 'C']
+      end
+
+      should "pick a connection" do
+        assert_not_nil @selector.select
+      end
+    end
+
+    context "in round-robin strategy" do
+      setup do
+        @selector = RoundRobin.new :connections => ['A', 'B', 'C']
+      end
+
+      should "rotate over connections" do
+        assert_equal 'A', @selector.select
+        assert_equal 'B', @selector.select
+        assert_equal 'C', @selector.select
+        assert_equal 'A', @selector.select
+      end
+    end
+
+    context "with a custom strategy" do
+
+      should "return proper connection" do
+        selector = BackupStrategySelector.new :connections => [ stub(:host => { :hostname => 'host1' }),
+                                                                stub(:host => { :hostname => 'host2', :attributes => { :backup => true }}) ]
+        10.times { assert_equal 'host1', selector.select.host[:hostname] }
+      end
+
+    end
+
+  end
+end
diff --git a/test/unit/connection_test.rb b/test/unit/connection_test.rb
new file mode 100644
index 0000000..911acb2
--- /dev/null
+++ b/test/unit/connection_test.rb
@@ -0,0 +1,100 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::Transport::Connections::ConnectionTest < Test::Unit::TestCase
+  include Elasticsearch::Transport::Transport::Connections
+
+  context "Connection" do
+
+    should "be initialized with :host, :connection, and :options" do
+      c = Connection.new :host => 'x', :connection => 'y', :options => {}
+      assert_equal 'x', c.host
+      assert_equal 'y', c.connection
+      assert_instance_of Hash, c.options
+    end
+
+    should "return full path" do
+      c = Connection.new
+      assert_equal '_search', c.full_path('_search')
+      assert_equal '_search', c.full_path('_search', {})
+      assert_equal '_search?foo=bar', c.full_path('_search', {:foo => 'bar'})
+      assert_equal '_search?foo=bar+bam', c.full_path('_search', {:foo => 'bar bam'})
+    end
+
+    should "return full url" do
+      c = Connection.new :host => { :protocol => 'http', :host => 'localhost', :port => '9200' }
+      assert_equal 'http://localhost:9200/_search?foo=bar', c.full_url('_search', {:foo => 'bar'})
+    end
+
+    should "return full url with credentials" do
+      c = Connection.new :host => { :protocol => 'http', :user => 'U', :password => 'P', :host => 'localhost', :port => '9200' }
+      assert_equal 'http://U:P@localhost:9200/_search?foo=bar', c.full_url('_search', {:foo => 'bar'})
+    end
+
+    should "return full url with path" do
+      c = Connection.new :host => { :protocol => 'http', :host => 'localhost', :port => '9200', :path => '/foo' }
+      assert_equal 'http://localhost:9200/foo/_search?foo=bar', c.full_url('_search', {:foo => 'bar'})
+    end
+
+    should "have a string representation" do
+      c = Connection.new :host => 'x'
+      assert_match /host: x/, c.to_s
+      assert_match /alive/,   c.to_s
+    end
+
+    should "not be dead by default" do
+      c = Connection.new
+      assert ! c.dead?
+    end
+
+    should "be dead when marked" do
+      c = Connection.new.dead!
+      assert c.dead?
+      assert_equal 1, c.failures
+      assert_in_delta c.dead_since, Time.now, 1
+    end
+
+    should "be alive when marked" do
+      c = Connection.new.dead!
+      assert c.dead?
+      assert_equal 1, c.failures
+      assert_in_delta c.dead_since, Time.now, 1
+
+      c.alive!
+      assert ! c.dead?
+      assert_equal 1, c.failures
+    end
+
+    should "be healthy when marked" do
+      c = Connection.new.dead!
+      assert c.dead?
+      assert_equal 1, c.failures
+      assert_in_delta c.dead_since, Time.now, 1
+
+      c.healthy!
+      assert ! c.dead?
+      assert_equal 0, c.failures
+    end
+
+    should "be resurrected if timeout passed" do
+      c = Connection.new.dead!
+
+      now = Time.now + 60
+      Time.stubs(:now).returns(now)
+
+      assert   c.resurrect!, c.inspect
+      assert ! c.dead?,      c.inspect
+    end
+
+    should "be resurrected if timeout passed for multiple failures" do
+      c = Connection.new.dead!.dead!
+
+      now = Time.now + 60*2
+      Time.stubs(:now).returns(now)
+
+      assert   c.resurrect!, c.inspect
+      assert ! c.dead?,      c.inspect
+    end
+
+  end
+
+end
diff --git a/test/unit/response_test.rb b/test/unit/response_test.rb
new file mode 100644
index 0000000..ad22ea2
--- /dev/null
+++ b/test/unit/response_test.rb
@@ -0,0 +1,15 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::Transport::ResponseTest < Test::Unit::TestCase
+  context "Response" do
+
+    should "force-encode the body into UTF" do
+      body = "Hello Encoding!".encode(Encoding::ISO_8859_1)
+      assert_equal 'ISO-8859-1', body.encoding.name
+
+      response = Elasticsearch::Transport::Transport::Response.new 200, body
+      assert_equal 'UTF-8', response.body.encoding.name
+    end unless RUBY_1_8
+
+  end
+end
\ No newline at end of file
diff --git a/test/unit/serializer_test.rb b/test/unit/serializer_test.rb
new file mode 100644
index 0000000..cab6a71
--- /dev/null
+++ b/test/unit/serializer_test.rb
@@ -0,0 +1,16 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::Transport::SerializerTest < Test::Unit::TestCase
+
+  context "Serializer" do
+
+    should "use MultiJson by default" do
+      ::MultiJson.expects(:load)
+      ::MultiJson.expects(:dump)
+      Elasticsearch::Transport::Transport::Serializer::MultiJson.new.load('{}')
+      Elasticsearch::Transport::Transport::Serializer::MultiJson.new.dump({})
+    end
+
+  end
+
+end
diff --git a/test/unit/sniffer_test.rb b/test/unit/sniffer_test.rb
new file mode 100644
index 0000000..b60e633
--- /dev/null
+++ b/test/unit/sniffer_test.rb
@@ -0,0 +1,145 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::Transport::SnifferTest < Test::Unit::TestCase
+
+  class DummyTransport
+    include Elasticsearch::Transport::Transport::Base
+    def __build_connections; hosts; end
+  end
+
+  def __nodes_info(json)
+    Elasticsearch::Transport::Transport::Response.new 200, MultiJson.load(json)
+  end
+
+  context "Sniffer" do
+    setup do
+      @transport = DummyTransport.new
+      @sniffer   = Elasticsearch::Transport::Transport::Sniffer.new @transport
+    end
+
+    should "be initialized with a transport instance" do
+      assert_equal @transport, @sniffer.transport
+    end
+
+    should "return an array of hosts as hashes" do
+      @transport.expects(:perform_request).returns __nodes_info <<-JSON
+        {
+          "ok" : true,
+          "cluster_name" : "elasticsearch_test",
+          "nodes" : {
+            "N1" : {
+              "name" : "Node 1",
+              "transport_address" : "inet[/192.168.1.23:9300]",
+              "hostname" : "testhost1",
+              "version" : "0.20.6",
+              "http_address" : "inet[/192.168.1.23:9200]",
+              "thrift_address" : "/192.168.1.23:9500",
+              "memcached_address" : "inet[/192.168.1.23:11211]"
+            }
+          }
+        }
+      JSON
+
+      hosts = @sniffer.hosts
+
+      assert_equal 1, hosts.size
+      assert_equal '192.168.1.23', hosts.first[:host]
+      assert_equal '9200',         hosts.first[:port]
+      assert_equal 'Node 1',       hosts.first['name']
+    end
+
+    should "skip hosts without a matching transport protocol" do
+      @transport = DummyTransport.new :options => { :protocol => 'memcached' }
+      @sniffer   = Elasticsearch::Transport::Transport::Sniffer.new @transport
+
+      @transport.expects(:perform_request).returns __nodes_info <<-JSON
+        {
+          "ok" : true,
+          "cluster_name" : "elasticsearch_test",
+          "nodes" : {
+            "N1" : {
+              "name" : "Memcached Node",
+              "http_address" : "inet[/192.168.1.23:9200]",
+              "memcached_address" : "inet[/192.168.1.23:11211]"
+            },
+            "N2" : {
+              "name" : "HTTP Node",
+              "http_address" : "inet[/192.168.1.23:9200]"
+            }
+          }
+        }
+      JSON
+
+      hosts = @sniffer.hosts
+
+      assert_equal 1, hosts.size
+      assert_equal '192.168.1.23',   hosts.first[:host]
+      assert_equal '11211',          hosts.first[:port]
+      assert_equal 'Memcached Node', hosts.first['name']
+    end
+
+    should "have configurable timeout" do
+      @transport = DummyTransport.new :options => { :sniffer_timeout => 0.001 }
+      @sniffer   = Elasticsearch::Transport::Transport::Sniffer.new @transport
+      assert_equal 0.001, @sniffer.timeout
+    end
+
+    should "have settable timeout" do
+      @transport = DummyTransport.new
+      @sniffer   = Elasticsearch::Transport::Transport::Sniffer.new @transport
+      assert_equal 1, @sniffer.timeout
+
+      @sniffer.timeout = 2
+      assert_equal 2, @sniffer.timeout
+    end
+
+    should "raise error on timeout" do
+      @transport.expects(:perform_request).raises(Elasticsearch::Transport::Transport::SnifferTimeoutError)
+
+      # TODO: Try to inject sleep into `perform_request` or make this test less ridiculous anyhow...
+      assert_raise Elasticsearch::Transport::Transport::SnifferTimeoutError do
+        @sniffer.hosts
+      end
+    end
+
+    should "randomize hosts" do
+      @transport = DummyTransport.new :options => { :randomize_hosts => true }
+      @sniffer   = Elasticsearch::Transport::Transport::Sniffer.new @transport
+
+      @transport.expects(:perform_request).returns __nodes_info <<-JSON
+        {
+          "ok" : true,
+          "cluster_name" : "elasticsearch_test",
+          "nodes" : {
+            "N1" : {
+              "name" : "Node 1",
+              "http_address" : "inet[/192.168.1.23:9200]"
+            },
+            "N2" : {
+              "name" : "Node 2",
+              "http_address" : "inet[/192.168.1.23:9201]"
+            },
+            "N3" : {
+              "name" : "Node 3",
+              "http_address" : "inet[/192.168.1.23:9202]"
+            },
+            "N4" : {
+              "name" : "Node 4",
+              "http_address" : "inet[/192.168.1.23:9203]"
+            },
+            "N5" : {
+              "name" : "Node 5",
+              "http_address" : "inet[/192.168.1.23:9204]"
+            }
+          }
+        }
+      JSON
+
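+      # With :randomize_hosts the parsed host list should be shuffled in place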
+      Array.any_instance.expects(:shuffle!)
+
+      hosts = @sniffer.hosts
+    end
+
+  end
+
+end
diff --git a/test/unit/transport_base_test.rb b/test/unit/transport_base_test.rb
new file mode 100644
index 0000000..eab0f6e
--- /dev/null
+++ b/test/unit/transport_base_test.rb
@@ -0,0 +1,455 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::Transport::BaseTest < Test::Unit::TestCase
+
+  class EmptyTransport
+    include Elasticsearch::Transport::Transport::Base
+  end
+
+  class DummyTransport
+    include Elasticsearch::Transport::Transport::Base
+    def __build_connections; hosts; end
+  end
+
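+  # Concrete transport whose perform_request simply delegates to Base, so the shared request logic can be exercised directly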
+  class DummyTransportPerformer < DummyTransport
+    def perform_request(method, path, params={}, body=nil, &block); super; end
+  end
+
+  class DummySerializer
+    def initialize(*); end
+  end
+
+  class DummySniffer
+    def initialize(*); end
+  end
+
+  context "Transport::Base" do
+    should "raise exception when it doesn't implement __build_connections" do
+      assert_raise NoMethodError do
+        EmptyTransport.new.__build_connections
+      end
+    end
+
+    should "build connections on initialization" do
+      DummyTransport.any_instance.expects(:__build_connections)
+      transport = DummyTransport.new
+    end
+
+    should "have default serializer" do
+      transport = DummyTransport.new
+      assert_instance_of Elasticsearch::Transport::Transport::Base::DEFAULT_SERIALIZER_CLASS, transport.serializer
+    end
+
+    should "have custom serializer" do
+      transport = DummyTransport.new :options => { :serializer_class => DummySerializer }
+      assert_instance_of DummySerializer, transport.serializer
+
+      transport = DummyTransport.new :options => { :serializer => DummySerializer.new }
+      assert_instance_of DummySerializer, transport.serializer
+    end
+
+    should "have default sniffer" do
+      transport = DummyTransport.new
+      assert_instance_of Elasticsearch::Transport::Transport::Sniffer, transport.sniffer
+    end
+
+    should "have custom sniffer" do
+      transport = DummyTransport.new :options => { :sniffer_class => DummySniffer }
+      assert_instance_of DummySniffer, transport.sniffer
+    end
+
+    context "when combining the URL" do
+      setup do
+        @transport   = DummyTransport.new
+        @basic_parts = { :protocol => 'http', :host => 'myhost', :port => 8080 }
+      end
+
+      should "combine basic parts" do
+        assert_equal 'http://myhost:8080', @transport.__full_url(@basic_parts)
+      end
+
+      should "combine path" do
+        assert_equal 'http://myhost:8080/api', @transport.__full_url(@basic_parts.merge :path => '/api')
+      end
+
+      should "combine authentication credentials" do
+        assert_equal 'http://U:P@myhost:8080', @transport.__full_url(@basic_parts.merge :user => 'U', :password => 'P')
+      end
+    end
+  end
+
+  context "getting a connection" do
+    setup do
+      @transport = DummyTransportPerformer.new :options => { :reload_connections => 5 }
+      @transport.stubs(:connections).returns(stub :get_connection => Object.new)
+      @transport.stubs(:sniffer).returns(stub :hosts => [])
+    end
+
+    should "get a connection" do
+      assert_not_nil @transport.get_connection
+    end
+
+    should "increment the counter" do
+      assert_equal 0, @transport.counter
+      3.times { @transport.get_connection }
+      assert_equal 3, @transport.counter
+    end
+
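+    # With :reload_connections => 5 (see setup above), 12 requests should cross the reload threshold twice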
+    should "reload connections when it hits the threshold" do
+      @transport.expects(:reload_connections!).twice
+      12.times { @transport.get_connection }
+      assert_equal 12, @transport.counter
+    end
+  end
+
+  context "performing a request" do
+    setup do
+      @transport = DummyTransportPerformer.new
+    end
+
+    should "raise an error when no block is passed" do
+      assert_raise NoMethodError do
+        @transport.perform_request 'GET', '/'
+      end
+    end
+
+    should "get the connection" do
+      @transport.expects(:get_connection).returns(stub_everything :failures => 1)
+      @transport.perform_request 'GET', '/' do; Elasticsearch::Transport::Transport::Response.new 200, 'OK'; end
+    end
+
+    should "raise an error when no connection is available" do
+      @transport.expects(:get_connection).returns(nil)
+      assert_raise Elasticsearch::Transport::Transport::Error do
+        @transport.perform_request 'GET', '/' do; Elasticsearch::Transport::Transport::Response.new 200, 'OK'; end
+      end
+    end
+
+    should "call the passed block" do
+      x = 0
+      @transport.expects(:get_connection).returns(stub_everything :failures => 1)
+
+      @transport.perform_request 'GET', '/' do |connection, url|
+        x += 1
+        Elasticsearch::Transport::Transport::Response.new 200, 'OK'
+      end
+
+      assert_equal 1, x
+    end
+
+    should "deserialize a response JSON body" do
+      @transport.expects(:get_connection).returns(stub_everything :failures => 1)
+      @transport.serializer.expects(:load).returns({'foo' => 'bar'})
+
+      response = @transport.perform_request 'GET', '/' do
+                   Elasticsearch::Transport::Transport::Response.new 200, '{"foo":"bar"}', {"content-type" => 'application/json'}
+                 end
+
+      assert_instance_of Elasticsearch::Transport::Transport::Response, response
+      assert_equal 'bar', response.body['foo']
+    end
+
+    should "not deserialize a response string body" do
+      @transport.expects(:get_connection).returns(stub_everything :failures => 1)
+      @transport.serializer.expects(:load).never
+      response = @transport.perform_request 'GET', '/' do
+                   Elasticsearch::Transport::Transport::Response.new 200, 'FOOBAR', {"content-type" => 'text/plain'}
+                 end
+
+      assert_instance_of Elasticsearch::Transport::Transport::Response, response
+      assert_equal 'FOOBAR', response.body
+    end
+
+    should "serialize non-String objects" do
+      @transport.serializer.expects(:dump).times(3)
+      @transport.__convert_to_json({:foo => 'bar'})
+      @transport.__convert_to_json([1, 2, 3])
+      @transport.__convert_to_json(nil)
+    end
+
+    should "not serialize a String object" do
+      @transport.serializer.expects(:dump).never
+      @transport.__convert_to_json('{"foo":"bar"}')
+    end
+
+    should "raise an error for HTTP status 404" do
+      @transport.expects(:get_connection).returns(stub_everything :failures => 1)
+      assert_raise Elasticsearch::Transport::Transport::Errors::NotFound do
+        @transport.perform_request 'GET', '/' do
+          Elasticsearch::Transport::Transport::Response.new 404, 'NOT FOUND'
+        end
+      end
+    end
+
+    should "raise an error for HTTP status 404 with application/json content type" do
+      @transport.expects(:get_connection).returns(stub_everything :failures => 1)
+      assert_raise Elasticsearch::Transport::Transport::Errors::NotFound do
+        @transport.perform_request 'GET', '/' do
+          Elasticsearch::Transport::Transport::Response.new 404, 'NOT FOUND', {"content-type" => 'application/json'}
+        end
+      end
+    end
+
+    should "raise an error on server failure" do
+      @transport.expects(:get_connection).returns(stub_everything :failures => 1)
+      assert_raise Elasticsearch::Transport::Transport::Errors::InternalServerError do
+        @transport.perform_request 'GET', '/' do
+          Elasticsearch::Transport::Transport::Response.new 500, 'ERROR'
+        end
+      end
+    end
+
+    should "raise an error on connection failure" do
+      @transport.expects(:get_connection).returns(stub_everything :failures => 1)
+
+      # `block.expects(:call).raises(::Errno::ECONNREFUSED)` fails on Ruby 1.8
+      block = lambda { |a, b| raise ::Errno::ECONNREFUSED }
+
+      assert_raise ::Errno::ECONNREFUSED do
+        @transport.perform_request 'GET', '/', &block
+      end
+    end
+
+    should "mark the connection as dead on failure" do
+      c = stub_everything :failures => 1
+      @transport.expects(:get_connection).returns(c)
+
+      block = lambda { |a,b| raise ::Errno::ECONNREFUSED }
+
+      c.expects(:dead!)
+
+      assert_raise( ::Errno::ECONNREFUSED ) { @transport.perform_request 'GET', '/', &block }
+    end
+  end
+
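+  # The next two contexts set Mocha expectations on Proc#call (noted above as failing on Ruby 1.8), hence the `unless RUBY_1_8` guards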
+  context "performing a request with reload connections on connection failures" do
+    setup do
+      fake_collection = stub_everything :get_connection => stub_everything(:failures => 1),
+                                        :all            => stub_everything(:size => 2)
+      @transport = DummyTransportPerformer.new :options => { :reload_on_failure => 2 }
+      @transport.stubs(:connections).
+                 returns(fake_collection)
+      @block = lambda { |c, u| puts "UNREACHABLE" }
+    end
+
+    should "reload connections when host is unreachable" do
+      @block.expects(:call).times(2).
+            raises(Errno::ECONNREFUSED).
+            then.returns(stub_everything :failures => 1)
+
+      @transport.expects(:reload_connections!).returns([])
+
+      @transport.perform_request('GET', '/', &@block)
+      assert_equal 2, @transport.counter
+    end
+  end unless RUBY_1_8
+
+  context "performing a request with retry on connection failures" do
+    setup do
+      @transport = DummyTransportPerformer.new :options => { :retry_on_failure => true }
+      @transport.stubs(:connections).returns(stub :get_connection => stub_everything(:failures => 1))
+      @block = Proc.new { |c, u| puts "UNREACHABLE" }
+    end
+
+    should "retry DEFAULT_MAX_RETRIES when host is unreachable" do
+      @block.expects(:call).times(4).
+            raises(Errno::ECONNREFUSED).
+            then.raises(Errno::ECONNREFUSED).
+            then.raises(Errno::ECONNREFUSED).
+            then.returns(stub_everything :failures => 1)
+
+      assert_nothing_raised do
+        @transport.perform_request('GET', '/', &@block)
+        assert_equal 4, @transport.counter
+      end
+    end
+
+    should "raise an error after max tries" do
+      @block.expects(:call).times(4).
+            raises(Errno::ECONNREFUSED).
+            then.raises(Errno::ECONNREFUSED).
+            then.raises(Errno::ECONNREFUSED).
+            then.raises(Errno::ECONNREFUSED).
+            then.returns(stub_everything :failures => 1)
+
+      assert_raise Errno::ECONNREFUSED do
+        @transport.perform_request('GET', '/', &@block)
+      end
+    end
+  end unless RUBY_1_8
+
+  context "logging" do
+    setup do
+      @transport = DummyTransportPerformer.new :options => { :logger => Logger.new('/dev/null') }
+
+      fake_connection = stub :full_url => 'localhost:9200/_search?size=1',
+                             :host     => 'localhost',
+                             :connection => stub_everything,
+                             :failures => 0,
+                             :healthy! => true
+
+      @transport.stubs(:get_connection).returns(fake_connection)
+      @transport.serializer.stubs(:load).returns 'foo' => 'bar'
+      @transport.serializer.stubs(:dump).returns '{"foo":"bar"}'
+    end
+
+    should "log the request and response" do
+      @transport.logger.expects(:info).  with do |line|
+        line =~ %r|POST localhost\:9200/_search\?size=1 \[status\:200, request:.*s, query:n/a\]|
+      end
+      @transport.logger.expects(:debug). with '> {"foo":"bar"}'
+      @transport.logger.expects(:debug). with '< {"foo":"bar"}'
+
+      @transport.perform_request 'POST', '_search', {:size => 1}, {:foo => 'bar'} do
+                   Elasticsearch::Transport::Transport::Response.new 200, '{"foo":"bar"}'
+                 end
+    end
+
+    should "log a failed Elasticsearch request" do
+      @block = Proc.new { |c, u| puts "ERROR" }
+      @block.expects(:call).returns(Elasticsearch::Transport::Transport::Response.new 500, 'ERROR')
+
+      @transport.expects(:__log)
+      @transport.logger.expects(:fatal)
+
+      assert_raise Elasticsearch::Transport::Transport::Errors::InternalServerError do
+        @transport.perform_request('POST', '_search', &@block)
+      end
+    end unless RUBY_1_8
+
+    should "log and re-raise a Ruby exception" do
+      @block = Proc.new { |c, u| puts "ERROR" }
+      @block.expects(:call).raises(Exception)
+
+      @transport.expects(:__log).never
+      @transport.logger.expects(:fatal)
+
+      assert_raise(Exception) { @transport.perform_request('POST', '_search', &@block) }
+    end unless RUBY_1_8
+  end
+
+  context "tracing" do
+    setup do
+      @transport = DummyTransportPerformer.new :options => { :tracer => Logger.new('/dev/null') }
+
+      fake_connection = stub :full_url => 'localhost:9200/_search?size=1',
+                             :host     => 'localhost',
+                             :connection => stub_everything,
+                             :failures => 0,
+                             :healthy! => true
+
+      @transport.stubs(:get_connection).returns(fake_connection)
+      @transport.serializer.stubs(:load).returns 'foo' => 'bar'
+      @transport.serializer.stubs(:dump).returns <<-JSON.gsub(/^      /, '')
+      {
+        "foo" : {
+          "bar" : {
+            "bam" : true
+          }
+        }
+      }
+      JSON
+    end
+
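+    # The tracer should emit a reproducible curl command, with the body pretty-printed and `pretty` added to the query string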
+    should "trace the request" do
+      @transport.tracer.expects(:info).  with do |message|
+        message == <<-CURL.gsub(/^            /, '')
+            curl -X POST 'http://localhost:9200/_search?pretty&size=1' -d '{
+              "foo" : {
+                "bar" : {
+                  "bam" : true
+                }
+              }
+            }
+            '
+          CURL
+      end.once
+
+      @transport.perform_request 'POST', '_search', {:size => 1}, {:q => 'foo'} do
+                   Elasticsearch::Transport::Transport::Response.new 200, '{"foo":"bar"}'
+                 end
+    end
+
+    should "trace a failed Elasticsearch request" do
+      @block = Proc.new { |c, u| puts "ERROR" }
+      @block.expects(:call).returns(Elasticsearch::Transport::Transport::Response.new 500, 'ERROR')
+
+      @transport.expects(:__trace)
+
+      assert_raise Elasticsearch::Transport::Transport::Errors::InternalServerError do
+        @transport.perform_request('POST', '_search', &@block)
+      end
+    end unless RUBY_1_8
+
+  end
+
+  context "reloading connections" do
+    setup do
+      @transport = DummyTransport.new :options => { :logger => Logger.new('/dev/null') }
+    end
+
+    should "rebuild connections" do
+      @transport.sniffer.expects(:hosts).returns([])
+      @transport.expects(:__rebuild_connections)
+      @transport.reload_connections!
+    end
+
+    should "log error and continue when timing out while sniffing hosts" do
+      @transport.sniffer.expects(:hosts).raises(Elasticsearch::Transport::Transport::SnifferTimeoutError)
+      @transport.logger.expects(:error)
+      assert_nothing_raised do
+        @transport.reload_connections!
+      end
+    end
+  end
+
+  context "rebuilding connections" do
+    setup do
+      @transport = DummyTransport.new
+    end
+
+    should "should replace the connections" do
+      assert_equal [], @transport.connections
+      @transport.__rebuild_connections :hosts => ['foo', 'bar']
+      assert_equal ['foo', 'bar'], @transport.connections
+    end
+  end
+
+  context "resurrecting connections" do
+    setup do
+      @transport = DummyTransportPerformer.new
+    end
+
+    should "delegate to dead connections" do
+      @transport.connections.expects(:dead).returns([])
+      @transport.resurrect_dead_connections!
+    end
+
+    should "not resurrect connections until timeout" do
+      @transport.connections.expects(:get_connection).returns(stub_everything :failures => 1).times(5)
+      @transport.expects(:resurrect_dead_connections!).never
+      5.times { @transport.get_connection }
+    end
+
+    should "resurrect connections after timeout" do
+      @transport.connections.expects(:get_connection).returns(stub_everything :failures => 1).times(5)
+      @transport.expects(:resurrect_dead_connections!)
+
+      4.times { @transport.get_connection }
+
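+      # Jump the stubbed clock two minutes ahead so the resurrect timeout elapses before the final get_connection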
+      now = Time.now + 60*2
+      Time.stubs(:now).returns(now)
+
+      @transport.get_connection
+    end
+
+    should "mark connection healthy if it succeeds" do
+      c = stub_everything(:failures => 1)
+      @transport.expects(:get_connection).returns(c)
+      c.expects(:healthy!)
+
+      @transport.perform_request('GET', '/') { |connection, url| Elasticsearch::Transport::Transport::Response.new 200, 'OK' }
+    end
+  end
+
+end
diff --git a/test/unit/transport_curb_test.rb b/test/unit/transport_curb_test.rb
new file mode 100644
index 0000000..a3f4691
--- /dev/null
+++ b/test/unit/transport_curb_test.rb
@@ -0,0 +1,93 @@
+require 'test_helper'
+
+if JRUBY
+  puts "'#{File.basename(__FILE__)}' not supported on JRuby #{JRUBY_VERSION}"
+  exit(0)
+end
+
+require 'elasticsearch/transport/transport/http/curb'
+require 'curb'
+
+class Elasticsearch::Transport::Transport::HTTP::CurbTest < Test::Unit::TestCase
+  include Elasticsearch::Transport::Transport::HTTP
+
+  context "Curb transport" do
+    setup do
+      @transport = Curb.new :hosts => [ { :host => 'foobar', :port => 1234 } ]
+    end
+
+    should "implement __build_connections" do
+      assert_equal 1, @transport.hosts.size
+      assert_equal 1, @transport.connections.size
+
+      assert_instance_of ::Curl::Easy,   @transport.connections.first.connection
+      assert_equal 'http://foobar:1234', @transport.connections.first.connection.url
+    end
+
+    should "perform the request" do
+      @transport.connections.first.connection.expects(:http).returns(stub_everything)
+      @transport.perform_request 'GET', '/'
+    end
+
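+    # The Curb transport is expected to send a GET body via `put_data=`, the same mechanism used for PUT below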
+    should "set body for GET request" do
+      @transport.connections.first.connection.expects(:put_data=).with('{"foo":"bar"}')
+      @transport.connections.first.connection.expects(:http).with(:GET).returns(stub_everything)
+      @transport.perform_request 'GET', '/', {}, '{"foo":"bar"}'
+    end
+
+    should "set body for PUT request" do
+      @transport.connections.first.connection.expects(:put_data=)
+      @transport.connections.first.connection.expects(:http).with(:PUT).returns(stub_everything)
+      @transport.perform_request 'PUT', '/', {}, {:foo => 'bar'}
+    end
+
+    should "serialize the request body" do
+      @transport.connections.first.connection.expects(:http).with(:POST).returns(stub_everything)
+      @transport.serializer.expects(:dump)
+      @transport.perform_request 'POST', '/', {}, {:foo => 'bar'}
+    end
+
+    should "not serialize a String request body" do
+      @transport.connections.first.connection.expects(:http).with(:POST).returns(stub_everything)
+      @transport.serializer.expects(:dump).never
+      @transport.perform_request 'POST', '/', {}, '{"foo":"bar"}'
+    end
+
+    should "set application/json header" do
+      @transport.connections.first.connection.expects(:http).with(:GET).returns(stub_everything)
+      @transport.connections.first.connection.expects(:body_str).returns('{"foo":"bar"}')
+      @transport.connections.first.connection.expects(:header_str).returns("HTTP/1.1 200 OK\r\nContent-Type: application/json; charset=UTF-8\r\nContent-Length: 311\r\n\r\n")
+
+      response = @transport.perform_request 'GET', '/'
+
+      assert_equal 'application/json', response.headers['content-type']
+    end
+
+    should "handle HTTP methods" do
+      @transport.connections.first.connection.expects(:http).with(:HEAD).returns(stub_everything)
+      @transport.connections.first.connection.expects(:http).with(:GET).returns(stub_everything)
+      @transport.connections.first.connection.expects(:http).with(:PUT).returns(stub_everything)
+      @transport.connections.first.connection.expects(:http).with(:POST).returns(stub_everything)
+      @transport.connections.first.connection.expects(:http).with(:DELETE).returns(stub_everything)
+
+      %w| HEAD GET PUT POST DELETE |.each { |method| @transport.perform_request method, '/' }
+
+      assert_raise(ArgumentError) { @transport.perform_request 'FOOBAR', '/' }
+    end
+
+    should "allow to set options for Curb" do
+      transport = Curb.new :hosts => [ { :host => 'foobar', :port => 1234 } ] do |curl|
+        curl.headers["User-Agent"] = "myapp-0.0"
+      end
+
+      assert_equal "myapp-0.0", transport.connections.first.connection.headers["User-Agent"]
+    end
+
+    should "set the credentials if passed" do
+      transport = Curb.new :hosts => [ { :host => 'foobar', :port => 1234, :user => 'foo', :password => 'bar' } ]
+      assert_equal 'foo', transport.connections.first.connection.username
+      assert_equal 'bar', transport.connections.first.connection.password
+    end
+  end
+
+end
diff --git a/test/unit/transport_faraday_test.rb b/test/unit/transport_faraday_test.rb
new file mode 100644
index 0000000..70bbab8
--- /dev/null
+++ b/test/unit/transport_faraday_test.rb
@@ -0,0 +1,140 @@
+require 'test_helper'
+
+class Elasticsearch::Transport::Transport::HTTP::FaradayTest < Test::Unit::TestCase
+  include Elasticsearch::Transport::Transport::HTTP
+
+  context "Faraday transport" do
+    setup do
+      @transport = Faraday.new :hosts => [ { :host => 'foobar', :port => 1234 } ]
+    end
+
+    should "implement host_unreachable_exceptions" do
+      assert_instance_of Array, @transport.host_unreachable_exceptions
+    end
+
+    should "implement __build_connections" do
+      assert_equal 1, @transport.hosts.size
+      assert_equal 1, @transport.connections.size
+
+      assert_instance_of ::Faraday::Connection, @transport.connections.first.connection
+      assert_equal 'http://foobar:1234/',       @transport.connections.first.connection.url_prefix.to_s
+    end
+
+    should "perform the request" do
+      @transport.connections.first.connection.expects(:run_request).returns(stub_everything)
+      @transport.perform_request 'GET', '/'
+    end
+
+    should "return a Response" do
+      @transport.connections.first.connection.expects(:run_request).returns(stub_everything)
+      response = @transport.perform_request 'GET', '/'
+      assert_instance_of Elasticsearch::Transport::Transport::Response, response
+    end
+
+    should "properly prepare the request" do
+      @transport.connections.first.connection.expects(:run_request).with do |method, url, body, headers|
+        :post == method && '{"foo":"bar"}' == body
+      end.returns(stub_everything)
+      @transport.perform_request 'POST', '/', {}, {:foo => 'bar'}
+    end
+
+    should "serialize the request body" do
+      @transport.connections.first.connection.expects(:run_request).returns(stub_everything)
+      @transport.serializer.expects(:dump)
+      @transport.perform_request 'POST', '/', {}, {:foo => 'bar'}
+    end
+
+    should "not serialize a String request body" do
+      @transport.connections.first.connection.expects(:run_request).returns(stub_everything)
+      @transport.serializer.expects(:dump).never
+      @transport.perform_request 'POST', '/', {}, '{"foo":"bar"}'
+    end
+
+    should "pass the selector_class options to collection" do
+      @transport = Faraday.new :hosts => [ { :host => 'foobar', :port => 1234 } ],
+                               :options => { :selector_class => Elasticsearch::Transport::Transport::Connections::Selector::Random }
+      assert_instance_of Elasticsearch::Transport::Transport::Connections::Selector::Random,
+                         @transport.connections.selector
+    end
+
+    should "pass the selector option to collection" do
+      @transport = Faraday.new :hosts => [ { :host => 'foobar', :port => 1234 } ],
+                               :options => { :selector => Elasticsearch::Transport::Transport::Connections::Selector::Random.new }
+      assert_instance_of Elasticsearch::Transport::Transport::Connections::Selector::Random,
+                         @transport.connections.selector
+    end
+
+    should "pass a configuration block to the Faraday constructor" do
+      config_block = lambda do |f|
+        f.response :logger
+        f.path_prefix = '/moo'
+      end
+
+      transport = Faraday.new :hosts => [ { :host => 'foobar', :port => 1234 } ], &config_block
+
+      handlers = transport.connections.first.connection.builder.handlers
+
+      assert_equal 1, handlers.size
+      assert handlers.include?(::Faraday::Response::Logger), "#{handlers.inspect} does not include <::Faraday::Response::Logger>"
+
+      assert_equal '/moo',                   transport.connections.first.connection.path_prefix
+      assert_equal 'http://foobar:1234/moo', transport.connections.first.connection.url_prefix.to_s
+    end
+
+    should "pass transport_options to the Faraday constructor" do
+      transport = Faraday.new :hosts => [ { :host => 'foobar', :port => 1234 } ],
+                              :options => { :transport_options => {
+                                              :request => { :open_timeout => 1 },
+                                              :headers => { :foo_bar => 'bar'  },
+                                              :ssl     => { :verify => false }
+                                            }
+                                          }
+
+      assert_equal 1,     transport.connections.first.connection.options.open_timeout
+      assert_equal 'bar', transport.connections.first.connection.headers['Foo-Bar']
+      assert_equal false, transport.connections.first.connection.ssl.verify?
+    end
+
+    should "merge in parameters defined in the Faraday connection parameters" do
+      transport = Faraday.new :hosts => [ { :host => 'foobar', :port => 1234 } ],
+                              :options => { :transport_options => {
+                                              :params => { :format => 'yaml' }
+                                            }
+                                          }
+      # transport.logger = Logger.new(STDERR)
+
+      transport.connections.first.connection.expects(:run_request).
+        with do |method, url, params, body|
+          assert_match(/\?format=yaml/, url)
+          true
+        end.
+        returns(stub_everything)
+
+      transport.perform_request 'GET', ''
+    end
+
+    should "not overwrite request parameters with the Faraday connection parameters" do
+      transport = Faraday.new :hosts => [ { :host => 'foobar', :port => 1234 } ],
+                              :options => { :transport_options => {
+                                              :params => { :format => 'yaml' }
+                                            }
+                                          }
+      # transport.logger = Logger.new(STDERR)
+
+      transport.connections.first.connection.expects(:run_request).
+        with do |method, url, params, body|
+          assert_match(/\?format=json/, url)
+          true
+        end.
+        returns(stub_everything)
+
+      transport.perform_request 'GET', '', { :format => 'json' }
+    end
+
+    should "set the credentials if passed" do
+      transport = Faraday.new :hosts => [ { :host => 'foobar', :port => 1234, :user => 'foo', :password => 'bar' } ]
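+      # 'Zm9vOmJhcg==' is Base64 for 'foo:bar', i.e. the Basic auth credentials for the host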
+      assert_equal 'Basic Zm9vOmJhcg==', transport.connections.first.connection.headers['Authorization']
+    end
+  end
+
+end

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-ruby-extras/ruby-elasticsearch-transport.git


