[DRE-commits] [ruby-sequel] 01/03: Imported Upstream version 4.11.0

Dmitry Borodaenko angdraug at moszumanska.debian.org
Sun Jun 8 20:24:27 UTC 2014


This is an automated email from the git hooks/post-receive script.

angdraug pushed a commit to branch master
in repository ruby-sequel.

commit 371f365567d59301c912dc4472e04d11e5ec8dd7
Author: Dmitry Borodaenko <angdraug at gmail.com>
Date:   Sun Jun 8 13:19:47 2014 -0700

    Imported Upstream version 4.11.0
---
 .travis.gemfile                                    |    6 +
 .travis.yml                                        |    7 +-
 CHANGELOG                                          |  330 ++++-
 CONTRIBUTING                                       |   33 +
 MIT-LICENSE                                        |    2 +-
 README.rdoc                                        |   47 +-
 Rakefile                                           |   70 +-
 bin/sequel                                         |    3 +-
 doc/active_record.rdoc                             |    6 +-
 doc/advanced_associations.rdoc                     |  217 ++-
 doc/association_basics.rdoc                        |  206 +--
 doc/bin_sequel.rdoc                                |    4 +-
 doc/cheat_sheet.rdoc                               |    4 +-
 doc/core_extensions.rdoc                           |    6 +-
 doc/dataset_basics.rdoc                            |    2 +-
 doc/dataset_filtering.rdoc                         |   12 +-
 doc/migration.rdoc                                 |   34 +-
 doc/model_hooks.rdoc                               |    9 +
 doc/mssql_stored_procedures.rdoc                   |   43 +
 doc/object_model.rdoc                              |   24 +-
 doc/opening_databases.rdoc                         |   31 +-
 doc/postgresql.rdoc                                |   30 +-
 doc/querying.rdoc                                  |   29 +-
 doc/release_notes/3.18.0.txt                       |    5 +-
 doc/release_notes/3.9.0.txt                        |    2 +-
 doc/release_notes/4.10.0.txt                       |  226 ++++
 doc/release_notes/4.11.0.txt                       |  147 +++
 doc/release_notes/4.4.0.txt                        |   92 ++
 doc/release_notes/4.5.0.txt                        |   34 +
 doc/release_notes/4.6.0.txt                        |   30 +
 doc/release_notes/4.7.0.txt                        |  103 ++
 doc/release_notes/4.8.0.txt                        |  175 +++
 doc/release_notes/4.9.0.txt                        |  190 +++
 doc/schema_modification.rdoc                       |    2 +-
 doc/security.rdoc                                  |   19 +-
 doc/sql.rdoc                                       |   57 +-
 doc/testing.rdoc                                   |   18 +-
 doc/thread_safety.rdoc                             |    2 +-
 doc/transactions.rdoc                              |    8 +
 doc/validations.rdoc                               |   22 +-
 doc/virtual_rows.rdoc                              |   62 +-
 lib/sequel/adapters/db2.rb                         |    3 +-
 lib/sequel/adapters/ibmdb.rb                       |   42 +-
 lib/sequel/adapters/jdbc.rb                        |  381 +++---
 lib/sequel/adapters/jdbc/db2.rb                    |   41 +-
 lib/sequel/adapters/jdbc/derby.rb                  |   43 +-
 lib/sequel/adapters/jdbc/h2.rb                     |   68 +-
 lib/sequel/adapters/jdbc/hsqldb.rb                 |   79 +-
 lib/sequel/adapters/jdbc/jtds.rb                   |   15 -
 lib/sequel/adapters/jdbc/oracle.rb                 |   71 +-
 lib/sequel/adapters/jdbc/postgresql.rb             |  141 +-
 lib/sequel/adapters/jdbc/sqlanywhere.rb            |   59 +
 lib/sequel/adapters/jdbc/sqlite.rb                 |    7 +
 lib/sequel/adapters/jdbc/sqlserver.rb              |   40 +-
 lib/sequel/adapters/jdbc/transactions.rb           |   11 +-
 lib/sequel/adapters/mock.rb                        |   16 +-
 lib/sequel/adapters/mysql2.rb                      |   12 +-
 lib/sequel/adapters/odbc.rb                        |    3 +-
 lib/sequel/adapters/odbc/mssql.rb                  |    6 +-
 lib/sequel/adapters/openbase.rb                    |    8 +-
 lib/sequel/adapters/oracle.rb                      |    3 +-
 lib/sequel/adapters/postgres.rb                    |   54 +-
 lib/sequel/adapters/shared/access.rb               |   24 +-
 lib/sequel/adapters/shared/cubrid.rb               |   38 +-
 lib/sequel/adapters/shared/db2.rb                  |   43 +-
 lib/sequel/adapters/shared/firebird.rb             |   37 +-
 lib/sequel/adapters/shared/informix.rb             |    7 +-
 lib/sequel/adapters/shared/mssql.rb                |  261 ++--
 lib/sequel/adapters/shared/mysql.rb                |  134 +-
 lib/sequel/adapters/shared/oracle.rb               |  165 ++-
 lib/sequel/adapters/shared/postgres.rb             |  199 ++-
 lib/sequel/adapters/shared/progress.rb             |    6 +-
 lib/sequel/adapters/shared/sqlanywhere.rb          |  469 +++++++
 lib/sequel/adapters/shared/sqlite.rb               |   74 +-
 lib/sequel/adapters/sqlanywhere.rb                 |  177 +++
 lib/sequel/adapters/tinytds.rb                     |   26 +-
 .../utils/emulate_offset_with_reverse_and_count.rb |    9 +-
 .../utils/emulate_offset_with_row_number.rb        |   34 +-
 lib/sequel/adapters/utils/split_alter_table.rb     |    8 +
 lib/sequel/ast_transformer.rb                      |   16 +-
 lib/sequel/connection_pool.rb                      |   14 +-
 lib/sequel/core.rb                                 |   31 +-
 lib/sequel/database/connecting.rb                  |    2 +-
 lib/sequel/database/dataset_defaults.rb            |    3 +-
 lib/sequel/database/features.rb                    |   15 +
 lib/sequel/database/misc.rb                        |   12 +
 lib/sequel/database/query.rb                       |    7 +-
 lib/sequel/database/schema_generator.rb            |    6 +-
 lib/sequel/database/schema_methods.rb              |   50 +-
 lib/sequel/database/transactions.rb                |  107 +-
 lib/sequel/dataset.rb                              |    4 +-
 lib/sequel/dataset/actions.rb                      |  216 ++-
 lib/sequel/dataset/features.rb                     |   26 +-
 lib/sequel/dataset/graph.rb                        |   68 +-
 lib/sequel/dataset/misc.rb                         |   10 +-
 lib/sequel/dataset/mutation.rb                     |    2 +
 lib/sequel/dataset/placeholder_literalizer.rb      |  172 +++
 lib/sequel/dataset/prepared_statements.rb          |    8 +-
 lib/sequel/dataset/query.rb                        |   96 +-
 lib/sequel/dataset/sql.rb                          |  517 +++++---
 lib/sequel/extensions/columns_introspection.rb     |    4 +-
 lib/sequel/extensions/constraint_validations.rb    |    6 +-
 .../extensions/current_datetime_timestamp.rb       |   57 +
 lib/sequel/extensions/date_arithmetic.rb           |    6 +-
 lib/sequel/extensions/eval_inspect.rb              |   18 +-
 lib/sequel/extensions/migration.rb                 |    2 +-
 .../extensions/mssql_emulate_lateral_with_apply.rb |   11 +-
 lib/sequel/extensions/null_dataset.rb              |   12 +-
 lib/sequel/extensions/pg_array.rb                  |  191 ++-
 lib/sequel/extensions/pg_array_ops.rb              |   23 +-
 lib/sequel/extensions/pg_hstore.rb                 |   17 +-
 lib/sequel/extensions/pg_hstore_ops.rb             |   11 +-
 lib/sequel/extensions/pg_inet.rb                   |    3 +
 lib/sequel/extensions/pg_interval.rb               |    3 +
 lib/sequel/extensions/pg_json.rb                   |  160 ++-
 lib/sequel/extensions/pg_json_ops.rb               |  233 +++-
 lib/sequel/extensions/pg_range.rb                  |    7 +-
 lib/sequel/extensions/pg_range_ops.rb              |    4 +-
 lib/sequel/extensions/pg_row.rb                    |    8 +-
 lib/sequel/extensions/pg_row_ops.rb                |    6 +-
 lib/sequel/extensions/query.rb                     |   10 +-
 lib/sequel/extensions/schema_dumper.rb             |   98 +-
 lib/sequel/extensions/to_dot.rb                    |   14 +-
 lib/sequel/model.rb                                |    8 +-
 lib/sequel/model/associations.rb                   | 1308 +++++++++++++-----
 lib/sequel/model/base.rb                           |  270 +++-
 lib/sequel/model/errors.rb                         |    6 +
 lib/sequel/plugins/association_pks.rb              |   87 +-
 lib/sequel/plugins/auto_validations.rb             |   11 +-
 lib/sequel/plugins/class_table_inheritance.rb      |   84 +-
 lib/sequel/plugins/dataset_associations.rb         |   43 +-
 lib/sequel/plugins/defaults_setter.rb              |    2 +-
 lib/sequel/plugins/eager_each.rb                   |    9 +
 lib/sequel/plugins/instance_hooks.rb               |   24 +-
 lib/sequel/plugins/json_serializer.rb              |    2 +-
 lib/sequel/plugins/many_through_many.rb            |  161 ++-
 lib/sequel/plugins/mssql_optimistic_locking.rb     |   92 ++
 lib/sequel/plugins/nested_attributes.rb            |   12 +
 lib/sequel/plugins/pg_array_associations.rb        |  258 ++--
 lib/sequel/plugins/prepared_statements.rb          |    3 +-
 .../plugins/prepared_statements_associations.rb    |   63 +-
 lib/sequel/plugins/rcte_tree.rb                    |   30 +-
 lib/sequel/plugins/serialization.rb                |   16 +-
 lib/sequel/plugins/sharding.rb                     |   25 +-
 lib/sequel/plugins/single_table_inheritance.rb     |   21 +-
 lib/sequel/plugins/subclasses.rb                   |   10 +-
 lib/sequel/plugins/table_select.rb                 |   41 +
 lib/sequel/plugins/tactical_eager_loading.rb       |    9 +
 lib/sequel/plugins/timestamps.rb                   |    8 +-
 lib/sequel/plugins/touch.rb                        |    4 +-
 lib/sequel/plugins/tree.rb                         |    6 +-
 lib/sequel/plugins/update_or_create.rb             |   60 +
 lib/sequel/plugins/validation_class_methods.rb     |    2 +-
 lib/sequel/plugins/validation_helpers.rb           |    7 +-
 lib/sequel/sql.rb                                  |  302 ++++-
 lib/sequel/version.rb                              |    2 +-
 sequel.gemspec                                     |    3 +-
 spec/adapters/db2_spec.rb                          |   10 +-
 spec/adapters/mssql_spec.rb                        |  112 +-
 spec/adapters/mysql_spec.rb                        |   11 +
 spec/adapters/oracle_spec.rb                       |   31 +-
 spec/adapters/postgres_spec.rb                     |  741 +++++++----
 spec/adapters/spec_helper.rb                       |    4 +-
 spec/adapters/sqlanywhere_spec.rb                  |  170 +++
 spec/adapters/sqlite_spec.rb                       |    7 +
 spec/bin_spec.rb                                   |    6 +-
 spec/core/connection_pool_spec.rb                  |    6 +
 spec/core/database_spec.rb                         |  100 +-
 spec/core/dataset_spec.rb                          |  439 ++++++-
 spec/core/expression_filters_spec.rb               |  122 +-
 spec/core/mock_adapter_spec.rb                     |   16 +-
 spec/core/object_graph_spec.rb                     |  455 ++++---
 spec/core/placeholder_literalizer_spec.rb          |  145 ++
 spec/core/schema_generator_spec.rb                 |    6 +-
 spec/core/schema_spec.rb                           |   68 +-
 spec/core/spec_helper.rb                           |    4 +-
 spec/core_extensions_spec.rb                       |   18 +-
 spec/extensions/active_model_spec.rb               |   27 +-
 spec/extensions/association_pks_spec.rb            |   38 +-
 spec/extensions/association_proxies_spec.rb        |   18 +-
 spec/extensions/auto_validations_spec.rb           |   58 +-
 spec/extensions/caching_spec.rb                    |    8 +-
 spec/extensions/class_table_inheritance_spec.rb    |   26 +-
 spec/extensions/columns_introspection_spec.rb      |    1 +
 spec/extensions/constraint_validations_spec.rb     |   11 +-
 spec/extensions/core_refinements_spec.rb           |   16 +-
 spec/extensions/current_datetime_timestamp_spec.rb |   27 +
 spec/extensions/dataset_associations_spec.rb       |  106 +-
 spec/extensions/defaults_setter_spec.rb            |   12 +
 spec/extensions/eager_each_spec.rb                 |    6 +
 spec/extensions/error_splitter_spec.rb             |    2 +-
 spec/extensions/eval_inspect_spec.rb               |   12 +-
 spec/extensions/hook_class_methods_spec.rb         |   12 +-
 spec/extensions/instance_hooks_spec.rb             |   14 +
 spec/extensions/many_through_many_spec.rb          | 1258 +++++++++++++++++-
 spec/extensions/migration_spec.rb                  |  106 +-
 spec/extensions/mssql_optimistic_locking_spec.rb   |   91 ++
 spec/extensions/nested_attributes_spec.rb          |   38 +-
 spec/extensions/pagination_spec.rb                 |   18 +-
 spec/extensions/pg_array_associations_spec.rb      |  329 +++--
 spec/extensions/pg_array_ops_spec.rb               |    6 +
 spec/extensions/pg_array_spec.rb                   |   32 +-
 spec/extensions/pg_hstore_spec.rb                  |   30 +-
 spec/extensions/pg_interval_spec.rb                |    6 +-
 spec/extensions/pg_json_ops_spec.rb                |   99 ++
 spec/extensions/pg_json_spec.rb                    |  108 +-
 spec/extensions/pg_range_spec.rb                   |   40 +-
 spec/extensions/pg_row_spec.rb                     |    2 +-
 .../prepared_statements_associations_spec.rb       |   27 +-
 spec/extensions/rcte_tree_spec.rb                  |   68 +-
 spec/extensions/schema_caching_spec.rb             |    6 +-
 spec/extensions/sequel_3_dataset_methods_spec.rb   |    1 -
 spec/extensions/serialization_spec.rb              |   19 +
 spec/extensions/sharding_spec.rb                   |    4 +-
 spec/extensions/shared_caching_spec.rb             |   10 +-
 spec/extensions/single_table_inheritance_spec.rb   |   14 +-
 spec/extensions/spec_helper.rb                     |    6 +-
 spec/extensions/static_cache_spec.rb               |   32 +-
 spec/extensions/table_select_spec.rb               |   71 +
 spec/extensions/tactical_eager_loading_spec.rb     |    4 +
 spec/extensions/timestamps_spec.rb                 |   19 +
 spec/extensions/to_dot_spec.rb                     |   13 +-
 spec/extensions/touch_spec.rb                      |   13 +-
 spec/extensions/tree_spec.rb                       |   16 +-
 spec/extensions/update_or_create_spec.rb           |   81 ++
 spec/extensions/validation_class_methods_spec.rb   |   20 +-
 spec/extensions/validation_helpers_spec.rb         |   26 +-
 spec/integration/associations_test.rb              | 1382 +++++++++++++++++++-
 spec/integration/database_test.rb                  |   18 +-
 spec/integration/dataset_test.rb                   |  121 +-
 spec/integration/migrator_test.rb                  |  134 +-
 spec/integration/model_test.rb                     |   21 +-
 spec/integration/prepared_statement_test.rb        |    2 +-
 spec/integration/schema_test.rb                    |   69 +-
 spec/integration/spec_helper.rb                    |    4 +-
 spec/integration/transaction_test.rb               |   30 +-
 spec/integration/type_test.rb                      |    6 +-
 spec/model/association_reflection_spec.rb          |   95 +-
 spec/model/associations_spec.rb                    |  961 +++++++++++++-
 spec/model/base_spec.rb                            |   15 +-
 spec/model/class_dataset_methods_spec.rb           |    4 +-
 spec/model/eager_loading_spec.rb                   |  641 ++++++++-
 spec/model/model_spec.rb                           |  273 +++-
 spec/model/record_spec.rb                          |   40 +-
 spec/model/spec_helper.rb                          |    3 +-
 spec/model/validations_spec.rb                     |    2 +-
 spec/rspec_helper.rb                               |   18 +
 www/layout.html.erb                                |    4 +-
 www/make_www.rb                                    |    4 +-
 www/pages/{development => development.html.erb}    |    2 +-
 .../{documentation => documentation.html.erb}      |   15 +-
 www/pages/{index => index.html.erb}                |    2 +-
 www/pages/{plugins => plugins.html.erb}            |    9 +-
 www/pages/{press => press.html.erb}                |    0
 254 files changed, 15916 insertions(+), 3875 deletions(-)

diff --git a/.travis.gemfile b/.travis.gemfile
index cb65765..66eae00 100644
--- a/.travis.gemfile
+++ b/.travis.gemfile
@@ -20,3 +20,9 @@ gem "pg", :platform => :ruby
 gem 'jdbc-sqlite3', :platform => :jruby
 gem 'jdbc-mysql', :platform => :jruby
 gem 'jdbc-postgres', :platform => :jruby
+
+platforms :rbx do
+  gem 'racc'
+  gem 'rubysl', '~> 2.0'
+  gem 'psych'
+end
diff --git a/.travis.yml b/.travis.yml
index 1559d04..268ee61 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -4,10 +4,13 @@ rvm:
   - 1.9.2
   - 1.9.3
   - 2.0.0
+  - 2.1.0
   - jruby-18mode
   - jruby-19mode
-  - rbx-18mode
-  - rbx-19mode
+  - rbx-2
+matrix:
+  allow_failures:
+    - rvm: rbx-2
 script: bundle exec rake spec_travis
 gemfile: .travis.gemfile
 before_script:
diff --git a/CHANGELOG b/CHANGELOG
index bed2df5..b92a16b 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,3 +1,331 @@
+=== 4.11.0 (2014-06-03)
+
+* Add :model_map option to class_table_inheritance plugin so class names don't need to be stored in the database (jeremyevans)
+
+* Set version when using MySQL/SQLite emulation in the mock adapter (jeremyevans)
+
+* Add support for CUBRID/SQLAnywhere emulation to the mock adapter (jeremyevans)
+
+* Add support for the jsonb operators added in PostgreSQL 9.4 to the pg_json_ops extension (jeremyevans)
+
+* Add support for new json/jsonb functions added in PostgreSQL 9.4 to the pg_json_ops extension (jeremyevans)
+
+* Add support for the PostgreSQL 9.4+ jsonb type to the pg_json_ops extension (jeremyevans)
+
+* Add support for derived column lists to Sequel.as and SQL::AliasMethods#as (jeremyevans)
+
+* Support connecting to a DB2 catalog name in the ibmdb adapter (calh) (#821)
+
+* Fix warnings in some cases in the ibmdb adapter (calh) (#820)
+
+* Add SQL::Function#with_ordinality for creating set returning functions WITH ORDINALITY (jeremyevans)
+
+* Add SQL::Function#filter for creating filtered aggregate function calls (jeremyevans)
+
+* Add SQL::Function#within_group for creating ordered-set and hypothetical-set aggregate functions (jeremyevans)
+
+* Add SQL::Function#lateral, for creating set returning functions that will be preceded by LATERAL (jeremyevans)
+
+* Add SQL::Function#quoted and #unquoted methods, to enable/disable quoting of function names (jeremyevans)
+
+* Deprecate Dataset#{window,emulated,}_function_sql_append (jeremyevans)
+
+* Deprecate SQL::WindowFunction and SQL::EmulatedFunction classes, switch to using options on SQL::Function (jeremyevans)
+
+* Only modify changed_columns if deserialized value changes in the serialization plugin (jeremyevans) (#818)
+
+* Support PostgreSQL 9.4+ jsonb type in the pg_json extension (jeremyevans)
+
+* Allow Postgres::ArrayOp#unnest to accept arguments in the pg_array_ops extension (jeremyevans)
+
+* Add Postgres::ArrayOp#cardinality to the pg_array_ops extension (jeremyevans)
+
+* Add :check option to Database#create_view for WITH [LOCAL] CHECK OPTION support (jeremyevans)
+
+* Add :concurrently option to Database#refresh_view on PostgreSQL to support concurrent refresh of materialized views (jeremyevans)
+
+* Call the :after_connect Database option proc with both the connection and server/shard if it accepts 2 arguments (pedro, jeremyevans) (#813)
+
+* Make multiple plugins set values before validation instead of before create, works better with auto_validations (jeremyevans)
+
+* Support a default Dataset#import slice size, set to 500 on SQLite (jeremyevans) (#810)
+
+* Make :read_only transaction option be per-savepoint on PostgreSQL (jeremyevans) (#807)
+
+* Add :rank option to Dataset#full_text_search on PostgreSQL, to order by the ranking (jeremyevans) (#809)
+
+* Remove methods deprecated in 4.10.0 (jeremyevans)
+
+=== 4.10.0 (2014-05-01)
+
+* Make Model.include API same as Module.include (ged) (#803)
+
+* Dataset::PlaceholderLiteralizer now handles DelayedEvaluations correctly (jeremyevans)
+
+* Refactor type conversion in the jdbc adapter, for up to a 20% speedup (jeremyevans)
+
+* Add Dataset#with_fetch_size to jdbc adapter, for setting fetch size for JDBC ResultSets (jeremyevans)
+
+* Default to a fetch_size of 100 in the jdbc/oracle adapter, similar to the oci8-based oracle adapter (jeremyevans)
+
+* Add Database#fetch_size accessor and :fetch_size option to jdbc adapter, for setting JDBC Statement fetch size (jeremyevans)
+
+* Automatically determine array type in pg_array_associations plugin, explicitly cast arrays in more places (jeremyevans, maccman) (#800)
+
+* Speed up Dataset#literal for symbols 60% by caching results, speeding up dataset literalization up to 40% or more (jeremyevans)
+
+* Speed up Sequel.split_symbol 10-20x by caching results, speeding up dataset literalization up to 80% or more (jeremyevans)
+
+* Speed up dataset literalization for simple datasets by up to 100% (jeremyevans)
+
+* Support :fractional_seconds Database option on MySQL 5.6.5+ to support fractional seconds by default (jeremyevans) (#797)
+
+* Work around MySQL 5.6+ bug when combining DROP FOREIGN KEY and DROP INDEX in same ALTER TABLE statement (jeremyevans)
+
+* Make auto_validations plugin handle models that select from subqueries (jeremyevans)
+
+* Recognize additional disconnect errors in the postgres adapter (jeremyevans)
+
+* Make import/multi_insert insert multiple rows in a single query using a UNION on Oracle, DB2, and Firebird (jeremyevans)
+
+* Speed up association_pks many_to_many setter method by using Dataset#import (jeremyevans)
+
+* Add Model.prepared_finder, similar to .finder but using a prepared statement (jeremyevans)
+
+* Model.def_{add_method,association_dataset_methods,remove_methods} are now deprecated (jeremyevans)
+
+* Model.eager_loading_dataset and Model.apply_association_dataset_opts are now deprecated (jeremyevans)
+
+* Make prepared_statement_associations plugin handle one_through_one and one_through_many associations (jeremyevans)
+
+* Use placeholder literalizer for regular association loading for up to 85% speedup (jeremyevans)
+
+* Use placeholder literalizer for eager association loading for up to 20% speedup (jeremyevans)
+
+* Make Model#marshallable! work correctly when using the tactical_eager_loading plugin (jeremyevans)
+
+* Respect :foreign_key_constraint_name option when adding columns to existing table on MySQL (noah256) (#795)
+
+* AssociationReflection#association_dataset now handles joining tables if necessary (jeremyevans)
+
+* Support drop_view :if_exists option on SQLite, MySQL, H2, and HSQLDB (jeremyevans) (#793)
+
+* Support drop_table :if_exists option on HSQLDB (jeremyevans)
+
+* Add Database#transaction :auto_savepoint option, for automatically using a savepoint in nested transactions (jeremyevans)
+
+* Add :server_version Database option on Microsoft SQL Server, instead of querying the database for it (jeremyevans)
+
+* Support :correlated_subquery as an eager_graph and filter by associations limit strategy for one_to_* associations (jeremyevans)
+
+* Support named parameters in call_mssql_sproc on Microsoft SQL Server (y.zemlyanukhin, jeremyevans) (#792)
+
+* Handle placeholder literalizer arguments when emulating offsets (jeremyevans)
+
+* Don't attempt to emulate offsets if the dataset uses literal SQL (jeremyevans)
+
+* Use a UNION-based strategy by default to eagerly load limited associations (jeremyevans)
+
+* Support offsets without limits on MySQL, SQLite, H2, SQLAnywhere and CUBRID (jeremyevans)
+
+* Remove the install/uninstall rake tasks (jeremyevans)
+
+* Use INSERT VALUES with multiple rows for Dataset#import and #multi_insert on more databases (jeremyevans)
+
+* Support common table expressions (WITH clause) on SQLite >=3.8.3 (jeremyevans)
+
+=== 4.9.0 (2014-04-01)
+
+* Recognize CHECK constraint violations on newer versions of SQLite (jeremyevans)
+
+* Do not attempt to eager load when calling Dataset#columns in the eager_each plugin (jeremyevans)
+
+* Support :driver option for jdbc adapter, for specifying driver class for cases where getConnection doesn't work (jeremyevans) (#785)
+
+* Massive speedup for PostgreSQL array parser (jeremyevans) (#788)
+
+* Add current_datetime_timestamp extension, for current Time/DateTime instances that are literalized as CURRENT_TIMESTAMP (jeremyevans)
+
+* Recognize additional unique constraint violations on SQLite (jeremyevans) (#782)
+
+* Don't remove column value when validating nested attributes for one_to_* association where association foreign key is the model's primary key (jeremyevans)
+
+* Add Dataset#disable_insert_returning on PostgreSQL for skipping implicit use of RETURNING (jeremyevans)
+
+* Automatically optimize Model.[], .with_pk, and .with_pk! for models with composite keys (jeremyevans)
+
+* Automatically optimize Model.[] when called with a hash (jeremyevans)
+
+* Automatically optimize Model.find, .first, and .first! when called with a single argument (jeremyevans)
+
+* Add Model.finder for creating optimized finder methods using Dataset::PlaceholderLiteralizer (jeremyevans)
+
+* Add Dataset::PlaceholderLiteralizer optimization framework (jeremyevans)
+
+* Add Dataset#with_sql_{each,all,first,single_value,insert,update} optimized methods (jeremyevans)
+
+* Make pg_array extension use correct type when typecasting column values for smallint, oid, real, character, and varchar arrays (jeremyevans)
+
+* Make Database#column_schema_to_ruby_default a public method in the schema_dumper extension (jeremyevans) (#776)
+
+* Fix multiple corner cases in the eager_graph support (jeremyevans) (#771)
+
+* Use streaming to implement paging for Dataset#paged_each in the mysql2 adapter (jeremyevans)
+
+* Use a cursor to implement paging for Dataset#paged_each in the postgres adapter (jeremyevans)
+
+* Add Database#create_join_table? and #create_join_table! for consistency (jeremyevans)
+
+* Add Dataset#where_current_of to the postgres adapter for supporting updating rows based on a cursor's current position (jeremyevans)
+
+* Add Dataset#use_cursor :hold option in the postgres adapter for supporting cursor use outside of a transaction (jeremyevans)
+
+* Add Dataset#paged_each :strategy=>:filter option for increased performance (jeremyevans)
+
+=== 4.8.0 (2014-03-01)
+
+* Add SQL::AliasedExpression#alias alias for #aliaz (jeremyevans)
+
+* Handle SQL::Identifier, SQL::QualifiedIdentifier, and SQL::AliasedExpression objects as first argument to Dataset#graph (jeremyevans)
+
+* Respect qualification and aliases in symbols passed as first argument to Dataset#graph (dividedmind) (#769)
+
+* Recognize new constraint violation error messages in SQLite 3.8.2+ (itswindtw) (#766)
+
+* Use limit strategy to correctly handle limited associations in the dataset_associations plugin (jeremyevans)
+
+* Handle issues in dataset_associations plugin when dataset uses unqualified identifiers for associations requiring joins (jeremyevans)
+
+* Handle fractional seconds in input timestamps in the odbc/mssql adapter (Ross Attrill, jeremyevans)
+
+* Return fractional seconds in timestamps in the odbc adapter (jeremyevans)
+
+* Support :plain and :phrase options to Dataset#full_text_search on PostgreSQL (jeremyevans)
+
+* Use limit strategy to correctly handle filtering by limited associations (jeremyevans)
+
+* Simplify queries used for filtering by associations with conditions (jeremyevans)
+
+* Use an eager limit strategy by default for *_one associations with orders (jeremyevans)
+
+* Support :limit_strategy eager_graph option, for specifying strategy used for limited associations in that eager graph (jeremyevans)
+
+* Add eager_graph_with_options to model datasets, for specifying options specific to the eager_graph call (jeremyevans)
+
+* Handle offsets on *_many associations when eager graphing when there are no associated results (jeremyevans)
+
+* Make Database#register_array_type work without existing scalar conversion proc in the pg_array extension (jeremyevans)
+
+* Handle presence validations on foreign keys in associated objects when creating new associated objects in the nested_attributes plugin (jeremyevans)
+
+* Respect offsets when eager graphing *_one associations (jeremyevans)
+
+* Add association_join to model datasets, for setting up joins based on associations (jeremyevans)
+
+* Add one_through_many association to many_through_many plugin, for only returning a single record (jeremyevans)
+
+* Add :graph_order association option, useful when :order needs to contain qualified identifiers (jeremyevans)
+
+* Add one_through_one association, similar to many_to_many but only returning a single record (jeremyevans)
+
+=== 4.7.0 (2014-02-01)
+
+* Don't swallow underlying exception if there is an exception closing the cursor on PostgreSQL (jeremyevans) (#761)
+
+* Recognize primary key unique constraint violations on MSSQL and SQLAnywhere (jeremyevans)
+
+* Recognize composite unique constraint violations on SQLite (timcraft) (#758)
+
+* Make #* method without arguments on SQL::Function return a Function with * prepended to the arguments (jeremyevans)
+
+* Add #function to SQL::Identifier and SQL::QualifiedIdentifier, allowing for easy use of schema qualified functions or function names that need quoting (jeremyevans)
+
+* Add SQL::Function#distinct for easier creation of aggregate functions using DISTINCT (jeremyevans)
+
+* Add SQL::Function#over for easier creation of window functions (jeremyevans)
+
+* Don't clear validation instance_hooks until after a successful save (jeremyevans)
+
+* Support :raise_on_save_failure option for one_to_many, pg_array_to_many, and many_to_pg_array associations (jeremyevans)
+
+* Make SQLTime#to_s return a string in HH:MM:SS format, since it shouldn't include date information (jeremyevans)
+
+* Support the Database#tables :schema option in the jdbc adapter (robbiegill, jeremyevans) (#755)
+
+* Automatically rollback transactions in killed threads in ruby 2.0+ (chanks) (#752)
+
+* Add update_or_create plugin, for updating an object if it exists, or creating such an object if it does not (jeremyevans)
+
+* Make auto_validations uniqueness validations work correctly for STI subclasses (jeremyevans)
+
+* Support :dataset option to validates_unique validation (jeremyevans)
+
+=== 4.6.0 (2014-01-02)
+
+* Add Database#call_mssql_sproc on MSSQL for calling stored procedures and handling output parameters (jrgns, jeremyevans) (#748)
+
+* Handle RuntimeErrors raised by oci8 in the oracle adapter (jeremyevans)
+
+* Support OFFSET/FETCH on Microsoft SQL Server 2012 (jeremyevans)
+
+* Support :server option for Database#{commit,rollback}_prepared_transaction on PostgreSQL, MySQL, and H2 (jeremyevans) (#743)
+
+* Do not attempt to eager load and raise an exception when doing Model.eager(...).naked.all (jeremyevans)
+
+* Recognize a couple additional disconnect errors in the jdbc/postgresql adapter (jeremyevans) (#742)
+
+=== 4.5.0 (2013-12-02)
+
+* Support :on_commit=>(:drop|:delete_rows|:preserve_rows) options when creating temp tables on PostgreSQL (rosenfeld) (#737)
+
+* Make Dataset#insert work on PostgreSQL if the table name is a SQL::PlaceholderLiteralString (jeremyevans) (#736)
+
+* Copy unique constraints when emulating alter_table operations on SQLite (jeremyevans) (#735)
+
+* Don't return clob column values as SQL::Blob instances in the db2 and ibmdb adapters unless use_clob_as_blob is true (jeremyevans)
+
+* Make use_clob_as_blob false by default on DB2 (jeremyevans)
+
+* Fix usage of Sequel::SQL::Blob objects as prepared statement arguments in jdbc/db2 adapter when use_clob_as_blob is false (jeremyevans)
+
+* Add mssql_optimistic_locking plugin, using a timestamp/rowversion column to protect against concurrent updates (pinx, jeremyevans) (#731)
+
+* Make Model.primary_key array immutable for composite keys (chanks) (#730)
+
+=== 4.4.0 (2013-11-01)
+
+* Make Database#tables not show tables in the recycle bin on Oracle (jeremyevans) (#728)
+
+* Don't automatically order on all columns when emulating offsets for unordered datasets on DB2 (jeremyevans)
+
+* Improve PostgreSQL type support in the jdbc/postgresql adapter (jeremyevans)
+
+* Make offset emulation on Oracle work when using columns that can't be ordered (jeremyevans, sdeming) (#724, #725)
+
+* Make filter by associations support handle associations with :conditions or block (jeremyevans)
+
+* Make association cloning handle :block correctly for clones of clones (jeremyevans)
+
+* Make association cloning handle :eager_block option correctly (jeremyevans)
+
+* Make add_primary_key work on h2 (jeremyevans)
+
+* Add support for foreign key parsing on Oracle (jeremyevans)
+
+* Add support for foreign key parsing to the jdbc adapter (jeremyevans)
+
+* Make add_foreign_key work on HSQLDB (jeremyevans)
+
+* Add table_select plugin for selecting table.* instead of * for model datasets (jeremyevans)
+
+* Issue constraint_validation table deletes before inserts, so modifying constraint via drop/add in same alter_table block works (jeremyevans)
+
+* Support add_*/remove_*/remove_all_* pg_array_to_many association methods on unsaved model objects (jeremyevans)
+
+* Add Sybase SQLAnywhere support via new sqlanywhere and jdbc/sqlanywhere adapters (gditrick, jeremyevans)
+
+* Add Dataset#offset for setting the offset separately from the limit (Paul Henry, jeremyevans) (#717)
+
 === 4.3.0 (2013-10-02)
 
 * Fix literalization of empty blobs on MySQL (jeremyevans) (#715)
@@ -3106,7 +3434,7 @@
 
 * Optimize Model.[] by using static sql when possible, for a 30-40% speed increase (jeremyevans)
 
-* Add Dataset#with_sql, which returns a clone of the datatset with static SQL (jeremyevans)
+* Add Dataset#with_sql, which returns a clone of the dataset with static SQL (jeremyevans)
 
 * Refactor Dataset#literal so it doesn't need to be overridden in subadapters, for a 20-25% performance increase (jeremyevans)
 
diff --git a/CONTRIBUTING b/CONTRIBUTING
new file mode 100644
index 0000000..2c5a333
--- /dev/null
+++ b/CONTRIBUTING
@@ -0,0 +1,33 @@
+Issue Guidelines
+----------------
+
+1) Issues should only be created for things that are definitely bugs.
+   If you are not sure that the behavior is a bug, ask about it on
+   IRC or the sequel-talk Google Group.  GitHub Issues should not be
+   used as a help forum.
+
+2) If you are sure it is a bug, then post a complete description of
+   the issue, the simplest possible self-contained example showing
+   the problem, the full backtrace of any exception, and for issues
+   involving database queries, an SQL log.
+
+Pull Request Guidelines
+-----------------------
+
+1) Try to include tests for all new features and substantial bug
+   fixes.  See the testing guide for details about testing Sequel.
+
+2) Try to include documentation for all new features.  In most cases
+   this should include RDoc method documentation, but updates to the
+   guides are also appropriate in some cases.
+
+3) Follow the style conventions of the surrounding code.  In most
+   cases, this is standard ruby style.
+
+4) Do not submit whitespace changes with code changes.  Sequel is not
+   pedantic about trailing whitespace, so if you have an editor that
+   automatically strips trailing whitespace, you may want to turn
+   that feature off.
+   
+5) All code in pull requests is assumed to be MIT licensed.  Do not
+   submit a pull request if that isn't the case.
diff --git a/MIT-LICENSE b/MIT-LICENSE
index 8dbacb3..a2ed369 100644
--- a/MIT-LICENSE
+++ b/MIT-LICENSE
@@ -1,5 +1,5 @@
 Copyright (c) 2007-2008 Sharon Rosner
-Copyright (c) 2008-2013 Jeremy Evans
+Copyright (c) 2008-2014 Jeremy Evans
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to
diff --git a/README.rdoc b/README.rdoc
index dfe98f3..e3ba008 100644
--- a/README.rdoc
+++ b/README.rdoc
@@ -13,17 +13,16 @@ toolkit for Ruby.
   configurations, and database sharding.
 * Sequel currently has adapters for ADO, Amalgalite, CUBRID,
   DataObjects, DB2, DBI, Firebird, IBM_DB, Informix, JDBC, MySQL,
-  Mysql2, ODBC, OpenBase, Oracle, PostgreSQL, SQLite3, Swift, and
-  TinyTDS.
+  Mysql2, ODBC, OpenBase, Oracle, PostgreSQL, SQLAnywhere, SQLite3,
+  Swift, and TinyTDS.
 
 == Resources
 
-* {Website}[http://sequel.rubyforge.org]
-* {Blog}[http://sequel.heroku.com]
+* {Website}[http://sequel.jeremyevans.net]
 * {Source code}[http://github.com/jeremyevans/sequel]
 * {Bug tracking}[http://github.com/jeremyevans/sequel/issues]
 * {Google group}[http://groups.google.com/group/sequel-talk]
-* {RDoc}[http://sequel.rubyforge.org/rdoc]
+* {RDoc}[http://sequel.jeremyevans.net/rdoc]
 
 To check out the source code:
   
@@ -35,7 +34,7 @@ If you have any comments or suggestions please post to the Google group.
 
 == Installation
 
-  sudo gem install sequel
+  gem install sequel
   
 == A Short Example
 
@@ -70,7 +69,7 @@ Sequel includes an IRB console for quick access to databases (usually referred t
 
 You get an IRB session with the database object stored in DB.
 
-In addition to providing an IRB shell (the default behavior), bin/sequel also has support for migrating databases, dumping schema migrations, and copying databases.  See the {bin/sequel guide}[link:files/doc/bin_sequel_rdoc.html] for more details.
+In addition to providing an IRB shell (the default behavior), bin/sequel also has support for migrating databases, dumping schema migrations, and copying databases.  See the {bin/sequel guide}[rdoc-ref:doc/bin_sequel.rdoc] for more details.
 
 == An Introduction
 
@@ -250,12 +249,12 @@ After filtering, you can retrieve the matching records by using any of the retri
 
   my_posts.each{|row| p row}
   
-See the {Dataset Filtering}[link:files/doc/dataset_filtering_rdoc.html] file for more details.
+See the {Dataset Filtering}[rdoc-ref:doc/dataset_filtering.rdoc] file for more details.
 
 === Security
 
 Designing apps with security in mind is a best practice.
-Please read the {Security Guide}[link:files/doc/security_rdoc.html] for details on security
+Please read the {Security Guide}[rdoc-ref:doc/security.rdoc] for details on security
 issues that you should be aware of when using Sequel.
 
 === Summarizing Records
@@ -313,7 +312,7 @@ You can also specify descending order:
 
 === Core Extensions
 
-Note the use of <tt>Sequel.desc(:stamp)</tt> in the above example.  Much of Sequel's DSL uses this style, calling methods on the Sequel module that return SQL expression objects.  Sequel also ships with a {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]) that integrates Sequel's DSL better into the ruby language, allowing you to write:
+Note the use of <tt>Sequel.desc(:stamp)</tt> in the above example.  Much of Sequel's DSL uses this style, calling methods on the Sequel module that return SQL expression objects.  Sequel also ships with a {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc] that integrates Sequel's DSL better into the ruby language, allowing you to write:
 
   :stamp.desc
 
@@ -574,7 +573,7 @@ That will just change the value for the object, it will not update the row in th
 
 === Mass assignment
 
-You can also set the values for multiple columns in a single method call, using one of the mass-assignment methods.  See the {mass assignment guide}[link:files/doc/mass_assignment_rdoc.html] for details.  For example +set+ updates the model's column values without saving:
+You can also set the values for multiple columns in a single method call, using one of the mass-assignment methods.  See the {mass assignment guide}[rdoc-ref:doc/mass_assignment.rdoc] for details.  For example +set+ updates the model's column values without saving:
 
   post.set(:title=>'hey there', :updated_by=>'foo')
 
@@ -640,12 +639,14 @@ SQL query.
 
 === Associations
 
-Associations are used in order to specify relationships between model classes that reflect relationships between tables in the database, which are usually specified using foreign keys.  You specify model associations via the +many_to_one+, +one_to_one+, +one_to_many+, and +many_to_many+ class methods:
+Associations are used in order to specify relationships between model classes that reflect relationships between tables in the database, which are usually specified using foreign keys.  You specify model associations via class methods:
 
   class Post < Sequel::Model
     many_to_one :author
     one_to_many :comments
+    one_to_one :first_comment, :class=>:Comment, :order=>:id
     many_to_many :tags
+    one_through_one :first_tag, :class=>:Tag, :order=>:name, :right_key=>:tag_id
   end
 
 +many_to_one+ and +one_to_one+ create a getter and setter for each model object:
@@ -746,6 +747,28 @@ You can dynamically customize eager loads for both +eager+ and +eager_graph+ whi
   # Eagerly load only replies containing 'foo', and the person and tags for those replies
  Post.eager(:replies=>{proc{|ds| ds.where(Sequel.like(:text, '%foo%'))}=>[:person, :tags]}).all
 
+=== Joining with Associations
+
+You can use the association_join method to add a join to the model's dataset based on the association:
+
+  Post.association_join(:author)
+  # SELECT * FROM posts
+  # INNER JOIN authors AS author ON (author.id = posts.author_id)
+
+This comes with variants for different join types:
+
+  Post.association_left_join(:replies)
+  # SELECT * FROM posts
+  # LEFT JOIN replies ON (replies.post_id = posts.id)
+
+Similar to the eager loading methods, you can use multiple associations and nested associations:
+
+  Post.association_join(:author, :replies=>:person).all
+  # SELECT * FROM posts
+  # INNER JOIN authors AS author ON (author.id = posts.author_id)
+  # INNER JOIN replies ON (replies.post_id = posts.id)
+  # INNER JOIN people AS person ON (person.id = replies.person_id)
+
 === Extending the underlying dataset
 
 The recommended way to implement table-wide logic is by defining methods on the dataset using +dataset_module+:
diff --git a/Rakefile b/Rakefile
index 7da7cd9..d001f84 100644
--- a/Rakefile
+++ b/Rakefile
@@ -7,7 +7,6 @@ VERS = lambda do
   Sequel.version
 end
 CLEAN.include ["**/.*.sw?", "sequel-*.gem", ".config", "rdoc", "coverage", "www/public/*.html", "www/public/rdoc*", '**/*.rbc']
-SUDO = ENV['SUDO'] || 'sudo'
 
 # Gem Packaging and Release
 
@@ -16,16 +15,6 @@ task :package=>[:clean] do |p|
   sh %{#{FileUtils::RUBY} -S gem build sequel.gemspec}
 end
 
-desc "Install sequel gem"
-task :install=>[:package] do
-  sh %{#{SUDO} #{FileUtils::RUBY} -S gem install ./#{NAME}-#{VERS.call} --local}
-end
-
-desc "Uninstall sequel gem"
-task :uninstall=>[:clean] do
-  sh %{#{SUDO} #{FileUtils::RUBY} -S gem uninstall #{NAME}}
-end
-
 desc "Publish sequel gem to rubygems.org"
 task :release=>[:package] do
   sh %{#{FileUtils::RUBY} -S gem push ./#{NAME}-#{VERS.call}.gem}
@@ -38,16 +27,11 @@ task :website do
   sh %{#{FileUtils::RUBY} www/make_www.rb}
 end
 
-desc "Update Non-RDoc section of sequel.rubyforge.org"
-task :website_rf_base=>[:website] do
-  sh %{rsync -rt www/public/*.html rubyforge.org:/var/www/gforge-projects/sequel/}
-end
-
 ### RDoc
 
 RDOC_DEFAULT_OPTS = ["--line-numbers", "--inline-source", '--title', 'Sequel: The Database Toolkit for Ruby']
 
-allow_website_rdoc = begin
+begin
   # Sequel uses hanna-nouveau for the website RDoc.
   # Due to bugs in older versions of RDoc, and the
   # fact that hanna-nouveau does not support RDoc 4,
@@ -55,9 +39,7 @@ allow_website_rdoc = begin
   gem 'rdoc', '= 3.12.2'
   gem 'hanna-nouveau'
   RDOC_DEFAULT_OPTS.concat(['-f', 'hanna'])
-  true
 rescue Gem::LoadError
-  false
 end
 
 rdoc_task_class = begin
@@ -80,32 +62,25 @@ if rdoc_task_class
     rdoc.rdoc_files.add %w"README.rdoc CHANGELOG MIT-LICENSE lib/**/*.rb doc/*.rdoc doc/release_notes/*.txt"
   end
 
-  if allow_website_rdoc
-    desc "Make rdoc for website"
-    task :website_rdoc=>[:website_rdoc_main, :website_rdoc_adapters, :website_rdoc_plugins]
-  
-    rdoc_task_class.new(:website_rdoc_main) do |rdoc|
-      rdoc.rdoc_dir = "www/public/rdoc"
-      rdoc.options += RDOC_OPTS + %w'--no-ignore-invalid'
-      rdoc.rdoc_files.add %w"README.rdoc CHANGELOG MIT-LICENSE lib/*.rb lib/sequel/*.rb lib/sequel/{connection_pool,dataset,database,model}/*.rb doc/*.rdoc doc/release_notes/*.txt lib/sequel/extensions/migration.rb lib/sequel/extensions/core_extensions.rb"
-    end
-  
-    rdoc_task_class.new(:website_rdoc_adapters) do |rdoc|
-      rdoc.rdoc_dir = "www/public/rdoc-adapters"
-      rdoc.options += RDOC_DEFAULT_OPTS + %w'--main Sequel --no-ignore-invalid'
-      rdoc.rdoc_files.add %w"lib/sequel/adapters/**/*.rb"
-    end
-  
-    rdoc_task_class.new(:website_rdoc_plugins) do |rdoc|
-      rdoc.rdoc_dir = "www/public/rdoc-plugins"
-      rdoc.options += RDOC_DEFAULT_OPTS + %w'--main Sequel --no-ignore-invalid'
-      rdoc.rdoc_files.add %w"lib/sequel/{extensions,plugins}/**/*.rb"
-    end
-  
-    desc "Update sequel.rubyforge.org"
-    task :website_rf=>[:website, :website_rdoc] do
-      sh %{rsync -rvt www/public/* rubyforge.org:/var/www/gforge-projects/sequel/}
-    end
+  desc "Make rdoc for website"
+  task :website_rdoc=>[:website_rdoc_main, :website_rdoc_adapters, :website_rdoc_plugins]
+
+  rdoc_task_class.new(:website_rdoc_main) do |rdoc|
+    rdoc.rdoc_dir = "www/public/rdoc"
+    rdoc.options += RDOC_OPTS + %w'--no-ignore-invalid'
+    rdoc.rdoc_files.add %w"README.rdoc CHANGELOG MIT-LICENSE lib/*.rb lib/sequel/*.rb lib/sequel/{connection_pool,dataset,database,model}/*.rb doc/*.rdoc doc/release_notes/*.txt lib/sequel/extensions/migration.rb lib/sequel/extensions/core_extensions.rb"
+  end
+
+  rdoc_task_class.new(:website_rdoc_adapters) do |rdoc|
+    rdoc.rdoc_dir = "www/public/rdoc-adapters"
+    rdoc.options += RDOC_DEFAULT_OPTS + %w'--main Sequel --no-ignore-invalid'
+    rdoc.rdoc_files.add %w"lib/sequel/adapters/**/*.rb"
+  end
+
+  rdoc_task_class.new(:website_rdoc_plugins) do |rdoc|
+    rdoc.rdoc_dir = "www/public/rdoc-plugins"
+    rdoc.options += RDOC_DEFAULT_OPTS + %w'--main Sequel --no-ignore-invalid'
+    rdoc.rdoc_files.add %w"lib/sequel/{extensions,plugins}/**/*.rb doc/core_*"
   end
 end
 
@@ -133,7 +108,8 @@ begin
     desc "#{d} with -w, some warnings filtered"
     task "#{name}_w" do
       ENV['RUBYOPT'] ? (ENV['RUBYOPT'] += " -w") : (ENV['RUBYOPT'] = '-w')
-      sh "#{FileUtils::RUBY} -S rake #{name} 2>&1 | egrep -v \"(spec/.*: warning: (possibly )?useless use of == in void context|: warning: instance variable @.* not initialized|: warning: method redefined; discarding old|: warning: previous definition of)|rspec\""
+      rake = ENV['RAKE'] || "#{FileUtils::RUBY} -S rake"
+      sh "#{rake} #{name} 2>&1 | egrep -v \"(spec/.*: warning: (possibly )?useless use of == in void context|: warning: instance variable @.* not initialized|: warning: method redefined; discarding old|: warning: previous definition of)|rspec\""
     end
 
     desc d
@@ -170,7 +146,7 @@ begin
   spec_with_cov.call("spec_plugin", Dir["spec/extensions/*_spec.rb"].sort_by{rand}, "Run extension/plugin specs"){|t| t.rcov_opts.concat(%w'--exclude "lib/sequel/([a-z_]+\.rb|adapters|connection_pool|database|dataset|model)"')}
   spec_with_cov.call("spec_integration", Dir["spec/integration/*_test.rb"], "Run integration tests")
 
-  %w'postgres sqlite mysql informix oracle firebird mssql db2'.each do |adapter|
+  %w'postgres sqlite mysql informix oracle firebird mssql db2 sqlanywhere'.each do |adapter|
     spec_with_cov.call("spec_#{adapter}", ["spec/adapters/#{adapter}_spec.rb"] + Dir["spec/integration/*_test.rb"], "Run #{adapter} specs"){|t| t.rcov_opts.concat(%w'--exclude "lib/sequel/([a-z_]+\.rb|connection_pool|database|dataset|model|extensions|plugins)"')}
   end
 
diff --git a/bin/sequel b/bin/sequel
index 0c89c3a..665856b 100755
--- a/bin/sequel
+++ b/bin/sequel
@@ -2,6 +2,7 @@
 
 require 'rubygems'
 require 'optparse'
+$: << File.join(File.dirname(__FILE__), '..', 'lib')
 require 'sequel'
 
 code = nil
@@ -26,7 +27,7 @@ options = OptionParser.new do |opts|
   opts.separator "  sequel postgres://localhost/my_blog"
   opts.separator "  sequel config/database.yml"
   opts.separator ""
-  opts.separator "For more information see http://sequel.rubyforge.org"
+  opts.separator "For more information see http://sequel.jeremyevans.net"
   opts.separator ""
   opts.separator "Options:"
 
diff --git a/doc/active_record.rdoc b/doc/active_record.rdoc
index 3fcb603..4a71911 100644
--- a/doc/active_record.rdoc
+++ b/doc/active_record.rdoc
@@ -332,7 +332,7 @@ With either way of eager loading, you must call +all+ to retrieve all records at
   
 Like ActiveRecord, Sequel supports cascading of eager loading for both methods of eager loading.
 
-Unlike ActiveRecord, Sequel allows you to eager load custom associations using the <tt>:eager_loader</tt> and <tt>:eager_grapher</tt> association options.  See the {Advanced Associations guide}[link:files/doc/advanced_associations_rdoc.html] for more details.
+Unlike ActiveRecord, Sequel allows you to eager load custom associations using the <tt>:eager_loader</tt> and <tt>:eager_grapher</tt> association options.  See the {Advanced Associations guide}[rdoc-ref:doc/advanced_associations.rdoc] for more details.
 
 Table aliasing when eager loading via +eager_graph+ is different in Sequel than ActiveRecord.  Sequel will always attempt to use the association name, not the table name, for any associations.  If the association name has already been used, Sequel will append _N to it, where N starts at 0 and increases by 1.  For example, for a self referential association:
 
@@ -396,7 +396,7 @@ ActiveRecord option :: Sequel option
 <tt>:polymorphic</tt>, <tt>:as</tt>, <tt>:source_type</tt> :: The +sequel_polymorphic+ external plugin
 <tt>:include</tt> :: <tt>:eager</tt>, <tt>:eager_graph</tt>
 <tt>:readonly</tt> :: No equivalent, the Sequel <tt>:read_only</tt> option just means the modification methods are not created (it makes the association read only, not records retrieved through the association)
-<tt>:through</tt>, <tt>:source</tt> :: Use a +many_to_many+ association, or the +many_through_many+ plugin
+<tt>:through</tt>, <tt>:source</tt> :: Use a +many_to_many+ or +one_through_one+ association, or the +many_through_many+ plugin
 <tt>:touch</tt> :: The +touch+ plugin
 <tt>:autosave</tt> :: A +before_save+ or +after_save+ hook
 <tt>:finder_sql</tt> :: <tt>:dataset</tt> to set a custom dataset
@@ -581,7 +581,7 @@ Here's a mapping of ActiveRecord +find+ options to <tt>Sequel::Dataset</tt> meth
 :order :: order
 :group :: group
 :limit :: limit
-:offset :: limit # second entry in limit array
+:offset :: offset
 :joins :: join, left_join, etc. # many other join methods
 :include :: eager, eager_graph # eager does preloading, eager_graph does JOINs
 :select :: select
diff --git a/doc/advanced_associations.rdoc b/doc/advanced_associations.rdoc
index 990af38..c403304 100644
--- a/doc/advanced_associations.rdoc
+++ b/doc/advanced_associations.rdoc
@@ -240,6 +240,202 @@ Using the :eager_loader proc, you should be able to eagerly load all association
 that can be eagerly loaded, even if Sequel doesn't natively support such eager
 loading.
 
+== Limited Associations
+
+Sequel supports specifying limits and/or offsets for associations:
+
+  Artist.one_to_many :first_10_albums, :class=>:Album, :order=>:release_date, :limit=>10
+
+For retrieving the associated objects for a single object, this just uses
+a LIMIT:
+
+  artist.first_10_albums
+  # SELECT * FROM albums WHERE (artist_id = 1) LIMIT 10
+
+=== Eager Loading via eager
+
+However, if you want to eagerly load an association, you must use a different
+approach.  Sequel has 4 separate strategies for dealing with such cases.
+
+The default strategy used on all databases is a UNION-based approach, which
+will submit multiple subqueries in a UNION query:
+
+  Artist.where(:id=>[1,2]).eager(:first_10_albums).all
+  # SELECT * FROM (SELECT * FROM albums WHERE (artist_id = 1) LIMIT 10) UNION ALL
+  # SELECT * FROM (SELECT * FROM albums WHERE (artist_id = 2) LIMIT 10)
+
+This is the fastest way to load the associated objects on most databases, as long as
+there is an index on albums.artist_id.  Without an index it is probably the slowest
+approach, so make sure you have an index on the key columns.  If you cannot add an
+index, you'll want to manually specify the :eager_limit_strategy option as shown below.
+
+On PostgreSQL, for *_one associations that don't use an offset, you can
+choose to use the distinct on strategy:
+
+  Artist.one_to_one :first_album, :class=>:Album, :order=>:release_date,
+    :eager_limit_strategy=>:distinct_on
+  Artist.where(:id=>[1,2]).eager(:first_album).all
+  # SELECT DISTINCT ON (albums.artist_id) *
+  # FROM albums
+  # WHERE (albums.artist_id IN (1, 2))
+  # ORDER BY albums.artist_id, release_date
+  
+Otherwise, if the database supports window functions, you can choose to use
+the window function strategy:
+
+  Artist.one_to_many :first_10_albums, :class=>:Album, :order=>:release_date, :limit=>10,
+    :eager_limit_strategy=>:window_function
+  Artist.where(:id=>[1,2]).eager(:first_10_albums).all
+  # SELECT * FROM (
+  #   SELECT *, row_number() OVER (PARTITION BY albums.artist_id ORDER BY release_date) AS x_sequel_row_number_x
+  #   FROM albums
+  #   WHERE (albums.artist_id IN (1, 2))
+  # ) AS t1
+  # WHERE (x_sequel_row_number_x <= 10)
+  
+Alternatively, you can use the :ruby strategy, which will fall back to
+retrieving all records, and then will slice the resulting array to get
+the first 10 after retrieval.
+
+=== Eager Loading via eager_graph_with_options
+
+When eager loading an association via eager_graph (which uses JOINs), the
+situation is similar.  While the UNION-based strategy cannot be used as
+you don't know the records being eagerly loaded in advance, Sequel can use
+a variant of the other 3 strategies.  By default it retrieves all records
+and then does the array slice in ruby.  As eager_graph does not support
+options, to use an eager_graph limit strategy you have to use the
+eager_graph_with_options method with the :limit_strategy option.
+
+The :distinct_on strategy uses DISTINCT ON in a subquery and JOINs that
+subquery:
+
+  Artist.eager_graph_with_options(:first_album, :limit_strategy=>true).all
+  # SELECT artists.id, artists.name, first_album.id AS first_album_id,
+  #        first_album.name AS first_album_name, first_album.artist_id,
+  #        first_album.release_date
+  # FROM artists 
+  # LEFT OUTER JOIN (
+  #   SELECT DISTINCT ON (albums.artist_id) *
+  #   FROM albums
+  #   ORDER BY albums.artist_id, release_date
+  # ) AS first_album ON (first_album.artist_id = artists.id)
+
+The :window_function approach JOINs to a nested subquery using a window
+function:
+
+  Artist.eager_graph_with_options(:first_10_albums, :limit_strategy=>true).all
+  # SELECT artists.id, artists.name, first_10_albums.id AS first_10_albums_id,
+  #        first_10_albums.name AS first_10_albums_name, first_10_albums.artist_id,
+  #        first_10_albums.release_date
+  # FROM artists 
+  # LEFT OUTER JOIN (
+  #   SELECT id, name, artist_id, release_date
+  #   FROM (
+  #     SELECT *, row_number() OVER (PARTITION BY albums.artist_id ORDER BY release_date) AS x_sequel_row_number_x
+  #     FROM albums
+  #   ) AS t1 WHERE (x_sequel_row_number_x <= 10)
+  # ) AS first_10_albums ON (first_10_albums.artist_id = artists.id)
+
+The :correlated_subquery approach JOINs to a nested subquery using a correlated
+subquery:
+
+  Artist.eager_graph_with_options(:first_10_albums, :limit_strategy=>true).all
+  # SELECT artists.id, artists.name, first_10_albums.id AS first_10_albums_id,
+  #        first_10_albums.name AS first_10_albums_name, first_10_albums.artist_id,
+  #        first_10_albums.release_date
+  # FROM artists 
+  # LEFT OUTER JOIN (
+  #   SELECT *
+  #   FROM albums
+  #   WHERE albums.id IN (
+  #     SELECT t1.id
+  #     FROM albums AS t1
+  #     WHERE (t1.artist_id = albums.artist_id)
+  #     ORDER BY release_date
+  #     LIMIT 10
+  #   )
+  # ) AS first_10_albums ON (first_10_albums.artist_id = artists.id)
+
+The reason that Sequel does not automatically use the :distinct_on, :window_function,
+or :correlated_subquery strategy for eager_graph is that it can perform much worse than the
+default of just doing the array slicing in ruby.  If you are only using eager_graph to
+return a few records, it may be cheaper to get all of their associated records and filter
+them in ruby as opposed to computing the set of limited associated records for all rows.
+
+It's recommended to only use an eager_graph limit strategy if you have benchmarked
+it against the default behavior and found it is faster for your use case.
+
+=== Filtering By Associations
+
+In order to return correct results, Sequel automatically uses a limit strategy when
+filtering by limited associations, if the database supports it.  As in the
+eager_graph case, the UNION-based strategy doesn't work.  Unlike in the eager and
+eager_graph cases, the array slicing in ruby approach does not work either, so
+an SQL-based strategy must be used.  Sequel will select an appropriate default
+strategy based on the database you are using, and you can override it using the
+:filter_limit_strategy option.
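+
+For example, to explicitly choose the window function strategy when filtering by
+the limited association defined above (a hedged sketch; availability depends on
+the database):
+
+  Artist.one_to_many :first_10_albums, :class=>:Album, :order=>:release_date, :limit=>10,
+    :filter_limit_strategy=>:window_function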
+
+The :distinct_on strategy:
+
+  Artist.where(:first_album=>Album[1]).all
+  # SELECT *
+  # FROM artists
+  # WHERE (artists.id IN (
+  #   SELECT albums.artist_id
+  #   FROM albums
+  #   WHERE ((albums.artist_id IS NOT NULL) AND (albums.id IN (
+  #     SELECT DISTINCT ON (albums.artist_id) albums.id
+  #     FROM albums
+  #     ORDER BY albums.artist_id, release_date
+  #   )) AND (albums.id = 1))))
+
+The :window_function strategy:
+
+  Artist.where(:first_10_albums=>Album[1]).all
+  # SELECT *
+  # FROM artists
+  # WHERE (artists.id IN (
+  #   SELECT albums.artist_id
+  #   FROM albums
+  #   WHERE ((albums.artist_id IS NOT NULL) AND (albums.id IN (
+  #     SELECT id FROM (
+  #       SELECT albums.id, row_number() OVER (PARTITION BY albums.artist_id ORDER BY release_date) AS x_sequel_row_number_x
+  #       FROM albums
+  #     ) AS t1
+  #     WHERE (x_sequel_row_number_x <= 10)
+  #   )) AND (albums.id = 1))))
+
+The :correlated_subquery strategy:
+
+  Artist.where(:first_10_albums=>Album[1]).all
+  # SELECT *
+  # FROM artists
+  # WHERE (artists.id IN (
+  #   SELECT albums.artist_id
+  #   FROM albums
+  #   WHERE ((albums.artist_id IS NOT NULL) AND (albums.id IN (
+  #     SELECT t1.id
+  #     FROM albums AS t1
+  #     WHERE (t1.artist_id = albums.artist_id)
+  #     ORDER BY release_date
+  #     LIMIT 10
+  #   )) AND (albums.id = 1))))
+
+Note that filtering by limited associations does not work on MySQL, as it does not support
+any of the strategies.  It's also not supported when using composite keys on databases
+that don't support window functions and don't support multiple columns in IN.
+
+== Additional Association Types
+
+While the above examples for limited associations showed one_to_many and one_to_one associations,
+that is only because those are the simplest examples.  Sequel supports all of the same features for
+many_to_many and one_through_one associations that are enabled by default, as well as the
+many_through_many and one_through_many associations that are added by the many_through_many
+plugin.
+
 == ActiveRecord associations
 
 Sequel supports all of associations that ActiveRecord supports, though some
@@ -361,27 +557,14 @@ Sequel::Model:
 
   class Invoice < Sequel::Model
     many_to_one :client
-
-    # has_one :through equivalent 1
-    # eager load with :eager=>:firm option on :client association, and eager loading :client
-    def firm
-      client.firm if client
-    end
-
-    # has_one :through equivalent 2
-    # eager load the usual way
-    many_to_many :firms, :join_table=>:clients, :left_key=>:id, :left_primary_key=>:client_id, :right_key=>:firm_id
-    def firm
-      firms.first
-    end
-
-    # has_one :through equivalent 3
-    # eager loading requires custom :eager_loader proc
-    many_to_one :firm, :dataset=>proc{Firm.join(:clients, :firm_id=>:id, :id=>client_id).select_all(:firms)}
+    one_through_one :firm, :join_table=>:clients, :left_key=>:id, :left_primary_key=>:client_id, :right_key=>:firm_id
   end
 
   Firm.first.invoices
 
+To handle cases where there are multiple join tables, use the many_through_many
+plugin that ships with Sequel.
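+
+A sketch of what that might look like (the intermediate tables and keys here are
+hypothetical; see the plugin's documentation for the exact argument format):
+
+  Invoice.plugin :many_through_many
+  Invoice.one_through_many :firm,
+    [[:clients, :id, :group_id], [:groups, :id, :firm_id]],
+    :left_primary_key=>:client_id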
+
 === Polymorphic Associations
 
 Sequel discourages the use of polymorphic associations, which is the reason they
diff --git a/doc/association_basics.rdoc b/doc/association_basics.rdoc
index 5f2395c..5d51508 100644
--- a/doc/association_basics.rdoc
+++ b/doc/association_basics.rdoc
@@ -41,14 +41,23 @@ As is the code to add a related album to an artist:
 
   @artist.add_album(:name=>'RF')
 
+It also makes it easier to create queries that use joins based on the association:
+
+  Artist.association_join(:albums)
+  # SELECT * FROM artists
+  # INNER JOIN albums ON (albums.artist_id = artists.id)
+
 == The Types of Associations
 
-Sequel has four different association types built in:
+Sequel has five different association types built in:
 
 * many_to_one
 * one_to_many
 * one_to_one
 * many_to_many
+* one_through_one
+
+Additional association types are supported via plugins that ship with Sequel.
 
 === many_to_one
 
@@ -68,13 +77,19 @@ table for each row in the associated table.
     many_to_one :artist
   end
 
-=== one_to_many
+=== one_to_many and one_to_one
 
 The one_to_many association is used when the table for the associated class
 contains a foreign key that references the primary key in the table for the
 current class.  It is named because for each row in the current table there
 can be many rows in the associated table:
 
+The one_to_one association can be thought of as a subset of the one_to_many association,
+but where there can only be either 0 or 1 records in the associated table.  This is
+useful if there is a unique constraint on the foreign key field in the associated table.
+It's also useful if you want to impose an order on the association and just want the
+first record returned.
+
   # Database schema:
   #  artists            albums
   #   :id   <----\       :id
@@ -84,27 +99,12 @@ can be many rows in the associated table:
   class Artist
     # Uses plural form of associated model name
     one_to_many :albums
-  end
-
-=== one_to_one
-
-The one_to_one association can be thought of as a subset of the one_to_many association,
-but where there can only be either 0 or 1 records in the associated table.
-It is the least frequently used of the four associations.  If you assume
-each artist cannot be associated with more than one album:
-
-  # Database schema:
-  #  artists            albums
-  #   :id   <----\       :id
-  #   :name       \----- :artist_id 
-  #                      :name
 
-  class Artist
     # Uses singular form of associated model name
     one_to_one :album
   end
-  
-=== many_to_many
+
+=== many_to_many and one_through_one
 
 The many_to_many association allows each row in the current table to be associated
 to many rows in the associated table, and each row in the associated table to
@@ -112,6 +112,14 @@ many rows in the current table, by using a join table to associate the two table
 If you assume each artist can have multiple albums and each album can have multiple
 artists:
 
+The one_through_one association can be thought of as a subset of the many_to_many
+association, but where there can only be 0 or 1 records in the associated table.
+This is useful if there is a unique constraint on the foreign key in the join table
+that references the current table.  It's also useful if you want to impose an order
+on the association and just want the first record returned.  The one_through_one
+association is so named because it sets up a one-to-one association through a
+single join table.
+
   # Database schema:
   #  albums 
   #   :id   <----\ 
@@ -124,14 +132,15 @@ artists:
   class Artist
     # Uses plural form of associated model name
     many_to_many :albums
-  end
-  class Album
-    many_to_many :artists
+
+    # Uses singular form of associated model name
+    one_through_one :album
   end
 
 === Differences Between many_to_one and one_to_one
 
-If you want to setup a 1-1 relationship between two models, you have to use
+If you want to set up a 1-1 relationship between two models, where the
+foreign key in one table references the associated table directly, you have to use
 many_to_one in one model, and one_to_one in the other model.  How do you
 know which to use in which model?
 
@@ -293,6 +302,8 @@ Examples:
   @artist.remove_album(@album)
   @artist.remove_all_albums
 
+one_through_one associations do not have any modification methods added.
+
 == Caching
 
 Associations are cached after being retrieved:
@@ -334,7 +345,7 @@ ending in +_dataset+ that returns a dataset representing the objects in the asso
   @album.artist_id
   # 10
   @album.artist_dataset
-  # SELECT * FROM artists WHERE (id = 10)
+  # SELECT * FROM artists WHERE (id = 10) LIMIT 1
   
   @artist.id
   # 20
@@ -349,7 +360,7 @@ it can be further filtered, ordered, etc.:
    order(:copies_sold).
    limit(10)
   # SELECT * FROM albums
-  # WHERE ((artist_id = 20) AND (name LIKE 'A%'))
+  # WHERE ((artist_id = 20) AND (name LIKE 'A%' ESCAPE '\'))
   # ORDER BY copies_sold LIMIT 10
 
 Records retrieved using the +_dataset+ method are not cached in the
@@ -400,7 +411,7 @@ can combine the approaches:
   @artist.albums_dataset.where(:publisher=>@publisher)
 
 This doesn't just work for +many_to_one+ associations, it also works for
-+one_to_one+, +one_to_many+, and +many_to_many+ associations:
+the other associations:
 
   Album.one_to_one :album_info
   # The album related to that AlbumInfo instance
@@ -414,6 +425,10 @@ This doesn't just work for +many_to_one+ associations, it also works for
   # All albums related to that Tag instance
   Album.where(:tags=>Tag[4])
 
+  Album.one_through_one :tag
+  # All albums related to that Tag instance
+  Album.where(:tag=>Tag[4])
+
 Note that for +one_to_many+ and +many_to_many+ associations, you still
 use the plural form even though only a single model object is given.
 
@@ -445,7 +460,7 @@ use separate filter calls:
 
   Album.where(:tags=>@tag1).where(:tags=>@tag2)
 
-Or the the array form of condition specifiers:
+Or the array form of condition specifiers:
 
   Album.where([[:tags, @tag1], [:tags, @tag2]])
 
@@ -462,8 +477,19 @@ the other forms, this can be inverted:
  
 This will return all albums whose artist does not start with 'A'.
 
-Note that filtering by associations only works correctly for simple
-associations (ones without conditions).
+Filtering by associations even works for associations that have
+conditions added via the :conditions option or a block:
+
+  Album.many_to_many :popular_tags, :clone=>:tags do |ds|
+    ds.where{times_used > 1000}
+  end
+  Album.where(:popular_tags=>[@tag1, @tag2])
+
+This will return all albums whose popular tags would include
+at least one of those tags.
+
+Note that filtering by associations does not work for associations
+that use blocks with instance-specific code.
 
 == Name Collisions
 
@@ -675,9 +701,17 @@ The add_<i>association</i> method returns the now associated object:
 
   @album = @artist.add_album(:name=>'RF')
 
+Note that the add_* methods for +one_to_many+ persist the changes by
+saving the passed in (or newly created) object.  However, to avoid
+silent failures of these methods, they explicitly raise exceptions
+even when raise_on_save_failure is false for the associated model.
+You can disable this behavior (i.e. return nil instead of raising
+exceptions on a save failure) by setting the <tt>:raise_on_save_failure=>false</tt>
+option for the association.
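+
+For example (a minimal sketch):
+
+  Artist.one_to_many :albums, :raise_on_save_failure=>false
+  @artist.add_album(:name=>'RF')
+  # => nil if saving the album fails, instead of raising an exception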
+
 === remove_<i>association</i>(object_to_disassociate) (e.g. remove_album) [+one_to_many+ and +many_to_many+]
 
-The remove_<i>association</i> method disassociates the the passed object from
+The remove_<i>association</i> method disassociates the passed object from
 the current object.  For +one_to_many+ associations, it sets the foreign key of
 the associated object to NULL, and saves the associated object.  For
 +many_to_many+ associations, this deletes the matching row in the join table.
@@ -735,7 +769,7 @@ added methods:
   ds.model_object # @artist
   ds.association_reflection # same as Artist.association_reflection(:albums)
 
-For a more info on Sequel's reflection capabilities see the {Reflection page}[link:files/doc/reflection_rdoc.html].
+For more information on Sequel's reflection capabilities, see the {Reflection page}[rdoc-ref:doc/reflection.rdoc].
 
 == Overriding Method Behavior
 
@@ -944,7 +978,7 @@ Use an array with two arguments for the value to specify a limit and an offset.
 This probably doesn't make a lot of sense for *_to_one associations, though you
 could use it to specify an offset.
 
-==== :join_table [+many_to_many+]
+==== :join_table [+many_to_many+, +one_through_one+]
 
 Name of table that includes the foreign keys to both the current model and the
 associated model, as a symbol.  Defaults to the name of current model and name
@@ -955,7 +989,7 @@ Here's an example of the defaults:
   Album.many_to_many :artists # :join_table=>:albums_artists
   Person.many_to_many :colleges # :join_table=>:colleges_people
 
-==== :left_key [+many_to_many+]
+==== :left_key [+many_to_many+, +one_through_one+]
 
 Foreign key in join table that points to current model's primary key, as a
 symbol.  Defaults to :"#{model_name.underscore}_id".
@@ -964,10 +998,11 @@ symbol.  Defaults to :"#{model_name.underscore}_id".
 
 Can use an array of symbols for a composite key association.
 
-==== :right_key [+many_to_many+]
+==== :right_key [+many_to_many+, +one_through_one+]
 
 Foreign key in join table that points to associated model's primary key, as a
-symbol.  Defaults to :"#{association_name.singularize}_id".
+symbol.  Defaults to :"#{association_name.singularize}_id" for +many_to_many+
+and :"#{association_name}_id" for +one_through_one+.
 
   Album.many_to_many :tags # :right_key=>:tag_id
   
@@ -1041,9 +1076,9 @@ the artist can perform any one of four tasks for the lyric:
 
 A module or array of modules to extend the dataset with.  These are used to
 set up association extensions.  For more information, please see the
-{Advanced Associations page}[link:files/doc/advanced_associations_rdoc.html].
+{Advanced Associations page}[rdoc-ref:doc/advanced_associations.rdoc].
 
-==== :primary_key
+==== :primary_key [+many_to_one+, +one_to_one+, +one_to_many+]
 
 The column that the :key option references, as a symbol. For +many_to_one+
 associations, this column is in the associated table. For +one_to_one+ and
@@ -1055,7 +1090,7 @@ array of symbols for a composite key association.
   Artist.one_to_many :albums # :primary_key=>:arid
   Album.many_to_one :artist # :primary_key=>:arid
 
-==== :left_primary_key [+many_to_many+]
+==== :left_primary_key [+many_to_many+, +one_through_one+]
 
 Column in current table that :left_key option points to, as a symbol.
 Defaults to primary key of current table.
@@ -1065,7 +1100,7 @@ Defaults to primary key of current table.
 
 Can use an array of symbols for a composite key association.
 
-==== :right_primary_key [+many_to_many+]
+==== :right_primary_key [+many_to_many+, +one_through_one+]
 
 Column in associated table that :right_key points to, as a symbol.
 Defaults to primary key of the associated table.
@@ -1075,7 +1110,7 @@ Defaults to primary key of the associated table.
 
 Can use an array of symbols for a composite key association.
 
-==== :join_table_block [+many_to_many+]
+==== :join_table_block [+many_to_many+, +one_through_one+]
 
 A proc that can be used to modify the dataset used in the add/remove/remove_all
 methods.  It's separate from the association block, as that is called on a
@@ -1235,7 +1270,7 @@ to eagerly load:
 
 A custom loader to use when eagerly load associated objects via eager.
 For many details and examples of custom eager loaders, please see the
-{Advanced Associations guide}[link:files/doc/advanced_associations_rdoc.html].
+{Advanced Associations guide}[rdoc-ref:doc/advanced_associations.rdoc].
 
 ==== :eager_loader_key
 
@@ -1367,12 +1402,14 @@ needed, as one of the other eager_graph related association options is usually s
 If specified, should be a proc that accepts a single hash argument, which will contain
 at least the following keys:
 
-:self :: The dataset that is doing the eager loading
-:table_alias :: An alias to use for the table to graph for this association.
-:implicit_qualifier :: The alias that was used for the current table (since you can cascade associations).
 :callback :: A callback proc used to dynamically modify the dataset to graph into the
              current dataset, before such graphing is done. This is nil if no callback
              proc is used.
+:implicit_qualifier :: The alias that was used for the current table (since you can cascade associations).
+:join_type :: Override the join type to use when graphing.
+:limit_strategy :: The limit strategy symbol to use when graphing (for limited associations only)
+:self :: The dataset that is doing the eager loading
+:table_alias :: An alias to use for the table to graph for this association.
 
 Example:
 
@@ -1391,7 +1428,15 @@ Sequel has to do some guess work when attempting to add the association's
 order to an eager_graphed dataset.  In most cases it does so correctly, but
 if it has problems, you'll probably want to set this option to false.
 
-==== :graph_join_table_conditions [+many_to_many+]
+==== :graph_order
+
+Override the order added when using eager_graph, instead of using the one
+defined in :order.  This is useful if :order contains qualified identifiers,
+as the qualifiers may not match the aliases automatically used by eager_graph.
+This should contain unqualified identifiers, and eager_graph will automatically
+qualify them with the appropriate alias.
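+
+A hedged sketch (the column names here are illustrative):
+
+  Artist.one_to_many :albums,
+    :order=>Sequel.desc(:albums__release_date), # qualified; may not match eager_graph's alias
+    :graph_order=>Sequel.desc(:release_date)    # unqualified; eager_graph qualifies it itself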
+
+==== :graph_join_table_conditions [+many_to_many+, +one_through_one+]
 
 The additional conditions to use on the SQL join for the join table when
 eagerly loading the association via eager_graph. Should be a hash or an array
@@ -1406,7 +1451,7 @@ has received a specific degree:
     :join_table=>:degrees_received, 
     :graph_join_table_conditions=>{:degree=>'BS'}
 
-==== :graph_join_table_block [+many_to_many+]
+==== :graph_join_table_block [+many_to_many+, +one_through_one+]
 
 The block to pass to join_table for the join table when eagerly loading the
 association via eager_graph.  This is used for similar reasons as :graph_block,
@@ -1428,7 +1473,7 @@ has received a bachelor's degree (degree starting with B):
 This should be done when graphing the join table, instead of when graphing the
 final table, as :degree is a column of the join table.
 
-==== :graph_join_table_join_type [+many_to_many+]
+==== :graph_join_table_join_type [+many_to_many+, +one_through_one+]
 
 The type of SQL join to use for the join table when eagerly loading the
 association via eager_graph.  Defaults to the :graph_join_type option or
@@ -1436,7 +1481,7 @@ association via eager_graph.  Defaults to the :graph_join_type option or
 you want to use a different join type when JOINing to the join table than
 you want to use for JOINing to the final table.
 
-==== :graph_join_table_only_conditions [+many_to_many+]
+==== :graph_join_table_only_conditions [+many_to_many+, +one_through_one+]
 
 The conditions to use on the SQL join for the join table when eagerly loading
 the association via eager_graph, instead of the default conditions specified
@@ -1503,12 +1548,12 @@ Like the :primary_key option, but :primary_key references the method name, while
 Like the :key option, but :key references the column
 name, while :key_method references the method name.
 
-==== :left_primary_key_column [+many_to_many+]
+==== :left_primary_key_column [+many_to_many+, +one_through_one+]
 
 Like the :left_primary_key option, but :left_primary_key references the method name, while
 :left_primary_key_column references the underlying column.
 
-==== :right_primary_key_method [+many_to_many+]
+==== :right_primary_key_method [+many_to_many+, +one_through_one+]
 
 Like the :right_primary_key option, but :right_primary_key references the column
 name, while :right_primary_key_method references the method name.
@@ -1631,6 +1676,16 @@ add_<i>association</i> method, Sequel will automatically save the object.
 If you don't want to validate objects when these implicit saves are done,
 the validate option should be set to false.
 
+==== :raise_on_save_failure [+one_to_many+ associations]
+
+Set to false to not raise an exception when validation or a before hook
+fails when implicitly saving an associated object in the add_* or remove_*
+methods.  This mirrors the raise_on_save_failure model setting, which these
+methods do not respect (by design, since the setting is dangerous).
+
+If you use this option, you must explicitly check all add_* and remove_* return
+values to see if they were successful.
+
 ==== :allow_eager
 
 If set to false, you cannot load the association eagerly via eager or
@@ -1646,7 +1701,7 @@ instances.
 ==== :cartesian_product_number 
 
 The number of joins completed by this association that could cause more
-than one row for each row in the current table (default: 0 for *_to_one
+than one row for each row in the current table (default: 0 for *_one
 associations, 1 for *_to_many associations).
 
 This should only be modified in specific cases.  For example, if you have
@@ -1671,41 +1726,38 @@ plugin.
 
 ==== :eager_limit_strategy
 
-This setting determines what strategy to use for loading the associations
+This setting determines what strategy to use for eager loading the associations
 that use the :limit setting to limit the number of returned records. You
 can't use LIMIT directly, since you want a limit for each group of
 associated records, not a LIMIT on the total number of records returned
 by the dataset.
 
-By default, if a *_to_many association uses a limit or offset, or a
-one_to_one association uses an offset, Sequel will choose to use an
-eager limit strategy.  The default strategy depends on the database
-being used.  For databases which support window functions, a window
-function will be used.  Other databases will just have an ruby array
-slice done on the entire record set.
-
-For one_to_one associations without offsets, no strategy is used by default
-because none is needed for a true one_to_one association (since there
-is only one associated record per current record).  However, if you are
-using a one_to_one association where the relationship is really one_to_many,
-and using an order to pick the first matching row, then if you don't
-specify an :eager_limit_strategy option, you'll be loading all related
-rows just to have Sequel ignore all rows after the first.  By using a
-strategy to change the query to only return one associated record per
-current record, you can get much better database performance.
-
 In general, Sequel picks an appropriate strategy, so it is not usually
-necessary to specify a specific strategy.  The exception is for one_to_one
-associations where there is more than one associated record per current
-record.  For those, you should probably specify true to this option to have
-Sequel pick an appropriate strategy.
-
-You can also specify a symbol to manually choose a strategy.  The available
-strategies are:
+necessary to specify a strategy.  You can specify true for this option to
+have Sequel choose which strategy to use (this is the default).  You can
+specify a symbol to manually choose a strategy.  The available strategies are:
 
+:union :: Uses one or more UNION queries with a subquery for each record
+          you are eagerly loading for (this is the default strategy).
 :distinct_on :: Uses DISTINCT ON to ensure only the first matching record
-                is loaded (only used for one_to_one associations without
+                is loaded (only used for one_*_one associations without
                 offsets on PostgreSQL).
 :window_function :: Uses a ROW_NUMBER window function to ensure the
                     correctly limited/offset records are returned.
 :ruby :: Uses ruby array slicing to emulate database limiting/offsetting.
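+
+For example, to explicitly request the UNION strategy for a limited association
+(a minimal sketch with an illustrative association name):
+
+  Artist.one_to_many :best_albums, :class=>:Album, :order=>Sequel.desc(:copies_sold),
+    :limit=>5, :eager_limit_strategy=>:union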
+
+==== :subqueries_per_union
+
+The number of subqueries per union query to use when eager loading for a
+limited association using a union strategy.  This defaults to 40, but the
+optimum number depends on the database in use and the latency between the
+database and the application.
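+
+For example (a minimal sketch):
+
+  Album.many_to_many :tags, :order=>:name, :limit=>10, :subqueries_per_union=>20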
+
+==== :filter_limit_strategy
+
+The strategy to use when filtering by limited associations.  In general
+Sequel will choose either a :distinct_on, :window_function, or
+:correlated_subquery strategy based on the association type and what
+the database supports, but you can override that if necessary using
+this option.
+
diff --git a/doc/bin_sequel.rdoc b/doc/bin_sequel.rdoc
index c8b619c..10bb0e3 100644
--- a/doc/bin_sequel.rdoc
+++ b/doc/bin_sequel.rdoc
@@ -24,7 +24,7 @@ In general, you probably want to provide a connection string argument to bin/seq
   sequel postgres://user:pass@host/database_name
   sequel mysql2://user:pass@host/database_name
 
-See the {Connecting to a database guide}[link:files/doc/opening_databases_rdoc.html] for more details about and examples of connection strings.
+See the {Connecting to a database guide}[rdoc-ref:doc/opening_databases.rdoc] for more details about and examples of connection strings.
 
 === YAML Connection File
 
@@ -77,7 +77,7 @@ You can use the -M attribute to set the version to migrate to:
 
   sequel -m /path/to/migrations/dir -M 3 postgres://host/database
 
-See the {migration guide}[link:files/doc/migration_rdoc.html] for more details about migrations.
+See the {migration guide}[rdoc-ref:doc/migration.rdoc] for more details about migrations.
 
 === Dump Schemas
 
diff --git a/doc/cheat_sheet.rdoc b/doc/cheat_sheet.rdoc
index 2161db9..ac6a67a 100644
--- a/doc/cheat_sheet.rdoc
+++ b/doc/cheat_sheet.rdoc
@@ -63,7 +63,7 @@ Without a filename argument, the sqlite adapter will setup a new sqlite database
   dataset.inject(0){|sum, r| sum + r[:value]}
   dataset.sum(:value) # better
 
-== Filtering (see also {Dataset Filtering}[link:files/doc/dataset_filtering_rdoc.html])
+== Filtering (see also {Dataset Filtering}[rdoc-ref:doc/dataset_filtering.rdoc])
 
 === Equality
 
@@ -121,6 +121,7 @@ Without a filename argument, the sqlite adapter will setup a new sqlite database
 
   dataset.limit(30) # LIMIT 30
   dataset.limit(30, 10) # LIMIT 30 OFFSET 10
+  dataset.limit(30).offset(10) # LIMIT 30 OFFSET 10
 
 == Joins
 
@@ -215,6 +216,5 @@ Savepoints can be used if the database supports it:
 
   dataset.sql # "SELECT * FROM items"
   dataset.delete_sql # "DELETE FROM items"
-  dataset.where(:name => 'sequel').exists # "EXISTS ( SELECT * FROM items WHERE name = 'sequel' )"
   dataset.columns #=> array of columns in the result set, does a SELECT
   DB.schema(:items) => [[:id, {:type=>:integer, ...}], [:name, {:type=>:string, ...}], ...]
diff --git a/doc/core_extensions.rdoc b/doc/core_extensions.rdoc
index b5f1e31..6644ae1 100644
--- a/doc/core_extensions.rdoc
+++ b/doc/core_extensions.rdoc
@@ -164,17 +164,17 @@ Note the reversed order of the arguments.  For the Symbol#qualify method, the ar
 
 Symbol#like returns a case sensitive LIKE expression between the identifier and the given argument:
 
-  :a.like('b%') # SQL: a LIKE 'b%'
+  :a.like('b%') # SQL: a LIKE 'b%' ESCAPE '\'
 
 Alternative: Sequel.like:
 
   Sequel.like(:a, 'b%')
 
-==== like
+==== ilike
 
 Symbol#ilike returns a case insensitive LIKE expression between the identifier and the given argument:
 
-  :a.ilike('b%') # SQL: a ILIKE 'b%'
+  :a.ilike('b%') # SQL: a ILIKE 'b%' ESCAPE '\'
 
 Alternative: Sequel.ilike:
 
diff --git a/doc/dataset_basics.rdoc b/doc/dataset_basics.rdoc
index bde12d0..335474f 100644
--- a/doc/dataset_basics.rdoc
+++ b/doc/dataset_basics.rdoc
@@ -81,7 +81,7 @@ WHERE:: where, filter, exclude, exclude_where, and, or, grep, invert, unfiltered
 GROUP:: group, group_by, group_and_count, select_group, ungrouped
 HAVING:: having, exclude_having, invert, unfiltered
 ORDER:: order, order_by, order_append, order_prepend, order_more, reverse, reverse_order, unordered
-LIMIT:: limit, unlimited
+LIMIT/OFFSET:: limit, offset, unlimited
 compounds:: union, intersect, except
 locking:: for_update, lock_style
 common table expressions:: with, with_recursive
diff --git a/doc/dataset_filtering.rdoc b/doc/dataset_filtering.rdoc
index f80eba7..30be659 100644
--- a/doc/dataset_filtering.rdoc
+++ b/doc/dataset_filtering.rdoc
@@ -29,7 +29,7 @@ If you are specifying a filter/selection/order, you can use a virtual row block:
 
   items.select{avg(price)}
 
-You can also use the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html] and the +sql_function+ method:
+You can also use the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc] and the +sql_function+ method:
 
   :avg.sql_function(:price)
 
@@ -143,12 +143,12 @@ Or against SQL functions:
 You can search SQL strings in a case sensitive manner using the Sequel.like method:
 
   items.where(Sequel.like(:name, 'Acme%')).sql
-  #=> "SELECT * FROM items WHERE (name LIKE 'Acme%')"
+  #=> "SELECT * FROM items WHERE (name LIKE 'Acme%' ESCAPE '\')"
 
 You can search SQL strings in a case insensitive manner using the Sequel.ilike method:
 
   items.where(Sequel.ilike(:name, 'Acme%')).sql
-  #=> "SELECT * FROM items WHERE (name ILIKE 'Acme%')"
+  #=> "SELECT * FROM items WHERE (name ILIKE 'Acme%' ESCAPE '\')"
 
 You can specify a Regexp as a like argument, but this will probably only work
 on PostgreSQL and MySQL:
@@ -159,19 +159,19 @@ on PostgreSQL and MySQL:
 Like can also take more than one argument:
 
   items.where(Sequel.like(:name, 'Acme%', /Beta.*/)).sql
-  #=> "SELECT * FROM items WHERE ((name LIKE 'Acme%') OR (name ~ 'Beta.*'))"
+  #=> "SELECT * FROM items WHERE ((name LIKE 'Acme%' ESCAPE '\') OR (name ~ 'Beta.*'))"
 
 == String concatenation
 
 You can concatenate SQL strings using Sequel.join: 
 
   items.where(Sequel.join([:name, :comment]).like('%acme%')).sql
-  #=> "SELECT * FROM items WHERE ((name || comment) LIKE 'Acme%')"
+  #=> "SELECT * FROM items WHERE ((name || comment) LIKE 'Acme%' ESCAPE '\')"
 
 Sequel.join also takes a join argument:
 
   items.filter(Sequel.join([:name, :comment], ' ').like('%acme%')).sql
-  #=> "SELECT * FROM items WHERE ((name || ' ' || comment) LIKE 'Acme%')"
+  #=> "SELECT * FROM items WHERE ((name || ' ' || comment) LIKE 'Acme%' ESCAPE '\')"
 
 == Filtering using sub-queries
 
diff --git a/doc/migration.rdoc b/doc/migration.rdoc
index 5ff8f58..72e5dc1 100644
--- a/doc/migration.rdoc
+++ b/doc/migration.rdoc
@@ -26,7 +26,7 @@ you generally need to run Sequel's migrator with <tt>bin/sequel -m</tt>:
 Migrations in Sequel use a very simple DSL via the <tt>Sequel.migration</tt>
 method, and inside the DSL, use the <tt>Sequel::Database</tt> schema
 modification methods such as +create_table+ and +alter_table+.
-See the {schema modification guide}[link:files/doc/schema_modification_rdoc.html]
+See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
 for details on the schema modification methods you can use.
 
 == A Basic Migration
@@ -144,7 +144,7 @@ Migrations themselves do not contain any schema modification methods, but they m
 any of the <tt>Sequel::Database</tt> modification methods, of which there are many.  The main
 ones are +create_table+ and +alter_table+, but Sequel also comes with numerous other schema
 modification methods, most of which are shortcuts for +alter_table+ (all of these methods are
-described in more detail in the {schema modification guide}[link:files/doc/schema_modification_rdoc.html]):
+described in more detail in the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]):
 
 * add_column
 * add_index
@@ -482,6 +482,36 @@ depend on which migrations that have been applied.  Applied migrations greater
 than that version will be migrated down, while unapplied migrations less than
 or equal to that version will be migrated up.
 
+== Running migrations from a Rake task
+
+You can also incorporate migrations into a Rakefile. Here's an example
+using integer migration versions.
+
+  namespace :db do
+    desc "Run migrations"
+    task :migrate, [:version] do |t, args|
+      require "sequel"
+      Sequel.extension :migration
+      db = Sequel.connect(ENV.fetch("DATABASE_URL"))
+      if args[:version]
+        puts "Migrating to version #{args[:version]}"
+        Sequel::Migrator.run(db, "db/migrations", target: args[:version].to_i)
+      else
+        puts "Migrating to latest"
+        Sequel::Migrator.run(db, "db/migrations")
+      end
+    end
+  end
+
+To migrate to the latest version, run:
+
+  rake db:migrate
+
+This Rake task takes an optional argument specifying the target
+version. To migrate to version 42, run:
+
+  rake db:migrate[42]
+
 == Verbose migrations
 
 By default, <tt>sequel -m</tt> operates as a well behaved command line utility
diff --git a/doc/model_hooks.rdoc b/doc/model_hooks.rdoc
index 625cb26..9822904 100644
--- a/doc/model_hooks.rdoc
+++ b/doc/model_hooks.rdoc
@@ -81,6 +81,15 @@ If you aren't using transactions when saving or destroying model objects, and th
 
 The purpose of these hooks is dealing with external systems that are interacting with the same database.  For example, let's say you have a model that stores a picture, and you have a background job library that makes thumbnails of all of the pictures.  So when a model object is created, you want to add a background job that will create the thumbnail for the picture.  If you used after_save for this and transactions are being used, you are subject to a race condition where the background [...]
 
+Note that when using the after_commit or after_rollback hooks, you don't know whether the saved object was newly created or updated.  If you only want to run an action after commit of a newly created record, you need to use the Database's after_commit inside the model's after_create hook:
+
+  class Album < Sequel::Model
+    def after_create
+      super
+      db.after_commit{update_external_cache}
+    end
+  end
+
 == Running Hooks
 
 Sequel does not provide a simple way to turn off the running of save/create/update hooks.  If you attempt to save a model object, the save hooks are always called.  All model instance methods that modify the database call save in some manner, so you can be sure that if you define the hooks, they will be called when you save the object.
diff --git a/doc/mssql_stored_procedures.rdoc b/doc/mssql_stored_procedures.rdoc
new file mode 100644
index 0000000..918df0f
--- /dev/null
+++ b/doc/mssql_stored_procedures.rdoc
@@ -0,0 +1,43 @@
+= Stored Procedures in MSSQL
+
+This guide documents the workaround implemented to allow executing stored procedures
+in MSSQL, as well as getting the value of output variables.
+
+== Simple Execution
+
+The following stored procedure is used as an example:
+
+  CREATE PROCEDURE dbo.SequelTest(
+    @Input varchar(25),
+    @Output int OUTPUT
+  )
+  AS
+    SET @Output = LEN(@Input)
+    RETURN 0
+
+Execute it as follows:
+
+  DB.call_mssql_sproc(:SequelTest, {:args => ['Input String', :output]})
+
+Use the +:output+ symbol to denote an output variable. The result will contain a
+hash of the output variables, as well as the result code and number of affected rows:
+
+  {:result => 0, :numrows => 1, :var1 => "1"}
+
+Output variables will be strings by default. To specify their type, include the
+SQL type:
+
+  DB.call_mssql_sproc(:SequelTest, {:args => ['Input String', [:output, 'int']]})
+
+Result:
+
+  {:result => 0, :numrows => 1, :var1 => 1}
+
+Output variables will be named +var#{n}+ where n is their zero-indexed position
+in the parameter list. To name an output variable, include its name:
+
+  DB.call_mssql_sproc(:SequelTest, {:args => ['Input String', [:output, nil, 'Output']]})
+
+Result:
+
+  {:result => 0, :numrows => 1, :output => "1"}
diff --git a/doc/object_model.rdoc b/doc/object_model.rdoc
index c1afb02..b4e23ea 100644
--- a/doc/object_model.rdoc
+++ b/doc/object_model.rdoc
@@ -204,7 +204,7 @@ If Sequel needs to represent an SQL concept that does not map directly to an exi
 ruby class, it will generally use a Sequel::SQL::Expression subclass to represent that
 concept.
 
-Some of the examples below show examples that require the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html].
+Some of the examples below show examples that require the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc].
 
 === Sequel::LiteralString
 
@@ -291,12 +291,17 @@ The following shortcuts exist for creating Sequel::SQL::QualifiedIdentifier obje
 Sequel::SQL::AliasedExpression objects represent aliased expressions in SQL.  The alias
 is treated as an identifier, but the expression can be an arbitrary Sequel expression:
 
-  Sequel::SQL::AliasedExpression.new(:column, :alias) # "column" AS "alias"
+  Sequel::SQL::AliasedExpression.new(:column, :alias)
+  # "column" AS "alias"
+
+  Sequel::SQL::AliasedExpression.new(:table, :alias, [:column_alias1, :column_alias2])
+  # "table" AS "alias"("column_alias1", "column_alias2")
 
 The following shortcuts exist for creating Sequel::SQL::AliasedExpression objects:
 
   Sequel.expr(:column___alias)
   Sequel.as(:column, :alias)
+  Sequel.as(:column, :alias, [:column_alias1, :column_alias2])
   :column.as(:alias) # core_extensions extension
   
 === Sequel::SQL::ComplexExpression
@@ -530,11 +535,11 @@ block expression support:
 In the above code, the block is instance-evaled inside a VirtualRow instance.
 
 These objects are usually not instantiated manually.  See the
-{Virtual Row Guide}[link:files/doc/virtual_rows_rdoc.html] for details.
+{Virtual Row Guide}[rdoc-ref:doc/virtual_rows.rdoc] for details.
 
 === Sequel::SQL::Window
 
-Sequel::SQL::Window objects represent the windows used by Sequel::SQL::WindowFunction.
+Sequel::SQL::Window objects represent the windows used by Sequel::SQL::Function.
 They use a hash-based API, supporting the :frame, :order, :partition, and :window
 options:
 
@@ -544,17 +549,6 @@ options:
   Sequel::SQL::Window.new(:partition=>:a, :frame=>:all)
   # (PARTITION BY "a" ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
 
-=== Sequel::SQL::WindowFunction
-
-Sequel::SQL::WindowFunction objects represent SQL window function calls.  These
-just combine a Sequel::SQL::Function with a Sequel::SQL::Window:
-
-  function = Sequel::SQL::Function.new(:f, 1)
-  window = Sequel::SQL::Window.new(:order=>:a)
-  Sequel::SQL::WindowFunction.new(function, window) # f(1) OVER (ORDER BY "a")
-
-Virtual rows offer a shortcut for creating Sequel::SQL::Window objects.
-
 === Sequel::SQL::Wrapper
 
 Sequel::SQL::Wrapper objects wrap arbitrary objects so that they can be used
diff --git a/doc/opening_databases.rdoc b/doc/opening_databases.rdoc
index edc8811..665c9ff 100644
--- a/doc/opening_databases.rdoc
+++ b/doc/opening_databases.rdoc
@@ -28,7 +28,7 @@ You can use URI query parameters to specify options:
 
 You can also pass an additional option hash with the connection string:
 
-  DB = Sequel.connect('postgres://localhost/blog' :user=>'user', :password=>'password')
+  DB = Sequel.connect('postgres://localhost/blog', :user=>'user', :password=>'password')
 
 You can also just use an options hash without a connection string.  If you do this, you must
 provide the adapter to use:
@@ -76,9 +76,11 @@ These options are shared by all adapters unless otherwise noted.
 :test :: Whether to test that a valid database connection can be made (false by default)
 :user :: The user account name to use logging in
 
-The following options can be specified and are passed to the the database's internal connection pool.
+The following options can be specified and are passed to the database's internal connection pool.
 
-:after_connect :: A proc called after a new connection is made, with the connection object (default: nil)
+:after_connect :: A callable object called after each new connection is made, with the
+                  connection object (and server argument if the callable accepts 2 arguments),
+                  useful for customizations that you want to apply to all connections (default: nil).
 :max_connections :: The maximum size of the connection pool (default: 4 connections on most databases)
 :pool_sleep_time :: The number of seconds to sleep before trying to acquire a connection again (default: 0.001 seconds)
 :pool_timeout :: The number of seconds to wait if a connection cannot be acquired before raising an error (default: 5 seconds)
@@ -271,6 +273,7 @@ Example connection strings:
   jdbc:firebirdsql:localhost/3050:/path/to/database.fdb
   jdbc:jdbcprogress:T:hostname:port:database
   jdbc:cubrid:hostname:port:database:::
+  jdbc:sqlanywhere://localhost?DBN=Test;UID=user;PWD=password
 
 You can also use JNDI connection strings:
 
@@ -282,6 +285,10 @@ The following additional options are supported:
                   Setting to false roughly doubles performance when selecting large numbers of rows.
                   Note that you can't provide this option inside the connection string (as that is passed
                   directly to JDBC), you have to pass it as a separate option.
+:driver :: Specify the Java driver class to use to connect to the database.  This only has
+           an effect if the database type is not recognized from the connection string,
+           and only helps cases where <tt>java.sql.DriverManager.getConnection</tt> does not
+           return a connection.
 :login_timeout :: Set the login timeout on the JDBC connection (in seconds).
 
 === mysql 
@@ -299,6 +306,10 @@ The following additional options are supported:
 :config_default_group :: The default group to read from in the MySQL config file.
 :config_local_infile :: If provided, sets the Mysql::OPT_LOCAL_INFILE option on the connection with the given value.
 :encoding :: Specify the encoding/character set to use for the connection.
+:fractional_seconds :: On MySQL 5.6.5+, this option is recognized and will include fractional seconds in
+                       time/timestamp values, as well as have the schema modification methods create columns
+                       that can contain fractional seconds by default.  This option is also supported on other
+                       adapters that connect to MySQL.
 :socket :: Can be used to specify a Unix socket file to connect to instead of a TCP host and port.
 :sql_mode :: Set the sql_mode(s) for a given connection.  Can be single symbol or string,
              or an array of symbols or strings (e.g. <tt>:sql_mode=>[:no_zero_date, :pipes_as_concat]</tt>).
@@ -370,6 +381,17 @@ The following additional options are supported:
 :use_iso_date_format :: This can be set to false to not force the ISO date format.  Sequel forces
                         it by default to allow for an optimization.
 
+=== sqlanywhere
+
+The sqlanywhere driver works off connection strings, so a connection string
+is built based on the url/options hash provided.  The following additional
+options are respected:
+
+:commlinks :: specify the CommLinks connection string option
+:conn_string :: specify the connection string to use, ignoring all other options
+:connection_name :: specify the ConnectionName connection string option
+:encoding :: specify the CharSet connection string option
+
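+A minimal sketch (the connection string contents here are hypothetical):
+
+  DB = Sequel.connect(:adapter=>'sqlanywhere',
+    :conn_string=>'UID=user;PWD=password;DBN=Test')
+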
 === sqlite
 
 Requires: sqlite3
@@ -428,6 +450,9 @@ options that you may want to set are :login_timeout, :timeout, :tds_version, :az
 
 Other Sequel specific options:
 
+:server_version :: Override the server version to use (9000000 = SQL Server 2005).
+                   This also works on any other adapter that connects to Microsoft
+                   SQL Server.
 :textsize :: Override the default TEXTSIZE setting for this connection.  The FreeTDS
              default is small (around 64000 bytes), but can be set up to around 2GB.
              This should be specified as an integer.  If you plan on setting large
diff --git a/doc/postgresql.rdoc b/doc/postgresql.rdoc
index 4363ff7..aa06f75 100644
--- a/doc/postgresql.rdoc
+++ b/doc/postgresql.rdoc
@@ -44,14 +44,15 @@ pg_range :: ranges (for any scalar type), as a ruby Range-like object
 pg_row :: row-valued/composite types, as a ruby Hash-like or Sequel::Model object
 
 In general, these extensions just add support for Database objects to return retrieved
-column values as the appropriate type (<tt>postgres only</tt>), and support for literalizing
-the objects correctly for use in an SQL string, or using them as bound variable values (<tt>postgres/pg only</tt>).
+column values as the appropriate type (<tt>postgres and jdbc/postgres only</tt>), and support for literalizing
+the objects correctly for use in an SQL string, or using them as bound variable values (<tt>postgres/pg and jdbc/postgres only</tt>).
 
 There are also type-specific extensions that make it easy to use database functions
 and operators related to the type.  These extensions are:
 
 pg_array_ops :: array-related functions and operators
 pg_hstore_ops :: hstore-related functions and operators
+pg_json_ops :: json-related functions and operators
 pg_range_ops :: range-related functions and operators
 pg_row_ops :: row-valued/composite type syntax support
 
@@ -75,7 +76,7 @@ You can also add exclusion constraints in +alter_table+ blocks using add_exclusi
   end
   # ALTER TABLE "table" ADD CONSTRAINT "table_during_excl" EXCLUDE USING gist ("during" WITH &&)
 
-=== Adding Foreign Key Constraints Without Initial Validation
+=== Adding Foreign Key and Check Constraints Without Initial Validation
 
 You can add a <tt>:not_valid=>true</tt> option when adding constraints to existing tables so
 that it doesn't check if all current rows are valid:
@@ -83,16 +84,21 @@ that it doesn't check if all current rows are valid:
   DB.alter_table(:table) do
     # Assumes t_id column already exists
     add_foreign_key([:t_id], :table, :not_valid=>true, :name=>:table_fk)
+
+    constraint({:name=>:col_123, :not_valid=>true}, :col=>[1,2,3])
   end
   # ALTER TABLE "table" ADD CONSTRAINT "table_fk" FOREIGN KEY ("t_id") REFERENCES "table" NOT VALID
+  # ALTER TABLE "table" ADD CONSTRAINT "col_123" CHECK (col IN (1, 2, 3)) NOT VALID
 
 Such constraints will be enforced for newly inserted and updated rows, but not for existing rows. After
 all existing rows have been fixed, you can validate the constraint:
 
   DB.alter_table(:table) do
     validate_constraint(:table_fk)
+    validate_constraint(:col_123)
   end
   # ALTER TABLE "table" VALIDATE CONSTRAINT "table_fk"
+  # ALTER TABLE "table" VALIDATE CONSTRAINT "col_123"
 
 === Creating Indexes Concurrently
 
@@ -202,6 +208,17 @@ without keeping all rows in memory:
   # CLOSE sequel_cursor
   # COMMIT
 
+This support is used by default when using <tt>Dataset#paged_each</tt>.
+
+Using cursors, it is possible to update individual rows of a large dataset
+easily using the <tt>:rows_per_fetch=>1</tt> option in conjunction with
+<tt>Dataset#where_current_of</tt>.  This is useful if the logic needed to
+update the rows exists in the application and not in the database:
+
+  ds.use_cursor(:rows_per_fetch=>1).each do |row|
+    ds.where_current_of.update(:col=>new_col_value(row))
+  end
+
 === Truncate Modifiers
 
 Sequel supports PostgreSQL-specific truncate options:
@@ -261,6 +278,10 @@ notifications:
 
   DB.listen(:channel, :loop=>true){|channel| p channel}
 
+The +pg_static_cache_updater+ extension uses this support to automatically update
+the caches for models using the +static_cache+ plugin.  Look at the documentation of that
+plugin for details.
+
 === Locking Tables
 
 Sequel makes it easy to lock tables, though it is generally better to let the database
@@ -300,3 +321,6 @@ Then you can stream individual datasets:
 Or stream all datasets by default:
 
   DB.stream_all_queries = true
+
+When streaming is enabled, <tt>Dataset#paged_each</tt> will use streaming to implement
+paging.
diff --git a/doc/querying.rdoc b/doc/querying.rdoc
index cd5fd3a..d72188c 100644
--- a/doc/querying.rdoc
+++ b/doc/querying.rdoc
@@ -11,7 +11,7 @@ aims to be a gentle introduction to Sequel's querying support.
 While you can easily use raw SQL with Sequel, a large part of the
 advantage you get from using Sequel is Sequel's ability to abstract
 SQL from you and give you a much nicer interface. Sequel also ships with
-a {core_extensions extension}[link:files/doc/core_extensions_rdoc.html],
+a {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
 which better integrates Sequel's DSL into the ruby language.
 
 == Retrieving Objects
@@ -25,7 +25,7 @@ method you can use.
 === Sequel::Dataset
 
 If you are new to Sequel and aren't familiar with its datasets, you should probably
-read the {"Dataset Basics" guide}[link:files/doc/dataset_basics_rdoc.html],
+read the {"Dataset Basics" guide}[rdoc-ref:doc/dataset_basics.rdoc],
 then come back here.
 
 === Retrieving a Single Object
@@ -63,7 +63,7 @@ Any options you pass to +first+ will be used as a filter:
   => #<Artist @values={:name=>"YJM", :id=>1}>
   
   artist = Artist.first(Sequel.like(:name, 'Y%'))
-  # SELECT * FROM artists WHERE (name LIKE 'Y%') LIMIT 1
+  # SELECT * FROM artists WHERE (name LIKE 'Y%' ESCAPE '\') LIMIT 1
   => #<Artist @values={:name=>"YJM", :id=>1}>
   
 If there is no matching row, +first+ will return nil.  If you want to
@@ -273,7 +273,7 @@ Let's say we are only interested in the artists whose names
 start with "A":
 
   ds2 = ds1.where(Sequel.like(:name, 'A%'))
-  # SELECT * FROM artists WHERE name LIKE 'A%'
+  # SELECT * FROM artists WHERE name LIKE 'A%' ESCAPE '\'
 
 Here we see that +where+ returns a dataset that adds a +WHERE+
 clause to the query.  It's important to note that +where+ does
@@ -282,7 +282,7 @@ not modify the receiver:
   ds1
   # SELECT * FROM artists
   ds2
-  # SELECT * FROM artists WHERE name LIKE 'A%'
+  # SELECT * FROM artists WHERE name LIKE 'A%' ESCAPE '\'
  
 In Sequel, most dataset methods that you will be using will
 not modify the dataset itself, so you can freely use the dataset in multiple
@@ -293,7 +293,7 @@ Let's say we only want to select the id and name columns, and that
 we want to order by name:
 
   ds3 = ds.order(:name).select(:id, :name)
-  # SELECT id, name FROM artists WHERE name LIKE 'A%' ORDER BY name
+  # SELECT id, name FROM artists WHERE name LIKE 'A%' ESCAPE '\' ORDER BY name
   
 Note how you don't need to assign the returned value of order to a variable,
 and then call select on that.  Because order just returns a dataset, you can
@@ -380,7 +380,7 @@ If a block is passed to a filter, it is treated as a virtual row block:
   Artist.where{id > 5}
   # SELECT * FROM artists WHERE id > 5
 
-You can learn more about virtual row blocks in the {"Virtual Rows" guide}[link:files/doc/virtual_rows_rdoc.html].
+You can learn more about virtual row blocks in the {"Virtual Rows" guide}[rdoc-ref:doc/virtual_rows.rdoc].
 
 You can provide both regular arguments and a block, in which case the results
 will be ANDed together:
@@ -403,7 +403,7 @@ expressions are instances of subclasses of Sequel::SQL::Expression. You've
 already seen an example earlier:
 
   Artist.where(Sequel.like(:name, 'Y%'))
-  # SELECT * FROM artists WHERE name LIKE 'Y%'
+  # SELECT * FROM artists WHERE name LIKE 'Y%' ESCAPE '\'
 
 In this case Sequel.like returns a Sequel::SQL::BooleanExpression object,
 which is used directly in the filter.
@@ -415,7 +415,7 @@ object.  In most cases, the SQL::Expression returned supports the & operator for
 +AND+, the | operator for +OR+, and the ~ operator for inversion:
 
   Artist.where(Sequel.like(:name, 'Y%') & (Sequel.expr(:b=>1) | Sequel.~(:c=>3)))
-  # SELECT * FROM artists WHERE name LIKE 'Y%' AND (b = 1 OR c != 3)
+  # SELECT * FROM artists WHERE name LIKE 'Y%' ESCAPE '\' AND (b = 1 OR c != 3)
 
 You can combine these expression operators with the virtual row support:
 
@@ -493,7 +493,7 @@ So to do a NOT IN with an array:
 Or to use the NOT LIKE operator:
 
   Artist.exclude(Sequel.like(:name, '%J%'))
-  # SELECT * FROM artists WHERE name NOT LIKE '%J%'
+  # SELECT * FROM artists WHERE name NOT LIKE '%J%' ESCAPE '\'
  
 === Removing
 
@@ -629,7 +629,12 @@ You can provide a second argument to +limit+ to specify an offset:
   Artist.limit(5, 10)
   # SELECT * FROM artists LIMIT 5 OFFSET 10
 
-This would return the 11th through 15th records in the original
+You can also call the +offset+ method separately:
+
+  Artist.limit(5).offset(10)
+  # SELECT * FROM artists LIMIT 5 OFFSET 10
+
+Either of these would return the 11th through 15th records in the original
 dataset.
 
 To remove a limit from a dataset, use +unlimited+:
@@ -977,7 +982,7 @@ If you just want to know whether the current dataset would return any rows, use
   => true
 
   Album.where(Sequel.like(:name, 'R%')).empty?
-  # SELECT 1 FROM albums WHERE name LIKE 'R%' LIMIT 1
+  # SELECT 1 FROM albums WHERE name LIKE 'R%' ESCAPE '\' LIMIT 1
   => false
 
 == Aggregate Calculations
diff --git a/doc/release_notes/3.18.0.txt b/doc/release_notes/3.18.0.txt
index dd113e8..b43e427 100644
--- a/doc/release_notes/3.18.0.txt
+++ b/doc/release_notes/3.18.0.txt
@@ -52,9 +52,8 @@
   the graphviz dot program in order to create visualizations
   of the dataset's abstract syntax tree.  Examples:
 
-  * http://sequel.heroku.com/images/to_dot_simple.gif
-  * http://sequel.heroku.com/images/to_dot_complex.gif
-  * http://imgpaste.com/i/lxngy.gif
+  * http://sequel.jeremyevans.net/images/to_dot_simple.gif
+  * http://sequel.jeremyevans.net/images/to_dot_complex.gif
 
   Both the to_dot extension and reversible migrations support
   were inspired by Aaron Patterson's recent work on ActiveRecord
diff --git a/doc/release_notes/3.9.0.txt b/doc/release_notes/3.9.0.txt
index bd7ac60..6ce244c 100644
--- a/doc/release_notes/3.9.0.txt
+++ b/doc/release_notes/3.9.0.txt
@@ -230,4 +230,4 @@ Backwards Compatibility
 Other News
 ----------
 
-* Sequel now has an official blog at http://sequel.heroku.com.
+* Sequel now has an official blog at http://sequel.jeremyevans.net/blog.html.
diff --git a/doc/release_notes/4.10.0.txt b/doc/release_notes/4.10.0.txt
new file mode 100644
index 0000000..df75f05
--- /dev/null
+++ b/doc/release_notes/4.10.0.txt
@@ -0,0 +1,226 @@
+= Performance Enhancements
+
+* Dataset literalization for simple datasets is now faster by
+  creating a per-adapter SQL literalization method instead of
+  having all adapters share a generic method with higher overhead.
+  Sequel.split_symbol now caches results globally. Symbol
+  literalization is now cached per Database.
+
+  Combining these three optimizations, here are the performance
+  increases compared to 4.9.0 for a couple example datasets:
+
+    ds1 = DB[:a]
+    ds2 = DB[:a].select(:a, :b).where(:c=>1).order(:d, :e)
+
+            .sql     .all (1 row)
+    ds1     140%     11%
+    ds2     187%     32%
+
+* Regular association loading now uses a placeholder literalizer
+  in most cases, for up to an 85% improvement when loading
+  simple associations.
+
+* Eager loading associations using Dataset#eager now uses a
+  placeholder literalizer in most cases, for up to a
+  20% improvement when eager loading simple associations.
+
+* Eager loading associations with limits using Dataset#eager now
+  uses a UNION-based strategy by default.  After extensive
+  testing, this was found to be the fastest strategy if the
+  key columns are indexed.  Unfortunately, it is a much slower
+  strategy if the key columns are not indexed.  You can override
+  the default UNION strategy by using the :eager_limit_strategy
+  association option.
+
+  On some databases, execution time of UNION queries with n subqueries
+  increases faster than O(n).  Also, there are limits on the number of
+  subqueries supported in a single UNION query.  Sequel chooses a
+  default limit of 40 subqueries per UNION query.  You can increase
+  this via the :subqueries_per_union association option.
+
+* Dataset#import and #multi_insert can now insert multiple rows
+  in a single query on H2, HSQLDB, Derby, SQLAnywhere, CUBRID,
+  SQLite, Oracle, DB2, and Firebird, which should be significantly
+  faster than previous versions that issued a separate INSERT query
+  per row.
+
+* The many_to_many setter method in the association_pks plugin now
+  uses Dataset#import to insert many rows at once, instead of using
+  a separate query per insert.
+
+* The jdbc adapter's type conversion has been rewritten to be
+  more similar to the other adapters, setting up the type
+  conversion procs before iterating over results.  This increases
+  performance up to 20%.
+
+* The jdbc/oracle adapter now defaults to a fetch_size of 100,
+  similar to the oci8-based oracle adapter, significantly improving
+  performance for large datasets.
+
+= New Features
+
+* Database#transaction now supports an :auto_savepoint option.  This
+  option makes it so that transactions inside the transaction block
+  automatically use savepoints unless they use the :savepoint=>false
+  option.  This should make testing transactional behavior easier.
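+
+  A minimal example, mirroring the transactions guide:
+
+    DB.transaction(:auto_savepoint=>true) do # BEGIN
+      DB.transaction do                      # SAVEPOINT
+        DB[:foo].insert(1)                   # INSERT
+      end                                    # RELEASE SAVEPOINT
+    end                                      # COMMIT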
+
+* Model.prepared_finder has been added.  This has an API similar to
+  Model.finder, but it uses a prepared statement instead of a
+  placeholder literalizer.  It is less flexible than Model.finder
+  as prepared statements have fixed SQL, but it may perform better.
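+
+  For example, given a dataset-returning class method (the Album
+  model and artist_id column below are illustrative):
+
+    class Album < Sequel::Model
+      def self.by_artist_id(artist_id)
+        where(:artist_id=>artist_id)
+      end
+
+      prepared_finder :by_artist_id
+    end
+
+    Album.first_by_artist_id(1) # runs as a prepared statement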
+
+* Common table expressions (WITH clauses) are now supported on SQLite
+  3.8.3+.
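+
+  A minimal example (the albums table is illustrative):
+
+    DB[:t].with(:t, DB[:albums].select(:id, :name)).all
+    # WITH t AS (SELECT id, name FROM albums) SELECT * FROM t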
+
+* :correlated_subquery has been added as an eager_graph and filter by
+  association limit strategy for one_to_one and one_to_many
+  associations.  In certain cases it was found that this is faster
+  than the :window_function limit strategy.  It is the default
+  filter by associations limit strategy on databases that do not
+  support window functions.
+
+  Filtering by limited associations using a correlated subquery
+  strategy does not work in all cases, but it should handle most
+  cases correctly.
+
+* The prepared_statement_associations plugin now handles
+  one_through_one and one_through_many associations.
+
+* Sequel now emulates support for offsets without limits on MySQL,
+  SQLite, H2, SQLAnywhere, and CUBRID.
+
+* In the jdbc adapter, the Database#fetch_size accessor and
+  :fetch_size option can be used to automatically set the JDBC
+  fetch size for JDBC Statement objects created by the database.
+
+* Dataset#with_fetch_size has been added to jdbc adapter datasets,
+  setting the fetch size to use on ResultSets generated by the
+  dataset.  This generally has the effect of overriding the
+  Database fetch_size setting.
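+
+  For example, setting a Database-wide default and overriding it for
+  a single dataset (the connection URL and table are illustrative):
+
+    DB = Sequel.connect('jdbc:postgresql://localhost/mydb',
+      :fetch_size=>100)
+    DB[:huge_table].with_fetch_size(1000).each{|row| }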
+ 
+* On MySQL 5.6.5+, Sequel supports a :fractional_seconds Database
+  option, which will use fractional seconds for timestamp values,
+  and have the schema modification code create timestamp columns
+  that accept fractional timestamps by default.
+
+* Database#call_mssql_sproc on Microsoft SQL Server now handles named
+  parameters:
+
+    DB.call_mssql_sproc(:sproc_name, :args => {
+      'input_arg1_name' => 'input arg1 value',
+      'input_arg2_name' => 'input arg2 value',
+      'output_arg_name' => [:output, 'int', 'result key name']
+    })
+
+* Database#drop_view now supports an :if_exists option on SQLite,
+  MySQL, H2, and HSQLDB.
+
+* Database#drop_table now supports an :if_exists option on HSQLDB.
+
+* A :filter_limit_strategy association option has been added, for
+  choosing the strategy that will be used when filtering/excluding by
+  associations with limits.  For backwards compatibility, Sequel will
+  fall back to looking at the :eager_limit_strategy option.
+
+* A :server_version Database option is now supported on Microsoft SQL
+  Server, which will use the value given instead of querying for it.
+
+= Other Improvements
+
+* Dataset::PlaceholderLiteralizer arguments are now handled
+  correctly when emulating offsets via the row_number window function
+  on DB2, MSSQL <=2012, and Oracle.
+
+* Dataset::PlaceholderLiteralizer now handles DelayedEvaluation
+  objects correctly.
+
+* Offset emulation is skipped if static SQL is used on Access,
+  DB2, and MSSQL <=2008.
+
+* Additional disconnect errors are now recognized in the postgres
+  adapter.
+
+* The :foreign_key_constraint_name option is now respected when
+  adding a foreign key column to an existing table on MySQL.
+
+* Sequel now attempts to work around a bug on MySQL 5.6+ when
+  combining DROP FOREIGN KEY and DROP INDEX in the same ALTER TABLE
+  statement.
+
+* Dataset#for_update is now respected on H2.
+
+* Timestamp with local time zone types are now returned as
+  Time/DateTime objects on jdbc/oracle.
+
+* Model.include now has the same API as Module.include.
+
+* Model#marshallable! now works correctly when using the
+  tactical_eager_loading plugin.
+
+* The pg_array_associations plugin now attempts to automatically
+  determine the correct array type to use, and explicitly casts
+  to that array type in more places.
+
+* The auto_validations plugin now handles models that select from
+  subqueries.
+
+* The association_pks plugin no longer creates getter and setter
+  methods for one_through_one associations.
+
+* bin/sequel now uses the Sequel code in the related lib directory.
+  This makes it easier to use from a repository checkout.
+
+= Backwards Compatibility
+
+* AssociationReflection#associated_dataset now returns a joined
+  dataset for associations that require joins (e.g. many_to_many).
+  Anyone using this directly for associations that require joins
+  probably needs to update their code.
+
+* Model.associate now adds the association instance methods instead
+  of relying on the def_#{association_type} method doing so.  Anyone
+  using custom association types probably needs to update their code.
+
+* Model.eager_loading_dataset, .apply_association_dataset_opts, and
+  .def_{add_method,association_dataset_methods,remove_methods}
+  are now deprecated.
+
+* Key conditions for associations requiring joins have been moved
+  from the JOIN ON clause to the WHERE clause.  This should be
+  optimized the same by the database, but it can break tests that
+  expect specific SQL.
+
+* Dataset#_insert_sql and #_update_sql are now private instead of
+  protected.
+
+* The install/uninstall rake tasks have been removed.
+
+* Model association and association reflection internals have
+  changed significantly, if you were relying on them, you'll
+  probably need to update your code.
+
+* Database transaction internals have changed significantly, if you
+  were relying on them, you'll probably need to update your code.
+
+* Dataset literalization internals have changed significantly, with
+  the Dataset#*_clause_methods private methods being removed.
+  Custom adapters that used these methods should switch to using the
+  new Dataset.def_sql_method method.
+
+* Common table expressions are no longer enabled by default in
+  Sequel.  External adapters for databases that support common
+  table expressions should define Dataset#supports_cte?(type) to
+  return true.
+
+* Support for RETURNING is no longer determined via introspection.
+  External adapters for databases that support RETURNING should
+  define Dataset#supports_returning?(type) to return true.
+
+* The new jdbc adapter type conversion code may not be completely
+  compatible with the previous code.  The currently known case
+  where it is different is on jdbc/postgresql, when using an
+  array type where no conversion proc exists, the returned object
+  will be a ruby array containing java objects, instead of a ruby
+  array containing ruby objects.  It is recommended that
+  jdbc/postgresql users using array types use the pg_array extension
+  to avoid this issue.
diff --git a/doc/release_notes/4.11.0.txt b/doc/release_notes/4.11.0.txt
new file mode 100644
index 0000000..28f1a91
--- /dev/null
+++ b/doc/release_notes/4.11.0.txt
@@ -0,0 +1,147 @@
+= New SQL Function Features
+
+* SQL::Function now supports an options hash for functions.
+  Unfortunately, since SQL::Function#initialize does not support
+  an options hash, you need to use SQL::Function.new! to create
+  a function with an options hash.  You can also call methods on
+  the SQL::Function instance, which will return a new SQL::Function
+  with the appropriate option set.
+
+* SQL::Function#quoted has been added, which will return a new
+  SQL::Function instance that will quote the function name (if
+  the database supports quoting function names).
+
+* SQL::Function#unquoted has been added, which will return a new
+  SQL::Function instance that will not quote the function name.
+
+* SQL::Function#lateral has been added, which will return a new
+  SQL::Function instance that will be preceded by LATERAL when
+  literalized, useful for set-returning functions.
+
+* SQL::Function#within_group has been added, for creating
+  ordered-set and hypothetical-set functions that use WITHIN GROUP.
+
+* SQL::Function#filter has been added, for creating filtered
+  aggregate function calls using FILTER.
+
+* SQL::Function#with_ordinality has been added, for creating set
+  returning functions that also include a row number for every
+  row in the set, using WITH ORDINALITY.
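+
+  For example, these methods can be chained on any SQL::Function
+  (the function and column names below are illustrative, and the SQL
+  shown in the comments is approximate):
+
+    Sequel.function(:foo, :col).quoted
+    # "foo"("col")
+    Sequel.function(:unnest, :arr).lateral
+    # LATERAL unnest("arr")
+    Sequel.function(:rank).within_group(:col)
+    # rank() WITHIN GROUP (ORDER BY "col")
+    Sequel.function(:count, :col).filter(:a=>1)
+    # count("col") FILTER (WHERE ("a" = 1))
+    Sequel.function(:unnest, :arr).with_ordinality
+    # unnest("arr") WITH ORDINALITY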
+
+= New PostgreSQL Features
+
+* The jsonb type added in 9.4 is now supported in the pg_json
+  extension.  To create a jsonb type manually, you need to call
+  Sequel.pg_jsonb.
+
+  The new json and jsonb functions and operators added in 9.4 are
+  now supported in the pg_json_ops extension.  You can use the jsonb
+  functions and operators by creating a Postgres::JSONBOp using
+  Sequel.pg_jsonb_op.
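+
+  For example (the jsonb_column name is illustrative):
+
+    DB.extension :pg_json          # enables Sequel.pg_jsonb
+    Sequel.extension :pg_json_ops  # enables Sequel.pg_jsonb_op
+
+    Sequel.pg_jsonb('a'=>1)                 # jsonb value
+    Sequel.pg_jsonb_op(:jsonb_column)['a']  # (jsonb_column -> 'a')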
+
+* Database#full_text_search now takes a :rank option to order by the
+  ranking.
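+
+  For example (the posts table and body column are illustrative):
+
+    DB[:posts].full_text_search(:body, 'sequel', :rank=>true)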
+
+* Database#refresh_view now supports a :concurrently option, to
+  refresh a materialized view concurrently, supported on 9.4+.
+
+* Postgres::ArrayOp#cardinality has been added to the pg_array_ops
+  extension, for easy use of the cardinality method added in 9.4.
+
+* Postgres::ArrayOp#unnest in the pg_array_ops extension now accepts
+  arguments.  PostgreSQL 9.4+ supports this if unnest is used in the
+  FROM clause.
+
+= Other New Features
+
+* Sequel now supports derived column lists (table aliases that include
+  column aliases) via Sequel.as and SQL::AliasedMethods#as:
+
+    Sequel.as(:table, :alias, [:c1, :c2])
+    # table AS alias(c1, c2)
+
+  Not all databases support this, but it is in SQL92 and Sequel now
+  supports it by default.  Derived column lists make it easier to
+  alias columns when using set-returning functions.
+
+  Dataset#from_self now supports derived column lists via the new
+  :column_aliases option (which requires the :alias option to take
+  effect).
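+
+  For example (the albums table is illustrative):
+
+    DB[:albums].from_self(:alias=>:a, :column_aliases=>[:c1, :c2])
+    # SELECT * FROM (SELECT * FROM albums) AS a(c1, c2)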
+
+* Database#create_view now supports a :check option, to use
+  WITH CHECK OPTION.  You can also use :check=>:local for
+  WITH LOCAL CHECK OPTION.  These clauses make it so when you are
+  inserting into/updating the view, you can only modify rows in the
+  underlying table if the result would be returned by the view.
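+
+  A small sketch (the view, table, and column names are
+  illustrative):
+
+    DB.create_view(:active_albums,
+      DB[:albums].where(:active=>true), :check=>true)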
+
+* The :after_connect Database option proc now can accept two
+  arguments.  If the arity of the proc is 2, Sequel will pass both
+  the connection object and the shard symbol.
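+
+  A sketch (the connection URL is illustrative):
+
+    Sequel.connect('postgres://localhost/mydb',
+      :after_connect=>(proc do |conn, shard|
+        # connection setup that depends on the shard
+      end))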
+
+* The class_table_inheritance plugin now supports a :model_map
+  option similar to the single_table_inheritance plugin, allowing
+  use of the plugin without storing ruby class names in the database.
+  Note that if you use this option, you must set the correct value
+  for the kind column manually when creating the row.
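+
+  A sketch (the Employee hierarchy and kind values below are
+  illustrative):
+
+    class Employee < Sequel::Model
+      plugin :class_table_inheritance, :key=>:kind,
+        :model_map=>{1=>:Staff, 2=>:Manager}
+    end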
+
+* Support for CUBRID/SQLAnywhere emulation has been added to the
+  mock adapter.
+
+= Other Improvements
+
+* Dataset#import now supports a default slice size, which Sequel
+  sets to 500 on SQLite as that is the limit that SQLite supports in
+  a single statement.
+
+* The serialization plugin now only modifies changed_columns in the
+  setter method if the deserialized value has changed, similar to
+  how Sequel's standard column setters work.  Note that if you are
+  mutating the deserialized value (i.e. not calling the setter
+  method), you still need to use the
+  serialization_modification_detection plugin.
+
+* Plugins that set column values for new objects before creation now
+  use before_validation instead of before_create, which works better
+  when the auto_validations plugin is used.
+
+* The :read_only transaction option is now applied per-savepoint on
+  PostgreSQL.  Note that this allows you to have a READ ONLY
+  savepoint in a READ WRITE transaction; it does not allow you to
+  have a READ WRITE savepoint in a READ ONLY transaction.
+
+* The ibm_db adapter no longer issues warnings when using certain
+  column names.
+
+* The ibm_db adapter now supports connecting to a DB2 catalog name,
+  by providing a :database option without a :host or :port option.
+
+* The mock adapter now sets an emulated version when using MySQL and
+  SQLite.  Additionally, the emulated version for PostgreSQL and
+  Microsoft SQL Server has been updated.
+
+= Backwards Compatibility
+
+* External adapters that override Dataset#as_sql_append now need to
+  have the method accept two arguments.
+
+* Model.eager_loading_dataset, .apply_association_dataset_opts, and
+  .def_{add_method,association_dataset_methods,remove_methods} have
+  been removed (they were deprecated in 4.10.0).
+
+* SQL::WindowFunction and SQL::EmulatedFunction classes are now
+  deprecated, as well as Dataset methods that literalize instances of
+  these classes.  These classes are replaced by using options on
+  SQL::Function instances.
+
+* Passing a table_alias argument when creating an SQL::JoinClause
+  manually is no longer supported.  You now need to pass the table as
+  an SQL::AliasedExpression if the table needs to be aliased.
+  
+* ASTTransformer no longer transforms the table alias for
+  SQL::JoinClause.  This is for consistency with
+  SQL::AliasedExpression.
+
+* SQL standard casts are now used in Database#full_text_search, which
+  can break tests that expect specific SQL.
+
+* The to_dot extension now uses slightly different output for
+  SQL::Function and SQL::JoinClause instances.
diff --git a/doc/release_notes/4.4.0.txt b/doc/release_notes/4.4.0.txt
new file mode 100644
index 0000000..4c47a10
--- /dev/null
+++ b/doc/release_notes/4.4.0.txt
@@ -0,0 +1,92 @@
+= New Features
+
+* Sequel now supports Sybase SQLAnywhere, via the sqlanywhere and
+  jdbc/sqlanywhere adapters.
+
+* The filter by associations support now handles cases where the
+  association has :conditions or a block (as long as the block
+  does not rely on instance-specific behavior).  This allows
+  you to handle the following:
+
+    Album.many_to_many :popular_tags, :class=>:Tag do |ds|
+      ds.where{tags__popularity > 9000}
+    end
+    Album.where(:popular_tags=>[Tag[1], Tag[2]])
+
+  This will return all albums whose popular_tags would include
+  at least one of those two tags.  Previously, the block would
+  be ignored, returning albums containing one of those tags even if
+  the tags weren't popular.
+
+* A table_select plugin has been added that changes the default
+  selection for models from * to table.*.  This is useful for
+  people who want ActiveRecord-like behavior instead of SQL-like
+  behavior, where joining tables doesn't automatically include
+  columns in the other table.
+
+  This can fix issues where joining another table that has columns
+  with the same name as columns in the model table without
+  specifying an explicit selection results in model objects being
+  returned where the values in the model object are the values
+  from the joined table instead of the model table.
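+
+  For example (assuming an Album model backed by an albums table):
+
+    Album.plugin :table_select
+    Album.dataset.sql
+    # SELECT albums.* FROM albums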
+
+* Dataset#offset has been added, for specifying offset separately
+  from limit.  Previously, this was possible via:
+
+    ds.limit(nil, offset)
+
+  but this is a friendlier API.
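+
+  For example (the albums table is illustrative):
+
+    DB[:albums].offset(20)           # ... OFFSET 20
+    DB[:albums].limit(10).offset(20) # ... LIMIT 10 OFFSET 20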
+
+* The jdbc adapter now has support for foreign key parsing.  This
+  is used if there is no specific support for the underlying
+  database.
+
+* Foreign key parsing is now supported on Oracle.
+
+= Other Improvements
+
+* Association add_*/remove_*/remove_all_* methods for
+  pg_array_to_many associations now work on unsaved model objects.
+
+* In the constraint_validations extension, deletes from the
+  metadata table are now processed before inserts, so that dropping
+  an existing constraint and readding a constraint with the same
+  name now works correctly.
+
+* Cloning an association now copies the :eager_block option
+  correctly from the source association if it was passed as
+  the block to the source association method.
+
+* Cloning a cloned association now copies the block for the
+  association.
+
+* The descendants method in the tree plugin no longer modifies an
+  array it is iterating over.
+
+* The jdbc/postgresql adapter now supports PostgreSQL-specific types,
+  with pretty much the same support as the postgres adapter.  When
+  using the pg_* extensions, the dataset will now handle the
+  PostgreSQL types correctly and return instances of the correct
+  Ruby classes (e.g. hstore is returned as Sequel::Postgres::HStore).
+
+  You should no longer need to use the typecast_on_load or
+  pg_typecast_on_load plugins when using model objects that use these
+  types when using the jdbc/postgresql adapter.
+
+* Offset emulation on Oracle now handles cases where selected
+  columns can't be ordered.
+
+* Offset emulation on DB2 no longer automatically orders on all
+  columns if the dataset itself is unordered.
+
+* Types containing spaces are now returned correctly when
+  parsing the schema in the oracle adapter.
+
+* Database#tables no longer returns tables in the recycle bin on
+  Oracle.
+
+* add_foreign_key now works correctly on HSQLDB, by splitting the
+  column addition and constraint addition into two separate
+  statements.
+
+* add_primary_key now works correctly on H2.
diff --git a/doc/release_notes/4.5.0.txt b/doc/release_notes/4.5.0.txt
new file mode 100644
index 0000000..fb48914
--- /dev/null
+++ b/doc/release_notes/4.5.0.txt
@@ -0,0 +1,34 @@
+= New Features
+
+* An mssql_optimistic_locking plugin has been added.  This is similar
+  to the regular optimistic_locking plugin, but instead of using an
+  integer lock column, it uses a timestamp/rowversion lock column.
+
+* Database#create_table with the :temp=>true option on PostgreSQL now
+  supports an :on_commit option.  This option can be set to :drop or
+  :delete_rows to either drop or empty the temporary table on
+  transaction commit.
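+
+  A minimal sketch (the table and column names are illustrative):
+
+    DB.create_table(:tmp_data, :temp=>true, :on_commit=>:drop) do
+      Integer :value
+    end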
+
+= Other Improvements
+
+* Dataset#insert no longer errors on PostgreSQL if the related table
+  is a placeholder literal string.
+
+* Unique constraints are now copied when emulating alter_table
+  operations on SQLite.
+
+* Clob column values are no longer returned as SQL::Blob instances
+  by the db2 and ibmdb adapters unless use_clob_as_blob is true.
+
+* SQL::Blob objects now work correctly as prepared statement
+  arguments in the jdbc/db2 adapter if use_clob_as_blob is false.
+
+= Backwards Compatibility
+
+* The Model.primary_key array for models with composite keys is now
+  frozen.
+
+* On DB2, use_clob_as_blob now defaults to false instead of true.
+
+* Sequel no longer uses RubyForge. The Sequel website is now located
+  at http://sequel.jeremyevans.net.
diff --git a/doc/release_notes/4.6.0.txt b/doc/release_notes/4.6.0.txt
new file mode 100644
index 0000000..52753a6
--- /dev/null
+++ b/doc/release_notes/4.6.0.txt
@@ -0,0 +1,30 @@
+= New Features
+
+* Database#call_mssql_sproc is now available for calling
+  stored procedures on Microsoft SQL Server, including the use
+  of output parameters.
+
+* The Database#{commit,rollback}_prepared_transaction methods now
+  support a :server option for the server on which to operate.
+
+= Other Improvements
+
+* On Microsoft SQL Server 2012, the native OFFSET/FETCH support
+  is now used for offsets, instead of emulating support via the
+  ROW_NUMBER window function.
+
+* Eager loading is now skipped when doing eager(...).naked.all on
+  a model dataset, instead of raising an error.  This can fix issues
+  when the eager_each plugin is used.
+
+* A couple additional disconnection errors are now detected in the
+  jdbc/postgresql adapter.
+
+* The tinytds adapter now handles returning rows when the fields
+  are not immediately available.
+
+* RuntimeErrors raised by oci8 are now handled correctly in the
+  oracle adapter.
+
+* Sequel's specs now work with RSpec 3, while still running
+  correctly on RSpec 1.3 and 2.
diff --git a/doc/release_notes/4.7.0.txt b/doc/release_notes/4.7.0.txt
new file mode 100644
index 0000000..a982d84
--- /dev/null
+++ b/doc/release_notes/4.7.0.txt
@@ -0,0 +1,103 @@
+= New Features
+
+* Alternatives for the more complex virtual row method calls have
+  been added:
+
+    # Window Functions using SQL::Function#over
+    # before: select{sum(:over, :args=>:col1, :partition=>:col2){}}
+    select{sum(:col1).over(:partition=>:col2)}
+
+    # count(*) using SQL::Function#*
+    # before: select{count(:*){}}
+    select{count{}.*}
+
+    # count(distinct col) using SQL::Function#distinct
+    # before: select{count(:distinct, :col){}}
+    select{count(:col).distinct}
+
+  Additionally, schema qualified functions are now supported via
+  SQL::QualifiedIdentifier#function, and quoted functions are now
+  supported via SQL::Identifier#function on some databases:
+
+    # "func"("col")
+    select{func.function(:col)}
+
+    # "schema"."func"("col1")
+    select{schema__func.function(:col1)}
+
+  If the database does not support quoting function names, then
+  Sequel will not quote them.
+
+* An update_or_create plugin has been added, for updating a matching
+  object if one exists, or creating an object if it does not. For
+  example, the following code will update the number of copies sold
+  for album with the name 'Hello', or it will create an album with
+  the name 'Hello' and 1000 number of copies sold:
+
+    Album.plugin :update_or_create
+    Album.update_or_create(:name=>'Hello') do |album|
+      album.num_copies_sold = 1000
+    end
+
+  You can also use a shorter form of this, with two hashes:
+    
+    Album.update_or_create({:name=>'Hello'}, {:num_copies_sold=>1000})
+
+  This plugin also adds a method named find_or_new, which does the
+  same thing as update_or_create, except it doesn't persist any
+  changes.
+
+* A :raise_on_save_failure option has been added for one_to_many,
+  pg_array_to_many, and many_to_pg_array associations.  This mirrors
+  the Model.raise_on_save_failure setting, and if set to false, it
+  will make the add/remove methods return nil instead of raising
+  an error if there is a validation/hook error when saving the
+  associated record.
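+
+  For example (assuming Album/Track models and album/track
+  instances):
+
+    Album.one_to_many :tracks, :raise_on_save_failure=>false
+    album.add_track(track) # nil if the track cannot be saved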
+
+* The validates_unique validation in validation_helpers now supports a
+  :dataset option to provide the base dataset to use to check
+  uniqueness.  This is useful when the model itself uses a filtered
+  dataset, but the unique index in the database is on an unfiltered
+  dataset.
+
+  The auto_validations plugin uses this option to ensure that unique
+  validations are setup correctly in subclasses using single table
+  inheritance.
+
+= Other Improvements
+
+* Sequel now automatically rolls back transactions in killed threads
+  on ruby 2.0+.  It is still impossible to do so on ruby 1.9.
+
+* In the instance_hooks plugin, validation instance hooks are now
+  not cleared until after a successful save.
+
+* Composite unique key constraint violations are now recognized
+  and raised as Sequel::UniqueConstraintViolation on SQLite.
+
+* Primary key unique constraint violations are now recognized and
+  raised as Sequel::UniqueConstraintViolation on Microsoft
+  SQL Server and SQLAnywhere.
+
+* If an exception occurs when using a cursor in the postgres adapter,
+  and an exception also occurs when closing the cursor when cleaning
+  up, the initial exception is now raised.
+
+* You can now get tables in a specific schema in the jdbc adapter
+  using the :schema option to Database#tables.  This was already
+  supported in most jdbc subadapters because they implement #tables
+  using database specific code instead of looking at the JDBC
+  metadata, but it should now work for all jdbc subadapters.
+
+* Sequel::SQLTime#to_s is now defined and returns a string in
+  HH:MM:SS format (leaving off the date).
+
+= Backwards Compatibility
+
+* The odbc adapter's :driver option is no longer deprecated, as reports
+  were received that it still works.
+
+* If you were re-adding instance validation hooks using instance_hooks
+  after a save failure, and then retrying the save, you may now end up
+  with duplicate validations.  You no longer need to re-add validation
+  hooks unless the object was saved successfully.
diff --git a/doc/release_notes/4.8.0.txt b/doc/release_notes/4.8.0.txt
new file mode 100644
index 0000000..9267cb0
--- /dev/null
+++ b/doc/release_notes/4.8.0.txt
@@ -0,0 +1,175 @@
+= New Features
+
+* A one_through_one association type has been added.  This is similar
+  to the many_to_many association type in that it uses a join table,
+  but it returns a single record instead of an array of records.
+  This is designed for cases where the foreign key in the join table
+  that references the current table has a unique constraint, or where
+  you want to use an order to just pick the first matching record.
+
+  Similarly, the many_through_many plugin now also offers a
+  one_through_many association.
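+
+  For example (assuming Album and Artist models joined through an
+  albums_artists table):
+
+    Album.one_through_one :artist
+    # album.artist returns a single Artist instead of an array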
+
+* An association_join method has been added to model datasets, for
+  setting up joins based on associations.  This basically does the
+  same join that eager_graph would do, but does not make the other
+  changes that eager_graph makes.
+
+  Unlike eager_graph (which uses LEFT OUTER JOINs by default),
+  association_join uses INNER JOINs, but there are also
+  association_*_join methods (e.g. association_left_join) for
+  using different join types.
+
+  Similar to eager_graph, you can use cascading of associations or
+  multiple associations.
+
+    Album.association_join(:artist, :tracks)
+    Artist.association_left_join(:albums=>:tracks)
+
+* Dataset#eager_graph_with_options has been added for model
+  datasets.  It currently supports a :join_type option, for
+  overriding the type of join to use on a per-call basis, as well
+  as a :limit_strategy option.  The API is similar to eager_graph,
+  except that the associations to eagerly load are passed in as
+  a single argument, and it takes an options hash.
+
+  The :limit_strategy option works similarly to the
+  :eager_limit_strategy option when eagerly loading.  If set to
+  true and the database supports window functions, it will join
+  the current dataset to a subquery that uses a window function
+  to correctly restrict the join to only those objects that fall
+  within the association's limit/offset.
+
+  The :limit_strategy option is not on by default.  It is possible
+  for it to perform significantly worse than the default strategy
+  (which uses array slicing in ruby).  The :limit_strategy
+  significantly changes the SQL used, and can change the results
+  of the query if any filters/orders related to the association
+  are used.
+
+  It's recommended you only use the :limit_strategy option if you
+  are experiencing a bottleneck and you have benchmarked that it
+  is faster and still produces the desired results.
+
+    Artist.eager_graph_with_options(:first_10_albums,
+      :limit_strategy=>true)
+    # SELECT artists.id, artists.name,
+    #   first_10_albums.id AS first_10_albums_id,
+    #   first_10_albums.name AS first_10_albums_name,
+    #   first_10_albums.artist_id,
+    #   first_10_albums.release_date
+    # FROM artists
+    # LEFT OUTER JOIN (
+    #   SELECT id, name, artist_id, release_date
+    #   FROM (
+    #     SELECT *, row_number() OVER (PARTITION BY tracks.album_id)
+    #       AS x_sequel_row_number_x
+    #     FROM albums
+    #   ) AS t1 WHERE (x_sequel_row_number_x <= 10)
+    # ) AS first_10_albums ON (first_10_albums.artist_id = artists.id)
+
+* Dataset#full_text_search on PostgreSQL now supports :plain and
+  :phrase options.  :plain takes the search terms as a single
+  string, and searches for rows where all terms are used.
+  :phrase is similar to :plain, but also adds a substring search
+  to ensure that the string given appears verbatim in the text.
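+
+  For example (the posts table and body column are illustrative):
+
+    DB[:posts].full_text_search(:body, 'ruby sequel', :plain=>true)
+    DB[:posts].full_text_search(:body, 'ruby sequel', :phrase=>true)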
+
+* A :graph_order association option has been added, for using a
+  different order when using eager_graph.  This is mostly
+  designed for cases where :order should be qualified in other
+  cases, but using a qualification breaks eager_graph because the
+  correct qualifier is not known until runtime.
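+
+  A sketch (the Album/tracks association and number column are
+  illustrative):
+
+    Album.one_to_many :tracks,
+      :order=>Sequel.qualify(:tracks, :number),
+      :graph_order=>:number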
+
+* SQL::AliasedExpression#alias has been added as an alias for #aliaz.
+
+= Other Improvements
+
+* Sequel will now automatically use an eager limit strategy for
+  *_one associations that use an :order option.  For associations
+  that are truly one-to-one, an :order option is not needed, so it
+  only makes sense to have an :order option if the association
+  could theoretically return multiple results (in which case an
+  eager limit strategy is helpful).
+
+* The queries that Sequel uses to filter by associations when
+  those associations have conditions are now simpler and easier
+  for the database to execute.
+
+* The queries that Sequel uses for dataset associations now handle
+  cases where unqualified identifiers were used in the receiving
+  dataset that would be made ambiguous by a join.
+
+* A limit strategy is now used when filtering by associations if
+  the association has a limit and the database supports window
+  functions.  This allows Sequel to setup a correct filter in
+  such cases.
+
+    Artist.where(:first_10_albums=>Album[1]).all
+    # SELECT *
+    # FROM artists
+    # WHERE (artists.id IN (
+    #   SELECT albums.artist_id
+    #   FROM albums
+    #   WHERE ((albums.artist_id IS NOT NULL) AND (albums.id IN (
+    #     SELECT id FROM (
+    #       SELECT albums.id, row_number() OVER
+    #         (PARTITION BY albums.artist_id ORDER BY release_date)
+    #         AS x_sequel_row_number_x
+    #       FROM albums
+    #     ) AS t1
+    #     WHERE (x_sequel_row_number_x <= 10)
+    #   )) AND (albums.id = 1))))
+
+* A limit strategy is now used in the dataset_associations plugin
+  if the association has a limit and the database supports window
+  functions.  This makes the resulting datasets return correct
+  results.
+
+    Artist.first_10_albums
+    # SELECT *
+    # FROM albums
+    # WHERE ((albums.artist_id IN (
+    #   SELECT artists.id FROM artists)
+    # ) AND (albums.id IN (
+    #   SELECT id FROM (
+    #     SELECT albums.id, row_number() OVER
+    #       (PARTITION BY albums.artist_id ORDER BY release_date)
+    #       AS x_sequel_row_number_x
+    #     FROM albums
+    #   ) AS t1
+    #   WHERE (x_sequel_row_number_x <= 10)
+    # )))
+    # ORDER BY release_date
+
+* You can now pass symbols with embedded qualifiers or aliases,
+  as well as SQL::Identifier, SQL::QualifiedIdentifier, and
+  SQL::AliasedExpression objects as the first argument to
+  Dataset#graph.
+
+* The nested_attributes plugin now automatically handles presence
+  validations on foreign keys when creating associated objects.
+  It now sets the foreign key value (or a placeholder value)
+  before validating such objects.
+
+* Offsets on *_one associations are now respected when using
+  eager_graph.
+
+* Eager graphing *_many associations with offsets no longer breaks
+  if there are no associated results.
+
+* Database#register_array_type in the pg_array extension now works
+  correctly if there is no existing scalar conversion proc for
+  the type.
+
+* Unique, foreign key, and not null constraint violations are now
+  recognized correctly on SQLite 3.8.2+.
+
+* The odbc adapter now returns fractional seconds in timestamps.
+
+* The odbc/mssql adapter now inputs timestamps with 3 decimal
+  places.
+
+= Backwards Compatibility
+
+* The private Model.apply_window_function_eager_limit_strategy
+  method has been removed.
diff --git a/doc/release_notes/4.9.0.txt b/doc/release_notes/4.9.0.txt
new file mode 100644
index 0000000..5e9e567
--- /dev/null
+++ b/doc/release_notes/4.9.0.txt
@@ -0,0 +1,190 @@
+= Performance Enhancements
+
+* Dataset::PlaceholderLiteralizer has been added as an optimization
+  framework.  This allows you to record changes to a given dataset
+  using placeholder arguments, and later quickly execute the query
+  providing values for the placeholders.  This is similar in idea
+  to prepared statements, except that the SQL for each query can
+  change depending on the values for the placeholders.
+
+  Using this optimization framework, generating the SQL for a query
+  is about 3x faster, and since SQL generation time is a significant
+  portion of total time for simple queries, simple queries can
+  execute up to 50% faster.
+
+  There are two APIs for this optimization framework.  There is a
+  lower level dataset API:
+
+    loader = Sequel::Dataset::PlaceholderLiteralizer.
+     loader(DB[:items]) do |pl, ds|
+      ds.where(:id=>pl.arg).exclude(:name=>pl.arg).limit(1)
+    end
+
+    loader.first(1, "foo")
+    # SELECT * FROM items WHERE ((id = 1) AND (name != 'foo')) LIMIT 1 
+
+    loader.first([1, 2], %w"foo bar")
+    # SELECT * FROM items WHERE ((id IN (1, 2)) AND
+    #   (name NOT IN ('foo', 'bar'))) LIMIT 1
+
+  There is also a higher level model API (Model.finder):
+
+    class Item < Sequel::Model
+      # Given class method that returns a dataset
+      def self.by_id_and_not_name(id, not_name)
+        where(:id=>id).exclude(:name=>not_name)
+      end
+
+      # Create optimized method that returns first value
+      finder :by_id_and_not_name
+    end
+    
+    # Call optimized method
+    Album.first_by_id_and_not_name(1, 'foo')
+    # SELECT * FROM items WHERE ((id = 1) AND (name != 'foo')) LIMIT 1 
+
+  Model.finder defaults to creating a method that returns the first
+  matching row, but using the :type option you can create methods
+  that call each, all, or get.  There is also an option to choose the
+  method name (:name), as well as one to specify the number of
+  arguments to use if the method doesn't take a fixed number
+  (:arity).
+
+  Finally, Model.find, .first, and .first! now automatically use an
+  optimized finder if given a single argument. Model.[] uses an
+  optimized finder if given a single hash, and Model.[], .with_pk,
+  and .with_pk! use an optimized finder if the model has a composite
+  primary key.  In all of these cases, these methods are about 50%
+  faster than before.
+
+* The pure-ruby PostgreSQL array parser that ships with Sequel has
+  been replaced with a strscan-based parser.  This parser avoids
+  O(n^2) performance for arrays with multibyte strings, and in general
+  is much faster.  Parsing an array with a single string with 100,000
+  multibyte characters is about 1000x faster, and now about half the
+  speed of the C implementation in sequel_pg.
+
+* Dataset#paged_each now has a :strategy=>:filter option that
+  dramatically improves performance, especially if the columns
+  being ordered by are indexed.
+
+  Unfortunately, there are enough corner cases to this approach
+  that it cannot be used by default.  At the least, the dataset
+  needs to be selecting the columns it is ordering by, not aliasing
+  the columns it is ordering by in the SELECT clause, not have
+  NULLs in any of the columns being ordered by, and not itself use
+  a limit or offset.
+
+  If you are ordering by expressions that are not simple column
+  values, you can provide a :filter_value option proc that takes the
+  last retrieved row and an array of order by expressions, and returns
+  an array of values in the last retrieved row for those order by
+  expressions.
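+
+  A minimal example of the basic usage (the albums table is
+  illustrative):
+
+    DB[:albums].order(:id).paged_each(:strategy=>:filter) do |row|
+      # process row
+    end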
+
+* In the postgres adapter, Dataset#paged_each now automatically uses
+  a cursor for improved performance.
+
+* In the mysql2 adapter, Dataset#paged_each now automatically uses
+  streaming for improved performance, if streaming is supported.
+
+* Dataset#with_sql_{each,all,first,single_value,insert,update}
+  have been added.  These methods take specific SQL and execute
+  it on the database, returning the appropriate value.  They
+  are significantly faster than the previous approach of
+  with_sql(SQL).{each,all,first,single_value,insert,update},
+  as they don't require cloning the dataset.
+
+= New Features
+
+* Database#create_join_table! and #create_join_table? have been added,
+  for consistency with #create_table! and #create_table?.
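+
+  For example (the albums/artists tables are illustrative):
+
+    DB.create_join_table?(:album_id=>:albums, :artist_id=>:artists)
+    # creates albums_artists only if it does not already exist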
+
+* A :hold option has been added to Dataset#use_cursor in the postgres
+  adapter, which uses WITH HOLD in the query, allowing for usage of
+  the cursor outside the enclosing transaction.  When :hold is used,
+  Sequel does not automatically use a transaction around the cursor
+  call.
+
+* Dataset#where_current_of has been added to the postgres adapter,
+  for updating rows based on a cursor's current position.  This can
+  be used to update a large dataset where new values depend on
+  some ruby method, without keeping all rows in memory.
+
+    ds = DB[:huge_table]
+    ds.use_cursor(:rows_per_fetch=>1).each do |row|
+      ds.where_current_of.update(:column=>ruby_method(row))
+    end
+
+* A current_datetime_timestamp extension has been added, for
+  creating Time/DateTime instances that are literalized as
+  CURRENT_TIMESTAMP.  When the dataset uses this extension, models
+  that use the touch and timestamps plugins will use
+  CURRENT_TIMESTAMP for the timestamps.
+
+* The jdbc adapter now supports a :driver option, useful when
+  Sequel doesn't have direct support for the underlying driver, and
+  where java.sql.DriverManager.getConnection does not work
+  correctly due to Java class loading issues.
+
+= Other Improvements
+
+* Multiple corner cases in Dataset#eager_graph have been fixed.
+
+* Calling Dataset#columns when using the eager_each plugin no
+  longer triggers eager loading.
+
+* Database#column_schema_to_ruby_default is now a public method
+  in the schema_dumper extension.
+
+* When validating associated objects for one_to_many and one_to_one
+  associations in the nested_attributes plugin, don't remove column
+  values if the association's foreign key is the associated model's
+  primary key.
+
+* On PostgreSQL, Dataset#disable_insert_returning has been added
+  back.  This disables the automatic use of RETURNING for INSERTs
+  for the dataset.  This is necessary in cases where INSERT
+  RETURNING doesn't work, such as PostgreSQL <8.2 (or PostgreSQL
+  variants that forked before 8.2), or when using partitioning
+  with trigger functions, or conditional rules.
+
+  Note that if you use disable_insert_returning, insert will not
+  return the autoincremented primary key.  You need to call
+  currval or lastval manually using the same connection to get
+  the value, or use nextval to get the value to use before
+  inserting.
+
+* The pg_array extension now uses the correct database type when
+  typecasting values for smallint, oid, real, character, and varchar
+  arrays.  Previously, Sequel did not use the correct database type
+  in some cases (e.g. text[] for a varchar[]), which resulted in
+  errors if the value was used in a filter expression.
+
+* Additional unique constraint violations are now recognized on
+  SQLite.
+
+* Check constraint violations are now recognized on SQLite >=3.8.2.
+
+* Adapters that emulate bitwise operators now do so using an append
+  only design, similar to how all other queries are built in Sequel.
+
+= Backwards Compatibility
+
+* In some cases Sequel no longer adds superfluous parentheses when
+  constructing SQL strings.  If you are testing for specific SQL,
+  this can cause test failures.
+
+* The pg_array extension no longer recognizes the :typecast_method
+  option when registering an array type.  The option allowed reuse
+  of an existing typecast method, but as that results in an incorrect
+  type at the database level, the option was fundamentally broken.
+
+* The internals of the PostgreSQL array parser have changed.  If you
+  were relying on them, you'll need to update your code.
+
+* Dataset#complex_expression_arg_pairs private method now returns
+  nested expression objects instead of an already literalized string
+  in some cases.  Custom adapters that call this method will probably
+  need to be changed.  It's recommended that such adapters switch to
+  using the new Dataset#complex_expression_emulate_append method if
+  possible.
diff --git a/doc/schema_modification.rdoc b/doc/schema_modification.rdoc
index 4bf1989..1db913f 100644
--- a/doc/schema_modification.rdoc
+++ b/doc/schema_modification.rdoc
@@ -586,7 +586,7 @@ the table if the table already exists.  On some databases, it uses
 <tt>IF NOT EXISTS</tt>, on others it does a separate query to check for
 existence.
 
-This should not be used inside migrations, as if the the tbale does not
+This should not be used inside migrations, as if the table does not
 exist, it may mess up the migration.
 
 === +rename_table+
diff --git a/doc/security.rdoc b/doc/security.rdoc
index 45f5aa4..007ff3c 100644
--- a/doc/security.rdoc
+++ b/doc/security.rdoc
@@ -18,6 +18,7 @@ could conceivably be abused to do so:
 
 * Sequel::Schema::CreateTableGenerator.add_type_method
 * Sequel::Dataset.def_mutation_method
+* Sequel::Dataset.def_sql_method
 * Sequel::Model::Plugins.def_dataset_methods
 * Sequel.def_adapter_method (private)
 * Sequel::SQL::Expression.to_s_method (private)
@@ -79,7 +80,7 @@ in which case Sequel automatically literalizes the input:
 Sequel generally treats ruby strings as SQL strings (escaping them correctly), and
 not as raw SQL.  However, you can convert a ruby string to a literal string, and
 Sequel will then treat it as raw SQL.  This is typically done through String#lit
-if the {core_extensions}[link:files/doc/core_extensions_rdoc.html] are in use,
+if the {core_extensions}[rdoc-ref:doc/core_extensions.rdoc] are in use,
 or Sequel.lit[rdoc-ref:Sequel::SQL::Builders#lit] if they are not in use.
 
   'a'.lit
@@ -126,7 +127,7 @@ Note that for that type of query, Sequel generally encourages the following form
   DB[:table].where{|o| o.name > params[:id].to_s} # Safe
 
 Sequel's DSL supports a wide variety of SQL concepts, so it's possible to
-code most applications without every using raw SQL.
+code most applications without ever using raw SQL.
 
 A large number of dataset methods ultimately pass down their arguments to a filter
 method, even some you may not expect, so you should be careful.  At least the
@@ -168,6 +169,18 @@ Instead, you should do:
 The Sequel::Dataset#lock_style method also treats an input string 
 as SQL code. This method should not be called with user input.
 
+==== SQL Type Names
+
+In general, most places where Sequel needs to use an SQL type that should
+be specified by the user, it allows you to use a ruby string, and that
+string is used verbatim as the SQL type.  You should not use user input
+for type strings.
+
+==== SQL Function Names
+
+In most cases, Sequel does not quote SQL function names.  You should not use
+user input for function names.
+
 === SQL Identifier Injections
 
 Usually, Sequel treats ruby symbols as SQL identifiers, and ruby
@@ -308,7 +321,7 @@ practice, though being explicit on a per-call basis is still recommended:
   Album.set_allowed_columns(:name, :copies_sold)
   Album.create(params[:album]) # Only name and copies_sold set
 
-For more details on the mass assignment methods, see the {Mass Assignment Guide}[link:files/doc/mass_assignment_rdoc.html].
+For more details on the mass assignment methods, see the {Mass Assignment Guide}[rdoc-ref:doc/mass_assignment.rdoc].
 
 == General Parameter Handling
 
diff --git a/doc/sql.rdoc b/doc/sql.rdoc
index e2de1c4..dfac6c9 100644
--- a/doc/sql.rdoc
+++ b/doc/sql.rdoc
@@ -78,7 +78,7 @@ Almost everywhere in Sequel, you can drop down to literal SQL by providing a lit
   DB[:albums].select('name') # SELECT 'name' FROM albums
   DB[:albums].select(Sequel.lit('name')) # SELECT name FROM albums
 
-For a simpler way of creating literal strings, you can also use the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html], which adds the <tt>String#lit</tt> method, and other methods that integrate Sequel's DSL with the ruby language:
+For a simpler way of creating literal strings, you can also use the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc], which adds the <tt>String#lit</tt> method, and other methods that integrate Sequel's DSL with the ruby language:
 
   DB[:albums].select('name'.lit)
 
@@ -137,7 +137,7 @@ The other way to qualify an identifier is to use the <tt>Sequel.qualify</tt> wit
 
   Sequel.qualify(:table, :column) # "table"."column"
 
-Another way to generate identifiers is to use Sequel's {virtual row support}[link:files/doc/virtual_rows_rdoc.html]:
+Another way to generate identifiers is to use Sequel's {virtual row support}[rdoc-ref:doc/virtual_rows.rdoc]:
 
   DB[:albums].select{name} # SELECT "name" FROM "albums"
   DB[:albums].select{albums__name} # SELECT "albums"."name" FROM "albums"
@@ -178,11 +178,15 @@ You can also use the <tt>Sequel.as</tt> method to create an alias, and the +as+
   Sequel.as(:column, :alias) # "column" AS "alias"
   Sequel.qualify(:table, :column).as(:alias) # "table"."column" AS "alias"
 
+If you want to use a derived column list, you can provide an array of column aliases:
+
+  Sequel.as(:table, :alias, [:c1, :c2]) # "table" AS "alias"("c1", "c2")
+
 === Functions
 
 The easiest way to use SQL functions is via a virtual row:
 
-  DB[:albums].select{func{}} # SELECT func() FROM "albums"
+  DB[:albums].select{func.function} # SELECT func() FROM "albums"
   DB[:albums].select{func(col1, col2)} # SELECT func("col1", "col2") FROM "albums"
 
 You can also use the <tt>Sequel.function</tt> method on the symbol that contains the function name:
@@ -196,34 +200,43 @@ Aggregate functions work the same way as normal functions, since they share the
 
   Sequel.function(:sum, :column) # sum(column)
 
-However, if you want to use the DISTINCT modifier to an aggregate function, you either have to use literal SQL or a virtual row block:
+To use the DISTINCT modifier to an aggregate function, call the distinct method on the function:
 
-  Sequel.function(:sum, Sequel.lit('DISTINCT column')) # sum(DISTINCT column)
-  DB[:albums].select{sum(:distinct, :column){}} # SELECT sum(DISTINCT column) FROM albums
+  DB[:albums].select{sum(:column).distinct} # SELECT sum(DISTINCT column) FROM albums
 
-If you want to use the wildcard as the sole argument of the aggregate function, you again have to use literal SQL or a virtual row block:
+If you want to use the wildcard as the sole argument of the aggregate function, use the * method on the Function:
 
-  Sequel.function(:count, Sequel.lit('*')) # count(*)
-  DB[:albums].select{count(:*){}} # SELECT count(*) FROM albums
+  Sequel.function(:count).* # count(*)
+  DB[:albums].select{count.function.*} # SELECT count(*) FROM albums
 
 Note that Sequel provides helper methods for aggregate functions such as +count+, +sum+, +min+, +max+, +avg+, and +group_and_count+, which handle common uses of aggregate functions.
 
 === Window Functions
 
-If the database supports window functions, Sequel can handle them using a virtual row block:
+If the database supports window functions, Sequel can handle them by calling the over method on a Function:
 
-  DB[:albums].select{function(:over){}}
+  DB[:albums].select{function.function.over}
   # SELECT function() OVER () FROM albums
 
-  DB[:albums].select{count(:over, :*=>true){}}
+  DB[:albums].select{count.function.*.over}
   # SELECT count(*) OVER () FROM albums
 
-  DB[:albums].select{function(:over, :args=>col1, :partition=>col2, :order=>col3){}}
+  DB[:albums].select{function(:col1).over(:partition=>col2, :order=>col3)}
   # SELECT function(col1) OVER (PARTITION BY col2 ORDER BY col3) FROM albums
 
-  DB[:albums].select{function(:over, :args=>[c1, c2], :partition=>[c3, c4], :order=>[c5, c6]){}}
+  DB[:albums].select{function(c1, c2).over(:partition=>[c3, c4], :order=>[c5, c6])}
   # SELECT function(c1, c2) OVER (PARTITION BY c3, c4 ORDER BY c5, c6) FROM albums
 
+=== Schema Qualified Functions
+
+If the database supports schema qualified functions, Sequel can handle them by calling the function method on a QualifiedIdentifier:
+
+  DB[:albums].select{schema__function.function}
+  # SELECT schema.function() FROM albums
+  
+  DB[:albums].select{schema__function.function(col, 2, "a")}
+  # SELECT schema.function(col, 2, 'a') FROM albums
+
 === Equality Operator (=)
 
 Sequel uses hashes to specify equality:
@@ -443,19 +456,19 @@ Just like ruby's <tt>String#join</tt>, you can provide an argument for a string
 
 For the LIKE operator, Sequel defines the +like+ and +ilike+ methods on most Sequel-specific expression objects:
 
-  Sequel.expr(:name).like('A%') # ("name" LIKE 'A%') 
-  Sequel.expr(:name).ilike('A%') # ("name" ILIKE 'A%') 
+  Sequel.expr(:name).like('A%') # ("name" LIKE 'A%' ESCAPE '\') 
+  Sequel.expr(:name).ilike('A%') # ("name" ILIKE 'A%' ESCAPE '\') 
 
 You can also use the <tt>Sequel.like</tt> and <tt>Sequel.ilike</tt> methods:
 
-  Sequel.like(:name, 'A%') # ("name" LIKE 'A%') 
-  Sequel.ilike(:name, 'A%') # ("name" ILIKE 'A%') 
+  Sequel.like(:name, 'A%') # ("name" LIKE 'A%' ESCAPE '\') 
+  Sequel.ilike(:name, 'A%') # ("name" ILIKE 'A%' ESCAPE '\') 
 
 Note the above syntax for ilike, while Sequel's default, is specific to PostgreSQL.  However, most other adapters override the behavior.  For example, on MySQL, Sequel uses LIKE BINARY for +like+, and LIKE for +ilike+.  If the database supports both case sensitive and case insensitive LIKE, then +like+ will use a case sensitive LIKE, and +ilike+ will use a case insensitive LIKE.
 
 Inverting the LIKE operator works like other inversions:
 
-  ~Sequel.like(:name, 'A%') # ("name" NOT LIKE 'A%')
+  ~Sequel.like(:name, 'A%') # ("name" NOT LIKE 'A%' ESCAPE '\')
 
 Sequel also supports SQL regular expressions on MySQL and PostgreSQL.  You can use these by passing a ruby regular expression to +like+ or +ilike+, or by making the regular expression a hash value:
 
@@ -544,12 +557,12 @@ If you don't want to select from any FROM tables, just call dataset:
 Once you have your dataset object, you build queries by chaining methods, usually with one method per clause in the query:
 
   DB[:albums].select(:id, :name).where(Sequel.like(:name, 'A%')).order(:name)
-  # SELECT id, name FROM albums WHERE (name LIKE 'A%') ORDER BY name
+  # SELECT id, name FROM albums WHERE (name LIKE 'A%' ESCAPE '\') ORDER BY name
 
 Note that the order of your method chain is not usually important unless you have multiple methods that affect the same clause:
 
   DB[:albums].order(:name).where(Sequel.like(:name, 'A%')).select(:id, :name)
-  # SELECT id, name FROM albums WHERE (name LIKE 'A%') ORDER BY name
+  # SELECT id, name FROM albums WHERE (name LIKE 'A%' ESCAPE '\') ORDER BY name
 
 === Using the Same Dataset for SELECT, INSERT, UPDATE, and DELETE
 
@@ -573,4 +586,4 @@ Note how +update+ and +delete+ used the +where+ argument, but that +insert+ did
 
 === Methods Used for Each SQL Clause
 
-To see which methods exist that affect each SQL clause, see the {"Dataset Basics" guide}[link:files/doc/dataset_basics_rdoc.html].
+To see which methods exist that affect each SQL clause, see the {"Dataset Basics" guide}[rdoc-ref:doc/dataset_basics.rdoc].
diff --git a/doc/testing.rdoc b/doc/testing.rdoc
index c81a10e..33814f5 100644
--- a/doc/testing.rdoc
+++ b/doc/testing.rdoc
@@ -11,7 +11,7 @@ These run each test in its own transaction, the recommended way to test.
   class Spec::Example::ExampleGroup
     def execute(*args, &block)
       result = nil
-      Sequel::Model.db.transaction(:rollback=>:always){result = super(*args, &block)}
+      Sequel::Model.db.transaction(:rollback=>:always, :auto_savepoint=>true){result = super(*args, &block)}
       result
     end
   end
@@ -24,17 +24,17 @@ These run each test in its own transaction, the recommended way to test.
     def self.inherited(subclass)
       super
       subclass.around do |example|
-        Sequel::Model.db.transaction(:rollback=>:always){example.call}
+        Sequel::Model.db.transaction(:rollback=>:always, :auto_savepoint=>true){example.call}
       end
     end
   end
 
-=== RSpec 2, >=2.8
+=== RSpec >=2.8
 
   # Global around filters should work
   RSpec.configure do |c|
     c.around(:each) do |example|
-      DB.transaction(:rollback=>:always){example.run}
+      DB.transaction(:rollback=>:always, :auto_savepoint=>true){example.run}
     end
   end
 
@@ -44,7 +44,7 @@ These run each test in its own transaction, the recommended way to test.
   class SequelTestCase < Test::Unit::TestCase
     def run(*args, &block)
       result = nil
-      Sequel::Model.db.transaction(:rollback=>:always){result = super}
+      Sequel::Model.db.transaction(:rollback=>:always, :auto_savepoint=>true){result = super}
       result
     end
   end
@@ -55,7 +55,7 @@ These run each test in its own transaction, the recommended way to test.
 
     def run(*args, &block)
       result = nil
-      Sequel::Model.db.transaction(:rollback => :always) do
+      Sequel::Model.db.transaction(:rollback => :always, :auto_savepoint=>true) do
         result = _original_run(*args, &block)
       end
       result
@@ -69,7 +69,7 @@ These run each test in its own transaction, the recommended way to test.
   class SequelTestCase < MiniTest::Unit::TestCase
     def run(*args, &block)
       result = nil
-      Sequel::Model.db.transaction(:rollback=>:always){result = super}
+      Sequel::Model.db.transaction(:rollback=>:always, :auto_savepoint=>true){result = super}
       result
     end
   end
@@ -80,7 +80,7 @@ These run each test in its own transaction, the recommended way to test.
 
     def run(*args, &block)
       result = nil
-      Sequel::Model.db.transaction(:rollback => :always) do
+      Sequel::Model.db.transaction(:rollback => :always, :auto_savepoint=>true) do
         result = _original_run(*args, &block)
       end
       result
@@ -126,7 +126,7 @@ The order in which you delete/truncate the tables is important if you are using
 
 = Testing Sequel Itself
 
-Sequel has multiple separate test suites.  All test suites run under either RSpec 1 or RSpec 2.
+Sequel has multiple separate test suites.  All test suites run under rspec >=1.3.
 
 == rake spec
 
diff --git a/doc/thread_safety.rdoc b/doc/thread_safety.rdoc
index d2b6a2e..96eac14 100644
--- a/doc/thread_safety.rdoc
+++ b/doc/thread_safety.rdoc
@@ -4,7 +4,7 @@ Most Sequel usage (and all common Sequel usage) is thread safe by default.  Spec
 
 == Connection Pool
 
-In order to allow multiple threads to operate on the same database at the same time, Sequel uses a connection pool.  The connection pool is designed so that a thread uses a connection for the minimum amount of time, returning the connection to the pool as soon as it is done using the connection.  If a thread requests a connection and the pool does not have an available connection, a new connection will be created.  If the maximum number of connections in the pool has already been reached [...]
+In order to allow multiple threads to operate on the same database at the same time, Sequel uses a connection pool.  The connection pool is designed so that a thread uses a connection for the minimum amount of time, returning the connection to the pool as soon as it is done using the connection.  If a thread requests a connection and the pool does not have an available connection, a new connection will be created.  If the maximum number of connections in the pool has already been reached [...]
 
 == Exceptions
 
diff --git a/doc/transactions.rdoc b/doc/transactions.rdoc
index 81011f8..1856885 100644
--- a/doc/transactions.rdoc
+++ b/doc/transactions.rdoc
@@ -91,6 +91,14 @@ You can use the <tt>:savepoint => true</tt> option in the inner transaction to e
     end # RELEASE SAVEPOINT
   end # COMMIT
 
+You can use the <tt>:auto_savepoint => true</tt> option in the outer transaction to automatically use a savepoint in the inner transaction (if the database supports it):
+
+  DB.transaction(:auto_savepoint => true) do # BEGIN
+    DB.transaction do # SAVEPOINT
+      DB[:foo].insert(1) # INSERT
+    end # RELEASE SAVEPOINT
+  end # COMMIT
+
 If a Sequel::Rollback exception is raised inside the savepoint block, it will only rollback to the savepoint:
 
   DB.transaction do # BEGIN
diff --git a/doc/validations.rdoc b/doc/validations.rdoc
index 60abcd3..003663c 100644
--- a/doc/validations.rdoc
+++ b/doc/validations.rdoc
@@ -245,7 +245,7 @@ These methods check that the specified attributes can be valid integers or valid
 
 === +validates_schema_types+
 
-+validates_schema_types+ uses the database metadata for the model's table to determine which ruby type(s) should be used for the given database type, and calls +validates_type+ with that ruby type.  It's designed to be used with the <tt>raise_on_typecast_failure = false</tt> setting (the default starting in Sequel 4).  <tt>raise_on_typecast_failure = false</tt, Sequel attempts to typecast values, but silently ignores any errors raised:
++validates_schema_types+ uses the database metadata for the model's table to determine which ruby type(s) should be used for the given database type, and calls +validates_type+ with that ruby type.  It's designed to be used with the <tt>raise_on_typecast_failure = false</tt> setting (the default starting in Sequel 4).  With <tt>raise_on_typecast_failure = false</tt>, Sequel attempts to typecast values, but silently ignores any errors raised:
 
   Album.raise_on_typecast_failure = false
   album = Album.new
@@ -294,12 +294,20 @@ You can mix and match the two approaches.  For example, if all albums should hav
 
 If you provide a block, it is called with the dataset to use for the uniqueness check, which you can then filter to scope the uniqueness validation to a subset of the model's dataset.
 
-Additionally, you can also include an optional options hash as the last argument.  Unlike the other validations, the options hash for +validates_unique+ only checks for two options:
+You can also include an options hash as the last argument.  Unlike the other validations, the options hash for +validates_unique+ only recognizes these options:
 
+:dataset :: The base dataset to use for the unique query, defaults to the model's dataset
 :message :: The message to use
 :only_if_modified :: Only check the uniqueness if the object is new or one of the columns has been modified.
+:where :: A callable object whose call method takes three arguments: a dataset,
+          the current object, and an array of columns.  It should return
+          a modified dataset filtered to include only rows with the same
+          values as the current object for each column in the array.
+          This is useful any time the unique constraint is on an expression
+          derived from the columns rather than on the columns themselves
+          (such as a unique constraint on lower(column)), as shown in the
+          example below.
 
-+validates_unique+ is the only method in +validation_helpers+ that checks with the database.  Attempting to validate uniqueness outside of the database suffers from a race condition, so any time you want to add a uniqueness validation, you should make sure to add a uniqueness constraint or unique index on the underlying database table.  See the {"Migrations and Schema Modification" guide}[link:files/doc/migration_rdoc.html] for details on how to do that.  
++validates_unique+ is the only method in +validation_helpers+ that checks with the database.  Attempting to validate uniqueness outside of the database suffers from a race condition, so any time you want to add a uniqueness validation, you should make sure to add a uniqueness constraint or unique index on the underlying database table.  See the {"Migrations and Schema Modification" guide}[rdoc-ref:doc/migration.rdoc] for details on how to do that.  
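As a sketch of the :where option described above (the model, column, and index are purely illustrative), a case-insensitive uniqueness check backed by a unique index on lower(email) could look like:

  class Account < Sequel::Model
    def validate
      super
      validates_unique(:email, :where=>(proc do |ds, obj, cols|
        ds.where(cols.map do |c|
          v = obj.send(c)
          v = v.downcase if v
          [Sequel.function(:lower, c), v]
        end)
      end))
    end
  end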
 
 == +validation_helpers+ Options
 
@@ -502,6 +510,14 @@ Here, you don't care about validating the release date if there were validation
   album.errors.full_messages
   # => ["name cannot be empty"]
   
+Note that the column names stored in the errors appear verbatim in the error messages.  If you want full control over the error messages, you can use +add+ with a literal string:
+
+  errors.add(:name, Sequel.lit("Album name is not valid"))
+  errors.full_messages
+  # => ["Album name is not valid"]
+
+Alternatively, feel free to override Sequel::Model::Errors#full_messages.  As long as it returns an array of strings, overriding it is completely safe.
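A minimal sketch of such an override (Errors is a Hash subclass keyed by attribute, so flattening the values yields just the message strings; note this reopens the class globally):

  class Sequel::Model::Errors
    # Return the raw error messages, without column names prepended.
    def full_messages
      values.flatten
    end
  end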
+
 === +count+
 
 +count+ returns the total number of error messages in the errors.
diff --git a/doc/virtual_rows.rdoc b/doc/virtual_rows.rdoc
index 17f5201..1444833 100644
--- a/doc/virtual_rows.rdoc
+++ b/doc/virtual_rows.rdoc
@@ -68,7 +68,7 @@ inside the proc.  If that doesn't make sense, maybe this example will help:
   # WHERE c > (a - 32)
   
 There are two related differences here.  First is the usage of <tt>o.c</tt> vs +c+,
-and second is the difference between the the use of +a+.  In the regular proc,
+and second is the difference between the use of +a+.  In the regular proc,
 you couldn't call +c+ without an explicit receiver in the proc, unless the self of the
 surrounding scope responded to it.  For +a+, note how ruby calls the method on
 the receiver of the surrounding scope in the regular proc, which returns an integer,
@@ -95,8 +95,8 @@ local variable access.  This is mostly useful in instance_evaled procs:
 
 == VirtualRow Methods
 
-VirtualRow is a class that returns SQL::Identifiers, SQL::QualifiedIdentifiers,
-SQL::Functions, or SQL::WindowFunctions depending on how it is called.
+VirtualRow is a class that returns SQL::Identifiers, SQL::QualifiedIdentifiers, or
+SQL::Functions depending on how it is called.
 
 == SQL::Identifiers - Regular columns
 
@@ -140,55 +140,47 @@ your function call:
   ds.where{function(1, a) > 1}
   # WHERE function(1, a) > 1
 
-If the SQL function does not accept any arguments, you need to provide an empty
-block to the method to distinguish it from a call that will produce an
-SQL::Identifier:
+If the SQL function does not accept any arguments, create an identifier, then
+call the function method on it to produce a function:
 
-  ds.select{|o| o.version{}}
-  ds.select{version{}}
+  ds.select{|o| o.version.function}
+  ds.select{version.function}
   # SELECT version()
   
-To use the SQL wildcard (*) as the sole argument in a function call (most often
-used with the count function), you should provide :* as the sole argument to
-the method, and provide an empty block to the method:
+To use the SQL wildcard (*) as the sole argument in a function call, create a
+function without arguments, then call the * method on the function:
   
-  ds.select{|o| o.count(:*){}}
-  ds.select{count(:*){}}
+  ds.select{|o| o.count.function.*}
+  ds.select{count.function.*}
   # SELECT count(*)
 
-To append the DISTINCT keyword before the method arguments, you need to make
-:distinct the first argument of the method call, and provide an empty block to
-the method:
+To append the DISTINCT keyword before the method arguments, just call the
+distinct method on the returned Function:
 
-  ds.select{|o| o.count(:distinct, o.col1){}}
-  ds.select{count(:distinct, col1){}}
+  ds.select{|o| o.count(o.col1).distinct}
+  ds.select{count(col1).distinct}
   # SELECT count(DISTINCT col1)
   
-To use multiple columns with the DISTINCT keyword, use multiple arguments in
-the method call:
-
-  ds.select{|o| o.count(:distinct, o.col1, o.col2){}}
-  ds.select{count(:distinct, col1, col2){}}
+  ds.select{|o| o.count(o.col1, o.col2).distinct}
+  ds.select{count(col1, col2).distinct}
   # SELECT count(DISTINCT col1, col2)
   
-== SQL::WindowFunctions - SQL window function calls
+== SQL::Functions with windows - SQL window function calls
 
-SQL::WindowFunctions can be thought of as calls to SQL window functions.  Not
-all databases support them, but they are very helpful for certain types of
-queries.  To use them, you need to make :over the first argument of the method
-call, with an optional hash as the second argument, and provide an empty block
-to the method. Here are some examples of use:
+Not all databases support window functions, but they are very helpful for certain types of
+queries.  To use them, you should just call the over method on the Function
+object returned, with the options for the window:
 
-  ds.select{|o| o.rank(:over){}}
-  ds.select{rank(:over){}}
+  ds.select{|o| o.rank.function.over}
+  ds.select{rank.function.over}
   # SELECT rank() OVER ()
   
-  ds.select{|o| o.count(:over, :*=>true){}}
-  ds.select{count(:over, :*=>true){}}
+  ds.select{|o| o.count.function.*.over}
+  ds.select{count.function.*.over}
   # SELECT count(*) OVER ()
   
-  ds.select{|o| o.sum(:over, :args=>o.col1, :partition=>o.col2, :order=>o.col3){}}
-  ds.select{sum(:over, :args=>col1, :partition=>col2, :order=>col3){}}
+  ds.select{|o| o.sum(o.col1).over(:partition=>o.col2, :order=>o.col3)}
+  ds.select{sum(col1).over(:partition=>col2, :order=>col3)}
   # SELECT sum(col1) OVER (PARTITION BY col2 ORDER BY col3)
 
 == Operators
diff --git a/lib/sequel/adapters/db2.rb b/lib/sequel/adapters/db2.rb
index 6edf82e..f1dc77e 100644
--- a/lib/sequel/adapters/db2.rb
+++ b/lib/sequel/adapters/db2.rb
@@ -31,7 +31,6 @@ module Sequel
       DB2CLI::SQL_TYPE_TIME => tt.method(:time),
       DB2CLI::SQL_DECIMAL => ::BigDecimal.method(:new)
     }
-    DB2_TYPES[DB2CLI::SQL_CLOB] = DB2_TYPES[DB2CLI::SQL_BLOB]
 
     class Database < Sequel::Database
       include DatabaseMethods
@@ -217,6 +216,8 @@ module Sequel
           name, buflen, datatype, size, digits, nullable = db.checked_error("Could not describe column"){DB2CLI.SQLDescribeCol(sth, i, MAX_COL_SIZE)}
           pr = if datatype == DB2CLI::SQL_SMALLINT && convert && size <= 5 && digits <= 1
             cps[:boolean]
+          elsif datatype == DB2CLI::SQL_CLOB && Sequel::DB2.use_clob_as_blob
+            cps[DB2CLI::SQL_BLOB]
           else
             cps[datatype]
           end
diff --git a/lib/sequel/adapters/ibmdb.rb b/lib/sequel/adapters/ibmdb.rb
index 015fcbe..9e7f13f 100644
--- a/lib/sequel/adapters/ibmdb.rb
+++ b/lib/sequel/adapters/ibmdb.rb
@@ -25,7 +25,6 @@ module Sequel
       :time => ::Sequel.method(:string_to_time),
       :date => ::Sequel.method(:string_to_date)
     }
-    DB2_TYPES[:clob] = DB2_TYPES[:blob]
 
     # Wraps an underlying connection to DB2 using IBM_DB.
     class Connection
@@ -44,8 +43,13 @@ module Sequel
       end
 
       # Create the underlying IBM_DB connection.
-      def initialize(connection_string)
-        @conn = IBM_DB.connect(connection_string, '', '')
+      def initialize(connection_param)
+        @conn = if connection_param.class == String
+          IBM_DB.connect(connection_param, '', '')
+        else  # connect using catalog 
+          IBM_DB.connect(*connection_param)
+        end
+
         self.autocommit = true
         @prepared_statements = {}
       end
@@ -194,19 +198,22 @@ module Sequel
       # Create a new connection object for the given server.
       def connect(server)
         opts = server_opts(server)
-        
-        # use uncataloged connection so that host and port can be supported
-        connection_string = ( \
-            'Driver={IBM DB2 ODBC DRIVER};' \
-            "Database=#{opts[:database]};" \
-            "Hostname=#{opts[:host]};" \
-            "Port=#{opts[:port] || 50000};" \
-            'Protocol=TCPIP;' \
-            "Uid=#{opts[:user]};" \
-            "Pwd=#{opts[:password]};" \
-        )
 
-        Connection.new(connection_string)
+        connection_params = if opts[:host].nil? && opts[:port].nil? && opts[:database]
+          # use a cataloged connection
+          opts.values_at(:database, :user, :password)
+        else
+          # use uncataloged connection so that host and port can be supported
+          'Driver={IBM DB2 ODBC DRIVER};' \
+          "Database=#{opts[:database]};" \
+          "Hostname=#{opts[:host]};" \
+          "Port=#{opts[:port] || 50000};" \
+          'Protocol=TCPIP;' \
+          "Uid=#{opts[:user]};" \
+          "Pwd=#{opts[:password]};" \
+        end 
+
+        Connection.new(connection_params)
       end
 
       # Execute the given SQL on the database.
@@ -433,9 +440,10 @@ module Sequel
           stmt.num_fields.times do |i|
             k = stmt.field_name i
             key = output_identifier(k)
-            type = stmt.field_type(k).downcase.to_sym
+            type = stmt.field_type(i).downcase.to_sym
             # decide if it is a smallint from precision
-            type = :boolean  if type ==:int && convert && stmt.field_precision(k) < 8
+            type = :boolean  if type == :int && convert && stmt.field_precision(i) < 8
+            type = :blob if type == :clob && Sequel::DB2.use_clob_as_blob
             columns << [key, cps[type]]
           end
           cols = columns.map{|c| c.at(0)}
diff --git a/lib/sequel/adapters/jdbc.rb b/lib/sequel/adapters/jdbc.rb
index d5f39d1..bc02c34 100644
--- a/lib/sequel/adapters/jdbc.rb
+++ b/lib/sequel/adapters/jdbc.rb
@@ -143,6 +143,25 @@ module Sequel
         db.extend(Sequel::JDBC::Cubrid::DatabaseMethods)
         db.extend_datasets Sequel::Cubrid::DatasetMethods
         Java::cubrid.jdbc.driver.CUBRIDDriver
+      end,
+      :sqlanywhere=>proc do |db|
+        drv = [
+          lambda{Java::sybase.jdbc4.sqlanywhere.IDriver},
+          lambda{Java::ianywhere.ml.jdbcodbc.jdbc4.IDriver},
+          lambda{Java::sybase.jdbc.sqlanywhere.IDriver},
+          lambda{Java::ianywhere.ml.jdbcodbc.jdbc.IDriver},
+          lambda{Java::com.sybase.jdbc4.jdbc.Sybdriver},
+          lambda{Java::com.sybase.jdbc3.jdbc.Sybdriver}
+        ].each do |class_proc|
+          begin
+            break class_proc.call
+          rescue NameError
+          end
+        end
+        Sequel.require 'adapters/jdbc/sqlanywhere'
+        db.extend(Sequel::JDBC::SqlAnywhere::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::SqlAnywhere::Dataset
+        drv
       end
     }
     
@@ -161,6 +180,86 @@ module Sequel
       end
     end
 
+    class TypeConvertor
+      %w'Boolean Float Double Int Long Short'.each do |meth|
+        class_eval("def #{meth}(r, i) v = r.get#{meth}(i); v unless r.wasNull end", __FILE__, __LINE__)
+      end
+      %w'Object Array String Time Date Timestamp BigDecimal Blob Bytes Clob'.each do |meth|
+        class_eval("def #{meth}(r, i) r.get#{meth}(i) end", __FILE__, __LINE__)
+      end
+      def RubyTime(r, i)
+        if v = r.getTime(i)
+          Sequel.string_to_time("#{v.to_string}.#{sprintf('%03i', v.getTime.divmod(1000).last)}")
+        end
+      end
+      def RubyDate(r, i)
+        if v = r.getDate(i)
+          Date.civil(v.getYear + 1900, v.getMonth + 1, v.getDate)
+        end
+      end
+      def RubyTimestamp(r, i)
+        if v = r.getTimestamp(i)
+          Sequel.database_to_application_timestamp([v.getYear + 1900, v.getMonth + 1, v.getDate, v.getHours, v.getMinutes, v.getSeconds, v.getNanos])
+        end
+      end
+      def RubyBigDecimal(r, i)
+        if v = r.getBigDecimal(i)
+          BigDecimal.new(v.to_string)
+        end
+      end
+      def RubyBlob(r, i)
+        if v = r.getBytes(i)
+          Sequel::SQL::Blob.new(String.from_java_bytes(v))
+        end
+      end
+      def RubyClob(r, i)
+        if v = r.getClob(i)
+          v.getSubString(1, v.length)
+        end
+      end
+
+      INSTANCE = new
+      o = INSTANCE
+      MAP = Hash.new(o.method(:Object))
+      types = Java::JavaSQL::Types
+
+      {
+        :ARRAY => :Array,
+        :BOOLEAN => :Boolean,
+        :CHAR => :String,
+        :DOUBLE => :Double,
+        :FLOAT => :Double,
+        :INTEGER => :Int,
+        :LONGNVARCHAR => :String,
+        :LONGVARCHAR => :String,
+        :NCHAR => :String,
+        :REAL => :Float,
+        :SMALLINT => :Short,
+        :TINYINT => :Short,
+        :VARCHAR => :String,
+      }.each do |type, meth|
+        MAP[types.const_get(type)] = o.method(meth) 
+      end
+      BASIC_MAP = MAP.dup
+
+      {
+        :BINARY => :Blob,
+        :BLOB => :Blob,
+        :CLOB => :Clob,
+        :DATE => :Date,
+        :DECIMAL => :BigDecimal,
+        :LONGVARBINARY => :Blob,
+        :NCLOB => :Clob,
+        :NUMERIC => :BigDecimal,
+        :TIME => :Time,
+        :TIMESTAMP => :Timestamp,
+        :VARBINARY => :Blob,
+      }.each do |type, meth|
+        BASIC_MAP[types.const_get(type)] = o.method(meth) 
+        MAP[types.const_get(type)] = o.method(:"Ruby#{meth}") 
+      end
+    end
+
     # JDBC Databases offer a fairly uniform interface that does not change
     # much based on the sub adapter.
     class Database < Sequel::Database
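Roughly, each entry in these maps ties a java.sql.Types constant to a callable that is handed the JDBC ResultSet and a 1-based column index when a row is fetched.  A sketch of a single lookup (result_set here stands in for a live JDBC ResultSet and is not defined by this code):

  conv = Sequel::JDBC::TypeConvertor::MAP[Java::JavaSQL::Types::DATE]
  # conv is TypeConvertor::INSTANCE.method(:RubyDate); calling it converts
  # the column value to a ruby Date, or nil for SQL NULL.
  value = conv.call(result_set, 1)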
@@ -169,7 +268,7 @@ module Sequel
       # The type of database we are connecting to
       attr_reader :database_type
       
-      # The Java database driver we are using
+      # The Java database driver we are using (should be a Java class)
       attr_reader :driver
       
       # Whether to convert some Java types to ruby types when retrieving rows.
@@ -177,6 +276,16 @@ module Sequel
       # fetching rows.
       attr_accessor :convert_types
 
+      # The fetch size to use for JDBC Statement objects created by this database.
+      # By default, this is nil so a fetch size is not set explicitly.
+      attr_accessor :fetch_size
+
+      # Map of JDBC type ids to callable objects that return appropriate ruby values.
+      attr_reader :type_convertor_map
+
+      # Map of JDBC type ids to callable objects that return appropriate ruby or java values.
+      attr_reader :basic_type_convertor_map
+
       # Execute the given stored procedure with the give name. If a block is
       # given, the stored procedure should return rows.
       def call_sproc(name, opts = OPTS)
@@ -258,6 +367,9 @@ module Sequel
         synchronize(opts[:server]) do |conn|
           statement(conn) do |stmt|
             if block
+              if size = fetch_size
+                stmt.setFetchSize(size)
+              end
               yield log_yield(sql){stmt.executeQuery(sql)}
             else
               case opts[:type]
@@ -286,15 +398,31 @@ module Sequel
       def execute_insert(sql, opts=OPTS)
         execute(sql, {:type=>:insert}.merge(opts))
       end
-      
+
+      # Use the JDBC metadata to get a list of foreign keys for the table.
+      def foreign_key_list(table, opts=OPTS)
+        m = output_identifier_meth
+        schema, table = metadata_schema_and_table(table, opts)
+        foreign_keys = {}
+        metadata(:getImportedKeys, nil, schema, table) do |r|
+          if fk = foreign_keys[r[:fk_name]]
+            fk[:columns] << [r[:key_seq], m.call(r[:fkcolumn_name])]
+            fk[:key] << [r[:key_seq], m.call(r[:pkcolumn_name])]
+          elsif r[:fk_name]
+            foreign_keys[r[:fk_name]] = {:name=>m.call(r[:fk_name]), :columns=>[[r[:key_seq], m.call(r[:fkcolumn_name])]], :table=>m.call(r[:pktable_name]), :key=>[[r[:key_seq], m.call(r[:pkcolumn_name])]]}
+          end
+        end
+        foreign_keys.values.each do |fk|
+          [:columns, :key].each do |k|
+            fk[k] = fk[k].sort.map{|_, v| v}
+          end
+        end
+      end
+
       # Use the JDBC metadata to get the index information for the table.
       def indexes(table, opts=OPTS)
         m = output_identifier_meth
-        im = input_identifier_meth
-        schema, table = schema_and_table(table)
-        schema ||= opts[:schema]
-        schema = im.call(schema) if schema
-        table = im.call(table)
+        schema, table = metadata_schema_and_table(table, opts)
         indexes = {}
         metadata(:getIndexInfo, nil, schema, table, false, true) do |r|
           next unless name = r[:column_name]
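For illustration, the structure returned by foreign_key_list looks like this (table, column, and constraint names are hypothetical):

  DB.foreign_key_list(:albums)
  # => [{:name=>:albums_artist_id_fkey, :columns=>[:artist_id],
  #      :table=>:artists, :key=>[:id]}]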
@@ -339,14 +467,19 @@ module Sequel
       def adapter_initialize
         @connection_prepared_statements = {}
         @connection_prepared_statements_mutex = Mutex.new
+        @fetch_size = @opts[:fetch_size] ? typecast_value_integer(@opts[:fetch_size]) : default_fetch_size
         @convert_types = typecast_value_boolean(@opts.fetch(:convert_types, true))
         raise(Error, "No connection string specified") unless uri
         
         resolved_uri = jndi? ? get_uri_from_jndi : uri
 
-        if match = /\Ajdbc:([^:]+)/.match(resolved_uri) and prok = DATABASE_SETUP[match[1].to_sym]
-          @driver = prok.call(self)
+        @driver = if (match = /\Ajdbc:([^:]+)/.match(resolved_uri)) && (prok = DATABASE_SETUP[match[1].to_sym])
+          prok.call(self)
+        else
+          @opts[:driver]
         end        
+
+        setup_type_convertor_map
       end
       
       # Yield the native prepared statements hash for the given connection
@@ -402,6 +535,9 @@ module Sequel
           else
             log_yield("CLOSE #{name}"){cps[1].close} if cps
             cps = log_yield("PREPARE#{" #{name}:" if name} #{sql}"){prepare_jdbc_statement(conn, sql, opts)}
+            if size = fetch_size
+              cps.setFetchSize(size)
+            end
             cps_sync(conn){|cpsh| cpsh[name] = [sql, cps]} if name
           end
           i = 0
@@ -443,6 +579,12 @@ module Sequel
       def execute_statement_insert(stmt, sql)
         stmt.executeUpdate(sql)
       end
+
+      # The default fetch size to use for statements.  Nil by default, so that the
+      # default for the JDBC driver is used.
+      def default_fetch_size
+        nil
+      end
       
       # Gets the connection from JNDI.
       def get_connection_from_jndi
@@ -462,7 +604,10 @@ module Sequel
       def get_tables(type, opts)
         ts = []
         m = output_identifier_meth
-        metadata(:getTables, nil, nil, nil, [type].to_java(:string)){|h| ts << m.call(h[:table_name])}
+        if schema = opts[:schema]
+          schema = schema.to_s
+        end
+        metadata(:getTables, nil, schema, nil, [type].to_java(:string)){|h| ts << m.call(h[:table_name])}
         ts
       end
 
@@ -511,6 +656,16 @@ module Sequel
         end
       end
 
+      # Return the schema and table suitable for use with metadata queries.
+      def metadata_schema_and_table(table, opts)
+        im = input_identifier_meth(opts[:dataset])
+        schema, table = schema_and_table(table)
+        schema ||= opts[:schema]
+        schema = im.call(schema) if schema
+        table = im.call(table)
+        [schema, table]
+      end
+      
       # Created a JDBC prepared statement on the connection with the given SQL.
       def prepare_jdbc_statement(conn, sql, opts)
         conn.prepareStatement(sql)
@@ -563,12 +718,8 @@ module Sequel
       # Parse the table schema for the given table.
       def schema_parse_table(table, opts=OPTS)
         m = output_identifier_meth(opts[:dataset])
-        im = input_identifier_meth(opts[:dataset])
         ds = dataset
-        schema, table = schema_and_table(table)
-        schema ||= opts[:schema]
-        schema = im.call(schema) if schema
-        table = im.call(table)
+        schema, table = metadata_schema_and_table(table, opts)
         pks, ts = [], []
         metadata(:getPrimaryKeys, nil, schema, table) do |h|
           next if schema_parse_table_skip?(h, schema)
@@ -596,6 +747,11 @@ module Sequel
         h[:table_schem] == 'INFORMATION_SCHEMA'
       end
 
+      def setup_type_convertor_map
+        @type_convertor_map = TypeConvertor::MAP.merge(Java::JavaSQL::Types::TIMESTAMP=>timestamp_convertor)
+        @basic_type_convertor_map = TypeConvertor::BASIC_MAP
+      end
+
       # Yield a new statement object, and ensure that it is closed before returning.
       def statement(conn)
         stmt = conn.createStatement
@@ -605,6 +761,16 @@ module Sequel
       ensure
         stmt.close if stmt
       end
+
+      # A conversion proc for timestamp columns.  This is used to make sure timestamps are converted using the
+      # correct timezone.
+      def timestamp_convertor
+        lambda do |r, i|
+          if v = r.getTimestamp(i)
+            to_application_timestamp([v.getYear + 1900, v.getMonth + 1, v.getDate, v.getHours, v.getMinutes, v.getSeconds, v.getNanos])
+          end
+        end
+      end
     end
     
     class Dataset < Sequel::Dataset
@@ -683,170 +849,67 @@ module Sequel
         end
         ps
       end
+
+      # Set the fetch size on JDBC ResultSets created from this dataset.
+      def with_fetch_size(size)
+        clone(:fetch_size=>size)
+      end
       
       private
 
-      # Cache Java class constants to speed up lookups
-      JAVA_SQL_TIMESTAMP    = Java::JavaSQL::Timestamp
-      JAVA_SQL_TIME         = Java::JavaSQL::Time
-      JAVA_SQL_DATE         = Java::JavaSQL::Date
-      JAVA_SQL_BLOB         = Java::JavaSQL::Blob
-      JAVA_SQL_CLOB         = Java::JavaSQL::Clob
-      JAVA_BUFFERED_READER  = Java::JavaIo::BufferedReader
-      JAVA_BIG_DECIMAL      = Java::JavaMath::BigDecimal
-      JAVA_BYTE_ARRAY       = Java::byte[]
-      JAVA_UUID             = Java::JavaUtil::UUID
-      JAVA_HASH_MAP         = Java::JavaUtil::HashMap
-
-      # Handle type conversions for common Java types.
-      class TYPE_TRANSLATOR
-        LF = "\n".freeze
-        def time(v) Sequel.string_to_time("#{v.to_string}.#{sprintf('%03i', v.getTime.divmod(1000).last)}") end
-        def date(v) Date.civil(v.getYear + 1900, v.getMonth + 1, v.getDate) end
-        def decimal(v) BigDecimal.new(v.to_string) end
-        def byte_array(v) Sequel::SQL::Blob.new(String.from_java_bytes(v)) end
-        def blob(v) Sequel::SQL::Blob.new(String.from_java_bytes(v.getBytes(1, v.length))) end
-        def clob(v) v.getSubString(1, v.length) end
-        def buffered_reader(v)
-          lines = ""
-          c = false
-          while(line = v.read_line) do
-            lines << LF if c
-            lines << line
-            c ||= true
-          end
-          lines
-        end
-        def uuid(v) v.to_string end
-        def hash_map(v) v.to_hash end
-      end
-      TYPE_TRANSLATOR_INSTANCE = tt = TYPE_TRANSLATOR.new
-
-      # Cache type translator methods so that duplicate Method
-      # objects are not created.
-      DECIMAL_METHOD = tt.method(:decimal)
-      TIME_METHOD = tt.method(:time)
-      DATE_METHOD = tt.method(:date)
-      BUFFERED_READER_METHOD = tt.method(:buffered_reader)
-      BYTE_ARRAY_METHOD = tt.method(:byte_array)
-      BLOB_METHOD = tt.method(:blob)
-      CLOB_METHOD = tt.method(:clob)
-      UUID_METHOD = tt.method(:uuid)
-      HASH_MAP_METHOD = tt.method(:hash_map)
-
-      # Convert the given Java timestamp to an instance of Sequel.datetime_class.
-      def convert_type_timestamp(v)
-        db.to_application_timestamp([v.getYear + 1900, v.getMonth + 1, v.getDate, v.getHours, v.getMinutes, v.getSeconds, v.getNanos])
-      end
-
-      # Return a callable object that will convert any value of <tt>v</tt>'s
-      # class to a ruby object.  If no callable object can handle <tt>v</tt>'s
-      # class, return false so that the negative lookup is cached.
-      def convert_type_proc(v)
-        case v
-        when JAVA_BIG_DECIMAL
-          DECIMAL_METHOD
-        when JAVA_SQL_TIMESTAMP
-          method(:convert_type_timestamp)
-        when JAVA_SQL_TIME
-          TIME_METHOD
-        when JAVA_SQL_DATE
-          DATE_METHOD
-        when JAVA_BUFFERED_READER
-          BUFFERED_READER_METHOD
-        when JAVA_BYTE_ARRAY
-          BYTE_ARRAY_METHOD
-        when JAVA_SQL_BLOB
-          BLOB_METHOD
-        when JAVA_SQL_CLOB
-          CLOB_METHOD
-        when JAVA_UUID
-          UUID_METHOD
-        when JAVA_HASH_MAP
-          HASH_MAP_METHOD
-        else
-          false
-        end
+      # Whether we should convert Java types to ruby types for this dataset.
+      def convert_types?
+        ct = @convert_types
+        ct.nil? ? db.convert_types : ct
       end
-      
+
       # Extend the dataset with the JDBC stored procedure methods.
       def prepare_extend_sproc(ds)
         ds.extend(StoredProcedureMethods)
       end
-      
+
+      # The type conversion proc to use for the given column number i,
+      # given the type conversion map and the ResultSetMetaData.
+      def type_convertor(map, meta, type, i)
+        map[type]
+      end
+
+      # The basic type conversion proc to use for the given column number i,
+      # given the type conversion map and the ResultSetMetaData.
+      #
+      # This is implemented as a separate method so that subclasses can
+      # override the methods separately.
+      def basic_type_convertor(map, meta, type, i)
+        map[type]
+      end
+
       # Split out from fetch rows to allow processing of JDBC result sets
       # that don't come from issuing an SQL string.
-      def process_result_set(result, &block)
-        # get column names
+      def process_result_set(result)
         meta = result.getMetaData
+        if fetch_size = opts[:fetch_size]
+          result.setFetchSize(fetch_size)
+        end
         cols = []
         i = 0
-        meta.getColumnCount.times{cols << [output_identifier(meta.getColumnLabel(i+=1)), i]}
-        columns = cols.map{|c| c.at(0)}
-        @columns = columns
-        ct = @convert_types
-        if (ct.nil? ? db.convert_types : ct)
-          cols.each{|c| c << nil}
-          process_result_set_convert(cols, result, &block)
-        else
-          process_result_set_no_convert(cols, result, &block)
-        end
-      ensure
-        result.close
-      end
+        convert = convert_types?
+        map = convert ? db.type_convertor_map : db.basic_type_convertor_map
 
-      # Use conversion procs to convert data retrieved
-      # from the database.  This has been optimized, the algorithm it uses
-      # is roughly, for each column value in each row:
-      # * check if the value is truthy (not false/nil)
-      # * if not truthy, return object
-      # * otherwise, see if a conversion method exists for
-      #   the column.  All columns start with a nil conversion proc,
-      #   since unlike other adapters, Sequel doesn't get the type of
-      #   the column when parsing the column metadata.
-      # * if a conversion proc is not false/nil, call it with the object
-      #   and return the result.
-      # * if a conversion proc has already been looked up and doesn't
-      #   exist (false value), return object.  
-      # * if a conversion proc hasn't been looked up yet (nil value),
-      #   call convert_type_proc to get the conversion method.  Cache
-      #   the result of as the column's conversion proc to speed up
-      #   later processing.  If the conversion proc exists, call it
-      #   and return the result, otherwise, return the object.
-      def process_result_set_convert(cols, result)
-        while result.next
-          row = {}
-          cols.each do |n, i, p|
-            v = result.getObject(i)
-            row[n] = if v
-              if p
-                p.call(v)
-              elsif p.nil?
-                cols[i-1][2] = p = convert_type_proc(v)
-                if p
-                  p.call(v)
-                else
-                  v
-                end
-              else
-                v
-              end
-            else
-              v
-            end
-          end
-          yield row
+        meta.getColumnCount.times do
+          i += 1
+          cols << [output_identifier(meta.getColumnLabel(i)), i, convert ? type_convertor(map, meta, meta.getColumnType(i), i) : basic_type_convertor(map, meta, meta.getColumnType(i), i)]
         end
-      end
+        @columns = cols.map{|c| c.at(0)}
 
-      # Yield rows without calling any conversion procs.  This
-      # may yield Java values and not ruby values.
-      def process_result_set_no_convert(cols, result)
         while result.next
           row = {}
-          cols.each{|n, i| row[n] = result.getObject(i)}
+          cols.each do |n, i, pr|
+            row[n] = pr.call(result, i)
+          end
           yield row
         end
+      ensure
+        result.close
       end
     end
   end
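To show how the new fetch size support is used (the connection URL and table name are only placeholders):

  # Set a default fetch size for all JDBC statements created by this Database:
  DB = Sequel.connect('jdbc:postgresql://localhost/my_db', :fetch_size=>1000)

  # Or override it for a single dataset:
  DB[:large_table].with_fetch_size(100).each do |row|
    # hints the JDBC driver to fetch rows from the server 100 at a time
  end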
diff --git a/lib/sequel/adapters/jdbc/db2.rb b/lib/sequel/adapters/jdbc/db2.rb
index 8113d48..68ba6a1 100644
--- a/lib/sequel/adapters/jdbc/db2.rb
+++ b/lib/sequel/adapters/jdbc/db2.rb
@@ -3,6 +3,16 @@ Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    class TypeConvertor
+      def DB2Clob(r, i)
+        if v = r.getClob(i)
+          v = v.getSubString(1, v.length)
+          v = Sequel::SQL::Blob.new(v) if ::Sequel::DB2::use_clob_as_blob
+          v
+        end
+      end
+    end
+
     class Database
       # Alias the generic JDBC versions so they can be called directly later
       alias jdbc_schema_parse_table schema_parse_table
@@ -32,7 +42,11 @@ module Sequel
         def set_ps_arg(cps, arg, i)
           case arg
           when Sequel::SQL::Blob
-            cps.setString(i, arg)
+            if ::Sequel::DB2.use_clob_as_blob
+              cps.setString(i, arg)
+            else
+              super
+            end
           else
             super
           end
@@ -51,28 +65,17 @@ module Sequel
         def primary_key_index_re
           PRIMARY_KEY_INDEX_RE
         end
+
+        def setup_type_convertor_map
+          super
+          map = @type_convertor_map
+          types = Java::JavaSQL::Types
+          map[types::NCLOB] = map[types::CLOB] = TypeConvertor::INSTANCE.method(:DB2Clob)
+        end
       end
 
       class Dataset < JDBC::Dataset
         include Sequel::DB2::DatasetMethods
-
-        class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR
-          def db2_clob(v) Sequel::SQL::Blob.new(v.getSubString(1, v.length)) end
-        end
-
-        DB2_CLOB_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:db2_clob)
-      
-        private
-
-        # Return clob as blob if use_clob_as_blob is true
-        def convert_type_proc(v)
-          case v
-          when JAVA_SQL_CLOB
-            ::Sequel::DB2::use_clob_as_blob ? DB2_CLOB_METHOD : super
-          else
-            super
-          end
-        end
       end
     end
   end
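The CLOB handling above is keyed off a module-level flag on the shared DB2 adapter.  A minimal sketch of toggling it, assuming the accessor used in the code above:

  # Return DB2 CLOB values as plain ruby strings:
  Sequel::DB2.use_clob_as_blob = false
  # When the flag is set, CLOB values come back as Sequel::SQL::Blob instances.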
diff --git a/lib/sequel/adapters/jdbc/derby.rb b/lib/sequel/adapters/jdbc/derby.rb
index d404088..3e5562c 100644
--- a/lib/sequel/adapters/jdbc/derby.rb
+++ b/lib/sequel/adapters/jdbc/derby.rb
@@ -173,8 +173,6 @@ module Sequel
         PAREN_OPEN = Dataset::PAREN_OPEN
         OFFSET = Dataset::OFFSET
         CAST_STRING_OPEN = "RTRIM(".freeze
-        BITCOMP_OPEN = "((0 - ".freeze
-        BITCOMP_CLOSE = ") - 1)".freeze
         BLOB_OPEN = "CAST(X'".freeze
         BLOB_CLOSE = "' AS BLOB)".freeze
         HSTAR = "H*".freeze
@@ -187,7 +185,6 @@ module Sequel
         BOOL_FALSE_OLD = '(1 = 0)'.freeze
         BOOL_TRUE = 'TRUE'.freeze
         BOOL_FALSE = 'FALSE'.freeze
-        SELECT_CLAUSE_METHODS = clause_methods(:select, %w'select distinct columns from join where group having compounds order limit lock')
         EMULATED_FUNCTION_MAP = {:char_length=>'length'.freeze}
 
         # Derby doesn't support an expression between CASE and WHEN,
@@ -212,14 +209,10 @@ module Sequel
 
         def complex_expression_sql_append(sql, op, args)
           case op
-          when :%
-            sql << complex_expression_arg_pairs(args){|a, b| "MOD(#{literal(a)}, #{literal(b)})"}
+          when :%, :'B~'
+            complex_expression_emulate_append(sql, op, args)
           when :&, :|, :^, :<<, :>>
             raise Error, "Derby doesn't support the #{op} operator"
-          when :'B~'
-            sql << BITCOMP_OPEN
-            literal_append(sql, args.at(0))
-            sql << BITCOMP_CLOSE
           when :extract
             sql << args.at(0).to_s << PAREN_OPEN
             literal_append(sql, args.at(1))
@@ -246,23 +239,10 @@ module Sequel
 
         private
 
-        JAVA_SQL_CLOB         = Java::JavaSQL::Clob
-
-        class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR
-          def derby_clob(v) v.getSubString(1, v.length) end
+        def empty_from_sql
+          DEFAULT_FROM
         end
 
-        DERBY_CLOB_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:derby_clob)
-      
-        # Handle clobs on Derby as strings.
-        def convert_type_proc(v)
-          if v.is_a?(JAVA_SQL_CLOB)
-            DERBY_CLOB_METHOD
-          else
-            super
-          end
-        end
-        
         # Derby needs a hex string casted to BLOB for blobs.
         def literal_blob_append(sql, v)
           sql << BLOB_OPEN << v.unpack(HSTAR).first << BLOB_CLOSE
@@ -299,18 +279,9 @@ module Sequel
           end
         end
 
-        # Derby doesn't support common table expressions.
-        def select_clause_methods
-          SELECT_CLAUSE_METHODS
-        end
-
-        # Use a default FROM table if the dataset does not contain a FROM table.
-        def select_from_sql(sql)
-          if @opts[:from]
-            super
-          else
-            sql << DEFAULT_FROM
-          end
+        # Derby supports multiple rows in INSERT.
+        def multi_insert_sql_strategy
+          :values
         end
 
         # Offset comes before limit in Derby
diff --git a/lib/sequel/adapters/jdbc/h2.rb b/lib/sequel/adapters/jdbc/h2.rb
index e88966d..4503654 100644
--- a/lib/sequel/adapters/jdbc/h2.rb
+++ b/lib/sequel/adapters/jdbc/h2.rb
@@ -9,8 +9,8 @@ module Sequel
       
         # Commit an existing prepared transaction with the given transaction
         # identifier string.
-        def commit_prepared_transaction(transaction_id)
-          run("COMMIT TRANSACTION #{transaction_id}")
+        def commit_prepared_transaction(transaction_id, opts=OPTS)
+          run("COMMIT TRANSACTION #{transaction_id}", opts)
         end
 
         # H2 uses the :h2 database type.
@@ -20,8 +20,8 @@ module Sequel
 
         # Rollback an existing prepared transaction with the given transaction
         # identifier string.
-        def rollback_prepared_transaction(transaction_id)
-          run("ROLLBACK TRANSACTION #{transaction_id}")
+        def rollback_prepared_transaction(transaction_id, opts=OPTS)
+          run("ROLLBACK TRANSACTION #{transaction_id}", opts)
         end
 
         # H2 uses an IDENTITY type
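These methods back Sequel's generic prepared (two-phase) transaction API.  A minimal sketch, with an in-memory H2 URL and a made-up transaction id:

  DB = Sequel.connect('jdbc:h2:mem:')
  DB.transaction(:prepare=>'tx1') do
    DB[:items].insert(:name=>'a')
  end
  # Later, finish or abandon the prepared transaction:
  DB.commit_prepared_transaction('tx1')
  # or: DB.rollback_prepared_transaction('tx1')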
@@ -49,7 +49,7 @@ module Sequel
         # If the :prepare option is given and we aren't in a savepoint,
         # prepare the transaction for a two-phase commit.
         def commit_transaction(conn, opts=OPTS)
-          if (s = opts[:prepare]) && _trans(conn)[:savepoint_level] <= 1
+          if (s = opts[:prepare]) && savepoint_level(conn) <= 1
             log_connection_execute(conn, "PREPARE COMMIT #{s}")
           else
             super
@@ -62,7 +62,7 @@ module Sequel
           when :add_column
             if (pk = op.delete(:primary_key)) || (ref = op.delete(:table))
               sqls = [super(table, op)]
-              sqls << "ALTER TABLE #{quote_schema_table(table)} ADD PRIMARY KEY (#{quote_identifier(op[:name])})" if pk
+              sqls << "ALTER TABLE #{quote_schema_table(table)} ADD PRIMARY KEY (#{quote_identifier(op[:name])})" if pk && op[:type] != :identity
               if ref
                 op[:table] = ref
                 sqls << "ALTER TABLE #{quote_schema_table(table)} ADD FOREIGN KEY (#{quote_identifier(op[:name])}) #{column_references_sql(op)}"
@@ -142,35 +142,29 @@ module Sequel
       
       # Dataset class for H2 datasets accessed via JDBC.
       class Dataset < JDBC::Dataset
-        SELECT_CLAUSE_METHODS = clause_methods(:select, %w'select distinct columns from join where group having compounds order limit')
-        BITWISE_METHOD_MAP = {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR}
         APOS = Dataset::APOS
         HSTAR = "H*".freeze
-        BITCOMP_OPEN = "((0 - ".freeze
-        BITCOMP_CLOSE = ") - 1)".freeze
         ILIKE_PLACEHOLDER = ["CAST(".freeze, " AS VARCHAR_IGNORECASE)".freeze].freeze
         TIME_FORMAT = "'%H:%M:%S'".freeze
-        
+        ONLY_OFFSET = " LIMIT -1 OFFSET ".freeze
+
         # Emulate the case insensitive LIKE operator and the bitwise operators.
         def complex_expression_sql_append(sql, op, args)
           case op
           when :ILIKE, :"NOT ILIKE"
             super(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), [SQL::PlaceholderLiteralString.new(ILIKE_PLACEHOLDER, [args.at(0)]), args.at(1)])
-          when :&, :|, :^
-            sql << complex_expression_arg_pairs(args){|a, b| literal(SQL::Function.new(BITWISE_METHOD_MAP[op], a, b))}
-          when :<<
-            sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * POWER(2, #{literal(b)}))"}
-          when :>>
-            sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / POWER(2, #{literal(b)}))"}
-          when :'B~'
-            sql << BITCOMP_OPEN
-            literal_append(sql, args.at(0))
-            sql << BITCOMP_CLOSE
+          when :&, :|, :^, :<<, :>>, :'B~'
+            complex_expression_emulate_append(sql, op, args)
           else
             super
           end
         end
         
+        # H2 does not support derived column lists
+        def supports_derived_column_lists?
+          false
+        end
+
         # H2 requires SQL standard datetimes
         def requires_sql_standard_datetimes?
           true
@@ -193,23 +187,6 @@ module Sequel
 
         private
 
-        #JAVA_H2_CLOB = Java::OrgH2Jdbc::JdbcClob
-
-        class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR
-          def h2_clob(v) v.getSubString(1, v.length) end
-        end
-
-        H2_CLOB_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:h2_clob)
-      
-        # Handle H2 specific clobs as strings.
-        def convert_type_proc(v)
-          if v.is_a?(Java::OrgH2Jdbc::JdbcClob)
-            H2_CLOB_METHOD
-          else
-            super
-          end
-        end
-        
         # H2 expects hexadecimal strings for blob values
         def literal_blob_append(sql, v)
           sql << APOS << v.unpack(HSTAR).first << APOS
@@ -220,8 +197,19 @@ module Sequel
           v.strftime(TIME_FORMAT)
         end
 
-        def select_clause_methods
-          SELECT_CLAUSE_METHODS
+        # H2 supports multiple rows in INSERT.
+        def multi_insert_sql_strategy
+          :values
+        end
+
+        def select_only_offset_sql(sql)
+          sql << ONLY_OFFSET
+          literal_append(sql, @opts[:offset])
+        end
+
+        # H2 supports quoted function names.
+        def supports_quoted_function_names?
+          true
         end
       end
     end
diff --git a/lib/sequel/adapters/jdbc/hsqldb.rb b/lib/sequel/adapters/jdbc/hsqldb.rb
index d25d576..e814440 100644
--- a/lib/sequel/adapters/jdbc/hsqldb.rb
+++ b/lib/sequel/adapters/jdbc/hsqldb.rb
@@ -32,11 +32,23 @@ module Sequel
           end
         end
         
+        # HSQLDB supports DROP TABLE IF EXISTS
+        def supports_drop_table_if_exists?
+          true
+        end
+
         private
         
         # HSQLDB specific SQL for renaming columns, and changing column types and/or nullity.
         def alter_table_sql(table, op)
           case op[:op]
+          when :add_column
+            if op[:table]
+              [super(table, op.merge(:table=>nil)),
+               alter_table_sql(table, op.merge(:op=>:add_constraint, :type=>:foreign_key, :name=>op[:foreign_key_name], :columns=>[op[:name]], :table=>op[:table]))]
+            else
+              super
+            end
           when :rename_column
             "ALTER TABLE #{quote_schema_table(table)} ALTER COLUMN #{quote_identifier(op[:name])} RENAME TO #{quote_identifier(op[:new_name])}"
           when :set_column_type
@@ -64,6 +76,16 @@ module Sequel
           DATABASE_ERROR_REGEXPS
         end
 
+        # IF EXISTS comes after table name on HSQLDB
+        def drop_table_sql(name, options)
+          "DROP TABLE #{quote_schema_table(name)}#{' IF EXISTS' if options[:if_exists]}#{' CASCADE' if options[:cascade]}"
+        end
+        
+        # IF EXISTS comes after view name on HSQLDB
+        def drop_view_sql(name, options)
+          "DROP VIEW #{quote_schema_table(name)}#{' IF EXISTS' if options[:if_exists]}#{' CASCADE' if options[:cascade]}"
+        end
+
         # Use IDENTITY() to get the last inserted id.
         def last_insert_id(conn, opts=OPTS)
           statement(conn) do |stmt|
@@ -100,25 +122,21 @@ module Sequel
         def uses_clob_for_text?
           true
         end
+
+        # HSQLDB supports views with check option.
+        def view_with_check_option_support
+          :local
+        end
       end
       
       # Dataset class for HSQLDB datasets accessed via JDBC.
       class Dataset < JDBC::Dataset
-        BITWISE_METHOD_MAP = {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR}
         BOOL_TRUE = 'TRUE'.freeze
         BOOL_FALSE = 'FALSE'.freeze
-        # HSQLDB does support common table expressions, but the support is broken.
-        # CTEs operate more like temprorary tables or views, lasting longer than the duration of the expression.
-        # CTEs in earlier queries might take precedence over CTEs with the same name in later queries.
-        # Also, if any CTE is recursive, all CTEs must be recursive.
-        # If you want to use CTEs with HSQLDB, you'll have to manually modify the dataset to allow it.
-        SELECT_CLAUSE_METHODS = clause_methods(:select, %w'select distinct columns from join where group having compounds order limit lock')
         SQL_WITH_RECURSIVE = "WITH RECURSIVE ".freeze
         APOS = Dataset::APOS
         HSTAR = "H*".freeze
         BLOB_OPEN = "X'".freeze
-        BITCOMP_OPEN = "((0 - ".freeze
-        BITCOMP_CLOSE = ") - 1)".freeze
         DEFAULT_FROM = " FROM (VALUES (0))".freeze
         TIME_FORMAT = "'%H:%M:%S'".freeze
 
@@ -127,19 +145,8 @@ module Sequel
           case op
           when :ILIKE, :"NOT ILIKE"
             super(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|v| SQL::Function.new(:ucase, v)})
-          when :&, :|, :^
-            op = BITWISE_METHOD_MAP[op]
-            sql << complex_expression_arg_pairs(args){|a, b| literal(SQL::Function.new(op, a, b))}
-          when :<<
-            sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * POWER(2, #{literal(b)}))"}
-          when :>>
-            sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / POWER(2, #{literal(b)}))"}
-          when :%
-            sql << complex_expression_arg_pairs(args){|a, b| "MOD(#{literal(a)}, #{literal(b)})"}
-          when :'B~'
-            sql << BITCOMP_OPEN
-            literal_append(sql, args.at(0))
-            sql << BITCOMP_CLOSE
+          when :&, :|, :^, :%, :<<, :>>, :'B~'
+            complex_expression_emulate_append(sql, op, args)
           else
             super
           end
@@ -155,6 +162,15 @@ module Sequel
           true
         end
 
+        # HSQLDB does support common table expressions, but the support is broken.
+        # CTEs operate more like temporary tables or views, lasting longer than the duration of the expression.
+        # CTEs in earlier queries might take precedence over CTEs with the same name in later queries.
+        # Also, if any CTE is recursive, all CTEs must be recursive.
+        # If you want to use CTEs with HSQLDB, you'll have to manually modify the dataset to allow it.
+        def supports_cte?(type=:select)
+          false
+        end
+
         # HSQLDB does not support IS TRUE.
         def supports_is_true?
           false
@@ -167,6 +183,10 @@ module Sequel
 
         private
 
+        def empty_from_sql
+          DEFAULT_FROM
+        end
+        
         # Use string in hex format for blob data.
         def literal_blob_append(sql, v)
           sql << BLOB_OPEN << v.unpack(HSTAR).first << APOS
@@ -187,20 +207,11 @@ module Sequel
           BOOL_TRUE
         end
 
-        # HSQLDB does not support CTEs well enough for Sequel to enable support for them.
-        def select_clause_methods
-          SELECT_CLAUSE_METHODS
+        # HSQLDB supports multiple rows in INSERT.
+        def multi_insert_sql_strategy
+          :values
         end
 
-        # Use a default FROM table if the dataset does not contain a FROM table.
-        def select_from_sql(sql)
-          if @opts[:from]
-            super
-          else
-            sql << DEFAULT_FROM
-          end
-        end
-        
         # Use WITH RECURSIVE instead of WITH if any of the CTEs is recursive
         def select_with_sql_base
           opts[:with].any?{|w| w[:recursive]} ? SQL_WITH_RECURSIVE : super
diff --git a/lib/sequel/adapters/jdbc/jtds.rb b/lib/sequel/adapters/jdbc/jtds.rb
index bf23b74..23122ae 100644
--- a/lib/sequel/adapters/jdbc/jtds.rb
+++ b/lib/sequel/adapters/jdbc/jtds.rb
@@ -25,21 +25,6 @@ module Sequel
       # Dataset class for JTDS datasets accessed via JDBC.
       class Dataset < JDBC::Dataset
         include Sequel::MSSQL::DatasetMethods
-
-        class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR
-          def jtds_clob(v) v.getSubString(1, v.length) end
-        end
-
-        JTDS_CLOB_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:jtds_clob)
-      
-        # Handle CLOB types retrieved via JTDS.
-        def convert_type_proc(v)
-          if v.is_a?(Java::NetSourceforgeJtdsJdbc::ClobImpl)
-            JTDS_CLOB_METHOD
-          else
-            super
-          end
-        end
       end
     end
   end
diff --git a/lib/sequel/adapters/jdbc/oracle.rb b/lib/sequel/adapters/jdbc/oracle.rb
index fe537c7..6d3eeeb 100644
--- a/lib/sequel/adapters/jdbc/oracle.rb
+++ b/lib/sequel/adapters/jdbc/oracle.rb
@@ -3,6 +3,21 @@ Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    class TypeConvertor
+      JAVA_BIG_DECIMAL_CONSTRUCTOR = java.math.BigDecimal.java_class.constructor(Java::long).method(:new_instance)
+
+      def OracleDecimal(r, i)
+        if v = r.getBigDecimal(i)
+          i = v.long_value
+          if v == JAVA_BIG_DECIMAL_CONSTRUCTOR.call(i)
+            i
+          else
+            BigDecimal.new(v.to_string)
+          end
+        end
+      end 
+    end
+
     # Database and Dataset support for Oracle databases accessed via JDBC.
     module Oracle
       # Instance methods for Oracle Database objects accessed via JDBC.
@@ -31,6 +46,11 @@ module Sequel
           super || exception.message =~ /\AClosed Connection/
         end
 
+        # Default the fetch size for statements to 100, similar to the oci8-based oracle adapter.
+        def default_fetch_size
+          100
+        end
+        
         def last_insert_id(conn, opts)
           unless sequence = opts[:sequence]
             if t = opts[:table]
@@ -76,50 +96,31 @@ module Sequel
         def supports_releasing_savepoints?
           false
         end
+
+        def setup_type_convertor_map
+          super
+          @type_convertor_map[:OracleDecimal] = TypeConvertor::INSTANCE.method(:OracleDecimal)
+        end
       end
       
       # Dataset class for Oracle datasets accessed via JDBC.
       class Dataset < JDBC::Dataset
         include Sequel::Oracle::DatasetMethods
 
-        private
+        NUMERIC_TYPE = Java::JavaSQL::Types::NUMERIC
+        TIMESTAMP_TYPE = Java::JavaSQL::Types::TIMESTAMP
+        TIMESTAMPTZ_TYPES = [Java::oracle.jdbc.OracleTypes::TIMESTAMPTZ, Java::oracle.jdbc.OracleTypes::TIMESTAMPLTZ]
 
-        JAVA_BIG_DECIMAL = ::Sequel::JDBC::Dataset::JAVA_BIG_DECIMAL
-        JAVA_BIG_DECIMAL_CONSTRUCTOR = java.math.BigDecimal.java_class.constructor(Java::long).method(:new_instance)
-
-        class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR
-          def oracle_decimal(v)
-            if v.scale == 0
-              i = v.long_value
-              if v.equals(JAVA_BIG_DECIMAL_CONSTRUCTOR.call(i))
-                i
-              else
-                decimal(v)
-              end
+        def type_convertor(map, meta, type, i)
+          case type
+          when NUMERIC_TYPE
+            if meta.getScale(i) == 0
+              map[:OracleDecimal]
             else
-              decimal(v)
+              super
             end
-          end
-        end
-
-        ORACLE_DECIMAL_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:oracle_decimal)
-
-        def convert_type_oracle_timestamp(v)
-          db.to_application_timestamp(v.to_string)
-        end
-      
-        def convert_type_oracle_timestamptz(v)
-          convert_type_oracle_timestamp(db.synchronize(@opts[:server]){|c| v.timestampValue(c)})
-        end
-      
-        def convert_type_proc(v)
-          case v
-          when JAVA_BIG_DECIMAL
-            ORACLE_DECIMAL_METHOD
-          when Java::OracleSql::TIMESTAMPTZ
-            method(:convert_type_oracle_timestamptz)
-          when Java::OracleSql::TIMESTAMP
-            method(:convert_type_oracle_timestamp)
+          when *TIMESTAMPTZ_TYPES
+            map[TIMESTAMP_TYPE]
           else
             super
           end
diff --git a/lib/sequel/adapters/jdbc/postgresql.rb b/lib/sequel/adapters/jdbc/postgresql.rb
index 572980d..a609991 100644
--- a/lib/sequel/adapters/jdbc/postgresql.rb
+++ b/lib/sequel/adapters/jdbc/postgresql.rb
@@ -4,6 +4,26 @@ module Sequel
   Postgres::CONVERTED_EXCEPTIONS << NativeException
   
   module JDBC
+    class TypeConvertor
+      # Return PostgreSQL array types as ruby Arrays instead of
+      # JDBC PostgreSQL driver-specific array type. Only used if the
+      # database does not have a conversion proc for the type.
+      def RubyPGArray(r, i)
+        if v = r.getArray(i)
+          v.array.to_ary
+        end
+      end 
+
+      # Return PostgreSQL hstore types as ruby Hashes instead of
+      # Java HashMaps.  Only used if the database does not have a
+      # conversion proc for the type.
+      def RubyPGHstore(r, i)
+        if v = r.getObject(i)
+          v.to_hash
+        end
+      end 
+    end
+
     # Adapter, Database, and Dataset support for accessing a PostgreSQL
     # database via JDBC.
     module Postgres
@@ -12,7 +32,7 @@ module Sequel
       module DatabaseMethods
         extend Sequel::Database::ResetIdentifierMangling
         include Sequel::Postgres::DatabaseMethods
-        
+
         # Add the primary_keys and primary_key_sequences instance variables,
         # so we can get the correct return values for inserted rows.
         def self.extended(db)
@@ -81,8 +101,34 @@ module Sequel
           end
         end
 
+        def oid_convertor_proc(oid)
+          if (conv = Sequel.synchronize{@oid_convertor_map[oid]}).nil?
+            conv = if pr = conversion_procs[oid]
+              lambda do |r, i|
+                if v = r.getString(i)
+                  pr.call(v)
+                end
+              end
+            else
+              false
+            end
+             Sequel.synchronize{@oid_convertor_map[oid] = conv}
+          end
+          conv
+        end
+
         private
         
+        # Clear oid convertor map cache when conversion procs are updated.
+        def conversion_procs_updated
+          super
+          Sequel.synchronize{@oid_convertor_map = {}}
+        end
+
+        def disconnect_error?(exception, opts)
+          super || exception.message =~ /\AThis connection has been closed\.\z|\AFATAL: terminating connection due to administrator command\z/
+        end
+
         # Use setNull for nil arguments as the default behavior of setString
         # with nil doesn't appear to work correctly on PostgreSQL.
         def set_ps_arg_nil(cps, i)
@@ -97,6 +143,13 @@ module Sequel
           end
           conn
         end
+
+        def setup_type_convertor_map
+          super
+          @oid_convertor_map = {}
+          @type_convertor_map[:RubyPGArray] = TypeConvertor::INSTANCE.method(:RubyPGArray)
+          @type_convertor_map[:RubyPGHstore] = TypeConvertor::INSTANCE.method(:RubyPGHstore)
+        end
       end
       
       # Dataset subclass used for datasets that connect to PostgreSQL via JDBC.
@@ -104,53 +157,6 @@ module Sequel
         include Sequel::Postgres::DatasetMethods
         APOS = Dataset::APOS
         
-        class ::Sequel::JDBC::Dataset::TYPE_TRANSLATOR
-          # Convert Java::OrgPostgresqlUtil::PGobject to ruby strings
-          def pg_object(v)
-            v.to_string
-          end
-        end
-
-        # Handle conversions of PostgreSQL array instances
-        class PGArrayConverter
-          # Set the method that will return the correct conversion
-          # proc for elements of this array.
-          def initialize(meth)
-            @conversion_proc_method = meth
-            @conversion_proc = nil
-          end
-          
-          # Convert Java::OrgPostgresqlJdbc4::Jdbc4Array to ruby arrays
-          def call(v)
-            _pg_array(v.array)
-          end
-
-          private
-
-          # Handle multi-dimensional Java arrays by recursively mapping them
-          # to ruby arrays of ruby values.
-          def _pg_array(v)
-            v.to_ary.map do |i|
-              if i.respond_to?(:to_ary)
-                _pg_array(i)
-              elsif i
-                if @conversion_proc.nil?
-                  @conversion_proc = @conversion_proc_method.call(i)
-                end
-                if @conversion_proc
-                  @conversion_proc.call(i)
-                else
-                  i
-                end
-              else
-                i
-              end
-            end
-          end
-        end
-
-        PG_OBJECT_METHOD = TYPE_TRANSLATOR_INSTANCE.method(:pg_object)
-      
         # Add the shared PostgreSQL prepared statement methods
         def prepare(type, name=nil, *values)
           ps = to_prepared_statement(type, values)
@@ -165,24 +171,35 @@ module Sequel
 
         private
         
-        # Handle PostgreSQL array and object types. Object types are just
-        # turned into strings, similarly to how the native adapter treats
-        # the types.
-        def convert_type_proc(v)
-          case v
-          when Java::OrgPostgresqlJdbc4::Jdbc4Array
-            PGArrayConverter.new(method(:convert_type_proc))
-          when Java::OrgPostgresqlUtil::PGobject
-            PG_OBJECT_METHOD
-          else
-            super
-          end
-        end
-        
         # Literalize strings similar to the native postgres adapter
         def literal_string_append(sql, v)
           sql << APOS << db.synchronize(@opts[:server]){|c| c.escape_string(v)} << APOS
         end
+
+        STRING_TYPE = Java::JavaSQL::Types::VARCHAR
+        ARRAY_TYPE = Java::JavaSQL::Types::ARRAY
+        PG_SPECIFIC_TYPES = [ARRAY_TYPE, Java::JavaSQL::Types::OTHER, Java::JavaSQL::Types::STRUCT]
+        HSTORE_TYPE = 'hstore'.freeze
+
+        def type_convertor(map, meta, type, i)
+          case type
+          when *PG_SPECIFIC_TYPES
+            oid = meta.field(i).oid
+            if pr = db.oid_convertor_proc(oid)
+              pr
+            elsif type == ARRAY_TYPE
+              map[:RubyPGArray]
+            elsif oid == 2950 # UUID
+              map[STRING_TYPE]
+            elsif meta.getPGType(i) == HSTORE_TYPE
+              map[:RubyPGHstore]
+            else
+              super
+            end
+          else
+            super
+          end
+        end
       end
     end
   end
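
The oid_convertor_proc hunk above builds the per-row conversion lambda once per
PostgreSQL type OID and caches it (with false marking "no conversion"), so the
lookup cost is paid per column type rather than per row.  A minimal standalone
sketch of the same memoize-under-a-mutex pattern, assuming a hypothetical
build_convertor lookup that is not part of Sequel:

    require 'thread'

    class OidCache
      def initialize
        @mutex = Mutex.new
        @map = {}                      # oid => conversion proc, or false for "none"
      end

      # Return a cached conversion proc for the OID, or nil when none applies.
      def convertor(oid)
        unless conv = @mutex.synchronize{@map[oid]}
          conv = build_convertor(oid) || false
          @mutex.synchronize{@map[oid] = conv}
        end
        conv || nil
      end

      private

      # Hypothetical stand-in for Sequel's conversion_procs lookup.
      def build_convertor(oid)
        lambda{|v| Integer(v)} if oid == 23   # e.g. int4
      end
    end

    cache = OidCache.new
    p cache.convertor(23).call("42")   # => 42
    p cache.convertor(25)              # => nil (cached internally as false)
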
diff --git a/lib/sequel/adapters/jdbc/sqlanywhere.rb b/lib/sequel/adapters/jdbc/sqlanywhere.rb
new file mode 100644
index 0000000..f5ecf04
--- /dev/null
+++ b/lib/sequel/adapters/jdbc/sqlanywhere.rb
@@ -0,0 +1,59 @@
+Sequel.require 'adapters/shared/sqlanywhere'
+Sequel.require 'adapters/jdbc/transactions'
+
+module Sequel
+  module JDBC
+    class TypeConvertor
+      def SqlAnywhereBoolean(r, i)
+        if v = Short(r, i)
+          v != 0
+        end
+      end
+    end
+
+    module SqlAnywhere
+      # Database instance methods for Sybase databases accessed via JDBC.
+      module DatabaseMethods
+        extend Sequel::Database::ResetIdentifierMangling
+        include Sequel::SqlAnywhere::DatabaseMethods
+        include Sequel::JDBC::Transactions
+
+        LAST_INSERT_ID = 'SELECT @@IDENTITY'.freeze
+
+        private
+
+        # Get the last inserted id.
+        def last_insert_id(conn, opts=OPTS)
+          statement(conn) do |stmt|
+            sql = LAST_INSERT_ID
+            rs = log_yield(sql){stmt.executeQuery(sql)}
+            rs.next
+            rs.getInt(1)
+          end
+        end
+
+        def setup_type_convertor_map
+          super
+          @type_convertor_map[:SqlAnywhereBoolean] = TypeConvertor::INSTANCE.method(:SqlAnywhereBoolean)
+        end
+      end
+
+      # Dataset class for Sybase datasets accessed via JDBC.
+      class Dataset < JDBC::Dataset
+        include Sequel::SqlAnywhere::DatasetMethods
+
+        private
+
+        SMALLINT_TYPE = Java::JavaSQL::Types::SMALLINT
+
+        def type_convertor(map, meta, type, i)
+          if convert_smallint_to_bool && type == SMALLINT_TYPE
+            map[:SqlAnywhereBoolean]
+          else
+            super
+          end
+        end
+      end
+    end
+  end
+end
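
The new SqlAnywhereBoolean convertor above returns nil for NULL and otherwise
treats any nonzero SMALLINT as true.  A minimal sketch of that rule in plain
Ruby (smallint_to_bool is a made-up name for illustration):

    # NULL stays nil, 0 becomes false, anything else becomes true.
    def smallint_to_bool(v)
      v.nil? ? nil : v != 0
    end

    p smallint_to_bool(nil)  # => nil
    p smallint_to_bool(0)    # => false
    p smallint_to_bool(1)    # => true
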
diff --git a/lib/sequel/adapters/jdbc/sqlite.rb b/lib/sequel/adapters/jdbc/sqlite.rb
index ffad124..52266bd 100644
--- a/lib/sequel/adapters/jdbc/sqlite.rb
+++ b/lib/sequel/adapters/jdbc/sqlite.rb
@@ -59,6 +59,13 @@ module Sequel
           end
           conn
         end
+
+        # Use getLong instead of getInt for converting integers on SQLite, since SQLite does not enforce a limit of 2**32.
+        def setup_type_convertor_map
+          super
+          @type_convertor_map[Java::JavaSQL::Types::INTEGER] = @type_convertor_map[Java::JavaSQL::Types::BIGINT]
+          @basic_type_convertor_map[Java::JavaSQL::Types::INTEGER] = @basic_type_convertor_map[Java::JavaSQL::Types::BIGINT]
+        end
       end
     end
   end
diff --git a/lib/sequel/adapters/jdbc/sqlserver.rb b/lib/sequel/adapters/jdbc/sqlserver.rb
index a55fc96..cc83c2e 100644
--- a/lib/sequel/adapters/jdbc/sqlserver.rb
+++ b/lib/sequel/adapters/jdbc/sqlserver.rb
@@ -19,39 +19,19 @@ module Sequel
         # than getObject() for this column avoids the problem.
         # Reference: http://social.msdn.microsoft.com/Forums/en/sqldataaccess/thread/20df12f3-d1bf-4526-9daa-239a83a8e435
         module MetadataDatasetMethods
-          def process_result_set_convert(cols, result)
-            while result.next
-              row = {}
-              cols.each do |n, i, p|
-                v = (n == :is_autoincrement ? result.getString(i) : result.getObject(i))
-                row[n] = if v
-                  if p
-                    p.call(v)
-                  elsif p.nil?
-                    cols[i-1][2] = p = convert_type_proc(v)
-                    if p
-                      p.call(v)
-                    else
-                      v
-                    end
-                  else
-                    v
-                  end
-                else
-                  v
-                end
-              end
-              yield row
+          def type_convertor(map, meta, type, i)
+            if output_identifier(meta.getColumnLabel(i)) == :is_autoincrement
+              map[Java::JavaSQL::Types::VARCHAR]
+            else
+              super
             end
           end
 
-          def process_result_set_no_convert(cols, result)
-            while result.next
-              row = {}
-              cols.each do |n, i|
-                row[n] = (n == :is_autoincrement ? result.getString(i) : result.getObject(i))
-              end
-              yield row
+          def basic_type_convertor(map, meta, type, i)
+            if output_identifier(meta.getColumnLabel(i)) == :is_autoincrement
+              map[Java::JavaSQL::Types::VARCHAR]
+            else
+              super
             end
           end
         end
diff --git a/lib/sequel/adapters/jdbc/transactions.rb b/lib/sequel/adapters/jdbc/transactions.rb
index 065fda7..8aac247 100644
--- a/lib/sequel/adapters/jdbc/transactions.rb
+++ b/lib/sequel/adapters/jdbc/transactions.rb
@@ -47,14 +47,13 @@ module Sequel
       def begin_transaction(conn, opts=OPTS)
         if supports_savepoints?
           th = _trans(conn)
-          if sps = th[:savepoints]
+          if sps = th[:savepoint_objs]
             sps << log_yield(TRANSACTION_SAVEPOINT){conn.set_savepoint}
           else
             log_yield(TRANSACTION_BEGIN){conn.setAutoCommit(false)}
-            th[:savepoints] = []
+            th[:savepoint_objs] = []
             set_transaction_isolation(conn, opts)
           end
-          th[:savepoint_level] += 1
         else
           log_yield(TRANSACTION_BEGIN){conn.setAutoCommit(false)}
           set_transaction_isolation(conn, opts)
@@ -64,7 +63,7 @@ module Sequel
       # Use JDBC connection's commit method to commit transactions
       def commit_transaction(conn, opts=OPTS)
         if supports_savepoints?
-          sps = _trans(conn)[:savepoints]
+          sps = _trans(conn)[:savepoint_objs]
           if sps.empty?
             log_yield(TRANSACTION_COMMIT){conn.commit}
           elsif supports_releasing_savepoints?
@@ -81,7 +80,7 @@ module Sequel
           conn.setTransactionIsolation(jdbc_level)
         end
         if supports_savepoints?
-          sps = _trans(conn)[:savepoints]
+          sps = _trans(conn)[:savepoint_objs]
           conn.setAutoCommit(true) if sps.empty?
           sps.pop
         else
@@ -94,7 +93,7 @@ module Sequel
       # Use JDBC connection's rollback method to rollback transactions
       def rollback_transaction(conn, opts=OPTS)
         if supports_savepoints?
-          sps = _trans(conn)[:savepoints]
+          sps = _trans(conn)[:savepoint_objs]
           if sps.empty?
             log_yield(TRANSACTION_ROLLBACK){conn.rollback}
           else
diff --git a/lib/sequel/adapters/mock.rb b/lib/sequel/adapters/mock.rb
index 7d570ee..aec9a69 100644
--- a/lib/sequel/adapters/mock.rb
+++ b/lib/sequel/adapters/mock.rb
@@ -34,6 +34,7 @@ module Sequel
       # mock adapters for specific database types.
       SHARED_ADAPTERS = {
         'access'=>'Access',
+        'cubrid'=>'Cubrid',
         'db2'=>'DB2',
         'firebird'=>'Firebird',
         'informix'=>'Informix',
@@ -41,6 +42,7 @@ module Sequel
         'mysql'=>'MySQL',
         'oracle'=>'Oracle',
         'postgres'=>'Postgres',
+        'sqlanywhere'=>'SqlAnywhere',
         'sqlite'=>'SQLite'
       }
 
@@ -49,7 +51,7 @@ module Sequel
       SHARED_ADAPTER_SETUP = {
         'postgres' => lambda do |db|
           db.instance_eval do
-            @server_version = 90103
+            @server_version = 90400
             initialize_postgres_adapter
           end
           db.extend(Module.new do
@@ -67,9 +69,19 @@ module Sequel
             @primary_key_sequences = {}
           end
         end,
+        'mysql' => lambda do |db|
+          db.instance_eval do
+            @server_version = 50617
+          end
+        end,
         'mssql' => lambda do |db|
           db.instance_eval do
-            @server_version = 10000000
+            @server_version = 11000000
+          end
+        end,
+        'sqlite' => lambda do |db|
+          db.instance_eval do
+            @sqlite_version = 30804
           end
         end
       }
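
The entries added above let the mock adapter stand in for the cubrid and
sqlanywhere shared adapters, and bump the versions reported by the postgres,
mysql, mssql, and sqlite mocks.  A usage sketch; the exact SQL quoting in the
output may vary with identifier settings:

    require 'sequel'

    # Mock Database that loads the shared postgres adapter (no server needed).
    DB = Sequel.mock(:host=>'postgres')

    DB[:items].where(:id=>1).all     # executes nothing for real, returns []
    puts DB.sqls.last                # e.g. SELECT * FROM "items" WHERE ("id" = 1)
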
diff --git a/lib/sequel/adapters/mysql2.rb b/lib/sequel/adapters/mysql2.rb
index c441331..8ce71ae 100644
--- a/lib/sequel/adapters/mysql2.rb
+++ b/lib/sequel/adapters/mysql2.rb
@@ -144,6 +144,7 @@ module Sequel
     class Dataset < Sequel::Dataset
       include Sequel::MySQL::DatasetMethods
       include Sequel::MySQL::PreparedStatements::DatasetMethods
+      STREAMING_SUPPORTED = ::Mysql2::VERSION >= '0.3.12'
 
       Database::DatasetClass = self
 
@@ -160,9 +161,18 @@ module Sequel
         self
       end
 
+      # Use streaming to implement paging if Mysql2 supports it.
+      def paged_each(opts=OPTS, &block)
+        if STREAMING_SUPPORTED
+          stream.each(&block)
+        else
+          super
+        end
+      end
+
       # Return a clone of the dataset that will stream rows when iterating
       # over the result set, so it can handle large datasets that
-      # won't fit in memory (Requires mysql 0.3.12 to have an effect).
+      # won't fit in memory (Requires mysql 0.3.12+ to have an effect).
       def stream
         clone(:stream=>true)
       end
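
With the change above, paged_each on the mysql2 adapter uses streaming when the
installed Mysql2 gem is 0.3.12 or newer, and falls back to the generic
LIMIT/OFFSET paging otherwise.  A usage sketch; the connection string and table
are placeholders:

    require 'sequel'

    DB = Sequel.connect('mysql2://user:password@localhost/app_db')

    # Streams rows from the server instead of loading the whole result set.
    DB[:log_entries].order(:id).paged_each do |row|
      puts row[:id]
    end
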
diff --git a/lib/sequel/adapters/odbc.rb b/lib/sequel/adapters/odbc.rb
index 8a515f8..0076d8c 100644
--- a/lib/sequel/adapters/odbc.rb
+++ b/lib/sequel/adapters/odbc.rb
@@ -14,7 +14,6 @@ module Sequel
         conn = if opts.include?(:drvconnect)
           ::ODBC::Database.new.drvconnect(opts[:drvconnect])
         elsif opts.include?(:driver)
-          Deprecation.deprecate("The odbc driver's handling of the :driver option is thought to be broken and will probably be removed in the future. If you are successfully using it, please contact the developers.")
           drv = ::ODBC::Driver.new
           drv.name = 'Sequel ODBC Driver130'
           opts.each do |param, value|
@@ -128,7 +127,7 @@ module Sequel
         # ODBCColumn#mapSqlTypeToGenericType and Column#klass.
         case v
         when ::ODBC::TimeStamp
-          db.to_application_timestamp([v.year, v.month, v.day, v.hour, v.minute, v.second])
+          db.to_application_timestamp([v.year, v.month, v.day, v.hour, v.minute, v.second, v.fraction])
         when ::ODBC::Time
           Sequel::SQLTime.create(v.hour, v.minute, v.second)
         when ::ODBC::Date
diff --git a/lib/sequel/adapters/odbc/mssql.rb b/lib/sequel/adapters/odbc/mssql.rb
index 2ced7b7..51a93b6 100644
--- a/lib/sequel/adapters/odbc/mssql.rb
+++ b/lib/sequel/adapters/odbc/mssql.rb
@@ -32,10 +32,12 @@ module Sequel
       class Dataset < ODBC::Dataset
         include Sequel::MSSQL::DatasetMethods
 
+        # Use ODBC format, not Microsoft format, as the ODBC layer does
+        # some translation.  The MSSQL default is overridden to allow 3 decimal places for milliseconds.
+        TIMESTAMP_FORMAT="{ts '%Y-%m-%d %H:%M:%S%N'}".freeze
+
         private
 
-        # Use ODBC format, not Microsoft format, as the ODBC layer does
-        # some translation.
         def default_timestamp_format
           TIMESTAMP_FORMAT
         end
diff --git a/lib/sequel/adapters/openbase.rb b/lib/sequel/adapters/openbase.rb
index b4d5568..de2ca47 100644
--- a/lib/sequel/adapters/openbase.rb
+++ b/lib/sequel/adapters/openbase.rb
@@ -29,7 +29,7 @@ module Sequel
     end
     
     class Dataset < Sequel::Dataset
-      SELECT_CLAUSE_METHODS = clause_methods(:select, %w'select distinct columns from join where group having compounds order limit')
+      def_sql_method(self, :select, %w'select distinct columns from join where group having compounds order limit')
 
       Database::DatasetClass = self
       
@@ -48,12 +48,6 @@ module Sequel
         end
         self
       end
-      
-      private
-      
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
-      end
     end
   end
 end
diff --git a/lib/sequel/adapters/oracle.rb b/lib/sequel/adapters/oracle.rb
index 2f3404e..f9d9747 100644
--- a/lib/sequel/adapters/oracle.rb
+++ b/lib/sequel/adapters/oracle.rb
@@ -143,6 +143,7 @@ module Sequel
       end
 
       def database_specific_error_class(exception, opts)
+        return super unless exception.respond_to?(:code)
         case exception.code
         when 1400, 1407
           NotNullConstraintViolation
@@ -291,7 +292,7 @@ module Sequel
               :primary_key => pks.include?(column.name),
               :default => defaults[column.name],
               :oci8_type => column.data_type,
-              :db_type => column.type_string.split(' ')[0],
+              :db_type => column.type_string,
               :type_string => column.type_string,
               :charset_form => column.charset_form,
               :char_used => column.char_used?,
diff --git a/lib/sequel/adapters/postgres.rb b/lib/sequel/adapters/postgres.rb
index a9ec143..797ed7b 100644
--- a/lib/sequel/adapters/postgres.rb
+++ b/lib/sequel/adapters/postgres.rb
@@ -108,7 +108,7 @@ module Sequel
     # PGconn subclass for connection specific methods used with the
     # pg, postgres, or postgres-pr driver.
     class Adapter < ::PGconn
-      DISCONNECT_ERROR_RE = /\Acould not receive data from server/
+      DISCONNECT_ERROR_RE = /\A(?:could not receive data from server|no connection to the server|connection not open)/
       
       self.translate_results = false if respond_to?(:translate_results=)
       
@@ -618,6 +618,7 @@ module Sequel
 
       Database::DatasetClass = self
       APOS = Sequel::Dataset::APOS
+      DEFAULT_CURSOR_NAME = 'sequel_cursor'.freeze
       
       # Yield all rows returned by executing the given SQL and converting
       # the types.
@@ -626,15 +627,22 @@ module Sequel
         execute(sql){|res| yield_hash_rows(res, fetch_rows_set_cols(res)){|h| yield h}}
       end
       
+      # Use a cursor for paging.
+      def paged_each(opts=OPTS, &block)
+        use_cursor(opts).each(&block)
+      end
+
       # Uses a cursor for fetching records, instead of fetching the entire result
       # set at once.  Can be used to process large datasets without holding
-      # all rows in memory (which is what the underlying drivers do
+      # all rows in memory (which is what the underlying drivers may do
       # by default). Options:
       #
-      # * :rows_per_fetch - the number of rows per fetch (default 1000).  Higher
-      #   numbers result in fewer queries but greater memory use.
-      # * :cursor_name - the name assigned to the cursor (default 'sequel_cursor').
-      #   Nested cursors require different names.
+      # :cursor_name :: The name assigned to the cursor (default 'sequel_cursor').
+      #                 Nested cursors require different names.
+      # :hold :: Declare the cursor WITH HOLD and don't use transaction around the
+      #          cursor usage.
+      # :rows_per_fetch :: The number of rows per fetch (default 1000).  Higher
+      #                    numbers result in fewer queries but greater memory use.
       #
       # Usage:
       #
@@ -645,7 +653,19 @@ module Sequel
       # This is untested with the prepared statement/bound variable support,
       # and unlikely to work with either.
       def use_cursor(opts=OPTS)
-        clone(:cursor=>{:rows_per_fetch=>1000, :cursor_name => 'sequel_cursor'}.merge(opts))
+        clone(:cursor=>{:rows_per_fetch=>1000}.merge(opts))
+      end
+
+      # Replace the WHERE clause with one that uses CURRENT OF with the given
+      # cursor name (or the default cursor name).  This allows you to update a
+      # large dataset by updating individual rows while processing the dataset
+      # via a cursor:
+      #
+      #   DB[:huge_table].use_cursor(:rows_per_fetch=>1).each do |row|
+      #     DB[:huge_table].where_current_of.update(:column=>ruby_method(row))
+      #   end
+      def where_current_of(cursor_name=DEFAULT_CURSOR_NAME)
+        clone(:where=>Sequel.lit(['CURRENT OF '], Sequel.identifier(cursor_name)))
       end
 
       if SEQUEL_POSTGRES_USES_PG
@@ -760,11 +780,14 @@ module Sequel
       # Use a cursor to fetch groups of records at a time, yielding them to the block.
       def cursor_fetch_rows(sql)
         server_opts = {:server=>@opts[:server] || :read_only}
-        cursor_name = quote_identifier(@opts[:cursor][:cursor_name])
-        db.transaction(server_opts) do 
+        cursor = @opts[:cursor]
+        hold = cursor[:hold]
+        cursor_name = quote_identifier(cursor[:cursor_name] || DEFAULT_CURSOR_NAME)
+        rows_per_fetch = cursor[:rows_per_fetch].to_i
+
+        db.send(*(hold ? [:synchronize, server_opts[:server]] : [:transaction, server_opts])) do 
           begin
-            execute_ddl("DECLARE #{cursor_name} NO SCROLL CURSOR WITHOUT HOLD FOR #{sql}", server_opts)
-            rows_per_fetch = @opts[:cursor][:rows_per_fetch].to_i
+            execute_ddl("DECLARE #{cursor_name} NO SCROLL CURSOR WITH#{'OUT' unless hold} HOLD FOR #{sql}", server_opts)
             rows_per_fetch = 1000 if rows_per_fetch <= 0
             fetch_sql = "FETCH FORWARD #{rows_per_fetch} FROM #{cursor_name}"
             cols = nil
@@ -780,8 +803,15 @@ module Sequel
                 return if res.ntuples < rows_per_fetch
               end
             end
+          rescue Exception => e
+            raise
           ensure
-            execute_ddl("CLOSE #{cursor_name}", server_opts)
+            begin
+              execute_ddl("CLOSE #{cursor_name}", server_opts)
+            rescue
+              raise e if e
+              raise
+            end
           end
         end
       end
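
The cursor changes above add a :hold option (DECLARE ... WITH HOLD, no wrapping
transaction) and a where_current_of method for updating the row the cursor is
positioned on.  A usage sketch; the table and column names are placeholders:

    require 'sequel'

    DB = Sequel.connect('postgres://user:password@localhost/app_db')

    # Iterate with a cursor held open outside of a transaction.
    DB[:huge_table].use_cursor(:rows_per_fetch=>100, :hold=>true).each do |row|
      puts row[:id]
    end

    # Update each row as the cursor passes over it (per the doc comment above).
    DB[:huge_table].use_cursor(:rows_per_fetch=>1).each do |row|
      DB[:huge_table].where_current_of.update(:status=>'processed')
    end
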
diff --git a/lib/sequel/adapters/shared/access.rb b/lib/sequel/adapters/shared/access.rb
index 2e1c3b8..95841e8 100644
--- a/lib/sequel/adapters/shared/access.rb
+++ b/lib/sequel/adapters/shared/access.rb
@@ -88,9 +88,11 @@ module Sequel
     end
   
     module DatasetMethods
+      include(Module.new do
+        Dataset.def_sql_method(self, :select, %w'select distinct limit columns into from join where group order having compounds')
+      end)
       include EmulateOffsetWithReverseAndCount
 
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select distinct limit columns into from join where group order having compounds')
       DATE_FORMAT = '#%Y-%m-%d#'.freeze
       TIMESTAMP_FORMAT = '#%Y-%m-%d %H:%M:%S#'.freeze
       TOP = " TOP ".freeze
@@ -110,6 +112,7 @@ module Sequel
       TIME_FUNCTION = 'Time()'.freeze
       CAST_TYPES = {String=>:CStr, Integer=>:CLng, Date=>:CDate, Time=>:CDate, DateTime=>:CDate, Numeric=>:CDec, BigDecimal=>:CDec, File=>:CStr, Float=>:CDbl, TrueClass=>:CBool, FalseClass=>:CBool}
 
+      EMULATED_FUNCTION_MAP = {:char_length=>:len}
       EXTRACT_MAP = {:year=>"'yyyy'", :month=>"'m'", :day=>"'d'", :hour=>"'h'", :minute=>"'n'", :second=>"'s'"}
       COMMA = Dataset::COMMA
       DATEPART_OPEN = "datepart(".freeze
@@ -187,15 +190,6 @@ module Sequel
         clone(:from=>@opts[:from] + [table])
       end
 
-      def emulated_function_sql_append(sql, f)
-        case f.f
-        when :char_length
-          literal_append(sql, SQL::Function.new(:len, f.args.first))
-        else
-          super
-        end
-      end
-      
       # Access uses [] to escape metacharacters, instead of backslashes.
       def escape_like(string)
         string.gsub(/[\\*#?\[]/){|m| "[#{m}]"}
@@ -206,6 +200,11 @@ module Sequel
         clone(:into => table)
       end
 
+      # Access does not support derived column lists.
+      def supports_derived_column_lists?
+        false
+      end
+
       # Access doesn't support INTERSECT or EXCEPT
       def supports_intersect_except?
         false
@@ -295,11 +294,6 @@ module Sequel
       def quoted_identifier_append(sql, v)
         sql << BRACKET_OPEN << v.to_s << BRACKET_CLOSE
       end
-
-      # Access requires the limit clause come before other clauses
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
-      end
     end
   end
 end
diff --git a/lib/sequel/adapters/shared/cubrid.rb b/lib/sequel/adapters/shared/cubrid.rb
index e356905..a8648c4 100644
--- a/lib/sequel/adapters/shared/cubrid.rb
+++ b/lib/sequel/adapters/shared/cubrid.rb
@@ -162,15 +162,22 @@ module Sequel
       def uses_clob_for_text?
         true
       end
+
+      # CUBRID supports views with check option, but not local.
+      def view_with_check_option_support
+        true
+      end
     end
     
     module DatasetMethods
-      SELECT_CLAUSE_METHODS = Sequel::Dataset.clause_methods(:select, %w'select distinct columns from join where group having compounds order limit')
-      LIMIT = Sequel::Dataset::LIMIT
       COMMA = Sequel::Dataset::COMMA
+      LIMIT = Sequel::Dataset::LIMIT
       BOOL_FALSE = '0'.freeze
       BOOL_TRUE = '1'.freeze
 
+      # Hope you don't have more than 2**32 + offset rows in your dataset
+      ONLY_OFFSET = ",4294967295".freeze
+
       def supports_join_using?
         false
       end
@@ -200,23 +207,36 @@ module Sequel
         BOOL_TRUE
       end
      
-      # CUBRID doesn't support CTEs or FOR UPDATE.
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
+      # CUBRID supports multiple rows in INSERT.
+      def multi_insert_sql_strategy
+        :values
       end
 
       # CUBRID requires a limit to use an offset,
       # and requires a FROM table if a limit is used.
       def select_limit_sql(sql)
-        if @opts[:from] && (l = @opts[:limit])
+        return unless @opts[:from]
+        l = @opts[:limit]
+        o = @opts[:offset]
+        if l || o
           sql << LIMIT
-          if o = @opts[:offset]
+          if o
             literal_append(sql, o)
-            sql << COMMA
+            if l
+              sql << COMMA
+              literal_append(sql, l)
+            else
+              sql << ONLY_OFFSET
+            end
+          else
+            literal_append(sql, l)
           end
-          literal_append(sql, l)
         end
       end
+
+      # CUBRID doesn't support FOR UPDATE.
+      def select_lock_sql(sql)
+      end
     end
   end
 end
diff --git a/lib/sequel/adapters/shared/db2.rb b/lib/sequel/adapters/shared/db2.rb
index ac45f54..9249ee3 100644
--- a/lib/sequel/adapters/shared/db2.rb
+++ b/lib/sequel/adapters/shared/db2.rb
@@ -2,7 +2,7 @@ Sequel.require 'adapters/utils/emulate_offset_with_row_number'
 
 module Sequel
   module DB2
-    @use_clob_as_blob = true
+    @use_clob_as_blob = false
 
     class << self
       # Whether to use clob as the generic File type, false by default.
@@ -222,6 +222,11 @@ module Sequel
       def uses_clob_for_text?
         true
       end
+
+      # DB2 supports views with check option.
+      def view_with_check_option_support
+        :local
+      end
     end
 
     module DatasetMethods
@@ -256,16 +261,8 @@ module Sequel
 
       def complex_expression_sql_append(sql, op, args)
         case op
-        when :&, :|, :^
-          # works with db2 v9.5 and after
-          op = BITWISE_METHOD_MAP[op]
-          sql << complex_expression_arg_pairs(args){|a, b| literal(SQL::Function.new(op, a, b))}
-        when :<<
-          sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * POWER(2, #{literal(b)}))"}
-        when :>>
-          sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / POWER(2, #{literal(b)}))"}
-        when :%
-          sql << complex_expression_arg_pairs(args){|a, b| "MOD(#{literal(a)}, #{literal(b)})"}
+        when :&, :|, :^, :%, :<<, :>>
+          complex_expression_emulate_append(sql, op, args)
         when :'B~'
           literal_append(sql, SQL::Function.new(:BITNOT, *args))
         when :extract
@@ -278,6 +275,10 @@ module Sequel
         end
       end
 
+      def supports_cte?(type=:select)
+        type == :select
+      end
+
       # DB2 supports GROUP BY CUBE
       def supports_group_cube?
         true
@@ -325,6 +326,10 @@ module Sequel
 
       private
 
+      def empty_from_sql
+        EMPTY_FROM_TABLE
+      end
+
       # DB2 needs the standard workaround to insert all default values into
       # a table with more than one column.
       def insert_supports_empty_values?
@@ -350,9 +355,14 @@ module Sequel
         end
       end
 
-      # Add a fallback table for empty from situation
-      def select_from_sql(sql)
-        @opts[:from] ? super : (sql << EMPTY_FROM_TABLE)
+      # DB2 can insert multiple rows using a UNION
+      def multi_insert_sql_strategy
+        :union
+      end
+
+      # DB2 does not require that ROW_NUMBER be ordered.
+      def require_offset_order?
+        false
       end
 
       # Modify the sql to limit the number of rows returned
@@ -377,6 +387,11 @@ module Sequel
         end
       end
       
+      # DB2 supports quoted function names.
+      def supports_quoted_function_names?
+        true
+      end
+
       def _truncate_sql(table)
         # "TRUNCATE #{table} IMMEDIATE" is only for newer version of db2, so we
         # use the following one
diff --git a/lib/sequel/adapters/shared/firebird.rb b/lib/sequel/adapters/shared/firebird.rb
index 0c575f3..06fbddd 100644
--- a/lib/sequel/adapters/shared/firebird.rb
+++ b/lib/sequel/adapters/shared/firebird.rb
@@ -146,18 +146,24 @@ module Sequel
       def type_literal_generic_string(column)
         column[:text] ? :"BLOB SUB_TYPE TEXT" : super
       end
+
+      # Firebird supports views with check option, but not local.
+      def view_with_check_option_support
+        true
+      end
     end
 
     module DatasetMethods
       BOOL_TRUE = '1'.freeze
       BOOL_FALSE = '0'.freeze
       NULL = LiteralString.new('NULL').freeze
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'with select distinct limit columns from join where group having compounds order')
-      INSERT_CLAUSE_METHODS = Dataset.clause_methods(:insert, %w'insert into columns values returning')
       FIRST = " FIRST ".freeze
       SKIP = " SKIP ".freeze
       DEFAULT_FROM = " FROM RDB$DATABASE"
       
+      Dataset.def_sql_method(self, :select, %w'with select distinct limit columns from join where group having compounds order')
+      Dataset.def_sql_method(self, :insert, %w'insert into columns values returning')
+
       # Insert given values into the database.
       def insert(*values)
         if @opts[:sql] || @opts[:returning]
@@ -176,6 +182,10 @@ module Sequel
         true
       end
 
+      def supports_cte?(type=:select)
+        type == :select
+      end
+
       def supports_insert_select?
         true
       end
@@ -185,10 +195,14 @@ module Sequel
         false
       end
 
+      def supports_returning?(type)
+        type == :insert
+      end
+
       private
 
-      def insert_clause_methods
-        INSERT_CLAUSE_METHODS
+      def empty_from_sql
+        DEFAULT_FROM
       end
 
       def insert_pk(*values)
@@ -204,19 +218,10 @@ module Sequel
         BOOL_TRUE
       end
 
-      # The order of clauses in the SELECT SQL statement
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
+      # Firebird can insert multiple rows using a UNION
+      def multi_insert_sql_strategy
+        :union
       end
-      
-        # Use a default FROM table if the dataset does not contain a FROM table.
-        def select_from_sql(sql)
-          if @opts[:from]
-            super
-          else
-            sql << DEFAULT_FROM
-          end
-        end
 
       def select_limit_sql(sql)
         if l = @opts[:limit]
diff --git a/lib/sequel/adapters/shared/informix.rb b/lib/sequel/adapters/shared/informix.rb
index bb5843c..9538649 100644
--- a/lib/sequel/adapters/shared/informix.rb
+++ b/lib/sequel/adapters/shared/informix.rb
@@ -25,20 +25,17 @@ module Sequel
     end
     
     module DatasetMethods
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select limit distinct columns from join where having group compounds order')
       FIRST = " FIRST ".freeze
       SKIP = " SKIP ".freeze
 
-      private
+      Dataset.def_sql_method(self, :select, %w'select limit distinct columns from join where having group compounds order')
 
       # Informix does not support INTERSECT or EXCEPT
       def supports_intersect_except?
         false
       end
 
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
-      end
+      private
 
       def select_limit_sql(sql)
         if o = @opts[:offset]
diff --git a/lib/sequel/adapters/shared/mssql.rb b/lib/sequel/adapters/shared/mssql.rb
index b2b9a80..069b64a 100644
--- a/lib/sequel/adapters/shared/mssql.rb
+++ b/lib/sequel/adapters/shared/mssql.rb
@@ -34,6 +34,85 @@ module Sequel
       # to :integer.
       DECIMAL_TYPE_RE = /number|numeric|decimal/io
 
+      # Execute the given stored procedure with the given name.
+      #
+      # Options:
+      # :args :: Arguments to the stored procedure.  For named arguments, this should be a
+      #          hash keyed by argument name.  For unnamed arguments, this should be an
+      #          array.  Output parameters to the function are specified using :output.
+      #          You can also name output parameters and provide a type by using an
+      #          array containing :output, the type name, and the parameter name.
+      # :server :: The server/shard on which to execute the procedure.
+      #
+      # This method returns a single hash with the following keys:
+      #
+      # :result :: The result code of the stored procedure
+      # :numrows :: The number of rows affected by the stored procedure
+      # output params :: Values for any output parameters, using the name given for the output parameter
+      #
+      # Examples:
+      #
+      #     DB.call_mssql_sproc(:SequelTest, {:args => ['input arg', :output]})
+      #     DB.call_mssql_sproc(:SequelTest, {:args => ['input arg', [:output, 'int', 'varname']]})
+      #
+      #     named params:
+      #     DB.call_mssql_sproc(:SequelTest, :args => {
+      #       'input_arg1_name' => 'input arg1 value',
+      #       'input_arg2_name' => 'input arg2 value',
+      #       'output_arg_name' => [:output, 'int', 'varname']
+      #     })
+      def call_mssql_sproc(name, opts=OPTS)
+        args = opts[:args] || []
+        names = ['@RC AS RESULT', '@@ROWCOUNT AS NUMROWS']
+        declarations = ['@RC int']
+        values = []
+
+        if args.is_a?(Hash)
+          named_args = true
+          args = args.to_a
+          method = :each
+        else
+          method = :each_with_index
+        end
+
+        args.send(method) do |v, i|
+          if named_args
+            k = v
+            v, type, select = i
+            raise Error, "must provide output parameter name when using output parameters with named arguments" if v == :output && !select
+          else
+            v, type, select = v
+          end
+
+          if v == :output
+            type ||= "nvarchar(max)"
+            if named_args
+              varname = select
+            else
+              varname = "var#{i}"
+              select ||= varname
+            end
+            names << "@#{varname} AS #{quote_identifier(select)}"
+            declarations << "@#{varname} #{type}"
+            value = "@#{varname} OUTPUT"
+          else
+            value = literal(v)
+          end
+
+          if named_args
+            value = "@#{k}=#{value}"
+          end
+
+          values << value
+        end
+
+        sql = "DECLARE #{declarations.join(', ')}; EXECUTE @RC = #{name} #{values.join(', ')}; SELECT #{names.join(', ')}"
+
+        ds = dataset.with_sql(sql)
+        ds = ds.server(opts[:server]) if opts[:server]
+        ds.first
+      end
+
       # Microsoft SQL Server uses the :mssql type.
       def database_type
         :mssql
@@ -114,6 +193,9 @@ module Sequel
       # SQL Server 2008 Express).
       def server_version(server=nil)
         return @server_version if @server_version
+        if @opts[:server_version]
+          return @server_version = Integer(@opts[:server_version])
+        end
         @server_version = synchronize(server) do |conn|
           (conn.server_version rescue nil) if conn.respond_to?(:server_version)
         end
@@ -219,7 +301,7 @@ module Sequel
       def begin_transaction_sql
         SQL_BEGIN
       end
-      
+
       # Handle MSSQL specific default format.
       def column_schema_normalize_default(default, type)
         if m = MSSQL_DEFAULT_RE.match(default)
@@ -231,7 +313,7 @@ module Sequel
       # Commit the active transaction on the connection, does not commit/release
       # savepoints.
       def commit_transaction(conn, opts=OPTS)
-        log_connection_execute(conn, commit_transaction_sql) unless _trans(conn)[:savepoint_level] > 1
+        log_connection_execute(conn, commit_transaction_sql) unless savepoint_level(conn) > 1
       end
 
       # SQL to COMMIT a transaction.
@@ -256,7 +338,7 @@ module Sequel
       end
     
       DATABASE_ERROR_REGEXPS = {
-        /Violation of UNIQUE KEY constraint/ => UniqueConstraintViolation,
+        /Violation of UNIQUE KEY constraint|Violation of PRIMARY KEY constraint.+Cannot insert duplicate key/ => UniqueConstraintViolation,
         /conflicted with the (FOREIGN KEY.*|REFERENCE) constraint/ => ForeignKeyConstraintViolation,
         /conflicted with the CHECK constraint/ => CheckConstraintViolation,
         /column does not allow nulls/ => NotNullConstraintViolation,
@@ -332,6 +414,8 @@ module Sequel
           :boolean
         when /\A(?:(?:small)?money)\z/io
           :decimal
+        when /\A(timestamp|rowversion)\z/io
+          :blob
         else
           super
         end
@@ -403,19 +487,22 @@ module Sequel
       def type_literal_generic_file(column)
         :'varbinary(max)'
       end
+      
+      # MSSQL supports views with check option, but not local.
+      def view_with_check_option_support
+        true
+      end
     end
   
     module DatasetMethods
+      include(Module.new do
+        Dataset.def_sql_method(self, :select, %w'with select distinct limit columns into from lock join where group having order compounds')
+      end)
       include EmulateOffsetWithRowNumber
 
       BOOL_TRUE = '1'.freeze
       BOOL_FALSE = '0'.freeze
       COMMA_SEPARATOR = ', '.freeze
-      DELETE_CLAUSE_METHODS = Dataset.clause_methods(:delete, %w'with delete from output from2 where')
-      INSERT_CLAUSE_METHODS = Dataset.clause_methods(:insert, %w'with insert into columns output values')
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'with select distinct limit columns into from lock join where group having order compounds')
-      UPDATE_CLAUSE_METHODS = Dataset.clause_methods(:update, %w'with update limit table set output from where')
-      UPDATE_CLAUSE_METHODS_2000 = Dataset.clause_methods(:update, %w'update table set output from where')
       NOLOCK = ' WITH (NOLOCK)'.freeze
       UPDLOCK = ' WITH (UPDLOCK)'.freeze
       WILDCARD = LiteralString.new('*').freeze
@@ -437,8 +524,6 @@ module Sequel
       DATEPART_SECOND_MIDDLE = ') + datepart(ns, '.freeze
       DATEPART_SECOND_CLOSE = ")/1000000000.0) AS double precision)".freeze
       DATEPART_OPEN = "datepart(".freeze
-      UNION_ALL = ' UNION ALL '.freeze
-      SELECT_SPACE = 'SELECT '.freeze
       TIMESTAMP_USEC_FORMAT = ".%03d".freeze
       OUTPUT_INSERTED = " OUTPUT INSERTED.*".freeze
       HEX_START = '0x'.freeze
@@ -455,8 +540,16 @@ module Sequel
       FORMAT_DATE = "'%Y%m%d'".freeze
       CROSS_APPLY = 'CROSS APPLY'.freeze
       OUTER_APPLY = 'OUTER APPLY'.freeze
+      OFFSET = " OFFSET ".freeze
+      ROWS = " ROWS".freeze
+      ROWS_ONLY = " ROWS ONLY".freeze
+      FETCH_NEXT = " FETCH NEXT ".freeze
+
+      Dataset.def_mutation_method(:disable_insert_output, :output, :module=>self)
+      Dataset.def_sql_method(self, :delete, %w'with delete from output from2 where')
+      Dataset.def_sql_method(self, :insert, %w'with insert into columns output values')
+      Dataset.def_sql_method(self, :update, [['if is_2005_or_later?', %w'with update limit table set output from where'], ['else', %w'update table set output from where']])
 
-      Sequel::Dataset.def_mutation_method(:disable_insert_output, :output, :module=>self)
 
       # Allow overriding of the mssql_unicode_strings option at the dataset level.
       attr_writer :mssql_unicode_strings
@@ -472,19 +565,21 @@ module Sequel
         when :'||'
           super(sql, :+, args)
         when :LIKE, :"NOT LIKE"
-          super(sql, op, args.map{|a| LiteralString.new("(#{literal(a)} COLLATE #{CASE_SENSITIVE_COLLATION})")})
+          super(sql, op, args.map{|a| Sequel.lit(["(", " COLLATE #{CASE_SENSITIVE_COLLATION})"], a)})
         when :ILIKE, :"NOT ILIKE"
-          super(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|a| LiteralString.new("(#{literal(a)} COLLATE #{CASE_INSENSITIVE_COLLATION})")})
-        when :<<
-          sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * POWER(2, #{literal(b)}))"}
-        when :>>
-          sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / POWER(2, #{literal(b)}))"}
+          super(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|a| Sequel.lit(["(", " COLLATE #{CASE_INSENSITIVE_COLLATION})"], a)})
+        when :<<, :>>
+          complex_expression_emulate_append(sql, op, args)
         when :extract
           part = args.at(0)
           raise(Sequel::Error, "unsupported extract argument: #{part.inspect}") unless format = EXTRACT_MAP[part]
           if part == :second
-            expr = literal(args.at(1))
-            sql << DATEPART_SECOND_OPEN << format.to_s << COMMA << expr << DATEPART_SECOND_MIDDLE << expr << DATEPART_SECOND_CLOSE
+            expr = args.at(1)
+            sql << DATEPART_SECOND_OPEN << format.to_s << COMMA
+            literal_append(sql, expr)
+            sql << DATEPART_SECOND_MIDDLE
+            literal_append(sql, expr)
+            sql << DATEPART_SECOND_CLOSE
           else
             sql << DATEPART_OPEN << format.to_s << COMMA
             literal_append(sql, args.at(1))
@@ -519,21 +614,6 @@ module Sequel
         string.gsub(/[\\%_\[\]]/){|m| "\\#{m}"}
       end
    
-      # There is no function on Microsoft SQL Server that does character length
-      # and respects trailing spaces (datalength respects trailing spaces, but
-      # counts bytes instead of characters).  Use a hack to work around the
-      # trailing spaces issue.
-      def emulated_function_sql_append(sql, f)
-        case f.f
-        when :char_length
-          literal_append(sql, SQL::Function.new(:len, Sequel.join([f.args.first, 'x'])) - 1)
-        when :trim
-          literal_append(sql, SQL::Function.new(:ltrim, SQL::Function.new(:rtrim, f.args.first)))
-        else
-          super
-        end
-      end
-      
       # MSSQL uses the CONTAINS keyword for full text search
       def full_text_search(cols, terms, opts = OPTS)
         terms = "\"#{terms.join('" OR "')}\"" if terms.is_a?(Array)
@@ -551,20 +631,6 @@ module Sequel
         clone(:into => table)
       end
 
-      # MSSQL uses a UNION ALL statement to insert multiple values at once.
-      def multi_insert_sql(columns, values)
-        c = false
-        sql = LiteralString.new('')
-        u = UNION_ALL
-        values.each do |v|
-          sql << u if c
-          sql << SELECT_SPACE
-          expression_list_append(sql, v)
-          c ||= true
-        end
-        [insert_sql(columns, sql)]
-      end
-
       # Allows you to do a dirty read of uncommitted data using WITH (NOLOCK).
       def nolock
         lock_style(:dirty)
@@ -605,11 +671,27 @@ module Sequel
         sql << BRACKET_OPEN << name.to_s.gsub(/\]/, DOUBLE_BRACKET_CLOSE) << BRACKET_CLOSE
       end
       
+      # On MSSQL 2012+ add a default order to the current dataset if an offset is used.
+      # The default offset emulation using a subquery would be used in the unordered
+      # case by default, and that also adds a default order, so it's better to just
+      # avoid the subquery.
+      def select_sql
+        if @opts[:offset] && !@opts[:order] && is_2012_or_later?
+          order(1).select_sql
+        else
+          super
+        end
+      end
+
       # The version of the database server.
       def server_version
         db.server_version(@opts[:server])
       end
 
+      def supports_cte?(type=:select)
+        is_2005_or_later?
+      end
+
       # MSSQL 2005+ supports GROUP BY CUBE.
       def supports_group_cube?
         is_2005_or_later?
@@ -650,6 +732,11 @@ module Sequel
         false
       end
       
+      # MSSQL 2012+ supports offsets in correlated subqueries.
+      def supports_offsets_in_correlated_subqueries?
+        is_2012_or_later?
+      end
+
       # MSSQL 2005+ supports the output clause.
       def supports_output_clause?
         is_2005_or_later?
@@ -701,6 +788,11 @@ module Sequel
         server_version >= 10000000
       end
 
+      # Whether we are using SQL Server 2012 or later.
+      def is_2012_or_later?
+        server_version >= 11000000
+      end
+
       # Use strict ISO-8601 format with T between date and time,
       # since that is the format that is multilanguage and not
       # DATEFORMAT dependent.
@@ -708,12 +800,6 @@ module Sequel
         DEFAULT_TIMESTAMP_FORMAT
       end
 
-      # MSSQL supports the OUTPUT clause for DELETE statements.
-      # It also allows prepending a WITH clause.
-      def delete_clause_methods
-        DELETE_CLAUSE_METHODS
-      end
-
       # Only include the primary table in the main delete clause
       def delete_from_sql(sql)
         sql << FROM
@@ -728,6 +814,28 @@ module Sequel
         end
       end
       alias update_from_sql delete_from2_sql
+
+      # There is no function on Microsoft SQL Server that does character length
+      # and respects trailing spaces (datalength respects trailing spaces, but
+      # counts bytes instead of characters).  Use a hack to work around the
+      # trailing spaces issue.
+      def emulate_function?(name)
+        name == :char_length || name == :trim
+      end
+
+      def emulate_function_sql_append(sql, f)
+        case f.name
+        when :char_length
+          literal_append(sql, SQL::Function.new(:len, Sequel.join([f.args.first, 'x'])) - 1)
+        when :trim
+          literal_append(sql, SQL::Function.new(:ltrim, SQL::Function.new(:rtrim, f.args.first)))
+        end
+      end
+      
+      # Microsoft SQL Server 2012 has native support for offsets, but only for ordered datasets.
+      def emulate_offset_with_row_number?
+        super && !(is_2012_or_later? && @opts[:order])
+      end
       
       # Return the first primary key for the current table.  If this table has
       # multiple primary keys, this will only return one of them.  Used by #_import.
@@ -742,12 +850,6 @@ module Sequel
         sprintf(TIMESTAMP_USEC_FORMAT, usec/1000)
       end
 
-      # MSSQL supports the OUTPUT clause for INSERT statements.
-      # It also allows prepending a WITH clause.
-      def insert_clause_methods
-        INSERT_CLAUSE_METHODS
-      end
-
       # Use OUTPUT INSERTED.* to return all columns of the inserted row,
       # for use with the prepared statement code.
       def insert_output_sql(sql)
@@ -798,9 +900,10 @@ module Sequel
         BOOL_TRUE
       end
       
-      # MSSQL adds the limit before the columns
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
+      # MSSQL 2008+ supports multiple rows in the VALUES clause, older versions
+      # can use UNION.
+      def multi_insert_sql_strategy
+        is_2008_or_later? ? :values : :union
       end
 
       def select_into_sql(sql)
@@ -814,6 +917,8 @@ module Sequel
       # to allow the limit to be a bound variable.
       def select_limit_sql(sql)
         if l = @opts[:limit]
+          return if is_2012_or_later? && @opts[:order] && @opts[:offset]
+
           if is_2005_or_later?
             sql << TOP_PAREN
             literal_append(sql, l)
@@ -838,6 +943,25 @@ module Sequel
         end
       end
 
+      # On 2012+ when there is an order with an offset, append the offset (and possible
+      # limit) at the end of the order clause.
+      def select_order_sql(sql)
+        super
+        if is_2012_or_later? && @opts[:order]
+          if o = @opts[:offset]
+            sql << OFFSET
+            literal_append(sql, o)
+            sql << ROWS
+
+            if l = @opts[:limit]
+              sql << FETCH_NEXT
+              literal_append(sql, l)
+              sql << ROWS_ONLY
+            end
+          end
+        end
+      end
+
       # SQL fragment for MSSQL's OUTPUT clause.
       def output_sql(sql)
         return unless supports_output_clause?
@@ -857,17 +981,6 @@ module Sequel
       alias delete_output_sql output_sql
       alias update_output_sql output_sql
 
-      # MSSQL supports the OUTPUT and TOP clause for UPDATE statements.
-      # It also allows prepending a WITH clause.  For MSSQL 2000
-      # and below, exclude WITH and TOP.
-      def update_clause_methods
-        if is_2005_or_later?
-          UPDATE_CLAUSE_METHODS
-        else
-          UPDATE_CLAUSE_METHODS_2000
-        end
-      end
-
       # Only include the primary table in the main update clause
       def update_table_sql(sql)
         sql << SPACE
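
The MSSQL changes above switch ordered datasets with offsets to native
OFFSET ... FETCH NEXT on SQL Server 2012+, instead of the ROW_NUMBER subquery
emulation.  A sketch using the mock adapter (which now reports a 2012-level
server version per the change earlier in this commit); the exact identifier
quoting and case in the output may differ:

    require 'sequel'

    DB = Sequel.mock(:host=>'mssql')   # mock reports server_version 11000000

    puts DB[:items].order(:id).limit(10, 20).sql
    # Roughly: SELECT * FROM [ITEMS] ORDER BY [ID]
    #          OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY
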
diff --git a/lib/sequel/adapters/shared/mysql.rb b/lib/sequel/adapters/shared/mysql.rb
index e291ef9..1245dcb 100644
--- a/lib/sequel/adapters/shared/mysql.rb
+++ b/lib/sequel/adapters/shared/mysql.rb
@@ -50,8 +50,8 @@ module Sequel
 
       # Commit an existing prepared transaction with the given transaction
       # identifier string.
-      def commit_prepared_transaction(transaction_id)
-        run("XA COMMIT #{literal(transaction_id)}")
+      def commit_prepared_transaction(transaction_id, opts=OPTS)
+        run("XA COMMIT #{literal(transaction_id)}", opts)
       end
 
       # MySQL uses the :mysql database type
@@ -112,8 +112,8 @@ module Sequel
 
       # Rollback an existing prepared transaction with the given transaction
       # identifier string.
-      def rollback_prepared_transaction(transaction_id)
-        run("XA ROLLBACK #{literal(transaction_id)}")
+      def rollback_prepared_transaction(transaction_id, opts=OPTS)
+        run("XA ROLLBACK #{literal(transaction_id)}", opts)
       end
 
       # Get version of MySQL server, used for determined capabilities.
@@ -129,12 +129,12 @@ module Sequel
         true
       end
       
-      # MySQL supports prepared transactions (two-phase commit) using XA
+      # MySQL 5+ supports prepared transactions (two-phase commit) using XA
       def supports_prepared_transactions?
         server_version >= 50000
       end
 
-      # MySQL supports savepoints
+      # MySQL 5+ supports savepoints
       def supports_savepoints?
         server_version >= 50000
       end
@@ -145,6 +145,14 @@ module Sequel
         super && (server_version <= 50512 || server_version >= 50523)
       end
 
+      # Support fractional timestamps on MySQL 5.6.5+ if the :fractional_seconds
+      # Database option is used.  Technically, MySQL 5.6.4+ supports them, but
+      # automatic initialization of datetime values wasn't supported until 5.6.5,
+      # which is why 5.6.5 is used as the cutoff here.
+      def supports_timestamp_usecs?
+        @supports_timestamp_usecs ||= server_version >= 50605 && typecast_value_boolean(opts[:fractional_seconds])
+      end
+
       # MySQL supports transaction isolation levels
       def supports_transaction_isolation_levels?
         true
@@ -185,7 +193,11 @@ module Sequel
             sql = super
             op[:table] = related
             op[:key] ||= primary_key_from_schema(related)
-            sql << ", ADD FOREIGN KEY (#{quote_identifier(op[:name])})#{column_references_sql(op)}"
+            sql << ", ADD "
+            if constraint_name = op.delete(:foreign_key_constraint_name)
+              sql << "CONSTRAINT #{quote_identifier(constraint_name)} "
+            end
+            sql << "FOREIGN KEY (#{quote_identifier(op[:name])})#{column_references_sql(op)}"
           else
             super
           end
@@ -292,9 +304,8 @@ module Sequel
       # Use XA START to start a new prepared transaction if the :prepare
       # option is given.
       def begin_transaction(conn, opts=OPTS)
-        if (s = opts[:prepare]) && (th = _trans(conn))[:savepoint_level] == 0
+        if (s = opts[:prepare]) && savepoint_level(conn) == 1
           log_connection_execute(conn, "XA START #{literal(s)}")
-          th[:savepoint_level] += 1
         else
           super
         end
@@ -315,7 +326,7 @@ module Sequel
       # Prepare the XA transaction for a two-phase commit if the
       # :prepare option is given.
       def commit_transaction(conn, opts=OPTS)
-        if (s = opts[:prepare]) && _trans(conn)[:savepoint_level] <= 1
+        if (s = opts[:prepare]) && savepoint_level(conn) <= 1
           log_connection_execute(conn, "XA END #{literal(s)}")
           log_connection_execute(conn, "XA PREPARE #{literal(s)}")
         else
@@ -421,7 +432,7 @@ module Sequel
 
       # Rollback the currently open XA transaction
       def rollback_transaction(conn, opts=OPTS)
-        if (s = opts[:prepare]) && _trans(conn)[:savepoint_level] <= 1
+        if (s = opts[:prepare]) && savepoint_level(conn) <= 1
           log_connection_execute(conn, "XA END #{literal(s)}")
           log_connection_execute(conn, "XA PREPARE #{literal(s)}")
           log_connection_execute(conn, "XA ROLLBACK #{literal(s)}")
@@ -462,6 +473,12 @@ module Sequel
         end
       end
 
+      # Split DROP INDEX ops on MySQL 5.6+, as dropping them in the same
+      # statement as dropping a related foreign key causes an error.
+      def split_alter_table_op?(op)
+        server_version >= 50600 && (op[:op] == :drop_index || (op[:op] == :drop_constraint && op[:type] == :unique))
+      end
+
       # MySQL can combine multiple alter table ops into a single query.
       def supports_combining_alter_table_ops?
         true
@@ -496,7 +513,9 @@ module Sequel
       # MySQL has both datetime and timestamp classes, most people are going
       # to want datetime
       def type_literal_generic_datetime(column)
-        if column[:default] == Sequel::CURRENT_TIMESTAMP
+        if supports_timestamp_usecs?
+          :'datetime(6)'
+        elsif column[:default] == Sequel::CURRENT_TIMESTAMP
           :timestamp
         else
           :datetime
@@ -504,15 +523,28 @@ module Sequel
       end
 
       # MySQL has both datetime and timestamp classes, most people are going
-      # to want datetime
+      # to want datetime.
       def type_literal_generic_time(column)
-        column[:only_time] ? :time : type_literal_generic_datetime(column)
+        if column[:only_time]
+          if supports_timestamp_usecs?
+            :'time(6)'
+          else
+            :time
+          end
+        else
+          type_literal_generic_datetime(column)
+        end
       end
 
       # MySQL doesn't have a true boolean class, so it uses tinyint(1)
       def type_literal_generic_trueclass(column)
         :'tinyint(1)'
       end
+
+      # MySQL 5.0.2+ supports views with check option.
+      def view_with_check_option_support
+        :local if server_version >= 50002
+      end
     end
   
     # Dataset methods shared by datasets that use MySQL databases.
@@ -522,10 +554,6 @@ module Sequel
       COMMA_SEPARATOR = ', '.freeze
       FOR_SHARE = ' LOCK IN SHARE MODE'.freeze
       SQL_CALC_FOUND_ROWS = ' SQL_CALC_FOUND_ROWS'.freeze
-      DELETE_CLAUSE_METHODS = Dataset.clause_methods(:delete, %w'delete from where order limit')
-      INSERT_CLAUSE_METHODS = Dataset.clause_methods(:insert, %w'insert ignore into columns values on_duplicate_key_update')
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select distinct calc_found_rows columns from join where group having compounds order limit lock')
-      UPDATE_CLAUSE_METHODS = Dataset.clause_methods(:update, %w'update ignore table set where order limit')
       APOS = Dataset::APOS
       APOS_RE = Dataset::APOS_RE
       DOUBLE_APOS = Dataset::DOUBLE_APOS
@@ -566,6 +594,15 @@ module Sequel
       BLOB_START = "0x".freeze
       EMPTY_BLOB = "''".freeze
       HSTAR = "H*".freeze
+      CURRENT_TIMESTAMP_56 = 'CURRENT_TIMESTAMP(6)'.freeze
+
+      # Comes directly from MySQL's documentation, used for queries with offsets but no limits
+      ONLY_OFFSET = ",18446744073709551615".freeze
+
+      Dataset.def_sql_method(self, :delete, %w'delete from where order limit')
+      Dataset.def_sql_method(self, :insert, %w'insert ignore into columns values on_duplicate_key_update')
+      Dataset.def_sql_method(self, :select, %w'select distinct calc_found_rows columns from join where group having compounds order limit lock')
+      Dataset.def_sql_method(self, :update, %w'update ignore table set where order limit')
 
       include Sequel::Dataset::Replace
 
@@ -610,6 +647,18 @@ module Sequel
         end
       end
       
+      # MySQL's CURRENT_TIMESTAMP does not use fractional seconds,
+      # even if the database itself supports fractional seconds. If
+      # MySQL 5.6.4+ is being used, use a value that will return
+      # fractional seconds.
+      def constant_sql_append(sql, constant)
+        if constant == :CURRENT_TIMESTAMP && supports_timestamp_usecs?
+          sql << CURRENT_TIMESTAMP_56
+        else
+          super
+        end
+      end
+
       # Use GROUP BY instead of DISTINCT ON if arguments are provided.
       def distinct(*args)
         args.empty? ? super : group(*args)
@@ -706,18 +755,16 @@ module Sequel
         clone(:on_duplicate_key_update => args)
       end
 
-      # MySQL specific syntax for inserting multiple values at once.
-      def multi_insert_sql(columns, values)
-        sql = LiteralString.new('VALUES ')
-        expression_list_append(sql, values.map{|r| Array(r)})
-        [insert_sql(columns, sql)]
-      end
-      
       # MySQL uses the nonstandard ` (backtick) for quoting identifiers.
       def quoted_identifier_append(sql, c)
         sql << BACKTICK << c.to_s.gsub(BACKTICK_RE, DOUBLE_BACKTICK) << BACKTICK
       end
 
+      # MySQL does not support derived column lists
+      def supports_derived_column_lists?
+        false
+      end
+
       # MySQL can emulate DISTINCT ON with its non-standard GROUP BY implementation,
       # though the rows returned cannot be made deterministic through ordering.
       def supports_distinct_on?
@@ -734,6 +781,11 @@ module Sequel
         false
       end
       
+      # MySQL does not support limits in correlated subqueries (or any subqueries that use IN).
+      def supports_limits_in_correlated_subqueries?
+        false
+      end
+    
       # MySQL supports modifying joined datasets
       def supports_modifying_joins?
         true
@@ -754,7 +806,7 @@ module Sequel
       # ignores them.  Also, using them seems to cause problems on 1.9.  Since
       # they are ignored anyway, not using them is probably best.
       def supports_timestamp_usecs?
-        false
+        db.supports_timestamp_usecs?
       end
       
       # Sets up the update methods to use UPDATE IGNORE.
@@ -769,11 +821,6 @@ module Sequel
       
       private
 
-      # MySQL supports the ORDER BY and LIMIT clauses for DELETE statements
-      def delete_clause_methods
-        DELETE_CLAUSE_METHODS
-      end
-      
       # Consider the first table in the joined dataset is the table to delete
       # from, but include the others for the purposes of selecting rows.
       def delete_from_sql(sql)
@@ -788,12 +835,6 @@ module Sequel
         end
       end
 
-      # MySQL supports the IGNORE and ON DUPLICATE KEY UPDATE clauses for INSERT statements
-      def insert_clause_methods
-        INSERT_CLAUSE_METHODS
-      end
-      alias replace_clause_methods insert_clause_methods
-
       # MySQL doesn't use the SQL standard DEFAULT VALUES.
       def insert_columns_sql(sql)
         values = opts[:values]
@@ -905,11 +946,17 @@ module Sequel
         BOOL_TRUE
       end
       
-      # MySQL does not support the SQL WITH clause for SELECT statements
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
+      # MySQL supports multiple rows in INSERT.
+      def multi_insert_sql_strategy
+        :values
       end
-      
+
+      def select_only_offset_sql(sql)
+        sql << LIMIT
+        literal_append(sql, @opts[:offset])
+        sql << ONLY_OFFSET
+      end
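+
+      # Hypothetical example of the offset-only emulation above: a dataset with
+      # an offset but no limit uses MySQL's maximum row count, roughly:
+      #
+      #   DB[:items].limit(nil, 10)
+      #   # SELECT * FROM `items` LIMIT 10,18446744073709551615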
+  
       # Support FOR SHARE locking when using the :share lock style.
       def select_lock_sql(sql)
         @opts[:lock] == :share ? (sql << FOR_SHARE) : super
@@ -920,11 +967,6 @@ module Sequel
         sql << SQL_CALC_FOUND_ROWS if opts[:calc_found_rows]
       end
 
-      # MySQL supports the ORDER BY and LIMIT clauses for UPDATE statements
-      def update_clause_methods
-        UPDATE_CLAUSE_METHODS
-      end
-
       # MySQL uses WITH ROLLUP syntax.
       def uses_with_rollup?
         true
diff --git a/lib/sequel/adapters/shared/oracle.rb b/lib/sequel/adapters/shared/oracle.rb
index 0a2a5fd..e8bfbec 100644
--- a/lib/sequel/adapters/shared/oracle.rb
+++ b/lib/sequel/adapters/shared/oracle.rb
@@ -31,6 +31,29 @@ module Sequel
         :oracle
       end
 
+      def foreign_key_list(table, opts=OPTS)
+        m = output_identifier_meth
+        im = input_identifier_meth
+        schema, table = schema_and_table(table)
+        ds = metadata_dataset.
+          from(:all_cons_columns___pc, :all_constraints___p, :all_cons_columns___fc, :all_constraints___f).
+          where(:f__table_name=>im.call(table), :f__constraint_type=>'R', :p__owner=>:f__r_owner, :p__constraint_name=>:f__r_constraint_name, :pc__owner=>:p__owner, :pc__constraint_name=>:p__constraint_name, :pc__table_name=>:p__table_name, :fc__owner=>:f__owner, :fc__constraint_name=>:f__constraint_name, :fc__table_name=>:f__table_name, :fc__position=>:pc__position).
+          select(:p__table_name___table, :pc__column_name___key, :fc__column_name___column, :f__constraint_name___name).
+          order(:table, :fc__position)
+        ds = ds.where(:f__schema_name=>im.call(schema)) if schema
+
+        fks = {}
+        ds.each do |r|
+          if fk = fks[r[:name]]
+            fk[:columns] << m.call(r[:column])
+            fk[:key] << m.call(r[:key])
+          else
+            fks[r[:name]] = {:name=>m.call(r[:name]), :columns=>[m.call(r[:column])], :table=>m.call(r[:table]), :key=>[m.call(r[:key])]}
+          end
+        end
+        fks.values
+      end
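+
+      # Illustrative example (table and constraint names are made up):
+      #
+      #   DB.foreign_key_list(:albums)
+      #   # => [{:name=>:albums_artist_id_fk, :columns=>[:artist_id],
+      #   #      :table=>:artists, :key=>[:id]}]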
+
       # Oracle namespaces indexes per table.
       def global_index_namespace?
         false
@@ -38,7 +61,7 @@ module Sequel
 
       def tables(opts=OPTS)
         m = output_identifier_meth
-        metadata_dataset.from(:tab).server(opts[:server]).select(:tname).filter(:tabtype => 'TABLE').map{|r| m.call(r[:tname])}
+        metadata_dataset.from(:tabs).server(opts[:server]).select(:table_name).map{|r| m.call(r[:table_name])}
       end
 
       def views(opts=OPTS) 
@@ -221,55 +244,46 @@ module Sequel
       def uses_clob_for_text?
         true
       end
+
+      # Oracle supports views WITH CHECK OPTION, but not WITH LOCAL CHECK OPTION.
+      def view_with_check_option_support
+        true
+      end
     end
 
     module DatasetMethods
-      include EmulateOffsetWithRowNumber
-
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'with select distinct columns from join where group having compounds order lock')
       ROW_NUMBER_EXPRESSION = LiteralString.new('ROWNUM').freeze
       SPACE = Dataset::SPACE
       APOS = Dataset::APOS
       APOS_RE = Dataset::APOS_RE
       DOUBLE_APOS = Dataset::DOUBLE_APOS
       FROM = Dataset::FROM
-      BITCOMP_OPEN = "((0 - ".freeze
-      BITCOMP_CLOSE = ") - 1)".freeze
       TIMESTAMP_FORMAT = "TIMESTAMP '%Y-%m-%d %H:%M:%S%N %z'".freeze
       TIMESTAMP_OFFSET_FORMAT = "%+03i:%02i".freeze
       BOOL_FALSE = "'N'".freeze
       BOOL_TRUE = "'Y'".freeze
       HSTAR = "H*".freeze
-      DUAL = ['DUAL'.freeze].freeze
+      DUAL = ' FROM DUAL'.freeze
+      BITAND_PROC = lambda{|a, b| Sequel.lit(["CAST(BITAND(", ", ", ") AS INTEGER)"], a, b)}
+
+      include(Module.new do
+        Dataset.def_sql_method(self, :select, %w'with select distinct columns from join where group having compounds order lock')
+      end)
 
       def complex_expression_sql_append(sql, op, args)
         case op
         when :&
-          sql << complex_expression_arg_pairs(args){|a, b| "CAST(BITAND(#{literal(a)}, #{literal(b)}) AS INTEGER)"}
+          complex_expression_arg_pairs_append(sql, args, &BITAND_PROC)
         when :|
-          sql << complex_expression_arg_pairs(args) do |a, b|
-            s1 = ''
-            complex_expression_sql_append(s1, :&, [a, b])
-            "(#{literal(a)} - #{s1} + #{literal(b)})"
-          end
+          complex_expression_arg_pairs_append(sql, args){|a, b| Sequel.lit(["(", " - ", " + ", ")"], a, complex_expression_arg_pairs([a, b], &BITAND_PROC), b)}
         when :^
-          sql << complex_expression_arg_pairs(args) do |*x|
-            s1 = ''
-            s2 = ''
-            complex_expression_sql_append(s1, :|, x)
-            complex_expression_sql_append(s2, :&, x)
-            "(#{s1} - #{s2})"
+          complex_expression_arg_pairs_append(sql, args) do |*x|
+            s1 = complex_expression_arg_pairs(x){|a, b| Sequel.lit(["(", " - ", " + ", ")"], a, complex_expression_arg_pairs([a, b], &BITAND_PROC), b)}
+            s2 = complex_expression_arg_pairs(x, &BITAND_PROC)
+            Sequel.lit(["(", " - ", ")"], s1, s2)
           end
-        when :'B~'
-          sql << BITCOMP_OPEN
-          literal_append(sql, args.at(0))
-          sql << BITCOMP_CLOSE
-        when :<<
-          sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} * power(2, #{literal(b)}))"}
-        when :>>
-          sql << complex_expression_arg_pairs(args){|a, b| "(#{literal(a)} / power(2, #{literal(b)}))"}
-        when :%
-          sql << complex_expression_arg_pairs(args){|a, b| "MOD(#{literal(a)}, #{literal(b)})"}
+        when :%, :<<, :>>, :'B~'
+          complex_expression_emulate_append(sql, op, args)
         else
           super
         end
@@ -286,20 +300,6 @@ module Sequel
         end
       end
 
-      # Oracle treats empty strings like NULL values, and doesn't support
-      # char_length, so make char_length use length with a nonempty string.
-      # Unfortunately, as Oracle treats the empty string as NULL, there is
-      # no way to get trim to return an empty string instead of nil if
-      # the string only contains spaces.
-      def emulated_function_sql_append(sql, f)
-        case f.f
-        when :char_length
-          literal_append(sql, Sequel::SQL::Function.new(:length, Sequel.join([f.args.first, 'x'])) - 1)
-        else
-          super
-        end
-      end
-      
       # Oracle uses MINUS instead of EXCEPT, and doesn't support EXCEPT ALL
       def except(dataset, opts=OPTS)
         raise(Sequel::Error, "EXCEPT ALL not supported") if opts[:all]
@@ -326,7 +326,23 @@ module Sequel
 
       # Handle LIMIT by using an unlimited subselect filtered with ROWNUM.
       def select_sql
-        if (limit = @opts[:limit]) && !@opts[:sql]
+        return super if @opts[:sql]
+        if o = @opts[:offset]
+          columns = clone(:append_sql=>'', :placeholder_literal_null=>true).columns
+          dsa1 = dataset_alias(1)
+          rn = row_number_column
+          limit = @opts[:limit]
+          ds = unlimited.
+            from_self(:alias=>dsa1).
+            select_append(ROW_NUMBER_EXPRESSION.as(rn)).
+            from_self(:alias=>dsa1).
+            select(*columns).
+            where(SQL::Identifier.new(rn) > o)
+          ds = ds.where(SQL::Identifier.new(rn) <= Sequel.+(o, limit)) if limit
+          sql = @opts[:append_sql] || ''
+          subselect_sql_append(sql, ds)
+          sql
+        elsif limit = @opts[:limit]
           ds = clone(:limit=>nil)
           # Lock doesn't work in subselects, so don't use a subselect when locking.
           # Don't use a subselect if custom SQL is used, as it breaks some things.
@@ -344,6 +360,15 @@ module Sequel
         true
       end
 
+      def supports_cte?(type=:select)
+        type == :select
+      end
+
+      # Oracle does not support derived column lists
+      def supports_derived_column_lists?
+        false
+      end
+
       # Oracle supports GROUP BY CUBE
       def supports_group_cube?
         true
@@ -364,6 +389,16 @@ module Sequel
         false
       end
       
+      # Oracle does not support limits in correlated subqueries.
+      def supports_limits_in_correlated_subqueries?
+        false
+      end
+    
+      # Oracle does not support offsets in correlated subqueries.
+      def supports_offsets_in_correlated_subqueries?
+        false
+      end
+
       # Oracle does not support SELECT *, column
       def supports_select_all_and_column?
         false
@@ -388,7 +423,8 @@ module Sequel
 
       # Oracle doesn't support the use of AS when aliasing a dataset.  It doesn't require
       # the use of AS anywhere, so this disables it in all cases.
-      def as_sql_append(sql, aliaz)
+      def as_sql_append(sql, aliaz, column_aliases=nil)
+        raise Error, "oracle does not support derived column lists" if column_aliases
         sql << SPACE
         quote_identifier_append(sql, aliaz)
       end
@@ -398,6 +434,29 @@ module Sequel
         TIMESTAMP_FORMAT
       end
 
+      def empty_from_sql
+        DUAL
+      end
+
+      # Oracle does not provide a char_length function, so emulate it
+      # (see emulate_function_sql_append below).
+      def emulate_function?(name)
+        name == :char_length
+      end
+
+      # Oracle treats empty strings like NULL values, and doesn't support
+      # char_length, so make char_length use length with a nonempty string.
+      # Unfortunately, as Oracle treats the empty string as NULL, there is
+      # no way to get trim to return an empty string instead of nil if
+      # the string only contains spaces.
+      def emulate_function_sql_append(sql, f)
+        if f.name == :char_length
+          literal_append(sql, Sequel::SQL::Function.new(:length, Sequel.join([f.args.first, 'x'])) - 1)
+        end
+      end
+      
       # If this dataset is associated with a sequence, return the most recently
       # inserted sequence value.
       def execute_insert(sql, opts=OPTS)
@@ -435,18 +494,14 @@ module Sequel
         BOOL_TRUE
       end
 
-      # Use the Oracle-specific SQL clauses (no limit, since it is emulated).
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
+      # Oracle can insert multiple rows using a UNION
+      def multi_insert_sql_strategy
+        :union
       end
 
-      # Modify the SQL to add the list of tables to select FROM
-      # Oracle doesn't support select without FROM clause
-      # so add the dummy DUAL table if the dataset doesn't select
-      # from a table.
-      def select_from_sql(sql)
-        sql << FROM
-        source_list_append(sql, @opts[:from] || DUAL)
+      # Oracle supports quoted function names.
+      def supports_quoted_function_names?
+        true
       end
     end
   end
diff --git a/lib/sequel/adapters/shared/postgres.rb b/lib/sequel/adapters/shared/postgres.rb
index 5d3d2ba..91da4d1 100644
--- a/lib/sequel/adapters/shared/postgres.rb
+++ b/lib/sequel/adapters/shared/postgres.rb
@@ -94,6 +94,9 @@ module Sequel
       FOREIGN_KEY_LIST_ON_DELETE_MAP = {'a'.freeze=>:no_action, 'r'.freeze=>:restrict, 'c'.freeze=>:cascade, 'n'.freeze=>:set_null, 'd'.freeze=>:set_default}.freeze
       POSTGRES_DEFAULT_RE = /\A(?:B?('.*')::[^']+|\((-?\d+(?:\.\d+)?)\))\z/
       UNLOGGED = 'UNLOGGED '.freeze
+      ON_COMMIT = {
+        :drop => 'DROP', :delete_rows => 'DELETE ROWS', :preserve_rows => 'PRESERVE ROWS',
+      }.freeze
 
       # SQL fragment for custom sequences (ones not created by serial primary key),
       # Returning the schema and literal form of the sequence name, by parsing
@@ -153,8 +156,8 @@ module Sequel
 
       # Commit an existing prepared transaction with the given transaction
       # identifier string.
-      def commit_prepared_transaction(transaction_id)
-        run("COMMIT PREPARED #{literal(transaction_id)}")
+      def commit_prepared_transaction(transaction_id, opts=OPTS)
+        run("COMMIT PREPARED #{literal(transaction_id)}", opts)
       end
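+
+      # Hypothetical example of the opts argument added above (e.g. passing a
+      # :server option through to run):
+      #
+      #   DB.commit_prepared_transaction('xid_1', :server=>:primary)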
 
       # Creates the function in the database.  Arguments:
@@ -399,14 +402,18 @@ module Sequel
       # 
       #   DB.refresh_view(:items_view)
       #   # REFRESH MATERIALIZED VIEW items_view
+      #   DB.refresh_view(:items_view, :concurrently=>true)
+      #   # REFRESH MATERIALIZED VIEW CONCURRENTLY items_view
       def refresh_view(name, opts=OPTS)
-        run "REFRESH MATERIALIZED VIEW #{quote_schema_table(name)}"
+        run "REFRESH MATERIALIZED VIEW#{' CONCURRENTLY' if opts[:concurrently]} #{quote_schema_table(name)}"
       end
       
       # Reset the database's conversion procs; requires a server query if there
       # are any named types.
       def reset_conversion_procs
         @conversion_procs = get_conversion_procs
+        conversion_procs_updated
+        @conversion_procs
       end
 
       # Reset the primary key sequence for the given table, basing it on the
@@ -423,8 +430,8 @@ module Sequel
 
       # Rollback an existing prepared transaction with the given transaction
       # identifier string.
-      def rollback_prepared_transaction(transaction_id)
-        run("ROLLBACK PREPARED #{literal(transaction_id)}")
+      def rollback_prepared_transaction(transaction_id, opts=OPTS)
+        run("ROLLBACK PREPARED #{literal(transaction_id)}", opts)
       end
 
       # PostgreSQL uses SERIAL pseudo-type instead of AUTOINCREMENT for
@@ -537,6 +544,7 @@ module Sequel
           convert_named_procs_to_procs(named_procs).each do |oid, pr|
             procs[oid] ||= pr
           end
+          conversion_procs_updated
         end
       end
 
@@ -582,6 +590,15 @@ module Sequel
         end
       end
       
+      # Set the READ ONLY transaction setting per savepoint, as PostgreSQL supports that.
+      def begin_savepoint(conn, opts)
+        super
+
+        unless (read_only = opts[:read_only]).nil?
+          log_connection_execute(conn, "SET TRANSACTION READ #{read_only ? 'ONLY' : 'WRITE'}")
+        end
+      end
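+
+      # Illustrative sketch (assuming the :read_only transaction option is
+      # passed when opening a savepoint):
+      #
+      #   DB.transaction do
+      #     DB.transaction(:savepoint=>true, :read_only=>true){DB[:items].count}
+      #   end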
+
       # Handle PostgreSQL specific default format.
       def column_schema_normalize_default(default, type)
         if m = POSTGRES_DEFAULT_RE.match(default)
@@ -593,7 +610,7 @@ module Sequel
       # If the :prepare option is given and we aren't in a savepoint,
       # prepare the transaction for a two-phase commit.
       def commit_transaction(conn, opts=OPTS)
-        if (s = opts[:prepare]) && _trans(conn)[:savepoint_level] <= 1
+        if (s = opts[:prepare]) && savepoint_level(conn) <= 1
           log_connection_execute(conn, "PREPARE TRANSACTION #{literal(s)}")
         else
           super
@@ -655,6 +672,11 @@ module Sequel
         end
       end
 
+      # Callback used when conversion procs are updated.
+      def conversion_procs_updated
+        nil
+      end
+
       # Convert the hash of named conversion procs into a hash of OID conversion procs. 
       def convert_named_procs_to_procs(named_procs)
         h = {}
@@ -671,6 +693,7 @@ module Sequel
         oids.each do |oid|
           procs[oid] = PG_TYPES[oid]
         end
+        conversion_procs_updated
       end
 
       EXCLUSION_CONSTRAINT_SQL_STATE = '23P01'.freeze
@@ -766,6 +789,10 @@ module Sequel
 
       # DDL statement for creating a table with the given name, columns, and options
       def create_table_prefix_sql(name, options)
+        if on_commit = options[:on_commit]
+          raise(Error, "can't provide :on_commit without :temp to create_table") unless options[:temp]
+          raise(Error, "unsupported on_commit option: #{on_commit.inspect}") unless ON_COMMIT.has_key? on_commit
+        end
         temp_or_unlogged_sql = if options[:temp]
          raise(Error, "can't provide both :temp and :unlogged to create_table") if options[:unlogged]
          temporary_table_sql
@@ -780,9 +807,20 @@ module Sequel
         if inherits = options[:inherits]
           sql << " INHERITS (#{Array(inherits).map{|t| quote_schema_table(t)}.join(', ')})"
         end
+        if on_commit = options[:on_commit]
+          sql << " ON COMMIT #{ON_COMMIT[on_commit]}"
+        end
         sql
       end
 
+      def create_table_as_sql(name, sql, options)
+        result = create_table_prefix_sql name, options
+        if on_commit = options[:on_commit]
+          result << " ON COMMIT #{ON_COMMIT[on_commit]}"
+        end
+        result << " AS #{sql}"
+      end
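+
+      # Hypothetical example of the :on_commit option handled above (output
+      # shown roughly):
+      #
+      #   DB.create_table(:my_temp, :temp=>true, :on_commit=>:drop){Integer :i}
+      #   # CREATE TEMPORARY TABLE "my_temp" ("i" integer) ON COMMIT DROP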
+
       # Use a PostgreSQL-specific create table generator
       def create_table_generator_class
         Postgres::CreateTableGenerator
@@ -1067,6 +1105,11 @@ module Sequel
           :text
         end
       end
+
+      # PostgreSQL 9.4+ supports views with check option.
+      def view_with_check_option_support
+        :local if server_version >= 90400
+      end
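+
+      # Illustrative sketch (assuming create_view's :check option and
+      # PostgreSQL 9.4+):
+      #
+      #   DB.create_view(:expensive_items, DB[:items].where{price > 100}, :check=>:local)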
     end
 
     # Instance methods for datasets that connect to a PostgreSQL database.
@@ -1076,27 +1119,19 @@ module Sequel
       BOOL_FALSE = 'false'.freeze
       BOOL_TRUE = 'true'.freeze
       COMMA_SEPARATOR = ', '.freeze
-      DELETE_CLAUSE_METHODS = Dataset.clause_methods(:delete, %w'delete from using where returning')
-      DELETE_CLAUSE_METHODS_91 = Dataset.clause_methods(:delete, %w'with delete from using where returning')
       EXCLUSIVE = 'EXCLUSIVE'.freeze
       EXPLAIN = 'EXPLAIN '.freeze
       EXPLAIN_ANALYZE = 'EXPLAIN ANALYZE '.freeze
       FOR_SHARE = ' FOR SHARE'.freeze
-      INSERT_CLAUSE_METHODS = Dataset.clause_methods(:insert, %w'insert into columns values returning')
-      INSERT_CLAUSE_METHODS_91 = Dataset.clause_methods(:insert, %w'with insert into columns values returning')
       NULL = LiteralString.new('NULL').freeze
       PG_TIMESTAMP_FORMAT = "TIMESTAMP '%Y-%m-%d %H:%M:%S".freeze
       QUERY_PLAN = 'QUERY PLAN'.to_sym
       ROW_EXCLUSIVE = 'ROW EXCLUSIVE'.freeze
       ROW_SHARE = 'ROW SHARE'.freeze
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select distinct columns from join where group having compounds order limit lock')
-      SELECT_CLAUSE_METHODS_84 = Dataset.clause_methods(:select, %w'with select distinct columns from join where group having window compounds order limit lock')
       SHARE = 'SHARE'.freeze
       SHARE_ROW_EXCLUSIVE = 'SHARE ROW EXCLUSIVE'.freeze
       SHARE_UPDATE_EXCLUSIVE = 'SHARE UPDATE EXCLUSIVE'.freeze
       SQL_WITH_RECURSIVE = "WITH RECURSIVE ".freeze
-      UPDATE_CLAUSE_METHODS = Dataset.clause_methods(:update, %w'update table set from where returning')
-      UPDATE_CLAUSE_METHODS_91 = Dataset.clause_methods(:update, %w'with update table set from where returning')
       SPACE = Dataset::SPACE
       FROM = Dataset::FROM
       APOS = Dataset::APOS
@@ -1115,6 +1150,11 @@ module Sequel
       EMPTY_STRING = ''.freeze
       LOCK_MODES = ['ACCESS SHARE', 'ROW SHARE', 'ROW EXCLUSIVE', 'SHARE UPDATE EXCLUSIVE', 'SHARE', 'SHARE ROW EXCLUSIVE', 'EXCLUSIVE', 'ACCESS EXCLUSIVE'].each{|s| s.freeze}
 
+      Dataset.def_sql_method(self, :delete, [['if server_version >= 90100', %w'with delete from using where returning'], ['else', %w'delete from using where returning']])
+      Dataset.def_sql_method(self, :insert, [['if server_version >= 90100', %w'with insert into columns values returning'], ['else', %w'insert into columns values returning']])
+      Dataset.def_sql_method(self, :select, [['if server_version >= 80400', %w'with select distinct columns from join where group having window compounds order limit lock'], ['else', %w'select distinct columns from join where group having compounds order limit lock']])
+      Dataset.def_sql_method(self, :update, [['if server_version >= 90100', %w'with update table set from where returning'], ['else', %w'update table set from where returning']])
+
       # Shared methods for prepared statements when used with PostgreSQL databases.
       module PreparedStatementMethods
         # Override insert action to use RETURNING if the server supports it.
@@ -1165,6 +1205,24 @@ module Sequel
         end
       end
 
+      # Disables automatic use of INSERT ... RETURNING.  You can still use
+      # returning manually to force the use of RETURNING when inserting.
+      #
+      # This is designed for cases where INSERT RETURNING cannot be used,
+      # such as when you are using partitioning with trigger functions
+      # or conditional rules, or when you are using a PostgreSQL version
+      # less than 8.2, or a PostgreSQL derivative that does not support
+      # returning.
+      #
+      # Note that when this method is used, insert will not return the
+      # primary key of the inserted row; you will have to get the primary
+      # key yourself, either before inserting via nextval, or after
+      # inserting via currval or lastval (making sure to use the same
+      # database connection for currval or lastval).
+      def disable_insert_returning
+        clone(:disable_insert_returning=>true)
+      end
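+
+      # Illustrative example:
+      #
+      #   ds = DB[:items].disable_insert_returning
+      #   ds.insert(:name=>'abc') # => nil (no RETURNING clause is added)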
+
       # Return the results of an EXPLAIN query as a string
       def explain(opts=OPTS)
         with_sql((opts[:analyze] ? EXPLAIN_ANALYZE : EXPLAIN) + select_sql).map(QUERY_PLAN).join(CRLF)
@@ -1175,35 +1233,59 @@ module Sequel
         lock_style(:share)
       end
 
-      # PostgreSQL specific full text search syntax, using tsearch2 (included
-      # in 8.3 by default, and available for earlier versions as an add-on).
+      # Run a full text search on PostgreSQL.  By default, this searches for the
+      # inclusion of any of the terms in any of the cols.
+      #
+      # Options:
+      # :language :: The language to use for the search (default: 'simple')
+      # :plain :: Whether a plain search should be used (default: false).  In this case,
+      #           terms should be a single string, and it will do a search where cols
+      #           contains all of the words in terms.  This ignores search operators in terms.
+      # :phrase :: Similar to :plain, but also adding an ILIKE filter to ensure that
+      #            returned rows also include the exact phrase used.
+      # :rank :: Set to true to order by the rank, so that closer matches are returned first.
       def full_text_search(cols, terms, opts = OPTS)
-        lang = opts[:language] || 'simple'
+        lang = Sequel.cast(opts[:language] || 'simple', :regconfig)
         terms = terms.join(' | ') if terms.is_a?(Array)
-        filter("to_tsvector(?::regconfig, ?) @@ to_tsquery(?::regconfig, ?)", lang, full_text_string_join(cols), lang, terms)
+        columns = full_text_string_join(cols)
+        query_func = (opts[:phrase] || opts[:plain]) ? :plainto_tsquery : :to_tsquery
+        vector = Sequel.function(:to_tsvector, lang, columns)
+        query = Sequel.function(query_func, lang, terms)
+
+        ds = where(Sequel.lit(["(", " @@ ", ")"], vector, query))
+
+        if opts[:phrase]
+          ds = ds.grep(cols, "%#{escape_like(terms)}%", :case_insensitive=>true)
+        end
+
+        if opts[:rank]
+          ds = ds.order{ts_rank_cd(vector, query)}
+        end
+
+        ds
       end
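+
+      # Illustrative example of the options above (column names are made up):
+      #
+      #   DB[:posts].full_text_search([:title, :body], 'ruby sequel',
+      #     :language=>'english', :rank=>true)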
 
       # Insert given values into the database.
       def insert(*values)
         if @opts[:returning]
-          # already know which columns to return, let the standard code
-          # handle it
+          # Already know which columns to return, let the standard code handle it
           super
-        elsif @opts[:sql]
-          # raw SQL used, so don't know which table is being inserted
-          # into, and therefore can't determine primary key.  Run the
-          # insert statement and return nil.
+        elsif @opts[:sql] || @opts[:disable_insert_returning]
+          # Raw SQL used or RETURNING disabled, just use the default behavior
+          # and return nil since sequence is not known.
           super
           nil
         else
-          # Force the use of RETURNING with the primary key value.
+          # Force the use of RETURNING with the primary key value,
+          # unless it has been disabled.
           returning(insert_pk).insert(*values){|r| return r.values.first}
         end
       end
 
-      # Insert a record returning the record inserted
+      # Insert a record, returning the record inserted.  Always returns nil without
+      # running a query if disable_insert_returning is used.
       def insert_select(*values)
-        returning.insert(*values){|r| return r}
+        returning.insert(*values){|r| return r} unless @opts[:disable_insert_returning]
       end
 
       # Locks all tables in the dataset's FROM clause (but not in JOINs) with
@@ -1227,11 +1309,12 @@ module Sequel
         nil
       end
 
-      # PostgreSQL allows inserting multiple rows at once.
-      def multi_insert_sql(columns, values)
-        sql = LiteralString.new('VALUES ')
-        expression_list_append(sql, values.map{|r| Array(r)})
-        [insert_sql(columns, sql)]
+      def supports_cte?(type=:select)
+        if type == :select
+          server_version >= 80400
+        else
+          server_version >= 90100
+        end
       end
 
       # PostgreSQL supports using the WITH clause in subqueries if it
@@ -1245,6 +1328,11 @@ module Sequel
         true
       end
 
+      # True unless insert returning has been disabled for this dataset.
+      def supports_insert_select?
+        !@opts[:disable_insert_returning]
+      end
+
       # PostgreSQL 9.3rc1+ supports lateral subqueries
       def supports_lateral_subqueries?
         server_version >= 90300
@@ -1336,15 +1424,6 @@ module Sequel
         raise(InvalidOperation, "Joined datasets cannot be truncated") if opts[:join]
       end
 
-      # PostgreSQL allows deleting from joined datasets
-      def delete_clause_methods
-        if server_version >= 90100
-          DELETE_CLAUSE_METHODS_91
-        else
-          DELETE_CLAUSE_METHODS
-        end
-      end
-
       # Only include the primary table in the main delete clause
       def delete_from_sql(sql)
         sql << FROM
@@ -1356,19 +1435,15 @@ module Sequel
         join_from_sql(:USING, sql)
       end
 
-      # PostgreSQL allows a RETURNING clause.
-      def insert_clause_methods
-        if server_version >= 90100
-          INSERT_CLAUSE_METHODS_91
-        else
-          INSERT_CLAUSE_METHODS
-        end
-      end
-
       # Return the primary key to use for RETURNING in an INSERT statement
       def insert_pk
-        if (f = opts[:from]) && !f.empty? && (pk = db.primary_key(f.first))
-          Sequel::SQL::Identifier.new(pk)
+        if (f = opts[:from]) && !f.empty?
+          case t = f.first
+          when Symbol, String, SQL::Identifier, SQL::QualifiedIdentifier
+            if pk = db.primary_key(t)
+              Sequel::SQL::Identifier.new(pk)
+            end
+          end
         end
       end
 
@@ -1417,9 +1492,9 @@ module Sequel
         BOOL_TRUE
       end
 
-      # The order of clauses in the SELECT SQL statement
-      def select_clause_methods
-        server_version >= 80400 ? SELECT_CLAUSE_METHODS_84 : SELECT_CLAUSE_METHODS
+      # PostgreSQL supports multiple rows in INSERT.
+      def multi_insert_sql_strategy
+        :values
       end
 
       # PostgreSQL requires parentheses around compound datasets if they use
@@ -1462,6 +1537,11 @@ module Sequel
         db.server_version(@opts[:server])
       end
 
+      # PostgreSQL supports quoted function names.
+      def supports_quoted_function_names?
+        true
+      end
+
       # Concatenate the expressions with a space in between
       def full_text_string_join(cols)
         cols = Array(cols).map{|x| SQL::Function.new(:COALESCE, x, EMPTY_STRING)}
@@ -1470,15 +1550,6 @@ module Sequel
         SQL::StringExpression.new(:'||', *cols)
       end
 
-      # PostgreSQL splits the main table from the joined tables
-      def update_clause_methods
-        if server_version >= 90100
-          UPDATE_CLAUSE_METHODS_91
-        else
-          UPDATE_CLAUSE_METHODS
-        end
-      end
-
       # Use FROM to specify additional tables in an update query
       def update_from_sql(sql)
         join_from_sql(:FROM, sql)
diff --git a/lib/sequel/adapters/shared/progress.rb b/lib/sequel/adapters/shared/progress.rb
index 26c9cf3..459346c 100644
--- a/lib/sequel/adapters/shared/progress.rb
+++ b/lib/sequel/adapters/shared/progress.rb
@@ -10,7 +10,7 @@ module Sequel
     end
   
     module DatasetMethods
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select limit distinct columns from join where group order having compounds')
+      Dataset.def_sql_method(self, :select, %w'select limit distinct columns from join where group order having compounds')
 
       # Progress requires SQL standard datetimes
       def requires_sql_standard_datetimes?
@@ -24,10 +24,6 @@ module Sequel
 
       private
 
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
-      end
-
       # Progress uses TOP for limit, but it is only supported in Progress 10.
       # The Progress adapter targets Progress 9, so it silently ignores the option.
       def select_limit_sql(sql)
diff --git a/lib/sequel/adapters/shared/sqlanywhere.rb b/lib/sequel/adapters/shared/sqlanywhere.rb
new file mode 100644
index 0000000..54ac907
--- /dev/null
+++ b/lib/sequel/adapters/shared/sqlanywhere.rb
@@ -0,0 +1,469 @@
+module Sequel
+  module SqlAnywhere
+
+    @convert_smallint_to_bool = true
+
+    class << self
+      # Whether to convert smallint values to bool, true by default.
+      # Can also be overridden per dataset.
+      attr_accessor :convert_smallint_to_bool
+    end
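+
+    # Illustrative example of toggling the module-level default:
+    #
+    #   Sequel::SqlAnywhere.convert_smallint_to_bool = false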
+
+    module DatabaseMethods
+      extend Sequel::Database::ResetIdentifierMangling
+
+      attr_reader :conversion_procs
+
+      # Override the default SqlAnywhere.convert_smallint_to_bool setting for this database.
+      attr_writer :convert_smallint_to_bool
+
+      AUTO_INCREMENT = 'IDENTITY'.freeze
+      SQL_BEGIN = "BEGIN TRANSACTION".freeze
+      SQL_COMMIT = "COMMIT TRANSACTION".freeze
+      SQL_ROLLBACK = "IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION".freeze
+      TEMPORARY = "GLOBAL TEMPORARY ".freeze
+      SMALLINT_RE = /smallint/i.freeze
+      DECIMAL_TYPE_RE = /numeric/io
+
+      # Whether to convert smallint values to booleans for this database.
+      # Defaults to the SqlAnywhere module setting.
+      def convert_smallint_to_bool
+        defined?(@convert_smallint_to_bool) ? @convert_smallint_to_bool : (@convert_smallint_to_bool = ::Sequel::SqlAnywhere.convert_smallint_to_bool)
+      end
+
+      # SQL Anywhere uses the :sqlanywhere database type.
+      def database_type
+        :sqlanywhere
+      end
+
+      def to_application_timestamp_sa(v)
+        to_application_timestamp(v.to_s) if v
+      end
+
+      # Convert smallint type to boolean if convert_smallint_to_bool is true
+      def schema_column_type(db_type)
+        if convert_smallint_to_bool && db_type =~ SMALLINT_RE
+          :boolean
+        else
+          super
+        end
+      end
+
+      def schema_parse_table(table, opts)
+        m = output_identifier_meth(opts[:dataset])
+        im = input_identifier_meth(opts[:dataset])
+        metadata_dataset.
+         from{sa_describe_query("select * from #{im.call(table)}").as(:a)}.
+         join(:syscolumn___b, :table_id=>:base_table_id, :column_id=>:base_column_id).
+         order(:a__column_number).
+          map do |row|
+          row[:auto_increment] = row.delete(:is_autoincrement) == 1
+          row[:primary_key] = row.delete(:pkey) == 'Y'
+          row[:allow_null] = row[:nulls_allowed].is_a?(Fixnum) ? row.delete(:nulls_allowed) == 1 : row.delete(:nulls_allowed)
+          row[:db_type] = row.delete(:domain_name)
+          row[:type] = if row[:db_type] =~ DECIMAL_TYPE_RE and (row[:scale].is_a?(Fixnum) ? row[:scale] == 0 : !row[:scale])
+            :integer
+          else
+            schema_column_type(row[:db_type])
+          end
+          [m.call(row.delete(:name)), row]
+        end
+      end
+
+      def indexes(table, opts = OPTS)
+        m = output_identifier_meth
+        im = input_identifier_meth
+        indexes = {}
+        metadata_dataset.
+         from(:dbo__sysobjects___z).
+         select(:z__name___table_name, :i__name___index_name, :si__indextype___type, :si__colnames___columns).
+         join(:dbo__sysindexes___i, :id___i=> :id___z).
+         join(:sys__sysindexes___si, :iname=> :name___i).
+         where(:z__type => 'U', :table_name=>im.call(table)).
+         each do |r|
+          indexes[m.call(r[:index_name])] =
+            {:unique=>(r[:type].downcase=='unique'),
+             :columns=>r[:columns].split(',').map{|v| m.call(v.split(' ').first)}} unless r[:type].downcase == 'primary key'
+        end
+        indexes
+      end
+
+      def foreign_key_list(table, opts=OPTS)
+        m = output_identifier_meth
+        im = input_identifier_meth
+        fk_indexes = {}
+        metadata_dataset.
+         from(:sys__sysforeignkey___fk).
+         select(:fk__role___name, :fks__columns___column_map, :si__indextype___type, :si__colnames___columns, :fks__primary_tname___table_name).
+         join(:sys__sysforeignkeys___fks, :role => :role).
+         join_table(:inner, :sys__sysindexes___si, [:iname=> :fk__role], {:implicit_qualifier => :fk}).
+         where(:fks__foreign_tname=>im.call(table)).
+         each do |r|
+          unless r[:type].downcase == 'primary key'
+            fk_indexes[r[:name]] =
+              {:name=>m.call(r[:name]),
+               :columns=>r[:columns].split(',').map{|v| m.call(v.split(' ').first)},
+               :table=>m.call(r[:table_name]),
+               :key=>r[:column_map].split(',').map{|v| m.call(v.split(' IS ').last)}}
+           end
+        end
+        fk_indexes.values
+      end
+
+      def tables(opts=OPTS)
+        tables_and_views('U', opts)
+      end
+
+      def views(opts=OPTS)
+        tables_and_views('V', opts)
+      end
+
+      private
+
+      DATABASE_ERROR_REGEXPS = {
+        /would not be unique|Primary key for table.+is not unique/ => Sequel::UniqueConstraintViolation,
+        /Column .* in table .* cannot be NULL/ => Sequel::NotNullConstraintViolation,
+        /Constraint .* violated: Invalid value in table .*/ => Sequel::CheckConstraintViolation,
+        /No primary key value for foreign key .* in table .*/ => Sequel::ForeignKeyConstraintViolation,
+        /Primary key for row in table .* is referenced by foreign key .* in table .*/ => Sequel::ForeignKeyConstraintViolation
+      }.freeze
+
+      def database_error_regexps
+        DATABASE_ERROR_REGEXPS
+      end
+
+      # Sybase uses the IDENTITY column for autoincrementing columns.
+      def auto_increment_sql
+        AUTO_INCREMENT
+      end
+      
+      # SQL fragment for marking a table as temporary
+      def temporary_table_sql
+        TEMPORARY
+      end
+
+      # SQL to BEGIN a transaction.
+      def begin_transaction_sql
+        SQL_BEGIN
+      end
+
+      # SQL to ROLLBACK a transaction.
+      def rollback_transaction_sql
+        SQL_ROLLBACK
+      end
+
+      # SQL to COMMIT a transaction.
+      def commit_transaction_sql
+        SQL_COMMIT
+      end
+
+      # Sybase has both datetime and timestamp classes, most people are going
+      # to want datetime
+      def type_literal_generic_datetime(column)
+        :datetime
+      end
+
+      # Sybase has both datetime and timestamp classes, most people are going
+      # to want datetime
+      def type_literal_generic_time(column)
+        column[:only_time] ? :time : :datetime
+      end
+      
+      # Sybase doesn't have a true boolean class, so it uses integer
+      def type_literal_generic_trueclass(column)
+        :smallint
+      end
+
+      # SQLAnywhere uses image type for blobs
+      def type_literal_generic_file(column)
+        :image
+      end
+
+      # Sybase specific syntax for altering tables.
+      def alter_table_sql(table, op)
+        case op[:op]
+        when :add_column
+          "ALTER TABLE #{quote_schema_table(table)} ADD #{column_definition_sql(op)}"
+        when :drop_column
+          "ALTER TABLE #{quote_schema_table(table)} DROP #{column_definition_sql(op)}"
+        when :drop_constraint
+          case op[:type]
+          when :primary_key
+            "ALTER TABLE #{quote_schema_table(table)} DROP PRIMARY KEY"
+          when :foreign_key
+            if op[:name] || op[:columns]
+              name = op[:name] || foreign_key_name(table, op[:columns])
+              if name
+                "ALTER TABLE #{quote_schema_table(table)} DROP FOREIGN KEY #{quote_identifier(name)}"
+              end
+            end
+          else
+            super
+          end
+        when :rename_column
+          "ALTER TABLE #{quote_schema_table(table)} RENAME #{quote_identifier(op[:name])} TO #{quote_identifier(op[:new_name].to_s)}"
+        when :set_column_type
+          "ALTER TABLE #{quote_schema_table(table)} ALTER #{quote_identifier(op[:name])} #{type_literal(op)}"
+        when :set_column_null
+          "ALTER TABLE #{quote_schema_table(table)} ALTER #{quote_identifier(op[:name])} #{'NOT ' unless op[:null]}NULL"
+        when :set_column_default
+          "ALTER TABLE #{quote_schema_table(table)} ALTER #{quote_identifier(op[:name])} DEFAULT #{literal(op[:default])}"
+        else
+          super(table, op)
+        end
+      end
+
+      # SqlAnywhere doesn't support CREATE TABLE AS; it only supports SELECT INTO.
+      # Emulating CREATE TABLE AS using SELECT INTO is only possible if a dataset
+      # is given as the argument; it can't work with a string, so raise an
+      # Error if a string is given.
+      def create_table_as(name, ds, options)
+        raise(Error, "must provide dataset instance as value of create_table :as option on SqlAnywhere") unless ds.is_a?(Sequel::Dataset)
+        run(ds.into(name).sql)
+      end
+
+      # Use SP_RENAME to rename the table
+      def rename_table_sql(name, new_name)
+        "ALTER TABLE #{quote_schema_table(name)} RENAME #{quote_schema_table(new_name)}"
+      end
+
+      def tables_and_views(type, opts=OPTS)
+        m = output_identifier_meth
+        metadata_dataset.
+          from(:sysobjects___a).
+          where(:a__type=>type).
+          select_map(:a__name).
+          map{|n| m.call(n)}
+      end
+      
+      # SQLAnywhere supports views WITH CHECK OPTION, but not WITH LOCAL CHECK OPTION.
+      def view_with_check_option_support
+        true
+      end
+    end
+
+    module DatasetMethods
+      BOOL_TRUE = '1'.freeze
+      BOOL_FALSE = '0'.freeze
+      WILDCARD = LiteralString.new('%').freeze
+      TOP = " TOP ".freeze
+      START_AT = " START AT ".freeze
+      SQL_WITH_RECURSIVE = "WITH RECURSIVE ".freeze
+      DATE_FUNCTION = 'today()'.freeze
+      NOW_FUNCTION = 'now()'.freeze
+      DATEPART = 'datepart'.freeze
+      REGEXP = 'REGEXP'.freeze
+      NOT_REGEXP = 'NOT REGEXP'.freeze
+      TIMESTAMP_USEC_FORMAT = ".%03d".freeze
+      APOS = Dataset::APOS
+      APOS_RE = Dataset::APOS_RE
+      DOUBLE_APOS = Dataset::DOUBLE_APOS
+      BACKSLASH_RE = /\\/.freeze
+      QUAD_BACKSLASH = "\\\\\\\\".freeze
+      BLOB_START = "0x".freeze
+      HSTAR = "H*".freeze
+      CROSS_APPLY = 'CROSS APPLY'.freeze
+      OUTER_APPLY = 'OUTER APPLY'.freeze
+      ONLY_OFFSET = " TOP 2147483647".freeze
+
+      Dataset.def_sql_method(self, :insert, %w'with insert into columns values')
+      Dataset.def_sql_method(self, :select, %w'with select distinct limit columns into from join where group having order compounds lock')
+
+      # Whether to convert smallint values to booleans for this dataset.
+      # Defaults to the database's convert_smallint_to_bool setting.
+      def convert_smallint_to_bool
+        defined?(@convert_smallint_to_bool) ? @convert_smallint_to_bool : (@convert_smallint_to_bool = @db.convert_smallint_to_bool)
+      end
+
+      # Override the default SqlAnywhere.convert_smallint_to_bool setting for this dataset.
+      attr_writer :convert_smallint_to_bool
+
+      def supports_cte?(type=:select)
+        type == :select || type == :insert
+      end
+
+      def supports_multiple_column_in?
+        false
+      end
+
+      def supports_where_true?
+        false
+      end
+
+      def supports_is_true?
+        false
+      end
+
+      def supports_join_using?
+        false
+      end
+
+      def supports_timestamp_usecs?
+        false
+      end
+
+      # Uses CROSS APPLY to join the given table into the current dataset.
+      def cross_apply(table)
+        join_table(:cross_apply, table)
+      end
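+
+      # Illustrative example (table names are made up, output shown roughly):
+      #
+      #   DB[:artists].cross_apply(:albums)
+      #   # SELECT * FROM artists CROSS APPLY albums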
+
+      # SqlAnywhere requires recursive CTEs to have column aliases.
+      def recursive_cte_requires_column_aliases?
+        true
+      end
+
+      # SQLAnywhere uses + for string concatenation, and LIKE is case insensitive by default.
+      def complex_expression_sql_append(sql, op, args)
+        case op
+        when :'||'
+          super(sql, :+, args)
+        when :<<, :>>
+          complex_expression_emulate_append(sql, op, args)
+        when :LIKE, :"NOT LIKE"
+          sql << Sequel::Dataset::PAREN_OPEN
+          literal_append(sql, args.at(0))
+          sql << Sequel::Dataset::SPACE << (op == :LIKE ? REGEXP : NOT_REGEXP) << Sequel::Dataset::SPACE
+          pattern = ''
+          last_c = ''
+          args.at(1).each_char do |c|
+            if  c == '_' and not pattern.end_with?('\\') and last_c != '\\'
+              pattern << '.'
+            elsif c == '%' and not pattern.end_with?('\\') and last_c != '\\'
+              pattern << '.*'
+            elsif c == '[' and not pattern.end_with?('\\') and last_c != '\\'
+              pattern << '\['
+            elsif c == ']' and not pattern.end_with?('\\') and last_c != '\\'
+              pattern << '\]'
+            elsif c == '*' and not pattern.end_with?('\\') and last_c != '\\'
+              pattern << '\*'
+            elsif c == '?' and not pattern.end_with?('\\') and last_c != '\\'
+              pattern << '\?'
+            else
+              pattern << c
+            end
+            if c == '\\' and last_c == '\\'
+              last_c = ''
+            else
+              last_c = c
+            end
+          end
+          literal_append(sql, pattern)
+          sql << Sequel::Dataset::ESCAPE
+          literal_append(sql, Sequel::Dataset::BACKSLASH)
+          sql << Sequel::Dataset::PAREN_CLOSE
+        when :ILIKE, :"NOT ILIKE"
+          super(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args)
+        when :extract
+          sql << DATEPART + Sequel::Dataset::PAREN_OPEN
+          literal_append(sql, args.at(0))
+          sql << ','
+          literal_append(sql, args.at(1))
+          sql << Sequel::Dataset::PAREN_CLOSE
+        else
+          super
+        end
+      end
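+
+      # Illustrative example of the LIKE emulation above (output shown roughly):
+      #
+      #   DB[:items].where(Sequel.like(:name, 'abc%'))
+      #   # roughly: ... WHERE (name REGEXP 'abc.*' ESCAPE ...)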
+
+      # SqlAnywhere uses \\ to escape metacharacters, but a ']' should not be escaped
+      def escape_like(string)
+        string.gsub(/[\\%_\[]/){|m| "\\#{m}"}
+      end
+
+      # Use today() for CURRENT_DATE and now() for CURRENT_TIMESTAMP and CURRENT_TIME
+      def constant_sql_append(sql, constant)
+        case constant
+        when :CURRENT_DATE
+          sql << DATE_FUNCTION
+        when :CURRENT_TIMESTAMP, :CURRENT_TIME
+          sql << NOW_FUNCTION
+        else
+          super
+        end
+      end
+
+      # Specify a table for a SELECT ... INTO query.
+      def into(table)
+        clone(:into => table)
+      end
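+
+      # Hypothetical example of SELECT ... INTO (also used above to emulate
+      # create_table :as), output shown roughly:
+      #
+      #   DB[:items].into(:new_items).sql
+      #   # SELECT * INTO new_items FROM items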
+
+      private
+
+      # Use 1 for true on Sybase
+      def literal_true
+        BOOL_TRUE
+      end
+
+      # Use 0 for false on Sybase
+      def literal_false
+        BOOL_FALSE
+      end
+
+      # SQL fragment for String.  Doubles \ and ' by default.
+      def literal_string_append(sql, v)
+        sql << APOS << v.gsub(BACKSLASH_RE, QUAD_BACKSLASH).gsub(APOS_RE, DOUBLE_APOS) << APOS
+      end
+
+      # SqlAnywhere uses a preceding 0x for hex escaping blobs
+      def literal_blob_append(sql, v)
+        if v.empty?
+          literal_append(sql, "")
+        else
+          sql << BLOB_START << v.unpack(HSTAR).first
+        end
+      end
+
+      # Sybase supports multiple rows in INSERT.
+      def multi_insert_sql_strategy
+        :values
+      end
+
+      def select_into_sql(sql)
+        if i = @opts[:into]
+          sql << Sequel::Dataset::INTO
+          identifier_append(sql, i)
+        end
+      end
+
+      def format_timestamp_usec(usec)
+        sprintf(TIMESTAMP_USEC_FORMAT, usec/1000)
+      end
+
+      # SQL Anywhere uses TOP N for the limit and START AT (N + 1) for the offset.
+      def select_limit_sql(sql)
+        l = @opts[:limit]
+        o = @opts[:offset]
+        if l || o
+          if l
+            sql << TOP
+            literal_append(sql, l)
+          else
+            sql << ONLY_OFFSET
+          end
+
+          if o 
+            sql << START_AT + "("
+            literal_append(sql, o)
+            sql << " + 1)"
+          end
+        end
+      end
+
+      # Use WITH RECURSIVE instead of WITH if any of the CTEs is recursive
+      def select_with_sql_base
+        opts[:with].any?{|w| w[:recursive]} ? SQL_WITH_RECURSIVE : super
+      end
+
+      def join_type_sql(join_type)
+        case join_type
+        when :cross_apply
+          CROSS_APPLY
+        when :outer_apply
+          OUTER_APPLY
+        else
+          super
+        end
+      end
+    end
+  end
+end
diff --git a/lib/sequel/adapters/shared/sqlite.rb b/lib/sequel/adapters/shared/sqlite.rb
index 6db7985..5b3deba 100644
--- a/lib/sequel/adapters/shared/sqlite.rb
+++ b/lib/sequel/adapters/shared/sqlite.rb
@@ -97,7 +97,8 @@ module Sequel
         im = input_identifier_meth
         indexes = {}
         metadata_dataset.with_sql("PRAGMA index_list(?)", im.call(table)).each do |r|
-          next if r[:name] =~ PRIMARY_KEY_INDEX_RE
+          # :only_autocreated internal option can be used to get only autocreated indexes
+          next if (!!(r[:name] =~ PRIMARY_KEY_INDEX_RE) ^ !!opts[:only_autocreated])
           indexes[m.call(r[:name])] = {:unique=>r[:unique].to_i==1}
         end
         indexes.each do |k, v|
@@ -131,7 +132,7 @@ module Sequel
       def sqlite_version
         return @sqlite_version if defined?(@sqlite_version)
         @sqlite_version = begin
-          v = get{sqlite_version{}}
+          v = fetch('SELECT sqlite_version()').single_value
           [10000, 100, 1].zip(v.split('.')).inject(0){|a, m| a + m[0] * Integer(m[1])}
         rescue
           0
@@ -331,10 +332,11 @@ module Sequel
       end
 
       DATABASE_ERROR_REGEXPS = {
-        /is not unique\z/ => UniqueConstraintViolation,
-        /foreign key constraint failed\z/ => ForeignKeyConstraintViolation,
+        /(is|are) not unique\z|PRIMARY KEY must be unique\z|UNIQUE constraint failed: .+\z/ => UniqueConstraintViolation,
+        /foreign key constraint failed\z/i => ForeignKeyConstraintViolation,
+        /\ACHECK constraint failed/ => CheckConstraintViolation,
         /\A(SQLITE ERROR 19 \(CONSTRAINT\) : )?constraint failed\z/ => ConstraintViolation,
-        /may not be NULL\z/ => NotNullConstraintViolation,
+        /may not be NULL\z|NOT NULL constraint failed: .+\z/ => NotNullConstraintViolation,
       }.freeze
       def database_error_regexps
         DATABASE_ERROR_REGEXPS
@@ -389,6 +391,19 @@ module Sequel
           constraints.concat(fks.each{|h| h[:type] = :foreign_key})
         end
 
+        # Determine unique constraints and make sure the new columns have them
+        unique_columns = []
+        indexes(table, :only_autocreated=>true).each_value do |h|
+          unique_columns.concat(h[:columns]) if h[:columns].length == 1 && h[:unique]
+        end
+        unique_columns -= pks
+        unless unique_columns.empty?
+          unique_columns.map!{|c| quote_identifier(c)}
+          def_columns.each do |c|
+            c[:unique] = true if unique_columns.include?(quote_identifier(c[:name]))
+          end
+        end
+        
         def_columns_str = (def_columns.map{|c| column_definition_sql(c)} + constraints.map{|c| constraint_definition_sql(c)}).join(', ')
         new_columns = old_columns.dup
         opts[:new_columns_proc].call(new_columns) if opts[:new_columns_proc]
@@ -482,7 +497,6 @@ module Sequel
     module DatasetMethods
       include Dataset::Replace
 
-      SELECT_CLAUSE_METHODS = Dataset.clause_methods(:select, %w'select distinct columns from join where group having compounds order limit')
       CONSTANT_MAP = {:CURRENT_DATE=>"date(CURRENT_TIMESTAMP, 'localtime')".freeze, :CURRENT_TIMESTAMP=>"datetime(CURRENT_TIMESTAMP, 'localtime')".freeze, :CURRENT_TIME=>"time(CURRENT_TIMESTAMP, 'localtime')".freeze}
       EMULATED_FUNCTION_MAP = {:char_length=>'length'.freeze}
       EXTRACT_MAP = {:year=>"'%Y'", :month=>"'%m'", :day=>"'%d'", :hour=>"'%H'", :minute=>"'%M'", :second=>"'%f'"}
@@ -502,6 +516,11 @@ module Sequel
       HSTAR = "H*".freeze
       DATE_OPEN = "date(".freeze
       DATETIME_OPEN = "datetime(".freeze
+      ONLY_OFFSET = " LIMIT -1 OFFSET ".freeze
+
+      Dataset.def_sql_method(self, :delete, [['if db.sqlite_version >= 30803', %w'with delete from where'], ["else", %w'delete from where']])
+      Dataset.def_sql_method(self, :insert, [['if db.sqlite_version >= 30803', %w'with insert into columns values'], ["else", %w'insert into columns values']])
+      Dataset.def_sql_method(self, :update, [['if db.sqlite_version >= 30803', %w'with update table set where'], ["else", %w'update table set where']])
 
       def cast_sql_append(sql, expr, type)
         if type == Time or type == DateTime
@@ -525,11 +544,7 @@ module Sequel
           sql << NOT_SPACE
           complex_expression_sql_append(sql, (op == :"NOT ILIKE" ? :ILIKE : :LIKE), args)
         when :^
-          sql << complex_expression_arg_pairs(args) do |a, b|
-            a = literal(a)
-            b = literal(b)
-            "((~(#{a} & #{b})) & (#{a} | #{b}))"
-          end
+          complex_expression_arg_pairs_append(sql, args){|a, b| Sequel.lit(["((~(", " & ", ")) & (", " | ", "))"], a, b, a, b)}
         when :extract
           part = args.at(0)
           raise(Sequel::Error, "unsupported extract argument: #{part.inspect}") unless format = EXTRACT_MAP[part]
@@ -593,6 +608,16 @@ module Sequel
         end
       end
       
+      # SQLite 3.8.3+ supports common table expressions.
+      def supports_cte?(type=:select)
+        db.sqlite_version >= 30803
+      end
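+
+      # Illustrative example (assuming SQLite 3.8.3+, output shown roughly):
+      #
+      #   DB[:cte].with(:cte, DB[:items].select(:id)).all
+      #   # WITH cte AS (SELECT id FROM items) SELECT * FROM cte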
+
+      # SQLite does not support table aliases with column aliases
+      def supports_derived_column_lists?
+        false
+      end
+
       # SQLite does not support INTERSECT ALL or EXCEPT ALL
       def supports_intersect_except_all?
         false
@@ -623,7 +648,8 @@ module Sequel
       private
       
       # SQLite uses string literals instead of identifiers in AS clauses.
-      def as_sql_append(sql, aliaz)
+      def as_sql_append(sql, aliaz, column_aliases=nil)
+        raise Error, "sqlite does not support derived column lists" if column_aliases
         aliaz = aliaz.value if aliaz.is_a?(SQL::Identifier)
         sql << AS
         literal_append(sql, aliaz.to_s)
@@ -646,6 +672,11 @@ module Sequel
         end
       end
 
+      # SQLite supports a maximum of 500 rows in a VALUES clause.
+      def default_import_slice
+        500
+      end
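+
+      # Illustrative example: a bulk import is split into INSERT statements of
+      # at most 500 rows each (unless a :slice option overrides the default):
+      #
+      #   DB[:items].import([:name], [['a'], ['b']] * 1000)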
+
       # SQL fragment specifying a list of identifiers
       def identifier_list(columns)
         columns.map{|i| quote_identifier(i)}.join(COMMA)
@@ -666,11 +697,12 @@ module Sequel
         @db.integer_booleans ? '1' : "'t'"
       end
 
-      # SQLite does not support the SQL WITH clause
-      def select_clause_methods
-        SELECT_CLAUSE_METHODS
+      # SQLite only supports multiple rows in the VALUES clause
+      # starting in 3.7.11.  On older versions, fall back to using a UNION.
+      def multi_insert_sql_strategy
+        db.sqlite_version >= 30711 ? :values : :union
       end
-      
+
       # SQLite does not support FOR UPDATE, but silently ignore it
       # instead of raising an error for compatibility with other
       # databases.
@@ -678,6 +710,16 @@ module Sequel
         super unless @opts[:lock] == :update
       end
 
+      def select_only_offset_sql(sql)
+        sql << ONLY_OFFSET
+        literal_append(sql, @opts[:offset])
+      end
+  
+      # SQLite supports quoted function names.
+      def supports_quoted_function_names?
+        true
+      end
+
       # SQLite treats a DELETE with no WHERE clause as a TRUNCATE
       def _truncate_sql(table)
         "DELETE FROM #{table}"
diff --git a/lib/sequel/adapters/sqlanywhere.rb b/lib/sequel/adapters/sqlanywhere.rb
new file mode 100644
index 0000000..9962906
--- /dev/null
+++ b/lib/sequel/adapters/sqlanywhere.rb
@@ -0,0 +1,177 @@
+require 'sqlanywhere'
+
+Sequel.require %w'shared/sqlanywhere', 'adapters'
+
+module Sequel
+  # Module for holding all SqlAnywhere-related classes and modules for Sequel.
+  module SqlAnywhere
+
+    class SQLAnywhereException < StandardError
+      attr_reader :errno
+      attr_reader :sql
+
+      def initialize(message, errno, sql)
+        super(message)
+        @errno = errno
+        @sql = sql
+      end
+    end
+
+    TYPE_TRANSLATOR = tt = Class.new do
+      def blob(s) ::Sequel::SQL::Blob.new(s) end
+      def boolean(s) s.to_i != 0 end
+      def date(s) ::Date.strptime(s) end
+      def decimal(s) ::BigDecimal.new(s) end
+      def time(s) ::Sequel.string_to_time(s) end
+    end.new
+
+    SQLANYWHERE_TYPES = {}
+    {
+        [0, 484] => tt.method(:decimal),
+        [384] => tt.method(:date),
+        [388] =>  tt.method(:time),
+        [500] => tt.method(:boolean),
+        [524, 528] => tt.method(:blob)
+    }.each do |k,v|
+      k.each{|n| SQLANYWHERE_TYPES[n] = v}
+    end
+
+    # Database class for SQLAnywhere databases used with Sequel.
+    class Database < Sequel::Database
+      include Sequel::SqlAnywhere::DatabaseMethods
+
+      DEFAULT_CONFIG = { :user => 'dba', :password => 'sql' }
+
+      attr_accessor :api
+
+      set_adapter_scheme :sqlanywhere
+
+      def connect(server)
+        opts = server_opts(server)
+        unless conn_string = opts[:conn_string]
+          conn_string = []
+          conn_string << "Host=#{opts[:host]}#{":#{opts[:port]}" if opts[:port]}" if opts[:host]
+          conn_string << "DBN=#{opts[:database]}" if opts[:database]
+          conn_string << "UID=#{opts[:user]}" if opts[:user]
+          conn_string << "Password=#{opts[:password]}" if opts[:password]
+          conn_string << "CommLinks=#{opts[:commlinks]}" if opts[:commlinks]
+          conn_string << "ConnectionName=#{opts[:connection_name]}" if opts[:connection_name]
+          conn_string << "CharSet=#{opts[:encoding]}" if opts[:encoding]
+          conn_string << "Idle=0" # Prevent the server from disconnecting us if we're idle for >240mins (by default)
+          conn_string << nil
+          conn_string = conn_string.join(';')
+        end
+
+        conn = @api.sqlany_new_connection
+        raise LoadError, "Could not connect" unless conn && @api.sqlany_connect(conn, conn_string) == 1
+
+        if Sequel.application_timezone == :utc
+          @api.sqlany_execute_immediate(conn, "SET TEMPORARY OPTION time_zone_adjustment=0")
+        end
+
+        conn
+      end
+
+      # Closes given database connection.
+      def disconnect_connection(c)
+        @api.sqlany_disconnect(c)
+      end
+
+      # Returns number of rows affected
+      def execute_dui(sql, opts=OPTS)
+        synchronize do |conn|
+          _execute(conn, :rows, sql, opts)
+        end
+      end
+
+      def execute(sql, opts=OPTS, &block)
+        synchronize do |conn|
+          _execute(conn, :select, sql, opts, &block)
+        end
+      end
+
+      def execute_insert(sql, opts=OPTS)
+        synchronize do |conn|
+          _execute(conn, :insert, sql, opts)
+        end
+      end
+
+      private
+
+      LAST_INSERT_ID = 'SELECT @@IDENTITY'.freeze
+      def _execute(conn, type, sql, opts)
+        unless rs = log_yield(sql){@api.sqlany_execute_direct(conn, sql)}
+          result, errstr = @api.sqlany_error(conn)
+          raise_error(SQLAnywhereException.new(errstr, result, sql))
+        end
+
+        case type
+        when :select
+          yield rs if block_given?
+        when :rows
+          return @api.sqlany_affected_rows(rs)
+        when :insert
+          _execute(conn, :select, LAST_INSERT_ID, opts){|r| return @api.sqlany_get_column(r, 0)[1] if r && @api.sqlany_fetch_next(r) == 1}
+        end
+      ensure
+        @api.sqlany_commit(conn) unless in_transaction?
+        @api.sqlany_free_stmt(rs) if rs
+      end
+
+      def adapter_initialize
+        @conversion_procs = SQLANYWHERE_TYPES.dup
+        @conversion_procs[392] = method(:to_application_timestamp_sa)
+        @api = SQLAnywhere::SQLAnywhereInterface.new
+        raise LoadError, "Could not load SQLAnywhere DBCAPI library" if SQLAnywhere::API.sqlany_initialize_interface(@api) == 0
+        raise LoadError, "Could not initialize SQLAnywhere DBCAPI library" if @api.sqlany_init == 0
+      end
+
+      def log_connection_execute(conn, sql)
+        _execute(conn, nil, sql, OPTS)
+      end
+    end
+
+    # Dataset class for SqlAnywhere datasets accessed via the native driver.
+    class Dataset < Sequel::Dataset
+      include Sequel::SqlAnywhere::DatasetMethods
+
+      Database::DatasetClass = self
+
+      # Yield all rows matching this dataset to the block, as hashes with
+      # symbol keys, converting values using the database's conversion procs
+      # for the column types involved.
+      def fetch_rows(sql)
+        db = @db
+        cps = db.conversion_procs
+        api = db.api
+        execute(sql) do |rs|
+          convert = (convert_smallint_to_bool and db.convert_smallint_to_bool)
+          col_infos = []
+          api.sqlany_num_cols(rs).times do |i|
+            _, _, name, _, type = api.sqlany_get_column_info(rs, i)
+            cp = if type == 500
+              cps[500] if convert
+            else
+              cps[type]
+            end
+            col_infos << [i, output_identifier(name), cp]
+          end
+
+          @columns = col_infos.map{|a| a[1]}
+
+          if rs
+            while api.sqlany_fetch_next(rs) == 1
+              h = {}
+              col_infos.each do |i, name, cp|
+                _, v = api.sqlany_get_column(rs, i)
+                h[name] = cp && v ? cp[v] : v
+              end
+              yield h
+            end
+          end
+        end
+        self
+      end
+    end
+  end
+end
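# A minimal usage sketch of the new native SQL Anywhere adapter above.  The
# option keys mirror those read by Database#connect in this file; the host and
# database values are placeholders, and the sqlanywhere gem plus a reachable
# server are assumed.
require 'sequel'

DB = Sequel.connect(
  :adapter  => 'sqlanywhere',
  :host     => 'localhost',
  :database => 'sales',
  :user     => 'dba',
  :password => 'sql')

# Alternatively, a prebuilt :conn_string can be given, bypassing the
# option-by-option construction shown in #connect:
#   Sequel.connect(:adapter => 'sqlanywhere',
#                  :conn_string => 'Host=localhost;DBN=sales;UID=dba;Password=sql')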
diff --git a/lib/sequel/adapters/tinytds.rb b/lib/sequel/adapters/tinytds.rb
index 5aebd96..5e2053b 100644
--- a/lib/sequel/adapters/tinytds.rb
+++ b/lib/sequel/adapters/tinytds.rb
@@ -230,11 +230,29 @@ module Sequel
       # various cases.
       def fetch_rows(sql)
         execute(sql) do |result|
-          @columns = result.fields.map!{|c| output_identifier(c)}
-          if db.timezone == :utc
-            result.each(:timezone=>:utc){|r| yield r}
+          columns = result.fields.map!{|c| output_identifier(c)}
+          if columns.empty?
+            args = []
+            args << {:timezone=>:utc} if db.timezone == :utc
+            cols = nil
+            result.each(*args) do |r|
+              unless cols
+                cols = result.fields.map{|c| [c, output_identifier(c)]}
+                @columns = columns = cols.map{|c| c.last}
+              end
+              h = {}
+              cols.each do |s, sym|
+                h[sym] = r[s]
+              end
+              yield h
+            end
           else
-            result.each{|r| yield r}
+            @columns = columns
+            if db.timezone == :utc
+              result.each(:timezone=>:utc){|r| yield r}
+            else
+              result.each{|r| yield r}
+            end
           end
         end
         self
diff --git a/lib/sequel/adapters/utils/emulate_offset_with_reverse_and_count.rb b/lib/sequel/adapters/utils/emulate_offset_with_reverse_and_count.rb
index a794a28..0a996af 100644
--- a/lib/sequel/adapters/utils/emulate_offset_with_reverse_and_count.rb
+++ b/lib/sequel/adapters/utils/emulate_offset_with_reverse_and_count.rb
@@ -18,6 +18,7 @@ module Sequel
     # reversed in the subselect.  Note that the order needs to be unambiguous
     # to work correctly, and you must select all columns that you are ordering on.
     def select_sql
+      return super if @opts[:sql]
       return super unless o = @opts[:offset]
 
       order = @opts[:order] || default_offset_order
@@ -26,7 +27,7 @@ module Sequel
       end
 
       ds = unlimited
-      row_count = @opts[:offset_total_count] || ds.clone(:append_sql=>'').count
+      row_count = @opts[:offset_total_count] || ds.clone(:append_sql=>'', :placeholder_literal_null=>true).count
       dsa1 = dataset_alias(1)
 
       if o.is_a?(Symbol) && @opts[:bind_vars] && (match = Sequel::Dataset::PreparedStatementMethods::PLACEHOLDER_RE.match(o.to_s))
@@ -57,6 +58,12 @@ module Sequel
       sql
     end
 
+    # This does not support offsets in correlated subqueries, as it requires a separate query
+    # to get a row count, which will be invalid if a correlated subquery is used.
+    def supports_offsets_in_correlated_subqueries?
+      false
+    end
+
     private
 
     # The default order to use for datasets with offsets, if no order is defined.
diff --git a/lib/sequel/adapters/utils/emulate_offset_with_row_number.rb b/lib/sequel/adapters/utils/emulate_offset_with_row_number.rb
index 1100ab1..e6d93fc 100644
--- a/lib/sequel/adapters/utils/emulate_offset_with_row_number.rb
+++ b/lib/sequel/adapters/utils/emulate_offset_with_row_number.rb
@@ -9,28 +9,38 @@ module Sequel
     # If offset is used, an order must be provided, because the use of ROW_NUMBER
     # requires an order.
     def select_sql
-      return super unless o = @opts[:offset]
+      return super unless emulate_offset_with_row_number?
 
-      order = @opts[:order] || default_offset_order
-      if order.nil? || order.empty?
-        raise(Error, "#{db.database_type} requires an order be provided if using an offset")
+      offset = @opts[:offset]
+      order = @opts[:order]
+      if require_offset_order?
+        order ||= default_offset_order
+        if order.nil? || order.empty?
+          raise(Error, "#{db.database_type} requires an order be provided if using an offset")
+        end
       end
 
-      columns = clone(:append_sql=>'').columns
+      columns = clone(:append_sql=>'', :placeholder_literal_null=>true).columns
       dsa1 = dataset_alias(1)
       rn = row_number_column
       sql = @opts[:append_sql] || ''
       subselect_sql_append(sql, unlimited.
         unordered.
-        select_append{ROW_NUMBER(:over, :order=>order){}.as(rn)}.
+        select_append{ROW_NUMBER{}.over(:order=>order).as(rn)}.
         from_self(:alias=>dsa1).
         select(*columns).
         limit(@opts[:limit]).
-        where(SQL::Identifier.new(rn) > o).
+        where(SQL::Identifier.new(rn) > offset).
         order(rn))
       sql
     end
 
+    # This does not support offsets in correlated subqueries, as it requires a separate query
+    # to get the columns, which will be invalid if a correlated subquery is used.
+    def supports_offsets_in_correlated_subqueries?
+      false
+    end
+
     private
 
     # The default order to use for datasets with offsets, if no order is defined.
@@ -38,5 +48,15 @@ module Sequel
     def default_offset_order
       clone(:append_sql=>'').columns
     end
+
+    # Whether an order is required when using offset emulation via ROW_NUMBER, true by default.
+    def require_offset_order?
+      true
+    end
+
+    # Whether to use ROW_NUMBER to emulate offsets
+    def emulate_offset_with_row_number?
+      @opts[:offset] && !@opts[:sql]
+    end
   end
 end
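# A minimal sketch of the ROW_NUMBER-based offset emulation implemented above,
# shown with the mock adapter so no database is needed.  MSSQL is assumed here
# as one adapter that mixes in EmulateOffsetWithRowNumber; the literal SQL
# differs by adapter, but it follows the pattern built in select_sql: the query
# is wrapped in a subselect that appends ROW_NUMBER() OVER (ORDER BY ...) as an
# extra column, and the outer query keeps only rows whose row number exceeds
# the offset, applying the limit and ordering by the row number column.
require 'sequel'

DB = Sequel.mock(:host=>'mssql')
puts DB[:items].order(:id).limit(10, 20).sql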
diff --git a/lib/sequel/adapters/utils/split_alter_table.rb b/lib/sequel/adapters/utils/split_alter_table.rb
index a339da8..97e85e9 100644
--- a/lib/sequel/adapters/utils/split_alter_table.rb
+++ b/lib/sequel/adapters/utils/split_alter_table.rb
@@ -24,6 +24,9 @@ module Sequel::Database::SplitAlterTable
         modified_columns << op[:name] unless modified_columns.include?(op[:name])
         modified_columns << op[:new_name] unless modified_columns.include?(op[:new_name])
       end
+      if split_alter_table_op?(op)
+        op_groups << []
+      end
       op_groups.last << op
     end
 
@@ -33,4 +36,9 @@ module Sequel::Database::SplitAlterTable
       remove_cached_schema(name)
     end
   end
+
+  # Whether the given alter table op should start a new group.
+  def split_alter_table_op?(op)
+    false
+  end
 end
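# A minimal sketch of the new split_alter_table_op? hook above: a Database
# class including SplitAlterTable can return true for operations that must run
# in their own ALTER TABLE statement, starting a new operation group.  The
# module below is hypothetical.
require 'sequel'

module HypotheticalAdapterDatabaseMethods
  include Sequel::Database::SplitAlterTable

  private

  # Emit each rename_column operation as a separate ALTER TABLE statement.
  def split_alter_table_op?(op)
    op[:op] == :rename_column
  end
end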
diff --git a/lib/sequel/ast_transformer.rb b/lib/sequel/ast_transformer.rb
index d712c4a..945c61f 100644
--- a/lib/sequel/ast_transformer.rb
+++ b/lib/sequel/ast_transformer.rb
@@ -33,7 +33,7 @@ module Sequel
       when SQL::OrderedExpression
         SQL::OrderedExpression.new(v(o.expression), o.descending, :nulls=>o.nulls)
       when SQL::AliasedExpression
-        SQL::AliasedExpression.new(v(o.expression), o.aliaz)
+        SQL::AliasedExpression.new(v(o.expression), o.alias, o.columns)
       when SQL::CaseExpression
         args = [v(o.conditions), v(o.default)]
         args << v(o.expression) if o.expression?
@@ -41,11 +41,13 @@ module Sequel
       when SQL::Cast
         SQL::Cast.new(v(o.expr), o.type)
       when SQL::Function
-        SQL::Function.new(o.f, *v(o.args))
+        h = {}
+        o.opts.each do |k, val|
+          h[k] = v(val)
+        end
+        SQL::Function.new!(o.name, v(o.args), h)
       when SQL::Subscript
         SQL::Subscript.new(v(o.f), v(o.sub))
-      when SQL::WindowFunction
-        SQL::WindowFunction.new(v(o.function), v(o.window))
       when SQL::Window
         opts = o.opts.dup
         opts[:partition] = v(opts[:partition]) if opts[:partition]
@@ -61,11 +63,11 @@ module Sequel
         end
         SQL::PlaceholderLiteralString.new(o.str, args, o.parens)
       when SQL::JoinOnClause
-        SQL::JoinOnClause.new(v(o.on), o.join_type, v(o.table), v(o.table_alias))
+        SQL::JoinOnClause.new(v(o.on), o.join_type, v(o.table_expr))
       when SQL::JoinUsingClause
-        SQL::JoinUsingClause.new(v(o.using), o.join_type, v(o.table), v(o.table_alias))
+        SQL::JoinUsingClause.new(v(o.using), o.join_type, v(o.table_expr))
       when SQL::JoinClause
-        SQL::JoinClause.new(o.join_type, v(o.table), v(o.table_alias))
+        SQL::JoinClause.new(o.join_type, v(o.table_expr))
       when SQL::DelayedEvaluation
         SQL::DelayedEvaluation.new(lambda{v(o.callable.call)})
       when SQL::Wrapper
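# A minimal sketch of the ASTTransformer protocol whose node handling is
# updated above: #transform walks an expression tree, and subclasses override
# the private #v method to rewrite the nodes they care about, delegating the
# rest to super.  The subclass below is a hypothetical example.
require 'sequel'

class UpcaseStrings < Sequel::ASTTransformer
  private

  # Upcase plain String values, recurse into every other node type.
  def v(o)
    o.is_a?(String) ? o.upcase : super
  end
end

expr = Sequel.expr(:category => 'ruby')
UpcaseStrings.new.transform(expr)   # expression equivalent to (category = 'RUBY')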
diff --git a/lib/sequel/connection_pool.rb b/lib/sequel/connection_pool.rb
index f7a9ddb..4317b41 100644
--- a/lib/sequel/connection_pool.rb
+++ b/lib/sequel/connection_pool.rb
@@ -69,9 +69,9 @@ class Sequel::ConnectionPool
   # with a single symbol (specifying the server/shard to use) every time a new
   # connection is needed.  The following options are respected for all connection
   # pools:
-  # :after_connect :: The proc called after each new connection is made, with the
-  #                   connection object, useful for customizations that you want to apply to all
-  #                   connections.
+  # :after_connect :: A callable object called after each new connection is made, with the
+  #                   connection object (and server argument if the callable accepts 2 arguments),
+  #                   useful for customizations that you want to apply to all connections.
   def initialize(db, opts=OPTS)
     @db = db
     @after_connect = opts[:after_connect]
@@ -94,7 +94,13 @@ class Sequel::ConnectionPool
   def make_new(server)
     begin
       conn = @db.connect(server)
-      @after_connect.call(conn) if @after_connect
+      if ac = @after_connect
+        if ac.arity == 2
+          ac.call(conn, server)
+        else
+          ac.call(conn)
+        end
+      end
     rescue Exception=>exception
       raise Sequel.convert_exception_class(exception, Sequel::DatabaseConnectionError)
     end
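# A usage sketch of the :after_connect change above: the callable may now
# accept two arguments, in which case it also receives the server/shard
# symbol.  The connection URL and replica host are placeholders and a
# reachable PostgreSQL database is assumed.
require 'sequel'

DB = Sequel.connect('postgres://localhost/mydb',
  :servers => {:read_only => {:host => 'replica.example.com'}},
  :after_connect => lambda{|conn, server|
    # Called once per new connection with the connection object and the
    # server symbol (:default, :read_only, ...); apply per-shard session
    # customizations here.
  })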
diff --git a/lib/sequel/core.rb b/lib/sequel/core.rb
index 77e0250..2022533 100644
--- a/lib/sequel/core.rb
+++ b/lib/sequel/core.rb
@@ -17,8 +17,8 @@
 #
 #   Sequel.sqlite('blog.db'){|db| puts db[:users].count} 
 #
-# For a more expanded introduction, see the {README}[link:files/README_rdoc.html].
-# For a quicker introduction, see the {cheat sheet}[link:files/doc/cheat_sheet_rdoc.html].
+# For a more expanded introduction, see the {README}[rdoc-ref:README.rdoc].
+# For a quicker introduction, see the {cheat sheet}[rdoc-ref:doc/cheat_sheet.rdoc].
 module Sequel
   @convert_two_digit_years = true
   @datetime_class = Time
@@ -89,8 +89,8 @@ module Sequel
   #
   #   Sequel.connect('sqlite://blog.db'){|db| puts db[:users].count}  
   # 
-  # For details, see the {"Connecting to a Database" guide}[link:files/doc/opening_databases_rdoc.html].
-  # To set up a master/slave or sharded database connection, see the {"Master/Slave Databases and Sharding" guide}[link:files/doc/sharding_rdoc.html].
+  # For details, see the {"Connecting to a Database" guide}[rdoc-ref:doc/opening_databases.rdoc].
+  # To set up a master/slave or sharded database connection, see the {"Master/Slave Databases and Sharding" guide}[rdoc-ref:doc/sharding.rdoc].
   def self.connect(*args, &block)
     Database.connect(*args, &block)
   end
@@ -219,6 +219,7 @@ module Sequel
   COLUMN_REF_RE1 = /\A((?:(?!__).)+)__((?:(?!___).)+)___(.+)\z/.freeze
   COLUMN_REF_RE2 = /\A((?:(?!___).)+)___(.+)\z/.freeze
   COLUMN_REF_RE3 = /\A((?:(?!__).)+)__(.+)\z/.freeze
+  SPLIT_SYMBOL_CACHE = {}
 
   # Splits the symbol into three parts.  Each part will
   # either be a string or nil.
@@ -226,16 +227,20 @@ module Sequel
   # For columns, these parts are the table, column, and alias.
   # For tables, these parts are the schema, table, and alias.
   def self.split_symbol(sym)
-    case s = sym.to_s
-    when COLUMN_REF_RE1
-      [$1, $2, $3]
-    when COLUMN_REF_RE2
-      [nil, $1, $2]
-    when COLUMN_REF_RE3
-      [$1, $2, nil]
-    else
-      [nil, s, nil]
+    unless v = Sequel.synchronize{SPLIT_SYMBOL_CACHE[sym]}
+      v = case s = sym.to_s
+      when COLUMN_REF_RE1
+        [$1.freeze, $2.freeze, $3.freeze].freeze
+      when COLUMN_REF_RE2
+        [nil, $1.freeze, $2.freeze].freeze
+      when COLUMN_REF_RE3
+        [$1.freeze, $2.freeze, nil].freeze
+      else
+        [nil, s.freeze, nil].freeze
+      end
+      Sequel.synchronize{SPLIT_SYMBOL_CACHE[sym] = v}
     end
+    v
   end
 
   # Converts the given +string+ into a +Date+ object.
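# A short illustration of the cached, frozen results returned by
# Sequel.split_symbol above, for the supported table__column___alias forms:
require 'sequel'

Sequel.split_symbol(:items__id)      # => ["items", "id", nil]
Sequel.split_symbol(:items__id___i)  # => ["items", "id", "i"]
Sequel.split_symbol(:id___i)         # => [nil, "id", "i"]
Sequel.split_symbol(:id)             # => [nil, "id", nil]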
diff --git a/lib/sequel/database/connecting.rb b/lib/sequel/database/connecting.rb
index 3ad364e..bab1b34 100644
--- a/lib/sequel/database/connecting.rb
+++ b/lib/sequel/database/connecting.rb
@@ -6,7 +6,7 @@ module Sequel
     # ---------------------
 
     # Array of supported database adapters
-    ADAPTERS = %w'ado amalgalite cubrid db2 dbi do firebird ibmdb informix jdbc mock mysql mysql2 odbc openbase oracle postgres sqlite swift tinytds'.collect{|x| x.to_sym}
+    ADAPTERS = %w'ado amalgalite cubrid db2 dbi do firebird ibmdb informix jdbc mock mysql mysql2 odbc openbase oracle postgres sqlanywhere sqlite swift tinytds'.collect{|x| x.to_sym}
 
     @single_threaded = false
 
diff --git a/lib/sequel/database/dataset_defaults.rb b/lib/sequel/database/dataset_defaults.rb
index 389b92f..f8d68e0 100644
--- a/lib/sequel/database/dataset_defaults.rb
+++ b/lib/sequel/database/dataset_defaults.rb
@@ -31,7 +31,7 @@ module Sequel
     # Change the default identifier output method to use for all databases,
     def self.identifier_output_method=(v)
       @identifier_output_method = v.nil? ? false : v
-     end
+    end
 
     # The class to use for creating datasets.  Should respond to
     # new with the Database argument as the first argument, and
@@ -137,6 +137,7 @@ module Sequel
     # create datasets.  Usually done after changes to the identifier
     # mangling methods.
     def reset_default_dataset
+      Sequel.synchronize{@symbol_literal_cache.clear}
       @default_dataset = dataset
     end
 
diff --git a/lib/sequel/database/features.rb b/lib/sequel/database/features.rb
index 4a4a8e9..26497b7 100644
--- a/lib/sequel/database/features.rb
+++ b/lib/sequel/database/features.rb
@@ -94,6 +94,16 @@ module Sequel
       false
     end
 
+    # Whether CREATE VIEW ... WITH CHECK OPTION is supported, false by default.
+    def supports_views_with_check_option?
+      !!view_with_check_option_support
+    end
+
+    # Whether CREATE VIEW ... WITH LOCAL CHECK OPTION is supported, false by default.
+    def supports_views_with_local_check_option?
+      view_with_check_option_support == :local
+    end
+
     private
 
     # Whether the database supports combining multiple alter table
@@ -115,5 +125,10 @@ module Sequel
     def supports_named_column_constraints?
       true
     end
+
+    # Don't advertise support for WITH CHECK OPTION by default.
+    def view_with_check_option_support
+      nil
+    end
   end
 end
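# A minimal sketch of how an adapter's Database class would advertise the new
# WITH CHECK OPTION feature methods defined above; the module name is
# hypothetical.
module HypotheticalAdapterDatabaseMethods
  private

  # Return :local when WITH LOCAL CHECK OPTION is supported, any other truthy
  # value when only WITH CHECK OPTION is supported, and nil for no support.
  def view_with_check_option_support
    :local
  end
end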
diff --git a/lib/sequel/database/misc.rb b/lib/sequel/database/misc.rb
index 66b80f5..aee1533 100644
--- a/lib/sequel/database/misc.rb
+++ b/lib/sequel/database/misc.rb
@@ -131,6 +131,7 @@ module Sequel
       @dataset_class = dataset_class_default
       @cache_schema = typecast_value_boolean(@opts.fetch(:cache_schema, true))
       @dataset_modules = []
+      @symbol_literal_cache = {}
       @schema_type_classes = SCHEMA_TYPE_CLASSES.dup
       self.sql_log_level = @opts[:sql_log_level] ? @opts[:sql_log_level].to_sym : :info
       @pool = ConnectionPool.get_pool(self, @opts)
@@ -235,6 +236,17 @@ module Sequel
       schema_utility_dataset.literal(v)
     end
 
+    # Return the literalized version of the symbol if cached, or
+    # nil if it is not cached.
+    def literal_symbol(sym)
+      Sequel.synchronize{@symbol_literal_cache[sym]}
+    end
+
+    # Set the cached value of the literal symbol.
+    def literal_symbol_set(sym, lit)
+      Sequel.synchronize{@symbol_literal_cache[sym] = lit}
+    end
+
     # Synchronize access to the prepared statements cache.
     def prepared_statement(name)
       Sequel.synchronize{prepared_statements[name]}
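# A short illustration of the symbol literalization cache added above.  These
# helpers are used internally when literalizing symbols; they are called
# directly here only to show the get/set pair, using the mock adapter.
require 'sequel'

DB = Sequel.mock
DB.literal_symbol(:name)                # => nil, nothing cached yet
DB.literal_symbol_set(:name, '"name"')
DB.literal_symbol(:name)                # => "\"name\""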
diff --git a/lib/sequel/database/query.rb b/lib/sequel/database/query.rb
index 70f3a52..8da4d1e 100644
--- a/lib/sequel/database/query.rb
+++ b/lib/sequel/database/query.rb
@@ -6,7 +6,7 @@ module Sequel
     # ---------------------
 
     STRING_DEFAULT_RE = /\A'(.*)'\z/
-    CURRENT_TIMESTAMP_RE = /now|CURRENT|getdate|\ADate\(\)\z/io
+    CURRENT_TIMESTAMP_RE = /now|today|CURRENT|getdate|\ADate\(\)\z/io
     COLUMN_SCHEMA_DATETIME_TYPES = [:date, :datetime]
     COLUMN_SCHEMA_STRING_TYPES = [:string, :blob, :date, :datetime, :time, :enum, :set, :interval]
 
@@ -279,7 +279,10 @@ module Sequel
 
     # Remove the cached schema for the given schema name
     def remove_cached_schema(table)
-      Sequel.synchronize{@schemas.delete(quote_schema_table(table))} if @schemas
+      if @schemas
+        k = quote_schema_table(table)
+        Sequel.synchronize{@schemas.delete(k)}
+      end
     end
     
     # Match the database's column type to a ruby type via a
diff --git a/lib/sequel/database/schema_generator.rb b/lib/sequel/database/schema_generator.rb
index 087f65f..f8e794d 100644
--- a/lib/sequel/database/schema_generator.rb
+++ b/lib/sequel/database/schema_generator.rb
@@ -13,7 +13,7 @@ module Sequel
     # the column method, which makes for a nicer DSL.
     #
     # For more information on Sequel's support for schema modification, see
-    # the {"Schema Modification" guide}[link:files/doc/schema_modification_rdoc.html].
+    # the {"Schema Modification" guide}[rdoc-ref:doc/schema_modification.rdoc].
     class CreateTableGenerator
       # Classes specifying generic types that Sequel will convert to database-specific types.
       GENERIC_TYPES=[String, Integer, Fixnum, Bignum, Float, Numeric, BigDecimal,
@@ -322,9 +322,9 @@ module Sequel
       # See CreateTableGenerator#constraint.
       #
       #   add_constraint(:valid_name, Sequel.like(:name, 'A%'))
-      #   # ADD CONSTRAINT valid_name CHECK (name LIKE 'A%')
+      #   # ADD CONSTRAINT valid_name CHECK (name LIKE 'A%' ESCAPE '\')
       #   add_constraint({:name=>:valid_name, :deferrable=>true}, :num=>1..5)
-      #   # CONSTRAINT valid_name CHECK (name LIKE 'A%') DEFERRABLE INITIALLY DEFERRED
+      #   # ADD CONSTRAINT valid_name CHECK (name LIKE 'A%' ESCAPE '\') DEFERRABLE INITIALLY DEFERRED
       def add_constraint(name, *args, &block)
         opts = name.is_a?(Hash) ? name : {:name=>name}
         @operations << opts.merge(:op=>:add_constraint, :type=>:check, :check=>block || args)
diff --git a/lib/sequel/database/schema_methods.rb b/lib/sequel/database/schema_methods.rb
index ad8b1d8..bd3cf73 100644
--- a/lib/sequel/database/schema_methods.rb
+++ b/lib/sequel/database/schema_methods.rb
@@ -44,6 +44,7 @@ module Sequel
     #
     # Options:
     # :ignore_errors :: Ignore any DatabaseErrors that are raised
+    # :name :: Name to use for index instead of default
     #
     # See <tt>alter_table</tt>.
     def add_index(table, columns, options=OPTS)
@@ -71,7 +72,7 @@ module Sequel
     # definitions using <tt>create_table</tt>, and +add_index+ accepts all the options
     # available for index definition.
     #
-    # See <tt>Schema::AlterTableGenerator</tt> and the {"Migrations and Schema Modification" guide}[link:files/doc/migration_rdoc.html].
+    # See <tt>Schema::AlterTableGenerator</tt> and the {"Migrations and Schema Modification" guide}[rdoc-ref:doc/migration.rdoc].
     def alter_table(name, generator=nil, &block)
       generator ||= alter_table_generator(&block)
       remove_cached_schema(name)
@@ -133,6 +134,21 @@ module Sequel
       end
     end
 
+    # Forcibly create a join table, attempting to drop it if it already exists, then creating it.
+    def create_join_table!(hash, options=OPTS)
+      drop_table?(join_table_name(hash, options))
+      create_join_table(hash, options)
+    end
+    
+    # Creates the join table unless it already exists.
+    def create_join_table?(hash, options=OPTS)
+      if supports_create_table_if_not_exists?
+        create_join_table(hash, options.merge(:if_not_exists=>true))
+      elsif !table_exists?(join_table_name(hash, options))
+        create_join_table(hash, options)
+      end
+    end
+
     # Creates a table with the columns given in the provided block:
     #
     #   DB.create_table :posts do
@@ -155,11 +171,12 @@ module Sequel
     # :engine :: The table engine to use for the table.
     #
     # PostgreSQL specific options:
+    # :on_commit :: Either :preserve_rows (default), :drop or :delete_rows.
     # :unlogged :: Create the table as an unlogged table.
    # :inherits :: Inherit from a different table.  An array can be
     #              specified to inherit from multiple tables.
     #
-    # See <tt>Schema::Generator</tt> and the {"Schema Modification" guide}[link:files/doc/schema_modification_rdoc.html].
+    # See <tt>Schema::Generator</tt> and the {"Schema Modification" guide}[rdoc-ref:doc/schema_modification.rdoc].
     def create_table(name, options=OPTS, &block)
       remove_cached_schema(name)
       options = {:generator=>options} if options.is_a?(Schema::CreateTableGenerator)
@@ -224,16 +241,30 @@ module Sequel
     # Creates a view based on a dataset or an SQL string:
     #
     #   DB.create_view(:cheap_items, "SELECT * FROM items WHERE price < 100")
-    #   DB.create_view(:ruby_items, DB[:items].filter(:category => 'ruby'))
+    #   # CREATE VIEW cheap_items AS
+    #   # SELECT * FROM items WHERE price < 100
+    #
+    #   DB.create_view(:ruby_items, DB[:items].where(:category => 'ruby'))
+    #   # CREATE VIEW ruby_items AS
+    #   # SELECT * FROM items WHERE (category = 'ruby')
+    #
+    #   DB.create_view(:checked_items, DB[:items].where(:foo), :check=>true)
+    #   # CREATE VIEW checked_items AS
+    #   # SELECT * FROM items WHERE foo
+    #   # WITH CHECK OPTION
     #
     # Options:
     # :columns :: The column names to use for the view.  If not given,
     #             automatically determined based on the input dataset.
+    # :check :: Adds a WITH CHECK OPTION clause, so that attempting to modify 
+    #           rows in the underlying table that would not be returned by the
+    #           view is not allowed.  This can be set to :local to use WITH
+    #           LOCAL CHECK OPTION.
     #
     # PostgreSQL/SQLite specific option:
     # :temp :: Create a temporary view, automatically dropped on disconnect.
     #
-    # PostgreSQL specific option:
+    # PostgreSQL specific options:
     # :materialized :: Creates a materialized view, similar to a regular view,
     #                  but backed by a physical table.
     # :recursive :: Creates a recursive view.  As columns must be specified for
@@ -315,12 +346,13 @@ module Sequel
     #   DB.drop_view(:cheap_items)
     #   DB.drop_view(:cheap_items, :pricey_items)
     #   DB.drop_view(:cheap_items, :pricey_items, :cascade=>true)
+    #   DB.drop_view(:cheap_items, :pricey_items, :if_exists=>true)
     #
     # Options:
     # :cascade :: Also drop objects depending on this view.
+    # :if_exists :: Do not raise an error if the view does not exist.
     #
     # PostgreSQL specific options:
-    # :if_exists :: Do not raise an error if the view does not exist.
     # :materialized :: Drop a materialized view.
     def drop_view(*names)
       options = names.last.is_a?(Hash) ? names.pop : {}
@@ -665,7 +697,11 @@ module Sequel
     # DDL statement for creating a view.
     def create_view_sql(name, source, options)
       source = source.sql if source.is_a?(Dataset)
-      "#{create_view_prefix_sql(name, options)} AS #{source}"
+      sql = "#{create_view_prefix_sql(name, options)} AS #{source}"
+      if check = options[:check]
+        sql << " WITH#{' LOCAL' if check == :local} CHECK OPTION"
+      end
+      sql
     end
 
     # Append the column list to the SQL, if a column list is given.
@@ -704,7 +740,7 @@ module Sequel
     
     # SQL DDL statement to drop a view with the given name.
     def drop_view_sql(name, options)
-      "DROP VIEW #{quote_schema_table(name)}#{' CASCADE' if options[:cascade]}"
+      "DROP VIEW#{' IF EXISTS' if options[:if_exists]} #{quote_schema_table(name)}#{' CASCADE' if options[:cascade]}"
     end
 
     # Proxy the filter_expr call to the dataset, used for creating constraints.
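# A usage sketch of the schema methods added or extended above, using the mock
# adapter so the statements are only generated, not run against a real server:
require 'sequel'

DB = Sequel.mock

# Create the join table only if it does not already exist
DB.create_join_table?(:artist_id=>:artists, :album_id=>:albums)

# Drop the join table if it exists, then create it
DB.create_join_table!(:artist_id=>:artists, :album_id=>:albums)

# CREATE VIEW ... WITH [LOCAL] CHECK OPTION
DB.create_view(:cheap_items, DB[:items].where{price < 100}, :check=>true)
DB.create_view(:local_check_items, DB[:items].where{price < 100}, :check=>:local)

# DROP VIEW IF EXISTS cheap_items
DB.drop_view(:cheap_items, :if_exists=>true)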
diff --git a/lib/sequel/database/transactions.rb b/lib/sequel/database/transactions.rb
index 907b433..832c7d7 100644
--- a/lib/sequel/database/transactions.rb
+++ b/lib/sequel/database/transactions.rb
@@ -36,6 +36,8 @@ module Sequel
     #
     # The following general options are respected:
     #
+    # :auto_savepoint :: Automatically use a savepoint for Database#transaction calls
+    #                    inside this transaction block.
     # :isolation :: The transaction isolation level to use for this transaction,
     #               should be :uncommitted, :committed, :repeatable, or :serializable,
     #               used if given and the database/adapter supports customizable
@@ -59,7 +61,8 @@ module Sequel
     # :savepoint :: Whether to create a new savepoint for this transaction,
     #               only respected if the database/adapter supports savepoints.  By
     #               default Sequel will reuse an existing transaction, so if you want to
-    #               use a savepoint you must use this option.
+    #               use a savepoint you must use this option.  If the surrounding transaction
+    #               uses :auto_savepoint, you can set this to false to not use a savepoint.
     #
     # PostgreSQL specific options:
     #
@@ -88,9 +91,14 @@ module Sequel
             if opts[:retrying]
               raise Sequel::Error, "cannot set :retry_on options if you are already inside a transaction"
             end
-            return yield(conn)
+            if opts[:savepoint] != false && (stack = _trans(conn)[:savepoints]) && stack.last
+              _transaction(conn, opts.merge(:savepoint=>true), &block)
+            else
+              return yield(conn)
+            end
+          else
+            _transaction(conn, opts, &block)
           end
-          _transaction(conn, opts, &block)
         end
       end
     end
@@ -151,18 +159,24 @@ module Sequel
 
     # Add the current thread to the list of active transactions
     def add_transaction(conn, opts)
+      hash = {}
+
       if supports_savepoints?
-        unless _trans(conn)
+        if _trans(conn)
+          hash = nil
+          _trans(conn)[:savepoints].push(opts[:auto_savepoint])
+        else
+          hash[:savepoints] = [opts[:auto_savepoint]]
           if (prep = opts[:prepare]) && supports_prepared_transactions?
-            Sequel.synchronize{@transactions[conn] = {:savepoint_level=>0, :prepare=>prep}}
-          else
-            Sequel.synchronize{@transactions[conn] = {:savepoint_level=>0}}
+            hash[:prepare] = prep
           end
         end
       elsif (prep = opts[:prepare]) && supports_prepared_transactions?
-        Sequel.synchronize{@transactions[conn] = {:prepare => prep}}
-      else
-        Sequel.synchronize{@transactions[conn] = {}}
+        hash[:prepare] = prep
+      end
+
+      if hash
+        Sequel.synchronize{@transactions[conn] = hash}
       end
     end    
 
@@ -184,7 +198,12 @@ module Sequel
     def already_in_transaction?(conn, opts)
       _trans(conn) && (!supports_savepoints? || !opts[:savepoint])
     end
-    
+
+    # Issue query to begin a new savepoint.
+    def begin_savepoint(conn, opts)
+      log_connection_execute(conn, begin_savepoint_sql(savepoint_level(conn)-1))
+    end
+
     # SQL to start a new savepoint
     def begin_savepoint_sql(depth)
       SQL_SAVEPOINT % depth
@@ -199,13 +218,11 @@ module Sequel
     # Start a new database transaction or a new savepoint on the given connection.
     def begin_transaction(conn, opts=OPTS)
       if supports_savepoints?
-        th = _trans(conn)
-        if (depth = th[:savepoint_level]) > 0
-          log_connection_execute(conn, begin_savepoint_sql(depth))
+        if savepoint_level(conn) > 1
+          begin_savepoint(conn, opts)
         else
           begin_new_transaction(conn, opts)
         end
-        th[:savepoint_level] += 1
       else
         begin_new_transaction(conn, opts)
       end
@@ -216,32 +233,17 @@ module Sequel
       SQL_BEGIN
     end
 
-    if (! defined?(RUBY_ENGINE) or RUBY_ENGINE == 'ruby' or RUBY_ENGINE == 'rbx') and RUBY_VERSION < '1.9'
-    # :nocov:
-      # Whether to commit the current transaction. On ruby 1.8 and rubinius,
-      # Thread.current.status is checked because Thread#kill skips rescue
-      # blocks (so exception would be nil), but the transaction should
-      # still be rolled back.
-      def commit_or_rollback_transaction(exception, conn, opts)
-        if exception
-          false
-        else
-          if Thread.current.status == 'aborting'
-            rollback_transaction(conn, opts)
-            false
-          else
-            commit_transaction(conn, opts)
-            true
-          end
-        end
-      end
-    # :nocov:
-    else
-      # Whether to commit the current transaction.  On ruby 1.9 and JRuby,
-      # transactions will be committed if Thread#kill is used on an thread
-      # that has a transaction open, and there isn't a work around.
-      def commit_or_rollback_transaction(exception, conn, opts)
-        if exception
+    # Whether to commit the current transaction. Thread.current.status is
+    # checked because Thread#kill skips rescue blocks (so exception would be
+    # nil), but the transaction should still be rolled back. On Ruby 1.9 (but
+    # not 1.8 or 2.0), the thread status will still be "run", so Thread#kill
+    # will erroneously commit the transaction, and there isn't a workaround.
+    def commit_or_rollback_transaction(exception, conn, opts)
+      if exception
+        false
+      else
+        if Thread.current.status == 'aborting'
+          rollback_transaction(conn, opts)
           false
         else
           commit_transaction(conn, opts)
@@ -258,7 +260,7 @@ module Sequel
     # Commit the active transaction on the connection
     def commit_transaction(conn, opts=OPTS)
       if supports_savepoints?
-        depth = _trans(conn)[:savepoint_level]
+        depth = savepoint_level(conn)
         log_connection_execute(conn, depth > 1 ? commit_savepoint_sql(depth-1) : commit_transaction_sql)
       else
         log_connection_execute(conn, commit_transaction_sql)
@@ -278,7 +280,7 @@ module Sequel
 
     # Remove the current thread from the list of active transactions
     def remove_transaction(conn, committed)
-      if !supports_savepoints? || ((_trans(conn)[:savepoint_level] -= 1) <= 0)
+      if transaction_finished?(conn)
         begin
           if committed
             after_transaction_commit(conn)
@@ -299,7 +301,7 @@ module Sequel
     # Rollback the active transaction on the connection
     def rollback_transaction(conn, opts=OPTS)
       if supports_savepoints?
-        depth = _trans(conn)[:savepoint_level]
+        depth = savepoint_level(conn)
         log_connection_execute(conn, depth > 1 ? rollback_savepoint_sql(depth-1) : rollback_transaction_sql)
       else
         log_connection_execute(conn, rollback_transaction_sql)
@@ -323,6 +325,11 @@ module Sequel
       "SET TRANSACTION ISOLATION LEVEL #{TRANSACTION_ISOLATION_LEVELS[level]}"
     end
 
+    # Current savepoint level.
+    def savepoint_level(conn)
+      _trans(conn)[:savepoints].length
+    end
+
     # Raise a database error unless the exception is an Rollback.
     def transaction_error(e, opts=OPTS)
       if e.is_a?(Rollback)
@@ -331,5 +338,17 @@ module Sequel
         raise_error(e, opts.merge(:classes=>database_error_classes))
       end
     end
+
+    # Finish a subtransaction.  If savepoints are supported, pops the current
+    # transaction off the savepoint stack.
+    def transaction_finished?(conn)
+      if supports_savepoints?
+        stack = _trans(conn)[:savepoints]
+        stack.pop
+        stack.empty?
+      else
+        true
+      end
+    end
   end
 end
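# A sketch of the new :auto_savepoint transaction option handled above.  The
# mock database below is a stand-in; an adapter that supports savepoints is
# assumed for the savepoint behavior to take effect.
require 'sequel'

DB = Sequel.mock

DB.transaction(:auto_savepoint=>true) do
  DB.transaction do
    # Nested calls automatically run inside a SAVEPOINT, so raising
    # Sequel::Rollback here only rolls back this inner block.
  end

  DB.transaction(:savepoint=>false) do
    # Opts out of the automatic savepoint and reuses the outer transaction.
  end
end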
diff --git a/lib/sequel/dataset.rb b/lib/sequel/dataset.rb
index 21f5227..2297116 100644
--- a/lib/sequel/dataset.rb
+++ b/lib/sequel/dataset.rb
@@ -21,7 +21,7 @@ module Sequel
   # Datasets are Enumerable objects, so they can be manipulated using any
   # of the Enumerable methods, such as map, inject, etc.
   #
-  # For more information, see the {"Dataset Basics" guide}[link:files/doc/dataset_basics_rdoc.html].
+  # For more information, see the {"Dataset Basics" guide}[rdoc-ref:doc/dataset_basics.rdoc].
   class Dataset
     OPTS = Sequel::OPTS
 
@@ -36,5 +36,5 @@ module Sequel
     include SQL::StringMethods
   end
   
-  require(%w"query actions features graph prepared_statements misc mutation sql", 'dataset')
+  require(%w"query actions features graph prepared_statements misc mutation sql placeholder_literalizer", 'dataset')
 end
diff --git a/lib/sequel/dataset/actions.rb b/lib/sequel/dataset/actions.rb
index ed34444..6e55c7b 100644
--- a/lib/sequel/dataset/actions.rb
+++ b/lib/sequel/dataset/actions.rb
@@ -42,11 +42,7 @@ module Sequel
     #   # Iterate over all rows in the table
     #   DB[:table].all{|row| p row}
     def all(&block)
-      a = []
-      each{|r| a << r}
-      post_load(a)
-      a.each(&block) if block
-      a
+      _all(block){|a| each{|r| a << r}}
     end
     
     # Returns the average value for the given column/expression.
@@ -102,14 +98,14 @@ module Sequel
       if no_arg
         if block
           arg = Sequel.virtual_row(&block)
-          aggregate_dataset.get{count(arg).as(count)}
+          aggregate_dataset.get{count(arg).as(:count)}
         else
-          aggregate_dataset.get{count(:*){}.as(count)}.to_i
+          aggregate_dataset.get{count{}.*.as(:count)}.to_i
         end
       elsif block
         raise Error, 'cannot provide both argument and block to Dataset#count'
       else
-        aggregate_dataset.get{count(arg).as(count)}
+        aggregate_dataset.get{count(arg).as(:count)}
       end
     end
     
@@ -271,6 +267,8 @@ module Sequel
     # :commit_every :: Open a new transaction for every given number of records.
     #                  For example, if you provide a value of 50, will commit
     #                  after every 50 records.
+    # :return :: When set to :primary_key, returns an array of
+    #            autoincremented primary key values for the rows inserted.
     # :server :: Set the server/shard to use for the transaction and insert
     #            queries.
     # :slice :: Same as :commit_every, :commit_every takes precedence.
@@ -281,7 +279,7 @@ module Sequel
       raise(Error, IMPORT_ERROR_MSG) if columns.empty?
       ds = opts[:server] ? server(opts[:server]) : self
       
-      if slice_size = opts[:commit_every] || opts[:slice]
+      if slice_size = opts.fetch(:commit_every, opts.fetch(:slice, default_import_slice))
         offset = 0
         rows = []
         while offset < values.length
@@ -432,49 +430,107 @@ module Sequel
       import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts)
     end
 
-    # Yields each row in the dataset, but interally uses multiple queries as needed with
-    # limit and offset to process the entire result set without keeping all
-    # rows in the dataset in memory, even if the underlying driver buffers all
-    # query results in memory.
+    # Yields each row in the dataset, but internally uses multiple queries as needed to
+    # process the entire result set without keeping all rows in the dataset in memory,
+    # even if the underlying driver buffers all query results in memory.
     #
     # Because this uses multiple queries internally, in order to remain consistent,
-    # it also uses a transaction internally.  Additionally, to make sure that all rows
-    # in the dataset are yielded and none are yielded twice, the dataset must have an
-    # unambiguous order.  Sequel requires that datasets using this method have an
-    # order, but it cannot ensure that the order is unambiguous.
+    # it also uses a transaction internally.  Additionally, to work correctly, the dataset
+    # must have an unambiguous order.  Using an ambiguous order can result in an infinite loop,
+    # as well as subtler bugs such as yielding duplicate rows or rows being skipped.
+    #
+    # Sequel checks that the datasets using this method have an order, but it cannot
+    # ensure that the order is unambiguous.
     #
     # Options:
     # :rows_per_fetch :: The number of rows to fetch per query.  Defaults to 1000.
+    # :strategy :: The strategy to use for paging of results.  By default this is :offset,
+    #              for using an approach with a limit and offset for every page.  This can
+    #              be set to :filter, which uses a limit and a filter that excludes
+    #              rows from previous pages.  In order for this strategy to work, you must be
+    #              selecting the columns you are ordering by, and none of the columns can contain
+    #              NULLs.  Note that some Sequel adapters have optimized implementations that will
+    #              use cursors or streaming regardless of the :strategy option used.
+    # :filter_values :: If the :strategy=>:filter option is used, this option should be a proc
+    #                   that accepts the last retrieved row for the previous page and an array of
+    #                   ORDER BY expressions, and returns an array of values relating to those
+    #                   expressions for the last retrieved row.  You will need to use this option
+    #                   if your ORDER BY expressions are not simple columns, if they contain
+    #                   qualified identifiers that would be ambiguous unqualified, if they contain
+    #                   any identifiers that are aliased in SELECT, and potentially other cases.
+    #
+    # Examples:
+    #
+    #   DB[:table].order(:id).paged_each{|row| ...}
+    #   # SELECT * FROM table ORDER BY id LIMIT 1000
+    #   # SELECT * FROM table ORDER BY id LIMIT 1000 OFFSET 1000
+    #   # ...
+    #
+    #   DB[:table].order(:id).paged_each(:rows_per_fetch=>100){|row| ...}
+    #   # SELECT * FROM table ORDER BY id LIMIT 100
+    #   # SELECT * FROM table ORDER BY id LIMIT 100 OFFSET 100
+    #   # ...
+    #
+    #   DB[:table].order(:id).paged_each(:strategy=>:filter){|row| ...}
+    #   # SELECT * FROM table ORDER BY id LIMIT 1000
+    #   # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000
+    #   # ...
+    #
+    #   DB[:table].order(:table__id).paged_each(:strategy=>:filter,
+    #     :filter_values=>proc{|row, exprs| [row[:id]]}){|row| ...}
+    #   # SELECT * FROM table ORDER BY id LIMIT 1000
+    #   # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000
+    #   # ...
     def paged_each(opts=OPTS)
       unless @opts[:order]
         raise Sequel::Error, "Dataset#paged_each requires the dataset be ordered"
       end
 
       total_limit = @opts[:limit]
-      offset = @opts[:offset] || 0
-
+      offset = @opts[:offset]
       if server = @opts[:server]
         opts = opts.merge(:server=>server)
       end
 
       rows_per_fetch = opts[:rows_per_fetch] || 1000
-      num_rows_yielded = rows_per_fetch
-      total_rows = 0
+      strategy = if offset || total_limit
+        :offset
+      else
+        opts[:strategy] || :offset
+      end
 
       db.transaction(opts) do
-        while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit)
-          if total_limit && total_rows + rows_per_fetch > total_limit
-            rows_per_fetch = total_limit - total_rows
+        case strategy
+        when :filter
+          filter_values = opts[:filter_values] || proc{|row, exprs| exprs.map{|e| row[hash_key_symbol(e)]}}
+          base_ds = ds = limit(rows_per_fetch)
+          while ds
+            last_row = nil
+            ds.each do |row|
+              last_row = row
+              yield row
+            end
+            ds = (base_ds.where(ignore_values_preceding(last_row, &filter_values)) if last_row)
           end
+        else
+          offset ||= 0
+          num_rows_yielded = rows_per_fetch
+          total_rows = 0
 
-          num_rows_yielded = 0
-          limit(rows_per_fetch, offset).each do |row|
-            num_rows_yielded += 1
-            total_rows += 1 if total_limit
-            yield row
-          end
+          while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit)
+            if total_limit && total_rows + rows_per_fetch > total_limit
+              rows_per_fetch = total_limit - total_rows
+            end
 
-          offset += rows_per_fetch
+            num_rows_yielded = 0
+            limit(rows_per_fetch, offset).each do |row|
+              num_rows_yielded += 1
+              total_rows += 1 if total_limit
+              yield row
+            end
+
+            offset += rows_per_fetch
+          end
         end
       end
 
@@ -516,13 +572,13 @@ module Sequel
     # Returns a hash with key_column values as keys and an array of value_column values.
     # Similar to to_hash_groups, but only selects the columns given.
     #
-    #   DB[:table].select_hash(:name, :id) # SELECT id, name FROM table
+    #   DB[:table].select_hash_groups(:name, :id) # SELECT id, name FROM table
     #   # => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...}
     #
     # You can also provide an array of column names for either the key_column,
     # the value column, or both:
     #
-    #   DB[:table].select_hash([:first, :middle], [:last, :id]) # SELECT * FROM table
+    #   DB[:table].select_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table
     #   # {['a', 'b']=>[['c', 1], ['d', 2], ...], ...}
     #
     # When using this method, you must be sure that each expression has an alias
@@ -652,19 +708,19 @@ module Sequel
     # array of column values. If the value_column is not given or nil, uses
     # the entire hash as the value.
     #
-    #   DB[:table].to_hash(:name, :id) # SELECT * FROM table
+    #   DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table
     #   # {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...}
     #
-    #   DB[:table].to_hash(:name) # SELECT * FROM table
+    #   DB[:table].to_hash_groups(:name) # SELECT * FROM table
     #   # {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...}
     #
     # You can also provide an array of column names for either the key_column,
     # the value column, or both:
     #
-    #   DB[:table].to_hash([:first, :middle], [:last, :id]) # SELECT * FROM table
+    #   DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table
     #   # {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...}
     #
-    #   DB[:table].to_hash([:first, :middle]) # SELECT * FROM table
+    #   DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table
     #   # {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...}
     def to_hash_groups(key_column, value_column = nil)
       h = {}
@@ -718,12 +774,54 @@ module Sequel
       end
     end
 
+    # Run the given SQL and return an array of all rows.  If a block is given,
+    # each row is yielded to the block after all rows are loaded. See with_sql_each.
+    def with_sql_all(sql, &block)
+      _all(block){|a| with_sql_each(sql){|r| a << r}}
+    end
+
     # Execute the given SQL and return the number of rows deleted.  This exists
     # solely as an optimization, replacing with_sql(sql).delete.  It's significantly
     # faster as it does not require cloning the current dataset.
     def with_sql_delete(sql)
       execute_dui(sql)
     end
+    alias with_sql_update with_sql_delete
+
+    # Run the given SQL and yield each returned row to the block.
+    #
+    # This method should not be called on a shared dataset if the columns selected
+    # in the given SQL do not match the columns in the receiver.
+    def with_sql_each(sql)
+      if row_proc = @row_proc
+        fetch_rows(sql){|r| yield row_proc.call(r)}
+      else
+        fetch_rows(sql){|r| yield r}
+      end
+      self
+    end
+    
+    # Run the given SQL and return the first row, or nil if no rows were returned.
+    # See with_sql_each.
+    def with_sql_first(sql)
+      with_sql_each(sql){|r| return r}
+      nil
+    end
+
+    # Run the given SQL and return the first value in the first row, or nil if no
+    # rows were returned.  For this to make sense, the SQL given should select
+    # only a single value.  See with_sql_each.
+    def with_sql_single_value(sql)
+      if r = with_sql_first(sql)
+        r.values.first
+      end
+    end
+
+    # Execute the given SQL and (on most databases) return the primary key of the
+    # inserted row.
+    def with_sql_insert(sql)
+      execute_insert(sql)
+    end
 
     protected
 
@@ -752,6 +850,15 @@ module Sequel
   
     private
     
+    # Internals of all and with_sql_all
+    def _all(block)
+      a = []
+      yield a
+      post_load(a)
+      a.each(&block) if block
+      a
+    end
+    
     # Internals of +select_hash+ and +select_hash_groups+
     def _select_hash(meth, key_column, value_column)
       select(*(key_column.is_a?(Array) ? key_column : [key_column]) + (value_column.is_a?(Array) ? value_column : [value_column])).
@@ -782,6 +889,12 @@ module Sequel
       end
     end
 
+    # The default maximum number of rows to insert in a single INSERT statement
+    # via import.  By default there is no limit.
+    def default_import_slice
+      nil
+    end
+
     # Set the server to use to :default unless it is already set in the passed opts
     def default_server_opts(opts)
       {:server=>@opts[:server] || :default}.merge(opts)
@@ -823,7 +936,7 @@ module Sequel
       when SQL::QualifiedIdentifier
         _hash_key_symbol(s.column, true)
       when SQL::AliasedExpression
-        _hash_key_symbol(s.aliaz, true)
+        _hash_key_symbol(s.alias, true)
       when String
         s.to_sym if recursing
       end
@@ -848,6 +961,33 @@ module Sequel
       s.is_a?(Array) ? s.map{|c| hash_key_symbol(c)} : hash_key_symbol(s)
     end
     
+    # Returns an expression that will ignore values preceding the given row, using the
+    # receiver's current order. This yields the row and the array of order expressions
+    # to the block, which should return an array of values to use.
+    def ignore_values_preceding(row)
+      @opts[:order].map{|v| v.is_a?(SQL::OrderedExpression) ? v.expression : v}
+
+      order_exprs = @opts[:order].map do |v|
+        if v.is_a?(SQL::OrderedExpression)
+          descending = v.descending
+          v = v.expression
+        else
+          descending = false
+        end
+        [v, descending]
+      end
+
+      row_values = yield(row, order_exprs.map{|e| e.first})
+
+      last_expr = []
+      cond = order_exprs.zip(row_values).map do |(v, descending), value|
+        expr =  last_expr + [SQL::BooleanExpression.new(descending ? :< : :>, v, value)]
+        last_expr += [SQL::BooleanExpression.new(:'=', v, value)]
+        Sequel.&(*expr)
+      end
+      Sequel.|(*cond)
+    end
+
     # Modify the identifier returned from the database based on the
     # identifier_output_method.
     def output_identifier(v)
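# A usage sketch of the with_sql_* helpers added above, run against the mock
# adapter with canned rows; the SQL strings and return values shown are
# illustrative.
require 'sequel'

DB = Sequel.mock(:fetch=>{:id=>1, :name=>'a'}, :numrows=>1, :autoid=>1)
ds = DB[:items]

ds.with_sql_all('SELECT * FROM items')                      # => [{:id=>1, :name=>'a'}]
ds.with_sql_first('SELECT * FROM items LIMIT 1')            # => {:id=>1, :name=>'a'}
ds.with_sql_single_value('SELECT id FROM items LIMIT 1')    # => 1
ds.with_sql_each('SELECT * FROM items'){|row| p row}
ds.with_sql_update('UPDATE items SET name = NULL')          # => 1 (affected rows)
ds.with_sql_insert("INSERT INTO items (name) VALUES ('b')") # => 1 (generated key, on most databases)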
diff --git a/lib/sequel/dataset/features.rb b/lib/sequel/dataset/features.rb
index 73c291d..1c9c151 100644
--- a/lib/sequel/dataset/features.rb
+++ b/lib/sequel/dataset/features.rb
@@ -44,7 +44,7 @@ module Sequel
     # If given, +type+ can be :select, :insert, :update, or :delete, in which case it
     # determines whether WITH is supported for the respective statement type.
     def supports_cte?(type=:select)
-      send(:"#{type}_clause_methods").include?(:"#{type}_with_sql")
+      false
     end
 
     # Whether the dataset supports common table expressions (the WITH clause)
@@ -54,6 +54,13 @@ module Sequel
       false
     end
 
+    # Whether the database supports derived column lists (e.g.
+    # "table_expr AS table_alias(column_alias1, column_alias2, ...)"), true by
+    # default.
+    def supports_derived_column_lists?
+      true
+    end
+
     # Whether the dataset supports or can emulate the DISTINCT ON clause, false by default.
     def supports_distinct_on?
       false
@@ -99,6 +106,11 @@ module Sequel
     def supports_lateral_subqueries?
       false
     end
+
+    # Whether limits are supported in correlated subqueries.  True by default.
+    def supports_limits_in_correlated_subqueries?
+      true
+    end
     
     # Whether modifying joined datasets is supported.
     def supports_modifying_joins?
@@ -111,6 +123,11 @@ module Sequel
       true
     end
 
+    # Whether offsets are supported in correlated subqueries, true by default.
+    def supports_offsets_in_correlated_subqueries?
+      true
+    end
+
     # Whether the dataset supports or can fully emulate the DISTINCT ON clause,
     # including respecting the ORDER BY clause, false by default
     def supports_ordered_distinct_on?
@@ -130,7 +147,7 @@ module Sequel
     # Whether the RETURNING clause is supported for the given type of query.
     # +type+ can be :insert, :update, or :delete.
     def supports_returning?(type)
-      send(:"#{type}_clause_methods").include?(:"#{type}_returning_sql")
+      false
     end
 
     # Whether the database supports SELECT *, column FROM table
@@ -167,6 +184,11 @@ module Sequel
       true
     end
 
+    # Whether the database supports quoting function names, false by default.
+    def supports_quoted_function_names?
+      false
+    end
+
     # Whether the RETURNING clause is used for the given dataset.
     # +type+ can be :insert, :update, or :delete.
     def uses_returning?(type)
diff --git a/lib/sequel/dataset/graph.rb b/lib/sequel/dataset/graph.rb
index ebc7865..5495f31 100644
--- a/lib/sequel/dataset/graph.rb
+++ b/lib/sequel/dataset/graph.rb
@@ -26,7 +26,7 @@ module Sequel
     #
     # Arguments:
     # dataset :: Can be a symbol (specifying a table), another dataset,
-    #            or an object that responds to +dataset+ and returns a symbol or a dataset
+    #            or an SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression.
     # join_conditions :: Any condition(s) allowed by +join_table+.
     # block :: A block that is passed to +join_table+.
     #
@@ -44,28 +44,42 @@ module Sequel
     #            some metadata about the join that makes it important to use +graph+ instead
     #            of +join_table+.
     # :table_alias :: The alias to use for the table.  If not specified, doesn't
-    #                 alias the table.  You will get an error if the the alias (or table) name is
+    #                 alias the table.  You will get an error if the alias (or table) name is
     #                 used more than once.
     def graph(dataset, join_conditions = nil, options = OPTS, &block)
       # Allow the use of a dataset or symbol as the first argument
       # Find the table name/dataset based on the argument
       table_alias = options[:table_alias]
+      table = dataset
+      create_dataset = true
+
       case dataset
       when Symbol
-        table = dataset
-        dataset = @db[dataset]
-        table_alias ||= table
-      when ::Sequel::Dataset
+        # let alias be the same as the table name (sans any optional schema)
+        # unless alias explicitly given in the symbol using ___ notation
+        table_alias ||= split_symbol(table).compact.last
+      when Dataset
         if dataset.simple_select_all?
           table = dataset.opts[:from].first
           table_alias ||= table
         else
-          table = dataset
           table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1)
         end
+        create_dataset = false
+      when SQL::Identifier
+        table_alias ||= table.value
+      when SQL::QualifiedIdentifier
+        table_alias ||= split_qualifiers(table).last
+      when SQL::AliasedExpression
+        return graph(table.expression, join_conditions, {:table_alias=>table.alias}.merge(options), &block)
       else
         raise Error, "The dataset argument should be a symbol or dataset"
       end
+      table_alias = table_alias.to_sym
+
+      if create_dataset
+        dataset = db.from(table)
+      end
 
       # Raise Sequel::Error with explanation that the table alias has been used
       raise_alias_error = lambda do
@@ -76,11 +90,18 @@ module Sequel
       # Only allow table aliases that haven't been used
       raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias)
       
-      # Use a from_self if this is already a joined table
-      ds = (!@opts[:graph] && (@opts[:from].length > 1 || @opts[:join])) ? from_self(:alias=>options[:from_self_alias] || first_source) : self
+      table_alias_qualifier = qualifier_from_alias_symbol(table_alias, table)
+      implicit_qualifier = options[:implicit_qualifier]
+      ds = self
+
+      # Use a from_self if this is already a joined table (or from_self specifically disabled for graphs)
+      if (@opts[:graph_from_self] != false && !@opts[:graph] && (@opts[:from].length > 1 || @opts[:join]))
+        implicit_qualifier = options[:from_self_alias] || first_source
+        ds = ds.from_self(:alias=>implicit_qualifier)
+      end
       
       # Join the table early in order to avoid cloning the dataset twice
-      ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias, :implicit_qualifier=>options[:implicit_qualifier], :qualify=>options[:qualify], &block)
+      ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias_qualifier, :implicit_qualifier=>implicit_qualifier, :qualify=>options[:qualify], &block)
       opts = ds.opts
 
       # Whether to include the table in the result set
@@ -94,7 +115,8 @@ module Sequel
         select = opts[:select].dup
         [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k] = graph[k].dup}
       else
-        master = alias_symbol(ds.first_source_alias)
+        qualifier = ds.first_source_alias
+        master = alias_symbol(qualifier)
         raise_alias_error.call if master == table_alias
         # Master hash storing all .graph related information
         graph = opts[:graph] = {}
@@ -121,7 +143,7 @@ module Sequel
                 column = column.value if column.is_a?(SQL::Identifier)
                 column.to_sym
               when SQL::AliasedExpression
-                column = sel.aliaz
+                column = sel.alias
                 column = column.value if column.is_a?(SQL::Identifier)
                 column.to_sym
               else
@@ -129,11 +151,11 @@ module Sequel
               end
               column_aliases[column] = [master, column]
             end
-            select = qualified_expression(select, master)
+            select = qualified_expression(select, qualifier)
           else
             select = columns.map do |column|
               column_aliases[column] = [master, column]
-              SQL::QualifiedIdentifier.new(master, column)
+              SQL::QualifiedIdentifier.new(qualifier, column)
             end
           end
         end
@@ -165,9 +187,9 @@ module Sequel
               column_alias = :"#{column_alias}_#{column_alias_num}" 
               ca_num[column_alias] += 1
             end
-            [column_alias, SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(table_alias, column), column_alias)]
+            [column_alias, SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(table_alias_qualifier, column), column_alias)]
           else
-            ident = SQL::QualifiedIdentifier.new(table_alias, column)
+            ident = SQL::QualifiedIdentifier.new(table_alias_qualifier, column)
             [column, ident]
           end
           column_aliases[col_alias] = [table_alias, column]
@@ -215,6 +237,20 @@ module Sequel
 
     private
 
+    # Wrap the alias symbol in an SQL::Identifier if the identifier it is based on
+    # is an SQL::Identifier.  This works around cases where the alias symbol contains
+    # embedded double underscores, which would otherwise be treated as an implicit
+    # qualified identifier if not wrapped in an SQL::Identifier.
+    def qualifier_from_alias_symbol(aliaz, identifier)
+      identifier = identifier.column if identifier.is_a?(SQL::QualifiedIdentifier)
+      case identifier
+      when SQL::Identifier
+        Sequel.identifier(aliaz)
+      else
+        aliaz
+      end
+    end
+
     # Transform the hash of graph aliases and return a two element array
     # where the first element is an array of identifiers suitable to pass to
     # a select method, and the second is a new hash of preprocessed graph aliases.
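(Sketch, not part of the diff: with the graph changes above, Dataset#graph now
accepts SQL::Identifier, SQL::QualifiedIdentifier, and SQL::AliasedExpression
arguments in addition to symbols and datasets.  The albums/artists tables in this
example are hypothetical.)

    DB[:albums].graph(Sequel.as(:artists, :a), :id => :artist_id)
    # ... LEFT OUTER JOIN artists AS a ON (a.id = albums.artist_id),
    # with the graphed artist columns selected under aliases such as a_id and a_name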
diff --git a/lib/sequel/dataset/misc.rb b/lib/sequel/dataset/misc.rb
index d6c469a..7589b64 100644
--- a/lib/sequel/dataset/misc.rb
+++ b/lib/sequel/dataset/misc.rb
@@ -36,6 +36,12 @@ module Sequel
       o.is_a?(self.class) && db == o.db && opts == o.opts && sql == o.sql
     end
 
+    # An object representing the current date or time, which should be an
+    # instance of Sequel.datetime_class.
+    def current_datetime
+      Sequel.datetime_class.now
+    end
+
     # Alias for ==
     def eql?(o)
       self == o
@@ -97,7 +103,7 @@ module Sequel
       end
       case s = source.first
       when SQL::AliasedExpression
-        s.aliaz
+        s.alias
       when Symbol
         _, _, aliaz = split_symbol(s)
         aliaz ? aliaz.to_sym : s
@@ -178,7 +184,7 @@ module Sequel
         c_table, column, aliaz = split_symbol(c)
         [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz]
       when SQL::AliasedExpression
-        [c.expression, c.aliaz]
+        [c.expression, c.alias]
       when SQL::JoinClause
         [c.table, c.table_alias]
       else
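(Sketch, not part of the diff: the new Dataset#current_datetime hook added above
simply defers to Sequel.datetime_class, so callers get whichever class has been
configured.  A minimal illustration, assuming an albums dataset:)

    Sequel.datetime_class = DateTime
    DB[:albums].current_datetime   # => an instance of DateTime rather than Time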
diff --git a/lib/sequel/dataset/mutation.rb b/lib/sequel/dataset/mutation.rb
index 62f9ee8..64c6698 100644
--- a/lib/sequel/dataset/mutation.rb
+++ b/lib/sequel/dataset/mutation.rb
@@ -58,6 +58,7 @@ module Sequel
     # Set the method to call on identifiers going into the database for this dataset
     def identifier_input_method=(v)
       raise_if_frozen!
+      skip_symbol_cache!
       @identifier_input_method = v
     end
     
@@ -77,6 +78,7 @@ module Sequel
     # Set whether to quote identifiers for this dataset
     def quote_identifiers=(v)
       raise_if_frozen!
+      skip_symbol_cache!
       @quote_identifiers = v
     end
 
diff --git a/lib/sequel/dataset/placeholder_literalizer.rb b/lib/sequel/dataset/placeholder_literalizer.rb
new file mode 100644
index 0000000..75cd235
--- /dev/null
+++ b/lib/sequel/dataset/placeholder_literalizer.rb
@@ -0,0 +1,172 @@
+module Sequel
+  class Dataset
+    # PlaceholderLiteralizer allows you to record the application of arbitrary changes
+    # to a dataset with placeholder arguments, recording where those placeholder arguments
+    # are used in the query.  When running the query, the literalization process is much
+    # faster, as Sequel can skip most of the work it normally has to do when literalizing a
+    # dataset.
+    #
+    # Basically, this enables optimizations that allow Sequel to cache the SQL produced
+    # for a given dataset, so that it doesn't need to recompute that information every
+    # time.
+    #
+    # Example:
+    #
+    #   loader = Sequel::Dataset::PlaceholderLiteralizer.loader(DB[:items]) do |pl, ds|
+    #     ds.where(:id=>pl.arg).exclude(:name=>pl.arg).limit(1)
+    #   end
+    #   loader.first(1, "foo")
+    #   # SELECT * FROM items WHERE ((id = 1) AND (name != 'foo')) LIMIT 1
+    #   loader.first(2, "bar")
+    #   # SELECT * FROM items WHERE ((id = 2) AND (name != 'bar')) LIMIT 1
+    #
+    # Caveats:
+    #
+    # Note that this method does not handle all possible cases.  For example:
+    #
+    #   loader = Sequel::Dataset::PlaceholderLiteralizer.loader(DB[:items]) do |pl, ds|
+    #     ds.join(pl.arg, :item_id=>:id)
+    #   end
+    #   loader.all(:cart_items)
+    #  
+    # Will not qualify the item_id column with cart_items.  In this type of situation it's
+    # best to add a table alias when joining:
+    #
+    #   loader = Sequel::Dataset::PlaceholderLiteralizer.loader(DB[:items]) do |pl, ds|
+    #     ds.join(Sequel.as(pl.arg, :t), :item_id=>:id)
+    #   end
+    #   loader.all(:cart_items)
+    #
+    # There are other similar cases that are not handled, mainly when Sequel changes the
+    # SQL produced depending on the types of the arguments.
+    class PlaceholderLiteralizer
+      # A placeholder argument used by the PlaceholderLiteralizer.  This records the offset
+      # at which the argument should be used in the resulting SQL.
+      class Argument
+        # Set the recorder, the argument position, and any transforming block to use
+        # for this placeholder.
+        def initialize(recorder, pos, transformer=nil)
+          @recorder = recorder
+          @pos = pos
+          @transformer = transformer
+        end
+
+        # Record the SQL query offset, argument position, and transforming block where the
+        # argument should be literalized.
+        def sql_literal_append(ds, sql)
+          if ds.opts[:placeholder_literal_null]
+            ds.send(:literal_append, sql, nil)
+          else
+            @recorder.use(sql, @pos, @transformer)
+          end
+        end
+
+        # Return a new Argument object for the same recorder and argument position, but with a
+        # different transformer block.
+        def transform(&block)
+          Argument.new(@recorder, @pos, block)
+        end
+      end
+
+      # Records the offsets at which the placeholder arguments are used in
+      # the SQL query.
+      class Recorder
+        # Yields the receiver and the dataset to the block, which should
+        # call #arg on the receiver for each placeholder argument, and
+        # return the dataset that you want to load.
+        def loader(dataset)
+          @argn = -1
+          @args = []
+          ds = yield self, dataset
+          sql = ds.clone(:placeholder_literalizer=>self).sql
+
+          last_offset = 0
+          fragments = @args.map do |used_sql, offset, arg, t|
+            raise Error, "placeholder literalizer argument literalized into different string than dataset returned" unless used_sql.equal?(sql)
+            a = [sql[last_offset...offset], arg, t]
+            last_offset = offset
+            a
+          end
+          final_sql = sql[last_offset..-1]
+
+          arity = @argn+1
+          PlaceholderLiteralizer.new(ds.clone, fragments, final_sql, arity)
+        end
+
+        # Return an Argument with the specified position, or the next position. In
+        # general you shouldn't mix calls with an argument and calls without an
+        # argument for the same receiver.
+        def arg(v=(no_arg_given = true; @argn+=1))
+          unless no_arg_given
+            @argn = v if @argn < v
+          end
+          Argument.new(self, v)
+        end
+
+        # Record the offset at which the argument is used in the SQL query, and any
+        # transforming block to apply to the argument's value.
+        def use(sql, arg, transformer)
+          @args << [sql, sql.length, arg, transformer]
+        end
+      end
+
+      # Create a PlaceholderLiteralizer by yielding a Recorder and dataset to the
+      # given block, recording the offsets at which the recorder's arguments
+      # are used in the query.
+      def self.loader(dataset, &block)
+        Recorder.new.loader(dataset, &block)
+      end
+
+      # Save the dataset, array of SQL fragments, and ending SQL string.
+      def initialize(dataset, fragments, final_sql, arity)
+        @dataset = dataset
+        @fragments = fragments
+        @final_sql = final_sql
+        @arity = arity
+      end
+
+      # Return an array of all objects by running the SQL query for the given arguments.
+      # If a block is given, yields all objects to the block after loading them.
+      def all(*args, &block)
+        @dataset.with_sql_all(sql(*args), &block)
+      end
+
+      # Run the SQL query for the given arguments, yielding each returned row to the block.
+      def each(*args, &block)
+        @dataset.with_sql_each(sql(*args), &block)
+      end
+
+      # Run the SQL query for the given arguments, returning the first row.
+      def first(*args)
+        @dataset.with_sql_first(sql(*args))
+      end
+
+      # Run the SQL query for the given arguments, returning the first value.  For this to
+      # make sense, the dataset should return a single row with a single value (or no rows).
+      def get(*args)
+        @dataset.with_sql_single_value(sql(*args))
+      end
+
+      # Return the SQL query to use for the given arguments.
+      def sql(*args)
+        raise Error, "wrong number of arguments (#{args.length} for #{@arity})" unless args.length == @arity
+        s = ''
+        ds = @dataset
+        @fragments.each do |sql, i, transformer|
+          s << sql
+          if i.is_a?(Integer)
+            v = args.fetch(i)
+            v = transformer.call(v) if transformer
+          else
+            v = i.call
+          end
+          ds.literal_append(s, v)
+        end
+        if sql = @final_sql
+          s << sql
+        end
+        s
+      end
+    end
+  end
+end
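(Sketch, not part of the diff: a small illustration of the Argument#transform API
defined above.  The items table and its id column are assumptions; the transformer
block runs on the supplied value each time the recorded query is executed.)

    loader = Sequel::Dataset::PlaceholderLiteralizer.loader(DB[:items]) do |pl, ds|
      ds.where(:id => pl.arg.transform(&:to_i))
    end
    loader.first("10")
    # SELECT * FROM items WHERE (id = 10)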
diff --git a/lib/sequel/dataset/prepared_statements.rb b/lib/sequel/dataset/prepared_statements.rb
index 10e6b6a..136e56e 100644
--- a/lib/sequel/dataset/prepared_statements.rb
+++ b/lib/sequel/dataset/prepared_statements.rb
@@ -3,7 +3,7 @@ module Sequel
     # ---------------------
     # :section: 8 - Methods related to prepared statements or bound variables
     # On some adapters, these use native prepared statements and bound variables, on others
-    # support is emulated.  For details, see the {"Prepared Statements/Bound Variables" guide}[link:files/doc/prepared_statements_rdoc.html].
+    # support is emulated.  For details, see the {"Prepared Statements/Bound Variables" guide}[rdoc-ref:doc/prepared_statements.rdoc].
     # ---------------------
     
     PREPARED_ARG_PLACEHOLDER = LiteralString.new('?').freeze
@@ -165,6 +165,12 @@ module Sequel
         @opts[:bind_vars].has_key?(k)
       end
 
+      # The symbol cache should always be skipped, since placeholders
+      # are symbols.
+      def skip_symbol_cache?
+        true
+      end
+
       # Use a clone of the dataset extended with prepared statement
       # support and using the same argument hash so that you can use
       # bind variables/prepared arguments in subselects.
diff --git a/lib/sequel/dataset/query.rb b/lib/sequel/dataset/query.rb
index ff968c5..38f16a1 100644
--- a/lib/sequel/dataset/query.rb
+++ b/lib/sequel/dataset/query.rb
@@ -36,7 +36,7 @@ module Sequel
     QUERY_METHODS = (<<-METHS).split.map{|x| x.to_sym} + JOIN_METHODS
       add_graph_aliases and distinct except exclude exclude_having exclude_where
       filter for_update from from_self graph grep group group_and_count group_by having intersect invert
-      limit lock_style naked or order order_append order_by order_more order_prepend qualify
+      limit lock_style naked offset or order order_append order_by order_more order_prepend qualify
       reverse reverse_order select select_all select_append select_group select_more server
       set_graph_aliases unfiltered ungraphed ungrouped union
       unlimited unordered where with with_recursive with_sql
@@ -213,10 +213,13 @@ module Sequel
     #
     #   ds.from_self(:alias=>:foo)
     #   # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo
+    #
+    #   ds.from_self(:alias=>:foo, :column_aliases=>[:c1, :c2])
+    #   # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo(c1, c2)
     def from_self(opts=OPTS)
       fs = {}
       @opts.keys.each{|k| fs[k] = nil unless NON_SQL_OPTIONS.include?(k)}
-      clone(fs).from(opts[:alias] ? as(opts[:alias]) : self)
+      clone(fs).from(opts[:alias] ? as(opts[:alias], opts[:column_aliases]) : self)
     end
 
     # Match any of the columns to any of the patterns. The terms can be
@@ -237,19 +240,23 @@ module Sequel
     # Examples:
     #
     #   dataset.grep(:a, '%test%')
-    #   # SELECT * FROM items WHERE (a LIKE '%test%')
+    #   # SELECT * FROM items WHERE (a LIKE '%test%' ESCAPE '\')
     #
     #   dataset.grep([:a, :b], %w'%test% foo')
-    #   # SELECT * FROM items WHERE ((a LIKE '%test%') OR (a LIKE 'foo') OR (b LIKE '%test%') OR (b LIKE 'foo'))
+    #   # SELECT * FROM items WHERE ((a LIKE '%test%' ESCAPE '\') OR (a LIKE 'foo' ESCAPE '\')
+    #   #   OR (b LIKE '%test%' ESCAPE '\') OR (b LIKE 'foo' ESCAPE '\'))
     #
     #   dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true)
-    #   # SELECT * FROM a WHERE (((a LIKE '%foo%') OR (b LIKE '%foo%')) AND ((a LIKE '%bar%') OR (b LIKE '%bar%')))
+    #   # SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (b LIKE '%foo%' ESCAPE '\'))
+    #   #   AND ((a LIKE '%bar%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))
     #
     #   dataset.grep([:a, :b], %w'%foo% %bar%', :all_columns=>true)
-    #   # SELECT * FROM a WHERE (((a LIKE '%foo%') OR (a LIKE '%bar%')) AND ((b LIKE '%foo%') OR (b LIKE '%bar%')))
+    #   # SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (a LIKE '%bar%' ESCAPE '\'))
+    #   #   AND ((b LIKE '%foo%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))
     #
     #   dataset.grep([:a, :b], %w'%foo% %bar%', :all_patterns=>true, :all_columns=>true)
-    #   # SELECT * FROM a WHERE ((a LIKE '%foo%') AND (b LIKE '%foo%') AND (a LIKE '%bar%') AND (b LIKE '%bar%'))
+    #   # SELECT * FROM a WHERE ((a LIKE '%foo%' ESCAPE '\') AND (b LIKE '%foo%' ESCAPE '\')
+    #   #   AND (a LIKE '%bar%' ESCAPE '\') AND (b LIKE '%bar%' ESCAPE '\'))
     def grep(columns, patterns, opts=OPTS)
       if opts[:all_patterns]
         conds = Array(patterns).map do |pat|
@@ -445,23 +452,33 @@ module Sequel
       last_alias = options[:implicit_qualifier]
       qualify_type = options[:qualify]
 
-      if table.is_a?(Dataset)
+      if table.is_a?(SQL::AliasedExpression)
+        table_expr = if table_alias
+          SQL::AliasedExpression.new(table.expression, table_alias, table.columns)
+        else
+          table
+        end
+        table = table_expr.expression
+        table_name = table_alias = table_expr.alias
+      elsif table.is_a?(Dataset)
         if table_alias.nil?
           table_alias_num = (@opts[:num_dataset_sources] || 0) + 1
           table_alias = dataset_alias(table_alias_num)
         end
         table_name = table_alias
+        table_expr = SQL::AliasedExpression.new(table, table_alias)
       else
         table, implicit_table_alias = split_alias(table)
         table_alias ||= implicit_table_alias
         table_name = table_alias || table
+        table_expr = table_alias ? SQL::AliasedExpression.new(table, table_alias) : table
       end
 
       join = if expr.nil? and !block
-        SQL::JoinClause.new(type, table, table_alias)
+        SQL::JoinClause.new(type, table_expr)
       elsif using_join
         raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block
-        SQL::JoinUsingClause.new(expr, type, table, table_alias)
+        SQL::JoinUsingClause.new(expr, type, table_expr)
       else
         last_alias ||= @opts[:last_joined_table] || first_source_alias
         if Sequel.condition_specifier?(expr)
@@ -485,7 +502,7 @@ module Sequel
           expr2 = yield(table_name, last_alias, @opts[:join] || [])
           expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2
         end
-        SQL::JoinOnClause.new(expr, type, table, table_alias)
+        SQL::JoinOnClause.new(expr, type, table_expr)
       end
 
       opts = {:join => (@opts[:join] || []) + [join], :last_joined_table => table_name}
@@ -524,6 +541,7 @@ module Sequel
       return from_self.limit(l, o) if @opts[:sql]
 
       if l.is_a?(Range)
+        no_offset = false
         o = l.first
         l = l.last - l.first + (l.exclude_end? ? 0 : 1)
       end
@@ -531,17 +549,10 @@ module Sequel
       if l.is_a?(Integer)
         raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1
       end
-      opts = {:limit => l}
-      if o
-        o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString)
-        if o.is_a?(Integer)
-          raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0
-        end
-        opts[:offset] = o
-      elsif !no_offset
-        opts[:offset] = nil
-      end
-      clone(opts)
+
+      ds = clone(:limit=>l)
+      ds = ds.offset(o) unless no_offset
+      ds
     end
     
     # Returns a cloned dataset with the given lock style.  If style is a
@@ -568,6 +579,19 @@ module Sequel
       ds.row_proc = nil
       ds
     end
+
+    # Returns a copy of the dataset with the given offset. Can be safely combined with limit.
+    # If you call limit with an offset, that offset will override an offset set earlier
+    # with this method.
+    #
+    #   DB[:items].offset(10) # SELECT * FROM items OFFSET 10
+    def offset(o)
+      o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString)
+      if o.is_a?(Integer)
+        raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0
+      end
+      clone(:offset => o)
+    end
     
     # Adds an alternate filter to an existing filter using OR. If no filter 
     # exists an +Error+ is raised.
@@ -842,7 +866,7 @@ module Sequel
     # where also accepts a block, which should return one of the above argument
     # types, and is treated the same way.  This block yields a virtual row object,
     # which is easy to use to create identifiers and functions.  For more details
-    # on the virtual row support, see the {"Virtual Rows" guide}[link:files/doc/virtual_rows_rdoc.html]
+    # on the virtual row support, see the {"Virtual Rows" guide}[rdoc-ref:doc/virtual_rows.rdoc]
     #
     # If both a block and regular argument are provided, they get ANDed together.
     #
@@ -871,7 +895,7 @@ module Sequel
     #   software = dataset.where(:category => 'software').where{price < 100}
     #   # SELECT * FROM items WHERE ((category = 'software') AND (price < 100))
     #
-    # See the the {"Dataset Filtering" guide}[link:files/doc/dataset_filtering_rdoc.html] for more examples and details.
+    # See the {"Dataset Filtering" guide}[rdoc-ref:doc/dataset_filtering.rdoc] for more examples and details.
     def where(*cond, &block)
       _filter(:where, *cond, &block)
     end
@@ -883,9 +907,9 @@ module Sequel
     # :recursive :: Specify that this is a recursive CTE
     #
     #   DB[:items].with(:items, DB[:syx].where(:name.like('A%')))
-    #   # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%')) SELECT * FROM items
+    #   # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%' ESCAPE '\')) SELECT * FROM items
     def with(name, dataset, opts=OPTS)
-      raise(Error, 'This datatset does not support common table expressions') unless supports_cte?
+      raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
       if hoist_cte?(dataset)
         s, ds = hoist_cte(dataset)
         s.with(name, ds, opts)
@@ -961,10 +985,24 @@ module Sequel
       !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty?
     end
 
-    # Whether this dataset is a simple SELECT * FROM table.
+    # Whether this dataset is a simple select from an underlying table, such as:
+    #
+    #   SELECT * FROM table
+    #   SELECT table.* FROM table
     def simple_select_all?
       o = @opts.reject{|k,v| v.nil? || NON_SQL_OPTIONS.include?(k)}
-      o.length == 1 && (f = o[:from]) && f.length == 1 && (f.first.is_a?(Symbol) || f.first.is_a?(SQL::AliasedExpression))
+      if (f = o[:from]) && f.length == 1 && (f.first.is_a?(Symbol) || f.first.is_a?(SQL::AliasedExpression))
+        case o.length
+        when 1
+          true
+        when 2
+          (s = o[:select]) && s.length == 1 && s.first.is_a?(SQL::ColumnAll)
+        else
+          false
+        end
+      else
+        false
+      end
     end
 
     private
@@ -1028,6 +1066,8 @@ module Sequel
         end
       when String
         LiteralString.new("(#{expr})")
+      when PlaceholderLiteralizer::Argument
+        expr.transform{|v| filter_expr(v)}
       else
         raise(Error, "Invalid filter argument: #{expr.inspect}")
       end
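(Sketch, not part of the diff: the reworked limit and the new offset method compose
as shown below.  The items table is an assumption.)

    DB[:items].offset(10).sql
    # SELECT * FROM items OFFSET 10

    DB[:items].limit(5, 10).sql
    # SELECT * FROM items LIMIT 5 OFFSET 10

    DB[:items].offset(10).limit(5).sql
    # SELECT * FROM items LIMIT 5 OFFSET 10  (limit without an offset keeps the prior offset)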
diff --git a/lib/sequel/dataset/sql.rb b/lib/sequel/dataset/sql.rb
index 5bc1888..89596ef 100644
--- a/lib/sequel/dataset/sql.rb
+++ b/lib/sequel/dataset/sql.rb
@@ -5,17 +5,7 @@ module Sequel
     # These are methods you can call to see what SQL will be generated by the dataset.
     # ---------------------
     
-    # Returns a DELETE SQL query string.  See +delete+.
-    # 
-    #   dataset.filter{|o| o.price >= 100}.delete_sql
-    #   # => "DELETE FROM items WHERE (price >= 100)"
-    def delete_sql
-      return static_sql(opts[:sql]) if opts[:sql]
-      check_modification_allowed!
-      clause_sql(:delete)
-    end
-    
-    # Returns an EXISTS clause for the dataset as a +LiteralString+.
+    # Returns an EXISTS clause for the dataset as an SQL::PlaceholderLiteralString.
     #
     #   DB.select(1).where(DB[:items].exists)
     #   # SELECT 1 WHERE (EXISTS (SELECT * FROM items))
@@ -59,23 +49,25 @@ module Sequel
         columns = [columns().last]
         values = [DEFAULT]
       end
-      clone(:columns=>columns, :values=>values)._insert_sql
+      clone(:columns=>columns, :values=>values).send(:_insert_sql)
     end
     
-    # Returns a literal representation of a value to be used as part
-    # of an SQL expression. 
+    # Append a literal representation of a value to the given SQL string.
     # 
-    #   DB[:items].literal("abc'def\\") #=> "'abc''def\\\\'"
-    #   DB[:items].literal(:items__id) #=> "items.id"
-    #   DB[:items].literal([1, 2, 3]) => "(1, 2, 3)"
-    #   DB[:items].literal(DB[:items]) => "(SELECT * FROM items)"
-    #   DB[:items].literal(:x + 1 > :y) => "((x + 1) > y)"
-    #
     # If an unsupported object is given, an +Error+ is raised.
     def literal_append(sql, v)
       case v
       when Symbol
-        literal_symbol_append(sql, v)
+        if skip_symbol_cache?
+          literal_symbol_append(sql, v)
+        else 
+          unless l = db.literal_symbol(v)
+            l = ''
+            literal_symbol_append(l, v)
+            db.literal_symbol_set(v, l)
+          end
+          sql << l
+        end
       when String
         case v
         when LiteralString
@@ -104,9 +96,9 @@ module Sequel
       when Array
         literal_array_append(sql, v)
       when Time
-        sql << (v.is_a?(SQLTime) ? literal_sqltime(v) : literal_time(v))
+        v.is_a?(SQLTime) ? literal_sqltime_append(sql, v) : literal_time_append(sql, v)
       when DateTime
-        sql << literal_datetime(v)
+        literal_datetime_append(sql, v)
       when Date
         sql << literal_date(v)
       when Dataset
@@ -123,17 +115,32 @@ module Sequel
     # This method should be overridden by descendants if they support
     # inserting multiple records in a single SQL statement.
     def multi_insert_sql(columns, values)
-      values.map{|r| insert_sql(columns, r)}
+      case multi_insert_sql_strategy
+      when :values
+        sql = LiteralString.new('VALUES ')
+        expression_list_append(sql, values.map{|r| Array(r)})
+        [insert_sql(columns, sql)]
+      when :union
+        c = false
+        sql = LiteralString.new('')
+        u = UNION_ALL_SELECT
+        f = empty_from_sql
+        values.each do |v|
+          if c
+            sql << u
+          else
+            sql << SELECT << SPACE
+            c = true
+          end
+          expression_list_append(sql, v)
+          sql << f if f
+        end
+        [insert_sql(columns, sql)]
+      else
+        values.map{|r| insert_sql(columns, r)}
+      end
     end
     
-    # Returns a SELECT SQL query string.
-    #
-    #   dataset.select_sql # => "SELECT * FROM items"
-    def select_sql
-      return static_sql(@opts[:sql]) if @opts[:sql]
-      clause_sql(:select)
-    end
-
     # Same as +select_sql+, not aliased directly to make subclassing simpler.
     def sql
       select_sql
@@ -164,7 +171,7 @@ module Sequel
     def update_sql(values = OPTS)
       return static_sql(opts[:sql]) if opts[:sql]
       check_modification_allowed!
-      clone(:values=>values)._update_sql
+      clone(:values=>values).send(:_update_sql)
     end
     
     # ---------------------
@@ -178,6 +185,47 @@ module Sequel
       clauses.map{|clause| :"#{type}_#{clause}_sql"}.freeze
     end
 
+    # Define a dataset literalization method for the given type in the given module,
+    # using the given clauses.
+    #
+    # Arguments:
+    # mod :: Module in which to define method
+    # type :: Type of SQL literalization method to create, either :select, :insert, :update, or :delete
+    # clauses :: array of clauses that make up the SQL query for the type.  This can either be a single
+    #            array of symbols/strings, or it can be an array of pairs, with the first element in
+    #            each pair being an if/elsif/else code fragment, and the second element in each pair
+    #            being an array of symbols/strings for the appropriate branch.
+    def self.def_sql_method(mod, type, clauses)
+      priv = type == :update || type == :insert
+
+      lines = []
+      lines << 'private' if priv
+      lines << "def #{'_' if priv}#{type}_sql"
+      lines << 'if sql = opts[:sql]; return static_sql(sql) end' unless priv
+      lines << 'check_modification_allowed!' if type == :delete
+      lines << 'sql = @opts[:append_sql] || sql_string_origin'
+
+      if clauses.all?{|c| c.is_a?(Array)}
+        clauses.each do |i, cs|
+          lines << i
+          lines.concat(clause_methods(type, cs).map{|x| "#{x}(sql)"}) 
+        end 
+        lines << 'end'
+      else
+        lines.concat(clause_methods(type, clauses).map{|x| "#{x}(sql)"})
+      end
+
+      lines << 'sql'
+      lines << 'end'
+
+      mod.class_eval lines.join("\n"), __FILE__, __LINE__
+    end
+
+    def_sql_method(self, :delete, %w'delete from where')
+    def_sql_method(self, :insert, %w'insert into columns values')
+    def_sql_method(self, :select, %w'with select distinct columns from join where group having compounds order limit lock')
+    def_sql_method(self, :update, %w'update table set where')
+
     # Map of emulated function names to native function names.
     EMULATED_FUNCTION_MAP = {}
 
@@ -190,6 +238,9 @@ module Sequel
     AS = ' AS '.freeze
     ASC = ' ASC'.freeze
     BACKSLASH = "\\".freeze
+    BITCOMP_CLOSE = ") - 1)".freeze
+    BITCOMP_OPEN = "((0 - ".freeze
+    BITWISE_METHOD_MAP = {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR}
     BOOL_FALSE = "'f'".freeze
     BOOL_TRUE = "'t'".freeze
     BRACKET_CLOSE =  ']'.freeze
@@ -214,7 +265,6 @@ module Sequel
     DEFAULT = LiteralString.new('DEFAULT').freeze
     DEFAULT_VALUES = " DEFAULT VALUES".freeze
     DELETE = 'DELETE'.freeze
-    DELETE_CLAUSE_METHODS = clause_methods(:delete, %w'delete from where')
     DESC = ' DESC'.freeze
     DISTINCT = " DISTINCT".freeze
     DOT = '.'.freeze
@@ -224,6 +274,7 @@ module Sequel
     ESCAPE = " ESCAPE ".freeze
     EXTRACT = 'extract('.freeze
     EXISTS = ['EXISTS '.freeze].freeze
+    FILTER = " FILTER (WHERE ".freeze
     FOR_UPDATE = ' FOR UPDATE'.freeze
     FORMAT_DATE = "'%Y-%m-%d'".freeze
     FORMAT_DATE_STANDARD = "DATE '%Y-%m-%d'".freeze
@@ -234,11 +285,10 @@ module Sequel
     FRAME_ALL = "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING".freeze
     FRAME_ROWS = "ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW".freeze
     FROM = ' FROM '.freeze
-    FUNCTION_EMPTY = '()'.freeze
+    FUNCTION_DISTINCT = "DISTINCT ".freeze
     GROUP_BY = " GROUP BY ".freeze
     HAVING = " HAVING ".freeze
     INSERT = "INSERT".freeze
-    INSERT_CLAUSE_METHODS = clause_methods(:insert, %w'insert into columns values')
     INTO = " INTO ".freeze
     IS_LITERALS = {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze
     IS_OPERATORS = ::Sequel::SQL::ComplexExpression::IS_OPERATORS
@@ -267,7 +317,6 @@ module Sequel
     QUOTE_RE = /"/.freeze
     RETURNING = " RETURNING ".freeze
     SELECT = 'SELECT'.freeze
-    SELECT_CLAUSE_METHODS = clause_methods(:select, %w'with select distinct columns from join where group having compounds order limit lock')
     SET = ' SET '.freeze
     SPACE = ' '.freeze
     SQL_WITH = "WITH ".freeze
@@ -279,11 +328,13 @@ module Sequel
     REGEXP_OPERATORS = ::Sequel::SQL::ComplexExpression::REGEXP_OPERATORS
     UNDERSCORE = '_'.freeze
     UPDATE = 'UPDATE'.freeze
-    UPDATE_CLAUSE_METHODS = clause_methods(:update, %w'update table set where')
     USING = ' USING ('.freeze
+    UNION_ALL_SELECT = ' UNION ALL SELECT '.freeze
     VALUES = " VALUES ".freeze
     V190 = '1.9.0'.freeze
     WHERE = " WHERE ".freeze
+    WITH_ORDINALITY = " WITH ORDINALITY".freeze
+    WITHIN_GROUP = " WITHIN GROUP (ORDER BY ".freeze
 
     [:literal, :quote_identifier, :quote_schema_table].each do |meth|
       class_eval(<<-END, __FILE__, __LINE__ + 1)
@@ -295,13 +346,13 @@ module Sequel
       END
     end
 
-    # SQL fragment for AliasedExpression
+    # Append literalization of aliased expression to SQL string.
     def aliased_expression_sql_append(sql, ae)
       literal_append(sql, ae.expression)
-      as_sql_append(sql, ae.aliaz)
+      as_sql_append(sql, ae.alias, ae.columns)
     end
 
-    # SQL fragment for Array
+    # Append literalization of array to SQL string.
     def array_sql_append(sql, a)
       if a.empty?
         sql << ARRAY_EMPTY
@@ -312,7 +363,7 @@ module Sequel
       end
     end
 
-    # SQL fragment for BooleanConstants
+    # Append literalization of boolean constant to SQL string.
     def boolean_constant_sql_append(sql, constant)
       if (constant == true || constant == false) && !supports_where_true?
         sql << (constant == true ? CONDITION_TRUE : CONDITION_FALSE)
@@ -321,7 +372,7 @@ module Sequel
       end
     end
 
-    # SQL fragment for CaseExpression
+    # Append literalization of case expression to SQL string.
     def case_expression_sql_append(sql, ce)
       sql << CASE_OPEN
       if ce.expression?
@@ -341,7 +392,7 @@ module Sequel
       sql << CASE_END
     end
 
-    # SQL fragment for the SQL CAST expression
+    # Append literalization of cast expression to SQL string.
     def cast_sql_append(sql, expr, type)
       sql << CAST_OPEN
       literal_append(sql, expr)
@@ -349,12 +400,12 @@ module Sequel
       sql << PAREN_CLOSE
     end
 
-    # SQL fragment for specifying all columns in a given table
+    # Append literalization of column all selection to SQL string.
     def column_all_sql_append(sql, ca)
       qualified_identifier_sql_append(sql, ca.table, WILDCARD)
     end
 
-    # SQL fragment for the complex expression.
+    # Append literalization of complex expression to SQL string.
     def complex_expression_sql_append(sql, op, args)
       case op
       when *IS_OPERATORS
@@ -459,49 +510,112 @@ module Sequel
       end
     end
     
-    # SQL fragment for constants
+    # Append literalization of constant to SQL string.
     def constant_sql_append(sql, constant)
       sql << constant.to_s
     end
 
-    # SQL fragment for delayed evaluations, evaluating the
-    # object and literalizing the returned value.
+    # Append literalization of delayed evaluation to SQL string,
+    # causing the delayed evaluation proc to be evaluated.
     def delayed_evaluation_sql_append(sql, callable)
-      literal_append(sql, callable.call)
+      if recorder = @opts[:placeholder_literalizer]
+        recorder.use(sql, callable, nil)
+      else
+        literal_append(sql, callable.call)
+      end
     end
 
-    # SQL fragment specifying an emulated SQL function call.
-    # By default, assumes just the function name may need to
-    # be emulated, adapters should set an EMULATED_FUNCTION_MAP
-    # hash mapping emulated functions to native functions in
-    # their dataset class to setup the emulation.
+    # REMOVE411
     def emulated_function_sql_append(sql, f)
       _function_sql_append(sql, native_function_name(f.f), f.args)
     end
 
-    # SQL fragment specifying an SQL function call without emulation.
+    # Append literalization of function call to SQL string.
     def function_sql_append(sql, f)
-      _function_sql_append(sql, f.f, f.args)
+      name = f.name
+      opts = f.opts
+
+      if opts[:emulate]
+        if emulate_function?(name)
+          emulate_function_sql_append(sql, f)
+          return
+        end
+
+        name = native_function_name(name) 
+      end
+
+      sql << LATERAL if opts[:lateral]
+
+      case name
+      when SQL::Identifier
+        if supports_quoted_function_names? && opts[:quoted] != false
+          literal_append(sql, name)
+        else
+          sql << name.value.to_s
+        end
+      when SQL::QualifiedIdentifier
+        if supports_quoted_function_names? && opts[:quoted] != false
+          literal_append(sql, name)
+        else
+          sql << split_qualifiers(name).join(DOT)
+        end
+      else
+        if supports_quoted_function_names? && opts[:quoted]
+          quote_identifier_append(sql, name)
+        else
+          sql << name.to_s
+        end
+      end
+
+      sql << PAREN_OPEN
+      if opts[:*]
+        sql << WILDCARD
+      else
+        sql << FUNCTION_DISTINCT if opts[:distinct]
+        expression_list_append(sql, f.args)
+      end
+      sql << PAREN_CLOSE
+
+      if group = opts[:within_group]
+        sql << WITHIN_GROUP
+        expression_list_append(sql, group)
+        sql << PAREN_CLOSE
+      end
+
+      if filter = opts[:filter]
+        sql << FILTER
+        literal_append(sql, filter_expr(filter, &opts[:filter_block]))
+        sql << PAREN_CLOSE
+      end
+
+      if window = opts[:over]
+        sql << OVER
+        window_sql_append(sql, window.opts)
+      end
+
+      if opts[:with_ordinality]
+        sql << WITH_ORDINALITY
+      end
     end
 
-    # SQL fragment specifying a JOIN clause without ON or USING.
+    # Append literalization of JOIN clause without ON or USING to SQL string.
     def join_clause_sql_append(sql, jc)
       table = jc.table
       table_alias = jc.table_alias
-      table_alias = nil if table == table_alias
+      table_alias = nil if table == table_alias && !jc.column_aliases
       sql << SPACE << join_type_sql(jc.join_type) << SPACE
       identifier_append(sql, table)
-      as_sql_append(sql, table_alias) if table_alias
+      as_sql_append(sql, table_alias, jc.column_aliases) if table_alias
     end
 
-    # SQL fragment specifying a JOIN clause with ON.
+    # Append literalization of JOIN ON clause to SQL string.
     def join_on_clause_sql_append(sql, jc)
       join_clause_sql_append(sql, jc)
       sql << ON
       literal_append(sql, filter_expr(jc.on))
     end
 
-    # SQL fragment specifying a JOIN clause with USING.
+    # Append literalization of JOIN USING clause to SQL string.
     def join_using_clause_sql_append(sql, jc)
       join_clause_sql_append(sql, jc)
       sql << USING
@@ -509,14 +623,13 @@ module Sequel
       sql << PAREN_CLOSE
     end
     
-    # SQL fragment for NegativeBooleanConstants
+    # Append literalization of negative boolean constant to SQL string.
     def negative_boolean_constant_sql_append(sql, constant)
       sql << NOT_SPACE
       boolean_constant_sql_append(sql, constant)
     end
 
-    # SQL fragment for the ordered expression, used in the ORDER BY
-    # clause.
+    # Append literalization of ordered expression to SQL string.
     def ordered_expression_sql_append(sql, oe)
       literal_append(sql, oe.expression)
       sql << (oe.descending ? DESC : ASC)
@@ -528,7 +641,7 @@ module Sequel
       end
     end
 
-    # SQL fragment for a literal string with placeholders
+    # Append literalization of placeholder literal string to SQL string.
     def placeholder_literal_string_sql_append(sql, pls)
       args = pls.args
       str = pls.str
@@ -572,8 +685,7 @@ module Sequel
       sql << PAREN_CLOSE if pls.parens
     end
 
-    # SQL fragment for the qualifed identifier, specifying
-    # a table and a column (or schema and table).
+    # Append literalization of qualified identifier to SQL string.
     # If 3 arguments are given, the 2nd should be the table/qualifier and the third should be
     # column/qualified.  If 2 arguments are given, the 2nd should be an SQL::QualifiedIdentifier.
     def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c))
@@ -582,6 +694,7 @@ module Sequel
       identifier_append(sql, column)
     end
 
+    # Append literalization of unqualified identifier to SQL string.
     # Adds quoting to identifiers (columns and tables). If identifiers are not
     # being quoted, returns name as a string.  If identifiers are being quoted
     # quote the name with quoted_identifier.
@@ -599,8 +712,7 @@ module Sequel
       end
     end
 
-    # Separates the schema from the table and returns a string with them
-    # quoted (if quoting identifiers)
+    # Append literalization of identifier or unqualified identifier to SQL string.
     def quote_schema_table_append(sql, table)
       schema, table = schema_and_table(table)
       if schema
@@ -610,6 +722,7 @@ module Sequel
       quote_identifier_append(sql, table)
     end
 
+    # Append literalization of quoted identifier to SQL string.
     # This method quotes the given name with the SQL standard double quote. 
     # should be overridden by subclasses to provide quoting not matching the
     # SQL standard, such as backtick (used by MySQL and SQLite).
@@ -657,7 +770,7 @@ module Sequel
       end
     end
 
-    # SQL fragment for specifying subscripts (SQL array accesses)
+    # Append literalization of subscripts (SQL array accesses) to SQL string.
     def subscript_sql_append(sql, s)
       literal_append(sql, s.f)
       sql << BRACKET_OPEN
@@ -673,7 +786,7 @@ module Sequel
       sql << BRACKET_CLOSE
     end
 
-    # The SQL fragment for the given window's options.
+    # Append literalization of windows (for window functions) to SQL string.
     def window_sql_append(sql, opts)
       raise(Error, 'This dataset does not support window functions') unless supports_window_functions?
       sql << PAREN_OPEN
@@ -714,8 +827,9 @@ module Sequel
       sql << PAREN_CLOSE
     end
 
-    # The SQL fragment for the given window function's function and window.
+    # REMOVE411
     def window_function_sql_append(sql, function, window)
+      Deprecation.deprecate("Dataset#window_function_sql_append", "Please use Sequel::SQL::Function.new(name, *args).over(...) to create an SQL window function")
       literal_append(sql, function)
       sql << OVER
       literal_append(sql, window)
@@ -723,16 +837,6 @@ module Sequel
 
     protected
 
-    # Formats in INSERT statement using the stored columns and values.
-    def _insert_sql
-      clause_sql(:insert)
-    end
-
-    # Formats an UPDATE statement using the stored values.
-    def _update_sql
-      clause_sql(:update)
-    end
-
     # Return a from_self dataset if an order or limit is specified, so it works as expected
     # with UNION, EXCEPT, and INTERSECT clauses.
     def compound_from_self
@@ -741,9 +845,25 @@ module Sequel
     
     private
 
-    # Backbone of function_sql_append and emulated_function_sql_append.
+    # REMOVE411
     def _function_sql_append(sql, name, args)
-      sql << name.to_s
+      Deprecation.deprecate("Dataset#emulated_function_sql_append and #_function_sql_append", "Please use Sequel::SQL::Function.new!(name, args, :emulate=>true) to create an emulated SQL function")
+      case name
+      when SQL::Identifier
+        if supports_quoted_function_names?
+          literal_append(sql, name)
+        else
+          sql << name.value.to_s
+        end
+      when SQL::QualifiedIdentifier
+        if supports_quoted_function_names?
+          literal_append(sql, name)
+        else
+          sql << split_qualifiers(name).join(DOT)
+        end
+      else
+        sql << name.to_s
+      end
       if args.empty?
         sql << FUNCTION_EMPTY
       else
@@ -786,7 +906,7 @@ module Sequel
       when SQL::QualifiedIdentifier
         alias_symbol(sym.column)
       when SQL::AliasedExpression
-        alias_alias_symbol(sym.aliaz)
+        alias_alias_symbol(sym.alias)
       else
         raise Error, "Invalid alias for alias_symbol: #{sym.inspect}"
       end
@@ -800,10 +920,16 @@ module Sequel
       options_overlap(COUNT_FROM_SELF_OPTS) ? from_self : unordered
     end
 
-    # SQL fragment for specifying an alias.  expression should already be literalized.
-    def as_sql_append(sql, aliaz)
+    # Append aliasing expression to SQL string.
+    def as_sql_append(sql, aliaz, column_aliases=nil)
       sql << AS
       quote_identifier_append(sql, aliaz)
+      if column_aliases
+        raise Error, "#{db.database_type} does not support derived column lists" unless supports_derived_column_lists?
+        sql << PAREN_OPEN
+        identifier_list_append(sql, column_aliases)
+        sql << PAREN_CLOSE
+      end
     end
     
     # Raise an InvalidOperation exception if deletion is not allowed
@@ -818,13 +944,7 @@ module Sequel
       check_modification_allowed!
     end
 
-    # Prepare an SQL statement by calling all clause methods for the given statement type.
-    def clause_sql(type)
-      sql = @opts[:append_sql] || sql_string_origin
-      send("#{type}_clause_methods").each{|x| send(x, sql)}
-      sql
-    end
-
+    # Append column list to SQL string.
     # Converts an array of column names into a comma separated string of
     # column names. If the array is empty, a wildcard (*) is returned.
     def column_list_append(sql, columns)
@@ -835,24 +955,50 @@ module Sequel
       end
     end
 
-    # Yield each two pair of arguments to the block, which should
-    # return a string representing the SQL code for those
-    # two arguments.  If more than 2 arguments are provided, all
-    # calls to the block # after the first will have a LiteralString
-    # as the first argument, representing the application of the block to
-    # the previous arguments.
+    # Yield each pair of arguments to the block, which should
+    # return an object representing the SQL expression for those
+    # two arguments.  For more than two arguments, the first
+    # argument to the block will be the result of the previous block call.
     def complex_expression_arg_pairs(args)
       case args.length
       when 1
-        literal(args.at(0))
+        args.at(0)
       when 2
         yield args.at(0), args.at(1)
       else
-        args.inject{|m, a| LiteralString.new(yield(m, a))}
+        args.inject{|m, a| yield(m, a)}
       end
     end
 
-    # The SQL to use for the dataset used in a UNION/INTERSECT/EXCEPT clause. 
+    # Append the literalization of the args using complex_expression_arg_pairs
+    # to the given SQL string, used when database operator/function is 2-ary
+    # where Sequel expression is N-ary.
+    def complex_expression_arg_pairs_append(sql, args, &block)
+      literal_append(sql, complex_expression_arg_pairs(args, &block))
+    end
+
+    # Append literalization of complex expression to SQL string, for
+    # operators unsupported by some databases. Used by adapters for databases
+    # that don't support the operators natively.
+    def complex_expression_emulate_append(sql, op, args)
+      case op
+      when :%
+        complex_expression_arg_pairs_append(sql, args){|a, b| Sequel.function(:MOD, a, b)}
+      when :>>
+        complex_expression_arg_pairs_append(sql, args){|a, b| Sequel./(a, Sequel.function(:power, 2, b))}
+      when :<<
+        complex_expression_arg_pairs_append(sql, args){|a, b| Sequel.*(a, Sequel.function(:power, 2, b))}
+      when :&, :|, :^
+        f = BITWISE_METHOD_MAP[op]
+        complex_expression_arg_pairs_append(sql, args){|a, b| Sequel.function(f, a, b)}
+      when :'B~'
+        sql << BITCOMP_OPEN
+        literal_append(sql, args.at(0))
+        sql << BITCOMP_CLOSE
+      end
+    end
+
+    # Append literalization of dataset used in UNION/INTERSECT/EXCEPT clause to SQL string.
     def compound_dataset_sql_append(sql, ds)
       subselect_sql_append(sql, ds)
     end
@@ -867,17 +1013,30 @@ module Sequel
       requires_sql_standard_datetimes? ? STANDARD_TIMESTAMP_FORMAT : TIMESTAMP_FORMAT
     end
 
-    # The order of methods to call to build the DELETE SQL statement
-    def delete_clause_methods
-      DELETE_CLAUSE_METHODS
-    end
-
     def delete_delete_sql(sql)
       sql << DELETE
     end
 
-    # Converts an array of expressions into a comma separated string of
-    # expressions.
+    def delete_from_sql(sql)
+      if f = @opts[:from]
+        sql << FROM
+        source_list_append(sql, f)
+      end
+    end
+
+    # An SQL FROM clause to use in SELECT statements where the dataset has
+    # no from tables.
+    def empty_from_sql
+      nil
+    end
+
+    # Whether to emulate the function with the given name.  This should only be true
+    # if the emulation goes beyond choosing a function with a different name.
+    def emulate_function?(name)
+      false
+    end
+
+    # Append literalization of array of expressions to SQL string.
     def expression_list_append(sql, columns)
       c = false
       co = COMMA
@@ -926,8 +1085,8 @@ module Sequel
       sprintf(FORMAT_TIMESTAMP_USEC, usec)
     end
 
-    # Append the value, but special case regular (non-literal, non-blob) strings
-    # so that they are considered as identifiers and not SQL strings.
+    # Append literalization of identifier to SQL string, considering regular strings
+    # as SQL identifiers instead of SQL strings.
     def identifier_append(sql, v)
       if v.is_a?(String)
         case v
@@ -943,7 +1102,7 @@ module Sequel
       end
     end
 
-    # Append all identifiers in args interspersed by commas.
+    # Append literalization of array of identifiers to SQL string.
     def identifier_list_append(sql, args)
       c = false
       comma = COMMA
@@ -960,18 +1119,11 @@ module Sequel
       (i = identifier_input_method) ? v.to_s.send(i) : v.to_s
     end
 
-    # SQL fragment specifying the table to insert INTO
     def insert_into_sql(sql)
       sql << INTO
       source_list_append(sql, @opts[:from])
     end
 
-    # The order of methods to call to build the INSERT SQL statement
-    def insert_clause_methods
-      INSERT_CLAUSE_METHODS
-    end
-
-    # SQL fragment specifying the columns to insert into
     def insert_columns_sql(sql)
       columns = opts[:columns]
       if columns && !columns.empty?
@@ -985,7 +1137,6 @@ module Sequel
       sql << INSERT
     end
 
-    # SQL fragment specifying the values to insert.
     def insert_values_sql(sql)
       case values = opts[:values]
       when Array
@@ -1005,7 +1156,6 @@ module Sequel
       end
     end
 
-    # SQL fragment specifying the values to return.
     def insert_returning_sql(sql)
       if opts.has_key?(:returning)
         sql << RETURNING
@@ -1026,7 +1176,8 @@ module Sequel
      (opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join]
     end
 
-    # SQL fragment for Array.  Treats as an expression if an array of all two pairs, or as a SQL array otherwise.
+    # Append a literalization of the array to SQL string.
+    # Treats as an expression if an array of all two pairs, or as a SQL array otherwise.
     def literal_array_append(sql, v)
       if Sequel.condition_specifier?(v)
         literal_expression_append(sql, SQL::BooleanExpression.from_value_pairs(v))
@@ -1041,12 +1192,12 @@ module Sequel
       v.nan? || v.infinite? ?  "'#{d}'" : d
     end
 
-    # SQL fragment for SQL::Blob
+    # Append literalization of SQL::Blob to SQL string.
     def literal_blob_append(sql, v)
       literal_string_append(sql, v)
     end
 
-    # SQL fragment for Dataset.  Does a subselect inside parantheses.
+    # Append literalization of dataset to SQL string.  Does a subselect inside parentheses.
     def literal_dataset_append(sql, v)
       sql << LATERAL if v.opts[:lateral]
       sql << PAREN_OPEN
@@ -1068,7 +1219,12 @@ module Sequel
       format_timestamp(v)
     end
 
-    # SQL fragment for SQL::Expression, result depends on the specific type of expression.
+    # Append literalization of DateTime to SQL string.
+    def literal_datetime_append(sql, v)
+      sql << literal_datetime(v)
+    end
+
+    # Append literalization of SQL::Expression to SQL string.
     def literal_expression_append(sql, v)
       v.to_s_append(self, sql)
     end
@@ -1083,7 +1239,7 @@ module Sequel
       v.to_s
     end
 
-    # SQL fragment for Hash, treated as an expression
+    # Append literalization of Hash to SQL string, treating hash as a boolean expression.
     def literal_hash_append(sql, v)
       literal_expression_append(sql, SQL::BooleanExpression.from_value_pairs(v))
     end
@@ -1098,11 +1254,9 @@ module Sequel
       NULL
     end
 
-    # SQL fragment for a type of object not handled by Dataset#literal.
-    # Calls +sql_literal+ if object responds to it, otherwise raises an error.
-    # Classes implementing +sql_literal+ should call a class-specific method on the dataset
-    # provided and should add that method to Sequel::Dataset, allowing for adapters
-    # to provide customized literalizations.
+    # Append a literalization of the object to the given SQL string.
+    # Calls +sql_literal_append+ if the object responds to it, falling back to
+    # +sql_literal+ if the object responds to that, and raising an error otherwise.
     # If a database-specific type is allowed, this should be overridden in a subclass.
     def literal_other_append(sql, v)
       if v.respond_to?(:sql_literal_append)
@@ -1119,19 +1273,17 @@ module Sequel
       v.strftime("'%H:%M:%S#{format_timestamp_usec(v.usec) if supports_timestamp_usecs?}'")
     end
 
-    # SQL fragment for String.  Doubles \ and ' by default.
+    # Append literalization of Sequel::SQLTime to SQL string.
+    def literal_sqltime_append(sql, v)
+      sql << literal_sqltime(v)
+    end
+
+    # Append literalization of string to SQL string.
     def literal_string_append(sql, v)
       sql << APOS << v.gsub(APOS_RE, DOUBLE_APOS) << APOS
     end
 
-    # Converts a symbol into a column name. This method supports underscore
-    # notation in order to express qualified (two underscores) and aliased
-    # (three underscores) columns:
-    #
-    #   dataset.literal(:abc) #=> "abc"
-    #   dataset.literal(:abc___a) #=> "abc AS a"
-    #   dataset.literal(:items__abc) #=> "items.abc"
-    #   dataset.literal(:items__abc___a) #=> "items.abc AS a"
+    # Append literalization of symbol to SQL string.
     def literal_symbol_append(sql, v)
       c_table, column, c_alias = split_symbol(v)
       if c_table
@@ -1147,11 +1299,24 @@ module Sequel
       format_timestamp(v)
     end
 
+    # Append literalization of Time to SQL string.
+    def literal_time_append(sql, v)
+      sql << literal_time(v)
+    end
+
     # SQL fragment for true
     def literal_true
       BOOL_TRUE
     end
 
+    # What strategy to use for import/multi_insert.  While SQL-92 allows
+    # multiple rows in a VALUES clause, enough databases do not support that
+    # for it to be the default.  Use separate queries by default, which
+    # works everywhere.
+    def multi_insert_sql_strategy
+      :separate
+    end
+
     # Get the native function name given the emulated function name.
     def native_function_name(emulated_function)
       self.class.const_get(:EMULATED_FUNCTION_MAP).fetch(emulated_function, emulated_function)
@@ -1168,7 +1333,7 @@ module Sequel
             schema, table, t_alias = split_symbol(table)
             t_alias ||= Sequel::SQL::QualifiedIdentifier.new(schema, table) if schema
           when Sequel::SQL::AliasedExpression
-            t_alias = table.aliaz
+            t_alias = table.alias
           end
           c_table = t_alias || table
         end
@@ -1183,18 +1348,11 @@ module Sequel
       Qualifier.new(self, table).transform(e)
     end
 
-    # The order of methods to call to build the SELECT SQL statement
-    def select_clause_methods
-      SELECT_CLAUSE_METHODS
-    end
-
-    # Modify the sql to add the columns selected
     def select_columns_sql(sql)
       sql << SPACE
       column_list_append(sql, @opts[:select])
     end
 
-    # Modify the sql to add the DISTINCT modifier
     def select_distinct_sql(sql)
       if distinct = @opts[:distinct]
         sql << DISTINCT
@@ -1219,16 +1377,15 @@ module Sequel
       end
     end
 
-    # Modify the sql to add the list of tables to select FROM
     def select_from_sql(sql)
       if f = @opts[:from]
         sql << FROM
         source_list_append(sql, f)
+      elsif f = empty_from_sql
+        sql << f
       end
     end
-    alias delete_from_sql select_from_sql
 
-    # Modify the sql to add the expressions to GROUP BY
     def select_group_sql(sql)
       if group = @opts[:group]
         sql << GROUP_BY
@@ -1247,7 +1404,6 @@ module Sequel
       end
     end
 
-    # Modify the sql to add the filter criteria in the HAVING clause
     def select_having_sql(sql)
       if having = @opts[:having]
         sql << HAVING
@@ -1255,26 +1411,25 @@ module Sequel
       end
     end
 
-    # Modify the sql to add the list of tables to JOIN to
     def select_join_sql(sql)
       if js = @opts[:join]
         js.each{|j| literal_append(sql, j)}
       end
     end
 
-    # Modify the sql to limit the number of rows returned and offset
     def select_limit_sql(sql)
       if l = @opts[:limit]
         sql << LIMIT
         literal_append(sql, l)
-      end
-      if o = @opts[:offset]
-        sql << OFFSET
-        literal_append(sql, o)
+        if o = @opts[:offset]
+          sql << OFFSET
+          literal_append(sql, o)
+        end
+      elsif @opts[:offset]
+        select_only_offset_sql(sql)
       end
     end
-  
-    # Modify the sql to support the different types of locking modes.
+
     def select_lock_sql(sql)
       case l = @opts[:lock]
       when :update
@@ -1284,7 +1439,14 @@ module Sequel
       end
     end
 
-    # Modify the sql to add the expressions to ORDER BY
+    # Used only if there is an offset and no limit, making it easier to override
+    # in the adapter, as many databases do not support just a plain offset with
+    # no limit.
+    def select_only_offset_sql(sql)
+      sql << OFFSET
+      literal_append(sql, @opts[:offset])
+    end
+  
     def select_order_sql(sql)
       if o = @opts[:order]
         sql << ORDER_BY
@@ -1298,7 +1460,6 @@ module Sequel
       sql << SELECT
     end
 
-    # Modify the sql to add the filter criteria in the WHERE clause
     def select_where_sql(sql)
       if w = @opts[:where]
         sql << WHERE
@@ -1308,8 +1469,8 @@ module Sequel
     alias delete_where_sql select_where_sql
     alias update_where_sql select_where_sql
     
-    # SQL Fragment specifying the WITH clause
     def select_with_sql(sql)
+      return unless supports_cte?
       ws = opts[:with]
       return if !ws || ws.empty?
       sql << select_with_sql_base
@@ -1338,7 +1499,18 @@ module Sequel
       SQL_WITH
     end
 
-    # Converts an array of source names into into a comma separated list.
+    # Whether the symbol cache should be skipped when literalizing the dataset
+    def skip_symbol_cache?
+      @skip_symbol_cache
+    end
+
+    # Set the dataset to skip the symbol cache
+    def skip_symbol_cache!
+      @skip_symbol_cache = true
+    end
+
+    # Append literalization of array of sources/tables to SQL string, raising an Error if there
+    # are no sources.
     def source_list_append(sql, sources)
       raise(Error, 'No source specified for query') if sources.nil? || sources == []
       identifier_list_append(sql, sources)
@@ -1357,7 +1529,8 @@ module Sequel
     
     # SQL to use if this dataset uses static SQL.  Since static SQL
     # can be a PlaceholderLiteralString in addition to a String,
-    # we literalize nonstrings.
+    # we literalize nonstrings.  If there is an append_sql for this
+    # dataset, append to that SQL instead of returning the value.
     def static_sql(sql)
       if append_sql = @opts[:append_sql]
         if sql.is_a?(String)
@@ -1374,25 +1547,17 @@ module Sequel
       end
     end
 
-    # SQL fragment for a subselect using the given database's SQL.
+    # Append literalization of the subselect to SQL String.
     def subselect_sql_append(sql, ds)
       ds.clone(:append_sql=>sql).sql
     end
 
-    # The order of methods to call to build the UPDATE SQL statement
-    def update_clause_methods
-      UPDATE_CLAUSE_METHODS
-    end
-
-    # SQL fragment specifying the tables from with to delete.
-    # Includes join table if modifying joins is allowed.
     def update_table_sql(sql)
       sql << SPACE
       source_list_append(sql, @opts[:from])
       select_join_sql(sql) if supports_modifying_joins?
     end
 
-    # The SQL fragment specifying the columns and values to SET.
     def update_set_sql(sql)
       values = opts[:values]
       sql << SET
diff --git a/lib/sequel/extensions/columns_introspection.rb b/lib/sequel/extensions/columns_introspection.rb
index 798af01..4d93311 100644
--- a/lib/sequel/extensions/columns_introspection.rb
+++ b/lib/sequel/extensions/columns_introspection.rb
@@ -49,7 +49,7 @@ module Sequel
           from.probable_columns
         when Symbol, SQL::Identifier, SQL::QualifiedIdentifier
           schemas = db.instance_variable_get(:@schemas)
-          if schemas && (sch = Sequel.synchronize{schemas[literal(from)]})
+          if schemas && (table = literal(from)) && (sch = Sequel.synchronize{schemas[table]})
             sch.map{|c,_| c}
           end
         end
@@ -71,7 +71,7 @@ module Sequel
         col = c.column
         col.is_a?(SQL::Identifier) ? col.value.to_sym : col.to_sym
       when SQL::AliasedExpression
-        a = c.aliaz
+        a = c.alias
         a.is_a?(SQL::Identifier) ? a.value.to_sym : a.to_sym
       end
     end
diff --git a/lib/sequel/extensions/constraint_validations.rb b/lib/sequel/extensions/constraint_validations.rb
index b861849..ab4bc8a 100644
--- a/lib/sequel/extensions/constraint_validations.rb
+++ b/lib/sequel/extensions/constraint_validations.rb
@@ -70,8 +70,8 @@
 # length_range 3...5 :: CHECK char_length(column) >= 3 AND char_length(column) < 5
 # format /foo\\d+/ :: CHECK column ~ 'foo\\d+'
 # format /foo\\d+/i :: CHECK column ~* 'foo\\d+'
-# like 'foo%' :: CHECK column LIKE 'foo%'
-# ilike 'foo%' :: CHECK column ILIKE 'foo%'
+# like 'foo%' :: CHECK column LIKE 'foo%' ESCAPE '\'
+# ilike 'foo%' :: CHECK column ILIKE 'foo%' ESCAPE '\'
 # includes ['a', 'b'] :: CHECK column IN ('a', 'b')
 # includes [1, 2] :: CHECK column IN (1, 2)
 # includes 3..5 :: CHECK column >= 3 AND column <= 5
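
For reference, a hedged sketch of the DSL that produces these CHECK constraints; the items table and name column are invented:

    DB.extension :constraint_validations
    DB.create_constraint_validations_table

    DB.create_table(:items) do
      primary_key :id
      String :name

      validate do
        min_length 3, :name
        like 'foo%', :name   # now generates CHECK name LIKE 'foo%' ESCAPE '\'
      end
    end
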
@@ -401,10 +401,10 @@ module Sequel
       end
 
       ds = from(:sequel_constraint_validations)
-      ds.multi_insert(rows.flatten)
       unless drop_rows.empty?
         ds.where([:table, :constraint_name]=>drop_rows).delete
       end
+      ds.multi_insert(rows.flatten)
     end
 
     # Add the constraint to the generator, including a NOT NULL constraint
diff --git a/lib/sequel/extensions/current_datetime_timestamp.rb b/lib/sequel/extensions/current_datetime_timestamp.rb
new file mode 100644
index 0000000..0986e9e
--- /dev/null
+++ b/lib/sequel/extensions/current_datetime_timestamp.rb
@@ -0,0 +1,57 @@
+# The current_datetime_timestamp extension makes Dataset#current_datetime
+# return an object that operates like Sequel.datetime_class.now, but will
+# be literalized as CURRENT_TIMESTAMP.
+#
+# This allows you to use the defaults_setter, timestamps, and touch
+# model plugins and make sure that CURRENT_TIMESTAMP is used instead of
+# a literalized timestamp value.
+#
+# The reason that CURRENT_TIMESTAMP is better than a literalized version
+# of the timestamp is that it obeys correct transactional semantics
+# (all calls to CURRENT_TIMESTAMP in the same transaction return the
+# same timestamp, at least on some databases).
+#
+# To have current_datetime be literalized as CURRENT_TIMESTAMP for
+# a single dataset:
+#
+#   ds = ds.extension(:current_datetime_timestamp)
+#
+# To have current_datetime be literalized as CURRENT_TIMESTAMP for all
+# datasets of a given database:
+#
+#   DB.extension(:current_datetime_timestamp)
+
+module Sequel
+  module CurrentDateTimeTimestamp
+    module DatasetMethods
+      # Return an instance of Sequel.datetime_class that will be literalized
+      # as CURRENT_TIMESTAMP.
+      def current_datetime
+        MAP.fetch(Sequel.datetime_class).now
+      end
+
+      private
+
+      # Literalize custom DateTime subclass objects as CURRENT_TIMESTAMP.
+      def literal_datetime_append(sql, v)
+        v.is_a?(DateTime) ? literal_append(sql, Sequel::CURRENT_TIMESTAMP) : super
+      end
+
+      # Literalize custom Time subclass objects as CURRENT_TIMESTAMP.
+      def literal_time_append(sql, v)
+        v.is_a?(Time) ? literal_append(sql, Sequel::CURRENT_TIMESTAMP) : super
+      end
+    end
+
+    # Time subclass literalized as CURRENT_TIMESTAMP
+    class Time < ::Time; end
+
+    # DateTime subclass literalized as CURRENT_TIMESTAMP
+    class DateTime < ::DateTime; end
+
+    # Mapping of Time/DateTime classes to subclasses literalized as CURRENT_TIMESTAMP
+    MAP = {::Time=>Time, ::DateTime=>DateTime}
+  end
+
+  Dataset.register_extension(:current_datetime_timestamp, CurrentDateTimeTimestamp::DatasetMethods)
+end
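
A hedged usage sketch combining this extension with the timestamps model plugin; the Album model and its timestamp columns are assumed:

    DB.extension :current_datetime_timestamp

    class Album < Sequel::Model
      plugin :timestamps, :update_on_create=>true
    end

    # created_at/updated_at are literalized as CURRENT_TIMESTAMP instead of
    # a fixed Time value when the row is inserted.
    Album.create(:name=>'RF')
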
diff --git a/lib/sequel/extensions/date_arithmetic.rb b/lib/sequel/extensions/date_arithmetic.rb
index 7878e06..420584b 100644
--- a/lib/sequel/extensions/date_arithmetic.rb
+++ b/lib/sequel/extensions/date_arithmetic.rb
@@ -82,7 +82,7 @@ module Sequel
             each_valid_interval_unit(h, DEF_DURATION_UNITS) do |value, sql_unit|
               args << "#{value} #{sql_unit}"
             end
-            return _function_sql_append(sql, :datetime, args)
+            return function_sql_append(sql, Sequel.function(:datetime, *args))
           when :mysql, :hsqldb, :cubrid
             if db_type == :hsqldb
               # HSQLDB requires 2.2.9+ for the DATE_ADD function
@@ -91,9 +91,9 @@ module Sequel
             each_valid_interval_unit(h, MYSQL_DURATION_UNITS) do |value, sql_unit|
               expr = Sequel.function(:DATE_ADD, expr, Sequel.lit(["INTERVAL ", " "], value, sql_unit))
             end
-          when :mssql, :h2, :access
+          when :mssql, :h2, :access, :sqlanywhere
             units = case db_type
-            when :mssql
+            when :mssql, :sqlanywhere
               MSSQL_DURATION_UNITS
             when :h2
               H2_DURATION_UNITS
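
For context, a hedged sketch of the public API whose SQL generation this hunk adjusts; the events table is illustrative:

    DB.extension :date_arithmetic   # also defines Sequel.date_add / Sequel.date_sub

    # On the databases handled above this becomes DATE_ADD/DATEADD/datetime(...)
    # with the appropriate per-database duration units.
    expires_at = Sequel.date_add(:created_at, :days=>14)
    recent = DB[:events].where{created_at > Sequel.date_sub(Sequel::CURRENT_TIMESTAMP, :hours=>1)}
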
diff --git a/lib/sequel/extensions/eval_inspect.rb b/lib/sequel/extensions/eval_inspect.rb
index baf1b15..62c2eeb 100644
--- a/lib/sequel/extensions/eval_inspect.rb
+++ b/lib/sequel/extensions/eval_inspect.rb
@@ -74,7 +74,7 @@ module Sequel
             Sequel.eval_inspect(send(arg))
           end
         end
-        "#{klass}.new(#{args.join(', ')})"
+        "#{klass}.#{inspect_new_method}(#{args.join(', ')})"
       end
 
       private
@@ -83,6 +83,11 @@ module Sequel
       def inspect_args
         self.class.comparison_attrs
       end
+
+      # Use the new method by default for creating new objects.
+      def inspect_new_method
+        :new
+      end
     end
 
     class ComplexExpression
@@ -127,9 +132,10 @@ module Sequel
     class Function
       private
 
-      # Function's initializer uses a splat for the function arguments.
-      def inspect_args
-        [:f, "*args"]
+      # Function uses a new! method for creating functions with options,
+      # since Function.new does not allow for an options hash.
+      def inspect_new_method
+        :new!
       end
     end
 
@@ -139,7 +145,7 @@ module Sequel
       # JoinOnClause's initializer takes the on argument as the first argument
       # instead of the last.
       def inspect_args
-        [:on, :join_type, :table, :table_alias] 
+        [:on, :join_type, :table_expr] 
       end
     end
 
@@ -149,7 +155,7 @@ module Sequel
       # JoinUsingClause's initializer takes the using argument as the first argument
       # instead of the last.
       def inspect_args
-        [:using, :join_type, :table, :table_alias] 
+        [:using, :join_type, :table_expr] 
       end
     end
 
diff --git a/lib/sequel/extensions/migration.rb b/lib/sequel/extensions/migration.rb
index b0a0da3..6d4ff98 100644
--- a/lib/sequel/extensions/migration.rb
+++ b/lib/sequel/extensions/migration.rb
@@ -373,7 +373,7 @@ module Sequel
       migrator_class(directory).new(db, directory, opts).is_current?
     end
 
-    # Migrates the supplied database using the migration files in the the specified directory. Options:
+    # Migrates the supplied database using the migration files in the specified directory. Options:
     # :allow_missing_migration_files :: Don't raise an error if there are missing migration files.
     # :column :: The column in the :table argument storing the migration version (default: :version).
     # :current :: The current version of the database.  If not given, it is retrieved from the database
diff --git a/lib/sequel/extensions/mssql_emulate_lateral_with_apply.rb b/lib/sequel/extensions/mssql_emulate_lateral_with_apply.rb
index 95b8dbe..3c95312 100644
--- a/lib/sequel/extensions/mssql_emulate_lateral_with_apply.rb
+++ b/lib/sequel/extensions/mssql_emulate_lateral_with_apply.rb
@@ -1,12 +1,13 @@
 # The mssql_emulate_lateral_with_apply extension converts
 # queries that use LATERAL into queries that use CROSS/OUTER
 # APPLY, allowing code that works on databases that support
-# LATERAL via Dataset#lateral to run on Microsoft SQL Server.
+# LATERAL via Dataset#lateral to run on Microsoft SQL Server
+# and Sybase SQLAnywhere.
 #
 # This is available as a separate extension instead of
-# integrated into the Microsoft SQL Server support because
-# few people need it and there is a performance hit to
-# code that doesn't use it.
+# integrated into the Microsoft SQL Server and Sybase
+# SQLAnywhere support because few people need it and there
+# is a performance hit to code that doesn't use it.
 #
 # It is possible there are cases where this emulation does
 # not work.  Users should probably verify that correct
@@ -57,7 +58,7 @@ module Sequel
         ds = from(*source)
         lateral.each do |l|
           l = if l.is_a?(Sequel::SQL::AliasedExpression)
-            l.expression.clone(:lateral=>nil).as(l.aliaz)
+            l.expression.clone(:lateral=>nil).as(l.alias)
           else
             l.clone(:lateral=>nil)
           end
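
A hedged sketch of the kind of query this extension rewrites; the table and column names are invented:

    DB.extension :mssql_emulate_lateral_with_apply

    # Written portably with LATERAL...
    ds = DB.from(:artists, DB[:albums].where(:albums__artist_id=>:artists__id).lateral)
    # ...the FROM clause is rewritten to use CROSS/OUTER APPLY on MSSQL/SQL Anywhere.
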
diff --git a/lib/sequel/extensions/null_dataset.rb b/lib/sequel/extensions/null_dataset.rb
index d63eed3..8266865 100644
--- a/lib/sequel/extensions/null_dataset.rb
+++ b/lib/sequel/extensions/null_dataset.rb
@@ -2,10 +2,6 @@
 # returns a cloned dataset that will never issue a query to the
 # database.  It implements the null object pattern for datasets.
 #
-# To load the extension:
-#
-#   Sequel.extension :null_dataset
-#
 # The most common usage is probably in a method that must return
 # a dataset, where the method knows the dataset shouldn't return
 # anything.  With standard Sequel, you'd probably just add a
@@ -26,6 +22,14 @@
 # the same options to get the columns.
 #
 # This extension uses Object#extend at runtime, which can hurt performance.
+#
+# To add the nullify method to a single dataset:
+#
+#   ds = ds.extension(:null_dataset)
+#
+# To add the nullify method to all datasets on a single database:
+#
+#   DB.extension(:null_dataset)
 
 module Sequel
   class Dataset
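
A hedged usage sketch; the items table is assumed:

    ds = DB[:items].extension(:null_dataset).nullify
    ds.all                 # => []  (no query is sent)
    ds.where(:id=>1).all   # => []  (chained dataset methods still work)
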
diff --git a/lib/sequel/extensions/pg_array.rb b/lib/sequel/extensions/pg_array.rb
index 707778e..dcacf6c 100644
--- a/lib/sequel/extensions/pg_array.rb
+++ b/lib/sequel/extensions/pg_array.rb
@@ -17,13 +17,13 @@
 #
 #   Sequel.pg_array(array)
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html],
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html]
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Array#pg_array:
 #
 #   array.pg_array
 #
-# You can also provide a type, though it many cases it isn't necessary:
+# You can also provide a type, though in many cases it isn't necessary:
 #
 #   Sequel.pg_array(array, :varchar) # or :integer, :"double precision", etc.
 #   array.pg_array(:varchar) # or :integer, :"double precision", etc.
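
A hedged end-to-end sketch, assuming a posts table with a text[] tags column:

    DB.extension :pg_array

    DB[:posts].insert(:tags=>Sequel.pg_array(%w[ruby sequel], :text))
    DB[:posts].get(:tags)   # => PGArray wrapping ["ruby", "sequel"] on the native postgres adapter
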
@@ -36,6 +36,9 @@
 #
 #   DB.extension :pg_array
 #
+# See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
+# for details on using postgres array columns in CREATE/ALTER TABLE statements.
+#
 # If you are not using the native postgres adapter and are using array
 # types as model column values you probably should use the
 # typecast_on_load plugin if the column values are returned as a
@@ -70,35 +73,10 @@
 # If you want an easy way to call PostgreSQL array functions and
 # operators, look into the pg_array_ops extension.
 #
-# This extension requires both the json and delegate libraries.
-#
-# == Additional License
-#
-# PGArray::Parser code was translated from Javascript code in the
-# node-postgres project and has the following additional license:
-# 
-# Copyright (c) 2010 Brian Carlson (brian.m.carlson at gmail.com)
-# 
-# Permission is hereby granted, free of charge, to any person obtaining
-# a copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish,
-# distribute, sublicense, and/or sell copies of the Software, and to
-# permit persons to whom the Software is furnished to do so, subject
-# to the following conditions:
-# 
-# The above copyright notice and this permission notice shall be included
-# in all copies or substantial portions of the Software.
-# 
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
-# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
-# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+# This extension requires the json, strscan, and delegate libraries.
 
 require 'delegate'
+require 'strscan'
 require 'json'
 Sequel.require 'adapters/utils/pg_types'
 
@@ -156,10 +134,6 @@ module Sequel
       # :type_symbol :: The base of the schema type symbol for this type.  For example, if you provide
       #                 :integer, Sequel will recognize this type as :integer_array during schema parsing.
       #                 Defaults to the db_type argument.
-      # :typecast_method :: If given, specifies the :type_symbol option, but additionally causes no
-      #                     typecasting method to be created in the database.  This should only be used
-      #                     to alias existing array types.  For example, if there is an array type that can be
-      #                     treated just like an integer array, you can do :typecast_method=>:integer.
       # :typecast_method_map :: The map in which to place the database type string to type symbol mapping.
       #                         Defaults to ARRAY_TYPES.
       # :typecast_methods_module :: If given, a module object to add the typecasting method to.  Defaults
@@ -168,8 +142,7 @@ module Sequel
       # If a block is given, it is treated as the :converter option.
       def self.register(db_type, opts=OPTS, &block)
         db_type = db_type.to_s
-        typecast_method = opts[:typecast_method]
-        type = (typecast_method || opts[:type_symbol] || db_type).to_sym
+        type = (opts[:type_symbol] || db_type).to_sym
         type_procs = opts[:type_procs] || PG_TYPES
         mod = opts[:typecast_methods_module] || DatabaseMethods
         typecast_method_map = opts[:typecast_method_map] || ARRAY_TYPES
@@ -182,7 +155,7 @@ module Sequel
 
         if soid = opts[:scalar_oid]
           raise Error, "can't provide both a converter and :scalar_oid option to register" if converter 
-          raise Error, "no conversion proc for :scalar_oid=>#{soid.inspect}" unless converter = type_procs[soid]
+          converter = type_procs[soid]
         end
 
         array_type = (opts[:array_type] || db_type).to_s.dup.freeze
@@ -190,7 +163,7 @@ module Sequel
 
         typecast_method_map[db_type] = :"#{type}_array"
 
-        define_array_typecast_method(mod, type, creator, opts.fetch(:scalar_typecast, type)) unless typecast_method
+        define_array_typecast_method(mod, type, creator, opts.fetch(:scalar_typecast, type))
 
         if oid = opts[:oid]
           type_procs[oid] = creator
@@ -223,15 +196,14 @@ module Sequel
         def self.extended(db)
           db.instance_eval do
             @pg_array_schema_types ||= {}
+            procs = conversion_procs
+            procs[1115] = Creator.new("timestamp without time zone", procs[1114])
+            procs[1185] = Creator.new("timestamp with time zone", procs[1184])
             copy_conversion_procs([1009, 1007, 1016, 1231, 1022, 1000, 1001, 1182, 1183, 1270, 1005, 1028, 1021, 1014, 1015])
             [:string_array, :integer_array, :decimal_array, :float_array, :boolean_array, :blob_array, :date_array, :time_array, :datetime_array].each do |v|
               @schema_type_classes[v] = PGArray
             end
           end
-
-          procs = db.conversion_procs
-          procs[1115] = Creator.new("timestamp without time zone", procs[1114])
-          procs[1185] = Creator.new("timestamp with time zone", procs[1184])
         end
 
         # Handle arrays in bound variables
@@ -258,7 +230,8 @@ module Sequel
             opts[:oid] = array_oid unless opts.has_key?(:oid)
           end
           PGArray.register(db_type, opts, &block)
-          @schema_type_classes[:"#{opts[:typecast_method] || opts[:type_symbol] || db_type}_array"] = PGArray
+          @schema_type_classes[:"#{opts[:type_symbol] || db_type}_array"] = PGArray
+          conversion_procs_updated
         end
 
         # Return PGArray if this type matches any supported array type.
@@ -331,7 +304,6 @@ module Sequel
         #   typecast all members of the array in ruby for performance reasons, but
         #   it will cast the array the appropriate database type when the array is
         #   literalized.
-        # * If given a String, call the parser for the subclass with it.
         def typecast_value_pg_array(value, creator, scalar_typecast_method=nil)
           case value
           when PGArray
@@ -351,45 +323,26 @@ module Sequel
         end
       end
 
-      # PostgreSQL array parser that handles all types of input.
-      #
-      # This parser is very simple and unoptimized, but should still
-      # be O(n) where n is the length of the input string.
-      class Parser
-        # Current position in the input string.
-        attr_reader :pos
+      # PostgreSQL array parser that handles PostgreSQL array output format.
+      # Note that it does not handle all forms of input that PostgreSQL will
+      # accept, and it will not raise an error for all forms of invalid input.
+      class Parser < StringScanner
+        UNQUOTED_RE = /[{}",]|[^{}",]+/
+        QUOTED_RE = /["\\]|[^"\\]+/
+        NULL_RE = /NULL",/
+        OPEN_RE = /\{/
 
         # Set the source for the input, and any converter callable
         # to call with objects to be created.  For nested parsers
        # the source may contain text after the end of the current parse,
         # which will be ignored.
         def initialize(source, converter=nil)
-          @source = source
-          @source_length = source.length
+          super(source)
           @converter = converter 
-          @pos = -1
-          @entries = []
+          @stack = [[]]
           @recorded = ""
-          @dimension = 0
-        end
-
-        # Return 2 objects, whether the next character in the input
-        # was escaped with a backslash, and what the next character is.
-        def next_char
-          @pos += 1
-          if (c = @source[@pos.. at pos]) == BACKSLASH
-            @pos += 1
-            [true, @source[@pos.. at pos]]
-          else
-            [false, c]
-          end
         end
 
-        # Add a new character to the buffer of recorded characters.
-        def record(c)
-          @recorded << c
-        end
-        
         # Take the buffer of recorded characters and add it to the array
         # of entries, and use a new buffer for recorded characters.
         def new_entry(include_empty=false)
@@ -400,53 +353,65 @@ module Sequel
             elsif @converter
               entry = @converter.call(entry)
             end
-            @entries.push(entry)
+            @stack.last.push(entry)
             @recorded = ""
           end
         end
 
         # Parse the input character by character, returning an array
         # of parsed (and potentially converted) objects.
-        def parse(nested=false)
-          # quote sets whether we are inside of a quoted string.
-          quote = false
-          until @pos >= @source_length
-            escaped, char = next_char
-            if char == OPEN_BRACE && !quote
-              @dimension += 1
-              if (@dimension > 1)
-                # Multi-dimensional array encounter, use a subparser
-                # to parse the next level down.
-                subparser = self.class.new(@source[@pos..-1], @converter)
-                @entries.push(subparser.parse(true))
-                @pos += subparser.pos - 1
-              end
-            elsif char == CLOSE_BRACE && !quote
-              @dimension -= 1
-              if (@dimension == 0)
-                new_entry
-                # Exit early if inside a subparser, since the
-                # text after parsing the current level should be
-                # ignored as it is handled by the parent parser.
-                return @entries if nested
+        def parse
+          raise Sequel::Error, "invalid array, empty string" if eos?
+          raise Sequel::Error, "invalid array, doesn't start with {" unless scan(OPEN_RE)
+
+          while !eos?
+            char = scan(UNQUOTED_RE)
+            if char == COMMA
+              # Comma outside quoted string indicates end of current entry
+              new_entry
+            elsif char == QUOTE
+              raise Sequel::Error, "invalid array, opening quote with existing recorded data" unless @recorded.empty?
+              while true
+                char = scan(QUOTED_RE)
+                if char == BACKSLASH
+                  @recorded << getch
+                elsif char == QUOTE
+                  n = peek(1)
+                  raise Sequel::Error, "invalid array, closing quote not followed by comma or closing brace" unless n == COMMA || n == CLOSE_BRACE
+                  break
+                else
+                  @recorded << char
+                end
               end
-            elsif char == QUOTE && !escaped
-              # If already inside the quoted string, this is the
-              # ending quote, so add the entry.  Otherwise, this
-              # is the opening quote, so set the quote flag.
-              new_entry(true) if quote
-              quote = !quote
-            elsif char == COMMA && !quote
-              # If not inside a string and a comma occurs, it indicates
-              # the end of the entry, so add the entry.
+              new_entry(true)
+            elsif char == OPEN_BRACE
+              raise Sequel::Error, "invalid array, opening brace with existing recorded data" unless @recorded.empty?
+
+              # Start of new array, add it to the stack
+              new = []
+              @stack.last << new
+              @stack << new
+            elsif char == CLOSE_BRACE
+              # End of current array, add current entry to the current array
               new_entry
+
+              if @stack.length == 1
+                raise Sequel::Error, "array parsing finished without parsing entire string" unless eos?
+
+                # Top level of array, parsing should be over.
+                # Pop current array off stack and return it as result
+                return @stack.pop
+              else
+                # Nested array, pop current array off stack
+                @stack.pop
+              end
             else
               # Add the character to the recorded character buffer.
-              record(char)
+              @recorded << char
             end
           end
-          raise Sequel::Error, "array dimensions not balanced" unless @dimension == 0
-          @entries
+
+          raise Sequel::Error, "array parsing finished with array unclosed"
         end
       end unless Sequel::Postgres.respond_to?(:parse_pg_array)
 
@@ -568,11 +533,11 @@ module Sequel
       register('time with time zone', :oid=>1270, :scalar_oid=>1083, :type_symbol=>:time_timezone, :scalar_typecast=>:time)
       register('timestamp with time zone', :oid=>1185, :scalar_oid=>1184, :type_symbol=>:datetime_timezone, :scalar_typecast=>:datetime)
 
-      register('smallint', :oid=>1005, :parser=>:json, :typecast_method=>:integer)
-      register('oid', :oid=>1028, :parser=>:json, :typecast_method=>:integer)
-      register('real', :oid=>1021, :scalar_oid=>701, :typecast_method=>:float)
-      register('character', :oid=>1014, :array_type=>:text, :typecast_method=>:string)
-      register('character varying', :oid=>1015, :typecast_method=>:string)
+      register('smallint', :oid=>1005, :parser=>:json, :scalar_typecast=>:integer)
+      register('oid', :oid=>1028, :parser=>:json, :scalar_typecast=>:integer)
+      register('real', :oid=>1021, :scalar_oid=>700, :scalar_typecast=>:float)
+      register('character', :oid=>1014, :array_type=>:text, :scalar_typecast=>:string)
+      register('character varying', :oid=>1015, :scalar_typecast=>:string, :type_symbol=>:varchar)
     end
   end
 
diff --git a/lib/sequel/extensions/pg_array_ops.rb b/lib/sequel/extensions/pg_array_ops.rb
index ba217b6..982a67a 100644
--- a/lib/sequel/extensions/pg_array_ops.rb
+++ b/lib/sequel/extensions/pg_array_ops.rb
@@ -19,8 +19,8 @@
 #
 #   ia = Sequel.expr(:int_array_column).pg_array
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Symbol#pg_array:
 #
 #   ia = :int_array_column.pg_array
@@ -41,7 +41,10 @@
 #
 #   ia.any             # ANY(int_array_column)
 #   ia.all             # ALL(int_array_column)
+#   ia.cardinality     # cardinality(int_array_column)
 #   ia.dims            # array_dims(int_array_column)
+#   ia.hstore          # hstore(int_array_column)
+#   ia.hstore(:a)      # hstore(int_array_column, a)
 #   ia.length          # array_length(int_array_column, 1)
 #   ia.length(2)       # array_length(int_array_column, 2)
 #   ia.lower           # array_lower(int_array_column, 1)
@@ -50,6 +53,7 @@
 #   ia.join(':')       # array_to_string(int_array_column, ':', NULL)
 #   ia.join(':', ' ')  # array_to_string(int_array_column, ':', ' ')
 #   ia.unnest          # unnest(int_array_column)
+#   ia.unnest(:b)      # unnest(int_array_column, b)
 # 
 # See the PostgreSQL array function and operator documentation for more
 # details on what these functions and operators do.
@@ -57,6 +61,10 @@
 # If you are also using the pg_array extension, you should load it before
 # loading this extension.  Doing so will allow you to use PGArray#op to get
 # an ArrayOp, allowing you to perform array operations on array literals.
+#
+# In order for #hstore to automatically wrap the returned value correctly in
+# an HStoreOp, you need to load the pg_hstore_ops extension.
+
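
A hedged sketch of the newly documented methods in a query; the items table is assumed, and the SQL in the comments is approximate:

    Sequel.extension :pg_array_ops
    ia = Sequel.pg_array_op(:int_array_column)

    DB[:items].select(ia.cardinality)   # SELECT cardinality(int_array_column) FROM items
    DB[:items].select(ia.unnest(:b))    # SELECT unnest(int_array_column, b) FROM items
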
 module Sequel
   module Postgres
     # The ArrayOp class is a simple container for a single object that
@@ -105,6 +113,13 @@ module Sequel
         function(:ANY)
       end
 
+      # Call the cardinality method:
+      #
+      #   array_op.cardinality # cardinality(array)
+      def cardinality
+        function(:cardinality)
+      end
+
       # Use the contains (@>) operator:
       #
       #   array_op.contains(:a) # (array @> a)
@@ -209,8 +224,8 @@ module Sequel
       # Call the unnest method:
       #
       #   array_op.unnest # unnest(array)
-      def unnest
-        function(:unnest)
+      def unnest(*args)
+        function(:unnest, *args.map{|a| wrap_array(a)})
       end
       
       # Use the concatenation (||) operator, reversing the order:
diff --git a/lib/sequel/extensions/pg_hstore.rb b/lib/sequel/extensions/pg_hstore.rb
index fb4b385..92269a4 100644
--- a/lib/sequel/extensions/pg_hstore.rb
+++ b/lib/sequel/extensions/pg_hstore.rb
@@ -19,8 +19,8 @@
 #
 #   Sequel.hstore(hash)
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Hash#hstore:
 # 
 #   hash.hstore
@@ -28,16 +28,16 @@
 # Since the hstore type only supports strings, non string keys and
 # values are converted to strings
 #
-#   {:foo=>1}.hstore.to_hash # {'foo'=>'1'}
-#   v = {}.hstore
+#   Sequel.hstore(:foo=>1).to_hash # {'foo'=>'1'}
+#   v = Sequel.hstore({})
 #   v[:foo] = 1
 #   v # {'foo'=>'1'}
 #
 # However, to make life easier, lookups by key are converted to
 # strings (even when accessing the underlying hash directly):
 #
-#   {'foo'=>'bar'}.hstore[:foo] # 'bar'
-#   {'foo'=>'bar'}.hstore.to_hash[:foo] # 'bar'
+#   Sequel.hstore('foo'=>'bar')[:foo] # 'bar'
+#   Sequel.hstore('foo'=>'bar').to_hash[:foo] # 'bar'
 # 
 # HStore instances mostly just delegate to the underlying hash
 # instance, so Hash methods that modify the receiver or returned
@@ -66,7 +66,7 @@
 #
 # If you want to insert a hash into an hstore database column:
 #
-#   DB[:table].insert(:column=>{'foo'=>'bar'}.hstore)
+#   DB[:table].insert(:column=>Sequel.hstore('foo'=>'bar'))
 #
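
A hedged round-trip sketch; it assumes an items table with an hstore column named h:

    DB.extension :pg_hstore

    DB[:items].insert(:h=>Sequel.hstore('name'=>'a', 'size'=>1))
    DB[:items].get(:h)   # => HStore {'name'=>'a', 'size'=>'1'}  (keys and values stringified)
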
 # If you would like to use hstore columns in your model objects, you
 # probably want to modify the schema parsing/typecasting so that it
@@ -75,6 +75,9 @@
 #
 #   DB.extension :pg_hstore
 #
+# See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
+# for details on using hstore columns in CREATE/ALTER TABLE statements.
+#
 # If you are not using the native postgres adapter and are using hstore
 # types as model column values you probably should use the
 # typecast_on_load plugin if the column values are returned as a
diff --git a/lib/sequel/extensions/pg_hstore_ops.rb b/lib/sequel/extensions/pg_hstore_ops.rb
index ac5816d..167b3d6 100644
--- a/lib/sequel/extensions/pg_hstore_ops.rb
+++ b/lib/sequel/extensions/pg_hstore_ops.rb
@@ -20,8 +20,8 @@
 #
 #   h = Sequel.expr(:hstore_column).hstore
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Symbol#hstore:
 #
 #   h = :hstore_column.hstore
@@ -58,6 +58,13 @@
 # If you are also using the pg_hstore extension, you should load it before
 # loading this extension.  Doing so will allow you to use HStore#op to get
 # an HStoreOp, allowing you to perform hstore operations on hstore literals.
+#
+# Some of these methods will accept ruby arrays and convert them automatically to
+# PostgreSQL arrays if you have the pg_array extension loaded.  Some of these methods
+# will accept ruby hashes and convert them automatically to PostgreSQL hstores if the
+# pg_hstore extension is loaded.  Methods representing expressions that return
+# PostgreSQL arrays will have the returned expression automatically wrapped in a
+# Postgres::ArrayOp if the pg_array_ops extension is loaded.
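
A hedged sketch of those conversions; hstore_column is illustrative and the SQL in the comments is approximate:

    Sequel.extension :pg_hstore_ops
    h = Sequel.hstore_op(:hstore_column)

    h.contain_all(%w[a b])   # (hstore_column ?& ARRAY['a','b'])  -- array wrapped via pg_array
    h.contains('a'=>'b')     # (hstore_column @> '"a"=>"b"')      -- hash wrapped via pg_hstore
    h.keys                   # akeys(hstore_column), wrapped in an ArrayOp if pg_array_ops is loaded
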
 
 module Sequel
   module Postgres
diff --git a/lib/sequel/extensions/pg_inet.rb b/lib/sequel/extensions/pg_inet.rb
index 1712b93..cf0cc99 100644
--- a/lib/sequel/extensions/pg_inet.rb
+++ b/lib/sequel/extensions/pg_inet.rb
@@ -25,6 +25,9 @@
 # addresses, so these will still be returned as strings.  The exception
 # to this is that the pg_array extension integration will recognize
 # macaddr[] types and return them as arrays of strings.
+#
+# See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
+# for details on using inet/cidr columns in CREATE/ALTER TABLE statements.
 
 require 'ipaddr'
 Sequel.require 'adapters/utils/pg_types'
diff --git a/lib/sequel/extensions/pg_interval.rb b/lib/sequel/extensions/pg_interval.rb
index 379a9c2..155ee11 100644
--- a/lib/sequel/extensions/pg_interval.rb
+++ b/lib/sequel/extensions/pg_interval.rb
@@ -31,6 +31,9 @@
 # very simple, and is only designed to parse PostgreSQL's default output
 # format, it is not designed to support all input formats that PostgreSQL
 # supports.
+#
+# See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
+# for details on using interval columns in CREATE/ALTER TABLE statements.
 
 require 'active_support/duration'
 Sequel.require 'adapters/utils/pg_types'
diff --git a/lib/sequel/extensions/pg_json.rb b/lib/sequel/extensions/pg_json.rb
index 55d8c8a..ad052b7 100644
--- a/lib/sequel/extensions/pg_json.rb
+++ b/lib/sequel/extensions/pg_json.rb
@@ -23,8 +23,8 @@
 #   Sequel.pg_json(array)
 #   Sequel.pg_json(hash)
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Array#pg_json and Hash#pg_json:
 #
 #   array.pg_json
@@ -46,6 +46,9 @@
 # types as model column values you probably should use the
 # pg_typecast_on_load plugin if the column values are returned as a string.
 #
+# See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
+# for details on using json columns in CREATE/ALTER TABLE statements.
+#
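
A hedged sketch of the jsonb support added in this file; it assumes an items table with a jsonb column named doc:

    DB.extension :pg_json

    DB[:items].insert(:doc=>Sequel.pg_jsonb('a'=>1))   # literalized as '{"a":1}'::jsonb
    DB[:items].get(:doc)                               # => JSONBHash wrapping {'a'=>1} on the native adapter
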
 # This extension integrates with the pg_array extension.  If you plan
 # to use the json[] type, load the pg_array extension before the
 # pg_json extension:
@@ -61,42 +64,74 @@ Sequel.require 'adapters/utils/pg_types'
 module Sequel
   module Postgres
     CAST_JSON = '::json'.freeze
+    CAST_JSONB = '::jsonb'.freeze
 
-    # Class representating PostgreSQL JSON column array values.
-    class JSONArray < DelegateClass(Array)
+    # Class representing PostgreSQL JSON/JSONB column array values.
+    class JSONArrayBase < DelegateClass(Array)
       include Sequel::SQL::AliasMethods
+      include Sequel::SQL::CastMethods
 
-      # Convert the array to a json string, append a
-      # literalized version of the string to the sql, and explicitly
-      # cast the string to json.
+      # Convert the array to a json string and append a
+      # literalized version of the string to the sql.
       def sql_literal_append(ds, sql)
         ds.literal_append(sql, Sequel.object_to_json(self))
+      end
+    end
+
+    class JSONArray < JSONArrayBase
+      # Cast as json
+      def sql_literal_append(ds, sql)
+        super
         sql << CAST_JSON
       end
     end
 
-    # Class representating PostgreSQL JSON column hash/object values.
-    class JSONHash < DelegateClass(Hash)
+    class JSONBArray < JSONArrayBase
+      # Cast as jsonb
+      def sql_literal_append(ds, sql)
+        super
+        sql << CAST_JSONB
+      end
+    end
+
+    # Class representing PostgreSQL JSON/JSONB column hash/object values.
+    class JSONHashBase < DelegateClass(Hash)
       include Sequel::SQL::AliasMethods
+      include Sequel::SQL::CastMethods
 
-      # Convert the hash to a json string, append a
-      # literalized version of the string to the sql, and explicitly
-      # cast the string to json.
+      # Convert the hash to a json string and append a
+      # literalized version of the string to the sql.
       def sql_literal_append(ds, sql)
         ds.literal_append(sql, Sequel.object_to_json(self))
-        sql << CAST_JSON
       end
 
       # Return the object being delegated to.
       alias to_hash __getobj__
     end
 
+    class JSONHash < JSONHashBase
+      # Cast as json
+      def sql_literal_append(ds, sql)
+        super
+        sql << CAST_JSON
+      end
+    end
+
+    class JSONBHash < JSONHashBase
+      # Cast as jsonb
+      def sql_literal_append(ds, sql)
+        super
+        sql << CAST_JSONB
+      end
+    end
+
     # Methods enabling Database object integration with the json type.
     module JSONDatabaseMethods
       def self.extended(db)
         db.instance_eval do
-          copy_conversion_procs([114, 199])
+          copy_conversion_procs([114, 199, 3802, 3807])
           @schema_type_classes[:json] = [JSONHash, JSONArray]
+          @schema_type_classes[:jsonb] = [JSONBHash, JSONBArray]
         end
       end
 
@@ -110,10 +145,18 @@ module Sequel
         parse_json("[#{s}]").first
       end
 
+      # Same as db_parse_json, but consider the input as jsonb.
+      def self.db_parse_jsonb(s)
+        parse_json(s, true)
+      rescue Sequel::InvalidValue
+        raise unless s.is_a?(String)
+        parse_json("[#{s}]").first
+      end
+
       # Parse the given string as json, returning either a JSONArray
       # or JSONHash instance, and raising an error if the JSON
       # parsing does not yield an array or hash.
-      def self.parse_json(s)
+      def self.parse_json(s, jsonb=false)
         begin
           value = Sequel.parse_json(s)
         rescue Sequel.json_parser_error_class => e
@@ -122,9 +165,9 @@ module Sequel
 
         case value
         when Array
-          JSONArray.new(value)
+          (jsonb ? JSONBArray : JSONArray).new(value)
         when Hash 
-          JSONHash.new(value)
+          (jsonb ? JSONBHash : JSONHash).new(value)
         else
           raise Sequel::InvalidValue, "unhandled json value: #{value.inspect} (from #{s.inspect})"
         end
@@ -133,7 +176,7 @@ module Sequel
       # Handle JSONArray and JSONHash in bound variables
       def bound_variable_arg(arg, conn)
         case arg
-        when JSONArray, JSONHash
+        when JSONArrayBase, JSONHashBase
           Sequel.object_to_json(arg)
         else
           super
@@ -145,7 +188,7 @@ module Sequel
       # Handle json[] types in bound variables.
       def bound_variable_array(a)
         case a
-        when JSONHash, JSONArray
+        when JSONHashBase, JSONArrayBase
           "\"#{Sequel.object_to_json(a).gsub('"', '\\"')}\""
         else
           super
@@ -157,17 +200,14 @@ module Sequel
         case db_type
         when 'json'
           :json
+        when 'jsonb'
+          :jsonb
         else
           super
         end
       end
 
-      # Given a value to typecast to the json column
-      # * If given a JSONArray or JSONHash, just return the value
-      # * If given an Array, return a JSONArray
-      # * If given a Hash, return a JSONHash
-      # * If given a String, parse it as would be done during
-      #   database retrieval.
+      # Convert the value given to a JSONArray or JSONHash
       def typecast_value_json(value)
         case value
         when JSONArray, JSONHash
@@ -176,17 +216,43 @@ module Sequel
           JSONArray.new(value)
         when Hash 
           JSONHash.new(value)
+        when JSONBArray
+          JSONArray.new(value.to_a)
+        when JSONBHash
+          JSONHash.new(value.to_hash)
         when String
           JSONDatabaseMethods.parse_json(value)
         else
           raise Sequel::InvalidValue, "invalid value for json: #{value.inspect}"
         end
       end
+
+      # Convert the value given to a JSONBArray or JSONBHash
+      def typecast_value_jsonb(value)
+        case value
+        when JSONBArray, JSONBHash
+          value
+        when Array
+          JSONBArray.new(value)
+        when Hash 
+          JSONBHash.new(value)
+        when JSONArray
+          JSONBArray.new(value.to_a)
+        when JSONHash
+          JSONBHash.new(value.to_hash)
+        when String
+          JSONDatabaseMethods.parse_json(value, true)
+        else
+          raise Sequel::InvalidValue, "invalid value for jsonb: #{value.inspect}"
+        end
+      end
     end
 
     PG_TYPES[114] = JSONDatabaseMethods.method(:db_parse_json)
+    PG_TYPES[3802] = JSONDatabaseMethods.method(:db_parse_jsonb)
     if defined?(PGArray) && PGArray.respond_to?(:register)
       PGArray.register('json', :oid=>199, :scalar_oid=>114)
+      PGArray.register('jsonb', :oid=>3807, :scalar_oid=>3802)
     end
   end
 
@@ -200,6 +266,28 @@ module Sequel
         Postgres::JSONArray.new(v)
       when Hash
         Postgres::JSONHash.new(v)
+      when Postgres::JSONBArray
+        Postgres::JSONArray.new(v.to_a)
+      when Postgres::JSONBHash
+        Postgres::JSONHash.new(v.to_hash)
+      else
+        Sequel.pg_json_op(v)
+      end
+    end
+
+    # Wrap the array or hash in a Postgres::JSONArray or Postgres::JSONHash.
+    def pg_jsonb(v)
+      case v
+      when Postgres::JSONBArray, Postgres::JSONBHash
+        v
+      when Array
+        Postgres::JSONBArray.new(v)
+      when Hash
+        Postgres::JSONBHash.new(v)
+      when Postgres::JSONArray
+        Postgres::JSONBArray.new(v.to_a)
+      when Postgres::JSONHash
+        Postgres::JSONBHash.new(v.to_hash)
       else
         Sequel.pg_json_op(v)
       end
@@ -218,6 +306,13 @@ if Sequel.core_extensions?
     def pg_json
       Sequel::Postgres::JSONArray.new(self)
     end
+
+    # Return a Sequel::Postgres::JSONBArray proxy to the receiver.
+    # This is mostly useful as a short cut for creating JSONBArray
+    # objects that didn't come from the database.
+    def pg_jsonb
+      Sequel::Postgres::JSONBArray.new(self)
+    end
   end
 
   class Hash
@@ -227,6 +322,13 @@ if Sequel.core_extensions?
     def pg_json
       Sequel::Postgres::JSONHash.new(self)
     end
+
+    # Return a Sequel::Postgres::JSONBHash proxy to the receiver.
+    # This is mostly useful as a short cut for creating JSONBHash
+    # objects that didn't come from the database.
+    def pg_jsonb
+      Sequel::Postgres::JSONBHash.new(self)
+    end
   end
 end
 
@@ -236,12 +338,20 @@ if defined?(Sequel::CoreRefinements)
       def pg_json
         Sequel::Postgres::JSONArray.new(self)
       end
+
+      def pg_jsonb
+        Sequel::Postgres::JSONBArray.new(self)
+      end
     end
 
     refine Hash do
       def pg_json
         Sequel::Postgres::JSONHash.new(self)
       end
+
+      def pg_jsonb
+        Sequel::Postgres::JSONBHash.new(self)
+      end
     end
   end
 end
diff --git a/lib/sequel/extensions/pg_json_ops.rb b/lib/sequel/extensions/pg_json_ops.rb
index f70047d..cd69934 100644
--- a/lib/sequel/extensions/pg_json_ops.rb
+++ b/lib/sequel/extensions/pg_json_ops.rb
@@ -1,32 +1,39 @@
 # The pg_json_ops extension adds support to Sequel's DSL to make
 # it easier to call PostgreSQL JSON functions and operators (added
-# first in PostgreSQL 9.3).
+# first in PostgreSQL 9.3).  It also supports the JSONB functions
+# and operators added in PostgreSQL 9.4.
 #
 # To load the extension:
 #
 #   Sequel.extension :pg_json_ops
 #
-# The most common usage is passing an expression to Sequel.pg_json_op:
+# The most common usage is passing an expression to Sequel.pg_json_op
+# or Sequel.pg_jsonb_op:
 #
 #   j = Sequel.pg_json_op(:json_column)
+#   jb = Sequel.pg_jsonb_op(:jsonb_column)
 #
 # If you have also loaded the pg_json extension, you can use
-# Sequel.pg_json as well:
+# Sequel.pg_json or Sequel.pg_jsonb as well:
 #
 #  j = Sequel.pg_json(:json_column)
+#  jb = Sequel.pg_jsonb(:jsonb_column)
 #
 # Also, on most Sequel expression objects, you can call the pg_json
-# method:
+# or pg_jsonb method:
 #
 #   j = Sequel.expr(:json_column).pg_json
+#   jb = Sequel.expr(:jsonb_column).pg_jsonb
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
-# and have activated refinements for the file, you can also use Symbol#pg_json:
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
+# and have activated refinements for the file, you can also use Symbol#pg_json or
+# Symbol#pg_jsonb:
 #
 #   j = :json_column.pg_json
+#   jb = :jsonb_column.pg_jsonb
 #
-# This creates a Sequel::Postgres::JSONOp object that can be used
+# This creates a Sequel::Postgres::JSONOp or Sequel::Postgres::JSONBOp object that can be used
 # for easier querying:
 #
 #   j[1]                     # (json_column -> 1)
@@ -38,27 +45,36 @@
 #
 #   j.array_length           # json_array_length(json_column)
 #   j.array_elements         # json_array_elements(json_column)
+#   j.array_elements_text    # json_array_elements_text(json_column)
 #   j.each                   # json_each(json_column)
 #   j.each_text              # json_each_text(json_column)
 #   j.keys                   # json_object_keys(json_column)
+#   j.typeof                 # json_typeof(json_column)
 #
 #   j.populate(:a)           # json_populate_record(:a, json_column)
 #   j.populate_set(:a)       # json_populate_recordset(:a, json_column)
+#   j.to_record              # json_to_record(json_column)
+#   j.to_recordset           # json_to_recordset(json_column)
 #
 # If you are also using the pg_json extension, you should load it before
-# loading this extension.  Doing so will allow you to use JSONHash#op and
-# JSONArray#op to get a JSONOp, allowing you to perform json operations
-# on json literals.
+# loading this extension.  Doing so will allow you to use the #op method on
+# JSONHash, JSONArray, JSONBHash, and JSONBArray, allowing you to perform json/jsonb operations
+# on json/jsonb literals.
+#
+# In order to get the automatic conversion from a ruby array to a PostgreSQL array
+# (as shown in the #[] and #get_text examples above), you need to load the pg_array
+# extension.
+
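
A hedged sketch of the jsonb operators defined below; the items table is assumed, the pg_json extension must also be loaded for the hash argument to be wrapped automatically, and the SQL in the comments is approximate:

    Sequel.extension :pg_json_ops
    jb = Sequel.pg_jsonb_op(:jsonb_column)

    DB[:items].where(jb.contains('a'=>1)).all   # WHERE (jsonb_column @> '{"a":1}'::jsonb)
    DB[:items].where(jb.has_key?('a')).all      # WHERE (jsonb_column ? 'a')
    DB[:items].select(jb.typeof).all            # SELECT jsonb_typeof(jsonb_column) FROM items
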
 module Sequel
   module Postgres
-    # The JSONOp class is a simple container for a single object that
+    # The JSONBaseOp class is a simple container for a single object that
     # defines methods that yield Sequel expression objects representing
     # PostgreSQL json operators and functions.
     #
     # In the method documentation examples, assume that:
     #
     #   json_op = Sequel.pg_json(:json)
-    class JSONOp < Sequel::SQL::Wrapper
+    class JSONBaseOp < Sequel::SQL::Wrapper
       GET = ["(".freeze, " -> ".freeze, ")".freeze].freeze
       GET_TEXT = ["(".freeze, " ->> ".freeze, ")".freeze].freeze
       GET_PATH = ["(".freeze, " #> ".freeze, ")".freeze].freeze
@@ -81,16 +97,23 @@ module Sequel
 
       # Returns a set of json values for the elements in the json array.
       #
-      #   json_op.array_elements # json_oarray_elements(json)
+      #   json_op.array_elements # json_array_elements(json)
       def array_elements
-        function(:json_array_elements)
+        function(:array_elements)
+      end
+
+      # Returns a set of text values for the elements in the json array.
+      #
+      #   json_op.array_elements_text # json_array_elements_text(json)
+      def array_elements_text
+        function(:array_elements_text)
       end
 
       # Get the length of the outermost json array.
       #
       #   json_op.array_length # json_array_length(json)
       def array_length
-        Sequel::SQL::NumericExpression.new(:NOOP, function(:json_array_length))
+        Sequel::SQL::NumericExpression.new(:NOOP, function(:array_length))
       end
 
       # Returns a set of key and value pairs, where the keys
@@ -98,7 +121,7 @@ module Sequel
       #
       #   json_op.each # json_each(json)
       def each
-        function(:json_each)
+        function(:each)
       end
 
       # Returns a set of key and value pairs, where the keys
@@ -106,7 +129,7 @@ module Sequel
       #
       #   json_op.each_text # json_each_text(json)
       def each_text
-        function(:json_each_text)
+        function(:each_text)
       end
 
       # Returns a json value for the object at the given path.
@@ -114,7 +137,7 @@ module Sequel
       #   json_op.extract('a') # json_extract_path(json, 'a')
       #   json_op.extract('a', 'b') # json_extract_path(json, 'a', 'b')
       def extract(*a)
-        JSONOp.new(function(:json_extract_path, *a))
+        self.class.new(function(:extract_path, *a))
       end
 
       # Returns a text value for the object at the given path.
@@ -122,7 +145,7 @@ module Sequel
       #   json_op.extract_text('a') # json_extract_path_text(json, 'a')
       #   json_op.extract_text('a', 'b') # json_extract_path_text(json, 'a', 'b')
       def extract_text(*a)
-        Sequel::SQL::StringExpression.new(:NOOP, function(:json_extract_path_text, *a))
+        Sequel::SQL::StringExpression.new(:NOOP, function(:extract_path_text, *a))
       end
 
       # Get JSON array element or object field as text.  If an array is given,
@@ -143,26 +166,44 @@ module Sequel
       #
       #   json_op.keys # json_object_keys(json)
       def keys
-        function(:json_object_keys)
-      end
-
-      # Return the receiver, since it is already a JSONOp.
-      def pg_json
-        self
+        function(:object_keys)
       end
 
       # Expands the given argument using the columns in the json.
       #
       #   json_op.populate(arg) # json_populate_record(arg, json)
       def populate(arg)
-        SQL::Function.new(:json_populate_record, arg, self)
+        SQL::Function.new(function_name(:populate_record), arg, self)
       end
 
       # Expands the given argument using the columns in the json.
       #
       #   json_op.populate_set(arg) # json_populate_recordset(arg, json)
       def populate_set(arg)
-        SQL::Function.new(:json_populate_recordset, arg, self)
+        SQL::Function.new(function_name(:populate_recordset), arg, self)
+      end
+
+      # Builds an arbitrary record from a json object.  You need to define the
+      # structure of the record using #as on the resulting object:
+      #
+      #   json_op.to_record.as(:x, [Sequel.lit('a integer'), Sequel.lit('b text')]) # json_to_record(json) AS x(a integer, b text)
+      def to_record(nested_as_text=false)
+        function(:to_record, nested_as_text)
+      end
+
+      # Builds an arbitrary set of records from a json array of objects.  You need to define the
+      # structure of the records using #as on the resulting object:
+      #
+      #   json_op.to_recordset.as(:x, [Sequel.lit('a integer'), Sequel.lit('b text')]) # json_to_recordset(json) AS x(a integer, b text)
+      def to_recordset(nested_as_text=false)
+        function(:to_recordset, nested_as_text)
+      end
+
+      # Returns the type of the outermost json value as text.
+      #
+      #   json_op.typeof # json_typeof(json)
+      def typeof
+        function(:typeof)
       end
 
       private
@@ -176,7 +217,7 @@ module Sequel
       # Return a function with the given name, and the receiver as the first
       # argument, with any additional arguments given.
       def function(name, *args)
-        SQL::Function.new(name, self, *args)
+        SQL::Function.new(function_name(name), self, *args)
       end
 
       # Whether the given object represents an array in PostgreSQL.
@@ -195,17 +236,123 @@ module Sequel
       end
     end
 
+    # JSONBaseOp subclass for the json type
+    class JSONOp < JSONBaseOp
+      # Return the receiver, since it is already a JSONOp.
+      def pg_json
+        self
+      end
+
+      private
+
+      # The json type functions are prefixed with json_
+      def function_name(name)
+        "json_#{name}"
+      end
+    end
+
+    # JSONBaseOp subclass for the jsonb type.
+    #
+    # In the method documentation examples, assume that:
+    #
+    #   jsonb_op = Sequel.pg_jsonb(:jsonb)
+    class JSONBOp < JSONBaseOp
+      CONTAIN_ALL = ["(".freeze, " ?& ".freeze, ")".freeze].freeze
+      CONTAIN_ANY = ["(".freeze, " ?| ".freeze, ")".freeze].freeze
+      CONTAINS = ["(".freeze, " @> ".freeze, ")".freeze].freeze
+      CONTAINED_BY = ["(".freeze, " <@ ".freeze, ")".freeze].freeze
+      HAS_KEY = ["(".freeze, " ? ".freeze, ")".freeze].freeze
+
+      # Check if the receiver contains all of the keys in the given array:
+      #
+      #   jsonb_op.contain_all(:a) # (jsonb ?& a)
+      def contain_all(other)
+        bool_op(CONTAIN_ALL, wrap_input_array(other))
+      end
+
+      # Check if the receiver contains any of the keys in the given array:
+      #
+      #   jsonb_op.contain_any(:a) # (jsonb ?| a)
+      def contain_any(other)
+        bool_op(CONTAIN_ANY, wrap_input_array(other))
+      end
+
+      # Check if the receiver contains all entries in the other jsonb:
+      #
+      #   jsonb_op.contains(:h) # (jsonb @> h)
+      def contains(other)
+        bool_op(CONTAINS, wrap_input_jsonb(other))
+      end
+
+      # Check if the other jsonb contains all entries in the receiver:
+      #
+      #   jsonb_op.contained_by(:h) # (jsonb <@ h)
+      def contained_by(other)
+        bool_op(CONTAINED_BY, wrap_input_jsonb(other))
+      end
+
+      # Check if the receiver contains the given key:
+      #
+      #   jsonb_op.has_key?('a') # (jsonb ? 'a')
+      def has_key?(key)
+        bool_op(HAS_KEY, key)
+      end
+      alias include? has_key?
+
+      # Return the receiver, since it is already a JSONBOp.
+      def pg_jsonb
+        self
+      end
+
+      private
+
+      # Return a placeholder literal with the given str and args, wrapped
+      # in a boolean expression, used by operators that return booleans.
+      def bool_op(str, other)
+        Sequel::SQL::BooleanExpression.new(:NOOP, Sequel::SQL::PlaceholderLiteralString.new(str, [value, other]))
+      end
+
+      # Wrap argument in a PGArray if it is an array
+      def wrap_input_array(obj)
+        if obj.is_a?(Array) && Sequel.respond_to?(:pg_array) 
+          Sequel.pg_array(obj)
+        else
+          obj
+        end
+      end
+
+      # Wrap argument in a JSONBArray or JSONBHash if it is an array or hash.
+      def wrap_input_jsonb(obj)
+        if Sequel.respond_to?(:pg_jsonb) && (obj.is_a?(Array) || obj.is_a?(Hash))
+          Sequel.pg_jsonb(obj)
+        else
+          obj
+        end
+      end
+
+      # The jsonb type functions are prefixed with jsonb_
+      def function_name(name)
+        "jsonb_#{name}"
+      end
+    end
+
     module JSONOpMethods
       # Wrap the receiver in a JSONOp so you can easily use the PostgreSQL
       # json functions and operators with it.
       def pg_json
         JSONOp.new(self)
       end
+      #
+      # Wrap the receiver in a JSONBOp so you can easily use the PostgreSQL
+      # jsonb functions and operators with it.
+      def pg_jsonb
+        JSONBOp.new(self)
+      end
     end
 
     if defined?(JSONArray)
       class JSONArray
-        # Wrap the JSONHash instance in an JSONOp, allowing you to easily use
+        # Wrap the JSONArray instance in a JSONOp, allowing you to easily use
         # the PostgreSQL json functions and operators with literal jsons.
         def op
           JSONOp.new(self)
@@ -219,6 +366,22 @@ module Sequel
           JSONOp.new(self)
         end
       end
+
+      class JSONBArray
+        # Wrap the JSONBArray instance in a JSONBOp, allowing you to easily use
+        # the PostgreSQL jsonb functions and operators with literal jsonbs.
+        def op
+          JSONBOp.new(self)
+        end
+      end
+
+      class JSONBHash
+        # Wrap the JSONBHash instance in a JSONBOp, allowing you to easily use
+        # the PostgreSQL jsonb functions and operators with literal jsonbs.
+        def op
+          JSONBOp.new(self)
+        end
+      end
     end
   end
 
@@ -232,6 +395,16 @@ module Sequel
         Postgres::JSONOp.new(v)
       end
     end
+
+    # Return the object wrapped in a Postgres::JSONBOp.
+    def pg_jsonb_op(v)
+      case v
+      when Postgres::JSONBOp
+        v
+      else
+        Postgres::JSONBOp.new(v)
+      end
+    end
   end
 
   class SQL::GenericExpression
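
For orientation, a rough sketch of how the jsonb operators above can be used in queries; the items table and its jsonb column named data are hypothetical, and the pg_json_ops and pg_array extensions are assumed to be loaded:

    data = Sequel.pg_jsonb_op(:data)
    DB[:items].where(data.has_key?('tags')).all       # ... WHERE (data ? 'tags')
    DB[:items].where(data.contains(:other)).all       # ... WHERE (data @> other)
    DB[:items].where(data.contain_all(%w[a b])).all   # ... WHERE (data ?& ARRAY['a','b'])
    DB[:items].get(data.typeof)                       # SELECT jsonb_typeof(data) FROM items LIMIT 1
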
diff --git a/lib/sequel/extensions/pg_range.rb b/lib/sequel/extensions/pg_range.rb
index 767aabe..34a763a 100644
--- a/lib/sequel/extensions/pg_range.rb
+++ b/lib/sequel/extensions/pg_range.rb
@@ -24,8 +24,8 @@
 #
 #   Sequel.pg_range(range)
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Range#pg_range:
 #
 #   range.pg_range 
@@ -49,6 +49,9 @@
 # types as model column values you probably should use the
 # pg_typecast_on_load plugin if the column values are returned as a string.
 #
+# See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
+# for details on using range type columns in CREATE/ALTER TABLE statements.
+#
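
As a minimal sketch of the schema usage referred to here, with a hypothetical reservations table:

    DB.create_table(:reservations) do
      primary_key :id
      column :during, :tsrange    # PostgreSQL range type column
    end

Once such a column exists, Ruby Range values wrapped with Sequel.pg_range can be stored in it.
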
 # This extension integrates with the pg_array extension.  If you plan
 # to use arrays of range types, load the pg_array extension before the
 # pg_range extension:
diff --git a/lib/sequel/extensions/pg_range_ops.rb b/lib/sequel/extensions/pg_range_ops.rb
index 5b986a2..fcc2c92 100644
--- a/lib/sequel/extensions/pg_range_ops.rb
+++ b/lib/sequel/extensions/pg_range_ops.rb
@@ -19,8 +19,8 @@
 #
 #   r = Sequel.expr(:range).pg_range
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Symbol#pg_range:
 #
 #   r = :range.pg_range
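
Once wrapped, the range operators provided by this extension apply to r; a rough sketch of a few of them:

    r.contains(2)       # (range @> 2)
    r.overlaps(:other)  # (range && other)
    r.lower             # lower(range)
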
diff --git a/lib/sequel/extensions/pg_row.rb b/lib/sequel/extensions/pg_row.rb
index 5e04dcf..94daf54 100644
--- a/lib/sequel/extensions/pg_row.rb
+++ b/lib/sequel/extensions/pg_row.rb
@@ -27,8 +27,8 @@
 #
 #   Sequel.pg_row(array)
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Array#pg_row:
 #
 #   array.pg_row
@@ -78,6 +78,9 @@
 # types as model column values you probably should use the
 # pg_typecast_on_load plugin if the column values are returned as a string.
 #
+# See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
+# for details on using row type columns in CREATE/ALTER TABLE statements.
+#
 # This extension requires both the strscan and delegate libraries.
 
 require 'delegate'
@@ -496,6 +499,7 @@ module Sequel
             private meth
           end
 
+          conversion_procs_updated
           nil
         end
 
diff --git a/lib/sequel/extensions/pg_row_ops.rb b/lib/sequel/extensions/pg_row_ops.rb
index fff36b0..3e9f9b9 100644
--- a/lib/sequel/extensions/pg_row_ops.rb
+++ b/lib/sequel/extensions/pg_row_ops.rb
@@ -19,8 +19,8 @@
 #
 #   r = Sequel.expr(:row_column).pg_row
 #
-# If you have loaded the {core_extensions extension}[link:files/doc/core_extensions_rdoc.html]),
-# or you have loaded the {core_refinements extension}[link:files/doc/core_refinements_rdoc.html])
+# If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
+# or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Symbol#pg_row:
 #
 #   r = :row_column.pg_row
@@ -70,7 +70,7 @@
 #
 #   DB[:a].select(a.splat).first # SELECT (a.*)::a FROM a
 #   # => {:a=>"(1,2)"} # or {:a=>{:a=>1, :b=>2}} if the "a" type has been registered
-#                      # with the the pg_row extension
+#                      # with the pg_row extension
 #
 # This feature is mostly useful for a different way to graph tables:
 #
diff --git a/lib/sequel/extensions/query.rb b/lib/sequel/extensions/query.rb
index 24b9235..9cbc107 100644
--- a/lib/sequel/extensions/query.rb
+++ b/lib/sequel/extensions/query.rb
@@ -1,6 +1,12 @@
-# The query extension adds Sequel::Dataset#query which allows
+# The query extension adds a query method which allows
 # a different way to construct queries instead of the usual
-# method chaining.  See Sequel::Dataset#query for details.
+# method chaining:
+#
+#   dataset = DB[:items].query do
+#     select :x, :y, :z
+#     filter{(x > 1) & (y > 2)}
+#     reverse :z
+#   end
 #
 # You can load this extension into specific datasets:
 #
diff --git a/lib/sequel/extensions/schema_dumper.rb b/lib/sequel/extensions/schema_dumper.rb
index 0f2a595..1b12a84 100644
--- a/lib/sequel/extensions/schema_dumper.rb
+++ b/lib/sequel/extensions/schema_dumper.rb
@@ -12,6 +12,55 @@ Sequel.extension :eval_inspect
 
 module Sequel
   module SchemaDumper
+    # Convert the column schema information to a hash of column options, one of which must
+    # be :type.  The other options added should modify that type (e.g. :size).  If a
+    # database type is not recognized, return it as a String type.
+    def column_schema_to_ruby_type(schema)
+      case schema[:db_type].downcase
+      when /\A(medium|small)?int(?:eger)?(?:\((\d+)\))?( unsigned)?\z/o
+        if !$1 && $2 && $2.to_i >= 10 && $3
+          # An unsigned integer type with 10 digits can potentially contain values that
+          # don't fit in a signed integer, so use the bigint type in the target database.
+          {:type=>Bignum}
+        else
+          {:type=>Integer}
+        end
+      when /\Atinyint(?:\((\d+)\))?(?: unsigned)?\z/o
+        {:type =>schema[:type] == :boolean ? TrueClass : Integer}
+      when /\Abigint(?:\((?:\d+)\))?(?: unsigned)?\z/o
+        {:type=>Bignum}
+      when /\A(?:real|float|double(?: precision)?|double\(\d+,\d+\)(?: unsigned)?)\z/o
+        {:type=>Float}
+      when 'boolean'
+        {:type=>TrueClass}
+      when /\A(?:(?:tiny|medium|long|n)?text|clob)\z/o
+        {:type=>String, :text=>true}
+      when 'date'
+        {:type=>Date}
+      when /\A(?:small)?datetime\z/o
+        {:type=>DateTime}
+      when /\Atimestamp(?:\((\d+)\))?(?: with(?:out)? time zone)?\z/o
+        {:type=>DateTime, :size=>($1.to_i if $1)}
+      when /\Atime(?: with(?:out)? time zone)?\z/o
+        {:type=>Time, :only_time=>true}
+      when /\An?char(?:acter)?(?:\((\d+)\))?\z/o
+        {:type=>String, :size=>($1.to_i if $1), :fixed=>true}
+      when /\A(?:n?varchar|character varying|bpchar|string)(?:\((\d+)\))?\z/o
+        {:type=>String, :size=>($1.to_i if $1)}
+      when /\A(?:small)?money\z/o
+        {:type=>BigDecimal, :size=>[19,2]}
+      when /\A(?:decimal|numeric|number)(?:\((\d+)(?:,\s*(\d+))?\))?\z/o
+        s = [($1.to_i if $1), ($2.to_i if $2)].compact
+        {:type=>BigDecimal, :size=>(s.empty? ? nil : s)}
+      when /\A(?:bytea|(?:tiny|medium|long)?blob|(?:var)?binary)(?:\((\d+)\))?\z/o
+        {:type=>File, :size=>($1.to_i if $1)}
+      when /\A(?:year|(?:int )?identity)\z/o
+        {:type=>Integer}
+      else
+        {:type=>String}
+      end
+    end
+
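
To make the mapping concrete, a few results this method would produce once the schema_dumper extension is loaded; the trimmed schema hashes are for illustration only, real ones come from Database#schema:

    DB.column_schema_to_ruby_type(:db_type=>'varchar(255)')      # => {:type=>String, :size=>255}
    DB.column_schema_to_ruby_type(:db_type=>'numeric(10,2)')     # => {:type=>BigDecimal, :size=>[10, 2]}
    DB.column_schema_to_ruby_type(:db_type=>'int(10) unsigned')  # => {:type=>Bignum}
    DB.column_schema_to_ruby_type(:db_type=>'custom_enum_type')  # => {:type=>String}
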
     # Dump foreign key constraints for all tables as a migration. This complements
     # the :foreign_keys=>false option to dump_schema_migration. This only dumps
     # the constraints (not the columns) using alter_table/add_foreign_key with an
@@ -145,55 +194,6 @@ END_MIG
       end
     end
 
-    # Convert the column schema information to a hash of column options, one of which must
-    # be :type.  The other options added should modify that type (e.g. :size).  If a
-    # database type is not recognized, return it as a String type.
-    def column_schema_to_ruby_type(schema)
-      case schema[:db_type].downcase
-      when /\A(medium|small)?int(?:eger)?(?:\((\d+)\))?( unsigned)?\z/o
-        if !$1 && $2 && $2.to_i >= 10 && $3
-          # Unsigned integer type with 10 digits can potentially contain values which
-          # don't fit signed integer type, so use bigint type in target database.
-          {:type=>Bignum}
-        else
-          {:type=>Integer}
-        end
-      when /\Atinyint(?:\((\d+)\))?(?: unsigned)?\z/o
-        {:type =>schema[:type] == :boolean ? TrueClass : Integer}
-      when /\Abigint(?:\((?:\d+)\))?(?: unsigned)?\z/o
-        {:type=>Bignum}
-      when /\A(?:real|float|double(?: precision)?|double\(\d+,\d+\)(?: unsigned)?)\z/o
-        {:type=>Float}
-      when 'boolean'
-        {:type=>TrueClass}
-      when /\A(?:(?:tiny|medium|long|n)?text|clob)\z/o
-        {:type=>String, :text=>true}
-      when 'date'
-        {:type=>Date}
-      when /\A(?:small)?datetime\z/o
-        {:type=>DateTime}
-      when /\Atimestamp(?:\((\d+)\))?(?: with(?:out)? time zone)?\z/o
-        {:type=>DateTime, :size=>($1.to_i if $1)}
-      when /\Atime(?: with(?:out)? time zone)?\z/o
-        {:type=>Time, :only_time=>true}
-      when /\An?char(?:acter)?(?:\((\d+)\))?\z/o
-        {:type=>String, :size=>($1.to_i if $1), :fixed=>true}
-      when /\A(?:n?varchar|character varying|bpchar|string)(?:\((\d+)\))?\z/o
-        {:type=>String, :size=>($1.to_i if $1)}
-      when /\A(?:small)?money\z/o
-        {:type=>BigDecimal, :size=>[19,2]}
-      when /\A(?:decimal|numeric|number)(?:\((\d+)(?:,\s*(\d+))?\))?\z/o
-        s = [($1.to_i if $1), ($2.to_i if $2)].compact
-        {:type=>BigDecimal, :size=>(s.empty? ? nil : s)}
-      when /\A(?:bytea|(?:tiny|medium|long)?blob|(?:var)?binary)(?:\((\d+)\))?\z/o
-        {:type=>File, :size=>($1.to_i if $1)}
-      when /\A(?:year|(?:int )?identity)\z/o
-        {:type=>Integer}
-      else
-        {:type=>String}
-      end
-    end
-
     # For the table and foreign key metadata array, return an alter_table
     # string that would add the foreign keys if run in a migration.
     def dump_add_fk_constraints(table, fks)
diff --git a/lib/sequel/extensions/to_dot.rb b/lib/sequel/extensions/to_dot.rb
index e8ff338..4a5f4b1 100644
--- a/lib/sequel/extensions/to_dot.rb
+++ b/lib/sequel/extensions/to_dot.rb
@@ -91,7 +91,8 @@ module Sequel
       when SQL::AliasedExpression
         dot "AliasedExpression"
         v(e.expression, :expression)
-        v(e.aliaz, :alias)
+        v(e.alias, :alias)
+        v(e.columns, :columns) if e.columns
       when SQL::CaseExpression
         dot "CaseExpression"
         v(e.expression, :expression) if e.expression
@@ -102,18 +103,16 @@ module Sequel
         v(e.expr, :expr)
         v(e.type, :type)
       when SQL::Function
-        dot "Function: #{e.f}"
+        dot "Function: #{e.name}"
         e.args.each_with_index do |val, j|
           v(val, j)
         end
+        v(e.args, :args)
+        v(e.opts, :opts)
       when SQL::Subscript 
         dot "Subscript"
         v(e.f, :f)
         v(e.sub, :sub)
-      when SQL::WindowFunction
-        dot "WindowFunction"
-        v(e.function, :function)
-        v(e.window, :window)
       when SQL::Window
         dot "Window"
         v(e.opts, :opts)
@@ -130,8 +129,7 @@ module Sequel
           str << " USING"
         end
         dot str
-        v(e.table, :table)
-        v(e.table_alias, :alias) if e.table_alias
+        v(e.table_expr, :table)
         if e.is_a?(SQL::JoinOnClause)
           v(e.on, :on) 
         elsif e.is_a?(SQL::JoinUsingClause)
diff --git a/lib/sequel/model.rb b/lib/sequel/model.rb
index 82944dd..6797c8c 100644
--- a/lib/sequel/model.rb
+++ b/lib/sequel/model.rb
@@ -80,7 +80,7 @@ module Sequel
 
     # Class methods added to model that call the method of the same name on the dataset
     DATASET_METHODS = (Dataset::ACTION_METHODS + Dataset::QUERY_METHODS +
-      [:each_server]) - [:and, :or, :[], :columns, :columns!, :delete, :update, :add_graph_aliases]
+      [:each_server]) - [:and, :or, :[], :columns, :columns!, :delete, :update, :add_graph_aliases, :first, :first!]
     
     # Boolean settings that can be modified at the global, class, or instance level.
     BOOLEAN_SETTINGS = [:typecast_empty_string_to_nil, :typecast_on_assignment, :strict_param_setting, \
@@ -104,7 +104,7 @@ module Sequel
     # Empty instance methods to create that the user can override to get hook/callback behavior.
     # Just like any other method defined by Sequel, if you override one of these, you should
     # call +super+ to get the default behavior (while empty by default, they can also be defined
-    # by plugins).  See the {"Model Hooks" guide}[link:files/doc/model_hooks_rdoc.html] for
+    # by plugins).  See the {"Model Hooks" guide}[rdoc-ref:doc/model_hooks.rdoc] for
     # more detail on hooks.
     HOOKS = BEFORE_HOOKS + AFTER_HOOKS
 
@@ -119,7 +119,7 @@ module Sequel
       :@typecast_empty_string_to_nil=>nil, :@typecast_on_assignment=>nil,
       :@raise_on_typecast_failure=>nil, :@plugins=>:dup, :@setter_methods=>nil,
       :@use_after_commit_rollback=>nil, :@fast_pk_lookup_sql=>nil,
-      :@fast_instance_delete_sql=>nil,
+      :@fast_instance_delete_sql=>nil, :@finders=>:dup, :@finder_loaders=>:dup,
       :@db=>nil, :@default_set_fields_options=>:dup}
 
     # Regular expression that determines if a method name is normal in the sense that
@@ -138,6 +138,8 @@ module Sequel
     @dataset_method_modules = []
     @default_eager_limit_strategy = true
     @default_set_fields_options = {}
+    @finders = {}
+    @finder_loaders = {}
     @overridable_methods_module = nil
     @fast_pk_lookup_sql = nil
     @fast_instance_delete_sql = nil
diff --git a/lib/sequel/model/associations.rb b/lib/sequel/model/associations.rb
index cbe9816..cc92c42 100644
--- a/lib/sequel/model/associations.rb
+++ b/lib/sequel/model/associations.rb
@@ -54,9 +54,10 @@ module Sequel
         end
 
         # The dataset associated via this association, with the non-instance specific
-        # changes already applied.
+        # changes already applied.  This will be a joined dataset if the association
+        # requires joining tables.
         def associated_dataset
-          cached_fetch(:_dataset){apply_dataset_changes(associated_class.dataset.clone)}
+          cached_fetch(:_dataset){apply_dataset_changes(_associated_dataset)}
         end
 
         # Apply all non-instance specific changes to the given dataset and return it.
@@ -70,12 +71,94 @@ module Sequel
           end
           ds = ds.order(*self[:order]) if self[:order]
           ds = ds.limit(*self[:limit]) if self[:limit]
-          ds = ds.limit(1) if !returns_array? && self[:key]
+          ds = ds.limit(1) if limit_to_single_row?
           ds = ds.eager(*self[:eager]) if self[:eager]
           ds = ds.distinct if self[:distinct]
           ds
         end
-        
+
+        # Apply all non-instance specific changes and the eager_block option to the given
+        # dataset and return it.
+        def apply_eager_dataset_changes(ds)
+          ds = apply_dataset_changes(ds)
+          if block = self[:eager_block]
+            ds = block.call(ds)
+          end
+          ds
+        end
+
+        # Apply the eager graph limit strategy to the dataset being graphed into the current
+        # dataset, or return the dataset unmodified if no SQL limit strategy is needed.
+        def apply_eager_graph_limit_strategy(strategy, ds)
+          case strategy
+          when :distinct_on
+            apply_distinct_on_eager_limit_strategy(ds.order_prepend(*self[:order]))
+          when :window_function
+            apply_window_function_eager_limit_strategy(ds.order_prepend(*self[:order])).select(*ds.columns)
+          else
+            ds
+          end
+        end
+
+        # Apply an eager limit strategy to the dataset, or return the dataset
+        # unmodified if it doesn't need an eager limit strategy.
+        def apply_eager_limit_strategy(ds, strategy=eager_limit_strategy)
+          case strategy
+          when :distinct_on
+            apply_distinct_on_eager_limit_strategy(ds)
+          when :window_function
+            apply_window_function_eager_limit_strategy(ds)
+          else
+            ds
+          end
+        end
+
+        # Use DISTINCT ON and ORDER BY clauses to limit the results to the first record with matching keys.
+        def apply_distinct_on_eager_limit_strategy(ds)
+          keys = predicate_key
+          ds.distinct(*keys).order_prepend(*keys)
+        end
+
+        # Use a window function to limit the results of the eager loading dataset.
+        def apply_window_function_eager_limit_strategy(ds)
+          rn = ds.row_number_column 
+          limit, offset = limit_and_offset
+          ds = ds.unordered.select_append{|o| o.row_number{}.over(:partition=>predicate_key, :order=>ds.opts[:order]).as(rn)}.from_self
+          ds = if !returns_array?
+            ds.where(rn => offset ? offset+1 : 1)
+          elsif offset
+            offset += 1
+            if limit
+              ds.where(rn => (offset...(offset+limit))) 
+            else
+              ds.where{SQL::Identifier.new(rn) >= offset} 
+            end
+          else
+            ds.where{SQL::Identifier.new(rn) <= limit} 
+          end
+        end
+
+        # If the ruby eager limit strategy is being used, slice the array using the slice
+        # range to return the object(s) at the correct offset/limit.
+        def apply_ruby_eager_limit_strategy(rows)
+          if eager_limit_strategy == :ruby
+            name = self[:name]
+            if returns_array?
+              range = slice_range
+              rows.each{|o| o.associations[name] = o.associations[name][range] || []}
+            elsif slice_range
+              offset = slice_range.begin
+              rows.each{|o| o.associations[name] = o.associations[name][offset]}
+            end
+          end
+        end
+
+        # Whether the associations cache should store a single associated record
+        # (rather than an array of records) during eager loading.
+        def assign_singular?
+          !returns_array?
+        end
+
         # Whether this association can have associated objects, given the current
         # object.  Should be false if obj cannot have associated objects because
         # the necessary key columns are NULL.
@@ -83,6 +166,12 @@ module Sequel
           true
         end
 
+        # Whether you are able to clone from the given association type to the current
+        # association type, true by default only if the types match.
+        def cloneable?(ref)
+          ref[:type] == self[:type]
+        end
+
         # Name symbol for the dataset association method
         def dataset_method
           :"#{self[:name]}_dataset"
@@ -93,27 +182,90 @@ module Sequel
           true
         end
     
+        # Return the symbol used for the row number column if the window function
+        # eager limit strategy is being used, or nil otherwise.
+        def delete_row_number_column(ds=associated_dataset)
+          if eager_limit_strategy == :window_function
+            ds.row_number_column 
+          end
+        end
+
+        # Return a dataset that will load the appropriate associated objects for
+        # the given object using this association.
+        def association_dataset_for(object)
+          associated_dataset.where(predicate_keys.zip(predicate_key_values(object)))
+        end
+
+        ASSOCIATION_DATASET_PROC = proc{|r| r.association_dataset_for(self)}
+        # Proc used to create the association dataset method.
+        def association_dataset_proc
+          ASSOCIATION_DATASET_PROC
+        end
+
+        # The eager_graph limit strategy to use for this dataset
+        def eager_graph_limit_strategy(strategy)
+          if self[:limit] || !returns_array?
+            strategy = strategy[self[:name]] if strategy.is_a?(Hash)
+            case strategy
+            when true
+              true_eager_graph_limit_strategy
+            when Symbol
+              strategy
+            else
+              if returns_array? || offset
+                :ruby
+              end
+            end
+          end
+        end
+        
         # The eager limit strategy to use for this dataset.
         def eager_limit_strategy
           cached_fetch(:_eager_limit_strategy) do
-            if self[:limit]
-              case s = cached_fetch(:eager_limit_strategy){self[:model].default_eager_limit_strategy || :ruby}
+            if self[:limit] || !returns_array?
+              case s = cached_fetch(:eager_limit_strategy){default_eager_limit_strategy}
               when true
-                ds = associated_class.dataset
-                if ds.supports_window_functions?
-                  :window_function
-                else
-                  :ruby
-                end
+                true_eager_limit_strategy
               else
                 s
               end
-            else
-              nil
             end
           end
         end
 
+        # Eager load the associated objects using the hash of eager options,
+        # yielding each row to the block.
+        def eager_load_results(eo, &block)
+          rows = eo[:rows]
+          initialize_association_cache(rows) unless eo[:initialize_rows] == false
+          strategy = eager_limit_strategy
+          cascade = eo[:associations]
+
+          if eo[:eager_block] || eo[:loader] == false
+            strategy = true_eager_graph_limit_strategy if strategy == :union
+            objects = apply_eager_limit_strategy(eager_loading_dataset(eo), strategy).all
+            cascade = nil
+          elsif strategy == :union
+            objects = []
+            ds = associated_dataset
+            ds = self[:eager_block].call(ds) if self[:eager_block]
+            loader = union_eager_loader
+            joiner = " UNION ALL "
+            eo[:id_map].keys.each_slice(subqueries_per_union).each do |slice|
+              objects.concat(ds.with_sql(slice.map{|k| loader.sql(*k)}.join(joiner)).to_a)
+            end
+          else
+            objects = placeholder_eager_loader.all(eo[:id_map].keys)
+          end
+
+          if cascade && !(cascade = associated_dataset.send(:eager_options_for_associations, [cascade])).empty?
+            associated_eager_dataset.send(:eager_load, objects, cascade)
+          end
+
+          objects.each(&block)
+          apply_ruby_eager_limit_strategy(rows)
+        end
+
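
For orientation, the sort of association this path serves; Artist/Album and their columns are hypothetical, and the :subqueries_per_union option is documented with the other association options later in this file:

    class Artist < Sequel::Model
      one_to_many :recent_albums, :class=>:Album, :key=>:artist_id,
        :order=>Sequel.desc(:release_date), :limit=>5, :subqueries_per_union=>20
    end

    Artist.eager(:recent_albums).all   # up to 5 albums per artist, loaded via batched UNION ALL queries
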
         # The key to use for the key hash when eager loading
         def eager_loader_key
           self[:eager_loader_key]
@@ -137,6 +289,37 @@ module Sequel
           true
         end
     
+        # Whether additional conditions should be added when using the filter
+        # by associations support.
+        def filter_by_associations_add_conditions?
+          self[:conditions] || self[:eager_block] || self[:limit]
+        end
+
+        # The expression to use for the additional conditions to be added for
+        # the filter by association support, when the association itself is
+        # filtered.  Works by using a subquery to test that the objects passed
+        # also meet the association filter criteria.
+        def filter_by_associations_conditions_expression(obj)
+          ds = filter_by_associations_conditions_dataset.where(filter_by_associations_conditions_subquery_conditions(obj))
+          {filter_by_associations_conditions_key=>ds}
+        end
+
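
These hooks back dataset-level filtering by association; roughly, with hypothetical Artist/Album models:

    Album.where(:artist=>Artist[1]).all     # albums whose artist association matches that artist
    Album.exclude(:artist=>Artist[1]).all   # albums whose artist association does not match
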
+        # Whether to handle silent modification failure when adding/removing
+        # associated records, false by default.
+        def handle_silent_modification_failure?
+          false
+        end
+
+        # Initialize the associations cache for the current association for the given objects.
+        def initialize_association_cache(objects)
+          name = self[:name]
+          if assign_singular?
+            objects.each{|object| object.associations[name] = nil}
+          else
+            objects.each{|object| object.associations[name] = []}
+          end
+        end
+
         # The limit and offset for this association (returned as a two element array).
         def limit_and_offset
           if (v = self[:limit]).is_a?(Array)
@@ -152,11 +335,28 @@ module Sequel
           false
         end
 
+        # A placeholder literalizer that can be used to lazily load the association. If
+        # one can't be used, returns nil.
+        def placeholder_loader
+          if use_placeholder_loader?
+            cached_fetch(:placeholder_loader) do
+              Sequel::Dataset::PlaceholderLiteralizer.loader(associated_dataset) do |pl, ds|
+                ds.where(*predicate_keys.map{|k| SQL::BooleanExpression.new(:'=', k, pl.arg)})
+              end
+            end
+          end
+        end
+
         # The keys to use for loading of the regular dataset, as an array.
         def predicate_keys
           cached_fetch(:predicate_keys){Array(predicate_key)}
         end
 
+        # The values that predicate_keys should match for objects to be associated.
+        def predicate_key_values(object)
+          predicate_key_methods.map{|k| object.send(k)}
+        end
+
         # Qualify +col+ with the given table name.  If +col+ is an array of columns,
         # return an array of qualified columns.  Only qualifies Symbols and SQL::Identifier
         # values, other values are not modified.
@@ -298,6 +498,156 @@ module Sequel
           end
         end
 
+        # The base dataset used for the association, before any order/conditions
+        # options have been applied.
+        def _associated_dataset
+          associated_class.dataset.clone
+        end
+
+        # Apply a limit strategy to the given dataset so that filter by
+        # associations works with a limited dataset.
+        def apply_filter_by_associations_limit_strategy(ds)
+          case filter_by_associations_limit_strategy
+          when :distinct_on
+            apply_filter_by_associations_distinct_on_limit_strategy(ds)
+          when :window_function
+            apply_filter_by_associations_window_function_limit_strategy(ds)
+          else
+            ds
+          end
+        end
+
+        # Apply a distinct on eager limit strategy using IN with a subquery
+        # that uses DISTINCT ON to ensure only the first matching record for
+        # each key is included.
+        def apply_filter_by_associations_distinct_on_limit_strategy(ds)
+          k = filter_by_associations_limit_key 
+          ds.where(k=>apply_distinct_on_eager_limit_strategy(associated_eager_dataset.select(*k)))
+        end
+
+        # Apply a window function eager limit strategy using IN with a subquery
+        # that uses a filter on the row_number window function to ensure
+        # that only rows inside the limit are returned.
+        def apply_filter_by_associations_window_function_limit_strategy(ds)
+          ds.where(filter_by_associations_limit_key=>apply_window_function_eager_limit_strategy(associated_eager_dataset.select(*filter_by_associations_limit_alias_key)).select(*filter_by_associations_limit_aliases))
+        end
+
+        # The associated_dataset with the eager_block callback already applied.
+        def associated_eager_dataset
+          cached_fetch(:associated_eager_dataset) do
+            ds = associated_dataset.unlimited
+            if block = self[:eager_block]
+              ds = block.call(ds)
+            end
+            ds
+          end
+        end
+
+        # The dataset to use for eager loading associated objects for multiple current objects,
+        # given the hash passed to the eager loader.
+        def eager_loading_dataset(eo=OPTS)
+          ds = eo[:dataset] || associated_eager_dataset
+          if id_map = eo[:id_map]
+            ds = ds.where(eager_loading_predicate_condition(id_map.keys))
+          end
+          if associations = eo[:associations]
+            ds = ds.eager(associations)
+          end
+          if block = eo[:eager_block]
+            ds = block.call(ds)
+          end
+          if eager_loading_use_associated_key?
+            ds = ds.select_append(*associated_key_array)
+          end
+          if self[:eager_graph]
+            raise(Error, "cannot eagerly load a #{self[:type]} association that uses :eager_graph") if eager_loading_use_associated_key?
+            ds = ds.eager_graph(self[:eager_graph])
+          end
+          ds
+        end
+
+        # The default eager limit strategy to use for this association
+        def default_eager_limit_strategy
+          self[:model].default_eager_limit_strategy || :ruby
+        end
+
+        # The predicate condition to use for the eager_loader.
+        def eager_loading_predicate_condition(keys)
+          {predicate_key=>keys}
+        end
+
+        # Add conditions to the dataset to not include NULL values for
+        # the associated keys, and select those keys.
+        def filter_by_associations_add_conditions_dataset_filter(ds)
+          k = filter_by_associations_conditions_associated_keys
+          ds.select(*k).where(Sequel.negate(k.zip([])))
+        end
+
+        # The conditions to add to the filter by associations conditions
+        # subquery to restrict it to the object(s) used as the
+        # filter value.
+        def filter_by_associations_conditions_subquery_conditions(obj)
+          key = qualify(associated_class.table_name, associated_class.primary_key)
+          case obj
+          when Array
+            {key=>obj.map{|o| o.pk}}
+          when Sequel::Dataset
+            {key=>obj.select(*Array(qualify(associated_class.table_name, associated_class.primary_key)))}
+          else
+            Array(key).zip(Array(obj.pk))
+          end
+        end
+
+        # The base dataset to use for the filter by associations conditions
+        # subquery, regardless of the objects that are passed in as filter
+        # values.
+        def filter_by_associations_conditions_dataset
+          cached_fetch(:filter_by_associations_conditions_dataset) do
+            ds = associated_eager_dataset.unordered
+            ds = filter_by_associations_add_conditions_dataset_filter(ds)
+            ds = apply_filter_by_associations_limit_strategy(ds)
+            ds
+          end
+        end
+
+        # The strategy to use to filter by a limited association
+        def filter_by_associations_limit_strategy
+          v = fetch(:filter_limit_strategy, self[:eager_limit_strategy])
+          if v || self[:limit] || !returns_array?
+            case v ||= self[:model].default_eager_limit_strategy
+            when :union, :ruby
+              # Can't use a union or ruby-based strategy for filtering by associations, switch to default eager graph limit
+              # strategy.
+              true_eager_graph_limit_strategy
+            when Symbol
+              v
+            when true
+              true_eager_graph_limit_strategy
+            end
+          end
+        end
+
+        # Whether to limit the associated dataset to a single row.
+        def limit_to_single_row?
+          !returns_array?
+        end
+        
+        # Any offset to use for this association (or nil if there is no offset).
+        def offset
+          limit_and_offset.last
+        end
+
+        # A placeholder literalizer used to speed up eager loading.
+        def placeholder_eager_loader
+          cached_fetch(:placeholder_eager_loader) do
+            Sequel::Dataset::PlaceholderLiteralizer.loader(associated_dataset) do |pl, ds|
+              apply_eager_limit_strategy(eager_loading_dataset.where(predicate_key=>pl.arg), eager_limit_strategy)
+            end
+          end
+        end
+
+        # Whether the given association reflection is a possible reciprocal
+        # association for the current association reflection.
         def reciprocal_association?(assoc_reflect)
           Array(reciprocal_type).include?(assoc_reflect[:type]) &&
             assoc_reflect.associated_class == self[:model] &&
@@ -305,11 +655,60 @@ module Sequel
             assoc_reflect[:block].nil?
         end
     
+        # The number of subqueries to use in each union query, used to eagerly load
+        # limited associations.  Defaults to 40; the optimal number depends on the
+        # latency between the database and the application.
+        def subqueries_per_union
+          self[:subqueries_per_union] || 40
+        end
+
         # If +s+ is an array, map +s+ over the block.  Otherwise, just call the
         # block with +s+.
         def transform(s)
           s.is_a?(Array) ? s.map(&Proc.new) : (yield s)
         end
+
+        # The eager limit strategy to use when true is given as the value; defaults to
+        # :union, as that is the fastest strategy if the appropriate keys are indexed.
+        def true_eager_limit_strategy
+          if self[:eager_graph] || (offset && !associated_dataset.supports_offsets_in_correlated_subqueries?)
+            # An SQL-based approach won't work if you are also eager graphing,
+            # so use a ruby based approach in that case.
+            :ruby
+          else
+            :union 
+          end
+        end
+
+        # The eager_graph limit strategy used when true is given as the value, choosing the
+        # best strategy based on what the database supports.
+        def true_eager_graph_limit_strategy
+          if associated_class.dataset.supports_window_functions?
+            :window_function
+          else
+            :ruby
+          end
+        end
+
+        # A placeholder literalizer used to speed up the creation of union queries when eager
+        # loading a limited association.
+        def union_eager_loader
+          cached_fetch(:union_eager_loader) do
+            Sequel::Dataset::PlaceholderLiteralizer.loader(associated_dataset) do |pl, ds|
+              keys = predicate_keys
+              ds = ds.where(keys.map{pl.arg}.zip(keys))
+              if eager_loading_use_associated_key?
+                ds = ds.select_append(*associated_key_array)
+              end
+              ds.from_self
+            end
+          end
+        end
+
+        # Whether the placeholder loader can be used to load the association.
+        def use_placeholder_loader?
+          !self[:instance_specific] && !self[:eager_graph]
+        end
       end
     
       class ManyToOneAssociationReflection < AssociationReflection
@@ -338,11 +737,21 @@ module Sequel
           self[:key].nil?
         end
     
+        # many_to_one associations don't need an eager_graph limit strategy
+        def eager_graph_limit_strategy(_)
+          nil
+        end
+
         # many_to_one associations don't need an eager limit strategy
         def eager_limit_strategy
           nil
         end
 
+        # many_to_one associations don't need a filter by associations limit strategy
+        def filter_by_associations_limit_strategy
+          nil
+        end
+
         # The expression to use on the left hand side of the IN lookup when eager loading
         def predicate_key
           cached_fetch(:predicate_key){qualified_primary_key}
@@ -395,6 +804,24 @@ module Sequel
     
         private
     
+        def filter_by_associations_conditions_associated_keys
+          qualify(associated_class.table_name, primary_keys)
+        end
+
+        def filter_by_associations_conditions_key
+          qualify(self[:model].table_name, self[:key_column])
+        end
+
+        # many_to_one associations do not need to be limited to a single row if they
+        # explicitly do not have a key.
+        def limit_to_single_row?
+          super && self[:key]
+        end
+        
+        def predicate_key_methods
+          self[:keys]
+        end
+    
         def reciprocal_association?(assoc_reflect)
           super && self[:keys] == assoc_reflect[:keys] && primary_key == assoc_reflect.primary_key
         end
@@ -409,6 +836,16 @@ module Sequel
       class OneToManyAssociationReflection < AssociationReflection
         ASSOCIATION_TYPES[:one_to_many] = self
         
+        # Support a correlated subquery limit strategy when using eager_graph.
+        def apply_eager_graph_limit_strategy(strategy, ds)
+          case strategy
+          when :correlated_subquery
+            apply_correlated_subquery_limit_strategy(ds)
+          else
+            super
+          end
+        end
+
         # The keys in the associated model's table related to this association
         def associated_object_keys
           self[:keys]
@@ -420,18 +857,28 @@ module Sequel
           !self[:primary_keys].any?{|k| obj.send(k).nil?}
         end
 
+        # one_to_many and one_to_one associations can be cloned from each other
+        def cloneable?(ref)
+          ref[:type] == :one_to_many || ref[:type] == :one_to_one
+        end
+
         # Default foreign key name symbol for key in associated table that points to
         # current table's primary key.
         def default_key
           :"#{underscore(demodulize(self[:model].name))}_id"
         end
-        
+
+        # Handle silent failure of add/remove methods if raise_on_save_failure is false.
+        def handle_silent_modification_failure?
+          self[:raise_on_save_failure] == false
+        end
+
         # The hash key to use for the eager loading predicate (left side of IN (1, 2, 3))
         def predicate_key
           cached_fetch(:predicate_key){qualify_assoc(self[:key])}
         end
         alias qualified_key predicate_key
-    
+
         # The column in the current table that the key in the associated table references.
         def primary_key
           self[:primary_key]
@@ -465,6 +912,57 @@ module Sequel
     
         private
     
+        # Use a correlated subquery to limit the dataset.  Note that this will not
+        # work correctly if the associated dataset uses qualified identifiers in the WHERE clause,
+        # as they would reference the containing query instead of the subquery.
+        def apply_correlated_subquery_limit_strategy(ds)
+          table = ds.first_source_table
+          table_alias = ds.first_source_alias
+          primary_key = associated_class.primary_key
+          key = self[:key]
+          cs_alias = :t1
+          cs = associated_dataset.
+            from(Sequel.as(table, :t1)).
+            select(*qualify(cs_alias, primary_key)).
+            where(Array(qualify(cs_alias, key)).zip(Array(qualify(table_alias, key)))).
+            limit(*limit_and_offset)
+          ds.where(qualify(table_alias, primary_key)=>cs)
+        end
+
+        # Support correlated subquery strategy when filtering by limited associations.
+        def apply_filter_by_associations_limit_strategy(ds)
+          case filter_by_associations_limit_strategy
+          when :correlated_subquery
+            apply_correlated_subquery_limit_strategy(ds)
+          else
+            super
+          end
+        end
+
+        def filter_by_associations_conditions_associated_keys
+          qualify(associated_class.table_name, self[:keys])
+        end
+
+        def filter_by_associations_conditions_key
+          qualify(self[:model].table_name, self[:primary_key_column])
+        end
+
+        def filter_by_associations_limit_alias_key
+          Array(filter_by_associations_limit_key)
+        end
+
+        def filter_by_associations_limit_aliases
+          filter_by_associations_limit_alias_key.map{|v| v.column}
+        end
+
+        def filter_by_associations_limit_key
+          qualify(associated_class.table_name, associated_class.primary_key)
+        end
+
+        def predicate_key_methods
+          self[:primary_keys]
+        end
+    
         def reciprocal_association?(assoc_reflect)
           super && self[:keys] == assoc_reflect[:keys] && primary_key == assoc_reflect.primary_key
         end
@@ -473,47 +971,78 @@ module Sequel
         def reciprocal_type
           :many_to_one
         end
-      end
-      
-      class OneToOneAssociationReflection < OneToManyAssociationReflection
-        ASSOCIATION_TYPES[:one_to_one] = self
-        
-        # one_to_one associations don't use an eager limit strategy by default, but
-        # support both DISTINCT ON and window functions as strategies.
-        def eager_limit_strategy
-          cached_fetch(:_eager_limit_strategy) do
-            offset = limit_and_offset.last
-            case s = self.fetch(:eager_limit_strategy){(self[:model].default_eager_limit_strategy || :ruby) if offset}
-            when Symbol
-              s
-            when true
-              ds = associated_class.dataset
-              if ds.supports_ordered_distinct_on? && offset.nil?
-                :distinct_on
-              elsif ds.supports_window_functions?
-                :window_function
-              else
-                :ruby
-              end
-            else
-              nil
-            end
+
+        # Support automatic use of correlated subqueries if the :ruby option is the best available option,
+        # MySQL is not being used, and either the associated class has a non-composite primary key
+        # or the database supports multiple columns in IN.
+        def true_eager_graph_limit_strategy
+          r = super
+          ds = associated_dataset
+          if r == :ruby && ds.supports_limits_in_correlated_subqueries? && (Array(associated_class.primary_key).length == 1 || ds.supports_multiple_column_in?) && (!offset || ds.supports_offsets_in_correlated_subqueries?)
+            :correlated_subquery
+          else
+            r
           end
         end
+      end
 
-        # The limit and offset for this association (returned as a two element array).
+      # Methods that turn an association that returns multiple objects into an association that
+      # returns a single object.
+      module SingularAssociationReflection
+        # Singular associations do not assign singular if they are using the ruby eager limit strategy
+        # and have a slice range, since they need to store the array of associated objects in order to
+        # pick the correct one with an offset.
+        def assign_singular?
+          super && (eager_limit_strategy != :ruby || !slice_range)
+        end
+
+        # Add conditions when filtering by singular associations with orders, since the
+        # underlying relationship is probably not one-to-one.
+        def filter_by_associations_add_conditions?
+          super || self[:order] || self[:eager_limit_strategy] || self[:filter_limit_strategy]
+        end
+
+        # Make sure singular associations always have 1 as the limit
         def limit_and_offset
-          if (v = self[:limit]).is_a?(Array)
-            v
+          r = super
+          if r.first == 1
+            r
           else
-            [v, nil]
+            [1, r[1]]
           end
         end
 
-        # one_to_one associations return a single object, not an array
+        # Singular associations always return a single object, not an array.
         def returns_array?
           false
         end
+
+        private
+
+        # Only use an eager limit strategy by default if there is an offset or an order.
+        def default_eager_limit_strategy
+          super if self[:order] || offset
+        end
+
+        # Use a strategy for filtering by associations if there is an order or an offset,
+        # or a specific limiting strategy has been specified.
+        def filter_by_associations_limit_strategy
+          super if self[:order] || offset || self[:eager_limit_strategy] || self[:filter_limit_strategy]
+        end
+
+        # Use the DISTINCT ON eager limit strategy when true is given, if the database supports it.
+        def true_eager_graph_limit_strategy
+          if associated_class.dataset.supports_ordered_distinct_on? && !offset
+            :distinct_on
+          else
+            super
+          end
+        end
+      end
+      
+      class OneToOneAssociationReflection < OneToManyAssociationReflection
+        ASSOCIATION_TYPES[:one_to_one] = self
+        include SingularAssociationReflection
       end
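
A sketch of the kind of singular association these reflections now handle; Artist/Album and the ordering column are hypothetical:

    class Artist < Sequel::Model
      # one_to_one with an order: picks a single album per artist, which is why
      # an eager limit strategy is used by default when :order is given.
      one_to_one :latest_album, :class=>:Album, :key=>:artist_id,
        :order=>Sequel.desc(:release_date)
    end

    Artist.eager(:latest_album).all
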
     
       class ManyToManyAssociationReflection < AssociationReflection
@@ -524,6 +1053,17 @@ module Sequel
           self[:left_key_alias]
         end
 
+        # Array of associated keys used when eagerly loading.
+        def associated_key_array
+          cached_fetch(:associated_key_array) do
+            if self[:uses_left_composite_keys]
+              associated_key_alias.zip(predicate_keys).map{|a, k| SQL::AliasedExpression.new(k, a)}
+            else
+              [SQL::AliasedExpression.new(predicate_key, associated_key_alias)]
+            end
+          end
+        end
+
         # The column to use for the associated key when eagerly loading
         def associated_key_column
           self[:left_key]
@@ -540,12 +1080,47 @@ module Sequel
           !self[:left_primary_keys].any?{|k| obj.send(k).nil?}
         end
 
+        # one_through_one and many_to_many associations can be cloned from each other
+        def cloneable?(ref)
+          ref[:type] == :many_to_many || ref[:type] == :one_through_one
+        end
+
         # The default associated key alias(es) to use when eager loading
         # associations via eager.
         def default_associated_key_alias
           self[:uses_left_composite_keys] ? (0...self[:left_keys].length).map{|i| :"x_foreign_key_#{i}_x"} : :x_foreign_key_x
         end
       
+        # The default eager loader used if the user doesn't override it.  Extracted
+        # to a method so the code can be shared with the many_through_many plugin.
+        def default_eager_loader(eo)
+          h = eo[:id_map]
+          assign_singular = assign_singular?
+          delete_rn = delete_row_number_column
+          uses_lcks = self[:uses_left_composite_keys]
+          left_key_alias = self[:left_key_alias]
+          name = self[:name]
+
+          self[:model].eager_load_results(self, eo) do |assoc_record|
+            assoc_record.values.delete(delete_rn) if delete_rn
+            hash_key = if uses_lcks
+              left_key_alias.map{|k| assoc_record.values.delete(k)}
+            else
+              assoc_record.values.delete(left_key_alias)
+            end
+            next unless objects = h[hash_key]
+            if assign_singular
+              objects.each do |object| 
+                object.associations[name] ||= assoc_record
+              end
+            else
+              objects.each do |object|
+                object.associations[name].push(assoc_record)
+              end
+            end
+          end
+        end
+
         # Default name symbol for the join table.
         def default_join_table
           [self[:class_name], self[:model].name].map{|i| underscore(pluralize(demodulize(i)))}.sort.join('_').to_sym
@@ -636,6 +1211,35 @@ module Sequel
 
         private
 
+        def _associated_dataset
+          super.inner_join(self[:join_table], self[:right_keys].zip(right_primary_keys), :qualify=>:deep)
+        end
+
+        def filter_by_associations_conditions_associated_keys
+          qualify(join_table_alias, self[:left_keys])
+        end
+
+        def filter_by_associations_conditions_key
+          qualify(self[:model].table_name, self[:left_primary_key_column])
+        end
+
+        def filter_by_associations_limit_alias_key
+          aliaz = 'a'
+          filter_by_associations_limit_key.map{|c| c.as(Sequel.identifier(aliaz = aliaz.next))}
+        end
+
+        def filter_by_associations_limit_aliases
+          filter_by_associations_limit_alias_key.map{|v| v.alias}
+        end
+
+        def filter_by_associations_limit_key
+          qualify(join_table_alias, self[:left_keys]) + Array(qualify(associated_class.table_name, associated_class.primary_key))
+        end
+
+        def predicate_key_methods
+          self[:left_primary_keys]
+        end
+    
         def reciprocal_association?(assoc_reflect)
           super && assoc_reflect[:left_keys] == self[:right_keys] &&
             assoc_reflect[:right_keys] == self[:left_keys] &&
@@ -654,6 +1258,22 @@ module Sequel
         end
       end
   
+      class OneThroughOneAssociationReflection < ManyToManyAssociationReflection
+        ASSOCIATION_TYPES[:one_through_one] = self
+        include SingularAssociationReflection
+        
+        # one_through_one associations should not singularize the association name when
+        # creating the foreign key.
+        def default_right_key
+          :"#{self[:name]}_id"
+        end
+      
+        # one_through_one associations have no reciprocals
+        def reciprocal
+          nil
+        end
+      end
+    
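
A sketch of declaring the new association type; Album/Artist and the albums_artists join table are hypothetical, matching the defaults this reflection derives:

    class Album < Sequel::Model
      # Uses an albums_artists join table with album_id/artist_id keys by default;
      # only a getter is created, no setter or add_/remove_ methods.
      one_through_one :artist
    end

    Album.first.artist   # => the single associated Artist, or nil
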
       # This module contains methods added to all association datasets
       module AssociationDatasetMethods
         # The model object that created the association dataset
@@ -710,8 +1330,8 @@ module Sequel
       # as a column, you will probably end up with an association that doesn't work, or a SystemStackError.
       #
       # For a more in depth general overview, as well as a reference guide,
-      # see the {Association Basics guide}[link:files/doc/association_basics_rdoc.html].
-      # For examples of advanced usage, see the {Advanced Associations guide}[link:files/doc/advanced_associations_rdoc.html].
+      # see the {Association Basics guide}[rdoc-ref:doc/association_basics.rdoc].
+      # For examples of advanced usage, see the {Advanced Associations guide}[rdoc-ref:doc/advanced_associations.rdoc].
       module ClassMethods
         # All association reflections defined for this model (default: {}).
         attr_reader :association_reflections
@@ -730,22 +1350,6 @@ module Sequel
           association_reflections.values
         end
         
-        # Given an association reflection and a dataset, apply the
-        # :select, :conditions, :order, :eager, :distinct, and :eager_block
-        # association options to the given dataset and return the dataset
-        # or a modified copy of it.
-        def apply_association_dataset_opts(opts, ds)
-          ds = ds.select(*opts.select) if opts.select
-          if c = opts[:conditions]
-            ds = (c.is_a?(Array) && !Sequel.condition_specifier?(c)) ? ds.where(*c) : ds.where(c)
-          end
-          ds = ds.order(*opts[:order]) if opts[:order]
-          ds = ds.eager(opts[:eager]) if opts[:eager]
-          ds = ds.distinct if opts[:distinct]
-          ds = opts[:eager_block].call(ds) if opts[:eager_block]
-          ds
-        end
-
         # Associates a related model with the current model. The following types are
         # supported:
         #
@@ -757,6 +1361,9 @@ module Sequel
         #                 model's primary key.   Each current model object can be associated with
         #                 more than one associated model objects.  Each associated model object
         #                 can be associated with only one current model object.
+        # :one_through_one :: Similar to many_to_many in terms of foreign keys, but only one object
+        #                     is associated to the current object through the association.
+        #                     Provides only getter methods, no setter or modification methods.
         # :one_to_one :: Similar to one_to_many in terms of foreign keys, but
         #                only one object is associated to the current object through the
         #                association.  The methods created are similar to many_to_one, except
@@ -789,11 +1396,11 @@ module Sequel
         #                before an item is set using the association setter method.
         # :cartesian_product_number :: the number of joins completed by this association that could cause more
         #                              than one row for each row in the current table (default: 0 for
-        #                              many_to_one and one_to_one associations, 1 for one_to_many and
-        #                              many_to_many associations).
+        #                              many_to_one, one_to_one, and one_through_one associations, 1
+        #                              for one_to_many and many_to_many associations).
         # :class :: The associated class or its name as a string or symbol. If not
         #           given, uses the association's name, which is camelized (and
-        #           singularized unless the type is :many_to_one or :one_to_one).  If this is specified
+        #           singularized unless the type is :many_to_one, :one_to_one, or :one_through_one).  If this is specified
         #           as a string or symbol, you must specify the full class name (e.g. "SomeModule::MyModel"). 
         # :clearer :: Proc used to define the private _remove_all_* method for doing the database work
         #             to remove all objects associated to the current object (*_to_many associations).
@@ -813,10 +1420,9 @@ module Sequel
         # :eager_graph :: The associations to eagerly load via +eager_graph+ when loading the associated object(s).
         #                 many_to_many associations with this option cannot be eagerly loaded via +eager+.
         # :eager_grapher :: A proc to use to implement eager loading via +eager_graph+, overriding the default.
-        #                   Takes an options hash with the entries :self (the receiver of the eager_graph call),
-        #                   :table_alias (the alias to use for table to graph into the association), :implicit_qualifier
-        #                   (the alias that was used for the current table), and possibly :eager_block (a callback
-        #                   proc accepting the associated dataset, for per-call customization).
+        #                   Takes an options hash with at least the entries :self (the receiver of the eager_graph call),
+        #                   :table_alias (the alias to use for table to graph into the association), and :implicit_qualifier
+        #                   (the alias that was used for the current table).
         #                   Should return a copy of the dataset with the association graphed into it.
         # :eager_limit_strategy :: Determines the strategy used for enforcing limits and offsets when eager loading
         #                          associations via the +eager+ method.  
@@ -830,6 +1436,9 @@ module Sequel
         # :eager_loader_key :: A symbol for the key column to use to populate the key_hash
         #                      for the eager loader.  Can be set to nil to not populate the key_hash.
         # :extend :: A module or array of modules to extend the dataset with.
+        # :filter_limit_strategy :: Determines the strategy used for enforcing limits and offsets when filtering by
+        #                           limited associations.  Possible options are :window_function, :distinct_on, or
+        #                           :correlated_subquery depending on association type and database type.
         # :graph_alias_base :: The base name to use for the table alias when eager graphing.  Defaults to the name
         #                      of the association.  If the alias name has already been used in the query, Sequel will create
         #                      a unique alias by appending a numeric suffix (e.g. alias_0, alias_1, ...) until the alias is
@@ -844,6 +1453,10 @@ module Sequel
         # :graph_only_conditions :: The conditions to use on the SQL join when eagerly loading
         #                           the association via +eager_graph+, instead of the default conditions specified by the
         #                           foreign/primary keys.  This option causes the :graph_conditions option to be ignored.
+        # :graph_order :: Override the order to use when using eager_graph, instead of the default order.  This should be used
+        #                 in the case where :order contains an identifier qualified by the table's name, which may not match
+        #                 the alias used when eager graphing.  By setting this to the unqualified identifier, it will be
+        #                 automatically qualified when using eager_graph.
         # :graph_select :: A column or array of columns to select from the associated table
         #                  when eagerly loading the association via +eager_graph+. Defaults to all
         #                  columns in the associated table.
@@ -857,7 +1470,8 @@ module Sequel
         # :order_eager_graph :: Whether to add the association's order to the graphed dataset's order when graphing
         #                       via +eager_graph+.  Defaults to true, so set to false to disable.
         # :read_only :: Do not add a setter method (for many_to_one or one_to_one associations),
-        #               or add_/remove_/remove_all_ methods (for one_to_many and many_to_many associations).
+        #               or add_/remove_/remove_all_ methods (for one_to_many and many_to_many associations). Always
+        #               true for one_through_one associations.
         # :reciprocal :: the symbol name of the reciprocal association,
         #                if it exists.  By default, Sequel will try to determine it by looking at the
         #                associated model's associations for an association that matches
@@ -872,6 +1486,8 @@ module Sequel
         #            the same name in both the join table and the associated table.
         # :setter :: Proc used to define the private _*= method for doing the work to set up the association
         #            between the given object and the current object (*_to_one associations).
+        # :subqueries_per_union :: The number of subqueries to use in each UNION query, for eager
+        #                          loading limited associations using the default :union strategy.
         # :validate :: Set to false to not validate when implicitly saving any associated object.
         # === :many_to_one
         # :key :: foreign key in current model's table that references
@@ -879,7 +1495,7 @@ module Sequel
         #         array of symbols for a composite key association.
         # :key_column :: Similar to, and usually identical to, :key, but :key refers to the model method
         #                to call, where :key_column refers to the underlying column.  Should only be
-        #                used if the the model method differs from the foreign key column, in conjunction
+        #                used if the model method differs from the foreign key column, in conjunction
         #                with defining a model alias method for the key column.
         # :primary_key :: column in the associated table that :key option references, as a symbol.
         #                 Defaults to the primary key of the associated table. Can use an
@@ -902,9 +1518,11 @@ module Sequel
         #                 array of symbols for a composite key association.
         # :primary_key_column :: Similar to, and usually identical to, :primary_key, but :primary_key refers
         #                        to the model method call, where :primary_key_column refers to the underlying column.
-        #                        Should only be used if the the model method differs from the primary key column, in
+        #                        Should only be used if the model method differs from the primary key column, in
         #                        conjunction with defining a model alias method for the primary key column.
-        # === :many_to_many
+        # :raise_on_save_failure :: Set to false to not raise exceptions for hook or validation failures when saving
+        #                           associated objects in the add/remove methods (the methods return nil instead)
+        #                           [one_to_many only].
+        # === :many_to_many and :one_through_one
         # :graph_join_table_block :: The block to pass to +join_table+ for
         #                            the join table when eagerly loading the association via +eager_graph+.
         # :graph_join_table_conditions :: The additional conditions to use on the SQL join for
@@ -950,15 +1568,27 @@ module Sequel
 
           # dup early so we don't modify opts
           orig_opts = opts.dup
+
           if opts[:clone]
             cloned_assoc = association_reflection(opts[:clone])
-            raise(Error, "cannot clone an association to an association of different type (association #{name} with type #{type} cloning #{opts[:clone]} with type #{cloned_assoc[:type]})") unless cloned_assoc[:type] == type || [cloned_assoc[:type], type].all?{|t| [:one_to_many, :one_to_one].include?(t)}
             orig_opts = cloned_assoc[:orig_opts].merge(orig_opts)
           end
+
           opts = orig_opts.merge(:type => type, :name => name, :cache=>{}, :model => self)
           opts[:block] = block if block
+          if block || orig_opts[:block] || orig_opts[:dataset]
+            # It's possible the association is instance specific, in that it depends on
+            # values other than the foreign key value.  This needs to be checked for
+            # in certain places to disable optimizations.
+            opts[:instance_specific] = true
+          end
           opts = assoc_class.new.merge!(opts)
-          opts[:eager_block] = block unless opts.include?(:eager_block)
+
+          if opts[:clone] && !opts.cloneable?(cloned_assoc)
+            raise(Error, "cannot clone an association to an association of different type (association #{name} with type #{type} cloning #{opts[:clone]} with type #{cloned_assoc[:type]})")
+          end
+
+          opts[:eager_block] = opts[:block] unless opts.include?(:eager_block)
           if !opts.has_key?(:predicate_key) && opts.has_key?(:eager_loading_predicate_key)
             opts[:predicate_key] = opts[:eager_loading_predicate_key]
           end
@@ -976,11 +1606,12 @@ module Sequel
           
           # Remove :class entry if it exists and is nil, to work with cached_fetch
           opts.delete(:class) unless opts[:class]
-          
+
           send(:"def_#{type}", opts)
+          def_association_instance_methods(opts)
       
           orig_opts.delete(:clone)
-          orig_opts.merge!(:class_name=>opts[:class_name], :class=>opts[:class], :block=>block)
+          orig_opts.merge!(:class_name=>opts[:class_name], :class=>opts[:class], :block=>opts[:block])
           opts[:orig_opts] = orig_opts
           # don't add to association_reflections until we are sure there are no errors
           association_reflections[name] = opts
@@ -996,24 +1627,9 @@ module Sequel
           association_reflections.keys
         end
 
-        # Modify and return eager loading dataset based on association options.
-        def eager_loading_dataset(opts, ds, select, associations, eager_options=OPTS)
-          ds = apply_association_dataset_opts(opts, ds)
-          ds = ds.select(*select) if select
-          if opts[:eager_graph]
-            raise(Error, "cannot eagerly load a #{opts[:type]} association that uses :eager_graph") if opts.eager_loading_use_associated_key?
-            ds = ds.eager_graph(opts[:eager_graph])
-          end
-          ds = ds.eager(associations) unless Array(associations).empty?
-          ds = eager_options[:eager_block].call(ds) if eager_options[:eager_block]
-          if opts.eager_loading_use_associated_key?
-            ds = if opts[:uses_left_composite_keys]
-              ds.select_append(*opts.associated_key_alias.zip(opts.predicate_keys).map{|a, k| SQL::AliasedExpression.new(k, a)})
-            else
-              ds.select_append(SQL::AliasedExpression.new(opts.predicate_key, opts.associated_key_alias))
-            end
-          end
-          ds
+        # Eager load the association with the given eager loader options.
+        def eager_load_results(opts, eo, &block)
+          opts.eager_load_results(eo, &block)
         end
 
         # Shortcut for adding a many_to_many association, see #associate
@@ -1026,6 +1642,11 @@ module Sequel
           associate(:many_to_one, name, opts, &block)
         end
         
+        # Shortcut for adding a one_through_one association, see #associate.
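+        #
+        # A rough usage sketch (Album and Artist are hypothetical models, joined
+        # through a conventional albums_artists join table):
+        #
+        #   Album.one_through_one :artist
+        #   Album.first.artist   # getter only; no setter or modification methods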
+        def one_through_one(name, opts=OPTS, &block)
+          associate(:one_through_one, name, opts, &block)
+        end
+
         # Shortcut for adding a one_to_many association, see #associate
         def one_to_many(name, opts=OPTS, &block)
           associate(:one_to_many, name, opts, &block)
@@ -1037,29 +1658,10 @@ module Sequel
         end
 
         Plugins.inherited_instance_variables(self, :@association_reflections=>:dup, :@autoreloading_associations=>:hash_dup, :@default_eager_limit_strategy=>nil)
-        Plugins.def_dataset_methods(self, [:eager, :eager_graph])
+        Plugins.def_dataset_methods(self, [:eager, :eager_graph, :eager_graph_with_options, :association_join, :association_full_join, :association_inner_join, :association_left_join, :association_right_join])
         
         private
       
-        # Use a window function to limit the results of the eager loading dataset.
-        def apply_window_function_eager_limit_strategy(ds, opts)
-          rn = ds.row_number_column 
-          limit, offset = opts.limit_and_offset
-          ds = ds.unordered.select_append{row_number(:over, :partition=>opts.predicate_key, :order=>ds.opts[:order]){}.as(rn)}.from_self
-          ds = if opts[:type] == :one_to_one
-            ds.where(rn => offset ? offset+1 : 1)
-          elsif offset
-            offset += 1
-            if limit
-              ds.where(rn => (offset...(offset+limit))) 
-            else
-              ds.where{SQL::Identifier.new(rn) >= offset} 
-            end
-          else
-            ds.where{SQL::Identifier.new(rn) <= limit} 
-          end
-        end
-
         # The module to use for the association's methods.  Defaults to
         # the overridable_methods_module.
         def association_module(opts=OPTS)
@@ -1078,27 +1680,45 @@ module Sequel
           association_module_def(name, opts, &block)
           association_module(opts).send(:private, name)
         end
-      
-        # Add the add_ instance method 
-        def def_add_method(opts)
-          association_module_def(opts.add_method, opts){|o,*args| add_associated_object(opts, o, *args)}
-        end
-      
-        # Adds the association dataset methods to the association methods module.
-        def def_association_dataset_methods(opts)
-          association_module_def(opts.dataset_method, opts){_dataset(opts)}
-          def_association_method(opts)
-        end
 
         # Adds the association method to the association methods module.
         def def_association_method(opts)
           association_module_def(opts.association_method, opts){|*dynamic_opts, &block| load_associated_objects(opts, dynamic_opts[0], &block)}
         end
       
-        # Configures many_to_many association reflection and adds the related association methods
+        # Define all of the association instance methods for this association.
+        def def_association_instance_methods(opts)
+          association_module_def(opts.dataset_method, opts){_dataset(opts)}
+          def_association_method(opts)
+
+          return if opts[:read_only]
+
+          if opts[:setter] && opts[:_setter]
+            # This is backwards due to backwards compatibility
+            association_module_private_def(opts._setter_method, opts, &opts[:setter])
+            association_module_def(opts.setter_method, opts, &opts[:_setter])
+          end
+
+          if adder = opts[:adder]
+            association_module_private_def(opts._add_method, opts, &adder)
+            association_module_def(opts.add_method, opts){|o,*args| add_associated_object(opts, o, *args)}
+          end
+
+          if remover = opts[:remover]
+            association_module_private_def(opts._remove_method, opts, &remover)
+            association_module_def(opts.remove_method, opts){|o,*args| remove_associated_object(opts, o, *args)}
+          end
+
+          if clearer = opts[:clearer]
+            association_module_private_def(opts._remove_all_method, opts, &clearer)
+            association_module_def(opts.remove_all_method, opts){|*args| remove_all_associated_objects(opts, *args)}
+          end
+        end
+        
+        # Configures many_to_many and one_through_one association reflection and adds the related association methods
         def def_many_to_many(opts)
-          name = opts[:name]
-          model = self
+          one_through_one = opts[:type] == :one_through_one
+          opts[:read_only] = true if one_through_one
           left = (opts[:left_key] ||= opts.default_left_key)
           lcks = opts[:left_keys] = Array(left)
           right = (opts[:right_key] ||= opts.default_right_key)
@@ -1113,43 +1733,15 @@ module Sequel
             rcpks = Array(opts[:right_primary_key])
             raise(Error, "mismatched number of right keys: #{rcks.inspect} vs #{rcpks.inspect}") unless rcks.length == rcpks.length
           end
-          uses_lcks = opts[:uses_left_composite_keys] = lcks.length > 1
+          opts[:uses_left_composite_keys] = lcks.length > 1
           opts[:uses_right_composite_keys] = rcks.length > 1
-          opts[:cartesian_product_number] ||= 1
+          opts[:cartesian_product_number] ||= one_through_one ? 0 : 1
           join_table = (opts[:join_table] ||= opts.default_join_table)
-          left_key_alias = opts[:left_key_alias] ||= opts.default_associated_key_alias
-          graph_jt_conds = opts[:graph_join_table_conditions] = opts.fetch(:graph_join_table_conditions, []).to_a
+          opts[:left_key_alias] ||= opts.default_associated_key_alias
           opts[:graph_join_table_join_type] ||= opts[:graph_join_type]
           opts[:after_load].unshift(:array_uniq!) if opts[:uniq]
-          slice_range = opts.slice_range
-          opts[:dataset] ||= proc{opts.associated_dataset.inner_join(join_table, rcks.zip(opts.right_primary_keys) + opts.predicate_keys.zip(lcpks.map{|k| send(k)}), :qualify=>:deep)}
-
-          opts[:eager_loader] ||= proc do |eo|
-            h = eo[:id_map]
-            rows = eo[:rows]
-            rows.each{|object| object.associations[name] = []}
-            r = rcks.zip(opts.right_primary_keys)
-            l = [[opts.predicate_key, h.keys]]
-            ds = model.eager_loading_dataset(opts, opts.associated_class.inner_join(join_table, r + l, :qualify=>:deep), nil, eo[:associations], eo)
-            if opts.eager_limit_strategy == :window_function
-              delete_rn = true
-              rn = ds.row_number_column
-              ds = apply_window_function_eager_limit_strategy(ds, opts)
-            end
-            ds.all do |assoc_record|
-              assoc_record.values.delete(rn) if delete_rn
-              hash_key = if uses_lcks
-                left_key_alias.map{|k| assoc_record.values.delete(k)}
-              else
-                assoc_record.values.delete(left_key_alias)
-              end
-              next unless objects = h[hash_key]
-              objects.each{|object| object.associations[name].push(assoc_record)}
-            end
-            if opts.eager_limit_strategy == :ruby
-              rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []}
-            end
-          end
+          opts[:dataset] ||= opts.association_dataset_proc
+          opts[:eager_loader] ||= opts.method(:default_eager_loader)
           
           join_type = opts[:graph_join_type]
           select = opts[:graph_select]
@@ -1157,46 +1749,50 @@ module Sequel
           only_conditions = opts[:graph_only_conditions]
           conditions = opts[:graph_conditions]
           graph_block = opts[:graph_block]
+          graph_jt_conds = opts[:graph_join_table_conditions] = opts.fetch(:graph_join_table_conditions, []).to_a
           use_jt_only_conditions = opts.include?(:graph_join_table_only_conditions)
           jt_only_conditions = opts[:graph_join_table_only_conditions]
           jt_join_type = opts[:graph_join_table_join_type]
           jt_graph_block = opts[:graph_join_table_block]
           opts[:eager_grapher] ||= proc do |eo|
             ds = eo[:self]
-            ds = ds.graph(join_table, use_jt_only_conditions ? jt_only_conditions : lcks.zip(lpkcs) + graph_jt_conds, :select=>false, :table_alias=>ds.unused_table_alias(join_table, [eo[:table_alias]]), :join_type=>jt_join_type, :implicit_qualifier=>eo[:implicit_qualifier], :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master], &jt_graph_block)
-            ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : opts.right_primary_keys.zip(rcks) + conditions, :select=>select, :table_alias=>eo[:table_alias], :qualify=>:deep, :join_type=>join_type, &graph_block)
+            egls = eo[:limit_strategy]
+            if egls && egls != :ruby
+              associated_key_array = opts.associated_key_array
+              orig_egds = egds = eager_graph_dataset(opts, eo)
+              egds = egds.
+                inner_join(join_table, rcks.zip(opts.right_primary_keys) + graph_jt_conds, :qualify=>:deep).
+                select_all(egds.first_source).
+                select_append(*associated_key_array)
+              egds = opts.apply_eager_graph_limit_strategy(egls, egds)
+              ds.graph(egds, associated_key_array.map{|v| v.alias}.zip(lpkcs) + conditions, :qualify=>:deep, :table_alias=>eo[:table_alias], :implicit_qualifier=>eo[:implicit_qualifier], :join_type=>eo[:join_type]||join_type, :from_self_alias=>eo[:from_self_alias], :select=>select||orig_egds.columns, &graph_block)
+            else
+              ds = ds.graph(join_table, use_jt_only_conditions ? jt_only_conditions : lcks.zip(lpkcs) + graph_jt_conds, :select=>false, :table_alias=>ds.unused_table_alias(join_table, [eo[:table_alias]]), :join_type=>eo[:join_type]||jt_join_type, :implicit_qualifier=>eo[:implicit_qualifier], :qualify=>:deep, :from_self_alias=>eo[:from_self_alias], &jt_graph_block)
+              ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : opts.right_primary_keys.zip(rcks) + conditions, :select=>select, :table_alias=>eo[:table_alias], :qualify=>:deep, :join_type=>eo[:join_type]||join_type, &graph_block)
+            end
           end
       
-          def_association_dataset_methods(opts)
-      
-          return if opts[:read_only]
+          return if opts[:read_only] || one_through_one
       
-          adder = opts[:adder] || proc do |o|
+          opts[:adder] ||= proc do |o|
             h = {}
             lcks.zip(lcpks).each{|k, pk| h[k] = send(pk)}
             rcks.zip(opts.right_primary_key_methods).each{|k, pk| h[k] = o.send(pk)}
             _join_table_dataset(opts).insert(h)
           end
-          association_module_private_def(opts._add_method, opts, &adder) 
 
-          remover = opts[:remover] || proc do |o|
+          opts[:remover] ||= proc do |o|
             _join_table_dataset(opts).where(lcks.zip(lcpks.map{|k| send(k)}) + rcks.zip(opts.right_primary_key_methods.map{|k| o.send(k)})).delete
           end
-          association_module_private_def(opts._remove_method, opts, &remover)
 
-          clearer = opts[:clearer] || proc do
+          opts[:clearer] ||= proc do
             _join_table_dataset(opts).where(lcks.zip(lcpks.map{|k| send(k)})).delete
           end
-          association_module_private_def(opts._remove_all_method, opts, &clearer)
-      
-          def_add_method(opts)
-          def_remove_methods(opts)
         end
-        
+
         # Configures many_to_one association reflection and adds the related association methods
         def def_many_to_one(opts)
           name = opts[:name]
-          model = self
           opts[:key] = opts.default_key unless opts.has_key?(:key)
           key = opts[:key]
           opts[:eager_loader_key] = key unless opts.has_key?(:eager_loader_key)
@@ -1221,21 +1817,14 @@ module Sequel
             (auto_assocs[k] ||= []) << name
           end
 
-          opts[:dataset] ||= proc do
-            opts.associated_dataset.where(opts.predicate_keys.zip(cks.map{|k| send(k)}))
-          end
+          opts[:dataset] ||= opts.association_dataset_proc
           opts[:eager_loader] ||= proc do |eo|
             h = eo[:id_map]
-            keys = h.keys
-            # Default the cached association to nil, so any object that doesn't have it
-            # populated will have cached the negative lookup.
-            eo[:rows].each{|object| object.associations[name] = nil}
-            # Skip eager loading if no objects have a foreign key for this association
-            unless keys.empty?
-              klass = opts.associated_class
-              model.eager_loading_dataset(opts, klass.where(opts.predicate_key=>keys), nil, eo[:associations], eo).all do |assoc_record|
-                hash_key = uses_cks ? opts.primary_key_methods.map{|k| assoc_record.send(k)} : assoc_record.send(opts.primary_key_method)
-                next unless objects = h[hash_key]
+            pk_meths = opts.primary_key_methods
+
+            eager_load_results(opts, eo) do |assoc_record|
+              hash_key = uses_cks ? pk_meths.map{|k| assoc_record.send(k)} : assoc_record.send(opts.primary_key_method)
+              if objects = h[hash_key]
                 objects.each{|object| object.associations[name] = assoc_record}
               end
             end
@@ -1250,23 +1839,19 @@ module Sequel
           graph_cks = opts[:graph_keys]
           opts[:eager_grapher] ||= proc do |eo|
             ds = eo[:self]
-            ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : opts.primary_keys.zip(graph_cks) + conditions, eo.merge(:select=>select, :join_type=>join_type, :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master]), &graph_block)
+            ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : opts.primary_keys.zip(graph_cks) + conditions, eo.merge(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep, :from_self_alias=>eo[:from_self_alias]), &graph_block)
           end
       
-          def_association_dataset_methods(opts)
-          
           return if opts[:read_only]
       
-          setter = opts[:setter] || proc{|o| cks.zip(opts.primary_key_methods).each{|k, pk| send(:"#{k}=", (o.send(pk) if o))}}
-          association_module_private_def(opts._setter_method, opts, &setter)
-          association_module_def(opts.setter_method, opts){|o| set_associated_object(opts, o)}
+          opts[:setter] ||= proc{|o| cks.zip(opts.primary_key_methods).each{|k, pk| send(:"#{k}=", (o.send(pk) if o))}}
+          opts[:_setter] = proc{|o| set_associated_object(opts, o)}
         end
         
         # Configures one_to_many and one_to_one association reflections and adds the related association methods
         def def_one_to_many(opts)
           one_to_one = opts[:type] == :one_to_one
           name = opts[:name]
-          model = self
           key = (opts[:key] ||= opts.default_key)
           km = opts[:key_method] ||= opts[:key]
           cks = opts[:keys] = Array(key)
@@ -1278,35 +1863,15 @@ module Sequel
           pkcs = opts[:primary_key_columns] ||= Array(pkc)
           raise(Error, "mismatched number of keys: #{cks.inspect} vs #{cpks.inspect}") unless cks.length == cpks.length
           uses_cks = opts[:uses_composite_keys] = cks.length > 1
-          slice_range = opts.slice_range
-          opts[:dataset] ||= proc do
-            opts.associated_dataset.where(opts.predicate_keys.zip(cpks.map{|k| send(k)}))
-          end
+          opts[:dataset] ||= opts.association_dataset_proc
           opts[:eager_loader] ||= proc do |eo|
             h = eo[:id_map]
-            rows = eo[:rows]
             reciprocal = opts.reciprocal
-            klass = opts.associated_class
-            filter_keys = opts.predicate_key
-            ds = model.eager_loading_dataset(opts, klass.where(filter_keys=>h.keys), nil, eo[:associations], eo)
-            assign_singular = true if one_to_one 
-            case opts.eager_limit_strategy
-            when :distinct_on
-              ds = ds.distinct(*filter_keys).order_prepend(*filter_keys)
-            when :window_function
-              delete_rn = true
-              rn = ds.row_number_column
-              ds = apply_window_function_eager_limit_strategy(ds, opts)
-            when :ruby
-              assign_singular = false if one_to_one && slice_range
-            end
-            if assign_singular
-              rows.each{|object| object.associations[name] = nil}
-            else
-              rows.each{|object| object.associations[name] = []}
-            end
-            ds.all do |assoc_record|
-              assoc_record.values.delete(rn) if delete_rn
+            assign_singular = opts.assign_singular?
+            delete_rn = opts.delete_row_number_column
+
+            eager_load_results(opts, eo) do |assoc_record|
+              assoc_record.values.delete(delete_rn) if delete_rn
               hash_key = uses_cks ? km.map{|k| assoc_record.send(k)} : assoc_record.send(km)
               next unless objects = h[hash_key]
               if assign_singular
@@ -1323,15 +1888,6 @@ module Sequel
                 end
               end
             end
-            if opts.eager_limit_strategy == :ruby
-              if one_to_one
-                if slice_range
-                  rows.each{|o| o.associations[name] = o.associations[name][slice_range.begin]}
-                end
-              else
-                rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []}
-              end
-            end
           end
           
           join_type = opts[:graph_join_type]
@@ -1343,69 +1899,60 @@ module Sequel
           graph_block = opts[:graph_block]
           opts[:eager_grapher] ||= proc do |eo|
             ds = eo[:self]
-            ds = ds.graph(eager_graph_dataset(opts, eo), use_only_conditions ? only_conditions : cks.zip(pkcs) + conditions, eo.merge(:select=>select, :join_type=>join_type, :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master]), &graph_block)
+            ds = ds.graph(opts.apply_eager_graph_limit_strategy(eo[:limit_strategy], eager_graph_dataset(opts, eo)), use_only_conditions ? only_conditions : cks.zip(pkcs) + conditions, eo.merge(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep, :from_self_alias=>eo[:from_self_alias]), &graph_block)
             # We only load reciprocals for one_to_many associations, as other reciprocals don't make sense
             ds.opts[:eager_graph][:reciprocals][eo[:table_alias]] = opts.reciprocal
             ds
           end
       
-          def_association_dataset_methods(opts)
-          
+          return if opts[:read_only]
+
+          save_opts = {:validate=>opts[:validate]}
           ck_nil_hash ={}
           cks.each{|k| ck_nil_hash[k] = nil}
 
-          unless opts[:read_only]
-            validate = opts[:validate]
-
-            if one_to_one
-              setter = opts[:setter] || proc do |o|
-                up_ds = _apply_association_options(opts, opts.associated_dataset.where(cks.zip(cpks.map{|k| send(k)})))
-                if o
-                  up_ds = up_ds.exclude(o.pk_hash) unless o.new?
-                  cks.zip(cpks).each{|k, pk| o.send(:"#{k}=", send(pk))}
-                end
-                checked_transaction do
-                  up_ds.update(ck_nil_hash)
-                  o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save") if o
-                end
-              end
-              association_module_private_def(opts._setter_method, opts, &setter)
-              association_module_def(opts.setter_method, opts){|o| set_one_to_one_associated_object(opts, o)}
-            else 
-              adder = opts[:adder] || proc do |o|
+          if one_to_one
+            opts[:setter] ||= proc do |o|
+              up_ds = _apply_association_options(opts, opts.associated_dataset.where(cks.zip(cpks.map{|k| send(k)})))
+              if o
+                up_ds = up_ds.exclude(o.pk_hash) unless o.new?
                 cks.zip(cpks).each{|k, pk| o.send(:"#{k}=", send(pk))}
-                o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save")
               end
-              association_module_private_def(opts._add_method, opts, &adder)
-      
-              remover = opts[:remover] || proc do |o|
-                cks.each{|k| o.send(:"#{k}=", nil)}
-                o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save")
+              checked_transaction do
+                up_ds.update(ck_nil_hash)
+                o.save(save_opts) || raise(Sequel::Error, "invalid associated object, cannot save") if o
               end
-              association_module_private_def(opts._remove_method, opts, &remover)
+            end
+            opts[:_setter] = proc{|o| set_one_to_one_associated_object(opts, o)}
+          else 
+            save_opts[:raise_on_failure] = opts[:raise_on_save_failure] != false
 
-              clearer = opts[:clearer] || proc do
-                _apply_association_options(opts, opts.associated_dataset.where(cks.zip(cpks.map{|k| send(k)}))).update(ck_nil_hash)
-              end
-              association_module_private_def(opts._remove_all_method, opts, &clearer)
+            opts[:adder] ||= proc do |o|
+              cks.zip(cpks).each{|k, pk| o.send(:"#{k}=", send(pk))}
+              o.save(save_opts)
+            end
+    
+            opts[:remover] ||= proc do |o|
+              cks.each{|k| o.send(:"#{k}=", nil)}
+              o.save(save_opts)
+            end
 
-              def_add_method(opts)
-              def_remove_methods(opts)
+            opts[:clearer] ||= proc do
+              _apply_association_options(opts, opts.associated_dataset.where(cks.zip(cpks.map{|k| send(k)}))).update(ck_nil_hash)
             end
           end
         end
 
+        # Alias of def_many_to_many, since they share pretty much the same code.
+        def def_one_through_one(opts)
+          def_many_to_many(opts)
+        end
+        
         # Alias of def_one_to_many, since they share pretty much the same code.
         def def_one_to_one(opts)
           def_one_to_many(opts)
         end
         
-        # Add the remove_ and remove_all instance methods
-        def def_remove_methods(opts)
-          association_module_def(opts.remove_method, opts){|o,*args| remove_associated_object(opts, o, *args)}
-          association_module_def(opts.remove_all_method, opts){|*args| remove_all_associated_objects(opts, *args)}
-        end
-
         # Return dataset to graph into given the association reflection, applying the :callback option if set.
         def eager_graph_dataset(opts, eager_options)
           ds = opts.associated_class.dataset
@@ -1455,6 +2002,13 @@ module Sequel
           ds
         end
         
+        # A placeholder literalizer that can be used to load the association, or nil to not use one.
+        def _associated_object_loader(opts, dynamic_opts)
+          if !dynamic_opts[:callback] && (loader = opts.placeholder_loader)
+            loader
+          end
+        end
+
         # Return an association dataset for the given association reflection
         def _dataset(opts)
           raise(Sequel::Error, "model object #{inspect} does not have a primary key") if opts.dataset_need_primary_key? && !pk
@@ -1478,10 +2032,19 @@ module Sequel
           _load_associated_object_array(opts, dynamic_opts).first
         end
 
+        # Return the associated single object using a primary key lookup on the associated class.
+        def _load_associated_object_via_primary_key(opts)
+          opts.associated_class.send(:primary_key_lookup, ((fk = opts[:key]).is_a?(Array) ? fk.map{|c| send(c)} : send(fk)))
+        end
+
         # Load the associated objects for the given association reflection and dynamic options
         # as an array.
         def _load_associated_object_array(opts, dynamic_opts)
-          _associated_dataset(opts, dynamic_opts).all
+          if loader = _associated_object_loader(opts, dynamic_opts)
+            loader.all(*opts.predicate_key_values(self))
+          else
+            _associated_dataset(opts, dynamic_opts).all
+          end
         end
 
         # Return the associated objects from the dataset, without association callbacks, reciprocals, and caching.
@@ -1491,7 +2054,7 @@ module Sequel
             if opts.returns_array?
               _load_associated_object_array(opts, dynamic_opts)
             elsif load_with_primary_key_lookup?(opts, dynamic_opts)
-              opts.associated_class.send(:primary_key_lookup, ((fk = opts[:key]).is_a?(Array) ? fk.map{|c| send(c)} : send(fk)))
+              _load_associated_object_via_primary_key(opts)
             else
               _load_associated_object(opts, dynamic_opts)
             end
@@ -1516,10 +2079,10 @@ module Sequel
           elsif !o.is_a?(klass)
             raise(Sequel::Error, "associated object #{o.inspect} not of correct type #{klass}")
           end
-          raise(Sequel::Error, "model object #{inspect} does not have a primary key") unless pk
+          raise(Sequel::Error, "model object #{inspect} does not have a primary key") if opts.dataset_need_primary_key? && !pk
           ensure_associated_primary_key(opts, o, *args)
           return if run_association_callbacks(opts, :before_add, o) == false
-          send(opts._add_method, o, *args)
+          return if !send(opts._add_method, o, *args) && opts.handle_silent_modification_failure?
           if array = associations[opts[:name]] and !array.include?(o)
             array.push(o)
           end
@@ -1608,7 +2171,7 @@ module Sequel
         
         # Remove all associated objects from the given association
         def remove_all_associated_objects(opts, *args)
-          raise(Sequel::Error, "model object #{inspect} does not have a primary key") unless pk
+          raise(Sequel::Error, "model object #{inspect} does not have a primary key") if opts.dataset_need_primary_key? && !pk
           send(opts._remove_all_method, *args)
           ret = associations[opts[:name]].each{|o| remove_reciprocal_object(opts, o)} if associations.include?(opts[:name])
           associations[opts[:name]] = []
@@ -1625,10 +2188,10 @@ module Sequel
           elsif opts.remove_should_check_existing? && send(opts.dataset_method).where(o.pk_hash).empty?
             raise(Sequel::Error, "associated object #{o.inspect} is not currently associated to #{inspect}")
           end
-          raise(Sequel::Error, "model object #{inspect} does not have a primary key") unless pk
+          raise(Sequel::Error, "model object #{inspect} does not have a primary key") if opts.dataset_need_primary_key? && !pk
           raise(Sequel::Error, "associated object #{o.inspect} does not have a primary key") if opts.need_associated_primary_key? && !o.pk
           return if run_association_callbacks(opts, :before_remove, o) == false
-          send(opts._remove_method, o, *args)
+          return if !send(opts._remove_method, o, *args) && opts.handle_silent_modification_failure?
           associations[opts[:name]].delete_if{|x| o === x} if associations.include?(opts[:name])
           remove_reciprocal_object(opts, o)
           run_association_callbacks(opts, :after_remove, o)
@@ -1758,6 +2321,26 @@ module Sequel
       #   Artist.eager(:albums => {proc{|ds| ds.where{year > 1990}}=>{:tracks => :genre}})
       module DatasetMethods
         Sequel::Dataset.def_mutation_method(:eager, :eager_graph, :module=>self)
+
+        %w'inner left right full'.each do |type|
+          class_eval <<END, __FILE__, __LINE__+1
+            def association_#{type}_join(*associations)
+              _association_join(:#{type}, associations)
+            end
+END
+        end
+
+        # Adds one or more INNER JOINs to the existing dataset using the keys and conditions
+        # specified by the given association.  The following methods also exist for specifying
+        # a different type of JOIN:
+        #
+        # association_full_join :: FULL JOIN
+        # association_inner_join :: INNER JOIN
+        # association_left_join :: LEFT JOIN
+        # association_right_join :: RIGHT JOIN
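+        #
+        # A minimal sketch (assuming a hypothetical Album model with a
+        # many_to_one :artist association); the SQL shown is approximate:
+        #
+        #   Album.association_join(:artist)
+        #   # SELECT * FROM albums
+        #   # INNER JOIN artists AS artist ON (artist.id = albums.artist_id)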
+        def association_join(*associations)
+          association_inner_join(*associations)
+        end
       
         # If the expression is in the form <tt>x = y</tt> where +y+ is a <tt>Sequel::Model</tt>
         # instance, array of <tt>Sequel::Model</tt> instances, or a <tt>Sequel::Model</tt> dataset,
@@ -1835,23 +2418,12 @@ module Sequel
         # Each association's order, if defined, is respected.
         # If the association uses a block or has an :eager_block argument, it is used.
         def eager(*associations)
-          opt = @opts[:eager]
-          opt = opt ? opt.dup : {}
-          associations.flatten.each do |association|
-            case association
-            when Symbol
-              check_association(model, association)
-              opt[association] = nil
-            when Hash
-              association.keys.each{|assoc| check_association(model, assoc)}
-              opt.merge!(association)
-            else
-              raise(Sequel::Error, 'Associations must be in the form of a symbol or hash')
-            end
-          end
-          clone(:eager=>opt)
+          opts = @opts[:eager]
+          association_opts = eager_options_for_associations(associations)
+          opts = opts ? opts.merge(association_opts) : association_opts
+          clone(:eager=>opts)
         end
-      
+
         # The secondary eager loading method.  Loads all associations in a single query. This
         # method should only be used if you need to filter or order based on columns in associated tables.
         #
@@ -1865,7 +2437,7 @@ module Sequel
         # 
         # Each association's order, if defined, is respected. +eager_graph+ probably
         # won't work correctly on a limited dataset, unless you are
-        # only graphing many_to_one and one_to_one associations.
+        # only graphing many_to_one, one_to_one, and one_through_one associations.
         # 
         # Does not use the block defined for the association, since it does a single query for
         # all objects.  You can use the :graph_* association options to modify the SQL query.
@@ -1873,22 +2445,49 @@ module Sequel
         # Like +eager+, you need to call +all+ on the dataset for the eager loading to work.  If you just
         # call +each+, it will yield plain hashes, each containing all columns from all the tables.
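+        #
+        #   # For example (assuming Album has :artist and :tracks associations):
+        #   Album.eager_graph(:artist, :tracks).all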
         def eager_graph(*associations)
+          eager_graph_with_options(associations)
+        end
+
+        # Run eager_graph with some options specific to just this call. Unlike eager_graph, this takes
+        # the associations as a single argument instead of multiple arguments.
+        #
+        # Options:
+        #
+        # :join_type :: Override the join type specified in the association
+        # :limit_strategy :: Use a strategy for handling limits on associations.
+        #                    Appropriate :limit_strategy values are:
+        #                    true :: Pick the most appropriate based on what the database supports
+        #                    :distinct_on :: Force use of DISTINCT ON strategy (*_one associations only)
+        #                    :correlated_subquery :: Force use of correlated subquery strategy (one_to_* associations only)
+        #                    :window_function :: Force use of window function strategy
+        #                    :ruby :: Don't modify the SQL, implement limits/offsets with array slicing
+        #
+        #                    This can also be a hash with association name symbol keys and one of the above values,
+        #                    to use different strategies per association.
+        #
+        #                    The default is the :ruby strategy.  Choosing a different strategy can make your code
+        #                    significantly slower in some cases (perhaps even the majority of cases), so you should
+        #                    only use this if you have benchmarked that it is faster for your use cases.
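+        #
+        #   # A rough example (:best_5_tracks is a hypothetical one_to_many
+        #   # association defined with :limit=>5):
+        #   Album.eager_graph_with_options(:best_5_tracks, :limit_strategy=>true).all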
+        def eager_graph_with_options(associations, opts=OPTS)
+          associations = [associations] unless associations.is_a?(Array)
           if eg = @opts[:eager_graph]
             eg = eg.dup
-            [:requirements, :reflections, :reciprocals].each{|k| eg[k] = eg[k].dup}
+            [:requirements, :reflections, :reciprocals, :limits].each{|k| eg[k] = eg[k].dup}
+            eg[:local] = opts
             ds = clone(:eager_graph=>eg)
             ds.eager_graph_associations(ds, model, ds.opts[:eager_graph][:master], [], *associations)
           else
             # Each of the following have a symbol key for the table alias, with the following values: 
-            # :reciprocals - the reciprocal instance variable to use for this association
-            # :reflections - AssociationReflection instance related to this association
-            # :requirements - array of requirements for this association
-            ds = clone(:eager_graph=>{:requirements=>{}, :master=>alias_symbol(first_source), :reflections=>{}, :reciprocals=>{}, :cartesian_product_number=>0, :row_proc=>row_proc})
-            ds.eager_graph_associations(ds, model, ds.opts[:eager_graph][:master], [], *associations).
-              naked
+            # :reciprocals :: the reciprocal value to use for this association
+            # :reflections :: AssociationReflection instance related to this association
+            # :requirements :: array of requirements for this association
+            # :limits :: Any limit/offset array slicing that needs to be handled in ruby land after loading
+            opts = {:requirements=>{}, :master=>alias_symbol(first_source), :reflections=>{}, :reciprocals=>{}, :limits=>{}, :local=>opts, :cartesian_product_number=>0, :row_proc=>row_proc}
+            ds = clone(:eager_graph=>opts)
+            ds.eager_graph_associations(ds, model, ds.opts[:eager_graph][:master], [], *associations).naked
           end
         end
-        
+
         # Do not attempt to split the result set into associations,
         # just return results as simple objects.  This is useful if you
         # want to use eager_graph as a shortcut to have all of the joins
@@ -1918,7 +2517,7 @@ module Sequel
         # *associations :: any associations dependent on this one
         def eager_graph_association(ds, model, ta, requirements, r, *associations)
           if r.is_a?(SQL::AliasedExpression)
-            alias_base = r.aliaz
+            alias_base = r.alias
             r = r.expression
           else
             alias_base = r[:graph_alias_base]
@@ -1934,11 +2533,18 @@ module Sequel
               associations = assoc.is_a?(Array) ? assoc : [assoc]
             end
           end
-          ds = loader.call(:self=>ds, :table_alias=>assoc_table_alias, :implicit_qualifier=>ta, :callback=>callback)
-          ds = ds.order_more(*qualified_expression(r[:order], assoc_table_alias)) if r[:order] and r[:order_eager_graph]
+          local_opts = ds.opts[:eager_graph][:local]
+          limit_strategy = r.eager_graph_limit_strategy(local_opts[:limit_strategy])
+          ds = loader.call(:self=>ds, :table_alias=>assoc_table_alias, :implicit_qualifier=>(ta == ds.opts[:eager_graph][:master]) ? first_source : qualifier_from_alias_symbol(ta, first_source), :callback=>callback, :join_type=>local_opts[:join_type], :limit_strategy=>limit_strategy, :from_self_alias=>ds.opts[:eager_graph][:master])
+          if r[:order_eager_graph] && (order = r.fetch(:graph_order, r[:order]))
+            ds = ds.order_more(*qualified_expression(order, assoc_table_alias))
+          end
           eager_graph = ds.opts[:eager_graph]
           eager_graph[:requirements][assoc_table_alias] = requirements.dup
           eager_graph[:reflections][assoc_table_alias] = r
+          if limit_strategy == :ruby
+            eager_graph[:limits][assoc_table_alias] = r.limit_and_offset 
+          end
           eager_graph[:cartesian_product_number] += r[:cartesian_product_number] || 2
           ds = ds.eager_graph_associations(ds, r.associated_class, assoc_table_alias, requirements + [assoc_table_alias], *associations) unless associations.empty?
           ds
@@ -1978,6 +2584,42 @@ module Sequel
         end
       
         private
+
+        # Return a new dataset with JOINs of the given type added, using the tables and
+        # conditions specified by the associations.
+        def _association_join(type, associations)
+          clone(:join=>clone(:graph_from_self=>false).eager_graph_with_options(associations, :join_type=>type).opts[:join])
+        end
+
+        # If the association has conditions itself, then it requires additional filters be
+        # added to the current dataset to ensure that the passed in object would also be
+        # included by the association's conditions.
+        def add_association_filter_conditions(ref, obj, expr)
+          if expr != SQL::Constants::FALSE && ref.filter_by_associations_add_conditions?
+            Sequel.expr(ref.filter_by_associations_conditions_expression(obj))
+          else
+            expr
+          end
+        end
+
+        # Process the array of associations arguments (Symbols, Arrays, and Hashes),
+        # and return a hash of options suitable for cascading.
+        def eager_options_for_associations(associations)
+          opts = {}
+          associations.flatten.each do |association|
+            case association
+            when Symbol
+              check_association(model, association)
+              opts[association] = nil
+            when Hash
+              association.keys.each{|assoc| check_association(model, assoc)}
+              opts.merge!(association)
+            else
+              raise(Sequel::Error, 'Associations must be in the form of a symbol or hash')
+            end
+          end
+          opts
+        end
       
         # Return an expression for filtering by the given association reflection and associated object.
         def association_filter_expression(op, ref, obj)
@@ -2033,7 +2675,7 @@ module Sequel
         # per-call determining of the alias base.
         def eager_graph_check_association(model, association)
           if association.is_a?(SQL::AliasedExpression)
-            SQL::AliasedExpression.new(check_association(model, association.expression), association.aliaz)
+            SQL::AliasedExpression.new(check_association(model, association.expression), association.alias)
           else
             check_association(model, association)
           end
@@ -2105,13 +2747,15 @@ module Sequel
             ref.right_primary_key_methods
           end
 
-          exp = association_filter_key_expression(ref.qualify(jt, rks), meths, obj)
-          if exp == SQL::Constants::FALSE
-            association_filter_handle_inversion(op, exp, Array(lpks))
-          else
-            association_filter_handle_inversion(op, SQL::BooleanExpression.from_value_pairs(lpks=>model.db.from(ref[:join_table]).select(*ref.qualify(jt, lks)).where(exp).exclude(SQL::BooleanExpression.from_value_pairs(ref.qualify(jt, lks).zip([]), :OR))), Array(lpks))
+          expr = association_filter_key_expression(ref.qualify(jt, rks), meths, obj)
+          unless expr == SQL::Constants::FALSE
+            expr = SQL::BooleanExpression.from_value_pairs(lpks=>model.db.from(ref[:join_table]).select(*ref.qualify(jt, lks)).where(expr).exclude(SQL::BooleanExpression.from_value_pairs(ref.qualify(jt, lks).zip([]), :OR)))
+            expr = add_association_filter_conditions(ref, obj, expr)
           end
+
+          association_filter_handle_inversion(op, expr, Array(lpks))
         end
+        alias one_through_one_association_filter_expression many_to_many_association_filter_expression
 
         # Return a simple equality expression for filtering by a many_to_one association
         def many_to_one_association_filter_expression(op, ref, obj)
@@ -2121,7 +2765,10 @@ module Sequel
           else
             ref.primary_key_methods
           end
-          association_filter_handle_inversion(op, association_filter_key_expression(keys, meths, obj), keys)
+
+          expr = association_filter_key_expression(keys, meths, obj)
+          expr = add_association_filter_conditions(ref, obj, expr)
+          association_filter_handle_inversion(op, expr, keys)
         end
 
         # Return a simple equality expression for filtering by a one_to_* association
@@ -2132,7 +2779,10 @@ module Sequel
           else
             ref[:key_methods]
           end
-          association_filter_handle_inversion(op, association_filter_key_expression(keys, meths, obj), keys)
+
+          expr = association_filter_key_expression(keys, meths, obj)
+          expr = add_association_filter_conditions(ref, obj, expr)
+          association_filter_handle_inversion(op, expr, keys)
         end
         alias one_to_one_association_filter_expression one_to_many_association_filter_expression
 
@@ -2140,7 +2790,7 @@ module Sequel
         # and/or load other associations if #eager was used.
         def post_load(all_records)
           eager_graph_build_associations(all_records) if @opts[:eager_graph]
-          eager_load(all_records) if @opts[:eager]
+          eager_load(all_records) if @opts[:eager] && (row_proc || @opts[:eager_graph])
           super
         end
       end
@@ -2200,17 +2850,20 @@ module Sequel
           requirements = eager_graph[:requirements]
           reflection_map = @reflection_map = eager_graph[:reflections]
           reciprocal_map = @reciprocal_map = eager_graph[:reciprocals]
+          limit_map = @limit_map = eager_graph[:limits]
           @unique = eager_graph[:cartesian_product_number] > 1
       
           alias_map = @alias_map = {}
           type_map = @type_map = {}
           after_load_map = @after_load_map = {}
-          limit_map = @limit_map = {}
           reflection_map.each do |k, v|
             alias_map[k] = v[:name]
-            type_map[k] = v.returns_array?
             after_load_map[k] = v[:after_load] unless v[:after_load].empty?
-            limit_map[k] = v.limit_and_offset if v[:limit]
+            type_map[k] = if v.returns_array?
+              true
+            elsif (limit_and_offset = limit_map[k]) && !limit_and_offset.last.nil?
+              :offset
+            end
           end
 
           # Make dependency map hash out of requirements array for each association.
@@ -2419,9 +3072,14 @@ module Sequel
                 if lo = limit_map[ta]
                   limit, offset = lo
                   offset ||= 0
-                  list.replace(list[(offset)..(limit ? (offset)+limit-1 : -1)])
+                  if type_map[ta] == :offset
+                    [record.associations[assoc_name] = list[offset]]
+                  else
+                    list.replace(list[(offset)..(limit ? (offset)+limit-1 : -1)] || [])
+                  end
+                else
+                  list
                 end
-                list
               elsif list
                 [list]
               else
diff --git a/lib/sequel/model/base.rb b/lib/sequel/model/base.rb
index cc640fa..c3a038d 100644
--- a/lib/sequel/model/base.rb
+++ b/lib/sequel/model/base.rb
@@ -41,8 +41,8 @@ module Sequel
       attr_reader :primary_key
   
       # Whether to raise an error instead of returning nil on a failure
-      # to save/create/save_changes/etc due to a validation failure or
-      # a before_* hook returning false.
+      # to save/create/save_changes/update/destroy due to a validation failure or
+      # a before_* hook returning false (default: true). 
       attr_accessor :raise_on_save_failure
   
       # Whether to raise an error when unable to typecast data for a column
@@ -107,7 +107,7 @@ module Sequel
       #   # => #<Artist {:name=>'Bob', ...}>
       def [](*args)
         args = args.first if args.size <= 1
-        args.is_a?(Hash) ? dataset[args] : (primary_key_lookup(args) unless args.nil?)
+        args.is_a?(Hash) ? first_where(args) : (primary_key_lookup(args) unless args.nil?)
       end
 
       # Initializes a model instance as an existing record. This constructor is
@@ -308,7 +308,12 @@ module Sequel
       #   Artist.find{name > 'M'}
       #   # SELECT * FROM artists WHERE (name > 'M') LIMIT 1
       def find(*args, &block)
-        filter(*args, &block).first
+        if args.length == 1 && !block
+          # Use optimized finder
+          first_where(args.first)
+        else
+          filter(*args, &block).first
+        end
       end
       
       # Like +find+ but invokes create with given conditions when record does not
@@ -327,9 +332,148 @@ module Sequel
         find(cond) || create(cond, &block)
       end
     
+
+      FINDER_TYPES = [:first, :all, :each, :get].freeze
+
+      # Create an optimized finder method using a dataset placeholder literalizer.
+      # This pre-computes the SQL to use for the query, except for given arguments.
+      #
+      # There are two ways to use this.  The recommended way is to pass a symbol
+      # that represents a model class method that returns a dataset:
+      #
+      #   def Artist.by_name(name)
+      #     where(:name=>name)
+      #   end
+      #
+      #   Artist.finder :by_name
+      #
+      # This creates an optimized first_by_name method, which you can call normally:
+      #
+      #   Artist.first_by_name("Joe")
+      #
+      # The alternative way to use this is to pass your own block:
+      #
+      #   Artist.finder(:name=>:first_by_name){|pl, ds| ds.where(:name=>pl.arg).limit(1)}
+      #
+      # Note that if you pass your own block, you are responsible for manually setting
+      # limits if necessary (as shown above).
+      #
+      # Options:
+      # :arity :: When using a symbol method name, this specifies the arity of the method.
+      #           This should be used if if the method accepts an arbitrary number of arguments,
+      #           or the method has default argument values.  Note that if the method is defined
+      #           as a dataset method, the class method Sequel creates accepts an arbitrary number
+      #           of arguments, so you should use this option in that case.  If you want to handle
+      #           multiple possible arities, you need to call the finder method multiple times with
+      #           unique :arity and :name options each time.
+      # :name :: The name of the method to create.  This must be given if you pass a block.
+      #          If you use a symbol, this defaults to the symbol prefixed by the type.
+      # :mod :: The module in which to create the finder method.  Defaults to the singleton
+      #         class of the model.
+      # :type :: The type of query to run.  Can be :first, :each, :all, or :get, defaults to
+      #          :first.
+      #
+      # Caveats:
+      #
+      # This doesn't handle all possible cases.  For example, if you have a method such as:
+      #
+      #   def Artist.by_name(name)
+      #     name ? where(:name=>name) : exclude(:name=>nil)
+      #   end
+      #
+      # Then calling the finder with a nil argument will not work as you expect.
+      #
+      #   Artist.finder :by_name
+      #   Artist.by_name(nil).first
+      #   # WHERE (name IS NOT NULL)
+      #   Artist.first_by_name(nil)
+      #   # WHERE (name IS NULL)
+      #
+      # See Dataset::PlaceholderLiteralizer for additional caveats.
+      def finder(meth=OPTS, opts=OPTS, &block)
+        if block
+          raise Error, "cannot pass both a method name argument and a block to Model.finder" unless meth.is_a?(Hash)
+          raise Error, "cannot pass two option hashes to Model.finder" unless opts.equal?(OPTS)
+          opts = meth
+          raise Error, "must provide method name via :name option when passing block to Model.finder" unless meth_name = opts[:name]
+        end
+
+        type = opts.fetch(:type, :first)
+        unless prepare = opts[:prepare]
+          raise Error, ":type option to Model.finder must be :first, :all, :each, or :get" unless FINDER_TYPES.include?(type)
+        end
+        limit1 = type == :first || type == :get
+        meth_name ||= opts[:name] || :"#{type}_#{meth}"
+
+        argn = lambda do |model|
+          if arity = opts[:arity]
+            arity
+          else
+            method = block || model.method(meth)
+            (method.arity < 0 ? method.arity.abs - 1 : method.arity)
+          end
+        end
+
+        loader_proc = if prepare
+          proc do |model|
+            args = prepare_method_args('$a', argn.call(model))
+            ds = if block
+              model.instance_exec(*args, &block)
+            else
+              model.send(meth, *args)
+            end
+            ds = ds.limit(1) if limit1
+            model_name = model.name
+            if model_name.to_s.empty?
+              model_name = model.object_id
+            else
+              model_name = model_name.gsub(/\W/, '_')
+            end
+            ds.prepare(type, :"#{model_name}_#{meth_name}")
+          end
+        else
+          proc do |model|
+            n = argn.call(model)
+            block ||= lambda do |pl, model2|
+              args = (0...n).map{pl.arg}
+              ds = model2.send(meth, *args)
+              ds = ds.limit(1) if limit1
+              ds
+            end
+
+            Sequel::Dataset::PlaceholderLiteralizer.loader(model, &block) 
+          end
+        end
+
+        Sequel.synchronize{@finder_loaders[meth_name] = loader_proc}
+        mod = opts[:mod] || (class << self; self; end)
+        if prepare
+          def_prepare_method(mod, meth_name)
+        else
+          def_finder_method(mod, meth_name, type)
+        end
+      end
+
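As a rough sketch of the :type option (the Artist model and by_name method here are hypothetical, mirroring the example above):

    def Artist.by_name(name)
      where(:name=>name)
    end

    # :type=>:all creates an optimized all_by_name method, returning an array
    # of matching rows instead of a single row
    Artist.finder :by_name, :type=>:all
    Artist.all_by_name("Joe")
    # SELECT * FROM artists WHERE (name = 'Joe')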
+      # An alias for calling first on the model's dataset, but with
+      # optimized handling of the single argument case.
+      def first(*args, &block)
+        if args.length == 1 && !block && !args.first.is_a?(Integer)
+          # Use optimized finder
+          first_where(args.first)
+        else
+          dataset.first(*args, &block)
+        end
+      end
+
+      # An alias for calling first! on the model's dataset, but with
+      # optimized handling of the single argument case.
+      def first!(*args, &block)
+        first(*args, &block) || raise(Sequel::NoMatchingRow)
+      end
+
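A short hypothetical illustration of the Integer check above: a lone Integer argument still means a limit, so only hash/expression arguments take the optimized first_where path:

    Artist.first(:name=>'Joe')  # optimized: first_where(:name=>'Joe')
    Artist.first(5)             # dataset.first(5), an array of up to 5 rows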
       # Clear the setter_methods cache when a module is included, as it
       # may contain setter methods.
-      def include(mod)
+      def include(*mods)
         clear_setter_methods_cache
         super
       end
@@ -460,6 +604,27 @@ module Sequel
         h
       end
   
+      # Similar to finder, but uses a prepared statement instead of a placeholder
+      # literalizer. This makes the SQL used static (cannot vary per call), but
+      # allows binding argument values instead of literalizing them into the SQL
+      # query string.
+      #
+      # If a block is used with this method, it is instance_execed by the model,
+      # and should accept the desired number of placeholder arguments.
+      #
+      # The options are the same as the options for finder, with the following
+      # exception:
+      # :type :: Specifies the type of prepared statement to create
+      def prepared_finder(meth=OPTS, opts=OPTS, &block)
+        if block
+          raise Error, "cannot pass both a method name argument and a block to Model.finder" unless meth.is_a?(Hash)
+          meth = meth.merge(:prepare=>true)
+        else
+          opts = opts.merge(:prepare=>true)
+        end
+        finder(meth, opts, &block)
+      end
+
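A minimal sketch of prepared_finder, assuming the same hypothetical Artist.by_name dataset method as in the finder documentation:

    Artist.prepared_finder :by_name
    Artist.first_by_name("Joe")
    # Runs a prepared statement with the name passed as a bound variable,
    # instead of literalizing 'Joe' into the SQL string.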
       # Restrict the setting of the primary key(s) when using mass assignment (e.g. +set+).  Because
       # this is the default, this only makes sense to use in a subclass where the
       # parent class has used +unrestrict_primary_key+.
@@ -512,22 +677,7 @@ module Sequel
       # sharding support.
       def set_dataset(ds, opts=OPTS)
         inherited = opts[:inherited]
-        case ds
-        when Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression, LiteralString
-          self.simple_table = db.literal(ds)
-          ds = db.from(ds)
-        when Dataset
-          self.simple_table = if ds.send(:simple_select_all?)
-            ds.literal(ds.first_source_table)
-          else
-            nil
-          end
-          @db = ds.db
-        else
-          raise(Error, "Model.set_dataset takes one of the following classes as an argument: Symbol, LiteralString, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression, Dataset")
-        end
-        set_dataset_row_proc(ds)
-        @dataset = ds
+        @dataset = convert_input_dataset(ds)
         @require_modification = Sequel::Model.require_modification.nil? ? @dataset.provides_accurate_rows_matched? : Sequel::Model.require_modification
         if inherited
           self.simple_table = superclass.simple_table
@@ -557,8 +707,12 @@ module Sequel
       #   end
       def set_primary_key(key)
         clear_setter_methods_cache
-        if key.is_a?(Array) && key.length < 2
-          key = key.first
+        if key.is_a?(Array)
+          if key.length < 2
+            key = key.first
+          else
+            key = key.dup.freeze
+          end
         end
         self.simple_pk = if key && !key.is_a?(Array)
           (@dataset || db).literal(key)
@@ -644,6 +798,25 @@ module Sequel
         end
       end
 
+      # Convert the given object to a Dataset that should be used as
+      # this model's dataset.
+      def convert_input_dataset(ds)
+        case ds
+        when Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression, LiteralString
+          self.simple_table = db.literal(ds)
+          ds = db.from(ds)
+        when Dataset
+          self.simple_table = if ds.send(:simple_select_all?)
+            ds.literal(ds.first_source_table)
+          end
+          @db = ds.db
+        else
+          raise(Error, "Model.set_dataset takes one of the following classes as an argument: Symbol, LiteralString, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression, Dataset")
+        end
+        set_dataset_row_proc(ds)
+        ds
+      end
+
       # Add the module to the class's dataset_method_modules.  Extend the dataset with the
       # module if the model has a dataset.  Add dataset methods to the class for all
       # public dataset methods.
@@ -691,6 +864,30 @@ module Sequel
         end
       end
 
+      # Define a finder method in the given module with the given method name that
+      # loads rows using the finder with the given name.
+      def def_finder_method(mod, meth, type)
+        mod.send(:define_method, meth){|*args, &block| finder_for(meth).send(type, *args, &block)}
+      end
+
+      # Define a prepared_finder method in the given module that will call the associated prepared
+      # statement.
+      def def_prepare_method(mod, meth)
+        mod.send(:define_method, meth){|*args, &block| finder_for(meth).call(prepare_method_arg_hash(args), &block)}
+      end
+
+      # Find the finder to use for the given method.  If a finder has not been loaded
+      # for the method, load the finder and set it in the finders hash, then
+      # return the finder.
+      def finder_for(meth)
+        unless finder = Sequel.synchronize{@finders[meth]}
+          finder_loader = @finder_loaders.fetch(meth)
+          finder = finder_loader.call(self)
+          Sequel.synchronize{@finders[meth] = finder}
+        end
+        finder
+      end
+
       # Get the schema from the database, fall back on checking the columns
       # via the database if that will return inaccurate results or if
       # it raises an error.
@@ -703,7 +900,7 @@ module Sequel
         schema_array = check_non_connection_error{db.schema(dataset, :reload=>reload)} if db.supports_schema_parsing?
         if schema_array
           schema_array.each{|k,v| schema_hash[k] = v}
-          if ds_opts.include?(:select)
+          if (select = ds_opts[:select]) && !(select.length == 1 && select.first.is_a?(SQL::ColumnAll))
             # We don't remove the columns from the schema_hash,
             # as it's possible they will be used for typecasting
             # even if they are not selected.
@@ -809,6 +1006,23 @@ module Sequel
         end
       end
   
+      # A hash of prepared argument values for the given arguments, with keys
+      # starting at a.  Used by the methods created by prepared_finder.
+      def prepare_method_arg_hash(args)
+        h = {}
+        prepare_method_args('a', args.length).zip(args).each{|k, v| h[k] = v}
+        h
+      end
+
+      # An array of prepared statement argument names, of length n and starting with base.
+      def prepare_method_args(base, n)
+        (0...n).map do
+          s = base.to_sym
+          base = base.next
+          s
+        end
+      end
+
       # Find the row in the dataset that matches the primary key.  Uses
       # a static SQL optimization if the table and primary key are simple.
       #
@@ -823,7 +1037,7 @@ module Sequel
           ds.fetch_rows(sql){|r| return ds.row_proc.call(r)}
           nil
         else
-          dataset[primary_key_hash(pk)]
+          first_where(primary_key_hash(pk))
         end
       end
 
@@ -841,6 +1055,7 @@ module Sequel
       # Reset the instance dataset to a modified copy of the current dataset,
       # should be used whenever the model's dataset is modified.
       def reset_instance_dataset
+        @finders.clear if @finders
         @instance_dataset = @dataset.limit(1).naked if @dataset
       end
   
@@ -1022,7 +1237,7 @@ module Sequel
       
       # Like delete but runs hooks before and after delete.
       # If before_destroy returns false, returns false without
-      # deleting the object the the database. Otherwise, deletes
+      # deleting the object from the database. Otherwise, deletes
       # the item from the database and returns self.  Uses a transaction
       # if use_transactions is true or if the :transaction option is given and
       # true.
@@ -1473,7 +1688,7 @@ module Sequel
       
       # Validates the object.  If the object is invalid, errors should be added
       # to the errors attribute.  By default, does nothing, as all models
-      # are valid by default.  See the {"Model Validations" guide}[link:files/doc/validations_rdoc.html].
+      # are valid by default.  See the {"Model Validations" guide}[rdoc-ref:doc/validations.rdoc]
       # for details about validation.  Should not be called directly by
       # user code, call <tt>valid?</tt> instead to check if an object
       # is valid.
@@ -2047,5 +2262,6 @@ module Sequel
 
     extend ClassMethods
     plugin self
+    finder(:where, :arity=>1, :mod=>ClassMethods)
   end
 end
diff --git a/lib/sequel/model/errors.rb b/lib/sequel/model/errors.rb
index 2b3c15e..21abe71 100644
--- a/lib/sequel/model/errors.rb
+++ b/lib/sequel/model/errors.rb
@@ -29,6 +29,12 @@ module Sequel
       #   errors.full_messages
       #   # => ['name is not valid',
       #   #     'hometown is not at least 2 letters']
+      #
+      # If the message is a Sequel::LiteralString, it will be used literally, without the column name:
+      #
+      #   errors.add(:name, Sequel.lit("Album name is not valid"))
+      #   errors.full_messages
+      #   # => ['Album name is not valid']
       def full_messages
         inject([]) do |m, kv| 
           att, errors = *kv
diff --git a/lib/sequel/plugins/association_pks.rb b/lib/sequel/plugins/association_pks.rb
index 10fe34c..75eb5ad 100644
--- a/lib/sequel/plugins/association_pks.rb
+++ b/lib/sequel/plugins/association_pks.rb
@@ -49,85 +49,44 @@ module Sequel
         # a setter that deletes from or inserts into the join table.
         def def_many_to_many(opts)
           super
+
+          return if opts[:type] == :one_through_one
+
           # Grab values from the reflection so that the hash lookup only needs to be
           # done once instead of inside every method call.
           lk, lpk, rk = opts.values_at(:left_key, :left_primary_key, :right_key)
+          clpk = lpk.is_a?(Array)
+          crk = rk.is_a?(Array)
 
-          # Add 2 separate implementations of the getter method optimized for the
-          # composite and singular left key cases, and 4 separate implementations of the setter
-          # method optimized for each combination of composite and singular keys for both
-          # the left and right keys.
-          if lpk.is_a?(Array)
+          if clpk
             def_association_pks_getter(opts) do
               h = {}
               lk.zip(lpk).each{|k, pk| h[k] = send(pk)}
               _join_table_dataset(opts).filter(h).select_map(rk)
             end
-
-            if rk.is_a?(Array)
-              def_association_pks_setter(opts) do |pks|
-                pks = convert_cpk_array(opts, pks)
-                checked_transaction do
-                  lpkv = lpk.map{|k| send(k)}
-                  ds = _join_table_dataset(opts).filter(lk.zip(lpkv))
-                  ds.exclude(rk=>pks).delete
-                  pks -= ds.select_map(rk)
-                  h = {}
-                  lk.zip(lpkv).each{|k, v| h[k] = v}
-                  pks.each do |pk|
-                    ih = h.dup
-                    rk.zip(pk).each{|k, v| ih[k] = v}
-                    ds.insert(ih)
-                  end
-                end
-              end
-            else
-              def_association_pks_setter(opts) do |pks|
-                pks = convert_pk_array(opts, pks)
-                checked_transaction do
-                  lpkv = lpk.map{|k| send(k)}
-                  ds = _join_table_dataset(opts).filter(lk.zip(lpkv))
-                  ds.exclude(rk=>pks).delete
-                  pks -= ds.select_map(rk)
-                  h = {}
-                  lk.zip(lpkv).each{|k, v| h[k] = v}
-                  pks.each do |pk|
-                    ds.insert(h.merge(rk=>pk))
-                  end
-                end
-              end
-            end
           else
             def_association_pks_getter(opts) do
               _join_table_dataset(opts).filter(lk=>send(lpk)).select_map(rk)
             end
+          end
 
-            if rk.is_a?(Array)
-              def_association_pks_setter(opts) do |pks|
-                pks = convert_cpk_array(opts, pks)
-                checked_transaction do
-                  lpkv = send(lpk)
-                  ds = _join_table_dataset(opts).filter(lk=>lpkv)
-                  ds.exclude(rk=>pks).delete
-                  pks -= ds.select_map(rk)
-                  pks.each do |pk|
-                    h = {lk=>lpkv}
-                    rk.zip(pk).each{|k, v| h[k] = v}
-                    ds.insert(h)
-                  end
-                end
-              end
-            else
-              def_association_pks_setter(opts) do |pks|
-                pks = convert_pk_array(opts, pks)
-                checked_transaction do
-                  lpkv = send(lpk)
-                  ds = _join_table_dataset(opts).filter(lk=>lpkv)
-                  ds.exclude(rk=>pks).delete
-                  pks -= ds.select_map(rk)
-                  pks.each{|pk| ds.insert(lk=>lpkv, rk=>pk)}
-                end
+          def_association_pks_setter(opts) do |pks|
+            pks = send(crk ? :convert_cpk_array : :convert_pk_array, opts, pks)
+            checked_transaction do
+              if clpk
+                lpkv = lpk.map{|k| send(k)}
+                cond = lk.zip(lpkv)
+              else
+                lpkv = send(lpk)
+                cond = {lk=>lpkv}
               end
+              ds = _join_table_dataset(opts).filter(cond)
+              ds.exclude(rk=>pks).delete
+              pks -= ds.select_map(rk)
+              lpkv = Array(lpkv)
+              key_array = crk ? pks.map{|pk| lpkv + pk} : pks.map{|pk| lpkv + [pk]}
+              key_columns = Array(lk) + Array(rk)
+              ds.import(key_columns, key_array)
             end
           end
         end
diff --git a/lib/sequel/plugins/auto_validations.rb b/lib/sequel/plugins/auto_validations.rb
index a293c0b..d40dc9d 100644
--- a/lib/sequel/plugins/auto_validations.rb
+++ b/lib/sequel/plugins/auto_validations.rb
@@ -107,8 +107,9 @@ module Sequel
           @auto_validate_not_null_columns = not_null_cols - Array(primary_key)
           explicit_not_null_cols += Array(primary_key)
           @auto_validate_explicit_not_null_columns = explicit_not_null_cols.uniq
-          @auto_validate_unique_columns = if db.supports_index_parsing?
-            db.indexes(dataset.first_source_table).select{|name, idx| idx[:unique] == true}.map{|name, idx| idx[:columns]}
+          table = dataset.first_source_table
+          @auto_validate_unique_columns = if db.supports_index_parsing? && [Symbol, SQL::QualifiedIdentifier, SQL::Identifier, String].any?{|c| table.is_a?(c)}
+            db.indexes(table).select{|name, idx| idx[:unique] == true}.map{|name, idx| idx[:columns]}
           else
             []
           end
@@ -136,7 +137,11 @@ module Sequel
 
           validates_schema_types if model.auto_validate_types?
 
-          model.auto_validate_unique_columns.each{|cols| validates_unique(cols)}
+          unique_opts = {}
+          if model.respond_to?(:sti_dataset)
+            unique_opts[:dataset] = model.sti_dataset
+          end
+          model.auto_validate_unique_columns.each{|cols| validates_unique(cols, unique_opts)}
         end
       end
     end
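A hedged example of what the :dataset option above changes in practice (the models and unique index are hypothetical): with single_table_inheritance loaded, automatic unique validations now check against the full STI dataset rather than the subclass's filtered dataset:

    class Employee < Sequel::Model
      plugin :single_table_inheritance, :kind
      plugin :auto_validations
    end
    class Manager < Employee; end

    # Assuming a unique index on employees.email, a new Manager now fails
    # validation if any employee row already uses that email, not just
    # rows where kind = 'Manager'.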
diff --git a/lib/sequel/plugins/class_table_inheritance.rb b/lib/sequel/plugins/class_table_inheritance.rb
index 7927fab..39cd4ce 100644
--- a/lib/sequel/plugins/class_table_inheritance.rb
+++ b/lib/sequel/plugins/class_table_inheritance.rb
@@ -59,11 +59,21 @@ module Sequel
     #
     #   # Set up class table inheritance in the parent class
     #   # (Not in the subclasses)
-    #   Employee.plugin :class_table_inheritance
+    #   class Employee < Sequel::Model
+    #     plugin :class_table_inheritance
+    #   end
     #
-    #   # Set the +kind+ column to hold the class name, and
-    #   # set the subclass table to map to for each subclass 
-    #   Employee.plugin :class_table_inheritance, :key=>:kind, :table_map=>{:Staff=>:staff}
+    #   # Have subclasses inherit from the appropriate class
+    #   class Staff < Employee; end
+    #   class Manager < Employee; end
+    #   class Executive < Manager; end
+    #
+    #   # You can also set options when loading the plugin:
+    #   # :key :: column to hold the class name
+    #   # :table_map :: map of class name symbols to table name symbols
+    #   # :model_map :: map of column values to class name symbols
+    #   Employee.plugin :class_table_inheritance, :key=>:kind, :table_map=>{:Staff=>:staff},
+    #     :model_map=>{1=>:Employee, 2=>:Manager, 3=>:Executive, 4=>:Staff}
     module ClassTableInheritance
       # The class_table_inheritance plugin requires the lazy_attributes plugin
       # to handle lazily-loaded attributes for subclass instances returned
@@ -75,25 +85,26 @@ module Sequel
       # Initialize the per-model data structures and set the dataset's row_proc
       # to check for the :key option column for the type of class when loading objects.
       # Options:
-      # * :key - The column symbol holding the name of the model class this
-      #   is an instance of.  Necessary if you want to call model methods
-      #   using the superclass, but have them return subclass instances.
-      # * :table_map - Hash with class name symbol keys and table name symbol
-      #   values.  Necessary if the implicit table name for the model class
-      #   does not match the database table name
+      # :key :: The column symbol holding the name of the model class this
+      #         is an instance of.  Necessary if you want to call model methods
+      #         using the superclass, but have them return subclass instances.
+      # :table_map :: Hash with class name symbol keys and table name symbol
+      #               values.  Necessary if the implicit table name for the model class
+      #               does not match the database table name
+      # :model_map :: Hash with keys being values of the cti_key column, and values
+      #               being class name strings or symbols.  Used if you don't want to
+      #               store class names in the database.  If you use this option, you
+      #               are responsible for setting the values of the cti_key column
+      #               manually (usually in a before_create hook).
       def self.configure(model, opts=OPTS)
         model.instance_eval do
-          m = method(:constantize)
           @cti_base_model = self
           @cti_key = key = opts[:key] 
           @cti_tables = [table_name]
           @cti_columns = {table_name=>columns}
           @cti_table_map = opts[:table_map] || {}
-          dataset.row_proc = if key
-            lambda{|r| (m.call(r[key]) rescue model).call(r)}
-          else
-            model
-          end
+          @cti_model_map = opts[:model_map]
+          set_dataset_cti_row_proc
         end
       end
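Since :model_map leaves the cti_key column for you to populate, a sketch of doing so in a before_create hook (column name and mapping value are hypothetical and must match the :model_map passed to the plugin):

    class Manager < Employee
      def before_create
        self.kind ||= 2   # the value mapped to :Manager in :model_map
        super
      end
    end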
 
@@ -112,6 +123,11 @@ module Sequel
         # load method.
         attr_reader :cti_key
         
+        # A hash with keys being values of the cti_key column, and values
+        # being class name strings or symbols.  Used if you don't want to
+        # store class names in the database.
+        attr_reader :cti_model_map
+        
         # An array of table symbols that back this model.  The first is
         # cti_base_model table symbol, and the last is the current model
         # table symbol.
@@ -131,6 +147,7 @@ module Sequel
           ct = cti_tables.dup
           ctm = cti_table_map.dup
           cbm = cti_base_model
+          cmm = cti_model_map
           pk = primary_key
           ds = dataset
           subclass.instance_eval do
@@ -142,6 +159,7 @@ module Sequel
             @cti_columns = cc.merge(table=>columns)
             @cti_table_map = ctm
             @cti_base_model = cbm
+            @cti_model_map = cmm
             # Need to set dataset and columns before calling super so that
             # the main column accessor module is included in the class before any
             # plugin accessor modules (such as the lazy attributes accessor module).
@@ -150,12 +168,7 @@ module Sequel
           end
           super
           subclass.instance_eval do
-            m = method(:constantize)
-            dataset.row_proc = if cti_key
-              lambda{|r| (m.call(r[ck]) rescue subclass).call(r)}
-            else
-              subclass
-            end
+            set_dataset_cti_row_proc
             (columns - [cbm.primary_key]).each{|a| define_lazy_attribute_getter(a)}
             cti_tables.reverse.each do |table|
               db.schema(table).each{|k,v| db_schema[k] = v}
@@ -184,12 +197,35 @@ module Sequel
           ds.row_proc = @dataset.row_proc if @dataset
         end
 
+        # Set the row_proc for the model's dataset appropriately
+        # based on the cti key and model map.
+        def set_dataset_cti_row_proc
+          m = method(:constantize)
+          dataset.row_proc = if ck = cti_key
+            if model_map = cti_model_map
+              lambda do |r|
+                mod = if name = model_map[r[ck]]
+                  m.call(name)
+                else
+                  self
+                end
+                mod.call(r)
+              end
+            else
+              lambda{|r| (m.call(r[ck]) rescue self).call(r)}
+            end
+          else
+            self
+          end
+        end
       end
 
       module InstanceMethods
         # Set the cti_key column to the name of the model.
-        def before_create
-          send("#{model.cti_key}=", model.name.to_s) if model.cti_key
+        def before_validation
+          if new? && model.cti_key && !model.cti_model_map
+            send("#{model.cti_key}=", model.name.to_s)
+          end
           super
         end
         
diff --git a/lib/sequel/plugins/dataset_associations.rb b/lib/sequel/plugins/dataset_associations.rb
index ae24043..719b4f9 100644
--- a/lib/sequel/plugins/dataset_associations.rb
+++ b/lib/sequel/plugins/dataset_associations.rb
@@ -17,12 +17,10 @@ module Sequel
     #   #   WHERE ((id >= 1) AND (id <= 100))))
     # 
     # This works for all of the association types that ship with Sequel,
-    # including the many_through_many type.  Most association options that
+    # including ones implemented in other plugins.  Most association options that
     # are supported when eager loading are supported when using a
-    # dataset association.  However, associations that use :limit or
-    # one_to_one associations that are really one_to_many relationships
-    # in the database will not work correctly, returning all associated
-    # objects.
+    # dataset association. However, it will only work for limited associations or
+    # *_one associations with orders if the database supports window functions.
     #
     # As the dataset methods return datasets, you can easily chain the
     # methods to get associated datasets of associated datasets:
@@ -66,10 +64,9 @@ module Sequel
         # such that it would return the union of calling the association method on
         # all objects returned by the current dataset.
         #
-        # This supports most options that are supported when eager loading.  It doesn't
-        # support limits on the associations, or one_to_one associations that are really
-        # one_to_many and use an order to select the first matching object.  In both of
-        # those cases, this will return an array of all matching objects.
+        # This supports most options that are supported when eager loading.  However, it
+        # will only work for limited associations or *_one associations with orders if the
+        # database supports window functions.
         def associated(name)
           raise Error, "unrecognized association name: #{name.inspect}" unless r = model.association_reflection(name)
           ds = r.associated_class.dataset
@@ -78,17 +75,23 @@ module Sequel
           when :many_to_one
             ds.filter(r.qualified_primary_key=>sds.select(*Array(r[:qualified_key])))
           when :one_to_one, :one_to_many
-            ds.filter(r.qualified_key=>sds.select(*Array(r.qualified_primary_key)))
-          when :many_to_many
-            ds.filter(r.qualified_right_primary_key=>sds.select(*Array(r.qualified_right_key)).
-              join(r[:join_table], r[:left_keys].zip(r[:left_primary_keys]), :implicit_qualifier=>model.table_name))
-          when :many_through_many
+            r.send(:apply_filter_by_associations_limit_strategy, ds.filter(r.qualified_key=>sds.select(*Array(r.qualified_primary_key))))
+          when :many_to_many, :one_through_one
+            mds = r.associated_class.dataset.
+              join(r[:join_table], r[:right_keys].zip(r.right_primary_keys)).
+              select(*Array(r.qualified_right_key)).
+              where(r.qualify(r.join_table_alias, r[:left_keys])=>sds.select(*r.qualify(model.table_name, r[:left_primary_key_columns])))
+            ds.filter(r.qualified_right_primary_key=>r.send(:apply_filter_by_associations_limit_strategy, mds))
+          when :many_through_many, :one_through_many
             fre = r.reverse_edges.first
             fe, *edges = r.edges
-            sds = sds.select(*Array(r.qualify(fre[:table], fre[:left]))).
-              join(fe[:table], Array(fe[:right]).zip(Array(fe[:left])), :implicit_qualifier=>model.table_name)
-            edges.each{|e| sds = sds.join(e[:table], Array(e[:right]).zip(Array(e[:left])))}
-            ds.filter(r.qualified_right_primary_key=>sds)
+            edges << r.final_edge
+            mds = model.
+              select(*Array(r.qualify(fre[:table], fre[:left]))).
+              join(fe[:table], Array(fe[:right]).zip(Array(fe[:left])), :implicit_qualifier=>model.table_name).
+              where(r.qualify(fe[:table], fe[:right])=>sds.select(*r.qualify(model.table_name, r[:left_primary_key_columns])))
+            edges.each{|e| mds = mds.join(e[:table], Array(e[:right]).zip(Array(e[:left])))}
+            ds.filter(r.qualified_right_primary_key=>r.send(:apply_filter_by_associations_limit_strategy, mds))
           when :pg_array_to_many
             ds.filter(Sequel.expr(r.primary_key=>sds.select{Sequel.pg_array_op(r.qualify(r[:model].table_name, r[:key])).unnest}))
           when :many_to_pg_array
@@ -96,9 +99,7 @@ module Sequel
           else
             raise Error, "unrecognized association type for association #{name.inspect}: #{r[:type].inspect}"
           end
-          ds = model.apply_association_dataset_opts(r, ds)
-          r[:extend].each{|m| ds.extend(m)}
-          ds
+          r.apply_eager_dataset_changes(ds).unlimited
         end
       end
     end
diff --git a/lib/sequel/plugins/defaults_setter.rb b/lib/sequel/plugins/defaults_setter.rb
index 7713e8f..2f2f368 100644
--- a/lib/sequel/plugins/defaults_setter.rb
+++ b/lib/sequel/plugins/defaults_setter.rb
@@ -46,7 +46,7 @@ module Sequel
           when Sequel::CURRENT_DATE
             lambda{Date.today}
           when Sequel::CURRENT_TIMESTAMP
-            lambda{Sequel.datetime_class.now}
+            lambda{dataset.current_datetime}
           else
             v
           end
diff --git a/lib/sequel/plugins/eager_each.rb b/lib/sequel/plugins/eager_each.rb
index 5cf457e..5db2e6a 100644
--- a/lib/sequel/plugins/eager_each.rb
+++ b/lib/sequel/plugins/eager_each.rb
@@ -22,6 +22,15 @@ module Sequel
     #   Album.plugin :eager_each
     module EagerEach 
       module DatasetMethods
+        # Don't call #all when attempting to load the columns.
+        def columns
+          if use_eager_all?
+            clone(:all_called=>true).columns
+          else
+            super
+          end
+        end
+
         # Call #all instead of #each if eager loading,
         # unless #each is being called by #all.
         def each(&block)
diff --git a/lib/sequel/plugins/instance_hooks.rb b/lib/sequel/plugins/instance_hooks.rb
index dd31b24..5eeaf49 100644
--- a/lib/sequel/plugins/instance_hooks.rb
+++ b/lib/sequel/plugins/instance_hooks.rb
@@ -4,7 +4,7 @@ module Sequel
     # by passing a block to a _hook method (e.g. before_save_hook{do_something}).
     # The block is executed when the hook is called (e.g. before_save).
     #
-    # All of the standard hooks are supported, except for after_initialize.
+    # All of the standard hooks are supported.
     # Instance level before hooks are executed in reverse order of addition before
     # calling super.  Instance level after hooks are executed in order of addition
     # after calling super.  If any of the instance level before hook blocks return
@@ -16,6 +16,8 @@ module Sequel
     # be run the first time you save the object (creating it), and the before_update
     # hook will be run the second time you save the object (updating it), and no
     # hooks will be run the third time you save the object.
+    #
+    # Validation hooks are not cleared until after a successful save.
     # 
     # Usage:
     #
@@ -27,7 +29,7 @@ module Sequel
     module InstanceHooks
       module InstanceMethods 
         BEFORE_HOOKS = Sequel::Model::BEFORE_HOOKS
-        AFTER_HOOKS = Sequel::Model::AFTER_HOOKS - [:after_initialize]
+        AFTER_HOOKS = Sequel::Model::AFTER_HOOKS
         HOOKS = BEFORE_HOOKS + AFTER_HOOKS
         HOOKS.each{|h| class_eval(<<-END , __FILE__, __LINE__+1)}
           def #{h}_hook(&block)
@@ -38,7 +40,7 @@ module Sequel
         END
         
         BEFORE_HOOKS.each{|h| class_eval("def #{h}; run_before_instance_hooks(:#{h}) == false ? false : super end", __FILE__, __LINE__)}
-        AFTER_HOOKS.each{|h| class_eval(<<-END, __FILE__, __LINE__ + 1)}
+        (AFTER_HOOKS - [:after_validation, :after_save]).each{|h| class_eval(<<-END, __FILE__, __LINE__ + 1)}
           def #{h}
             super
             run_after_instance_hooks(:#{h})
@@ -46,6 +48,22 @@ module Sequel
             @instance_hooks.delete(:#{h.to_s.sub('after', 'before')})
           end
         END
+
+        # Run after validation hooks, without clearing the validation hooks.
+        def after_validation
+          super
+          run_after_instance_hooks(:after_validation)
+        end
+        
+        # Run after save hooks, clearing both the save and validation hooks.
+        def after_save
+          super
+          run_after_instance_hooks(:after_save)
+          @instance_hooks.delete(:after_save)
+          @instance_hooks.delete(:before_save)
+          @instance_hooks.delete(:after_validation)
+          @instance_hooks.delete(:before_validation)
+        end
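A brief hypothetical illustration of the clearing behavior implemented above: validation hooks now survive until a successful save, at which point they are cleared along with the save hooks:

    a = Album.new(:name=>'RF')
    a.before_validation_hook{puts 'validating'}
    a.before_save_hook{puts 'saving'}
    a.save   # prints both; after_save then clears the save and validation hooks
    a.save   # prints nothing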
         
         private
         
diff --git a/lib/sequel/plugins/json_serializer.rb b/lib/sequel/plugins/json_serializer.rb
index f4b279a..3952908 100644
--- a/lib/sequel/plugins/json_serializer.rb
+++ b/lib/sequel/plugins/json_serializer.rb
@@ -293,7 +293,7 @@ module Sequel
 
       module DatasetMethods
         # Return a JSON string representing an array of all objects in
-        # this dataset.  Takes the same options as the the instance
+        # this dataset.  Takes the same options as the instance
         # method, and passes them to every instance.  Additionally,
         # respects the following options:
         #
diff --git a/lib/sequel/plugins/many_through_many.rb b/lib/sequel/plugins/many_through_many.rb
index 3f8537f..979f260 100644
--- a/lib/sequel/plugins/many_through_many.rb
+++ b/lib/sequel/plugins/many_through_many.rb
@@ -1,6 +1,6 @@
 module Sequel
   module Plugins
-    # The many_through_many plugin allow you to create an association to multiple objects using multiple join tables.
+    # The many_through_many plugin allows you to create an association using multiple join tables.
     # For example, assume the following associations:
     #
     #    Artist.many_to_many :albums
@@ -9,14 +9,13 @@ module Sequel
     # The many_through_many plugin would allow this:
     #
     #    Artist.plugin :many_through_many
-    #    Artist.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    #    Artist.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums_tags, :album_id, :tag_id]]
     #
     # Which will give you the tags for all of the artist's albums.
     #
     # Let's break down the 2nd argument of the many_through_many call:
     #
     #   [[:albums_artists, :artist_id, :album_id],
-    #    [:albums, :id, :id],
     #    [:albums_tags, :album_id, :tag_id]]
     #
     # This argument is an array of arrays with three elements.  Each entry in the main array represents a JOIN in SQL:
@@ -29,12 +28,12 @@ module Sequel
     #
     #   FROM artists
     #   JOIN albums_artists ON (artists.id = albums_artists.artist_id)
-    #   JOIN albums ON (albums_artists.album_id = albums.id)
-    #   JOIN albums_tags ON (albums.id = albums_tag.album_id)
+    #   JOIN albums_tags ON (albums_artists.album_id = albums_tags.album_id)
     #   JOIN tags ON (albums_tags.tag_id = tags.id)
     #
     # The "artists.id" and "tags.id" criteria come from other association options (defaulting to the primary keys of the current and
-    # associated tables), but hopefully you can see how each argument in the array is used in the JOIN clauses.
+    # associated tables), but hopefully you can see how each argument in the array is used in the JOIN clauses. Note that you do
+    # not need to add an entry for the final table (tags in this example), as that comes from the associated class.
     #
     # Here are some more examples:
     #
@@ -42,20 +41,20 @@ module Sequel
     #   Artist.many_through_many :albums, [[:albums_artists, :artist_id, :album_id]]
     #
     #   # All artists that are associated to any album that this artist is associated to
-    #   Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]]
+    #   Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums_artists, :album_id, :artist_id]]
     #
     #   # All albums by artists that are associated to any album that this artist is associated to
-    #   Artist.many_through_many :artist_albums, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], \
-    #    [:albums_artists, :album_id, :artist_id], [:artists, :id, :id], [:albums_artists, :artist_id, :album_id]], \
+    #   Artist.many_through_many :artist_albums, [[:albums_artists, :artist_id, :album_id], \
+    #    [:albums_artists, :album_id, :artist_id], [:albums_artists, :artist_id, :album_id]], \
     #    :class=>:Album
     #
-    #   # All tracks on albums by this artist
-    #   Artist.many_through_many :tracks, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id]], \
+    #   # All tracks on albums by this artist (also could be a many_to_many)
+    #   Artist.many_through_many :tracks, [[:albums_artists, :artist_id, :album_id]], \
     #    :right_primary_key=>:album_id
     #
     # Often you don't want the current object to appear in the array of associated objects.  This is easiest to handle via an :after_load hook:
     # 
-    #   Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]],
+    #   Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums_artists, :album_id, :artist_id]],
     #     :after_load=>proc{|artist, associated_artists| associated_artists.delete(artist)}
     #
     # You can also handle it by adding a dataset block that excludes the current record (so it won't be retrieved at all), but
@@ -65,11 +64,28 @@ module Sequel
     # 
     #   Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]],
     #    :distinct=>true
+    # 
+    # In addition to many_through_many, this plugin also adds one_through_many, for an association to a single object through multiple join tables.
+    # This is useful if there are unique constraints on the foreign keys in the join tables that reference back to the current table, or if you want
+    # to set an order on the association and just want the first record.
+    #
+    # Usage:
+    #
+    #   # Make all model subclasses support many_through_many associations
+    #   Sequel::Model.plugin :many_through_many
+    #
+    #   # Make the Album class support many_through_many associations
+    #   Album.plugin :many_through_many
     module ManyThroughMany
       # The AssociationReflection subclass for many_through_many associations.
       class ManyThroughManyAssociationReflection < Sequel::Model::Associations::ManyToManyAssociationReflection
         Sequel::Model::Associations::ASSOCIATION_TYPES[:many_through_many] = self
 
+        # many_through_many and one_through_many associations can be clones
+        def cloneable?(ref)
+          ref[:type] == :many_through_many || ref[:type] == :one_through_many
+        end
+
         # The default associated key alias(es) to use when eager loading
         # associations via eager.
         def default_associated_key_alias
@@ -84,6 +100,11 @@ module Sequel
           END
         end
 
+        # The alias for the first join table.
+        def join_table_alias
+          final_reverse_edge[:alias]
+        end
+
         # Many through many associations don't have a reciprocal
         def reciprocal
           nil
@@ -91,6 +112,13 @@ module Sequel
 
         private
 
+        def _associated_dataset
+          ds = associated_class
+          reverse_edges.each{|t| ds = ds.join(t[:table], Array(t[:left]).zip(Array(t[:right])), :table_alias=>t[:alias], :qualify=>:deep)}
+          ft = final_reverse_edge
+          ds.join(ft[:table], Array(ft[:left]).zip(Array(ft[:right])), :table_alias=>ft[:alias], :qualify=>:deep)
+        end
+
         # Make sure to use unique table aliases when lazy loading or eager loading
         def calculate_reverse_edge_aliases(reverse_edges)
           aliases = [associated_class.table_name]
@@ -138,6 +166,16 @@ module Sequel
           h.each{|k, v| cached_set(k, v)}
           h
         end
+
+        def filter_by_associations_limit_key
+          fe = edges.first
+          Array(qualify(fe[:table], fe[:right])) + Array(qualify(associated_class.table_name, associated_class.primary_key))
+        end
+      end
+
+      class OneThroughManyAssociationReflection < ManyThroughManyAssociationReflection
+        Sequel::Model::Associations::ASSOCIATION_TYPES[:one_through_many] = self
+        include Sequel::Model::Associations::SingularAssociationReflection
       end
 
       module ClassMethods
@@ -161,15 +199,19 @@ module Sequel
           associate(:many_through_many, name, opts.merge(through.is_a?(Hash) ? through : {:through=>through}), &block)
         end
 
+        # Creates a one_through_many association.  See many_through_many for arguments.
+        def one_through_many(name, through, opts=OPTS, &block)
+          associate(:one_through_many, name, opts.merge(through.is_a?(Hash) ? through : {:through=>through}), &block)
+        end
+
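A short hypothetical use of the new association type, reusing the join tables from the examples above:

    Artist.plugin :many_through_many
    # Returns a single Tag (or nil) instead of an array of tags
    Artist.one_through_many :first_tag,
      [[:albums_artists, :artist_id, :album_id], [:albums_tags, :album_id, :tag_id]],
      :class=>:Tag
    Artist[1].first_tag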
         private
 
         # Create the association methods and :eager_loader and :eager_grapher procs.
         def def_many_through_many(opts)
-          name = opts[:name]
-          model = self
+          one_through_many = opts[:type] == :one_through_many
           opts[:read_only] = true
           opts[:after_load].unshift(:array_uniq!) if opts[:uniq]
-          opts[:cartesian_product_number] ||= 2
+          opts[:cartesian_product_number] ||= one_through_many ? 0 : 2
           opts[:through] = opts[:through].map do |e|
             case e
             when Array
@@ -184,50 +226,17 @@ module Sequel
           end
 
           left_key = opts[:left_key] = opts[:through].first[:left]
-          uses_lcks = opts[:uses_left_composite_keys] = left_key.is_a?(Array)
-          left_keys = Array(left_key)
+          opts[:left_keys] = Array(left_key)
+          opts[:uses_left_composite_keys] = left_key.is_a?(Array)
           left_pk = (opts[:left_primary_key] ||= self.primary_key)
           opts[:eager_loader_key] = left_pk unless opts.has_key?(:eager_loader_key)
-          left_pks = opts[:left_primary_keys] = Array(left_pk)
+          opts[:left_primary_keys] = Array(left_pk)
           lpkc = opts[:left_primary_key_column] ||= left_pk
-          opts[:left_primary_key_columns] ||= Array(lpkc)
-          opts[:dataset] ||= lambda do
-            ds = opts.associated_dataset
-            opts.reverse_edges.each{|t| ds = ds.join(t[:table], Array(t[:left]).zip(Array(t[:right])), :table_alias=>t[:alias], :qualify=>:deep)}
-            ft = opts.final_reverse_edge
-            ds.join(ft[:table],  Array(ft[:left]).zip(Array(ft[:right])) + opts.predicate_keys.zip(left_pks.map{|k| send(k)}), :table_alias=>ft[:alias], :qualify=>:deep)
-          end
+          lpkcs = opts[:left_primary_key_columns] ||= Array(lpkc)
+          opts[:dataset] ||= opts.association_dataset_proc
 
-          slice_range = opts.slice_range
-          left_key_alias = opts[:left_key_alias] ||= opts.default_associated_key_alias
-          opts[:eager_loader] ||= lambda do |eo|
-            h = eo[:id_map]
-            rows = eo[:rows]
-            rows.each{|object| object.associations[name] = []}
-            ds = opts.associated_class 
-            opts.reverse_edges.each{|t| ds = ds.join(t[:table], Array(t[:left]).zip(Array(t[:right])), :table_alias=>t[:alias], :qualify=>:deep)}
-            ft = opts.final_reverse_edge
-            ds = ds.join(ft[:table], Array(ft[:left]).zip(Array(ft[:right])) + [[opts.predicate_key, h.keys]], :table_alias=>ft[:alias], :qualify=>:deep)
-            ds = model.eager_loading_dataset(opts, ds, nil, eo[:associations], eo)
-            if opts.eager_limit_strategy == :window_function
-              delete_rn = true
-              rn = ds.row_number_column
-              ds = apply_window_function_eager_limit_strategy(ds, opts)
-            end
-            ds.all do |assoc_record|
-              assoc_record.values.delete(rn) if delete_rn
-              hash_key = if uses_lcks
-                left_key_alias.map{|k| assoc_record.values.delete(k)}
-              else
-                assoc_record.values.delete(left_key_alias)
-              end
-              next unless objects = h[hash_key]
-              objects.each{|object| object.associations[name].push(assoc_record)}
-            end
-            if opts.eager_limit_strategy == :ruby
-              rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []}
-            end
-          end
+          opts[:left_key_alias] ||= opts.default_associated_key_alias
+          opts[:eager_loader] ||= opts.method(:default_eager_loader)
 
           join_type = opts[:graph_join_type]
           select = opts[:graph_select]
@@ -238,15 +247,31 @@ module Sequel
           opts[:eager_grapher] ||= proc do |eo|
             ds = eo[:self]
             iq = eo[:implicit_qualifier]
-            opts.edges.each do |t|
-              ds = ds.graph(t[:table], t.fetch(:only_conditions, (Array(t[:right]).zip(Array(t[:left])) + t[:conditions])), :select=>false, :table_alias=>ds.unused_table_alias(t[:table]), :join_type=>t[:join_type], :qualify=>:deep, :implicit_qualifier=>iq, &t[:block])
-              iq = nil
+            egls = eo[:limit_strategy]
+            if egls && egls != :ruby
+              associated_key_array = opts.associated_key_array
+              orig_egds = egds = eager_graph_dataset(opts, eo)
+              opts.reverse_edges.each{|t| egds = egds.join(t[:table], Array(t[:left]).zip(Array(t[:right])), :table_alias=>t[:alias], :qualify=>:deep)}
+              ft = opts.final_reverse_edge
+              egds = egds.join(ft[:table], Array(ft[:left]).zip(Array(ft[:right])), :table_alias=>ft[:alias], :qualify=>:deep).
+                select_all(egds.first_source).
+                select_append(*associated_key_array)
+              egds = opts.apply_eager_graph_limit_strategy(egls, egds)
+              ds.graph(egds, associated_key_array.map{|v| v.alias}.zip(Array(lpkcs)) + conditions, :qualify=>:deep, :table_alias=>eo[:table_alias], :implicit_qualifier=>iq, :join_type=>eo[:join_type]||join_type, :from_self_alias=>eo[:from_self_alias], :select=>select||orig_egds.columns, &graph_block)
+            else
+              opts.edges.each do |t|
+                ds = ds.graph(t[:table], t.fetch(:only_conditions, (Array(t[:right]).zip(Array(t[:left])) + t[:conditions])), :select=>false, :table_alias=>ds.unused_table_alias(t[:table]), :join_type=>eo[:join_type]||t[:join_type], :qualify=>:deep, :implicit_qualifier=>iq, :from_self_alias=>eo[:from_self_alias], &t[:block])
+                iq = nil
+              end
+              fe = opts.final_edge
+              ds.graph(opts.associated_class, use_only_conditions ? only_conditions : (Array(opts.right_primary_key).zip(Array(fe[:left])) + conditions), :select=>select, :table_alias=>eo[:table_alias], :qualify=>:deep, :join_type=>eo[:join_type]||join_type, &graph_block)
             end
-            fe = opts.final_edge
-            ds.graph(opts.associated_class, use_only_conditions ? only_conditions : (Array(opts.right_primary_key).zip(Array(fe[:left])) + conditions), :select=>select, :table_alias=>eo[:table_alias], :qualify=>:deep, :join_type=>join_type, &graph_block)
           end
+        end
 
-          def_association_dataset_methods(opts)
+        # Use def_many_through_many, since they share pretty much the same code.
+        def def_one_through_many(opts)
+          def_many_through_many(opts)
         end
       end
 
@@ -275,14 +300,16 @@ module Sequel
             ref.right_primary_key_methods
           end
 
-          exp = association_filter_key_expression(ref.qualify(last_alias, Array(ref.final_edge[:left])), meths, obj)
-          if exp == SQL::Constants::FALSE
-            association_filter_handle_inversion(op, exp, Array(lpks))
-          else
-            ds = ds.where(exp).exclude(SQL::BooleanExpression.from_value_pairs(ds.opts[:select].zip([]), :OR))
-            association_filter_handle_inversion(op, SQL::BooleanExpression.from_value_pairs(lpks=>ds), Array(lpks))
+          expr = association_filter_key_expression(ref.qualify(last_alias, Array(ref.final_edge[:left])), meths, obj)
+          unless expr == SQL::Constants::FALSE
+            ds = ds.where(expr).exclude(SQL::BooleanExpression.from_value_pairs(ds.opts[:select].zip([]), :OR))
+            expr = SQL::BooleanExpression.from_value_pairs(lpks=>ds)
+            expr = add_association_filter_conditions(ref, obj, expr)
           end
+
+          association_filter_handle_inversion(op, expr, Array(lpks))
         end
+        alias one_through_many_association_filter_expression many_through_many_association_filter_expression
       end
     end
   end
diff --git a/lib/sequel/plugins/mssql_optimistic_locking.rb b/lib/sequel/plugins/mssql_optimistic_locking.rb
new file mode 100644
index 0000000..150bb4c
--- /dev/null
+++ b/lib/sequel/plugins/mssql_optimistic_locking.rb
@@ -0,0 +1,92 @@
+module Sequel
+  module Plugins
+    # This plugin implements an optimistic locking mechanism on Microsoft SQL Server
+    # using a timestamp/rowversion column to ensure that concurrent updates are
+    # detected and previous changes are not automatically overridden. This is
+    # best shown with a code example:
+    # 
+    #   class Person < Sequel::Model
+    #     plugin :mssql_optimistic_locking
+    #   end
+    #   p1 = Person[1]
+    #   p2 = Person[1]
+    #   p1.update(:name=>'Jim') # works
+    #   p2.update(:name=>'Bob') # raises Sequel::NoExistingObject
+    #
+    # In order for this plugin to work, you need to make sure that the database
+    # table has a column of type timestamp or rowversion.  The plugin uses a default
+    # name of timestamp for this column, but you can override that using the
+    # :lock_column option:
+    #
+    #     plugin :mssql_optimistic_locking, :lock_column=>:column_name
+    #
+    # This plugin relies on the instance_filters plugin.
+    module MssqlOptimisticLocking
+      # Load the instance_filters plugin into the model.
+      def self.apply(model, opts=OPTS)
+        model.plugin :instance_filters
+      end
+
+      # Set the lock_column to the :lock_column option (default: :timestamp)
+      def self.configure(model, opts=OPTS)
+        model.lock_column = opts[:lock_column] || :timestamp
+      end
+
+      module ClassMethods
+        # The timestamp/rowversion column containing the version for the current row.
+        attr_accessor :lock_column
+
+        Plugins.inherited_instance_variables(self, :@lock_column=>nil)
+      end
+
+      module InstanceMethods
+        # Add the lock column instance filter to the object before destroying it.
+        def before_destroy
+          lock_column_instance_filter
+          super
+        end
+        
+        # Add the lock column instance filter to the object before updating it.
+        def before_update
+          lock_column_instance_filter
+          super
+        end
+        
+        private
+        
+        # Add the lock column instance filter to the object.
+        def lock_column_instance_filter
+          lc = model.lock_column
+          instance_filter(lc=>Sequel.blob(send(lc)))
+        end
+
+        # Clear the instance filters when refreshing, so that attempting to
+        # refresh after a failed save removes the previous lock column filter
+        # (the new one will be added before updating).
+        def _refresh(ds)
+          clear_instance_filters
+          super
+        end
+
+        # Remove the lock column from the columns to update.
+        # SQL Server automatically updates the lock column value, and does not like
+        # it to be assigned.
+        def _save_update_all_columns_hash
+          v = @values.dup
+          Array(primary_key).each{|x| v.delete(x) unless changed_columns.include?(x)}
+          v.delete(model.lock_column)
+          v
+        end
+
+        # Add an OUTPUT clause to fetch the updated timestamp when updating the row.
+        def _update_without_checking(columns)
+          ds = _update_dataset
+          lc = model.lock_column
+          rows = ds.clone(ds.send(:default_server_opts, :sql=>ds.output(nil, [Sequel.qualify(:inserted, lc)]).update_sql(columns))).all
+          values[lc] = rows.first[lc] unless rows.empty?
+          rows.length
+        end
+      end
+    end
+  end
+end
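A sketch of recovering from a detected conflict, continuing the Person example above and assuming a retry with fresh data is acceptable:

    begin
      p2.update(:name=>'Bob')
    rescue Sequel::NoExistingObject
      p2.refresh                # reloads the row, including the new timestamp value
      p2.update(:name=>'Bob')
    end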
diff --git a/lib/sequel/plugins/nested_attributes.rb b/lib/sequel/plugins/nested_attributes.rb
index bfd6224..87e7c5e 100644
--- a/lib/sequel/plugins/nested_attributes.rb
+++ b/lib/sequel/plugins/nested_attributes.rb
@@ -286,7 +286,19 @@ module Sequel
         def validate_associated_object(reflection, obj)
           return if reflection[:validate] == false
           association = reflection[:name]
+          if (reflection[:type] == :one_to_many || reflection[:type] == :one_to_one) && (key = reflection[:key]).is_a?(Symbol) && !(pk_val = obj.values[key])
+            # There could be a presence validation on the foreign key in the associated model,
+            # which will fail if we validate before saving the current object.  If there is
+            # no value for the foreign key, set it to the current primary key value, or a dummy
+            # value of 0 if we haven't saved the current object.
+            obj.values[key] = pk || 0
+            key = nil if pk
+          end
           obj.errors.full_messages.each{|m| errors.add(association, m)} unless obj.valid?
+          if key && !pk_val
+            # If we used a dummy value of 0, remove it so it doesn't accidentally remain.
+            obj.values.delete(key)
+          end
         end
       end
     end
diff --git a/lib/sequel/plugins/pg_array_associations.rb b/lib/sequel/plugins/pg_array_associations.rb
index 197809a..0f3e8f2 100644
--- a/lib/sequel/plugins/pg_array_associations.rb
+++ b/lib/sequel/plugins/pg_array_associations.rb
@@ -47,11 +47,11 @@ module Sequel
     #
     # They support some additional options specific to this plugin:
     #
-    # :array_type :: This allows you to specify the type of the array.  This
-    #                is only necessary to set in very narrow circumstances,
-    #                such as when this plugin needs to create an array type,
-    #                and typecasting is turned off or not setup correctly
-    #                for the model object.
+    # :array_type :: This overrides the type of the array.  By default, the type
+    #                is determined by looking at the db_schema for the model, and if that fails,
+    #                it defaults to :integer.
+    # :raise_on_save_failure :: Do not raise exceptions for hook or validation failures when saving associated
+    #                           objects in the add/remove methods (return nil instead).
     # :save_after_modify :: For pg_array_to_many associations, this makes
     #                       the modification methods save the current object,
     #                       so they operate more similarly to the one_to_many
@@ -67,11 +67,24 @@ module Sequel
     # This plugin should work on all supported PostgreSQL versions, except
     # the remove_all modification method for many_to_pg_array associations, which
     # requires the array_remove method added in PostgreSQL 9.3.
+    #
+    # This plugin requires that the underlying database have the pg_array
+    # extension loaded.
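+    #
+    # A minimal usage sketch, assuming Artist and Tag models where the
+    # artists table has a tag_ids integer[] column:
+    #
+    #   Artist.plugin :pg_array_associations
+    #   Tag.plugin :pg_array_associations
+    #   Artist.pg_array_to_many :tags, :save_after_modify=>true
+    #   Tag.many_to_pg_array :artists, :raise_on_save_failure=>false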
     module PgArrayAssociations
       # The AssociationReflection subclass for many_to_pg_array associations.
       class ManyToPgArrayAssociationReflection < Sequel::Model::Associations::AssociationReflection
         Sequel::Model::Associations::ASSOCIATION_TYPES[:many_to_pg_array] = self
 
+        def array_type
+          cached_fetch(:array_type) do
+            if (sch = associated_class.db_schema) && (s = sch[self[:key]]) && (t = s[:db_type])
+              t
+            else
+              :integer
+            end
+          end
+        end
+
         # The array column in the associated model containing foreign keys to
         # the current model.
         def associated_object_keys
@@ -90,6 +103,28 @@ module Sequel
           :"#{underscore(demodulize(self[:model].name))}_ids"
         end
         
+        # Always use the ruby eager_graph limit strategy if association is limited.
+        def eager_graph_limit_strategy(_)
+          :ruby if self[:limit]
+        end
+
+        # Always use the ruby eager limit strategy
+        def eager_limit_strategy
+          cached_fetch(:_eager_limit_strategy) do
+            :ruby if self[:limit]
+          end
+        end
+
+        # Don't use a filter by associations limit strategy
+        def filter_by_associations_limit_strategy
+          nil
+        end
+
+        # Handle silent failure of add/remove methods if raise_on_save_failure is false.
+        def handle_silent_modification_failure?
+          self[:raise_on_save_failure] == false
+        end
+
         # The hash key to use for the eager loading predicate (left side of IN (1, 2, 3))
         def predicate_key
           cached_fetch(:predicate_key){qualify_assoc(self[:key_column])}
@@ -109,6 +144,20 @@ module Sequel
     
         private
     
+        # The predicate condition to use for the eager_loader.
+        def eager_loading_predicate_condition(keys)
+          Sequel.pg_array_op(predicate_key).overlaps(Sequel.pg_array(keys, array_type))
+        end
+
+        def filter_by_associations_add_conditions_dataset_filter(ds)
+          key = qualify(associated_class.table_name, self[:key])
+          ds.select{unnest(key)}.exclude(key=>nil)
+        end
+        
+        def filter_by_associations_conditions_key
+          qualify(self[:model].table_name, primary_key)
+        end
+
         # Only consider an association as a reciprocal if it has matching keys
         # and primary keys.
         def reciprocal_association?(assoc_reflect)
@@ -118,12 +167,26 @@ module Sequel
         def reciprocal_type
           :pg_array_to_many
         end
+
+        def use_placeholder_loader?
+          false
+        end
       end
 
       # The AssociationReflection subclass for pg_array_to_many associations.
       class PgArrayToManyAssociationReflection < Sequel::Model::Associations::AssociationReflection
         Sequel::Model::Associations::ASSOCIATION_TYPES[:pg_array_to_many] = self
 
+        def array_type
+          cached_fetch(:array_type) do
+            if (sch = self[:model].db_schema) && (s = sch[self[:key]]) && (t = s[:db_type])
+              t
+            else
+              :integer
+            end
+          end
+        end
+
         # An array containing the primary key for the associated model.
         def associated_object_keys
           Array(primary_key)
@@ -147,6 +210,29 @@ module Sequel
           :"#{singularize(self[:name])}_ids"
         end
 
+        # Always use the ruby eager_graph limit strategy if association is limited.
+        def eager_graph_limit_strategy(_)
+          :ruby if self[:limit]
+        end
+
+        # Always use the ruby eager limit strategy
+        def eager_limit_strategy
+          cached_fetch(:_eager_limit_strategy) do
+            :ruby if self[:limit]
+          end
+        end
+
+        # Don't use a filter by associations limit strategy
+        def filter_by_associations_limit_strategy
+          nil
+        end
+
+        # Handle silent failure of add/remove methods if raise_on_save_failure is false
+        # and save_after_modify is true.
+        def handle_silent_modification_failure?
+          self[:raise_on_save_failure] == false && self[:save_after_modify]
+        end
+
         # A qualified version of the associated primary key.
         def predicate_key
           cached_fetch(:predicate_key){qualify_assoc(primary_key)}
@@ -162,8 +248,22 @@ module Sequel
           cached_fetch(:primary_key_method){primary_key}
         end
 
+        def filter_by_associations_conditions_expression(obj)
+          ds = filter_by_associations_conditions_dataset.where(filter_by_associations_conditions_subquery_conditions(obj))
+          Sequel.function(:coalesce, Sequel.pg_array(filter_by_associations_conditions_key).overlaps(ds), Sequel::SQL::Constants::FALSE)
+        end
+
         private
     
+        def filter_by_associations_add_conditions_dataset_filter(ds)
+          pk = qualify(associated_class.table_name, primary_key)
+          ds.select{array_agg(pk)}.exclude(pk=>nil)
+        end
+        
+        def filter_by_associations_conditions_key
+          qualify(self[:model].table_name, self[:key])
+        end
+
         # Only consider an association as a reciprocal if it has matching keys
         # and primary keys.
         def reciprocal_association?(assoc_reflect)
@@ -173,6 +273,10 @@ module Sequel
         def reciprocal_type
           :many_to_pg_array
         end
+
+        def use_placeholder_loader?
+          false
+        end
       end
 
       module ClassMethods
@@ -201,20 +305,13 @@ module Sequel
           key = opts[:key]
           key_column = opts[:key_column] ||= opts[:key]
           opts[:after_load].unshift(:array_uniq!) if opts[:uniq]
-          slice_range = opts.slice_range
           opts[:dataset] ||= lambda do
-            opts.associated_dataset.where(Sequel.pg_array_op(opts.predicate_key).contains([send(pk)]))
+            opts.associated_dataset.where(Sequel.pg_array_op(opts.predicate_key).contains(Sequel.pg_array([send(pk)], opts.array_type)))
           end
           opts[:eager_loader] ||= proc do |eo|
             id_map = eo[:id_map]
-            rows = eo[:rows]
-            rows.each do |object|
-              object.associations[name] = []
-            end
 
-            klass = opts.associated_class
-            ds = model.eager_loading_dataset(opts, klass.where(Sequel.pg_array_op(opts.predicate_key).overlaps(id_map.keys)), nil, eo[:associations], eo)
-            ds.all do |assoc_record|
+            eager_load_results(opts, eo.merge(:loader=>false)) do |assoc_record|
               if pks ||= assoc_record.send(key)
                 pks.each do |pkv|
                   next unless objects = id_map[pkv]
@@ -224,9 +321,6 @@ module Sequel
                 end
               end
             end
-            if slice_range
-              rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []}
-            end
           end
 
           join_type = opts[:graph_join_type]
@@ -253,54 +347,44 @@ module Sequel
 
           opts[:eager_grapher] ||= proc do |eo|
             ds = eo[:self]
-            ds = ds.graph(eager_graph_dataset(opts, eo), conditions, eo.merge(:select=>select, :join_type=>join_type, :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master]), &graph_block)
+            ds = ds.graph(eager_graph_dataset(opts, eo), conditions, eo.merge(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep, :from_self_alias=>eo[:from_self_alias]), &graph_block)
             ds
           end
 
-          def_association_dataset_methods(opts)
+          return if opts[:read_only]
 
-          unless opts[:read_only]
-            validate = opts[:validate]
+          save_opts = {:validate=>opts[:validate]}
+          save_opts[:raise_on_failure] = opts[:raise_on_save_failure] != false
 
-            array_type = opts[:array_type] ||= :integer
-            adder = opts[:adder] || proc do |o|
-              if array = o.send(key)
-                array << send(pk)
-              else
-                o.send("#{key}=", Sequel.pg_array([send(pk)], array_type))
-              end
-              o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save")
+          opts[:adder] ||= proc do |o|
+            if array = o.send(key)
+              array << send(pk)
+            else
+              o.send("#{key}=", Sequel.pg_array([send(pk)], opts.array_type))
             end
-            association_module_private_def(opts._add_method, opts, &adder)
-    
-            remover = opts[:remover] || proc do |o|
-              if (array = o.send(key)) && !array.empty?
-                array.delete(send(pk))
-                o.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save")
-              end
-            end
-            association_module_private_def(opts._remove_method, opts, &remover)
-
-            clearer = opts[:clearer] || proc do
-              opts.associated_dataset.where(Sequel.pg_array_op(key).contains([send(pk)])).update(key=>Sequel.function(:array_remove, key, send(pk)))
+            o.save(save_opts)
+          end
+  
+          opts[:remover] ||= proc do |o|
+            if (array = o.send(key)) && !array.empty?
+              array.delete(send(pk))
+              o.save(save_opts)
             end
-            association_module_private_def(opts._remove_all_method, opts, &clearer)
+          end
 
-            def_add_method(opts)
-            def_remove_methods(opts)
+          opts[:clearer] ||= proc do
+            opts.associated_dataset.where(Sequel.pg_array_op(key).contains([send(pk)])).update(key=>Sequel.function(:array_remove, key, send(pk)))
           end
         end
 
         # Setup the pg_array_to_many-specific datasets, eager loaders, and modification methods.
         def def_pg_array_to_many(opts)
           name = opts[:name]
-          model = self
           opts[:key] = opts.default_key unless opts.has_key?(:key)
           key = opts[:key]
           key_column = opts[:key_column] ||= key
           opts[:eager_loader_key] = nil
           opts[:after_load].unshift(:array_uniq!) if opts[:uniq]
-          slice_range = opts.slice_range
           opts[:dataset] ||= lambda do
             opts.associated_dataset.where(opts.predicate_key=>send(key).to_a)
           end
@@ -308,8 +392,8 @@ module Sequel
             rows = eo[:rows]
             id_map = {}
             pkm = opts.primary_key_method
+
             rows.each do |object|
-              object.associations[name] = []
               if associated_pks = object.send(key)
                 associated_pks.each do |apk|
                   (id_map[apk] ||= []) << object
@@ -317,18 +401,13 @@ module Sequel
               end
             end
 
-            klass = opts.associated_class
-            ds = model.eager_loading_dataset(opts, klass.where(opts.predicate_key=>id_map.keys), nil, eo[:associations], eo)
-            ds.all do |assoc_record|
+            eager_load_results(opts, eo.merge(:id_map=>id_map)) do |assoc_record|
               if objects = id_map[assoc_record.send(pkm)]
                 objects.each do |object| 
                   object.associations[name].push(assoc_record)
                 end
               end
             end
-            if slice_range
-              rows.each{|o| o.associations[name] = o.associations[name][slice_range] || []}
-            end
           end
 
           join_type = opts[:graph_join_type]
@@ -355,53 +434,46 @@ module Sequel
 
           opts[:eager_grapher] ||= proc do |eo|
             ds = eo[:self]
-            ds = ds.graph(eager_graph_dataset(opts, eo), conditions, eo.merge(:select=>select, :join_type=>join_type, :qualify=>:deep, :from_self_alias=>ds.opts[:eager_graph][:master]), &graph_block)
+            ds = ds.graph(eager_graph_dataset(opts, eo), conditions, eo.merge(:select=>select, :join_type=>eo[:join_type]||join_type, :qualify=>:deep, :from_self_alias=>eo[:from_self_alias]), &graph_block)
             ds
           end
 
-          def_association_dataset_methods(opts)
+          return if opts[:read_only]
 
-          unless opts[:read_only]
-            validate = opts[:validate]
-            array_type = opts[:array_type] ||= :integer
-            if opts[:save_after_modify]
-              save_after_modify = proc do |obj|
-                obj.save(:validate=>validate) || raise(Sequel::Error, "invalid associated object, cannot save")
-              end
+          save_opts = {:validate=>opts[:validate]}
+          save_opts[:raise_on_failure] = opts[:raise_on_save_failure] != false
+
+          if opts[:save_after_modify]
+            save_after_modify = proc do |obj|
+              obj.save(save_opts)
             end
+          end
 
-            adder = opts[:adder] || proc do |o|
-              opk = o.send(opts.primary_key) 
-              if array = send(key)
-                modified!(key)
-                array << opk
-              else
-                send("#{key}=", Sequel.pg_array([opk], array_type))
-              end
-              save_after_modify.call(self) if save_after_modify
+          opts[:adder] ||= proc do |o|
+            opk = o.send(opts.primary_key) 
+            if array = send(key)
+              modified!(key)
+              array << opk
+            else
+              send("#{key}=", Sequel.pg_array([opk], opts.array_type))
             end
-            association_module_private_def(opts._add_method, opts, &adder)
-    
-            remover = opts[:remover] || proc do |o|
-              if (array = send(key)) && !array.empty?
-                modified!(key)
-                array.delete(o.send(opts.primary_key))
-                save_after_modify.call(self) if save_after_modify
-              end
+            save_after_modify.call(self) if save_after_modify
+          end
+  
+          opts[:remover] ||= proc do |o|
+            if (array = send(key)) && !array.empty?
+              modified!(key)
+              array.delete(o.send(opts.primary_key))
+              save_after_modify.call(self) if save_after_modify
             end
-            association_module_private_def(opts._remove_method, opts, &remover)
+          end
 
-            clearer = opts[:clearer] || proc do
-              if (array = send(key)) && !array.empty?
-                modified!(key)
-                array.clear
-                save_after_modify.call(self) if save_after_modify
-              end
+          opts[:clearer] ||= proc do
+            if (array = send(key)) && !array.empty?
+              modified!(key)
+              array.clear
+              save_after_modify.call(self) if save_after_modify
             end
-            association_module_private_def(opts._remove_all_method, opts, &clearer)
-
-            def_add_method(opts)
-            def_remove_methods(opts)
           end
         end
       end
@@ -426,6 +498,7 @@ module Sequel
             Sequel.expr(pk=>obj.select{Sequel.pg_array_op(ref.qualify(obj.model.table_name, ref[:key_column])).unnest})
           end
           expr = Sequel::SQL::Constants::FALSE unless expr
+          expr = add_association_filter_conditions(ref, obj, expr)
           association_filter_handle_inversion(op, expr, [pk])
         end
 
@@ -435,16 +508,17 @@ module Sequel
           expr = case obj
           when Sequel::Model
             if pkv = obj.send(ref.primary_key_method)
-              Sequel.pg_array_op(key).contains([pkv])
+              Sequel.pg_array_op(key).contains(Sequel.pg_array([pkv], ref.array_type))
             end
           when Array
             if (pkvs = obj.map{|o| o.send(ref.primary_key_method)}.compact) && !pkvs.empty?
-              Sequel.pg_array(key).overlaps(pkvs)
+              Sequel.pg_array(key).overlaps(Sequel.pg_array(pkvs, ref.array_type))
             end
           when Sequel::Dataset
             Sequel.function(:coalesce, Sequel.pg_array_op(key).overlaps(obj.select{array_agg(ref.qualify(obj.model.table_name, ref.primary_key))}), Sequel::SQL::Constants::FALSE)
           end
           expr = Sequel::SQL::Constants::FALSE unless expr
+          expr = add_association_filter_conditions(ref, obj, expr)
           association_filter_handle_inversion(op, expr, [key])
         end
       end
diff --git a/lib/sequel/plugins/prepared_statements.rb b/lib/sequel/plugins/prepared_statements.rb
index 2ca9634..88f6a8a 100644
--- a/lib/sequel/plugins/prepared_statements.rb
+++ b/lib/sequel/plugins/prepared_statements.rb
@@ -130,7 +130,8 @@ module Sequel
           h = @prepared_statements[type]
           Sequel.synchronize do
             if v = h[subtype]
-              return v end
+              return v
+            end
           end
           ps = yield
           Sequel.synchronize{h[subtype] = ps}
diff --git a/lib/sequel/plugins/prepared_statements_associations.rb b/lib/sequel/plugins/prepared_statements_associations.rb
index e3eb5fc..26ab798 100644
--- a/lib/sequel/plugins/prepared_statements_associations.rb
+++ b/lib/sequel/plugins/prepared_statements_associations.rb
@@ -22,17 +22,6 @@ module Sequel
       # lambda returns the next integer to use.
       NEXT = lambda{MUTEX.synchronize{i += 1}}
 
-      module ClassMethods
-        # Disable prepared statement use if a block is given, or the :dataset or :conditions
-        # options are used, or you are cloning an association.
-        def associate(type, name, opts = OPTS, &block)
-          if block || opts[:dataset] || (opts[:clone] && association_reflection(opts[:clone])[:prepared_statement] == false)
-            opts = opts.merge(:prepared_statement=>false)
-          end
-          super(type, name, opts, &block)
-        end
-      end
-
       module InstanceMethods
         private
 
@@ -50,9 +39,9 @@ module Sequel
             association_bound_variable_hash(opts.associated_class.table_name, opts.primary_keys, opts[:keys])
           when :one_to_many, :one_to_one
             association_bound_variable_hash(opts.associated_class.table_name, opts[:keys], opts[:primary_keys])
-          when :many_to_many
+          when :many_to_many, :one_through_one
             association_bound_variable_hash(opts.join_table_alias, opts[:left_keys], opts[:left_primary_keys])
-          when :many_through_many
+          when :many_through_many, :one_through_many
             association_bound_variable_hash(opts.final_reverse_edge[:alias], Array(opts[:left_key]), opts[:left_primary_keys])
           end
         end
@@ -62,26 +51,46 @@ module Sequel
         # instance.  Return false if such a prepared statement cannot be created.
         def association_prepared_statement(opts, assoc_bv)
           opts.send(:cached_fetch, :prepared_statement) do
-            ds, bv = _associated_dataset(opts, {}).unbind
-            if bv.length != assoc_bv.length
-              h = {}
-              bv.each do |k,v|
-                h[k] = v unless assoc_bv.has_key?(k)
+            unless opts[:instance_specific]
+              ds, bv = _associated_dataset(opts, {}).unbind
+              if bv.length != assoc_bv.length
+                h = {}
+                bv.each do |k,v|
+                  h[k] = v unless assoc_bv.has_key?(k)
+                end
+                ds = ds.bind(h)
               end
-              ds = ds.bind(h)
+              ps = ds.prepare(opts.returns_array? ? :select : :first, :"smpsap_#{NEXT.call}")
+              ps.log_sql = true
+              ps
             end
-            ps = ds.prepare(opts.returns_array? ? :select : :first, :"smpsap_#{NEXT.call}")
-            ps.log_sql = true
-            ps
           end
         end
 
-        # If a prepared statement can be used to load the associated objects, execute it to retrieve them.  Otherwise,
-        # fall back to the default implementation.
-        def _load_associated_objects(opts, dynamic_opts=OPTS)
-          if !opts.can_have_associated_objects?(self) || dynamic_opts[:callback] || (load_with_primary_key_lookup?(opts, dynamic_opts) && opts.associated_class.respond_to?(:cache_get_pk))
+        # Use a prepared statement if possible to load the associated object,
+        # unless a dynamic callback is given.
+        def _load_associated_object(opts, dynamic_opts)
+          if !dynamic_opts[:callback] && (bv = association_bound_variables(opts)) && (ps ||= association_prepared_statement(opts, bv))
+            ps.call(bv)
+          else
             super
-          elsif (bv = association_bound_variables(opts)) && (ps ||= association_prepared_statement(opts, bv))
+          end
+        end
+
+        # Use a prepared statement if possible to load the associated object,
+        # unless the associated model uses caching.
+        def _load_associated_object_via_primary_key(opts)
+          if !opts.associated_class.respond_to?(:cache_get_pk) && (bv = association_bound_variables(opts)) && (ps ||= association_prepared_statement(opts, bv))
+            ps.call(bv)
+          else
+            super
+          end
+        end
+
+        # Use a prepared statement if possible to load the associated objects,
+        # unless a dynamic callback is given.
+        def _load_associated_object_array(opts, dynamic_opts)
+          if !dynamic_opts[:callback] && (bv = association_bound_variables(opts)) && (ps ||= association_prepared_statement(opts, bv))
             ps.call(bv)
           else
             super
diff --git a/lib/sequel/plugins/rcte_tree.rb b/lib/sequel/plugins/rcte_tree.rb
index bddf24a..cd097c0 100644
--- a/lib/sequel/plugins/rcte_tree.rb
+++ b/lib/sequel/plugins/rcte_tree.rb
@@ -204,15 +204,12 @@ module Sequel
             end
           end
           table_alias = model.dataset.schema_and_table(model.table_name)[1].to_sym
-          elds = model.eager_loading_dataset(r,
-           model.from(SQL::AliasedExpression.new(t, table_alias)).
-            with_recursive(t, base_case,
-             recursive_case,
-             :args=>((key_aliases + col_aliases) if col_aliases)),
-           r.select,
-           eo[:associations], eo)
-          elds = elds.select_append(ka) unless elds.opts[:select] == nil
-          elds.all do |obj|
+          ds = model.from(SQL::AliasedExpression.new(t, table_alias)).
+            with_recursive(t, base_case, recursive_case,
+             :args=>((key_aliases + col_aliases) if col_aliases))
+          ds = r.apply_eager_dataset_changes(ds)
+          ds = ds.select_append(ka) unless ds.opts[:select] == nil
+          model.eager_load_results(r, eo.merge(:loader=>false, :initalize_rows=>false, :dataset=>ds, :id_map=>nil)) do |obj|
             opk = prkey_conv[obj]
             if parent_map.has_key?(opk)
               if idm_obj = parent_map[opk]
@@ -304,17 +301,16 @@ module Sequel
             level = associations
             no_cache_level = level - 1
             associations = {}
-            base_case = base_case.select_more(SQL::AliasedExpression.new(0, la))
+            base_case = base_case.select_more(SQL::AliasedExpression.new(Sequel.cast(0, Integer), la))
             recursive_case = recursive_case.select_more(SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(t, la) + 1, la)).filter(SQL::QualifiedIdentifier.new(t, la) < level - 1)
           end
           table_alias = model.dataset.schema_and_table(model.table_name)[1].to_sym
-          elds = model.eager_loading_dataset(r,
-           model.from(SQL::AliasedExpression.new(t, table_alias)).with_recursive(t, base_case, recursive_case,
-            :args=>((key_aliases + col_aliases + (level ? [la] : [])) if col_aliases)),
-           r.select,
-           associations, eo)
-          elds = elds.select_append(ka) unless elds.opts[:select] == nil
-          elds.all do |obj|
+          ds = model.from(SQL::AliasedExpression.new(t, table_alias)).
+            with_recursive(t, base_case, recursive_case,
+              :args=>((key_aliases + col_aliases + (level ? [la] : [])) if col_aliases))
+          ds = r.apply_eager_dataset_changes(ds)
+          ds = ds.select_append(ka) unless ds.opts[:select] == nil
+          model.eager_load_results(r, eo.merge(:loader=>false, :initalize_rows=>false, :dataset=>ds, :id_map=>nil, :associations=>{})) do |obj|
             if level
               no_cache = no_cache_level == obj.values.delete(la)
             end
diff --git a/lib/sequel/plugins/serialization.rb b/lib/sequel/plugins/serialization.rb
index 2889948..1ea103c 100644
--- a/lib/sequel/plugins/serialization.rb
+++ b/lib/sequel/plugins/serialization.rb
@@ -56,6 +56,17 @@ module Sequel
     #   user = User.create
     #   user.permissions = { :global => 'read-only' }
     #   user.save
+    #
+    # Note that if you mutate serialized column values without reassigning them,
+    # those changes won't be picked up by <tt>Model#save_changes</tt> or
+    # <tt>Model#update</tt>.  Example:
+    #
+    #   user = User[1]
+    #   user.permissions[:global] = 'foo'
+    #   user.save_changes # Will not pick up changes to permissions
+    #
+    # You can use the +serialization_modification_detection+ plugin to pick
+    # up such changes.
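+    #
+    # As a sketch, reassigning the column value (instead of mutating it in
+    # place) also marks it as changed:
+    #
+    #   user = User[1]
+    #   user.permissions = user.permissions.merge(:global => 'foo')
+    #   user.save_changes # Picks up the change to permissions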
     module Serialization
       # The default serializers supported by the serialization module.
       # Use register_format to add serializers to this hash.
@@ -152,7 +163,10 @@ module Sequel
                 end
               end
               define_method("#{column}=") do |v| 
-                changed_columns << column unless changed_columns.include?(column)
+                if !changed_columns.include?(column) && (new? || send(column) != v)
+                  changed_columns << column
+                end
+
                 deserialized_values[column] = v
               end
             end
diff --git a/lib/sequel/plugins/sharding.rb b/lib/sequel/plugins/sharding.rb
index 5e4cc5b..0bc6580 100644
--- a/lib/sequel/plugins/sharding.rb
+++ b/lib/sequel/plugins/sharding.rb
@@ -24,15 +24,19 @@ module Sequel
           new_using_server(s, values, &block).save
         end
 
-        # When eagerly loading, if the current dataset has a defined shard and the
-        # dataset that you will be using to get the associated records does not,
-        # use the current dataset's shard for the associated dataset.
-        def eager_loading_dataset(opts, ds, select, associations, eager_options=OPTS)
-          ds = super(opts, ds, select, associations, eager_options)
-          if !ds.opts[:server] and s = eager_options[:self] and server = s.opts[:server]
-            ds = ds.server(server)
+        # Eager load the association with the given eager loader options.
+        def eager_load_results(opts, eo, &block)
+          if (s = eo[:self]) && (server = s.opts[:server])
+            eb = eo[:eager_block]
+            set_server = proc do |ds|
+              ds = eb.call(ds) if eb
+              ds = ds.server(server) unless ds.opts[:server]
+              ds
+            end
+            eo = eo.merge(:eager_block=>set_server)
           end
-          ds
+
+          super
         end
 
         # Return a newly instantiated object that is tied to the given
@@ -70,6 +74,11 @@ module Sequel
           use_server(super)
         end
 
+        # Don't use an associated object loader, as it won't respect the shard used.
+        def _associated_object_loader(opts, dynamic_opts)
+          nil
+        end
+
         # Ensure that the join table for many_to_many associations uses the correct shard.
         def _join_table_dataset(opts)
           use_server(super)
diff --git a/lib/sequel/plugins/single_table_inheritance.rb b/lib/sequel/plugins/single_table_inheritance.rb
index 98091f7..7ca270a 100644
--- a/lib/sequel/plugins/single_table_inheritance.rb
+++ b/lib/sequel/plugins/single_table_inheritance.rb
@@ -24,7 +24,15 @@ module Sequel
     #
     #   # Use the default of storing the class name in the sti_key
     #   # column (:kind in this case)
-    #   Employee.plugin :single_table_inheritance, :kind
+    #   class Employee < Sequel::Model
+    #     plugin :single_table_inheritance, :kind
+    #   end
+    #
+    #   # Have subclasses inherit from the appropriate class
+    #   class Staff < Employee; end
+    #   class Manager < Employee; end
+    #
+    #   # You can also use many different options to configure the plugin:
     #
     #   # Using integers to store the class type, with a :model_map hash
     #   # and an sti_key of :type
@@ -126,7 +134,7 @@ module Sequel
         attr_reader :sti_key_array
 
         # A hash/proc with class keys and column value values, mapping
-        # the the class to a particular value given to the sti_key column.
+        # the class to a particular value given to the sti_key column.
         # Used to set the column value when creating objects, and for the
         # filter when retrieving objects in subclasses.
         attr_reader :sti_key_map
@@ -174,7 +182,8 @@ module Sequel
         # keys for all of their descendant classes.
         def sti_subclass_added(key)
           if sti_key_array
-            Sequel.synchronize{sti_key_array.push(*Array(key))}
+            key_array = Array(key)
+            Sequel.synchronize{sti_key_array.push(*key_array)}
             superclass.sti_subclass_added(key)
           end
         end
@@ -207,8 +216,10 @@ module Sequel
 
       module InstanceMethods
         # Set the sti_key column based on the sti_key_map.
-        def before_create
-          send("#{model.sti_key}=", model.sti_key_chooser.call(self)) unless self[model.sti_key]
+        def before_validation
+          if new? && !self[model.sti_key]
+            send("#{model.sti_key}=", model.sti_key_chooser.call(self))
+          end
           super
         end
       end
diff --git a/lib/sequel/plugins/subclasses.rb b/lib/sequel/plugins/subclasses.rb
index 8d8caf8..e4cc9de 100644
--- a/lib/sequel/plugins/subclasses.rb
+++ b/lib/sequel/plugins/subclasses.rb
@@ -42,7 +42,7 @@ module Sequel
 
         # All descendent classes of this model.
         def descendents
-          Sequel.synchronize{_descendents}
+          Sequel.synchronize{subclasses.dup}.map{|x| [x] + x.send(:descendents)}.flatten
         end
 
         Plugins.inherited_instance_variables(self, :@subclasses=>lambda{|v| []}, :@on_subclass=>nil)
@@ -55,14 +55,6 @@ module Sequel
           Sequel.synchronize{subclasses << subclass}
           on_subclass.call(subclass) if on_subclass
         end
-
-        private
-
-        # Recursive, non-thread safe version of descendents, since
-        # the mutex Sequel uses isn't reentrant.
-        def _descendents
-          subclasses.map{|x| [x] + x.send(:_descendents)}.flatten
-        end
       end
     end
   end
diff --git a/lib/sequel/plugins/table_select.rb b/lib/sequel/plugins/table_select.rb
new file mode 100644
index 0000000..12219e7
--- /dev/null
+++ b/lib/sequel/plugins/table_select.rb
@@ -0,0 +1,41 @@
+module Sequel
+  module Plugins
+    # The table_select plugin changes the default selection for a
+    # model dataset from <tt>*</tt> to <tt>table.*</tt>.
+    # This makes it so that if you join the model's dataset to
+    # other tables, columns in the other tables do not appear
+    # in the result sets (where they could otherwise overwrite
+    # columns in the current model with the same name).
+    #
+    # Usage:
+    #
+    #   # Make all model subclasses select table.*
+    #   Sequel::Model.plugin :table_select
+    #
+    #   # Make the Album class select albums.*
+    #   Album.plugin :table_select
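+    #
+    # A rough sketch of the effect, assuming an albums table and no
+    # identifier quoting:
+    #
+    #   Album.dataset.sql # SELECT albums.* FROM albums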
+    module TableSelect
+      # Modify the current model's dataset selection, if the model
+      # has a dataset.
+      def self.configure(model)
+        model.instance_eval do
+          self.dataset = dataset if @dataset
+        end
+      end
+
+      module ClassMethods
+        private
+
+        # If the underlying dataset selects from a single table and
+        # has no explicit selection, select table.* from that table.
+        def convert_input_dataset(ds)
+          ds = super
+          if !ds.opts[:select] && (from = ds.opts[:from]) && from.length == 1 && !ds.opts[:join]
+            ds = ds.select_all(ds.first_source)
+          end
+          ds
+        end
+      end
+    end
+  end
+end
diff --git a/lib/sequel/plugins/tactical_eager_loading.rb b/lib/sequel/plugins/tactical_eager_loading.rb
index e858d68..d1f81d0 100644
--- a/lib/sequel/plugins/tactical_eager_loading.rb
+++ b/lib/sequel/plugins/tactical_eager_loading.rb
@@ -35,6 +35,15 @@ module Sequel
         # retrieved via Dataset#all.
         attr_accessor :retrieved_with
 
+        # Remove retrieved_by and retrieved_with when marshalling.  retrieved_by
+        # contains unmarshallable objects, and retrieved_with can be very large
+        # and is not helpful without retrieved_by.
+        def marshallable!
+          @retrieved_by = nil
+          @retrieved_with = nil
+          super
+        end
+
         private
 
         # If the association is not in the associations cache and the object
diff --git a/lib/sequel/plugins/timestamps.rb b/lib/sequel/plugins/timestamps.rb
index 62185a4..eafa14d 100644
--- a/lib/sequel/plugins/timestamps.rb
+++ b/lib/sequel/plugins/timestamps.rb
@@ -58,8 +58,8 @@ module Sequel
 
       module InstanceMethods
         # Set the create timestamp when creating
-        def before_create
-          set_create_timestamp
+        def before_validation
+          set_create_timestamp if new?
           super
         end
         
@@ -79,7 +79,7 @@ module Sequel
         def set_create_timestamp(time=nil)
           field = model.create_timestamp_field
           meth = :"#{field}="
-          self.send(meth, time||=Sequel.datetime_class.now) if respond_to?(field) && respond_to?(meth) && (model.create_timestamp_overwrite? || send(field).nil?)
+          self.send(meth, time||=model.dataset.current_datetime) if respond_to?(field) && respond_to?(meth) && (model.create_timestamp_overwrite? || send(field).nil?)
           set_update_timestamp(time) if model.set_update_timestamp_on_create?
         end
         
@@ -87,7 +87,7 @@ module Sequel
         # object has a setter method for the update timestamp field.
         def set_update_timestamp(time=nil)
           meth = :"#{model.update_timestamp_field}="
-          self.send(meth, time||Sequel.datetime_class.now) if respond_to?(meth)
+          self.send(meth, time||model.dataset.current_datetime) if respond_to?(meth)
         end
       end
     end
diff --git a/lib/sequel/plugins/touch.rb b/lib/sequel/plugins/touch.rb
index 2c82c43..e09cc0b 100644
--- a/lib/sequel/plugins/touch.rb
+++ b/lib/sequel/plugins/touch.rb
@@ -130,9 +130,9 @@ module Sequel
         end
 
         # The value to use when modifying the touch column for the model instance.
-        # Uses Time.now to work well with typecasting.
+        # Uses Time/DateTime.now to work well with typecasting.
         def touch_instance_value
-          Time.now
+          model.dataset.current_datetime
         end
       end
     end
diff --git a/lib/sequel/plugins/tree.rb b/lib/sequel/plugins/tree.rb
index 1f0f5f9..23085fd 100644
--- a/lib/sequel/plugins/tree.rb
+++ b/lib/sequel/plugins/tree.rb
@@ -88,12 +88,12 @@ module Sequel
           nodes
         end
 
-        # Returns list of ancestors, starting from parent until root.
+        # Returns list of descendants
         #
-        #   subchild1.ancestors # => [child1, root]
+        #   node.descendants # => [child1, child2, subchild1_1, subchild1_2, subchild2_1, subchild2_2]
         def descendants
           nodes = children.dup
-          nodes.each{|child| nodes.concat(child.descendants)}
+          children.each{|child| nodes.concat(child.descendants)}
           nodes 
         end
 
diff --git a/lib/sequel/plugins/update_or_create.rb b/lib/sequel/plugins/update_or_create.rb
new file mode 100644
index 0000000..0a95520
--- /dev/null
+++ b/lib/sequel/plugins/update_or_create.rb
@@ -0,0 +1,60 @@
+module Sequel
+  module Plugins
+    # The update_or_create plugin adds a couple of methods that make it easier
+    # to deal with objects which may or may not yet exist in the database.
+    # The first method is update_or_create, which updates an object if it
+    # exists in the database, or creates the object if it does not.
+    #
+    # You can call update_or_create with a block:
+    #
+    #   Album.update_or_create(:name=>'Hello') do |album|
+    #     album.num_copies_sold = 1000
+    #   end
+    #
+    # or provide two hashes, with the second one being the attributes
+    # to set.
+    #
+    #   Album.update_or_create({:name=>'Hello'}, {:num_copies_sold=>1000})
+    #
+    # In both cases, this will check the database to find the album with
+    # the name "Hello". If such an album exists, it will be updated to set
+    # num_copies_sold to 1000.  If no such album exists, an album with the
+    # name "Hello" and num_copies_sold 1000 will be created.
+    #
+    # The second method is find_or_new, which returns the object from the
+    # database if it exists, or returns a new (unsaved) object if not. It
+    # has the same API as update_or_create, and operates identically to
+    # update_or_create except that it doesn't persist any changes.
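+    #
+    # A short sketch:
+    #
+    #   album = Album.find_or_new({:name=>'Hello'}, {:num_copies_sold=>1000})
+    #   album.new? # true if no album named "Hello" exists
+    #   # Nothing is written to the database unless you later save the object.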
+    #
+    # Usage:
+    #
+    #   # Make all model subclasses support update_or_create
+    #   Sequel::Model.plugin :update_or_create
+    #
+    #   # Make the Album class support update_or_create
+    #   Album.plugin :update_or_create
+    module UpdateOrCreate
+      module ClassMethods
+        # Attempt to find a record with the +attrs+, which should be a
+        # hash with column symbol keys.  If such a record exists, update it
+        # with the values given in +set_attrs+.  If no such record exists,
+        # create a new record with the columns specified by both +attrs+ and
+        # +set_attrs+, with the ones in +set_attrs+ taking priority.  If
+        # a block is given, the object is yielded to the block before the
+        # object is saved.
+        def update_or_create(attrs, set_attrs=nil, &block)
+          find_or_new(attrs, set_attrs, &block).save_changes
+        end
+
+        # Operates the same as +update_or_create+, but returns the object
+        # without persisting changes (no UPDATE/INSERT queries).
+        def find_or_new(attrs, set_attrs=nil, &block)
+          obj = find(attrs) || new(attrs)
+          obj.set(set_attrs) if set_attrs
+          yield obj if block_given?
+          obj
+        end
+      end
+    end
+  end
+end
diff --git a/lib/sequel/plugins/validation_class_methods.rb b/lib/sequel/plugins/validation_class_methods.rb
index 39ecf86..9f957e1 100644
--- a/lib/sequel/plugins/validation_class_methods.rb
+++ b/lib/sequel/plugins/validation_class_methods.rb
@@ -67,7 +67,7 @@ module Sequel
         # Instructs the model to skip validations defined in superclasses
         def skip_superclass_validations
           superclass.validations.each do |att, procs|
-            if ps = @validations[att]
+            if @validations[att]
               @validations[att] -= procs
             end
           end
diff --git a/lib/sequel/plugins/validation_helpers.rb b/lib/sequel/plugins/validation_helpers.rb
index 7895a4e..5d53505 100644
--- a/lib/sequel/plugins/validation_helpers.rb
+++ b/lib/sequel/plugins/validation_helpers.rb
@@ -202,6 +202,8 @@ module Sequel
         # since it can deal with a grouping of multiple attributes.
         #
         # Possible Options:
+        # :dataset :: The base dataset to use for the unique query, defaults to the
+        #             model's dataset.
         # :message :: The message to use (default: 'is already taken')
         # :only_if_modified :: Only check the uniqueness if the object is new or
         #                      one of the columns has been modified.
@@ -231,12 +233,13 @@ module Sequel
             arr = Array(a)
             next if arr.any?{|x| errors.on(x)}
             next if opts[:only_if_modified] && !new? && !arr.any?{|x| changed_columns.include?(x)}
+            ds = opts[:dataset] || model.dataset
             ds = if where
-              where.call(model.dataset, self, arr)
+              where.call(ds, self, arr)
             else
               vals = arr.map{|x| send(x)}
               next if vals.any?{|v| v.nil?}
-              model.where(arr.zip(vals))
+              ds.where(arr.zip(vals))
             end
             ds = yield(ds) if block_given?
             ds = ds.exclude(pk_hash) unless new?
diff --git a/lib/sequel/sql.rb b/lib/sequel/sql.rb
index bed3af4..891d207 100644
--- a/lib/sequel/sql.rb
+++ b/lib/sequel/sql.rb
@@ -48,6 +48,17 @@ module Sequel
       t = now
       local(t.year, t.month, t.day, hour, minute, second, usec)
     end
+
+    # Return a string in HH:MM:SS format representing the time.
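+    #
+    #   # A small sketch, using the hour/minute/second form of SQLTime.create:
+    #   Sequel::SQLTime.create(10, 20, 30).to_s # => "10:20:30"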
+    def to_s(*args)
+      if args.empty?
+        strftime('%H:%M:%S')
+      else
+        # Superclass may have defined a method that takes a format string,
+        # and we shouldn't override it in that case.
+        super
+      end
+    end
   end
 
   # The SQL module holds classes whose instances represent SQL fragments.
@@ -205,7 +216,7 @@ module Sequel
         case op
         when *N_ARITY_OPERATORS
           raise(Error, "The #{op} operator requires at least 1 argument") unless args.length >= 1
-          old_args = args
+          old_args = args.map{|a| a.is_a?(self.class) && a.op == :NOOP ? a.args.first : a}
           args = []
           old_args.each{|a| a.is_a?(self.class) && a.op == op ? args.concat(a.args) : args.push(a)}
         when *TWO_ARITY_OPERATORS
@@ -240,8 +251,9 @@ module Sequel
       # Create an SQL alias (+AliasedExpression+) of the receiving column or expression to the given alias.
       #
       #   Sequel.function(:func).as(:alias) # func() AS "alias"
-      def as(aliaz)
-        AliasedExpression.new(self, aliaz)
+      #   Sequel.function(:func).as(:alias, [:col_alias1, :col_alias2]) # func() AS "alias"("col_alias1", "col_alias2")
+      def as(aliaz, columns=nil)
+        AliasedExpression.new(self, aliaz, columns)
       end
     end
 
@@ -309,8 +321,9 @@ module Sequel
       # Create an SQL::AliasedExpression for the given expression and alias.
       #
       #   Sequel.as(:column, :alias) # "column" AS "alias"
-      def as(exp, aliaz)
-        SQL::AliasedExpression.new(exp, aliaz)
+      #   Sequel.as(:column, :alias, [:col_alias1, :col_alias2]) # "column" AS "alias"("col_alias1", "col_alias2")
+      def as(exp, aliaz, columns=nil)
+        SQL::AliasedExpression.new(exp, aliaz, columns)
       end
 
       # Order the given argument ascending.
@@ -380,7 +393,7 @@ module Sequel
       #   Sequel.char_length(:a) # char_length(a) -- Most databases
       #   Sequel.char_length(:a) # length(a) -- SQLite
       def char_length(arg)
-        SQL::EmulatedFunction.new(:char_length, arg)
+        SQL::Function.new!(:char_length, [arg], :emulate=>true)
       end
 
       # Do a deep qualification of the argument using the qualifier.  This recurses into
@@ -544,7 +557,7 @@ module Sequel
       # Create a <tt>BooleanExpression</tt> case insensitive (if the database supports it) pattern match of the receiver with
       # the given patterns.  See <tt>SQL::StringExpression.like</tt>.
       #
-      #   Sequel.ilike(:a, 'A%') # "a" ILIKE 'A%'
+      #   Sequel.ilike(:a, 'A%') # "a" ILIKE 'A%' ESCAPE '\'
       def ilike(*args)
         SQL::StringExpression.like(*(args << {:case_insensitive=>true}))
       end
@@ -552,7 +565,7 @@ module Sequel
       # Create a <tt>SQL::BooleanExpression</tt> case sensitive (if the database supports it) pattern match of the receiver with
       # the given patterns.  See <tt>SQL::StringExpression.like</tt>.
       #
-      #   Sequel.like(:a, 'A%') # "a" LIKE 'A%'
+      #   Sequel.like(:a, 'A%') # "a" LIKE 'A%' ESCAPE '\'
       def like(*args)
         SQL::StringExpression.like(*args)
       end
@@ -637,7 +650,7 @@ module Sequel
       #   Sequel.trim(:a) # trim(a) -- Most databases
       #   Sequel.trim(:a) # ltrim(rtrim(a)) -- Microsoft SQL Server
       def trim(arg)
-        SQL::EmulatedFunction.new(:trim, arg)
+        SQL::Function.new!(:trim, [arg], :emulate=>true)
       end
 
       # Return a <tt>SQL::ValueList</tt> created from the given array.  Used if the array contains
@@ -787,20 +800,23 @@ module Sequel
     # arguments with the appropriate operator, and the & and | operators return
     # boolean expressions combining all of the arguments with either AND or OR.
     module OperatorBuilders
-      %w'+ - * /'.each do |op|
-        class_eval(<<-END, __FILE__, __LINE__ + 1)
-          def #{op}(*args)
-            SQL::NumericExpression.new(:#{op}, *args)
-          end
-        END
-      end
-
-      {'&'=>'AND', '|'=>'OR'}.each do |m, op|
-        class_eval(<<-END, __FILE__, __LINE__ + 1)
-          def #{m}(*args)
-            SQL::BooleanExpression.new(:#{op}, *args)
-          end
-        END
+      {'::Sequel::SQL::NumericExpression'=>{'+'=>'+', '-'=>'-', '*'=>'*', '/'=>'/'},
+       '::Sequel::SQL::BooleanExpression'=>{'&'=>'AND', '|'=>'OR'}}.each do |klass, ops|
+        ops.each do |m, op|
+          class_eval(<<-END, __FILE__, __LINE__ + 1)
+            def #{m}(*args)
+              if (args.length == 1)
+                if (v = args.first).class.is_a?(#{klass})
+                  v
+                else
+                  #{klass}.new(:NOOP, v)
+                end
+              else
+                #{klass}.new(:#{op}, *args)
+              end
+            end
+          END
+        end
       end
       
       # Invert the given expression.  Returns a <tt>Sequel::SQL::BooleanExpression</tt>
@@ -873,7 +889,7 @@ module Sequel
       # Create a +BooleanExpression+ case insensitive pattern match of the receiver
       # with the given patterns.  See <tt>StringExpression.like</tt>.
       #
-      #   :a.ilike('A%') # "a" ILIKE 'A%'
+      #   :a.ilike('A%') # "a" ILIKE 'A%' ESCAPE '\'
       def ilike(*ces)
         StringExpression.like(self, *(ces << {:case_insensitive=>true}))
       end
@@ -881,7 +897,7 @@ module Sequel
       # Create a +BooleanExpression+ case sensitive (if the database supports it) pattern match of the receiver with
       # the given patterns.  See <tt>StringExpression.like</tt>.
       #
-      #   :a.like('A%') # "a" LIKE 'A%'
+      #   :a.like('A%') # "a" LIKE 'A%' ESCAPE '\'
       def like(*ces)
         StringExpression.like(self, *ces)
       end
@@ -924,10 +940,17 @@ module Sequel
       # The alias to use for the expression, not +alias+ since that is
       # a keyword in ruby.
       attr_reader :aliaz
+      alias_method :alias, :aliaz
+
+      # The columns aliases to use, for when the aliased expression is
+      # a record or set of records (such as a dataset). 
+      attr_reader :columns
 
       # Create an object with the given expression and alias.
-      def initialize(expression, aliaz)
-        @expression, @aliaz = expression, aliaz
+      def initialize(expression, aliaz, columns=nil)
+        @expression = expression
+        @aliaz = aliaz
+        @columns = columns
       end
 
       to_s_method :aliased_expression_sql
@@ -1000,6 +1023,8 @@ module Sequel
           StringExpression.like(l, r)
         when DelayedEvaluation
           Sequel.delay{from_value_pair(l, r.callable.call)}
+        when Dataset::PlaceholderLiteralizer::Argument
+          r.transform{|v| from_value_pair(l, v)}
         else
           new(:'=', l, r)
         end
@@ -1200,23 +1225,127 @@ module Sequel
 
     # Represents an SQL function call.
     class Function < GenericExpression
+      WILDCARD = LiteralString.new('*').freeze
+      DISTINCT = ["DISTINCT ".freeze].freeze
+      COMMA_ARRAY = [LiteralString.new(', ').freeze].freeze
+
       # The SQL function to call
-      attr_reader :f
+      attr_reader :name
+      alias f name
       
       # The array of arguments to pass to the function (may be blank)
       attr_reader :args
 
-      # Set the functions and args to the given arguments
-      def initialize(f, *args)
-        @f, @args = f, args
+      # Options for this function
+      attr_reader :opts
+
+      # Set the name and args for the function
+      def initialize(name, *args)
+        @name = name
+        @args = args
+        @opts = OPTS
+      end
+
+      def self.new!(name, args, opts)
+        f = new(name, *args)
+        f.instance_variable_set(:@opts, opts)
+        f
+      end
+
+      # If no arguments are given, return a new function with the wildcard prepended to the arguments.
+      #
+      #   Sequel.function(:count).*  # count(*)
+      def *(ce=(arg=false;nil))
+        if arg == false
+          raise Error, "Cannot apply * to functions with arguments" unless args.empty?
+          with_opts(:"*"=>true)
+        else
+          super(ce)
+        end
+      end
+
+      # Return a new function with DISTINCT before the method arguments.
+      #
+      #   Sequel.function(:count, :col).distinct # count(DISTINCT col)
+      def distinct
+        with_opts(:distinct=>true)
+      end
+
+      # Return a new function with FILTER added to it, for filtered
+      # aggregate functions:
+      #
+      #   Sequel.function(:foo, :col).filter(:a=>1) # foo(col) FILTER (WHERE a = 1)
+      def filter(*args, &block)
+        args = args.first if args.length == 1
+        with_opts(:filter=>args, :filter_block=>block)
+      end
+
+      # Return a function which will use LATERAL when literalized:
+      #
+      #   Sequel.function(:foo, :col).lateral # LATERAL foo(col)
+      def lateral
+        with_opts(:lateral=>true)
+      end
+
+      # Return a new function with an OVER clause (making it a window function).
+      #
+      #   Sequel.function(:row_number).over(:partition=>:col) # row_number() OVER (PARTITION BY col)
+      def over(window=OPTS)
+        raise Error, "function already has a window applied to it" if opts[:over]
+        window = Window.new(window) unless window.is_a?(Window)
+        with_opts(:over=>window)
+      end
+
+      # Return a new function where the function name will be quoted if the database supports
+      # quoted functions:
+      #
+      #   Sequel.function(:foo).quoted # "foo"()
+      def quoted
+        with_opts(:quoted=>true)
+      end
+
+      # Return a new function where the function name will not be quoted even
+      # if the database supports quoted functions:
+      #
+      #   Sequel.expr(:foo).function.unquoted # foo()
+      def unquoted
+        with_opts(:quoted=>false)
+      end
+
+      # Return a new function that will use WITH ORDINALITY to also return
+      # a row number for every row the function returns:
+      #
+      #   Sequel.function(:foo).with_ordinality # foo() WITH ORDINALITY
+      def with_ordinality
+        with_opts(:with_ordinality=>true)
+      end
+
+      # Return a new function that uses WITHIN GROUP ordered by the given expression,
+      # useful for ordered-set and hypothetical-set aggregate functions:
+      #
+      #   Sequel.function(:rank, :a).within_group(:b, :c)
+      #   # rank(a) WITHIN GROUP (ORDER BY b, c)
+      def within_group(*expressions)
+        with_opts(:within_group=>expressions)
       end
 
       to_s_method :function_sql
+
+      private
+
+      # Return a new function call with the given opts merged into the current opts.
+      def with_opts(opts)
+        self.class.new!(name, args, @opts.merge(opts))
+      end
     end
 
-    # Represents an SQL function call that is translated/emulated
-    # on databases that lack support for such a function.
+    # REMOVE411
     class EmulatedFunction < Function
+      def self.new(name, *args)
+        Deprecation.deprecate("Sequel::SQL::EmulatedFunction", "Please use Sequel::SQL::Function.new!(name, args, :emulate=>true) to create an emulated SQL function")
+        Function.new!(name, args, :emulate=>true)
+      end
+
       to_s_method :emulated_function_sql
     end
     
@@ -1245,6 +1374,12 @@ module Sequel
       def initialize(value)
         @value = value
       end
+
+      # Create a Function using this identifier as the function's name, with
+      # the given args.
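+      #
+      #   # A rough sketch; quoting of the name depends on database support:
+      #   Sequel::SQL::Identifier.new(:foo).function(1, :a) # foo(1, a) or "foo"(1, a)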
+      def function(*args)
+        Function.new(self, *args)
+      end
       
       to_s_method :quote_identifier, '@value'
     end
@@ -1254,15 +1389,48 @@ module Sequel
       # The type of join to do
       attr_reader :join_type
 
-      # The actual table to join
-      attr_reader :table
-
-      # The table alias to use for the join, if any
-      attr_reader :table_alias
+      # The expression representing the table/set related to the JOIN.
+      # Is an AliasedExpression if the JOIN uses an alias.
+      attr_reader :table_expr
 
-      # Create an object with the given join_type, table, and table alias
+      # Create an object with the given join_type and table expression.
       def initialize(join_type, table, table_alias = nil)
-        @join_type, @table, @table_alias = join_type, table, table_alias
+        @join_type = join_type
+
+        @table_expr = if table.is_a?(AliasedExpression)
+          table
+        # REMOVE411
+        elsif table_alias
+          Deprecation.deprecate("The table_alias argument to Sequel::SQL::JoinClause#initialize", "Please use a Sequel::SQL::AliasedExpression as the table argument instead.")
+          AliasedExpression.new(table, table_alias)
+        else
+          table
+        end
+      end
+
+      # The table/set related to the JOIN, without any alias.
+      def table
+        if @table_expr.is_a?(AliasedExpression)
+          @table_expr.expression
+        else
+          @table_expr
+        end
+      end
+
+      # The table alias to use for the JOIN, or nil if the
+      # JOIN does not alias the table.
+      def table_alias
+        if @table_expr.is_a?(AliasedExpression)
+          @table_expr.alias
+        end
+      end
+
+      # The column aliases to use for the JOIN, or nil if the
+      # JOIN does not use a derived column list.
+      def column_aliases
+        if @table_expr.is_a?(AliasedExpression)
+          @table_expr.columns
+        end
       end
 
       to_s_method :join_clause_sql
@@ -1399,6 +1567,12 @@ module Sequel
         @table, @column = table, column
       end
       
+      # Create a Function using this qualified identifier as the function's name, with
+      # the given args.
+      def function(*args)
+        Function.new(self, *args)
+      end
+      
       to_s_method :qualified_identifier_sql, "@table, @column"
     end
     
@@ -1431,9 +1605,9 @@ module Sequel
       # if a case insensitive regular expression is used (//i), that particular
       # pattern which will always be case insensitive.
       #
-      #   StringExpression.like(:a, 'a%') # "a" LIKE 'a%'
-      #   StringExpression.like(:a, 'a%', :case_insensitive=>true) # "a" ILIKE 'a%'
-      #   StringExpression.like(:a, 'a%', /^a/i) # "a" LIKE 'a%' OR "a" ~* '^a' 
+      #   StringExpression.like(:a, 'a%') # "a" LIKE 'a%' ESCAPE '\'
+      #   StringExpression.like(:a, 'a%', :case_insensitive=>true) # "a" ILIKE 'a%' ESCAPE '\'
+      #   StringExpression.like(:a, 'a%', /^a/i) # "a" LIKE 'a%' ESCAPE '\' OR "a" ~* '^a'
       def self.like(l, *ces)
         l, lre, lci = like_element(l)
         lci = (ces.last.is_a?(Hash) ? ces.pop : {})[:case_insensitive] ? true : lci
@@ -1477,7 +1651,7 @@ module Sequel
       end
 
       # Create a new +Subscript+ appending the given subscript(s)
-      # the the current array of subscripts.
+      # to the current array of subscripts.
       #
       #   :a.sql_subscript(2) # a[2]
       #   :a.sql_subscript(2) | 1 # a[2, 1]
@@ -1513,7 +1687,7 @@ module Sequel
     # If the block doesn't take an argument, the block is instance_execed in the context of
     # an instance of this class.
     #
-    # +VirtualRow+ uses +method_missing+ to return either an +Identifier+, +QualifiedIdentifier+, +Function+, or +WindowFunction+, 
+    # +VirtualRow+ uses +method_missing+ to return either an +Identifier+, +QualifiedIdentifier+, or +Function+
     # depending on how it is called.
     #
     # If a block is _not_ given, creates one of the following objects:
@@ -1524,16 +1698,15 @@ module Sequel
     #                          table being the part before __, and the column being the part after.
     # +Identifier+ :: Returned otherwise, using the method name.
     #
-    # If a block is given, it returns either a +Function+ or +WindowFunction+, depending on the first
-    # argument to the method.  Note that the block is currently not called by the code, though
+    # If a block is given, it returns a +Function+.  Note that the block is currently not called by the code, though
     # this may change in a future version.  If the first argument is:
     #
     # no arguments given :: creates a +Function+ with no arguments.
     # :* :: creates a +Function+ with a literal wildcard argument (*), mostly useful for COUNT.
     # :distinct :: creates a +Function+ that prepends DISTINCT to the rest of the arguments, mostly
     #              useful for aggregate functions.
-    # :over :: creates a +WindowFunction+.  If a second argument is provided, it should be a hash
-    #          of options which are passed to Window (with possible keys :window, :partition, :order, and :frame).  The
+    # :over :: creates a +Function+ with a window.  If a second argument is provided, it should be a hash
+    #          of options which are used to create the +Window+ (with possible keys :window, :partition, :order, and :frame).  The
     #          arguments to the function itself should be specified as <tt>:*=>true</tt> for a wildcard, or via
     #          the <tt>:args</tt> option.
     #
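+    # Hedged examples of the block forms above (ds and the column names are
+    # only illustrative):
+    #
+    #   ds.select{count(:*){}}                # SELECT count(*) FROM t
+    #   ds.select{count(:distinct, col){}}    # SELECT count(DISTINCT col) FROM t
+    #   ds.select{rank(:over, :order=>id){}}  # SELECT rank() OVER (ORDER BY id) FROM t
+    #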
@@ -1582,14 +1755,10 @@ module Sequel
     #   # Literal Strings
     #   ds.filter{{a=>`some SQL`}} # SELECT * FROM t WHERE (a = some SQL)
     #
-    # For a more detailed explanation, see the {Virtual Rows guide}[link:files/doc/virtual_rows_rdoc.html].
+    # For a more detailed explanation, see the {Virtual Rows guide}[rdoc-ref:doc/virtual_rows.rdoc].
     class VirtualRow < BasicObject
-      WILDCARD = LiteralString.new('*').freeze
       QUESTION_MARK = LiteralString.new('?').freeze
-      COMMA_SEPARATOR = LiteralString.new(', ').freeze
       DOUBLE_UNDERSCORE = '__'.freeze
-      DISTINCT = ["DISTINCT ".freeze].freeze
-      COMMA_ARRAY = [COMMA_SEPARATOR].freeze
 
       include OperatorBuilders
 
@@ -1606,7 +1775,7 @@ module Sequel
         Sequel::LiteralString.new(s)
       end
 
-      # Return an +Identifier+, +QualifiedIdentifier+, +Function+, or +WindowFunction+, depending
+      # Return an +Identifier+, +QualifiedIdentifier+, or +Function+, depending
       # on arguments and whether a block is provided.  Does not currently call the block.
       # See the class level documentation.
       def method_missing(m, *args, &block)
@@ -1616,13 +1785,14 @@ module Sequel
           else
             case args.shift
             when :*
-              Function.new(m, WILDCARD)
+              Function.new(m, *args).*
             when :distinct
-              Function.new(m, PlaceholderLiteralString.new(DISTINCT + COMMA_ARRAY * (args.length-1), args))
+              Function.new(m, *args).distinct
             when :over
-              opts = args.shift || {}
-              fun_args = ::Kernel.Array(opts[:*] ? WILDCARD : opts[:args])
-              WindowFunction.new(Function.new(m, *fun_args), Window.new(opts))
+              opts = args.shift || OPTS
+              f = Function.new(m, *::Kernel.Array(opts[:args]))
+              f = f.* if opts[:*]
+              f.over(opts)
             else
               Kernel.raise(Error, 'unsupported VirtualRow method argument used with block')
             end
@@ -1638,9 +1808,7 @@ module Sequel
       Sequel::VIRTUAL_ROW = new
     end
 
-    # A +Window+ is part of a window function specifying the window over which the function operates.
-    # It is separated from the +WindowFunction+ class because it also can be used separately on
-    # some databases.
+    # A +Window+ is part of a window function, specifying the window over which the function operates.
     class Window < Expression
       # The options for this window.  Options currently supported:
       # :frame :: if specified, should be :all, :rows, or a String that is used literally. :all always operates over all rows in the
@@ -1659,7 +1827,7 @@ module Sequel
       to_s_method :window_sql, '@opts'
     end
 
-    # A +WindowFunction+ is a grouping of a +Function+ with a +Window+ over which it operates.
+    # REMOVE411
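+    #
+    # Hedged sketch of the replacement suggested by the deprecation below
+    # (the function, column, and partition names are illustrative):
+    #
+    #   Sequel::SQL::Function.new(:sum, :col).over(:partition=>:group_id)
+    #   # sum(col) OVER (PARTITION BY group_id)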
     class WindowFunction < GenericExpression
       # The function to use, should be an <tt>SQL::Function</tt>.
       attr_reader :function
@@ -1667,8 +1835,14 @@ module Sequel
       # The window to use, should be an <tt>SQL::Window</tt>.
       attr_reader :window
 
+      def self.new(function, window)
+        Deprecation.deprecate("Sequel::SQL::WindowFunction", "Please use Sequel::SQL::Function.new(name, *args).over(...) to create an SQL window function")
+        function.over(window)
+      end
+
       # Set the function and window.
       def initialize(function, window)
+        Deprecation.deprecate("Sequel::SQL::WindowFunction", "Please use Sequel::SQL::Function.new(name, *args).over(...) to create an SQL window function")
         @function, @window = function, window
       end
 
diff --git a/lib/sequel/version.rb b/lib/sequel/version.rb
index ff5fcfa..e042b43 100644
--- a/lib/sequel/version.rb
+++ b/lib/sequel/version.rb
@@ -3,7 +3,7 @@ module Sequel
   MAJOR = 4
   # The minor version of Sequel.  Bumped for every non-patch level
   # release, generally around once a month.
-  MINOR = 3
+  MINOR = 11
   # The tiny version of Sequel.  Usually 0, only bumped for bugfix
   # releases that fix regressions from previous versions.
   TINY  = 0
diff --git a/sequel.gemspec b/sequel.gemspec
index 2bf5a3a..8747986 100644
--- a/sequel.gemspec
+++ b/sequel.gemspec
@@ -1,7 +1,6 @@
 require File.expand_path("../lib/sequel/version", __FILE__)
 SEQUEL_GEMSPEC = Gem::Specification.new do |s|
   s.name = 'sequel'
-  s.rubyforge_project = 'sequel'
   s.version = Sequel.version
   s.platform = Gem::Platform::RUBY
   s.has_rdoc = true
@@ -11,7 +10,7 @@ SEQUEL_GEMSPEC = Gem::Specification.new do |s|
   s.description = s.summary
   s.author = "Jeremy Evans"
   s.email = "code at jeremyevans.net"
-  s.homepage = "http://sequel.rubyforge.org"
+  s.homepage = "http://sequel.jeremyevans.net"
   s.license = 'MIT'
   s.required_ruby_version = ">= 1.8.7"
   s.files = %w(MIT-LICENSE CHANGELOG README.rdoc Rakefile bin/sequel) + Dir["doc/**/*.{rdoc,txt}"] + Dir["{spec,lib}/**/*.{rb,RB}"]
diff --git a/spec/adapters/db2_spec.rb b/spec/adapters/db2_spec.rb
index 7b6040f..70b8219 100644
--- a/spec/adapters/db2_spec.rb
+++ b/spec/adapters/db2_spec.rb
@@ -30,12 +30,12 @@ end
 
 describe "Simple Dataset operations" do
   before(:all) do
-    Sequel::DB2.use_clob_as_blob = false
+    Sequel::DB2.use_clob_as_blob = true
     DB.create_table!(:items) do
       Integer :id, :primary_key => true
       Integer :number
       column  :bin_string, 'varchar(20) for bit data'
-      column  :bin_blob, 'blob'
+      column  :bin_clob, 'clob'
     end
     @ds = DB[:items]
   end
@@ -43,7 +43,7 @@ describe "Simple Dataset operations" do
     @ds.delete
   end
   after(:all) do
-    Sequel::DB2.use_clob_as_blob = true
+    Sequel::DB2.use_clob_as_blob = false
     DB.drop_table(:items)
   end
 
@@ -54,8 +54,8 @@ describe "Simple Dataset operations" do
   end
 
   specify "should insert into binary columns" do
-    @ds.insert(:id => 1, :bin_string => Sequel.blob("\1"), :bin_blob => Sequel.blob("\2"))
-    @ds.select(:bin_string, :bin_blob).first.should == {:bin_string => "\1", :bin_blob => "\2"}
+    @ds.insert(:id => 1, :bin_string => Sequel.blob("\1"), :bin_clob => Sequel.blob("\2"))
+    @ds.select(:bin_string, :bin_clob).first.should == {:bin_string => "\1", :bin_clob => "\2"}
   end
 end
 
diff --git a/spec/adapters/mssql_spec.rb b/spec/adapters/mssql_spec.rb
index a8b8917..ccfa334 100644
--- a/spec/adapters/mssql_spec.rb
+++ b/spec/adapters/mssql_spec.rb
@@ -16,12 +16,12 @@ describe "A MSSQL database" do
     @db = DB
   end
 
-  cspecify "should be able to read fractional part of timestamp", :odbc do
+  specify "should be able to read fractional part of timestamp" do
     rs = @db["select getutcdate() as full_date, cast(datepart(millisecond, getutcdate()) as int) as milliseconds"].first
     rs[:milliseconds].should == rs[:full_date].usec/1000
   end
 
-  cspecify "should be able to write fractional part of timestamp", :odbc do
+  specify "should be able to write fractional part of timestamp" do
     t = Time.utc(2001, 12, 31, 23, 59, 59, 997000)
     (t.usec/1000).should == @db["select cast(datepart(millisecond, ?) as int) as milliseconds", t].get
   end
@@ -607,8 +607,8 @@ describe "Database#foreign_key_list" do
       end
     end
     after(:all) do
-      DB.drop_table :vendor__mapping
-      DB.drop_table :vendor__vendors
+      DB.drop_table? :vendor__mapping
+      DB.drop_table? :vendor__vendors
       DB.execute_ddl "drop schema vendor"
     end
     it "should support mixed schema bound tables" do
@@ -616,3 +616,107 @@ describe "Database#foreign_key_list" do
     end
   end
 end
+
+describe "MSSQL optimistic locking plugin" do
+  before do
+    @db = DB
+    @db.create_table! :items do
+      primary_key :id
+      String :name, :size => 20
+      column :timestamp, 'timestamp'
+    end
+  end
+  after do
+    @db.drop_table?(:items)
+  end
+
+  it "should not allow stale updates" do
+    c = Class.new(Sequel::Model(:items))
+    c.plugin :mssql_optimistic_locking
+    o = c.create(:name=>'test')
+    o2 = c.first
+    ts = o.timestamp
+    ts.should_not be_nil
+    o.name = 'test2'
+    o.save
+    o.timestamp.should_not == ts
+    proc{o2.save}.should raise_error(Sequel::NoExistingObject)
+  end
+end unless DB.adapter_scheme == :odbc
+
+describe "MSSQL Stored Procedure support" do
+  before do
+    @db = DB
+    @now = DateTime.now.to_s
+    @db.execute('CREATE PROCEDURE dbo.SequelTest
+      (@Input varchar(25), @IntegerInput int, @Output varchar(25) OUTPUT, @IntegerOutput int OUTPUT) AS
+      BEGIN SET @Output = @Input SET @IntegerOutput = @IntegerInput RETURN @IntegerInput END')
+  end
+  after do
+    @db.execute('DROP PROCEDURE dbo.SequelTest')
+  end
+
+  describe "with unnamed parameters" do
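+    # Note (hedged, inferred from the examples in this spec): positional :output
+    # markers may also be given as [:output, sql_type, result_key] to set the
+    # output parameter's type and the key used in the returned hash.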
+    it "should return a hash of output variables" do
+      r = @db.call_mssql_sproc(:SequelTest, {:args => [@now, 1, :output, :output]})
+      r.should be_a_kind_of(Hash)
+      r.values_at(:var2, :var3).should == [@now, '1']
+    end
+
+    it "should support typed output variables" do
+      @db.call_mssql_sproc(:SequelTest, {:args => [@now, 1, :output, [:output, 'int']]})[:var3].should == 1
+    end
+
+    it "should support named output variables" do
+      @db.call_mssql_sproc(:SequelTest, {:args => [@now, 1, [:output, nil, 'output'], :output]})[:output].should == @now
+    end
+
+    it "should return the number of Affected Rows" do
+      @db.call_mssql_sproc(:SequelTest, {:args => [@now, 1, :output, :output]})[:numrows].should == 1
+    end
+
+    it "should return the Result Code" do
+      @db.call_mssql_sproc(:SequelTest, {:args => [@now, 1, :output, :output]})[:result].should == 1
+    end
+  end
+
+  describe "with named parameters" do
+    it "should return a hash of output variables" do
+      r = @db.call_mssql_sproc(:SequelTest, :args => {
+        'Input' => @now,
+        'IntegerInput' => 1,
+        'Output' => [:output, nil, 'output'],
+        'IntegerOutput' => [:output, nil, 'integer_output']
+      })
+      r.should be_a_kind_of(Hash)
+      r.values_at(:output, :integer_output).should == [@now, '1']
+    end
+
+    it "should support typed output variables" do
+      @db.call_mssql_sproc(:SequelTest, :args => {
+        'Input' => @now,
+        'IntegerInput' => 1,
+        'Output' => [:output, nil, 'output'],
+        'IntegerOutput' => [:output, 'int', 'integer_output']
+      })[:integer_output].should == 1
+    end
+
+    it "should return the number of Affected Rows" do
+      @db.call_mssql_sproc(:SequelTest, :args => {
+        'Input' => @now,
+        'IntegerInput' => 1,
+        'Output' => [:output, nil, 'output'],
+        'IntegerOutput' => [:output, nil, 'integer_output']
+      })[:numrows].should == 1
+    end
+
+    it "should return the Result Code" do
+      @db.call_mssql_sproc(:SequelTest, :args => {
+        'Input' => @now,
+        'IntegerInput' => 1,
+        'Output' => [:output, nil, 'output'],
+        'IntegerOutput' => [:output, nil, 'integer_output']
+      })[:result].should == 1
+    end
+  end
+end unless DB.adapter_scheme == :odbc
diff --git a/spec/adapters/mysql_spec.rb b/spec/adapters/mysql_spec.rb
index 96ce18e..b586b2c 100644
--- a/spec/adapters/mysql_spec.rb
+++ b/spec/adapters/mysql_spec.rb
@@ -520,6 +520,17 @@ describe "A MySQL database" do
     end
   end
 
+  specify "should correctly format ALTER TABLE statements with named foreign keys" do
+    @db.create_table(:items){Integer :id}
+    @db.create_table(:users){primary_key :id}
+    @db.alter_table(:items){add_foreign_key :p_id, :users, :key => :id, :null => false, :on_delete => :cascade, :foreign_key_constraint_name => :pk_items__users }
+    check_sqls do
+      @db.sqls.should == ["CREATE TABLE `items` (`id` integer)",
+        "CREATE TABLE `users` (`id` integer PRIMARY KEY AUTO_INCREMENT)",
+        "ALTER TABLE `items` ADD COLUMN `p_id` integer NOT NULL, ADD CONSTRAINT `pk_items__users` FOREIGN KEY (`p_id`) REFERENCES `users`(`id`) ON DELETE CASCADE"]
+    end
+  end
+
   specify "should have rename_column support keep existing options" do
     @db.create_table(:items){String :id, :null=>false, :default=>'blah'}
     @db.alter_table(:items){rename_column :id, :nid}
diff --git a/spec/adapters/oracle_spec.rb b/spec/adapters/oracle_spec.rb
index ae447ae..11c25ec 100644
--- a/spec/adapters/oracle_spec.rb
+++ b/spec/adapters/oracle_spec.rb
@@ -21,13 +21,34 @@ describe "An Oracle database" do
       Integer :id
       String :cat_name, :size => 50
     end
+
+    DB.create_table!(:notes) do
+      Integer :id
+      String :title, :size => 50
+      String :content, :text => true
+    end
     @d = DB[:items]
   end
   after do
     @d.delete
   end
   after(:all) do
-    DB.drop_table?(:items, :books, :categories)
+    DB.drop_table?(:items, :books, :categories, :notes)
+  end
+
+  specify "should allow limit and offset with clob columns" do
+    notes = []
+    notes << {:id => 1, :title => 'abc', :content => 'zyx'}
+    notes << {:id => 2, :title => 'def', :content => 'wvu'}
+    notes << {:id => 3, :title => 'ghi', :content => 'tsr'}
+    notes << {:id => 4, :title => 'jkl', :content => 'qpo'}
+    notes << {:id => 5, :title => 'mno', :content => 'nml'}
+    DB[:notes].multi_insert(notes)
+
+    DB[:notes].sort_by{|x| x[:id]}.should == notes
+    rows = DB[:notes].limit(3, 0).all
+    rows.length.should == 3
+    rows.each{|v| notes.should include(v)}
   end
 
   specify "should provide disconnect functionality" do
@@ -39,15 +60,15 @@ describe "An Oracle database" do
 
   specify "should have working view_exists?" do
     begin
-      DB.view_exists?(:cats).should be_false
+      DB.view_exists?(:cats).should == false
       DB.create_view(:cats, DB[:categories])
-      DB.view_exists?(:cats).should be_true
+      DB.view_exists?(:cats).should == true
       om = DB.identifier_output_method
       im = DB.identifier_input_method
       DB.identifier_output_method = :reverse
       DB.identifier_input_method = :reverse
-      DB.view_exists?(:STAC).should be_true
-      DB.view_exists?(:cats).should be_false
+      DB.view_exists?(:STAC).should == true
+      DB.view_exists?(:cats).should == false
     ensure
       DB.identifier_output_method = om
       DB.identifier_input_method = im
diff --git a/spec/adapters/postgres_spec.rb b/spec/adapters/postgres_spec.rb
index 0b1efa4..bd05282 100644
--- a/spec/adapters/postgres_spec.rb
+++ b/spec/adapters/postgres_spec.rb
@@ -27,6 +27,46 @@ describe "PostgreSQL", '#create_table' do
     end
   end
 
+  specify "temporary table should support :on_commit option" do
+    @db.drop_table?(:some_table)
+    @db.transaction do
+      @db.create_table(:some_table, :temp => true, :on_commit => :drop){text :name}
+    end
+    @db.table_exists?(:some_table).should == false
+
+    @db.transaction do
+      @db.create_table(:some_table, :temp => true, :on_commit => :delete_rows){text :name}
+      @db[:some_table].insert('a')
+    end
+    @db.table_exists?(:some_table).should == true
+    @db[:some_table].empty?.should == true
+
+    @db.drop_table(:some_table)
+    @db.transaction do
+      @db.create_table(:some_table, :temp => true, :on_commit => :preserve_rows){text :name}
+      @db[:some_table].insert('a')
+    end
+    @db.table_exists?(:some_table).should == true
+    @db[:some_table].count.should == 1
+    @db.drop_table(:some_table)
+  end
+
+  specify "temporary table should accept :on_commit with :as option" do
+    @db.drop_table?(:some_table)
+    @db.transaction do
+      @db.create_table(:some_table, :temp => true, :on_commit => :drop, :as => 'select 1')
+    end
+    @db.table_exists?(:some_table).should == false
+  end
+
+  specify ":on_commit should raise an error if not used on a temporary table" do
+    proc{@db.create_table(:some_table, :on_commit => :drop)}.should raise_error(Sequel::Error)
+  end
+
+  specify ":on_commit should raise an error if given an unsupported value" do
+    proc{@db.create_table(:some_table, :temp => true, :on_commit => :unsupported){text :name}}.should raise_error(Sequel::Error)
+  end
+
   specify "should create an unlogged table" do
     @db.create_table(:unlogged_dolls, :unlogged => true){text :name}
     check_sqls do
@@ -77,8 +117,8 @@ end
 describe "PostgreSQL views" do
   before do
     @db = DB
-    @db.drop_view(:items_view, :cascade=>true, :if_exists=>true)
-    @db.create_table!(:items){Integer :number}
+    @db.drop_table?(:items, :cascade=>true)
+    @db.create_table(:items){Integer :number}
     @db[:items].insert(10)
     @db[:items].insert(20)
   end
@@ -112,6 +152,15 @@ describe "PostgreSQL views" do
     @db[:items_view].select_order_map(:number).should == [10, 15, 20]
   end if DB.server_version >= 90300
 
+  specify "should support refreshing materialized views concurrently" do
+    @opts = {:materialized=>true}
+    @db.create_view(:items_view, @db[:items].where{number >= 10}, @opts)
+    @db.refresh_view(:items_view)
+    proc{@db.refresh_view(:items_view, :concurrently=>true)}.should raise_error(Sequel::DatabaseError)
+    @db.add_index :items_view, :number, :unique=>true
+    proc{@db.refresh_view(:items_view, :concurrently=>true)}.should_not raise_error
+  end if DB.server_version >= 90400
+
   specify "should support :if_exists=>true for not raising an error if the view does not exist" do
     proc{@db.drop_view(:items_view, :if_exists=>true)}.should_not raise_error
   end
@@ -130,6 +179,47 @@ describe "A PostgreSQL database" do
     @db.server_version.should > 70000
   end
 
+  specify "should respect the :read_only option per-savepoint" do
+    proc{@db.transaction{@db.transaction(:savepoint=>true, :read_only=>true){@db[:public__testfk].insert}}}.should raise_error(Sequel::DatabaseError)
+    proc{@db.transaction(:auto_savepoint=>true, :read_only=>true){@db.transaction(:read_only=>false){@db[:public__testfk].insert}}}.should raise_error(Sequel::DatabaseError)
+    proc{@db.transaction{@db[:public__testfk].insert; @db.transaction(:savepoint=>true, :read_only=>true){@db[:public__testfk].all;}}}.should_not raise_error
+    proc{@db.transaction{@db.transaction(:savepoint=>true, :read_only=>true){}; @db[:public__testfk].insert}}.should_not raise_error
+    proc{@db.transaction{@db[:public__testfk].all; @db.transaction(:savepoint=>true, :read_only=>true){@db[:public__testfk].all;}}}.should_not raise_error
+  end
+
+  specify "should support disable_insert_returning" do
+    ds = @db[:public__testfk].disable_insert_returning
+    ds.delete
+    ds.insert.should == nil
+    id = ds.max(:id)
+    ds.select_order_map([:id, :i]).should == [[id, nil]]
+    ds.insert(:i=>id).should == nil
+    ds.select_order_map([:id, :i]).should == [[id, nil], [id+1, id]]
+    ds.insert_select(:i=>ds.max(:id)).should == nil
+    ds.select_order_map([:id, :i]).should == [[id, nil], [id+1, id]]
+    c = Class.new(Sequel::Model(ds))
+    c.class_eval do
+      def before_create
+        self.id = model.max(:id)+1
+        super
+      end
+    end
+    c.create(:i=>id+1).should == c.load(:id=>id+2, :i=>id+1)
+    ds.select_order_map([:id, :i]).should == [[id, nil], [id+1, id], [id+2, id+1]]
+    ds.delete
+  end
+
+  specify "should support functions with and without quoting" do
+    ds = @db[:public__testfk]
+    ds.delete
+    ds.insert
+    ds.get{sum(1)}.should == 1
+    ds.get{Sequel.function('pg_catalog.sum', 1)}.should == 1
+    ds.get{sum.function(1)}.should == 1
+    ds.get{pg_catalog__sum.function(1)}.should == 1
+    ds.delete
+  end
+
   specify "should support a :qualify option to tables and views" do
     @db.tables(:qualify=>true).should include(Sequel.qualify(:public, :testfk))
     begin
@@ -161,6 +251,12 @@ describe "A PostgreSQL database" do
   specify "should return uuid fields as strings" do
     @db.get(Sequel.cast('550e8400-e29b-41d4-a716-446655440000', :uuid)).should == '550e8400-e29b-41d4-a716-446655440000'
   end
+
+  specify "should handle inserts with placeholder literal string tables" do
+    ds = @db.from(Sequel.lit('?', :testfk))
+    ds.insert(:id=>1)
+    ds.select_map(:id).should == [1]
+  end
 end
 
 describe "A PostgreSQL database with domain types" do
@@ -247,6 +343,22 @@ describe "A PostgreSQL dataset" do
     @d.order(Sequel.asc(:value, :nulls=>:first), :name).reverse.select_map(:name).should == %w[bcd bcd abc]
   end
 
+  specify "should support selecting from LATERAL functions" do
+    @d.from{[generate_series(1,3,1).as(:a), pow(:a, 2).lateral.as(:b)]}.select_map([:a, :b]).should == [[1, 1], [2, 4], [3, 9]]
+  end if DB.server_version >= 90300
+
+  specify "should support ordered-set and hypothetical-set aggregate functions" do
+    @d.from{generate_series(1,3,1).as(:a)}.select{(a.sql_number % 2).as(:a)}.from_self.get{mode{}.within_group(:a)}.should == 1
+  end if DB.server_version >= 90400
+
+  specify "should support filtered aggregate functions" do
+    @d.from{generate_series(1,3,1).as(:a)}.select{(a.sql_number % 2).as(:a)}.from_self.get{count(:a).filter(:a=>1)}.should == 2
+  end if DB.server_version >= 90400
+
+  specify "should support functions with ordinality" do
+    @d.from{generate_series(1,10,3).with_ordinality}.select_map([:generate_series, :ordinality]).should == [[1, 1], [4, 2], [7, 3], [10, 4]]
+  end if DB.server_version >= 90400
+
   specify "#lock should lock tables and yield if a block is given" do
     @d.lock('EXCLUSIVE'){@d.insert(:name=>'a')}
   end
@@ -769,31 +881,31 @@ describe "A PostgreSQL database" do
 
   specify "should support fulltext indexes and searching" do
     @db.create_table(:posts){text :title; text :body; full_text_index [:title, :body]; full_text_index :title, :language => 'french', :index_type=>:gist}
-    check_sqls do
-      @db.sqls.should == [
-        %{CREATE TABLE "posts" ("title" text, "body" text)},
-        %{CREATE INDEX "posts_title_body_index" ON "posts" USING gin (to_tsvector('simple'::regconfig, (COALESCE("title", '') || ' ' || COALESCE("body", ''))))},
-        %{CREATE INDEX "posts_title_index" ON "posts" USING gist (to_tsvector('french'::regconfig, (COALESCE("title", ''))))}
-      ]
-    end
 
     @db[:posts].insert(:title=>'ruby rails', :body=>'yowsa')
     @db[:posts].insert(:title=>'sequel', :body=>'ruby')
     @db[:posts].insert(:title=>'ruby scooby', :body=>'x')
-    @db.sqls.clear
 
     @db[:posts].full_text_search(:title, 'rails').all.should == [{:title=>'ruby rails', :body=>'yowsa'}]
     @db[:posts].full_text_search([:title, :body], ['yowsa', 'rails']).all.should == [:title=>'ruby rails', :body=>'yowsa']
     @db[:posts].full_text_search(:title, 'scooby', :language => 'french').all.should == [{:title=>'ruby scooby', :body=>'x'}]
-    check_sqls do
-      @db.sqls.should == [
-        %{SELECT * FROM "posts" WHERE (to_tsvector('simple'::regconfig, (COALESCE("title", ''))) @@ to_tsquery('simple'::regconfig, 'rails'))},
-        %{SELECT * FROM "posts" WHERE (to_tsvector('simple'::regconfig, (COALESCE("title", '') || ' ' || COALESCE("body", ''))) @@ to_tsquery('simple'::regconfig, 'yowsa | rails'))},
-        %{SELECT * FROM "posts" WHERE (to_tsvector('french'::regconfig, (COALESCE("title", ''))) @@ to_tsquery('french'::regconfig, 'scooby'))}]
-    end
 
     @db[:posts].full_text_search(:title, :$n).call(:select, :n=>'rails').should == [{:title=>'ruby rails', :body=>'yowsa'}]
     @db[:posts].full_text_search(:title, :$n).prepare(:select, :fts_select).call(:n=>'rails').should == [{:title=>'ruby rails', :body=>'yowsa'}]
+
+    @db[:posts].insert(:title=>'jruby rubinius ruby maglev mri iron')
+    @db[:posts].insert(:title=>'ruby jruby maglev mri rubinius iron')
+    @db[:posts].full_text_search(:title, 'rubinius ruby', :phrase=>true).select_order_map(:title).should == ['jruby rubinius ruby maglev mri iron']
+    @db[:posts].full_text_search(:title, 'jruby maglev', :phrase=>true).select_order_map(:title).should == ['ruby jruby maglev mri rubinius iron']
+    @db[:posts].full_text_search(:title, 'rubinius ruby', :plain=>true).select_order_map(:title).should == ['jruby rubinius ruby maglev mri iron', 'ruby jruby maglev mri rubinius iron']
+    @db[:posts].full_text_search(:title, 'jruby maglev', :plain=>true).select_order_map(:title).should == ['jruby rubinius ruby maglev mri iron', 'ruby jruby maglev mri rubinius iron']
+
+    @db[:posts].delete
+    t1 = "bork " * 1000 + "ruby sequel"
+    t2 = "ruby sequel " * 1000
+    @db[:posts].insert(:title=>t1)
+    @db[:posts].insert(:title=>t2)
+    @db[:posts].full_text_search(:title, 'ruby & sequel', :rank=>true).select_map(:title).should == [t1, t2]
   end
 
   specify "should support spatial indexes" do
@@ -1299,13 +1411,13 @@ if DB.dataset.supports_window_functions?
     end
 
     specify "should give correct results for window functions" do
-      @ds.window(:win, :partition=>:group_id, :order=>:id).select(:id){sum(:over, :args=>amount, :window=>win){}}.all.should ==
+      @ds.window(:win, :partition=>:group_id, :order=>:id).select(:id){sum(:amount).over(:window=>win)}.all.should ==
         [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1000, :id=>4}, {:sum=>11000, :id=>5}, {:sum=>111000, :id=>6}]
-      @ds.window(:win, :partition=>:group_id).select(:id){sum(:over, :args=>amount, :window=>win, :order=>id){}}.all.should ==
+      @ds.window(:win, :partition=>:group_id).select(:id){sum(:amount).over(:window=>win, :order=>id)}.all.should ==
         [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1000, :id=>4}, {:sum=>11000, :id=>5}, {:sum=>111000, :id=>6}]
-      @ds.window(:win, {}).select(:id){sum(:over, :args=>amount, :window=>:win, :order=>id){}}.all.should ==
+      @ds.window(:win, {}).select(:id){sum(:amount).over(:window=>:win, :order=>id)}.all.should ==
         [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1111, :id=>4}, {:sum=>11111, :id=>5}, {:sum=>111111, :id=>6}]
-      @ds.window(:win, :partition=>:group_id).select(:id){sum(:over, :args=>amount, :window=>:win, :order=>id, :frame=>:all){}}.all.should ==
+      @ds.window(:win, :partition=>:group_id).select(:id){sum(:amount).over(:window=>:win, :order=>id, :frame=>:all)}.all.should ==
         [{:sum=>111, :id=>1}, {:sum=>111, :id=>2}, {:sum=>111, :id=>3}, {:sum=>111000, :id=>4}, {:sum=>111000, :id=>5}, {:sum=>111000, :id=>6}]
     end
   end
@@ -1410,6 +1522,17 @@ if DB.adapter_scheme == :postgres
       @ds.all.should == @ds.use_cursor.all
     end
 
+    specify "should not swallow errors if closing cursor raises an error" do
+      proc do
+        @db.synchronize do |c|
+          @ds.use_cursor.each do |r|
+            @db.run "CLOSE sequel_cursor"
+            raise ArgumentError
+          end
+        end
+      end.should raise_error(ArgumentError)
+    end
+
     specify "should respect the :rows_per_fetch option" do
       @db.sqls.clear
       @ds.use_cursor.all
@@ -1423,6 +1546,23 @@ if DB.adapter_scheme == :postgres
       end
     end
 
+    specify "should respect the :hold=>true option by creating the cursor WITH HOLD and not using a transaction" do
+      @ds.use_cursor.each{@db.in_transaction?.should == true}
+      check_sqls{@db.sqls.any?{|s| s =~ /WITH HOLD/}.should == false}
+      @ds.use_cursor(:hold=>true).each{@db.in_transaction?.should == false}
+      check_sqls{@db.sqls.any?{|s| s =~ /WITH HOLD/}.should == true}
+    end
+
+    specify "should support updating individual rows based on a cursor" do
+      @db.transaction(:rollback=>:always) do
+        @ds.use_cursor(:rows_per_fetch=>1).each do |row|
+          @ds.where_current_of.update(:x=>Sequel.*(row[:x], 10))
+        end
+        @ds.select_order_map(:x).should == (0..1000).map{|x| x * 10}
+      end
+      @ds.select_order_map(:x).should == (0..1000).to_a
+    end
+
     specify "should respect the :cursor_name option" do
       one_rows = []
       two_rows = []
@@ -1642,7 +1782,7 @@ if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_versio
         ['', nil].should include(payload)
         called = true
       end.should == 'foo'
-      called.should be_true
+      called.should == true
 
       # Check weird identifier names
       called = false
@@ -1652,7 +1792,7 @@ if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_versio
         ['', nil].should include(payload)
         called = true
       end.should == 'FOO bar'
-      called.should be_true
+      called.should == true
 
       # Check identifier symbols
       called = false
@@ -1662,7 +1802,7 @@ if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_versio
         ['', nil].should include(payload)
         called = true
       end.should == 'foo'
-      called.should be_true
+      called.should == true
 
       called = false
       @db.listen('foo', :after_listen=>proc{@db.notify('foo', :payload=>'bar')}) do |ev, pid, payload|
@@ -1671,7 +1811,7 @@ if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_versio
         payload.should == 'bar'
         called = true
       end.should == 'foo'
-      called.should be_true
+      called.should == true
 
       @db.listen('foo', :after_listen=>proc{@db.notify('foo')}).should == 'foo'
 
@@ -1692,8 +1832,8 @@ if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_versio
           break
         end
       end.should be_nil
-      called.should be_true
-      called2.should be_true
+      called.should == true
+      called2.should == true
       i.should == 1
     end
 
@@ -1701,7 +1841,7 @@ if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_versio
       @db.listen('foo2', :timeout=>0.001).should == nil
       called = false
       @db.listen('foo2', :timeout=>0.001){|ev, pid, payload| called = true}.should == nil
-      called.should be_false
+      called.should == false
       i = 0
       @db.listen('foo2', :timeout=>0.001, :loop=>proc{i+=1; throw :stop if i > 3}){|ev, pid, payload| called = true}.should == nil
       i.should == 4
@@ -1741,7 +1881,7 @@ describe 'PostgreSQL special float handling' do
     specify 'inserts NaN' do
       nan = 0.0/0.0
       @ds.insert(:value=>nan)
-      @ds.all[0][:value].nan?.should be_true
+      @ds.all[0][:value].nan?.should == true
     end
 
     specify 'inserts +Infinity' do
@@ -1763,8 +1903,7 @@ describe 'PostgreSQL array handling' do
     @db = DB
     @db.extension :pg_array
     @ds = @db[:items]
-    @native = DB.adapter_scheme == :postgres
-    @jdbc = DB.adapter_scheme == :jdbc
+    @native = DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
     @tp = lambda{@db.schema(:items).map{|a| a.last[:type]}}
   end
   after do
@@ -1779,14 +1918,12 @@ describe 'PostgreSQL array handling' do
       column :r, 'real[]'
       column :dp, 'double precision[]'
     end
-    @tp.call.should == [:integer_array, :integer_array, :bigint_array, :float_array, :float_array]
+    @tp.call.should == [:smallint_array, :integer_array, :bigint_array, :real_array, :float_array]
     @ds.insert(Sequel.pg_array([1], :int2), Sequel.pg_array([nil, 2], :int4), Sequel.pg_array([3, nil], :int8), Sequel.pg_array([4, nil, 4.5], :real), Sequel.pg_array([5, nil, 5.5], "double precision"))
     @ds.count.should == 1
     rs = @ds.all
-    if @jdbc || @native
-      rs.should == [{:i2=>[1], :i4=>[nil, 2], :i8=>[3, nil], :r=>[4.0, nil, 4.5], :dp=>[5.0, nil, 5.5]}]
-    end
     if @native
+      rs.should == [{:i2=>[1], :i4=>[nil, 2], :i8=>[3, nil], :r=>[4.0, nil, 4.5], :dp=>[5.0, nil, 5.5]}]
       rs.first.values.each{|v| v.should_not be_a_kind_of(Array)}
       rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)}
       @ds.delete
@@ -1798,10 +1935,8 @@ describe 'PostgreSQL array handling' do
     @ds.insert(Sequel.pg_array([[1], [2]], :int2), Sequel.pg_array([[nil, 2], [3, 4]], :int4), Sequel.pg_array([[3, nil], [nil, nil]], :int8), Sequel.pg_array([[4, nil], [nil, 4.5]], :real), Sequel.pg_array([[5, nil], [nil, 5.5]], "double precision"))
 
     rs = @ds.all
-    if @jdbc || @native
-      rs.should == [{:i2=>[[1], [2]], :i4=>[[nil, 2], [3, 4]], :i8=>[[3, nil], [nil, nil]], :r=>[[4, nil], [nil, 4.5]], :dp=>[[5, nil], [nil, 5.5]]}]
-    end
     if @native
+      rs.should == [{:i2=>[[1], [2]], :i4=>[[nil, 2], [3, 4]], :i8=>[[3, nil], [nil, nil]], :r=>[[4, nil], [nil, 4.5]], :dp=>[[5, nil], [nil, 5.5]]}]
       rs.first.values.each{|v| v.should_not be_a_kind_of(Array)}
       rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)}
       @ds.delete
@@ -1818,10 +1953,8 @@ describe 'PostgreSQL array handling' do
     @ds.insert(Sequel.pg_array([BigDecimal.new('1.000000000000000000001'), nil, BigDecimal.new('1')], :numeric))
     @ds.count.should == 1
     rs = @ds.all
-    if @jdbc || @native
-      rs.should == [{:n=>[BigDecimal.new('1.000000000000000000001'), nil, BigDecimal.new('1')]}]
-    end
     if @native
+      rs.should == [{:n=>[BigDecimal.new('1.000000000000000000001'), nil, BigDecimal.new('1')]}]
       rs.first.values.each{|v| v.should_not be_a_kind_of(Array)}
       rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)}
       @ds.delete
@@ -1832,10 +1965,8 @@ describe 'PostgreSQL array handling' do
     @ds.delete
     @ds.insert(Sequel.pg_array([[BigDecimal.new('1.0000000000000000000000000000001'), nil], [nil, BigDecimal.new('1')]], :numeric))
     rs = @ds.all
-    if @jdbc || @native
-      rs.should == [{:n=>[[BigDecimal.new('1.0000000000000000000000000000001'), nil], [nil, BigDecimal.new('1')]]}]
-    end
     if @native
+      rs.should == [{:n=>[[BigDecimal.new('1.0000000000000000000000000000001'), nil], [nil, BigDecimal.new('1')]]}]
       rs.first.values.each{|v| v.should_not be_a_kind_of(Array)}
       rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)}
       @ds.delete
@@ -1850,14 +1981,12 @@ describe 'PostgreSQL array handling' do
       column :vc, 'varchar[]'
       column :t, 'text[]'
     end
-    @tp.call.should == [:string_array, :string_array, :string_array]
-    @ds.insert(Sequel.pg_array(['a', nil, 'NULL', 'b"\'c'], 'char(4)'), Sequel.pg_array(['a', nil, 'NULL', 'b"\'c'], :varchar), Sequel.pg_array(['a', nil, 'NULL', 'b"\'c'], :text))
+    @tp.call.should == [:character_array, :varchar_array, :string_array]
+    @ds.insert(Sequel.pg_array(['a', nil, 'NULL', 'b"\'c'], 'char(4)'), Sequel.pg_array(['a', nil, 'NULL', 'b"\'c', '', ''], :varchar), Sequel.pg_array(['a', nil, 'NULL', 'b"\'c'], :text))
     @ds.count.should == 1
     rs = @ds.all
-    if @jdbc || @native
-      rs.should == [{:c=>['a   ', nil, 'NULL', 'b"\'c'], :vc=>['a', nil, 'NULL', 'b"\'c'], :t=>['a', nil, 'NULL', 'b"\'c']}]
-    end
     if @native
+      rs.should == [{:c=>['a   ', nil, 'NULL', 'b"\'c'], :vc=>['a', nil, 'NULL', 'b"\'c', '', ''], :t=>['a', nil, 'NULL', 'b"\'c']}]
       rs.first.values.each{|v| v.should_not be_a_kind_of(Array)}
       rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)}
       @ds.delete
@@ -1866,12 +1995,10 @@ describe 'PostgreSQL array handling' do
     end
 
     @ds.delete
-    @ds.insert(Sequel.pg_array([[['a'], [nil]], [['NULL'], ['b"\'c']]], 'char(4)'), Sequel.pg_array([[['a'], ['']], [['NULL'], ['b"\'c']]], :varchar), Sequel.pg_array([[['a'], [nil]], [['NULL'], ['b"\'c']]], :text))
+    @ds.insert(Sequel.pg_array([[['a'], [nil]], [['NULL'], ['b"\'c']]], 'char(4)'), Sequel.pg_array([[['a[],\\[\\]\\,\\""NULL",'], ['']], [['NULL'], ['b"\'c']]], :varchar), Sequel.pg_array([[['a'], [nil]], [['NULL'], ['b"\'c']]], :text))
     rs = @ds.all
-    if @jdbc || @native
-      rs.should == [{:c=>[[['a   '], [nil]], [['NULL'], ['b"\'c']]], :vc=>[[['a'], ['']], [['NULL'], ['b"\'c']]], :t=>[[['a'], [nil]], [['NULL'], ['b"\'c']]]}]
-    end
     if @native
+      rs.should == [{:c=>[[['a   '], [nil]], [['NULL'], ['b"\'c']]], :vc=>[[['a[],\\[\\]\\,\\""NULL",'], ['']], [['NULL'], ['b"\'c']]], :t=>[[['a'], [nil]], [['NULL'], ['b"\'c']]]}]
       rs.first.values.each{|v| v.should_not be_a_kind_of(Array)}
       rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)}
       @ds.delete
@@ -1897,10 +2024,8 @@ describe 'PostgreSQL array handling' do
     @ds.insert(Sequel.pg_array([true, false], :bool), Sequel.pg_array([d, nil], :date), Sequel.pg_array([t, nil], :time), Sequel.pg_array([ts, nil], :timestamp), Sequel.pg_array([ts, nil], :timestamptz))
     @ds.count.should == 1
     rs = @ds.all
-    if @jdbc || @native
-      rs.should == [{:b=>[true, false], :d=>[d, nil], :t=>[t, nil], :ts=>[ts, nil], :tstz=>[ts, nil]}]
-    end
     if @native
+      rs.should == [{:b=>[true, false], :d=>[d, nil], :t=>[t, nil], :ts=>[ts, nil], :tstz=>[ts, nil]}]
       rs.first.values.each{|v| v.should_not be_a_kind_of(Array)}
       rs.first.values.each{|v| v.to_a.should be_a_kind_of(Array)}
       @ds.delete
@@ -1913,7 +2038,7 @@ describe 'PostgreSQL array handling' do
       column :tz, 'timetz[]'
       column :o, 'oid[]'
     end
-    @tp.call.should == [:blob_array, :time_timezone_array, :integer_array]
+    @tp.call.should == [:blob_array, :time_timezone_array, :oid_array]
     @ds.insert(Sequel.pg_array([Sequel.blob("a\0"), nil], :bytea), Sequel.pg_array([t, nil], :timetz), Sequel.pg_array([1, 2, 3], :oid))
     @ds.count.should == 1
     if @native
@@ -1978,7 +2103,7 @@ describe 'PostgreSQL array handling' do
       @ds.insert(rs.first)
       @ds.all.should == rs
     end
-  end unless DB.adapter_scheme == :jdbc
+  end
 
   specify 'use arrays in bound variables' do
     @db.create_table!(:items) do
@@ -2050,11 +2175,33 @@ describe 'PostgreSQL array handling' do
     end
     c = Class.new(Sequel::Model(@db[:items]))
     c.plugin :pg_typecast_on_load, :i, :f, :d, :t unless @native
+    h = {:i=>[1,2, nil], :f=>[[1, 2.5], [3, 4.5]], :d=>[1, BigDecimal.new('1.000000000000000000001')], :t=>[%w'a b c', ['NULL', nil, '1']]}
+    o = c.create(h)
+    o.i.should == [1, 2, nil]
+    o.f.should == [[1, 2.5], [3, 4.5]]
+    o.d.should == [BigDecimal.new('1'), BigDecimal.new('1.000000000000000000001')]
+    o.t.should == [%w'a b c', ['NULL', nil, '1']]
+    c.where(:i=>o.i, :f=>o.f, :d=>o.d, :t=>o.t).all.should == [o]
+    o2 = c.new(h)
+    c.where(:i=>o2.i, :f=>o2.f, :d=>o2.d, :t=>o2.t).all.should == [o]
+
+    @db.create_table!(:items) do
+      primary_key :id
+      column :i, 'int2[]'
+      column :f, 'real[]'
+      column :d, 'numeric(30,28)[]'
+      column :t, 'varchar[]'
+    end
+    c = Class.new(Sequel::Model(@db[:items]))
+    c.plugin :pg_typecast_on_load, :i, :f, :d, :t unless @native
     o = c.create(:i=>[1,2, nil], :f=>[[1, 2.5], [3, 4.5]], :d=>[1, BigDecimal.new('1.000000000000000000001')], :t=>[%w'a b c', ['NULL', nil, '1']])
     o.i.should == [1, 2, nil]
     o.f.should == [[1, 2.5], [3, 4.5]]
     o.d.should == [BigDecimal.new('1'), BigDecimal.new('1.000000000000000000001')]
     o.t.should == [%w'a b c', ['NULL', nil, '1']]
+    c.where(:i=>o.i, :f=>o.f, :d=>o.d, :t=>o.t).all.should == [o]
+    o2 = c.new(h)
+    c.where(:i=>o2.i, :f=>o2.f, :d=>o2.d, :t=>o2.t).all.should == [o]
   end
 
   specify 'operations/functions with pg_array_ops' do
@@ -2062,26 +2209,26 @@ describe 'PostgreSQL array handling' do
     @db.create_table!(:items){column :i, 'integer[]'; column :i2, 'integer[]'; column :i3, 'integer[]'; column :i4, 'integer[]'; column :i5, 'integer[]'}
     @ds.insert(Sequel.pg_array([1, 2, 3]), Sequel.pg_array([2, 1]), Sequel.pg_array([4, 4]), Sequel.pg_array([[5, 5], [4, 3]]), Sequel.pg_array([1, nil, 5]))
 
-    @ds.get(Sequel.pg_array(:i) > :i3).should be_false
-    @ds.get(Sequel.pg_array(:i3) > :i).should be_true
+    @ds.get(Sequel.pg_array(:i) > :i3).should == false
+    @ds.get(Sequel.pg_array(:i3) > :i).should == true
 
-    @ds.get(Sequel.pg_array(:i) >= :i3).should be_false
-    @ds.get(Sequel.pg_array(:i) >= :i).should be_true
+    @ds.get(Sequel.pg_array(:i) >= :i3).should == false
+    @ds.get(Sequel.pg_array(:i) >= :i).should == true
 
-    @ds.get(Sequel.pg_array(:i3) < :i).should be_false
-    @ds.get(Sequel.pg_array(:i) < :i3).should be_true
+    @ds.get(Sequel.pg_array(:i3) < :i).should == false
+    @ds.get(Sequel.pg_array(:i) < :i3).should == true
 
-    @ds.get(Sequel.pg_array(:i3) <= :i).should be_false
-    @ds.get(Sequel.pg_array(:i) <= :i).should be_true
+    @ds.get(Sequel.pg_array(:i3) <= :i).should == false
+    @ds.get(Sequel.pg_array(:i) <= :i).should == true
 
-    @ds.get(Sequel.expr(5=>Sequel.pg_array(:i).any)).should be_false
-    @ds.get(Sequel.expr(1=>Sequel.pg_array(:i).any)).should be_true
+    @ds.get(Sequel.expr(5=>Sequel.pg_array(:i).any)).should == false
+    @ds.get(Sequel.expr(1=>Sequel.pg_array(:i).any)).should == true
 
-    @ds.get(Sequel.expr(1=>Sequel.pg_array(:i3).all)).should be_false
-    @ds.get(Sequel.expr(4=>Sequel.pg_array(:i3).all)).should be_true
+    @ds.get(Sequel.expr(1=>Sequel.pg_array(:i3).all)).should == false
+    @ds.get(Sequel.expr(4=>Sequel.pg_array(:i3).all)).should == true
 
-    @ds.get(Sequel.expr(1=>Sequel.pg_array(:i)[1..1].any)).should be_true
-    @ds.get(Sequel.expr(2=>Sequel.pg_array(:i)[1..1].any)).should be_false
+    @ds.get(Sequel.expr(1=>Sequel.pg_array(:i)[1..1].any)).should == true
+    @ds.get(Sequel.expr(2=>Sequel.pg_array(:i)[1..1].any)).should == false
 
     @ds.get(Sequel.pg_array(:i2)[1]).should == 2
     @ds.get(Sequel.pg_array(:i2)[1]).should == 2
@@ -2090,14 +2237,14 @@ describe 'PostgreSQL array handling' do
     @ds.get(Sequel.pg_array(:i4)[2][1]).should == 4
     @ds.get(Sequel.pg_array(:i4)[2][2]).should == 3
 
-    @ds.get(Sequel.pg_array(:i).contains(:i2)).should be_true
-    @ds.get(Sequel.pg_array(:i).contains(:i3)).should be_false
+    @ds.get(Sequel.pg_array(:i).contains(:i2)).should == true
+    @ds.get(Sequel.pg_array(:i).contains(:i3)).should == false
 
-    @ds.get(Sequel.pg_array(:i2).contained_by(:i)).should be_true
-    @ds.get(Sequel.pg_array(:i).contained_by(:i2)).should be_false
+    @ds.get(Sequel.pg_array(:i2).contained_by(:i)).should == true
+    @ds.get(Sequel.pg_array(:i).contained_by(:i2)).should == false
 
-    @ds.get(Sequel.pg_array(:i).overlaps(:i2)).should be_true
-    @ds.get(Sequel.pg_array(:i2).overlaps(:i3)).should be_false
+    @ds.get(Sequel.pg_array(:i).overlaps(:i2)).should == true
+    @ds.get(Sequel.pg_array(:i2).overlaps(:i3)).should == false
 
     @ds.get(Sequel.pg_array(:i).dims).should == '[1:3]'
     @ds.get(Sequel.pg_array(:i).length).should == 3
@@ -2113,8 +2260,15 @@ describe 'PostgreSQL array handling' do
     end
     if @db.server_version >= 90300
       @ds.get(Sequel.pg_array(:i5).remove(1).length).should == 2
-      @ds.get(Sequel.pg_array(:i5).replace(1, 4).contains([1])).should be_false
-      @ds.get(Sequel.pg_array(:i5).replace(1, 4).contains([4])).should be_true
+      @ds.get(Sequel.pg_array(:i5).replace(1, 4).contains([1])).should == false
+      @ds.get(Sequel.pg_array(:i5).replace(1, 4).contains([4])).should == true
+    end
+    if @db.server_version >= 90400
+      @ds.get(Sequel.pg_array(:i).cardinality).should == 3
+      @ds.get(Sequel.pg_array(:i4).cardinality).should == 4
+      @ds.get(Sequel.pg_array(:i5).cardinality).should == 3
+
+      @ds.from{Sequel.pg_array([1,2,3]).op.unnest([4,5,6], [7,8]).as(:t1, [:a, :b, :c])}.select_order_map([:a, :b, :c]).should == [[1, 4, 7], [2, 5, 8], [3, 6, nil]]
     end
 
     if @native
@@ -2137,7 +2291,7 @@ describe 'PostgreSQL hstore handling' do
     @db.extension :pg_array, :pg_hstore
     @ds = @db[:items]
     @h = {'a'=>'b', 'c'=>nil, 'd'=>'NULL', 'e'=>'\\\\" \\\' ,=>'}
-    @native = DB.adapter_scheme == :postgres
+    @native = DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
   end
   after do
     @db.drop_table?(:items)
@@ -2152,6 +2306,7 @@ describe 'PostgreSQL hstore handling' do
     if @native
       rs = @ds.all
       v = rs.first[:h]
+      v.should == @h
       v.should_not be_a_kind_of(Hash)
       v.to_hash.should be_a_kind_of(Hash)
       v.to_hash.should == @h
@@ -2387,139 +2542,172 @@ describe 'PostgreSQL json type' do
     @ds = @db[:items]
     @a = [1, 2, {'a'=>'b'}, 3.0]
     @h = {'a'=>'b', '1'=>[3, 4, 5]}
-    @native = DB.adapter_scheme == :postgres
+    @native = DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
   end
   after do
     @db.drop_table?(:items)
   end
 
-  specify 'insert and retrieve json values' do
-    @db.create_table!(:items){json :j}
-    @ds.insert(Sequel.pg_json(@h))
-    @ds.count.should == 1
-    if @native
-      rs = @ds.all
-      v = rs.first[:j]
-      v.should_not be_a_kind_of(Hash)
-      v.to_hash.should be_a_kind_of(Hash)
-      v.should == @h
-      v.to_hash.should == @h
+  json_types = [:json]
+  json_types << :jsonb if DB.server_version >= 90400
+  json_types.each do |json_type|
+    json_array_type = "#{json_type}[]"
+    pg_json = lambda{|v| Sequel.send(:"pg_#{json_type}", v)}
+
+    specify 'insert and retrieve json values' do
+      @db.create_table!(:items){column :j, json_type}
+      @ds.insert(pg_json.call(@h))
+      @ds.count.should == 1
+      if @native
+        rs = @ds.all
+        v = rs.first[:j]
+        v.should_not be_a_kind_of(Hash)
+        v.to_hash.should be_a_kind_of(Hash)
+        v.should == @h
+        v.to_hash.should == @h
+        @ds.delete
+        @ds.insert(rs.first)
+        @ds.all.should == rs
+      end
+
       @ds.delete
-      @ds.insert(rs.first)
-      @ds.all.should == rs
+      @ds.insert(pg_json.call(@a))
+      @ds.count.should == 1
+      if @native
+        rs = @ds.all
+        v = rs.first[:j]
+        v.should_not be_a_kind_of(Array)
+        v.to_a.should be_a_kind_of(Array)
+        v.should == @a
+        v.to_a.should == @a
+        @ds.delete
+        @ds.insert(rs.first)
+        @ds.all.should == rs
+      end
     end
 
-    @ds.delete
-    @ds.insert(Sequel.pg_json(@a))
-    @ds.count.should == 1
-    if @native
-      rs = @ds.all
-      v = rs.first[:j]
-      v.should_not be_a_kind_of(Array)
-      v.to_a.should be_a_kind_of(Array)
-      v.should == @a
-      v.to_a.should == @a
-      @ds.delete
-      @ds.insert(rs.first)
-      @ds.all.should == rs
+    specify 'insert and retrieve json[] values' do
+      @db.create_table!(:items){column :j, json_array_type}
+      j = Sequel.pg_array([pg_json.call('a'=>1), pg_json.call(['b', 2])])
+      @ds.insert(j)
+      @ds.count.should == 1
+      if @native
+        rs = @ds.all
+        v = rs.first[:j]
+        v.should_not be_a_kind_of(Array)
+        v.to_a.should be_a_kind_of(Array)
+        v.should == j
+        v.to_a.should == j
+        @ds.delete
+        @ds.insert(rs.first)
+        @ds.all.should == rs
+      end
     end
-  end
 
-  specify 'insert and retrieve json[] values' do
-    @db.create_table!(:items){column :j, 'json[]'}
-    j = Sequel.pg_array([Sequel.pg_json('a'=>1), Sequel.pg_json(['b', 2])])
-    @ds.insert(j)
-    @ds.count.should == 1
-    if @native
-      rs = @ds.all
-      v = rs.first[:j]
-      v.should_not be_a_kind_of(Array)
-      v.to_a.should be_a_kind_of(Array)
-      v.should == j
-      v.to_a.should == j
-      @ds.delete
-      @ds.insert(rs.first)
-      @ds.all.should == rs
+    specify 'with models' do
+      @db.create_table!(:items) do
+        primary_key :id
+        column :h, json_type
+      end
+      c = Class.new(Sequel::Model(@db[:items]))
+      c.plugin :pg_typecast_on_load, :h unless @native
+      c.create(:h=>pg_json.call(@h)).h.should == @h
+      c.create(:h=>pg_json.call(@a)).h.should == @a
     end
-  end
 
-  specify 'use json in bound variables' do
-    @db.create_table!(:items){json :i}
-    @ds.call(:insert, {:i=>Sequel.pg_json(@h)}, {:i=>:$i})
-    @ds.get(:i).should == @h
-    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:first, :i=>Sequel.pg_json(@h)).should == {:i=>@h}
-    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:first, :i=>Sequel.pg_json({})).should == nil
-    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:delete, :i=>Sequel.pg_json(@h)).should == 1
+    specify 'use json in bound variables' do
+      @db.create_table!(:items){column :i, json_type}
+      @ds.call(:insert, {:i=>pg_json.call(@h)}, {:i=>:$i})
+      @ds.get(:i).should == @h
 
-    @ds.call(:insert, {:i=>Sequel.pg_json(@a)}, {:i=>:$i})
-    @ds.get(:i).should == @a
-    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:first, :i=>Sequel.pg_json(@a)).should == {:i=>@a}
-    @ds.filter(Sequel.cast(:i, String)=>:$i).call(:first, :i=>Sequel.pg_json([])).should == nil
+      @ds.delete
+      @ds.call(:insert, {:i=>pg_json.call('a'=>nil)}, {:i=>:$i})
+      @ds.get(:i).should == pg_json.call('a'=>nil)
 
-    @ds.delete
-    @ds.call(:insert, {:i=>Sequel.pg_json('a'=>nil)}, {:i=>:$i})
-    @ds.get(:i).should == Sequel.pg_json('a'=>nil)
-
-    @db.create_table!(:items){column :i, 'json[]'}
-    j = Sequel.pg_array([Sequel.pg_json('a'=>1), Sequel.pg_json(['b', 2])], :text)
-    @ds.call(:insert, {:i=>j}, {:i=>:$i})
-    @ds.get(:i).should == j
-    @ds.filter(Sequel.cast(:i, 'text[]')=>:$i).call(:first, :i=>j).should == {:i=>j}
-    @ds.filter(Sequel.cast(:i, 'text[]')=>:$i).call(:first, :i=>Sequel.pg_array([])).should == nil
-  end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG
+      @db.create_table!(:items){column :i, json_array_type}
+      j = Sequel.pg_array([pg_json.call('a'=>1), pg_json.call(['b', 2])], json_type)
+      @ds.call(:insert, {:i=>j}, {:i=>:$i})
+      @ds.get(:i).should == j
+    end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG
 
-  specify 'with models' do
-    @db.create_table!(:items) do
-      primary_key :id
-      json :h
-    end
-    c = Class.new(Sequel::Model(@db[:items]))
-    c.plugin :pg_typecast_on_load, :h unless @native
-    c.create(:h=>Sequel.pg_json(@h)).h.should == @h
-    c.create(:h=>Sequel.pg_json(@a)).h.should == @a
-  end
-
-  specify 'operations/functions with pg_json_ops' do
-    Sequel.extension :pg_json_ops
-    jo = Sequel.pg_json('a'=>1, 'b'=>{'c'=>2, 'd'=>{'e'=>3}}).op
-    ja = Sequel.pg_json([2, 3, %w'a b']).op
-
-    @db.get(jo['a']).should == 1
-    @db.get(jo['b']['c']).should == 2
-    @db.get(jo[%w'b c']).should == 2
-    @db.get(jo['b'].get_text(%w'd e')).should == "3"
-    @db.get(jo[%w'b d'].get_text('e')).should == "3"
-    @db.get(ja[1]).should == 3
-    @db.get(ja[%w'2 1']).should == 'b'
-
-    @db.get(jo.extract('a')).should == 1
-    @db.get(jo.extract('b').extract('c')).should == 2
-    @db.get(jo.extract('b', 'c')).should == 2
-    @db.get(jo.extract('b', 'd', 'e')).should == 3
-    @db.get(jo.extract_text('b', 'd')).should == '{"e":3}'
-    @db.get(jo.extract_text('b', 'd', 'e')).should == '3'
-
-    @db.get(ja.array_length).should == 3
-    @db.from(ja.array_elements.as(:v)).select_map(:v).should == [2, 3, %w'a b']
-
-    @db.from(jo.keys.as(:k)).select_order_map(:k).should == %w'a b'
-    @db.from(jo.each).select_order_map(:key).should == %w'a b'
-    @db.from(jo.each).order(:key).select_map(:value).should == [1, {'c'=>2, 'd'=>{'e'=>3}}]
-    @db.from(jo.each_text).select_order_map(:key).should == %w'a b'
-    @db.from(jo.each_text).order(:key).where(:key=>'b').get(:value).should =~ /\{"d":\{"e":3\},"c":2\}|\{"c":2,"d":\{"e":3\}\}/
-
-    Sequel.extension :pg_row_ops
-    @db.create_table!(:items) do
-      Integer :a
-      String :b
-    end
-    j = Sequel.pg_json('a'=>1, 'b'=>'c').op
-    @db.get(j.populate(Sequel.cast(nil, :items)).pg_row[:a]).should == 1
-    @db.get(j.populate(Sequel.cast(nil, :items)).pg_row[:b]).should == 'c'
-    j = Sequel.pg_json([{'a'=>1, 'b'=>'c'}, {'a'=>2, 'b'=>'d'}]).op
-    @db.from(j.populate_set(Sequel.cast(nil, :items))).select_order_map(:a).should == [1, 2]
-    @db.from(j.populate_set(Sequel.cast(nil, :items))).select_order_map(:b).should == %w'c d'
-  end if DB.server_version >= 90300 && DB.adapter_scheme == :postgres
+    specify 'operations/functions with pg_json_ops' do
+      Sequel.extension :pg_json_ops
+      jo = pg_json.call('a'=>1, 'b'=>{'c'=>2, 'd'=>{'e'=>3}}).op
+      ja = pg_json.call([2, 3, %w'a b']).op
+
+      @db.get(jo['a']).should == 1
+      @db.get(jo['b']['c']).should == 2
+      @db.get(jo[%w'b c']).should == 2
+      @db.get(jo['b'].get_text(%w'd e')).should == "3"
+      @db.get(jo[%w'b d'].get_text('e')).should == "3"
+      @db.get(ja[1]).should == 3
+      @db.get(ja[%w'2 1']).should == 'b'
+
+      @db.get(jo.extract('a')).should == 1
+      @db.get(jo.extract('b').extract('c')).should == 2
+      @db.get(jo.extract('b', 'c')).should == 2
+      @db.get(jo.extract('b', 'd', 'e')).should == 3
+      @db.get(jo.extract_text('b', 'd')).gsub(' ', '').should == '{"e":3}'
+      @db.get(jo.extract_text('b', 'd', 'e')).should == '3'
+
+      @db.get(ja.array_length).should == 3
+      @db.from(ja.array_elements.as(:v)).select_map(:v).should == [2, 3, %w'a b']
+
+      if DB.server_version >= 90400 
+        @db.get(jo.typeof).should == 'object'
+        @db.get(ja.typeof).should == 'array'
+        @db.from(ja.array_elements_text.as(:v)).select_map(:v).map{|s| s.gsub(' ', '')}.should == ['2', '3', '["a","b"]']
+        @db.from(jo.to_record(true).as(:v, [Sequel.lit('a integer'), Sequel.lit('b text')])).select_map(:a).should == [1]
+        @db.from(pg_json.call([{'a'=>1, 'b'=>1}]).op.to_recordset.as(:v, [Sequel.lit('a integer'), Sequel.lit('b integer')])).select_map(:a).should == [1]
+
+        if json_type == :jsonb
+          @db.get(jo.has_key?('a')).should == true
+          @db.get(jo.has_key?('c')).should == false
+          @db.get(pg_json.call(['2', '3', %w'a b']).op.include?('2')).should == true
+          @db.get(pg_json.call(['2', '3', %w'a b']).op.include?('4')).should == false
+
+          @db.get(jo.contain_all(['a', 'b'])).should == true
+          @db.get(jo.contain_all(['a', 'c'])).should == false
+          @db.get(jo.contain_all(['d', 'c'])).should == false
+          @db.get(jo.contain_any(['a', 'b'])).should == true
+          @db.get(jo.contain_any(['a', 'c'])).should == true
+          @db.get(jo.contain_any(['d', 'c'])).should == false
+
+          @db.get(jo.contains(jo)).should == true
+          @db.get(jo.contained_by(jo)).should == true
+          @db.get(jo.contains('a'=>1)).should == true
+          @db.get(jo.contained_by('a'=>1)).should == false
+          @db.get(pg_json.call('a'=>1).op.contains(jo)).should == false
+          @db.get(pg_json.call('a'=>1).op.contained_by(jo)).should == true
+
+          @db.get(ja.contains(ja)).should == true
+          @db.get(ja.contained_by(ja)).should == true
+          @db.get(ja.contains([2,3])).should == true
+          @db.get(ja.contained_by([2,3])).should == false
+          @db.get(pg_json.call([2,3]).op.contains(ja)).should == false
+          @db.get(pg_json.call([2,3]).op.contained_by(ja)).should == true
+        end
+      end
+
+      @db.from(jo.keys.as(:k)).select_order_map(:k).should == %w'a b'
+      @db.from(jo.each).select_order_map(:key).should == %w'a b'
+      @db.from(jo.each).order(:key).select_map(:value).should == [1, {'c'=>2, 'd'=>{'e'=>3}}]
+      @db.from(jo.each_text).select_order_map(:key).should == %w'a b'
+      @db.from(jo.each_text).order(:key).where(:key=>'b').get(:value).gsub(' ', '').should =~ /\{"d":\{"e":3\},"c":2\}|\{"c":2,"d":\{"e":3\}\}/
+
+      Sequel.extension :pg_row_ops
+      @db.create_table!(:items) do
+        Integer :a
+        String :b
+      end
+      j = Sequel.pg_json('a'=>1, 'b'=>'c').op
+      @db.get(j.populate(Sequel.cast(nil, :items)).pg_row[:a]).should == 1
+      @db.get(j.populate(Sequel.cast(nil, :items)).pg_row[:b]).should == 'c'
+      j = Sequel.pg_json([{'a'=>1, 'b'=>'c'}, {'a'=>2, 'b'=>'d'}]).op
+      @db.from(j.populate_set(Sequel.cast(nil, :items))).select_order_map(:a).should == [1, 2]
+      @db.from(j.populate_set(Sequel.cast(nil, :items))).select_order_map(:b).should == %w'c d'
+    end if DB.server_version >= 90300 && (DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc)
+  end
 end if DB.server_version >= 90200
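
The json/jsonb operator expectations above correspond directly to the pg_json_ops DSL; a minimal sketch of that usage, assuming a PostgreSQL 9.4+ database at a hypothetical URL:

    require 'sequel'

    DB = Sequel.connect('postgres://localhost/testdb')  # assumed connection URL
    DB.extension :pg_json
    Sequel.extension :pg_json_ops

    jo = Sequel.pg_json('a' => 1, 'b' => {'c' => 2}).op
    DB.get(jo['a'])                # => 1         (-> operator)
    DB.get(jo['b'].get_text('c'))  # => "2"       (->> operator)
    DB.get(jo.extract('b', 'c'))   # => 2         (json_extract_path)
    DB.get(jo.typeof)              # => "object"  (json_typeof, 9.4+)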
 
 describe 'PostgreSQL inet/cidr types' do
@@ -2539,7 +2727,7 @@ describe 'PostgreSQL inet/cidr types' do
       @ipv6 = IPAddr.new(@v6)
       @ipv6nm = IPAddr.new(@v6nm)
     end
-    @native = DB.adapter_scheme == :postgres
+    @native = DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
   end
   after do
     @db.drop_table?(:items)
@@ -2584,7 +2772,7 @@ describe 'PostgreSQL inet/cidr types' do
     @ds.count.should == 1
     if @native
       rs = @ds.all
-      rs.first.values.all?{|c| c.is_a?(Sequel::Postgres::PGArray)}.should be_true
+      rs.first.values.all?{|c| c.is_a?(Sequel::Postgres::PGArray)}.should == true
       rs.first[:i].first.should == @ipv4
       rs.first[:c].first.should == @ipv4nm
       rs.first[:m].first.should == '12:34:56:78:90:ab'
@@ -2650,7 +2838,7 @@ describe 'PostgreSQL range types' do
     @r.each{|k, v| @ra[k] = Sequel.pg_array([v], @map[k])}
     @r.each{|k, v| @pgr[k] = Sequel.pg_range(v)}
     @r.each{|k, v| @pgra[k] = Sequel.pg_array([Sequel.pg_range(v)], @map[k])}
-    @native = DB.adapter_scheme == :postgres
+    @native = DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
   end
   after do
     @db.drop_table?(:items)
@@ -2732,83 +2920,81 @@ describe 'PostgreSQL range types' do
     v.delete(:id)
     v.should == @r
 
-    unless @db.adapter_scheme == :jdbc
-      @db.create_table!(:items){primary_key :id; column :i4, 'int4range[]'; column :i8, 'int8range[]'; column :n, 'numrange[]'; column :d, 'daterange[]'; column :t, 'tsrange[]'; column :tz, 'tstzrange[]'}
-      c = Class.new(Sequel::Model(@db[:items]))
-      c.plugin :pg_typecast_on_load, :i4, :i8, :n, :d, :t, :tz unless @native
-      v = c.create(@ra).values
-      v.delete(:id)
-      v.each{|k,v1| v1.should == @ra[k].to_a}
-    end
+    @db.create_table!(:items){primary_key :id; column :i4, 'int4range[]'; column :i8, 'int8range[]'; column :n, 'numrange[]'; column :d, 'daterange[]'; column :t, 'tsrange[]'; column :tz, 'tstzrange[]'}
+    c = Class.new(Sequel::Model(@db[:items]))
+    c.plugin :pg_typecast_on_load, :i4, :i8, :n, :d, :t, :tz unless @native
+    v = c.create(@ra).values
+    v.delete(:id)
+    v.each{|k,v1| v1.should == @ra[k].to_a}
   end
 
   specify 'operations/functions with pg_range_ops' do
     Sequel.extension :pg_range_ops
 
-    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(2..4)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(3..6)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(0..6)).should be_false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(2..4)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(3..6)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.contains(0..6)).should == false
 
-    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(0..6)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(3..6)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(2..4)).should be_false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(0..6)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(3..6)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.contained_by(2..4)).should == false
 
-    @db.get(Sequel.pg_range(1..5, :int4range).op.overlaps(5..6)).should be_true
-    @db.get(Sequel.pg_range(1...5, :int4range).op.overlaps(5..6)).should be_false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.overlaps(5..6)).should == true
+    @db.get(Sequel.pg_range(1...5, :int4range).op.overlaps(5..6)).should == false
     
-    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(6..10)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(5..10)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(-1..0)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(-1..3)).should be_false
-
-    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(6..10)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(5..10)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(-1..0)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(-1..3)).should be_false
-
-    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(6..10)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(5..10)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..0)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..3)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..7)).should be_true
-
-    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(6..10)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(5..10)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(3..10)).should be_false
-    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..10)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..0)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..3)).should be_true
-    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-5..-1)).should be_true
-
-    @db.get(Sequel.pg_range(1..5, :int4range).op.adjacent_to(6..10)).should be_true
-    @db.get(Sequel.pg_range(1...5, :int4range).op.adjacent_to(6..10)).should be_false
-
-    @db.get((Sequel.pg_range(1..5, :int4range).op + (6..10)).adjacent_to(6..10)).should be_false
-    @db.get((Sequel.pg_range(1..5, :int4range).op + (6..10)).adjacent_to(11..20)).should be_true
-
-    @db.get((Sequel.pg_range(1..5, :int4range).op * (2..6)).adjacent_to(6..10)).should be_true
-    @db.get((Sequel.pg_range(1..4, :int4range).op * (2..6)).adjacent_to(6..10)).should be_false
-
-    @db.get((Sequel.pg_range(1..5, :int4range).op - (2..6)).adjacent_to(2..10)).should be_true
-    @db.get((Sequel.pg_range(0..4, :int4range).op - (3..6)).adjacent_to(4..10)).should be_false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(6..10)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(5..10)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(-1..0)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.left_of(-1..3)).should == false
+
+    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(6..10)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(5..10)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(-1..0)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.right_of(-1..3)).should == false
+
+    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(6..10)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(5..10)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..0)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..3)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.ends_before(-1..7)).should == true
+
+    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(6..10)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(5..10)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(3..10)).should == false
+    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..10)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..0)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-1..3)).should == true
+    @db.get(Sequel.pg_range(1..5, :int4range).op.starts_after(-5..-1)).should == true
+
+    @db.get(Sequel.pg_range(1..5, :int4range).op.adjacent_to(6..10)).should == true
+    @db.get(Sequel.pg_range(1...5, :int4range).op.adjacent_to(6..10)).should == false
+
+    @db.get((Sequel.pg_range(1..5, :int4range).op + (6..10)).adjacent_to(6..10)).should == false
+    @db.get((Sequel.pg_range(1..5, :int4range).op + (6..10)).adjacent_to(11..20)).should == true
+
+    @db.get((Sequel.pg_range(1..5, :int4range).op * (2..6)).adjacent_to(6..10)).should == true
+    @db.get((Sequel.pg_range(1..4, :int4range).op * (2..6)).adjacent_to(6..10)).should == false
+
+    @db.get((Sequel.pg_range(1..5, :int4range).op - (2..6)).adjacent_to(2..10)).should == true
+    @db.get((Sequel.pg_range(0..4, :int4range).op - (3..6)).adjacent_to(4..10)).should == false
 
     @db.get(Sequel.pg_range(0..4, :int4range).op.lower).should == 0
     @db.get(Sequel.pg_range(0..4, :int4range).op.upper).should == 5
 
-    @db.get(Sequel.pg_range(0..4, :int4range).op.isempty).should be_false
-    @db.get(Sequel::Postgres::PGRange.empty(:int4range).op.isempty).should be_true
+    @db.get(Sequel.pg_range(0..4, :int4range).op.isempty).should == false
+    @db.get(Sequel::Postgres::PGRange.empty(:int4range).op.isempty).should == true
 
-    @db.get(Sequel.pg_range(1..5, :numrange).op.lower_inc).should be_true
-    @db.get(Sequel::Postgres::PGRange.new(1, 5, :exclude_begin=>true, :db_type=>:numrange).op.lower_inc).should be_false
+    @db.get(Sequel.pg_range(1..5, :numrange).op.lower_inc).should == true
+    @db.get(Sequel::Postgres::PGRange.new(1, 5, :exclude_begin=>true, :db_type=>:numrange).op.lower_inc).should == false
 
-    @db.get(Sequel.pg_range(1..5, :numrange).op.upper_inc).should be_true
-    @db.get(Sequel.pg_range(1...5, :numrange).op.upper_inc).should be_false
+    @db.get(Sequel.pg_range(1..5, :numrange).op.upper_inc).should == true
+    @db.get(Sequel.pg_range(1...5, :numrange).op.upper_inc).should == false
 
-    @db.get(Sequel::Postgres::PGRange.new(1, 5, :db_type=>:int4range).op.lower_inf).should be_false
-    @db.get(Sequel::Postgres::PGRange.new(nil, 5, :db_type=>:int4range).op.lower_inf).should be_true
+    @db.get(Sequel::Postgres::PGRange.new(1, 5, :db_type=>:int4range).op.lower_inf).should == false
+    @db.get(Sequel::Postgres::PGRange.new(nil, 5, :db_type=>:int4range).op.lower_inf).should == true
 
-    @db.get(Sequel::Postgres::PGRange.new(1, 5, :db_type=>:int4range).op.upper_inf).should be_false
-    @db.get(Sequel::Postgres::PGRange.new(1, nil, :db_type=>:int4range).op.upper_inf).should be_true
+    @db.get(Sequel::Postgres::PGRange.new(1, 5, :db_type=>:int4range).op.upper_inf).should == false
+    @db.get(Sequel::Postgres::PGRange.new(1, nil, :db_type=>:int4range).op.upper_inf).should == true
   end
 end if DB.server_version >= 90200
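
The pg_range_ops expectations above translate to PostgreSQL range operators; a minimal sketch, assuming the same hypothetical connection:

    require 'sequel'

    DB = Sequel.connect('postgres://localhost/testdb')  # assumed connection URL
    DB.extension :pg_range
    Sequel.extension :pg_range_ops

    r = Sequel.pg_range(1..5, :int4range).op
    DB.get(r.contains(2..4))      # => true  (@> operator)
    DB.get(r.contained_by(0..6))  # => true  (<@ operator)
    DB.get(r.overlaps(5..6))      # => true  (&& operator)
    DB.get(r.adjacent_to(6..10))  # => true  (-|- operator)
    DB.get(r.lower)               # => 1
    DB.get(r.upper)               # => 6     (int4range upper bound is exclusive)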
 
@@ -2817,7 +3003,7 @@ describe 'PostgreSQL interval types' do
     @db = DB
     @db.extension :pg_array, :pg_interval
     @ds = @db[:items]
-    @native = DB.adapter_scheme == :postgres
+    @native = DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
   end
   after(:all) do
     Sequel::Postgres::PG_TYPES.delete(1186)
@@ -2829,7 +3015,8 @@ describe 'PostgreSQL interval types' do
   specify 'insert and retrieve interval values' do
     @db.create_table!(:items){interval :i}
     [
-      ['0', '00:00:00',  0, [[:seconds, 0]]],
+      ['0', '00:00:00',  0, []],
+      ['1', '00:00:01',  1, [[:seconds, 1]]],
       ['1 microsecond', '00:00:00.000001',  0.000001, [[:seconds, 0.000001]]],
       ['1 millisecond', '00:00:00.001',  0.001, [[:seconds, 0.001]]],
       ['1 second', '00:00:01', 1, [[:seconds, 1]]],
@@ -2851,9 +3038,9 @@ describe 'PostgreSQL interval types' do
       if @native
         @ds.get(Sequel.cast(:i, String)).should == outstr
         rs = @ds.all
-        rs.first[:i].is_a?(ActiveSupport::Duration).should be_true
+        rs.first[:i].is_a?(ActiveSupport::Duration).should == true
         rs.first[:i].should == ActiveSupport::Duration.new(value, parts)
-        rs.first[:i].parts.sort_by{|k,v| k.to_s}.should == parts.sort_by{|k,v| k.to_s}
+        rs.first[:i].parts.sort_by{|k,v| k.to_s}.reject{|k,v| v == 0}.should == parts.sort_by{|k,v| k.to_s}
         @ds.delete
         @ds.insert(rs.first)
         @ds.all.should == rs
@@ -2868,8 +3055,8 @@ describe 'PostgreSQL interval types' do
     @ds.count.should == 1
     if @native
       rs = @ds.all
-      rs.first[:i].is_a?(Sequel::Postgres::PGArray).should be_true
-      rs.first[:i].first.is_a?(ActiveSupport::Duration).should be_true
+      rs.first[:i].is_a?(Sequel::Postgres::PGArray).should == true
+      rs.first[:i].first.is_a?(ActiveSupport::Duration).should == true
       rs.first[:i].first.should == ActiveSupport::Duration.new(31557600 + 2*86400*30 + 3*86400*7 + 4*86400 + 5*3600 + 6*60 + 7, [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]])
       rs.first[:i].first.parts.sort_by{|k,v| k.to_s}.should == [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]].sort_by{|k,v| k.to_s}
       @ds.delete
@@ -2905,7 +3092,7 @@ describe 'PostgreSQL interval types' do
     c = Class.new(Sequel::Model(@db[:items]))
     c.plugin :pg_typecast_on_load, :i, :c unless @native
     v = c.create(:i=>'1 year 2 mons 25 days 05:06:07').i
-    v.is_a?(ActiveSupport::Duration).should be_true
+    v.is_a?(ActiveSupport::Duration).should == true
     v.should == ActiveSupport::Duration.new(31557600 + 2*86400*30 + 3*86400*7 + 4*86400 + 5*3600 + 6*60 + 7, [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]])
     v.parts.sort_by{|k,_| k.to_s}.should == [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]].sort_by{|k,_| k.to_s}
   end
@@ -2914,8 +3101,8 @@ end if (begin require 'active_support/duration'; require 'active_support/inflect
 describe 'PostgreSQL row-valued/composite types' do
   before(:all) do
     @db = DB
-    Sequel.extension :pg_array_ops, :pg_row_ops
     @db.extension :pg_array, :pg_row
+    Sequel.extension :pg_array_ops, :pg_row_ops
     @ds = @db[:person]
 
     @db.create_table!(:address) do
@@ -2935,7 +3122,7 @@ describe 'PostgreSQL row-valued/composite types' do
     @db.register_row_type(Sequel.qualify(:public, :person))
     @db.register_row_type(:public__company)
 
-    @native = DB.adapter_scheme == :postgres
+    @native = DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
   end
   after(:all) do
     @db.drop_table?(:company, :person, :address)
@@ -2980,7 +3167,7 @@ describe 'PostgreSQL row-valued/composite types' do
       @db.drop_table(:domain_check)
       @db << "DROP DOMAIN positive_integer"
     end
-  end if DB.adapter_scheme == :postgres
+  end if DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
 
   specify 'insert and retrieve arrays of row types' do
     @ds = @db[:company]
@@ -3170,12 +3357,10 @@ describe 'PostgreSQL row-valued/composite types' do
 
       Company.plugin :pg_typecast_on_load, :employees unless @native
       e = Person.new(:id=>1, :address=>a)
-      unless @db.adapter_scheme == :jdbc
-        o = Company.create(:id=>1, :employees=>[{:id=>1, :address=>{:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}}])
-        o.employees.should == [e]
-        o = Company.create(:id=>1, :employees=>[e])
-        o.employees.should == [e]
-      end
+      o = Company.create(:id=>1, :employees=>[{:id=>1, :address=>{:street=>'123 Sesame St', :city=>'Somewhere', :zip=>'12345'}}])
+      o.employees.should == [e]
+      o = Company.create(:id=>1, :employees=>[e])
+      o.employees.should == [e]
     end
   end
 end
diff --git a/spec/adapters/spec_helper.rb b/spec/adapters/spec_helper.rb
index 79a8c52..366c7f3 100644
--- a/spec/adapters/spec_helper.rb
+++ b/spec/adapters/spec_helper.rb
@@ -24,7 +24,9 @@ class Sequel::Database
   end
 end
 
-(defined?(RSpec) ? RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do
+require File.join(File.dirname(File.expand_path(__FILE__)), "../rspec_helper.rb")
+
+RSPEC_EXAMPLE_GROUP.class_eval do
   def log 
     begin
       DB.loggers << Logger.new(STDOUT)
diff --git a/spec/adapters/sqlanywhere_spec.rb b/spec/adapters/sqlanywhere_spec.rb
new file mode 100644
index 0000000..9fc7ae8
--- /dev/null
+++ b/spec/adapters/sqlanywhere_spec.rb
@@ -0,0 +1,170 @@
+SEQUEL_ADAPTER_TEST = :sqlanywhere
+
+require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')
+
+if DB.table_exists?(:test)
+  DB.drop_table(:test)
+end
+
+describe "Convert smallint to boolean" do
+  before do
+    @db = DB
+  end
+  after do
+    Sequel::SqlAnywhere.convert_smallint_to_bool = true
+    @db.convert_smallint_to_bool = true
+  end
+  
+  describe "Sequel::SqlAnywhere.convert_smallint_to_bool" do
+    before do
+      @db.create_table!(:booltest){column :b, 'smallint'; column :i, 'integer'}
+      @ds = @db[:booltest]
+    end
+    after do
+      @db.drop_table(:booltest)
+    end
+
+    specify "should consider smallint datatypes as boolean if set, but if not, as larger smallints" do
+      @db.create_table!(:booltest){column :b, 'smallint'; column :i, 'integer'}
+      @db.schema(:booltest, :reload=>true).first.last[:type].should == :boolean
+      @db.schema(:booltest, :reload=>true).first.last[:db_type].should match /smallint/i
+
+      Sequel::SqlAnywhere.convert_smallint_to_bool = false
+      @db2 = Sequel.connect(DB.url)
+      @db2.schema(:booltest, :reload=>true).first.last[:type].should == :integer
+      @db2.schema(:booltest, :reload=>true).first.last[:db_type].should match /smallint/i
+
+      @db.schema(:booltest, :reload=>true).first.last[:type].should == :boolean
+      @db.schema(:booltest, :reload=>true).first.last[:db_type].should match /smallint/i
+
+      @db2.disconnect
+    end
+
+    describe "datasets" do
+      specify "should return smallints as bools and integers as integers when set" do
+        @ds.delete
+        @ds << {:b=>true, :i=>10}
+        @ds.all.should == [{:b=>true, :i=>10}]
+        @ds.delete
+        @ds << {:b=>false, :i=>0}
+        @ds.all.should == [{:b=>false, :i=>0}]
+        @ds.delete
+        @ds << {:b=>true, :i=>1}
+        @ds.all.should == [{:b=>true, :i=>1}]
+      end
+
+      specify "should return all smallints as integers when unset" do
+        Sequel::SqlAnywhere.convert_smallint_to_bool = false
+        @db2 = Sequel.connect(DB.url)
+        @ds2 = @db2[:booltest]
+        @ds2.delete
+        @ds2 << {:b=>true, :i=>10}
+        @ds2.all.should == [{:b=>1, :i=>10}]
+        @ds2.delete
+        @ds2 << {:b=>false, :i=>0}
+        @ds2.all.should == [{:b=>0, :i=>0}]
+        
+        @ds2.delete
+        @ds2 << {:b=>1, :i=>10}
+        @ds2.all.should == [{:b=>1, :i=>10}]
+        @ds2.delete
+        @ds2 << {:b=>0, :i=>0}
+        @ds2.all.should == [{:b=>0, :i=>0}]
+
+        @db2.disconnect
+      end
+    end
+  end
+  
+  describe "Database#convert_smallint_to_bool" do
+    before do
+      @db.create_table!(:booltest){column :b, 'smallint'; column :i, 'integer'}
+    end
+    after do
+      @db.drop_table(:booltest)
+    end
+  
+    specify "should consider smallint datatypes as boolean if set, but not larger smallints" do
+      @db.schema(:booltest, :reload=>true).first.last[:type].should == :boolean
+      @db.schema(:booltest, :reload=>true).first.last[:db_type].should match /smallint/i
+      @db.convert_smallint_to_bool = false
+      @db.schema(:booltest, :reload=>true).first.last[:type].should == :integer
+      @db.schema(:booltest, :reload=>true).first.last[:db_type].should match /smallint/i
+    end
+  
+    describe "datasets" do
+      specify "should return smallints as bools and integers as integers when set" do
+        @ds = @db[:booltest]
+        @ds.delete
+        @ds << {:b=>true, :i=>10}
+        @ds.all.should == [{:b=>true, :i=>10}]
+        @ds.delete
+        @ds << {:b=>false, :i=>0}
+        @ds.all.should == [{:b=>false, :i=>0}]
+        @ds.delete
+        @ds << {:b=>true, :i=>1}
+        @ds.all.should == [{:b=>true, :i=>1}]
+      end
+  
+      specify "should return all smallints as integers when unset" do
+        @db2 = Sequel.connect(DB.url)
+        @db2.convert_smallint_to_bool = false
+        @ds2 = @db2[:booltest]
+        @ds2.delete
+        @ds2 << {:b=>true, :i=>10}
+        @ds2.all.should == [{:b=>1, :i=>10}]
+        @ds2.delete
+        @ds2 << {:b=>false, :i=>0}
+        @ds2.all.should == [{:b=>0, :i=>0}]
+      
+        @ds2.delete
+        @ds2 << {:b=>1, :i=>10}
+        @ds2.all.should == [{:b=>1, :i=>10}]
+        @ds2.delete
+        @ds2 << {:b=>0, :i=>0}
+        @ds2.all.should == [{:b=>0, :i=>0}]
+
+        @db2.disconnect
+      end
+    end
+  end
+
+  describe "Dataset#convert_smallint_to_bool" do
+    before do
+      @db.create_table!(:booltest){column :b, 'smallint'; column :i, 'integer'}
+      @ds = @db[:booltest]
+    end
+    after do
+      @db.drop_table(:booltest)
+    end
+    
+    specify "should return smallints as bools and integers as integers when set" do
+      @ds.delete
+      @ds << {:b=>true, :i=>10}
+      @ds.all.should == [{:b=>true, :i=>10}]
+      @ds.delete
+      @ds << {:b=>false, :i=>0}
+      @ds.all.should == [{:b=>false, :i=>0}]
+      @ds.delete
+      @ds << {:b=>true, :i=>1}
+      @ds.all.should == [{:b=>true, :i=>1}]
+    end
+
+    specify "should return all smallints as integers when unset" do
+      @ds.convert_smallint_to_bool = false
+      @ds.delete
+      @ds << {:b=>true, :i=>10}
+      @ds.all.should == [{:b=>1, :i=>10}]
+      @ds.delete
+      @ds << {:b=>false, :i=>0}
+      @ds.all.should == [{:b=>0, :i=>0}]
+    
+      @ds.delete
+      @ds << {:b=>1, :i=>10}
+      @ds.all.should == [{:b=>1, :i=>10}]
+      @ds.delete
+      @ds << {:b=>0, :i=>0}
+      @ds.all.should == [{:b=>0, :i=>0}]
+    end
+  end
+end
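
The specs above toggle smallint-to-boolean conversion at three levels; a sketch of the corresponding API, with the connection URL assumed:

    require 'sequel'

    DB = Sequel.connect('sqlanywhere://localhost/testdb')  # assumed connection URL

    Sequel::SqlAnywhere.convert_smallint_to_bool = false  # global default for new Databases
    DB.convert_smallint_to_bool = false                   # per-Database
    ds = DB[:booltest]
    ds.convert_smallint_to_bool = false                   # per-Dataset
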
diff --git a/spec/adapters/sqlite_spec.rb b/spec/adapters/sqlite_spec.rb
index a4c60b5..0359d41 100644
--- a/spec/adapters/sqlite_spec.rb
+++ b/spec/adapters/sqlite_spec.rb
@@ -640,4 +640,11 @@ describe "A SQLite database" do
     @db.transaction_mode.should == :immediate
     proc {@db.transaction(:mode => :invalid) {}}.should raise_error(Sequel::Error)
   end
+
+  specify "should keep unique constraints when copying tables" do
+    @db.alter_table(:test2){add_unique_constraint :name}
+    @db.alter_table(:test2){drop_column :value}
+    @db[:test2].insert(:name=>'a')
+    proc{@db[:test2].insert(:name=>'a')}.should raise_error(Sequel::ConstraintViolation)
+  end
 end
diff --git a/spec/bin_spec.rb b/spec/bin_spec.rb
index 03c31f6..7a2c1d1 100644
--- a/spec/bin_spec.rb
+++ b/spec/bin_spec.rb
@@ -26,6 +26,8 @@ DB2 = Sequel.connect("#{CONN_PREFIX}#{BIN_SPEC_DB2}")
 File.delete(BIN_SPEC_DB) if File.file?(BIN_SPEC_DB)
 File.delete(BIN_SPEC_DB2) if File.file?(BIN_SPEC_DB2)
 
+require File.join(File.dirname(File.expand_path(__FILE__)), "rspec_helper.rb")
+
 describe "bin/sequel" do
   def bin(opts={})
     cmd = "#{opts[:pre]}\"#{RUBY}\" -I lib bin/sequel #{opts[:args]} #{"#{CONN_PREFIX}#{BIN_SPEC_DB}" unless opts[:no_conn]} #{opts[:post]}> #{OUTPUT}#{" 2>&1" if opts[:stderr]}"
@@ -138,7 +140,7 @@ END
   end
 
   it "-E should echo SQL statements to stdout" do
-    bin(:args=>'-E -c DB.tables').should =~ %r{I, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\]  INFO -- : \(\d\.\d+s\) PRAGMA foreign_keys = 1\nI, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\]  INFO -- : \(\d\.\d+s\) PRAGMA case_sensitive_like = 1\nI, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\]  INFO -- : \(\d\.\d+s\) SELECT \* FROM `sqlite_master` WHERE \(type = 'table' AND NOT name = 'sqlite_sequence'\)\n}
+    bin(:args=>'-E -c DB.tables').should =~ %r{SELECT \* FROM `sqlite_master` WHERE \(type = 'table' AND NOT name = 'sqlite_sequence'\)\n}
   end
 
   it "-I should include directory in load path" do
@@ -147,7 +149,7 @@ END
 
   it "-l should log SQL statements to file" do
     bin(:args=>"-l #{TMP_FILE} -c DB.tables").should == ''
-    File.read(TMP_FILE).should =~ %r{I, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\]  INFO -- : \(\d\.\d+s\) PRAGMA foreign_keys = 1\nI, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\]  INFO -- : \(\d\.\d+s\) PRAGMA case_sensitive_like = 1\nI, \[\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+ #\d+\]  INFO -- : \(\d\.\d+s\) SELECT \* FROM `sqlite_master` WHERE \(type = 'table' AND NOT name = 'sqlite_sequence'\)\n}
+    File.read(TMP_FILE).should =~ %r{SELECT \* FROM `sqlite_master` WHERE \(type = 'table' AND NOT name = 'sqlite_sequence'\)\n}
   end
 
   it "-L should load all *.rb files in given directory" do
diff --git a/spec/core/connection_pool_spec.rb b/spec/core/connection_pool_spec.rb
index 1f8ec2f..149ba5c 100644
--- a/spec/core/connection_pool_spec.rb
+++ b/spec/core/connection_pool_spec.rb
@@ -911,6 +911,12 @@ shared_examples_for "All connection pools classes" do
     x = nil
     @class.new(mock_db.call{123}, :after_connect=>proc{|c| x = [c, c]}).hold{}
     x.should == [123, 123]
+    @class.new(mock_db.call{123}, :after_connect=>lambda{|c| x = [c, c]}).hold{}
+    x.should == [123, 123]
+    @class.new(mock_db.call{123}, :after_connect=>proc{|c, s| x = [c, s]}).hold{}
+    x.should == [123, :default]
+    @class.new(mock_db.call{123}, :after_connect=>lambda{|c, s| x = [c, s]}).hold{}
+    x.should == [123, :default]
   end
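
The new expectations above cover the :after_connect hook's two arities: a one-argument callable receives only the connection, a two-argument callable receives the connection and the server/shard symbol. A sketch, with the connection URL assumed:

    require 'sequel'

    DB = Sequel.connect('postgres://localhost/testdb',  # assumed connection URL
      :after_connect => proc{|conn, shard| puts "connected to shard #{shard}"})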
   
   specify "should raise a DatabaseConnectionError if the connection raises an exception" do
diff --git a/spec/core/database_spec.rb b/spec/core/database_spec.rb
index 8468533..b827d3b 100644
--- a/spec/core/database_spec.rb
+++ b/spec/core/database_spec.rb
@@ -483,18 +483,18 @@ describe "Database#disconnect_connection" do
     o = Object.new
     def o.close() @closed=true end
     Sequel::Database.new.disconnect_connection(o)
-    o.instance_variable_get(:@closed).should be_true
+    o.instance_variable_get(:@closed).should == true
   end
 end
 
 describe "Database#valid_connection?" do
   specify "should issue a query to validate the connection" do
     db = Sequel.mock
-    db.synchronize{|c| db.valid_connection?(c)}.should be_true
+    db.synchronize{|c| db.valid_connection?(c)}.should == true
     db.synchronize do |c|
       def c.execute(*) raise Sequel::DatabaseError, "error" end
       db.valid_connection?(c)
-    end.should be_false
+    end.should == false
   end
 end
 
@@ -581,7 +581,7 @@ describe "Database#test_connection" do
   end
   
   specify "should return true if successful" do
-    @db.test_connection.should be_true
+    @db.test_connection.should == true
   end
 
   specify "should raise an error if the attempting to connect raises an error" do
@@ -593,10 +593,10 @@ end
 describe "Database#table_exists?" do
   specify "should test existence by selecting a row from the table's dataset" do
     db = Sequel.mock(:fetch=>[Sequel::Error, [], [{:a=>1}]])
-    db.table_exists?(:a).should be_false
+    db.table_exists?(:a).should == false
     db.sqls.should == ["SELECT NULL AS nil FROM a LIMIT 1"]
-    db.table_exists?(:b).should be_true
-    db.table_exists?(:c).should be_true
+    db.table_exists?(:b).should == true
+    db.table_exists?(:c).should == true
   end
 end
 
@@ -746,7 +746,7 @@ shared_examples_for "Database#transaction" do
   specify "should have in_transaction? return true if inside a transaction" do
     c = nil
     @db.transaction{c = @db.in_transaction?}
-    c.should be_true
+    c.should == true
   end
   
   specify "should have in_transaction? handle sharding correctly" do
@@ -757,7 +757,7 @@ shared_examples_for "Database#transaction" do
   end
   
   specify "should have in_transaction? return false if not in a transaction" do
-    @db.in_transaction?.should be_false
+    @db.in_transaction?.should == false
   end
   
   specify "should return nil if Sequel::Rollback is called in the transaction" do
@@ -825,7 +825,7 @@ shared_examples_for "Database#transaction" do
     @db.sqls.should == ['BEGIN', 'BEGIN -- test', 'DROP TABLE test;', 'COMMIT -- test', 'COMMIT']
   end
   
-  if (!defined?(RUBY_ENGINE) or RUBY_ENGINE == 'ruby' or RUBY_ENGINE == 'rbx') and RUBY_VERSION < '1.9'
+  if (!defined?(RUBY_ENGINE) or RUBY_ENGINE == 'ruby' or RUBY_ENGINE == 'rbx') and !RUBY_VERSION.start_with?('1.9')
     specify "should handle Thread#kill for transactions inside threads" do
       q = Queue.new
       q1 = Queue.new
@@ -927,7 +927,6 @@ describe "Database#transaction with savepoint support" do
   it_should_behave_like "Database#transaction"
 
   specify "should support after_commit inside savepoints" do
-    meta_def(@db, :supports_savepoints?){true}
     @db.transaction do
       @db.after_commit{@db.execute('foo')}
       @db.transaction(:savepoint=>true){@db.after_commit{@db.execute('bar')}}
@@ -937,7 +936,6 @@ describe "Database#transaction with savepoint support" do
   end
 
   specify "should support after_rollback inside savepoints" do
-    meta_def(@db, :supports_savepoints?){true}
     @db.transaction do
       @db.after_rollback{@db.execute('foo')}
       @db.transaction(:savepoint=>true){@db.after_rollback{@db.execute('bar')}}
@@ -948,14 +946,12 @@ describe "Database#transaction with savepoint support" do
   end
 
   specify "should raise an error if you attempt to use after_commit inside a savepoint in a prepared transaction" do
-    meta_def(@db, :supports_savepoints?){true}
     meta_def(@db, :supports_prepared_transactions?){true}
     proc{@db.transaction(:prepare=>'XYZ'){@db.transaction(:savepoint=>true){@db.after_commit{@db.execute('foo')}}}}.should raise_error(Sequel::Error)
     @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1','ROLLBACK TO SAVEPOINT autopoint_1', 'ROLLBACK']
   end
 
   specify "should raise an error if you attempt to use after_rollback inside a savepoint in a prepared transaction" do
-    meta_def(@db, :supports_savepoints?){true}
     meta_def(@db, :supports_prepared_transactions?){true}
     proc{@db.transaction(:prepare=>'XYZ'){@db.transaction(:savepoint=>true){@db.after_rollback{@db.execute('foo')}}}}.should raise_error(Sequel::Error)
     @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1','ROLLBACK TO SAVEPOINT autopoint_1', 'ROLLBACK']
@@ -1023,6 +1019,25 @@ describe "Database#transaction with savepoints" do
     @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'DROP TABLE test;', 'RELEASE SAVEPOINT autopoint_1', 'COMMIT']
   end
   
+  specify "should use savepoints if surrounding transaction uses :auto_savepoint option" do
+    @db.transaction(:auto_savepoint=>true){@db.transaction{@db.execute 'DROP TABLE test;'}}
+    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'DROP TABLE test;', 'RELEASE SAVEPOINT autopoint_1', 'COMMIT']
+
+    @db.transaction(:auto_savepoint=>true){@db.transaction{@db.transaction{@db.execute 'DROP TABLE test;'}}}
+    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'DROP TABLE test;', 'RELEASE SAVEPOINT autopoint_1', 'COMMIT']
+
+    @db.transaction(:auto_savepoint=>true){@db.transaction(:auto_savepoint=>true){@db.transaction{@db.execute 'DROP TABLE test;'}}}
+    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'SAVEPOINT autopoint_2', 'DROP TABLE test;', 'RELEASE SAVEPOINT autopoint_2', 'RELEASE SAVEPOINT autopoint_1', 'COMMIT']
+
+    @db.transaction{@db.transaction(:auto_savepoint=>true, :savepoint=>true){@db.transaction{@db.execute 'DROP TABLE test;'}}}
+    @db.sqls.should == ['BEGIN', 'SAVEPOINT autopoint_1', 'SAVEPOINT autopoint_2', 'DROP TABLE test;', 'RELEASE SAVEPOINT autopoint_2', 'RELEASE SAVEPOINT autopoint_1', 'COMMIT']
+  end
+
+  specify "should not use savepoints if surrounding transaction uses :auto_savepoint and current transaction uses :savepoint=>false option" do
+    @db.transaction(:auto_savepoint=>true){@db.transaction(:savepoint=>false){@db.execute 'DROP TABLE test;'}}
+    @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT']
+  end
+  
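The :auto_savepoint option tested above makes nested transaction calls use savepoints without each inner call passing :savepoint=>true, and :savepoint=>false opts a nested call back out. A sketch, assuming a DB handle for a database that supports savepoints and has an items table:

    DB.transaction(:auto_savepoint => true) do
      DB.transaction do              # wrapped in SAVEPOINT/RELEASE SAVEPOINT automatically
        DB[:items].insert(:a => 1)
      end
    end
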
   specify "should not use a savepoint if no transaction is in progress" do
     @db.transaction(:savepoint=>true){@db.execute 'DROP TABLE test;'}
     @db.sqls.should == ['BEGIN', 'DROP TABLE test;', 'COMMIT']
@@ -1741,16 +1756,16 @@ describe "Database#typecast_value" do
   end
 
   specify "should typecast boolean values to true, false, or nil" do
-    @db.typecast_value(:boolean, false).should be_false
-    @db.typecast_value(:boolean, 0).should be_false
-    @db.typecast_value(:boolean, "0").should be_false
-    @db.typecast_value(:boolean, 'f').should be_false
-    @db.typecast_value(:boolean, 'false').should be_false
-    @db.typecast_value(:boolean, true).should be_true
-    @db.typecast_value(:boolean, 1).should be_true
-    @db.typecast_value(:boolean, '1').should be_true
-    @db.typecast_value(:boolean, 't').should be_true
-    @db.typecast_value(:boolean, 'true').should be_true
+    @db.typecast_value(:boolean, false).should == false
+    @db.typecast_value(:boolean, 0).should == false
+    @db.typecast_value(:boolean, "0").should == false
+    @db.typecast_value(:boolean, 'f').should == false
+    @db.typecast_value(:boolean, 'false').should == false
+    @db.typecast_value(:boolean, true).should == true
+    @db.typecast_value(:boolean, 1).should == true
+    @db.typecast_value(:boolean, '1').should == true
+    @db.typecast_value(:boolean, 't').should == true
+    @db.typecast_value(:boolean, 'true').should == true
     @db.typecast_value(:boolean, '').should be_nil
   end
 
@@ -2169,6 +2184,18 @@ describe "Database#supports_savepoints?" do
   end
 end
 
+describe "Database#supports_views_with_check_option?" do
+  specify "should be false by default" do
+    Sequel::Database.new.supports_views_with_check_option?.should == false
+  end
+end
+
+describe "Database#supports_views_with_local_check_option?" do
+  specify "should be false by default" do
+    Sequel::Database.new.supports_views_with_local_check_option?.should == false
+  end
+end
+
 describe "Database#supports_savepoints_in_prepared_transactions?" do
   specify "should be false by default" do
     Sequel::Database.new.supports_savepoints_in_prepared_transactions?.should == false
@@ -2326,13 +2353,13 @@ describe "Database extensions" do
   specify "should be able to register an extension with a block and have Database#extension call the block" do
     @db.quote_identifiers = false
     Sequel::Database.register_extension(:foo){|db| db.quote_identifiers = true}
-    @db.extension(:foo).quote_identifiers?.should be_true
+    @db.extension(:foo).quote_identifiers?.should == true
   end
 
   specify "should be able to register an extension with a callable and Database#extension call the callable" do
     @db.quote_identifiers = false
     Sequel::Database.register_extension(:foo, proc{|db| db.quote_identifiers = true})
-    @db.extension(:foo).quote_identifiers?.should be_true
+    @db.extension(:foo).quote_identifiers?.should == true
   end
 
   specify "should be able to load multiple extensions in the same call" do
@@ -2341,7 +2368,7 @@ describe "Database extensions" do
     Sequel::Database.register_extension(:foo, proc{|db| db.quote_identifiers = true})
     Sequel::Database.register_extension(:bar, proc{|db| db.identifier_input_method = nil})
     @db.extension(:foo, :bar)
-    @db.quote_identifiers?.should be_true
+    @db.quote_identifiers?.should == true
     @db.identifier_input_method.should be_nil
   end
 
@@ -2434,3 +2461,22 @@ describe "Database#schema_type_class" do
     end
   end
 end
+
+describe "Database#execute_{dui,ddl,insert}" do
+  before do
+    @db = Sequel::Database.new
+    def @db.execute(sql, opts={})
+      (@sqls ||= []) << sql
+    end
+    def @db.sqls
+      @sqls
+    end
+  end
+
+  specify "should execute the SQL" do
+    @db.execute_dui "DELETE FROM table"
+    @db.execute_ddl "SET foo"
+    @db.execute_insert "INSERT INTO table DEFAULT VALUES"
+    @db.sqls.should == ["DELETE FROM table", "SET foo", "INSERT INTO table DEFAULT VALUES"]
+  end
+end
diff --git a/spec/core/dataset_spec.rb b/spec/core/dataset_spec.rb
index db14abb..0f18a02 100644
--- a/spec/core/dataset_spec.rb
+++ b/spec/core/dataset_spec.rb
@@ -39,7 +39,7 @@ describe "Dataset" do
     ds._fetch = {:x=>1}
     called = false
     ds.each{|a| called = true; a.should == {:x=>1}}
-    called.should be_true
+    called.should == true
   end
   
   specify "should get quote_identifiers default from database" do
@@ -870,6 +870,7 @@ describe "Dataset#as" do
   specify "should set up an alias" do
     dataset = Sequel.mock.dataset.from(:test)
     dataset.select(dataset.limit(1).select(:name).as(:n)).sql.should == 'SELECT (SELECT name FROM test LIMIT 1) AS n FROM test'
+    dataset.select(dataset.limit(1).select(:name).as(:n, [:nm])).sql.should == 'SELECT (SELECT name FROM test LIMIT 1) AS n(nm) FROM test'
   end
 end
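
As the new expectation above shows, #as accepts column aliases as a second argument; for example:

    require 'sequel'

    ds = Sequel.mock.dataset.from(:test)
    ds.select(ds.limit(1).select(:name).as(:n, [:nm])).sql
    # => "SELECT (SELECT name FROM test LIMIT 1) AS n(nm) FROM test"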
 
@@ -1148,6 +1149,11 @@ describe "Dataset#from" do
     @dataset.from(:a, @dataset.from(:b).lateral).select_sql.should == "SELECT * FROM a, LATERAL (SELECT * FROM b) AS t1"
   end
 
+  specify "should automatically use a default from table if no from table is present" do
+    def @dataset.empty_from_sql; ' FROM DEFFROM'; end
+    @dataset.select_sql.should == "SELECT * FROM DEFFROM"
+  end
+
   specify "should accept :schema__table___alias symbol format" do
     @dataset.from(:abc__def).select_sql.should == "SELECT * FROM abc.def"
     @dataset.from(:a_b__c).select_sql.should == "SELECT * FROM a_b.c"
@@ -1625,6 +1631,49 @@ describe "Dataset#limit" do
   end
 end
 
+describe "Dataset#offset" do
+  before do
+    @dataset = Sequel.mock.dataset.from(:test)
+  end
+
+  specify "should include an OFFSET clause in the select statement" do
+    @dataset.offset(10).sql.should == 'SELECT * FROM test OFFSET 10'
+  end
+
+  specify "should convert regular strings to integers" do
+    @dataset.offset('a() - 1').sql.should == 'SELECT * FROM test OFFSET 0'
+  end
+
+  specify "should raise an error if a negative offset is used" do
+    proc{@dataset.offset(-1)}.should raise_error(Sequel::Error)
+  end
+
+  specify "should be able to reset offset with nil values" do
+    @dataset.offset(6).offset(nil).sql.should == 'SELECT * FROM test'
+  end
+
+  specify "should not convert literal strings to integers" do
+    @dataset.offset(Sequel.lit('a() - 1')).sql.should == 'SELECT * FROM test OFFSET a() - 1'
+  end
+
+  specify "should not convert other objects" do
+    @dataset.offset(Sequel.function(:a) - 1).sql.should == 'SELECT * FROM test OFFSET (a() - 1)'
+  end
+
+  specify "should override offset given to limit" do
+    @dataset.limit(nil, 5).offset(6).sql.should == 'SELECT * FROM test OFFSET 6'
+  end
+
+  specify "should not be overridable by limit if limit is not given an offset" do
+    @dataset.offset(6).limit(nil).sql.should == 'SELECT * FROM test OFFSET 6'
+  end
+
+  specify "should be overridable by limit if limit is given an offset" do
+    @dataset.offset(6).limit(nil, nil).sql.should == 'SELECT * FROM test'
+    @dataset.offset(6).limit(nil, 5).sql.should == 'SELECT * FROM test OFFSET 5'
+  end
+end
+
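The new Dataset#offset covered above sets the OFFSET clause independently of #limit; for example:

    require 'sequel'

    ds = Sequel.mock.dataset.from(:test)
    ds.offset(10).sql              # => "SELECT * FROM test OFFSET 10"
    ds.limit(5).offset(10).sql     # => "SELECT * FROM test LIMIT 5 OFFSET 10"
    ds.offset(10).offset(nil).sql  # => "SELECT * FROM test"  (nil resets the offset)
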
 describe "Dataset#naked" do
   specify "should returned clone dataset without row_proc" do
     d = Sequel.mock.dataset
@@ -1998,6 +2047,10 @@ describe "Dataset#from_self" do
     @ds.from_self(:alias=>:some_name).sql.should == 'SELECT * FROM (SELECT name FROM test LIMIT 1) AS some_name'
   end
   
+  specify "should use the user-specified column aliases" do
+    @ds.from_self(:alias=>:some_name, :column_aliases=>[:c1, :c2]).sql.should == 'SELECT * FROM (SELECT name FROM test LIMIT 1) AS some_name(c1, c2)'
+  end
+  
   specify "should use the user-specified alias for joins" do
     @ds.from_self(:alias=>:some_name).inner_join(:posts, :alias=>:name).sql.should == \
       'SELECT * FROM (SELECT name FROM test LIMIT 1) AS some_name INNER JOIN posts ON (posts.alias = some_name.name)'
@@ -2106,6 +2159,10 @@ describe "Dataset#join_table" do
     @d.from('stats').join(Sequel.expr(:players).as(:p), {:id => :player_id}).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" AS "p" ON ("p"."id" = "stats"."player_id")'
   end
   
+  specify "should support aliased tables with an implicit column aliases" do
+    @d.from('stats').join(Sequel.expr(:players).as(:p, [:c1, :c2]), {:id => :player_id}).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" AS "p"("c1", "c2") ON ("p"."id" = "stats"."player_id")'
+  end
+  
   specify "should support using an alias for the FROM when doing the first join with unqualified condition columns" do
     @d.from(Sequel.as(:foo, :f)).join_table(:inner, :bar, :id => :bar_id).sql.should == 'SELECT * FROM "foo" AS "f" INNER JOIN "bar" ON ("bar"."id" = "f"."bar_id")'
   end
@@ -2210,7 +2267,9 @@ describe "Dataset#join_table" do
   specify "should hoist WITH clauses from subqueries if the dataset doesn't support CTEs in subselects" do
     meta_def(@d, :supports_cte?){true}
     meta_def(@d, :supports_cte_in_subselect?){false}
-    @d.join(Sequel.mock.dataset.from(:categories).with(:a, Sequel.mock.dataset.from(:b)), [:id]).sql.should == 'WITH "a" AS (SELECT * FROM b) SELECT * FROM "items" INNER JOIN (SELECT * FROM categories) AS "t1" USING ("id")'
+    ds = Sequel.mock.dataset.from(:categories)
+    meta_def(ds, :supports_cte?){true}
+    @d.join(ds.with(:a, Sequel.mock.dataset.from(:b)), [:id]).sql.should == 'WITH "a" AS (SELECT * FROM b) SELECT * FROM "items" INNER JOIN (SELECT * FROM categories) AS "t1" USING ("id")'
   end
 
   specify "should raise an error if using an array of symbols with a block" do
@@ -2844,14 +2903,36 @@ describe "Dataset#import" do
       'COMMIT']
   end
 
+  specify "should slice based on the default_import_slice option" do
+    def @ds.default_import_slice; 2 end
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]])
+    @db.sqls.should == ['BEGIN',
+      "INSERT INTO items (x, y) VALUES (1, 2)",
+      "INSERT INTO items (x, y) VALUES (3, 4)",
+      'COMMIT',
+      'BEGIN',
+      "INSERT INTO items (x, y) VALUES (5, 6)",
+      'COMMIT']
+
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :slice=>nil)
+    @db.sqls.should == ['BEGIN',
+      "INSERT INTO items (x, y) VALUES (1, 2)",
+      "INSERT INTO items (x, y) VALUES (3, 4)",
+      "INSERT INTO items (x, y) VALUES (5, 6)",
+      'COMMIT']
+  end
+
   specify "should accept a columns array and a values array with :commit_every option" do
-    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :commit_every => 3)
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :commit_every => 2)
     @db.sqls.should == ['BEGIN',
       "INSERT INTO items (x, y) VALUES (1, 2)",
       "INSERT INTO items (x, y) VALUES (3, 4)",
+      'COMMIT',
+      'BEGIN',
       "INSERT INTO items (x, y) VALUES (5, 6)",
       'COMMIT']
   end
+
   specify "should accept a columns array and a values array with :slice option" do
     @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :slice => 2)
     @db.sqls.should == ['BEGIN',
@@ -2862,6 +2943,55 @@ describe "Dataset#import" do
       "INSERT INTO items (x, y) VALUES (5, 6)",
       'COMMIT']
   end
+
+  specify "should use correct sql for :values strategy" do
+    def @ds.multi_insert_sql_strategy; :values end
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]])
+    @db.sqls.should == ['BEGIN',
+      "INSERT INTO items (x, y) VALUES (1, 2), (3, 4), (5, 6)",
+      'COMMIT']
+
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :slice=>2)
+    @db.sqls.should == ['BEGIN',
+      "INSERT INTO items (x, y) VALUES (1, 2), (3, 4)",
+      'COMMIT',
+      'BEGIN',
+      "INSERT INTO items (x, y) VALUES (5, 6)",
+      'COMMIT']
+  end
+
+  specify "should use correct sql for :union strategy" do
+    def @ds.multi_insert_sql_strategy; :union end
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]])
+    @db.sqls.should == ['BEGIN',
+      "INSERT INTO items (x, y) SELECT 1, 2 UNION ALL SELECT 3, 4 UNION ALL SELECT 5, 6",
+      'COMMIT']
+
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :slice=>2)
+    @db.sqls.should == ['BEGIN',
+      "INSERT INTO items (x, y) SELECT 1, 2 UNION ALL SELECT 3, 4",
+      'COMMIT',
+      'BEGIN',
+      "INSERT INTO items (x, y) SELECT 5, 6",
+      'COMMIT']
+  end
+
+  specify "should use correct sql for :union strategy when FROM is required" do
+    def @ds.empty_from_sql; ' FROM foo' end
+    def @ds.multi_insert_sql_strategy; :union end
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]])
+    @db.sqls.should == ['BEGIN',
+      "INSERT INTO items (x, y) SELECT 1, 2 FROM foo UNION ALL SELECT 3, 4 FROM foo UNION ALL SELECT 5, 6 FROM foo",
+      'COMMIT']
+
+    @ds.import([:x, :y], [[1, 2], [3, 4], [5, 6]], :slice=>2)
+    @db.sqls.should == ['BEGIN',
+      "INSERT INTO items (x, y) SELECT 1, 2 FROM foo UNION ALL SELECT 3, 4 FROM foo",
+      'COMMIT',
+      'BEGIN',
+      "INSERT INTO items (x, y) SELECT 5, 6 FROM foo",
+      'COMMIT']
+  end
 end
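
Dataset#import, exercised above, inserts rows in bulk and can split the inserts across transactions with :slice (or :commit_every); against a mock database:

    require 'sequel'

    db = Sequel.mock
    db[:items].import([:x, :y], [[1, 2], [3, 4], [5, 6]], :slice => 2)
    db.sqls
    # => ["BEGIN",
    #     "INSERT INTO items (x, y) VALUES (1, 2)",
    #     "INSERT INTO items (x, y) VALUES (3, 4)",
    #     "COMMIT",
    #     "BEGIN",
    #     "INSERT INTO items (x, y) VALUES (5, 6)",
    #     "COMMIT"]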
 
 describe "Dataset#multi_insert" do
@@ -3211,46 +3341,30 @@ describe "Dataset#grep" do
   end
 end
 
-describe "Dataset default #fetch_rows, #insert, #update, #delete, #with_sql_delete, #truncate, #execute" do
+describe "Dataset default #fetch_rows, #insert, #update, #delete, #truncate, #execute" do
   before do
-    @db = Sequel::Database.new
+    @db = Sequel.mock(:servers=>{:read_only=>{}}, :autoid=>1)
     @ds = @db[:items]
   end
 
   specify "#delete should execute delete SQL" do
-    @db.should_receive(:execute).once.with('DELETE FROM items', :server=>:default)
-    @ds.delete
-    @db.should_receive(:execute_dui).once.with('DELETE FROM items', :server=>:default)
-    @ds.delete
-  end
-
-  specify "#with_sql_delete should execute delete SQL" do
-    sql = 'DELETE FROM foo'
-    @db.should_receive(:execute).once.with(sql, :server=>:default)
-    @ds.with_sql_delete(sql)
-    @db.should_receive(:execute_dui).once.with(sql, :server=>:default)
-    @ds.with_sql_delete(sql)
+    @ds.delete.should == 0
+    @db.sqls.should == ["DELETE FROM items"]
   end
 
   specify "#insert should execute insert SQL" do
-    @db.should_receive(:execute).once.with('INSERT INTO items DEFAULT VALUES', :server=>:default)
-    @ds.insert([])
-    @db.should_receive(:execute_insert).once.with('INSERT INTO items DEFAULT VALUES', :server=>:default)
-    @ds.insert([])
+    @ds.insert([]).should == 1
+    @db.sqls.should == ["INSERT INTO items DEFAULT VALUES"]
   end
 
   specify "#update should execute update SQL" do
-    @db.should_receive(:execute).once.with('UPDATE items SET number = 1', :server=>:default)
-    @ds.update(:number=>1)
-    @db.should_receive(:execute_dui).once.with('UPDATE items SET number = 1', :server=>:default)
-    @ds.update(:number=>1)
+    @ds.update(:number=>1).should == 0
+    @db.sqls.should == ["UPDATE items SET number = 1"]
   end
   
   specify "#truncate should execute truncate SQL" do
-    @db.should_receive(:execute).once.with('TRUNCATE TABLE items', :server=>:default)
-    @ds.truncate.should == nil
-    @db.should_receive(:execute_ddl).once.with('TRUNCATE TABLE items', :server=>:default)
     @ds.truncate.should == nil
+    @db.sqls.should == ["TRUNCATE TABLE items"]
   end
   
   specify "#truncate should raise an InvalidOperation exception if the dataset is filtered" do
@@ -3259,8 +3373,71 @@ describe "Dataset default #fetch_rows, #insert, #update, #delete, #with_sql_dele
   end
   
   specify "#execute should execute the SQL on the database" do
-    @db.should_receive(:execute).once.with('SELECT 1', :server=>:read_only)
     @ds.send(:execute, 'SELECT 1')
+    @db.sqls.should == ["SELECT 1 -- read_only"]
+  end
+end
+
+describe "Dataset#with_sql_*" do
+  before do
+    @db = Sequel.mock(:servers=>{:read_only=>{}}, :autoid=>1, :fetch=>{:id=>1})
+    @ds = @db[:items]
+  end
+
+  specify "#with_sql_insert should execute given insert SQL" do
+    @ds.with_sql_insert('INSERT INTO foo (1)').should == 1
+    @db.sqls.should == ["INSERT INTO foo (1)"]
+  end
+
+  specify "#with_sql_delete should execute given delete SQL" do
+    @ds.with_sql_delete('DELETE FROM foo').should == 0
+    @db.sqls.should == ["DELETE FROM foo"]
+  end
+
+  specify "#with_sql_update should execute given update SQL" do
+    @ds.with_sql_update('UPDATE foo SET a = 1').should == 0
+    @db.sqls.should == ["UPDATE foo SET a = 1"]
+  end
+
+  specify "#with_sql_all should return all rows from running the SQL" do
+    @ds.with_sql_all('SELECT * FROM foo').should == [{:id=>1}]
+    @db.sqls.should == ["SELECT * FROM foo -- read_only"]
+  end
+
+  specify "#with_sql_all should yield each row to the block" do
+    a = []
+    @ds.with_sql_all('SELECT * FROM foo'){|r| a << r}
+    a.should == [{:id=>1}]
+    @db.sqls.should == ["SELECT * FROM foo -- read_only"]
+  end
+
+  specify "#with_sql_each should yield each row to the block" do
+    a = []
+    @ds.with_sql_each('SELECT * FROM foo'){|r| a << r}
+    a.should == [{:id=>1}]
+    @db.sqls.should == ["SELECT * FROM foo -- read_only"]
+  end
+
+  specify "#with_sql_first should return first row" do
+    @ds.with_sql_first('SELECT * FROM foo').should == {:id=>1}
+    @db.sqls.should == ["SELECT * FROM foo -- read_only"]
+  end
+
+  specify "#with_sql_first should return nil if no rows returned" do
+    @db.fetch = []
+    @ds.with_sql_first('SELECT * FROM foo').should == nil
+    @db.sqls.should == ["SELECT * FROM foo -- read_only"]
+  end
+
+  specify "#with_sql_single_value should return first value from first row" do
+    @ds.with_sql_single_value('SELECT * FROM foo').should == 1
+    @db.sqls.should == ["SELECT * FROM foo -- read_only"]
+  end
+
+  specify "#with_sql_single_value should return nil if no rows returned" do
+    @db.fetch = []
+    @ds.with_sql_single_value('SELECT * FROM foo').should == nil
+    @db.sqls.should == ["SELECT * FROM foo -- read_only"]
   end
 end
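
The with_sql_* helpers specced above run arbitrary SQL through the appropriate execute method and return rows, a single value, or the usual insert/update/delete results; for example, against a mock database:

    require 'sequel'

    db = Sequel.mock(:fetch => {:id => 1}, :autoid => 1)
    ds = db[:items]
    ds.with_sql_first('SELECT * FROM foo')         # => {:id=>1}
    ds.with_sql_single_value('SELECT * FROM foo')  # => 1
    ds.with_sql_insert('INSERT INTO foo (1)')      # => 1  (generated primary key)
    ds.with_sql_delete('DELETE FROM foo')          # => 0  (number of affected rows)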
 
@@ -3499,6 +3676,7 @@ describe "Sequel::Dataset#qualify" do
 
   specify "should handle SQL::AliasedExpressions" do
     @ds.select(Sequel.expr(:a).as(:b)).qualify.sql.should == 'SELECT t.a AS b FROM t'
+    @ds.select(Sequel.expr(:a).as(:b, [:c, :d])).qualify.sql.should == 'SELECT t.a AS b(c, d) FROM t'
   end
 
   specify "should handle SQL::CaseExpressions" do
@@ -3537,9 +3715,9 @@ describe "Sequel::Dataset#qualify" do
     @ds.filter(Sequel::SQL::Wrapper.new(:a)).qualify.sql.should == 'SELECT t.* FROM t WHERE t.a'
   end
 
-  specify "should handle SQL::WindowFunctions" do
+  specify "should handle SQL::Functions with windows" do
     meta_def(@ds, :supports_window_functions?){true}
-    @ds.select{sum(:over, :args=>:a, :partition=>:b, :order=>:c){}}.qualify.sql.should == 'SELECT sum(t.a) OVER (PARTITION BY t.b ORDER BY t.c) FROM t'
+    @ds.select{sum(:a).over(:partition=>:b, :order=>:c)}.qualify.sql.should == 'SELECT sum(t.a) OVER (PARTITION BY t.b ORDER BY t.c) FROM t'
   end
 
   specify "should handle SQL::DelayedEvaluation" do
@@ -3638,6 +3816,7 @@ describe "Sequel::Dataset #with and #with_recursive" do
   before do
     @db = Sequel::Database.new
     @ds = @db[:t]
+    def @ds.supports_cte?(*) true end
   end
   
   specify "#with should take a name and dataset and use a WITH clause" do
@@ -3676,9 +3855,10 @@ describe "Sequel::Dataset #with and #with_recursive" do
   end
 
   specify "#with should work on insert, update, and delete statements if they support it" do
-    [:insert, :update, :delete].each do |m|
-      meta_def(@ds, :"#{m}_clause_methods"){[:"#{m}_with_sql"] + super()}
-    end
+    sc = class << @ds; self; end
+    Sequel::Dataset.def_sql_method(sc, :delete, %w'with delete from where')
+    Sequel::Dataset.def_sql_method(sc, :insert, %w'with insert into columns values')
+    Sequel::Dataset.def_sql_method(sc, :update, %w'with update table set where')
     @ds.with(:t, @db[:x]).insert_sql(1).should == 'WITH t AS (SELECT * FROM x) INSERT INTO t VALUES (1)'
     @ds.with(:t, @db[:x]).update_sql(:foo=>1).should == 'WITH t AS (SELECT * FROM x) UPDATE t SET foo = 1'
     @ds.with(:t, @db[:x]).delete_sql.should == 'WITH t AS (SELECT * FROM x) DELETE FROM t'
@@ -4220,7 +4400,7 @@ describe "Custom ASTTransformer" do
     end.new
     ds = Sequel.mock.dataset.from(:t).cross_join(:a___g).join(:b___h, [:c]).join(:d___i, :e=>:f)
     ds.sql.should == 'SELECT * FROM t CROSS JOIN a AS g INNER JOIN b AS h USING (c) INNER JOIN d AS i ON (i.e = h.f)'
-    ds.clone(:from=>c.transform(ds.opts[:from]), :join=>c.transform(ds.opts[:join])).sql.should == 'SELECT * FROM tt CROSS JOIN aa AS gg INNER JOIN bb AS hh USING (cc) INNER JOIN dd AS ii ON (ii.ee = hh.ff)'
+    ds.clone(:from=>c.transform(ds.opts[:from]), :join=>c.transform(ds.opts[:join])).sql.should == 'SELECT * FROM tt CROSS JOIN aa AS g INNER JOIN bb AS h USING (cc) INNER JOIN dd AS i ON (ii.ee = hh.ff)'
   end
 end
 
@@ -4228,9 +4408,11 @@ describe "Dataset#returning" do
   before do
     @ds = Sequel.mock(:fetch=>proc{|s| {:foo=>s}})[:t].returning(:foo)
     @pr = proc do
-      [:insert, :update, :delete].each do |m|
-        meta_def(@ds, :"#{m}_clause_methods"){super() + [:"#{m}_returning_sql"]}
-      end
+      def @ds.supports_returning?(*) true end
+      sc = class << @ds; self; end
+      Sequel::Dataset.def_sql_method(sc, :delete, %w'delete from where returning')
+      Sequel::Dataset.def_sql_method(sc, :insert, %w'insert into columns values returning')
+      Sequel::Dataset.def_sql_method(sc, :update, %w'update table set where returning')
     end
   end
   
@@ -4271,7 +4453,7 @@ describe "Dataset emulating bitwise operator support" do
     @ds = Sequel::Database.new.dataset
     @ds.quote_identifiers = true
     def @ds.complex_expression_sql_append(sql, op, args)
-      sql << complex_expression_arg_pairs(args){|a, b| "bitand(#{literal(a)}, #{literal(b)})"}
+      complex_expression_arg_pairs_append(sql, args){|a, b| Sequel.function(:bitand, a, b)}
     end
   end
 
@@ -4284,11 +4466,11 @@ end
 
 describe "Dataset feature defaults" do
   it "should not require aliases for recursive CTEs by default" do
-    Sequel::Database.new.dataset.recursive_cte_requires_column_aliases?.should be_false
+    Sequel::Database.new.dataset.recursive_cte_requires_column_aliases?.should == false
   end
 
   it "should not require placeholder type specifiers by default" do
-    Sequel::Database.new.dataset.requires_placeholder_type_specifiers?.should be_false
+    Sequel::Database.new.dataset.requires_placeholder_type_specifiers?.should == false
   end
 end
 
@@ -4320,13 +4502,13 @@ describe "Dataset extensions" do
   specify "should be able to register an extension with a block and Database#extension call the block" do
     @ds.quote_identifiers = false
     Sequel::Dataset.register_extension(:foo){|db| db.quote_identifiers = true}
-    @ds.extension(:foo).quote_identifiers?.should be_true
+    @ds.extension(:foo).quote_identifiers?.should == true
   end
 
   specify "should be able to register an extension with a callable and Database#extension call the callable" do
     @ds.quote_identifiers = false
     Sequel::Dataset.register_extension(:foo, proc{|db| db.quote_identifiers = true})
-    @ds.extension(:foo).quote_identifiers?.should be_true
+    @ds.extension(:foo).quote_identifiers?.should == true
   end
 
   specify "should be able to load multiple extensions in the same call" do
@@ -4335,7 +4517,7 @@ describe "Dataset extensions" do
     Sequel::Dataset.register_extension(:foo, proc{|ds| ds.quote_identifiers = true})
     Sequel::Dataset.register_extension(:bar, proc{|ds| ds.identifier_input_method = nil})
     ds = @ds.extension(:foo, :bar)
-    ds.quote_identifiers?.should be_true
+    ds.quote_identifiers?.should == true
     ds.identifier_input_method.should be_nil
   end
 
@@ -4519,6 +4701,49 @@ describe "Dataset#paged_each" do
     @ds.limit(nil, 2).paged_each(:rows_per_fetch=>3, &@proc)
     @ds.db.sqls[1...-1].should == ["SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 2", "SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 5", "SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 8", "SELECT * FROM test ORDER BY x LIMIT 3 OFFSET 11"]
   end
+
+  it "should support :strategy=>:filter" do
+    @ds._fetch = @db.each_slice(5).to_a
+    @ds.paged_each(:rows_per_fetch=>5, :strategy=>:filter, &@proc)
+    @ds.db.sqls[1...-1].should == ["SELECT * FROM test ORDER BY x LIMIT 5", "SELECT * FROM test WHERE (x > 4) ORDER BY x LIMIT 5", "SELECT * FROM test WHERE (x > 9) ORDER BY x LIMIT 5"]
+    @rows.should == @db
+
+    @rows = []
+    db = @db.map{|h| h[:y] = h[:x] % 5; h[:z] = h[:x] % 9; h}.sort_by{|h| [h[:z], -h[:y], h[:x]]}
+    @ds._fetch = db.each_slice(5).to_a
+    @ds.order(Sequel.identifier(:z), Sequel.desc(Sequel.qualify(:test, :y)), Sequel.asc(:x)).paged_each(:rows_per_fetch=>5, :strategy=>:filter, &@proc)
+    @ds.db.sqls[1...-1].should == ["SELECT * FROM test ORDER BY z, test.y DESC, x ASC LIMIT 5",
+      "SELECT * FROM test WHERE ((z > 3) OR ((z = 3) AND (test.y < 3)) OR ((z = 3) AND (test.y = 3) AND (x > 3))) ORDER BY z, test.y DESC, x ASC LIMIT 5",
+      "SELECT * FROM test WHERE ((z > 8) OR ((z = 8) AND (test.y < 3)) OR ((z = 8) AND (test.y = 3) AND (x > 8))) ORDER BY z, test.y DESC, x ASC LIMIT 5"]
+    @rows.should == db
+  end
+
+  it "should support :strategy=>:filter with :filter_values option" do
+    db = @db.map{|h| h[:y] = h[:x] % 5; h[:z] = h[:x] % 9; h}.sort_by{|h| [h[:z], -h[:y], h[:x]]}
+    @ds._fetch = db.each_slice(5).to_a
+    @ds.order(Sequel.identifier(:z), Sequel.desc(Sequel.qualify(:test, :y) * 2), Sequel.asc(:x)).paged_each(:rows_per_fetch=>5, :strategy=>:filter, :filter_values=>proc{|row, expr| [row[expr[0].value], row[expr[1].args.first.column] * expr[1].args.last, row[expr[2]]]}, &@proc)
+    @ds.db.sqls[1...-1].should == ["SELECT * FROM test ORDER BY z, (test.y * 2) DESC, x ASC LIMIT 5",
+      "SELECT * FROM test WHERE ((z > 3) OR ((z = 3) AND ((test.y * 2) < 6)) OR ((z = 3) AND ((test.y * 2) = 6) AND (x > 3))) ORDER BY z, (test.y * 2) DESC, x ASC LIMIT 5",
+      "SELECT * FROM test WHERE ((z > 8) OR ((z = 8) AND ((test.y * 2) < 6)) OR ((z = 8) AND ((test.y * 2) = 6) AND (x > 8))) ORDER BY z, (test.y * 2) DESC, x ASC LIMIT 5"]
+    @rows.should == db
+  end
+end
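
    # Illustrative sketch, not part of the patch: the specs above exercise the
    # new :strategy=>:filter option for Dataset#paged_each, which pages by
    # filtering on the last row's order values instead of using OFFSET.
    # DB is an assumed connected Sequel::Database with a test table.
    DB[:test].order(:x).paged_each(:rows_per_fetch=>5, :strategy=>:filter) do |row|
      # process each row here
    end
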
+
+describe "Dataset#current_datetime" do
+  after do
+    Sequel.datetime_class = Time
+  end
+
+  it "should return an instance of Sequel.datetime_class for the current datetime" do
+    t = Sequel::Dataset.new(nil).current_datetime 
+    t.should be_a_kind_of(Time)
+    (Time.now - t < 0.1).should == true
+
+    Sequel.datetime_class = DateTime
+    t = Sequel::Dataset.new(nil).current_datetime 
+    t.should be_a_kind_of(DateTime)
+    (DateTime.now - t < (0.1/86400)).should == true
+  end
 end
 
 describe "Dataset#escape_like" do
@@ -4533,13 +4758,13 @@ end
 
 describe "Dataset#supports_replace?" do
   it "should be false by default" do
-    Sequel::Dataset.new(nil).supports_replace?.should be_false
+    Sequel::Dataset.new(nil).supports_replace?.should == false
   end
 end
 
 describe "Dataset#supports_lateral_subqueries?" do
   it "should be false by default" do
-    Sequel::Dataset.new(nil).supports_lateral_subqueries?.should be_false
+    Sequel::Dataset.new(nil).supports_lateral_subqueries?.should == false
   end
 end
 
@@ -4584,3 +4809,125 @@ describe "Frozen Datasets" do
     @ds.select(:a).sql.should == 'SELECT a FROM test'
   end
 end
+
+describe "Dataset mutation methods" do
+  def m(&block)
+    ds = Sequel.mock[:t]
+    def ds.supports_cte?(*) true end
+    ds.instance_exec(&block)
+    ds.sql
+  end
+
+  it "should modify the dataset in place" do
+    dsc = Sequel.mock[:u]
+    dsc.instance_variable_set(:@columns, [:v])
+
+    m{and!(:a=>1).or!(:b=>2)}.should == "SELECT * FROM t WHERE ((a = 1) OR (b = 2))"
+    m{select!(:f).graph!(dsc, :b=>:c).set_graph_aliases!(:e=>[:m, :n]).add_graph_aliases!(:d=>[:g, :c])}.should == "SELECT m.n AS e, g.c AS d FROM t LEFT OUTER JOIN u ON (u.b = t.c)"
+    m{cross_join!(:a)}.should == "SELECT * FROM t CROSS JOIN a"
+    m{distinct!}.should == "SELECT DISTINCT * FROM t"
+    m{except!(dsc)}.should == "SELECT * FROM (SELECT * FROM t EXCEPT SELECT * FROM u) AS t1"
+    m{exclude!(:a=>1)}.should == "SELECT * FROM t WHERE (a != 1)"
+    m{exclude_having!(:a=>1)}.should == "SELECT * FROM t HAVING (a != 1)"
+    m{exclude_where!(:a=>1)}.should == "SELECT * FROM t WHERE (a != 1)"
+    m{filter!(:a=>1)}.should == "SELECT * FROM t WHERE (a = 1)"
+    m{for_update!}.should == "SELECT * FROM t FOR UPDATE"
+    m{from!(:p)}.should == "SELECT * FROM p"
+    m{full_join!(:a, [:b])}.should == "SELECT * FROM t FULL JOIN a USING (b)"
+    m{full_outer_join!(:a, [:b])}.should == "SELECT * FROM t FULL OUTER JOIN a USING (b)"
+    m{grep!(:a, 'b')}.should == "SELECT * FROM t WHERE ((a LIKE 'b' ESCAPE '\\'))"
+    m{group!(:a)}.should == "SELECT * FROM t GROUP BY a"
+    m{group_and_count!(:a)}.should == "SELECT a, count(*) AS count FROM t GROUP BY a"
+    m{group_by!(:a)}.should == "SELECT * FROM t GROUP BY a"
+    m{having!(:a)}.should == "SELECT * FROM t HAVING a"
+    m{inner_join!(:a, [:b])}.should == "SELECT * FROM t INNER JOIN a USING (b)"
+    m{intersect!(dsc)}.should == "SELECT * FROM (SELECT * FROM t INTERSECT SELECT * FROM u) AS t1"
+    m{where!(:a).invert!}.should == "SELECT * FROM t WHERE NOT a"
+    m{join!(:a, [:b])}.should == "SELECT * FROM t INNER JOIN a USING (b)"
+    m{join_table!(:inner, :a, [:b])}.should == "SELECT * FROM t INNER JOIN a USING (b)"
+    m{left_join!(:a, [:b])}.should == "SELECT * FROM t LEFT JOIN a USING (b)"
+    m{left_outer_join!(:a, [:b])}.should == "SELECT * FROM t LEFT OUTER JOIN a USING (b)"
+    m{limit!(1)}.should == "SELECT * FROM t LIMIT 1"
+    m{lock_style!(:update)}.should == "SELECT * FROM t FOR UPDATE"
+    m{natural_full_join!(:a)}.should == "SELECT * FROM t NATURAL FULL JOIN a"
+    m{natural_join!(:a)}.should == "SELECT * FROM t NATURAL JOIN a"
+    m{natural_left_join!(:a)}.should == "SELECT * FROM t NATURAL LEFT JOIN a"
+    m{natural_right_join!(:a)}.should == "SELECT * FROM t NATURAL RIGHT JOIN a"
+    m{offset!(1)}.should == "SELECT * FROM t OFFSET 1"
+    m{order!(:a).reverse_order!}.should == "SELECT * FROM t ORDER BY a DESC"
+    m{order_by!(:a).order_more!(:b).order_append!(:c).order_prepend!(:d).reverse!}.should == "SELECT * FROM t ORDER BY d DESC, a DESC, b DESC, c DESC"
+    m{qualify!}.should == "SELECT t.* FROM t"
+    m{right_join!(:a, [:b])}.should == "SELECT * FROM t RIGHT JOIN a USING (b)"
+    m{right_outer_join!(:a, [:b])}.should == "SELECT * FROM t RIGHT OUTER JOIN a USING (b)"
+    m{select!(:a)}.should == "SELECT a FROM t"
+    m{select_all!(:t).select_more!(:b).select_append!(:c)}.should == "SELECT t.*, b, c FROM t"
+    m{select_group!(:a)}.should == "SELECT a FROM t GROUP BY a"
+    m{where!(:a).unfiltered!}.should == "SELECT * FROM t"
+    m{group!(:a).ungrouped!}.should == "SELECT * FROM t"
+    m{limit!(1).unlimited!}.should == "SELECT * FROM t"
+    m{order!(:a).unordered!}.should == "SELECT * FROM t"
+    m{union!(dsc)}.should == "SELECT * FROM (SELECT * FROM t UNION SELECT * FROM u) AS t1"
+    m{with!(:a, dsc)}.should == "WITH a AS (SELECT * FROM u) SELECT * FROM t"
+    m{with_recursive!(:a, dsc, dsc)}.should == "WITH a AS (SELECT * FROM u UNION ALL SELECT * FROM u) SELECT * FROM t"
+    m{with_sql!('SELECT foo')}.should == "SELECT foo"
+
+    dsc.server!(:a)
+    dsc.opts[:server].should == :a
+    dsc.graph!(dsc, {:b=>:c}, :table_alias=>:foo).ungraphed!.opts[:graph].should be_nil
+  end
+end
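
    # Illustrative sketch, not part of the patch: the mutation (bang) methods
    # covered above modify the receiving dataset in place rather than
    # returning a modified copy. DB is an assumed Sequel::Database.
    ds = DB[:t]
    ds.where!(:a=>1)   # modifies ds itself
    ds.sql             # => "SELECT * FROM t WHERE (a = 1)"
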
+
+describe "Dataset emulated complex expression operators" do
+  before do
+    @ds = Sequel.mock[:test]
+    def @ds.complex_expression_sql_append(sql, op, args)
+      case op
+      when :&, :|, :^, :%, :<<, :>>, :'B~'
+        complex_expression_emulate_append(sql, op, args)
+      else
+        super
+      end
+    end
+    @n = Sequel.expr(:x).sql_number
+  end
+
+  it "should emulate &" do
+    @ds.literal(Sequel::SQL::NumericExpression.new(:&, @n)).should == "x"
+    @ds.literal(@n & 1).should == "BITAND(x, 1)"
+    @ds.literal(@n & 1 & 2).should == "BITAND(BITAND(x, 1), 2)"
+  end
+
+  it "should emulate |" do
+    @ds.literal(Sequel::SQL::NumericExpression.new(:|, @n)).should == "x"
+    @ds.literal(@n | 1).should == "BITOR(x, 1)"
+    @ds.literal(@n | 1 | 2).should == "BITOR(BITOR(x, 1), 2)"
+  end
+
+  it "should emulate ^" do
+    @ds.literal(Sequel::SQL::NumericExpression.new(:^, @n)).should == "x"
+    @ds.literal(@n ^ 1).should == "BITXOR(x, 1)"
+    @ds.literal(@n ^ 1 ^ 2).should == "BITXOR(BITXOR(x, 1), 2)"
+  end
+
+  it "should emulate %" do
+    @ds.literal(Sequel::SQL::NumericExpression.new(:%, @n)).should == "x"
+    @ds.literal(@n % 1).should == "MOD(x, 1)"
+    @ds.literal(@n % 1 % 2).should == "MOD(MOD(x, 1), 2)"
+  end
+
+  it "should emulate >>" do
+    @ds.literal(Sequel::SQL::NumericExpression.new(:>>, @n)).should == "x"
+    @ds.literal(@n >> 1).should == "(x / power(2, 1))"
+    @ds.literal(@n >> 1 >> 2).should == "(x / power(2, 1) / power(2, 2))"
+  end
+
+  it "should emulate <<" do
+    @ds.literal(Sequel::SQL::NumericExpression.new(:<<, @n)).should == "x"
+    @ds.literal(@n << 1).should == "(x * power(2, 1))"
+    @ds.literal(@n << 1 << 2).should == "(x * power(2, 1) * power(2, 2))"
+  end
+
+  it "should emulate B~" do
+    @ds.literal(~@n).should == "((0 - x) - 1)"
+  end
+end
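
    # Illustrative sketch, not part of the patch: on a dataset that routes the
    # bitwise operators through complex_expression_emulate_append (as in the
    # before block above), literalization falls back to function calls.
    n = Sequel.expr(:x).sql_number
    ds.literal(n & 1)    # => "BITAND(x, 1)" on such a dataset
    ds.literal(n >> 1)   # => "(x / power(2, 1))"
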
diff --git a/spec/core/expression_filters_spec.rb b/spec/core/expression_filters_spec.rb
index a36f4fd..f03adc4 100644
--- a/spec/core/expression_filters_spec.rb
+++ b/spec/core/expression_filters_spec.rb
@@ -390,6 +390,22 @@ describe "Blockless Ruby Filters" do
   it "should handled emulated trim function" do
     @d.lit(Sequel.trim(:a)).should == 'trim(a)'
   end
+
+  it "should handled emulated function where only name is emulated" do
+    dsc = Class.new(Sequel::Dataset)
+    dsc::EMULATED_FUNCTION_MAP[:trim] = :foo
+    dsc.new(@d.db).literal(Sequel.trim(:a)).should == 'foo(a)'
+  end
+
+  it "should handled emulated function needing full emulation" do
+    dsc = Class.new(Sequel::Dataset) do
+      def emulate_function?(n) n == :trim end
+      def emulate_function_sql_append(sql, f)
+        sql << "#{f.name}FOO(lower(#{f.args.first}))"
+      end
+    end
+    dsc.new(@d.db).literal(Sequel.trim(:a)).should == 'trimFOO(lower(a))'
+  end
 end
 
 describe Sequel::SQL::VirtualRow do
@@ -423,11 +439,25 @@ describe Sequel::SQL::VirtualRow do
     @d.l{count(:*){}}.should == 'count(*)'
   end
 
+  it "should support * method on functions to raise error if function already has an argument" do
+    proc{@d.l{count(1).*}}.should raise_error(Sequel::Error)
+  end
+
+  it "should support * method on functions to use * as the argument" do
+    @d.l{count{}.*}.should == 'count(*)'
+    @d.literal(Sequel.expr{sum(1) * 2}).should == '(sum(1) * 2)'
+  end
+
   it "should treat methods with a block and a leading argument :distinct as a function call with DISTINCT and the additional method arguments" do
     @d.l{count(:distinct, column1){}}.should == 'count(DISTINCT "column1")'
     @d.l{count(:distinct, column1, column2){}}.should == 'count(DISTINCT "column1", "column2")'
   end
 
+  it "should support distinct methods on functions to use DISTINCT before the arguments" do
+    @d.l{count(column1).distinct}.should == 'count(DISTINCT "column1")'
+    @d.l{count(column1, column2).distinct}.should == 'count(DISTINCT "column1", "column2")'
+  end
+
   it "should raise an error if an unsupported argument is used with a block" do
     proc{@d.where{count(:blah){}}}.should raise_error(Sequel::Error)
   end
@@ -479,6 +509,19 @@ describe Sequel::SQL::VirtualRow do
     @d.l{count(:over, :* =>true, :partition=>a, :order=>b, :window=>:win, :frame=>:rows){}}.should == 'count(*) OVER ("win" PARTITION BY "a" ORDER BY "b" ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)'
   end
 
+  it "should support over method on functions to create window functions" do
+    @d.l{rank{}.over}.should == 'rank() OVER ()'
+    @d.l{sum(c).over(:partition=>a, :order=>b, :window=>:win, :frame=>:rows)}.should == 'sum("c") OVER ("win" PARTITION BY "a" ORDER BY "b" ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)'
+  end
+
+  it "should support over method with a Window argument" do
+    @d.l{sum(c).over(Sequel::SQL::Window.new(:partition=>a, :order=>b, :window=>:win, :frame=>:rows))}.should == 'sum("c") OVER ("win" PARTITION BY "a" ORDER BY "b" ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)'
+  end
+
+  it "should raise error if over is called on a function that already has a window " do
+    proc{@d.l{rank{}.over.over}}.should raise_error(Sequel::Error)
+  end
+
   it "should raise an error if window functions are not supported" do
     class << @d; remove_method :supports_window_functions? end
     meta_def(@d, :supports_window_functions?){false}
@@ -486,6 +529,61 @@ describe Sequel::SQL::VirtualRow do
     proc{Sequel.mock.dataset.filter{count(:over, :* =>true, :partition=>a, :order=>b, :window=>:win, :frame=>:rows){}}.sql}.should raise_error(Sequel::Error)
   end
   
+  it "should handle lateral function calls" do
+    @d.l{rank{}.lateral}.should == 'LATERAL rank()' 
+  end
+
+  it "should handle ordered-set and hypothetical-set function calls" do
+    @d.l{mode{}.within_group(:a)}.should == 'mode() WITHIN GROUP (ORDER BY "a")' 
+    @d.l{mode{}.within_group(:a, :b)}.should == 'mode() WITHIN GROUP (ORDER BY "a", "b")' 
+  end
+
+  it "should handle filtered aggregate function calls" do
+    @d.l{count{}.*.filter(:a, :b)}.should == 'count(*) FILTER (WHERE ("a" AND "b"))' 
+    @d.l{count{}.*.filter(:a=>1)}.should == 'count(*) FILTER (WHERE ("a" = 1))'
+    @d.l{count{}.*.filter{b > 1}}.should == 'count(*) FILTER (WHERE ("b" > 1))'
+    @d.l{count{}.*.filter(:a=>1){b > 1}}.should == 'count(*) FILTER (WHERE (("a" = 1) AND ("b" > 1)))'
+  end
+
+  it "should handle fitlered ordered-set and hypothetical-set function calls" do
+    @d.l{mode{}.within_group(:a).filter(:a=>1)}.should == 'mode() WITHIN GROUP (ORDER BY "a") FILTER (WHERE ("a" = 1))' 
+  end
+
+  it "should handle function calls with ordinality" do
+    @d.l{foo{}.with_ordinality}.should == 'foo() WITH ORDINALITY' 
+  end
+
+  it "should support function method on identifiers to create functions" do
+    @d.l{rank.function}.should == 'rank()' 
+    @d.l{sum.function(c)}.should == 'sum("c")'
+    @d.l{sum.function(c, 1)}.should == 'sum("c", 1)'
+  end
+
+  it "should support function method on qualified identifiers to create functions" do
+    @d.l{sch__rank.function}.should == 'sch.rank()' 
+    @d.l{sch__sum.function(c)}.should == 'sch.sum("c")'
+    @d.l{sch__sum.function(c, 1)}.should == 'sch.sum("c", 1)'
+    @d.l{Sequel.qualify(sch__sum, :x__y).function(c, 1)}.should == 'sch.sum.x.y("c", 1)'
+  end
+
+  it "should handle quoted function names" do
+    def @d.supports_quoted_function_names?; true; end
+    @d.l{rank.function}.should == '"rank"()' 
+    @d.l{sch__rank.function}.should == '"sch"."rank"()' 
+  end
+
+  it "should quote function names if a quoted function is used and database supports quoted function names" do
+    def @d.supports_quoted_function_names?; true; end
+    @d.l{rank{}.quoted}.should == '"rank"()' 
+    @d.l{sch__rank{}.quoted}.should == '"sch__rank"()' 
+  end
+
+  it "should not quote function names if an unquoted function is used" do
+    def @d.supports_quoted_function_names?; true; end
+    @d.l{rank.function.unquoted}.should == 'rank()' 
+    @d.l{sch__rank.function.unquoted}.should == 'sch.rank()' 
+  end
+
   it "should deal with classes without requiring :: prefix" do
     @d.l{date < Date.today}.should == "(\"date\" < '#{Date.today}')"
     @d.l{date < Sequel::CURRENT_DATE}.should == "(\"date\" < CURRENT_DATE)"
@@ -678,7 +776,7 @@ describe "Sequel core extension replacements" do
   end
 
   it "Sequel.& should join all arguments given with AND" do
-    l(Sequel.&(:a), "(a)")
+    l(Sequel.&(:a), "a")
     l(Sequel.&(:a, :b=>:c), "(a AND (b = c))")
     l(Sequel.&(:a, {:b=>:c}, Sequel.lit('d')), "(a AND (b = c) AND d)")
   end
@@ -688,7 +786,7 @@ describe "Sequel core extension replacements" do
   end
 
   it "Sequel.| should join all arguments given with OR" do
-    l(Sequel.|(:a), "(a)")
+    l(Sequel.|(:a), "a")
     l(Sequel.|(:a, :b=>:c), "(a OR (b = c))")
     l(Sequel.|(:a, {:b=>:c}, Sequel.lit('d')), "(a OR (b = c) OR d)")
   end
@@ -772,7 +870,7 @@ describe "Sequel core extension replacements" do
 
   it "Sequel.{+,-,*,/} should accept arguments and use the appropriate operator" do
     %w'+ - * /'.each do |op|
-      l(Sequel.send(op, 1), '(1)')
+      l(Sequel.send(op, 1), '1')
       l(Sequel.send(op, 1, 2), "(1 #{op} 2)")
       l(Sequel.send(op, 1, 2, 3), "(1 #{op} 2 #{op} 3)")
     end
@@ -903,6 +1001,22 @@ describe "Sequel::SQLTime" do
     @db.literal(Sequel::SQLTime.create(1, 2, 3)).should == "'01:02:03.000000'"
     @db.literal(Sequel::SQLTime.create(1, 2, 3, 500000)).should == "'01:02:03.500000'"
   end
+
+  specify "#to_s should include hour, minute, and second by default" do
+    Sequel::SQLTime.create(1, 2, 3).to_s.should == "01:02:03"
+    Sequel::SQLTime.create(1, 2, 3, 500000).to_s.should == "01:02:03"
+  end
+
+  specify "#to_s should handle arguments with super" do
+    t = Sequel::SQLTime.create(1, 2, 3)
+    begin
+      Time.now.to_s('%F')
+    rescue
+      proc{t.to_s('%F')}.should raise_error
+    else
+      proc{t.to_s('%F')}.should_not raise_error
+    end
+  end
 end
 
 describe "Sequel::SQL::Wrapper" do
@@ -1066,6 +1180,6 @@ end
 
 describe "Sequel core extensions" do
   specify "should have Sequel.core_extensions? be false by default" do
-    Sequel.core_extensions?.should be_false
+    Sequel.core_extensions?.should == false
   end
 end
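
    # Illustrative sketch, not part of the patch: the new SQL::Function
    # modifier methods tested above (*, distinct, over, filter, within_group,
    # with_ordinality, lateral) chain onto function calls inside virtual-row
    # blocks. Output comments show the rough SQL; DB is an assumed database.
    DB[:t].select{count{}.*.filter(:a=>1)}       # count(*) FILTER (WHERE (a = 1))
    DB[:t].select{sum(:b).over(:partition=>:c)}  # sum(b) OVER (PARTITION BY c)
    DB[:t].select{mode{}.within_group(:a)}       # mode() WITHIN GROUP (ORDER BY a)
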
diff --git a/spec/core/mock_adapter_spec.rb b/spec/core/mock_adapter_spec.rb
index 288f4ea..6dda877 100644
--- a/spec/core/mock_adapter_spec.rb
+++ b/spec/core/mock_adapter_spec.rb
@@ -14,7 +14,7 @@ describe "Sequel Mock Adapter" do
   specify "should each not return any rows by default" do
     called = false
     Sequel.mock[:t].each{|r| called = true}
-    called.should be_false
+    called.should == false
   end
 
   specify "should return 0 for update/delete/with_sql_delete/execute_dui by default" do
@@ -299,7 +299,7 @@ describe "Sequel Mock Adapter" do
   end
 
   specify "should not quote identifiers by default" do
-    Sequel.mock.send(:quote_identifiers_default).should be_false
+    Sequel.mock.send(:quote_identifiers_default).should == false
   end
 
   specify "should allow overriding of server_version" do
@@ -424,13 +424,15 @@ describe "Sequel Mock Adapter" do
       class Sequel::Database; @identifier_input_method=nil; end
       class Sequel::Database; @identifier_output_method=nil; end
       Sequel.mock(:host=>'access').select(Date.new(2011, 12, 13)).sql.should == 'SELECT #2011-12-13#'
+      Sequel.mock(:host=>'cubrid').from(:a).offset(1).sql.should == 'SELECT * FROM "a" LIMIT 1,4294967295'
       Sequel.mock(:host=>'db2').select(1).sql.should == 'SELECT 1 FROM "SYSIBM"."SYSDUMMY1"'
       Sequel.mock(:host=>'firebird')[:a].distinct.limit(1, 2).sql.should == 'SELECT DISTINCT FIRST 1 SKIP 2 * FROM "A"'
       Sequel.mock(:host=>'informix')[:a].distinct.limit(1, 2).sql.should == 'SELECT SKIP 2 FIRST 1 DISTINCT * FROM A'
       Sequel.mock(:host=>'mssql')[:a].full_text_search(:b, 'c').sql.should == "SELECT * FROM [A] WHERE (CONTAINS ([B], 'c'))"
       Sequel.mock(:host=>'mysql')[:a].full_text_search(:b, 'c').sql.should == "SELECT * FROM `a` WHERE (MATCH (`b`) AGAINST ('c'))"
       Sequel.mock(:host=>'oracle')[:a].limit(1).sql.should == 'SELECT * FROM (SELECT * FROM "A") "T1" WHERE (ROWNUM <= 1)'
-      Sequel.mock(:host=>'postgres')[:a].full_text_search(:b, 'c').sql.should == "SELECT * FROM \"a\" WHERE (to_tsvector('simple'::regconfig, (COALESCE(\"b\", ''))) @@ to_tsquery('simple'::regconfig, 'c'))"
+      Sequel.mock(:host=>'postgres')[:a].full_text_search(:b, 'c').sql.should == "SELECT * FROM \"a\" WHERE (to_tsvector(CAST('simple' AS regconfig), (COALESCE(\"b\", ''))) @@ to_tsquery(CAST('simple' AS regconfig), 'c'))"
+      Sequel.mock(:host=>'sqlanywhere').from(:a).offset(1).sql.should == 'SELECT TOP 2147483647 START AT (1 + 1) * FROM "A"'
       Sequel.mock(:host=>'sqlite')[:a___b].sql.should == "SELECT * FROM `a` AS 'b'"
     ensure
       Sequel.quote_identifiers = qi
@@ -439,9 +441,11 @@ describe "Sequel Mock Adapter" do
     end
   end
 
-  specify "should automatically set version for postgres and mssql" do
-    Sequel.mock(:host=>'postgres').server_version.should == 90103
-    Sequel.mock(:host=>'mssql').server_version.should == 10000000
+  specify "should automatically set version for adapters nedding versions" do
+    Sequel.mock(:host=>'postgres').server_version.should == 90400
+    Sequel.mock(:host=>'mssql').server_version.should == 11000000
+    Sequel.mock(:host=>'mysql').server_version.should == 50617
+    Sequel.mock(:host=>'sqlite').sqlite_version.should == 30804
   end
 
   specify "should stub out the primary_key method for postgres" do
diff --git a/spec/core/object_graph_spec.rb b/spec/core/object_graph_spec.rb
index 9ff4c09..621ac49 100644
--- a/spec/core/object_graph_spec.rb
+++ b/spec/core/object_graph_spec.rb
@@ -1,12 +1,12 @@
 require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper')
 
-describe Sequel::Dataset, " graphing" do
+describe Sequel::Dataset, "graphing" do
   before do
     @db = Sequel.mock(:columns=>proc do |sql|
       case sql
       when /points/
         [:id, :x, :y]
-      when /lines/
+      when /lines|foo/
         [:id, :x, :y, :graph_id]
       else
         [:id, :name, :x, :y, :lines_x]
@@ -19,198 +19,275 @@ describe Sequel::Dataset, " graphing" do
     @db.sqls
   end
 
-  it "#graph should not modify the current dataset's opts" do
-    o1 = @ds1.opts
-    o2 = o1.dup
-    ds1 = @ds1.graph(@ds2, :x=>:id)
-    @ds1.opts.should == o1
-    @ds1.opts.should == o2
-    ds1.opts.should_not == o1
+  describe "#graph" do
+    it "should not modify the current dataset's opts" do
+      o1 = @ds1.opts
+      o2 = o1.dup
+      ds1 = @ds1.graph(@ds2, :x=>:id)
+      @ds1.opts.should == o1
+      @ds1.opts.should == o2
+      ds1.opts.should_not == o1
+    end
+
+    it "should not modify the current dataset's opts if current dataset is already graphed" do
+      ds2 = @ds1.graph(@ds2)
+      proc{@ds1.graph(@ds2)}.should_not raise_error
+      proc{ds2.graph(@ds3)}.should_not raise_error
+      proc{ds2.graph(@ds3)}.should_not raise_error
+    end
+
+    it "should accept a simple dataset and pass the table to join" do
+      ds = @ds1.graph(@ds2, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+    end
+
+    it "should use currently selected columns as the basis for the selected columns in a new graph" do
+      ds = @ds1.select(:id).graph(@ds2, :x=>:id)
+      ds.sql.should == 'SELECT points.id, lines.id AS lines_id, lines.x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+
+      ds = @ds1.select(:id, :x).graph(@ds2, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, lines.id AS lines_id, lines.x AS lines_x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+
+      ds = @ds1.select(Sequel.identifier(:id), Sequel.qualify(:points, :x)).graph(@ds2, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, lines.id AS lines_id, lines.x AS lines_x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+
+      ds = @ds1.select(Sequel.identifier(:id).qualify(:points), Sequel.identifier(:x).as(:y)).graph(@ds2, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x AS y, lines.id AS lines_id, lines.x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+
+      ds = @ds1.select(:id, Sequel.identifier(:x).qualify(Sequel.identifier(:points)).as(Sequel.identifier(:y))).graph(@ds2, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x AS y, lines.id AS lines_id, lines.x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+    end
+
+    it "should raise error if currently selected expressions cannot be handled" do
+      proc{@ds1.select(1).graph(@ds2, :x=>:id)}.should raise_error(Sequel::Error)
+    end
+
+    it "should accept a complex dataset and pass it directly to join" do
+      ds = @ds1.graph(@ds2.select_all(:lines), {:x=>:id})
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+    end
+
+    it "should accept a complex dataset and pass it directly to join" do
+      ds = @ds1.graph(@ds2.filter(:x=>1), {:x=>:id})
+      ds.sql.should == 'SELECT points.id, points.x, points.y, t1.id AS t1_id, t1.x AS t1_x, t1.y AS t1_y, t1.graph_id FROM points LEFT OUTER JOIN (SELECT * FROM lines WHERE (x = 1)) AS t1 ON (t1.x = points.id)'
+      ds = @ds1.graph(@ds2.select_all(:lines).filter(:x=>1), {:x=>:id})
+      ds.sql.should == 'SELECT points.id, points.x, points.y, t1.id AS t1_id, t1.x AS t1_x, t1.y AS t1_y, t1.graph_id FROM points LEFT OUTER JOIN (SELECT lines.* FROM lines WHERE (x = 1)) AS t1 ON (t1.x = points.id)'
+    end
+
+    it "should work on from_self datasets" do
+      ds = @ds1.from_self.graph(@ds2, :x=>:id)
+      ds.sql.should == 'SELECT t1.id, t1.x, t1.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM (SELECT * FROM points) AS t1 LEFT OUTER JOIN lines ON (lines.x = t1.id)'
+      ds = @ds1.graph(@ds2.from_self, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, t1.id AS t1_id, t1.x AS t1_x, t1.y AS t1_y, t1.graph_id FROM points LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM lines) AS t1) AS t1 ON (t1.x = points.id)'
+      ds = @ds1.from_self.from_self.graph(@ds2.from_self.from_self, :x=>:id)
+      ds.sql.should == 'SELECT t1.id, t1.x, t1.y, t2.id AS t2_id, t2.x AS t2_x, t2.y AS t2_y, t2.graph_id FROM (SELECT * FROM (SELECT * FROM points) AS t1) AS t1 LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM (SELECT * FROM lines) AS t1) AS t1) AS t2 ON (t2.x = t1.id)'
+      ds = @ds1.from(@ds1, @ds3).graph(@ds2.from_self, :x=>:id)
+      ds.sql.should == 'SELECT t1.id, t1.x, t1.y, t3.id AS t3_id, t3.x AS t3_x, t3.y AS t3_y, t3.graph_id FROM (SELECT * FROM (SELECT * FROM points) AS t1, (SELECT * FROM graphs) AS t2) AS t1 LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM lines) AS t1) AS t3 ON (t3.x = t1.id)'
+    end
+
+    it "should accept a symbol table name as the dataset" do
+      ds = @ds1.graph(:lines, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+    end
+
+    it "should accept a schema qualified symbolic table name as the dataset" do
+      ds = @ds1.graph(:schema__lines, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN schema.lines AS lines ON (lines.x = points.id)'
+    end
+
+    it "allows giving table alias in symbolic argument" do
+      ds = @ds1.graph(:lines___sketch, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, sketch.id AS sketch_id, sketch.x AS sketch_x, sketch.y AS sketch_y, sketch.graph_id FROM points LEFT OUTER JOIN lines AS sketch ON (sketch.x = points.id)'
+      ds = @ds1.graph(:schema__lines___sketch, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, sketch.id AS sketch_id, sketch.x AS sketch_x, sketch.y AS sketch_y, sketch.graph_id FROM points LEFT OUTER JOIN schema.lines AS sketch ON (sketch.x = points.id)'
+    end
+
+    it "should accept a SQL::Identifier as the dataset" do
+      ds = @ds1.graph(Sequel.identifier(:lines), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+      ds = @ds1.graph(Sequel.identifier('lines'), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines AS lines ON (lines.x = points.id)'
+    end
+
+    it "should handle a SQL::Identifier with double underscores correctly" do
+      ds = @ds1.graph(Sequel.identifier(:lin__es), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lin__es.id AS lin__es_id, lin__es.name, lin__es.x AS lin__es_x, lin__es.y AS lin__es_y, lin__es.lines_x FROM points LEFT OUTER JOIN lin__es ON (lin__es.x = points.id)'
+      ds = @ds1.from(Sequel.identifier(:poi__nts)).graph(Sequel.identifier(:lin__es), :x=>:id)
+      ds.sql.should == 'SELECT poi__nts.id, poi__nts.name, poi__nts.x, poi__nts.y, poi__nts.lines_x, lin__es.id AS lin__es_id, lin__es.name AS lin__es_name, lin__es.x AS lin__es_x, lin__es.y AS lin__es_y, lin__es.lines_x AS lin__es_lines_x FROM poi__nts LEFT OUTER JOIN lin__es ON (lin__es.x = poi__nts.id)'
+      ds = @ds1.from(Sequel.identifier(:poi__nts).qualify(:foo)).graph(Sequel.identifier(:lin__es).qualify(:bar), :x=>:id)
+      ds.sql.should == 'SELECT foo.poi__nts.id, foo.poi__nts.x, foo.poi__nts.y, foo.poi__nts.graph_id, lin__es.id AS lin__es_id, lin__es.name, lin__es.x AS lin__es_x, lin__es.y AS lin__es_y, lin__es.lines_x FROM foo.poi__nts LEFT OUTER JOIN bar.lin__es AS lin__es ON (lin__es.x = foo.poi__nts.id)'
+    end
+
+    it "should accept a SQL::QualifiedIdentifier as the dataset" do
+      ds = @ds1.graph(Sequel.qualify(:schema, :lines), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN schema.lines AS lines ON (lines.x = points.id)'
+      ds = @ds1.graph(Sequel.qualify('schema', 'lines'), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN schema.lines AS lines ON (lines.x = points.id)'
+      ds = @ds1.graph(Sequel.qualify(Sequel.identifier(:schema), Sequel.identifier(:lines)), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN schema.lines AS lines ON (lines.x = points.id)'
+      ds = @ds1.graph(Sequel.qualify(Sequel.identifier('schema'), Sequel.identifier('lines')), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN schema.lines AS lines ON (lines.x = points.id)'
+    end
+
+    it "should handle a qualified identifier as the source" do
+      ds = @ds1.from(:schema__points).graph(:lines, :x=>:id)
+      ds.sql.should == 'SELECT schema.points.id, schema.points.x, schema.points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM schema.points LEFT OUTER JOIN lines ON (lines.x = schema.points.id)'
+      ds = @ds1.from(Sequel.qualify(:schema, :points)).graph(:lines, :x=>:id)
+      ds.sql.should == 'SELECT schema.points.id, schema.points.x, schema.points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM schema.points LEFT OUTER JOIN lines ON (lines.x = schema.points.id)'
+    end
+
+    it "should accept a SQL::AliasedExpression as the dataset" do
+      ds = @ds1.graph(Sequel.as(:lines, :foo), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, foo.id AS foo_id, foo.x AS foo_x, foo.y AS foo_y, foo.graph_id FROM points LEFT OUTER JOIN lines AS foo ON (foo.x = points.id)'
+      ds = @ds1.graph(Sequel.as(:schema__lines, :foo), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, foo.id AS foo_id, foo.x AS foo_x, foo.y AS foo_y, foo.graph_id FROM points LEFT OUTER JOIN schema.lines AS foo ON (foo.x = points.id)'
+      ds = @ds1.graph(Sequel.as(Sequel.identifier(:lines), :foo), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, foo.id AS foo_id, foo.x AS foo_x, foo.y AS foo_y, foo.graph_id FROM points LEFT OUTER JOIN lines AS foo ON (foo.x = points.id)'
+      ds = @ds1.graph(Sequel.as(Sequel.qualify(:schema, :lines), :foo), :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, foo.id AS foo_id, foo.x AS foo_x, foo.y AS foo_y, foo.graph_id FROM points LEFT OUTER JOIN schema.lines AS foo ON (foo.x = points.id)'
+    end
+
+    it "should raise an error if a symbol, dataset, or model is not used" do
+      proc{@ds1.graph(Object.new, :x=>:id)}.should raise_error(Sequel::Error)
+    end
+
+    it "should accept a :table_alias option" do
+      ds = @ds1.graph(:lines, {:x=>:id}, :table_alias=>:planes)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, planes.id AS planes_id, planes.x AS planes_x, planes.y AS planes_y, planes.graph_id FROM points LEFT OUTER JOIN lines AS planes ON (planes.x = points.id)'
+    end
+
+    it "should accept a :implicit_qualifier option" do
+      ds = @ds1.graph(:lines, {:x=>:id}, :implicit_qualifier=>:planes)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = planes.id)'
+    end
+
+    it "should accept a :join_type option" do
+      ds = @ds1.graph(:lines, {:x=>:id}, :join_type=>:inner)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points INNER JOIN lines ON (lines.x = points.id)'
+    end
+
+    it "should not select any columns from the graphed table if :select option is false" do
+      ds = @ds1.graph(:lines, {:x=>:id}, :select=>false).graph(:graphs, :id=>:graph_id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)'
+    end
+
+    it "should use the given columns if :select option is used" do
+      ds = @ds1.graph(:lines, {:x=>:id}, :select=>[:x, :graph_id]).graph(:graphs, :id=>:graph_id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.x AS lines_x, lines.graph_id, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x AS graphs_lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)'
+    end
+
+    it "should pass all join_conditions to join_table" do
+      ds = @ds1.graph(@ds2, [[:x, :id], [:y, :id]])
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON ((lines.x = points.id) AND (lines.y = points.id))'
+    end
+
+    it "should accept a block instead of conditions and pass it to join_table" do
+      ds = @ds1.graph(@ds2){|ja, lja, js| [[Sequel.qualify(ja, :x), Sequel.qualify(lja, :id)], [Sequel.qualify(ja, :y), Sequel.qualify(lja, :id)]]}
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON ((lines.x = points.id) AND (lines.y = points.id))'
+    end
+
+    it "should not add columns if graph is called after set_graph_aliases" do
+      ds = @ds1.set_graph_aliases([[:x,[:points, :x]], [:y,[:lines, :y]]])
+      ds.sql.should == 'SELECT points.x, lines.y FROM points'
+      ds = ds.graph(:lines, :x=>:id)
+      ds.sql.should == 'SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+    end
+
+    it "should allow graphing of multiple datasets" do
+      ds = @ds1.graph(@ds2, :x=>:id).graph(@ds3, :id=>:graph_id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x AS graphs_lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)'
+    end
+
+    it "should allow graphing of the same dataset multiple times" do
+      ds = @ds1.graph(@ds2, :x=>:id).graph(@ds2, {:y=>:points__id}, :table_alias=>:graph)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id, graph.id AS graph_id_0, graph.x AS graph_x, graph.y AS graph_y, graph.graph_id AS graph_graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN lines AS graph ON (graph.y = points.id)'
+    end
+
+    it "should raise an error if the table/table alias has already been used" do
+      proc{@ds1.graph(@ds1, :x=>:id)}.should raise_error(Sequel::Error)
+      proc{@ds1.graph(@ds2, :x=>:id)}.should_not raise_error
+      proc{@ds1.graph(@ds2, :x=>:id).graph(@ds2, :x=>:id)}.should raise_error(Sequel::Error)
+      proc{@ds1.graph(@ds2, :x=>:id).graph(@ds2, {:x=>:id}, :table_alias=>:blah)}.should_not raise_error
+    end
   end
 
-  it "#graph should not modify the current dataset's opts if current dataset is already graphed" do
-    ds2 = @ds1.graph(@ds2)
-    proc{@ds1.graph(@ds2)}.should_not raise_error
-    proc{ds2.graph(@ds3)}.should_not raise_error
-    proc{ds2.graph(@ds3)}.should_not raise_error
+  describe "#set_graph_aliases" do
+    it "should not modify the current dataset's opts" do
+      o1 = @ds1.opts
+      o2 = o1.dup
+      ds1 = @ds1.set_graph_aliases(:x=>[:graphs,:id])
+      @ds1.opts.should == o1
+      @ds1.opts.should == o2
+      ds1.opts.should_not == o1
+    end
+
+    it "should specify the graph mapping" do
+      ds = @ds1.graph(:lines, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+      ds = ds.set_graph_aliases(:x=>[:points, :x], :y=>[:lines, :y])
+      ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)',
+      'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+      ].should(include(ds.sql))
+    end
+
+    it "should allow a third entry to specify an expression to use other than the default" do
+      ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :x, 1], :y=>[:lines, :y, Sequel.function(:random)])
+      ['SELECT 1 AS x, random() AS y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)',
+      'SELECT random() AS y, 1 AS x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+      ].should(include(ds.sql))
+    end
+
+    it "should allow a single array entry to specify a table, assuming the same column as the key" do
+      ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points], :y=>[:lines])
+      ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)',
+      'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+      ].should(include(ds.sql))
+    end
+
+    it "should allow hash values to be symbols specifying table, assuming the same column as the key" do
+      ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>:points, :y=>:lines)
+      ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)',
+      'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+      ].should(include(ds.sql))
+    end
+
+    it "should only alias columns if necessary" do
+      ds = @ds1.set_graph_aliases(:x=>[:points, :x], :y=>[:lines, :y])
+      ['SELECT points.x, lines.y FROM points',
+      'SELECT lines.y, points.x FROM points'
+      ].should(include(ds.sql))
+
+      ds = @ds1.set_graph_aliases(:x1=>[:points, :x], :y=>[:lines, :y])
+      ['SELECT points.x AS x1, lines.y FROM points',
+      'SELECT lines.y, points.x AS x1 FROM points'
+      ].should(include(ds.sql))
+    end
   end
 
-  it "#graph should accept a simple dataset and pass the table to join" do
-    ds = @ds1.graph(@ds2, :x=>:id)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+  describe "#add_graph_aliases" do
+    it "should not modify the current dataset's opts" do
+      ds1 = @ds1.set_graph_aliases(:x=>[:graphs,:id])
+      o1 = ds1.opts
+      o2 = o1.dup
+      ds2 = ds1.add_graph_aliases(:y=>[:blah,:id])
+      ds1.opts.should == o1
+      ds1.opts.should == o2
+      ds2.opts.should_not == o1
+    end
+
+    it "should add columns to the graph mapping" do
+      @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :q]).add_graph_aliases(:y=>[:lines, :r]).sql.should == 'SELECT points.q AS x, lines.r AS y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
+    end
+
+    it "should raise an error if called without existing graph aliases" do
+      proc{@ds1.add_graph_aliases(:y=>[:lines, :r])}.should raise_error(Sequel::Error)
+    end
   end
 
-  it "#graph should use currently selected columns as the basis for the selected columns in a new graph" do
-    ds = @ds1.select(:id).graph(@ds2, :x=>:id)
-    ds.sql.should == 'SELECT points.id, lines.id AS lines_id, lines.x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-
-    ds = @ds1.select(:id, :x).graph(@ds2, :x=>:id)
-    ds.sql.should == 'SELECT points.id, points.x, lines.id AS lines_id, lines.x AS lines_x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-
-    ds = @ds1.select(Sequel.identifier(:id), Sequel.qualify(:points, :x)).graph(@ds2, :x=>:id)
-    ds.sql.should == 'SELECT points.id, points.x, lines.id AS lines_id, lines.x AS lines_x, lines.y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-
-    ds = @ds1.select(Sequel.identifier(:id).qualify(:points), Sequel.identifier(:x).as(:y)).graph(@ds2, :x=>:id)
-    ds.sql.should == 'SELECT points.id, points.x AS y, lines.id AS lines_id, lines.x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-
-    ds = @ds1.select(:id, Sequel.identifier(:x).qualify(Sequel.identifier(:points)).as(Sequel.identifier(:y))).graph(@ds2, :x=>:id)
-    ds.sql.should == 'SELECT points.id, points.x AS y, lines.id AS lines_id, lines.x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-  end
-
-  it "#graph should raise error if currently selected expressions cannot be handled" do
-    proc{@ds1.select(1).graph(@ds2, :x=>:id)}.should raise_error(Sequel::Error)
-  end
-
-  it "#graph should accept a complex dataset and pass it directly to join" do
-    ds = @ds1.graph(@ds2.filter(:x=>1), {:x=>:id})
-    ds.sql.should == 'SELECT points.id, points.x, points.y, t1.id AS t1_id, t1.x AS t1_x, t1.y AS t1_y, t1.graph_id FROM points LEFT OUTER JOIN (SELECT * FROM lines WHERE (x = 1)) AS t1 ON (t1.x = points.id)'
-  end
-
-  it "#graph should work on from_self datasets" do
-    ds = @ds1.from_self.graph(@ds2, :x=>:id)
-    ds.sql.should == 'SELECT t1.id, t1.x, t1.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM (SELECT * FROM points) AS t1 LEFT OUTER JOIN lines ON (lines.x = t1.id)'
-    ds = @ds1.graph(@ds2.from_self, :x=>:id)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, t1.id AS t1_id, t1.x AS t1_x, t1.y AS t1_y, t1.graph_id FROM points LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM lines) AS t1) AS t1 ON (t1.x = points.id)'
-    ds = @ds1.from_self.from_self.graph(@ds2.from_self.from_self, :x=>:id)
-    ds.sql.should == 'SELECT t1.id, t1.x, t1.y, t2.id AS t2_id, t2.x AS t2_x, t2.y AS t2_y, t2.graph_id FROM (SELECT * FROM (SELECT * FROM points) AS t1) AS t1 LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM (SELECT * FROM lines) AS t1) AS t1) AS t2 ON (t2.x = t1.id)'
-    ds = @ds1.from(@ds1, @ds3).graph(@ds2.from_self, :x=>:id)
-    ds.sql.should == 'SELECT t1.id, t1.x, t1.y, t3.id AS t3_id, t3.x AS t3_x, t3.y AS t3_y, t3.graph_id FROM (SELECT * FROM (SELECT * FROM points) AS t1, (SELECT * FROM graphs) AS t2) AS t1 LEFT OUTER JOIN (SELECT * FROM (SELECT * FROM lines) AS t1) AS t3 ON (t3.x = t1.id)'
-  end
-
-  it "#graph should accept a symbol table name as the dataset" do
-    ds = @ds1.graph(:lines, :x=>:id)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-  end
-
-  it "#graph should raise an error if a symbol, dataset, or model is not used" do
-    proc{@ds1.graph(Object.new, :x=>:id)}.should raise_error(Sequel::Error)
-  end
-
-  it "#graph should accept a :table_alias option" do
-    ds = @ds1.graph(:lines, {:x=>:id}, :table_alias=>:planes)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, planes.id AS planes_id, planes.x AS planes_x, planes.y AS planes_y, planes.graph_id FROM points LEFT OUTER JOIN lines AS planes ON (planes.x = points.id)'
-  end
-
-  it "#graph should accept a :implicit_qualifier option" do
-    ds = @ds1.graph(:lines, {:x=>:id}, :implicit_qualifier=>:planes)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = planes.id)'
-  end
-
-  it "#graph should accept a :join_type option" do
-    ds = @ds1.graph(:lines, {:x=>:id}, :join_type=>:inner)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points INNER JOIN lines ON (lines.x = points.id)'
-  end
-
-  it "#graph should not select any columns from the graphed table if :select option is false" do
-    ds = @ds1.graph(:lines, {:x=>:id}, :select=>false).graph(:graphs, :id=>:graph_id)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)'
-  end
-
-  it "#graph should use the given columns if :select option is used" do
-    ds = @ds1.graph(:lines, {:x=>:id}, :select=>[:x, :graph_id]).graph(:graphs, :id=>:graph_id)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.x AS lines_x, lines.graph_id, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x AS graphs_lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)'
-  end
-
-  it "#graph should pass all join_conditions to join_table" do
-    ds = @ds1.graph(@ds2, [[:x, :id], [:y, :id]])
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON ((lines.x = points.id) AND (lines.y = points.id))'
-  end
-
-  it "#graph should accept a block instead of conditions and pass it to join_table" do
-    ds = @ds1.graph(@ds2){|ja, lja, js| [[Sequel.qualify(ja, :x), Sequel.qualify(lja, :id)], [Sequel.qualify(ja, :y), Sequel.qualify(lja, :id)]]}
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON ((lines.x = points.id) AND (lines.y = points.id))'
-  end
-
-  it "#graph should not add columns if graph is called after set_graph_aliases" do
-    ds = @ds1.set_graph_aliases([[:x,[:points, :x]], [:y,[:lines, :y]]])
-    ds.sql.should == 'SELECT points.x, lines.y FROM points'
-    ds = ds.graph(:lines, :x=>:id)
-    ds.sql.should == 'SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-  end
-
-  it "#graph should allow graphing of multiple datasets" do
-    ds = @ds1.graph(@ds2, :x=>:id).graph(@ds3, :id=>:graph_id)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x AS graphs_lines_x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN graphs ON (graphs.id = lines.graph_id)'
-  end
-
-  it "#graph should allow graphing of the same dataset multiple times" do
-    ds = @ds1.graph(@ds2, :x=>:id).graph(@ds2, {:y=>:points__id}, :table_alias=>:graph)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id, graph.id AS graph_id_0, graph.x AS graph_x, graph.y AS graph_y, graph.graph_id AS graph_graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id) LEFT OUTER JOIN lines AS graph ON (graph.y = points.id)'
-  end
-
-  it "#graph should raise an error if the table/table alias has already been used" do
-    proc{@ds1.graph(@ds1, :x=>:id)}.should raise_error(Sequel::Error)
-    proc{@ds1.graph(@ds2, :x=>:id)}.should_not raise_error
-    proc{@ds1.graph(@ds2, :x=>:id).graph(@ds2, :x=>:id)}.should raise_error(Sequel::Error)
-    proc{@ds1.graph(@ds2, :x=>:id).graph(@ds2, {:x=>:id}, :table_alias=>:blah)}.should_not raise_error
-  end
-
-  it "#set_graph_aliases and #add_graph_aliases should not modify the current dataset's opts" do
-    o1 = @ds1.opts
-    o2 = o1.dup
-    ds1 = @ds1.set_graph_aliases(:x=>[:graphs,:id])
-    @ds1.opts.should == o1
-    @ds1.opts.should == o2
-    ds1.opts.should_not == o1
-    o3 = ds1.opts
-    ds2 = ds1.add_graph_aliases(:y=>[:blah,:id])
-    ds1.opts.should == o3
-    ds1.opts.should == o3
-    ds2.opts.should_not == o2
-  end
-
-  it "#set_graph_aliases should specify the graph mapping" do
-    ds = @ds1.graph(:lines, :x=>:id)
-    ds.sql.should == 'SELECT points.id, points.x, points.y, lines.id AS lines_id, lines.x AS lines_x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-    ds = ds.set_graph_aliases(:x=>[:points, :x], :y=>[:lines, :y])
-    ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)',
-    'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-    ].should(include(ds.sql))
-  end
-
-  it "#add_graph_aliases should add columns to the graph mapping" do
-    @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :q]).add_graph_aliases(:y=>[:lines, :r]).sql.should == 'SELECT points.q AS x, lines.r AS y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-  end
-
-  it "#add_graph_aliases should raise an error if called without existing graph aliases" do
-    proc{@ds1.add_graph_aliases(:y=>[:lines, :r])}.should raise_error(Sequel::Error)
-  end
-
-  it "#set_graph_aliases should allow a third entry to specify an expression to use other than the default" do
-    ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points, :x, 1], :y=>[:lines, :y, Sequel.function(:random)])
-    ['SELECT 1 AS x, random() AS y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)',
-    'SELECT random() AS y, 1 AS x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-    ].should(include(ds.sql))
-  end
-
-  it "#set_graph_aliases should allow a single array entry to specify a table, assuming the same column as the key" do
-    ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>[:points], :y=>[:lines])
-    ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)',
-    'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-    ].should(include(ds.sql))
-  end
-
-  it "#set_graph_aliases should allow hash values to be symbols specifying table, assuming the same column as the key" do
-    ds = @ds1.graph(:lines, :x=>:id).set_graph_aliases(:x=>:points, :y=>:lines)
-    ['SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)',
-    'SELECT lines.y, points.x FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
-    ].should(include(ds.sql))
-  end
-
-  it "#set_graph_aliases should only alias columns if necessary" do
-    ds = @ds1.set_graph_aliases(:x=>[:points, :x], :y=>[:lines, :y])
-    ['SELECT points.x, lines.y FROM points',
-    'SELECT lines.y, points.x FROM points'
-    ].should(include(ds.sql))
-
-    ds = @ds1.set_graph_aliases(:x1=>[:points, :x], :y=>[:lines, :y])
-    ['SELECT points.x AS x1, lines.y FROM points',
-    'SELECT lines.y, points.x AS x1 FROM points'
-    ].should(include(ds.sql))
-  end
-
-  it "#ungraphed should remove the splitting of result sets into component tables" do
-    @db.fetch = {:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7}
-    @ds1.graph(@ds2, :x=>:id).ungraphed.all.should == [{:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7}]
+  describe "#ungraphed" do
+    it "should remove the splitting of result sets into component tables" do
+      @db.fetch = {:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7}
+      @ds1.graph(@ds2, :x=>:id).ungraphed.all.should == [{:id=>1,:x=>2,:y=>3,:lines_id=>4,:lines_x=>5,:lines_y=>6,:graph_id=>7}]
+    end
   end
 end
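
    # Illustrative sketch, not part of the patch: a compact use of the graphing
    # API exercised above, assuming DB[:points] and a lines table as in the specs.
    ds = DB[:points].graph(:lines, :x=>:id)
    ds = ds.set_graph_aliases(:x=>[:points, :x], :y=>[:lines, :y])
    ds.sql  # => "SELECT points.x, lines.y FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)"
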
diff --git a/spec/core/placeholder_literalizer_spec.rb b/spec/core/placeholder_literalizer_spec.rb
new file mode 100644
index 0000000..a65931e
--- /dev/null
+++ b/spec/core/placeholder_literalizer_spec.rb
@@ -0,0 +1,145 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+
+describe "Dataset::PlaceholderLiteralizer" do
+  before do
+    @c = Sequel::Dataset::PlaceholderLiteralizer
+    @db = Sequel.mock
+    @ds = @db[:items]
+    @h = {:id=>1}
+    @ds.db.fetch = @h
+  end
+  
+  specify "should handle calls with no placeholders" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>1)}
+    loader.first.should == @h
+    @db.sqls.should == ["SELECT * FROM items WHERE (a = 1)"]
+  end
+  
+  specify "should handle calls with a single placeholder" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg)}
+    loader.first(1).should == @h
+    loader.first(2).should == @h
+    @db.sqls.should == ["SELECT * FROM items WHERE (a = 1)", "SELECT * FROM items WHERE (a = 2)"]
+  end
+  
+  specify "should handle calls with multiple placeholders" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg).where(:b=>Sequel.+(pl.arg, 1)).where(pl.arg)}
+    loader.first(1, :c, :id=>1).should == @h
+    loader.first(2, :d, :id=>2).should == @h
+    @db.sqls.should == ["SELECT * FROM items WHERE ((a = 1) AND (b = (c + 1)) AND (id = 1))", "SELECT * FROM items WHERE ((a = 2) AND (b = (d + 1)) AND (id = 2))"]
+  end
+  
+  specify "should handle calls with placeholders and delayed arguments" do
+    h = :h
+    s = :s
+    d = @ds.having(Sequel.delay{h}).select(Sequel.delay{s})
+    loader = @c.loader(d){|pl, ds| ds.where(:a=>pl.arg).where(:b=>Sequel.+(pl.arg, 1)).where(pl.arg)}
+    loader.first(1, :c, :id=>1).should == @h
+    h = :h2
+    s = :s2
+    loader.first(2, :d, :id=>2).should == @h
+    @db.sqls.should == ["SELECT s FROM items WHERE ((a = 1) AND (b = (c + 1)) AND (id = 1)) HAVING h", "SELECT s2 FROM items WHERE ((a = 2) AND (b = (d + 1)) AND (id = 2)) HAVING h2"]
+  end
+  
+  specify "should handle calls with a placeholders used as filter arguments" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(pl.arg)}
+    loader.first(:id=>1).should == @h
+    loader.first(proc{a(b)}).should == @h
+    loader.first("a = 1").should == @h
+    @db.sqls.should == ["SELECT * FROM items WHERE (id = 1)", "SELECT * FROM items WHERE a(b)", "SELECT * FROM items WHERE (a = 1)"]
+  end
+  
+  specify "should handle calls with a placeholders used as right hand side of condition specifiers" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg)}
+    loader.first(1).should == @h
+    loader.first([1, 2]).should == @h
+    loader.first(nil).should == @h
+    @db.sqls.should == ["SELECT * FROM items WHERE (a = 1)", "SELECT * FROM items WHERE (a IN (1, 2))", "SELECT * FROM items WHERE (a IS NULL)"]
+  end
+  
+  specify "should handle calls with a placeholder used multiple times" do
+    loader = @c.loader(@ds){|pl, ds| a = pl.arg; ds.where(:a=>a).where(:b=>a)}
+    loader.first(1).should == @h
+    loader.first(2).should == @h
+    @db.sqls.should == ["SELECT * FROM items WHERE ((a = 1) AND (b = 1))", "SELECT * FROM items WHERE ((a = 2) AND (b = 2))"]
+  end
+  
+  specify "should handle calls with a placeholder used multiple times in different capacities" do
+    loader = @c.loader(@ds){|pl, ds| a = pl.arg; ds.where(a).where(:b=>a)}
+    loader.first("a = 1").should == @h
+    loader.first(["a = ?", 2]).should == @h
+    @db.sqls.should == ["SELECT * FROM items WHERE ((a = 1) AND (b = 'a = 1'))", "SELECT * FROM items WHERE ((a = 2) AND (b IN ('a = ?', 2)))"]
+  end
+  
+  specify "should handle calls with manually specified argument positions" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg(1)).where(:b=>pl.arg(0))}
+    loader.first(1, 2).should == @h
+    loader.first(2, 1).should == @h
+    @db.sqls.should == ["SELECT * FROM items WHERE ((a = 2) AND (b = 1))", "SELECT * FROM items WHERE ((a = 1) AND (b = 2))"]
+  end
+  
+  specify "should handle dataset with row procs" do
+    @ds.row_proc = proc{|r| {:foo=>r[:id]+1}}
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg)}
+    loader.first(1).should == {:foo=>2}
+    @db.sqls.should == ["SELECT * FROM items WHERE (a = 1)"]
+  end
+  
+  specify "should return all rows for #all" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg)}
+    loader.all(1).should == [@h]
+    @db.sqls.should == ["SELECT * FROM items WHERE (a = 1)"]
+  end
+  
+  specify "should iterate over block for #all" do
+    a = []
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg)}
+    loader.all(1){|r| a << r}.should == [@h]
+    a.should == [@h]
+    @db.sqls.should == ["SELECT * FROM items WHERE (a = 1)"]
+  end
+  
+  specify "should iterate over block for #each" do
+    a = []
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg)}
+    loader.each(1){|r| a << r}
+    a.should == [@h]
+    @db.sqls.should == ["SELECT * FROM items WHERE (a = 1)"]
+  end
+  
+  specify "should return first value for #get" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg)}
+    loader.get(2).should == 1
+    @db.sqls.should == ["SELECT * FROM items WHERE (a = 2)"]
+  end
+
+  specify "should literalize args as NULL if :placeholder_literal_null is set" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(pl.arg=>:a).clone(:placeholder_literal_null=>true)}
+    loader.sql(1).should == "SELECT * FROM items WHERE (NULL = a)"
+  end
+  
+  specify "should raise an error if called with an incorrect number of arguments" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg)}
+    proc{loader.first}.should raise_error(Sequel::Error)
+    proc{loader.first(1, 2)}.should raise_error(Sequel::Error)
+  end
+
+  specify "should raise an error if called with an incorrect number of arguments when manually providing argument positions" do
+    loader = @c.loader(@ds){|pl, ds| ds.where(:a=>pl.arg(1))}
+    proc{loader.first}.should raise_error(Sequel::Error)
+    proc{loader.first(1)}.should raise_error(Sequel::Error)
+    proc{loader.first(1, 2, 3)}.should raise_error(Sequel::Error)
+  end
+
+  specify "should raise an error if argument literalized into a different string than returned by query" do
+    o = Object.new
+    def o.wrap(v)
+      @v = v
+      self
+    end
+    def o.sql_literal(ds)
+      ds.literal(@v)
+    end
+    proc{@c.loader(@ds){|pl, ds| ds.where(o.wrap(pl.arg))}}.should raise_error(Sequel::Error)
+  end
+end
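
The new spec above exercises Sequel::Dataset::PlaceholderLiteralizer (added in Sequel 4.10). As a minimal sketch, based only on the calls shown in the spec (the mock database and :items table are illustrative), a loader is built once and then reused with different placeholder values:

    require 'sequel'

    DB = Sequel.mock
    items = DB[:items]

    # The block runs once; pl.arg marks where each runtime value goes.
    loader = Sequel::Dataset::PlaceholderLiteralizer.loader(items) do |pl, ds|
      ds.where(:a => pl.arg)
    end

    loader.first(1)   # SELECT * FROM items WHERE (a = 1)
    loader.all(2)     # SELECT * FROM items WHERE (a = 2)

Since most of the SQL string is built a single time, only the placeholder values need to be literalized on each call.
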
diff --git a/spec/core/schema_generator_spec.rb b/spec/core/schema_generator_spec.rb
index 2548ecb..744883c 100644
--- a/spec/core/schema_generator_spec.rb
+++ b/spec/core/schema_generator_spec.rb
@@ -20,7 +20,7 @@ describe Sequel::Schema::Generator do
   end
   
   it "should respond to everything" do
-    @generator.respond_to?(:foo).should be_true
+    @generator.respond_to?(:foo).should == true
   end if RUBY_VERSION >= '1.9'
 
   it "should primary key column first" do
@@ -68,9 +68,9 @@ describe Sequel::Schema::Generator do
   
   it "finds columns" do
     [:title, :body, :parent_id, :id].each do |col|
-      @generator.has_column?(col).should be_true
+      @generator.has_column?(col).should == true
     end
-    @generator.has_column?(:foo).should_not be_true
+    @generator.has_column?(:foo).should_not == true
   end
   
   it "creates constraints" do
diff --git a/spec/core/schema_spec.rb b/spec/core/schema_spec.rb
index 4b9b9b1..20b4535 100644
--- a/spec/core/schema_spec.rb
+++ b/spec/core/schema_spec.rb
@@ -701,6 +701,12 @@ describe "DB#create_table!" do
     @db.create_table!(:cats){|*a|}
     @db.sqls.should == ['DROP TABLE cats', 'CREATE TABLE cats ()']
   end
+  
+  specify "should use IF EXISTS if the database supports it" do
+    meta_def(@db, :supports_drop_table_if_exists?){true}
+    @db.create_table!(:cats){|*a|}
+    @db.sqls.should == ['DROP TABLE IF EXISTS cats', 'CREATE TABLE cats ()']
+  end
 end
 
 describe "DB#create_table?" do
@@ -775,6 +781,54 @@ describe "DB#create_join_table" do
   end
 end
   
+describe "DB#create_join_table?" do
+  before do
+    @db = Sequel.mock
+  end
+  
+  specify "should create the table if it does not already exist" do
+    meta_def(@db, :table_exists?){|a| false}
+    @db.create_join_table?(:cat_id=>:cats, :dog_id=>:dogs)
+    @db.sqls.should == ['CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)']
+  end
+
+  specify "should not create the table if it already exists" do
+    meta_def(@db, :table_exists?){|a| true}
+    @db.create_join_table?(:cat_id=>:cats, :dog_id=>:dogs)
+    @db.sqls.should == []
+  end
+
+  specify "should use IF NOT EXISTS if the database supports it" do
+    meta_def(@db, :supports_create_table_if_not_exists?){true}
+    @db.create_join_table?(:cat_id=>:cats, :dog_id=>:dogs)
+    @db.sqls.should == ['CREATE TABLE IF NOT EXISTS cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)']
+  end
+end
+  
+describe "DB#create_join_table!" do
+  before do
+    @db = Sequel.mock
+  end
+  
+  specify "should drop the table first if it already exists" do
+    meta_def(@db, :table_exists?){|a| true}
+    @db.create_join_table!(:cat_id=>:cats, :dog_id=>:dogs)
+    @db.sqls.should == ['DROP TABLE cats_dogs', 'CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)']
+  end
+
+  specify "should not drop the table if it doesn't exists" do
+    meta_def(@db, :table_exists?){|a| false}
+    @db.create_join_table!(:cat_id=>:cats, :dog_id=>:dogs)
+    @db.sqls.should == ['CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)']
+  end
+
+  specify "should use IF EXISTS if the database supports it" do
+    meta_def(@db, :supports_drop_table_if_exists?){true}
+    @db.create_join_table!(:cat_id=>:cats, :dog_id=>:dogs)
+    @db.sqls.should == ['DROP TABLE IF EXISTS cats_dogs', 'CREATE TABLE cats_dogs (cat_id integer NOT NULL REFERENCES cats, dog_id integer NOT NULL REFERENCES dogs, PRIMARY KEY (cat_id, dog_id))', 'CREATE INDEX cats_dogs_dog_id_cat_id_index ON cats_dogs (dog_id, cat_id)']
+  end
+end
+  
 describe "DB#drop_join_table" do
   before do
     @db = Sequel.mock
@@ -1345,6 +1399,13 @@ describe "Database#create_view" do
     @db.sqls.should == ['CREATE VIEW test (d, e) AS SELECT a, b FROM items ORDER BY c']
   end
 
+  specify "should handle :check option" do
+    @db.create_view :test, @db[:items].select(:a, :b).order(:c), :check=>true
+    @db.sqls.should == ['CREATE VIEW test AS SELECT a, b FROM items ORDER BY c WITH CHECK OPTION']
+    @db.create_view :test, @db[:items].select(:a, :b).order(:c), :check=>:local
+    @db.sqls.should == ['CREATE VIEW test AS SELECT a, b FROM items ORDER BY c WITH LOCAL CHECK OPTION']
+  end
+
   specify "should handle create_or_replace_view" do
     @db.create_or_replace_view :sch__test, "SELECT * FROM xyz"
     @db.sqls.should == ['DROP VIEW sch.test', 'CREATE VIEW sch.test AS SELECT * FROM xyz']
@@ -1379,10 +1440,15 @@ describe "Database#drop_view" do
     @db.sqls.should == ['DROP VIEW cats', 'DROP VIEW dogs']
   end
 
-  specify "should take an options hash and support the :cascade option" do
+  specify "should support the :cascade option" do
     @db.drop_view :cats, :dogs, :cascade=>true
     @db.sqls.should == ['DROP VIEW cats CASCADE', 'DROP VIEW dogs CASCADE']
   end
+
+  specify "should support the :if_exists option" do
+    @db.drop_view :cats, :dogs, :if_exists=>true
+    @db.sqls.should == ['DROP VIEW IF EXISTS cats', 'DROP VIEW IF EXISTS dogs']
+  end
 end
 
 describe "Database#alter_table_sql" do
diff --git a/spec/core/spec_helper.rb b/spec/core/spec_helper.rb
index 1517024..52f5775 100644
--- a/spec/core/spec_helper.rb
+++ b/spec/core/spec_helper.rb
@@ -11,7 +11,9 @@ unless Object.const_defined?('Sequel')
 end
 Sequel::Deprecation.backtrace_filter = lambda{|line, lineno| lineno < 4 || line =~ /_spec\.rb/}
 
-(defined?(RSpec) ? RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do
+require File.join(File.dirname(File.expand_path(__FILE__)), "../rspec_helper.rb")
+
+RSPEC_EXAMPLE_GROUP.class_eval do
   def meta_def(obj, name, &block)
     (class << obj; self end).send(:define_method, name, &block)
   end
diff --git a/spec/core_extensions_spec.rb b/spec/core_extensions_spec.rb
index 4deb64a..82fe663 100644
--- a/spec/core_extensions_spec.rb
+++ b/spec/core_extensions_spec.rb
@@ -22,9 +22,11 @@ if RUBY_VERSION < '1.9.0'
   Sequel.extension :ruby18_symbol_extensions
 end
 
+require File.join(File.dirname(File.expand_path(__FILE__)), "rspec_helper.rb")
+
 describe "Sequel core extensions" do
   specify "should have Sequel.core_extensions? be true if enabled" do
-    Sequel.core_extensions?.should be_true
+    Sequel.core_extensions?.should == true
   end
 end
 
@@ -645,6 +647,12 @@ describe "Postgres extensions integration" do
 
   it "Symbol#pg_json should return an JSONOp" do
     @db.literal(:a.pg_json[%w'a b']).should == "(a #> ARRAY['a','b'])"
+    @db.literal(:a.pg_json.extract('a')).should == "json_extract_path(a, 'a')"
+  end
+
+  it "Symbol#pg_jsonb should return an JSONBOp" do
+    @db.literal(:a.pg_jsonb[%w'a b']).should == "(a #> ARRAY['a','b'])"
+    @db.literal(:a.pg_jsonb.extract('a')).should == "jsonb_extract_path(a, 'a')"
   end
 
   it "Symbol#pg_range should return a RangeOp" do
@@ -660,6 +668,10 @@ describe "Postgres extensions integration" do
     @db.literal([1].pg_json).should == "'[1]'::json"
   end
 
+  it "Array#pg_jsonb should return a JSONBArray" do
+    @db.literal([1].pg_jsonb).should == "'[1]'::jsonb"
+  end
+
   it "Array#pg_row should return a ArrayRow" do
     @db.literal([1].pg_row).should == "ROW(1)"
   end
@@ -672,6 +684,10 @@ describe "Postgres extensions integration" do
     @db.literal({'a'=>'b'}.pg_json).should == "'{\"a\":\"b\"}'::json"
   end
 
+  it "Hash#pg_jsonb should return an JSONBHash" do
+    @db.literal({'a'=>'b'}.pg_jsonb).should == "'{\"a\":\"b\"}'::jsonb"
+  end
+
   it "Range#pg_range should return an PGRange" do
     @db.literal((1..2).pg_range).should == "'[1,2]'"
     @db.literal((1..2).pg_range(:int4range)).should == "'[1,2]'::int4range"
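The jsonb examples added above mirror the existing json ones. Outside the spec, the same calls look roughly like this (mock Postgres connection; the column name and hash value are illustrative):

    Sequel.extension :core_extensions        # enables the Symbol/Hash methods used below
    Sequel.extension :pg_json, :pg_json_ops

    DB = Sequel.connect('mock://postgres')
    DB.literal({'a'=>'b'}.pg_jsonb)          # "'{\"a\":\"b\"}'::jsonb"
    DB.literal(:a.pg_jsonb.extract('a'))     # "jsonb_extract_path(a, 'a')"
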
diff --git a/spec/extensions/active_model_spec.rb b/spec/extensions/active_model_spec.rb
index 8dc2baa..e4dfa8a 100644
--- a/spec/extensions/active_model_spec.rb
+++ b/spec/extensions/active_model_spec.rb
@@ -2,13 +2,22 @@ require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
 
 begin
   require 'active_model'
-  require 'test/unit'
-  if Test::Unit.respond_to?(:run=)
-    Test::Unit.run = false
-    require 'test/unit/testresult'
-  elsif defined?(MiniTest::Unit)
-    class << MiniTest::Unit
-      def autorun; end
+  begin
+    require 'minitest'
+    if defined?(MiniTest::Unit)
+      class << MiniTest::Unit
+        def autorun; end
+      end
+    end
+    if defined?(MiniTest::Test)
+      test_class = MiniTest::Test
+    end
+  rescue
+    require 'test/unit'
+    test_class = Test::Unit::TestCase
+    if Test::Unit.respond_to?(:run=)
+      Test::Unit.run = false
+      require 'test/unit/testresult'
     end
   end
 rescue LoadError => e
@@ -16,7 +25,7 @@ rescue LoadError => e
 else
 describe "ActiveModel plugin" do
   specify "should be compliant to the ActiveModel spec" do
-    tc = Class.new(Test::Unit::TestCase)
+    tc = Class.new(test_class)
     tc.class_eval do
       define_method(:setup) do
         class ::AMLintTest < Sequel::Model
@@ -94,7 +103,7 @@ describe "ActiveModel plugin" do
       end
       
     end
-    if defined?(MiniTest::Unit)
+    if defined?(MiniTest::Test) || defined?(MiniTest::Unit)
       tc.instance_methods.map{|x| x.to_s}.reject{|n| n !~ /\Atest_/}.each do |m|
         i = tc.new(m)
         i.setup
diff --git a/spec/extensions/association_pks_spec.rb b/spec/extensions/association_pks_spec.rb
index ce19b6a..05be16d 100644
--- a/spec/extensions/association_pks_spec.rb
+++ b/spec/extensions/association_pks_spec.rb
@@ -89,8 +89,10 @@ describe "Sequel::Plugins::AssociationPks" do
     sqls = @db.sqls
     sqls[0].should == "DELETE FROM albums_tags WHERE ((album_id = 2) AND (tag_id NOT IN (1, 3)))"
     sqls[1].should == 'SELECT tag_id FROM albums_tags WHERE (album_id = 2)'
-    sqls[2].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, 1|1, 2)\)/
-    sqls.length.should == 3
+    sqls[2].should == 'BEGIN'
+    sqls[3].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, 1|1, 2)\)/
+    sqls[4].should == 'COMMIT'
+    sqls.length.should == 5
   end
 
   specify "should return correct right-side associated cpks for one_to_many associations" do
@@ -119,9 +121,9 @@ describe "Sequel::Plugins::AssociationPks" do
     sqls = @db.sqls
     sqls[0].should == "DELETE FROM albums_vocalists WHERE ((album_id = 2) AND ((first, last) NOT IN (('F1', 'L1'), ('F2', 'L2'))))"
     sqls[1].should == 'SELECT first, last FROM albums_vocalists WHERE (album_id = 2)'
-    match = sqls[2].match(/INSERT INTO albums_vocalists \((.*)\) VALUES \((.*)\)/)
+    match = sqls[3].match(/INSERT INTO albums_vocalists \((.*)\) VALUES \((.*)\)/)
     Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"'F1'", "last"=>"'L1'", "album_id"=>"2"}
-    sqls.length.should == 3
+    sqls.length.should == 5
   end
 
   specify "should return correct associated pks for left-side cpks for one_to_many associations" do
@@ -152,9 +154,9 @@ describe "Sequel::Plugins::AssociationPks" do
     sqls = @db.sqls
     sqls[0].should == "DELETE FROM vocalists_instruments WHERE ((first = 'F2') AND (last = 'L2') AND (instrument_id NOT IN (1, 2)))"
     sqls[1].should == "SELECT instrument_id FROM vocalists_instruments WHERE ((first = 'F2') AND (last = 'L2'))"
-    match = sqls[2].match(/INSERT INTO vocalists_instruments \((.*)\) VALUES \((.*)\)/)
+    match = sqls[3].match(/INSERT INTO vocalists_instruments \((.*)\) VALUES \((.*)\)/)
     Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"'F2'", "last"=>"'L2'", "instrument_id"=>"1"}
-    sqls.length.should == 3
+    sqls.length.should == 5
   end
 
   specify "should return correct right-side associated cpks for left-side cpks for one_to_many associations" do
@@ -185,9 +187,9 @@ describe "Sequel::Plugins::AssociationPks" do
     sqls = @db.sqls
     sqls[0].should == "DELETE FROM vocalists_hits WHERE ((first = 'F2') AND (last = 'L2') AND ((year, week) NOT IN ((1997, 1), (1997, 2))))"
     sqls[1].should == "SELECT year, week FROM vocalists_hits WHERE ((first = 'F2') AND (last = 'L2'))"
-    match = sqls[2].match(/INSERT INTO vocalists_hits \((.*)\) VALUES \((.*)\)/)
+    match = sqls[3].match(/INSERT INTO vocalists_hits \((.*)\) VALUES \((.*)\)/)
     Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"'F2'", "last"=>"'L2'", "year"=>"1997", "week"=>"1"}
-    sqls.length.should == 3
+    sqls.length.should == 5
   end
 
   specify "should use transactions if the object is configured to use transactions" do
@@ -231,8 +233,8 @@ describe "Sequel::Plugins::AssociationPks" do
     sqls = @db.sqls
     sqls[0].should == "DELETE FROM albums_tags WHERE ((album_id = 2) AND (tag_id NOT IN (1, 3)))"
     sqls[1].should == 'SELECT tag_id FROM albums_tags WHERE (album_id = 2)'
-    sqls[2].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, 1|1, 2)\)/
-    sqls.length.should == 3
+    sqls[3].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, 1|1, 2)\)/
+    sqls.length.should == 5
   end
 
   specify "should not automatically convert keys to numbers if the primary key is an integer for many_to_many associations" do
@@ -241,9 +243,9 @@ describe "Sequel::Plugins::AssociationPks" do
     sqls = @db.sqls
     sqls[0].should == "DELETE FROM albums_tags WHERE ((album_id = 2) AND (tag_id NOT IN ('1', '3')))"
     sqls[1].should == 'SELECT tag_id FROM albums_tags WHERE (album_id = 2)'
-    sqls[2].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, '1'|'1', 2)\)/
-    sqls[3].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, '3'|'3', 2)\)/
-    sqls.length.should == 4
+    sqls[3].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, '1'|'1', 2)\)/
+    sqls[4].should =~ /INSERT INTO albums_tags \((album_id, tag_id|tag_id, album_id)\) VALUES \((2, '3'|'3', 2)\)/
+    sqls.length.should == 6
   end
 
   specify "should automatically convert keys to numbers for appropriate integer primary key for composite key associations" do
@@ -254,9 +256,9 @@ describe "Sequel::Plugins::AssociationPks" do
     sqls = @db.sqls
     sqls[0].should == "DELETE FROM vocalists_hits WHERE ((first = 'F2') AND (last = 'L2') AND ((year, week) NOT IN ((1997, 1), (1997, 2))))"
     sqls[1].should == "SELECT year, week FROM vocalists_hits WHERE ((first = 'F2') AND (last = 'L2'))"
-    match = sqls[2].match(/INSERT INTO vocalists_hits \((.*)\) VALUES \((.*)\)/)
+    match = sqls[3].match(/INSERT INTO vocalists_hits \((.*)\) VALUES \((.*)\)/)
     Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"'F2'", "last"=>"'L2'", "year"=>"1997", "week"=>"1"}
-    sqls.length.should == 3
+    sqls.length.should == 5
 
     @Vocalist.db_schema[:first][:type] = :integer
     @Vocalist.db_schema[:last][:type] = :integer
@@ -270,10 +272,10 @@ describe "Sequel::Plugins::AssociationPks" do
     sqls = @db.sqls
     sqls[0].should == "DELETE FROM albums_vocalists WHERE ((album_id = 2) AND ((first, last) NOT IN ((11, 11), (12, 12))))"
     sqls[1].should == 'SELECT first, last FROM albums_vocalists WHERE (album_id = 2)'
-    match = sqls[2].match(/INSERT INTO albums_vocalists \((.*)\) VALUES \((.*)\)/)
-    Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"11", "last"=>"11", "album_id"=>"2"}
     match = sqls[3].match(/INSERT INTO albums_vocalists \((.*)\) VALUES \((.*)\)/)
+    Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"11", "last"=>"11", "album_id"=>"2"}
+    match = sqls[4].match(/INSERT INTO albums_vocalists \((.*)\) VALUES \((.*)\)/)
     Hash[match[1].split(', ').zip(match[2].split(', '))].should == {"first"=>"12", "last"=>"12", "album_id"=>"2"}
-    sqls.length.should == 4
+    sqls.length.should == 6
   end
 end
diff --git a/spec/extensions/association_proxies_spec.rb b/spec/extensions/association_proxies_spec.rb
index 8f6fe28..a6357db 100644
--- a/spec/extensions/association_proxies_spec.rb
+++ b/spec/extensions/association_proxies_spec.rb
@@ -25,7 +25,7 @@ describe "Sequel::Plugins::AssociationProxies" do
   
   it "should send method calls to the association dataset if sent a non-array method" do
     @i.associations.has_key?(:tags).should == false
-    @t.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
+    @t.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE ((items_tags.item_id = 1) AND (a = 1))"
     @i.associations.has_key?(:tags).should == false
   end
   
@@ -34,9 +34,9 @@ describe "Sequel::Plugins::AssociationProxies" do
       opts[:method] == :where || opts[:arguments].length == 2 || opts[:block]
     end
     @i.associations.has_key?(:tags).should == false
-    @t.where(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
-    @t.filter('a = ?', 1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
-    @t.filter{{:a=>1}}.sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
+    @t.where(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE ((items_tags.item_id = 1) AND (a = 1))"
+    @t.filter('a = ?', 1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE ((items_tags.item_id = 1) AND (a = 1))"
+    @t.filter{{:a=>1}}.sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE ((items_tags.item_id = 1) AND (a = 1))"
 
     @i.associations.has_key?(:tags).should == false
     Item.plugin :association_proxies do |opts|
@@ -47,11 +47,11 @@ describe "Sequel::Plugins::AssociationProxies" do
       is_size && !cached && !proxy_arg && !proxy_block
     end
     @t.size.should == 1
-    Item.db.sqls.should == ["SELECT count(*) AS count FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) LIMIT 1"]
+    Item.db.sqls.should == ["SELECT count(*) AS count FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE (items_tags.item_id = 1) LIMIT 1"]
     @i.tags{|ds| ds}.size.should == 1
-    Item.db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1))"]
+    Item.db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE (items_tags.item_id = 1)"]
     @i.tags(true).size.should == 1
-    Item.db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1))"]
+    Item.db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE (items_tags.item_id = 1)"]
     @t.size.should == 1
     Item.db.sqls.should == []
   end
@@ -63,7 +63,7 @@ describe "Sequel::Plugins::AssociationProxies" do
     Item.db.sqls.length.should == 0
     @i.tags(true).select{|x| false}.should == []
     Item.db.sqls.length.should == 1
-    @t.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
+    @t.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE ((items_tags.item_id = 1) AND (a = 1))"
     Item.db.sqls.length.should == 0
   end
   
@@ -80,7 +80,7 @@ describe "Sequel::Plugins::AssociationProxies" do
     i.associations.has_key?(:tags).should == false
     i.tags.select{|x| false}.should == []
     i.associations.has_key?(:tags).should == true
-    i.tags.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON ((items_tags.tag_id = tags.id) AND (items_tags.item_id = 1)) WHERE (a = 1)"
+    i.tags.filter(:a=>1).sql.should == "SELECT tags.* FROM tags INNER JOIN items_tags ON (items_tags.tag_id = tags.id) WHERE ((items_tags.item_id = 1) AND (a = 1))"
   end
   
 end
diff --git a/spec/extensions/auto_validations_spec.rb b/spec/extensions/auto_validations_spec.rb
index 0874fe5..7cec975 100644
--- a/spec/extensions/auto_validations_spec.rb
+++ b/spec/extensions/auto_validations_spec.rb
@@ -15,6 +15,7 @@ describe "Sequel::Plugins::AutoValidations" do
     end
     def db.supports_index_parsing?() true end
     def db.indexes(t, *)
+      raise if t.is_a?(Sequel::Dataset)
       return [] if t != :test
       {:a=>{:columns=>[:name, :num], :unique=>true}, :b=>{:columns=>[:num], :unique=>false}}
     end
@@ -27,19 +28,19 @@ describe "Sequel::Plugins::AutoValidations" do
   end
 
   it "should have automatically created validations" do
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {:d=>["is not present"], :name=>["is not present"]}
 
     @m.name = ''
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {:d=>["is not present"]}
 
     @m.set(:d=>'/', :num=>'a', :name=>'1')
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {:d=>["is not a valid date"], :num=>["is not a valid integer"]}
 
     @m.set(:d=>Date.today, :num=>1)
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {[:name, :num]=>["is already taken"]}
   end
 
@@ -47,20 +48,25 @@ describe "Sequel::Plugins::AutoValidations" do
     def (@m.db).supports_index_parsing?() false end
     @m.model.send(:setup_auto_validations)
     @m.set(:d=>Date.today, :num=>1, :name=>'1')
-    @m.valid?.should be_true
+    @m.valid?.should == true
+  end
+
+  it "should handle models that select from subqueries" do
+    @c.set_dataset @c.dataset.from_self
+    proc{@c.send(:setup_auto_validations)}.should_not raise_error
   end
 
   it "should support :not_null=>:presence option" do
     @c.plugin :auto_validations, :not_null=>:presence
     @m.set(:d=>Date.today, :num=>'')
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {:name=>["is not present"]}
   end
 
   it "should automatically validate explicit nil values for columns with not nil defaults" do
     @m.set(:d=>Date.today, :name=>1, :nnd=>nil)
     @m.id = nil
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {:id=>["is not present"], :nnd=>["is not present"]}
   end
 
@@ -68,46 +74,64 @@ describe "Sequel::Plugins::AutoValidations" do
     @c = Class.new(@c)
     @m = @c.new
     @c.skip_auto_validations(:not_null)
-    @m.valid?.should be_true
+    @m.valid?.should == true
 
     @m.set(:d=>'/', :num=>'a', :name=>'1')
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {:d=>["is not a valid date"], :num=>["is not a valid integer"]}
 
     @c.skip_auto_validations(:types)
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {[:name, :num]=>["is already taken"]}
 
     @c.skip_auto_validations(:unique)
-    @m.valid?.should be_true
+    @m.valid?.should == true
   end
 
   it "should allow skipping all auto validations" do
     @c = Class.new(@c)
     @m = @c.new
     @c.skip_auto_validations(:all)
-    @m.valid?.should be_true
+    @m.valid?.should == true
     @m.set(:d=>'/', :num=>'a', :name=>'1')
-    @m.valid?.should be_true
+    @m.valid?.should == true
   end
 
   it "should work correctly in subclasses" do
     @c = Class.new(@c)
     @m = @c.new
-    @m.valid?.should be_false
+    @m.valid?.should == false
+    @m.errors.should == {:d=>["is not present"], :name=>["is not present"]}
+
+    @m.set(:d=>'/', :num=>'a', :name=>'1')
+    @m.valid?.should == false
+    @m.errors.should == {:d=>["is not a valid date"], :num=>["is not a valid integer"]}
+
+    @m.set(:d=>Date.today, :num=>1)
+    @m.valid?.should == false
+    @m.errors.should == {[:name, :num]=>["is already taken"]}
+  end
+
+  it "should work correctly in STI subclasses" do
+    @c.plugin(:single_table_inheritance, :num, :model_map=>{1=>@c}, :key_map=>proc{[1, 2]})
+    sc = Class.new(@c)
+    @m = sc.new
+    @m.valid?.should == false
     @m.errors.should == {:d=>["is not present"], :name=>["is not present"]}
 
     @m.set(:d=>'/', :num=>'a', :name=>'1')
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {:d=>["is not a valid date"], :num=>["is not a valid integer"]}
 
+    @m.db.sqls
     @m.set(:d=>Date.today, :num=>1)
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {[:name, :num]=>["is already taken"]}
+    @m.db.sqls.should == ["SELECT count(*) AS count FROM test WHERE ((name = '1') AND (num = 1)) LIMIT 1"]
   end
 
   it "should work correctly when changing the dataset" do
     @c.set_dataset(@c.db[:foo])
-    @c.new.valid?.should be_true
+    @c.new.valid?.should == true
   end
 end
diff --git a/spec/extensions/caching_spec.rb b/spec/extensions/caching_spec.rb
index 6bf1362..4b5293b 100644
--- a/spec/extensions/caching_spec.rb
+++ b/spec/extensions/caching_spec.rb
@@ -173,14 +173,14 @@ describe Sequel::Model, "caching" do
     @cache[m.cache_key].should == m
     m.name = 'hey'
     m.save
-    @cache.has_key?(m.cache_key).should be_false
+    @cache.has_key?(m.cache_key).should == false
     @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1", "UPDATE items SET name = 'hey' WHERE (id = 1)"]
 
     m = @c2[1]
     @cache[m.cache_key].should == m
     m.name = 'hey'
     m.save
-    @cache.has_key?(m.cache_key).should be_false
+    @cache.has_key?(m.cache_key).should == false
     @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1", "UPDATE items SET name = 'hey' WHERE (id = 1)"]
   end
 
@@ -188,13 +188,13 @@ describe Sequel::Model, "caching" do
     m = @c[1]
     @cache[m.cache_key].should == m
     m.delete
-    @cache.has_key?(m.cache_key).should be_false
+    @cache.has_key?(m.cache_key).should == false
     @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1", "DELETE FROM items WHERE id = 1"]
 
     m = @c2[1]
     @cache[m.cache_key].should == m
     m.delete
-    @cache.has_key?(m.cache_key).should be_false
+    @cache.has_key?(m.cache_key).should == false
     @c.db.sqls.should == ["SELECT * FROM items WHERE id = 1", "DELETE FROM items WHERE id = 1"]
   end
   
diff --git a/spec/extensions/class_table_inheritance_spec.rb b/spec/extensions/class_table_inheritance_spec.rb
index 67b11e1..028fd2a 100644
--- a/spec/extensions/class_table_inheritance_spec.rb
+++ b/spec/extensions/class_table_inheritance_spec.rb
@@ -94,6 +94,18 @@ describe "class_table_inheritance plugin" do
     Manager.all.collect{|x| x.class}.should == [Manager, Manager]
   end
   
+  it "should handle a model map with integer values" do
+    Employee.plugin(:class_table_inheritance, :key=>:kind, :model_map=>{0=>:Employee, 1=>:Manager, 2=>:Executive})
+    Object.send(:remove_const, :Executive)
+    Object.send(:remove_const, :Manager)
+    class ::Manager < Employee; end 
+    class ::Executive < Manager; end 
+    Employee.dataset._fetch = [{:kind=>nil},{:kind=>0},{:kind=>1}, {:kind=>2}]
+    Employee.all.collect{|x| x.class}.should == [Employee, Employee, Manager, Executive]
+    Manager.dataset._fetch = [{:kind=>nil},{:kind=>0},{:kind=>1}, {:kind=>2}]
+    Manager.all.collect{|x| x.class}.should == [Manager, Employee, Manager, Executive]
+  end
+  
   it "should fallback to the main class if the given class does not exist" do
     @ds._fetch = [{:kind=>'Employee'}, {:kind=>'Manager'}, {:kind=>'Blah'}, {:kind=>'Staff'}]
     Employee.all.collect{|x| x.class}.should == [Employee, Manager, Employee, Staff]
@@ -104,19 +116,19 @@ describe "class_table_inheritance plugin" do
     Manager.all.collect{|x| x.class}.should == [Manager, Executive, Manager]
   end
 
-  it "should add a before_create hook that sets the model class name for the key" do
+  it "should sets the model class name for the key when creating new parent class records" do
     Employee.create
     @db.sqls.should == ["INSERT INTO employees (kind) VALUES ('Employee')"]
   end
   
-  it "should add a before_create hook that sets the model class name for the key in subclasses" do
+  it "should sets the model class name for the key when creating new subclass records" do
     Executive.create
     @db.sqls.should == ["INSERT INTO employees (kind) VALUES ('Executive')",
       "INSERT INTO managers (id) VALUES (1)",
       "INSERT INTO executives (id) VALUES (1)"]
   end
 
-  it "should ignore existing cti_key value" do
+  it "should ignore existing cti_key value when creating new records" do
     Employee.create(:kind=>'Manager')
     @db.sqls.should == ["INSERT INTO employees (kind) VALUES ('Employee')"]
   end
@@ -127,6 +139,14 @@ describe "class_table_inheritance plugin" do
       "INSERT INTO managers (id) VALUES (1)"]
   end
 
+  it "should handle validations on the type column field" do
+    o = Employee.new
+    def o.validate
+      errors.add(:kind, 'not present') unless kind
+    end
+    o.valid?.should == true
+  end
+
   it "should raise an error if attempting to create an anonymous subclass" do
     proc{Class.new(Manager)}.should raise_error(Sequel::Error)
   end
diff --git a/spec/extensions/columns_introspection_spec.rb b/spec/extensions/columns_introspection_spec.rb
index 7576221..c34d704 100644
--- a/spec/extensions/columns_introspection_spec.rb
+++ b/spec/extensions/columns_introspection_spec.rb
@@ -74,6 +74,7 @@ describe "columns_introspection extension" do
 
   specify "should issue a database query when common table expressions are used" do
     @db.instance_variable_set(:@schemas, "a"=>[[:x, {}]])
+    def @ds.supports_cte?(*) true end
     @ds.with(:a, @ds).columns
     @db.sqls.length.should == 1
   end
diff --git a/spec/extensions/constraint_validations_spec.rb b/spec/extensions/constraint_validations_spec.rb
index 4e6fbe4..0f68c8b 100644
--- a/spec/extensions/constraint_validations_spec.rb
+++ b/spec/extensions/constraint_validations_spec.rb
@@ -133,7 +133,9 @@ describe "constraint_validations extension" do
     @db.extension(:constraint_validations)
     @db.create_table(:foo){String :name; validate{presence :name}}
     sqls = @db.sqls
-    parse_insert(sqls.slice!(1)).should == {:validation_type=>"presence", :column=>"name", :table=>"foo"}
+    s = sqls.slice!(1)
+    m = /\AINSERT INTO sequel_constraint_validations \((.*)\) SELECT (.*) FROM DUAL\z/.match(s)
+    Hash[*m[1].split(', ').map{|v| v.to_sym}.zip(m[2].split(', ').map{|v| parse_insert_value(v)}).reject{|k, v| v.nil?}.flatten].should == {:validation_type=>"presence", :column=>"name", :table=>"foo"}
     sqls.should == ["BEGIN", "COMMIT", "CREATE TABLE foo (name varchar(255), CHECK ((name IS NOT NULL) AND (trim(name) IS NOT NULL)))"]
   end
 
@@ -276,6 +278,13 @@ describe "constraint_validations extension" do
     @db.sqls.should == ["DELETE FROM sequel_constraint_validations WHERE ((table, constraint_name) IN (('foo', 'bar')))", "ALTER TABLE foo DROP CONSTRAINT bar"]
   end
 
+  it "should drop constraints and validations before adding new ones" do
+    @db.alter_table(:foo){String :name; validate{unique :name; drop :bar}}
+    sqls = @db.sqls
+    parse_insert(sqls.slice!(2)).should == {:validation_type=>"unique", :column=>"name", :table=>"foo"}
+    sqls.should == ["DELETE FROM sequel_constraint_validations WHERE ((table, constraint_name) IN (('foo', 'bar')))", "BEGIN", "COMMIT", "ALTER TABLE foo ADD UNIQUE (name)", "ALTER TABLE foo DROP CONSTRAINT bar"]
+  end
+
   it "should raise an error if attempting to validate inclusion with a range of non-integers" do
     proc{@db.create_table(:foo){String :name; validate{includes 'a'..'z', :name}}}.should raise_error(Sequel::Error)
   end
diff --git a/spec/extensions/core_refinements_spec.rb b/spec/extensions/core_refinements_spec.rb
index 03b0619..8a74672 100644
--- a/spec/extensions/core_refinements_spec.rb
+++ b/spec/extensions/core_refinements_spec.rb
@@ -1,6 +1,6 @@
 require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
 
-if RUBY_VERSION >= '2.0.0'
+if RUBY_VERSION >= '2.0.0' && RUBY_ENGINE == 'ruby'
 Sequel.extension :core_refinements, :pg_array, :pg_hstore, :pg_row, :pg_range, :pg_row_ops, :pg_range_ops, :pg_array_ops, :pg_hstore_ops, :pg_json, :pg_json_ops
 using Sequel::CoreRefinements
 
@@ -468,6 +468,12 @@ describe "Postgres extensions integration" do
 
   it "Symbol#pg_json should return an JSONOp" do
     @db.literal(:a.pg_json[%w'a b']).should == "(a #> ARRAY['a','b'])"
+    @db.literal(:a.pg_json.extract('a')).should == "json_extract_path(a, 'a')"
+  end
+
+  it "Symbol#pg_jsonb should return an JSONBOp" do
+    @db.literal(:a.pg_jsonb[%w'a b']).should == "(a #> ARRAY['a','b'])"
+    @db.literal(:a.pg_jsonb.extract('a')).should == "jsonb_extract_path(a, 'a')"
   end
 
   it "Symbol#pg_range should return a RangeOp" do
@@ -483,6 +489,10 @@ describe "Postgres extensions integration" do
     @db.literal([1].pg_json).should == "'[1]'::json"
   end
 
+  it "Array#pg_jsonb should return a JSONBArray" do
+    @db.literal([1].pg_jsonb).should == "'[1]'::jsonb"
+  end
+
   it "Array#pg_row should return a ArrayRow" do
     @db.literal([1].pg_row).should == "ROW(1)"
   end
@@ -495,6 +505,10 @@ describe "Postgres extensions integration" do
     @db.literal({'a'=>'b'}.pg_json).should == "'{\"a\":\"b\"}'::json"
   end
 
+  it "Hash#pg_jsonb should return an JSONBHash" do
+    @db.literal({'a'=>'b'}.pg_jsonb).should == "'{\"a\":\"b\"}'::jsonb"
+  end
+
   it "Range#pg_range should return an PGRange" do
     @db.literal((1..2).pg_range).should == "'[1,2]'"
     @db.literal((1..2).pg_range(:int4range)).should == "'[1,2]'::int4range"
diff --git a/spec/extensions/current_datetime_timestamp_spec.rb b/spec/extensions/current_datetime_timestamp_spec.rb
new file mode 100644
index 0000000..42f9089
--- /dev/null
+++ b/spec/extensions/current_datetime_timestamp_spec.rb
@@ -0,0 +1,27 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper')
+
+describe "current_datetime_timestamp extension" do
+  before do
+    @ds = Sequel.mock[:table].extension(:current_datetime_timestamp)
+  end
+  after do
+    Sequel.datetime_class = Time
+  end
+
+  specify "should have current_timestamp respect Sequel.datetime_class" do
+    t = Sequel::Dataset.new(nil).current_datetime 
+    t.should be_a_kind_of(Time)
+    (Time.now - t < 0.1).should == true
+
+    Sequel.datetime_class = DateTime
+    t = Sequel::Dataset.new(nil).current_datetime 
+    t.should be_a_kind_of(DateTime)
+    (DateTime.now - t < (0.1/86400)).should == true
+  end
+
+  specify "should have current_timestamp value be literalized as CURRENT_TIMESTAMP" do
+    @ds.literal(@ds.current_datetime).should == 'CURRENT_TIMESTAMP'
+    Sequel.datetime_class = DateTime
+    @ds.literal(@ds.current_datetime).should == 'CURRENT_TIMESTAMP'
+  end
+end
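
A minimal sketch of the current_datetime_timestamp extension the new spec above covers (mock dataset, illustrative table name). Dataset#current_datetime returns the current time as an instance of Sequel.datetime_class, and that value literalizes as CURRENT_TIMESTAMP rather than as a fixed timestamp:

    ds = Sequel.mock[:table].extension(:current_datetime_timestamp)
    t = ds.current_datetime     # Time (or DateTime, if Sequel.datetime_class = DateTime)
    ds.literal(t)               # "CURRENT_TIMESTAMP"
    ds.literal(Time.now)        # still literalized as a regular timestamp value

This pairs naturally with the defaults_setter plugin, as the defaults_setter spec changes below show.
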
diff --git a/spec/extensions/dataset_associations_spec.rb b/spec/extensions/dataset_associations_spec.rb
index 0fe794f..cbf86f8 100644
--- a/spec/extensions/dataset_associations_spec.rb
+++ b/spec/extensions/dataset_associations_spec.rb
@@ -3,6 +3,10 @@ require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
 describe "Sequel::Plugins::DatasetAssociations" do
   before do
     @db = Sequel.mock
+    @db.extend_datasets do
+      def supports_window_functions?; true; end
+      def supports_distinct_on?; true; end
+    end
     @Base = Class.new(Sequel::Model)
     @Base.plugin :dataset_associations
 
@@ -29,10 +33,12 @@ describe "Sequel::Plugins::DatasetAssociations" do
     @Artist.one_to_one :first_album, :class=>@Album
     @Album.many_to_one :artist, :class=>@Artist
     @Album.many_to_many :tags, :class=>@Tag
+    @Album.one_through_one :first_tag, :class=>@Tag, :right_key=>:tag_id
     @Tag.many_to_many :albums, :class=>@Album
     @Artist.pg_array_to_many :artist_tags, :class=>@Tag, :key=>:tag_ids
     @Tag.many_to_pg_array :artists, :class=>@Artist
     @Artist.many_through_many :tags, [[:albums, :artist_id, :id], [:albums_tags, :album_id, :tag_id]], :class=>@Tag
+    @Artist.one_through_many :otag, [[:albums, :artist_id, :id], [:albums_tags, :album_id, :tag_id]], :class=>@Tag
   end
 
   it "should work for many_to_one associations" do
@@ -60,14 +66,28 @@ describe "Sequel::Plugins::DatasetAssociations" do
     ds = @Album.tags
     ds.should be_a_kind_of(Sequel::Dataset)
     ds.model.should == @Tag
-    ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM albums INNER JOIN albums_tags ON (albums_tags.album_id = albums.id)))"
+    ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((albums_tags.album_id) IN (SELECT albums.id FROM albums))))"
+  end
+
+  it "should work for one_through_one associations" do
+    ds = @Album.first_tags
+    ds.should be_a_kind_of(Sequel::Dataset)
+    ds.model.should == @Tag
+    ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((albums_tags.album_id) IN (SELECT albums.id FROM albums))))"
   end
 
   it "should work for many_through_many associations" do
     ds = @Artist.tags
     ds.should be_a_kind_of(Sequel::Dataset)
     ds.model.should == @Tag
-    ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM artists INNER JOIN albums ON (albums.artist_id = artists.id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id)))"
+    ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM artists INNER JOIN albums ON (albums.artist_id = artists.id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags ON (tags.id = albums_tags.tag_id) WHERE (albums.artist_id IN (SELECT artists.id FROM artists))))"
+  end
+
+  it "should work for one_through_many associations" do
+    ds = @Artist.otags
+    ds.should be_a_kind_of(Sequel::Dataset)
+    ds.model.should == @Tag
+    ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM artists INNER JOIN albums ON (albums.artist_id = artists.id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags ON (tags.id = albums_tags.tag_id) WHERE (albums.artist_id IN (SELECT artists.id FROM artists))))"
   end
 
   it "should work for pg_array_to_many associations" do
@@ -104,7 +124,7 @@ describe "Sequel::Plugins::DatasetAssociations" do
     ds = @Artist.albums.tags
     ds.should be_a_kind_of(Sequel::Dataset)
     ds.model.should == @Tag
-    ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM albums INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE (albums.artist_id IN (SELECT artists.id FROM artists))))"
+    ds.sql.should == "SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((albums_tags.album_id) IN (SELECT albums.id FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists))))))"
   end
 
   it "should deal correctly with filters before the association method" do
@@ -154,11 +174,45 @@ describe "Sequel::Plugins::DatasetAssociations" do
     @Artist.one_to_many :albums, :clone=>:albums, :select=>[:id, :name]
     @Artist.albums.sql.should == "SELECT id, name FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists))"
   end
+
+  it "should deal correctly with :order option for one_to_one associations" do
+    @Artist.one_to_one :first_album, :clone=>:first_album, :order=>:name
+    @Artist.first_albums.sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND (albums.id IN (SELECT DISTINCT ON (albums.artist_id) albums.id FROM albums ORDER BY albums.artist_id, name))) ORDER BY name'
+  end
+
+  it "should deal correctly with :limit option for one_to_many associations" do
+    @Artist.one_to_many :albums, :clone=>:albums, :limit=>10, :order=>:name
+    @Artist.albums.sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND (albums.id IN (SELECT id FROM (SELECT albums.id, row_number() OVER (PARTITION BY albums.artist_id ORDER BY name) AS x_sequel_row_number_x FROM albums) AS t1 WHERE (x_sequel_row_number_x <= 10)))) ORDER BY name'
+  end
+
+  it "should deal correctly with :order option for one_through_one associations" do
+    @Album.one_through_one :first_tag, :clone=>:first_tag, :order=>:tags__name
+    @Album.first_tags.sql.should == 'SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (((albums_tags.album_id) IN (SELECT albums.id FROM albums)) AND ((albums_tags.album_id, tags.id) IN (SELECT DISTINCT ON (albums_tags.album_id) albums_tags.album_id, tags.id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) ORDER BY albums_tags.album_id, tags.name))))) ORDER BY tags.name'
+  end
+
+  it "should deal correctly with :limit option for many_to_many associations" do
+    @Album.many_to_many :tags, :clone=>:tags, :limit=>10, :order=>:tags__name
+    @Album.tags.sql.should == 'SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (((albums_tags.album_id) IN (SELECT albums.id FROM albums)) AND ((albums_tags.album_id, tags.id) IN (SELECT b, c FROM (SELECT albums_tags.album_id AS b, tags.id AS c, row_number() OVER (PARTITION BY albums_tags.album_id ORDER BY tags.name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_i [...]
+  end
+
+  it "should deal correctly with :order option for one_through_many associations" do
+    @Artist.one_through_many :otag, :clone=>:otag, :order=>:id, :order=>:tags__name
+    @Artist.otags.sql.should == 'SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM artists INNER JOIN albums ON (albums.artist_id = artists.id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags ON (tags.id = albums_tags.tag_id) WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND ((albums.artist_id, tags.id) IN (SELECT DISTINCT ON (albums.artist_id) albums.artist_id, tags.id FROM tags INNER JOIN albums_tags ON (albums_tags.t [...]
+  end
+
+  it "should deal correctly with :limit option for many_through_many associations" do
+    @Artist.many_through_many :tags, :clone=>:tags, :limit=>10, :order=>:tags__name
+    @Artist.tags.sql.should == 'SELECT tags.* FROM tags WHERE (tags.id IN (SELECT albums_tags.tag_id FROM artists INNER JOIN albums ON (albums.artist_id = artists.id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags ON (tags.id = albums_tags.tag_id) WHERE ((albums.artist_id IN (SELECT artists.id FROM artists)) AND ((albums.artist_id, tags.id) IN (SELECT b, c FROM (SELECT albums.artist_id AS b, tags.id AS c, row_number() OVER (PARTITION BY albums.artist_id ORDE [...]
+  end
 end
 
 describe "Sequel::Plugins::DatasetAssociations with composite keys" do
   before do
     @db = Sequel.mock
+    @db.extend_datasets do
+      def supports_window_functions?; true; end
+      def supports_distinct_on?; true; end
+    end
     @Base = Class.new(Sequel::Model)
     @Base.plugin :dataset_associations
 
@@ -187,8 +241,10 @@ describe "Sequel::Plugins::DatasetAssociations with composite keys" do
     @Artist.one_to_one :first_album, :class=>@Album, :key=>[:artist_id1, :artist_id2]
     @Album.many_to_one :artist, :class=>@Artist, :key=>[:artist_id1, :artist_id2]
     @Album.many_to_many :tags, :class=>@Tag, :left_key=>[:album_id1, :album_id2], :right_key=>[:tag_id1, :tag_id2]
+    @Album.one_through_one :first_tag, :class=>@Tag, :left_key=>[:album_id1, :album_id2], :right_key=>[:tag_id1, :tag_id2]
     @Tag.many_to_many :albums, :class=>@Album, :right_key=>[:album_id1, :album_id2], :left_key=>[:tag_id1, :tag_id2]
     @Artist.many_through_many :tags, [[:albums, [:artist_id1, :artist_id2], [:id1, :id2]], [:albums_tags, [:album_id1, :album_id2], [:tag_id1, :tag_id2]]], :class=>@Tag
+    @Artist.one_through_many :otag, [[:albums, [:artist_id1, :artist_id2], [:id1, :id2]], [:albums_tags, [:album_id1, :album_id2], [:tag_id1, :tag_id2]]], :class=>@Tag
   end
 
   it "should work for many_to_one associations" do
@@ -204,14 +260,52 @@ describe "Sequel::Plugins::DatasetAssociations with composite keys" do
   end
 
   it "should work for many_to_many associations" do
-    @Album.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM albums INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2))))"
+    @Album.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.id1) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id1, albums_tags.album_id2) IN (SELECT albums.id1, albums.id2 FROM albums))))"
+  end
+
+  it "should work for one_through_one associations" do
+    @Album.first_tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.id1) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id1, albums_tags.album_id2) IN (SELECT albums.id1, albums.id2 FROM albums))))"
   end
 
   it "should work for many_through_many associations" do
-    @Artist.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM artists INNER JOIN albums ON ((albums.artist_id1 = artists.id1) AND (albums.artist_id2 = artists.id2)) INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2))))"
+    @Artist.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM artists INNER JOIN albums ON ((albums.artist_id1 = artists.id1) AND (albums.artist_id2 = artists.id2)) INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2)) INNER JOIN tags ON ((tags.id1 = albums_tags.tag_id1) AND (tags.id2 = albums_tags.tag_id2)) WHERE ((albums.artist_id1, albums.artist_id2) IN (S [...]
+  end
+
+  it "should work for one_through_many associations" do
+    @Artist.otags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM artists INNER JOIN albums ON ((albums.artist_id1 = artists.id1) AND (albums.artist_id2 = artists.id2)) INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2)) INNER JOIN tags ON ((tags.id1 = albums_tags.tag_id1) AND (tags.id2 = albums_tags.tag_id2)) WHERE ((albums.artist_id1, albums.artist_id2) IN ( [...]
   end
 
   it "should work correctly when chaining" do
-    @Artist.albums.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM albums INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2)) WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists))))"
+    @Artist.albums.tags.sql.should == "SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.id1) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id1, albums_tags.album_id2) IN (SELECT albums.id1, albums.id2 FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists))))))"
+  end
+
+  it "should deal correctly with :order option for one_to_one associations" do
+    @Artist.one_to_one :first_album, :clone=>:first_album, :order=>:name
+    @Artist.first_albums.sql.should == 'SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists)) AND ((albums.id1, albums.id2) IN (SELECT DISTINCT ON (albums.artist_id1, albums.artist_id2) albums.id1, albums.id2 FROM albums ORDER BY albums.artist_id1, albums.artist_id2, name))) ORDER BY name'
+  end
+
+  it "should deal correctly with :limit option for one_to_many associations" do
+    @Artist.one_to_many :albums, :clone=>:albums, :limit=>10, :order=>:name
+    @Artist.albums.sql.should == 'SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists)) AND ((albums.id1, albums.id2) IN (SELECT id1, id2 FROM (SELECT albums.id1, albums.id2, row_number() OVER (PARTITION BY albums.artist_id1, albums.artist_id2 ORDER BY name) AS x_sequel_row_number_x FROM albums) AS t1 WHERE (x_sequel_row_number_x <= 10)))) ORDER BY name'
+  end
+
+  it "should deal correctly with :order option for one_through_one associations" do
+    @Album.one_through_one :first_tag, :clone=>:first_tag, :order=>:tags__name
+    @Album.first_tags.sql.should == 'SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.id1) AND (albums_tags.tag_id2 = tags.id2)) WHERE (((albums_tags.album_id1, albums_tags.album_id2) IN (SELECT albums.id1, albums.id2 FROM albums)) AND ((albums_tags.album_id1, albums_tags.album_id2, tags.id1, tags.id2) IN (SELECT DISTINCT ON (albums_tags.album_id1, albums_tags.album_id2 [...]
+  end
+
+  it "should deal correctly with :limit option for many_to_many associations" do
+    @Album.many_to_many :tags, :clone=>:tags, :limit=>10, :order=>:tags__name
+    @Album.tags.sql.should == 'SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.id1) AND (albums_tags.tag_id2 = tags.id2)) WHERE (((albums_tags.album_id1, albums_tags.album_id2) IN (SELECT albums.id1, albums.id2 FROM albums)) AND ((albums_tags.album_id1, albums_tags.album_id2, tags.id1, tags.id2) IN (SELECT b, c, d, e FROM (SELECT albums_tags.album_id1 AS b, albums_tags [...]
+  end
+
+  it "should deal correctly with :order option for one_through_many associations" do
+    @Artist.one_through_many :otag, :clone=>:otag, :order=>:tags__name
+    @Artist.otags.sql.should == 'SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM artists INNER JOIN albums ON ((albums.artist_id1 = artists.id1) AND (albums.artist_id2 = artists.id2)) INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2)) INNER JOIN tags ON ((tags.id1 = albums_tags.tag_id1) AND (tags.id2 = albums_tags.tag_id2)) WHERE (((albums.artist_id1, albums.artist_id2) IN  [...]
+  end
+
+  it "should deal correctly with :limit option for many_through_many associations" do
+    @Artist.many_through_many :tags, :clone=>:tags, :limit=>10, :order=>:tags__name
+    @Artist.tags.sql.should == 'SELECT tags.* FROM tags WHERE ((tags.id1, tags.id2) IN (SELECT albums_tags.tag_id1, albums_tags.tag_id2 FROM artists INNER JOIN albums ON ((albums.artist_id1 = artists.id1) AND (albums.artist_id2 = artists.id2)) INNER JOIN albums_tags ON ((albums_tags.album_id1 = albums.id1) AND (albums_tags.album_id2 = albums.id2)) INNER JOIN tags ON ((tags.id1 = albums_tags.tag_id1) AND (tags.id2 = albums_tags.tag_id2)) WHERE (((albums.artist_id1, albums.artist_id2) IN ( [...]
   end
 end
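
The dataset_associations changes above add coverage for the new one_through_one/one_through_many association types and for :order/:limit handling. A rough sketch of basic plugin usage, with hypothetical Artist/Album models on a mock database:

    DB = Sequel.mock
    class Artist < Sequel::Model(:artists); end
    class Album < Sequel::Model(:albums); end

    Artist.plugin :dataset_associations
    Artist.one_to_many :albums, :class=>Album, :key=>:artist_id

    # The plugin adds dataset (and model) methods returning datasets of associated rows:
    Artist.albums.sql
    # roughly: SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists))
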
diff --git a/spec/extensions/defaults_setter_spec.rb b/spec/extensions/defaults_setter_spec.rb
index c6a75a1..8f933e0 100644
--- a/spec/extensions/defaults_setter_spec.rb
+++ b/spec/extensions/defaults_setter_spec.rb
@@ -47,6 +47,18 @@ describe "Sequel::Plugins::DefaultsSetter" do
     (t - DateTime.now).should < 1/86400.0
   end
 
+  it "should work correctly with the current_datetime_timestamp extension" do
+    @db.autoid = 1
+    @db.fetch = {:id=>1}
+    @c.dataset = @c.dataset.extension(:current_datetime_timestamp)
+    c = @pr.call(Sequel::CURRENT_TIMESTAMP)
+    @db.sqls
+    o = c.new
+    o.a = o.a
+    o.save
+    @db.sqls.should == ["INSERT INTO foo (a) VALUES (CURRENT_TIMESTAMP)", "SELECT * FROM foo WHERE (id = 1) LIMIT 1"]
+  end
+
   it "should not override a given value" do
     @pr.call(2)
     @c.new('a'=>3).a.should == 3
diff --git a/spec/extensions/eager_each_spec.rb b/spec/extensions/eager_each_spec.rb
index b25eea8..80c7875 100644
--- a/spec/extensions/eager_each_spec.rb
+++ b/spec/extensions/eager_each_spec.rb
@@ -33,4 +33,10 @@ describe "Sequel::Plugins::EagerEach" do
     a.map{|c| c.associations[:children]}.should == [[@c.load(:id=>3, :parent_id=>1), @c.load(:id=>4, :parent_id=>1)], [@c.load(:id=>5, :parent_id=>2), @c.load(:id=>6, :parent_id=>2)]]
     @c.db.sqls.should == ['SELECT items.id, items.parent_id, children.id AS children_id, children.parent_id AS children_parent_id FROM items LEFT OUTER JOIN items AS children ON (children.parent_id = items.id)']
   end
+
+  it "should not attempt to eager load when getting the columns" do
+    ds = @c.eager(:children)
+    def ds.all; raise; end
+    proc{ds.columns!}.should_not raise_error
+  end
 end
diff --git a/spec/extensions/error_splitter_spec.rb b/spec/extensions/error_splitter_spec.rb
index e4b27e3..a9dbb3f 100644
--- a/spec/extensions/error_splitter_spec.rb
+++ b/spec/extensions/error_splitter_spec.rb
@@ -11,7 +11,7 @@ describe "Sequel::Plugins::ErrorSplitter" do
   end
 
   it "should split errors for multiple columns and assign them to each column" do
-    @m.valid?.should be_false
+    @m.valid?.should == false
     @m.errors.should == {:a=>['is bad'], :b=>['is bad']}
   end
 end
diff --git a/spec/extensions/eval_inspect_spec.rb b/spec/extensions/eval_inspect_spec.rb
index e0d90ae..3cc0281 100644
--- a/spec/extensions/eval_inspect_spec.rb
+++ b/spec/extensions/eval_inspect_spec.rb
@@ -13,6 +13,7 @@ describe "eval_inspect extension" do
     [
       # Objects with components where eval(inspect) == self
       Sequel::SQL::AliasedExpression.new(:b, :a),
+      Sequel::SQL::AliasedExpression.new(:b, :a, [:c, :d]),
       Sequel::SQL::CaseExpression.new({:b=>:a}, :c),
       Sequel::SQL::CaseExpression.new({:b=>:a}, :c, :d),
       Sequel::SQL::Cast.new(:a, :b),
@@ -28,9 +29,12 @@ describe "eval_inspect extension" do
       Sequel::NOTNULL,
       Sequel::SQL::Function.new(:a, :b, :c),
       Sequel::SQL::Identifier.new(:a),
-      Sequel::SQL::JoinClause.new(:inner, :b, :c),
-      Sequel::SQL::JoinOnClause.new({:d=>:a}, :inner, :b, :c),
-      Sequel::SQL::JoinUsingClause.new([:a], :inner, :b, :c),
+      Sequel::SQL::JoinClause.new(:inner, :b),
+      Sequel::SQL::JoinOnClause.new({:d=>:a}, :inner, :b),
+      Sequel::SQL::JoinUsingClause.new([:a], :inner, :b),
+      Sequel::SQL::JoinClause.new(:inner, Sequel.as(:b, :c, [:d, :e])),
+      Sequel::SQL::JoinOnClause.new({:d=>:a}, :inner, Sequel.as(:b, :c, [:d, :e])),
+      Sequel::SQL::JoinUsingClause.new([:a], :inner, Sequel.as(:b, :c, [:d, :e])),
       Sequel::SQL::PlaceholderLiteralString.new('? = ?', [:a, :b]),
       Sequel::SQL::PlaceholderLiteralString.new(':a = :b', [{:a=>:b, :b=>42}]),
       Sequel::SQL::OrderedExpression.new(:a),
@@ -40,7 +44,7 @@ describe "eval_inspect extension" do
       Sequel::SQL::QualifiedIdentifier.new(:b, :a),
       Sequel::SQL::Subscript.new(:a, [1, 2]),
       Sequel::SQL::Window.new(:order=>:a, :partition=>:b),
-      Sequel::SQL::WindowFunction.new(Sequel::SQL::Function.new(:a, :b, :c), Sequel::SQL::Window.new(:order=>:a, :partition=>:b)),
+      Sequel::SQL::Function.new(:a, :b, :c).over(:order=>:a, :partition=>:b),
       Sequel::SQL::Wrapper.new(:a),
       
       # Objects with components where eval(inspect) != self
diff --git a/spec/extensions/hook_class_methods_spec.rb b/spec/extensions/hook_class_methods_spec.rb
index a59892f..88f0cf1 100644
--- a/spec/extensions/hook_class_methods_spec.rb
+++ b/spec/extensions/hook_class_methods_spec.rb
@@ -354,17 +354,17 @@ describe "Model.has_hooks?" do
   end
   
   specify "should return false if no hooks are defined" do
-    @c.has_hooks?(:before_save).should be_false
+    @c.has_hooks?(:before_save).should == false
   end
   
   specify "should return true if hooks are defined" do
     @c.before_save {'blah'}
-    @c.has_hooks?(:before_save).should be_true
+    @c.has_hooks?(:before_save).should == true
   end
   
   specify "should return true if hooks are inherited" do
     @d = Class.new(@c)
-    @d.has_hooks?(:before_save).should be_false
+    @d.has_hooks?(:before_save).should == false
   end
 end
 
@@ -399,15 +399,15 @@ describe "Model#add_hook_type" do
   specify "it should return true for bar when before_bar and after_bar hooks are returing true" do
     a = 1
     @f.before_bar { a += 1}
-    @f.new.bar.should be_true
+    @f.new.bar.should == true
     a.should == 2
     @f.after_bar { a *= 2}
-    @f.new.bar.should be_true
+    @f.new.bar.should == true
     a.should == 6
   end
 
   specify "it should return nil for bar when before_bar and after_bar hooks are returing false" do
-    @f.new.bar.should be_true
+    @f.new.bar.should == true
     @f.after_bar { false }
     @f.new.bar.should == :a
     @f.before_bar { false }
diff --git a/spec/extensions/instance_hooks_spec.rb b/spec/extensions/instance_hooks_spec.rb
index 8bcf6d0..138adce 100644
--- a/spec/extensions/instance_hooks_spec.rb
+++ b/spec/extensions/instance_hooks_spec.rb
@@ -177,6 +177,20 @@ describe "InstanceHooks plugin" do
     @r.should == [2, 1, 4, 3]
   end
 
+  it "should not clear validations hooks on successful save" do
+    @x.after_validation_hook{@x.errors.add(:id, 'a') if @x.id == 1; r 1}
+    @x.before_validation_hook{r 2}
+    @x.save.should == nil
+    @r.should == [2, 1]
+    @x.save.should == nil
+    @r.should == [2, 1, 2, 1]
+    @x.id = 2
+    @x.save.should == @x
+    @r.should == [2, 1, 2, 1, 2, 1]
+    @x.save.should == @x
+    @r.should == [2, 1, 2, 1, 2, 1]
+  end
+
   it "should not allow addition of instance hooks to frozen instances" do
     @x.after_destroy_hook{r 1}
     @x.before_destroy_hook{r 2}
diff --git a/spec/extensions/many_through_many_spec.rb b/spec/extensions/many_through_many_spec.rb
index 42d4fb4..3cdaa79 100644
--- a/spec/extensions/many_through_many_spec.rb
+++ b/spec/extensions/many_through_many_spec.rb
@@ -41,7 +41,7 @@ describe Sequel::Model, "many_through_many" do
     @c1.many_through_many :tags, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :left_primary_key=>:id3
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id => 1)]
-    DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (3)))"]
+    DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (3))"]
     a.first.tags.should == [@c2.load(:id=>4)]
     DB.sqls.should == []
   end
@@ -52,7 +52,7 @@ describe Sequel::Model, "many_through_many" do
     @c1.many_through_many :tags, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_loading_predicate_key=>Sequel./(:albums_artists__artist_id, 3)
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id => 1)]
-    DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, (albums_artists.artist_id / 3) AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND ((albums_artists.artist_id / 3) IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, (albums_artists.artist_id / 3) AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id / 3) IN (1))"]
     a.first.tags.should == [@c2.load(:id=>4)]
   end
   
@@ -82,7 +82,7 @@ describe Sequel::Model, "many_through_many" do
   it "should allow only two arguments with the :through option" do
     @c1.many_through_many :tags, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)'
     n.tags.should == [@c2.load(:id=>1)]
   end
 
@@ -90,31 +90,34 @@ describe Sequel::Model, "many_through_many" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     @c1.many_through_many :other_tags, :clone=>:tags
     n = @c1.load(:id => 1234)
-    n.other_tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'
+    n.other_tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)'
     n.tags.should == [@c2.load(:id=>1)]
   end
 
   it "should use join tables given" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)'
     n.tags.should == [@c2.load(:id=>1)]
   end
 
   it "should handle multiple aliasing of tables" do
-    class ::Album < Sequel::Model
+    begin
+      class ::Album < Sequel::Model
+      end
+      @c1.many_through_many :albums, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id], [:artists, :id, :id], [:albums_artists, :artist_id, :album_id]]
+      n = @c1.load(:id => 1234)
+      n.albums_dataset.sql.should == 'SELECT albums.* FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) INNER JOIN artists ON (artists.id = albums_artists.artist_id) INNER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) INNER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id) INNER JOIN albums_artists AS albums_artists_1 ON (albums_artists_1.album_id = albums_0.id) WHERE (albums_artists_1.artist_id = 1234)'
+      n.albums.should == [Album.load(:id=>1, :x=>1)]
+    ensure
+      Object.send(:remove_const, :Album)
     end
-    @c1.many_through_many :albums, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id], [:artists, :id, :id], [:albums_artists, :artist_id, :album_id]]
-    n = @c1.load(:id => 1234)
-    n.albums_dataset.sql.should == 'SELECT albums.* FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) INNER JOIN artists ON (artists.id = albums_artists.artist_id) INNER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) INNER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id) INNER JOIN albums_artists AS albums_artists_1 ON ((albums_artists_1.album_id = albums_0.id) AND (albums_artists_1.artist_id = 1234))'
-    n.albums.should == [Album.load(:id=>1, :x=>1)]
-    Object.send(:remove_const, :Album)
   end
 
   it "should use explicit class if given" do
     @c1.many_through_many :albums_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag
     n = @c1.load(:id => 1234)
-    n.albums_tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'
+    n.albums_tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)'
     n.albums_tags.should == [@c2.load(:id=>1)]
   end
 
@@ -122,7 +125,7 @@ describe Sequel::Model, "many_through_many" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :right_primary_key=>:tag_id, :left_primary_key=>:yyy
     n = @c1.load(:id => 1234)
     n.yyy = 85
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.tag_id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 85))'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.tag_id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 85)'
     n.tags.should == [@c2.load(:id=>1)]
   end
   
@@ -130,7 +133,7 @@ describe Sequel::Model, "many_through_many" do
     @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
     n = @c1.load(:id => 1234)
     n.yyy = 85
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2) AND (albums_artists.b1 = 1234) AND (albums_artists.b2 = 85))'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((albums_artists.b1 = 1234) AND (albums_artists.b2 = 85))'
     n.tags.should == [@c2.load(:id=>1)]
   end
   
@@ -154,6 +157,45 @@ describe Sequel::Model, "many_through_many" do
     @c1.filter(:tags=>@c2.load(:h1=>1234, :h2=>85)).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE ((albums_tags.g1 = 1234) AND (albums_tags.g2 = 85) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 IS NO [...]
   end
 
+  it "should allowing filtering by many_through_many associations with :conditions" do
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.filter(:tags=>@c2.load(:id=>1234)).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id = 1234))))"
+  end
+
+  it "should allowing filtering by many_through_many associations with :conditions with a single through table" do
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id]], :conditions=>{:name=>'A'}
+    @c1.filter(:tags=>@c2.load(:id=>1234)).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_artists ON (albums_artists.album_id = tags.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id = 1234))))"
+  end
+
+  it "should allowing filtering by many_through_many associations with :conditions and composite keys" do
+    @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.filter(:tags=>@c2.load(:id=>1, :h1=>1234, :h2=>85)).sql.should == "SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albums_ar [...]
+  end
+
+  it "should allowing filtering by many_through_many associations with :limit" do
+    def (@c2.dataset).supports_window_functions?; true end
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :limit=>10
+    @c1.filter(:tags=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id IS NOT NULL) AND ((albums_artists.artist_id, tags.id) IN (SELECT b, c FROM (SELECT albums_artists.artist_id AS b, tags.id AS c, row_num [...]
+  end
+
+  it "should allowing filtering by many_through_many associations with :limit and composite keys" do
+    def (@c2.dataset).supports_window_functions?; true end
+    @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :limit=>10
+    @c1.filter(:tags=>@c2.load(:id=>1, :h1=>1234, :h2=>85)).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((albums_artists.b1 IS NOT N [...]
+  end
+
+  it "should allowing filtering by many_through_many associations with :limit and :conditions" do
+    def (@c2.dataset).supports_window_functions?; true end
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}, :limit=>10
+    @c1.filter(:tags=>@c2.load(:id=>1234)).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND ((albums_artists.artist_id, tags.id) IN (SELECT b, c FROM (SELECT albums_artists.artist_id AS b, tags [...]
+  end
+
+  it "should allowing filtering by many_through_many associations with :limit and :conditions and composite keys" do
+    def (@c2.dataset).supports_window_functions?; true end
+    @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}, :limit=>10
+    @c1.filter(:tags=>@c2.load(:id=>1, :h1=>1234, :h2=>85)).sql.should == "SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albums_ar [...]
+  end
+
   it "should allowing excluding by many_through_many associations" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     @c1.exclude(:tags=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id = 1234) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))'
@@ -164,6 +206,16 @@ describe Sequel::Model, "many_through_many" do
     @c1.exclude(:tags=>@c2.load(:h1=>1234, :h2=>85)).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE ((albums_tags.g1 = 1234) AND (albums_tags.g2 = 85) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 [...]
   end
 
+  it "should allowing excluding by many_through_many associations with :conditions" do
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.exclude(:tags=>@c2.load(:id=>1234)).sql.should == "SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id = 1234)))) OR (artists.id IS NULL))"
+  end
+
+  it "should allowing excluding by many_through_many associations with :conditions and composite keys" do
+    @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.exclude(:tags=>@c2.load(:id=>1, :h1=>1234, :h2=>85)).sql.should == "SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (alb [...]
+  end
+
   it "should allowing filtering by multiple many_through_many associations" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     @c1.filter(:tags=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (1234, 2345)) AND (albums_artists.artist_id IS NOT NULL))))'
@@ -174,6 +226,16 @@ describe Sequel::Model, "many_through_many" do
     @c1.filter(:tags=>[@c2.load(:h1=>1234, :h2=>85), @c2.load(:h1=>2345, :h2=>95)]).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN ((1234, 85), (2345, 95))) AND (albums_artists [...]
   end
 
+  it "should allowing filtering by multiple many_through_many associations with :conditions" do
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.filter(:tags=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id IN (1234, 2345)))))"
+  end
+
+  it "should allowing filtering by multiple many_through_many associations with :conditions and composite keys" do
+    @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.filter(:tags=>[@c2.load(:id=>1, :h1=>1234, :h2=>85), @c2.load(:id=>2, :h1=>2345, :h2=>95)]).sql.should == "SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums [...]
+  end
+
   it "should allowing excluding by multiple many_through_many associations" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     @c1.exclude(:tags=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (1234, 2345)) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))'
@@ -184,6 +246,16 @@ describe Sequel::Model, "many_through_many" do
     @c1.exclude(:tags=>[@c2.load(:h1=>1234, :h2=>85), @c2.load(:h1=>2345, :h2=>95)]).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN ((1234, 85), (2345, 95))) AND (albums_a [...]
   end
 
+  it "should allowing excluding by multiple many_through_many associations with :conditions" do
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.exclude(:tags=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == "SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id IN (1234, 2345))))) OR (artists.id IS NULL))"
+  end
+
+  it "should allowing excluding by multiple many_through_many associations with :conditions and composite keys" do
+    @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.exclude(:tags=>[@c2.load(:id=>1, :h1=>1234, :h2=>85), @c2.load(:id=>2, :h1=>2345, :h2=>95)]).sql.should == "SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 =  [...]
+  end
+
   it "should allowing filtering/excluding many_through_many associations with NULL values" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     @c1.filter(:tags=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'f\''
@@ -200,6 +272,16 @@ describe Sequel::Model, "many_through_many" do
     @c1.filter(:tags=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN (SELECT tags.h1, tags.h2 FROM tags WHERE ((x = 1) AND (tags.h1 IS NOT NULL) AND (tags.h2 [...]
   end
 
+  it "should allowing filtering by many_through_many association datasets with :conditions" do
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.filter(:tags=>@c2.filter(:x=>1)).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1))))))"
+  end
+
+  it "should allowing filtering by many_through_many association datasets with :conditions and composite keys" do
+    @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.filter(:tags=>@c2.filter(:x=>1)).sql.should == "SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albums_artists.b1 IS NOT NUL [...]
+  end
+
   it "should allowing excluding by many_through_many association datasets" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     @c1.exclude(:tags=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))'
@@ -210,43 +292,53 @@ describe Sequel::Model, "many_through_many" do
     @c1.exclude(:tags=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN (SELECT tags.h1, tags.h2 FROM tags WHERE ((x = 1) AND (tags.h1 IS NOT NULL) AND (t [...]
   end
 
+  it "should allowing excluding by many_through_many association datasets with :conditions" do
+    @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.exclude(:tags=>@c2.filter(:x=>1)).sql.should == "SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1)))))) OR (artists.id IS NULL))"
+  end
+
+  it "should allowing excluding by many_through_many association datasets with :conditions and composite keys" do
+    @c1.many_through_many :tags, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.exclude(:tags=>@c2.filter(:x=>1)).sql.should == "SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albums_artists.b1 IS N [...]
+  end
+
   it "should support a :conditions option" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:a=>32}
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) WHERE (a = 32)'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((a = 32) AND (albums_artists.artist_id = 1234))'
     n.tags.should == [@c2.load(:id=>1)]
 
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>['a = ?', 42]
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) WHERE (a = 42)'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((a = 42) AND (albums_artists.artist_id = 1234))'
     n.tags.should == [@c2.load(:id=>1)]
   end
   
   it "should support an :order option" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:blah
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) ORDER BY blah'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) ORDER BY blah'
     n.tags.should == [@c2.load(:id=>1)]
   end
   
   it "should support an array for the :order option" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>[:blah1, :blah2]
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) ORDER BY blah1, blah2'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) ORDER BY blah1, blah2'
     n.tags.should == [@c2.load(:id=>1)]
   end
 
   it "should support a select option" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :select=>:blah
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT blah FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'
+    n.tags_dataset.sql.should == 'SELECT blah FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)'
     n.tags.should == [@c2.load(:id=>1)]
   end
   
   it "should support an array for the select option" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :select=>[Sequel::SQL::ColumnAll.new(:tags), :albums__name]
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.*, albums.name FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))'
+    n.tags_dataset.sql.should == 'SELECT tags.*, albums.name FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)'
     n.tags.should == [@c2.load(:id=>1)]
   end
   
@@ -254,7 +346,7 @@ describe Sequel::Model, "many_through_many" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] do |ds| ds.filter(:yyy=>@yyy) end
     n = @c1.load(:id => 1234)
     n.yyy = 85
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) WHERE (yyy = 85)'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id = 1234) AND (yyy = 85))'
     n.tags.should == [@c2.load(:id=>1)]
   end
 
@@ -262,7 +354,7 @@ describe Sequel::Model, "many_through_many" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:blah do |ds| ds.filter(:yyy=>@yyy) end
     n = @c1.load(:id => 1234)
     n.yyy = 85
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) WHERE (yyy = 85) ORDER BY blah'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id = 1234) AND (yyy = 85)) ORDER BY blah'
     n.tags.should == [@c2.load(:id=>1)]
   end
 
@@ -276,12 +368,12 @@ describe Sequel::Model, "many_through_many" do
   it "should support a :limit option" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :limit=>10
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) LIMIT 10'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 10'
     n.tags.should == [@c2.load(:id=>1)]
 
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :limit=>[10, 10]
     n = @c1.load(:id => 1234)
-    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234)) LIMIT 10 OFFSET 10'
+    n.tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 10 OFFSET 10'
     n.tags.should == [@c2.load(:id=>1)]
   end
 
@@ -294,7 +386,7 @@ describe Sequel::Model, "many_through_many" do
   it "should provide an array with all members of the association" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
     @c1.load(:id => 1234).tags.should == [@c2.load(:id=>1)]
-    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))']
+    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)']
   end
 
   it "should populate cache when accessed" do
@@ -303,7 +395,7 @@ describe Sequel::Model, "many_through_many" do
     n.associations[:tags].should == nil
     DB.sqls.should == []
     n.tags.should == [@c2.load(:id=>1)]
-    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))']
+    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)']
     n.associations[:tags].should == n.tags
     DB.sqls.length.should == 0
   end
@@ -322,7 +414,7 @@ describe Sequel::Model, "many_through_many" do
     n.associations[:tags] = []
     DB.sqls.should == []
     n.tags(true).should == [@c2.load(:id=>1)]
-    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1234))']
+    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234)']
     n.associations[:tags].should == n.tags
     DB.sqls.length.should == 0
   end
@@ -399,7 +491,7 @@ describe 'Sequel::Plugins::ManyThroughMany::ManyThroughManyAssociationReflection
   end
 end
 
-describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
+describe "many_through_many eager loading methods" do
   before do
     class ::Artist < Sequel::Model
       plugin :many_through_many
@@ -461,7 +553,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
   it "should eagerly load a single many_through_many association" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1)]
-    DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))']
+    DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
     a.first.tags.should == [Tag.load(:id=>2)]
     DB.sqls.length.should == 0
   end
@@ -472,8 +564,8 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     sqls = DB.sqls
     sqls.length.should == 3
     sqls[0].should == 'SELECT * FROM artists'
-    sqls[1..-1].should(include('SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'))
-    sqls[1..-1].should(include('SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'))
+    sqls[1..-1].should(include('SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))'))
+    sqls[1..-1].should(include('SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))'))
     a = a.first
     a.tags.should == [Tag.load(:id=>2)]
     a.albums.should == [Album.load(:id=>3)]
@@ -486,8 +578,8 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     sqls = DB.sqls
     sqls.length.should == 3
     sqls[0].should == 'SELECT * FROM artists'
-    sqls[1..-1].should(include('SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'))
-    sqls[1..-1].should(include('SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))'))
+    sqls[1..-1].should(include('SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))'))
+    sqls[1..-1].should(include('SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))'))
     a = a.first
     a.tags.should == [Tag.load(:id=>2)]
     a.albums.should == [Album.load(:id=>3)]
@@ -498,8 +590,8 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags=>:tracks).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))',
-      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON ((albums_tags.album_id = albums.id) AND (albums_tags.tag_id IN (2)))']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))',
+      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE (albums_tags.tag_id IN (2))']
     a = a.first
     a.tags.should == [Tag.load(:id=>2)]
     a.tags.first.tracks.should == [Track.load(:id=>4)]
@@ -511,8 +603,8 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))',
-      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON ((albums_tags.album_id = albums.id) AND (albums_tags.tag_id IN (2)))']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))',
+      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE (albums_tags.tag_id IN (2))']
     a = a.first
     a.tags.should == [Tag.load(:id=>2)]
     a.tags.first.tracks.should == [Track.load(:id=>4)]
@@ -523,8 +615,8 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager=>:tracks
     a = @c1.load(:id=>1)
     a.tags.should == [Tag.load(:id=>2)]
-    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1))',
-      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON ((albums_tags.album_id = albums.id) AND (albums_tags.tag_id IN (2)))']
+    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1)',
+      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE (albums_tags.tag_id IN (2))']
     a.tags.first.tracks.should == [Track.load(:id=>4)]
     DB.sqls.length.should == 0
   end
@@ -544,7 +636,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     @c1.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_graph=>:tracks
     a = @c1.load(:id=>1)
     a.tags
-    DB.sqls.should == [ 'SELECT tags.id, tracks.id AS tracks_id FROM (SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id = 1))) AS tags LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tags.id) LEFT OUTER JOIN albums ON (albums.id = albums_tags_0.album_id) LEFT OUTER JOIN tracks ON (t [...]
+    DB.sqls.should == [ 'SELECT tags.id, tracks.id AS tracks_id FROM (SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1)) AS tags LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tags.id) LEFT OUTER JOIN albums ON (albums.id = albums_tags_0.album_id) LEFT OUTER JOIN tracks ON (t [...]
     a.tags.should == [Tag.load(:id=>2)]
     a.tags.first.tracks.should == [Track.load(:id=>4)]
     DB.sqls.length.should == 0
@@ -555,7 +647,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1))) WHERE (a = 32)']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((a = 32) AND (albums_artists.artist_id IN (1)))']
     a.first.tags.should == [Tag.load(:id=>2)]
     DB.sqls.length.should == 0
   end
@@ -565,7 +657,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1))) ORDER BY blah']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1)) ORDER BY blah']
     a.first.tags.should == [Tag.load(:id=>2)]
     DB.sqls.length.should == 0
   end
@@ -575,7 +667,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1))) WHERE a']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (a AND (albums_artists.artist_id IN (1)))']
     a.first.tags.should == [Tag.load(:id=>2)]
     DB.sqls.length.should == 0
   end
@@ -585,67 +677,96 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1))) WHERE b']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (b AND (albums_artists.artist_id IN (1)))']
     a.first.tags.should == [Tag.load(:id=>2)]
     DB.sqls.length.should == 0
   end
 
   it "should respect the :limit option on a many_through_many association" do
     @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>2
-    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5},{:x_foreign_key_x=>1, :id=>6}, {:x_foreign_key_x=>1, :id=>7}]
+    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5},{:x_foreign_key_x=>1, :id=>6}]
     a = @c1.eager(:first_two_tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))']
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (1 = albums_artists.artist_id) LIMIT 2) AS t1']
     a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
     DB.sqls.length.should == 0
 
     @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[1,1]
+    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>6}]
     a = @c1.eager(:first_two_tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))']
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (1 = albums_artists.artist_id) LIMIT 1 OFFSET 1) AS t1']
     a.first.first_two_tags.should == [Tag.load(:id=>6)]
     DB.sqls.length.should == 0
 
     @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1]
+    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>6}, {:x_foreign_key_x=>1, :id=>7}]
+    a = @c1.eager(:first_two_tags).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (1 = albums_artists.artist_id) OFFSET 1) AS t1']
+    a.first.first_two_tags.should == [Tag.load(:id=>6), Tag.load(:id=>7)]
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect the :limit option on a many_through_many association using a :ruby strategy" do
+    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>2, :eager_limit_strategy=>:ruby
+    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5},{:x_foreign_key_x=>1, :id=>6}, {:x_foreign_key_x=>1, :id=>7}]
+    a = @c1.eager(:first_two_tags).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
+    a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
+    DB.sqls.length.should == 0
+
+    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[1,1], :eager_limit_strategy=>:ruby
     a = @c1.eager(:first_two_tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
+    a.first.first_two_tags.should == [Tag.load(:id=>6)]
+    DB.sqls.length.should == 0
+
+    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1], :eager_limit_strategy=>:ruby
+    a = @c1.eager(:first_two_tags).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
     a.first.first_two_tags.should == [Tag.load(:id=>6), Tag.load(:id=>7)]
     DB.sqls.length.should == 0
   end
 
   it "should respect the :limit option on a many_through_many association using a :window_function strategy" do
     Tag.dataset.meta_def(:supports_window_functions?){true}
-    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>2, :order=>:name
+    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>2, :order=>:name, :eager_limit_strategy=>:window_function
     Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5},{:x_foreign_key_x=>1, :id=>6}]
     a = @c1.eager(:first_two_tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))) AS t1 WHERE (x_sequel_row_number_x <= 2)']
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))) AS t1 WHERE (x_sequel_row_number_x <= 2)']
     a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
     DB.sqls.length.should == 0
 
-    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[2,1], :order=>:name
+    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[2,1], :order=>:name, :eager_limit_strategy=>:window_function
     a = @c1.eager(:first_two_tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 4))']
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 4))']
     a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
     DB.sqls.length.should == 0
 
-    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1], :order=>:name
+    @c1.many_through_many :first_two_tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1], :order=>:name, :eager_limit_strategy=>:window_function
     a = @c1.eager(:first_two_tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))) AS t1 WHERE (x_sequel_row_number_x >= 2)']
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))) AS t1 WHERE (x_sequel_row_number_x >= 2)']
     a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
     DB.sqls.length.should == 0
   end
 
-  it "should respect the :limit option on a many_through_many association with composite primary keys on the main table using a :window_function strategy" do
+  it "should respect the :limit option on a many_through_many association with composite primary keys on the main table" do
     Tag.dataset.meta_def(:supports_window_functions?){true}
     @c1.set_primary_key([:id1, :id2])
     @c1.columns :id1, :id2
@@ -655,15 +776,38 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:first_two_tags).all
     a.should == [@c1.load(:id1=>1, :id2=>2)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x, row_number() OVER (PARTITION BY albums_artists.artist_id1, albums_artists.artist_id2 ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND ((albums_artists.artist_id1, albums_art [...]
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((1 = albums_artists.artist_id1) AND (2 = albums_artists.artist_id2)) ORDER BY name LIMIT 2) AS t1']
+    a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
+    DB.sqls.length.should == 0
+
+    @c1.many_through_many :first_two_tags, [[:albums_artists, [:artist_id1, :artist_id2], :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[2,1]
+    a = @c1.eager(:first_two_tags).all
+    a.should == [@c1.load(:id1=>1, :id2=>2)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((1 = albums_artists.artist_id1) AND (2 = albums_artists.artist_id2)) LIMIT 2 OFFSET 1) AS t1']
+    a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect the :limit option on a many_through_many association with composite primary keys on the main table using a :window_function strategy" do
+    Tag.dataset.meta_def(:supports_window_functions?){true}
+    @c1.set_primary_key([:id1, :id2])
+    @c1.columns :id1, :id2
+    @c1.many_through_many :first_two_tags, [[:albums_artists, [:artist_id1, :artist_id2], :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>2, :order=>:name, :eager_limit_strategy=>:window_function
+    @c1.dataset._fetch = [{:id1=>1, :id2=>2}]
+    Tag.dataset._fetch = [{:x_foreign_key_0_x=>1, :x_foreign_key_1_x=>2, :id=>5}, {:x_foreign_key_0_x=>1, :x_foreign_key_1_x=>2, :id=>6}]
+    a = @c1.eager(:first_two_tags).all
+    a.should == [@c1.load(:id1=>1, :id2=>2)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x, row_number() OVER (PARTITION BY albums_artists.artist_id1, albums_artists.artist_id2 ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id1, albums_ar [...]
     a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
     DB.sqls.length.should == 0
 
-    @c1.many_through_many :first_two_tags, [[:albums_artists, [:artist_id1, :artist_id2], :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[2,1], :order=>:name
+    @c1.many_through_many :first_two_tags, [[:albums_artists, [:artist_id1, :artist_id2], :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[2,1], :order=>:name, :eager_limit_strategy=>:window_function
     a = @c1.eager(:first_two_tags).all
     a.should == [@c1.load(:id1=>1, :id2=>2)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x, row_number() OVER (PARTITION BY albums_artists.artist_id1, albums_artists.artist_id2 ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND ((albums_artists.artist_id1, albums_art [...]
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x, row_number() OVER (PARTITION BY albums_artists.artist_id1, albums_artists.artist_id2 ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id1, albums_ar [...]
     a.first.first_two_tags.should == [Tag.load(:id=>5), Tag.load(:id=>6)]
     DB.sqls.length.should == 0
   end
@@ -679,7 +823,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.name, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))']
+      'SELECT tags.name, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
     a.first.tags.should == [Tag.load(:id=>2)]
     DB.sqls.length.should == 0
   end
@@ -692,7 +836,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1, :yyy=>8)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.tag_id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (8)))']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.tag_id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (8))']
     a.first.tags.should == [Tag.load(:tag_id=>2)]
     DB.sqls.length.should == 0
   end
@@ -705,7 +849,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>1, :yyy=>8)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.b1 AS x_foreign_key_0_x, albums_artists.b2 AS x_foreign_key_1_x FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2) AND ((albums_artists.b1, albums_artists.b2) IN ((1, 8))))']
+      'SELECT tags.*, albums_artists.b1 AS x_foreign_key_0_x, albums_artists.b2 AS x_foreign_key_1_x FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((albums_artists.b1, albums_artists.b2) IN ((1, 8)))']
     a.first.tags.should == [Tag.load(:id=>2)]
     DB.sqls.length.should == 0
   end
@@ -715,7 +859,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager(:tags).all
     a.should == [@c1.load(:id=>2)]
     DB.sqls.should == ['SELECT * FROM artists',
-      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))']
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
     a.first.tags.should == [Tag.load(:id=>6)]
     DB.sqls.length.should == 0
   end
@@ -724,6 +868,10 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     proc{@c1.eager_graph(Object.new)}.should raise_error(Sequel::Error)
   end
 
+  it "should support association_join" do
+    @c1.association_join(:tags).sql.should == "SELECT * FROM artists INNER JOIN albums_artists ON (albums_artists.artist_id = artists.id) INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags ON (tags.id = albums_tags.tag_id)"
+  end
+
   it "should eagerly graph a single many_through_many association" do
     a = @c1.eager_graph(:tags).all
     a.should == [@c1.load(:id=>1)]
@@ -732,6 +880,19 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     DB.sqls.length.should == 0
   end
 
+  it "should eagerly graph a single many_through_many association using the :window_function strategy" do
+    def (Tag.dataset).supports_window_functions?() true end
+    def (Tag.dataset).columns() literal(opts[:select]) =~ /x_foreign_key_x/ ? [:id, :x_foreign_key_x] : [:id] end
+    @c1.many_through_many :tags, :clone=>:tags, :limit=>2
+    ds = @c1.eager_graph_with_options(:tags, :limit_strategy=>true)
+    ds._fetch = {:id=>1, :tags_id=>2}
+    a = ds.all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN (SELECT id, x_foreign_key_x FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id)) AS t1 WHERE (x_sequel_row_number_ [...]
+    a.first.tags.should == [Tag.load(:id=>2)]
+    DB.sqls.length.should == 0
+  end
+
   it "should eagerly graph multiple associations in a single call" do 
     a = @c1.eager_graph(:tags, :albums).all
     a.should == [@c1.load(:id=>1)]
@@ -801,7 +962,7 @@ describe "Sequel::Plugins::ManyThroughMany eager loading methods" do
     a = @c1.eager_graph(:tags).eager(:albums).all
     a.should == [@c1.load(:id=>1)]
     DB.sqls.should == ['SELECT artists.id, tags.id AS tags_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags ON (tags.id = albums_tags.tag_id)',
-      'SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON ((albums_artists.album_id = albums.id) AND (albums_artists.artist_id IN (1)))']
+      'SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
     a = a.first
     a.tags.should == [Tag.load(:id=>2)]
     a.albums.should == [Album.load(:id=>3)]
@@ -986,13 +1147,13 @@ describe "many_through_many associations with non-column expression keys" do
 
   it "should have working regular association methods" do
     @Foo.first.foos.should == [@foo]
-    @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT foos.* FROM foos INNER JOIN f ON (f.r[1] = foos.object_ids[0]) INNER JOIN f AS f_0 ON ((f_0.r[0] = f.l[1]) AND (f_0.l[0] = 2))"]
+    @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT foos.* FROM foos INNER JOIN f ON (f.r[1] = foos.object_ids[0]) INNER JOIN f AS f_0 ON (f_0.r[0] = f.l[1]) WHERE (f_0.l[0] = 2)"]
   end
 
   it "should have working eager loading methods" do
     @db.fetch = [[{:id=>1, :object_ids=>[2]}], [{:id=>1, :object_ids=>[2], :x_foreign_key_x=>2}]]
     @Foo.eager(:foos).all.map{|o| [o, o.foos]}.should == [[@foo, [@foo]]]
-    @db.sqls.should == ["SELECT * FROM foos", "SELECT foos.*, f_0.l[0] AS x_foreign_key_x FROM foos INNER JOIN f ON (f.r[1] = foos.object_ids[0]) INNER JOIN f AS f_0 ON ((f_0.r[0] = f.l[1]) AND (f_0.l[0] IN (2)))"]
+    @db.sqls.should == ["SELECT * FROM foos", "SELECT foos.*, f_0.l[0] AS x_foreign_key_x FROM foos INNER JOIN f ON (f.r[1] = foos.object_ids[0]) INNER JOIN f AS f_0 ON (f_0.r[0] = f.l[1]) WHERE (f_0.l[0] IN (2))"]
   end
 
   it "should have working eager graphing methods" do
@@ -1011,3 +1172,970 @@ describe "many_through_many associations with non-column expression keys" do
     @db.sqls.should == ["SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT f.l[0] FROM f INNER JOIN f AS f_0 ON (f_0.l[1] = f.r[0]) WHERE ((f_0.r[1] IN (SELECT foos.object_ids[0] FROM foos WHERE ((id = 1) AND (foos.object_ids[0] IS NOT NULL)))) AND (f.l[0] IS NOT NULL)))) LIMIT 1"]
   end
 end
+
+describe Sequel::Model, "one_through_many" do
+  before do
+    class ::Artist < Sequel::Model
+      attr_accessor :yyy
+      columns :id
+      plugin :many_through_many
+    end
+    class ::Tag < Sequel::Model
+      columns :id, :h1, :h2
+    end
+    @c1 = Artist
+    @c2 = Tag
+    @dataset = @c2.dataset
+    @dataset._fetch = {:id=>1}
+    DB.reset
+  end
+  after do
+    Object.send(:remove_const, :Artist)
+    Object.send(:remove_const, :Tag)
+  end
+
+  it "should support using a custom :left_primary_key option when eager loading many_to_many associations" do
+    @c1.send(:define_method, :id3){id*3}
+    @c1.dataset._fetch = {:id=>1}
+    @c2.dataset._fetch = {:id=>4, :x_foreign_key_x=>3}
+    @c1.one_through_many :tag, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :left_primary_key=>:id3
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id => 1)]
+    DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (3))"]
+    a.first.tag.should == @c2.load(:id=>4)
+    DB.sqls.should == []
+  end
+
+  it "should handle a :eager_loading_predicate_key option to change the SQL used in the lookup" do
+    @c1.dataset._fetch = {:id=>1}
+    @c2.dataset._fetch = {:id=>4, :x_foreign_key_x=>1}
+    @c1.one_through_many :tag, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_loading_predicate_key=>Sequel./(:albums_artists__artist_id, 3)
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id => 1)]
+    DB.sqls.should == ['SELECT * FROM artists', "SELECT tags.*, (albums_artists.artist_id / 3) AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id / 3) IN (1))"]
+    a.first.tag.should == @c2.load(:id=>4)
+  end
+  
+  it "should raise an error if in invalid form of through is used" do
+    proc{@c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id]]}.should raise_error(Sequel::Error)
+    proc{@c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], {:table=>:album_tags, :left=>:album_id}]}.should raise_error(Sequel::Error)
+    proc{@c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], :album_tags]}.should raise_error(Sequel::Error)
+  end
+
+  it "should allow only two arguments with the :through option" do
+    @c1.one_through_many :tag, :through=>[[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+
+  it "should be clonable" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.many_through_many :tags, :clone=>:tag
+    @c1.one_through_many :tag, :clone=>:tags
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+
+  it "should use join tables given" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+
+  it "should handle multiple aliasing of tables" do
+    begin
+      class ::Album < Sequel::Model
+      end
+      @c1.one_through_many :album, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id], [:artists, :id, :id], [:albums_artists, :artist_id, :album_id]]
+      n = @c1.load(:id => 1234)
+      n.album_dataset.sql.should == 'SELECT albums.* FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) INNER JOIN artists ON (artists.id = albums_artists.artist_id) INNER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) INNER JOIN albums AS albums_0 ON (albums_0.id = albums_artists_0.album_id) INNER JOIN albums_artists AS albums_artists_1 ON (albums_artists_1.album_id = albums_0.id) WHERE (albums_artists_1.artist_id = 1234) [...]
+      n.album.should == Album.load(:id=>1, :x=>1)
+    ensure
+      Object.send(:remove_const, :Album)
+    end
+  end
+
+  it "should use explicit class if given" do
+    @c1.one_through_many :album_tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag
+    n = @c1.load(:id => 1234)
+    n.album_tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1'
+    n.album_tag.should == @c2.load(:id=>1)
+  end
+
+  it "should accept :left_primary_key and :right_primary_key option for primary keys to use in current and associated table" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :right_primary_key=>:tag_id, :left_primary_key=>:yyy
+    n = @c1.load(:id => 1234)
+    n.yyy = 85
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.tag_id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 85) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+  
+  it "should handle composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
+    n = @c1.load(:id => 1234)
+    n.yyy = 85
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((albums_artists.b1 = 1234) AND (albums_artists.b2 = 85)) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+  
+  it "should allowing filtering by one_through_many associations" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.filter(:tag=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id = 1234) AND (albums_artists.artist_id IS NOT NULL))))'
+  end
+
+  it "should allowing filtering by one_through_many associations with a single through table" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id]]
+    @c1.filter(:tag=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists WHERE ((albums_artists.album_id = 1234) AND (albums_artists.artist_id IS NOT NULL))))'
+  end
+
+  it "should allowing filtering by one_through_many associations with aliased tables" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums_artists, :id, :id], [:albums_artists, :album_id, :tag_id]]
+    @c1.filter(:tag=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.id = albums_artists.album_id) INNER JOIN albums_artists AS albums_artists_1 ON (albums_artists_1.album_id = albums_artists_0.id) WHERE ((albums_artists_1.tag_id = 1234) AND (albums_artists.artist_id IS NOT NULL))))'
+  end
+
+  it "should allowing filtering by one_through_many associations with composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
+    @c1.filter(:tag=>@c2.load(:h1=>1234, :h2=>85)).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE ((albums_tags.g1 = 1234) AND (albums_tags.g2 = 85) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2 IS NOT [...]
+  end
+
+  it "should allowing filtering by one_through_many associations with :conditions" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.filter(:tag=>@c2.load(:id=>1234)).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id = 1234))))"
+  end
+
+  it "should allowing filtering by one_through_many associations with :conditions with a single through table" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id]], :conditions=>{:name=>'A'}
+    @c1.filter(:tag=>@c2.load(:id=>1234)).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_artists ON (albums_artists.album_id = tags.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id = 1234))))"
+  end
+
+  it "should allowing filtering by one_through_many associations with :conditions and composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.filter(:tag=>@c2.load(:id=>1, :h1=>1234, :h2=>85)).sql.should == "SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albums_art [...]
+  end
+
+  it "should allowing filtering by one_through_many associations with :order" do
+    def (@c2.dataset).supports_distinct_on?; true end
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:name
+    @c1.filter(:tag=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id IS NOT NULL) AND ((albums_artists.artist_id, tags.id) IN (SELECT DISTINCT ON (albums_artists.artist_id) albums_artists.artist_id, tags.i [...]
+  end
+
+  it "should allowing filtering by one_through_many associations with :order and composite keys" do
+    def (@c2.dataset).supports_distinct_on?; true end
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :order=>:name
+    @c1.filter(:tag=>@c2.load(:id=>1, :h1=>1234, :h2=>85)).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((albums_artists.b1 IS NOT NU [...]
+  end
+
+  it "should allowing filtering by one_through_many associations with :order and :conditions" do
+    def (@c2.dataset).supports_distinct_on?; true end
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}, :order=>:name
+    @c1.filter(:tag=>@c2.load(:id=>1234)).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND ((albums_artists.artist_id, tags.id) IN (SELECT DISTINCT ON (albums_artists.artist_id) albums_artists. [...]
+  end
+
+  it "should allowing filtering by one_through_many associations with :order and :conditions and composite keys" do
+    def (@c2.dataset).supports_distinct_on?; true end
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}, :order=>:name
+    @c1.filter(:tag=>@c2.load(:id=>1, :h1=>1234, :h2=>85)).sql.should == "SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albums_art [...]
+  end
+
+  it "should allowing excluding by one_through_many associations" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.exclude(:tag=>@c2.load(:id=>1234)).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id = 1234) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))'
+  end
+
+  it "should allowing excluding by one_through_many associations with composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
+    @c1.exclude(:tag=>@c2.load(:h1=>1234, :h2=>85)).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE ((albums_tags.g1 = 1234) AND (albums_tags.g2 = 85) AND (albums_artists.b1 IS NOT NULL) AND (albums_artists.b2  [...]
+  end
+
+  it "should allowing excluding by one_through_many associations with :conditions" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.exclude(:tag=>@c2.load(:id=>1234)).sql.should == "SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id = 1234)))) OR (artists.id IS NULL))"
+  end
+
+  it "should allowing excluding by one_through_many associations with :conditions and composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.exclude(:tag=>@c2.load(:id=>1, :h1=>1234, :h2=>85)).sql.should == "SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albu [...]
+  end
+
+  it "should allowing filtering by multiple one_through_many associations" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.filter(:tag=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (1234, 2345)) AND (albums_artists.artist_id IS NOT NULL))))'
+  end
+
+  it "should allowing filtering by multiple one_through_many associations with composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
+    @c1.filter(:tag=>[@c2.load(:h1=>1234, :h2=>85), @c2.load(:h1=>2345, :h2=>95)]).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN ((1234, 85), (2345, 95))) AND (albums_artists. [...]
+  end
+
+  it "should allowing filtering by multiple one_through_many associations with :conditions" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.filter(:tag=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id IN (1234, 2345)))))"
+  end
+
+  it "should allowing filtering by multiple one_through_many associations with :conditions and composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.filter(:tag=>[@c2.load(:id=>1, :h1=>1234, :h2=>85), @c2.load(:id=>2, :h1=>2345, :h2=>95)]).sql.should == "SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums. [...]
+  end
+
+  it "should allowing excluding by multiple one_through_many associations" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.exclude(:tag=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (1234, 2345)) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))'
+  end
+
+  it "should allowing excluding by multiple one_through_many associations with composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
+    @c1.exclude(:tag=>[@c2.load(:h1=>1234, :h2=>85), @c2.load(:h1=>2345, :h2=>95)]).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN ((1234, 85), (2345, 95))) AND (albums_ar [...]
+  end
+
+  it "should allowing excluding by multiple one_through_many associations with :conditions" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.exclude(:tag=>[@c2.load(:id=>1234), @c2.load(:id=>2345)]).sql.should == "SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id IN (1234, 2345))))) OR (artists.id IS NULL))"
+  end
+
+  it "should allowing excluding by multiple one_through_many associations with :conditions and composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.exclude(:tag=>[@c2.load(:id=>1, :h1=>1234, :h2=>85), @c2.load(:id=>2, :h1=>2345, :h2=>95)]).sql.should == "SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = a [...]
+  end
+
+  it "should allowing filtering/excluding one_through_many associations with NULL values" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.filter(:tag=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'f\''
+    @c1.exclude(:tag=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'t\''
+  end
+
+  it "should allowing filtering by one_through_many association datasets" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.filter(:tag=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_artists.artist_id IS NOT NULL))))'
+  end
+
+  it "should allowing filtering by one_through_many association datasets with composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
+    @c1.filter(:tag=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN (SELECT tags.h1, tags.h2 FROM tags WHERE ((x = 1) AND (tags.h1 IS NOT NULL) AND (tags.h2  [...]
+  end
+
+  it "should allowing filtering by one_through_many association datasets with :conditions" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.filter(:tag=>@c2.filter(:x=>1)).sql.should == "SELECT * FROM artists WHERE (artists.id IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1))))))"
+  end
+
+  it "should allowing filtering by one_through_many association datasets with :conditions and composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.filter(:tag=>@c2.filter(:x=>1)).sql.should == "SELECT * FROM artists WHERE ((artists.id, artists.yyy) IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albums_artists.b1 IS NOT NULL [...]
+  end
+
+  it "should allowing excluding by one_through_many association datasets" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.exclude(:tag=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM albums_artists INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_artists.artist_id IS NOT NULL)))) OR (artists.id IS NULL))'
+  end
+
+  it "should allowing excluding by one_through_many association datasets with composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
+    @c1.exclude(:tag=>@c2.filter(:x=>1)).sql.should == 'SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM albums_artists INNER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) INNER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) WHERE (((albums_tags.g1, albums_tags.g2) IN (SELECT tags.h1, tags.h2 FROM tags WHERE ((x = 1) AND (tags.h1 IS NOT NULL) AND (ta [...]
+  end
+
+  it "should allowing excluding by one_through_many association datasets with :conditions" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:name=>'A'}
+    @c1.exclude(:tag=>@c2.filter(:x=>1)).sql.should == "SELECT * FROM artists WHERE ((artists.id NOT IN (SELECT albums_artists.artist_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((name = 'A') AND (albums_artists.artist_id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1)))))) OR (artists.id IS NULL))"
+  end
+
+  it "should allowing excluding by one_through_many association datasets with :conditions and composite keys" do
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy], :conditions=>{:name=>'A'}
+    @c1.exclude(:tag=>@c2.filter(:x=>1)).sql.should == "SELECT * FROM artists WHERE (((artists.id, artists.yyy) NOT IN (SELECT albums_artists.b1, albums_artists.b2 FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((name = 'A') AND (albums_artists.b1 IS NO [...]
+  end
+
+  it "should support a :conditions option" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:a=>32}
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((a = 32) AND (albums_artists.artist_id = 1234)) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>['a = ?', 42]
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((a = 42) AND (albums_artists.artist_id = 1234)) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+  
+  it "should support an :order option" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:blah
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) ORDER BY blah LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+  
+  it "should support an array for the :order option" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>[:blah1, :blah2]
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) ORDER BY blah1, blah2 LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+
+  it "should support a select option" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :select=>:blah
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT blah FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+  
+  it "should support an array for the select option" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :select=>[Sequel::SQL::ColumnAll.new(:tags), :albums__name]
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.*, albums.name FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+  
+  it "should accept a block" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] do |ds| ds.filter(:yyy=>@yyy) end
+    n = @c1.load(:id => 1234)
+    n.yyy = 85
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id = 1234) AND (yyy = 85)) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+
+  it "should allow the :order option while accepting a block" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:blah do |ds| ds.filter(:yyy=>@yyy) end
+    n = @c1.load(:id => 1234)
+    n.yyy = 85
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id = 1234) AND (yyy = 85)) ORDER BY blah LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+
+  it "should support a :dataset option that is used instead of the default" do
+    @c1.one_through_many :tag, [[:a, :b, :c]], :dataset=>proc{Tag.join(:albums_tags, [:tag_id]).join(:albums, [:album_id]).join(:albums_artists, [:album_id]).filter(:albums_artists__artist_id=>id)}
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags USING (tag_id) INNER JOIN albums USING (album_id) INNER JOIN albums_artists USING (album_id) WHERE (albums_artists.artist_id = 1234) LIMIT 1'
+    n.tag.should == @c2.load(:id=>1)
+  end
+
+  it "should support a :limit option to specify an offset" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :limit=>[nil, 10]
+    n = @c1.load(:id => 1234)
+    n.tag_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1 OFFSET 10'
+    n.tag.should == @c2.load(:id=>1)
+  end
+
+  it "should have the :eager option affect the _dataset method" do
+    @c2.many_to_many :fans
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager=>:fans
+    @c1.load(:id => 1234).tag_dataset.opts[:eager].should == {:fans=>nil}
+  end
+  
+  it "should return the associated object" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    @c1.load(:id => 1234).tag.should == @c2.load(:id=>1)
+    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1']
+  end
+
+  it "should populate cache when accessed" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    n = @c1.load(:id => 1234)
+    n.associations[:tag].should == nil
+    DB.sqls.should == []
+    n.tag.should == @c2.load(:id=>1)
+    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1']
+    n.associations[:tag].should == n.tag
+    DB.sqls.length.should == 0
+  end
+
+  it "should use cache if available" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    n = @c1.load(:id => 1234)
+    n.associations[:tag] = nil
+    n.tag.should == nil
+    DB.sqls.should == []
+  end
+
+  it "should not use cache if asked to reload" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    n = @c1.load(:id => 1234)
+    n.associations[:tag] = nil
+    DB.sqls.should == []
+    n.tag(true).should == @c2.load(:id=>1)
+    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1234) LIMIT 1']
+    n.associations[:tag].should == n.tag
+    DB.sqls.length.should == 0
+  end
+
+  it "should not add associations methods directly to class" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+    im = @c1.instance_methods.collect{|x| x.to_s}
+    im.should(include('tag'))
+    im.should(include('tag_dataset'))
+    im2 = @c1.instance_methods(false).collect{|x| x.to_s}
+    im2.should_not(include('tag'))
+    im2.should_not(include('tag_dataset'))
+  end
+
+  it "should support after_load association callback" do
+    h = []
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :after_load=>:al
+    @c1.class_eval do
+      self::Foo = h
+      def al(v)
+        model::Foo << v.pk * 20
+      end
+    end
+    @c2.dataset._fetch = [{:id=>20}]
+    p = @c1.load(:id=>10, :parent_id=>20)
+    p.tag
+    h.should == [400]
+    p.tag.pk.should == 20
+  end
+end
+
+describe "one_through_many eager loading methods" do
+  before do
+    class ::Artist < Sequel::Model
+      plugin :many_through_many
+      one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
+      one_through_many :other_tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>:Tag
+      one_through_many :album, [[:albums_artists, :artist_id, :album_id]]
+      one_through_many :artist, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]]
+    end
+    class ::Tag < Sequel::Model
+      plugin :many_through_many
+      one_through_many :track, [[:albums_tags, :tag_id, :album_id], [:albums, :id, :id]], :right_primary_key=>:album_id
+    end
+    class ::Album < Sequel::Model
+    end
+    class ::Track < Sequel::Model
+    end
+    Artist.dataset.columns(:id)._fetch = proc do |sql|
+      h = {:id => 1}
+      if sql =~ /FROM artists LEFT OUTER JOIN albums_artists/
+        h[:tag_id] = 2
+        h[:album_id] = 3 if sql =~ /LEFT OUTER JOIN albums AS album/
+        h[:track_id] = 4 if sql =~ /LEFT OUTER JOIN tracks AS track/
+        h[:other_tag_id] = 9 if sql =~ /other_tag\.id AS other_tag_id/
+        h[:artist_id] = 10 if sql =~ /artists_0\.id AS artist_id/
+      end
+      h
+    end
+    
+    Tag.dataset._fetch = proc do |sql|
+      h = {:id => 2}
+      if sql =~ /albums_artists.artist_id IN \(([18])\)/
+        h[:x_foreign_key_x] = $1.to_i 
+      elsif sql =~ /\(\(albums_artists.b1, albums_artists.b2\) IN \(\(1, 8\)\)\)/
+        h.merge!(:x_foreign_key_0_x=>1, :x_foreign_key_1_x=>8)
+      end
+      h[:tag_id] = h.delete(:id) if sql =~ /albums_artists.artist_id IN \(8\)/
+      h
+    end
+    
+    Album.dataset._fetch = proc do |sql|
+      h = {:id => 3}
+      h[:x_foreign_key_x] = 1 if sql =~ /albums_artists.artist_id IN \(1\)/
+      h
+    end
+    
+    Track.dataset._fetch = proc do |sql|
+      h = {:id => 4}
+      h[:x_foreign_key_x] = 2 if sql =~ /albums_tags.tag_id IN \(2\)/
+      h
+    end
+
+    @c1 = Artist
+    DB.reset
+  end
+  after do
+    [:Artist, :Tag, :Album, :Track].each{|x| Object.send(:remove_const, x)}
+  end
+  
+  it "should eagerly load a single one_through_many association" do
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists', 'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should eagerly load multiple associations in a single call" do
+    a = @c1.eager(:tag, :album).all
+    a.should == [@c1.load(:id=>1)]
+    sqls = DB.sqls
+    sqls.length.should == 3
+    sqls[0].should == 'SELECT * FROM artists'
+    sqls[1..-1].should(include('SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))'))
+    sqls[1..-1].should(include('SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))'))
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.album.should == Album.load(:id=>3)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should eagerly load multiple associations in separate" do
+    a = @c1.eager(:tag).eager(:album).all
+    a.should == [@c1.load(:id=>1)]
+    sqls = DB.sqls
+    sqls.length.should == 3
+    sqls[0].should == 'SELECT * FROM artists'
+    sqls[1..-1].should(include('SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))'))
+    sqls[1..-1].should(include('SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))'))
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.album.should == Album.load(:id=>3)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should allow cascading of eager loading for associations of associated models" do
+    a = @c1.eager(:tag=>:track).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))',
+      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE (albums_tags.tag_id IN (2))']
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.tag.track.should == Track.load(:id=>4)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should cascade eagerly loading when the :eager association option is used" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager=>:track
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))',
+      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE (albums_tags.tag_id IN (2))']
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.tag.track.should == Track.load(:id=>4)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should respect :eager when lazily loading an association" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager=>:track
+    a = @c1.load(:id=>1)
+    a.tag.should == Tag.load(:id=>2)
+    DB.sqls.should == ['SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1) LIMIT 1',
+      'SELECT tracks.*, albums_tags.tag_id AS x_foreign_key_x FROM tracks INNER JOIN albums ON (albums.id = tracks.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) WHERE (albums_tags.tag_id IN (2))']
+    a.tag.track.should == Track.load(:id=>4)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should raise error if attempting to eagerly load an association using :eager_graph option" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_graph=>:track
+    proc{@c1.eager(:tag).all}.should raise_error(Sequel::Error)
+  end
+  
+  it "should respect :eager_graph when lazily loading an association" do
+    Tag.dataset._fetch = {:id=>2, :track_id=>4}
+    Tag.dataset.extend(Module.new {
+      def columns
+        [:id]
+      end
+    })
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_graph=>:track
+    a = @c1.load(:id=>1)
+    a.tag
+    DB.sqls.should == [ 'SELECT tags.id, track.id AS track_id FROM (SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id = 1) LIMIT 1) AS tags LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tags.id) LEFT OUTER JOIN albums ON (albums.id = albums_tags_0.album_id) LEFT OUTER JOIN tracks [...]
+    a.tag.should == Tag.load(:id=>2)
+    a.tag.track.should == Track.load(:id=>4)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should respect :conditions when eagerly loading" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :conditions=>{:a=>32}
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((a = 32) AND (albums_artists.artist_id IN (1)))']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should respect :order when eagerly loading" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:blah, :eager_limit_strategy=>:ruby
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1)) ORDER BY blah']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should use the association's block when eager loading by default" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]] do |ds| ds.filter(:a) end
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (a AND (albums_artists.artist_id IN (1)))']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+
+  it "should use the :eager_block option when eager loading if given" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :eager_block=>proc{|ds| ds.filter(:b)} do |ds| ds.filter(:a) end
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (b AND (albums_artists.artist_id IN (1)))']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect the :limit option on a one_through_many association" do
+    @c1.one_through_many :second_tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1]
+    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>6}]
+    a = @c1.eager(:second_tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (1 = albums_artists.artist_id) LIMIT 1 OFFSET 1) AS t1']
+    a.first.second_tag.should == Tag.load(:id=>6)
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect the :limit option on a one_through_many association using the :ruby strategy" do
+    @c1.one_through_many :second_tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1], :eager_limit_strategy=>:ruby
+    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5}, {:x_foreign_key_x=>1, :id=>6}]
+    a = @c1.eager(:second_tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
+    a.first.second_tag.should == Tag.load(:id=>6)
+    DB.sqls.length.should == 0
+  end
+
+  it "should eagerly load a single one_through_many association using the :distinct_on strategy" do
+    Tag.dataset.meta_def(:supports_distinct_on?){true}
+    @c1.one_through_many :second_tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :order=>:name, :eager_limit_strategy=>:distinct_on
+    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5}]
+    a = @c1.eager(:second_tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists', "SELECT DISTINCT ON (albums_artists.artist_id) tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1)) ORDER BY albums_artists.artist_id, name"]
+    a.first.second_tag.should == Tag.load(:id=>5)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should eagerly load a single one_through_many association using the :window_function strategy" do
+    Tag.dataset.meta_def(:supports_window_functions?){true}
+    @c1.one_through_many :second_tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1], :order=>:name, :eager_limit_strategy=>:window_function
+    Tag.dataset._fetch = [{:x_foreign_key_x=>1, :id=>5}]
+    a = @c1.eager(:second_tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))) AS t1 WHERE (x_sequel_row_number_x = 2)']
+    a.first.second_tag.should == Tag.load(:id=>5)
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect the :limit option on a one_through_many association with composite primary keys on the main table" do
+    Tag.dataset.meta_def(:supports_window_functions?){true}
+    @c1.set_primary_key([:id1, :id2])
+    @c1.columns :id1, :id2
+
+    @c1.one_through_many :second_tag, [[:albums_artists, [:artist_id1, :artist_id2], :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1], :order=>:name
+    ds = @c1.eager(:second_tag)
+    ds._fetch = {:id1=>1, :id2=>2}
+    Tag.dataset._fetch = [{:x_foreign_key_0_x=>1, :x_foreign_key_1_x=>2, :id=>5}]
+    a = ds.all
+    a.should == [@c1.load(:id1=>1, :id2=>2)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((1 = albums_artists.artist_id1) AND (2 = albums_artists.artist_id2)) ORDER BY name LIMIT 1 OFFSET 1) AS t1']
+    a.first.second_tag.should == Tag.load(:id=>5)
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect the :limit option on a one_through_many association with composite primary keys on the main table using a :window_function strategy" do
+    Tag.dataset.meta_def(:supports_window_functions?){true}
+    @c1.set_primary_key([:id1, :id2])
+    @c1.columns :id1, :id2
+
+    @c1.one_through_many :second_tag, [[:albums_artists, [:artist_id1, :artist_id2], :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :class=>Tag, :limit=>[nil,1], :order=>:name, :eager_limit_strategy=>:window_function
+    ds = @c1.eager(:second_tag)
+    ds._fetch = {:id1=>1, :id2=>2}
+    Tag.dataset._fetch = [{:x_foreign_key_0_x=>1, :x_foreign_key_1_x=>2, :id=>5}]
+    a = ds.all
+    a.should == [@c1.load(:id1=>1, :id2=>2)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT * FROM (SELECT tags.*, albums_artists.artist_id1 AS x_foreign_key_0_x, albums_artists.artist_id2 AS x_foreign_key_1_x, row_number() OVER (PARTITION BY albums_artists.artist_id1, albums_artists.artist_id2 ORDER BY name) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE ((albums_artists.artist_id1, albums_ar [...]
+    a.first.second_tag.should == Tag.load(:id=>5)
+    DB.sqls.length.should == 0
+  end
+
+  it "should raise an error when attempting to eagerly load an association with the :allow_eager option set to false" do
+    proc{@c1.eager(:tag).all}.should_not raise_error
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :allow_eager=>false
+    proc{@c1.eager(:tag).all}.should raise_error(Sequel::Error)
+  end
+
+  it "should respect the association's :select option" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :select=>:tags__name
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.name, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect one_through_many association's :left_primary_key and :right_primary_key options" do
+    @c1.send(:define_method, :yyy){values[:yyy]}
+    @c1.dataset._fetch = {:id=>1, :yyy=>8}
+    @c1.dataset.meta_def(:columns){[:id, :yyy]}
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :left_primary_key=>:yyy, :right_primary_key=>:tag_id
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1, :yyy=>8)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.tag_id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (8))']
+    a.first.tag.should == Tag.load(:tag_id=>2)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should handle composite keys" do
+    @c1.send(:define_method, :yyy){values[:yyy]}
+    @c1.dataset._fetch = {:id=>1, :yyy=>8}
+    @c1.dataset.meta_def(:columns){[:id, :yyy]}
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:h1, :h2], :left_primary_key=>[:id, :yyy]
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>1, :yyy=>8)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.b1 AS x_foreign_key_0_x, albums_artists.b2 AS x_foreign_key_1_x FROM tags INNER JOIN albums_tags ON ((albums_tags.g1 = tags.h1) AND (albums_tags.g2 = tags.h2)) INNER JOIN albums ON ((albums.e1 = albums_tags.f1) AND (albums.e2 = albums_tags.f2)) INNER JOIN albums_artists ON ((albums_artists.c1 = albums.d1) AND (albums_artists.c2 = albums.d2)) WHERE ((albums_artists.b1, albums_artists.b2) IN ((1, 8)))']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect :after_load callbacks on associations when eager loading" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :after_load=>lambda{|o, a| o[:id] *= 2; a[:id] *= 3}
+    a = @c1.eager(:tag).all
+    a.should == [@c1.load(:id=>2)]
+    DB.sqls.should == ['SELECT * FROM artists',
+      'SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
+    a.first.tag.should == Tag.load(:id=>6)
+    DB.sqls.length.should == 0
+  end
+    
+  it "should support association_join" do
+    @c1.association_join(:tag).sql.should == "SELECT * FROM artists INNER JOIN albums_artists ON (albums_artists.artist_id = artists.id) INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)"
+  end
+
+  it "should eagerly graph a single one_through_many association" do
+    a = @c1.eager_graph(:tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+
+  it "should eagerly graph a single one_through_many association using the :distinct_on strategy" do
+    def (Tag.dataset).supports_distinct_on?() true end
+    ds = @c1.eager_graph_with_options(:tag, :limit_strategy=>true)
+    ds._fetch = {:id=>1, :tag_id=>2}
+    a = ds.all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN (SELECT DISTINCT ON (albums_artists.artist_id) tags.*, albums_artists.artist_id AS x_foreign_key_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) ORDER BY albums_artists.artist_id) AS tag ON (tag.x_foreign_key_x = artists.id)']
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should eagerly graph a single one_through_many association using the :window_function strategy" do
+    def (Tag.dataset).supports_window_functions?() true end
+    def (Tag.dataset).columns() literal(opts[:select]) =~ /x_foreign_key_x/ ? [:id, :x_foreign_key_x] : [:id] end
+    ds = @c1.eager_graph_with_options(:tag, :limit_strategy=>true)
+    ds._fetch = {:id=>1, :tag_id=>2}
+    a = ds.all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN (SELECT id, x_foreign_key_x FROM (SELECT tags.*, albums_artists.artist_id AS x_foreign_key_x, row_number() OVER (PARTITION BY albums_artists.artist_id) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) INNER JOIN albums_artists ON (albums_artists.album_id = albums.id)) AS t1 WHERE (x_sequel_row_number_x  [...]
+    a.first.tag.should == Tag.load(:id=>2)
+    DB.sqls.length.should == 0
+  end
+
+  it "should eagerly graph multiple associations in a single call" do 
+    a = @c1.eager_graph(:tag, :album).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id, album.id AS album_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS album ON ( [...]
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.album.should == Album.load(:id=>3)
+    DB.sqls.length.should == 0
+  end
+
+  it "should eagerly graph multiple associations in separate calls" do 
+    a = @c1.eager_graph(:tag).eager_graph(:album).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id, album.id AS album_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS album ON ( [...]
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.album.should == Album.load(:id=>3)
+    DB.sqls.length.should == 0
+  end
+
+  it "should allow cascading of eager graphing for associations of associated models" do
+    a = @c1.eager_graph(:tag=>:track).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id, track.id AS track_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tag.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = [...]
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.tag.track.should == Track.load(:id=>4)
+    DB.sqls.length.should == 0
+  end
+  
+  it "should eager graph multiple associations from the same table" do
+    a = @c1.eager_graph(:tag, :other_tag).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id, other_tag.id AS other_tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS al [...]
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.other_tag.should == Tag.load(:id=>9)
+    DB.sqls.length.should == 0
+  end
+
+  it "should eager graph a self_referential association" do
+    ds = @c1.eager_graph(:tag, :artist)
+    ds._fetch = {:id=>1, :tag_id=>2, :artist_id=>10}
+    a = ds.all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id, artist.id AS artist_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS albums_0 [...]
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.artist.should == @c1.load(:id=>10)
+    DB.sqls.length.should == 0
+  end
+
+  it "should be able to use eager and eager_graph together" do
+    a = @c1.eager_graph(:tag).eager(:album).all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)',
+      'SELECT albums.*, albums_artists.artist_id AS x_foreign_key_x FROM albums INNER JOIN albums_artists ON (albums_artists.album_id = albums.id) WHERE (albums_artists.artist_id IN (1))']
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.album.should == Album.load(:id=>3)
+    DB.sqls.length.should == 0
+  end
+
+  it "should handle no associated records when eagerly graphing a single one_through_many association" do
+    ds = @c1.eager_graph(:tag)
+    ds._fetch = {:id=>1, :tag_id=>nil}
+    a = ds.all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)']
+    a.first.tag.should == nil
+    DB.sqls.length.should == 0
+  end
+
+  it "should handle no associated records when eagerly graphing multiple one_through_many associations" do
+    ds = @c1.eager_graph(:tag, :album)
+    ds._fetch = [{:id=>1, :tag_id=>5, :album_id=>6}, {:id=>7, :tag_id=>nil, :albums_0_id=>nil}]
+    a = ds.all
+    a.should == [@c1.load(:id=>1), @c1.load(:id=>7)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id, album.id AS album_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.artist_id = artists.id) LEFT OUTER JOIN albums AS album ON ( [...]
+    a.first.tag.should == Tag.load(:id=>5)
+    a.first.album.should == Album.load(:id=>6)
+    a.last.tag.should == nil
+    a.last.album.should == nil
+    DB.sqls.length.should == 0
+  end
+
+  it "should handle missing associated records when cascading eager graphing for associations of associated models" do
+    ds = @c1.eager_graph(:tag=>:track)
+    ds._fetch = [{:id=>1, :tag_id=>2, :track_id=>nil}, {:id=>2, :tag_id=>nil, :tracks_id=>nil}]
+    a = ds.all
+    a.should == [@c1.load(:id=>1), @c1.load(:id=>2)]
+    DB.sqls.should == ['SELECT artists.id, tag.id AS tag_id, track.id AS track_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) LEFT OUTER JOIN albums_tags AS albums_tags_0 ON (albums_tags_0.tag_id = tag.id) LEFT OUTER JOIN albums AS albums_0 ON (albums_0.id = [...]
+    a.last.tag.should == nil
+    a = a.first
+    a.tag.should == Tag.load(:id=>2)
+    a.tag.track.should == nil
+    DB.sqls.length.should == 0
+  end
+
+  it "eager graphing should respect :left_primary_key and :right_primary_key options" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :left_primary_key=>:yyy, :right_primary_key=>:tag_id
+    @c1.dataset.meta_def(:columns){[:id, :yyy]}
+    Tag.dataset.meta_def(:columns){[:id, :tag_id]}
+    ds = @c1.eager_graph(:tag)
+    ds._fetch = {:id=>1, :yyy=>8, :tag_id=>2, :tag_tag_id=>4}
+    a = ds.all
+    a.should == [@c1.load(:id=>1, :yyy=>8)]
+    DB.sqls.should == ['SELECT artists.id, artists.yyy, tag.id AS tag_id, tag.tag_id AS tag_tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.yyy) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.tag_id = albums_tags.tag_id)']
+    a.first.tag.should == Tag.load(:id=>2, :tag_id=>4)
+    DB.sqls.length.should == 0
+  end
+  
+  it "eager graphing should respect composite keys" do 
+    @c1.one_through_many :tag, [[:albums_artists, [:b1, :b2], [:c1, :c2]], [:albums, [:d1, :d2], [:e1, :e2]], [:albums_tags, [:f1, :f2], [:g1, :g2]]], :right_primary_key=>[:id, :tag_id], :left_primary_key=>[:id, :yyy]
+    @c1.dataset.meta_def(:columns){[:id, :yyy]}
+    Tag.dataset.meta_def(:columns){[:id, :tag_id]}
+    ds = @c1.eager_graph(:tag)
+    ds._fetch = {:id=>1, :yyy=>8, :tag_id=>2, :tag_tag_id=>4}
+    a = ds.all
+    a.should == [@c1.load(:id=>1, :yyy=>8)]
+    DB.sqls.should == ['SELECT artists.id, artists.yyy, tag.id AS tag_id, tag.tag_id AS tag_tag_id FROM artists LEFT OUTER JOIN albums_artists ON ((albums_artists.b1 = artists.id) AND (albums_artists.b2 = artists.yyy)) LEFT OUTER JOIN albums ON ((albums.d1 = albums_artists.c1) AND (albums.d2 = albums_artists.c2)) LEFT OUTER JOIN albums_tags ON ((albums_tags.f1 = albums.e1) AND (albums_tags.f2 = albums.e2)) LEFT OUTER JOIN tags AS tag ON ((tag.id = albums_tags.g1) AND (tag.tag_id = albums [...]
+    a.first.tag.should == Tag.load(:id=>2, :tag_id=>4)
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect the association's :graph_select option" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :graph_select=>:b
+    ds = @c1.eager_graph(:tag)
+    ds._fetch = {:id=>1, :b=>2}
+    a = ds.all
+    a.should == [@c1.load(:id=>1)]
+    DB.sqls.should == ['SELECT artists.id, tag.b FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)']
+    a.first.tag.should == Tag.load(:b=>2)
+    DB.sqls.length.should == 0
+  end
+
+  it "should respect the association's :graph_join_type option" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]], :graph_join_type=>:inner
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists INNER JOIN albums_artists ON (albums_artists.artist_id = artists.id) INNER JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)'
+  end
+
+  it "should respect the association's :join_type option on through" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id, :join_type=>:natural}, [:albums_tags, :album_id, :tag_id]], :graph_join_type=>:inner
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists INNER JOIN albums_artists ON (albums_artists.artist_id = artists.id) NATURAL JOIN albums ON (albums.id = albums_artists.album_id) INNER JOIN albums_tags ON (albums_tags.album_id = albums.id) INNER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)'
+  end
+
+  it "should respect the association's :conditions option" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :conditions=>{:a=>32}
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON ((tag.id = albums_tags.tag_id) AND (tag.a = 32))'
+  end
+
+  it "should respect the association's :graph_conditions option" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :graph_conditions=>{:a=>42}
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON ((tag.id = albums_tags.tag_id) AND (tag.a = 42))'
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :graph_conditions=>{:a=>42}, :conditions=>{:a=>32}
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON ((tag.id = albums_tags.tag_id) AND (tag.a = 42))'
+  end
+
+  it "should respect the association's :conditions option on through" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id, :conditions=>{:a=>42}}, [:albums_tags, :album_id, :tag_id]]
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON ((albums.id = albums_artists.album_id) AND (albums.a = 42)) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)'
+  end
+
+  it "should respect the association's :graph_block option" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :graph_block=>proc{|ja,lja,js| {Sequel.qualify(ja, :active)=>true}}
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON ((tag.id = albums_tags.tag_id) AND (tag.active IS TRUE))'
+  end
+
+  it "should respect the association's :block option on through" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id, :block=>proc{|ja,lja,js| {Sequel.qualify(ja, :active)=>true}}}, [:albums_tags, :album_id, :tag_id]]
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON ((albums.id = albums_artists.album_id) AND (albums.active IS TRUE)) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)'
+  end
+
+  it "should respect the association's :graph_only_conditions option" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :graph_only_conditions=>{:a=>32}
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.a = 32)'
+  end
+
+  it "should respect the association's :only_conditions option on through" do 
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id, :only_conditions=>{:a=>42}}, [:albums_tags, :album_id, :tag_id]]
+    @c1.eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.a = 42) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id)'
+  end
+
+  it "should create unique table aliases for all associations" do
+    @c1.eager_graph(:artist=>{:artist=>:artist}).sql.should == "SELECT artists.id, artist.id AS artist_id, artist_0.id AS artist_0_id, artist_1.id AS artist_1_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_artists AS albums_artists_0 ON (albums_artists_0.album_id = albums.id) LEFT OUTER JOIN artists AS artist ON (artist.id = albums_artists_0.artist_id) LEFT OU [...]
+  end
+
+  it "should respect the association's :order" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :order=>[:blah1, :blah2]
+    @c1.order(:artists__blah2, :artists__blah3).eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) ORDER BY artists.blah2, artists.blah3, tag.blah1, tag.blah2'
+  end
+
+  it "should only qualify unqualified symbols, identifiers, or ordered versions in association's :order" do
+    @c1.one_through_many :tag, [[:albums_artists, :artist_id, :album_id], {:table=>:albums, :left=>:id, :right=>:id}, [:albums_tags, :album_id, :tag_id]], :order=>[Sequel.identifier(:blah__id), Sequel.identifier(:blah__id).desc, Sequel.desc(:blah__id), :blah__id, :album_id, Sequel.desc(:album_id), 1, Sequel.lit('RANDOM()'), Sequel.qualify(:b, :a)]
+    @c1.order(:artists__blah2, :artists__blah3).eager_graph(:tag).sql.should == 'SELECT artists.id, tag.id AS tag_id FROM artists LEFT OUTER JOIN albums_artists ON (albums_artists.artist_id = artists.id) LEFT OUTER JOIN albums ON (albums.id = albums_artists.album_id) LEFT OUTER JOIN albums_tags ON (albums_tags.album_id = albums.id) LEFT OUTER JOIN tags AS tag ON (tag.id = albums_tags.tag_id) ORDER BY artists.blah2, artists.blah3, tag.blah__id, tag.blah__id DESC, blah.id DESC, blah.id, ta [...]
+  end
+end
diff --git a/spec/extensions/migration_spec.rb b/spec/extensions/migration_spec.rb
index a25c1dc..4e184a9 100644
--- a/spec/extensions/migration_spec.rb
+++ b/spec/extensions/migration_spec.rb
@@ -60,8 +60,8 @@ describe "Migration.apply" do
 
   specify "should respond to the methods the database responds to" do
     m = Sequel::Migration.new(Sequel.mock)
-    m.respond_to?(:foo).should be_false
-    m.respond_to?(:execute).should be_true
+    m.respond_to?(:foo).should == false
+    m.respond_to?(:execute).should == true
   end if RUBY_VERSION >= '1.9'
 end
 
@@ -292,16 +292,16 @@ describe "Sequel::IntegerMigrator" do
   end
 
   specify "should automatically create the schema_info table with the version column" do
-    @db.table_exists?(:schema_info).should be_false
+    @db.table_exists?(:schema_info).should == false
     Sequel::Migrator.run(@db, @dirname, :target=>0)
-    @db.table_exists?(:schema_info).should be_true
+    @db.table_exists?(:schema_info).should == true
     @db.dataset.columns.should == [:version]
   end
 
   specify "should allow specifying the table and columns" do
-    @db.table_exists?(:si).should be_false
+    @db.table_exists?(:si).should == false
     Sequel::Migrator.run(@db, @dirname, :target=>0, :table=>:si, :column=>:sic)
-    @db.table_exists?(:si).should be_true
+    @db.table_exists?(:si).should == true
     @db.dataset.columns.should == [:sic]
   end
   
@@ -313,9 +313,9 @@ describe "Sequel::IntegerMigrator" do
   end
   
   specify "should be able to tell whether there are outstanding migrations" do
-    Sequel::Migrator.is_current?(@db, @dirname).should be_false
+    Sequel::Migrator.is_current?(@db, @dirname).should == false
     Sequel::Migrator.apply(@db, @dirname)
-    Sequel::Migrator.is_current?(@db, @dirname).should be_true
+    Sequel::Migrator.is_current?(@db, @dirname).should == true
   end 
 
   specify "should have #check_current raise an exception if the migrator is not current" do
@@ -474,31 +474,31 @@ describe "Sequel::TimestampMigrator" do
   specify "should handle migrating up or down all the way" do
     @dir = 'spec/files/timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb'
     @m.apply(@db, @dir, 0)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
   specify "should handle migrating up or down to specific timestamps" do
     @dir = 'spec/files/timestamped_migrations'
     @m.apply(@db, @dir, 1273253851)
-    [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    @db.table_exists?(:sm3333).should be_false
+    [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    @db.table_exists?(:sm3333).should == false
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb'
     @m.apply(@db, @dir, 1273253849)
-    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
-    @db.table_exists?(:sm1111).should be_true
+    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
+    @db.table_exists?(:sm1111).should == true
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb'
   end
 
   specify "should not be current when there are migrations to apply" do
     @dir = 'spec/files/timestamped_migrations'
     @m.apply(@db, @dir)
-    @m.is_current?(@db, @dir).should be_true
+    @m.is_current?(@db, @dir).should == true
     @dir = 'spec/files/interleaved_timestamped_migrations'
-    @m.is_current?(@db, @dir).should be_false
+    @m.is_current?(@db, @dir).should == false
   end
 
   specify "should raise an exception if the migrator is not current" do
@@ -514,7 +514,7 @@ describe "Sequel::TimestampMigrator" do
     @m.apply(@db, @dir)
     @dir = 'spec/files/interleaved_timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253850_create_artists.rb 1273253851_create_nodes.rb 1273253852_create_albums.rb 1273253853_3_create_users.rb'
   end
 
@@ -523,7 +523,7 @@ describe "Sequel::TimestampMigrator" do
     @m.apply(@db, @dir)
     @dir = 'spec/files/interleaved_timestamped_migrations'
     @m.apply(@db, @dir, 0)
-    [:sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
@@ -532,113 +532,113 @@ describe "Sequel::TimestampMigrator" do
     @m.apply(@db, @dir)
     @dir = 'spec/files/interleaved_timestamped_migrations'
     @m.apply(@db, @dir, 1273253851)
-    [:schema_migrations, :sm1111, :sm1122, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    [:sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_migrations, :sm1111, :sm1122, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    [:sm2233, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253850_create_artists.rb 1273253851_create_nodes.rb'
   end
 
   specify "should correctly update schema_migrations table when an error occurs when migrating up or down" do
     @dir = 'spec/files/bad_timestamped_migrations'
     proc{@m.apply(@db, @dir)}.should raise_error
-    [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    @db.table_exists?(:sm3333).should be_false
+    [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    @db.table_exists?(:sm3333).should == false
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb'
     proc{@m.apply(@db, @dir, 0)}.should raise_error
-    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
-    @db.table_exists?(:sm1111).should be_true
+    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
+    @db.table_exists?(:sm1111).should == true
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb'
   end
 
   specify "should handle multiple migrations with the same timestamp correctly" do
     @dir = 'spec/files/duplicate_timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253853_create_nodes.rb 1273253853_create_users.rb'
     @m.apply(@db, @dir, 1273253853)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253853_create_nodes.rb 1273253853_create_users.rb'
     @m.apply(@db, @dir, 1273253849)
-    [:sm1111].each{|n| @db.table_exists?(n).should be_true}
-    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111].each{|n| @db.table_exists?(n).should == true}
+    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb'
     @m.apply(@db, @dir, 1273253848)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
   specify "should convert schema_info table to schema_migrations table" do
     @dir = 'spec/files/integer_migrations'
     @m.apply(@db, @dir)
-    [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
 
     @dir = 'spec/files/convert_to_timestamp_migrations'
     @m.apply(@db, @dir)
-    [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb 1273253852_create_albums.rb'
 
     @m.apply(@db, @dir, 4)
-    [:schema_info, :schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
-    [:sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
+    [:sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb'
 
     @m.apply(@db, @dir, 0)
-    [:schema_info, :schema_migrations].each{|n| @db.table_exists?(n).should be_true}
-    [:sm1111, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :schema_migrations].each{|n| @db.table_exists?(n).should == true}
+    [:sm1111, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
   specify "should handle unapplied migrations when migrating schema_info table to schema_migrations table" do
     @dir = 'spec/files/integer_migrations'
     @m.apply(@db, @dir, 2)
-    [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
 
     @dir = 'spec/files/convert_to_timestamp_migrations'
     @m.apply(@db, @dir, 1273253850)
-    [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122].each{|n| @db.table_exists?(n).should be_true}
-    [:sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122].each{|n| @db.table_exists?(n).should == true}
+    [:sm2233].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb'
   end
 
   specify "should handle unapplied migrations when migrating schema_info table to schema_migrations table and target is less than last integer migration version" do
     @dir = 'spec/files/integer_migrations'
     @m.apply(@db, @dir, 1)
-    [:schema_info, :sm1111].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
 
     @dir = 'spec/files/convert_to_timestamp_migrations'
     @m.apply(@db, @dir, 2)
-    [:schema_info, :sm1111, :sm2222, :schema_migrations].each{|n| @db.table_exists?(n).should be_true}
-    [:sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111, :sm2222, :schema_migrations].each{|n| @db.table_exists?(n).should == true}
+    [:sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb'
 
     @m.apply(@db, @dir)
-    [:schema_info, :sm1111, :sm2222, :schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_info, :sm1111, :sm2222, :schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb 1273253852_create_albums.rb'
   end
 
   specify "should raise error for applied migrations not in file system" do
     @dir = 'spec/files/timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb'
 
     @dir = 'spec/files/missing_timestamped_migrations'
     proc{@m.apply(@db, @dir, 0)}.should raise_error(Sequel::Migrator::Error)
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb'
   end
   
   specify "should not raise error for applied migrations not in file system if :allow_missing_migration_files is true" do
     @dir = 'spec/files/timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb'
 
     @dir = 'spec/files/missing_timestamped_migrations'
     proc{@m.run(@db, @dir, :allow_missing_migration_files => true)}.should_not raise_error
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb'
   end
   
@@ -651,21 +651,21 @@ describe "Sequel::TimestampMigrator" do
   specify "should handle migration filenames in a case insensitive manner" do
     @dir = 'spec/files/uppercase_timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb'
     @dir = 'spec/files/timestamped_migrations'
     @m.apply(@db, @dir, 0)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
   specify "should :table and :column options" do
     @dir = 'spec/files/timestamped_migrations'
     @m.run(@db, @dir, :table=>:sm, :column=>:fn)
-    [:sm, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:sm, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:sm].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb'
     @m.run(@db, @dir, :target=>0, :table=>:sm, :column=>:fn)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:sm].select_order_map(:fn).should == []
   end
 
diff --git a/spec/extensions/mssql_optimistic_locking_spec.rb b/spec/extensions/mssql_optimistic_locking_spec.rb
new file mode 100644
index 0000000..f294365
--- /dev/null
+++ b/spec/extensions/mssql_optimistic_locking_spec.rb
@@ -0,0 +1,91 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper')
+
+describe "MSSSQL optimistic locking plugin" do
+  before do
+    @db = Sequel.mock(:host=>'mssql')
+    @c = Class.new(Sequel::Model(@db[:items]))
+    @c.columns :id, :name, :timestamp
+    @c.plugin :mssql_optimistic_locking
+    @o = @c.load(:id=>1, :name=>'a', :timestamp=>'1234')
+    @db.sqls
+  end
+
+  it "should not include the lock column when updating" do
+    @db.fetch = [[{:timestamp=>'2345'}]]
+    @o.save
+    @db.sqls.should == ["UPDATE TOP (1) items SET name = 'a' OUTPUT inserted.timestamp WHERE ((id = 1) AND (timestamp = 0x31323334))"]
+  end
+
+  it "should automatically update lock column using new value from database" do
+    @db.fetch = [[{:timestamp=>'2345'}]]
+    @o.save
+    @o.timestamp.should == '2345'
+  end
+
+  it "should raise error when updating stale object" do
+    @db.fetch = []
+    @o.timestamp = '2345'
+    proc{@o.save}.should raise_error(Sequel::NoExistingObject)
+    @o.timestamp.should == '2345'
+    @db.sqls.should == ["UPDATE TOP (1) items SET name = 'a' OUTPUT inserted.timestamp WHERE ((id = 1) AND (timestamp = 0x32333435))"]
+  end
+
+  it "should raise error when destroying stale object" do
+    @db.numrows = 0
+    @o.timestamp = '2345'
+    proc{@o.destroy}.should raise_error(Sequel::NoExistingObject)
+    @db.sqls.should == ["DELETE FROM items WHERE ((id = 1) AND (timestamp = 0x32333435))"]
+  end
+
+  it "should allow refresh after failed save" do
+    @db.fetch = []
+    @o.timestamp = '2345'
+    proc{@o.save}.should raise_error(Sequel::NoExistingObject)
+    @db.fetch = {:id=>1, :name=>'a', :timestamp=>'2345'}
+    @o.refresh
+    @db.sqls
+    @o.save
+    @db.sqls.should == ["UPDATE TOP (1) items SET name = 'a' OUTPUT inserted.timestamp WHERE ((id = 1) AND (timestamp = 0x32333435))"]
+  end
+
+  specify "should allow changing the lock column via model.lock_column=" do
+    @c = Class.new(Sequel::Model(@db[:items]))
+    @c.columns :id, :name, :lv
+    @c.plugin :mssql_optimistic_locking
+    @c.lock_column = :lv
+    @o = @c.load(:id=>1, :name=>'a', :lv=>'1234')
+    @db.sqls
+    @db.fetch = []
+    proc{@o.save}.should raise_error(Sequel::NoExistingObject)
+    @o.lv.should == '1234'
+    @db.sqls.should == ["UPDATE TOP (1) items SET name = 'a' OUTPUT inserted.lv WHERE ((id = 1) AND (lv = 0x31323334))"]
+    @o = @c.load(:id=>1, :name=>'a', :lv=>'1234')
+    @db.fetch = {:lv=>'2345'}
+    @o.save
+    @o.lv.should == '2345'
+  end
+
+  specify "should allow changing the lock column via plugin option" do
+    @c = Class.new(Sequel::Model(@db[:items]))
+    @c.columns :id, :name, :lv
+    @c.plugin :mssql_optimistic_locking, :lock_column=>:lv
+    @o = @c.load(:id=>1, :name=>'a', :lv=>'1234')
+    @db.sqls
+    @db.fetch = []
+    proc{@o.save}.should raise_error(Sequel::NoExistingObject)
+    @o.lv.should == '1234'
+    @db.sqls.should == ["UPDATE TOP (1) items SET name = 'a' OUTPUT inserted.lv WHERE ((id = 1) AND (lv = 0x31323334))"]
+    @o = @c.load(:id=>1, :name=>'a', :lv=>'1234')
+    @db.fetch = {:lv=>'2345'}
+    @o.save
+    @o.lv.should == '2345'
+  end
+
+  specify "should work when subclassing" do
+    c = Class.new(@c)
+    o = c.load(:id=>1, :name=>'a', :timestamp=>'1234')
+    @db.fetch = [[{:timestamp=>'2345'}]]
+    o.save
+    @db.sqls.should == ["UPDATE TOP (1) items SET name = 'a' OUTPUT inserted.timestamp WHERE ((id = 1) AND (timestamp = 0x31323334))"]
+  end
+end
diff --git a/spec/extensions/nested_attributes_spec.rb b/spec/extensions/nested_attributes_spec.rb
index dff2876..4832ba8 100644
--- a/spec/extensions/nested_attributes_spec.rb
+++ b/spec/extensions/nested_attributes_spec.rb
@@ -5,7 +5,7 @@ describe "NestedAttributes plugin" do
     if should.is_a?(Array)
       should.should include(is)
     else
-      should.should == is
+      is.should == should
     end
   end
 
@@ -71,6 +71,42 @@ describe "NestedAttributes plugin" do
       ["INSERT INTO albums (artist_id, name) VALUES (1, 'Al')", "INSERT INTO albums (name, artist_id) VALUES ('Al', 1)"])
   end
   
+  it "should support creating new one_to_many and one_to_one objects with presence validations on the foreign key" do
+    @Album.class_eval do
+      plugin :validation_helpers
+      def validate
+        validates_presence :artist_id
+        super
+      end
+    end
+    a = @Artist.new({:name=>'Ar', :albums_attributes=>[{:name=>'Al'}]})
+    @db.sqls.should == []
+    a.save
+    check_sql_array("INSERT INTO artists (name) VALUES ('Ar')",
+      ["INSERT INTO albums (artist_id, name) VALUES (1, 'Al')", "INSERT INTO albums (name, artist_id) VALUES ('Al', 1)"])
+
+    a = @Artist.new(:name=>'Ar')
+    a.id = 1
+    a.first_album_attributes = {:name=>'Al'}
+    @db.sqls.should == []
+    a.save
+    check_sql_array(["INSERT INTO artists (name, id) VALUES ('Ar', 1)", "INSERT INTO artists (id, name) VALUES (1, 'Ar')"],
+      "UPDATE albums SET artist_id = NULL WHERE (artist_id = 1)",
+      ["INSERT INTO albums (artist_id, name) VALUES (1, 'Al')", "INSERT INTO albums (name, artist_id) VALUES ('Al', 1)"])
+  end
+
+  it "should should not remove existing values from object when validating" do
+    @Artist.one_to_one :first_album, :class=>@Album, :key=>:id
+    @Artist.nested_attributes :first_album
+    @db.fetch = {:id=>1}
+    a = @Artist.load(:id=>1)
+    a.set(:first_album_attributes=>{:id=>1, :name=>'Ar'})
+    a.first_album.values.should == {:id=>1, :name=>'Ar'}
+    @db.sqls.should == ["SELECT * FROM albums WHERE (albums.id = 1) LIMIT 1"]
+    a.save_changes
+    check_sql_array("UPDATE albums SET name = 'Ar' WHERE (id = 1)")
+  end
+
   it "should support creating new many_to_many objects" do
     a = @Album.new({:name=>'Al', :tags_attributes=>[{:name=>'T'}]})
     @db.sqls.should == []
diff --git a/spec/extensions/pagination_spec.rb b/spec/extensions/pagination_spec.rb
index ed81228..aa6b606 100644
--- a/spec/extensions/pagination_spec.rb
+++ b/spec/extensions/pagination_spec.rb
@@ -62,20 +62,20 @@ describe "A paginated dataset" do
   end
 
   specify "should know if current page is last page" do
-    @paginated.last_page?.should be_false
-    @d.paginate(2, 20).last_page?.should be_false
-    @d.paginate(5, 30).last_page?.should be_false
-    @d.paginate(6, 30).last_page?.should be_true
+    @paginated.last_page?.should == false
+    @d.paginate(2, 20).last_page?.should == false
+    @d.paginate(5, 30).last_page?.should == false
+    @d.paginate(6, 30).last_page?.should == true
 
     @d.meta_def(:count) {0}
-    @d.paginate(1, 30).last_page?.should be_true
-    @d.paginate(2, 30).last_page?.should be_false
+    @d.paginate(1, 30).last_page?.should == true
+    @d.paginate(2, 30).last_page?.should == false
   end
 
   specify "should know if current page is first page" do
-    @paginated.first_page?.should be_true
-    @d.paginate(1, 20).first_page?.should be_true
-    @d.paginate(2, 20).first_page?.should be_false
+    @paginated.first_page?.should == true
+    @d.paginate(1, 20).first_page?.should == true
+    @d.paginate(2, 20).first_page?.should == false
   end
 
   specify "should work with fixed sql" do
diff --git a/spec/extensions/pg_array_associations_spec.rb b/spec/extensions/pg_array_associations_spec.rb
index 52bc471..892f200 100644
--- a/spec/extensions/pg_array_associations_spec.rb
+++ b/spec/extensions/pg_array_associations_spec.rb
@@ -2,16 +2,19 @@ require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
 
 describe Sequel::Model, "pg_array_associations" do
   before do
-    class ::Artist < Sequel::Model
+    @db = Sequel.mock(:numrows=>1)
+    class ::Artist < Sequel::Model(@db)
       attr_accessor :yyy
       columns :id, :tag_ids
       plugin :pg_array_associations
       pg_array_to_many :tags
+      pg_array_to_many :a_tags, :clone=>:tags, :conditions=>{:name=>'A'}, :key=>:tag_ids
     end
-    class ::Tag < Sequel::Model
+    class ::Tag < Sequel::Model(@db)
       columns :id
       plugin :pg_array_associations
       many_to_pg_array :artists
+      many_to_pg_array :a_artists, :clone=>:artists, :conditions=>{:name=>'A'}
       def id3
         id*3
       end
@@ -24,7 +27,7 @@ describe Sequel::Model, "pg_array_associations" do
     @o2 = @c2.first
     @n1 = @c1.new
     @n2 = @c2.new
-    DB.reset
+    @db.sqls
   end
   after do
     Object.send(:remove_const, :Artist)
@@ -45,42 +48,62 @@ describe Sequel::Model, "pg_array_associations" do
     @n1.tags.should == []
     @c1.load(:tag_ids=>[]).tags.should == []
     @n2.artists.should == []
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
 
   it "should use correct SQL when loading associations lazily" do
     @o1.tags.should == [@o2]
     @o2.artists.should == [@o1]
-    DB.sqls.should == ["SELECT * FROM tags WHERE (tags.id IN (1, 2, 3))", "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2])"]
+    @db.sqls.should == ["SELECT * FROM tags WHERE (tags.id IN (1, 2, 3))", "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]::integer[])"]
   end
 
   it "should accept :primary_key option for primary keys to use in current and associated table" do
     @c1.pg_array_to_many :tags, :clone=>:tags, :primary_key=>Sequel./(:id, 3)
     @c2.many_to_pg_array :artists, :clone=>:artists, :primary_key=>:id3
     @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE ((tags.id / 3) IN (1, 2, 3))"
-    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[6])"
+    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[6]::integer[])"
   end
   
   it "should allowing filtering by associations" do
-    @c1.filter(:tags=>@o2).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2])"
+    @c1.filter(:tags=>@o2).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]::integer[])"
     @c2.filter(:artists=>@o1).sql.should == "SELECT * FROM tags WHERE (tags.id IN (1, 2, 3))"
   end
 
+  it "should allowing filtering by associations with :conditions" do
+    @c1.filter(:a_tags=>@o2).sql.should == "SELECT * FROM artists WHERE coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE ((name = 'A') AND (tags.id IS NOT NULL) AND (tags.id = 2)))), 'f')"
+    @c2.filter(:a_artists=>@o1).sql.should == "SELECT * FROM tags WHERE (tags.id IN (SELECT unnest(artists.tag_ids) FROM artists WHERE ((name = 'A') AND (artists.tag_ids IS NOT NULL) AND (artists.id = 1))))"
+  end
+
   it "should allowing excluding by associations" do
-    @c1.exclude(:tags=>@o2).sql.should == "SELECT * FROM artists WHERE (NOT (artists.tag_ids @> ARRAY[2]) OR (artists.tag_ids IS NULL))"
+    @c1.exclude(:tags=>@o2).sql.should == "SELECT * FROM artists WHERE (NOT (artists.tag_ids @> ARRAY[2]::integer[]) OR (artists.tag_ids IS NULL))"
     @c2.exclude(:artists=>@o1).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (1, 2, 3)) OR (tags.id IS NULL))"
   end
 
+  it "should allowing excluding by associations with :conditions" do
+    @c1.exclude(:a_tags=>@o2).sql.should == "SELECT * FROM artists WHERE (NOT coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE ((name = 'A') AND (tags.id IS NOT NULL) AND (tags.id = 2)))), 'f') OR (artists.tag_ids IS NULL))"
+    @c2.exclude(:a_artists=>@o1).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (SELECT unnest(artists.tag_ids) FROM artists WHERE ((name = 'A') AND (artists.tag_ids IS NOT NULL) AND (artists.id = 1)))) OR (tags.id IS NULL))"
+  end
+
   it "should allowing filtering by multiple associations" do
-    @c1.filter(:tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[1,2])"
+    @c1.filter(:tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[1,2]::integer[])"
     @c2.filter(:artists=>[@c1.load(:tag_ids=>Sequel.pg_array([3, 4])), @c1.load(:tag_ids=>Sequel.pg_array([4, 5]))]).sql.should == "SELECT * FROM tags WHERE (tags.id IN (3, 4, 5))"
   end
 
+  it "should allowing filtering by multiple associations with :conditions" do
+    @c1.filter(:a_tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE ((name = 'A') AND (tags.id IS NOT NULL) AND (tags.id IN (1, 2))))), 'f')"
+    @c2.filter(:a_artists=>[@c1.load(:id=>7, :tag_ids=>Sequel.pg_array([3, 4])), @c1.load(:id=>8, :tag_ids=>Sequel.pg_array([4, 5]))]).sql.should == "SELECT * FROM tags WHERE (tags.id IN (SELECT unnest(artists.tag_ids) FROM artists WHERE ((name = 'A') AND (artists.tag_ids IS NOT NULL) AND (artists.id IN (7, 8)))))"
+  end
+
   it "should allowing excluding by multiple associations" do
-    @c1.exclude(:tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (NOT (artists.tag_ids && ARRAY[1,2]) OR (artists.tag_ids IS NULL))"
+    @c1.exclude(:tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (NOT (artists.tag_ids && ARRAY[1,2]::integer[]) OR (artists.tag_ids IS NULL))"
     @c2.exclude(:artists=>[@c1.load(:tag_ids=>Sequel.pg_array([3, 4])), @c1.load(:tag_ids=>Sequel.pg_array([4, 5]))]).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (3, 4, 5)) OR (tags.id IS NULL))"
   end
 
+  it "should allowing excluding by multiple associations with :conditions" do
+    @c1.exclude(:a_tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (NOT coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE ((name = 'A') AND (tags.id IS NOT NULL) AND (tags.id IN (1, 2))))), 'f') OR (artists.tag_ids IS NULL))"
+    @c2.exclude(:a_artists=>[@c1.load(:id=>7, :tag_ids=>Sequel.pg_array([3, 4])), @c1.load(:id=>8, :tag_ids=>Sequel.pg_array([4, 5]))]).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (SELECT unnest(artists.tag_ids) FROM artists WHERE ((name = 'A') AND (artists.tag_ids IS NOT NULL) AND (artists.id IN (7, 8))))) OR (tags.id IS NULL))"
+  end
+
   it "should allowing filtering/excluding associations with NULL or empty values" do
     @c1.filter(:tags=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'f\''
     @c1.exclude(:tags=>@c2.new).sql.should == 'SELECT * FROM artists WHERE \'t\''
@@ -90,7 +113,7 @@ describe Sequel::Model, "pg_array_associations" do
     @c2.filter(:artists=>@c1.load(:tag_ids=>[])).sql.should == 'SELECT * FROM tags WHERE \'f\''
     @c2.exclude(:artists=>@c1.load(:tag_ids=>[])).sql.should == 'SELECT * FROM tags WHERE \'t\''
 
-    @c1.filter(:tags=>[@c2.new, @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"
+    @c1.filter(:tags=>[@c2.new, @c2.load(:id=>2)]).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2]::integer[])"
     @c2.filter(:artists=>[@c1.load(:tag_ids=>Sequel.pg_array([3, 4])), @c1.new]).sql.should == "SELECT * FROM tags WHERE (tags.id IN (3, 4))"
   end
 
@@ -99,17 +122,27 @@ describe Sequel::Model, "pg_array_associations" do
     @c2.filter(:artists=>@c1.where(:id=>1)).sql.should == "SELECT * FROM tags WHERE (tags.id IN (SELECT unnest(artists.tag_ids) FROM artists WHERE (id = 1)))"
   end
 
+  it "should allowing filtering by association datasets with :conditions" do
+    @c1.filter(:a_tags=>@c2.where(:id=>1)).sql.should == "SELECT * FROM artists WHERE coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE ((name = 'A') AND (tags.id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (id = 1)))))), 'f')"
+    @c2.filter(:a_artists=>@c1.where(:id=>1)).sql.should == "SELECT * FROM tags WHERE (tags.id IN (SELECT unnest(artists.tag_ids) FROM artists WHERE ((name = 'A') AND (artists.tag_ids IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (id = 1))))))"
+  end
+
   it "should allowing excluding by association datasets" do
     @c1.exclude(:tags=>@c2.where(:id=>1)).sql.should == "SELECT * FROM artists WHERE (NOT coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE (id = 1))), 'f') OR (artists.tag_ids IS NULL))"
     @c2.exclude(:artists=>@c1.where(:id=>1)).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (SELECT unnest(artists.tag_ids) FROM artists WHERE (id = 1))) OR (tags.id IS NULL))"
   end
 
+  it "should allowing excluding by association datasets with :conditions" do
+    @c1.exclude(:a_tags=>@c2.where(:id=>1)).sql.should == "SELECT * FROM artists WHERE (NOT coalesce((artists.tag_ids && (SELECT array_agg(tags.id) FROM tags WHERE ((name = 'A') AND (tags.id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (id = 1)))))), 'f') OR (artists.tag_ids IS NULL))"
+    @c2.exclude(:a_artists=>@c1.where(:id=>1)).sql.should == "SELECT * FROM tags WHERE ((tags.id NOT IN (SELECT unnest(artists.tag_ids) FROM artists WHERE ((name = 'A') AND (artists.tag_ids IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (id = 1)))))) OR (tags.id IS NULL))"
+  end
+
   it "filter by associations should respect key options" do 
     @c1.class_eval{def tag3_ids; tag_ids.map{|x| x*3} end}
     @c1.pg_array_to_many :tags, :clone=>:tags, :primary_key=>Sequel.*(:id, 3), :primary_key_method=>:id3, :key=>:tag3_ids, :key_column=>Sequel.pg_array(:tag_ids)[1..2]
     @c2.many_to_pg_array :artists, :clone=>:artists, :primary_key=>Sequel.*(:id, 3), :primary_key_method=>:id3, :key=>:tag3_ids, :key_column=>Sequel.pg_array(:tag_ids)[1..2]
 
-    @c1.filter(:tags=>@o2).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids[1:2] @> ARRAY[6])"
+    @c1.filter(:tags=>@o2).sql.should == "SELECT * FROM artists WHERE (artists.tag_ids[1:2] @> ARRAY[6]::integer[])"
     @c2.filter(:artists=>@o1).sql.should == "SELECT * FROM tags WHERE ((tags.id * 3) IN (3, 6, 9))"
     @c1.filter(:tags=>@c2.where(:id=>1)).sql.should == "SELECT * FROM artists WHERE coalesce((artists.tag_ids[1:2] && (SELECT array_agg((tags.id * 3)) FROM tags WHERE (id = 1))), 'f')"
     @c2.filter(:artists=>@c1.where(:id=>1)).sql.should == "SELECT * FROM tags WHERE ((tags.id * 3) IN (SELECT unnest(artists.tag_ids[1:2]) FROM artists WHERE (id = 1)))"
@@ -120,12 +153,12 @@ describe Sequel::Model, "pg_array_associations" do
     @c2.many_to_pg_array :artists, :clone=>:artists, :key=>:tag2_ids
     @c1.class_eval{def tag2_ids; tag_ids.map{|x| x * 2} end}
     @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (tags.id IN (2, 4, 6))"
-    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag2_ids @> ARRAY[2])"
+    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag2_ids @> ARRAY[2]::integer[])"
   end
   
   it "should support a :key_column option" do
     @c2.many_to_pg_array :artists, :clone=>:artists, :key_column=>Sequel.pg_array(:tag_ids)[1..2], :key=>:tag2_ids
-    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids[1:2] @> ARRAY[2])"
+    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids[1:2] @> ARRAY[2]::integer[])"
   end
   
   it "should support a :primary_key option" do
@@ -133,35 +166,35 @@ describe Sequel::Model, "pg_array_associations" do
     @c2.many_to_pg_array :artists, :clone=>:artists, :primary_key=>:id2
     @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (tags.id2 IN (1, 2, 3))"
     @c2.class_eval{def id2; id*2 end}
-    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[4])"
+    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[4]::integer[])"
   end
   
   it "should support a :conditions option" do
     @c1.pg_array_to_many :tags, :clone=>:tags, :conditions=>{:a=>1}
     @c2.many_to_pg_array :artists, :clone=>:artists, :conditions=>{:a=>1}
     @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE ((a = 1) AND (tags.id IN (1, 2, 3)))"
-    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE ((a = 1) AND (artists.tag_ids @> ARRAY[2]))"
+    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE ((a = 1) AND (artists.tag_ids @> ARRAY[2]::integer[]))"
   end
   
   it "should support an :order option" do
     @c1.pg_array_to_many :tags, :clone=>:tags, :order=>[:a, :b]
     @c2.many_to_pg_array :artists, :clone=>:artists, :order=>[:a, :b]
     @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (tags.id IN (1, 2, 3)) ORDER BY a, b"
-    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]) ORDER BY a, b"
+    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]::integer[]) ORDER BY a, b"
   end
   
   it "should support a select option" do
     @c1.pg_array_to_many :tags, :clone=>:tags, :select=>[:a, :b]
     @c2.many_to_pg_array :artists, :clone=>:artists, :select=>[:a, :b]
     @c1.load(:tag_ids=>Sequel.pg_array([1,2,3])).tags_dataset.sql.should == "SELECT a, b FROM tags WHERE (tags.id IN (1, 2, 3))"
-    @c2.load(:id=>1).artists_dataset.sql.should == "SELECT a, b FROM artists WHERE (artists.tag_ids @> ARRAY[1])"
+    @c2.load(:id=>1).artists_dataset.sql.should == "SELECT a, b FROM artists WHERE (artists.tag_ids @> ARRAY[1]::integer[])"
   end
   
   it "should accept a block" do
     @c1.pg_array_to_many :tags, :clone=>:tags do |ds| ds.filter(:yyy=>@yyy) end
     @c2.many_to_pg_array :artists, :clone=>:artists do |ds| ds.filter(:a=>1) end
     @c1.new(:yyy=>6, :tag_ids=>Sequel.pg_array([1,2,3])).tags_dataset.sql.should == "SELECT * FROM tags WHERE ((tags.id IN (1, 2, 3)) AND (yyy = 6))"
-    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE ((artists.tag_ids @> ARRAY[2]) AND (a = 1))"
+    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE ((artists.tag_ids @> ARRAY[2]::integer[]) AND (a = 1))"
   end
 
   it "should support a :dataset option that is used instead of the default" do
@@ -175,7 +208,7 @@ describe Sequel::Model, "pg_array_associations" do
     @c1.pg_array_to_many :tags, :clone=>:tags, :limit=>[2, 3]
     @c2.many_to_pg_array :artists, :clone=>:artists, :limit=>[3, 2]
     @o1.tags_dataset.sql.should == "SELECT * FROM tags WHERE (tags.id IN (1, 2, 3)) LIMIT 2 OFFSET 3"
-    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]) LIMIT 3 OFFSET 2"
+    @o2.artists_dataset.sql.should == "SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]::integer[]) LIMIT 3 OFFSET 2"
   end
 
   it "should support a :uniq option that removes duplicates from the association" do
@@ -193,8 +226,8 @@ describe Sequel::Model, "pg_array_associations" do
   end
   
   it "reflection remove_before_destroy? should return correct values" do
-    @c1.association_reflection(:tags).remove_before_destroy?.should be_true
-    @c2.association_reflection(:artists).remove_before_destroy?.should be_false
+    @c1.association_reflection(:tags).remove_before_destroy?.should == true
+    @c2.association_reflection(:artists).remove_before_destroy?.should == false
   end
   
   it "reflection reciprocal should be correct" do
@@ -205,17 +238,17 @@ describe Sequel::Model, "pg_array_associations" do
   it "should eagerly load correctly" do
     a = @c1.eager(:tags).all
     a.should == [@o1]
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
     sqls.should == ["SELECT * FROM artists"]
     a.first.tags.should == [@o2]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     a = @c2.eager(:artists).all
     a.should == [@o2]
-    DB.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
+    @db.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2]::integer[])"]
     a.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should support using custom key options when eager loading associations" do
@@ -225,28 +258,28 @@ describe Sequel::Model, "pg_array_associations" do
 
     a = @c1.eager(:tags).all
     a.should == [@o1]
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.pop.should =~ /SELECT \* FROM tags WHERE \(\(tags\.id \* 3\) IN \([369], [369], [369]\)\)/ 
     sqls.should == ["SELECT * FROM artists"]
     a.first.tags.should == [@o2]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     a = @c2.eager(:artists).all
     a.should == [@o2]
-    DB.sqls.should == ["SELECT * FROM tags", "SELECT * FROM artists WHERE (artists.tag_ids[1:2] && ARRAY[6])"]
+    @db.sqls.should == ["SELECT * FROM tags", "SELECT * FROM artists WHERE (artists.tag_ids[1:2] && ARRAY[6]::integer[])"]
     a.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
 
   it "should allow cascading of eager loading for associations of associated models" do
     a = @c1.eager(:tags=>:artists).all
     a.should == [@o1]
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.slice!(1).should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
-    sqls.should == ['SELECT * FROM artists', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
+    sqls.should == ['SELECT * FROM artists', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2]::integer[])"]
     a.first.tags.should == [@o2]
     a.first.tags.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should respect :eager when lazily loading an association" do
@@ -254,16 +287,16 @@ describe Sequel::Model, "pg_array_associations" do
     @c2.many_to_pg_array :artists2, :clone=>:artists, :eager=>:tags
 
     @o1.tags2.should == [@o2]
-    DB.sqls.should == ["SELECT * FROM tags WHERE (tags.id IN (1, 2, 3))", "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
+    @db.sqls.should == ["SELECT * FROM tags WHERE (tags.id IN (1, 2, 3))", "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2]::integer[])"]
     @o1.tags2.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @o2.artists2.should == [@o1]
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
-    sqls.should == ["SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2])"]
+    sqls.should == ["SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[2]::integer[])"]
     @o2.artists2.first.tags.should == [@o2]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should cascade eagerly loading when the :eager_graph association option is used" do
@@ -274,36 +307,36 @@ describe Sequel::Model, "pg_array_associations" do
     @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3])}
 
     @o1.tags2.should == [@o2]
-    DB.sqls.first.should =~ /SELECT tags\.id, artists\.id AS artists_id, artists\.tag_ids FROM tags LEFT OUTER JOIN artists ON \(artists.tag_ids @> ARRAY\[tags.id\]\) WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
+    @db.sqls.first.should =~ /SELECT tags\.id, artists\.id AS artists_id, artists\.tag_ids FROM tags LEFT OUTER JOIN artists ON \(artists.tag_ids @> ARRAY\[tags.id\]\) WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
     @o1.tags2.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @o2.artists2.should == [@o1]
-    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) WHERE (artists.tag_ids @> ARRAY[2])"]
+    @db.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) WHERE (artists.tag_ids @> ARRAY[2]::integer[])"]
     @o2.artists2.first.tags.should == [@o2]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @c2.dataset._fetch = {:id=>2, :artists_id=>1, :tag_ids=>Sequel.pg_array([1,2,3])}
     @c1.dataset._fetch = {:id=>1, :tag_ids=>Sequel.pg_array([1,2,3])}
 
     a = @c1.eager(:tags2).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.pop.should =~ /SELECT tags\.id, artists\.id AS artists_id, artists\.tag_ids FROM tags LEFT OUTER JOIN artists ON \(artists.tag_ids @> ARRAY\[tags.id\]\) WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
     sqls.should == ["SELECT * FROM artists"]
     a.should == [@o1]
     a.first.tags2.should == [@o2]
     a.first.tags2.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @c2.dataset._fetch = {:id=>2}
     @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3])}
 
     a = @c2.eager(:artists2).all
-    DB.sqls.should == ["SELECT * FROM tags", "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) WHERE (artists.tag_ids && ARRAY[2])"]
+    @db.sqls.should == ["SELECT * FROM tags", "SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) WHERE (artists.tag_ids && ARRAY[2]::integer[])"]
     a.should == [@o2]
     a.first.artists2.should == [@o1]
     a.first.artists2.first.tags.should == [@o2]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should respect the :limit option when eager loading" do
@@ -312,29 +345,29 @@ describe Sequel::Model, "pg_array_associations" do
     @c1.pg_array_to_many :tags, :clone=>:tags, :limit=>2
     a = @c1.eager(:tags).all
     a.should == [@o1]
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
     sqls.should == ["SELECT * FROM artists"]
     a.first.tags.should == [@c2.load(:id=>1), @c2.load(:id=>2)]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @c1.pg_array_to_many :tags, :clone=>:tags, :limit=>[1, 1]
     a = @c1.eager(:tags).all
     a.should == [@o1]
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
     sqls.should == ["SELECT * FROM artists"]
     a.first.tags.should == [@c2.load(:id=>2)]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @c1.pg_array_to_many :tags, :clone=>:tags, :limit=>[nil, 1]
     a = @c1.eager(:tags).all
     a.should == [@o1]
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.pop.should =~ /SELECT \* FROM tags WHERE \(tags\.id IN \([123], [123], [123]\)\)/ 
     sqls.should == ["SELECT * FROM artists"]
     a.first.tags.should == [@c2.load(:id=>2), @c2.load(:id=>3)]
-    DB.sqls.length.should == 0
+    @db.sqls.length.should == 0
 
     @c2.dataset._fetch = [{:id=>2}]
     @c1.dataset._fetch = [{:id=>5, :tag_ids=>Sequel.pg_array([1,2,3])},{:id=>6, :tag_ids=>Sequel.pg_array([2,3])}, {:id=>7, :tag_ids=>Sequel.pg_array([1,2])}]
@@ -342,23 +375,28 @@ describe Sequel::Model, "pg_array_associations" do
     @c2.many_to_pg_array :artists, :clone=>:artists, :limit=>2
     a = @c2.eager(:artists).all
     a.should == [@o2]
-    DB.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
+    @db.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2]::integer[])"]
     a.first.artists.should == [@c1.load(:id=>5, :tag_ids=>Sequel.pg_array([1,2,3])), @c1.load(:id=>6, :tag_ids=>Sequel.pg_array([2,3]))]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @c2.many_to_pg_array :artists, :clone=>:artists, :limit=>[1, 1]
     a = @c2.eager(:artists).all
     a.should == [@o2]
-    DB.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
+    @db.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2]::integer[])"]
     a.first.artists.should == [@c1.load(:id=>6, :tag_ids=>Sequel.pg_array([2,3]))]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @c2.many_to_pg_array :artists, :clone=>:artists, :limit=>[nil, 1]
     a = @c2.eager(:artists).all
     a.should == [@o2]
-    DB.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2])"]
+    @db.sqls.should == ['SELECT * FROM tags', "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2]::integer[])"]
     a.first.artists.should == [@c1.load(:id=>6, :tag_ids=>Sequel.pg_array([2,3])), @c1.load(:id=>7, :tag_ids=>Sequel.pg_array([1,2]))]
-    DB.sqls.should == []
+    @db.sqls.should == []
+  end
+
+  it "should support association_join" do
+    @c1.association_join(:tags).sql.should == "SELECT * FROM artists INNER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id])"
+    @c2.association_join(:artists).sql.should == "SELECT * FROM tags INNER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id])"
   end
 
   it "should eagerly graph associations" do
@@ -366,16 +404,16 @@ describe Sequel::Model, "pg_array_associations" do
     @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3])}
 
     a = @c1.eager_graph(:tags).all
-    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id])"]
+    @db.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id])"]
     a.should == [@o1]
     a.first.tags.should == [@o2]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     a = @c2.eager_graph(:artists).all
-    DB.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id])"]
+    @db.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id])"]
     a.should == [@o2]
     a.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
 
   it "should allow cascading of eager graphing for associations of associated models" do
@@ -383,18 +421,18 @@ describe Sequel::Model, "pg_array_associations" do
     @c1.dataset._fetch = {:id=>1, :tags_id=>2, :tag_ids=>Sequel.pg_array([1,2,3]), :artists_0_id=>1, :artists_0_tag_ids=>Sequel.pg_array([1,2,3])}
 
     a = @c1.eager_graph(:tags=>:artists).all
-    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id, artists_0.id AS artists_0_id, artists_0.tag_ids AS artists_0_tag_ids FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) LEFT OUTER JOIN artists AS artists_0 ON (artists_0.tag_ids @> ARRAY[tags.id])"]
+    @db.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id, artists_0.id AS artists_0_id, artists_0.tag_ids AS artists_0_tag_ids FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id]) LEFT OUTER JOIN artists AS artists_0 ON (artists_0.tag_ids @> ARRAY[tags.id])"]
     a.should == [@o1]
     a.first.tags.should == [@o2]
     a.first.tags.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     a = @c2.eager_graph(:artists=>:tags).all
-    DB.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids, tags_0.id AS tags_0_id FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id]) LEFT OUTER JOIN tags AS tags_0 ON (artists.tag_ids @> ARRAY[tags_0.id])"]
+    @db.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids, tags_0.id AS tags_0_id FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id]) LEFT OUTER JOIN tags AS tags_0 ON (artists.tag_ids @> ARRAY[tags_0.id])"]
     a.should == [@o2]
     a.first.artists.should == [@o1]
     a.first.artists.first.tags.should == [@o2]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "eager graphing should respect key options" do 
@@ -407,15 +445,15 @@ describe Sequel::Model, "pg_array_associations" do
 
     a = @c1.eager_graph(:tags).all
     a.should == [@o1]
-    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids[1:2] @> ARRAY[(tags.id * 3)])"]
+    @db.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id AS tags_id FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids[1:2] @> ARRAY[(tags.id * 3)])"]
     a.first.tags.should == [@o2]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     a = @c2.eager_graph(:artists).all
     a.should == [@o2]
-    DB.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids[1:2] @> ARRAY[tags.id3])"]
+    @db.sqls.should == ["SELECT tags.id, artists.id AS artists_id, artists.tag_ids FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids[1:2] @> ARRAY[tags.id3])"]
     a.first.artists.should == [@o1]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should respect the association's :graph_select option" do 
@@ -426,16 +464,16 @@ describe Sequel::Model, "pg_array_associations" do
     @c1.dataset._fetch = {:id=>1, :id2=>2, :tag_ids=>Sequel.pg_array([1,2,3])}
 
     a = @c1.eager_graph(:tags).all
-    DB.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id2 FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id])"]
+    @db.sqls.should == ["SELECT artists.id, artists.tag_ids, tags.id2 FROM artists LEFT OUTER JOIN tags ON (artists.tag_ids @> ARRAY[tags.id])"]
     a.should == [@o1]
     a.first.tags.should == [@c2.load(:id2=>2)]
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     a = @c2.eager_graph(:artists).all
-    DB.sqls.should == ["SELECT tags.id, artists.id AS artists_id FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id])"]
+    @db.sqls.should == ["SELECT tags.id, artists.id AS artists_id FROM tags LEFT OUTER JOIN artists ON (artists.tag_ids @> ARRAY[tags.id])"]
     a.should == [@o2]
     a.first.artists.should == [@c1.load(:id=>1)]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
 
   it "should respect the association's :graph_join_type option" do 
@@ -483,34 +521,46 @@ describe Sequel::Model, "pg_array_associations" do
   it "should define an add_ method for adding associated objects" do
     @o1.add_tag(@c2.load(:id=>4))
     @o1.tag_ids.should == [1,2,3,4]
-    DB.sqls.should == []
+    @db.sqls.should == []
     @o1.save_changes
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,2,3,4] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,2,3,4] WHERE (id = 1)"]
 
     @o2.add_artist(@c1.load(:id=>1, :tag_ids=>Sequel.pg_array([4])))
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4,2] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4,2] WHERE (id = 1)"]
   end
 
   it "should define a remove_ method for removing associated objects" do
     @o1.remove_tag(@o2)
     @o1.tag_ids.should == [1,3]
-    DB.sqls.should == []
+    @db.sqls.should == []
     @o1.save_changes
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3] WHERE (id = 1)"]
 
     @o2.remove_artist(@c1.load(:id=>1, :tag_ids=>Sequel.pg_array([1,2,3,4])))
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3,4] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3,4] WHERE (id = 1)"]
   end
 
   it "should define a remove_all_ method for removing all associated objects" do
     @o1.remove_all_tags
     @o1.tag_ids.should == []
-    DB.sqls.should == []
+    @db.sqls.should == []
     @o1.save_changes
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[] WHERE (id = 1)"]
 
     @o2.remove_all_artists
-    DB.sqls.should == ["UPDATE artists SET tag_ids = array_remove(tag_ids, 2) WHERE (tag_ids @> ARRAY[2])"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = array_remove(tag_ids, 2) WHERE (tag_ids @> ARRAY[2])"]
+  end
+
+  it "should allow calling add_ and remove_ methods on new objects for pg_array_to_many associations" do
+    a = Artist.new
+    a.add_tag(@c2.load(:id=>4))
+    a.tag_ids.should == [4]
+    a.remove_tag(@c2.load(:id=>4))
+    a.tag_ids.should == []
+    a.add_tag(@c2.load(:id=>4))
+    a.tag_ids.should == [4]
+    a.remove_all_tags
+    a.tag_ids.should == []
   end
 
   it "should have pg_array_to_many association modification methods save if :save_after_modify option is used" do
@@ -518,73 +568,73 @@ describe Sequel::Model, "pg_array_associations" do
 
     @o1.add_tag(@c2.load(:id=>4))
     @o1.tag_ids.should == [1,2,3,4]
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,2,3,4] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,2,3,4] WHERE (id = 1)"]
 
     @o1.remove_tag(@o2)
     @o1.tag_ids.should == [1,3,4]
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3,4] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1,3,4] WHERE (id = 1)"]
 
     @o1.remove_all_tags
     @o1.tag_ids.should == []
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[] WHERE (id = 1)"]
   end
 
   it "should have association modification methods deal with nil values" do
     v = @c1.load(:id=>1)
     v.add_tag(@c2.load(:id=>4))
     v.tag_ids.should == [4]
-    DB.sqls.should == []
+    @db.sqls.should == []
     v.save_changes
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4]::integer[] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4]::integer[] WHERE (id = 1)"]
 
     @o2.add_artist(@c1.load(:id=>1))
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2]::integer[] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2]::integer[] WHERE (id = 1)"]
 
     v = @c1.load(:id=>1)
     v.remove_tag(@c2.load(:id=>4))
     v.tag_ids.should == nil
-    DB.sqls.should == []
+    @db.sqls.should == []
     v.save_changes
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @o2.remove_artist(@c1.load(:id=>1))
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     v = @c1.load(:id=>1)
     v.remove_all_tags
     v.tag_ids.should == nil
-    DB.sqls.should == []
+    @db.sqls.should == []
     v.save_changes
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
 
   it "should have association modification methods deal with empty arrays values" do
     v = @c1.load(:id=>1, :tag_ids=>Sequel.pg_array([]))
     v.add_tag(@c2.load(:id=>4))
     v.tag_ids.should == [4]
-    DB.sqls.should == []
+    @db.sqls.should == []
     v.save_changes
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4] WHERE (id = 1)"]
 
     @o2.add_artist(@c1.load(:id=>1, :tag_ids=>Sequel.pg_array([])))
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2] WHERE (id = 1)"]
 
     v = @c1.load(:id=>1, :tag_ids=>Sequel.pg_array([]))
     v.remove_tag(@c2.load(:id=>4))
     v.tag_ids.should == []
-    DB.sqls.should == []
+    @db.sqls.should == []
     v.save_changes
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     @o2.remove_artist(@c1.load(:id=>1, :tag_ids=>Sequel.pg_array([])))
-    DB.sqls.should == []
+    @db.sqls.should == []
 
     v = @c1.load(:id=>1, :tag_ids=>Sequel.pg_array([]))
     v.remove_all_tags
     v.tag_ids.should == []
-    DB.sqls.should == []
+    @db.sqls.should == []
     v.save_changes
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
 
   it "should respect the :array_type option when manually creating arrays" do
@@ -593,11 +643,80 @@ describe Sequel::Model, "pg_array_associations" do
     v = @c1.load(:id=>1)
     v.add_tag(@c2.load(:id=>4))
     v.tag_ids.should == [4]
-    DB.sqls.should == []
+    @db.sqls.should == []
     v.save_changes
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4]::int8[] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[4]::int8[] WHERE (id = 1)"]
 
     @o2.add_artist(@c1.load(:id=>1))
-    DB.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2]::int8[] WHERE (id = 1)"]
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[2]::int8[] WHERE (id = 1)"]
+  end
+
+  it "should respect the :array_type option in the associations dataset" do
+    @c2.many_to_pg_array :artists, :clone=>:artists, :array_type=>:int8
+    @c2.load(:id=>1).artists_dataset.sql.should == 'SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[1]::int8[])'
+  end
+
+  it "should respect the :array_type option when eager loading" do
+    @c2.many_to_pg_array :artists, :clone=>:artists, :array_type=>:int8
+    @c2.eager(:artists).all
+    @db.sqls.should == ["SELECT * FROM tags", "SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[2]::int8[])"]
+  end
+
+  it "should respect the :array_type option when filtering by associations" do
+    @c1.pg_array_to_many :tags, :clone=>:tags, :array_type=>:int8
+    @c1.where(:tags=>@c2.load(:id=>1)).sql.should == 'SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[1]::int8[])'
+    @c1.where(:tags=>[@c2.load(:id=>1), @c2.load(:id=>2)]).sql.should == 'SELECT * FROM artists WHERE (artists.tag_ids && ARRAY[1,2]::int8[])'
+  end
+
+  it "should automatically determine the array type by looking at the schema" do
+    @c1.db_schema[:tag_ids][:db_type] = 'int8'
+    @c2.many_to_pg_array :artists, :clone=>:artists
+    @c1.pg_array_to_many :tags, :clone=>:tags, :save_after_modify=>true
+    @c2.load(:id=>1).artists_dataset.sql.should == 'SELECT * FROM artists WHERE (artists.tag_ids @> ARRAY[1]::int8[])'
+    @c1.load(:id=>1).add_tag(@c2.load(:id=>1))
+    @db.sqls.should == ["UPDATE artists SET tag_ids = ARRAY[1]::int8[] WHERE (id = 1)"]
+  end
+
+  it "should automatically determine the array type by looking at the schema" do
+  end
+
+  it "should not validate the current/associated object in add_ and remove_ if the :validate=>false option is used" do
+    @c1.pg_array_to_many :tags, :clone=>:tags, :validate=>false, :save_after_modify=>true
+    @c2.many_to_pg_array :artists, :clone=>:artists, :validate=>false
+    a = @c1.load(:id=>1)
+    t = @c2.load(:id=>2)
+    def a.validate() errors.add(:id, 'foo') end
+    a.associations[:tags] = []
+    a.add_tag(t).should == t
+    a.tags.should == [t]
+    a.remove_tag(t).should == t
+    a.tags.should == []
+
+    t.associations[:artists] = []
+    t.add_artist(a).should == a
+    t.artists.should == [a]
+    t.remove_artist(a).should == a
+    t.artists.should == []
+  end
+
+  it "should not raise exception in add_ and remove_ if the :raise_on_save_failure=>false option is used" do
+    @c1.pg_array_to_many :tags, :clone=>:tags, :raise_on_save_failure=>false, :save_after_modify=>true
+    @c2.many_to_pg_array :artists, :clone=>:artists, :raise_on_save_failure=>false
+    a = @c1.load(:id=>1)
+    t = @c2.load(:id=>2)
+    def a.validate() errors.add(:id, 'foo') end
+    a.associations[:tags] = []
+    a.add_tag(t).should == nil
+    a.tags.should == []
+    a.associations[:tags] = [t]
+    a.remove_tag(t).should == nil
+    a.tags.should == [t]
+
+    t.associations[:artists] = []
+    t.add_artist(a).should == nil
+    t.artists.should == []
+    t.associations[:artists] = [a]
+    t.remove_artist(a).should == nil
+    t.artists.should == [a]
   end
 end
diff --git a/spec/extensions/pg_array_ops_spec.rb b/spec/extensions/pg_array_ops_spec.rb
index ae949f2..52e6354 100644
--- a/spec/extensions/pg_array_ops_spec.rb
+++ b/spec/extensions/pg_array_ops_spec.rb
@@ -63,6 +63,10 @@ describe "Sequel::Postgres::ArrayOp" do
     @db.literal(@a.unshift(:b)).should == "(b || a)"
   end
 
+  it "#cardinality should use the cardinality function" do
+    @db.literal(@a.cardinality).should == "cardinality(a)"
+  end
+
   it "#dims should use the array_dims function" do
     @db.literal(@a.dims).should == "array_dims(a)"
   end
@@ -95,6 +99,8 @@ describe "Sequel::Postgres::ArrayOp" do
 
   it "#unnest should use the unnest function" do
     @db.literal(@a.unnest).should == "unnest(a)"
+    @db.literal(@a.unnest(:b, :c)).should == "unnest(a, b, c)"
+    @db.literal(@a.unnest([1])).should == "unnest(a, ARRAY[1])"
   end
 
   it "#pg_array should return self" do
diff --git a/spec/extensions/pg_array_spec.rb b/spec/extensions/pg_array_spec.rb
index e9ee772..a20fa01 100644
--- a/spec/extensions/pg_array_spec.rb
+++ b/spec/extensions/pg_array_spec.rb
@@ -24,6 +24,9 @@ describe "pg_array extension" do
     c = @converter[1009]
     c.call("{a}").to_a.first.should be_a_kind_of(String)
     c.call("{}").to_a.should == []
+    c.call('{""}').to_a.should == [""]
+    c.call('{"",""}').to_a.should == ["",""]
+    c.call('{"","",""}').to_a.should == ["","",""]
     c.call("{a}").to_a.should == ['a']
     c.call('{"a b"}').to_a.should == ['a b']
     c.call('{a,b}').to_a.should == ['a', 'b']
@@ -126,6 +129,17 @@ describe "pg_array extension" do
     c.call('{NULLA,"NULL",NULL}').to_a.should == ["NULLA", "NULL", nil]
   end
 
+  it "should raise errors when for certain recognized invalid arrays" do
+    c = @converter[1009]
+    proc{c.call('')}.should raise_error(Sequel::Error)
+    proc{c.call('}')}.should raise_error(Sequel::Error)
+    proc{c.call('{{}')}.should raise_error(Sequel::Error)
+    proc{c.call('{}}')}.should raise_error(Sequel::Error)
+    proc{c.call('{a""}')}.should raise_error(Sequel::Error)
+    proc{c.call('{a{}}')}.should raise_error(Sequel::Error)
+    proc{c.call('{""a}')}.should raise_error(Sequel::Error)
+  end
+
   it "should literalize arrays without types correctly" do
     @db.literal(@m::PGArray.new([])).should == 'ARRAY[]'
     @db.literal(@m::PGArray.new([1])).should == 'ARRAY[1]'
@@ -189,7 +203,7 @@ describe "pg_array extension" do
 
   it "should parse array types from the schema correctly" do
     @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i', :db_type=>'integer[]'}, {:name=>'f', :db_type=>'real[]'}, {:name=>'d', :db_type=>'numeric[]'}, {:name=>'t', :db_type=>'text[]'}]
-    @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :integer_array, :float_array, :decimal_array, :string_array]
+    @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :integer_array, :real_array, :decimal_array, :string_array]
   end
 
   it "should support typecasting of the various array types" do
@@ -283,8 +297,10 @@ describe "pg_array extension" do
     @db.typecast_value(:foo15_array, ['t']).should == [true]
   end
 
-  it "should raise an error if using :scalar_oid option with unexisting scalar conversion proc" do
-    proc{Sequel::Postgres::PGArray.register('foo', :scalar_oid=>0)}.should raise_error(Sequel::Error)
+  it "should not raise an error if using :scalar_oid option with unexisting scalar conversion proc" do
+    h = {}
+    Sequel::Postgres::PGArray.register('foo', :oid=>1234, :scalar_oid=>0, :type_procs=>h)
+    h[1234].call('{t}').should == ["t"]
   end
 
   it "should raise an error if using :converter option and a block argument" do
@@ -315,19 +331,13 @@ describe "pg_array extension" do
     @db.literal(Sequel::Postgres::PG_TYPES[3].call('{}')).should == "'{}'::blah[]"
   end
 
-  it "should use and not override existing database typecast method if :typecast_method option is given" do
-    Sequel::Postgres::PGArray.register('foo', :typecast_method=>:float)
-    @db.fetch = [{:name=>'id', :db_type=>'foo[]'}]
-    @db.schema(:items).map{|e| e[1][:type]}.should == [:float_array]
-  end
-
   it "should support registering custom array types on a per-Database basis" do
     @db.register_array_type('banana', :oid=>7865){|s| s}
     @db.typecast_value(:banana_array, []).should be_a_kind_of(Sequel::Postgres::PGArray)
     @db.fetch = [{:name=>'id', :db_type=>'banana[]'}]
     @db.schema(:items).map{|e| e[1][:type]}.should == [:banana_array]
     @db.conversion_procs.should have_key(7865)
-    @db.respond_to?(:typecast_value_banana_array, true).should be_true
+    @db.respond_to?(:typecast_value_banana_array, true).should == true
 
     db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
     db.extend_datasets(Module.new{def supports_timestamp_timezones?; false; end; def supports_timestamp_usecs?; false; end})
@@ -335,7 +345,7 @@ describe "pg_array extension" do
     db.fetch = [{:name=>'id', :db_type=>'banana[]'}]
     db.schema(:items).map{|e| e[1][:type]}.should == [nil]
     db.conversion_procs.should_not have_key(7865)
-    db.respond_to?(:typecast_value_banana_array, true).should be_false
+    db.respond_to?(:typecast_value_banana_array, true).should == false
   end
 
   it "should automatically look up the array and scalar oids when registering per-Database types" do
diff --git a/spec/extensions/pg_hstore_spec.rb b/spec/extensions/pg_hstore_spec.rb
index a3704a4..edf663f 100644
--- a/spec/extensions/pg_hstore_spec.rb
+++ b/spec/extensions/pg_hstore_spec.rb
@@ -78,14 +78,14 @@ describe "pg_hstore extension" do
     Sequel.hstore('foo2'=>'bar').fetch(:foo){|key| k = key }.should == 'foo'
     k.should == 'foo'
     
-    Sequel.hstore('foo'=>'bar').has_key?(:foo).should be_true
-    Sequel.hstore('foo'=>'bar').has_key?(:bar).should be_false
-    Sequel.hstore('foo'=>'bar').key?(:foo).should be_true
-    Sequel.hstore('foo'=>'bar').key?(:bar).should be_false
-    Sequel.hstore('foo'=>'bar').member?(:foo).should be_true
-    Sequel.hstore('foo'=>'bar').member?(:bar).should be_false
-    Sequel.hstore('foo'=>'bar').include?(:foo).should be_true
-    Sequel.hstore('foo'=>'bar').include?(:bar).should be_false
+    Sequel.hstore('foo'=>'bar').has_key?(:foo).should == true
+    Sequel.hstore('foo'=>'bar').has_key?(:bar).should == false
+    Sequel.hstore('foo'=>'bar').key?(:foo).should == true
+    Sequel.hstore('foo'=>'bar').key?(:bar).should == false
+    Sequel.hstore('foo'=>'bar').member?(:foo).should == true
+    Sequel.hstore('foo'=>'bar').member?(:bar).should == false
+    Sequel.hstore('foo'=>'bar').include?(:foo).should == true
+    Sequel.hstore('foo'=>'bar').include?(:bar).should == false
 
     Sequel.hstore('foo'=>'bar', '1'=>'2').values_at(:foo3, :foo, :foo2, 1).should == [nil, 'bar', nil, '2']
 
@@ -96,16 +96,16 @@ describe "pg_hstore extension" do
   end
 
   it "should convert has_value?/value? lookups to string" do
-    Sequel.hstore('foo'=>'bar').has_value?(:bar).should be_true
-    Sequel.hstore('foo'=>'bar').has_value?(:foo).should be_false
-    Sequel.hstore('foo'=>'bar').value?(:bar).should be_true
-    Sequel.hstore('foo'=>'bar').value?(:foo).should be_false
+    Sequel.hstore('foo'=>'bar').has_value?(:bar).should == true
+    Sequel.hstore('foo'=>'bar').has_value?(:foo).should == false
+    Sequel.hstore('foo'=>'bar').value?(:bar).should == true
+    Sequel.hstore('foo'=>'bar').value?(:foo).should == false
   end
 
   it "should handle nil values in has_value?/value? lookups" do
-    Sequel.hstore('foo'=>'').has_value?('').should be_true
-    Sequel.hstore('foo'=>'').has_value?(nil).should be_false
-    Sequel.hstore('foo'=>nil).has_value?(nil).should be_true
+    Sequel.hstore('foo'=>'').has_value?('').should == true
+    Sequel.hstore('foo'=>'').has_value?(nil).should == false
+    Sequel.hstore('foo'=>nil).has_value?(nil).should == true
   end
 
   it "should have underlying hash convert lookups by key to string" do
diff --git a/spec/extensions/pg_interval_spec.rb b/spec/extensions/pg_interval_spec.rb
index afd768c..17f379d 100644
--- a/spec/extensions/pg_interval_spec.rb
+++ b/spec/extensions/pg_interval_spec.rb
@@ -49,18 +49,18 @@ describe "pg_interval extension" do
     d = ActiveSupport::Duration.new(31557600 + 2*86400*30 + 3*86400*7 + 4*86400 + 5*3600 + 6*60 + 7, [[:years, 1], [:months, 2], [:days, 25], [:seconds, 18367]])
     @db.typecast_value(:interval, d).object_id.should == d.object_id
 
-    @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07").is_a?(ActiveSupport::Duration).should be_true
+    @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07").is_a?(ActiveSupport::Duration).should == true
     @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07").should == d
     @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07").parts.sort_by{|k,v| k.to_s}.should == d.parts.sort_by{|k,v| k.to_s}
     @db.typecast_value(:interval, "1 year 2 mons 25 days 05:06:07.0").parts.sort_by{|k,v| k.to_s}.should == d.parts.sort_by{|k,v| k.to_s}
 
-    @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7 secs").is_a?(ActiveSupport::Duration).should be_true
+    @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7 secs").is_a?(ActiveSupport::Duration).should == true
     @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7 secs").should == d
     @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7 secs").parts.sort_by{|k,v| k.to_s}.should == d.parts.sort_by{|k,v| k.to_s}
     @db.typecast_value(:interval, "1 year 2 mons 25 days 5 hours 6 mins 7.0 secs").parts.sort_by{|k,v| k.to_s}.should == d.parts.sort_by{|k,v| k.to_s}
 
     d2 = ActiveSupport::Duration.new(1, [[:seconds, 1]])
-    @db.typecast_value(:interval, 1).is_a?(ActiveSupport::Duration).should be_true
+    @db.typecast_value(:interval, 1).is_a?(ActiveSupport::Duration).should == true
     @db.typecast_value(:interval, 1).should == d2
     @db.typecast_value(:interval, 1).parts.sort_by{|k,v| k.to_s}.should == d2.parts.sort_by{|k,v| k.to_s}
 
diff --git a/spec/extensions/pg_json_ops_spec.rb b/spec/extensions/pg_json_ops_spec.rb
index 9b5e953..a87f4ae 100644
--- a/spec/extensions/pg_json_ops_spec.rb
+++ b/spec/extensions/pg_json_ops_spec.rb
@@ -6,6 +6,7 @@ describe "Sequel::Postgres::JSONOp" do
   before do
     @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
     @j = Sequel.pg_json_op(:j)
+    @jb = Sequel.pg_jsonb_op(:j)
     @l = proc{|o| @db.literal(o)}
   end
 
@@ -48,77 +49,167 @@ describe "Sequel::Postgres::JSONOp" do
 
   it "should have #array_length use the json_array_length function" do
     @l[@j.array_length].should == "json_array_length(j)"
+    @l[@jb.array_length].should == "jsonb_array_length(j)"
   end
 
   it "should have #array_length return a numeric expression" do
     @l[@j.array_length & 1].should == "(json_array_length(j) & 1)"
+    @l[@jb.array_length & 1].should == "(jsonb_array_length(j) & 1)"
   end
 
   it "should have #each use the json_each function" do
     @l[@j.each].should == "json_each(j)"
+    @l[@jb.each].should == "jsonb_each(j)"
   end
 
   it "should have #each_text use the json_each_text function" do
     @l[@j.each_text].should == "json_each_text(j)"
+    @l[@jb.each_text].should == "jsonb_each_text(j)"
   end
 
   it "should have #extract use the json_extract_path function" do
     @l[@j.extract('a')].should == "json_extract_path(j, 'a')"
     @l[@j.extract('a', 'b')].should == "json_extract_path(j, 'a', 'b')"
+    @l[@jb.extract('a')].should == "jsonb_extract_path(j, 'a')"
+    @l[@jb.extract('a', 'b')].should == "jsonb_extract_path(j, 'a', 'b')"
   end
 
   it "should have #extract return a JSONOp" do
     @l[@j.extract('a')[1]].should == "(json_extract_path(j, 'a') -> 1)"
+    @l[@jb.extract('a')[1]].should == "(jsonb_extract_path(j, 'a') -> 1)"
   end
 
   it "should have #extract_text use the json_extract_path_text function" do
     @l[@j.extract_text('a')].should == "json_extract_path_text(j, 'a')"
     @l[@j.extract_text('a', 'b')].should == "json_extract_path_text(j, 'a', 'b')"
+    @l[@jb.extract_text('a')].should == "jsonb_extract_path_text(j, 'a')"
+    @l[@jb.extract_text('a', 'b')].should == "jsonb_extract_path_text(j, 'a', 'b')"
   end
 
   it "should have #extract_text return an SQL::StringExpression" do
     @l[@j.extract_text('a') + 'a'].should == "(json_extract_path_text(j, 'a') || 'a')"
+    @l[@jb.extract_text('a') + 'a'].should == "(jsonb_extract_path_text(j, 'a') || 'a')"
   end
 
   it "should have #keys use the json_object_keys function" do
     @l[@j.keys].should == "json_object_keys(j)"
+    @l[@jb.keys].should == "jsonb_object_keys(j)"
   end
 
   it "should have #array_elements use the json_array_elements function" do
     @l[@j.array_elements].should == "json_array_elements(j)"
+    @l[@jb.array_elements].should == "jsonb_array_elements(j)"
+  end
+
+  it "should have #array_elements use the json_array_elements_text function" do
+    @l[@j.array_elements_text].should == "json_array_elements_text(j)"
+    @l[@jb.array_elements_text].should == "jsonb_array_elements_text(j)"
+  end
+
+  it "should have #typeof use the json_typeof function" do
+    @l[@j.typeof].should == "json_typeof(j)"
+    @l[@jb.typeof].should == "jsonb_typeof(j)"
+  end
+
+  it "should have #to_record use the json_to_record function" do
+    @l[@j.to_record].should == "json_to_record(j, false)"
+    @l[@jb.to_record].should == "jsonb_to_record(j, false)"
+    @l[@j.to_record(true)].should == "json_to_record(j, true)"
+    @l[@jb.to_record(true)].should == "jsonb_to_record(j, true)"
+  end
+
+  it "should have #to_recordset use the json_to_recordsetfunction" do
+    @l[@j.to_recordset].should == "json_to_recordset(j, false)"
+    @l[@jb.to_recordset].should == "jsonb_to_recordset(j, false)"
+    @l[@j.to_recordset(true)].should == "json_to_recordset(j, true)"
+    @l[@jb.to_recordset(true)].should == "jsonb_to_recordset(j, true)"
   end
 
   it "should have #populate use the json_populate_record function" do
     @l[@j.populate(:a)].should == "json_populate_record(a, j)"
+    @l[@jb.populate(:a)].should == "jsonb_populate_record(a, j)"
   end
 
   it "should have #populate_set use the json_populate_record function" do
     @l[@j.populate_set(:a)].should == "json_populate_recordset(a, j)"
+    @l[@jb.populate_set(:a)].should == "jsonb_populate_recordset(a, j)"
+  end
+
+  it "#contain_all should use the ?& operator" do
+    @l[@jb.contain_all(:h1)].should == "(j ?& h1)"
+  end
+
+  it "#contain_all handle arrays" do
+    @l[@jb.contain_all(%w'h1')].should == "(j ?& ARRAY['h1'])"
+  end
+
+  it "#contain_any should use the ?| operator" do
+    @l[@jb.contain_any(:h1)].should == "(j ?| h1)"
+  end
+
+  it "#contain_any should handle arrays" do
+    @l[@jb.contain_any(%w'h1')].should == "(j ?| ARRAY['h1'])"
+  end
+
+  it "#contains should use the @> operator" do
+    @l[@jb.contains(:h1)].should == "(j @> h1)"
+  end
+
+  it "#contains should handle hashes" do
+    @l[@jb.contains('a'=>'b')].should == "(j @> '{\"a\":\"b\"}'::jsonb)"
+  end
+
+  it "#contains should handle arrays" do
+    @l[@jb.contains([1, 2])].should == "(j @> '[1,2]'::jsonb)"
+  end
+
+  it "#contained_by should use the <@ operator" do
+    @l[@jb.contained_by(:h1)].should == "(j <@ h1)"
+  end
+
+  it "#contained_by should handle hashes" do
+    @l[@jb.contained_by('a'=>'b')].should == "(j <@ '{\"a\":\"b\"}'::jsonb)"
+  end
+
+  it "#contained_by should handle arrays" do
+    @l[@jb.contained_by([1, 2])].should == "(j <@ '[1,2]'::jsonb)"
+  end
+
+  it "#has_key? and aliases should use the ? operator" do
+    @l[@jb.has_key?('a')].should == "(j ? 'a')"
+    @l[@jb.include?('a')].should == "(j ? 'a')"
   end
 
   it "#pg_json should return self" do
     @j.pg_json.should equal(@j)
+    @jb.pg_jsonb.should equal(@jb)
   end
 
   it "Sequel.pg_json_op should return arg for JSONOp" do
     Sequel.pg_json_op(@j).should equal(@j)
+    Sequel.pg_jsonb_op(@jb).should equal(@jb)
   end
 
   it "should be able to turn expressions into json ops using pg_json" do
     @db.literal(Sequel.qualify(:b, :a).pg_json[1]).should == "(b.a -> 1)"
     @db.literal(Sequel.function(:a, :b).pg_json[1]).should == "(a(b) -> 1)"
+    @db.literal(Sequel.qualify(:b, :a).pg_jsonb[1]).should == "(b.a -> 1)"
+    @db.literal(Sequel.function(:a, :b).pg_jsonb[1]).should == "(a(b) -> 1)"
   end
 
   it "should be able to turn literal strings into json ops using pg_json" do
     @db.literal(Sequel.lit('a').pg_json[1]).should == "(a -> 1)"
+    @db.literal(Sequel.lit('a').pg_jsonb[1]).should == "(a -> 1)"
   end
 
   it "should be able to turn symbols into json ops using Sequel.pg_json_op" do
     @db.literal(Sequel.pg_json_op(:a)[1]).should == "(a -> 1)"
+    @db.literal(Sequel.pg_jsonb_op(:a)[1]).should == "(a -> 1)"
   end
 
   it "should be able to turn symbols into json ops using Sequel.pg_json" do
     @db.literal(Sequel.pg_json(:a)[1]).should == "(a -> 1)"
+    @db.literal(Sequel.pg_jsonb(:a)[1]).should == "(a -> 1)"
   end
 
   it "should allow transforming JSONArray instances into ArrayOp instances" do
@@ -128,4 +219,12 @@ describe "Sequel::Postgres::JSONOp" do
   it "should allow transforming JSONHash instances into ArrayOp instances" do
     @db.literal(Sequel.pg_json('a'=>1).op['a']).should == "('{\"a\":1}'::json -> 'a')"
   end
+
+  it "should allow transforming JSONBArray instances into ArrayOp instances" do
+    @db.literal(Sequel.pg_jsonb([1,2]).op[1]).should == "('[1,2]'::jsonb -> 1)"
+  end
+
+  it "should allow transforming JSONBHash instances into ArrayOp instances" do
+    @db.literal(Sequel.pg_jsonb('a'=>1).op['a']).should == "('{\"a\":1}'::jsonb -> 'a')"
+  end
 end
diff --git a/spec/extensions/pg_json_spec.rb b/spec/extensions/pg_json_spec.rb
index 03065bf..adf8f7e 100644
--- a/spec/extensions/pg_json_spec.rb
+++ b/spec/extensions/pg_json_spec.rb
@@ -8,6 +8,8 @@ describe "pg_json extension" do
     @m = m::JSONDatabaseMethods
     @hc = m::JSONHash
     @ac = m::JSONArray
+    @bhc = m::JSONBHash
+    @bac = m::JSONBArray
 
     # Create subclass in correct namespace for easily overriding methods
     j = m::JSON = JSON.dup
@@ -46,6 +48,14 @@ describe "pg_json extension" do
     @m.db_parse_json('1.1').should == 1.1
   end
 
+  it "should parse json and non-json plain strings, integers, and floats correctly in db_parse_jsonb" do
+    @m.db_parse_jsonb('{"a": "b", "c": {"d": "e"}}').to_hash.should == {'a'=>'b', 'c'=>{'d'=>'e'}}
+    @m.db_parse_jsonb('[1, [2], {"a": "b"}]').to_a.should == [1, [2], {'a'=>'b'}]
+    @m.db_parse_jsonb('1').should == 1
+    @m.db_parse_jsonb('"b"').should == 'b'
+    @m.db_parse_jsonb('1.1').should == 1.1
+  end
+
   it "should raise an error when attempting to parse invalid json" do
     proc{@m.parse_json('')}.should raise_error(Sequel::InvalidValue)
     proc{@m.parse_json('1')}.should raise_error(Sequel::InvalidValue)
@@ -72,6 +82,13 @@ describe "pg_json extension" do
     @db.literal(Sequel.pg_json('a'=>'b')).should == "'{\"a\":\"b\"}'::json"
   end
 
+  it "should literalize JSONHash and JSONArray to strings correctly" do
+    @db.literal(Sequel.pg_jsonb([])).should == "'[]'::jsonb"
+    @db.literal(Sequel.pg_jsonb([1, [2], {'a'=>'b'}])).should == "'[1,[2],{\"a\":\"b\"}]'::jsonb"
+    @db.literal(Sequel.pg_jsonb({})).should == "'{}'::jsonb"
+    @db.literal(Sequel.pg_jsonb('a'=>'b')).should == "'{\"a\":\"b\"}'::jsonb"
+  end
+
   it "should have Sequel.pg_json return JSONHash and JSONArray as is" do
     a = Sequel.pg_json({})
     Sequel.pg_json(a).should equal(a)
@@ -79,22 +96,72 @@ describe "pg_json extension" do
     Sequel.pg_json(a).should equal(a)
   end
 
-  it "should have JSONHash#to_hash method for getting underlying hash" do
+  it "should have Sequel.pg_json convert jsonb values" do
+    a = {}
+    v = Sequel.pg_json(Sequel.pg_jsonb(a))
+    v.to_hash.should equal(a)
+    v.should be_a_kind_of(@hc)
+
+    a = []
+    v = Sequel.pg_json(Sequel.pg_jsonb(a))
+    v.to_a.should equal(a)
+    v.should be_a_kind_of(@ac)
+  end
+
+  it "should have Sequel.pg_jsonb return JSONBHash and JSONBArray as is" do
+    a = Sequel.pg_jsonb({})
+    Sequel.pg_jsonb(a).should equal(a)
+    a = Sequel.pg_jsonb([])
+    Sequel.pg_jsonb(a).should equal(a)
+  end
+
+  it "should have Sequel.pg_jsonb convert json values" do
+    a = {}
+    v = Sequel.pg_jsonb(Sequel.pg_json(a))
+    v.to_hash.should equal(a)
+    v.should be_a_kind_of(@bhc)
+
+    a = []
+    v = Sequel.pg_jsonb(Sequel.pg_json(a))
+    v.to_a.should equal(a)
+    v.should be_a_kind_of(@bac)
+  end
+
+  it "should have JSONHashBase#to_hash method for getting underlying hash" do
     Sequel.pg_json({}).to_hash.should be_a_kind_of(Hash)
+    Sequel.pg_jsonb({}).to_hash.should be_a_kind_of(Hash)
+  end
+
+  it "should allow aliasing json objects" do
+    @db.literal(Sequel.pg_json({}).as(:a)).should == "'{}'::json AS a"
+    @db.literal(Sequel.pg_json([]).as(:a)).should == "'[]'::json AS a"
+    @db.literal(Sequel.pg_jsonb({}).as(:a)).should == "'{}'::jsonb AS a"
+    @db.literal(Sequel.pg_jsonb([]).as(:a)).should == "'[]'::jsonb AS a"
+  end
+
+  it "should allow casting json objects" do
+    @db.literal(Sequel.pg_json({}).cast(String)).should == "CAST('{}'::json AS text)"
+    @db.literal(Sequel.pg_json([]).cast(String)).should == "CAST('[]'::json AS text)"
+    @db.literal(Sequel.pg_jsonb({}).cast(String)).should == "CAST('{}'::jsonb AS text)"
+    @db.literal(Sequel.pg_jsonb([]).cast(String)).should == "CAST('[]'::jsonb AS text)"
   end
 
-  it "should have JSONArray#to_a method for getting underlying array" do
+  it "should have JSONArrayBase#to_a method for getting underlying array" do
     Sequel.pg_json([]).to_a.should be_a_kind_of(Array)
+    Sequel.pg_jsonb([]).to_a.should be_a_kind_of(Array)
   end
 
-  it "should support using JSONHash and JSONArray as bound variables" do
+  it "should support using JSONHashBase and JSONArrayBase as bound variables" do
     @db.bound_variable_arg(1, nil).should == 1
     @db.bound_variable_arg(Sequel.pg_json([1]), nil).should == '[1]'
     @db.bound_variable_arg(Sequel.pg_json('a'=>'b'), nil).should == '{"a":"b"}'
+    @db.bound_variable_arg(Sequel.pg_jsonb([1]), nil).should == '[1]'
+    @db.bound_variable_arg(Sequel.pg_jsonb('a'=>'b'), nil).should == '{"a":"b"}'
   end
 
-  it "should support using json[] types in bound variables" do
+  it "should support using json[] and jsonb[] types in bound variables" do
     @db.bound_variable_arg(Sequel.pg_array([Sequel.pg_json([{"a"=>1}]), Sequel.pg_json("b"=>[1, 2])]), nil).should == '{"[{\\"a\\":1}]","{\\"b\\":[1,2]}"}'
+    @db.bound_variable_arg(Sequel.pg_array([Sequel.pg_jsonb([{"a"=>1}]), Sequel.pg_jsonb("b"=>[1, 2])]), nil).should == '{"[{\\"a\\":1}]","{\\"b\\":[1,2]}"}'
   end
 
   it "should parse json type from the schema correctly" do
@@ -102,23 +169,56 @@ describe "pg_json extension" do
     @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :json]
   end
 
+  it "should parse json type from the schema correctly" do
+    @db.fetch = [{:name=>'id', :db_type=>'integer'}, {:name=>'i', :db_type=>'jsonb'}]
+    @db.schema(:items).map{|e| e[1][:type]}.should == [:integer, :jsonb]
+  end
+
   it "should support typecasting for the json type" do
     h = Sequel.pg_json(1=>2)
     a = Sequel.pg_json([1])
     @db.typecast_value(:json, h).should equal(h)
     @db.typecast_value(:json, h.to_hash).should == h
     @db.typecast_value(:json, h.to_hash).should be_a_kind_of(@hc)
+    @db.typecast_value(:json, Sequel.pg_jsonb(h)).should == h
+    @db.typecast_value(:json, Sequel.pg_jsonb(h)).should be_a_kind_of(@hc)
     @db.typecast_value(:json, a).should equal(a)
     @db.typecast_value(:json, a.to_a).should == a
     @db.typecast_value(:json, a.to_a).should be_a_kind_of(@ac)
+    @db.typecast_value(:json, Sequel.pg_jsonb(a)).should == a
+    @db.typecast_value(:json, Sequel.pg_jsonb(a)).should be_a_kind_of(@ac)
     @db.typecast_value(:json, '[]').should == Sequel.pg_json([])
+    @db.typecast_value(:json, '[]').should be_a_kind_of(@ac)
     @db.typecast_value(:json, '{"a": "b"}').should == Sequel.pg_json("a"=>"b")
+    @db.typecast_value(:json, '{"a": "b"}').should be_a_kind_of(@hc)
     proc{@db.typecast_value(:json, '')}.should raise_error(Sequel::InvalidValue)
     proc{@db.typecast_value(:json, 1)}.should raise_error(Sequel::InvalidValue)
   end
 
+  it "should support typecasting for the jsonb type" do
+    h = Sequel.pg_jsonb(1=>2)
+    a = Sequel.pg_jsonb([1])
+    @db.typecast_value(:jsonb, h).should equal(h)
+    @db.typecast_value(:jsonb, h.to_hash).should == h
+    @db.typecast_value(:jsonb, h.to_hash).should be_a_kind_of(@bhc)
+    @db.typecast_value(:jsonb, Sequel.pg_json(h)).should == h
+    @db.typecast_value(:jsonb, Sequel.pg_json(h)).should be_a_kind_of(@bhc)
+    @db.typecast_value(:jsonb, a).should equal(a)
+    @db.typecast_value(:jsonb, a.to_a).should == a
+    @db.typecast_value(:jsonb, a.to_a).should be_a_kind_of(@bac)
+    @db.typecast_value(:jsonb, Sequel.pg_json(a)).should == a
+    @db.typecast_value(:jsonb, Sequel.pg_json(a)).should be_a_kind_of(@bac)
+    @db.typecast_value(:jsonb, '[]').should == Sequel.pg_jsonb([])
+    @db.typecast_value(:jsonb, '[]').should be_a_kind_of(@bac)
+    @db.typecast_value(:jsonb, '{"a": "b"}').should == Sequel.pg_jsonb("a"=>"b")
+    @db.typecast_value(:jsonb, '{"a": "b"}').should be_a_kind_of(@bhc)
+    proc{@db.typecast_value(:jsonb, '')}.should raise_error(Sequel::InvalidValue)
+    proc{@db.typecast_value(:jsonb, 1)}.should raise_error(Sequel::InvalidValue)
+  end
+
   it "should return correct results for Database#schema_type_class" do
     @db.schema_type_class(:json).should == [Sequel::Postgres::JSONHash, Sequel::Postgres::JSONArray]
+    @db.schema_type_class(:jsonb).should == [Sequel::Postgres::JSONBHash, Sequel::Postgres::JSONBArray]
     @db.schema_type_class(:integer).should == Integer
   end
 end
diff --git a/spec/extensions/pg_range_spec.rb b/spec/extensions/pg_range_spec.rb
index 02ae8dd..b4700f3 100644
--- a/spec/extensions/pg_range_spec.rb
+++ b/spec/extensions/pg_range_spec.rb
@@ -285,8 +285,8 @@ describe "pg_range extension" do
 
     it "should quack like a range" do
       if RUBY_VERSION >= '1.9'
-        @r1.cover?(1.5).should be_true
-        @r1.cover?(2.5).should be_false
+        @r1.cover?(1.5).should == true
+        @r1.cover?(2.5).should == false
         @r1.first(1).should == [1]
         @r1.last(1).should == [2]
       end
@@ -357,12 +357,12 @@ describe "pg_range extension" do
     end
 
     it "should have #exclude_begin? and #exclude_end indicate whether the beginning or ending of the range is excluded" do
-      @r1.exclude_begin?.should be_false
-      @r1.exclude_end?.should be_false
-      @r2.exclude_begin?.should be_true
-      @r2.exclude_end?.should be_false
-      @r3.exclude_begin?.should be_false
-      @r3.exclude_end?.should be_true
+      @r1.exclude_begin?.should == false
+      @r1.exclude_end?.should == false
+      @r2.exclude_begin?.should == true
+      @r2.exclude_end?.should == false
+      @r3.exclude_begin?.should == false
+      @r3.exclude_end?.should == true
     end
 
     it "should have #to_range raise an exception if the PGRange cannot be represented by a Range" do
@@ -381,24 +381,24 @@ describe "pg_range extension" do
     end
 
     it "should have #unbounded_begin? and #unbounded_end indicate whether the beginning or ending of the range is unbounded" do
-      @r1.unbounded_begin?.should be_false
-      @r1.unbounded_end?.should be_false
-      @r2.unbounded_begin?.should be_false
-      @r2.unbounded_end?.should be_true
-      @r3.unbounded_begin?.should be_true
-      @r3.unbounded_end?.should be_false
+      @r1.unbounded_begin?.should == false
+      @r1.unbounded_end?.should == false
+      @r2.unbounded_begin?.should == false
+      @r2.unbounded_end?.should == true
+      @r3.unbounded_begin?.should == true
+      @r3.unbounded_end?.should == false
     end
 
     it "should have #valid_ruby_range? return true if the PGRange can be represented as a Range" do
-      @r1.valid_ruby_range?.should be_true
-      @R.new(1, 2, :exclude_end=>true).valid_ruby_range?.should be_true
+      @r1.valid_ruby_range?.should == true
+      @R.new(1, 2, :exclude_end=>true).valid_ruby_range?.should == true
     end
 
     it "should have #valid_ruby_range? return false if the PGRange cannot be represented as a Range" do
-      @R.new(nil, 1).valid_ruby_range?.should be_false
-      @R.new(1, nil).valid_ruby_range?.should be_false
-      @R.new(0, 1, :exclude_begin=>true).valid_ruby_range?.should be_false
-      @R.empty.valid_ruby_range?.should be_false
+      @R.new(nil, 1).valid_ruby_range?.should == false
+      @R.new(1, nil).valid_ruby_range?.should == false
+      @R.new(0, 1, :exclude_begin=>true).valid_ruby_range?.should == false
+      @R.empty.valid_ruby_range?.should == false
     end
   end
 end
diff --git a/spec/extensions/pg_row_spec.rb b/spec/extensions/pg_row_spec.rb
index 9973f4e..2d08cca 100644
--- a/spec/extensions/pg_row_spec.rb
+++ b/spec/extensions/pg_row_spec.rb
@@ -272,7 +272,7 @@ describe "pg_row extension" do
     @db.conversion_procs[4] = proc{|s| called = true; s}
     @db.register_row_type(:foo)
     @db.conversion_procs[1].call('()').should == {:bar=>nil}
-    called.should be_false
+    called.should == false
   end
 
   it "should registering array type for row type if type has an array oid" do
diff --git a/spec/extensions/prepared_statements_associations_spec.rb b/spec/extensions/prepared_statements_associations_spec.rb
index 3324a02..964136a 100644
--- a/spec/extensions/prepared_statements_associations_spec.rb
+++ b/spec/extensions/prepared_statements_associations_spec.rb
@@ -22,8 +22,10 @@ describe "Sequel::Plugins::PreparedStatementsAssociations" do
     @Artist.one_to_one :album, :class=>@Album, :key=>:artist_id
     @Album.many_to_one :artist, :class=>@Artist
     @Album.many_to_many :tags, :class=>@Tag, :join_table=>:albums_tags, :left_key=>:album_id
+    @Album.one_through_one :tag, :clone=>:tags
     @Artist.plugin :many_through_many
     @Artist.many_through_many :tags, [[:albums, :artist_id, :id], [:albums_tags, :album_id, :tag_id]], :class=>@Tag
+    @Artist.one_through_many :tag, :clone=>:tags
     @db.sqls
   end
 
@@ -38,10 +40,16 @@ describe "Sequel::Plugins::PreparedStatementsAssociations" do
     @db.sqls.should == ["SELECT * FROM artists WHERE (artists.id = 2) LIMIT 1 -- prepared"]
 
     @Album.load(:id=>1, :artist_id=>2).tags
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.album_id = 1)) -- prepared"]
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (albums_tags.album_id = 1) -- prepared"]
+
+    @Album.load(:id=>1, :artist_id=>2).tag
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (albums_tags.album_id = 1) LIMIT 1 -- prepared"]
 
     @Artist.load(:id=>1).tags
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.artist_id = 1)) -- prepared"]
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) WHERE (albums.artist_id = 1) -- prepared"]
+
+    @Artist.load(:id=>1).tag
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) WHERE (albums.artist_id = 1) LIMIT 1 -- prepared"]
   end
 
   specify "should run correct SQL for composite key associations" do
@@ -49,7 +57,10 @@ describe "Sequel::Plugins::PreparedStatementsAssociations" do
     @Artist.one_to_one :album, :class=>@Album, :key=>[:artist_id, :artist_id2], :primary_key=>[:id, :id2]
     @Album.many_to_one :artist, :class=>@Artist, :key=>[:artist_id, :artist_id2], :primary_key=>[:id, :id2]
     @Album.many_to_many :tags, :class=>@Tag, :join_table=>:albums_tags, :left_key=>[:album_id, :album_id2], :right_key=>[:tag_id, :tag_id2], :right_primary_key=>[:id, :id2], :left_primary_key=>[:id, :id2]
+    @Album.one_through_one :tag, :clone=>:tags
+
     @Artist.many_through_many :tags, [[:albums, [:artist_id, :artist_id2], [:id, :id2]], [:albums_tags, [:album_id, :album_id2], [:tag_id, :tag_id2]]], :class=>@Tag, :right_primary_key=>[:id, :id2], :left_primary_key=>[:id, :id2]
+    @Artist.one_through_many :tag, :clone=>:tags
 
     @Artist.load(:id=>1, :id2=>2).albums
     @db.sqls.should == ["SELECT * FROM albums WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"]
@@ -61,10 +72,16 @@ describe "Sequel::Plugins::PreparedStatementsAssociations" do
     @db.sqls.should == ["SELECT * FROM artists WHERE ((artists.id = 2) AND (artists.id2 = 3)) LIMIT 1 -- prepared"]
 
     @Album.load(:id=>1, :artist_id=>2, :id2=>3).tags
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2) AND (albums_tags.album_id = 1) AND (albums_tags.album_id2 = 3)) -- prepared"]
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id = 1) AND (albums_tags.album_id2 = 3)) -- prepared"]
+
+    @Album.load(:id=>1, :artist_id=>2, :id2=>3).tag
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id = 1) AND (albums_tags.album_id2 = 3)) LIMIT 1 -- prepared"]
 
     @Artist.load(:id=>1, :id2=>2).tags
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.id2 = albums_tags.album_id2) AND (albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"]
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.id2 = albums_tags.album_id2)) WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"]
+
+    @Artist.load(:id=>1, :id2=>2).tag
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.id2 = albums_tags.album_id2)) WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) LIMIT 1 -- prepared"]
   end
 
   specify "should not run query if no objects can be associated" do
@@ -76,6 +93,8 @@ describe "Sequel::Plugins::PreparedStatementsAssociations" do
   specify "should run a regular query if there is a callback" do
     @Artist.load(:id=>1).albums(proc{|ds| ds})
     @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1)"]
+    @Artist.load(:id=>1).album(proc{|ds| ds})
+    @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1) LIMIT 1"]
   end
 
   specify "should run a regular query if :prepared_statement=>false option is used for the association" do
diff --git a/spec/extensions/rcte_tree_spec.rb b/spec/extensions/rcte_tree_spec.rb
index 656c74c..1c5e495 100644
--- a/spec/extensions/rcte_tree_spec.rb
+++ b/spec/extensions/rcte_tree_spec.rb
@@ -2,14 +2,18 @@ require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
 
 describe Sequel::Model, "rcte_tree" do
   before do
-    @c = Class.new(Sequel::Model(DB[:nodes]))
+    @db = Sequel.mock
+    @db.extend_datasets do
+      def supports_cte?(*) true end
+    end
+    @c = Class.new(Sequel::Model(@db[:nodes]))
     @c.class_eval do
       def self.name; 'Node'; end
       columns :id, :name, :parent_id, :i, :pi
     end
     @ds = @c.dataset
     @o = @c.load(:id=>2, :parent_id=>1, :name=>'AA', :i=>3, :pi=>4)
-    DB.reset
+    @db.sqls
   end
 
   it "should define the correct associations" do
@@ -44,11 +48,11 @@ describe Sequel::Model, "rcte_tree" do
     @c.plugin :rcte_tree
     @ds._fetch = [[{:id=>1, :name=>'A', :parent_id=>3}]]
     @c.eager(:ancestors).all
-    DB.sqls.should == ["SELECT * FROM nodes", "WITH t(x_root_x, id, name, parent_id, i, pi) AS (SELECT id AS x_root_x, nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes WHERE (id IN (3)) UNION ALL SELECT t.x_root_x, nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes INNER JOIN t ON (t.parent_id = nodes.id)) SELECT * FROM t AS nodes"]
+    @db.sqls.should == ["SELECT * FROM nodes", "WITH t(x_root_x, id, name, parent_id, i, pi) AS (SELECT id AS x_root_x, nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes WHERE (id IN (3)) UNION ALL SELECT t.x_root_x, nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes INNER JOIN t ON (t.parent_id = nodes.id)) SELECT * FROM t AS nodes"]
 
     @ds._fetch = [[{:id=>1, :name=>'A', :parent_id=>3}]]
     @c.eager(:descendants).all
-    DB.sqls.should == ["SELECT * FROM nodes", "WITH t(x_root_x, id, name, parent_id, i, pi) AS (SELECT parent_id AS x_root_x, nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes WHERE (parent_id IN (1)) UNION ALL SELECT t.x_root_x, nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes INNER JOIN t ON (t.id = nodes.parent_id)) SELECT * FROM t AS nodes"]
+    @db.sqls.should == ["SELECT * FROM nodes", "WITH t(x_root_x, id, name, parent_id, i, pi) AS (SELECT parent_id AS x_root_x, nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes WHERE (parent_id IN (1)) UNION ALL SELECT t.x_root_x, nodes.id, nodes.name, nodes.parent_id, nodes.i, nodes.pi FROM nodes INNER JOIN t ON (t.id = nodes.parent_id)) SELECT * FROM t AS nodes"]
   end
   
   it "should use the correct SQL for lazy associations when giving options" do
@@ -114,7 +118,7 @@ describe Sequel::Model, "rcte_tree" do
        {:id=>1, :name=>'00', :parent_id=>8, :x_root_x=>1}, {:id=>1, :name=>'00', :parent_id=>8, :x_root_x=>2},
        {:id=>8, :name=>'?', :parent_id=>nil, :x_root_x=>2}, {:id=>8, :name=>'?', :parent_id=>nil, :x_root_x=>1}]]
     os = @ds.eager(:ancestors).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
     sqls.last.should =~ /WITH t AS \(SELECT id AS x_root_x, nodes\.\* FROM nodes WHERE \(id IN \([12], [12]\)\) UNION ALL SELECT t\.x_root_x, nodes\.\* FROM nodes INNER JOIN t ON \(t\.parent_id = nodes\.id\)\) SELECT \* FROM t AS nodes/
     os.should == [@c.load(:id=>2, :parent_id=>1, :name=>'AA'), @c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>7, :parent_id=>1, :name=>'D'), @c.load(:id=>9, :parent_id=>nil, :name=>'E')]
@@ -126,7 +130,7 @@ describe Sequel::Model, "rcte_tree" do
     os.map{|o| o.parent.parent if o.parent}.should == [@c.load(:id=>8, :name=>'?', :parent_id=>nil), @c.load(:id=>1, :name=>'00', :parent_id=>8), @c.load(:id=>8, :name=>'?', :parent_id=>nil), nil]
     os.map{|o| o.parent.parent.parent if o.parent and o.parent.parent}.should == [nil, @c.load(:id=>8, :name=>'?', :parent_id=>nil), nil, nil]
     os.map{|o| o.parent.parent.parent.parent if o.parent and o.parent.parent and o.parent.parent.parent}.should == [nil, nil, nil, nil]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should eagerly load ancestors when giving options" do
@@ -136,7 +140,7 @@ describe Sequel::Model, "rcte_tree" do
        {:i=>1, :name=>'00', :pi=>8, :kal=>1}, {:i=>1, :name=>'00', :pi=>8, :kal=>2},
        {:i=>8, :name=>'?', :pi=>nil, :kal=>2}, {:i=>8, :name=>'?', :pi=>nil, :kal=>1}]]
     os = @ds.eager(:as).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
     sqls.last.should =~ /WITH cte AS \(SELECT i AS kal, nodes\.\* FROM nodes WHERE \(i IN \([12], [12]\)\) UNION ALL SELECT cte\.kal, nodes\.\* FROM nodes INNER JOIN cte ON \(cte\.pi = nodes\.i\)\) SELECT \* FROM cte/
     os.should == [@c.load(:i=>2, :pi=>1, :name=>'AA'), @c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>7, :pi=>1, :name=>'D'), @c.load(:i=>9, :pi=>nil, :name=>'E')]
@@ -157,7 +161,7 @@ describe Sequel::Model, "rcte_tree" do
        {:id=>1, :name=>'00', :parent_id=>8, :x_root_x=>1}, {:id=>1, :name=>'00', :parent_id=>8, :x_root_x=>2},
        {:id=>8, :name=>'?', :parent_id=>nil, :x_root_x=>2}, {:id=>8, :name=>'?', :parent_id=>nil, :x_root_x=>1}]]
     @ds.eager(:ancestors).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
     sqls.last.should =~ /WITH t AS \(SELECT id AS x_root_x, nodes\.\* FROM nodes WHERE \(\(id IN \([12], [12]\)\) AND \(i = 1\)\) UNION ALL SELECT t\.x_root_x, nodes\.\* FROM nodes INNER JOIN t ON \(t\.parent_id = nodes\.id\) WHERE \(i = 1\)\) SELECT \* FROM t AS nodes WHERE \(i = 1\)/
   end
@@ -169,7 +173,7 @@ describe Sequel::Model, "rcte_tree" do
        {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>6}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>2},
        {:id=>4, :name=>'?', :parent_id=>7, :x_root_x=>7}, {:id=>5, :name=>'?', :parent_id=>4, :x_root_x=>7}]]
     os = @ds.eager(:descendants).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
     sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x, nodes\.\* FROM nodes WHERE \(parent_id IN \([267], [267], [267]\)\) UNION ALL SELECT t\.x_root_x, nodes\.\* FROM nodes INNER JOIN t ON \(t\.id = nodes\.parent_id\)\) SELECT \* FROM t AS nodes/
     os.should == [@c.load(:id=>2, :parent_id=>1, :name=>'AA'), @c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>7, :parent_id=>1, :name=>'D')]
@@ -179,7 +183,7 @@ describe Sequel::Model, "rcte_tree" do
     os.map{|o| o.children}.should == [[@c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>9, :parent_id=>2, :name=>'E')], [@c.load(:id=>3, :name=>'00', :parent_id=>6)], [@c.load(:id=>4, :name=>'?', :parent_id=>7)]]
     os.map{|o1| o1.children.map{|o2| o2.children}}.should == [[[@c.load(:id=>3, :name=>'00', :parent_id=>6)], []], [[]], [[@c.load(:id=>5, :name=>'?', :parent_id=>4)]]]
     os.map{|o1| o1.children.map{|o2| o2.children.map{|o3| o3.children}}}.should == [[[[]], []], [[]], [[[]]]]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should eagerly load descendants when giving options" do
@@ -189,7 +193,7 @@ describe Sequel::Model, "rcte_tree" do
        {:i=>3, :name=>'00', :pi=>6, :kal=>6}, {:i=>3, :name=>'00', :pi=>6, :kal=>2},
        {:i=>4, :name=>'?', :pi=>7, :kal=>7}, {:i=>5, :name=>'?', :pi=>4, :kal=>7}]]
     os = @ds.eager(:ds).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
     sqls.last.should =~ /WITH cte AS \(SELECT pi AS kal, nodes\.\* FROM nodes WHERE \(pi IN \([267], [267], [267]\)\) UNION ALL SELECT cte\.kal, nodes\.\* FROM nodes INNER JOIN cte ON \(cte\.i = nodes\.pi\)\) SELECT \* FROM cte/
     os.should == [@c.load(:i=>2, :pi=>1, :name=>'AA'), @c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>7, :pi=>1, :name=>'D')]
@@ -199,7 +203,7 @@ describe Sequel::Model, "rcte_tree" do
     os.map{|o| o.cs}.should == [[@c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>9, :pi=>2, :name=>'E')], [@c.load(:i=>3, :name=>'00', :pi=>6)], [@c.load(:i=>4, :name=>'?', :pi=>7)]]
     os.map{|o1| o1.cs.map{|o2| o2.cs}}.should == [[[@c.load(:i=>3, :name=>'00', :pi=>6)], []], [[]], [[@c.load(:i=>5, :name=>'?', :pi=>4)]]]
     os.map{|o1| o1.cs.map{|o2| o2.cs.map{|o3| o3.cs}}}.should == [[[[]], []], [[]], [[[]]]]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should eagerly load descendants to a given level" do
@@ -209,9 +213,9 @@ describe Sequel::Model, "rcte_tree" do
        {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>6, :x_level_x=>0}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>2, :x_level_x=>1},
        {:id=>4, :name=>'?', :parent_id=>7, :x_root_x=>7, :x_level_x=>0}, {:id=>5, :name=>'?', :parent_id=>4, :x_root_x=>7, :x_level_x=>1}]]
     os = @ds.eager(:descendants=>2).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
-    sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x, nodes\.\*, 0 AS x_level_x FROM nodes WHERE \(parent_id IN \([267], [267], [267]\)\) UNION ALL SELECT t\.x_root_x, nodes\.\*, \(t\.x_level_x \+ 1\) AS x_level_x FROM nodes INNER JOIN t ON \(t\.id = nodes\.parent_id\) WHERE \(t\.x_level_x < 1\)\) SELECT \* FROM t AS nodes/
+    sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x, nodes\.\*, CAST\(0 AS integer\) AS x_level_x FROM nodes WHERE \(parent_id IN \([267], [267], [267]\)\) UNION ALL SELECT t\.x_root_x, nodes\.\*, \(t\.x_level_x \+ 1\) AS x_level_x FROM nodes INNER JOIN t ON \(t\.id = nodes\.parent_id\) WHERE \(t\.x_level_x < 1\)\) SELECT \* FROM t AS nodes/
     os.should == [@c.load(:id=>2, :parent_id=>1, :name=>'AA'), @c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>7, :parent_id=>1, :name=>'D')]
     os.map{|o| o.descendants}.should == [[@c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>9, :parent_id=>2, :name=>'E'), @c.load(:id=>3, :name=>'00', :parent_id=>6)],
       [@c.load(:id=>3, :name=>'00', :parent_id=>6)],
@@ -219,7 +223,7 @@ describe Sequel::Model, "rcte_tree" do
     os.map{|o| o.associations[:children]}.should == [[@c.load(:id=>6, :parent_id=>2, :name=>'C'), @c.load(:id=>9, :parent_id=>2, :name=>'E')], [@c.load(:id=>3, :name=>'00', :parent_id=>6)], [@c.load(:id=>4, :name=>'?', :parent_id=>7)]]
     os.map{|o1| o1.associations[:children].map{|o2| o2.associations[:children]}}.should == [[[@c.load(:id=>3, :name=>'00', :parent_id=>6)], []], [[]], [[@c.load(:id=>5, :name=>'?', :parent_id=>4)]]]
     os.map{|o1| o1.associations[:children].map{|o2| o2.associations[:children].map{|o3| o3.associations[:children]}}}.should == [[[[]], []], [[]], [[nil]]]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should eagerly load descendants to a given level when giving options" do
@@ -229,9 +233,9 @@ describe Sequel::Model, "rcte_tree" do
        {:i=>3, :name=>'00', :pi=>6, :kal=>6, :lal=>0}, {:i=>3, :name=>'00', :pi=>6, :kal=>2, :lal=>1},
        {:i=>4, :name=>'?', :pi=>7, :kal=>7, :lal=>0}, {:i=>5, :name=>'?', :pi=>4, :kal=>7, :lal=>1}]]
     os = @ds.eager(:ds=>2).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
-    sqls.last.should =~ /WITH cte AS \(SELECT pi AS kal, nodes\.\*, 0 AS lal FROM nodes WHERE \(pi IN \([267], [267], [267]\)\) UNION ALL SELECT cte\.kal, nodes\.\*, \(cte\.lal \+ 1\) AS lal FROM nodes INNER JOIN cte ON \(cte\.i = nodes\.pi\) WHERE \(cte\.lal < 1\)\) SELECT \* FROM cte/
+    sqls.last.should =~ /WITH cte AS \(SELECT pi AS kal, nodes\.\*, CAST\(0 AS integer\) AS lal FROM nodes WHERE \(pi IN \([267], [267], [267]\)\) UNION ALL SELECT cte\.kal, nodes\.\*, \(cte\.lal \+ 1\) AS lal FROM nodes INNER JOIN cte ON \(cte\.i = nodes\.pi\) WHERE \(cte\.lal < 1\)\) SELECT \* FROM cte/
     os.should == [@c.load(:i=>2, :pi=>1, :name=>'AA'), @c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>7, :pi=>1, :name=>'D')]
     os.map{|o| o.ds}.should == [[@c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>9, :pi=>2, :name=>'E'), @c.load(:i=>3, :name=>'00', :pi=>6)],
       [@c.load(:i=>3, :name=>'00', :pi=>6)],
@@ -239,7 +243,7 @@ describe Sequel::Model, "rcte_tree" do
     os.map{|o| o.associations[:cs]}.should == [[@c.load(:i=>6, :pi=>2, :name=>'C'), @c.load(:i=>9, :pi=>2, :name=>'E')], [@c.load(:i=>3, :name=>'00', :pi=>6)], [@c.load(:i=>4, :name=>'?', :pi=>7)]]
     os.map{|o1| o1.associations[:cs].map{|o2| o2.associations[:cs]}}.should == [[[@c.load(:i=>3, :name=>'00', :pi=>6)], []], [[]], [[@c.load(:i=>5, :name=>'?', :pi=>4)]]]
     os.map{|o1| o1.associations[:cs].map{|o2| o2.associations[:cs].map{|o3| o3.associations[:cs]}}}.should == [[[[]], []], [[]], [[nil]]]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
 
   it "should eagerly load descendants respecting association option :conditions" do
@@ -249,7 +253,7 @@ describe Sequel::Model, "rcte_tree" do
        {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>6}, {:id=>3, :name=>'00', :parent_id=>6, :x_root_x=>2},
        {:id=>4, :name=>'?', :parent_id=>7, :x_root_x=>7}, {:id=>5, :name=>'?', :parent_id=>4, :x_root_x=>7}]]
     @ds.eager(:descendants).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
     sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x, nodes\.\* FROM nodes WHERE \(\(parent_id IN \([267], [267], [267]\)\) AND \(i = 1\)\) UNION ALL SELECT t\.x_root_x, nodes\.\* FROM nodes INNER JOIN t ON \(t\.id = nodes\.parent_id\) WHERE \(i = 1\)\) SELECT \* FROM t AS nodes WHERE \(i = 1\)/
   end
@@ -257,7 +261,11 @@ end
 
 describe Sequel::Model, "rcte_tree with composite keys" do
   before do
-    @c = Class.new(Sequel::Model(DB[:nodes]))
+    @db = Sequel.mock
+    @db.extend_datasets do
+      def supports_cte?(*) true end
+    end
+    @c = Class.new(Sequel::Model(@db[:nodes]))
     @c.class_eval do
       def self.name; 'Node'; end
       columns :id, :id2, :name, :parent_id, :parent_id2, :i, :pi
@@ -265,7 +273,7 @@ describe Sequel::Model, "rcte_tree with composite keys" do
     end
     @ds = @c.dataset
     @o = @c.load(:id=>2, :id2=>5, :parent_id=>1, :parent_id2=>6, :name=>'AA', :i=>3, :pi=>4)
-    DB.reset
+    @db.sqls
   end
 
   it "should use the correct SQL for lazy associations" do
@@ -288,11 +296,11 @@ describe Sequel::Model, "rcte_tree with composite keys" do
     @c.plugin :rcte_tree, :key=>[:parent_id, :parent_id2]
     @ds._fetch = [[{:id=>1, :id2=>2, :name=>'A', :parent_id=>3, :parent_id2=>4}]]
     @c.eager(:ancestors).all
-    DB.sqls.should == ["SELECT * FROM nodes", "WITH t(x_root_x_0, x_root_x_1, id, id2, name, parent_id, parent_id2, i, pi) AS (SELECT id AS x_root_x_0, id2 AS x_root_x_1, nodes.id, nodes.id2, nodes.name, nodes.parent_id, nodes.parent_id2, nodes.i, nodes.pi FROM nodes WHERE ((id, id2) IN ((3, 4))) UNION ALL SELECT t.x_root_x_0, t.x_root_x_1, nodes.id, nodes.id2, nodes.name, nodes.parent_id, nodes.parent_id2, nodes.i, nodes.pi FROM nodes INNER JOIN t ON ((t.parent_id = nodes.id) AND (t.par [...]
+    @db.sqls.should == ["SELECT * FROM nodes", "WITH t(x_root_x_0, x_root_x_1, id, id2, name, parent_id, parent_id2, i, pi) AS (SELECT id AS x_root_x_0, id2 AS x_root_x_1, nodes.id, nodes.id2, nodes.name, nodes.parent_id, nodes.parent_id2, nodes.i, nodes.pi FROM nodes WHERE ((id, id2) IN ((3, 4))) UNION ALL SELECT t.x_root_x_0, t.x_root_x_1, nodes.id, nodes.id2, nodes.name, nodes.parent_id, nodes.parent_id2, nodes.i, nodes.pi FROM nodes INNER JOIN t ON ((t.parent_id = nodes.id) AND (t.pa [...]
 
     @ds._fetch = [[{:id=>1, :id2=>2, :name=>'A', :parent_id=>3, :parent_id2=>4}]]
     @c.eager(:descendants).all
-    DB.sqls.should == ["SELECT * FROM nodes", "WITH t(x_root_x_0, x_root_x_1, id, id2, name, parent_id, parent_id2, i, pi) AS (SELECT parent_id AS x_root_x_0, parent_id2 AS x_root_x_1, nodes.id, nodes.id2, nodes.name, nodes.parent_id, nodes.parent_id2, nodes.i, nodes.pi FROM nodes WHERE ((parent_id, parent_id2) IN ((1, 2))) UNION ALL SELECT t.x_root_x_0, t.x_root_x_1, nodes.id, nodes.id2, nodes.name, nodes.parent_id, nodes.parent_id2, nodes.i, nodes.pi FROM nodes INNER JOIN t ON ((t.id = [...]
+    @db.sqls.should == ["SELECT * FROM nodes", "WITH t(x_root_x_0, x_root_x_1, id, id2, name, parent_id, parent_id2, i, pi) AS (SELECT parent_id AS x_root_x_0, parent_id2 AS x_root_x_1, nodes.id, nodes.id2, nodes.name, nodes.parent_id, nodes.parent_id2, nodes.i, nodes.pi FROM nodes WHERE ((parent_id, parent_id2) IN ((1, 2))) UNION ALL SELECT t.x_root_x_0, t.x_root_x_1, nodes.id, nodes.id2, nodes.name, nodes.parent_id, nodes.parent_id2, nodes.i, nodes.pi FROM nodes INNER JOIN t ON ((t.id  [...]
   end
   
   it "should add all parent associations when lazily loading ancestors" do
@@ -322,7 +330,7 @@ describe Sequel::Model, "rcte_tree with composite keys" do
        {:id=>1, :id2=>2, :name=>'00', :parent_id=>8, :parent_id2=>9, :x_root_x_0=>1, :x_root_x_1=>2}, {:id=>1, :id2=>2, :name=>'00', :parent_id=>8, :parent_id2=>9, :x_root_x_0=>2, :x_root_x_1=>3},
        {:id=>8, :id2=>9, :name=>'?', :parent_id=>nil, :parent_id2=>nil, :x_root_x_0=>2, :x_root_x_1=>3}, {:id=>8, :id2=>9, :name=>'?', :parent_id=>nil, :parent_id2=>nil, :x_root_x_0=>1, :x_root_x_1=>2}]]
     os = @ds.eager(:ancestors).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
     sqls.last.should =~ /WITH t AS \(SELECT id AS x_root_x_0, id2 AS x_root_x_1, nodes\.\* FROM nodes WHERE \(\(id, id2\) IN \(\([12], [23]\), \([12], [23]\)\)\) UNION ALL SELECT t\.x_root_x_0, t\.x_root_x_1, nodes\.\* FROM nodes INNER JOIN t ON \(\(t\.parent_id = nodes\.id\) AND \(t\.parent_id2 = nodes\.id2\)\)\) SELECT \* FROM t AS nodes/
     os.should == [@c.load(:id=>2, :id2=>3, :parent_id=>1, :parent_id2=>2, :name=>'AA'), @c.load(:id=>6, :id2=>7, :parent_id=>2, :parent_id2=>3, :name=>'C'), @c.load(:id=>7, :id2=>8, :parent_id=>1, :parent_id2=>2, :name=>'D'), @c.load(:id=>9, :id2=>10, :parent_id=>nil, :parent_id2=>nil, :name=>'E')]
@@ -334,7 +342,7 @@ describe Sequel::Model, "rcte_tree with composite keys" do
     os.map{|o| o.parent.parent if o.parent}.should == [@c.load(:id=>8, :id2=>9, :name=>'?', :parent_id=>nil, :parent_id2=>nil), @c.load(:id=>1, :id2=>2, :name=>'00', :parent_id=>8, :parent_id2=>9), @c.load(:id=>8, :id2=>9, :name=>'?', :parent_id=>nil, :parent_id2=>nil), nil]
     os.map{|o| o.parent.parent.parent if o.parent and o.parent.parent}.should == [nil, @c.load(:id=>8, :id2=>9, :name=>'?', :parent_id=>nil, :parent_id2=>nil), nil, nil]
     os.map{|o| o.parent.parent.parent.parent if o.parent and o.parent.parent and o.parent.parent.parent}.should == [nil, nil, nil, nil]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should eagerly load descendants" do
@@ -344,7 +352,7 @@ describe Sequel::Model, "rcte_tree with composite keys" do
        {:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7, :x_root_x_0=>6, :x_root_x_1=>7}, {:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7, :x_root_x_0=>2, :x_root_x_1=>3},
        {:id=>4, :id2=>5, :name=>'?', :parent_id=>7, :parent_id2=>8, :x_root_x_0=>7, :x_root_x_1=>8}, {:id=>5, :id2=>6, :name=>'?', :parent_id=>4, :parent_id2=>5, :x_root_x_0=>7, :x_root_x_1=>8}]]
     os = @ds.eager(:descendants).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
     sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x_0, parent_id2 AS x_root_x_1, nodes\.\* FROM nodes WHERE \(\(parent_id, parent_id2\) IN \(\([267], [378]\), \([267], [378]\), \([267], [378]\)\)\) UNION ALL SELECT t\.x_root_x_0, t\.x_root_x_1, nodes\.\* FROM nodes INNER JOIN t ON \(\(t\.id = nodes\.parent_id\) AND \(t\.id2 = nodes\.parent_id2\)\)\) SELECT \* FROM t AS nodes/
     os.should == [@c.load(:id=>2, :id2=>3, :parent_id=>1, :parent_id2=>2, :name=>'AA'), @c.load(:id=>6, :id2=>7, :parent_id=>2, :parent_id2=>3, :name=>'C'), @c.load(:id=>7, :id2=>8, :parent_id=>1, :parent_id2=>2, :name=>'D')]
@@ -354,7 +362,7 @@ describe Sequel::Model, "rcte_tree with composite keys" do
     os.map{|o| o.children}.should == [[@c.load(:id=>6, :id2=>7, :parent_id=>2, :parent_id2=>3, :name=>'C'), @c.load(:id=>9, :id2=>10, :parent_id=>2, :parent_id2=>3, :name=>'E')], [@c.load(:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7)], [@c.load(:id=>4, :id2=>5, :name=>'?', :parent_id=>7, :parent_id2=>8)]]
     os.map{|o1| o1.children.map{|o2| o2.children}}.should == [[[@c.load(:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7)], []], [[]], [[@c.load(:id=>5, :id2=>6, :name=>'?', :parent_id=>4, :parent_id2=>5)]]]
     os.map{|o1| o1.children.map{|o2| o2.children.map{|o3| o3.children}}}.should == [[[[]], []], [[]], [[[]]]]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
   
   it "should eagerly load descendants to a given level" do
@@ -364,9 +372,9 @@ describe Sequel::Model, "rcte_tree with composite keys" do
        {:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7, :x_root_x_0=>6, :x_root_x_1=>7, :x_level_x=>0}, {:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7, :x_root_x_0=>2, :x_root_x_1=>3, :x_level_x=>1},
        {:id=>4, :id2=>5, :name=>'?', :parent_id=>7, :parent_id2=>8, :x_root_x_0=>7, :x_root_x_1=>8, :x_level_x=>0}, {:id=>5, :id2=>6, :name=>'?', :parent_id=>4, :parent_id2=>5, :x_root_x_0=>7, :x_root_x_1=>8, :x_level_x=>1}]]
     os = @ds.eager(:descendants=>2).all
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM nodes"
-    sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x_0, parent_id2 AS x_root_x_1, nodes\.\*, 0 AS x_level_x FROM nodes WHERE \(\(parent_id, parent_id2\) IN \(\([267], [378]\), \([267], [378]\), \([267], [378]\)\)\) UNION ALL SELECT t\.x_root_x_0, t\.x_root_x_1, nodes\.\*, \(t\.x_level_x \+ 1\) AS x_level_x FROM nodes INNER JOIN t ON \(\(t\.id = nodes\.parent_id\) AND \(t\.id2 = nodes\.parent_id2\)\) WHERE \(t\.x_level_x < 1\)\) SELECT \* FROM t AS nodes/
+    sqls.last.should =~ /WITH t AS \(SELECT parent_id AS x_root_x_0, parent_id2 AS x_root_x_1, nodes\.\*, CAST\(0 AS integer\) AS x_level_x FROM nodes WHERE \(\(parent_id, parent_id2\) IN \(\([267], [378]\), \([267], [378]\), \([267], [378]\)\)\) UNION ALL SELECT t\.x_root_x_0, t\.x_root_x_1, nodes\.\*, \(t\.x_level_x \+ 1\) AS x_level_x FROM nodes INNER JOIN t ON \(\(t\.id = nodes\.parent_id\) AND \(t\.id2 = nodes\.parent_id2\)\) WHERE \(t\.x_level_x < 1\)\) SELECT \* FROM t AS nodes/
     os.should == [@c.load(:id=>2, :id2=>3, :parent_id=>1, :parent_id2=>2, :name=>'AA'), @c.load(:id=>6, :id2=>7, :parent_id=>2, :parent_id2=>3, :name=>'C'), @c.load(:id=>7, :id2=>8, :parent_id=>1, :parent_id2=>2, :name=>'D')]
     os.map{|o| o.descendants}.should == [[@c.load(:id=>6, :id2=>7, :parent_id=>2, :parent_id2=>3, :name=>'C'), @c.load(:id=>9, :id2=>10, :parent_id=>2, :parent_id2=>3, :name=>'E'), @c.load(:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7)],
       [@c.load(:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7)],
@@ -374,6 +382,6 @@ describe Sequel::Model, "rcte_tree with composite keys" do
     os.map{|o| o.associations[:children]}.should == [[@c.load(:id=>6, :id2=>7, :parent_id=>2, :parent_id2=>3, :name=>'C'), @c.load(:id=>9, :id2=>10, :parent_id=>2, :parent_id2=>3, :name=>'E')], [@c.load(:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7)], [@c.load(:id=>4, :id2=>5, :name=>'?', :parent_id=>7, :parent_id2=>8)]]
     os.map{|o1| o1.associations[:children].map{|o2| o2.associations[:children]}}.should == [[[@c.load(:id=>3, :id2=>4, :name=>'00', :parent_id=>6, :parent_id2=>7)], []], [[]], [[@c.load(:id=>5, :id2=>6, :name=>'?', :parent_id=>4, :parent_id2=>5)]]]
     os.map{|o1| o1.associations[:children].map{|o2| o2.associations[:children].map{|o3| o3.associations[:children]}}}.should == [[[[]], []], [[]], [[nil]]]
-    DB.sqls.should == []
+    @db.sqls.should == []
   end
 end
diff --git a/spec/extensions/schema_caching_spec.rb b/spec/extensions/schema_caching_spec.rb
index 73b0c54..f861cbf 100644
--- a/spec/extensions/schema_caching_spec.rb
+++ b/spec/extensions/schema_caching_spec.rb
@@ -12,9 +12,9 @@ describe "schema_caching extension" do
   end
 
   it "Database#dump_schema_cache should dump cached schema to the given file" do
-    File.exist?(@filename).should be_false
+    File.exist?(@filename).should == false
     @db.dump_schema_cache(@filename)
-    File.exist?(@filename).should be_true
+    File.exist?(@filename).should == true
     File.size(@filename).should > 0
   end
 
@@ -34,7 +34,7 @@ describe "schema_caching extension" do
 
   it "Database#load_schema_cache? should load cached schema from the given file if it exists" do
     db = Sequel::Database.new.extension(:schema_caching)
-    File.exist?(@filename).should be_false
+    File.exist?(@filename).should == false
     db.load_schema_cache?(@filename)
     db.instance_variable_get(:@schemas).should == {}
   end
diff --git a/spec/extensions/sequel_3_dataset_methods_spec.rb b/spec/extensions/sequel_3_dataset_methods_spec.rb
index 5a5d3c6..934c251 100644
--- a/spec/extensions/sequel_3_dataset_methods_spec.rb
+++ b/spec/extensions/sequel_3_dataset_methods_spec.rb
@@ -71,7 +71,6 @@ describe "Dataset#opts=" do
   specify "should change the dataset's opts" do
     db = Sequel.mock
     ds = db[:items].extension(:sequel_3_dataset_methods)
-    db2 = Sequel.mock
     ds.sql.should == 'SELECT * FROM items'
     ds.opts = {}
     ds.sql.should == 'SELECT *'
diff --git a/spec/extensions/serialization_spec.rb b/spec/extensions/serialization_spec.rb
index 5269f23..faf84ee 100644
--- a/spec/extensions/serialization_spec.rb
+++ b/spec/extensions/serialization_spec.rb
@@ -300,4 +300,23 @@ describe "Serialization plugin" do
     o.dup.deserialized_values.should == o.deserialized_values
     o.dup.deserialized_values.should_not equal(o.deserialized_values)
   end
+
+  it "should have changed_columns include serialized columns if those columns have changed" do
+    @c.plugin :serialization, :yaml, :abc, :def
+    @c.dataset._fetch = {:id => 1, :abc => "--- 1\n", :def => "--- hello\n"}
+    o = @c.first
+    o.changed_columns.should == []
+    o.abc = 1
+    o.changed_columns.should == []
+    o.abc = 1
+    o.changed_columns.should == []
+    o.abc = 2
+    o.changed_columns.should == [:abc]
+    o.def = 'hello'
+    o.changed_columns.should == [:abc]
+    o.def = 'hello'
+    o.changed_columns.should == [:abc]
+    o.def = 'hello2'
+    o.changed_columns.should == [:abc, :def]
+  end
 end
diff --git a/spec/extensions/sharding_spec.rb b/spec/extensions/sharding_spec.rb
index c8ba374..9d02569 100644
--- a/spec/extensions/sharding_spec.rb
+++ b/spec/extensions/sharding_spec.rb
@@ -113,7 +113,7 @@ describe "sharding plugin" do
     album.artist.update(:name=>'AS')
     @db.sqls.should == ["SELECT * FROM artists WHERE (artists.id = 2) LIMIT 1 -- s1", "UPDATE artists SET name = 'AS' WHERE (id = 2) -- s1"]
     album.tags.map{|a| a.update(:name=>'SR')}
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.album_id = 1)) -- s1", "UPDATE tags SET name = 'SR' WHERE (id = 3) -- s1"]
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (albums_tags.album_id = 1) -- s1", "UPDATE tags SET name = 'SR' WHERE (id = 3) -- s1"]
     @Artist.server(:s2).first.albums.map{|a| a.update(:name=>'MO')}
     @db.sqls.should == ["SELECT * FROM artists LIMIT 1 -- s2", "SELECT * FROM albums WHERE (albums.artist_id = 2) -- s2", "UPDATE albums SET name = 'MO' WHERE (id = 1) -- s2"]
   end 
@@ -145,7 +145,7 @@ describe "sharding plugin" do
     sqls.should == ["SELECT * FROM albums WHERE ((albums.artist_id = 2) AND (albums.id = 1)) LIMIT 1 -- s2"]
     
     album.remove_tag(3)
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.album_id = 1)) WHERE (tags.id = 3) LIMIT 1 -- s1", "DELETE FROM albums_tags WHERE ((album_id = 1) AND (tag_id = 3)) -- s1"]
+    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((albums_tags.album_id = 1) AND (tags.id = 3)) LIMIT 1 -- s1", "DELETE FROM albums_tags WHERE ((album_id = 1) AND (tag_id = 3)) -- s1"]
   end 
 
   specify "should have objects retrieved from a specific shard remove all associated objects from that shard" do
diff --git a/spec/extensions/shared_caching_spec.rb b/spec/extensions/shared_caching_spec.rb
index d36fb32..2e7344a 100644
--- a/spec/extensions/shared_caching_spec.rb
+++ b/spec/extensions/shared_caching_spec.rb
@@ -27,7 +27,7 @@ describe "Shared caching behavior" do
       @c.load(:id=>3, :caching_model_id=>1, :caching_model_id2=>2).caching_model2.should equal(@cm12)
       @c.load(:id=>3, :caching_model_id=>2, :caching_model_id2=>1).caching_model2.should equal(@cm21)
       @db.sqls.should == []
-      @cc.dataset._fetch = []
+      @db.fetch = []
       @c.load(:id=>4, :caching_model_id=>2, :caching_model_id2=>2).caching_model2.should == nil
     end
   end
@@ -39,7 +39,7 @@ describe "Shared caching behavior" do
       @c.load(:id=>3, :caching_model_id=>1).caching_model.should equal(@cm1)
       @c.load(:id=>4, :caching_model_id=>2).caching_model.should equal(@cm2)
       @db.sqls.should == []
-      @cc.dataset._fetch = []
+      @db.fetch = []
       @c.load(:id=>4, :caching_model_id=>3).caching_model.should == nil
     end
 
@@ -128,7 +128,7 @@ describe "Shared caching behavior" do
       @cache = cache
       
       @cc.plugin :caching, @cache
-      @cc.dataset._fetch = {:id=>1}
+      @db.fetch = {:id=>1}
       @cm1 = @cc[1]
       @cm2 = @cc[2]
       @cm12 = @cc[1, 2]
@@ -143,7 +143,7 @@ describe "Shared caching behavior" do
 
   describe "With static_cache plugin with single key" do
     before do
-      @cc.dataset._fetch = [{:id=>1}, {:id=>2}]
+      @db.fetch = [{:id=>1}, {:id=>2}]
       @cc.plugin :static_cache
       @cm1 = @cc[1]
       @cm2 = @cc[2]
@@ -163,7 +163,7 @@ describe "Shared caching behavior" do
   describe "With static_cache plugin with composite key" do
     before do
       @cc.set_primary_key([:id, :id2])
-      @cc.dataset._fetch = [{:id=>1, :id2=>2}, {:id=>2, :id2=>1}]
+      @db.fetch = [{:id=>1, :id2=>2}, {:id=>2, :id2=>1}]
       @cc.plugin :static_cache
       @cm12 = @cc[[1, 2]]
       @cm21 = @cc[[2, 1]]
diff --git a/spec/extensions/single_table_inheritance_spec.rb b/spec/extensions/single_table_inheritance_spec.rb
index 29b342c..197fa9d 100644
--- a/spec/extensions/single_table_inheritance_spec.rb
+++ b/spec/extensions/single_table_inheritance_spec.rb
@@ -70,19 +70,27 @@ describe Sequel::Model, "single table inheritance plugin" do
     called.should == false
   end
 
-  it "should add a before_create hook that sets the model class name for the key" do
+  it "should set the model class name when saving" do
     StiTest.new.save
     StiTestSub1.new.save
     StiTestSub2.new.save
     DB.sqls.should == ["INSERT INTO sti_tests (kind) VALUES ('StiTest')", "SELECT * FROM sti_tests WHERE (id = 10) LIMIT 1", "INSERT INTO sti_tests (kind) VALUES ('StiTestSub1')", "SELECT * FROM sti_tests WHERE ((sti_tests.kind IN ('StiTestSub1')) AND (id = 10)) LIMIT 1", "INSERT INTO sti_tests (kind) VALUES ('StiTestSub2')", "SELECT * FROM sti_tests WHERE ((sti_tests.kind IN ('StiTestSub2')) AND (id = 10)) LIMIT 1"]
   end
 
-  it "should have the before_create hook not override an existing value" do
+  it "should handle validations on the type column field" do
+    o = StiTestSub1.new
+    def o.validate
+      errors.add(:kind, 'not present') unless kind
+    end
+    o.valid?.should == true
+  end
+
+  it "should override an existing value in the class name field" do
     StiTest.create(:kind=>'StiTestSub1')
     DB.sqls.should == ["INSERT INTO sti_tests (kind) VALUES ('StiTestSub1')", "SELECT * FROM sti_tests WHERE (id = 10) LIMIT 1"]
   end
 
-  it "should have the before_create hook handle columns with the same name as existing method names" do
+  it "should handle type column with the same name as existing method names" do
     StiTest.plugin :single_table_inheritance, :type
     StiTest.columns :id, :type
     StiTest.create
diff --git a/spec/extensions/spec_helper.rb b/spec/extensions/spec_helper.rb
index b26e9c5..7af6fca 100644
--- a/spec/extensions/spec_helper.rb
+++ b/spec/extensions/spec_helper.rb
@@ -31,13 +31,15 @@ rescue LoadError
 end
 
 Sequel.extension :meta_def
-Sequel.extension :core_refinements if RUBY_VERSION >= '2.0.0'
+Sequel.extension :core_refinements if RUBY_VERSION >= '2.0.0' && RUBY_ENGINE == 'ruby'
 
 def skip_warn(s)
   warn "Skipping test of #{s}" if ENV["SKIPPED_TEST_WARN"]
 end
 
-(defined?(RSpec) ? RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do
+require File.join(File.dirname(File.expand_path(__FILE__)), "../rspec_helper.rb")
+
+RSPEC_EXAMPLE_GROUP.class_eval do
   if ENV['SEQUEL_DEPRECATION_WARNINGS']
     class << self
       alias qspecify specify
diff --git a/spec/extensions/static_cache_spec.rb b/spec/extensions/static_cache_spec.rb
index 5dd18bf..0830c58 100644
--- a/spec/extensions/static_cache_spec.rb
+++ b/spec/extensions/static_cache_spec.rb
@@ -78,7 +78,7 @@ describe "Sequel::Plugins::StaticCache with :frozen=>false option" do
     end
 
     it "should have map without a block not return a frozen object" do
-      @c.map.frozen?.should be_false
+      @c.map.frozen?.should == false
     end
 
     it "should have map with a block and argument raise" do
@@ -100,7 +100,7 @@ describe "Sequel::Plugins::StaticCache with :frozen=>false option" do
     end
 
     it "should have all not return a frozen object" do
-      @c.all.frozen?.should be_false
+      @c.all.frozen?.should == false
     end
 
     it "should have all return things in dataset order" do
@@ -135,7 +135,7 @@ describe "Sequel::Plugins::StaticCache with :frozen=>false option" do
     end
 
     it "should have to_hash not return a frozen object" do
-      @c.to_hash.frozen?.should be_false
+      @c.to_hash.frozen?.should == false
     end
 
     it "should have to_hash_groups without arguments return the cached objects without a query" do
@@ -201,7 +201,7 @@ describe "Sequel::Plugins::StaticCache with :frozen=>false option" do
     end
 
     it "all of the static cache values (model instances) should be frozen" do
-      @c.all.all?{|o| o.frozen?}.should be_true
+      @c.all.all?{|o| o.frozen?}.should == true
     end
 
     it "should make .[] method with primary key return cached instances" do
@@ -286,47 +286,47 @@ describe "Sequel::Plugins::StaticCache with :frozen=>false option" do
     it_should_behave_like "Sequel::Plugins::StaticCache"
 
     it "record retrieved by primary key should not be frozen" do
-      @c[1].frozen?.should be_false
-      @c.cache_get_pk(1).frozen?.should be_false
+      @c[1].frozen?.should == false
+      @c.cache_get_pk(1).frozen?.should == false
     end
 
     it "none of values returned in #all should be frozen" do
-      @c.all.all?{|o| !o.frozen?}.should be_true
+      @c.all.all?{|o| !o.frozen?}.should == true
     end
 
     it "none of values yielded by each should be frozen" do
       a = []
       @c.each{|o| a << o}
-      a.all?{|o| !o.frozen?}.should be_true
+      a.all?{|o| !o.frozen?}.should == true
     end
 
     it "none of values yielded by Enumerable method should be frozen" do
-      @c.sort_by{|o| o.id}.all?{|o| !o.frozen?}.should be_true
+      @c.sort_by{|o| o.id}.all?{|o| !o.frozen?}.should == true
     end
 
     it "none of values returned by map without an argument or block should be frozen" do
-      @c.map{|o| o}.all?{|o| !o.frozen?}.should be_true
-      @c.map.all?{|o| !o.frozen?}.should be_true
+      @c.map{|o| o}.all?{|o| !o.frozen?}.should == true
+      @c.map.all?{|o| !o.frozen?}.should == true
     end
 
     it "none of values in the hash returned by to_hash without an argument should be frozen" do
-      @c.to_hash.values.all?{|o| !o.frozen?}.should be_true
+      @c.to_hash.values.all?{|o| !o.frozen?}.should == true
     end
 
     it "none of values in the hash returned by to_hash with a single argument should be frozen" do
-      @c.to_hash(:id).values.all?{|o| !o.frozen?}.should be_true
+      @c.to_hash(:id).values.all?{|o| !o.frozen?}.should == true
     end
 
     it "none of values in the hash returned by to_hash with a single array argument should be frozen" do
-      @c.to_hash([:id, :id]).values.all?{|o| !o.frozen?}.should be_true
+      @c.to_hash([:id, :id]).values.all?{|o| !o.frozen?}.should == true
     end
 
     it "none of values in the hash returned by to_hash_groups with a single argument should be frozen" do
-      @c.to_hash_groups(:id).values.flatten.all?{|o| !o.frozen?}.should be_true
+      @c.to_hash_groups(:id).values.flatten.all?{|o| !o.frozen?}.should == true
     end
 
     it "none of values in the hash returned by to_hash_groups with a single array argument should be frozen" do
-      @c.to_hash_groups([:id, :id]).values.flatten.all?{|o| !o.frozen?}.should be_true
+      @c.to_hash_groups([:id, :id]).values.flatten.all?{|o| !o.frozen?}.should == true
     end
 
     it "should not automatically update the cache when creating new model objects" do
diff --git a/spec/extensions/table_select_spec.rb b/spec/extensions/table_select_spec.rb
new file mode 100644
index 0000000..f2e93e4
--- /dev/null
+++ b/spec/extensions/table_select_spec.rb
@@ -0,0 +1,71 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+
+describe "Sequel::Plugins::TableSelect" do
+  before do
+    @Album = Class.new(Sequel::Model(Sequel.mock[:albums]))
+  end
+
+  it "should add a table.* selection to existing dataset without explicit selection" do
+    @Album.plugin :table_select
+    @Album.dataset.sql.should == 'SELECT albums.* FROM albums'
+
+    @Album.dataset = :albs
+    @Album.dataset.sql.should == 'SELECT albs.* FROM albs'
+
+    @Album.dataset = Sequel.identifier(:albs)
+    @Album.dataset.sql.should == 'SELECT albs.* FROM albs'
+  end
+
+  it "should handle qualified tables" do
+    @Album.dataset = :s__albums
+    @Album.plugin :table_select
+    @Album.dataset.sql.should == 'SELECT s.albums.* FROM s.albums'
+
+    @Album.dataset = Sequel.qualify(:s2, :albums)
+    @Album.dataset.sql.should == 'SELECT s2.albums.* FROM s2.albums'
+  end
+
+  it "should handle aliases" do
+    @Album.dataset = :albums___a
+    @Album.plugin :table_select
+    @Album.dataset.sql.should == 'SELECT a.* FROM albums AS a'
+
+    @Album.dataset = Sequel.as(:albums, :b)
+    @Album.dataset.sql.should == 'SELECT b.* FROM albums AS b'
+
+    @Album.dataset = :s__albums___a
+    @Album.dataset.sql.should == 'SELECT a.* FROM s.albums AS a'
+
+    @Album.dataset = @Album.db[:albums].from_self
+    @Album.dataset.sql.should == 'SELECT t1.* FROM (SELECT * FROM albums) AS t1'
+
+    @Album.dataset = Sequel.as(@Album.db[:albums], :b)
+    @Album.dataset.sql.should == 'SELECT b.* FROM (SELECT * FROM albums) AS b'
+  end
+
+  it "should not add a table.* selection on existing dataset with explicit selection" do
+    @Album.dataset = @Album.dataset.select(:name)
+    @Album.plugin :table_select
+    @Album.dataset.sql.should == 'SELECT name FROM albums'
+
+    @Album.dataset = @Album.dataset.select(:name, :artist)
+    @Album.dataset.sql.should == 'SELECT name, artist FROM albums'
+  end
+
+  it "should not add a table.* selection on existing dataset with multiple tables" do
+    @Album.dataset = @Album.db.from(:a1, :a2)
+    @Album.plugin :table_select
+    @Album.dataset.sql.should == 'SELECT * FROM a1, a2'
+
+    @Album.dataset = @Album.db.from(:a1).cross_join(:a2)
+    @Album.dataset.sql.should == 'SELECT * FROM a1 CROSS JOIN a2'
+  end
+
+  it "works correctly when loaded on model without a dataset" do
+    c = Class.new(Sequel::Model)
+    c.plugin :table_select
+    sc = Class.new(c)
+    sc.dataset = :a
+    sc.dataset.sql.should == "SELECT a.* FROM a"
+  end
+end
diff --git a/spec/extensions/tactical_eager_loading_spec.rb b/spec/extensions/tactical_eager_loading_spec.rb
index 2052d25..7c060a1 100644
--- a/spec/extensions/tactical_eager_loading_spec.rb
+++ b/spec/extensions/tactical_eager_loading_spec.rb
@@ -75,4 +75,8 @@ describe "Sequel::Plugins::TacticalEagerLoading" do
     ts.map{|x| x.children}.should == [[], [], [ts[0]], [ts[1]]]
     DB.sqls.length.should == 1
   end
+
+  it "#marshallable should make marshalling not fail" do
+    proc{Marshal.dump(@c.all.map{|x| x.marshallable!})}.should_not raise_error
+  end
 end
diff --git a/spec/extensions/timestamps_spec.rb b/spec/extensions/timestamps_spec.rb
index bf06076..0b505d4 100644
--- a/spec/extensions/timestamps_spec.rb
+++ b/spec/extensions/timestamps_spec.rb
@@ -22,6 +22,16 @@ describe "Sequel::Plugins::Timestamps" do
     Sequel.datetime_class = Time
   end
   
+  it "should handle validations on the timestamp fields for new objects" do
+    @c.plugin :timestamps, :update_on_create=>true
+    o = @c.new
+    def o.validate
+      errors.add(model.create_timestamp_field, 'not present') unless send(model.create_timestamp_field)
+      errors.add(model.update_timestamp_field, 'not present') unless send(model.update_timestamp_field)
+    end
+    o.valid?.should == true
+  end
+
   it "should set the create timestamp field on creation" do
     o = @c.create
     @c.db.sqls.should == ["INSERT INTO t (created_at) VALUES ('2009-08-01')"]
@@ -34,6 +44,15 @@ describe "Sequel::Plugins::Timestamps" do
     o.updated_at.should == '2009-08-01'
   end
 
+  it "should work with current_datetime_timestamp extension" do
+    Sequel.datetime_class = Time
+    @c.dataset = @c.dataset.extension(:current_datetime_timestamp)
+    o = @c.create
+    @c.db.sqls.should == ["INSERT INTO t (created_at) VALUES (CURRENT_TIMESTAMP)"]
+    o = @c.load(:id=>1).save
+    @c.db.sqls.should == ["UPDATE t SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)"]
+  end
+
   it "should not update the update timestamp on creation" do
     @c.create.updated_at.should == nil
   end
diff --git a/spec/extensions/to_dot_spec.rb b/spec/extensions/to_dot_spec.rb
index 4482a7b..52b6be0 100644
--- a/spec/extensions/to_dot_spec.rb
+++ b/spec/extensions/to_dot_spec.rb
@@ -25,6 +25,7 @@ END
   end
 
   it "should handle WITH" do
+    def @ds.supports_cte?(*) true end
     a = dot(@ds.with(:a, @ds))
     a[0..3].should == ["1 -> 2 [label=\"with\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Hash\"];"]
     [["3 -> 4 [label=\"dataset\"];", "4 [label=\"Dataset\"];", "3 -> 5 [label=\"name\"];", "5 [label=\":a\"];"],
@@ -107,6 +108,10 @@ END
     dot(@ds.from(Sequel.as(:a, :b))).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"AliasedExpression\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\":a\"];", "3 -> 5 [label=\"alias\"];", "5 [label=\":b\"];"]
   end
 
+  it "should handle SQL::AliasedExpressions with column aliases" do
+    dot(@ds.from(Sequel.as(:a, :b, [:c, :d]))).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"AliasedExpression\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\":a\"];", "3 -> 5 [label=\"alias\"];", "5 [label=\":b\"];", "3 -> 6 [label=\"columns\"];", "6 [label=\"Array\"];", "6 -> 7 [label=\"0\"];", "7 [label=\":c\"];", "6 -> 8 [label=\"1\"];", "8 [label=\":d\"];"]
+  end
+
   it "should handle SQL::CaseExpressions" do
     dot(@ds.select(Sequel.case({:a=>:b}, :c, :d))).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"CaseExpression\"];", "3 -> 4 [label=\"expression\"];", "4 [label=\":d\"];", "3 -> 5 [label=\"conditions\"];", "5 [label=\"Array\"];", "5 -> 6 [label=\"0\"];", "6 [label=\"Array\"];", "6 -> 7 [label=\"0\"];", "7 [label=\":a\"];", "6 -> 8 [label=\"1\"];", "8 [label=\":b\"];", "3 -> 9 [label=\"default\"];", "9 [label=\":c\"];"]
   end
@@ -116,15 +121,15 @@ END
   end
 
   it "should handle SQL::Function" do
-    dot(@ds.select{a(b)}).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Function: a\"];", "3 -> 4 [label=\"0\"];", "4 [label=\"Identifier\"];", "4 -> 5 [label=\"value\"];", "5 [label=\":b\"];"]
+    dot(@ds.select{a(b)}).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Function: a\"];", "3 -> 4 [label=\"0\"];", "4 [label=\"Identifier\"];", "4 -> 5 [label=\"value\"];", "5 [label=\":b\"];", "3 -> 6 [label=\"args\"];", "6 [label=\"Array\"];", "6 -> 7 [label=\"0\"];", "7 [label=\"Identifier\"];", "7 -> 8 [label=\"value\"];", "8 [label=\":b\"];", "3 -> 9 [label=\"opts\"];", "9 [label=\"Hash\"];"]
   end
 
   it "should handle SQL::Subscript" do
     dot(@ds.select(Sequel.subscript(:a, 1))).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Subscript\"];", "3 -> 4 [label=\"f\"];", "4 [label=\":a\"];", "3 -> 5 [label=\"sub\"];", "5 [label=\"Array\"];", "5 -> 6 [label=\"0\"];", "6 [label=\"1\"];"]
   end
 
-  it "should handle SQL::WindowFunction" do
-    dot(@ds.select{sum(:over, :partition=>(:a)){}}).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"WindowFunction\"];", "3 -> 4 [label=\"function\"];", "4 [label=\"Function: sum\"];", "3 -> 5 [label=\"window\"];", "5 [label=\"Window\"];", "5 -> 6 [label=\"opts\"];", "6 [label=\"Hash\"];", "6 -> 7 [label=\"partition\"];", "7 [label=\":a\"];"]
+  it "should handle SQL::Function with a window" do
+    dot(@ds.select{sum{}.over(:partition=>:a)}).should == ["1 -> 2 [label=\"select\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\"Function: sum\"];", "3 -> 4 [label=\"args\"];", "4 [label=\"Array\"];", "3 -> 5 [label=\"opts\"];", "5 [label=\"Hash\"];", "5 -> 6 [label=\"over\"];", "6 [label=\"Window\"];", "6 -> 7 [label=\"opts\"];", "7 [label=\"Hash\"];", "7 -> 8 [label=\"partition\"];", "8 [label=\":a\"];"]
   end
 
   it "should handle SQL::PlaceholderLiteralString" do
@@ -136,7 +141,7 @@ END
   end
 
   it "should handle JOIN USING" do
-    dot(@ds.from(:a).join(:d, [:c], :table_alias=>:c)).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\":a\"];", "1 -> 4 [label=\"join\"];", "4 [label=\"Array\"];", "4 -> 5 [label=\"0\"];", "5 [label=\"INNER JOIN USING\"];", "5 -> 6 [label=\"table\"];", "6 [label=\":d\"];", "5 -> 7 [label=\"alias\"];", "7 [label=\":c\"];", "5 -> 8 [label=\"using\"];", "8 [label=\"Array\"];", "8 -> 9 [label=\"0\"];", "9 [label=\":c\"];"]
+    dot(@ds.from(:a).join(:d, [:c], :table_alias=>:c)).should == ["1 -> 2 [label=\"from\"];", "2 [label=\"Array\"];", "2 -> 3 [label=\"0\"];", "3 [label=\":a\"];", "1 -> 4 [label=\"join\"];", "4 [label=\"Array\"];", "4 -> 5 [label=\"0\"];", "5 [label=\"INNER JOIN USING\"];", "5 -> 6 [label=\"table\"];", "6 [label=\"AliasedExpression\"];", "6 -> 7 [label=\"expression\"];", "7 [label=\":d\"];", "6 -> 8 [label=\"alias\"];", "8 [label=\":c\"];", "5 -> 9 [label=\"using\"];", "9 [label=\"Array [...]
   end
 
   it "should handle other types" do
diff --git a/spec/extensions/touch_spec.rb b/spec/extensions/touch_spec.rb
index a2b03a6..16968ea 100644
--- a/spec/extensions/touch_spec.rb
+++ b/spec/extensions/touch_spec.rb
@@ -26,6 +26,15 @@ describe "Touch plugin" do
     DB.sqls.first.should =~ /UPDATE a SET updated_at = '[-0-9 :.]+' WHERE \(id = 1\)/
   end
 
+  specify "should work with current_datetime_timestamp extension" do
+    c = Class.new(Sequel::Model).set_dataset(:a)
+    c.dataset = c.dataset.extension(:current_datetime_timestamp)
+    c.plugin :touch
+    c.columns :id, :updated_at
+    c.load(:id=>1).touch
+    DB.sqls.should == ["UPDATE a SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)"]
+  end
+
   specify "should allow #touch instance method for updating the updated_at column" do
     @Artist.plugin :touch
     @a.touch
@@ -130,7 +139,7 @@ describe "Touch plugin" do
     @Artist.plugin :touch, :associations=>:albums
     @a.touch
     DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)",
-      "SELECT albums.* FROM albums INNER JOIN aa ON ((aa.album_id = albums.id) AND (aa.artist_id = 1))",
+      "SELECT albums.* FROM albums INNER JOIN aa ON (aa.album_id = albums.id) WHERE (aa.artist_id = 1)",
       "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)"]
   end
 
@@ -140,7 +149,7 @@ describe "Touch plugin" do
     @Artist.plugin :touch, :associations=>:albums
     @a.touch
     DB.sqls.should == ["UPDATE artists SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)",
-      "SELECT albums.* FROM albums INNER JOIN aa ON ((aa.album_id = albums.id) AND (aa.artist_id = 1))",
+      "SELECT albums.* FROM albums INNER JOIN aa ON (aa.album_id = albums.id) WHERE (aa.artist_id = 1)",
       "UPDATE albums SET updated_at = CURRENT_TIMESTAMP WHERE (id = 1)"]
   end
 
diff --git a/spec/extensions/tree_spec.rb b/spec/extensions/tree_spec.rb
index 4b66593..7e1e1d1 100644
--- a/spec/extensions/tree_spec.rb
+++ b/spec/extensions/tree_spec.rb
@@ -89,12 +89,12 @@ describe Sequel::Model, "tree plugin" do
   end
 
   it "should have root? return true for a root node and false for a child node" do
-    @c.load(:parent_id => nil).root?.should be_true
-    @c.load(:parent_id => 1).root?.should be_false
+    @c.load(:parent_id => nil).root?.should == true
+    @c.load(:parent_id => 1).root?.should == false
   end
 
   it "should have root? return false for an new node" do
-    @c.new.root?.should be_false
+    @c.new.root?.should == false
   end
 
   it "should have self_and_siblings return the children of the current node's parent" do
@@ -207,14 +207,14 @@ describe Sequel::Model, "tree plugin with composite keys" do
   end
 
   it "should have root? return true for a root node and false for a child node" do
-    @c.load(:parent_id => nil, :parent_id2=>nil).root?.should be_true
-    @c.load(:parent_id => 1, :parent_id2=>nil).root?.should be_true
-    @c.load(:parent_id => nil, :parent_id2=>2).root?.should be_true
-    @c.load(:parent_id => 1, :parent_id2=>2).root?.should be_false
+    @c.load(:parent_id => nil, :parent_id2=>nil).root?.should == true
+    @c.load(:parent_id => 1, :parent_id2=>nil).root?.should == true
+    @c.load(:parent_id => nil, :parent_id2=>2).root?.should == true
+    @c.load(:parent_id => 1, :parent_id2=>2).root?.should == false
   end
 
   it "should have root? return false for an new node" do
-    @c.new.root?.should be_false
+    @c.new.root?.should == false
   end
 
   it "should have self_and_siblings return the children of the current node's parent" do
diff --git a/spec/extensions/update_or_create_spec.rb b/spec/extensions/update_or_create_spec.rb
new file mode 100644
index 0000000..7c42990
--- /dev/null
+++ b/spec/extensions/update_or_create_spec.rb
@@ -0,0 +1,81 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+
+describe "Sequel::Plugins::UpdateOrCreate" do
+  before do
+    @db = Sequel.mock(:autoid=>proc{1}, :numrows=>1)
+    @c = Class.new(Sequel::Model(@db[:test]))
+    @c.plugin :update_or_create
+    @c.columns :id, :a, :b 
+    @db.sqls
+  end
+
+  it ".update_or_create should update an existing record if one exists" do
+    @db.fetch = [[{:id=>1, :a=>2, :b=>3}]]
+    @c.update_or_create(:a=>2){|t| t.b = 4}.should == @c.load(:id=>1, :a=>2, :b=>4)
+    @db.sqls.should == ["SELECT * FROM test WHERE (a = 2) LIMIT 1", "UPDATE test SET b = 4 WHERE (id = 1)"]
+
+    @db.fetch = [[{:id=>1, :a=>2, :b=>3}]]
+    @c.update_or_create({:a=>2}, :b=>4).should == @c.load(:id=>1, :a=>2, :b=>4)
+    @db.sqls.should == ["SELECT * FROM test WHERE (a = 2) LIMIT 1", "UPDATE test SET b = 4 WHERE (id = 1)"]
+
+    @db.fetch = [[{:id=>1, :a=>2, :b=>3}]]
+    @c.update_or_create({:a=>2}, :a=>3){|t| t.b = 4}.should == @c.load(:id=>1, :a=>3, :b=>4)
+    sqls = @db.sqls
+    sqls.shift.should == "SELECT * FROM test WHERE (a = 2) LIMIT 1"
+    sqls.shift.should =~ /UPDATE test SET [ab] = [34], [ab] = [34] WHERE \(id = 1\)/
+  end
+
+  it ".update_or_create should create a record if an existing record does not exist" do
+    @db.fetch = [[], [{:id=>1, :a=>1, :b=>4}]]
+    @c.update_or_create(:a=>1){|t| t.b = 4}.should == @c.load(:id=>1, :a=>1, :b=>4)
+    sqls = @db.sqls
+    sqls.shift.should == "SELECT * FROM test WHERE (a = 1) LIMIT 1"
+    sqls.shift.should =~ /INSERT INTO test \([ab], [ab]\) VALUES \([14], [14]\)/
+    sqls.shift.should == "SELECT * FROM test WHERE (id = 1) LIMIT 1"
+
+    @db.fetch = [[], [{:id=>1, :a=>1, :b=>4}]]
+    @c.update_or_create({:a=>1}, :b=>4).should == @c.load(:id=>1, :a=>1, :b=>4)
+    sqls = @db.sqls
+    sqls.shift.should == "SELECT * FROM test WHERE (a = 1) LIMIT 1"
+    sqls.shift.should =~ /INSERT INTO test \([ab], [ab]\) VALUES \([14], [14]\)/
+    sqls.shift.should == "SELECT * FROM test WHERE (id = 1) LIMIT 1"
+
+    @db.fetch = [[], [{:id=>1, :a=>3, :b=>4}]]
+    @c.update_or_create({:a=>1}, :a=>3){|t| t.b = 4}.should == @c.load(:id=>1, :a=>3, :b=>4)
+    sqls = @db.sqls
+    sqls.shift.should == "SELECT * FROM test WHERE (a = 1) LIMIT 1"
+    sqls.shift.should =~ /INSERT INTO test \([ab], [ab]\) VALUES \([34], [34]\)/
+    sqls.shift.should == "SELECT * FROM test WHERE (id = 1) LIMIT 1"
+  end
+
+  it ".find_or_new should return an existing record" do
+    @db.fetch = [[{:id=>1, :a=>2, :b=>3}]]
+    @c.find_or_new(:a=>2){|t| t.b = 4}.should == @c.load(:id=>1, :a=>2, :b=>4)
+    @db.sqls.should == ["SELECT * FROM test WHERE (a = 2) LIMIT 1"]
+
+    @db.fetch = [[{:id=>1, :a=>2, :b=>3}]]
+    @c.find_or_new({:a=>2}, :b=>4).should == @c.load(:id=>1, :a=>2, :b=>4)
+    @db.sqls.should == ["SELECT * FROM test WHERE (a = 2) LIMIT 1"]
+
+    @db.fetch = [[{:id=>1, :a=>2, :b=>3}]]
+    @c.find_or_new({:a=>2}, :a=>3){|t| t.b = 4}.should == @c.load(:id=>1, :a=>3, :b=>4)
+    @db.sqls.should == ["SELECT * FROM test WHERE (a = 2) LIMIT 1"]
+  end
+
+  it ".find_or_new should return a new record if no record exists" do
+    o = @c.find_or_new(:a=>1){|t| t.b = 4}
+    o.should == @c.load(:a=>1, :b=>4)
+    o.new?.should == true
+    @db.sqls.should == ["SELECT * FROM test WHERE (a = 1) LIMIT 1"]
+
+    o = @c.find_or_new({:a=>1}, :b=>4)
+    o.should == @c.load(:a=>1, :b=>4)
+    o.new?.should == true
+    @db.sqls.should == ["SELECT * FROM test WHERE (a = 1) LIMIT 1"]
+
+    o = @c.find_or_new({:a=>1}, :a=>3){|t| t.b = 4}
+    o.should == @c.load(:a=>3, :b=>4)
+    o.new?.should == true
+    @db.sqls.should == ["SELECT * FROM test WHERE (a = 1) LIMIT 1"]
+  end
+end
diff --git a/spec/extensions/validation_class_methods_spec.rb b/spec/extensions/validation_class_methods_spec.rb
index 49edeb7..353730d 100644
--- a/spec/extensions/validation_class_methods_spec.rb
+++ b/spec/extensions/validation_class_methods_spec.rb
@@ -142,9 +142,9 @@ describe Sequel::Model do
   specify "should have the validates block have appropriate respond_to?" do
     c = nil
     @c.validates{c = respond_to?(:foo)}
-    c.should be_false
+    c.should == false
     @c.validates{c = respond_to?(:length_of)}
-    c.should be_true
+    c.should == true
   end if RUBY_VERSION >= '1.9'
 end
 
@@ -616,7 +616,7 @@ describe "Superclass validations" do
     o.errors.full_messages.should == ['value is too short', 'value is invalid']
 
     o.value = 'abcde'
-    o.valid?.should be_true
+    o.valid?.should == true
   end
   
   specify "should have skip_superclass_validations? return whether superclass validations were skipped" do
@@ -630,14 +630,14 @@ describe "Superclass validations" do
 
     o = @c2.new
     o.value = 'ab'
-    o.valid?.should be_true
+    o.valid?.should == true
 
     o.value = '12'
     o.valid?.should == false
     o.errors.full_messages.should == ['value is invalid']
 
     o.value = 'abcde'
-    o.valid?.should be_true
+    o.valid?.should == true
   end
 end
 
@@ -733,9 +733,9 @@ describe Sequel::Model, "Validations" do
     end
 
     @person = Person.new :first_name => "Lancelot99"
-    @person.valid?.should be_false
+    @person.valid?.should == false
     @person = Person.new :first_name => "Anita"
-    @person.valid?.should be_true
+    @person.valid?.should == true
   end
   
   it "should validate length of column" do
@@ -943,10 +943,10 @@ describe Sequel::Model, "Validations" do
     Person.validations[:first_name].size.should == 2
     
     @person = Person.new :first_name => "Lancelot99"
-    @person.valid?.should be_false
+    @person.valid?.should == false
     
     @person2 = Person.new :first_name => "Wayne"
-    @person2.valid?.should be_true
+    @person2.valid?.should == true
   end
 
   it "should allow 'longhand' validations direcly within the model." do
@@ -1008,7 +1008,7 @@ describe "Model#save" do
     
     @m.x = 7
     @m.should be_valid
-    @m.save.should_not be_false
+    @m.save.should_not == false
     DB.sqls.should == ['UPDATE people SET x = 7 WHERE (id = 4)']
   end
   
diff --git a/spec/extensions/validation_helpers_spec.rb b/spec/extensions/validation_helpers_spec.rb
index 3a96c20..5592224 100644
--- a/spec/extensions/validation_helpers_spec.rb
+++ b/spec/extensions/validation_helpers_spec.rb
@@ -384,11 +384,6 @@ describe "Sequel::Plugins::ValidationHelpers" do
     @user.should_not be_valid
     @user.errors.full_messages.should == ['username is already taken']
 
-    ds1 = @c.dataset.filter([[:username, '0records']])
-    ds2 = ds1.exclude(:id=>1)
-    @c.dataset.should_receive(:where).with([[:username, '0records']]).twice.and_return(ds1)
-    ds1.should_receive(:exclude).with(:id=>1).once.and_return(ds2)
-
     @user = @c.load(:id=>1, :username => "0records", :password => "anothertest")
     @user.should be_valid
     DB.sqls.last.should == "SELECT count(*) AS count FROM items WHERE ((username = '0records') AND (id != 1)) LIMIT 1"
@@ -434,11 +429,6 @@ describe "Sequel::Plugins::ValidationHelpers" do
     @user.should_not be_valid
     @user.errors.full_messages.should == ['username and password is already taken']
 
-    ds1 = @c.dataset.filter([[:username, '0records'], [:password, 'anothertest']])
-    ds2 = ds1.exclude(:id=>1)
-    @c.dataset.should_receive(:where).with([[:username, '0records'], [:password, 'anothertest']]).twice.and_return(ds1)
-    ds1.should_receive(:exclude).with(:id=>1).once.and_return(ds2)
-
     @user = @c.load(:id=>1, :username => "0records", :password => "anothertest")
     @user.should be_valid
     DB.sqls.last.should == "SELECT count(*) AS count FROM items WHERE ((username = '0records') AND (password = 'anothertest') AND (id != 1)) LIMIT 1"
@@ -460,7 +450,7 @@ describe "Sequel::Plugins::ValidationHelpers" do
                     "SELECT count(*) AS count FROM items WHERE ((username = '0records') AND active AND (id != 3)) LIMIT 1"]
   end
 
-  it "should support validates_unique with a custom filter" do
+  it "should support validates_unique with :where option" do
     @c.columns(:id, :username, :password)
     @c.set_dataset DB[:items]
     @c.set_validations{validates_unique(:username, :where=>proc{|ds, obj, cols| ds.where(cols.map{|c| [Sequel.function(:lower, c), obj.send(c).downcase]})})}
@@ -473,6 +463,20 @@ describe "Sequel::Plugins::ValidationHelpers" do
                     "SELECT count(*) AS count FROM items WHERE ((lower(username) = '0records') AND (id != 3)) LIMIT 1"]
   end
 
+  it "should support validates_unique with :dataset option" do
+    @c.columns(:id, :username, :password)
+    @c.set_dataset DB[:items]
+    c = @c
+    @c.set_validations{validates_unique(:username, :dataset=>c.where(:a=>[1,2,3]))}
+    @c.dataset._fetch = {:v=>0}
+    
+    DB.reset
+    @c.new(:username => "0records", :password => "anothertest").should be_valid
+    @c.load(:id=>3, :username => "0records", :password => "anothertest").should be_valid
+    DB.sqls.should == ["SELECT count(*) AS count FROM items WHERE ((a IN (1, 2, 3)) AND (username = '0records')) LIMIT 1",
+                    "SELECT count(*) AS count FROM items WHERE ((a IN (1, 2, 3)) AND (username = '0records') AND (id != 3)) LIMIT 1"]
+  end
+
   it "should support :only_if_modified option for validates_unique, and not check uniqueness for existing records if values haven't changed" do
     @c.columns(:id, :username, :password)
     @c.set_dataset DB[:items]
diff --git a/spec/integration/associations_test.rb b/spec/integration/associations_test.rb
index dbcbdc9..e386ab4 100644
--- a/spec/integration/associations_test.rb
+++ b/spec/integration/associations_test.rb
@@ -4,7 +4,7 @@ shared_examples_for "one_to_one eager limit strategies" do
   specify "eager loading one_to_one associations should work correctly" do
     Artist.one_to_one :first_album, {:clone=>:first_album}.merge(@els) if @els
     Artist.one_to_one  :last_album, {:clone=>:last_album}.merge(@els) if @els
-    Artist.one_to_one  :second_album, {:clone=>:second_album}.merge(@els) if @els
+    Artist.one_to_one  :second_album, {:clone=>:second_album}.merge(@els) if @els && @els[:eager_limit_strategy] != :distinct_on
     @album.update(:artist => @artist)
     diff_album = @diff_album.call
     ar = @pr.call[1]
@@ -31,6 +31,44 @@ shared_examples_for "one_to_one eager limit strategies" do
   end
 end
 
+shared_examples_for "one_to_one eager_graph limit strategies" do
+  specify "eager graphing one_to_one associations should work correctly" do
+    @album.update(:artist => @artist)
+    diff_album = @diff_album.call
+    ar = @pr.call[1]
+    ds = Artist.order(:artists__name)
+    limit_strategy = {:limit_strategy=>@els[:eager_limit_strategy]}
+    
+    a = ds.eager_graph_with_options(:first_album, limit_strategy).all
+    a.should == [@artist, ar]
+    a.first.first_album.should == @album
+    a.last.first_album.should == nil
+    a.first.first_album.values.should == @album.values
+
+    a = ds.eager_graph_with_options(:last_album, limit_strategy).all
+    a = ds.eager_graph(:last_album).all
+    a.should == [@artist, ar]
+    a.first.last_album.should == diff_album
+    a.last.last_album.should == nil
+    a.first.last_album.values.should == diff_album.values
+
+    if @els[:eager_limit_strategy] != :distinct_on && (@els[:eager_limit_strategy] != :correlated_subquery || Album.dataset.supports_offsets_in_correlated_subqueries?) 
+      a = ds.eager_graph_with_options(:second_album, limit_strategy).all
+      a = ds.eager_graph(:second_album).all
+      a.should == [@artist, ar]
+      a.first.second_album.should == diff_album
+      a.last.second_album.should == nil
+      a.first.second_album.values.should == diff_album.values
+    end
+
+    same_album = @same_album.call
+    a = ds.eager_graph_with_options(:first_album, limit_strategy).all
+    a.should == [@artist, ar]
+    [@album, same_album].should include(a.first.first_album)
+    a.last.first_album.should == nil
+  end
+end
+
 shared_examples_for "one_to_many eager limit strategies" do
   specify "should correctly handle limits and offsets when eager loading one_to_many associations" do
     Artist.one_to_many :first_two_albums, {:clone=>:first_two_albums}.merge(@els) if @els
@@ -61,6 +99,94 @@ shared_examples_for "one_to_many eager limit strategies" do
   end
 end
 
+shared_examples_for "one_to_many eager_graph limit strategies" do
+  specify "should correctly handle limits and offsets when eager graphing one_to_many associations" do
+    @album.update(:artist => @artist)
+    middle_album = @middle_album.call
+    diff_album = @diff_album.call
+    ar = @pr.call[1]
+    ds = Artist.order(:artists__name)
+    limit_strategy = {:limit_strategy=>@els[:eager_limit_strategy]}
+    
+    ars = ds.eager_graph_with_options(:first_two_albums, limit_strategy).all
+    ars.should == [@artist, ar]
+    ars.first.first_two_albums.should == [@album, middle_album]
+    ars.last.first_two_albums.should == []
+    ars.first.first_two_albums.map{|x| x.values}.should == [@album, middle_album].map{|x| x.values}
+
+    if @els[:eager_limit_strategy] != :correlated_subquery || Album.dataset.supports_offsets_in_correlated_subqueries?
+      ars = ds.eager_graph_with_options(:second_two_albums, limit_strategy).all
+      ars.should == [@artist, ar]
+      ars.first.second_two_albums.should == [middle_album, diff_album]
+      ars.last.second_two_albums.should == []
+      ars.first.second_two_albums.map{|x| x.values}.should == [middle_album, diff_album].map{|x| x.values}
+
+      ars = ds.eager_graph_with_options(:not_first_albums, limit_strategy).all
+      ars.should == [@artist, ar]
+      ars.first.not_first_albums.should == [middle_album, diff_album]
+      ars.last.not_first_albums.should == []
+      ars.first.not_first_albums.map{|x| x.values}.should == [middle_album, diff_album].map{|x| x.values}
+    end
+
+    ars = ds.eager_graph_with_options(:last_two_albums, limit_strategy).all
+    ars.should == [@artist, ar]
+    ars.first.last_two_albums.should == [diff_album, middle_album]
+    ars.last.last_two_albums.should == []
+    ars.first.last_two_albums.map{|x| x.values}.should == [diff_album, middle_album].map{|x| x.values}
+  end
+end
+
+shared_examples_for "one_through_one eager limit strategies" do
+  specify "should correctly handle offsets when eager loading one_through_one associations" do
+    Album.one_through_one :first_tag, {:clone=>:first_tag}.merge(@els) if @els
+    Album.one_through_one :second_tag, {:clone=>:second_tag}.merge(@els) if @els && @els[:eager_limit_strategy] != :distinct_on
+    Album.one_through_one :last_tag, {:clone=>:last_tag}.merge(@els) if @els
+    tu, tv = @other_tags.call
+    al = @pr.call.first
+    
+    als = Album.eager(:first_tag, :second_tag, :last_tag).order(:name).all
+    als.should == [@album, al]
+    als.first.first_tag.should == @tag
+    als.first.second_tag.should == tu
+    als.first.last_tag.should == tv
+    als.last.first_tag.should == nil
+    als.last.second_tag.should == nil
+    als.last.last_tag.should == nil
+    
+    # Check that no extra columns got added by the eager loading
+    als.first.first_tag.values.should == @tag.values
+    als.first.second_tag.values.should == tu.values
+    als.first.last_tag.values.should == tv.values
+  end
+end
+
+shared_examples_for "one_through_one eager_graph limit strategies" do
+  specify "should correctly handle offsets when eager graphing one_through_one associations" do
+    tu, tv = @other_tags.call
+    al = @pr.call.first
+    ds = Album.order(:albums__name)
+    limit_strategy = {:limit_strategy=>@els[:eager_limit_strategy]}
+    
+    als = ds.eager_graph_with_options(:first_tag, limit_strategy).all
+    als.should == [@album, al]
+    als.first.first_tag.should == @tag
+    als.last.first_tag.should == nil
+    als.first.first_tag.values.should == @tag.values
+
+    als = ds.eager_graph_with_options(:second_tag, @els[:eager_limit_strategy] != :distinct_on ? limit_strategy : {}).all
+    als.should == [@album, al]
+    als.first.second_tag.should == tu
+    als.last.second_tag.should == nil
+    als.first.second_tag.values.should == tu.values
+
+    als = ds.eager_graph_with_options(:last_tag, limit_strategy).all
+    als.should == [@album, al]
+    als.first.last_tag.should == tv
+    als.last.last_tag.should == nil
+    als.first.last_tag.values.should == tv.values
+  end
+end
+
 shared_examples_for "many_to_many eager limit strategies" do
   specify "should correctly handle limits and offsets when eager loading many_to_many associations" do
     Album.send @many_to_many_method||:many_to_many, :first_two_tags, {:clone=>:first_two_tags}.merge(@els) if @els
@@ -69,6 +195,7 @@ shared_examples_for "many_to_many eager limit strategies" do
     Album.send @many_to_many_method||:many_to_many, :last_two_tags, {:clone=>:last_two_tags}.merge(@els) if @els
     tu, tv = @other_tags.call
     al = @pr.call.first
+    al.add_tag(tu)
     
     als = Album.eager(:first_two_tags, :second_two_tags, :not_first_tags, :last_two_tags).order(:name).all
     als.should == [@album, al]
@@ -76,9 +203,9 @@ shared_examples_for "many_to_many eager limit strategies" do
     als.first.second_two_tags.should == [tu, tv]
     als.first.not_first_tags.should == [tu, tv]
     als.first.last_two_tags.should == [tv, tu]
-    als.last.first_two_tags.should == []
+    als.last.first_two_tags.should == [tu]
     als.last.second_two_tags.should == []
-    als.last.last_two_tags.should == []
+    als.last.last_two_tags.should == [tu]
     
     # Check that no extra columns got added by the eager loading
     als.first.first_two_tags.map{|x| x.values}.should == [@tag, tu].map{|x| x.values}
@@ -88,6 +215,40 @@ shared_examples_for "many_to_many eager limit strategies" do
   end
 end
 
+shared_examples_for "many_to_many eager_graph limit strategies" do
+  specify "should correctly handle limits and offsets when eager loading many_to_many associations" do
+    tu, tv = @other_tags.call
+    al = @pr.call.first
+    al.add_tag(tu)
+    ds = Album.order(:albums__name)
+    limit_strategy = {:limit_strategy=>(@els||{})[:eager_limit_strategy]}
+    
+    als = ds.eager_graph_with_options(:first_two_tags, limit_strategy).all
+    als.should == [@album, al]
+    als.first.first_two_tags.should == [@tag, tu]
+    als.last.first_two_tags.should == [tu]
+    als.first.first_two_tags.map{|x| x.values}.should == [@tag, tu].map{|x| x.values}
+
+    als = ds.eager_graph_with_options(:second_two_tags, limit_strategy).all
+    als.should == [@album, al]
+    als.first.second_two_tags.should == [tu, tv]
+    als.last.second_two_tags.should == []
+    als.first.second_two_tags.map{|x| x.values}.should == [tu, tv].map{|x| x.values}
+
+    als = ds.eager_graph_with_options(:not_first_tags, limit_strategy).all
+    als.should == [@album, al]
+    als.first.not_first_tags.should == [tu, tv]
+    als.last.not_first_tags.should == []
+    als.first.not_first_tags.map{|x| x.values}.should == [tu, tv].map{|x| x.values}
+
+    als = ds.eager_graph_with_options(:last_two_tags, limit_strategy).all
+    als.should == [@album, al]
+    als.first.last_two_tags.should == [tv, tu]
+    als.last.last_two_tags.should == [tu]
+    als.first.last_two_tags.map{|x| x.values}.should == [tv, tu].map{|x| x.values}
+  end
+end
+
 shared_examples_for "many_through_many eager limit strategies" do
   specify "should correctly handle limits and offsets when eager loading many_through_many associations" do
     Artist.many_through_many :first_two_tags, {:clone=>:first_two_tags}.merge(@els) if @els
@@ -96,7 +257,9 @@ shared_examples_for "many_through_many eager limit strategies" do
     Artist.many_through_many :last_two_tags, {:clone=>:last_two_tags}.merge(@els) if @els
     @album.update(:artist => @artist)
     tu, tv = @other_tags.call
-    ar = @pr.call[1]
+    al, ar, _ = @pr.call
+    al.update(:artist=>ar)
+    al.add_tag(tu)
     
     ars = Artist.eager(:first_two_tags, :second_two_tags, :not_first_tags, :last_two_tags).order(:name).all
     ars.should == [@artist, ar]
@@ -104,9 +267,10 @@ shared_examples_for "many_through_many eager limit strategies" do
     ars.first.second_two_tags.should == [tu, tv]
     ars.first.not_first_tags.should == [tu, tv]
     ars.first.last_two_tags.should == [tv, tu]
-    ars.last.first_two_tags.should == []
+    ars.last.first_two_tags.should == [tu]
     ars.last.second_two_tags.should == []
-    ars.last.last_two_tags.should == []
+    ars.last.not_first_tags.should == []
+    ars.last.last_two_tags.should == [tu]
     
     # Check that no extra columns got added by the eager loading
     ars.first.first_two_tags.map{|x| x.values}.should == [@tag, tu].map{|x| x.values}
@@ -116,20 +280,206 @@ shared_examples_for "many_through_many eager limit strategies" do
   end
 end
 
+shared_examples_for "many_through_many eager_graph limit strategies" do
+  specify "should correctly handle limits and offsets when eager loading many_through_many associations" do
+    @album.update(:artist => @artist)
+    tu, tv = @other_tags.call
+    al, ar, _ = @pr.call
+    al.update(:artist=>ar)
+    al.add_tag(tu)
+    ds = Artist.order(:artists__name)
+    limit_strategy = {:limit_strategy=>@els[:eager_limit_strategy]}
+    
+    ars = ds.eager_graph_with_options(:first_two_tags, limit_strategy).all
+    ars.should == [@artist, ar]
+    ars.first.first_two_tags.should == [@tag, tu]
+    ars.last.first_two_tags.should == [tu]
+    ars.first.first_two_tags.map{|x| x.values}.should == [@tag, tu].map{|x| x.values}
+
+    ars = ds.eager_graph_with_options(:second_two_tags, limit_strategy).all
+    ars.should == [@artist, ar]
+    ars.first.second_two_tags.should == [tu, tv]
+    ars.last.second_two_tags.should == []
+    ars.first.second_two_tags.map{|x| x.values}.should == [tu, tv].map{|x| x.values}
+
+    ars = ds.eager_graph_with_options(:not_first_tags, limit_strategy).all
+    ars.should == [@artist, ar]
+    ars.first.not_first_tags.should == [tu, tv]
+    ars.last.not_first_tags.should == []
+    ars.first.not_first_tags.map{|x| x.values}.should == [tu, tv].map{|x| x.values}
+
+    ars = ds.eager_graph_with_options(:last_two_tags, limit_strategy).all
+    ars.should == [@artist, ar]
+    ars.first.last_two_tags.should == [tv, tu]
+    ars.last.last_two_tags.should == [tu]
+    ars.first.last_two_tags.map{|x| x.values}.should == [tv, tu].map{|x| x.values}
+  end
+end
+
+shared_examples_for "one_through_many eager limit strategies" do
+  specify "should correctly handle offsets when eager loading one_through_many associations" do
+    Artist.one_through_many :first_tag, {:clone=>:first_tag}.merge(@els) if @els
+    Artist.one_through_many :second_tag, {:clone=>:second_tag}.merge(@els) if @els && @els[:eager_limit_strategy] != :distinct_on
+    Artist.one_through_many :last_tag, {:clone=>:last_tag}.merge(@els) if @els
+    @album.update(:artist => @artist)
+    tu, tv = @other_tags.call
+    al, ar, _ = @pr.call
+    al.update(:artist=>ar)
+    al.add_tag(tu)
+    
+    ars = Artist.eager(:first_tag, :second_tag, :last_tag).order(:name).all
+    ars.should == [@artist, ar]
+    ars.first.first_tag.should == @tag
+    ars.first.second_tag.should == tu
+    ars.first.last_tag.should == tv
+    ars.last.first_tag.should == tu
+    ars.last.second_tag.should == nil
+    ars.last.last_tag.should == tu
+    
+    # Check that no extra columns got added by the eager loading
+    ars.first.first_tag.values.should == @tag.values
+    ars.first.second_tag.values.should == tu.values
+    ars.first.last_tag.values.should == tv.values
+  end
+end
+
+shared_examples_for "one_through_many eager_graph limit strategies" do
+  specify "should correctly handle offsets when eager graphing one_through_many associations" do
+    @album.update(:artist => @artist)
+    tu, tv = @other_tags.call
+    al, ar, _ = @pr.call
+    al.update(:artist=>ar)
+    al.add_tag(tu)
+    ds = Artist.order(:artists__name)
+    limit_strategy = {:limit_strategy=>@els[:eager_limit_strategy]}
+    
+    ars = ds.eager_graph_with_options(:first_tag, limit_strategy).all
+    ars.should == [@artist, ar]
+    ars.first.first_tag.should == @tag
+    ars.last.first_tag.should == tu
+    ars.first.first_tag.values.should == @tag.values
+
+    ars = ds.eager_graph_with_options(:second_tag, @els[:eager_limit_strategy] != :distinct_on ? limit_strategy : {}).all
+    ars.should == [@artist, ar]
+    ars.first.second_tag.should == tu
+    ars.last.second_tag.should == nil
+    ars.first.second_tag.values.should == tu.values
+
+    ars = ds.eager_graph_with_options(:last_tag, limit_strategy).all
+    ars.should == [@artist, ar]
+    ars.first.last_tag.should == tv
+    ars.last.last_tag.should == tu
+    ars.first.last_tag.values.should == tv.values
+  end
+end
+
 shared_examples_for "eager limit strategies" do
   it_should_behave_like "one_to_one eager limit strategies"
   it_should_behave_like "one_to_many eager limit strategies"
   it_should_behave_like "many_to_many eager limit strategies"
+  it_should_behave_like "one_through_one eager limit strategies"
   it_should_behave_like "many_through_many eager limit strategies"
+  it_should_behave_like "one_through_many eager limit strategies"
+end
+
+shared_examples_for "eager_graph limit strategies" do
+  it_should_behave_like "one_to_one eager_graph limit strategies"
+  it_should_behave_like "one_to_many eager_graph limit strategies"
+  it_should_behave_like "many_to_many eager_graph limit strategies"
+  it_should_behave_like "one_through_one eager_graph limit strategies"
+  it_should_behave_like "many_through_many eager_graph limit strategies"
+  it_should_behave_like "one_through_many eager_graph limit strategies"
 end
 
 shared_examples_for "filtering/excluding by associations" do
+  specify "should handle association inner joins" do
+    @Artist.association_join(:albums).all.should == []
+    @Artist.association_join(:first_album).all.should == []
+    @Album.association_join(:artist).all.should == []
+    @Album.association_join(:tags).all.should == []
+    @Album.association_join(:alias_tags).all.should == []
+    @Tag.association_join(:albums).all.should == []
+    unless @no_many_through_many
+      @Artist.association_join(:tags).all.should == []
+      @Artist.association_join(:first_tag).all.should == []
+    end
+
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+    
+    @Artist.association_join(:albums).select_all(:artists).all.should == [@artist]
+    @Artist.association_join(:first_album).select_all(:artists).all.should == [@artist]
+    @Album.association_join(:artist).select_all(:albums).all.should == [@album]
+    @Album.association_join(:tags).select_all(:albums).all.should == [@album]
+    @Album.association_join(:alias_tags).select_all(:albums).all.should == [@album]
+    @Tag.association_join(:albums).select_all(:tags).all.should == [@tag]
+    unless @no_many_through_many
+      @Artist.association_join(:tags).select_all(:artists).all.should == [@artist]
+      @Artist.association_join(:first_tag).select_all(:artists).all.should == [@artist]
+    end
+
+    @Artist.association_join(:albums).select_all(:albums).naked.all.should == [@album.values]
+    @Artist.association_join(:first_album).select_all(:first_album).naked.all.should == [@album.values]
+    @Album.association_join(:artist).select_all(:artist).naked.all.should == [@artist.values]
+    @Album.association_join(:tags).select_all(:tags).naked.all.should == [@tag.values]
+    @Album.association_join(:alias_tags).select_all(:alias_tags).naked.all.should == [@tag.values]
+    @Tag.association_join(:albums).select_all(:albums).naked.all.should == [@album.values]
+    unless @no_many_through_many
+      @Artist.association_join(:tags).select_all(:tags).naked.all.should == [@tag.values]
+      @Artist.association_join(:first_tag).select_all(:first_tag).naked.all.should == [@tag.values]
+    end
+  end
+
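+  # association_left_join performs a LEFT OUTER JOIN, so parent rows are returned even when the association is empty and the joined columns come back as NULL.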
+  specify "should handle association left joins" do
+    @Artist.association_left_join(:albums).select_all(:artists).all.should == [@artist]
+    @Artist.association_left_join(:first_album).select_all(:artists).all.should == [@artist]
+    @Album.association_left_join(:artist).select_all(:albums).all.should == [@album]
+    @Album.association_left_join(:tags).select_all(:albums).all.should == [@album]
+    @Album.association_left_join(:alias_tags).select_all(:albums).all.should == [@album]
+    @Tag.association_left_join(:albums).select_all(:tags).all.should == [@tag]
+    unless @no_many_through_many
+      @Artist.association_left_join(:tags).select_all(:artists).all.should == [@artist]
+      @Artist.association_left_join(:first_tag).select_all(:artists).all.should == [@artist]
+    end
+
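+    # Expected row for a missing left-joined association: the same column keys as the model's values, with every value NULL.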
+    nil_hash = lambda{|obj| [obj.values.keys.inject({}){|h,k| h[k] = nil; h}]}
+    @Artist.association_left_join(:albums).select_all(:albums).naked.all.should == nil_hash[@album]
+    @Artist.association_left_join(:first_album).select_all(:first_album).naked.all.should == nil_hash[@album]
+    @Album.association_left_join(:artist).select_all(:artist).naked.all.should == nil_hash[@artist]
+    @Album.association_left_join(:tags).select_all(:tags).naked.all.should == nil_hash[@tag]
+    @Album.association_left_join(:alias_tags).select_all(:alias_tags).naked.all.should == nil_hash[@tag]
+    @Tag.association_left_join(:albums).select_all(:albums).naked.all.should == nil_hash[@album]
+    unless @no_many_through_many
+      @Artist.association_left_join(:tags).select_all(:tags).naked.all.should == nil_hash[@tag]
+      @Artist.association_left_join(:first_tag).select_all(:first_tag).naked.all.should == nil_hash[@tag]
+    end
+
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+
+    @Artist.association_left_join(:albums).select_all(:albums).naked.all.should == [@album.values]
+    @Artist.association_left_join(:first_album).select_all(:first_album).naked.all.should == [@album.values]
+    @Album.association_left_join(:artist).select_all(:artist).naked.all.should == [@artist.values]
+    @Album.association_left_join(:tags).select_all(:tags).naked.all.should == [@tag.values]
+    @Album.association_left_join(:alias_tags).select_all(:alias_tags).naked.all.should == [@tag.values]
+    @Tag.association_left_join(:albums).select_all(:albums).naked.all.should == [@album.values]
+    unless @no_many_through_many
+      @Artist.association_left_join(:tags).select_all(:tags).naked.all.should == [@tag.values]
+      @Artist.association_left_join(:first_tag).select_all(:first_tag).naked.all.should == [@tag.values]
+    end
+  end
+
   specify "should work correctly when filtering by associations" do
     @album.update(:artist => @artist)
     @album.add_tag(@tag)
     
     @Artist.filter(:albums=>@album).all.should == [@artist]
     @Artist.filter(:first_album=>@album).all.should == [@artist]
+    unless @no_many_through_many
+      @Artist.filter(:tags=>@tag).all.should == [@artist]
+      @Artist.filter(:first_tag=>@tag).all.should == [@artist]
+    end
     @Album.filter(:artist=>@artist).all.should == [@album]
     @Album.filter(:tags=>@tag).all.should == [@album]
     @Album.filter(:alias_tags=>@tag).all.should == [@album]
@@ -145,12 +495,82 @@ shared_examples_for "filtering/excluding by associations" do
 
     @Artist.exclude(:albums=>@album).all.should == [artist]
     @Artist.exclude(:first_album=>@album).all.should == [artist]
+    unless @no_many_through_many
+      @Artist.exclude(:tags=>@tag).all.should == [artist]
+      @Artist.exclude(:first_tag=>@tag).all.should == [artist]
+    end
     @Album.exclude(:artist=>@artist).all.should == [album]
     @Album.exclude(:tags=>@tag).all.should == [album]
     @Album.exclude(:alias_tags=>@tag).all.should == [album]
     @Tag.exclude(:albums=>@album).all.should == [tag]
     @Album.exclude(:artist=>@artist, :tags=>@tag).all.should == [album]
   end
+
+  specify "should work correctly when filtering by associations with conditions" do
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+    
+    @Artist.filter(:a_albums=>@album).all.should == [@artist]
+    @Artist.filter(:first_a_album=>@album).all.should == [@artist]
+    @album.update(:name=>'Foo')
+    @Artist.filter(:a_albums=>@album).all.should == []
+    @Artist.filter(:first_a_album=>@album).all.should == []
+
+    @Album.filter(:a_artist=>@artist).all.should == [@album]
+    @artist.update(:name=>'Foo')
+    @Album.filter(:a_artist=>@artist).all.should == []
+
+    @Album.filter(:t_tags=>@tag).all.should == [@album]
+    @Album.filter(:alias_t_tags=>@tag).all.should == [@album]
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>@tag).all.should == [@album]
+      @Album.filter(:alias_t_tag=>@tag).all.should == [@album]
+      @Artist.filter(:t_tags=>@tag).all.should == [@artist]
+      @Artist.filter(:t_tag=>@tag).all.should == [@artist]
+    end
+    @tag.update(:name=>'Foo')
+    @Album.filter(:t_tags=>@tag).all.should == []
+    @Album.filter(:alias_t_tags=>@tag).all.should == []
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>@tag).all.should == []
+      @Album.filter(:alias_t_tag=>@tag).all.should == []
+      @Artist.filter(:t_tags=>@tag).all.should == []
+      @Artist.filter(:t_tag=>@tag).all.should == []
+    end
+  end
+  
+  specify "should work correctly when excluding by associations with conditions" do
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+    
+    @Artist.exclude(:a_albums=>@album).all.should == []
+    @Artist.exclude(:first_a_album=>@album).all.should == []
+    @album.update(:name=>'Foo')
+    @Artist.exclude(:a_albums=>@album).all.should == [@artist]
+    @Artist.exclude(:first_a_album=>@album).all.should == [@artist]
+
+    @Album.exclude(:a_artist=>@artist).all.should == []
+    @artist.update(:name=>'Foo')
+    @Album.exclude(:a_artist=>@artist).all.should == [@album]
+
+    @Album.exclude(:t_tags=>@tag).all.should == []
+    @Album.exclude(:alias_t_tags=>@tag).all.should == []
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>@tag).all.should == []
+      @Album.exclude(:alias_t_tag=>@tag).all.should == []
+      @Artist.exclude(:t_tags=>@tag).all.should == []
+      @Artist.exclude(:t_tag=>@tag).all.should == []
+    end
+    @tag.update(:name=>'Foo')
+    @Album.exclude(:t_tags=>@tag).all.should == [@album]
+    @Album.exclude(:alias_t_tags=>@tag).all.should == [@album]
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>@tag).all.should == [@album]
+      @Album.exclude(:alias_t_tag=>@tag).all.should == [@album]
+      @Artist.exclude(:t_tags=>@tag).all.should == [@artist]
+      @Artist.exclude(:t_tag=>@tag).all.should == [@artist]
+    end
+  end
   
   specify "should work correctly when filtering by multiple associations" do
     album, artist, tag = @pr.call
@@ -165,6 +585,10 @@ shared_examples_for "filtering/excluding by associations" do
     @Tag.filter(:albums=>[@album, album]).all.should == [@tag]
     @Album.filter(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.should == [@album]
     @artist.albums_dataset.filter(:tags=>[@tag, tag]).all.should == [@album]
+    unless @no_many_through_many
+      @Artist.filter(:tags=>[@tag, tag]).all.should == [@artist]
+      @Artist.filter(:first_tag=>[@tag, tag]).all.should == [@artist]
+    end
 
     album.add_tag(tag)
 
@@ -175,6 +599,10 @@ shared_examples_for "filtering/excluding by associations" do
     @Album.filter(:alias_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
     @Tag.filter(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@tag, tag]
     @Album.filter(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.should == [@album]
+    unless @no_many_through_many
+      @Artist.filter(:tags=>[@tag, tag]).all.should == [@artist]
+      @Artist.filter(:first_tag=>[@tag, tag]).all.should == [@artist]
+    end
 
     album.update(:artist => artist)
 
@@ -185,6 +613,10 @@ shared_examples_for "filtering/excluding by associations" do
     @Album.filter(:alias_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
     @Tag.filter(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@tag, tag]
     @Album.filter(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
+    unless @no_many_through_many
+      @Artist.filter(:tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@artist, artist]
+      @Artist.filter(:first_tag=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    end
   end
 
   specify "should work correctly when excluding by multiple associations" do
@@ -197,6 +629,10 @@ shared_examples_for "filtering/excluding by associations" do
     @Album.exclude(:alias_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
     @Tag.exclude(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@tag, tag]
     @Album.exclude(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
+    unless @no_many_through_many
+      @Artist.exclude(:tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@artist, artist]
+      @Artist.exclude(:first_tag=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    end
 
     @album.update(:artist => @artist)
     @album.add_tag(@tag)
@@ -208,6 +644,10 @@ shared_examples_for "filtering/excluding by associations" do
     @Album.exclude(:alias_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [album]
     @Tag.exclude(:albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [tag]
     @Album.exclude(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [album]
+    unless @no_many_through_many
+      @Artist.exclude(:tags=>[@tag, tag]).all.should == [artist]
+      @Artist.exclude(:first_tag=>[@tag, tag]).all.should == [artist]
+    end
 
     album.add_tag(tag)
 
@@ -218,6 +658,10 @@ shared_examples_for "filtering/excluding by associations" do
     @Album.exclude(:alias_tags=>[@tag, tag]).all.should == []
     @Tag.exclude(:albums=>[@album, album]).all.should == []
     @Album.exclude(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.should == [album]
+    unless @no_many_through_many
+      @Artist.exclude(:tags=>[@tag, tag]).all.should == [artist]
+      @Artist.exclude(:first_tag=>[@tag, tag]).all.should == [artist]
+    end
 
     album.update(:artist => artist)
 
@@ -228,6 +672,123 @@ shared_examples_for "filtering/excluding by associations" do
     @Album.exclude(:alias_tags=>[@tag, tag]).all.should == []
     @Tag.exclude(:albums=>[@album, album]).all.should == []
     @Album.exclude(:artist=>[@artist, artist], :tags=>[@tag, tag]).all.should == []
+    unless @no_many_through_many
+      @Artist.exclude(:tags=>[@tag, tag]).all.should == []
+      @Artist.exclude(:first_tag=>[@tag, tag]).all.should == []
+    end
+  end
+  
+  specify "should work correctly when filtering associations with conditions with multiple objects" do
+    album, artist, tag = @pr.call
+    album.update(:name=>@album.name)
+    artist.update(:name=>@artist.name)
+    tag.update(:name=>@tag.name)
+
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+    album.update(:artist => @artist)
+    tag.add_album(@album)
+    
+    @Artist.filter(:a_albums=>[@album, album]).all.should == [@artist]
+    @Artist.filter(:first_a_album=>[@album, album]).all.should == [@artist]
+    @album.update(:name=>'Foo')
+    @Artist.filter(:a_albums=>[@album, album]).all.should == [@artist]
+    @Artist.filter(:first_a_album=>[@album, album]).all.should == [@artist]
+    album.update(:name=>'Foo')
+    @Artist.filter(:a_albums=>[@album, album]).all.should == []
+    @Artist.filter(:first_a_album=>[@album, album]).all.should == []
+
+    album.update(:artist => nil)
+    artist.add_album(@album)
+    @Album.filter(:a_artist=>[@artist, artist]).all.should == [@album]
+    @artist.update(:name=>'Foo')
+    @Album.filter(:a_artist=>[@artist, artist]).all.should == [@album]
+    artist.update(:name=>'Foo')
+    @Album.filter(:a_artist=>[@artist, artist]).all.should == []
+
+    @Album.filter(:t_tags=>[@tag, tag]).all.should == [@album]
+    @Album.filter(:alias_t_tags=>[@tag, tag]).all.should == [@album]
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>[@tag, tag]).all.should == [@album]
+      @Album.filter(:alias_t_tag=>[@tag, tag]).all.should == [@album]
+      @Artist.filter(:t_tags=>[@tag, tag]).all.should == [artist]
+      @Artist.filter(:t_tag=>[@tag, tag]).all.should == [artist]
+    end
+    @tag.update(:name=>'Foo')
+    @Album.filter(:t_tags=>[@tag, tag]).all.should == [@album]
+    @Album.filter(:alias_t_tags=>[@tag, tag]).all.should == [@album]
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>[@tag, tag]).all.should == [@album]
+      @Album.filter(:alias_t_tag=>[@tag, tag]).all.should == [@album]
+      @Artist.filter(:t_tags=>[@tag, tag]).all.should == [artist]
+      @Artist.filter(:t_tag=>[@tag, tag]).all.should == [artist]
+    end
+    tag.update(:name=>'Foo')
+    @Album.filter(:t_tags=>[@tag, tag]).all.should == []
+    @Album.filter(:alias_t_tags=>[@tag, tag]).all.should == []
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>[@tag, tag]).all.should == []
+      @Album.filter(:alias_t_tag=>[@tag, tag]).all.should == []
+      @Artist.filter(:t_tags=>[@tag, tag]).all.should == []
+      @Artist.filter(:t_tag=>[@tag, tag]).all.should == []
+    end
+  end
+  
+  specify "should work correctly when excluding associations with conditions with multiple objects" do
+    album, artist, tag = @pr.call
+    album.update(:name=>@album.name)
+    artist.update(:name=>@artist.name)
+    tag.update(:name=>@tag.name)
+
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+    album.update(:artist => @artist)
+    tag.add_album(@album)
+    
+    artist.add_album(@album)
+    @Artist.exclude(:a_albums=>[@album, album]).all.should == []
+    @Artist.exclude(:first_a_album=>[@album, album]).all.should == []
+    @album.update(:name=>'Foo')
+    @Artist.exclude(:a_albums=>[@album, album]).all.should == [artist]
+    @Artist.exclude(:first_a_album=>[@album, album]).all.should == [artist]
+    album.update(:name=>'Foo')
+    @Artist.exclude(:a_albums=>[@album, album]).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    @Artist.exclude(:first_a_album=>[@album, album]).all.sort_by{|x| x.pk}.should == [@artist, artist]
+
+    @Album.exclude(:a_artist=>[@artist, artist]).all.should == []
+    album.update(:artist => nil)
+    @artist.update(:name=>'Foo')
+    @Album.exclude(:a_artist=>[@artist, artist]).all.should == [album]
+    artist.update(:name=>'Foo')
+    @Album.exclude(:a_artist=>[@artist, artist]).all.sort_by{|x| x.pk}.should == [@album, album]
+
+    @tag.add_album(album)
+    @Album.exclude(:t_tags=>[@tag, tag]).all.should == []
+    @Album.exclude(:alias_t_tags=>[@tag, tag]).all.should == []
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>[@tag, tag]).all.should == []
+      @Album.exclude(:alias_t_tag=>[@tag, tag]).all.should == []
+      @Artist.exclude(:t_tags=>[@tag, tag]).all.should == [@artist]
+      @Artist.exclude(:t_tag=>[@tag, tag]).all.should == [@artist]
+    end
+    @tag.update(:name=>'Foo')
+    @Album.exclude(:t_tags=>[@tag, tag]).all.should == [album]
+    @Album.exclude(:alias_t_tags=>[@tag, tag]).all.should == [album]
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>[@tag, tag]).all.should == [album]
+      @Album.exclude(:alias_t_tag=>[@tag, tag]).all.should == [album]
+      @Artist.exclude(:t_tags=>[@tag, tag]).all.should == [@artist]
+      @Artist.exclude(:t_tag=>[@tag, tag]).all.should == [@artist]
+    end
+    tag.update(:name=>'Foo')
+    @Album.exclude(:t_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
+    @Album.exclude(:alias_t_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
+      @Album.exclude(:alias_t_tag=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@album, album]
+      @Artist.exclude(:t_tags=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@artist, artist]
+      @Artist.exclude(:t_tag=>[@tag, tag]).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    end
   end
   
   specify "should work correctly when excluding by associations in regards to NULL values" do
@@ -239,6 +800,18 @@ shared_examples_for "filtering/excluding by associations" do
     @Tag.exclude(:albums=>@album).all.should == [@tag]
     @Album.exclude(:artist=>@artist, :tags=>@tag).all.should == [@album]
 
+    @Artist.exclude(:a_albums=>@album).all.should == [@artist]
+    @Artist.exclude(:first_a_album=>@album).all.should == [@artist]
+    @Album.exclude(:a_artist=>@artist).all.should == [@album]
+    @Album.exclude(:t_tags=>@tag).all.should == [@album]
+    @Album.exclude(:alias_t_tags=>@tag).all.should == [@album]
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>@tag).all.should == [@album]
+      @Album.exclude(:alias_t_tag=>@tag).all.should == [@album]
+      @Artist.exclude(:t_tags=>@tag).all.should == [@artist]
+      @Artist.exclude(:t_tag=>@tag).all.should == [@artist]
+    end
+
     @album.update(:artist => @artist)
     @artist.albums_dataset.exclude(:tags=>@tag).all.should == [@album]
   end
@@ -247,14 +820,22 @@ shared_examples_for "filtering/excluding by associations" do
     @ins.call
     @Album.exclude(:tags=>@tag).all.should == [@album]
     @Album.exclude(:alias_tags=>@tag).all.should == [@album]
+    @Album.exclude(:t_tags=>@tag).all.should == [@album]
+    @Album.exclude(:alias_t_tags=>@tag).all.should == [@album]
     @album.add_tag(@tag)
     @Album.filter(:tags=>@tag).all.should == [@album]
     @Album.filter(:alias_tags=>@tag).all.should == [@album]
+    @Album.filter(:t_tags=>@tag).all.should == [@album]
+    @Album.filter(:alias_t_tags=>@tag).all.should == [@album]
     album, tag = @pr.call.values_at(0, 2)
     @Album.exclude(:tags=>@tag).all.should == [album]
     @Album.exclude(:alias_tags=>@tag).all.should == [album]
+    @Album.exclude(:t_tags=>@tag).all.should == [album]
+    @Album.exclude(:alias_t_tags=>@tag).all.should == [album]
     @Album.exclude(:tags=>tag).all.sort_by{|x| x.pk}.should == [@album, album]
     @Album.exclude(:alias_tags=>tag).all.sort_by{|x| x.pk}.should == [@album, album]
+    @Album.exclude(:t_tags=>tag).all.sort_by{|x| x.pk}.should == [@album, album]
+    @Album.exclude(:alias_t_tags=>tag).all.sort_by{|x| x.pk}.should == [@album, album]
   end
 
   specify "should work correctly when filtering by association datasets" do
@@ -282,6 +863,15 @@ shared_examples_for "filtering/excluding by associations" do
     @Tag.filter(:albums=>@Album).all.sort_by{|x| x.pk}.should == [@tag, tag]
     @Tag.filter(:albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [tag]
     @Tag.filter(:albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+
+    unless @no_many_through_many
+      @Artist.filter(:tags=>@Tag).all.sort_by{|x| x.pk}.should == [@artist, artist]
+      @Artist.filter(:tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [artist]
+      @Artist.filter(:tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+      @Artist.filter(:first_tag=>@Tag).all.sort_by{|x| x.pk}.should == [@artist, artist]
+      @Artist.filter(:first_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [artist]
+      @Artist.filter(:first_tag=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+    end
   end
 
   specify "should work correctly when excluding by association datasets" do
@@ -294,6 +884,9 @@ shared_examples_for "filtering/excluding by associations" do
     @Artist.exclude(:albums=>@Album).all.sort_by{|x| x.pk}.should == []
     @Artist.exclude(:albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [@artist]
     @Artist.exclude(:albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    @Artist.exclude(:first_album=>@Album).all.sort_by{|x| x.pk}.should == []
+    @Artist.exclude(:first_album=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [@artist]
+    @Artist.exclude(:first_album=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist]
     @Album.exclude(:artist=>@Artist).all.sort_by{|x| x.pk}.should == []
     @Album.exclude(:artist=>@Artist.filter(Array(Artist.primary_key).map{|k| Sequel.qualify(Artist.table_name, k)}.zip(Array(artist.pk)))).all.sort_by{|x| x.pk}.should == [@album]
     @Album.exclude(:artist=>@Artist.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album]
@@ -306,6 +899,587 @@ shared_examples_for "filtering/excluding by associations" do
     @Tag.exclude(:albums=>@Album).all.sort_by{|x| x.pk}.should == []
     @Tag.exclude(:albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [@tag]
     @Tag.exclude(:albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@tag, tag]
+
+    unless @no_many_through_many
+      @Artist.exclude(:tags=>@Tag).all.sort_by{|x| x.pk}.should == []
+      @Artist.exclude(:tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@artist]
+      @Artist.exclude(:tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist]
+      @Artist.exclude(:first_tag=>@Tag).all.sort_by{|x| x.pk}.should == []
+      @Artist.exclude(:first_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@artist]
+      @Artist.exclude(:first_tag=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    end
+  end
+
+  specify "should work correctly when filtering by association datasets with conditions" do
+    album, artist, tag = @pr.call
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+    album.add_tag(tag)
+    album.update(:artist => artist)
+
+    @Artist.filter(:a_albums=>@Album).all.sort_by{|x| x.pk}.should == [@artist]
+    @Artist.filter(:first_a_album=>@Album).all.sort_by{|x| x.pk}.should == [@artist]
+    @Album.filter(:a_artist=>@Artist).all.sort_by{|x| x.pk}.should == [@album]
+    @Album.filter(:t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [@album]
+    @Album.filter(:alias_t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [@album]
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [@album]
+      @Album.filter(:alias_t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [@album]
+      @Artist.filter(:t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [@artist]
+      @Artist.filter(:t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [@artist]
+    end
+
+    artist.update(:name=>@artist.name)
+    album.update(:name=>@album.name)
+    tag.update(:name=>@tag.name)
+
+    @Artist.filter(:a_albums=>@Album).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    @Artist.filter(:first_a_album=>@Album).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    @Album.filter(:a_artist=>@Artist).all.sort_by{|x| x.pk}.should == [@album, album]
+    @Album.filter(:t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [@album, album]
+    @Album.filter(:alias_t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [@album, album]
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [@album, album]
+      @Album.filter(:alias_t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [@album, album]
+      @Artist.filter(:t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [@artist, artist]
+      @Artist.filter(:t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    end
+
+    @Artist.filter(:a_albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [artist]
+    @Artist.filter(:first_a_album=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [artist]
+    @Album.filter(:a_artist=>@Artist.filter(Array(Artist.primary_key).map{|k| Sequel.qualify(Artist.table_name, k)}.zip(Array(artist.pk)))).all.sort_by{|x| x.pk}.should == [album]
+    @Album.filter(:t_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [album]
+    @Album.filter(:alias_t_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [album]
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [album]
+      @Album.filter(:alias_t_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [album]
+      @Artist.filter(:t_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [artist]
+      @Artist.filter(:t_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [artist]
+    end
+
+    @Artist.filter(:a_albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+    @Artist.filter(:first_a_album=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+    @Album.filter(:a_artist=>@Artist.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+    @Album.filter(:t_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [album]
+    @Album.filter(:t_tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+    @Album.filter(:alias_t_tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+    unless @no_many_through_many
+      @Album.filter(:t_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [album]
+      @Album.filter(:t_tag=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+      @Album.filter(:alias_t_tag=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+      @Artist.filter(:t_tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+      @Artist.filter(:t_tag=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == []
+    end
+  end
+
+  specify "should work correctly when excluding by association datasets with conditions" do
+    album, artist, tag = @pr.call
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+    album.add_tag(tag)
+    album.update(:artist => artist)
+
+    @Artist.exclude(:a_albums=>@Album).all.sort_by{|x| x.pk}.should == [artist]
+    @Artist.exclude(:first_a_album=>@Album).all.sort_by{|x| x.pk}.should == [artist]
+    @Album.exclude(:a_artist=>@Artist).all.sort_by{|x| x.pk}.should == [album]
+    @Album.exclude(:t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [album]
+    @Album.exclude(:alias_t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [album]
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [album]
+      @Album.exclude(:alias_t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [album]
+      @Artist.exclude(:t_tags=>@Tag).all.sort_by{|x| x.pk}.should == [artist]
+      @Artist.exclude(:t_tag=>@Tag).all.sort_by{|x| x.pk}.should == [artist]
+    end
+
+    artist.update(:name=>@artist.name)
+    album.update(:name=>@album.name)
+    tag.update(:name=>@tag.name)
+
+    @Artist.exclude(:a_albums=>@Album).all.sort_by{|x| x.pk}.should == []
+    @Artist.exclude(:first_a_album=>@Album).all.sort_by{|x| x.pk}.should == []
+    @Album.exclude(:a_artist=>@Artist).all.sort_by{|x| x.pk}.should == []
+    @Album.exclude(:t_tags=>@Tag).all.sort_by{|x| x.pk}.should == []
+    @Album.exclude(:alias_t_tags=>@Tag).all.sort_by{|x| x.pk}.should == []
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>@Tag).all.sort_by{|x| x.pk}.should == []
+      @Album.exclude(:alias_t_tag=>@Tag).all.sort_by{|x| x.pk}.should == []
+      @Artist.exclude(:t_tags=>@Tag).all.sort_by{|x| x.pk}.should == []
+      @Artist.exclude(:t_tag=>@Tag).all.sort_by{|x| x.pk}.should == []
+    end
+
+    @Artist.exclude(:a_albums=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [@artist]
+    @Artist.exclude(:first_a_album=>@Album.filter(Array(Album.primary_key).map{|k| Sequel.qualify(Album.table_name, k)}.zip(Array(album.pk)))).all.sort_by{|x| x.pk}.should == [@artist]
+    @Album.exclude(:a_artist=>@Artist.filter(Array(Artist.primary_key).map{|k| Sequel.qualify(Artist.table_name, k)}.zip(Array(artist.pk)))).all.sort_by{|x| x.pk}.should == [@album]
+    @Album.exclude(:t_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@album]
+    @Album.exclude(:alias_t_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@album]
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@album]
+      @Album.exclude(:alias_t_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@album]
+      @Artist.exclude(:t_tags=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@artist]
+      @Artist.exclude(:t_tag=>@Tag.filter(Array(Tag.primary_key).map{|k| Sequel.qualify(Tag.table_name, k)}.zip(Array(tag.pk)))).all.sort_by{|x| x.pk}.should == [@artist]
+    end
+
+    @Artist.exclude(:a_albums=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    @Artist.exclude(:first_a_album=>@Album.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    @Album.exclude(:a_artist=>@Artist.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album]
+    @Album.exclude(:t_tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album]
+    @Album.exclude(:alias_t_tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album]
+    unless @no_many_through_many
+      @Album.exclude(:t_tag=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album]
+      @Album.exclude(:alias_t_tag=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@album, album]
+      @Artist.exclude(:t_tags=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist]
+      @Artist.exclude(:t_tag=>@Tag.filter(1=>0)).all.sort_by{|x| x.pk}.should == [@artist, artist]
+    end
+  end
+end
+
+shared_examples_for "filter by associations one_to_one limit strategies" do
+  specify "filter by associations with limited one_to_one associations should work correctly" do
+    Artist.one_to_one :first_album, {:clone=>:first_album}.merge(@els)
+    Artist.one_to_one :last_album, {:clone=>:last_album}.merge(@els)
+    Artist.one_to_one :second_album, {:clone=>:second_album}.merge(@els)
+    @album.update(:artist => @artist)
+    diff_album = @diff_album.call
+    ar = @pr.call[1]
+    ds = Artist.order(:name)
+    
+    ds.where(:first_album=>@album).all.should == [@artist]
+    ds.where(:first_album=>diff_album).all.should == []
+    ds.exclude(:first_album=>@album).all.should == [ar]
+    ds.exclude(:first_album=>diff_album).all.should == [@artist, ar]
+
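+    # second_album uses an offset, which :distinct_on cannot express and :correlated_subquery only supports when the database allows offsets in correlated subqueries.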
+    if @els[:eager_limit_strategy] != :distinct_on && (@els[:eager_limit_strategy] != :correlated_subquery || Album.dataset.supports_offsets_in_correlated_subqueries?) 
+      ds.where(:second_album=>@album).all.should == []
+      ds.where(:second_album=>diff_album).all.should == [@artist]
+      ds.exclude(:second_album=>@album).all.should == [@artist, ar]
+      ds.exclude(:second_album=>diff_album).all.should == [ar]
+    end
+
+    ds.where(:last_album=>@album).all.should == []
+    ds.where(:last_album=>diff_album).all.should == [@artist]
+    ds.exclude(:last_album=>@album).all.should == [@artist, ar]
+    ds.exclude(:last_album=>diff_album).all.should == [ar]
+
+    Artist.one_to_one :first_album, :clone=>:first_album do |ads| ads.where(:albums__name=>diff_album.name) end
+    ar.add_album(diff_album)
+    ds.where(:first_album=>[@album, diff_album]).all.should == [ar]
+    ds.exclude(:first_album=>[@album, diff_album]).all.should == [@artist]
+  end
+end
+
+shared_examples_for "filter by associations singular association limit strategies" do
+  it_should_behave_like "filter by associations one_to_one limit strategies"
+
+  specify "dataset associations with limited one_to_one associations should work correctly" do
+    Artist.one_to_one :first_album, {:clone=>:first_album}.merge(@els)
+    Artist.one_to_one :last_album, {:clone=>:last_album}.merge(@els)
+    Artist.one_to_one :second_album, {:clone=>:second_album}.merge(@els) if @els[:eager_limit_strategy] != :distinct_on
+    @album.update(:artist => @artist)
+    diff_album = @diff_album.call
+    ar = @pr.call[1]
+    ds = Artist
+    
+    ds.where(@artist.pk_hash).first_albums.all.should == [@album]
+    ds.where(@artist.pk_hash).second_albums.all.should == [diff_album]
+    ds.where(@artist.pk_hash).last_albums.all.should == [diff_album]
+    ds.where(ar.pk_hash).first_albums.all.should == []
+    ds.where(ar.pk_hash).second_albums.all.should == []
+    ds.where(ar.pk_hash).last_albums.all.should == []
+
+    Artist.one_to_one :first_album, :clone=>:first_album do |ads| ads.where(:albums__name=>diff_album.name) end
+    ar.add_album(diff_album)
+    ds.where(@artist.pk_hash).first_albums.all.should == []
+    ds.where(ar.pk_hash).first_albums.all.should == [diff_album]
+  end
+
+  specify "filter by associations with limited one_through_one associations should work correctly" do
+    Album.one_through_one :first_tag, {:clone=>:first_tag}.merge(@els)
+    Album.one_through_one :second_tag, {:clone=>:second_tag}.merge(@els) if @els[:eager_limit_strategy] != :distinct_on
+    Album.one_through_one :last_tag, {:clone=>:last_tag}.merge(@els)
+    tu, tv = @other_tags.call
+    al = @pr.call.first
+    ds = Album.order(:name)
+    al.add_tag(tu)
+    
+    ds.where(:first_tag=>@tag).all.should == [@album]
+    ds.where(:first_tag=>tu).all.should == [al]
+    ds.where(:first_tag=>tv).all.should == []
+    ds.exclude(:first_tag=>@tag).all.should == [al]
+    ds.exclude(:first_tag=>tu).all.should == [@album]
+    ds.exclude(:first_tag=>tv).all.should == [@album, al]
+
+    ds.where(:second_tag=>@tag).all.should == []
+    ds.where(:second_tag=>tu).all.should == [@album]
+    ds.where(:second_tag=>tv).all.should == []
+    ds.exclude(:second_tag=>@tag).all.should == [@album, al]
+    ds.exclude(:second_tag=>tu).all.should == [al]
+    ds.exclude(:second_tag=>tv).all.should == [@album, al]
+
+    ds.where(:last_tag=>@tag).all.should == []
+    ds.where(:last_tag=>tu).all.should == [al]
+    ds.where(:last_tag=>tv).all.should == [@album]
+    ds.exclude(:last_tag=>@tag).all.should == [@album, al]
+    ds.exclude(:last_tag=>tu).all.should == [@album]
+    ds.exclude(:last_tag=>tv).all.should == [al]
+
+    Album.one_through_one :first_tag, :clone=>:first_tag do |ads| ads.where(:tags__name=>tu.name) end
+    Album.one_through_one :second_tag, :clone=>:second_tag do |ads| ads.where(:tags__name=>[tu.name, tv.name]) end
+
+    ds.where(:first_tag=>[@tag, tu]).all.should == [@album, al]
+    ds.exclude(:first_tag=>[@tag, tu]).all.should == []
+
+    al.add_tag(tv)
+    ds.where(:second_tag=>[tv, tu]).all.should == [@album, al]
+    ds.exclude(:second_tag=>[tv, tu]).all.should == []
+  end
+
+  specify "dataset associations with limited one_through_one associations should work correctly" do
+    Album.one_through_one :first_tag, {:clone=>:first_tag}.merge(@els)
+    Album.one_through_one :second_tag, {:clone=>:second_tag}.merge(@els) if @els[:eager_limit_strategy] != :distinct_on
+    Album.one_through_one :last_tag, {:clone=>:last_tag}.merge(@els)
+    tu, tv = @other_tags.call
+    al = @pr.call.first
+    ds = Album
+    al.add_tag(tu)
+    
+    ds.where(@album.pk_hash).first_tags.all.should == [@tag]
+    ds.where(@album.pk_hash).second_tags.all.should == [tu]
+    ds.where(@album.pk_hash).last_tags.all.should == [tv]
+    ds.where(al.pk_hash).first_tags.all.should == [tu]
+    ds.where(al.pk_hash).second_tags.all.should == []
+    ds.where(al.pk_hash).last_tags.all.should == [tu]
+
+    Album.one_through_one :first_tag, :clone=>:first_tag do |ads| ads.where(:tags__name=>tu.name) end
+    Album.one_through_one :second_tag, :clone=>:second_tag do |ads| ads.where(:tags__name=>[tu.name, tv.name]) end
+
+    ds.where(@album.pk_hash).first_tags.all.should == [tu]
+    ds.where(@album.pk_hash).second_tags.all.should == [tv]
+    ds.where(al.pk_hash).first_tags.all.should == [tu]
+    ds.where(al.pk_hash).second_tags.all.should == []
+
+    al.add_tag(tv)
+    ds.where(@album.pk_hash).first_tags.all.should == [tu]
+    ds.where(@album.pk_hash).second_tags.all.should == [tv]
+    ds.where(al.pk_hash).first_tags.all.should == [tu]
+    ds.where(al.pk_hash).second_tags.all.should == [tv]
+  end
+
+  specify "filter by associations with limited one_through_many associations should work correctly" do
+    Artist.one_through_many :first_tag, {:clone=>:first_tag}.merge(@els)
+    Artist.one_through_many :second_tag, {:clone=>:second_tag}.merge(@els) if @els[:eager_limit_strategy] != :distinct_on
+    Artist.one_through_many :last_tag, {:clone=>:last_tag}.merge(@els)
+    @album.update(:artist => @artist)
+    tu, tv = @other_tags.call
+    al, ar, _ = @pr.call
+    al.update(:artist=>ar)
+    al.add_tag(tu)
+    ds = Artist.order(:name)
+
+    ds.where(:first_tag=>@tag).all.should == [@artist]
+    ds.where(:first_tag=>tu).all.should == [ar]
+    ds.where(:first_tag=>tv).all.should == []
+    ds.exclude(:first_tag=>@tag).all.should == [ar]
+    ds.exclude(:first_tag=>tu).all.should == [@artist]
+    ds.exclude(:first_tag=>tv).all.should == [@artist, ar]
+
+    ds.where(:second_tag=>@tag).all.should == []
+    ds.where(:second_tag=>tu).all.should == [@artist]
+    ds.where(:second_tag=>tv).all.should == []
+    ds.exclude(:second_tag=>@tag).all.should == [@artist, ar]
+    ds.exclude(:second_tag=>tu).all.should == [ar]
+    ds.exclude(:second_tag=>tv).all.should == [@artist, ar]
+
+    ds.where(:last_tag=>@tag).all.should == []
+    ds.where(:last_tag=>tu).all.should == [ar]
+    ds.where(:last_tag=>tv).all.should == [@artist]
+    ds.exclude(:last_tag=>@tag).all.should == [@artist, ar]
+    ds.exclude(:last_tag=>tu).all.should == [@artist]
+    ds.exclude(:last_tag=>tv).all.should == [ar]
+
+    Artist.one_through_many :first_tag, :clone=>:first_tag do |ads| ads.where(:tags__name=>tu.name) end
+    Artist.one_through_many :second_tag, :clone=>:second_tag do |ads| ads.where(:tags__name=>[tu.name, tv.name]) end
+
+    ds.where(:first_tag=>[@tag, tu]).all.should == [@artist, ar]
+    ds.exclude(:first_tag=>[@tag, tu]).all.should == []
+
+    al.add_tag(tv)
+    ds.where(:second_tag=>[tv, tu]).all.should == [@artist, ar]
+    ds.exclude(:second_tag=>[tv, tu]).all.should == []
+  end
+
+  specify "dataset associations with limited one_through_many associations should work correctly" do
+    Artist.one_through_many :first_tag, {:clone=>:first_tag}.merge(@els)
+    Artist.one_through_many :second_tag, {:clone=>:second_tag}.merge(@els) if @els[:eager_limit_strategy] != :distinct_on
+    Artist.one_through_many :last_tag, {:clone=>:last_tag}.merge(@els)
+    @album.update(:artist => @artist)
+    tu, tv = @other_tags.call
+    al, ar, _ = @pr.call
+    al.update(:artist=>ar)
+    al.add_tag(tu)
+    ds = Artist.order(:name)
+
+    ds.where(@artist.pk_hash).first_tags.all.should == [@tag]
+    ds.where(@artist.pk_hash).second_tags.all.should == [tu]
+    ds.where(@artist.pk_hash).last_tags.all.should == [tv]
+    ds.where(ar.pk_hash).first_tags.all.should == [tu]
+    ds.where(ar.pk_hash).second_tags.all.should == []
+    ds.where(ar.pk_hash).last_tags.all.should == [tu]
+
+    Artist.one_through_many :first_tag, :clone=>:first_tag do |ads| ads.where(:tags__name=>tu.name) end
+    Artist.one_through_many :second_tag, :clone=>:second_tag do |ads| ads.where(:tags__name=>[tu.name, tv.name]) end
+
+    ds.where(@artist.pk_hash).first_tags.all.should == [tu]
+    ds.where(@artist.pk_hash).second_tags.all.should == [tv]
+    ds.where(ar.pk_hash).first_tags.all.should == [tu]
+    ds.where(ar.pk_hash).second_tags.all.should == []
+
+    al.add_tag(tv)
+    ds.where(@artist.pk_hash).first_tags.all.should == [tu]
+    ds.where(@artist.pk_hash).second_tags.all.should == [tv]
+    ds.where(ar.pk_hash).first_tags.all.should == [tu]
+    ds.where(ar.pk_hash).second_tags.all.should == [tv]
+  end
+end
+
+shared_examples_for "filter by associations one_to_many limit strategies" do
+  specify "filter by associations with limited one_to_many associations should work correctly" do
+    Artist.one_to_many :first_two_albums, {:clone=>:first_two_albums}.merge(@els)
+    Artist.one_to_many :second_two_albums, {:clone=>:second_two_albums}.merge(@els)
+    Artist.one_to_many :not_first_albums, {:clone=>:not_first_albums}.merge(@els)
+    Artist.one_to_many :last_two_albums, {:clone=>:last_two_albums}.merge(@els)
+    @album.update(:artist => @artist)
+    middle_album = @middle_album.call
+    diff_album = @diff_album.call
+    ar = @pr.call[1]
+    ds = Artist.order(:name)
+
+    ds.where(:first_two_albums=>@album).all.should == [@artist]
+    ds.where(:first_two_albums=>middle_album).all.should == [@artist]
+    ds.where(:first_two_albums=>diff_album).all.should == []
+    ds.exclude(:first_two_albums=>@album).all.should == [ar]
+    ds.exclude(:first_two_albums=>middle_album).all.should == [ar]
+    ds.exclude(:first_two_albums=>diff_album).all.should == [@artist, ar]
+    
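+    # second_two_albums and not_first_albums use offsets, which :correlated_subquery only supports when the database allows offsets in correlated subqueries; last_two_albums has no offset and is always checked.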
+    assocs = if @els[:eager_limit_strategy] != :correlated_subquery || Album.dataset.supports_offsets_in_correlated_subqueries?
+      [:second_two_albums, :not_first_albums, :last_two_albums]
+    else
+      [:last_two_albums]
+    end
+
+    assocs.each do |a|
+      ds.where(a=>@album).all.should == []
+      ds.where(a=>middle_album).all.should == [@artist]
+      ds.where(a=>diff_album).all.should == [@artist]
+      ds.exclude(a=>@album).all.should == [@artist, ar]
+      ds.exclude(a=>middle_album).all.should == [ar]
+      ds.exclude(a=>diff_album).all.should == [ar]
+    end
+
+    Artist.one_to_one :first_two_albums, :clone=>:first_two_albums do |ads| ads.where(:albums__name=>diff_album.name) end
+    ar.add_album(diff_album)
+    ds.where(:first_two_albums=>[@album, diff_album]).all.should == [ar]
+    ds.exclude(:first_two_albums=>[@album, diff_album]).all.should == [@artist]
+  end
+end
+
+shared_examples_for "filter by associations limit strategies" do
+  it_should_behave_like "filter by associations singular association limit strategies"
+  it_should_behave_like "filter by associations one_to_many limit strategies"
+
+  specify "dataset associations with limited one_to_many associations should work correctly" do
+    Artist.one_to_many :first_two_albums, {:clone=>:first_two_albums}.merge(@els)
+    Artist.one_to_many :second_two_albums, {:clone=>:second_two_albums}.merge(@els)
+    Artist.one_to_many :not_first_albums, {:clone=>:not_first_albums}.merge(@els)
+    Artist.one_to_many :last_two_albums, {:clone=>:last_two_albums}.merge(@els)
+    @album.update(:artist => @artist)
+    middle_album = @middle_album.call
+    diff_album = @diff_album.call
+    ar = @pr.call[1]
+    ds = Artist.order(:name)
+
+    ds.where(@artist.pk_hash).first_two_albums.all.should == [@album, middle_album]
+    ds.where(@artist.pk_hash).second_two_albums.all.should == [middle_album, diff_album]
+    ds.where(@artist.pk_hash).not_first_albums.all.should == [middle_album, diff_album]
+    ds.where(@artist.pk_hash).last_two_albums.all.should == [diff_album, middle_album]
+    ds.where(ar.pk_hash).first_two_albums.all.should == []
+    ds.where(ar.pk_hash).second_two_albums.all.should == []
+    ds.where(ar.pk_hash).not_first_albums.all.should == []
+    ds.where(ar.pk_hash).last_two_albums.all.should == []
+
+    Artist.one_to_one :first_two_albums, :clone=>:first_two_albums do |ads| ads.where(:albums__name=>[diff_album.name, middle_album.name]) end
+    ar.add_album(diff_album)
+    ds.where(@artist.pk_hash).first_two_albums.all.should == [middle_album]
+    ds.where(ar.pk_hash).first_two_albums.all.should == [diff_album]
+  end
+
+  specify "filter by associations with limited many_to_many associations should work correctly" do
+    Album.send :many_to_many, :first_two_tags, {:clone=>:first_two_tags}.merge(@els)
+    Album.send :many_to_many, :second_two_tags, {:clone=>:second_two_tags}.merge(@els)
+    Album.send :many_to_many, :not_first_tags, {:clone=>:not_first_tags}.merge(@els)
+    Album.send :many_to_many, :last_two_tags, {:clone=>:last_two_tags}.merge(@els)
+    tu, tv = @other_tags.call
+    al = @pr.call.first
+    al.add_tag(tu)
+    ds = Album.order(:name)
+    
+    ds.where(:first_two_tags=>@tag).all.should == [@album]
+    ds.where(:first_two_tags=>tu).all.should == [@album, al]
+    ds.where(:first_two_tags=>tv).all.should == []
+    ds.exclude(:first_two_tags=>@tag).all.should == [al]
+    ds.exclude(:first_two_tags=>tu).all.should == []
+    ds.exclude(:first_two_tags=>tv).all.should == [@album, al]
+
+    ds.where(:second_two_tags=>@tag).all.should == []
+    ds.where(:second_two_tags=>tu).all.should == [@album]
+    ds.where(:second_two_tags=>tv).all.should == [@album]
+    ds.exclude(:second_two_tags=>@tag).all.should == [@album, al]
+    ds.exclude(:second_two_tags=>tu).all.should == [al]
+    ds.exclude(:second_two_tags=>tv).all.should == [al]
+
+    ds.where(:not_first_tags=>@tag).all.should == []
+    ds.where(:not_first_tags=>tu).all.should == [@album]
+    ds.where(:not_first_tags=>tv).all.should == [@album]
+    ds.exclude(:not_first_tags=>@tag).all.should == [@album, al]
+    ds.exclude(:not_first_tags=>tu).all.should == [al]
+    ds.exclude(:not_first_tags=>tv).all.should == [al]
+
+    ds.where(:last_two_tags=>@tag).all.should == []
+    ds.where(:last_two_tags=>tu).all.should == [@album, al]
+    ds.where(:last_two_tags=>tv).all.should == [@album]
+    ds.exclude(:last_two_tags=>@tag).all.should == [@album, al]
+    ds.exclude(:last_two_tags=>tu).all.should == []
+    ds.exclude(:last_two_tags=>tv).all.should == [al]
+
+    Album.many_to_many :first_two_tags, :clone=>:first_two_tags do |ads| ads.where(:tags__name=>tu.name) end
+    Album.many_to_many :second_two_tags, :clone=>:second_two_tags do |ads| ads.where(:tags__name=>[tu.name, tv.name]) end
+
+    ds.where(:first_two_tags=>[@tag, tu]).all.should == [@album, al]
+    ds.exclude(:first_two_tags=>[@tag, tu]).all.should == []
+
+    al.add_tag(tv)
+    ds.where(:second_two_tags=>[tv, tu]).all.should == [@album, al]
+    ds.exclude(:second_two_tags=>[tv, tu]).all.should == []
+  end
+
+  specify "dataset associations with limited many_to_many associations should work correctly" do
+    Album.send :many_to_many, :first_two_tags, {:clone=>:first_two_tags}.merge(@els)
+    Album.send :many_to_many, :second_two_tags, {:clone=>:second_two_tags}.merge(@els)
+    Album.send :many_to_many, :not_first_tags, {:clone=>:not_first_tags}.merge(@els)
+    Album.send :many_to_many, :last_two_tags, {:clone=>:last_two_tags}.merge(@els)
+    tu, tv = @other_tags.call
+    al = @pr.call.first
+    al.add_tag(tu)
+    ds = Album.order(:name)
+    
+    ds.where(@album.pk_hash).first_two_tags.all.should == [@tag, tu]
+    ds.where(@album.pk_hash).second_two_tags.all.should == [tu, tv]
+    ds.where(@album.pk_hash).not_first_tags.all.should == [tu, tv]
+    ds.where(@album.pk_hash).last_two_tags.all.should == [tv, tu]
+    ds.where(al.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(al.pk_hash).second_two_tags.all.should == []
+    ds.where(al.pk_hash).not_first_tags.all.should == []
+    ds.where(al.pk_hash).last_two_tags.all.should == [tu]
+
+    Album.many_to_many :first_two_tags, :clone=>:first_two_tags do |ads| ads.where(:tags__name=>tu.name) end
+    Album.many_to_many :second_two_tags, :clone=>:second_two_tags do |ads| ads.where(:tags__name=>[tu.name, tv.name]) end
+
+    ds.where(@album.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(@album.pk_hash).second_two_tags.all.should == [tv]
+    ds.where(al.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(al.pk_hash).second_two_tags.all.should == []
+
+    al.add_tag(tv)
+    ds.where(@album.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(@album.pk_hash).second_two_tags.all.should == [tv]
+    ds.where(al.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(al.pk_hash).second_two_tags.all.should == [tv]
+  end
+
+  specify "filter by associations with limited many_through_many associations should work correctly" do
+    Artist.many_through_many :first_two_tags, {:clone=>:first_two_tags}.merge(@els)
+    Artist.many_through_many :second_two_tags, {:clone=>:second_two_tags}.merge(@els)
+    Artist.many_through_many :not_first_tags, {:clone=>:not_first_tags}.merge(@els)
+    Artist.many_through_many :last_two_tags, {:clone=>:last_two_tags}.merge(@els)
+    @album.update(:artist => @artist)
+    tu, tv = @other_tags.call
+    al, ar, _ = @pr.call
+    al.update(:artist=>ar)
+    al.add_tag(tu)
+    ds = Artist.order(:name)
+    
+    ds.where(:first_two_tags=>@tag).all.should == [@artist]
+    ds.where(:first_two_tags=>tu).all.should == [@artist, ar]
+    ds.where(:first_two_tags=>tv).all.should == []
+    ds.exclude(:first_two_tags=>@tag).all.should == [ar]
+    ds.exclude(:first_two_tags=>tu).all.should == []
+    ds.exclude(:first_two_tags=>tv).all.should == [@artist, ar]
+
+    ds.where(:second_two_tags=>@tag).all.should == []
+    ds.where(:second_two_tags=>tu).all.should == [@artist]
+    ds.where(:second_two_tags=>tv).all.should == [@artist]
+    ds.exclude(:second_two_tags=>@tag).all.should == [@artist, ar]
+    ds.exclude(:second_two_tags=>tu).all.should == [ar]
+    ds.exclude(:second_two_tags=>tv).all.should == [ar]
+
+    ds.where(:not_first_tags=>@tag).all.should == []
+    ds.where(:not_first_tags=>tu).all.should == [@artist]
+    ds.where(:not_first_tags=>tv).all.should == [@artist]
+    ds.exclude(:not_first_tags=>@tag).all.should == [@artist, ar]
+    ds.exclude(:not_first_tags=>tu).all.should == [ar]
+    ds.exclude(:not_first_tags=>tv).all.should == [ar]
+
+    ds.where(:last_two_tags=>@tag).all.should == []
+    ds.where(:last_two_tags=>tu).all.should == [@artist, ar]
+    ds.where(:last_two_tags=>tv).all.should == [@artist]
+    ds.exclude(:last_two_tags=>@tag).all.should == [@artist, ar]
+    ds.exclude(:last_two_tags=>tu).all.should == []
+    ds.exclude(:last_two_tags=>tv).all.should == [ar]
+
+    Artist.many_through_many :first_two_tags, :clone=>:first_tag do |ads| ads.where(:tags__name=>tu.name) end
+    Artist.many_through_many :second_two_tags, :clone=>:first_tag do |ads| ads.where(:tags__name=>[tv.name, tu.name]) end
+
+    ds.where(:first_two_tags=>[@tag, tu]).all.should == [@artist, ar]
+    ds.exclude(:first_two_tags=>[@tag, tu]).all.should == []
+
+    al.add_tag(tv)
+    ds.where(:second_two_tags=>[tv, tu]).all.should == [@artist, ar]
+    ds.exclude(:second_two_tags=>[tv, tu]).all.should == []
+  end
+
+  specify "dataset associations with limited many_through_many associations should work correctly" do
+    Artist.many_through_many :first_two_tags, {:clone=>:first_two_tags}.merge(@els)
+    Artist.many_through_many :second_two_tags, {:clone=>:second_two_tags}.merge(@els)
+    Artist.many_through_many :not_first_tags, {:clone=>:not_first_tags}.merge(@els)
+    Artist.many_through_many :last_two_tags, {:clone=>:last_two_tags}.merge(@els)
+    @album.update(:artist => @artist)
+    tu, tv = @other_tags.call
+    al, ar, _ = @pr.call
+    al.update(:artist=>ar)
+    al.add_tag(tu)
+    ds = Artist.order(:name)
+    
+    ds.where(@artist.pk_hash).first_two_tags.all.should == [@tag, tu]
+    ds.where(@artist.pk_hash).second_two_tags.all.should == [tu, tv]
+    ds.where(@artist.pk_hash).not_first_tags.all.should == [tu, tv]
+    ds.where(@artist.pk_hash).last_two_tags.all.should == [tv, tu]
+    ds.where(ar.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(ar.pk_hash).second_two_tags.all.should == []
+    ds.where(ar.pk_hash).not_first_tags.all.should == []
+    ds.where(ar.pk_hash).last_two_tags.all.should == [tu]
+
+    Artist.many_through_many :first_two_tags, :clone=>:first_two_tags do |ads| ads.where(:tags__name=>tu.name) end
+    Artist.many_through_many :second_two_tags, :clone=>:second_two_tags do |ads| ads.where(:tags__name=>[tu.name, tv.name]) end
+
+    ds.where(@artist.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(@artist.pk_hash).second_two_tags.all.should == [tv]
+    ds.where(ar.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(ar.pk_hash).second_two_tags.all.should == []
+
+    al.add_tag(tv)
+    ds.where(@artist.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(@artist.pk_hash).second_two_tags.all.should == [tv]
+    ds.where(ar.pk_hash).first_two_tags.all.should == [tu]
+    ds.where(ar.pk_hash).second_two_tags.all.should == [tv]
   end
 end
 
@@ -360,7 +1534,11 @@ shared_examples_for "basic regular and composite key associations" do
     Album.tags.all.should == []
     Album.alias_tags.all.should == []
     Artist.albums.all.should == []
-    Artist.tags.all.should == [] unless @no_many_through_many
+    unless @no_many_through_many
+      Album.first_tags.all.should == []
+      Artist.tags.all.should == []
+      Artist.first_tags.all.should == []
+    end
     Artist.albums.tags.all.should == []
 
     @album.update(:artist => @artist)
@@ -371,7 +1549,11 @@ shared_examples_for "basic regular and composite key associations" do
     Album.tags.all.should == [@tag]
     Album.alias_tags.all.should == [@tag]
     Artist.albums.all.should == [@album]
-    Artist.tags.all.should == [@tag] unless @no_many_through_many
+    unless @no_many_through_many
+      Album.first_tags.all.should == [@tag]
+      Artist.tags.all.should == [@tag]
+      Artist.first_tags.all.should == [@tag]
+    end
     Artist.albums.tags.all.should == [@tag]
 
     album.add_tag(tag)
@@ -382,7 +1564,11 @@ shared_examples_for "basic regular and composite key associations" do
     Album.tags.order(:name).all.should == [@tag, tag]
     Album.alias_tags.order(:name).all.should == [@tag, tag]
     Artist.albums.order(:name).all.should == [@album, album]
-    Artist.tags.order(:name).all.should == [@tag, tag] unless @no_many_through_many
+    unless @no_many_through_many
+      Album.first_tags.order(:name).all.should == [@tag, tag]
+      Artist.tags.order(:name).all.should == [@tag, tag]
+      Artist.first_tags.order(:name).all.should == [@tag, tag]
+    end
     Artist.albums.tags.order(:name).all.should == [@tag, tag]
 
     Tag.filter(Tag.qualified_primary_key_hash(tag.pk)).albums.all.should == [album]
@@ -390,7 +1576,11 @@ shared_examples_for "basic regular and composite key associations" do
     Album.filter(Album.qualified_primary_key_hash(album.pk)).tags.all.should == [tag]
     Album.filter(Album.qualified_primary_key_hash(album.pk)).alias_tags.all.should == [tag]
     Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).albums.all.should == [album]
-    Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).tags.all.should == [tag] unless @no_many_through_many
+    unless @no_many_through_many
+      Album.filter(Album.qualified_primary_key_hash(album.pk)).first_tags.all.should == [tag]
+      Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).tags.all.should == [tag]
+      Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).first_tags.all.should == [tag]
+    end
     Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).albums.tags.all.should == [tag]
 
     Artist.filter(Artist.qualified_primary_key_hash(artist.pk)).albums.filter(Album.qualified_primary_key_hash(album.pk)).tags.all.should == [tag]
@@ -510,18 +1700,32 @@ shared_examples_for "regular and composite key associations" do
     it_should_behave_like "filtering/excluding by associations"
   end
 
+  describe "with default/union :eager_limit_strategy" do
+    before do
+      @els = {}
+    end
+    it_should_behave_like "eager limit strategies"
+  end
+
   describe "with :eager_limit_strategy=>:ruby" do
     before do
       @els = {:eager_limit_strategy=>:ruby}
     end
     it_should_behave_like "eager limit strategies"
+    it_should_behave_like "eager_graph limit strategies"
   end
 
-  describe "with :eager_limit_strategy=>true" do
+  describe "with :eager_limit_strategy=>:distinct_on" do
     before do
-      @els = {:eager_limit_strategy=>true}
+      @els = {:eager_limit_strategy=>:distinct_on}
     end
     it_should_behave_like "one_to_one eager limit strategies"
+    it_should_behave_like "one_through_one eager limit strategies"
+    it_should_behave_like "one_through_many eager limit strategies"
+    it_should_behave_like "one_to_one eager_graph limit strategies"
+    it_should_behave_like "one_through_one eager_graph limit strategies"
+    it_should_behave_like "one_through_many eager_graph limit strategies"
+    it_should_behave_like "filter by associations singular association limit strategies"
   end if DB.dataset.supports_ordered_distinct_on?
 
   describe "with :eager_limit_strategy=>:window_function" do
@@ -529,6 +1733,8 @@ shared_examples_for "regular and composite key associations" do
       @els = {:eager_limit_strategy=>:window_function}
     end
     it_should_behave_like "eager limit strategies"
+    it_should_behave_like "eager_graph limit strategies"
+    it_should_behave_like "filter by associations limit strategies"
   end if DB.dataset.supports_window_functions?
 
   specify "should work with a many_through_many association" do
@@ -559,6 +1765,35 @@ shared_examples_for "regular and composite key associations" do
     a.first.artist.should == @artist
     a.first.artist.tags.should == [@tag]
   end
+
+  specify "should work with a one_through_many association" do
+    @album.update(:artist => @artist)
+    @album.add_tag(@tag)
+
+    @album.reload
+    @artist.reload
+    @tag.reload
+    
+    @album.tags.should == [@tag]
+    
+    a = Artist.eager(:first_tag).all
+    a.should == [@artist]
+    a.first.first_tag.should == @tag
+    
+    a = Artist.eager_graph(:first_tag).all
+    a.should == [@artist]
+    a.first.first_tag.should == @tag
+    
+    a = Album.eager(:artist=>:first_tag).all
+    a.should == [@album]
+    a.first.artist.should == @artist
+    a.first.artist.first_tag.should == @tag
+    
+    a = Album.eager_graph(:artist=>:first_tag).all
+    a.should == [@album]
+    a.first.artist.should == @artist
+    a.first.artist.first_tag.should == @tag
+  end
 end
 
 describe "Sequel::Model Simple Associations" do
@@ -588,29 +1823,44 @@ describe "Sequel::Model Simple Associations" do
     class ::Artist < Sequel::Model(@db)
       plugin :dataset_associations
       one_to_many :albums, :order=>:name
-      one_to_one :first_album, :class=>:Album, :order=>:name
-      one_to_one :second_album, :class=>:Album, :order=>:name, :limit=>[nil, 1]
+      one_to_one :first_album, :clone=>:albums
+      one_to_one :second_album, :clone=>:albums, :limit=>[nil, 1]
       one_to_one :last_album, :class=>:Album, :order=>Sequel.desc(:name)
-      one_to_many :first_two_albums, :class=>:Album, :order=>:name, :limit=>2
-      one_to_many :second_two_albums, :class=>:Album, :order=>:name, :limit=>[2, 1]
-      one_to_many :not_first_albums, :class=>:Album, :order=>:name, :limit=>[nil, 1]
+      one_to_many :first_two_albums, :clone=>:albums, :limit=>2
+      one_to_many :second_two_albums, :clone=>:albums, :limit=>[2, 1]
+      one_to_many :not_first_albums, :clone=>:albums, :limit=>[nil, 1]
       one_to_many :last_two_albums, :class=>:Album, :order=>Sequel.desc(:name), :limit=>2
+      one_to_many :a_albums, :clone=>:albums, :conditions=>{:name=>'Al'}
+      one_to_one :first_a_album, :clone=>:a_albums
       plugin :many_through_many
       many_through_many :tags, [[:albums, :artist_id, :id], [:albums_tags, :album_id, :tag_id]]
-      many_through_many :first_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>2
-      many_through_many :second_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>[2, 1]
-      many_through_many :not_first_tags, :clone=>:tags, :order=>:tags__name, :limit=>[nil, 1]
-      many_through_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:tags__name), :limit=>2
+      many_through_many :first_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>2, :graph_order=>:name
+      many_through_many :second_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>[2, 1], :graph_order=>:name
+      many_through_many :not_first_tags, :clone=>:tags, :order=>:tags__name, :limit=>[nil, 1], :graph_order=>:name
+      many_through_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:tags__name), :limit=>2, :graph_order=>Sequel.desc(:name)
+      many_through_many :t_tags, :clone=>:tags, :conditions=>{:tags__name=>'T'}
+      one_through_many :first_tag, [[:albums, :artist_id, :id], [:albums_tags, :album_id, :tag_id]], :order=>:tags__name, :graph_order=>:name, :class=>:Tag
+      one_through_many :second_tag, :clone=>:first_tag, :limit=>[nil, 1]
+      one_through_many :last_tag, :clone=>:first_tag, :order=>Sequel.desc(:tags__name), :graph_order=>Sequel.desc(:name)
+      one_through_many :t_tag, :clone=>:first_tag, :conditions=>{:tags__name=>'T'}
     end
     class ::Album < Sequel::Model(@db)
       plugin :dataset_associations
       many_to_one :artist, :reciprocal=>nil
+      many_to_one :a_artist, :clone=>:artist, :conditions=>{:name=>'Ar'}, :key=>:artist_id
       many_to_many :tags, :right_key=>:tag_id
       many_to_many :alias_tags, :clone=>:tags, :join_table=>:albums_tags___at
       many_to_many :first_two_tags, :clone=>:tags, :order=>:name, :limit=>2
       many_to_many :second_two_tags, :clone=>:tags, :order=>:name, :limit=>[2, 1]
       many_to_many :not_first_tags, :clone=>:tags, :order=>:name, :limit=>[nil, 1]
       many_to_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:name), :limit=>2
+      many_to_many :t_tags, :clone=>:tags, :conditions=>{:name=>'T'}
+      many_to_many :alias_t_tags, :clone=>:t_tags, :join_table=>:albums_tags___at
+      one_through_one :first_tag, :clone=>:tags, :order=>:name
+      one_through_one :second_tag, :clone=>:first_tag, :limit=>[nil, 1]
+      one_through_one :last_tag, :clone=>:tags, :order=>Sequel.desc(:name)
+      one_through_one :t_tag, :clone=>:t_tags
+      one_through_one :alias_t_tag, :clone=>:alias_t_tags
     end
     class ::Tag < Sequel::Model(@db)
       plugin :dataset_associations
@@ -635,6 +1885,34 @@ describe "Sequel::Model Simple Associations" do
   
   it_should_behave_like "regular and composite key associations"
 
+  describe "with :correlated_subquery limit strategy" do
+    before do
+      @els = {:eager_limit_strategy=>:correlated_subquery}
+    end
+
+    it_should_behave_like "one_to_one eager_graph limit strategies"
+    it_should_behave_like "one_to_many eager_graph limit strategies"
+    it_should_behave_like "filter by associations one_to_one limit strategies"
+    it_should_behave_like "filter by associations one_to_many limit strategies"
+  end if DB.dataset.supports_limits_in_correlated_subqueries?
+
+  specify "should handle eager loading limited associations for many objects" do
+    @db[:artists].import([:name], (1..99).map{|i| [i.to_s]})
+    artists = Artist.eager(:albums).all
+    artists.length.should == 100
+    artists.each{|a| a.albums.should == []}
+    artists = Artist.eager(:first_two_albums).all
+    artists.length.should == 100
+    artists.each{|a| a.first_two_albums.should == []}
+    @db[:albums].insert([:artist_id], @db[:artists].select(:id))
+    artists = Artist.eager(:albums).all
+    artists.length.should == 100
+    artists.each{|a| a.albums.length.should == 1}
+    artists = Artist.eager(:first_two_albums).all
+    artists.length.should == 100
+    artists.each{|a| a.first_two_albums.length.should == 1}
+  end
+
   specify "should handle many_to_one associations with same name as :key" do
     Album.def_column_alias(:artist_id_id, :artist_id)
     Album.many_to_one :artist_id, :key_column =>:artist_id, :class=>Artist
@@ -794,31 +2072,46 @@ describe "Sequel::Model Composite Key Associations" do
       set_primary_key [:id1, :id2]
       unrestrict_primary_key
       one_to_many :albums, :key=>[:artist_id1, :artist_id2], :order=>:name
-      one_to_one :first_album, :clone=>:albums, :order=>:name
+      one_to_one :first_album, :clone=>:albums
       one_to_one :last_album, :clone=>:albums, :order=>Sequel.desc(:name)
       one_to_one :second_album, :clone=>:albums, :limit=>[nil, 1]
       one_to_many :first_two_albums, :clone=>:albums, :order=>:name, :limit=>2
       one_to_many :second_two_albums, :clone=>:albums, :order=>:name, :limit=>[2, 1]
       one_to_many :not_first_albums, :clone=>:albums, :order=>:name, :limit=>[nil, 1]
       one_to_many :last_two_albums, :clone=>:albums, :order=>Sequel.desc(:name), :limit=>2
+      one_to_many :a_albums, :clone=>:albums do |ds| ds.where(:name=>'Al') end
+      one_to_one :first_a_album, :clone=>:a_albums
       plugin :many_through_many
       many_through_many :tags, [[:albums, [:artist_id1, :artist_id2], [:id1, :id2]], [:albums_tags, [:album_id1, :album_id2], [:tag_id1, :tag_id2]]]
-      many_through_many :first_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>2
-      many_through_many :second_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>[2, 1]
-      many_through_many :not_first_tags, :clone=>:tags, :order=>:tags__name, :limit=>[nil, 1]
-      many_through_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:tags__name), :limit=>2
+      many_through_many :first_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>2, :graph_order=>:name
+      many_through_many :second_two_tags, :clone=>:tags, :order=>:tags__name, :limit=>[2, 1], :graph_order=>:name
+      many_through_many :not_first_tags, :clone=>:tags, :order=>:tags__name, :limit=>[nil, 1], :graph_order=>:name
+      many_through_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:tags__name), :limit=>2, :graph_order=>Sequel.desc(:name)
+      many_through_many :t_tags, :clone=>:tags do |ds| ds.where(:tags__name=>'T') end
+      one_through_many :first_tag, [[:albums, [:artist_id1, :artist_id2], [:id1, :id2]], [:albums_tags, [:album_id1, :album_id2], [:tag_id1, :tag_id2]]], :order=>:tags__name, :graph_order=>:name, :class=>:Tag
+      one_through_many :second_tag, :clone=>:first_tag, :limit=>[nil, 1]
+      one_through_many :last_tag, :clone=>:first_tag, :order=>Sequel.desc(:tags__name), :graph_order=>Sequel.desc(:name)
+      one_through_many :t_tag, :clone=>:first_tag do |ds| ds.where(:tags__name=>'T') end
     end
     class ::Album < Sequel::Model(@db)
       plugin :dataset_associations
       set_primary_key [:id1, :id2]
       unrestrict_primary_key
       many_to_one :artist, :key=>[:artist_id1, :artist_id2], :reciprocal=>nil
+      many_to_one(:a_artist, :clone=>:artist){|ds| ds.where(:name=>'Ar')}
       many_to_many :tags, :left_key=>[:album_id1, :album_id2], :right_key=>[:tag_id1, :tag_id2]
       many_to_many :alias_tags, :clone=>:tags, :join_table=>:albums_tags___at
       many_to_many :first_two_tags, :clone=>:tags, :order=>:name, :limit=>2
       many_to_many :second_two_tags, :clone=>:tags, :order=>:name, :limit=>[2, 1]
       many_to_many :not_first_tags, :clone=>:tags, :order=>:name, :limit=>[nil, 1]
       many_to_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:name), :limit=>2
+      many_to_many :t_tags, :clone=>:tags do |ds| ds.where(:name=>'T') end
+      many_to_many :alias_t_tags, :clone=>:t_tags, :join_table=>:albums_tags___at
+      one_through_one :first_tag, :clone=>:tags, :order=>:name
+      one_through_one :second_tag, :clone=>:first_tag, :limit=>[nil, 1]
+      one_through_one :last_tag, :clone=>:tags, :order=>Sequel.desc(:name)
+      one_through_one :t_tag, :clone=>:t_tags
+      one_through_one :alias_t_tag, :clone=>:alias_t_tags
     end
     class ::Tag < Sequel::Model(@db)
       plugin :dataset_associations
@@ -845,6 +2138,17 @@ describe "Sequel::Model Composite Key Associations" do
 
   it_should_behave_like "regular and composite key associations"
 
+  describe "with :correlated_subquery limit strategy" do
+    before do
+      @els = {:eager_limit_strategy=>:correlated_subquery}
+    end
+
+    it_should_behave_like "one_to_one eager_graph limit strategies"
+    it_should_behave_like "one_to_many eager_graph limit strategies"
+    it_should_behave_like "filter by associations one_to_one limit strategies"
+    it_should_behave_like "filter by associations one_to_many limit strategies"
+  end if DB.dataset.supports_limits_in_correlated_subqueries? && DB.dataset.supports_multiple_column_in?
+
   specify "should have add method accept hashes and create new records" do
     @artist.remove_all_albums
     Album.dataset.delete
@@ -911,18 +2215,23 @@ describe "Sequel::Model pg_array_to_many" do
     class ::Artist < Sequel::Model(@db)
       plugin :dataset_associations
       one_to_many :albums, :order=>:name
-      one_to_one :first_album, :class=>:Album, :order=>:name
+      one_to_one :first_album, :clone=>:albums
+      one_to_many :a_albums, :clone=>:albums do |ds| ds.where(:name=>'Al') end
+      one_to_one :first_a_album, :clone=>:a_albums
     end
     class ::Album < Sequel::Model(@db)
       plugin :dataset_associations
       plugin :pg_array_associations
       many_to_one :artist, :reciprocal=>nil
+      many_to_one :a_artist, :clone=>:artist, :key=>:artist_id do |ds| ds.where(:name=>'Ar') end
       pg_array_to_many :tags, :key=>:tag_ids, :save_after_modify=>true
       pg_array_to_many :alias_tags, :clone=>:tags
       pg_array_to_many :first_two_tags, :clone=>:tags, :order=>:name, :limit=>2
       pg_array_to_many :second_two_tags, :clone=>:tags, :order=>:name, :limit=>[2, 1]
       pg_array_to_many :not_first_tags, :clone=>:tags, :order=>:name, :limit=>[nil, 1]
       pg_array_to_many :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:name), :limit=>2
+      pg_array_to_many :t_tags, :clone=>:tags do |ds| ds.where(:tags__name=>'T') end
+      pg_array_to_many :alias_t_tags, :clone=>:t_tags
     end
     class ::Tag < Sequel::Model(@db)
       plugin :dataset_associations
@@ -950,6 +2259,7 @@ describe "Sequel::Model pg_array_to_many" do
   
   it_should_behave_like "basic regular and composite key associations"
   it_should_behave_like "many_to_many eager limit strategies"
+  it_should_behave_like "many_to_many eager_graph limit strategies"
 
   it "should handle adding and removing entries in array" do
     a = Album.create
@@ -958,7 +2268,7 @@ describe "Sequel::Model pg_array_to_many" do
     a.remove_tag(@tag)
     a.save
   end
-end if DB.database_type == :postgres && DB.adapter_scheme == :postgres && DB.server_version >= 90300
+end if DB.database_type == :postgres && [:postgres, :jdbc].include?(DB.adapter_scheme) && DB.server_version >= 90300
 
 describe "Sequel::Model many_to_pg_array" do
   before(:all) do
@@ -987,22 +2297,27 @@ describe "Sequel::Model many_to_pg_array" do
       plugin :dataset_associations
       one_to_many :albums, :order=>:name
       one_to_one :first_album, :class=>:Album, :order=>:name
+      one_to_many :a_albums, :clone=>:albums do |ds| ds.where(:name=>'Al') end
+      one_to_one :first_a_album, :clone=>:a_albums
     end
     class ::Album < Sequel::Model(@db)
       plugin :dataset_associations
       plugin :pg_array_associations
       many_to_one :artist, :reciprocal=>nil
+      many_to_one :a_artist, :clone=>:artist, :key=>:artist_id do |ds| ds.where(:name=>'Ar') end
       many_to_pg_array :tags
       many_to_pg_array :alias_tags, :clone=>:tags
       many_to_pg_array :first_two_tags, :clone=>:tags, :order=>:name, :limit=>2
       many_to_pg_array :second_two_tags, :clone=>:tags, :order=>:name, :limit=>[2, 1]
       many_to_pg_array :not_first_tags, :clone=>:tags, :order=>:name, :limit=>[nil, 1]
       many_to_pg_array :last_two_tags, :clone=>:tags, :order=>Sequel.desc(:name), :limit=>2
+      many_to_pg_array :t_tags, :clone=>:tags do |ds| ds.where(:tags__name=>'T') end
+      many_to_pg_array :alias_t_tags, :clone=>:t_tags
     end
     class ::Tag < Sequel::Model(@db)
       plugin :dataset_associations
       plugin :pg_array_associations
-      pg_array_to_many :albums
+      pg_array_to_many :albums, :save_after_modify=>true
     end
     @album = Album.create(:name=>'Al')
     @artist = Artist.create(:name=>'Ar')
@@ -1025,6 +2340,7 @@ describe "Sequel::Model many_to_pg_array" do
   
   it_should_behave_like "basic regular and composite key associations"
   it_should_behave_like "many_to_many eager limit strategies"
+  it_should_behave_like "many_to_many eager_graph limit strategies"
 
   it "should handle adding and removing entries in array" do
     a = Album.create
@@ -1032,7 +2348,7 @@ describe "Sequel::Model many_to_pg_array" do
     a.add_tag(@tag)
     a.remove_tag(@tag)
   end
-end if DB.database_type == :postgres && DB.adapter_scheme == :postgres && DB.server_version >= 90300
+end if DB.database_type == :postgres && [:postgres, :jdbc].include?(DB.adapter_scheme) && DB.server_version >= 90300
 
 describe "Sequel::Model Associations with clashing column names" do
   before(:all) do
@@ -1062,6 +2378,7 @@ describe "Sequel::Model Associations with clashing column names" do
     @Foo.one_to_one :bar, :primary_key=>:obj_id, :primary_key_column=>:object_id, :key=>:object_id, :key_method=>:obj_id, :class=>@Bar
     @Bar.many_to_one :foo, :key=>:obj_id, :key_column=>:object_id, :primary_key=>:object_id, :primary_key_method=>:obj_id, :class=>@Foo
     @Foo.many_to_many :mtmbars, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>:object_id, :right_primary_key=>:object_id, :right_primary_key_method=>:obj_id, :left_key=>:foo_id, :right_key=>:object_id, :class=>@Bar
+    @Foo.one_through_one :mtmbar, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>:object_id, :right_primary_key=>:object_id, :right_primary_key_method=>:obj_id, :left_key=>:foo_id, :right_key=>:object_id, :class=>@Bar
     @Bar.many_to_many :mtmfoos, :join_table=>:bars_foos, :left_primary_key=>:obj_id, :left_primary_key_column=>:object_id, :right_primary_key=>:object_id, :right_primary_key_method=>:obj_id, :left_key=>:object_id, :right_key=>:foo_id, :class=>@Foo
     @foo = @Foo.create(:obj_id=>2)
     @bar = @Bar.create(:obj_id=>2)
@@ -1076,6 +2393,7 @@ describe "Sequel::Model Associations with clashing column names" do
     @Foo.first.bars.should == [@bar]
     @Foo.first.bar.should == @bar
     @Foo.first.mtmbars.should == [@bar]
+    @Foo.first.mtmbar.should == @bar
     @Bar.first.mtmfoos.should == [@foo]
   end
 
@@ -1084,6 +2402,7 @@ describe "Sequel::Model Associations with clashing column names" do
     @Foo.eager(:bars).all.map{|o| [o, o.bars]}.should == [[@foo, [@bar]]]
     @Foo.eager(:bar).all.map{|o| [o, o.bar]}.should == [[@foo, @bar]]
     @Foo.eager(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]]
+    @Foo.eager(:mtmbar).all.map{|o| [o, o.mtmbar]}.should == [[@foo, @bar]]
     @Bar.eager(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]]
   end
 
@@ -1092,6 +2411,7 @@ describe "Sequel::Model Associations with clashing column names" do
     @Foo.eager_graph(:bars).all.map{|o| [o, o.bars]}.should == [[@foo, [@bar]]]
     @Foo.eager_graph(:bar).all.map{|o| [o, o.bar]}.should == [[@foo, @bar]]
     @Foo.eager_graph(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]]
+    @Foo.eager_graph(:mtmbar).all.map{|o| [o, o.mtmbar]}.should == [[@foo, @bar]]
     @Bar.eager_graph(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]]
   end
 
@@ -1105,7 +2425,7 @@ describe "Sequel::Model Associations with clashing column names" do
     @bar.obj_id.should == 2
 
     @foo.add_bar(b)
-    @foo.bars.sort_by{|x| x.obj_id}.should == [@bar, b]
+    @foo.bars.sort_by{|x| x.id}.should == [@bar, b]
     @foo.remove_bar(b)
     @foo.bars.should == [@bar]
     @foo.remove_all_bars
diff --git a/spec/integration/database_test.rb b/spec/integration/database_test.rb
index d24bf39..2a55ae6 100644
--- a/spec/integration/database_test.rb
+++ b/spec/integration/database_test.rb
@@ -21,7 +21,7 @@ describe Sequel::Database do
   end
 
   specify "should raise Sequel::DatabaseError on invalid SQL" do
-    proc{@db << "SELECT"}.should raise_error(Sequel::DatabaseError)
+    proc{@db << "S"}.should raise_error(Sequel::DatabaseError)
   end
 
   specify "should have Sequel::DatabaseError#sql give the SQL causing the error" do
@@ -45,7 +45,15 @@ describe Sequel::Database do
       proc{@db[:test].update(:a=>'1')}.should raise_error(Sequel::UniqueConstraintViolation)
     end
 
-    cspecify "should raise Sequel::CheckConstraintViolation when a check constraint is violated", :mysql, :sqlite, [:db2] do
+    cspecify "should raise Sequel::UniqueConstraintViolation when a unique constraint is violated for composite primary keys", [:jdbc, :sqlite], [:db2] do
+      @db.create_table!(:test){String :a; String :b; primary_key [:a, :b]}
+      @db[:test].insert(:a=>'1', :b=>'2')
+      proc{@db[:test].insert(:a=>'1', :b=>'2')}.should raise_error(Sequel::UniqueConstraintViolation)
+      @db[:test].insert(:a=>'3', :b=>'4')
+      proc{@db[:test].update(:a=>'1', :b=>'2')}.should raise_error(Sequel::UniqueConstraintViolation)
+    end
+
+    cspecify "should raise Sequel::CheckConstraintViolation when a check constraint is violated", :mysql, [:db2], [proc{|db| db.sqlite_version < 30802}, :sqlite] do
       @db.create_table!(:test){String :a; check Sequel.~(:a=>'1')}
       proc{@db[:test].insert('1')}.should raise_error(Sequel::CheckConstraintViolation)
       @db[:test].insert('2')
@@ -78,7 +86,7 @@ describe Sequel::Database do
       @db << "SELECT"
     rescue Sequel::DatabaseError=>e
       if defined?(Java::JavaLang::Exception)
-        (e.wrapped_exception.is_a?(Exception) || e.wrapped_exception.is_a?(Java::JavaLang::Exception)).should be_true
+        (e.wrapped_exception.is_a?(Exception) || e.wrapped_exception.is_a?(Java::JavaLang::Exception)).should == true
       else
         e.wrapped_exception.should be_a_kind_of(Exception)
       end
@@ -98,8 +106,8 @@ describe Sequel::Database do
 
   cspecify "should provide ability to check connections for validity", [:do, :postgres] do
     conn = @db.synchronize{|c| c}
-    @db.valid_connection?(conn).should be_true
+    @db.valid_connection?(conn).should == true
     @db.disconnect
-    @db.valid_connection?(conn).should be_false
+    @db.valid_connection?(conn).should == false
   end
 end
diff --git a/spec/integration/dataset_test.rb b/spec/integration/dataset_test.rb
index 752bacd..309f556 100644
--- a/spec/integration/dataset_test.rb
+++ b/spec/integration/dataset_test.rb
@@ -75,7 +75,14 @@ describe "Simple Dataset operations" do
   end
 
   specify "should graph correctly" do
-    @ds.graph(:items, {:id=>:id}, :table_alias=>:b).extension(:graph_each).all.should == [{:items=>{:id=>1, :number=>10}, :b=>{:id=>1, :number=>10}}]
+    a =  [{:items=>{:id=>1, :number=>10}, :b=>{:id=>1, :number=>10}}]
+    pr = proc{|t| @ds.graph(t, {:id=>:id}, :table_alias=>:b).extension(:graph_each).all.should == a}
+    pr[:items]
+    pr[:items___foo]
+    pr[Sequel.identifier(:items)]
+    pr[Sequel.identifier('items')]
+    pr[Sequel.as(:items, :foo)]
+    pr[Sequel.as(Sequel.identifier('items'), 'foo')]
   end
 
   specify "should graph correctly with a subselect" do
@@ -117,17 +124,47 @@ describe "Simple Dataset operations" do
   specify "should support iterating over large numbers of records with paged_each" do
     (2..100).each{|i| @ds.insert(:number=>i*10)}
 
+    [:offset, :filter].each do |strategy|
+      rows = []
+      @ds.order(:number).paged_each(:rows_per_fetch=>5, :strategy=>strategy){|row| rows << row}
+      rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}}
+
+      rows = []
+      @ds.order(:number).paged_each(:rows_per_fetch=>3, :strategy=>strategy){|row| rows << row}
+      rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}}
+
+      rows = []
+      @ds.order(:number, :id).paged_each(:rows_per_fetch=>5, :strategy=>strategy){|row| rows << row}
+      rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}}
+
+      rows = []
+      @ds.reverse_order(:number).paged_each(:rows_per_fetch=>5, :strategy=>strategy){|row| rows << row}
+      rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}}.reverse
+
+      rows = []
+      @ds.order(Sequel.desc(:number), :id).paged_each(:rows_per_fetch=>5, :strategy=>strategy){|row| rows << row}
+      rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}}.reverse
+    end
+
     rows = []
-    @ds.order(:number).paged_each(:rows_per_fetch=>5){|row| rows << row}
-    rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}}
+    @ds.order(:number).limit(50, 25).paged_each(:rows_per_fetch=>3){|row| rows << row}
+    rows.should == (26..75).map{|i| {:id=>i, :number=>i*10}}
 
     rows = []
-    @ds.order(:number).paged_each(:rows_per_fetch=>3){|row| rows << row}
+    @ds.order(Sequel.*(:number, 2)).paged_each(:rows_per_fetch=>5){|row| rows << row}
     rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}}
 
     rows = []
-    @ds.order(:number).limit(50, 25).paged_each(:rows_per_fetch=>3){|row| rows << row}
-    rows.should == (26..75).map{|i| {:id=>i, :number=>i*10}}
+    @ds.order(Sequel.*(:number, 2)).paged_each(:rows_per_fetch=>5, :strategy=>:filter, :filter_values=>proc{|row, _| [row[:number] * 2]}){|row| rows << row}
+    rows.should == (1..100).map{|i| {:id=>i, :number=>i*10}}
+
+    if DB.adapter_scheme == :jdbc
+      # check retrieval with varying fetch sizes
+      array = (1..100).to_a
+      [1, 2, 5, 10, 33, 50, 100, 1000].each do |i|
+        @ds.with_fetch_size(i).select_order_map(:id).should == array
+      end
+    end
   end
 
   specify "should fetch all results correctly" do
@@ -157,6 +194,24 @@ describe "Simple Dataset operations" do
     @ds.order(:id).limit(2, 0).all.should == [{:id=>1, :number=>10}, {:id=>2, :number=>20}]
     @ds.order(:id).limit(2, 1).all.should == [{:id=>2, :number=>20}]
   end
+
+  specify "should fetch correctly with just offset" do
+    @ds.order(:id).offset(0).all.should == [{:id=>1, :number=>10}]
+    @ds.order(:id).offset(1).all.should == []
+    @ds.insert(:number=>20)
+    @ds.order(:id).offset(0).all.should == [{:id=>1, :number=>10}, {:id=>2, :number=>20}]
+    @ds.order(:id).offset(1).all.should == [{:id=>2, :number=>20}]
+    @ds.order(:id).offset(2).all.should == []
+  end
+
+  specify "should fetch correctly with a limit and offset using seperate methods" do
+    @ds.order(:id).limit(2).offset(0).all.should == [{:id=>1, :number=>10}]
+    @ds.order(:id).limit(2).offset(1).all.should == []
+    @ds.insert(:number=>20)
+    @ds.order(:id).limit(1).offset(1).all.should == [{:id=>2, :number=>20}]
+    @ds.order(:id).limit(2).offset(0).all.should == [{:id=>1, :number=>10}, {:id=>2, :number=>20}]
+    @ds.order(:id).limit(2).offset(1).all.should == [{:id=>2, :number=>20}]
+  end
   
   specify "should provide correct columns when using a limit and offset" do
     ds = @ds.order(:id).limit(1, 1)
@@ -186,6 +241,12 @@ describe "Simple Dataset operations" do
     @ds.limit(2, 1).all.should == []
   end
 
+  specify "should be orderable by column number" do
+    @ds.insert(:number=>20)
+    @ds.insert(:number=>10)
+    @ds.order(2, 1).select_map([:id, :number]).should == [[1, 10], [3, 10], [2, 20]]
+  end
+
   specify "should fetch correctly with a limit in an IN subselect" do
     @ds.where(:id=>@ds.select(:id).order(:id).limit(2)).all.should == [{:id=>1, :number=>10}]
     @ds.insert(:number=>20)
@@ -232,6 +293,10 @@ describe "Simple Dataset operations" do
     @ds.select(:id___x, :number___n).first.should == {:x=>1, :n=>10}
   end
 
+  specify "should support table aliases with column aliases" do
+    DB.from(@ds.as(:i, [:x, :n])).first.should == {:x=>1, :n=>10}
+  end if DB.dataset.supports_derived_column_lists?
+
   specify "should handle true/false properly" do
     @ds.filter(Sequel::TRUE).select_map(:number).should == [10]
     @ds.filter(Sequel::FALSE).select_map(:number).should == []
@@ -251,7 +316,7 @@ describe "Simple dataset operations with nasty table names" do
     @db.quote_identifiers = @qi
   end
 
-  cspecify "should work correctly", :mssql, :oracle do
+  cspecify "should work correctly", :oracle, :sqlanywhere, [:jdbc, :mssql] do
     @db.create_table!(@table) do
       primary_key :id
       Integer :number
@@ -287,6 +352,11 @@ describe Sequel::Dataset do
     @d.count.should == 3
   end
 
+  specify "should handle functions with identifier names correctly" do
+    @d << {:name => 'abc', :value => 6}
+    @d.get{sum.function(:value)}.should == 6
+  end
+
   specify "should handle aggregate methods on limited datasets correctly" do
     @d << {:name => 'abc', :value => 6}
     @d << {:name => 'bcd', :value => 12}
@@ -375,7 +445,7 @@ describe Sequel::Database do
     DB.get(Sequel.cast(Sequel.blob(""), File).as(:a)).should == ""
   end
 
-  cspecify "should properly escape identifiers", :db2, :oracle do
+  cspecify "should properly escape identifiers", :db2, :oracle, :sqlanywhere do
     DB.create_table(:"\\'\"[]"){Integer :id}
     DB.drop_table(:"\\'\"[]")
   end
@@ -719,35 +789,35 @@ if DB.dataset.supports_window_functions?
     end
     
     specify "should give correct results for aggregate window functions" do
-      @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id){}.as(:sum)}.all.should ==
+      @ds.select(:id){sum(:amount).over(:partition=>:group_id).as(:sum)}.all.should ==
         [{:sum=>111, :id=>1}, {:sum=>111, :id=>2}, {:sum=>111, :id=>3}, {:sum=>111000, :id=>4}, {:sum=>111000, :id=>5}, {:sum=>111000, :id=>6}]
-      @ds.select(:id){sum(:over, :args=>amount){}.as(:sum)}.all.should ==
+      @ds.select(:id){sum(:amount).over.as(:sum)}.all.should ==
         [{:sum=>111111, :id=>1}, {:sum=>111111, :id=>2}, {:sum=>111111, :id=>3}, {:sum=>111111, :id=>4}, {:sum=>111111, :id=>5}, {:sum=>111111, :id=>6}]
     end
       
     specify "should give correct results for ranking window functions with orders" do
-      @ds.select(:id){rank(:over, :partition=>group_id, :order=>id){}.as(:rank)}.all.should ==
+      @ds.select(:id){rank{}.over(:partition=>:group_id, :order=>:id).as(:rank)}.all.should ==
         [{:rank=>1, :id=>1}, {:rank=>2, :id=>2}, {:rank=>3, :id=>3}, {:rank=>1, :id=>4}, {:rank=>2, :id=>5}, {:rank=>3, :id=>6}]
-      @ds.select(:id){rank(:over, :order=>id){}.as(:rank)}.all.should ==
+      @ds.select(:id){rank{}.over(:order=>id).as(:rank)}.all.should ==
         [{:rank=>1, :id=>1}, {:rank=>2, :id=>2}, {:rank=>3, :id=>3}, {:rank=>4, :id=>4}, {:rank=>5, :id=>5}, {:rank=>6, :id=>6}]
     end
       
-    cspecify "should give correct results for aggregate window functions with orders", :mssql do
-      @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id, :order=>id){}.as(:sum)}.all.should ==
+    specify "should give correct results for aggregate window functions with orders" do
+      @ds.select(:id){sum(:amount).over(:partition=>:group_id, :order=>:id).as(:sum)}.all.should ==
         [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1000, :id=>4}, {:sum=>11000, :id=>5}, {:sum=>111000, :id=>6}]
-      @ds.select(:id){sum(:over, :args=>amount, :order=>id){}.as(:sum)}.all.should ==
+      @ds.select(:id){sum(:amount).over(:order=>:id).as(:sum)}.all.should ==
         [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1111, :id=>4}, {:sum=>11111, :id=>5}, {:sum=>111111, :id=>6}]
     end
     
-    cspecify "should give correct results for aggregate window functions with frames", :mssql do
-      @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id, :order=>id, :frame=>:all){}.as(:sum)}.all.should ==
+    specify "should give correct results for aggregate window functions with frames" do
+      @ds.select(:id){sum(:amount).over(:partition=>:group_id, :order=>:id, :frame=>:all).as(:sum)}.all.should ==
         [{:sum=>111, :id=>1}, {:sum=>111, :id=>2}, {:sum=>111, :id=>3}, {:sum=>111000, :id=>4}, {:sum=>111000, :id=>5}, {:sum=>111000, :id=>6}]
-      @ds.select(:id){sum(:over, :args=>amount, :order=>id, :frame=>:all){}.as(:sum)}.all.should ==
+      @ds.select(:id){sum(:amount).over(:order=>:id, :frame=>:all).as(:sum)}.all.should ==
         [{:sum=>111111, :id=>1}, {:sum=>111111, :id=>2}, {:sum=>111111, :id=>3}, {:sum=>111111, :id=>4}, {:sum=>111111, :id=>5}, {:sum=>111111, :id=>6}]
         
-      @ds.select(:id){sum(:over, :args=>amount, :partition=>group_id, :order=>id, :frame=>:rows){}.as(:sum)}.all.should ==
+      @ds.select(:id){sum(:amount).over(:partition=>:group_id, :order=>:id, :frame=>:rows).as(:sum)}.all.should ==
         [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1000, :id=>4}, {:sum=>11000, :id=>5}, {:sum=>111000, :id=>6}]
-      @ds.select(:id){sum(:over, :args=>amount, :order=>id, :frame=>:rows){}.as(:sum)}.all.should ==
+      @ds.select(:id){sum(:amount).over(:order=>:id, :frame=>:rows).as(:sum)}.all.should ==
         [{:sum=>1, :id=>1}, {:sum=>11, :id=>2}, {:sum=>111, :id=>3}, {:sum=>1111, :id=>4}, {:sum=>11111, :id=>5}, {:sum=>111111, :id=>6}]
     end
   end
@@ -773,7 +843,7 @@ describe Sequel::SQL::Constants do
     @db.drop_table?(:constants)
   end
   
-  cspecify "should have working CURRENT_DATE", [:odbc, :mssql], [:jdbc, :sqlite], :oracle do
+  cspecify "should have working CURRENT_DATE", [:jdbc, :sqlite], :oracle do
     @db.create_table!(:constants){Date :d}
     @ds.insert(:d=>Sequel::CURRENT_DATE)
     d = @c2[@ds.get(:d)]
@@ -841,6 +911,11 @@ describe "Sequel::Dataset#import and #multi_insert" do
     @ids.import([:i], [[10], [20], [30]], :slice_size=>3)
     @ids.all.should == [{:i=>10}, {:i=>20}, {:i=>30}]
   end
+
+  it "should import many rows at once" do
+    @ids.import([:i], (1..1000).to_a.map{|x| [x]})
+    @ids.select_order_map(:i).should == (1..1000).to_a
+  end
 end
 
 describe "Sequel::Dataset#import and #multi_insert :return=>:primary_key " do
@@ -1339,7 +1414,7 @@ describe "Sequel::Dataset DSL support" do
     @ds.filter([:a, :b]=>[]).all.should == []
     @ds.exclude([:a, :b]=>[]).all.should == []
 
-    unless Sequel.guarded?(:mssql, :oracle, :db2)
+    unless Sequel.guarded?(:mssql, :oracle, :db2, :sqlanywhere)
       # Some databases don't like boolean results in the select list
       pr = proc{|r| r.is_a?(Integer) ? (r != 0) : r}
       pr[@ds.get(Sequel.expr(:a=>[]))].should == nil
@@ -1357,7 +1432,7 @@ describe "Sequel::Dataset DSL support" do
     ds.filter([:a, :b]=>[]).all.should == []
     ds.exclude([:a, :b]=>[]).all.should == [{:a=>nil, :b=>nil}]
 
-    unless Sequel.guarded?(:mssql, :oracle, :db2)
+    unless Sequel.guarded?(:mssql, :oracle, :db2, :sqlanywhere)
       # Some databases don't like boolean results in the select list
       pr = proc{|r| r.is_a?(Integer) ? (r != 0) : r}
       pr[ds.get(Sequel.expr(:a=>[]))].should == false
diff --git a/spec/integration/migrator_test.rb b/spec/integration/migrator_test.rb
index 454694a..0fae0a7 100644
--- a/spec/integration/migrator_test.rb
+++ b/spec/integration/migrator_test.rb
@@ -13,66 +13,66 @@ describe Sequel::Migrator do
   specify "should be able to migrate up and down all the way successfully" do
     @dir = 'spec/files/integer_migrations'
     @m.apply(@db, @dir)
-    [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_info].get(:version).should == 3
     @m.apply(@db, @dir, 0)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_info].get(:version).should == 0
   end
   
   specify "should be able to migrate up and down to specific versions successfully" do
     @dir = 'spec/files/integer_migrations'
     @m.apply(@db, @dir, 2)
-    [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    @db.table_exists?(:sm3333).should be_false
+    [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    @db.table_exists?(:sm3333).should == false
     @db[:schema_info].get(:version).should == 2
     @m.apply(@db, @dir, 1)
-    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
-    @db.table_exists?(:sm1111).should be_true
+    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
+    @db.table_exists?(:sm1111).should == true
     @db[:schema_info].get(:version).should == 1
   end
 
   specify "should correctly set migration version to the last successful migration if the migration raises an error when migrating up" do
     @dir = 'spec/files/bad_up_migration'
     proc{@m.apply(@db, @dir)}.should raise_error
-    [:schema_info, :sm11111].each{|n| @db.table_exists?(n).should be_true}
-    @db.table_exists?(:sm22222).should be_false
+    [:schema_info, :sm11111].each{|n| @db.table_exists?(n).should == true}
+    @db.table_exists?(:sm22222).should == false
     @db[:schema_info].get(:version).should == 1
     @m.apply(@db, @dir, 0)
-    [:sm11111, :sm22222].each{|n| @db.table_exists?(n).should be_false}
+    [:sm11111, :sm22222].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_info].get(:version).should == 0
   end
 
   specify "should correctly set migration version to the last successful migration if the migration raises an error when migrating down" do
     @dir = 'spec/files/bad_down_migration'
     @m.apply(@db, @dir)
-    [:schema_info, :sm11111, :sm22222].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_info, :sm11111, :sm22222].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_info].get(:version).should == 2
     proc{@m.apply(@db, @dir, 0)}.should raise_error
-    [:sm22222].each{|n| @db.table_exists?(n).should be_false}
-    @db.table_exists?(:sm11111).should be_true
+    [:sm22222].each{|n| @db.table_exists?(n).should == false}
+    @db.table_exists?(:sm11111).should == true
     @db[:schema_info].get(:version).should == 1
   end
 
   specify "should handle migrating up or down all the way with timestamped migrations" do
     @dir = 'spec/files/timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb 1273253853_3_create_users.rb'
     @m.apply(@db, @dir, 0)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
   specify "should handle migrating up or down to specific timestamps with timestamped migrations" do
     @dir = 'spec/files/timestamped_migrations'
     @m.apply(@db, @dir, 1273253851)
-    [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    @db.table_exists?(:sm3333).should be_false
+    [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    @db.table_exists?(:sm3333).should == false
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb'
     @m.apply(@db, @dir, 1273253849)
-    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
-    @db.table_exists?(:sm1111).should be_true
+    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
+    @db.table_exists?(:sm1111).should == true
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb'
   end
 
@@ -81,7 +81,7 @@ describe Sequel::Migrator do
     @m.apply(@db, @dir)
     @dir = 'spec/files/interleaved_timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253850_create_artists.rb 1273253851_create_nodes.rb 1273253852_create_albums.rb 1273253853_3_create_users.rb'
   end
 
@@ -90,7 +90,7 @@ describe Sequel::Migrator do
     @m.apply(@db, @dir)
     @dir = 'spec/files/interleaved_timestamped_migrations'
     @m.apply(@db, @dir, 0)
-    [:sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm1122, :sm2222, :sm2233, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
@@ -99,89 +99,89 @@ describe Sequel::Migrator do
     @m.apply(@db, @dir)
     @dir = 'spec/files/interleaved_timestamped_migrations'
     @m.apply(@db, @dir, 1273253851)
-    [:schema_migrations, :sm1111, :sm1122, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    [:sm2233, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_migrations, :sm1111, :sm1122, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    [:sm2233, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253850_create_artists.rb 1273253851_create_nodes.rb'
   end
 
   specify "should correctly update schema_migrations table when an error occurs when migrating up or down using timestamped migrations" do
     @dir = 'spec/files/bad_timestamped_migrations'
     proc{@m.apply(@db, @dir)}.should raise_error
-    [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    @db.table_exists?(:sm3333).should be_false
+    [:schema_migrations, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    @db.table_exists?(:sm3333).should == false
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253851_create_nodes.rb'
     proc{@m.apply(@db, @dir, 0)}.should raise_error
-    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
-    @db.table_exists?(:sm1111).should be_true
+    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
+    @db.table_exists?(:sm1111).should == true
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb'
   end
 
   specify "should handle multiple migrations with the same timestamp correctly" do
     @dir = 'spec/files/duplicate_timestamped_migrations'
     @m.apply(@db, @dir)
-    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253853_create_nodes.rb 1273253853_create_users.rb'
     @m.apply(@db, @dir, 1273253853)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb 1273253853_create_nodes.rb 1273253853_create_users.rb'
     @m.apply(@db, @dir, 1273253849)
-    [:sm1111].each{|n| @db.table_exists?(n).should be_true}
-    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111].each{|n| @db.table_exists?(n).should == true}
+    [:sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'1273253849_create_sessions.rb'
     @m.apply(@db, @dir, 1273253848)
-    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_false}
+    [:sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
   specify "should convert schema_info table to schema_migrations table" do
     @dir = 'spec/files/integer_migrations'
     @m.apply(@db, @dir)
-    [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
 
     @dir = 'spec/files/convert_to_timestamp_migrations'
     @m.apply(@db, @dir)
-    [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb 1273253852_create_albums.rb'
 
     @m.apply(@db, @dir, 4)
-    [:schema_info, :schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should be_true}
-    [:sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :schema_migrations, :sm1111, :sm2222, :sm3333].each{|n| @db.table_exists?(n).should == true}
+    [:sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb'
 
     @m.apply(@db, @dir, 0)
-    [:schema_info, :schema_migrations].each{|n| @db.table_exists?(n).should be_true}
-    [:sm1111, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :schema_migrations].each{|n| @db.table_exists?(n).should == true}
+    [:sm1111, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == []
   end
 
   specify "should handle unapplied migrations when migrating schema_info table to schema_migrations table" do
     @dir = 'spec/files/integer_migrations'
     @m.apply(@db, @dir, 2)
-    [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111, :sm2222].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
 
     @dir = 'spec/files/convert_to_timestamp_migrations'
     @m.apply(@db, @dir, 1273253850)
-    [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122].each{|n| @db.table_exists?(n).should be_true}
-    [:sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111, :sm2222, :sm3333, :schema_migrations, :sm1122].each{|n| @db.table_exists?(n).should == true}
+    [:sm2233].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb'
   end
 
   specify "should handle unapplied migrations when migrating schema_info table to schema_migrations table and target is less than last integer migration version" do
     @dir = 'spec/files/integer_migrations'
     @m.apply(@db, @dir, 1)
-    [:schema_info, :sm1111].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :sm2222, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
 
     @dir = 'spec/files/convert_to_timestamp_migrations'
     @m.apply(@db, @dir, 2)
-    [:schema_info, :sm1111, :sm2222, :schema_migrations].each{|n| @db.table_exists?(n).should be_true}
-    [:sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :sm1111, :sm2222, :schema_migrations].each{|n| @db.table_exists?(n).should == true}
+    [:sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == false}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb'
 
     @m.apply(@db, @dir)
-    [:schema_info, :sm1111, :sm2222, :schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should be_true}
+    [:schema_info, :sm1111, :sm2222, :schema_migrations, :sm3333, :sm1122, :sm2233].each{|n| @db.table_exists?(n).should == true}
     @db[:schema_migrations].select_order_map(:filename).should == %w'001_create_sessions.rb 002_create_nodes.rb 003_3_create_users.rb 1273253850_create_artists.rb 1273253852_create_albums.rb'
   end
 
@@ -189,52 +189,52 @@ describe Sequel::Migrator do
     @dir = 'spec/files/reversible_migrations'
     @db.drop_table?(:a, :b)
     @m.apply(@db, @dir, 1)
-    [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :a].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should == false}
     @db[:a].columns.should == [:a]
 
     @m.apply(@db, @dir, 2)
-    [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :a].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should == false}
     @db[:a].columns.should == [:a, :b]
 
     @m.apply(@db, @dir, 3)
-    [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :a].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should == false}
     @db[:a].columns.should == [:a, :c]
 
     @m.apply(@db, @dir, 4)
-    [:schema_info, :b].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :a].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :b].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :a].each{|n| @db.table_exists?(n).should == false}
     @db[:b].columns.should == [:a, :c]
 
     @m.apply(@db, @dir, 5)
-    [:schema_info, :b].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :a].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :b].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :a].each{|n| @db.table_exists?(n).should == false}
     @db[:b].columns.should == [:a, :c, :e]
 
     @m.apply(@db, @dir, 4)
-    [:schema_info, :b].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :a].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :b].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :a].each{|n| @db.table_exists?(n).should == false}
     @db[:b].columns.should == [:a, :c]
 
     @m.apply(@db, @dir, 3)
-    [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :a].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should == false}
     @db[:a].columns.should == [:a, :c]
 
     @m.apply(@db, @dir, 2)
-    [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :a].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should == false}
     @db[:a].columns.should == [:a, :b]
 
     @m.apply(@db, @dir, 1)
-    [:schema_info, :a].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info, :a].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :b].each{|n| @db.table_exists?(n).should == false}
     @db[:a].columns.should == [:a]
 
     @m.apply(@db, @dir, 0)
-    [:schema_info].each{|n| @db.table_exists?(n).should be_true}
-    [:schema_migrations, :a, :b].each{|n| @db.table_exists?(n).should be_false}
+    [:schema_info].each{|n| @db.table_exists?(n).should == true}
+    [:schema_migrations, :a, :b].each{|n| @db.table_exists?(n).should == false}
   end
 end
diff --git a/spec/integration/model_test.rb b/spec/integration/model_test.rb
index fd56c8d..ba32f7a 100644
--- a/spec/integration/model_test.rb
+++ b/spec/integration/model_test.rb
@@ -22,6 +22,23 @@ describe "Sequel::Model basic support" do
     Item.find(:name=>'J').should == Item.load(:id=>1, :name=>'J')
   end
   
+  specify ".finder should create method that returns first matching item" do
+    def Item.by_name(name) where(:name=>name) end
+    Item.finder :by_name
+    Item.first_by_name('J').should == nil
+    Item.create(:name=>'J')
+    Item.first_by_name('J').should == Item.load(:id=>1, :name=>'J')
+    Item.first_by_name(['J', 'K']).should == Item.load(:id=>1, :name=>'J')
+  end
+  
+  specify ".prepared_finder should create method that returns first matching item" do
+    def Item.by_name(name) where(:name=>name) end
+    Item.prepared_finder :by_name
+    Item.first_by_name('J').should == nil
+    Item.create(:name=>'J')
+    Item.first_by_name('J').should == Item.load(:id=>1, :name=>'J')
+  end
+  
   specify ".find_or_create should return first matching item, or create it if it doesn't exist" do
     Item.all.should == []
     Item.find_or_create(:name=>'J').should == Item.load(:id=>1, :name=>'J')
@@ -107,7 +124,7 @@ describe "Sequel::Model basic support" do
 
     i.rb = true
     i.destroy.should be_nil
-    i.exists?.should be_true
+    i.exists?.should == true
     i.hooks.should == [:ad, :adr]
 
     i.name = 'K'
@@ -118,7 +135,7 @@ describe "Sequel::Model basic support" do
 
     i.rb = false
     i.destroy.should_not be_nil
-    i.exists?.should be_false
+    i.exists?.should == false
     i.hooks.should == [:ad, :adc]
   end
 
diff --git a/spec/integration/prepared_statement_test.rb b/spec/integration/prepared_statement_test.rb
index 4602e4d..513fc97 100644
--- a/spec/integration/prepared_statement_test.rb
+++ b/spec/integration/prepared_statement_test.rb
@@ -283,7 +283,7 @@ describe "Bound Argument Types" do
     @db.drop_table?(:items)
   end
 
-  cspecify "should handle date type", [:do, :sqlite], :mssql, [:jdbc, :sqlite], :oracle do 
+  cspecify "should handle date type", [:do, :sqlite], [:tinytds], [:jdbc, :mssql], [:jdbc, :sqlite], :oracle do 
     @ds.filter(:d=>:$x).prepare(:first, :ps_date).call(:x=>@vs[:d])[:d].should == @vs[:d]
   end
 
diff --git a/spec/integration/schema_test.rb b/spec/integration/schema_test.rb
index b0785e8..4522242 100644
--- a/spec/integration/schema_test.rb
+++ b/spec/integration/schema_test.rb
@@ -206,7 +206,7 @@ describe "Database foreign key parsing" do
   end
 
   specify "should parse foreign key information into an array of hashes" do
-    @db.create_table!(:a, :engine=>:InnoDB){primary_key :c; Integer :d; index :d, :unique=>true}
+    @db.create_table!(:a, :engine=>:InnoDB){primary_key :c; Integer :d, :null => false, :unique => true}
     @db.create_table!(:b, :engine=>:InnoDB){foreign_key :e, :a}
     @pr[:a]
     @pr[:b, [[:e], :a, [:pk, :c]]]
@@ -217,7 +217,7 @@ describe "Database foreign key parsing" do
     @db.alter_table(:b){add_foreign_key [:f], :a, :key=>[:c]}
     @pr[:b, [[:e], :a, [:pk, :c]], [[:f], :a, [:c]], [[:f], :a, [:d]]]
 
-    @db.alter_table(:a){add_index [:d, :c], :unique=>true}
+    @db.alter_table(:a){add_unique_constraint [:d, :c]}
     @db.alter_table(:b){add_foreign_key [:f, :e], :a, :key=>[:d, :c]}
     @pr[:b, [[:e], :a, [:pk, :c]], [[:f], :a, [:c]], [[:f], :a, [:d]], [[:f, :e], :a, [:d, :c]]]
 
@@ -232,9 +232,9 @@ describe "Database foreign key parsing" do
   end
 
   specify "should handle composite foreign and primary keys" do
-    @db.create_table!(:a, :engine=>:InnoDB){Integer :b; Integer :c; primary_key [:b, :c]; index [:c, :b], :unique=>true}
-    @db.create_table!(:b, :engine=>:InnoDB){Integer :e; Integer :f; foreign_key [:e, :f], :a; foreign_key [:f, :e], :a, :key=>[:c, :b]}
-    @pr[:b, [[:e, :f], :a, [:pk, :b, :c]], [[:f, :e], :a, [:c, :b]]]
+    @db.create_table!(:a, :engine=>:InnoDB){Integer :b, :null=>false; Integer :c, :null=>false; Integer :d, :null=>false; primary_key [:b, :c]; unique [:d, :c]}
+    @db.create_table!(:b, :engine=>:InnoDB){Integer :e, :null=>false; Integer :f, :null=>false; Integer :g, :null=>false; foreign_key [:e, :f], :a; foreign_key [:g, :f], :a, :key=>[:d, :c]}
+    @pr[:b, [[:e, :f], :a, [:pk, :b, :c]], [[:g, :f], :a, [:d, :c]]]
   end
 end if DB.supports_foreign_key_parsing?
 
@@ -266,14 +266,20 @@ describe "Database schema modifiers" do
     @db[:items2].all.should == [{:number=>10}]
   end
   
+  specify "should not raise an error if table doesn't exist when using drop_table :if_exists" do
+    proc{@db.drop_table(:items, :if_exists=>true)}.should_not raise_error
+  end if DB.supports_drop_table_if_exists?
+
   describe "views" do
     before do
+      @db.drop_view(:items_view2) rescue nil
       @db.drop_view(:items_view) rescue nil
       @db.create_table(:items){Integer :number}
       @ds.insert(:number=>1)
       @ds.insert(:number=>2)
     end
     after do
+      @db.drop_view(:items_view2) rescue nil
       @db.drop_view(:items_view) rescue nil
     end
 
@@ -282,11 +288,46 @@ describe "Database schema modifiers" do
       @db[:items_view].map(:number).should == [1]
     end
 
+    specify "should create views with check options correctly" do
+      @db.create_view(:items_view, @ds.where{number > 2}, :check=>true)
+      proc{@db[:items_view].insert(1)}.should raise_error(Sequel::DatabaseError)
+      @db[:items_view].insert(3)
+      @db[:items_view].select_order_map(:number).should == [3]
+      @db.create_view(:items_view2, @db[:items_view].where{number > 1}, :check=>true)
+      proc{@db[:items_view2].insert(1)}.should raise_error(Sequel::DatabaseError)
+      proc{@db[:items_view2].insert(2)}.should raise_error(Sequel::DatabaseError)
+      @db[:items_view2].insert(4)
+      @db[:items_view2].select_order_map(:number).should == [3, 4]
+      @ds.select_order_map(:number).should == [1, 2, 3, 4]
+    end if DB.supports_views_with_check_option?
+
+    specify "should create views with local check options correctly" do
+      @db.create_view(:items_view, @ds.where{number > 2})
+      @db[:items_view].insert(3)
+      @db[:items_view].select_order_map(:number).should == [3]
+      @db.create_view(:items_view2, @db[:items_view].where{number > 1}, :check=>:local)
+      proc{@db[:items_view2].insert(1)}.should raise_error(Sequel::DatabaseError)
+      @db[:items_view2].insert(2)
+      @db[:items_view2].insert(4)
+      @db[:items_view2].select_order_map(:number).should == [3, 4]
+      @ds.select_order_map(:number).should == [1, 2, 2, 3, 4]
+    end if DB.supports_views_with_local_check_option?
+
     cspecify "should create views with explicit columns correctly", :sqlite do
       @db.create_view(:items_view, @ds.where(:number=>1), :columns=>[:n])
       @db[:items_view].map(:n).should == [1]
     end
 
+    specify "should drop views correctly" do
+      @db.create_view(:items_view, @ds.where(:number=>1))
+      @db.drop_view(:items_view)
+      proc{@db[:items_view].map(:number)}.should raise_error(Sequel::DatabaseError)
+    end
+
+    specify "should not raise an error if view doesn't exist when using drop_view :if_exists" do
+      proc{@db.drop_view(:items_view, :if_exists=>true)}.should_not raise_error
+    end if DB.supports_drop_table_if_exists?
+
     specify "should create or replace views correctly" do
       @db.create_or_replace_view(:items_view, @ds.where(:number=>1))
       @db[:items_view].map(:number).should == [1]
@@ -298,7 +339,7 @@ describe "Database schema modifiers" do
   specify "should handle create table in a rolled back transaction" do
     @db.drop_table?(:items)
     @db.transaction(:rollback=>:always){@db.create_table(:items){Integer :number}}
-    @db.table_exists?(:items).should be_false
+    @db.table_exists?(:items).should == false
   end if DB.supports_transactional_ddl?
   
   describe "join tables" do
@@ -400,7 +441,7 @@ describe "Database schema modifiers" do
     @ds.all.should == [{:number=>10, :name=>nil}]
   end
 
-  cspecify "should add primary key columns to tables correctly", :h2, :derby do
+  cspecify "should add primary key columns to tables correctly", :derby do
     @db.create_table!(:items){Integer :number}
     @ds.insert(:number=>10)
     @db.alter_table(:items){add_primary_key :id}
@@ -418,7 +459,7 @@ describe "Database schema modifiers" do
     proc{@ds.insert(10)}.should_not raise_error
   end
 
-  cspecify "should add foreign key columns to tables correctly", :hsqldb do
+  specify "should add foreign key columns to tables correctly" do
     @db.create_table!(:items){primary_key :id}
     @ds.insert
     i = @ds.get(:id)
@@ -545,7 +586,7 @@ describe "Database schema modifiers" do
   end
 
   specify "should add unnamed unique constraints and foreign key table constraints correctly" do
-    @db.create_table!(:items, :engine=>:InnoDB){Integer :id; Integer :item_id}
+    @db.create_table!(:items, :engine=>:InnoDB){Integer :id, :null => false; Integer :item_id, :null => false}
     @db.alter_table(:items) do
       add_unique_constraint [:item_id, :id]
       add_foreign_key [:id, :item_id], :items, :key=>[:item_id, :id]
@@ -607,18 +648,16 @@ describe "Database schema modifiers" do
     @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id]
   end
 
-  cspecify "should remove foreign key columns from tables correctly", :h2, :mssql, :hsqldb do
-    # MySQL with InnoDB cannot drop foreign key columns unless you know the
-    # name of the constraint, see Bug #14347
-    @db.create_table!(:items, :engine=>:MyISAM) do
+  specify "should remove foreign key columns from tables correctly" do
+    @db.create_table!(:items, :engine=>:InnoDB) do
       primary_key :id
       Integer :i
       foreign_key :item_id, :items
     end
     @ds.insert(:i=>10)
-    @db.drop_column(:items, :item_id)
+    @db.alter_table(:items){drop_foreign_key :item_id}
     @db.schema(:items, :reload=>true).map{|x| x.first}.should == [:id, :i]
-  end
+  end if DB.supports_foreign_key_parsing?
 
   specify "should remove multiple columns in a single alter_table block" do
     @db.create_table!(:items) do
diff --git a/spec/integration/spec_helper.rb b/spec/integration/spec_helper.rb
index f2fd2b8..c862f3a 100644
--- a/spec/integration/spec_helper.rb
+++ b/spec/integration/spec_helper.rb
@@ -60,7 +60,9 @@ def Sequel.guarded?(*checked)
   false
 end
 
-(defined?(RSpec) ? RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do
+require File.join(File.dirname(File.expand_path(__FILE__)), "../rspec_helper.rb")
+
+RSPEC_EXAMPLE_GROUP.class_eval do
   def log
     begin
       DB.loggers << Logger.new(STDOUT)
diff --git a/spec/integration/transaction_test.rb b/spec/integration/transaction_test.rb
index f9b71d1..6d4cf92 100644
--- a/spec/integration/transaction_test.rb
+++ b/spec/integration/transaction_test.rb
@@ -23,10 +23,10 @@ describe "Database transactions" do
   end
 
   specify "should have #in_transaction? work correctly" do
-    @db.in_transaction?.should be_false
+    @db.in_transaction?.should == false
     c = nil
     @db.transaction{c = @db.in_transaction?}
-    c.should be_true
+    c.should == true
   end
 
   specify "should correctly rollback transactions" do
@@ -84,7 +84,7 @@ describe "Database transactions" do
   end 
   
   if DB.supports_savepoints?
-    cspecify "should support nested transactions through savepoints using the savepoint option", [:jdbc, :sqlite] do
+    specify "should support nested transactions through savepoints using the savepoint option" do
       @db.transaction do
         @d << {:name => '1'}
         @db.transaction(:savepoint=>true) do
@@ -107,6 +107,30 @@ describe "Database transactions" do
 
       @d.order(:name).map(:name).should == %w{1 4 5 6}
     end
+
+    specify "should support nested transactions through savepoints using the auto_savepoint option" do
+      @db.transaction(:auto_savepoint=>true) do
+        @d << {:name => '1'}
+        @db.transaction do
+          @d << {:name => '2'}
+          @db.transaction do
+            @d << {:name => '3'}
+            raise Sequel::Rollback
+          end
+        end
+        @d << {:name => '4'}
+        @db.transaction(:auto_savepoint=>true) do
+          @d << {:name => '6'}
+          @db.transaction do
+            @d << {:name => '7'}
+            raise Sequel::Rollback
+          end
+        end
+        @d << {:name => '5'}
+      end
+
+      @d.order(:name).map(:name).should == %w{1 4 5 6}
+    end
   end
 
   specify "should handle returning inside of the block by committing" do
diff --git a/spec/integration/type_test.rb b/spec/integration/type_test.rb
index fed0ea9..a3090d9 100644
--- a/spec/integration/type_test.rb
+++ b/spec/integration/type_test.rb
@@ -6,6 +6,10 @@ describe "Supported types" do
     DB[:items]
   end
 
+  after(:all) do
+    DB.drop_table?(:items)
+  end
+
   specify "should support casting correctly" do
     ds = create_items_table_with_column(:number, Integer)
     ds.insert(:number => 1)
@@ -69,7 +73,7 @@ describe "Supported types" do
     ds.all.should == [{:name=>'Test User'*100}]
   end
   
-  cspecify "should support generic date type", [:do, :sqlite], [:jdbc, :sqlite], :mssql, :oracle do
+  cspecify "should support generic date type", [:do, :sqlite], [:jdbc, :sqlite], [:tinytds], [:jdbc, :mssql], :oracle do
     ds = create_items_table_with_column(:dat, Date)
     d = Date.today
     ds.insert(:dat => d)
diff --git a/spec/model/association_reflection_spec.rb b/spec/model/association_reflection_spec.rb
index 7bc97b7..977bee1 100644
--- a/spec/model/association_reflection_spec.rb
+++ b/spec/model/association_reflection_spec.rb
@@ -269,22 +269,23 @@ describe Sequel::Model::Associations::AssociationReflection, "#remove_before_des
 
   it "should be true for many_to_one and many_to_many associations" do
     @c.many_to_one :c, :class=>@c
-    @c.association_reflection(:c).remove_before_destroy?.should be_true
+    @c.association_reflection(:c).remove_before_destroy?.should == true
     @c.many_to_many :cs, :class=>@c
-    @c.association_reflection(:cs).remove_before_destroy?.should be_true
+    @c.association_reflection(:cs).remove_before_destroy?.should == true
   end
 
   it "should be false for one_to_one and one_to_many associations" do
     @c.one_to_one :c, :class=>@c
-    @c.association_reflection(:c).remove_before_destroy?.should be_false
+    @c.association_reflection(:c).remove_before_destroy?.should == false
     @c.one_to_many :cs, :class=>@c
-    @c.association_reflection(:cs).remove_before_destroy?.should be_false
+    @c.association_reflection(:cs).remove_before_destroy?.should == false
   end
 end
 
-describe Sequel::Model::Associations::AssociationReflection, "#eager_limit_strategy" do
+describe Sequel::Model::Associations::AssociationReflection, "#filter_by_associations_limit_strategy" do
   before do
-    @c = Class.new(Sequel::Model(:a))
+    @db = Sequel.mock
+    @c = Class.new(Sequel::Model(@db[:a]))
   end
   after do
     Sequel::Model.default_eager_limit_strategy = true
@@ -292,53 +293,83 @@ describe Sequel::Model::Associations::AssociationReflection, "#eager_limit_strat
 
   it "should be nil by default for *_one associations" do
     @c.many_to_one :c, :class=>@c
-    @c.association_reflection(:c).eager_limit_strategy.should be_nil
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should be_nil
     @c.one_to_one :c, :class=>@c
-    @c.association_reflection(:c).eager_limit_strategy.should be_nil
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should be_nil
+    @c.one_through_one :c, :class=>@c
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should be_nil
   end
 
-  it "should be :ruby by default for *_many associations" do
+  it "should be :correlated_subquery by default for one_to_many and one_to_one with :order associations" do
+    @c.one_to_one :c, :class=>@c, :order=>:a
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should == :correlated_subquery
     @c.one_to_many :cs, :class=>@c, :limit=>1
-    @c.association_reflection(:cs).eager_limit_strategy.should == :ruby
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :correlated_subquery
+  end
+
+  it "should be :ruby by default for many_to_many and one_through_one with :order associations" do
+    @c.one_through_one :c, :class=>@c, :order=>:a
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should == :ruby
     @c.many_to_many :cs, :class=>@c, :limit=>1
-    @c.association_reflection(:cs).eager_limit_strategy.should == :ruby
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :ruby
   end
 
-  it "should be nil for many_to_one associations" do
+  it "should be nil for many_to_one associations even if :eager_limit_strategy or :filter_limit_strategy is used" do
     @c.many_to_one :c, :class=>@c, :eager_limit_strategy=>true
-    @c.association_reflection(:c).eager_limit_strategy.should be_nil
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should be_nil
     @c.many_to_one :c, :class=>@c, :eager_limit_strategy=>:distinct_on
-    @c.association_reflection(:c).eager_limit_strategy.should be_nil
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should be_nil
+    @c.many_to_one :c, :class=>@c, :filter_limit_strategy=>true
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should be_nil
   end
 
   it "should be a symbol for other associations if given a symbol" do
     @c.one_to_one :c, :class=>@c, :eager_limit_strategy=>:distinct_on
-    @c.association_reflection(:c).eager_limit_strategy.should == :distinct_on
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should == :distinct_on
     @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>:window_function, :limit=>1
-    @c.association_reflection(:cs).eager_limit_strategy.should == :window_function
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :window_function
   end
 
   it "should use :distinct_on for one_to_one associations if picking and the association dataset supports ordered distinct on" do
     def (@c.dataset).supports_ordered_distinct_on?() true end
     @c.one_to_one :c, :class=>@c, :eager_limit_strategy=>true
-    @c.association_reflection(:c).eager_limit_strategy.should == :distinct_on
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should == :distinct_on
   end
 
   it "should use :window_function for associations if picking and the association dataset supports window functions" do
     def (@c.dataset).supports_window_functions?() true end
     @c.one_to_one :c, :class=>@c, :eager_limit_strategy=>true
-    @c.association_reflection(:c).eager_limit_strategy.should == :window_function
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should == :window_function
     @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1
-    @c.association_reflection(:cs).eager_limit_strategy.should == :window_function
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :window_function
     @c.many_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1
-    @c.association_reflection(:cs).eager_limit_strategy.should == :window_function
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :window_function
   end
 
-  it "should use :ruby for *_many associations if picking and the association dataset doesn't window functions" do
+  it "should use :ruby for one_to_many associations if the database doesn't support limits in subqueries" do
+    def (@c.dataset).supports_limits_in_correlated_subqueries?; false; end
     @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1
-    @c.association_reflection(:cs).eager_limit_strategy.should == :ruby
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :ruby
+  end
+
+  it "should use :ruby for one_to_many associations if offset doesn't work in correlated subqueries and an offset is used" do
+    def (@c.dataset).supports_offsets_in_correlated_subqueries?; false; end
+    @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :correlated_subquery
+    @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>[1, 1]
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :ruby
+  end
+
+  it "should use :ruby for one_to_many associations if composite primary key is used and database does not multiple columns in IN" do
+    def (@c.dataset).supports_multiple_column_in?; false; end
+    @c.set_primary_key [:id, :id2]
+    @c.one_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1, :key=>[:id, :id2]
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :ruby
+  end
+
+  it "should use :ruby for many_to_many associations if picking and the association dataset doesn't window functions" do
     @c.many_to_many :cs, :class=>@c, :eager_limit_strategy=>true, :limit=>1
-    @c.association_reflection(:cs).eager_limit_strategy.should == :ruby
+    @c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :ruby
   end
 
   it "should respect Model.default_eager_limit_strategy to *_many associations" do
@@ -348,24 +379,32 @@ describe Sequel::Model::Associations::AssociationReflection, "#eager_limit_strat
     c.dataset = :a
     c.default_eager_limit_strategy.should == :window_function
     c.one_to_many :cs, :class=>c, :limit=>1
-    c.association_reflection(:cs).eager_limit_strategy.should == :window_function
+    c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :window_function
     c.many_to_many :cs, :class=>c, :limit=>1
-    c.association_reflection(:cs).eager_limit_strategy.should == :window_function
+    c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :window_function
 
     Sequel::Model.default_eager_limit_strategy = true
     c = Class.new(Sequel::Model)
     c.dataset = :a
     c.one_to_many :cs, :class=>c, :limit=>1
-    c.association_reflection(:cs).eager_limit_strategy.should == :ruby
+    c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :correlated_subquery
     def (c.dataset).supports_window_functions?() true end
     c.many_to_many :cs, :class=>c, :limit=>1
-    c.association_reflection(:cs).eager_limit_strategy.should == :window_function
+    c.association_reflection(:cs).send(:filter_by_associations_limit_strategy).should == :window_function
   end
 
   it "should ignore Model.default_eager_limit_strategy for one_to_one associations" do
     @c.default_eager_limit_strategy = :window_function
     @c.one_to_one :c, :class=>@c
-    @c.association_reflection(:c).eager_limit_strategy.should be_nil
+    @c.association_reflection(:c).send(:filter_by_associations_limit_strategy).should be_nil
+  end
+end
+
+describe Sequel::Model::Associations::AssociationReflection, "#apply_eager_dataset_changes" do
+  it "should apply the eager block as well as the association options to the dataset" do
+    @c = Class.new(Sequel::Model(:foo))
+    @c.one_to_many :cs, :class=>@c, :select=>:a, :order=>:b do |ds| ds.where(:c) end
+    @c.association_reflection(:cs).apply_eager_dataset_changes(@c.dataset).sql.should == 'SELECT a FROM foo WHERE c ORDER BY b'
   end
 end
 
diff --git a/spec/model/associations_spec.rb b/spec/model/associations_spec.rb
index 7f23ffd..002147c 100644
--- a/spec/model/associations_spec.rb
+++ b/spec/model/associations_spec.rb
@@ -106,6 +106,7 @@ describe Sequel::Model, "associate" do
       klass.many_to_one :par, :clone=>:par_parent, :select=>:b
       klass.one_to_many :par1s, :clone=>:par_parent1s, :order=>:b, :limit=>10, :block=>nil
       klass.many_to_many(:par2s, :clone=>:par_parent2s, :order=>:c){3}
+      klass.many_to_one :par3, :clone=>:par
       
       klass.association_reflection(:par).associated_class.should == ParParent
       klass.association_reflection(:par1s).associated_class.should == ParParent
@@ -114,12 +115,16 @@ describe Sequel::Model, "associate" do
       klass.association_reflection(:par)[:order].should == :a
       klass.association_reflection(:par).select.should == :b
       klass.association_reflection(:par)[:block].call.should == 1
+      klass.association_reflection(:par)[:eager_block].call.should == 1
       klass.association_reflection(:par1s)[:limit].should == 10
       klass.association_reflection(:par1s)[:order].should == :b
       klass.association_reflection(:par1s)[:block].should == nil
       klass.association_reflection(:par2s)[:after_load].length.should == 1
       klass.association_reflection(:par2s)[:order].should == :c
       klass.association_reflection(:par2s)[:block].call.should == 3
+
+      klass.association_reflection(:par3)[:block].call.should == 1
+      klass.association_reflection(:par3)[:eager_block].call.should == 1
     ensure
       Object.send(:remove_const, :ParParent)
     end
@@ -138,6 +143,13 @@ describe Sequel::Model, "associate" do
     proc{c.one_to_one :c2, :clone=>:cs}.should_not raise_error
   end
 
+  it "should allow cloning of many_to_many to one_through_one associations and vice-versa" do
+    c = Class.new(Sequel::Model(:c))
+    c.many_to_many :c
+    proc{c.one_through_one :cs, :clone=>:c}.should_not raise_error
+    proc{c.many_to_many :c2, :clone=>:cs}.should_not raise_error
+  end
+
   it "should clear associations cache when refreshing object manually" do
     c = Class.new(Sequel::Model(:c))
     c.many_to_one :c
@@ -1075,6 +1087,14 @@ describe Sequel::Model, "one_to_one" do
     proc{p.parent = nil}.should raise_error(Sequel::Error)
   end
 
+  it "should not validate the associated object in setter if the :validate=>false option is used" do
+    @c2.one_to_one :parent, :class => @c2, :validate=>false
+    n = @c2.new(:id => 1234)
+    a = @c2.new(:id => 2345)
+    def a.validate() errors.add(:id, 'foo') end
+    (n.parent = a).should == a
+  end
+
   it "should raise an error if a callback is not a proc or symbol" do
     @c2.one_to_one :parent, :class => @c2, :before_set=>Object.new
     proc{@c2.new.parent = @c2.load(:id=>1)}.should raise_error(Sequel::Error)
@@ -1354,6 +1374,18 @@ describe Sequel::Model, "one_to_many" do
     n.remove_attribute(a).should == a
   end
 
+  it "should not raise exception in add_ and remove_ if the :raise_on_save_failure=>false option is used" do
+    @c2.one_to_many :attributes, :class => @c1, :raise_on_save_failure=>false
+    n = @c2.new(:id => 1234)
+    a = @c1.new(:id => 2345)
+    def a.validate() errors.add(:id, 'foo') end
+    n.associations[:attributes] = []
+    n.add_attribute(a).should == nil
+    n.associations[:attributes].should == []
+    n.remove_attribute(a).should == nil
+    n.associations[:attributes].should == []
+  end
+
   it "should raise an error if the model object doesn't have a valid primary key" do
     @c2.one_to_many :attributes, :class => @c1 
     a = @c2.new
@@ -1828,14 +1860,19 @@ describe Sequel::Model, "many_to_many" do
 
   it "should use implicit key values and join table if omitted" do
     @c2.many_to_many :attributes, :class => @c1 
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)'
+  end
+  
+  it "should use implicit key values and join table if omitted" do
+    @c2.one_through_one :attribute, :class => @c1 
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 1'
   end
   
   it "should use implicit class if omitted" do
     begin
       class ::Tag < Sequel::Model; end
       @c2.many_to_many :tags
-      @c2.new(:id => 1234).tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN nodes_tags ON ((nodes_tags.tag_id = tags.id) AND (nodes_tags.node_id = 1234))'
+      @c2.new(:id => 1234).tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN nodes_tags ON (nodes_tags.tag_id = tags.id) WHERE (nodes_tags.node_id = 1234)'
     ensure
       Object.send(:remove_const, :Tag)
     end
@@ -1847,7 +1884,7 @@ describe Sequel::Model, "many_to_many" do
         class Tag < Sequel::Model; end
       end
       @c2.many_to_many :tags, :class=>'::Historical::Tag'
-      @c2.new(:id => 1234).tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN nodes_tags ON ((nodes_tags.tag_id = tags.id) AND (nodes_tags.node_id = 1234))'
+      @c2.new(:id => 1234).tags_dataset.sql.should == 'SELECT tags.* FROM tags INNER JOIN nodes_tags ON (nodes_tags.tag_id = tags.id) WHERE (nodes_tags.node_id = 1234)'
     ensure
       Object.send(:remove_const, :Historical)
     end
@@ -1855,41 +1892,41 @@ describe Sequel::Model, "many_to_many" do
   
   it "should respect :eager_loader_predicate_key when lazily loading" do
     @c2.many_to_many :attributes, :class => @c1, :eager_loading_predicate_key=>Sequel.subscript(:attributes_nodes__node_id, 0)
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id[0] = 1234))'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id[0] = 1234)'
   end
   
   it "should use explicit key values and join table if given" do
     @c2.many_to_many :attributes, :class => @c1, :left_key => :nodeid, :right_key => :attributeid, :join_table => :attribute2node
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attribute2node ON ((attribute2node.attributeid = attributes.id) AND (attribute2node.nodeid = 1234))'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attribute2node ON (attribute2node.attributeid = attributes.id) WHERE (attribute2node.nodeid = 1234)'
   end
   
   it "should support a conditions option" do
     @c2.many_to_many :attributes, :class => @c1, :conditions => {:a=>32}
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (a = 32)'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((a = 32) AND (attributes_nodes.node_id = 1234))'
 
     @c2.many_to_many :attributes, :class => @c1, :conditions => ['a = ?', 32]
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (a = 32)'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((a = 32) AND (attributes_nodes.node_id = 1234))'
     @c2.new(:id => 1234).attributes.should == [@c1.load({})]
   end
   
   it "should support an order option" do
     @c2.many_to_many :attributes, :class => @c1, :order => :blah
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) ORDER BY blah'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) ORDER BY blah'
   end
   
   it "should support an array for the order option" do
     @c2.many_to_many :attributes, :class => @c1, :order => [:blah1, :blah2]
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) ORDER BY blah1, blah2'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) ORDER BY blah1, blah2'
   end
   
   it "should support :left_primary_key and :right_primary_key options" do
     @c2.many_to_many :attributes, :class => @c1, :left_primary_key=>:xxx, :right_primary_key=>:yyy
-    @c2.new(:id => 1234, :xxx=>5).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.yyy) AND (attributes_nodes.node_id = 5))'
+    @c2.new(:id => 1234, :xxx=>5).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.yyy) WHERE (attributes_nodes.node_id = 5)'
   end
   
   it "should support composite keys" do
     @c2.many_to_many :attributes, :class => @c1, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:id, :x], :right_primary_key=>[:id, :y]
-    @c2.load(:id => 1234, :x=>5).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.r1 = attributes.id) AND (attributes_nodes.r2 = attributes.y) AND (attributes_nodes.l1 = 1234) AND (attributes_nodes.l2 = 5))'
+    @c2.load(:id => 1234, :x=>5).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.r1 = attributes.id) AND (attributes_nodes.r2 = attributes.y)) WHERE ((attributes_nodes.l1 = 1234) AND (attributes_nodes.l2 = 5))'
   end
   
   it "should not issue query if not all keys have values" do
@@ -1914,13 +1951,13 @@ describe Sequel::Model, "many_to_many" do
   it "should support a select option" do
     @c2.many_to_many :attributes, :class => @c1, :select => :blah
 
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT blah FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT blah FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)'
   end
   
   it "should support an array for the select option" do
     @c2.many_to_many :attributes, :class => @c1, :select => [Sequel::SQL::ColumnAll.new(:attributes), :attribute_nodes__blah2]
 
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.*, attribute_nodes.blah2 FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.*, attribute_nodes.blah2 FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)'
   end
   
   it "should accept a block" do
@@ -1930,7 +1967,7 @@ describe Sequel::Model, "many_to_many" do
 
     n = @c2.new(:id => 1234)
     n.xxx = 555
-    n.attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (xxx = 555)'
+    n.attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((attributes_nodes.node_id = 1234) AND (xxx = 555))'
   end
 
   it "should allow the :order option while accepting a block" do
@@ -1940,7 +1977,7 @@ describe Sequel::Model, "many_to_many" do
 
     n = @c2.new(:id => 1234)
     n.xxx = 555
-    n.attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (xxx = 555) ORDER BY blah1, blah2'
+    n.attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((attributes_nodes.node_id = 1234) AND (xxx = 555)) ORDER BY blah1, blah2'
   end
 
   it "should support a :dataset option that is used instead of the default" do
@@ -1957,7 +1994,7 @@ describe Sequel::Model, "many_to_many" do
   end
 
   it "should support a :dataset option that accepts the reflection as an argument" do
-    @c2.many_to_many :attributes, :class => @c1, :dataset=>lambda{|opts| opts.associated_dataset.join_table(:natural, :an).filter(:an__nodeid=>pk)}, :order=> :a, :limit=>10, :select=>nil do |ds|
+    @c2.many_to_many :attributes, :class => @c1, :dataset=>lambda{|opts| opts.associated_class.natural_join(:an).filter(:an__nodeid=>pk)}, :order=> :a, :limit=>10, :select=>nil do |ds|
       ds.filter(:xxx => @xxx)
     end
 
@@ -1970,9 +2007,9 @@ describe Sequel::Model, "many_to_many" do
 
   it "should support a :limit option" do
     @c2.many_to_many :attributes, :class => @c1 , :limit=>10
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) LIMIT 10'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 10'
     @c2.many_to_many :attributes, :class => @c1 , :limit=>[10, 10]
-    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) LIMIT 10 OFFSET 10'
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 10 OFFSET 10'
   end
 
   it "should have the :eager option affect the _dataset method" do
@@ -1984,7 +2021,7 @@ describe Sequel::Model, "many_to_many" do
     @c2.many_to_many :attributes, :class => @c1, :join_table => :attribute2node___attributes_nodes
     n = @c2.load(:id => 1234)
     a = @c1.load(:id => 2345)
-    n.attributes_dataset.sql.should == "SELECT attributes.* FROM attributes INNER JOIN attribute2node AS attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))"
+    n.attributes_dataset.sql.should == "SELECT attributes.* FROM attributes INNER JOIN attribute2node AS attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)"
     a.should == n.add_attribute(a)
     a.should == n.remove_attribute(a)
     n.remove_all_attributes
@@ -2025,7 +2062,6 @@ describe Sequel::Model, "many_to_many" do
     @c2.many_to_many :attributes, :class => @c1
     
     n = @c2.load(:id => 1234)
-    a = @c1.load(:id => 2345)
     @c1.dataset._fetch = []
     proc{n.add_attribute(2345)}.should raise_error(Sequel::NoMatchingRow)
     DB.sqls.should == ["SELECT * FROM attributes WHERE id = 2345"]
@@ -2063,7 +2099,7 @@ describe Sequel::Model, "many_to_many" do
     n = @c2.new(:id => 1234)
     @c1.dataset._fetch = {:id=>234}
     n.remove_attribute(234).should == @c1.load(:id => 234)
-    DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE (attributes.id = 234) LIMIT 1",
+    DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((attributes_nodes.node_id = 1234) AND (attributes.id = 234)) LIMIT 1",
       "DELETE FROM attributes_nodes WHERE ((node_id = 1234) AND (attribute_id = 234))"]
   end
     
@@ -2155,8 +2191,8 @@ describe Sequel::Model, "many_to_many" do
     @c1.dataset._fetch = {:id=>234, :y=>8}
     @c1.load(:id => 234, :y=>8).should == n.remove_attribute([234, 8])
     sqls = DB.sqls
-    ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE ((attributes.id = 234) AND (attributes.y = 8)) LIMIT 1",
-      "SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234)) WHERE ((attributes.y = 8) AND (attributes.id = 234)) LIMIT 1"].should include(sqls.shift)
+    ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((attributes_nodes.node_id = 1234) AND (attributes.id = 234) AND (attributes.y = 8)) LIMIT 1",
+      "SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((attributes_nodes.node_id = 1234) AND (attributes.y = 8) AND (attributes.id = 234)) LIMIT 1"].should include(sqls.shift)
     sqls.should == ["DELETE FROM attributes_nodes WHERE ((node_id = 1234) AND (attribute_id = 234))"]
   end
     
@@ -2218,7 +2254,7 @@ describe Sequel::Model, "many_to_many" do
     @c2.many_to_many :attributes, :class => @c1
     
     @c2.new(:id => 1234).attributes.should == [@c1.load({})]
-    DB.sqls.should == ['SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))']
+    DB.sqls.should == ['SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)']
   end
 
   it "should populate cache when accessed" do
@@ -2245,7 +2281,7 @@ describe Sequel::Model, "many_to_many" do
     n = @c2.new(:id => 1234)
     n.associations[:attributes] = 42
     n.attributes(true).should_not == 42
-    DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1234))"]
+    DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)"]
   end
 
   it "should add item to cache if it exists when calling add_" do
@@ -2571,7 +2607,7 @@ describe Sequel::Model, "many_to_many" do
   
   it "should support a :distinct option that uses the DISTINCT clause" do
     @c2.many_to_many :attributes, :class => @c1, :distinct=>true
-    @c2.load(:id=>10).attributes_dataset.sql.should == "SELECT DISTINCT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 10))"
+    @c2.load(:id=>10).attributes_dataset.sql.should == "SELECT DISTINCT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 10)"
   end
 
   it "should not apply association options when removing all associated records" do
@@ -2588,35 +2624,320 @@ describe Sequel::Model, "many_to_many" do
     end
     @c1.dataset._fetch = {:id=>2}
     @c2.load(:id=>1).remove_attribute(2)
-    DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.attribute_id = attributes.id) AND (attributes_nodes.node_id = 1)) WHERE ((join_table_att = 3) AND (attributes.id = 2)) LIMIT 1",
+    DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((attributes_nodes.node_id = 1) AND (join_table_att = 3) AND (attributes.id = 2)) LIMIT 1",
       "DELETE FROM attributes_nodes WHERE ((node_id = 1) AND (attribute_id = 2))"] 
   end
 end
 
-describe "Filtering by associations" do
+describe Sequel::Model, "one_through_one" do
   before do
-    @Album = Class.new(Sequel::Model(:albums))
-    artist = @Artist = Class.new(Sequel::Model(:artists))
-    tag = @Tag = Class.new(Sequel::Model(:tags))
-    track = @Track = Class.new(Sequel::Model(:tracks))
-    album_info = @AlbumInfo = Class.new(Sequel::Model(:album_infos))
+    @c1 = Class.new(Sequel::Model(:attributes)) do
+      unrestrict_primary_key
+      attr_accessor :yyy
+      def self.name; 'Attribute'; end
+      def self.to_s; 'Attribute'; end
+      columns :id, :y, :z
+    end
+
+    @c2 = Class.new(Sequel::Model(:nodes)) do
+      unrestrict_primary_key
+      attr_accessor :xxx
+      
+      def self.name; 'Node'; end
+      def self.to_s; 'Node'; end
+      columns :id, :x
+    end
+    @dataset = @c2.dataset
+    @c1.dataset.autoid = 1
+
+    [@c1, @c2].each{|c| c.dataset._fetch = {}}
+    DB.reset
+  end
+
+  it "should use implicit key values and join table if omitted" do
+    @c2.one_through_one :attribute, :class => @c1 
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 1'
+  end
+  
+  it "should respect :eager_loader_predicate_key when lazily loading" do
+    @c2.one_through_one :attribute, :class => @c1, :eager_loading_predicate_key=>Sequel.subscript(:attributes_nodes__node_id, 0)
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id[0] = 1234) LIMIT 1'
+  end
+  
+  it "should use explicit key values and join table if given" do
+    @c2.one_through_one :attribute, :class => @c1, :left_key => :nodeid, :right_key => :attributeid, :join_table => :attribute2node
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attribute2node ON (attribute2node.attributeid = attributes.id) WHERE (attribute2node.nodeid = 1234) LIMIT 1'
+  end
+  
+  it "should support a conditions option" do
+    @c2.one_through_one :attribute, :class => @c1, :conditions => {:a=>32}
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((a = 32) AND (attributes_nodes.node_id = 1234)) LIMIT 1'
+
+    @c2.one_through_one :attribute, :class => @c1, :conditions => ['a = ?', 32]
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((a = 32) AND (attributes_nodes.node_id = 1234)) LIMIT 1'
+    @c2.new(:id => 1234).attribute.should == @c1.load({})
+  end
+  
+  it "should support an order option" do
+    @c2.one_through_one :attribute, :class => @c1, :order => :blah
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) ORDER BY blah LIMIT 1'
+  end
+  
+  it "should support an array for the order option" do
+    @c2.one_through_one :attribute, :class => @c1, :order => [:blah1, :blah2]
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) ORDER BY blah1, blah2 LIMIT 1'
+  end
+  
+  it "should support :left_primary_key and :right_primary_key options" do
+    @c2.one_through_one :attribute, :class => @c1, :left_primary_key=>:xxx, :right_primary_key=>:yyy
+    @c2.new(:id => 1234, :xxx=>5).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.yyy) WHERE (attributes_nodes.node_id = 5) LIMIT 1'
+  end
+  
+  it "should support composite keys" do
+    @c2.one_through_one :attribute, :class => @c1, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:id, :x], :right_primary_key=>[:id, :y]
+    @c2.load(:id => 1234, :x=>5).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON ((attributes_nodes.r1 = attributes.id) AND (attributes_nodes.r2 = attributes.y)) WHERE ((attributes_nodes.l1 = 1234) AND (attributes_nodes.l2 = 5)) LIMIT 1'
+  end
+  
+  it "should not issue query if not all keys have values" do
+    @c2.one_through_one :attribute, :class => @c1, :left_key=>[:l1, :l2], :right_key=>[:r1, :r2], :left_primary_key=>[:id, :x], :right_primary_key=>[:id, :y]
+    @c2.load(:id => 1234, :x=>nil).attribute.should == nil
+    DB.sqls.should == []
+  end
+  
+  it "should raise an Error unless same number of composite keys used" do
+    proc{@c2.one_through_one :attribute, :class => @c1, :left_key=>[:node_id, :id]}.should raise_error(Sequel::Error)
+    proc{@c2.one_through_one :attribute, :class => @c1, :left_primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error)
+    proc{@c2.one_through_one :attribute, :class => @c1, :left_key=>[:node_id, :id], :left_primary_key=>:id}.should raise_error(Sequel::Error)
+    proc{@c2.one_through_one :attribute, :class => @c1, :left_key=>:id, :left_primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error)
+    proc{@c2.one_through_one :attribute, :class => @c1, :left_key=>[:node_id, :id, :x], :left_primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error)
+    
+    proc{@c2.one_through_one :attribute, :class => @c1, :right_primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error)
+    proc{@c2.one_through_one :attribute, :class => @c1, :right_key=>[:node_id, :id], :right_primary_key=>:id}.should raise_error(Sequel::Error)
+    proc{@c2.one_through_one :attribute, :class => @c1, :right_key=>:id, :left_primary_key=>[:node_id, :id]}.should raise_error(Sequel::Error)
+    proc{@c2.one_through_one :attribute, :class => @c1, :right_key=>[:node_id, :id, :x], :right_primary_key=>[:parent_id, :id]}.should raise_error(Sequel::Error)
+  end
+  
+  it "should support a select option" do
+    @c2.one_through_one :attribute, :class => @c1, :select => :blah
+
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT blah FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 1'
+  end
+  
+  it "should support an array for the select option" do
+    @c2.one_through_one :attribute, :class => @c1, :select => [Sequel::SQL::ColumnAll.new(:attributes), :attribute_nodes__blah2]
+
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.*, attribute_nodes.blah2 FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 1'
+  end
+  
+  it "should accept a block" do
+    @c2.one_through_one :attribute, :class => @c1 do |ds|
+      ds.filter(:xxx => @xxx)
+    end
+
+    n = @c2.new(:id => 1234)
+    n.xxx = 555
+    n.attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((attributes_nodes.node_id = 1234) AND (xxx = 555)) LIMIT 1'
+  end
+
+  it "should allow the :order option while accepting a block" do
+    @c2.one_through_one :attribute, :class => @c1, :order=>[:blah1, :blah2] do |ds|
+      ds.filter(:xxx => @xxx)
+    end
+
+    n = @c2.new(:id => 1234)
+    n.xxx = 555
+    n.attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE ((attributes_nodes.node_id = 1234) AND (xxx = 555)) ORDER BY blah1, blah2 LIMIT 1'
+  end
+
+  it "should support a :dataset option that is used instead of the default" do
+    c1 = @c1
+    @c2.one_through_one :attribute, :class => @c1, :dataset=>proc{c1.join_table(:natural, :an).filter(:an__nodeid=>pk)}, :order=> :a, :select=>nil do |ds|
+      ds.filter(:xxx => @xxx)
+    end
+
+    n = @c2.new(:id => 1234)
+    n.xxx = 555
+    n.attribute_dataset.sql.should == 'SELECT * FROM attributes NATURAL JOIN an WHERE ((an.nodeid = 1234) AND (xxx = 555)) ORDER BY a LIMIT 1'
+    n.attribute.should == @c1.load({})
+    DB.sqls.should == ['SELECT * FROM attributes NATURAL JOIN an WHERE ((an.nodeid = 1234) AND (xxx = 555)) ORDER BY a LIMIT 1']
+  end
+
+  it "should support a :dataset option that accepts the reflection as an argument" do
+    @c2.one_through_one :attribute, :class => @c1, :dataset=>lambda{|opts| opts.associated_class.natural_join(:an).filter(:an__nodeid=>pk)}, :order=> :a, :select=>nil do |ds|
+      ds.filter(:xxx => @xxx)
+    end
+
+    n = @c2.new(:id => 1234)
+    n.xxx = 555
+    n.attribute_dataset.sql.should == 'SELECT * FROM attributes NATURAL JOIN an WHERE ((an.nodeid = 1234) AND (xxx = 555)) ORDER BY a LIMIT 1'
+    n.attribute.should == @c1.load({})
+    DB.sqls.should == ['SELECT * FROM attributes NATURAL JOIN an WHERE ((an.nodeid = 1234) AND (xxx = 555)) ORDER BY a LIMIT 1']
+  end
+
+  it "should support a :limit option to specify an offset" do
+    @c2.one_through_one :attribute, :class => @c1 , :limit=>[nil, 10]
+    @c2.new(:id => 1234).attribute_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 1 OFFSET 10'
+  end
+
+  it "should have the :eager option affect the _dataset method" do
+    @c2.one_through_one :attribute, :class => @c2 , :eager=>:attribute
+    @c2.new(:id => 1234).attribute_dataset.opts[:eager].should == {:attribute=>nil}
+  end
+  
+  it "should handle an aliased join table" do
+    @c2.one_through_one :attribute, :class => @c1, :join_table => :attribute2node___attributes_nodes
+    n = @c2.load(:id => 1234)
+    n.attribute_dataset.sql.should == "SELECT attributes.* FROM attributes INNER JOIN attribute2node AS attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 1"
+  end
+  
+  it "should raise an error if the model object doesn't have a valid primary key" do
+    @c2.one_through_one :attribute, :class => @c1 
+    a = @c2.new
+    proc{a.attribute_dataset}.should raise_error(Sequel::Error)
+  end
+  
+  it "should provide an array with all members of the association" do
+    @c2.one_through_one :attribute, :class => @c1
+    
+    @c2.new(:id => 1234).attribute.should == @c1.load({})
+    DB.sqls.should == ['SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 1']
+  end
+
+  it "should populate cache when accessed" do
+    @c2.one_through_one :attribute, :class => @c1
+
+    n = @c2.new(:id => 1234)
+    n.associations.include?(:attribute).should == false
+    atts = n.attribute
+    atts.should == n.associations[:attribute]
+  end
+
+  it "should use cache if available" do
+    @c2.one_through_one :attribute, :class => @c1
+
+    n = @c2.new(:id => 1234)
+    n.associations[:attribute] = 42
+    n.attribute.should == 42
+    DB.sqls.should == []
+  end
+
+  it "should not use cache if asked to reload" do
+    @c2.one_through_one :attribute, :class => @c1
+
+    n = @c2.new(:id => 1234)
+    n.associations[:attribute] = 42
+    n.attribute(true).should_not == 42
+    DB.sqls.should == ["SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234) LIMIT 1"]
+  end
+
+  it "should not add associations methods directly to class" do
+    @c2.one_through_one :attribute, :class => @c1
+    im = @c2.instance_methods.collect{|x| x.to_s}
+    im.should(include('attribute'))
+    im.should(include('attribute_dataset'))
+    im2 = @c2.instance_methods(false).collect{|x| x.to_s}
+    im2.should_not(include('attribute'))
+    im2.should_not(include('attribute_dataset'))
+  end
+
+  it "should support after_load association callback" do
+    h = []
+    @c2.one_through_one :attribute, :class => @c1, :after_load=>[proc{|x,y| h << [x.pk, y.pk]}, :al]
+    @c2.class_eval do
+      self::Foo = h
+      def al(v)
+        model::Foo << v.pk
+      end
+    end
+    @c1.dataset._fetch = [{:id=>20}]
+    p = @c2.load(:id=>10, :parent_id=>20)
+    attribute = p.attribute
+    h.should == [[10, 20], 20]
+    attribute.pk.should == 20
+  end
+
+  it "should support a :distinct option that uses the DISTINCT clause" do
+    @c2.one_through_one :attribute, :class => @c1, :distinct=>true
+    @c2.load(:id=>10).attribute_dataset.sql.should == "SELECT DISTINCT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 10) LIMIT 1"
+  end
+end
+
+describe "Filtering by associations" do
+  before(:all) do
+    db = Sequel.mock
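+    # window function and DISTINCT ON support is stubbed in so that the
+    # limit-strategy specs below can exercise those SQL forms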
+    db.extend_datasets do
+      def supports_window_functions?; true; end
+      def supports_distinct_on?; true; end
+    end
+    @Album = Class.new(Sequel::Model(db[:albums]))
+    artist = @Artist = Class.new(Sequel::Model(db[:artists]))
+    tag = @Tag = Class.new(Sequel::Model(db[:tags]))
+    track = @Track = Class.new(Sequel::Model(db[:tracks]))
+    album_info = @AlbumInfo = Class.new(Sequel::Model(db[:album_infos]))
     @Artist.columns :id, :id1, :id2
     @Tag.columns :id, :tid1, :tid2
     @Track.columns :id, :album_id, :album_id1, :album_id2
     @AlbumInfo.columns :id, :album_id, :album_id1, :album_id2
     @Album.class_eval do
       columns :id, :id1, :id2, :artist_id, :artist_id1, :artist_id2
-      many_to_one :artist, :class=>artist
+      b = lambda{|ds| ds.where(:name=>'B')}
+      c = {:name=>'A'}
+
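+      # Cloned association prefixes used below: a_* add :conditions, b_* add the
+      # block filter, l_* add :limit/:order, al_* combine the l_* limits with
+      # :conditions, and the c* variants use composite keys.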
+      many_to_one :artist, :class=>artist, :key=>:artist_id
       one_to_many :tracks, :class=>track, :key=>:album_id
+      one_to_one :track, :class=>track, :key=>:album_id
       one_to_one :album_info, :class=>album_info, :key=>:album_id
-      many_to_many :tags, :class=>tag, :left_key=>:album_id, :join_table=>:albums_tags
+      many_to_many :tags, :class=>tag, :left_key=>:album_id, :join_table=>:albums_tags, :right_key=>:tag_id
+
+      many_to_one :a_artist, :clone=>:artist, :conditions=>c
+      one_to_many :a_tracks, :clone=>:tracks, :conditions=>c
+      one_to_one :a_album_info, :clone=>:album_info, :conditions=>c
+      many_to_many :a_tags, :clone=>:tags, :conditions=>c
+
+      many_to_one :b_artist, :clone=>:artist, &b
+      one_to_many :b_tracks, :clone=>:tracks, &b
+      one_to_one :b_album_info, :clone=>:album_info, &b
+      many_to_many :b_tags, :clone=>:tags, &b
+
+      one_to_many :l_tracks, :clone=>:tracks, :limit=>10
+      one_to_one :l_track, :clone=>:tracks, :order=>:name
+      many_to_many :l_tags, :clone=>:tags, :limit=>10
+      one_through_one :l_tag, :clone=>:tags, :order=>:name
+
+      one_to_many :al_tracks, :clone=>:l_tracks, :conditions=>c
+      one_to_one :al_track, :clone=>:l_track, :conditions=>c
+      many_to_many :al_tags, :clone=>:l_tags, :conditions=>c
+      one_through_one :al_tag, :clone=>:l_tag, :conditions=>c
 
       many_to_one :cartist, :class=>artist, :key=>[:artist_id1, :artist_id2], :primary_key=>[:id1, :id2]
       one_to_many :ctracks, :class=>track, :key=>[:album_id1, :album_id2], :primary_key=>[:id1, :id2]
       one_to_one :calbum_info, :class=>album_info, :key=>[:album_id1, :album_id2], :primary_key=>[:id1, :id2]
       many_to_many :ctags, :class=>tag, :left_key=>[:album_id1, :album_id2], :left_primary_key=>[:id1, :id2], :right_key=>[:tag_id1, :tag_id2], :right_primary_key=>[:tid1, :tid2], :join_table=>:albums_tags
+
+      many_to_one :a_cartist, :clone=>:cartist, :conditions=>c
+      one_to_many :a_ctracks, :clone=>:ctracks, :conditions=>c
+      one_to_one :a_calbum_info, :clone=>:calbum_info, :conditions=>c
+      many_to_many :a_ctags, :clone=>:ctags, :conditions=>c
+
+      many_to_one :b_cartist, :clone=>:cartist, &b
+      one_to_many :b_ctracks, :clone=>:ctracks, &b
+      one_to_one :b_calbum_info, :clone=>:calbum_info, &b
+      many_to_many :b_ctags, :clone=>:ctags, &b
+
+      one_to_many :l_ctracks, :clone=>:ctracks, :limit=>10
+      one_to_one :l_ctrack, :clone=>:ctracks, :order=>:name
+      many_to_many :l_ctags, :clone=>:ctags, :limit=>10
+      one_through_one :l_ctag, :clone=>:ctags, :order=>:name
+
+      one_to_many :al_ctracks, :clone=>:l_ctracks, :conditions=>c
+      one_to_one :al_ctrack, :clone=>:l_ctrack, :conditions=>c
+      many_to_many :al_ctags, :clone=>:l_ctags, :conditions=>c
+      one_through_one :al_ctag, :clone=>:l_ctag, :conditions=>c
     end
   end
+  after do
+    @Album.default_eager_limit_strategy = true
+  end
 
   it "should be able to filter on many_to_one associations" do
     @Album.filter(:artist=>@Artist.load(:id=>3)).sql.should == 'SELECT * FROM albums WHERE (albums.artist_id = 3)'
@@ -2634,6 +2955,111 @@ describe "Filtering by associations" do
     @Album.filter(:tags=>@Tag.load(:id=>3)).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id = 3) AND (albums_tags.album_id IS NOT NULL))))'
   end
 
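+  # Associations defined with :conditions or a block cannot be filtered with a
+  # simple key comparison, so the generated filter falls back to an IN subquery
+  # that applies the association's own filter.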
+  it "should be able to filter on many_to_one associations with :conditions" do
+    @Album.filter(:a_artist=>@Artist.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists WHERE ((name = 'A') AND (artists.id IS NOT NULL) AND (artists.id = 3))))"
+  end
+
+  it "should be able to filter on one_to_many associations with :conditions" do
+    @Album.filter(:a_tracks=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'A') AND (tracks.album_id IS NOT NULL) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :conditions" do
+    @Album.filter(:a_album_info=>@AlbumInfo.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_many associations with :conditions" do
+    @Album.filter(:a_tags=>@Tag.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'A') AND (albums_tags.album_id IS NOT NULL) AND (tags.id = 3))))"
+  end
+
+  it "should be able to filter on many_to_one associations with block" do
+    @Album.filter(:b_artist=>@Artist.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists WHERE ((name = 'B') AND (artists.id IS NOT NULL) AND (artists.id = 3))))"
+  end
+
+  it "should be able to filter on one_to_many associations with block" do
+    @Album.filter(:b_tracks=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'B') AND (tracks.album_id IS NOT NULL) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with block" do
+    @Album.filter(:b_album_info=>@AlbumInfo.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_many associations with block" do
+    @Album.filter(:b_tags=>@Tag.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'B') AND (albums_tags.album_id IS NOT NULL) AND (tags.id = 3))))"
+  end
+
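+  # Associations with a :limit or :order additionally restrict the subquery using
+  # a window function, DISTINCT ON, or correlated subquery, depending on the
+  # filter limit strategy in effect.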
+  it "should be able to filter on one_to_many associations with :limit" do
+    @Album.filter(:l_tracks=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x <= 10))) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :order" do
+    @Album.filter(:l_track=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT DISTINCT ON (tracks.album_id) tracks.id FROM tracks ORDER BY tracks.album_id, name)) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :filter_limit_strategy" do
+    @Album.one_to_one :l_track2, :clone=>:track, :filter_limit_strategy=>:window_function
+    @Album.filter(:l_track2=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x = 1))) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :eager_limit_strategy" do
+    @Album.one_to_one :l_track2, :clone=>:track, :eager_limit_strategy=>:window_function
+    @Album.filter(:l_track2=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x = 1))) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :order and :filter_limit_strategy" do
+    @Album.one_to_one :l_track2, :clone=>:l_track, :filter_limit_strategy=>:window_function
+    @Album.filter(:l_track2=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x = 1))) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :order and :eager_limit_strategy" do
+    @Album.one_to_one :l_track2, :clone=>:l_track, :eager_limit_strategy=>:window_function
+    @Album.filter(:l_track2=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x = 1))) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :order and Model.default_eager_limit_strategy" do
+    @Album.default_eager_limit_strategy = :window_function
+    @Album.one_to_one :l_track2, :clone=>:l_track
+    @Album.filter(:l_track2=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x = 1))) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :order and :eager_limit_strategy=>:union" do
+    @Album.one_to_one :l_track2, :clone=>:l_track, :eager_limit_strategy=>:union
+    @Album.filter(:l_track2=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT DISTINCT ON (tracks.album_id) tracks.id FROM tracks ORDER BY tracks.album_id, name)) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :order and :eager_limit_strategy=>:ruby" do
+    @Album.one_to_one :l_track2, :clone=>:l_track, :eager_limit_strategy=>:ruby
+    @Album.filter(:l_track2=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT DISTINCT ON (tracks.album_id) tracks.id FROM tracks ORDER BY tracks.album_id, name)) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :filter_limit_strategy :correlated_subquery" do
+    @Album.one_to_one :l_track2, :clone=>:track, :filter_limit_strategy=>:correlated_subquery
+    @Album.filter(:l_track2=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT t1.id FROM tracks AS t1 WHERE (t1.album_id = tracks.album_id) LIMIT 1)) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_many associations with :limit" do
+    @Album.filter(:l_tags=>@Tag.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((albums_tags.album_id IS NOT NULL) AND ((albums_tags.album_id, tags.id) IN (SELECT b, c FROM (SELECT albums_tags.album_id AS b, tags.id AS c, row_number() OVER (PARTITION BY albums_tags.album_id) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) [...]
+  end
+
+  it "should be able to filter on one_through_one associations with :order" do
+    @Album.filter(:l_tag=>@Tag.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((albums_tags.album_id IS NOT NULL) AND ((albums_tags.album_id, tags.id) IN (SELECT DISTINCT ON (albums_tags.album_id) albums_tags.album_id, tags.id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) ORDER BY albums_tags.album_id, name)) AND (tags.id = 3))))"
+  end
+
+  it "should be able to filter on one_to_many associations with :limit and :conditions" do
+    @Album.filter(:al_tracks=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'A') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id) AS x_sequel_row_number_x FROM tracks WHERE (name = 'A')) AS t1 WHERE (x_sequel_row_number_x <= 10))) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :order and :conditions" do
+    @Album.filter(:al_track=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'A') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT DISTINCT ON (tracks.album_id) tracks.id FROM tracks WHERE (name = 'A') ORDER BY tracks.album_id, name)) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_many associations with :limit and :conditions" do
+    @Album.filter(:al_tags=>@Tag.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'A') AND (albums_tags.album_id IS NOT NULL) AND ((albums_tags.album_id, tags.id) IN (SELECT b, c FROM (SELECT albums_tags.album_id AS b, tags.id AS c, row_number() OVER (PARTITION BY albums_tags.album_id) AS x_sequel_row_number_x FROM tags INNER JOIN albums_tags ON (albums_tags [...]
+  end
+
+  it "should be able to filter on one_through_one associations with :order and :conditions" do
+    @Album.filter(:al_tag=>@Tag.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'A') AND (albums_tags.album_id IS NOT NULL) AND ((albums_tags.album_id, tags.id) IN (SELECT DISTINCT ON (albums_tags.album_id) albums_tags.album_id, tags.id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (name = 'A') ORDER BY albums_tags.album_id, name) [...]
+  end
+
   it "should be able to filter on many_to_one associations with composite keys" do
     @Album.filter(:cartist=>@Artist.load(:id1=>3, :id2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1 = 3) AND (albums.artist_id2 = 4))'
   end
@@ -2650,6 +3076,75 @@ describe "Filtering by associations" do
     @Album.filter(:ctags=>@Tag.load(:tid1=>3, :tid2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE ((albums_tags.tag_id1 = 3) AND (albums_tags.tag_id2 = 4) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL))))'
   end
 
+  it "should be able to filter on many_to_one associations with :conditions and composite keys" do
+    @Album.filter(:a_cartist=>@Artist.load(:id=>5, :id1=>3, :id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'A') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_many associations with :conditions and composite keys" do
+    @Album.filter(:a_ctracks=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'A') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :conditions and composite keys" do
+    @Album.filter(:a_calbum_info=>@AlbumInfo.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_many associations with block and composite keys" do
+    @Album.filter(:a_ctags=>@Tag.load(:id=>5, :tid1=>3, :tid2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'A') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_one associations with block and composite keys" do
+    @Album.filter(:b_cartist=>@Artist.load(:id=>5, :id1=>3, :id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'B') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_many associations with block and composite keys" do
+    @Album.filter(:b_ctracks=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'B') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with block and composite keys" do
+    @Album.filter(:b_calbum_info=>@AlbumInfo.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_many associations with block and composite keys" do
+    @Album.filter(:b_ctags=>@Tag.load(:id=>5, :tid1=>3, :tid2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'B') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_many associations with :limit and composite keys" do
+    @Album.filter(:l_ctracks=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id1, tracks.album_id2) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x <= 10))) AND (trac [...]
+  end
+
+  it "should be able to filter on one_to_many associations with composite keys and :filter_limit_strategy :correlated_subquery" do
+    @Album.one_to_one :l_ctracks2, :clone=>:l_ctracks, :filter_limit_strategy=>:correlated_subquery
+    @Album.filter(:l_ctracks2=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT t1.id FROM tracks AS t1 WHERE ((t1.album_id1 = tracks.album_id1) AND (t1.album_id2 = tracks.album_id2)) LIMIT 1)) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on one_to_one associations with :order and composite keys" do
+    @Album.filter(:l_ctrack=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT DISTINCT ON (tracks.album_id1, tracks.album_id2) tracks.id FROM tracks ORDER BY tracks.album_id1, tracks.album_id2, name)) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_many associations with :limit and composite keys" do
+    @Album.filter(:l_ctags=>@Tag.load(:id=>5, :tid1=>3, :tid2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND ((albums_tags.album_id1, albums_tags.album_id2, tags.id) IN (SELECT b, c, d FROM (SELECT albums_tags.alb [...]
+  end
+
+  it "should be able to filter on one_through_one associations with :order and composite keys" do
+    @Album.filter(:l_ctag=>@Tag.load(:id=>5, :tid1=>3, :tid2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND ((albums_tags.album_id1, albums_tags.album_id2, tags.id) IN (SELECT DISTINCT ON (albums_tags.album_id1, a [...]
+  end
+
+  it "should be able to filter on one_to_many associations with :limit and :conditions and composite keys" do
+    @Album.filter(:al_ctracks=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'A') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT id FROM (SELECT tracks.id, row_number() OVER (PARTITION BY tracks.album_id1, tracks.album_id2) AS x_sequel_row_number_x FROM tracks WHERE (name = 'A')) AS t1 WHERE (x_s [...]
+  end
+
+  it "should be able to filter on one_to_one associations with :order and :conditions and composite keys" do
+    @Album.filter(:al_ctrack=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'A') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT DISTINCT ON (tracks.album_id1, tracks.album_id2) tracks.id FROM tracks WHERE (name = 'A') ORDER BY tracks.album_id1, tracks.album_id2, name)) AND (tracks.id = 5))))"
+  end
+
+  it "should be able to filter on many_to_many associations with :limit and :conditions and composite keys" do
+    @Album.filter(:al_ctags=>@Tag.load(:id=>5, :tid1=>3, :tid2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'A') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND ((albums_tags.album_id1, albums_tags.album_id2, tags.id) IN (SELECT b, c, d FROM (SELE [...]
+  end
+
+  it "should be able to filter on one_through_one associations with :order and :conditions and composite keys" do
+    @Album.filter(:al_ctag=>@Tag.load(:id=>5, :tid1=>3, :tid2=>4)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'A') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND ((albums_tags.album_id1, albums_tags.album_id2, tags.id) IN (SELECT DISTINCT ON (albums [...]
+  end
+
   it "should work inside a complex filter" do
     artist = @Artist.load(:id=>3)
     @Album.filter{foo & {:artist=>artist}}.sql.should == 'SELECT * FROM albums WHERE (foo AND (albums.artist_id = 3))'
@@ -2706,6 +3201,38 @@ describe "Filtering by associations" do
     @Album.exclude(:tags=>@Tag.load(:id=>3)).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id = 3) AND (albums_tags.album_id IS NOT NULL)))) OR (albums.id IS NULL))'
   end
 
+  it "should be able to exclude on many_to_one associations with :conditions" do
+    @Album.exclude(:a_artist=>@Artist.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id NOT IN (SELECT artists.id FROM artists WHERE ((name = 'A') AND (artists.id IS NOT NULL) AND (artists.id = 3)))) OR (albums.artist_id IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_many associations with :conditions" do
+    @Album.exclude(:a_tracks=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'A') AND (tracks.album_id IS NOT NULL) AND (tracks.id = 5)))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_one associations with :conditions" do
+    @Album.exclude(:a_album_info=>@AlbumInfo.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id = 5)))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_many associations with :conditions" do
+    @Album.exclude(:a_tags=>@Tag.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'A') AND (albums_tags.album_id IS NOT NULL) AND (tags.id = 3)))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_one associations with block" do
+    @Album.exclude(:b_artist=>@Artist.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id NOT IN (SELECT artists.id FROM artists WHERE ((name = 'B') AND (artists.id IS NOT NULL) AND (artists.id = 3)))) OR (albums.artist_id IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_many associations with block" do
+    @Album.exclude(:b_tracks=>@Track.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'B') AND (tracks.album_id IS NOT NULL) AND (tracks.id = 5)))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_one associations with block" do
+    @Album.exclude(:b_album_info=>@AlbumInfo.load(:id=>5, :album_id=>3)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id = 5)))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_many associations with block" do
+    @Album.exclude(:b_tags=>@Tag.load(:id=>3)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'B') AND (albums_tags.album_id IS NOT NULL) AND (tags.id = 3)))) OR (albums.id IS NULL))"
+  end
+
   it "should be able to exclude on many_to_one associations with composite keys" do
     @Album.exclude(:cartist=>@Artist.load(:id1=>3, :id2=>4)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1 != 3) OR (albums.artist_id2 != 4) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))'
   end
@@ -2722,6 +3249,38 @@ describe "Filtering by associations" do
     @Album.exclude(:ctags=>@Tag.load(:tid1=>3, :tid2=>4)).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE ((albums_tags.tag_id1 = 3) AND (albums_tags.tag_id2 = 4) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))'
   end
 
+  it "should be able to exclude on many_to_one associations with :conditions and composite keys" do
+    @Album.exclude(:a_cartist=>@Artist.load(:id=>5, :id1=>3, :id2=>4)).sql.should == "SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'A') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id = 5)))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_many associations with :conditions and composite keys" do
+    @Album.exclude(:a_ctracks=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'A') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id = 5)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_one associations with :conditions and composite keys" do
+    @Album.exclude(:a_calbum_info=>@AlbumInfo.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id = 5)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_many associations with block and composite keys" do
+    @Album.exclude(:a_ctags=>@Tag.load(:id=>5, :tid1=>3, :tid2=>4)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'A') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id = 5)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_one associations with block and composite keys" do
+    @Album.exclude(:b_cartist=>@Artist.load(:id=>5, :id1=>3, :id2=>4)).sql.should == "SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'B') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id = 5)))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_many associations with block and composite keys" do
+    @Album.exclude(:b_ctracks=>@Track.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'B') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id = 5)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_one associations with block and composite keys" do
+    @Album.exclude(:b_calbum_info=>@AlbumInfo.load(:id=>5, :album_id1=>3, :album_id2=>4)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id = 5)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_many associations with block and composite keys" do
+    @Album.exclude(:b_ctags=>@Tag.load(:id=>5, :tid1=>3, :tid2=>4)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'B') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id = 5)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
   it "should be able to filter on multiple many_to_one associations" do
     @Album.filter(:artist=>[@Artist.load(:id=>3), @Artist.load(:id=>4)]).sql.should == 'SELECT * FROM albums WHERE (albums.artist_id IN (3, 4))'
   end
@@ -2738,6 +3297,38 @@ describe "Filtering by associations" do
     @Album.filter(:tags=>[@Tag.load(:id=>3), @Tag.load(:id=>4)]).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (3, 4)) AND (albums_tags.album_id IS NOT NULL))))'
   end
 
+  it "should be able to filter on multiple many_to_one associations with :conditions" do
+    @Album.filter(:a_artist=>[@Artist.load(:id=>3), @Artist.load(:id=>4)]).sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists WHERE ((name = 'A') AND (artists.id IS NOT NULL) AND (artists.id IN (3, 4)))))"
+  end
+
+  it "should be able to filter on multiple one_to_many associations with :conditions" do
+    @Album.filter(:a_tracks=>[@Track.load(:id=>5, :album_id=>3), @Track.load(:id=>6, :album_id=>4)]).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'A') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (5, 6)))))"
+  end
+
+  it "should be able to filter on multiple one_to_one associations with :conditions" do
+    @Album.filter(:a_album_info=>[@AlbumInfo.load(:id=>5, :album_id=>3), @AlbumInfo.load(:id=>6, :album_id=>4)]).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id IN (5, 6)))))"
+  end
+
+  it "should be able to filter on multiple many_to_many associations with :conditions" do
+    @Album.filter(:a_tags=>[@Tag.load(:id=>3), @Tag.load(:id=>4)]).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'A') AND (albums_tags.album_id IS NOT NULL) AND (tags.id IN (3, 4)))))"
+  end
+
+  it "should be able to filter on multiple many_to_one associations with block" do
+    @Album.filter(:b_artist=>[@Artist.load(:id=>3), @Artist.load(:id=>4)]).sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists WHERE ((name = 'B') AND (artists.id IS NOT NULL) AND (artists.id IN (3, 4)))))"
+  end
+
+  it "should be able to filter on multiple one_to_many associations with block" do
+    @Album.filter(:b_tracks=>[@Track.load(:id=>5, :album_id=>3), @Track.load(:id=>6, :album_id=>4)]).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'B') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (5, 6)))))"
+  end
+
+  it "should be able to filter on multiple one_to_one associations with block" do
+    @Album.filter(:b_album_info=>[@AlbumInfo.load(:id=>5, :album_id=>3), @AlbumInfo.load(:id=>6, :album_id=>4)]).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id IN (5, 6)))))"
+  end
+
+  it "should be able to filter on multiple many_to_many associations with block" do
+    @Album.filter(:b_tags=>[@Tag.load(:id=>3), @Tag.load(:id=>4)]).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'B') AND (albums_tags.album_id IS NOT NULL) AND (tags.id IN (3, 4)))))"
+  end
+
   it "should be able to filter on multiple many_to_one associations with composite keys" do
     @Album.filter(:cartist=>[@Artist.load(:id1=>3, :id2=>4), @Artist.load(:id1=>5, :id2=>6)]).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN ((3, 4), (5, 6)))'
   end
@@ -2754,6 +3345,38 @@ describe "Filtering by associations" do
     @Album.filter(:ctags=>[@Tag.load(:tid1=>3, :tid2=>4), @Tag.load(:tid1=>5, :tid2=>6)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN ((3, 4), (5, 6))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL))))'
   end
 
+  it "should be able to filter on multiple many_to_one associations with :conditions and composite keys" do
+    @Album.filter(:a_cartist=>[@Artist.load(:id=>7, :id1=>3, :id2=>4), @Artist.load(:id=>8, :id1=>5, :id2=>6)]).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'A') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id IN (7, 8)))))"
+  end
+
+  it "should be able to filter on multiple one_to_many associations with :conditions and composite keys" do
+    @Album.filter(:a_ctracks=>[@Track.load(:id=>7, :album_id1=>3, :album_id2=>4), @Track.load(:id=>8, :album_id1=>5, :album_id2=>6)]).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'A') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (7, 8)))))"
+  end
+
+  it "should be able to filter on multiple one_to_one associations with :conditions and composite keys" do
+    @Album.filter(:a_calbum_info=>[@AlbumInfo.load(:id=>7, :album_id1=>3, :album_id2=>4), @AlbumInfo.load(:id=>8, :album_id1=>5, :album_id2=>6)]).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id IN (7, 8)))))"
+  end
+
+  it "should be able to filter on multiple many_to_many associations with block and composite keys" do
+    @Album.filter(:a_ctags=>[@Tag.load(:id=>7, :tid1=>3, :tid2=>4), @Tag.load(:id=>8, :tid1=>5, :tid2=>6)]).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'A') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id IN (7, 8)))))"
+  end
+
+  it "should be able to filter on multiple many_to_one associations with block and composite keys" do
+    @Album.filter(:b_cartist=>[@Artist.load(:id=>7, :id1=>3, :id2=>4), @Artist.load(:id=>8, :id1=>5, :id2=>6)]).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'B') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id IN (7, 8)))))"
+  end
+
+  it "should be able to filter on multiple one_to_many associations with block and composite keys" do
+    @Album.filter(:b_ctracks=>[@Track.load(:id=>7, :album_id1=>3, :album_id2=>4), @Track.load(:id=>8, :album_id1=>5, :album_id2=>6)]).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'B') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (7, 8)))))"
+  end
+
+  it "should be able to filter on multiple one_to_one associations with block and composite keys" do
+    @Album.filter(:b_calbum_info=>[@AlbumInfo.load(:id=>7, :album_id1=>3, :album_id2=>4), @AlbumInfo.load(:id=>8, :album_id1=>5, :album_id2=>6)]).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id IN (7, 8)))))"
+  end
+
+  it "should be able to filter on multiple many_to_many associations with block and composite keys" do
+    @Album.filter(:b_ctags=>[@Tag.load(:id=>7, :tid1=>3, :tid2=>4), @Tag.load(:id=>8, :tid1=>5, :tid2=>6)]).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'B') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id IN (7, 8)))))"
+  end
+
   it "should be able to exclude on multiple many_to_one associations" do
     @Album.exclude(:artist=>[@Artist.load(:id=>3), @Artist.load(:id=>4)]).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id NOT IN (3, 4)) OR (albums.artist_id IS NULL))'
   end
@@ -2770,6 +3393,38 @@ describe "Filtering by associations" do
     @Album.exclude(:tags=>[@Tag.load(:id=>3), @Tag.load(:id=>4)]).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (3, 4)) AND (albums_tags.album_id IS NOT NULL)))) OR (albums.id IS NULL))'
   end
 
+  it "should be able to exclude on multiple many_to_one associations with :conditions" do
+    @Album.exclude(:a_artist=>[@Artist.load(:id=>3), @Artist.load(:id=>4)]).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id NOT IN (SELECT artists.id FROM artists WHERE ((name = 'A') AND (artists.id IS NOT NULL) AND (artists.id IN (3, 4))))) OR (albums.artist_id IS NULL))"
+  end
+
+  it "should be able to exclude on multiple one_to_many associations with :conditions" do
+    @Album.exclude(:a_tracks=>[@Track.load(:id=>5, :album_id=>3), @Track.load(:id=>6, :album_id=>4)]).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'A') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (5, 6))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on multiple one_to_one associations with :conditions" do
+    @Album.exclude(:a_album_info=>[@AlbumInfo.load(:id=>5, :album_id=>3), @AlbumInfo.load(:id=>6, :album_id=>4)]).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id IN (5, 6))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on multiple many_to_many associations with :conditions" do
+    @Album.exclude(:a_tags=>[@Tag.load(:id=>3), @Tag.load(:id=>4)]).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'A') AND (albums_tags.album_id IS NOT NULL) AND (tags.id IN (3, 4))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on multiple many_to_one associations with block" do
+    @Album.exclude(:b_artist=>[@Artist.load(:id=>3), @Artist.load(:id=>4)]).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id NOT IN (SELECT artists.id FROM artists WHERE ((name = 'B') AND (artists.id IS NOT NULL) AND (artists.id IN (3, 4))))) OR (albums.artist_id IS NULL))"
+  end
+
+  it "should be able to exclude on multiple one_to_many associations with block" do
+    @Album.exclude(:b_tracks=>[@Track.load(:id=>5, :album_id=>3), @Track.load(:id=>6, :album_id=>4)]).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'B') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (5, 6))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on multiple one_to_one associations with block" do
+    @Album.exclude(:b_album_info=>[@AlbumInfo.load(:id=>5, :album_id=>3), @AlbumInfo.load(:id=>6, :album_id=>4)]).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id IN (5, 6))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on multiple many_to_many associations with block" do
+    @Album.exclude(:b_tags=>[@Tag.load(:id=>3), @Tag.load(:id=>4)]).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'B') AND (albums_tags.album_id IS NOT NULL) AND (tags.id IN (3, 4))))) OR (albums.id IS NULL))"
+  end
+
   it "should be able to exclude on multiple many_to_one associations with composite keys" do
     @Album.exclude(:cartist=>[@Artist.load(:id1=>3, :id2=>4), @Artist.load(:id1=>5, :id2=>6)]).sql.should == 'SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN ((3, 4), (5, 6))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))'
   end
@@ -2786,6 +3441,38 @@ describe "Filtering by associations" do
     @Album.exclude(:ctags=>[@Tag.load(:tid1=>3, :tid2=>4), @Tag.load(:tid1=>5, :tid2=>6)]).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN ((3, 4), (5, 6))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))'
   end
 
+  it "should be able to exclude on multiple many_to_one associations with :conditions and composite keys" do
+    @Album.exclude(:a_cartist=>[@Artist.load(:id=>7, :id1=>3, :id2=>4), @Artist.load(:id=>8, :id1=>5, :id2=>6)]).sql.should == "SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'A') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id IN (7, 8))))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))"
+  end
+
+  it "should be able to exclude on multiple one_to_many associations with :conditions and composite keys" do
+    @Album.exclude(:a_ctracks=>[@Track.load(:id=>7, :album_id1=>3, :album_id2=>4), @Track.load(:id=>8, :album_id1=>5, :album_id2=>6)]).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'A') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (7, 8))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on multiple one_to_one associations with :conditions and composite keys" do
+    @Album.exclude(:a_calbum_info=>[@AlbumInfo.load(:id=>7, :album_id1=>3, :album_id2=>4), @AlbumInfo.load(:id=>8, :album_id1=>5, :album_id2=>6)]).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id IN (7, 8))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on multiple many_to_many associations with :conditions and composite keys" do
+    @Album.exclude(:a_ctags=>[@Tag.load(:id=>7, :tid1=>3, :tid2=>4), @Tag.load(:id=>8, :tid1=>5, :tid2=>6)]).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'A') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id IN (7, 8))))) OR (albums.id1 IS [...]
+  end
+
+  it "should be able to exclude on multiple many_to_one associations with block and composite keys" do
+    @Album.exclude(:b_cartist=>[@Artist.load(:id=>7, :id1=>3, :id2=>4), @Artist.load(:id=>8, :id1=>5, :id2=>6)]).sql.should == "SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'B') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id IN (7, 8))))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))"
+  end
+
+  it "should be able to exclude on multiple one_to_many associations with block and composite keys" do
+    @Album.exclude(:b_ctracks=>[@Track.load(:id=>7, :album_id1=>3, :album_id2=>4), @Track.load(:id=>8, :album_id1=>5, :album_id2=>6)]).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'B') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (7, 8))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on multiple one_to_one associations with block and composite keys" do
+    @Album.exclude(:b_calbum_info=>[@AlbumInfo.load(:id=>7, :album_id1=>3, :album_id2=>4), @AlbumInfo.load(:id=>8, :album_id1=>5, :album_id2=>6)]).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id IN (7, 8))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on multiple many_to_many associations with block and composite keys" do
+    @Album.exclude(:b_ctags=>[@Tag.load(:id=>7, :tid1=>3, :tid2=>4), @Tag.load(:id=>8, :tid1=>5, :tid2=>6)]).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'B') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id IN (7, 8))))) OR (albums.id1 IS [...]
+  end
+
   it "should be able to handle NULL values when filtering many_to_one associations" do
     @Album.filter(:artist=>@Artist.new).sql.should == 'SELECT * FROM albums WHERE \'f\''
   end
@@ -2962,6 +3649,38 @@ describe "Filtering by associations" do
     @Album.filter(:tags=>@Tag.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_tags.album_id IS NOT NULL))))'
   end
 
+  it "should be able to filter on many_to_one association datasets with :conditions" do
+    @Album.filter(:a_artist=>@Artist.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists WHERE ((name = 'A') AND (artists.id IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on one_to_many association datasets with :conditions" do
+    @Album.filter(:a_tracks=>@Track.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'A') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT tracks.id FROM tracks WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on one_to_one association datasets with :conditions" do
+    @Album.filter(:a_album_info=>@AlbumInfo.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id IN (SELECT album_infos.id FROM album_infos WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on many_to_many association datasets with :conditions" do
+    @Album.filter(:a_tags=>@Tag.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'A') AND (albums_tags.album_id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on many_to_one association datasets with block" do
+    @Album.filter(:b_artist=>@Artist.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (albums.artist_id IN (SELECT artists.id FROM artists WHERE ((name = 'B') AND (artists.id IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on one_to_many association datasets with block" do
+    @Album.filter(:b_tracks=>@Track.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'B') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT tracks.id FROM tracks WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on one_to_one association datasets with block" do
+    @Album.filter(:b_album_info=>@AlbumInfo.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id IN (SELECT album_infos.id FROM album_infos WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on many_to_many association datasets with block" do
+    @Album.filter(:b_tags=>@Tag.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (albums.id IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'B') AND (albums_tags.album_id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1))))))"
+  end
+
   it "should be able to filter on many_to_one association datasets with composite keys" do
     @Album.filter(:cartist=>@Artist.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((x = 1) AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL))))'
   end
@@ -2978,6 +3697,38 @@ describe "Filtering by associations" do
     @Album.filter(:ctags=>@Tag.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN (SELECT tags.tid1, tags.tid2 FROM tags WHERE ((x = 1) AND (tags.tid1 IS NOT NULL) AND (tags.tid2 IS NOT NULL)))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL))))'
   end
 
+  it "should be able to filter on many_to_one association datasets with :conditions and composite keys" do
+    @Album.filter(:a_cartist=>@Artist.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'A') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on one_to_many association datasets with :conditions and composite keys" do
+    @Album.filter(:a_ctracks=>@Track.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'A') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT tracks.id FROM tracks WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on one_to_one association datasets with :conditions and composite keys" do
+    @Album.filter(:a_calbum_info=>@AlbumInfo.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id IN (SELECT album_infos.id FROM album_infos WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on many_to_many association datasets with :conditions and composite keys" do
+    @Album.filter(:a_ctags=>@Tag.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'A') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on many_to_one association datasets with block and composite keys" do
+    @Album.filter(:b_cartist=>@Artist.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id1, albums.artist_id2) IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'B') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on one_to_many association datasets with block and composite keys" do
+    @Album.filter(:b_ctracks=>@Track.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'B') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT tracks.id FROM tracks WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on one_to_one association datasets with block and composite keys" do
+    @Album.filter(:b_calbum_info=>@AlbumInfo.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id IN (SELECT album_infos.id FROM album_infos WHERE (x = 1))))))"
+  end
+
+  it "should be able to filter on many_to_many association datasets with block and composite keys" do
+    @Album.filter(:b_ctags=>@Tag.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id1, albums.id2) IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'B') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1))))))"
+  end
+
   it "should be able to exclude on many_to_one association datasets" do
     @Album.exclude(:artist=>@Artist.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.artist_id NOT IN (SELECT artists.id FROM artists WHERE ((x = 1) AND (artists.id IS NOT NULL)))) OR (albums.artist_id IS NULL))'
   end
@@ -2994,6 +3745,38 @@ describe "Filtering by associations" do
     @Album.exclude(:tags=>@Tag.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM albums_tags WHERE ((albums_tags.tag_id IN (SELECT tags.id FROM tags WHERE ((x = 1) AND (tags.id IS NOT NULL)))) AND (albums_tags.album_id IS NOT NULL)))) OR (albums.id IS NULL))'
   end
 
+  it "should be able to exclude on many_to_one association datasets with :conditions" do
+    @Album.exclude(:a_artist=>@Artist.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id NOT IN (SELECT artists.id FROM artists WHERE ((name = 'A') AND (artists.id IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (x = 1)))))) OR (albums.artist_id IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_many association datasets with :conditions" do
+    @Album.exclude(:a_tracks=>@Track.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'A') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT tracks.id FROM tracks WHERE (x = 1)))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_one association datasets with :conditions" do
+    @Album.exclude(:a_album_info=>@AlbumInfo.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id IN (SELECT album_infos.id FROM album_infos WHERE (x = 1)))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_many association datasets with :conditions" do
+    @Album.exclude(:a_tags=>@Tag.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'A') AND (albums_tags.album_id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1)))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_one association datasets with block" do
+    @Album.exclude(:b_artist=>@Artist.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.artist_id NOT IN (SELECT artists.id FROM artists WHERE ((name = 'B') AND (artists.id IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (x = 1)))))) OR (albums.artist_id IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_many association datasets with block" do
+    @Album.exclude(:b_tracks=>@Track.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT tracks.album_id FROM tracks WHERE ((name = 'B') AND (tracks.album_id IS NOT NULL) AND (tracks.id IN (SELECT tracks.id FROM tracks WHERE (x = 1)))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_one association datasets with block" do
+    @Album.exclude(:b_album_info=>@AlbumInfo.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT album_infos.album_id FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id IS NOT NULL) AND (album_infos.id IN (SELECT album_infos.id FROM album_infos WHERE (x = 1)))))) OR (albums.id IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_many association datasets with block" do
+    @Album.exclude(:b_tags=>@Tag.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE ((albums.id NOT IN (SELECT albums_tags.album_id FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE ((name = 'B') AND (albums_tags.album_id IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1)))))) OR (albums.id IS NULL))"
+  end
+
   it "should be able to exclude on many_to_one association datasets with composite keys" do
     @Album.exclude(:cartist=>@Artist.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((x = 1) AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL)))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))'
   end
@@ -3010,6 +3793,38 @@ describe "Filtering by associations" do
     @Album.exclude(:ctags=>@Tag.filter(:x=>1)).sql.should == 'SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM albums_tags WHERE (((albums_tags.tag_id1, albums_tags.tag_id2) IN (SELECT tags.tid1, tags.tid2 FROM tags WHERE ((x = 1) AND (tags.tid1 IS NOT NULL) AND (tags.tid2 IS NOT NULL)))) AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL)))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))'
   end
 
+  it "should be able to exclude on many_to_one association datasets with :conditions and composite keys" do
+    @Album.exclude(:a_cartist=>@Artist.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'A') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (x = 1)))))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_many association datasets with :conditions and composite keys" do
+    @Album.exclude(:a_ctracks=>@Track.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'A') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT tracks.id FROM tracks WHERE (x = 1)))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_one association datasets with :conditions and composite keys" do
+    @Album.exclude(:a_calbum_info=>@AlbumInfo.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'A') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id IN (SELECT album_infos.id FROM album_infos WHERE (x = 1)))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_many association datasets with :conditions and composite keys" do
+    @Album.exclude(:a_ctags=>@Tag.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'A') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1)))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_one association datasets with block and composite keys" do
+    @Album.exclude(:b_cartist=>@Artist.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (((albums.artist_id1, albums.artist_id2) NOT IN (SELECT artists.id1, artists.id2 FROM artists WHERE ((name = 'B') AND (artists.id1 IS NOT NULL) AND (artists.id2 IS NOT NULL) AND (artists.id IN (SELECT artists.id FROM artists WHERE (x = 1)))))) OR (albums.artist_id1 IS NULL) OR (albums.artist_id2 IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_many association datasets with block and composite keys" do
+    @Album.exclude(:b_ctracks=>@Track.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT tracks.album_id1, tracks.album_id2 FROM tracks WHERE ((name = 'B') AND (tracks.album_id1 IS NOT NULL) AND (tracks.album_id2 IS NOT NULL) AND (tracks.id IN (SELECT tracks.id FROM tracks WHERE (x = 1)))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on one_to_one association datasets with block and composite keys" do
+    @Album.exclude(:b_calbum_info=>@AlbumInfo.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT album_infos.album_id1, album_infos.album_id2 FROM album_infos WHERE ((name = 'B') AND (album_infos.album_id1 IS NOT NULL) AND (album_infos.album_id2 IS NOT NULL) AND (album_infos.id IN (SELECT album_infos.id FROM album_infos WHERE (x = 1)))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
+  it "should be able to exclude on many_to_many association datasets with block and composite keys" do
+    @Album.exclude(:b_ctags=>@Tag.filter(:x=>1)).sql.should == "SELECT * FROM albums WHERE (((albums.id1, albums.id2) NOT IN (SELECT albums_tags.album_id1, albums_tags.album_id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id1 = tags.tid1) AND (albums_tags.tag_id2 = tags.tid2)) WHERE ((name = 'B') AND (albums_tags.album_id1 IS NOT NULL) AND (albums_tags.album_id2 IS NOT NULL) AND (tags.id IN (SELECT tags.id FROM tags WHERE (x = 1)))))) OR (albums.id1 IS NULL) OR (albums.id2 IS NULL))"
+  end
+
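
The specs above assert the SQL Sequel generates when a model dataset is filtered or excluded by an association that carries extra :conditions (or an association block). As a rough illustration of the behaviour being exercised, a minimal sketch assuming only the sequel gem and a mock database; the model, table, and association names here are hypothetical, not part of the diff:

  require 'sequel'

  DB = Sequel.mock
  class Artist < Sequel::Model(DB[:artists]); end
  class Album < Sequel::Model(DB[:albums])
    many_to_one :artist, :class=>:Artist
    # Same foreign key, but the association carries an extra condition.
    many_to_one :a_artist, :class=>:Artist, :key=>:artist_id, :conditions=>{:name=>'A'}
  end

  artist = Artist.load(:id=>3)
  # Plain association filter: a simple comparison on the foreign key.
  puts Album.where(:artist=>artist).sql
  # With :conditions, the filter becomes an IN subquery that also applies the
  # extra condition, as in the expected SQL asserted above.
  puts Album.where(:a_artist=>artist).sql
  # Excluding adds the NOT IN / IS NULL handling asserted above.
  puts Album.exclude(:a_artist=>artist).sql
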
   it "should do a regular IN query if the dataset for a different model is used" do
     @Album.filter(:artist=>@Album.select(:x)).sql.should == 'SELECT * FROM albums WHERE (artist IN (SELECT x FROM albums))'
   end
@@ -3046,9 +3861,9 @@ describe "Sequel::Model Associations with clashing column names" do
     @Foo.first.bar.should == @bar
     @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT * FROM bars WHERE (bars.object_id = 2) LIMIT 1"]
     @Foo.first.mtmbars.should == [@bar]
-    @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT bars.* FROM bars INNER JOIN bars_foos ON ((bars_foos.object_id = bars.object_id) AND (bars_foos.foo_id = 2))"]
+    @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT bars.* FROM bars INNER JOIN bars_foos ON (bars_foos.object_id = bars.object_id) WHERE (bars_foos.foo_id = 2)"]
     @Bar.first.mtmfoos.should == [@foo]
-    @db.sqls.should == ["SELECT * FROM bars LIMIT 1", "SELECT foos.* FROM foos INNER JOIN bars_foos ON ((bars_foos.foo_id = foos.object_id) AND (bars_foos.object_id = 2))"]
+    @db.sqls.should == ["SELECT * FROM bars LIMIT 1", "SELECT foos.* FROM foos INNER JOIN bars_foos ON (bars_foos.foo_id = foos.object_id) WHERE (bars_foos.object_id = 2)"]
   end
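
The updated expectations here (and in several hunks below) reflect a change in the SQL generated for many_to_many association lookups: the join-table foreign-key filter now appears in the WHERE clause instead of being folded into the INNER JOIN's ON clause, while the rows returned stay the same. A minimal sketch to print such an association query, assuming the sequel gem and a mock database; names are hypothetical:

  require 'sequel'

  DB = Sequel.mock
  class Bar < Sequel::Model(DB[:bars]); end
  class Foo < Sequel::Model(DB[:foos])
    many_to_many :bars, :class=>:Bar, :left_key=>:foo_id, :right_key=>:bar_id, :join_table=>:bars_foos
  end

  foo = Foo.load(:id=>2)
  # The per-object association query; per the expectations above, the
  # bars_foos.foo_id filter sits in the WHERE clause rather than the JOIN's ON.
  puts foo.bars_dataset.sql
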
 
   it "should have working eager loading methods" do
@@ -3060,10 +3875,10 @@ describe "Sequel::Model Associations with clashing column names" do
     @db.sqls.should == ["SELECT * FROM foos", "SELECT * FROM bars WHERE (bars.object_id IN (2))"]
     @db.fetch = [[{:id=>1, :object_id=>2}], [{:id=>1, :object_id=>2, :x_foreign_key_x=>2}]]
     @Foo.eager(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]]
-    @db.sqls.should == ["SELECT * FROM foos", "SELECT bars.*, bars_foos.foo_id AS x_foreign_key_x FROM bars INNER JOIN bars_foos ON ((bars_foos.object_id = bars.object_id) AND (bars_foos.foo_id IN (2)))"]
+    @db.sqls.should == ["SELECT * FROM foos", "SELECT bars.*, bars_foos.foo_id AS x_foreign_key_x FROM bars INNER JOIN bars_foos ON (bars_foos.object_id = bars.object_id) WHERE (bars_foos.foo_id IN (2))"]
     @db.fetch = [[{:id=>1, :object_id=>2}], [{:id=>1, :object_id=>2, :x_foreign_key_x=>2}]]
     @Bar.eager(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]]
-    @db.sqls.should == ["SELECT * FROM bars", "SELECT foos.*, bars_foos.object_id AS x_foreign_key_x FROM foos INNER JOIN bars_foos ON ((bars_foos.foo_id = foos.object_id) AND (bars_foos.object_id IN (2)))"]
+    @db.sqls.should == ["SELECT * FROM bars", "SELECT foos.*, bars_foos.object_id AS x_foreign_key_x FROM foos INNER JOIN bars_foos ON (bars_foos.foo_id = foos.object_id) WHERE (bars_foos.object_id IN (2))"]
   end
 
   it "should have working eager graphing methods" do
@@ -3097,6 +3912,35 @@ describe "Sequel::Model Associations with clashing column names" do
     @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_id IN (SELECT bars_foos.object_id FROM bars_foos WHERE ((bars_foos.foo_id = 2) AND (bars_foos.object_id IS NOT NULL)))) LIMIT 1"]
   end
 
+  it "should have working filter by associations for associations with :conditions with model instances" do
+    @Bar.many_to_one :foo, :clone=>:foo, :conditions=>{:name=>'A'}
+    @Foo.one_to_many :bars, :clone=>:bars, :conditions=>{:name=>'A'}
+    @Foo.one_to_one :bar, :clone=>:bars
+    @Foo.many_to_many :mtmbars, :clone=>:mtmbars, :conditions=>{:name=>'A'}
+    @Bar.many_to_many :mtmfoos, :clone=>:mtmfoos, :conditions=>{:name=>'A'}
+
+    @Bar.where(:foo=>@foo).sql.should == "SELECT * FROM bars WHERE (bars.object_id IN (SELECT foos.object_id FROM foos WHERE ((name = 'A') AND (foos.object_id IS NOT NULL) AND (foos.id = 1))))"
+    @Foo.where(:bars=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_id IN (SELECT bars.object_id FROM bars WHERE ((name = 'A') AND (bars.object_id IS NOT NULL) AND (bars.id = 1))))"
+    @Foo.where(:bar=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_id IN (SELECT bars.object_id FROM bars WHERE ((name = 'A') AND (bars.object_id IS NOT NULL) AND (bars.id = 1))))"
+    @Foo.where(:mtmbars=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_id IN (SELECT bars_foos.foo_id FROM bars INNER JOIN bars_foos ON (bars_foos.object_id = bars.object_id) WHERE ((name = 'A') AND (bars_foos.foo_id IS NOT NULL) AND (bars.id = 1))))"
+    @Bar.where(:mtmfoos=>@foo).sql.should == "SELECT * FROM bars WHERE (bars.object_id IN (SELECT bars_foos.object_id FROM foos INNER JOIN bars_foos ON (bars_foos.foo_id = foos.object_id) WHERE ((name = 'A') AND (bars_foos.object_id IS NOT NULL) AND (foos.id = 1))))"
+  end
+
+  it "should have working filter by associations for associations with block with model instances" do
+    b = lambda{|ds| ds.where(:name=>'A')}
+    @Bar.many_to_one :foo, :clone=>:foo, &b
+    @Foo.one_to_many :bars, :clone=>:bars, &b
+    @Foo.one_to_one :bar, :clone=>:bars
+    @Foo.many_to_many :mtmbars, :clone=>:mtmbars, &b
+    @Bar.many_to_many :mtmfoos, :clone=>:mtmfoos, &b
+
+    @Bar.where(:foo=>@foo).sql.should == "SELECT * FROM bars WHERE (bars.object_id IN (SELECT foos.object_id FROM foos WHERE ((name = 'A') AND (foos.object_id IS NOT NULL) AND (foos.id = 1))))"
+    @Foo.where(:bars=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_id IN (SELECT bars.object_id FROM bars WHERE ((name = 'A') AND (bars.object_id IS NOT NULL) AND (bars.id = 1))))"
+    @Foo.where(:bar=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_id IN (SELECT bars.object_id FROM bars WHERE ((name = 'A') AND (bars.object_id IS NOT NULL) AND (bars.id = 1))))"
+    @Foo.where(:mtmbars=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_id IN (SELECT bars_foos.foo_id FROM bars INNER JOIN bars_foos ON (bars_foos.object_id = bars.object_id) WHERE ((name = 'A') AND (bars_foos.foo_id IS NOT NULL) AND (bars.id = 1))))"
+    @Bar.where(:mtmfoos=>@foo).sql.should == "SELECT * FROM bars WHERE (bars.object_id IN (SELECT bars_foos.object_id FROM foos INNER JOIN bars_foos ON (bars_foos.foo_id = foos.object_id) WHERE ((name = 'A') AND (bars_foos.object_id IS NOT NULL) AND (foos.id = 1))))"
+  end
+
   it "should have working modification methods" do
     b = @Bar.load(:id=>2, :object_id=>3)
     f = @Foo.load(:id=>2, :object_id=>3)
@@ -3166,9 +4010,9 @@ describe "Sequel::Model Associations with non-column expression keys" do
     @Foo.first.bar.should == @bar
     @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT * FROM bars WHERE (bars.object_ids[0] = 2) LIMIT 1"]
     @Foo.first.mtmbars.should == [@bar]
-    @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT bars.* FROM bars INNER JOIN bars_foos ON ((bars_foos.bar_ids[0] = bars.object_ids[0]) AND (bars_foos.foo_ids[0] = 2))"]
+    @db.sqls.should == ["SELECT * FROM foos LIMIT 1", "SELECT bars.* FROM bars INNER JOIN bars_foos ON (bars_foos.bar_ids[0] = bars.object_ids[0]) WHERE (bars_foos.foo_ids[0] = 2)"]
     @Bar.first.mtmfoos.should == [@foo]
-    @db.sqls.should == ["SELECT * FROM bars LIMIT 1", "SELECT foos.* FROM foos INNER JOIN bars_foos ON ((bars_foos.foo_ids[0] = foos.object_ids[0]) AND (bars_foos.bar_ids[0] = 2))"]
+    @db.sqls.should == ["SELECT * FROM bars LIMIT 1", "SELECT foos.* FROM foos INNER JOIN bars_foos ON (bars_foos.foo_ids[0] = foos.object_ids[0]) WHERE (bars_foos.bar_ids[0] = 2)"]
   end
 
   it "should have working eager loading methods" do
@@ -3180,10 +4024,10 @@ describe "Sequel::Model Associations with non-column expression keys" do
     @db.sqls.should == ["SELECT * FROM foos", "SELECT * FROM bars WHERE (bars.object_ids[0] IN (2))"]
     @db.fetch = [[{:id=>1, :object_ids=>[2]}], [{:id=>1, :object_ids=>[2], :x_foreign_key_x=>2}]]
     @Foo.eager(:mtmbars).all.map{|o| [o, o.mtmbars]}.should == [[@foo, [@bar]]]
-    @db.sqls.should == ["SELECT * FROM foos", "SELECT bars.*, bars_foos.foo_ids[0] AS x_foreign_key_x FROM bars INNER JOIN bars_foos ON ((bars_foos.bar_ids[0] = bars.object_ids[0]) AND (bars_foos.foo_ids[0] IN (2)))"]
+    @db.sqls.should == ["SELECT * FROM foos", "SELECT bars.*, bars_foos.foo_ids[0] AS x_foreign_key_x FROM bars INNER JOIN bars_foos ON (bars_foos.bar_ids[0] = bars.object_ids[0]) WHERE (bars_foos.foo_ids[0] IN (2))"]
     @db.fetch = [[{:id=>1, :object_ids=>[2]}], [{:id=>1, :object_ids=>[2], :x_foreign_key_x=>2}]]
     @Bar.eager(:mtmfoos).all.map{|o| [o, o.mtmfoos]}.should == [[@bar, [@foo]]]
-    @db.sqls.should == ["SELECT * FROM bars", "SELECT foos.*, bars_foos.bar_ids[0] AS x_foreign_key_x FROM foos INNER JOIN bars_foos ON ((bars_foos.foo_ids[0] = foos.object_ids[0]) AND (bars_foos.bar_ids[0] IN (2)))"]
+    @db.sqls.should == ["SELECT * FROM bars", "SELECT foos.*, bars_foos.bar_ids[0] AS x_foreign_key_x FROM foos INNER JOIN bars_foos ON (bars_foos.foo_ids[0] = foos.object_ids[0]) WHERE (bars_foos.bar_ids[0] IN (2))"]
   end
 
   it "should have working eager graphing methods" do
@@ -3217,6 +4061,35 @@ describe "Sequel::Model Associations with non-column expression keys" do
     @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT bars_foos.bar_ids[0] FROM bars_foos WHERE ((bars_foos.foo_ids[0] = 2) AND (bars_foos.bar_ids[0] IS NOT NULL)))) LIMIT 1"]
   end
 
+  it "should have working filter by associations for associations with :conditions with model instances" do
+    @Bar.many_to_one :foo, :clone=>:foo, :conditions=>{:name=>'A'}
+    @Foo.one_to_many :bars, :clone=>:bars, :conditions=>{:name=>'A'}
+    @Foo.one_to_one :bar, :clone=>:bars
+    @Foo.many_to_many :mtmbars, :clone=>:mtmbars, :conditions=>{:name=>'A'}
+    @Bar.many_to_many :mtmfoos, :clone=>:mtmfoos, :conditions=>{:name=>'A'}
+
+    @Bar.where(:foo=>@foo).sql.should == "SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT foos.object_ids[0] FROM foos WHERE ((name = 'A') AND (foos.object_ids[0] IS NOT NULL) AND (foos.id = 1))))"
+    @Foo.where(:bars=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars.object_ids[0] FROM bars WHERE ((name = 'A') AND (bars.object_ids[0] IS NOT NULL) AND (bars.id = 1))))"
+    @Foo.where(:bar=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars.object_ids[0] FROM bars WHERE ((name = 'A') AND (bars.object_ids[0] IS NOT NULL) AND (bars.id = 1))))"
+    @Foo.where(:mtmbars=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars_foos.foo_ids[0] FROM bars INNER JOIN bars_foos ON (bars_foos.bar_ids[0] = bars.object_ids[0]) WHERE ((name = 'A') AND (bars_foos.foo_ids[0] IS NOT NULL) AND (bars.id = 1))))"
+    @Bar.where(:mtmfoos=>@foo).sql.should == "SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT bars_foos.bar_ids[0] FROM foos INNER JOIN bars_foos ON (bars_foos.foo_ids[0] = foos.object_ids[0]) WHERE ((name = 'A') AND (bars_foos.bar_ids[0] IS NOT NULL) AND (foos.id = 1))))"
+  end
+
+  it "should have working filter by associations for associations with block with model instances" do
+    b = lambda{|ds| ds.where(:name=>'A')}
+    @Bar.many_to_one :foo, :clone=>:foo, &b
+    @Foo.one_to_many :bars, :clone=>:bars, &b
+    @Foo.one_to_one :bar, :clone=>:bars
+    @Foo.many_to_many :mtmbars, :clone=>:mtmbars, &b
+    @Bar.many_to_many :mtmfoos, :clone=>:mtmfoos, &b
+
+    @Bar.where(:foo=>@foo).sql.should == "SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT foos.object_ids[0] FROM foos WHERE ((name = 'A') AND (foos.object_ids[0] IS NOT NULL) AND (foos.id = 1))))"
+    @Foo.where(:bars=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars.object_ids[0] FROM bars WHERE ((name = 'A') AND (bars.object_ids[0] IS NOT NULL) AND (bars.id = 1))))"
+    @Foo.where(:bar=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars.object_ids[0] FROM bars WHERE ((name = 'A') AND (bars.object_ids[0] IS NOT NULL) AND (bars.id = 1))))"
+    @Foo.where(:mtmbars=>@bar).sql.should == "SELECT * FROM foos WHERE (foos.object_ids[0] IN (SELECT bars_foos.foo_ids[0] FROM bars INNER JOIN bars_foos ON (bars_foos.bar_ids[0] = bars.object_ids[0]) WHERE ((name = 'A') AND (bars_foos.foo_ids[0] IS NOT NULL) AND (bars.id = 1))))"
+    @Bar.where(:mtmfoos=>@foo).sql.should == "SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT bars_foos.bar_ids[0] FROM foos INNER JOIN bars_foos ON (bars_foos.foo_ids[0] = foos.object_ids[0]) WHERE ((name = 'A') AND (bars_foos.bar_ids[0] IS NOT NULL) AND (foos.id = 1))))"
+  end
+
   it "should have working filter by associations with model datasets" do
     @Bar.first(:foo=>@Foo.where(:id=>@foo.id)).should == @bar
     @db.sqls.should == ["SELECT * FROM bars WHERE (bars.object_ids[0] IN (SELECT foos.object_ids[0] FROM foos WHERE ((id = 1) AND (foos.object_ids[0] IS NOT NULL)))) LIMIT 1"]
@@ -3267,7 +4140,7 @@ describe "Model#freeze" do
   end
 
   it "should freeze the object's associations" do
-    @o.associations.frozen?.should be_true
+    @o.associations.frozen?.should == true
   end
 
   it "should not break associations getters" do
diff --git a/spec/model/base_spec.rb b/spec/model/base_spec.rb
index 59411e2..d1e3aa8 100644
--- a/spec/model/base_spec.rb
+++ b/spec/model/base_spec.rb
@@ -577,8 +577,10 @@ end
 
 describe Sequel::Model, ".[] optimization" do
   before do
-    @db = DB.clone
+    @db = Sequel.mock
     @db.quote_identifiers = true
+    def @db.schema(*) [[:id, {:primary_key=>true}]] end
+    def @db.supports_schema_parsing?() true end
     @c = Class.new(Sequel::Model(@db))
   end
 
@@ -731,3 +733,14 @@ describe "Model datasets #with_pk with #with_pk!" do
     DB.sqls.should == ["SELECT * FROM a WHERE (foo) LIMIT 1"]
   end
 end
+
+describe "Model::include" do
+  it "shouldn't change the signature of Module::include" do
+    mod1 = Module.new
+    mod2 = Module.new
+    including_class = Class.new(Sequel::Model(:items)) do
+      include(mod1, mod2)
+    end
+    including_class.included_modules.should include(mod1, mod2)
+  end
+end
diff --git a/spec/model/class_dataset_methods_spec.rb b/spec/model/class_dataset_methods_spec.rb
index 58b8c1c..c29784c 100644
--- a/spec/model/class_dataset_methods_spec.rb
+++ b/spec/model/class_dataset_methods_spec.rb
@@ -5,6 +5,7 @@ describe Sequel::Model, "class dataset methods"  do
     @db = Sequel.mock
     @c = Class.new(Sequel::Model(@db[:items]))
     @d = @c.dataset
+    def @d.supports_cte?(*) true end
     @d._fetch = {:id=>1}
     @d.autoid = 1
     @d.numrows = 0
@@ -25,7 +26,7 @@ describe Sequel::Model, "class dataset methods"  do
     @c.each{|r| r.should == @c.load(:id=>1)}.should == @d
     @db.sqls.should == ["SELECT * FROM items"]
     @c.each_server{|r| r.opts[:server].should == :default}
-    @c.empty?.should be_false
+    @c.empty?.should == false
     @db.sqls.should == ["SELECT 1 AS one FROM items LIMIT 1"]
     @c.except(@d, :from_self=>false).sql.should == "SELECT * FROM items EXCEPT SELECT * FROM items"
     @c.exclude(:a).sql.should == "SELECT * FROM items WHERE NOT a"
@@ -81,6 +82,7 @@ describe Sequel::Model, "class dataset methods"  do
     @c.natural_join(@c).sql.should == "SELECT * FROM items NATURAL JOIN items"
     @c.natural_left_join(@c).sql.should == "SELECT * FROM items NATURAL LEFT JOIN items"
     @c.natural_right_join(@c).sql.should == "SELECT * FROM items NATURAL RIGHT JOIN items"
+    @c.offset(2).sql.should == "SELECT * FROM items OFFSET 2"
     @c.order(:a).sql.should == "SELECT * FROM items ORDER BY a"
     @c.order_append(:a).sql.should == "SELECT * FROM items ORDER BY a"
     @c.order_by(:a).sql.should == "SELECT * FROM items ORDER BY a"
diff --git a/spec/model/eager_loading_spec.rb b/spec/model/eager_loading_spec.rb
index f373ad7..84867ea 100644
--- a/spec/model/eager_loading_spec.rb
+++ b/spec/model/eager_loading_spec.rb
@@ -7,6 +7,7 @@ describe Sequel::Model, "#eager" do
       many_to_one :band, :class=>'EagerBand', :key=>:band_id
       one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id
       many_to_many :genres, :class=>'EagerGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag
+      one_through_one :genre, :clone=>:genres
       one_to_many :good_tracks, :class=>'EagerTrack', :reciprocal=>nil, :key=>:album_id do |ds|
         ds.filter(:name=>'Good')
       end
@@ -149,7 +150,14 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == []
   end
   
-  it "should use first matching entry when eager loading one_to_one association" do
+  it "should not break if the dataset does not have a row proc" do
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id
+    a = EagerAlbum.eager(:track).naked.all
+    a.should == [{:id => 1, :band_id => 2}]
+    DB.sqls.should == ['SELECT * FROM albums']
+  end
+  
+  it "should eagerly load a single one_to_one association without an order" do
     EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id
     EagerTrack.dataset._fetch = [{:id => 3, :album_id=>1}, {:id => 4, :album_id=>1}]
     a = EagerAlbum.eager(:track).all
@@ -159,19 +167,28 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == []
   end
   
+  it "should eagerly load a single one_to_one association with an order" do
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :order=>:a
+    a = EagerAlbum.eager(:track).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) ORDER BY a LIMIT 1) AS t1']
+    a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1)
+    DB.sqls.should == []
+  end
+  
   it "should eagerly load a single one_to_one association using the :distinct_on strategy" do
     def (EagerTrack.dataset).supports_distinct_on?() true end
-    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :eager_limit_strategy=>true
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :order=>:a, :eager_limit_strategy=>:distinct_on
     a = EagerAlbum.eager(:track).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
-    DB.sqls.should == ['SELECT * FROM albums', 'SELECT DISTINCT ON (tracks.album_id) * FROM tracks WHERE (tracks.album_id IN (1)) ORDER BY tracks.album_id']
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT DISTINCT ON (tracks.album_id) * FROM tracks WHERE (tracks.album_id IN (1)) ORDER BY tracks.album_id, a']
     a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1)
     DB.sqls.should == []
   end
   
   it "should eagerly load a single one_to_one association using the :window_function strategy" do
     def (EagerTrack.dataset).supports_window_functions?() true end
-    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :eager_limit_strategy=>true
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :eager_limit_strategy=>:window_function
     a = EagerAlbum.eager(:track).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
     DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x = 1)']
@@ -179,19 +196,18 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == []
   end
   
-  it "should not use distinct on eager limit strategy if the association has an offset" do
-    def (EagerTrack.dataset).supports_distinct_on?() true end
-    def (EagerTrack.dataset).supports_window_functions?() true end
-    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1], :order=>:name
+  it "should automatically use an eager limit stategy if the association has an offset" do
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1]
+    EagerTrack.dataset._fetch = [{:id => 4, :album_id=>1}]
     a = EagerAlbum.eager(:track).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
-    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x = 2)']
-    a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1)
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1']
+    a.first.track.should == EagerTrack.load(:id => 4, :album_id=>1)
     DB.sqls.should == []
   end
   
-  it "should automatically use an eager limit stategy if the association has an offset" do
-    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1]
+  it "should handle offsets when using the :ruby eager limit stategy" do
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1], :eager_limit_strategy=>:ruby
     EagerTrack.dataset._fetch = [{:id => 3, :album_id=>1}, {:id => 4, :album_id=>1}]
     a = EagerAlbum.eager(:track).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
@@ -200,6 +216,33 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == []
   end
   
+  it "should support a :subqueries_per_union option for the number of subqueries in a union" do
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1], :subqueries_per_union=>1
+    EagerAlbum.dataset._fetch = [{:id => 1, :band_id => 2}, {:id => 2, :band_id => 3}, {:id => 3, :band_id => 4}]
+    EagerTrack.dataset._fetch = [[{:id => 4, :album_id=>1}], [{:id=>5, :album_id=>2}], [{:id=>6, :album_id=>3}]]
+    a = EagerAlbum.eager(:track).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2), EagerAlbum.load(:id => 2, :band_id => 3), EagerAlbum.load(:id => 3, :band_id => 4)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1', 'SELECT * FROM (SELECT * FROM tracks WHERE (2 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1', 'SELECT * FROM (SELECT * FROM tracks WHERE (3 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1']
+    a.first.track.should == EagerTrack.load(:id => 4, :album_id=>1)
+    DB.sqls.should == []
+
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1], :subqueries_per_union=>2
+    EagerTrack.dataset._fetch = [[{:id => 4, :album_id=>1}, {:id=>5, :album_id=>2}], [{:id=>6, :album_id=>3}]]
+    a = EagerAlbum.eager(:track).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2), EagerAlbum.load(:id => 2, :band_id => 3), EagerAlbum.load(:id => 3, :band_id => 4)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1 UNION ALL SELECT * FROM (SELECT * FROM tracks WHERE (2 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1', 'SELECT * FROM (SELECT * FROM tracks WHERE (3 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1']
+    a.first.track.should == EagerTrack.load(:id => 4, :album_id=>1)
+    DB.sqls.should == []
+
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :limit=>[1,1], :subqueries_per_union=>3
+    EagerTrack.dataset._fetch = [[{:id => 4, :album_id=>1}, {:id=>5, :album_id=>2}, {:id=>6, :album_id=>3}]]
+    a = EagerAlbum.eager(:track).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2), EagerAlbum.load(:id => 2, :band_id => 3), EagerAlbum.load(:id => 3, :band_id => 4)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1 UNION ALL SELECT * FROM (SELECT * FROM tracks WHERE (2 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1 UNION ALL SELECT * FROM (SELECT * FROM tracks WHERE (3 = tracks.album_id) LIMIT 1 OFFSET 1) AS t1']
+    a.first.track.should == EagerTrack.load(:id => 4, :album_id=>1)
+    DB.sqls.should == []
+  end
+  
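
The spec above exercises the :subqueries_per_union association option, which caps how many per-parent limited subqueries are combined into a single UNION ALL query when eager loading a limited association. A minimal sketch assuming the sequel gem and a mock database; the spec uses a one_to_one with an offset, while a limited one_to_many with an explicit :union strategy is shown here, and all names are hypothetical:

  require 'sequel'

  DB = Sequel.mock
  class Track < Sequel::Model(DB[:tracks]); end
  class Album < Sequel::Model(DB[:albums])
    # Up to two tracks per album, eager loaded with per-album subqueries
    # combined two at a time into UNION ALL queries.
    one_to_many :first_tracks, :class=>:Track, :key=>:album_id, :limit=>2,
      :eager_limit_strategy=>:union, :subqueries_per_union=>2
  end

  DB.fetch = [
    [{:id=>1}, {:id=>2}, {:id=>3}],                      # albums
    [{:id=>10, :album_id=>1}, {:id=>11, :album_id=>2}],  # first UNION ALL batch
    [{:id=>12, :album_id=>3}]                            # remaining batch
  ]
  Album.eager(:first_tracks).all
  puts DB.sqls  # three albums with :subqueries_per_union=>2 => two eager queries
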
   it "should eagerly load a single one_to_many association" do
     a = EagerAlbum.eager(:tracks).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
@@ -208,10 +251,65 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == []
   end
   
+  it "should eagerly load a single one_through_one association" do
+    a = EagerAlbum.eager(:genre).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
+    a.first.genre.should == EagerGenre.load(:id=>4)
+    DB.sqls.should == []
+  end
+  
+  it "should use first matching entry when eager loading one_through_one association" do
+    EagerGenre.dataset._fetch = [{:id => 3, :x_foreign_key_x=>1}, {:id => 4, :x_foreign_key_x=>1}]
+    a = EagerAlbum.eager(:genre).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
+    a.first.genre.should == EagerGenre.load(:id=>3)
+    DB.sqls.should == []
+  end
+  
+  it "should eagerly load a single one_through_one association" do
+    EagerAlbum.one_through_one :genre, :clone=>:genre, :order=>:a
+    a = EagerAlbum.eager(:genre).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (1 = ag.album_id) ORDER BY a LIMIT 1) AS t1"]
+    a.first.genre.should == EagerGenre.load(:id=>4)
+    DB.sqls.should == []
+  end
+  
+  it "should eagerly load a single one_through_one association using the :distinct_on strategy" do
+    def (EagerGenre.dataset).supports_distinct_on?() true end
+    EagerAlbum.one_through_one :genre, :clone=>:genre, :order=>:a, :eager_limit_strategy=>:distinct_on
+    a = EagerAlbum.eager(:genre).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT DISTINCT ON (ag.album_id) genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1)) ORDER BY ag.album_id, a"]
+    a.first.genre.should == EagerGenre.load(:id=>4)
+    DB.sqls.should == []
+  end
+  
+  it "should eagerly load a single one_through_one association using the :window_function strategy" do
+    def (EagerGenre.dataset).supports_window_functions?() true end
+    EagerAlbum.one_through_one :genre, :clone=>:genre, :eager_limit_strategy=>:window_function
+    a = EagerAlbum.eager(:genre).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x = 1)"]
+    a.first.genre.should == EagerGenre.load(:id=>4)
+    DB.sqls.should == []
+  end
+  
+  it "should automatically use an eager limit stategy if the association has an offset" do
+    EagerGenre.dataset._fetch = [{:id => 3, :x_foreign_key_x=>1}, {:id => 4, :x_foreign_key_x=>1}]
+    a = EagerAlbum.eager(:genre).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
+    a.first.genre.should == EagerGenre.load(:id=>3)
+    DB.sqls.should == []
+  end
+  
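
The new specs above cover eager loading of one_through_one associations: the singular form of many_to_many, which goes through a join table but returns at most one associated object. A minimal sketch assuming the sequel gem and a mock database; model, table, and key names are hypothetical:

  require 'sequel'

  DB = Sequel.mock
  class Genre < Sequel::Model(DB[:genres]); end
  class Album < Sequel::Model(DB[:albums])
    many_to_many    :genres, :class=>:Genre, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag
    # Same join table and keys, but the reader returns a single Genre (or nil).
    one_through_one :genre,  :class=>:Genre, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag
  end

  DB.fetch = [
    [{:id=>1}],                        # albums
    [{:id=>4, :x_foreign_key_x=>1}]    # eager genre query, keyed by album id
  ]
  album = Album.eager(:genre).all.first
  p album.genre   # a single Genre, not an array
  puts DB.sqls
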
   it "should eagerly load a single many_to_many association" do
     a = EagerAlbum.eager(:genres).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
     a.first.genres.should == [EagerGenre.load(:id=>4)]
     DB.sqls.should == []
   end
@@ -241,11 +339,21 @@ describe Sequel::Model, "#eager" do
     EagerGenre.dataset._fetch = {:id=>4, :x_foreign_key_x=>6}
     a = EagerAlbum.eager(:sgenres).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (6)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (6))"]
     a.first.sgenres.should == [EagerGenre.load(:id=>4)]
     DB.sqls.should == []
   end
 
+  it "should support using a custom :left_primary_key option when eager loading one_through_one associations" do
+    EagerAlbum.one_through_one :sgenre, :clone=>:genre, :left_primary_key=>:band_id3
+    EagerGenre.dataset._fetch = {:id=>4, :x_foreign_key_x=>6}
+    a = EagerAlbum.eager(:sgenre).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (6))"]
+    a.first.sgenre.should == EagerGenre.load(:id=>4)
+    DB.sqls.should == []
+  end
+
   it "should handle a :eager_loading_predicate_key option to change the SQL used in the lookup, for many_to_one associations" do
     EagerAlbum.many_to_one :sband, :clone=>:band, :eager_loading_predicate_key=>Sequel./(:bands__id, 3), :primary_key_method=>:id3
     EagerBand.dataset._fetch = {:id=>6}
@@ -270,11 +378,20 @@ describe Sequel::Model, "#eager" do
     EagerAlbum.many_to_many :sgenres, :clone=>:genres, :eager_loading_predicate_key=>Sequel.*(:ag__album_id, 1)
     a = EagerAlbum.eager(:sgenres).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, (ag.album_id * 1) AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND ((ag.album_id * 1) IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, (ag.album_id * 1) AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE ((ag.album_id * 1) IN (1))"]
     a.first.sgenres.should == [EagerGenre.load(:id=>4)]
     DB.sqls.should == []
   end
 
+  it "should handle a :eager_loading_predicate_key option to change the SQL used in the lookup, for one_through_one associations" do
+    EagerAlbum.one_through_one :sgenre, :clone=>:genre, :eager_loading_predicate_key=>Sequel.*(:ag__album_id, 1)
+    a = EagerAlbum.eager(:sgenre).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, (ag.album_id * 1) AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE ((ag.album_id * 1) IN (1))"]
+    a.first.sgenre.should == EagerGenre.load(:id=>4)
+    DB.sqls.should == []
+  end
+
   it "should raise an error for an unhandled :eager_loader_key option" do
     EagerAlbum.many_to_many :sgenres, :clone=>:genres, :eager_loader_key=>1
     ds = EagerAlbum.eager(:sgenres)
@@ -293,13 +410,25 @@ describe Sequel::Model, "#eager" do
   it "should correctly handle a :select=>[] option to many_to_many" do
     EagerAlbum.many_to_many :sgenres, :clone=>:genres, :select=>[]
     EagerAlbum.eager(:sgenres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT *, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT *, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
+  end
+  
+  it "should correctly handle a :select=>[] option to one_through_one" do
+    EagerAlbum.one_through_one :sgenre, :clone=>:genre, :select=>[]
+    EagerAlbum.eager(:sgenre).all
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT *, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
   end
   
   it "should correctly handle an aliased join table in many_to_many" do
     EagerAlbum.many_to_many :sgenres, :clone=>:genres, :join_table=>:ag___ga
     EagerAlbum.eager(:sgenres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ga.album_id AS x_foreign_key_x FROM genres INNER JOIN ag AS ga ON ((ga.genre_id = genres.id) AND (ga.album_id IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ga.album_id AS x_foreign_key_x FROM genres INNER JOIN ag AS ga ON (ga.genre_id = genres.id) WHERE (ga.album_id IN (1))"]
+  end
+  
+  it "should correctly handle an aliased join table in one_through_one" do
+    EagerAlbum.one_through_one :sgenre, :clone=>:genre, :join_table=>:ag___ga
+    EagerAlbum.eager(:sgenre).all
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ga.album_id AS x_foreign_key_x FROM genres INNER JOIN ag AS ga ON (ga.genre_id = genres.id) WHERE (ga.album_id IN (1))"]
   end
   
   it "should eagerly load multiple associations in a single call" do
@@ -309,7 +438,7 @@ describe Sequel::Model, "#eager" do
     sqls.shift.should == 'SELECT * FROM albums'
     sqls.sort.should == ['SELECT * FROM bands WHERE (bands.id IN (2))',
       'SELECT * FROM tracks WHERE (tracks.album_id IN (1))',
-      'SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))']
+      'SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))']
     a = a.first
     a.band.should == EagerBand.load(:id=>2)
     a.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
@@ -324,7 +453,7 @@ describe Sequel::Model, "#eager" do
     sqls.shift.should == 'SELECT * FROM albums'
     sqls.sort.should == ['SELECT * FROM bands WHERE (bands.id IN (2))',
       'SELECT * FROM tracks WHERE (tracks.album_id IN (1))',
-      'SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))']
+      'SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))']
     a = a.first
     a.band.should == EagerBand.load(:id=>2)
     a.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
@@ -338,7 +467,7 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == ['SELECT * FROM tracks', 
       'SELECT * FROM albums WHERE (albums.id IN (1))',
       'SELECT * FROM bands WHERE (bands.id IN (2))',
-      "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON ((bm.member_id = members.id) AND (bm.band_id IN (2)))"]
+      "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON (bm.member_id = members.id) WHERE (bm.band_id IN (2))"]
     a = a.first
     a.album.should == EagerAlbum.load(:id => 1, :band_id => 2)
     a.album.band.should == EagerBand.load(:id => 2)
@@ -346,6 +475,32 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == []
   end
   
+  it "should cascade eager loading when using a UNION strategy for eager loading limited associations" do
+    EagerTrack.many_to_one :album2, :clone=>:album
+    EagerAlbum.one_to_one :track, :class=>'EagerTrack', :key=>:album_id, :order=>:a
+    a = EagerAlbum.eager(:track=>:album2).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) ORDER BY a LIMIT 1) AS t1', 'SELECT * FROM albums WHERE (albums.id IN (1))']
+    a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1)
+    a.first.track.album2.should == EagerAlbum.load(:id => 1, :band_id => 2)
+    DB.sqls.should == []
+
+    a = EagerAlbum.eager(:track=>[:album2]).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) ORDER BY a LIMIT 1) AS t1', 'SELECT * FROM albums WHERE (albums.id IN (1))']
+    a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1)
+    a.first.track.album2.should == EagerAlbum.load(:id => 1, :band_id => 2)
+    DB.sqls.should == []
+
+    a = EagerAlbum.eager(:track=>{:album2=>:track}).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) ORDER BY a LIMIT 1) AS t1', 'SELECT * FROM albums WHERE (albums.id IN (1))', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) ORDER BY a LIMIT 1) AS t1']
+    a.first.track.should == EagerTrack.load(:id => 3, :album_id=>1)
+    a.first.track.album2.should == EagerAlbum.load(:id => 1, :band_id => 2)
+    a.first.track.album2.track.should == EagerTrack.load(:id => 3, :album_id=>1)
+    DB.sqls.should == []
+  end
+  
   it "should cascade eagerly loading when the :eager association option is used" do
     a = EagerBand.eager(:albums).all
     a.should == [EagerBand.load(:id=>2)]
@@ -403,7 +558,7 @@ describe Sequel::Model, "#eager" do
     ds._fetch = [{:id=>5, :bands_id=>2, :p_k=>6}, {:id=>5, :bands_id=>3, :p_k=>6}]
     a = EagerBand.load(:id=>2)
     a.graph_members.should == [EagerBandMember.load(:id=>5)]
-    DB.sqls.should == ['SELECT members.id, bands.id AS bands_id, bands.p_k FROM (SELECT members.* FROM members INNER JOIN bm ON ((bm.member_id = members.id) AND (bm.band_id = 2))) AS members LEFT OUTER JOIN bm AS bm_0 ON (bm_0.member_id = members.id) LEFT OUTER JOIN bands ON (bands.id = bm_0.band_id) ORDER BY bands.id']
+    DB.sqls.should == ['SELECT members.id, bands.id AS bands_id, bands.p_k FROM (SELECT members.* FROM members INNER JOIN bm ON (bm.member_id = members.id) WHERE (bm.band_id = 2)) AS members LEFT OUTER JOIN bm AS bm_0 ON (bm_0.member_id = members.id) LEFT OUTER JOIN bands ON (bands.id = bm_0.band_id) ORDER BY bands.id']
     a.graph_members.first.bands.should == [EagerBand.load(:id=>2, :p_k=>6), EagerBand.load(:id=>3, :p_k=>6)]
     DB.sqls.should == []
   end
@@ -412,19 +567,19 @@ describe Sequel::Model, "#eager" do
     EagerBandMember.many_to_many :good_bands, :clone=>:bands, :conditions=>{:a=>32}
     a = EagerBandMember.eager(:good_bands).all
     a.should == [EagerBandMember.load(:id => 5)]
-    DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON ((bm.band_id = bands.id) AND (bm.member_id IN (5))) WHERE (a = 32) ORDER BY id']
+    DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON (bm.band_id = bands.id) WHERE ((a = 32) AND (bm.member_id IN (5))) ORDER BY id']
     a.first.good_bands.should == [EagerBand.load(:id => 2)]
     DB.sqls.should == []
 
     EagerBandMember.many_to_many :good_bands, :clone=>:bands, :conditions=>"x = 1"
     a = EagerBandMember.eager(:good_bands).all
-    DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON ((bm.band_id = bands.id) AND (bm.member_id IN (5))) WHERE (x = 1) ORDER BY id']
+    DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON (bm.band_id = bands.id) WHERE ((x = 1) AND (bm.member_id IN (5))) ORDER BY id']
   end
   
   it "should respect :order when eagerly loading" do
     a = EagerBandMember.eager(:bands).all
     a.should == [EagerBandMember.load(:id => 5)]
-    DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON ((bm.band_id = bands.id) AND (bm.member_id IN (5))) ORDER BY id']
+    DB.sqls.should == ['SELECT * FROM members', 'SELECT bands.*, bm.member_id AS x_foreign_key_x FROM bands INNER JOIN bm ON (bm.band_id = bands.id) WHERE (bm.member_id IN (5)) ORDER BY id']
     a.first.bands.should == [EagerBand.load(:id => 2)]
     DB.sqls.should == []
   end
@@ -459,14 +614,14 @@ describe Sequel::Model, "#eager" do
   
   it "should use the association's block when eager loading by default" do
     EagerAlbum.eager(:good_tracks).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE ((tracks.album_id IN (1)) AND (name = 'Good'))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE ((name = 'Good') AND (tracks.album_id IN (1)))"]
   end
 
   it "should use the eager_block option when eager loading if given" do
     EagerBand.eager(:good_albums).all
-    DB.sqls.should == ['SELECT * FROM bands', "SELECT * FROM albums WHERE ((albums.band_id IN (2)) AND (name = 'good'))"]
+    DB.sqls.should == ['SELECT * FROM bands', "SELECT * FROM albums WHERE ((name = 'good') AND (albums.band_id IN (2)))"]
     EagerBand.eager(:good_albums=>:good_tracks).all
-    DB.sqls.should == ['SELECT * FROM bands', "SELECT * FROM albums WHERE ((albums.band_id IN (2)) AND (name = 'good'))", "SELECT * FROM tracks WHERE ((tracks.album_id IN (1)) AND (name = 'Good'))"]
+    DB.sqls.should == ['SELECT * FROM bands', "SELECT * FROM albums WHERE ((name = 'good') AND (albums.band_id IN (2)))", "SELECT * FROM tracks WHERE ((name = 'Good') AND (tracks.album_id IN (1)))"]
   end
 
   it "should raise an error when attempting to eagerly load an association with the :allow_eager option set to false" do
@@ -480,7 +635,7 @@ describe Sequel::Model, "#eager" do
     EagerAlbum.eager(:track_names).all
     DB.sqls.should == ['SELECT * FROM albums', "SELECT id, name FROM tracks WHERE (tracks.album_id IN (1))"]
     EagerAlbum.eager(:genre_names).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT id, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT id, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
   end
 
   it "should respect many_to_one association's :qualify option" do
@@ -530,22 +685,40 @@ describe Sequel::Model, "#eager" do
     EagerAlbum.many_to_many :special_genres, :class=>:EagerGenre, :left_primary_key=>[:band_id, :id], :left_key=>[:l1, :l2], :right_primary_key=>[:xxx, :id], :right_key=>[:r1, :r2], :join_table=>:ag
     EagerGenre.dataset._fetch = [{:x_foreign_key_0_x=>2, :x_foreign_key_1_x=>1, :id=>5}, {:x_foreign_key_0_x=>2, :x_foreign_key_1_x=>1, :id=>6}]
     as = EagerAlbum.eager(:special_genres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.l1 AS x_foreign_key_0_x, ag.l2 AS x_foreign_key_1_x FROM genres INNER JOIN ag ON ((ag.r1 = genres.xxx) AND (ag.r2 = genres.id) AND ((ag.l1, ag.l2) IN ((2, 1))))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.l1 AS x_foreign_key_0_x, ag.l2 AS x_foreign_key_1_x FROM genres INNER JOIN ag ON ((ag.r1 = genres.xxx) AND (ag.r2 = genres.id)) WHERE ((ag.l1, ag.l2) IN ((2, 1)))"]
     as.length.should == 1
     as.first.special_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)]
   end
   
+  it "should respect one_through_one association's composite keys" do
+    EagerAlbum.one_through_one :special_genre, :class=>:EagerGenre, :left_primary_key=>[:band_id, :id], :left_key=>[:l1, :l2], :right_primary_key=>[:xxx, :id], :right_key=>[:r1, :r2], :join_table=>:ag
+    EagerGenre.dataset._fetch = [{:x_foreign_key_0_x=>2, :x_foreign_key_1_x=>1, :id=>5}]
+    as = EagerAlbum.eager(:special_genre).all
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.l1 AS x_foreign_key_0_x, ag.l2 AS x_foreign_key_1_x FROM genres INNER JOIN ag ON ((ag.r1 = genres.xxx) AND (ag.r2 = genres.id)) WHERE ((ag.l1, ag.l2) IN ((2, 1)))"]
+    as.length.should == 1
+    as.first.special_genre.should == EagerGenre.load(:id=>5)
+  end
+  
   it "should respect many_to_many association's :left_primary_key and :right_primary_key options" do
     EagerAlbum.many_to_many :special_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_primary_key=>:xxx, :right_key=>:genre_id, :join_table=>:ag
     EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}]
     as = EagerAlbum.eager(:special_genres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.xxx) AND (ag.album_id IN (2)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.xxx) WHERE (ag.album_id IN (2))"]
     as.length.should == 1
     as.first.special_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)]
   end
 
-  it "should respect the :limit option on a one_to_many association" do
-    EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>2
+  it "should respect one_through_one association's :left_primary_key and :right_primary_key options" do
+    EagerAlbum.one_through_one :special_genre, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_primary_key=>:xxx, :right_key=>:genre_id, :join_table=>:ag
+    EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}]
+    as = EagerAlbum.eager(:special_genre).all
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.xxx) WHERE (ag.album_id IN (2))"]
+    as.length.should == 1
+    as.first.special_genre.should == EagerGenre.load(:id=>5)
+  end
+
+  it "should respect the :limit option on a one_to_many association using the :ruby strategy" do
+    EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>2, :eager_limit_strategy=>:ruby
     EagerTrack.dataset._fetch = [{:album_id=>1, :id=>2}, {:album_id=>1, :id=>3}, {:album_id=>1, :id=>4}]
     as = EagerAlbum.eager(:first_two_tracks).all
     DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE (tracks.album_id IN (1))"]
@@ -553,43 +726,60 @@ describe Sequel::Model, "#eager" do
     as.first.first_two_tracks.should == [EagerTrack.load(:album_id=>1, :id=>2), EagerTrack.load(:album_id=>1, :id=>3)]
 
     DB.reset
-    EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>[1,1]
+    EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>[1,1], :eager_limit_strategy=>:ruby
     as = EagerAlbum.eager(:first_two_tracks).all
     DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE (tracks.album_id IN (1))"]
     as.length.should == 1
     as.first.first_two_tracks.should == [EagerTrack.load(:album_id=>1, :id=>3)]
 
     DB.reset
-    EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>[nil,1]
+    EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>[nil,1], :eager_limit_strategy=>:ruby
     as = EagerAlbum.eager(:first_two_tracks).all
     DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM tracks WHERE (tracks.album_id IN (1))"]
     as.length.should == 1
     as.first.first_two_tracks.should == [EagerTrack.load(:album_id=>1, :id=>3), EagerTrack.load(:album_id=>1, :id=>4)]
   end
 
-  it "should respect the :limit option on a one_to_many association using the :window_function strategy" do
-    def (EagerTrack.dataset).supports_window_functions?() true end
+  it "should respect the :limit option on a one_to_many association" do
     EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>2
     a = EagerAlbum.eager(:tracks).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
-    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x <= 2)']
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) ORDER BY name LIMIT 2) AS t1']
     a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
     DB.sqls.should == []
-  end
   
-  it "should respect the :limit option with an offset on a one_to_many association using the :window_function strategy" do
-    def (EagerTrack.dataset).supports_window_functions?() true end
     EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>[2, 1]
     a = EagerAlbum.eager(:tracks).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
-    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 4))']
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) ORDER BY name LIMIT 2 OFFSET 1) AS t1']
+    a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
+    DB.sqls.should == []
+  
+    EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>[nil, 1]
+    a = EagerAlbum.eager(:tracks).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE (1 = tracks.album_id) ORDER BY name OFFSET 1) AS t1']
     a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
     DB.sqls.should == []
   end
   
-  it "should respect the :limit option with just an offset on a one_to_many association using the :window_function strategy" do
+  it "should respect the :limit option on a one_to_many association using the :window_function strategy" do
     def (EagerTrack.dataset).supports_window_functions?() true end
-    EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>[nil, 1]
+    EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>2, :eager_limit_strategy=>:window_function
+    a = EagerAlbum.eager(:tracks).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x <= 2)']
+    a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
+    DB.sqls.should == []
+  
+    EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>[2, 1], :eager_limit_strategy=>:window_function
+    a = EagerAlbum.eager(:tracks).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 4))']
+    a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
+    DB.sqls.should == []
+  
+    EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>[nil, 1], :eager_limit_strategy=>:window_function
     a = EagerAlbum.eager(:tracks).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
     DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE (tracks.album_id IN (1))) AS t1 WHERE (x_sequel_row_number_x >= 2)']
@@ -597,49 +787,96 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == []
   end
   
-  it "should respect the limit option on a many_to_many association" do
-    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>2
+  it "should use a ruby strategy for limit if :eager_graph option is used" do
+    EagerTrack.many_to_one :album2, :clone=>:album
+    EagerAlbum.one_to_many :first_two_tracks, :class=>:EagerTrack, :key=>:album_id, :limit=>2, :eager_graph=>:album2
+    EagerTrack.dataset._fetch = [{:album_id=>1, :id=>2, :album2_id=>1, :band_id=>5}, {:album_id=>1, :id=>3, :album2_id=>1, :band_id=>5}, {:album_id=>1, :id=>4, :album2_id=>1, :band_id=>5}]
+    as = EagerAlbum.eager(:first_two_tracks).all
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT tracks.id, tracks.album_id, album2.id AS album2_id, album2.band_id FROM tracks LEFT OUTER JOIN albums AS album2 ON (album2.id = tracks.album_id) WHERE (tracks.album_id IN (1))"]
+    as.length.should == 1
+    tracks = as.first.first_two_tracks
+    tracks.should == [EagerTrack.load(:album_id=>1, :id=>2), EagerTrack.load(:album_id=>1, :id=>3)]
+    tracks.first.album2.should == EagerAlbum.load(:id=>1, :band_id=>5)
+    tracks.last.album2.should == EagerAlbum.load(:id=>1, :band_id=>5)
+  end
+  
+  it "should not use a union strategy for limit by default if providing a per-eager load callback" do
+    def (EagerTrack.dataset).supports_window_functions?() true end
+    EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>2
+    a = EagerAlbum.eager(:tracks=>proc{|ds| ds.where(:id=>3)}).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id ORDER BY name) AS x_sequel_row_number_x FROM tracks WHERE ((tracks.album_id IN (1)) AND (id = 3))) AS t1 WHERE (x_sequel_row_number_x <= 2)']
+    a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
+    DB.sqls.should == []
+  end
+
+  it "should respect the limit option on a many_to_many association using the :ruby strategy" do
+    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>2, :eager_limit_strategy=>:ruby
     EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}, {:x_foreign_key_x=>2, :id=>7}]
     as = EagerAlbum.eager(:first_two_genres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (2))"]
     as.length.should == 1
     as.first.first_two_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)]
     
     EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}, {:x_foreign_key_x=>2, :id=>7}]
-    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[1, 1]
+    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[1, 1], :eager_limit_strategy=>:ruby
     as = EagerAlbum.eager(:first_two_genres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (2))"]
     as.length.should == 1
     as.first.first_two_genres.should == [EagerGenre.load(:id=>6)]
     
     EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}, {:x_foreign_key_x=>2, :id=>7}]
-    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[nil, 1]
+    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[nil, 1], :eager_limit_strategy=>:ruby
     as = EagerAlbum.eager(:first_two_genres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (2))"]
     as.length.should == 1
     as.first.first_two_genres.should == [EagerGenre.load(:id=>6), EagerGenre.load(:id=>7)]
   end
 
-  it "should respect the limit option on a many_to_many association using the :window_function strategy" do
+  it "should respect the limit option on a many_to_many association" do
     def (EagerGenre.dataset).supports_window_functions?() true end
     EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>2, :order=>:name
     EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}]
     as = EagerAlbum.eager(:first_two_genres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))) AS t1 WHERE (x_sequel_row_number_x <= 2)"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (2 = ag.album_id) ORDER BY name LIMIT 2) AS t1"]
     as.length.should == 1
     as.first.first_two_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)]
 
     EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}]
     EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[1, 1], :order=>:name
     as = EagerAlbum.eager(:first_two_genres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 3))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (2 = ag.album_id) ORDER BY name LIMIT 1 OFFSET 1) AS t1"]
     as.length.should == 1
     as.first.first_two_genres.should == [EagerGenre.load(:id=>5)]
 
     EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}]
     EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[nil, 1], :order=>:name
     as = EagerAlbum.eager(:first_two_genres).all
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (2)))) AS t1 WHERE (x_sequel_row_number_x >= 2)"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (2 = ag.album_id) ORDER BY name OFFSET 1) AS t1"]
+    as.length.should == 1
+    as.first.first_two_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)]
+  end
+
+  it "should respect the limit option on a many_to_many association using the :window_function strategy" do
+    def (EagerGenre.dataset).supports_window_functions?() true end
+    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>2, :order=>:name, :eager_limit_strategy=>:window_function
+    EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}]
+    as = EagerAlbum.eager(:first_two_genres).all
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (2))) AS t1 WHERE (x_sequel_row_number_x <= 2)"]
+    as.length.should == 1
+    as.first.first_two_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)]
+
+    EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}]
+    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[1, 1], :order=>:name, :eager_limit_strategy=>:window_function
+    as = EagerAlbum.eager(:first_two_genres).all
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (2))) AS t1 WHERE ((x_sequel_row_number_x >= 2) AND (x_sequel_row_number_x < 3))"]
+    as.length.should == 1
+    as.first.first_two_genres.should == [EagerGenre.load(:id=>5)]
+
+    EagerGenre.dataset._fetch = [{:x_foreign_key_x=>2, :id=>5}, {:x_foreign_key_x=>2, :id=>6}]
+    EagerAlbum.many_to_many :first_two_genres, :class=>:EagerGenre, :left_primary_key=>:band_id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[nil, 1], :order=>:name, :eager_limit_strategy=>:window_function
+    as = EagerAlbum.eager(:first_two_genres).all
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT * FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id ORDER BY name) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (2))) AS t1 WHERE (x_sequel_row_number_x >= 2)"]
     as.length.should == 1
     as.first.first_two_genres.should == [EagerGenre.load(:id=>5), EagerGenre.load(:id=>6)]
   end
@@ -686,7 +923,7 @@ describe Sequel::Model, "#eager" do
     EagerAlbum.many_to_many :al_genres, :class=>'EagerGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :uniq=>true
     EagerGenre.dataset._fetch = [{:x_foreign_key_x=>1, :id=>8}, {:x_foreign_key_x=>1, :id=>8}]
     a = EagerAlbum.eager(:al_genres).all.first
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
     a.should == EagerAlbum.load(:id => 1, :band_id => 2)
     a.al_genres.should == [EagerGenre.load(:id=>8)]
   end
@@ -694,7 +931,7 @@ describe Sequel::Model, "#eager" do
   it "should respect :distinct option when eagerly loading many_to_many associations" do
     EagerAlbum.many_to_many :al_genres, :class=>'EagerGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :distinct=>true
     a = EagerAlbum.eager(:al_genres).all.first
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT DISTINCT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT DISTINCT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
     a.should == EagerAlbum.load(:id => 1, :band_id => 2)
     a.al_genres.should == [EagerGenre.load(:id=>4)]
   end
@@ -727,18 +964,26 @@ describe Sequel::Model, "#eager" do
   it "should eagerly load a many_to_many association with custom eager block" do
     a = EagerAlbum.eager(:genres => proc {|ds| ds.select(:name)}).all
     a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
-    DB.sqls.should == ['SELECT * FROM albums', "SELECT name, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT name, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
     a.first.genres.should == [EagerGenre.load(:id => 4)]
     DB.sqls.should == []
   end
 
+  it "should eagerly load a one_through_one association with custom eager block" do
+    a = EagerAlbum.eager(:genre => proc {|ds| ds.select(:name)}).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', "SELECT name, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
+    a.first.genre.should == EagerGenre.load(:id => 4)
+    DB.sqls.should == []
+  end
+
   it "should allow cascading of eager loading within a custom eager block" do
     a = EagerTrack.eager(:album => proc {|ds| ds.eager(:band => :members)}).all
     a.should == [EagerTrack.load(:id => 3, :album_id => 1)]
     DB.sqls.should == ['SELECT * FROM tracks',
       'SELECT * FROM albums WHERE (albums.id IN (1))',
       'SELECT * FROM bands WHERE (bands.id IN (2))',
-      "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON ((bm.member_id = members.id) AND (bm.band_id IN (2)))"]
+      "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON (bm.member_id = members.id) WHERE (bm.band_id IN (2))"]
     a = a.first
     a.album.should == EagerAlbum.load(:id => 1, :band_id => 2)
     a.album.band.should == EagerBand.load(:id => 2)
@@ -752,7 +997,7 @@ describe Sequel::Model, "#eager" do
     DB.sqls.should == ['SELECT * FROM tracks',
       'SELECT id, band_id FROM albums WHERE (albums.id IN (1))',
       'SELECT * FROM bands WHERE (bands.id IN (2))',
-      "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON ((bm.member_id = members.id) AND (bm.band_id IN (2)))"]
+      "SELECT members.*, bm.band_id AS x_foreign_key_x FROM members INNER JOIN bm ON (bm.member_id = members.id) WHERE (bm.band_id IN (2))"]
     a = a.first
     a.album.should == EagerAlbum.load(:id => 1, :band_id => 2)
     a.album.band.should == EagerBand.load(:id => 2)
@@ -789,7 +1034,7 @@ describe Sequel::Model, "#eager" do
 
   it "should call both association and custom eager blocks" do
     EagerBand.eager(:good_albums => proc {|ds| ds.select(:name)}).all
-    DB.sqls.should == ['SELECT * FROM bands', "SELECT name FROM albums WHERE ((albums.band_id IN (2)) AND (name = 'good'))"]
+    DB.sqls.should == ['SELECT * FROM bands', "SELECT name FROM albums WHERE ((name = 'good') AND (albums.band_id IN (2)))"]
   end
 end
 
@@ -800,7 +1045,9 @@ describe Sequel::Model, "#eager_graph" do
       columns :id, :band_id
       many_to_one :band, :class=>'GraphBand', :key=>:band_id
       one_to_many :tracks, :class=>'GraphTrack', :key=>:album_id
+      one_to_one :track, :class=>'GraphTrack', :key=>:album_id
       many_to_many :genres, :class=>'GraphGenre', :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag
+      one_through_one :genre, :clone=>:genres
       many_to_one :previous_album, :class=>'GraphAlbum'
     end
 
@@ -902,6 +1149,21 @@ describe Sequel::Model, "#eager_graph" do
     a.album.band.members.should == [GraphBandMember.load(:id => 5)]
   end
   
+  it "should set up correct inner joins when using association_join" do
+    GraphAlbum.association_join(:band).sql.should == 'SELECT * FROM albums INNER JOIN bands AS band ON (band.id = albums.band_id)'
+    GraphAlbum.association_join(:track).sql.should == 'SELECT * FROM albums INNER JOIN tracks AS track ON (track.album_id = albums.id)'
+    GraphAlbum.association_join(:tracks).sql.should == 'SELECT * FROM albums INNER JOIN tracks ON (tracks.album_id = albums.id)'
+    GraphAlbum.association_join(:genres).sql.should == 'SELECT * FROM albums INNER JOIN ag ON (ag.album_id = albums.id) INNER JOIN genres ON (genres.id = ag.genre_id)'
+    GraphAlbum.association_join(:genre).sql.should == 'SELECT * FROM albums INNER JOIN ag ON (ag.album_id = albums.id) INNER JOIN genres AS genre ON (genre.id = ag.genre_id)'
+  end
+  
+  it "should set up correct join types when using association_*_join" do
+    GraphAlbum.association_inner_join(:band).sql.should == 'SELECT * FROM albums INNER JOIN bands AS band ON (band.id = albums.band_id)'
+    GraphAlbum.association_left_join(:track).sql.should == 'SELECT * FROM albums LEFT JOIN tracks AS track ON (track.album_id = albums.id)'
+    GraphAlbum.association_right_join(:tracks).sql.should == 'SELECT * FROM albums RIGHT JOIN tracks ON (tracks.album_id = albums.id)'
+    GraphAlbum.association_full_join(:genres).sql.should == 'SELECT * FROM albums FULL JOIN ag ON (ag.album_id = albums.id) FULL JOIN genres ON (genres.id = ag.genre_id)'
+  end
+  
   it "should eagerly load a single many_to_one association" do
     ds = GraphAlbum.eager_graph(:band)
     ds.sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)'
@@ -922,8 +1184,16 @@ describe Sequel::Model, "#eager_graph" do
     a.first.band_id.should == GraphBand.load(:id => 2, :vocalist_id=>3)
   end
   
+  it "should support :join_type eager_graph option one_to_one association" do
+    ds = GraphAlbum.eager_graph_with_options(:track, :join_type=>:inner)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, track.id AS track_id, track.album_id FROM albums INNER JOIN tracks AS track ON (track.album_id = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :track_id=>3, :album_id=>1}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.track.should == GraphTrack.load(:id => 3, :album_id=>1)
+  end
+
   it "should eagerly load a single one_to_one association" do
-    GraphAlbum.one_to_one :track, :class=>'GraphTrack', :key=>:album_id
     ds = GraphAlbum.eager_graph(:track)
     ds.sql.should == 'SELECT albums.id, albums.band_id, track.id AS track_id, track.album_id FROM albums LEFT OUTER JOIN tracks AS track ON (track.album_id = albums.id)'
     ds._fetch = {:id=>1, :band_id=>2, :track_id=>3, :album_id=>1}
@@ -932,6 +1202,45 @@ describe Sequel::Model, "#eager_graph" do
     a.first.track.should == GraphTrack.load(:id => 3, :album_id=>1)
   end
 
+  it "should eagerly graph a single one_to_one association using the :distinct_on strategy" do
+    sub = Class.new(GraphTrack)
+    def (sub.dataset).supports_distinct_on?() true end
+    def (sub.dataset).columns() [:id, :album_id] end
+    GraphAlbum.one_to_one :ltrack, :clone=>:track, :class=>sub
+    ds = GraphAlbum.eager_graph_with_options(:ltrack, :limit_strategy=>true)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, ltrack.id AS ltrack_id, ltrack.album_id FROM albums LEFT OUTER JOIN (SELECT DISTINCT ON (tracks.album_id) * FROM tracks ORDER BY tracks.album_id) AS ltrack ON (ltrack.album_id = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :ltrack_id=>3, :album_id=>1}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.ltrack.should == sub.load(:id => 3, :album_id=>1)
+  end
+  
+  it "should eagerly graph a single one_to_one association using the :window_function strategy" do
+    sub = Class.new(GraphTrack)
+    def (sub.dataset).supports_window_functions?() true end
+    def (sub.dataset).columns() [:id, :album_id] end
+    GraphAlbum.one_to_one :ltrack, :clone=>:track, :class=>sub
+    ds = GraphAlbum.eager_graph_with_options(:ltrack, :limit_strategy=>true)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, ltrack.id AS ltrack_id, ltrack.album_id FROM albums LEFT OUTER JOIN (SELECT id, album_id FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x = 1)) AS ltrack ON (ltrack.album_id = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :ltrack_id=>3, :album_id=>1}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.ltrack.should == sub.load(:id => 3, :album_id=>1)
+  end
+  
+  it "should eagerly graph a single one_to_one association using the :correlated_subquery strategy" do
+    sub = Class.new(GraphTrack)
+    def (sub.dataset).supports_window_functions?() true end
+    def (sub.dataset).columns() [:id, :album_id] end
+    GraphAlbum.one_to_one :ltrack, :clone=>:track, :class=>sub
+    ds = GraphAlbum.eager_graph_with_options(:ltrack, :limit_strategy=>:correlated_subquery)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, ltrack.id AS ltrack_id, ltrack.album_id FROM albums LEFT OUTER JOIN (SELECT * FROM tracks WHERE (tracks.id IN (SELECT t1.id FROM tracks AS t1 WHERE (t1.album_id = tracks.album_id) LIMIT 1))) AS ltrack ON (ltrack.album_id = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :ltrack_id=>3, :album_id=>1}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.ltrack.should == sub.load(:id => 3, :album_id=>1)
+  end
+  
   it "should eagerly load a single one_to_many association" do
     ds = GraphAlbum.eager_graph(:tracks)
     ds.sql.should == 'SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)'
@@ -941,6 +1250,19 @@ describe Sequel::Model, "#eager_graph" do
     a.first.tracks.should == [GraphTrack.load(:id => 3, :album_id=>1)]
   end
 
+  it "should eagerly graph a single one_to_many association using the :window_function strategy" do
+    sub = Class.new(GraphTrack)
+    def (sub.dataset).supports_window_functions?() true end
+    def (sub.dataset).columns() [:id, :album_id] end
+    GraphAlbum.one_to_many :ltracks, :clone=>:tracks, :limit=>2, :class=>sub
+    ds = GraphAlbum.eager_graph_with_options(:ltracks, :limit_strategy=>true)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, ltracks.id AS ltracks_id, ltracks.album_id FROM albums LEFT OUTER JOIN (SELECT id, album_id FROM (SELECT *, row_number() OVER (PARTITION BY tracks.album_id) AS x_sequel_row_number_x FROM tracks) AS t1 WHERE (x_sequel_row_number_x <= 2)) AS ltracks ON (ltracks.album_id = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :ltracks_id=>3, :album_id=>1}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.ltracks.should == [sub.load(:id => 3, :album_id=>1)]
+  end
+  
   it "should eagerly load a single many_to_many association" do
     ds = GraphAlbum.eager_graph(:genres)
     ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id)'
@@ -950,11 +1272,62 @@ describe Sequel::Model, "#eager_graph" do
     a.first.genres.should == [GraphGenre.load(:id => 4)]
   end
 
-  it "should correctly handle an aliased join table in many_to_many" do
+  it "should eagerly graph a single many_to_many association using the :window_function strategy" do
+    sub = Class.new(GraphGenre)
+    def (sub.dataset).supports_window_functions?() true end
+    def (sub.dataset).columns() literal(opts[:select]) =~ /x_foreign_key_x/ ? [:id, :x_foreign_key_x] : [:id] end
+    GraphAlbum.many_to_many :lgenres, :clone=>:genres, :class=>sub, :limit=>2
+    ds = GraphAlbum.eager_graph_with_options(:lgenres, :limit_strategy=>true)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, lgenres.id AS lgenres_id FROM albums LEFT OUTER JOIN (SELECT id, x_foreign_key_x FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id)) AS t1 WHERE (x_sequel_row_number_x <= 2)) AS lgenres ON (lgenres.x_foreign_key_x = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :lgenres_id=>4}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.lgenres.should == [sub.load(:id => 4)]
+  end
+  
+  it "should eagerly load a single one_through_one association" do
+    ds = GraphAlbum.eager_graph(:genre)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, genre.id AS genre_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS genre ON (genre.id = ag.genre_id)'
+    ds._fetch = {:id=>1, :band_id=>2, :genre_id=>4}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.genre.should == GraphGenre.load(:id => 4)
+  end
+
+  it "should eagerly graph a single one_through_one association using the :distinct_on strategy" do
+    sub = Class.new(GraphGenre)
+    def (sub.dataset).supports_distinct_on?() true end
+    def (sub.dataset).columns() [:id] end
+    GraphAlbum.one_through_one :lgenre, :clone=>:genre, :class=>sub
+    ds = GraphAlbum.eager_graph_with_options(:lgenre, :limit_strategy=>true)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, lgenre.id AS lgenre_id FROM albums LEFT OUTER JOIN (SELECT DISTINCT ON (ag.album_id) genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) ORDER BY ag.album_id) AS lgenre ON (lgenre.x_foreign_key_x = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :lgenre_id=>4}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.lgenre.should == sub.load(:id => 4)
+  end
+  
+  it "should eagerly graph a single one_through_one association using the :window_function strategy" do
+    sub = Class.new(GraphGenre)
+    def (sub.dataset).supports_window_functions?() true end
+    def (sub.dataset).columns() literal(opts[:select]) =~ /x_foreign_key_x/ ? [:id, :x_foreign_key_x] : [:id] end
+    GraphAlbum.one_through_one :lgenre, :clone=>:genre, :class=>sub
+    ds = GraphAlbum.eager_graph_with_options(:lgenre, :limit_strategy=>true)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, lgenre.id AS lgenre_id FROM albums LEFT OUTER JOIN (SELECT id, x_foreign_key_x FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id) AS x_sequel_row_number_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id)) AS t1 WHERE (x_sequel_row_number_x = 1)) AS lgenre ON (lgenre.x_foreign_key_x = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :lgenre_id=>4}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.lgenre.should == sub.load(:id => 4)
+  end
+  
+  it "should correctly handle an aliased join table in many_to_many and one_through_one" do
     c = Class.new(GraphAlbum)
     c.many_to_many :genres, :clone=>:genres, :join_table=>:ag___ga
     c.eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag AS ga ON (ga.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ga.genre_id)'
 
+    c.many_to_many :genre, :clone=>:genre, :join_table=>:ag___ga
+    c.eager_graph(:genre).sql.should == 'SELECT albums.id, albums.band_id, genre.id AS genre_id FROM albums LEFT OUTER JOIN ag AS ga ON (ga.album_id = albums.id) LEFT OUTER JOIN genres AS genre ON (genre.id = ga.genre_id)'
+
     c.many_to_many :genres, :clone=>:genres, :join_table=>:ag___albums
     c.eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag AS albums_0 ON (albums_0.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = albums_0.genre_id)'
 
@@ -962,6 +1335,10 @@ describe Sequel::Model, "#eager_graph" do
     c.eager_graph(:genres).sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag AS genres_0 ON (genres_0.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = genres_0.genre_id)'
   end
   
+  it "should handle multiple associations in a single call to association_join" do
+    GraphAlbum.association_join(:genres, :tracks, :band).sql.should == 'SELECT * FROM albums INNER JOIN ag ON (ag.album_id = albums.id) INNER JOIN genres ON (genres.id = ag.genre_id) INNER JOIN tracks ON (tracks.album_id = albums.id) INNER JOIN bands AS band ON (band.id = albums.band_id)'
+  end
+
   it "should eagerly load multiple associations in a single call" do 
     ds = GraphAlbum.eager_graph(:genres, :tracks, :band)
     ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id, tracks.id AS tracks_id, tracks.album_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id) LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)'
@@ -974,6 +1351,28 @@ describe Sequel::Model, "#eager_graph" do
     a.genres.should == [GraphGenre.load(:id => 4)]
   end
 
+  it "should eagerly load multiple associations with different limit strategies in a single call" do 
+    subg = Class.new(GraphGenre)
+    def (subg.dataset).supports_distinct_on?() true end
+    def (subg.dataset).supports_window_functions?() true end
+    def (subg.dataset).columns() literal(opts[:select]) =~ /x_foreign_key_x/ ? [:id, :x_foreign_key_x] : [:id] end
+    GraphAlbum.one_through_one :lgenre, :clone=>:genre, :class=>subg
+    GraphAlbum.many_to_many :lgenres, :clone=>:genres, :class=>subg, :limit=>2
+
+    ds = GraphAlbum.eager_graph_with_options([:lgenre, :lgenres], :limit_strategy=>{:lgenre=>:distinct_on, :lgenres=>:window_function})
+    ds.sql.should == 'SELECT albums.id, albums.band_id, lgenre.id AS lgenre_id, lgenres.id AS lgenres_id FROM albums LEFT OUTER JOIN (SELECT DISTINCT ON (ag.album_id) genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) ORDER BY ag.album_id) AS lgenre ON (lgenre.x_foreign_key_x = albums.id) LEFT OUTER JOIN (SELECT id, x_foreign_key_x FROM (SELECT genres.*, ag.album_id AS x_foreign_key_x, row_number() OVER (PARTITION BY ag.album_id) AS x_sequel_r [...]
+    ds._fetch = {:id=>1, :band_id=>2, :lgenres_id=>4, :lgenre_id=>3}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a = a.first
+    a.lgenre.should == subg.load(:id => 3)
+    a.lgenres.should == [subg.load(:id => 4)]
+  end
+  
+  it "should handle multiple associations in separate calls to association_join" do
+    GraphAlbum.association_join(:genres).association_join(:tracks).association_join(:band).sql.should == 'SELECT * FROM albums INNER JOIN ag ON (ag.album_id = albums.id) INNER JOIN genres ON (genres.id = ag.genre_id) INNER JOIN tracks ON (tracks.album_id = albums.id) INNER JOIN bands AS band ON (band.id = albums.band_id)'
+  end
+
   it "should eagerly load multiple associations in separate calls" do 
     ds = GraphAlbum.eager_graph(:genres).eager_graph(:tracks).eager_graph(:band)
     ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id, tracks.id AS tracks_id, tracks.album_id, band.id AS band_id_0, band.vocalist_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id) LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id) LEFT OUTER JOIN bands AS band ON (band.id = albums.band_id)'
@@ -986,6 +1385,15 @@ describe Sequel::Model, "#eager_graph" do
     a.genres.should == [GraphGenre.load(:id => 4)]
   end
 
+  it "should handle cascading associations in a single call to association_join" do
+    GraphTrack.association_join(:album=>{:band=>:members}).sql.should == 'SELECT * FROM tracks INNER JOIN albums AS album ON (album.id = tracks.album_id) INNER JOIN bands AS band ON (band.id = album.band_id) INNER JOIN bm ON (bm.band_id = band.id) INNER JOIN members ON (members.id = bm.member_id)'
+    GraphBand.association_join({:albums=>:tracks}, :members).sql.should == 'SELECT * FROM bands INNER JOIN albums ON (albums.band_id = bands.id) INNER JOIN tracks ON (tracks.album_id = albums.id) INNER JOIN bm ON (bm.band_id = bands.id) INNER JOIN members ON (members.id = bm.member_id)'
+  end
+
+  it "should handle matching association names for different models when using association_join" do
+    GraphAlbum.association_join(:genres).association_join(:band=>:genres).sql.should == 'SELECT * FROM albums INNER JOIN ag ON (ag.album_id = albums.id) INNER JOIN genres ON (genres.id = ag.genre_id) INNER JOIN bands AS band ON (band.id = albums.band_id) INNER JOIN bg ON (bg.band_id = band.id) INNER JOIN genres AS genres_0 ON (genres_0.id = bg.genre_id)'
+  end
+
   it "should allow cascading of eager loading for associations of associated models" do
     ds = GraphTrack.eager_graph(:album=>{:band=>:members})
     ds.sql.should == 'SELECT tracks.id, tracks.album_id, album.id AS album_id_0, album.band_id, band.id AS band_id_0, band.vocalist_id, members.id AS members_id FROM tracks LEFT OUTER JOIN albums AS album ON (album.id = tracks.album_id) LEFT OUTER JOIN bands AS band ON (band.id = album.band_id) LEFT OUTER JOIN bm ON (bm.band_id = band.id) LEFT OUTER JOIN members ON (members.id = bm.member_id)'
@@ -1106,7 +1514,7 @@ describe Sequel::Model, "#eager_graph" do
     a.tracks.should == [GraphTrack.load(:id=>3, :album_id=>1)]
     a.genres.should == [GraphGenre.load(:id => 6)]
     DB.sqls.should == ['SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)',
-    "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON ((ag.genre_id = genres.id) AND (ag.album_id IN (1)))"]
+    "SELECT genres.*, ag.album_id AS x_foreign_key_x FROM genres INNER JOIN ag ON (ag.genre_id = genres.id) WHERE (ag.album_id IN (1))"]
   end
 
   it "should handle no associated records for a single many_to_one association" do
@@ -1118,6 +1526,15 @@ describe Sequel::Model, "#eager_graph" do
     a.first.band.should == nil
   end
 
+  it "should handle no associated records for a single one_to_one association" do
+    ds = GraphAlbum.eager_graph(:track)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, track.id AS track_id, track.album_id FROM albums LEFT OUTER JOIN tracks AS track ON (track.album_id = albums.id)'
+    ds._fetch = {:id=>1, :band_id=>2, :track_id=>nil, :album_id=>nil}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.track.should == nil
+  end
+
   it "should handle no associated records for a single one_to_many association" do
     ds = GraphAlbum.eager_graph(:tracks)
     ds.sql.should == 'SELECT albums.id, albums.band_id, tracks.id AS tracks_id, tracks.album_id FROM albums LEFT OUTER JOIN tracks ON (tracks.album_id = albums.id)'
@@ -1127,6 +1544,15 @@ describe Sequel::Model, "#eager_graph" do
     a.first.tracks.should == []
   end
 
+  it "should handle no associated records for a single one_through_one association" do
+    ds = GraphAlbum.eager_graph(:genre)
+    ds.sql.should == 'SELECT albums.id, albums.band_id, genre.id AS genre_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS genre ON (genre.id = ag.genre_id)'
+    ds._fetch = {:id=>1, :band_id=>2, :genres_id=>nil}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.genre.should == nil
+  end
+
   it "should handle no associated records for a single many_to_many association" do
     ds = GraphAlbum.eager_graph(:genres)
     ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres ON (genres.id = ag.genre_id)'
@@ -1384,6 +1810,11 @@ describe Sequel::Model, "#eager_graph" do
     GraphAlbum.order(:band_id).eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.id) ORDER BY band_id, right_tracks.id, right_tracks.album_id'
   end
 
+  it "should use the association's :graph_order in preference or order" do
+    GraphAlbum.one_to_many :right_tracks, :class=>'GraphTrack', :key=>:album_id, :order=>[:tracks__id, :tracks__album_id], :graph_order=>[:id, :album_id]
+    GraphAlbum.order(:band_id).eager_graph(:right_tracks).sql.should == 'SELECT albums.id, albums.band_id, right_tracks.id AS right_tracks_id, right_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS right_tracks ON (right_tracks.album_id = albums.id) ORDER BY band_id, right_tracks.id, right_tracks.album_id'
+  end
+
   it "should add the association's :order for cascading associations" do
     GraphBand.one_to_many :a_albums, :class=>'GraphAlbum', :key=>:band_id, :order=>:name, :reciprocal=>nil
     GraphAlbum.one_to_many :b_tracks, :class=>'GraphTrack', :key=>:album_id, :order=>[:id, :album_id]
@@ -1441,6 +1872,8 @@ describe Sequel::Model, "#eager_graph" do
     c1.many_to_many :a_genres, :class=>c2, :left_primary_key=>:id, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:s__ag
     ds = c1.join(:s__t, [:b_id]).eager_graph(:a_genres)
     ds.sql.should == 'SELECT a.id, a_genres.id AS a_genres_id FROM (SELECT * FROM s.a INNER JOIN s.t USING (b_id)) AS a LEFT OUTER JOIN s.ag AS ag ON (ag.album_id = a.id) LEFT OUTER JOIN s.g AS a_genres ON (a_genres.id = ag.genre_id)'
+    ds = c1.eager_graph(:a_genres)
+    ds.sql.should == 'SELECT s.a.id, a_genres.id AS a_genres_id FROM s.a LEFT OUTER JOIN s.ag AS ag ON (ag.album_id = s.a.id) LEFT OUTER JOIN s.g AS a_genres ON (a_genres.id = ag.genre_id)'
   end
 
   it "should respect :after_load callbacks on associations when eager graphing" do
@@ -1473,6 +1906,48 @@ describe Sequel::Model, "#eager_graph" do
     a.al_genres.should == [GraphGenre.load(:id=>7), GraphGenre.load(:id=>12)]
   end
 
+  it "should handle offsets on associations with no results when eager graphing" do
+    GraphAlbum.one_to_many :al_tracks, :class=>GraphTrack, :key=>:album_id, :limit=>[2, 1]
+    ds = GraphAlbum.eager_graph(:al_tracks)
+    ds.sql.should == "SELECT albums.id, albums.band_id, al_tracks.id AS al_tracks_id, al_tracks.album_id FROM albums LEFT OUTER JOIN tracks AS al_tracks ON (al_tracks.album_id = albums.id)"
+    ds._fetch = [{:id=>1, :band_id=>2, :al_tracks_id=>nil, :album_id=>nil}]
+    a = ds.all.first
+    a.should == GraphAlbum.load(:id => 1, :band_id => 2)
+    a.al_tracks.should == []
+  end
+
+  it "should respect offsets on associations when eager graphing" do
+    GraphAlbum.many_to_one :al_band, :class=>GraphBand, :key=>:band_id
+    GraphAlbum.one_to_many :al_tracks, :class=>GraphTrack, :key=>:album_id, :limit=>[1, 1]
+    GraphAlbum.many_to_many :al_genres, :class=>GraphGenre, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[1,1]
+    ds = GraphAlbum.eager_graph(:al_band, :al_tracks, :al_genres)
+    ds.sql.should == "SELECT albums.id, albums.band_id, al_band.id AS al_band_id, al_band.vocalist_id, al_tracks.id AS al_tracks_id, al_tracks.album_id, al_genres.id AS al_genres_id FROM albums LEFT OUTER JOIN bands AS al_band ON (al_band.id = albums.band_id) LEFT OUTER JOIN tracks AS al_tracks ON (al_tracks.album_id = albums.id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS al_genres ON (al_genres.id = ag.genre_id)"
+    ds._fetch = [{:id=>1, :band_id=>2, :al_band_id=>3, :vocalist_id=>4, :al_tracks_id=>5, :album_id=>6, :al_genres_id=>7},
+      {:id=>1, :band_id=>2, :al_band_id=>8, :vocalist_id=>9, :al_tracks_id=>10, :album_id=>11, :al_genres_id=>12},
+      {:id=>1, :band_id=>2, :al_band_id=>13, :vocalist_id=>14, :al_tracks_id=>15, :album_id=>16, :al_genres_id=>17}]
+    a = ds.all.first
+    a.should == GraphAlbum.load(:id => 1, :band_id => 2)
+    a.al_band.should == GraphBand.load(:id=>3, :vocalist_id=>4)
+    a.al_tracks.should == [GraphTrack.load(:id=>10, :album_id=>11)]
+    a.al_genres.should == [GraphGenre.load(:id=>12)]
+  end
+
+  it "should respect offsets on associations when eager graphing one_to_one and one_through_one associations" do
+    GraphAlbum.many_to_one :al_band, :class=>GraphBand, :key=>:band_id
+    GraphAlbum.one_to_one :al_track, :class=>GraphTrack, :key=>:album_id, :limit=>[nil, 1]
+    GraphAlbum.one_through_one :al_genre, :class=>GraphGenre, :left_key=>:album_id, :right_key=>:genre_id, :join_table=>:ag, :limit=>[nil,1]
+    ds = GraphAlbum.eager_graph(:al_band, :al_track, :al_genre)
+    ds.sql.should == "SELECT albums.id, albums.band_id, al_band.id AS al_band_id, al_band.vocalist_id, al_track.id AS al_track_id, al_track.album_id, al_genre.id AS al_genre_id FROM albums LEFT OUTER JOIN bands AS al_band ON (al_band.id = albums.band_id) LEFT OUTER JOIN tracks AS al_track ON (al_track.album_id = albums.id) LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN genres AS al_genre ON (al_genre.id = ag.genre_id)"
+    ds._fetch = [{:id=>1, :band_id=>2, :al_band_id=>3, :vocalist_id=>4, :al_track_id=>5, :album_id=>6, :al_genre_id=>7},
+      {:id=>1, :band_id=>2, :al_band_id=>8, :vocalist_id=>9, :al_track_id=>10, :album_id=>11, :al_genre_id=>12},
+      {:id=>1, :band_id=>2, :al_band_id=>13, :vocalist_id=>14, :al_track_id=>15, :album_id=>16, :al_genre_id=>17}]
+    a = ds.all.first
+    a.should == GraphAlbum.load(:id => 1, :band_id => 2)
+    a.al_band.should == GraphBand.load(:id=>3, :vocalist_id=>4)
+    a.al_track.should == GraphTrack.load(:id=>10, :album_id=>11)
+    a.al_genre.should == GraphGenre.load(:id=>12)
+  end
+
   it "should eagerly load a many_to_one association with a custom callback" do
     ds = GraphAlbum.eager_graph(:band => proc {|ds1| ds1.select(:id).columns(:id)})
     ds.sql.should == 'SELECT albums.id, albums.band_id, band.id AS band_id_0 FROM albums LEFT OUTER JOIN (SELECT id FROM bands) AS band ON (band.id = albums.band_id)'
@@ -1483,7 +1958,6 @@ describe Sequel::Model, "#eager_graph" do
   end
 
   it "should eagerly load a one_to_one association with a custom callback" do
-    GraphAlbum.one_to_one :track, :class=>'GraphTrack', :key=>:album_id
     ds = GraphAlbum.eager_graph(:track => proc {|ds1| ds1.select(:album_id).columns(:album_id)})
     ds.sql.should == 'SELECT albums.id, albums.band_id, track.album_id FROM albums LEFT OUTER JOIN (SELECT album_id FROM tracks) AS track ON (track.album_id = albums.id)'
     ds._fetch = {:id=>1, :band_id=>2, :album_id=>1}
@@ -1501,6 +1975,15 @@ describe Sequel::Model, "#eager_graph" do
     a.first.tracks.should == [GraphTrack.load(:album_id=>1)]
   end
 
+  it "should eagerly load a one_through_one association with a custom callback" do
+    ds = GraphAlbum.eager_graph(:genre => proc {|ds1| ds1.select(:id).columns(:id)})
+    ds.sql.should == 'SELECT albums.id, albums.band_id, genre.id AS genre_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN (SELECT id FROM genres) AS genre ON (genre.id = ag.genre_id)'
+    ds._fetch = {:id=>1, :band_id=>2, :genre_id=>4}
+    a = ds.all
+    a.should == [GraphAlbum.load(:id => 1, :band_id => 2)]
+    a.first.genre.should == GraphGenre.load(:id => 4)
+  end
+
   it "should eagerly load a many_to_many association with a custom callback" do
     ds = GraphAlbum.eager_graph(:genres => proc {|ds1| ds1.select(:id).columns(:id)})
     ds.sql.should == 'SELECT albums.id, albums.band_id, genres.id AS genres_id FROM albums LEFT OUTER JOIN ag ON (ag.album_id = albums.id) LEFT OUTER JOIN (SELECT id FROM genres) AS genres ON (genres.id = ag.genre_id)'
@@ -1534,3 +2017,33 @@ describe Sequel::Model, "#eager_graph" do
     a.album.tracks.should == [GraphTrack.load(:id => 3, :album_id => 1)]
   end
 end
+
+describe "Sequel::Models with double underscores in table names" do
+  before do
+    @db = Sequel.mock(:fetch=>{:id=>1, :foo_id=>2})
+    @Foo = Class.new(Sequel::Model(@db[Sequel.identifier(:fo__os)]))
+    @Foo.columns :id, :foo_id
+    @Foo.one_to_many :foos, :class=>@Foo
+    @db.sqls
+  end
+
+  it "should have working eager_graph implementations" do
+    @db.fetch = {:id=>1, :foo_id=>1, :foos_id=>1, :foos_foo_id=>1}
+    foos = @Foo.eager_graph(:foos).all
+    @db.sqls.should == ["SELECT fo__os.id, fo__os.foo_id, foos.id AS foos_id, foos.foo_id AS foos_foo_id FROM fo__os LEFT OUTER JOIN (SELECT * FROM fo__os) AS foos ON (foos._id = fo__os.id)"]
+    foos.should == [@Foo.load(:id=>1, :foo_id=>1)]
+    foos.first.foos.should == [@Foo.load(:id=>1, :foo_id=>1)]
+  end
+
+  it "should have working eager_graph implementations when qualified" do
+    @Foo.dataset = Sequel.identifier(:fo__os).qualify(:s)
+    @Foo.columns :id, :foo_id
+    @db.sqls
+    @db.fetch = {:id=>1, :foo_id=>1, :foos_id=>1, :foos_foo_id=>1}
+    foos = @Foo.eager_graph(:foos).all
+    @db.sqls.should == ["SELECT s.fo__os.id, s.fo__os.foo_id, foos.id AS foos_id, foos.foo_id AS foos_foo_id FROM s.fo__os LEFT OUTER JOIN (SELECT * FROM s.fo__os) AS foos ON (foos._id = s.fo__os.id)"]
+    foos.should == [@Foo.load(:id=>1, :foo_id=>1)]
+    foos.first.foos.should == [@Foo.load(:id=>1, :foo_id=>1)]
+  end
+end
+
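The describe block above relies on Sequel.identifier because a plain symbol containing a double underscore is split into a qualified identifier (:fo__os is treated as fo.os); wrapping the name keeps it as a single literal identifier. A minimal sketch against a mock database:

    require 'sequel'

    DB = Sequel.mock
    DB[:fo__os].sql                      # => "SELECT * FROM fo.os"
    DB[Sequel.identifier(:fo__os)].sql   # => "SELECT * FROM fo__os"
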
diff --git a/spec/model/model_spec.rb b/spec/model/model_spec.rb
index dc55f37..8cfefb0 100644
--- a/spec/model/model_spec.rb
+++ b/spec/model/model_spec.rb
@@ -303,7 +303,7 @@ describe Sequel::Model, "constructors" do
     block_called = false
     m = @m.new {|i| block_called = true; i.should be_a_kind_of(@m); i.values[:a] = 1}
     
-    block_called.should be_true
+    block_called.should == true
     m.values[:a].should == 1
   end
   
@@ -312,21 +312,21 @@ describe Sequel::Model, "constructors" do
     o = @m.dataset.row_proc.call(:a=>1)
     o.should be_a_kind_of(@m)
     o.values.should == {:a=>1}
-    o.new?.should be_false
+    o.new?.should == false
   end
   
   it "should have .call create an existing object" do
     o = @m.call(:a=>1)
     o.should be_a_kind_of(@m)
     o.values.should == {:a=>1}
-    o.new?.should be_false
+    o.new?.should == false
   end
   
   it "should have .load create an existing object" do
     o = @m.load(:a=>1)
     o.should be_a_kind_of(@m)
     o.values.should == {:a=>1}
-    o.new?.should be_false
+    o.new?.should == false
   end
 end
 
@@ -418,6 +418,231 @@ describe Sequel::Model, ".find" do
   end
 end
 
+describe Sequel::Model, ".finder" do
+  before do
+    @h = {:id=>1}
+    @db = Sequel.mock(:fetch=>@h)
+    @c = Class.new(Sequel::Model(@db[:items]))
+    @c.instance_eval do
+      def foo(a, b)
+        where(:bar=>a).order(b)
+      end
+    end
+    @o = @c.load(@h)
+    @db.sqls
+  end
+
+  specify "should create a method that calls the method given and returns the first instance" do
+    @c.finder :foo
+    @c.first_foo(1, 2).should == @o
+    @c.first_foo(3, 4).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1", "SELECT * FROM items WHERE (bar = 3) ORDER BY 4 LIMIT 1"]
+  end
+
+  specify "should work correctly when subclassing" do
+    @c.finder(:foo)
+    @sc = Class.new(@c)
+    @sc.set_dataset :foos
+    @db.sqls
+    @sc.first_foo(1, 2).should == @sc.load(@h)
+    @sc.first_foo(3, 4).should == @sc.load(@h)
+    @db.sqls.should == ["SELECT * FROM foos WHERE (bar = 1) ORDER BY 2 LIMIT 1", "SELECT * FROM foos WHERE (bar = 3) ORDER BY 4 LIMIT 1"]
+  end
+
+  specify "should work correctly when dataset is modified" do
+    @c.finder(:foo)
+    @c.first_foo(1, 2).should == @o
+    @c.set_dataset :foos
+    @c.first_foo(3, 4).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1", "SELECT * FROM foos LIMIT 1", "SELECT * FROM foos WHERE (bar = 3) ORDER BY 4 LIMIT 1"]
+  end
+
+  specify "should create a method based on the given block if no method symbol provided" do
+    @c.finder(:name=>:first_foo){|pl, ds| ds.where(pl.arg).limit(1)}
+    @c.first_foo(:id=>1).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (id = 1) LIMIT 1"]
+  end
+
+  specify "should raise an error if both a block and method symbol given" do
+    proc{@c.finder(:foo, :name=>:first_foo){|pl, ds| ds.where(pl.arg)}}.should raise_error(Sequel::Error)
+  end
+
+  specify "should raise an error if two option hashes are provided" do
+    proc{@c.finder({:name2=>:foo}, :name=>:first_foo){|pl, ds| ds.where(pl.arg)}}.should raise_error(Sequel::Error)
+  end
+
+  specify "should support :type option" do
+    @c.finder :foo, :type=>:all
+    @c.finder :foo, :type=>:each
+    @c.finder :foo, :type=>:get
+
+    a = []
+    @c.all_foo(1, 2){|r| a << r}.should == [@o]
+    a.should == [@o]
+   
+    a = []
+    @c.each_foo(3, 4){|r| a << r}
+    a.should == [@o]
+
+    @c.get_foo(5, 6).should == [:id, 1]
+
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2", "SELECT * FROM items WHERE (bar = 3) ORDER BY 4", "SELECT * FROM items WHERE (bar = 5) ORDER BY 6 LIMIT 1"]
+  end
+
+  specify "should support :name option" do
+    @c.finder :foo, :name=>:find_foo
+    @c.find_foo(1, 2).should == @o
+    @c.find_foo(3, 4).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1", "SELECT * FROM items WHERE (bar = 3) ORDER BY 4 LIMIT 1"]
+  end
+
+  specify "should support :arity option" do
+    def @c.foobar(*b)
+      ds = dataset
+      b.each_with_index do |a, i|
+        ds = ds.where(i=>a)
+      end
+      ds
+    end
+    @c.finder :foobar, :arity=>1, :name=>:find_foobar_1
+    @c.finder :foobar, :arity=>2, :name=>:find_foobar_2
+    @c.find_foobar_1(:a)
+    @c.find_foobar_2(:a, :b)
+    @db.sqls.should == ["SELECT * FROM items WHERE (0 = a) LIMIT 1", "SELECT * FROM items WHERE ((0 = a) AND (1 = b)) LIMIT 1"]
+  end
+
+  specify "should support :mod option" do
+    m = Module.new
+    @c.finder :foo, :mod=>m
+    proc{@c.first_foo}.should raise_error
+    @c.extend m
+    @c.first_foo(1, 2).should == @o
+    @c.first_foo(3, 4).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1", "SELECT * FROM items WHERE (bar = 3) ORDER BY 4 LIMIT 1"]
+  end
+
+  specify "should raise error when calling with the wrong arity" do
+    @c.finder :foo
+    proc{@c.first_foo(1)}.should raise_error
+    proc{@c.first_foo(1,2,3)}.should raise_error
+  end
+end
+
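Model.finder, specified above, turns an existing dataset-returning class method into an optimized lookup method that returns the first matching row. A minimal sketch, assuming an Album model with a name column and a by_name dataset method:

    class Album < Sequel::Model
      def self.by_name(name)
        where(:name => name)
      end
      finder :by_name              # defines Album.first_by_name
    end

    Album.first_by_name('RF')      # SELECT * FROM albums WHERE (name = 'RF') LIMIT 1
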
+describe Sequel::Model, ".prepared_finder" do
+  before do
+    @h = {:id=>1}
+    @db = Sequel.mock(:fetch=>@h)
+    @db.extend_datasets do
+      def select_sql
+        sql = super
+        sql << ' -- prepared' if is_a?(Sequel::Dataset::PreparedStatementMethods)
+        sql
+      end
+    end
+    @c = Class.new(Sequel::Model(@db[:items]))
+    @c.instance_eval do
+      def foo(a, b)
+        where(:bar=>a).order(b)
+      end
+    end
+    @o = @c.load(@h)
+    @db.sqls
+  end
+
+  specify "should create a method that calls the method given and returns the first instance" do
+    @c.prepared_finder :foo
+    @c.first_foo(1, 2).should == @o
+    @c.first_foo(3, 4).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1 -- prepared", "SELECT * FROM items WHERE (bar = 3) ORDER BY 4 LIMIT 1 -- prepared"]
+  end
+
+  specify "should work correctly when subclassing" do
+    @c.prepared_finder(:foo)
+    @sc = Class.new(@c)
+    @sc.set_dataset :foos
+    @db.sqls
+    @sc.first_foo(1, 2).should == @sc.load(@h)
+    @sc.first_foo(3, 4).should == @sc.load(@h)
+    @db.sqls.should == ["SELECT * FROM foos WHERE (bar = 1) ORDER BY 2 LIMIT 1 -- prepared", "SELECT * FROM foos WHERE (bar = 3) ORDER BY 4 LIMIT 1 -- prepared"]
+  end
+
+  specify "should work correctly when dataset is modified" do
+    @c.prepared_finder(:foo)
+    @c.first_foo(1, 2).should == @o
+    @c.set_dataset :foos
+    @c.first_foo(3, 4).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1 -- prepared", "SELECT * FROM foos LIMIT 1", "SELECT * FROM foos WHERE (bar = 3) ORDER BY 4 LIMIT 1 -- prepared"]
+  end
+
+  specify "should create a method based on the given block if no method symbol provided" do
+    @c.prepared_finder(:name=>:first_foo){|a1| where(:id=>a1).limit(1)}
+    @c.first_foo(1).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (id = 1) LIMIT 1 -- prepared"]
+  end
+
+  specify "should raise an error if both a block and method symbol given" do
+    proc{@c.prepared_finder(:foo, :name=>:first_foo){|pl, ds| ds.where(pl.arg)}}.should raise_error(Sequel::Error)
+  end
+
+  specify "should raise an error if two option hashes are provided" do
+    proc{@c.prepared_finder({:name2=>:foo}, :name=>:first_foo){|pl, ds| ds.where(pl.arg)}}.should raise_error(Sequel::Error)
+  end
+
+  specify "should support :type option" do
+    @c.prepared_finder :foo, :type=>:all
+    @c.prepared_finder :foo, :type=>:each
+
+    a = []
+    @c.all_foo(1, 2){|r| a << r}.should == [@o]
+    a.should == [@o]
+   
+    a = []
+    @c.each_foo(3, 4){|r| a << r}
+    a.should == [@o]
+
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 -- prepared", "SELECT * FROM items WHERE (bar = 3) ORDER BY 4 -- prepared"]
+  end
+
+  specify "should support :name option" do
+    @c.prepared_finder :foo, :name=>:find_foo
+    @c.find_foo(1, 2).should == @o
+    @c.find_foo(3, 4).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1 -- prepared", "SELECT * FROM items WHERE (bar = 3) ORDER BY 4 LIMIT 1 -- prepared"]
+  end
+
+  specify "should support :arity option" do
+    def @c.foobar(*b)
+      ds = dataset
+      b.each_with_index do |a, i|
+        ds = ds.where(i=>a)
+      end
+      ds
+    end
+    @c.prepared_finder :foobar, :arity=>1, :name=>:find_foobar_1
+    @c.prepared_finder :foobar, :arity=>2, :name=>:find_foobar_2
+    @c.find_foobar_1(:a)
+    @c.find_foobar_2(:a, :b)
+    @db.sqls.should == ["SELECT * FROM items WHERE (0 = a) LIMIT 1 -- prepared", "SELECT * FROM items WHERE ((0 = a) AND (1 = b)) LIMIT 1 -- prepared"]
+  end
+
+  specify "should support :mod option" do
+    m = Module.new
+    @c.prepared_finder :foo, :mod=>m
+    proc{@c.first_foo}.should raise_error
+    @c.extend m
+    @c.first_foo(1, 2).should == @o
+    @c.first_foo(3, 4).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1 -- prepared", "SELECT * FROM items WHERE (bar = 3) ORDER BY 4 LIMIT 1 -- prepared"]
+  end
+
+  specify "should handle models with names" do
+    def @c.name; 'foobar' end
+    @c.prepared_finder :foo
+    @c.first_foo(1, 2).should == @o
+    @db.sqls.should == ["SELECT * FROM items WHERE (bar = 1) ORDER BY 2 LIMIT 1 -- prepared"]
+  end
+end
+
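Model.prepared_finder, specified above, has the same interface but backs the generated method with a prepared statement, binding the arguments instead of literalizing them (visible in these specs as the ' -- prepared' marker appended by the select_sql override). A minimal sketch with the same hypothetical Album.by_name:

    class Album < Sequel::Model
      def self.by_name(name)
        where(:name => name)
      end
      prepared_finder :by_name     # defines Album.first_by_name, backed by a prepared statement
    end

    Album.first_by_name('RF')      # 'RF' is bound as a prepared statement argument
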
 describe Sequel::Model, ".fetch" do
   before do
     DB.reset
@@ -437,32 +662,35 @@ end
 
 describe Sequel::Model, ".find_or_create" do
   before do
-    @c = Class.new(Sequel::Model(:items)) do
+    @db = Sequel.mock
+    @c = Class.new(Sequel::Model(@db[:items])) do
       set_primary_key :id
       columns :x
     end
-    DB.reset
+    @db.sqls
   end
 
   it "should find the record" do
+    @db.fetch = [{:x=>1, :id=>1}]
+    @db.autoid = 1
     @c.find_or_create(:x => 1).should == @c.load(:x=>1, :id=>1)
-    DB.sqls.should == ["SELECT * FROM items WHERE (x = 1) LIMIT 1"]
+    @db.sqls.should == ["SELECT * FROM items WHERE (x = 1) LIMIT 1"]
   end
   
   it "should create the record if not found" do
-    @c.instance_dataset._fetch = @c.dataset._fetch = [[], {:x=>1, :id=>1}]
-    @c.instance_dataset.autoid = @c.dataset.autoid = 1
+    @db.fetch = [[], {:x=>1, :id=>1}]
+    @db.autoid = 1
     @c.find_or_create(:x => 1).should == @c.load(:x=>1, :id=>1)
-    DB.sqls.should == ["SELECT * FROM items WHERE (x = 1) LIMIT 1",
+    @db.sqls.should == ["SELECT * FROM items WHERE (x = 1) LIMIT 1",
       "INSERT INTO items (x) VALUES (1)",
       "SELECT * FROM items WHERE (id = 1) LIMIT 1"]
   end
 
   it "should pass the new record to be created to the block if no record is found" do
-    @c.instance_dataset._fetch = @c.dataset._fetch = [[], {:x=>1, :id=>1}]
-    @c.instance_dataset.autoid = @c.dataset.autoid = 1
+    @db.fetch = [[], {:x=>1, :id=>1}]
+    @db.autoid = 1
     @c.find_or_create(:x => 1){|x| x[:y] = 2}.should == @c.load(:x=>1, :id=>1)
-    sqls = DB.sqls
+    sqls = @db.sqls
     sqls.first.should == "SELECT * FROM items WHERE (x = 1) LIMIT 1"
     ["INSERT INTO items (x, y) VALUES (1, 2)", "INSERT INTO items (y, x) VALUES (2, 1)"].should include(sqls[1])
     sqls.last.should == "SELECT * FROM items WHERE (id = 1) LIMIT 1"
@@ -686,6 +914,15 @@ describe "Model.db_schema" do
     @c.db_schema.should == {:x=>{:primary_key=>true}, :y=>{:primary_key=>true}}
     @c.primary_key.should == [:x, :y]
   end
+
+  specify "should set an immutable composite primary key based on the schema" do
+    ds = @dataset
+    d = ds.db
+    def d.schema(table, *opts) [[:x, {:primary_key=>true}], [:y, {:primary_key=>true}]] end
+    @c.dataset = ds
+    @c.primary_key.should == [:x, :y]
+    proc{@c.primary_key.pop}.should raise_error
+  end
   
   specify "should automatically set no primary key based on the schema" do
     ds = @dataset
@@ -697,6 +934,16 @@ describe "Model.db_schema" do
     @c.primary_key.should == nil
   end
   
+  specify "should automatically set primary key for dataset selecting table.*" do
+    ds = @dataset.select_all(:items)
+    d = ds.db
+    def d.schema(table, *opts) [[:x, {:primary_key=>true}]] end
+    @c.primary_key.should == :id
+    @c.dataset = ds
+    @c.db_schema.should == {:x=>{:primary_key=>true}}
+    @c.primary_key.should == :x
+  end
+  
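The spec just above covers model datasets that select table.* rather than * (the form produced by the table_select plugin): the schema is still parsed and the primary key set. A minimal sketch, assuming a connected DB handle and an items table whose primary key is id:

    class Item < Sequel::Model(DB[:items].select_all(:items)); end

    Item.dataset.sql    # => "SELECT items.* FROM items"
    Item.primary_key    # => :id, read from the schema despite the table.* selection
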
   specify "should not modify the primary key unless all column schema hashes have a :primary_key entry" do
     ds = @dataset
     d = ds.db
diff --git a/spec/model/record_spec.rb b/spec/model/record_spec.rb
index e0b5f1e..2f383e4 100644
--- a/spec/model/record_spec.rb
+++ b/spec/model/record_spec.rb
@@ -364,20 +364,20 @@ describe "Model#freeze" do
   end
 
   it "should freeze the object" do
-    @o.frozen?.should be_true
+    @o.frozen?.should == true
   end
 
   it "should freeze the object if the model doesn't have a primary key" do
     Album.no_primary_key
     @o = Album.load(:id=>1).freeze
-    @o.frozen?.should be_true
+    @o.frozen?.should == true
   end
 
   it "should freeze the object's values, associations, changed_columns, errors, and this" do
-    @o.values.frozen?.should be_true
-    @o.changed_columns.frozen?.should be_true
-    @o.errors.frozen?.should be_true
-    @o.this.frozen?.should be_true
+    @o.values.frozen?.should == true
+    @o.changed_columns.frozen?.should == true
+    @o.errors.frozen?.should == true
+    @o.this.frozen?.should == true
   end
 
   it "should still have working class attr overriddable methods" do
@@ -385,16 +385,16 @@ describe "Model#freeze" do
   end
 
   it "should have working new? method" do
-    @o.new?.should be_false
-    Album.new.freeze.new?.should be_true
+    @o.new?.should == false
+    Album.new.freeze.new?.should == true
   end
 
   it "should have working valid? method" do
-    @o.valid?.should be_true
+    @o.valid?.should == true
     o = Album.new
     def o.validate() errors.add(:foo, '') end
     o.freeze
-    o.valid?.should be_false
+    o.valid?.should == false
   end
 
   it "should raise an Error if trying to save/destroy/delete/refresh" do
@@ -430,8 +430,8 @@ describe "Model#dup" do
   end
 
   it "should keep new status" do
-    @o.dup.new?.should be_false
-    @Album.new.dup.new?.should be_true
+    @o.dup.new?.should == false
+    @Album.new.dup.new?.should == true
   end
 
   it "should not copy frozen status" do
@@ -467,8 +467,8 @@ describe "Model#clone" do
   end
 
   it "should keep new status" do
-    @o.clone.new?.should be_false
-    @Album.new.clone.new?.should be_true
+    @o.clone.new?.should == false
+    @Album.new.clone.new?.should == true
   end
 
   it "should copy frozen status" do
@@ -566,9 +566,9 @@ describe "Model#modified?" do
 
   it "should be true if given a column argument and the column has been changed" do
     o = @c.new
-    o.modified?(:id).should be_false
+    o.modified?(:id).should == false
     o.id = 1
-    o.modified?(:id).should be_true
+    o.modified?(:id).should == true
   end
 end
 
@@ -1182,7 +1182,7 @@ describe Sequel::Model, "#(set|update)_(all|only)" do
 
   it "#set_all should set not set restricted fields" do
     @o1.set_all(:x => 1, :use_after_commit_rollback => false)
-    @o1.use_after_commit_rollback.should be_true
+    @o1.use_after_commit_rollback.should == true
     @o1.values.should == {:x => 1}
   end
 
@@ -1309,17 +1309,17 @@ describe Sequel::Model, "#exists?" do
   end
 
   it "should do a query to check if the record exists" do
-    @model.load(:id=>1).exists?.should be_true
+    @model.load(:id=>1).exists?.should == true
     DB.sqls.should == ['SELECT 1 AS one FROM items WHERE (id = 1) LIMIT 1']
   end
 
   it "should return false when #this.count == 0" do
-    @model.load(:id=>2).exists?.should be_false
+    @model.load(:id=>2).exists?.should == false
     DB.sqls.should == ['SELECT 1 AS one FROM items WHERE (id = 2) LIMIT 1']
   end
 
   it "should return false without issuing a query if the model object is new" do
-    @model.new.exists?.should be_false
+    @model.new.exists?.should == false
     DB.sqls.should == []
   end
 end
diff --git a/spec/model/spec_helper.rb b/spec/model/spec_helper.rb
index 5e18064..0dd24eb 100644
--- a/spec/model/spec_helper.rb
+++ b/spec/model/spec_helper.rb
@@ -5,8 +5,9 @@ unless Object.const_defined?('Sequel') && Sequel.const_defined?('Model')
 end
 Sequel::Deprecation.backtrace_filter = lambda{|line, lineno| lineno < 4 || line =~ /_spec\.rb/}
 
+require File.join(File.dirname(File.expand_path(__FILE__)), "../rspec_helper.rb")
 
-(defined?(RSpec) ? RSpec::Core::ExampleGroup : Spec::Example::ExampleGroup).class_eval do
+RSPEC_EXAMPLE_GROUP.class_eval do
   if ENV['SEQUEL_DEPRECATION_WARNINGS']
     class << self
       alias qspecify specify
diff --git a/spec/model/validations_spec.rb b/spec/model/validations_spec.rb
index b2f6bd6..30f7cf3 100644
--- a/spec/model/validations_spec.rb
+++ b/spec/model/validations_spec.rb
@@ -161,7 +161,7 @@ describe "Model#save" do
     
     @m.x = 7
     @m.should be_valid
-    @m.save.should_not be_false
+    @m.save.should_not == false
     DB.sqls.should == ['UPDATE people SET x = 7 WHERE (id = 4)']
   end
   
diff --git a/spec/rspec_helper.rb b/spec/rspec_helper.rb
new file mode 100644
index 0000000..eb20659
--- /dev/null
+++ b/spec/rspec_helper.rb
@@ -0,0 +1,18 @@
+unless defined?(RSPEC_EXAMPLE_GROUP)
+  if defined?(RSpec)
+    require 'rspec/version'
+    if RSpec::Version::STRING >= '2.11.0'
+      RSpec.configure do |config|
+        config.expect_with :rspec do |c|
+          c.syntax = :should
+        end
+        config.mock_with :rspec do |c|
+          c.syntax = :should
+        end
+      end
+    end
+    RSPEC_EXAMPLE_GROUP = RSpec::Core::ExampleGroup
+  else
+    RSPEC_EXAMPLE_GROUP = Spec::Example::ExampleGroup
+  end
+end
diff --git a/www/layout.html.erb b/www/layout.html.erb
index 78be5b0..5bf78c6 100644
--- a/www/layout.html.erb
+++ b/www/layout.html.erb
@@ -26,10 +26,10 @@
         <li><a href="documentation.html">Documentation</a></li>
         <li><a href="plugins.html">Plugins</a></li>
         <li><a href="press.html">Press</a></li>
-        <li><a href="http://sequel.heroku.com">Blog</a></li>
+        <li><a href="blog.html">Blog</a></li>
       </ul>
       <form action="http://www.google.com/search">
-        <input type="hidden" name="sitesearch" value="sequel.rubyforge.org" />
+        <input type="hidden" name="sitesearch" value="sequel.jeremyevans.net" />
         <input id="searchbox" type="search" placeholder="Site Search" name="q" value="" />
       </form>
     </div>
diff --git a/www/make_www.rb b/www/make_www.rb
index 7e2fc6a..9591fc9 100755
--- a/www/make_www.rb
+++ b/www/make_www.rb
@@ -4,8 +4,8 @@ $: << File.join(File.dirname(__FILE__), '..','lib', 'sequel')
 require 'version'
 Dir.chdir(File.dirname(__FILE__))
 erb = ERB.new(File.read('layout.html.erb'))
-Dir['pages/*'].each do |page|
-  public_loc = "#{page.gsub(/\Apages\//, 'public/')}.html"
+Dir['pages/*.html.erb'].each do |page|
+  public_loc = "#{page.gsub(/\Apages\//, 'public/').gsub('.erb', '')}"
   content = ERB.new(File.read(page)).result(binding)
   title = File.basename(page)
   File.open(public_loc, 'wb'){|f| f.write(erb.result(binding))}
diff --git a/www/pages/development b/www/pages/development.html.erb
similarity index 91%
rename from www/pages/development
rename to www/pages/development.html.erb
index 7c90143..3315944 100644
--- a/www/pages/development
+++ b/www/pages/development.html.erb
@@ -8,7 +8,7 @@
 
 <h3>Source Code</h3>
 
-<p>The master source code repository is <a href="https://github.com/jeremyevans/sequel/">jeremyevans/sequel on github</a>.  The latest release version also has a git clone at RubyForge.</p>
+<p>The master source code repository is <a href="https://github.com/jeremyevans/sequel/">jeremyevans/sequel on github</a>.</p>
 
 <h3>Submitting Patches</h3>
 
diff --git a/www/pages/documentation b/www/pages/documentation.html.erb
similarity index 72%
rename from www/pages/documentation
rename to www/pages/documentation.html.erb
index 707bf3e..f3bd2af 100644
--- a/www/pages/documentation
+++ b/www/pages/documentation.html.erb
@@ -18,6 +18,7 @@
     <li><a href="rdoc/files/doc/migration_rdoc.html">Migrations</a></li>
     <li><a href="rdoc/files/doc/sharding_rdoc.html">Master/Slave Databases and Sharding</a></li>
     <li><a href="rdoc/files/doc/postgresql_rdoc.html">PostgreSQL Specific Support</a></li>
+    <li><a href="rdoc/files/doc/mssql_stored_procedures_rdoc.html">Microsoft SQL Server Stored Procedure Support</a></li>
   </ul></li>
   <li>Datasets<ul>
     <li><a href="rdoc/files/doc/dataset_basics_rdoc.html">Dataset Basics</a></li>
@@ -97,11 +98,11 @@
 <h3>Presentations</h3>
 
 <ul>
-<li><a href="http://jeremyevans-pres.heroku.com/railsclub2013/index.html?trans=no">Jeremy Evans's "Give-and-Go with PostgreSQL and Sequel" Presentation at RailsClub 2013</a> (<a href="http://jeremyevans-pres.heroku.com/railsclub2013/index.html">In Russian</a>)</li>
-<li><a href="http://jeremyevans-pres.heroku.com/heroku201205/index.html">Jeremy Evans's "The Development of Sequel" Presentation in May 2012 at Heroku</a></li>
-<li><a href="http://jeremyevans-pres.heroku.com/pgwest2011/index.html">Jeremy Evans's "Sequel: The Database Toolkit for Ruby" Presentation at PostgreSQL Conference West 2011</a></li>
-<li><a href="http://jeremyevans-pres.heroku.com/lsrc2009_presentation/sequel-lsrc2009.html">Jeremy Evans's "Sequel: SQL in Ruby" Presentation at Lone Star Ruby Conference 2009</a></li>
-<li><a href="http://jeremyevans-pres.heroku.com/rk2009_presentation/sequel-rubykaigi2009.html">Jeremy Evans's "Sequel: SQL in Ruby" Presentation at RubyKaigi 2009</a> (<a href="http://www.ustream.tv/recorded/1825816">Video</a>)</li>
-<li><a href="http://jeremyevans-pres.heroku.com/larc2009_presentation/sequel-larc2009-pres.html">Jeremy Evans's "Sequel: The Database Toolkit for Ruby" Presentation at LA Ruby Conf 2009</a> (<a href="http://confreaks.com/videos/246-larubyconf2009-sequel">Video</a>)</li>
-<li><a href="http://jeremyevans-pres.heroku.com/mwrc2009_presentation.html">Jeremy Evans's "Sequel: The Database Toolkit for Ruby" Presentation at MountainWest RubyConf 2009</a> (<a href="http://confreaks.com/videos/51-mwrc2009-sequel">Video</a>) (<a href="http://jeremyevans-pres.heroku.com/mwrc2009_presentation.txt">Transcript</a>)</li>
+<li><a href="http://code.jeremyevans.net/presentations/railsclub2013/index.html?trans=no">"Give-and-Go with PostgreSQL and Sequel" Presentation at RailsClub 2013</a> (<a href="http://code.jeremyevans.net/presentations/railsclub2013/index.html">In Russian</a>) (<a href="http://live.digicast.ru/ru/view/2116">Video</a>, starts about 6:30)</li>
+<li><a href="http://code.jeremyevans.net/presentations/heroku201205/index.html">"The Development of Sequel" Presentation in May 2012 at Heroku</a></li>
+<li><a href="http://code.jeremyevans.net/presentations/pgwest2011/index.html">"Sequel: The Database Toolkit for Ruby" Presentation at PostgreSQL Conference West 2011</a></li>
+<li><a href="http://code.jeremyevans.net/presentations/lsrc2009_presentation/sequel-lsrc2009.html">"Sequel: SQL in Ruby" Presentation at Lone Star Ruby Conference 2009</a></li>
+<li><a href="http://code.jeremyevans.net/presentations/rk2009_presentation/sequel-rubykaigi2009.html">"Sequel: SQL in Ruby" Presentation at RubyKaigi 2009</a> (<a href="http://www.ustream.tv/recorded/1825816">Video</a>)</li>
+<li><a href="http://code.jeremyevans.net/presentations/larc2009_presentation/sequel-larc2009-pres.html">"Sequel: The Database Toolkit for Ruby" Presentation at LA Ruby Conf 2009</a> (<a href="http://confreaks.com/videos/246-larubyconf2009-sequel">Video</a>)</li>
+<li><a href="http://code.jeremyevans.net/presentations/mwrc2009_presentation.html">"Sequel: The Database Toolkit for Ruby" Presentation at MountainWest RubyConf 2009</a> (<a href="http://confreaks.com/videos/51-mwrc2009-sequel">Video</a>) (<a href="http://code.jeremyevans.net/presentations/mwrc2009_presentation.txt">Transcript</a>)</li>
 </ul>
diff --git a/www/pages/index b/www/pages/index.html.erb
similarity index 97%
rename from www/pages/index
rename to www/pages/index.html.erb
index a7a4c39..58f7cf0 100644
--- a/www/pages/index
+++ b/www/pages/index.html.erb
@@ -8,7 +8,7 @@
 <li>Sequel provides thread safety, connection pooling and a concise DSL for constructing SQL queries and table schemas.</li>
 <li>Sequel includes a comprehensive ORM layer for mapping records to Ruby objects and handling associated records.</li>
 <li>Sequel supports advanced database features such as prepared statements, bound variables, stored procedures, savepoints, two-phase commit, transaction isolation, master/slave configurations, and database sharding.</li>
-<li>Sequel currently has adapters for ADO, Amalgalite, CUBRID, DataObjects, DB2, DBI, Firebird, IBM_DB, Informix, JDBC, MySQL, Mysql2, ODBC, OpenBase, Oracle, PostgreSQL, SQLite3, Swift, and TinyTDS.</li>
+<li>Sequel currently has adapters for ADO, Amalgalite, CUBRID, DataObjects, DB2, DBI, Firebird, IBM_DB, Informix, JDBC, MySQL, Mysql2, ODBC, OpenBase, Oracle, PostgreSQL, SQLAnywhere, SQLite3, Swift, and TinyTDS.</li>
 </ul>
 
 <h3 id='a_short_example'>A short example:</h3>
diff --git a/www/pages/plugins b/www/pages/plugins.html.erb
similarity index 96%
rename from www/pages/plugins
rename to www/pages/plugins.html.erb
index f0e67b1..ab2bf97 100644
--- a/www/pages/plugins
+++ b/www/pages/plugins.html.erb
@@ -11,7 +11,7 @@
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/AssociationProxies.html">association_proxies</a>: Changes the *_to_many association method to return a proxy instead of an array of objects.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/DatasetAssociations.html">dataset_associations</a>: Adds association methods to datasets that return datasets of associated objects.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/EagerEach.html">eager_each</a>: Makes each on an eagerly loaded dataset do eager loading.</li>
-<li><a href="rdoc-plugins/classes/Sequel/Plugins/ManyThroughMany.html">many_through_many</a>: Allows you to create an association to multiple objects through multiple join tables.</li>
+<li><a href="rdoc-plugins/classes/Sequel/Plugins/ManyThroughMany.html">many_through_many</a>: Allows you to create an association through multiple join tables.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/NestedAttributes.html">nested_attributes</a>: Allows you to modified associated objects directly through a model object, similar to ActiveRecord's Nested Attributes.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/PgArrayAssociations.html">pg_array_associations</a>: Adds associations types to handle the case where foreign keys are stored in a PostgreSQL array in one of the tables.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/RcteTree.html">rcte_tree</a>: Supports retrieving all ancestors and descendants for tree structured data using recursive common table expressions.</li>
@@ -49,12 +49,14 @@
 </ul></li>
 <li>Saving:<ul>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/InstanceFilters.html">instance_filters</a>: Allows you to add per instance filters that are used when updating or destroying the instance.</li>
+<li><a href="rdoc-plugins/classes/Sequel/Plugins/MssqlOptimisticLocking.html">mssql_optimistic_locking</a>: Uses a timestamp/rowversion column on Microsoft SQL Server to prevent concurrent updates overwriting changes.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/OptimisticLocking.html">optimistic_locking</a>: Adds a database-independent locking mechanism to models to prevent concurrent updates overwriting changes.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/Sharding.html">sharding</a>: Additional model support for Sequel's sharding support.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/SkipCreateRefresh.html">skip_create_refresh</a>: Allows you to skip the refresh when saving new model objects.</li>
 <li><a href='rdoc-plugins/classes/Sequel/Plugins/Timestamps.html'>timestamps</a>: Creates hooks for automatically setting create and update timestamps.</li>
 <li><a href='rdoc-plugins/classes/Sequel/Plugins/Touch.html'>touch</a>: Allows easily updating timestamps via Model#touch, as well as touching associations when model instances are updated or destroyed.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/UnlimitedUpdate.html">unlimited_update</a>: Works around MySQL warnings when using replication due to LIMIT clause use when updating model instances.</li>
+<li><a href="rdoc-plugins/classes/Sequel/Plugins/UpdateOrCreate.html">update_or_create</a>: Adds helper methods for updating an object if it exists, or creating such an object if it does not.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/UpdatePrimaryKey.html">update_primary_key</a>: Allows you to safely update the primary key of a model object.</li>
 </ul></li>
 <li>Serialization:<ul>
@@ -79,6 +81,7 @@
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/Schema.html">schema</a>: Adds backwards compatibility for Model.set_schema and Model.create_table.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/Scissors.html">scissors</a>: Adds class methods for delete, destroy, and update.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/Subclasses.html">subclasses</a>: Allows easy access all model subclasses and descendent classes, without using ObjectSpace.</li>
+<li><a href="rdoc-plugins/classes/Sequel/Plugins/TableSelect.html">table_select</a>: Selects table.* instead of just * for model datasets.</li>
 <li><a href='rdoc-plugins/classes/Sequel/Plugins/TypecastOnLoad.html'>typecast_on_load</a>: Fixes bad database typecasting when loading model objects.</li>
 </ul></li>
 </ul>
@@ -140,6 +143,7 @@
 
 <ul>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/columns_introspection_rb.html">columns_introspection</a>: Attemps to skip database queries by introspecting the selected columns if possible.</li>
+<li><a href="rdoc-plugins/files/lib/sequel/extensions/current_datetime_timestamp_rb.html">current_datetime_timestamp</a>: Creates current Time/DateTime objects that are literalized as CURRENT_TIMESTAMP.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/date_arithmetic_rb.html">date_arithmetic</a>: Allows for database-independent date calculations (adding/subtracting an interval to/from a date/timestamp).</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/empty_array_ignore_nulls_rb.html">empty_array_ignore_nulls</a>: Makes Sequel's handling of IN/NOT IN with an empty ignore correct NULL handling.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/filter_having_rb.html">filter_having</a>: Makes Dataset#filter, #and, #or, and #having operate on HAVING clause if HAVING clause is already present.</li>
@@ -170,7 +174,7 @@
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/inflector_rb.html">inflector</a>: Adds instance-level inflection methods to String.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/meta_def_rb.html">meta_def</a>: Adds meta_def method for defining methods to Database, Dataset, and Model classes and instances.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/migration_rb.html">migration</a>: Adds Migration and Migrator classes for easily migrating the database schema forward or reverting to a previous version.</li>
-<li><a href="rdoc-plugins/files/lib/sequel/extensions/named_timezones_rb.html">named_timezones</a>: Allows you to use named timezones instead of just :local and :utc (requires <a href="http://tzinfo.rubyforge.org/">TZInfo</a>).</li>
+<li><a href="rdoc-plugins/files/lib/sequel/extensions/named_timezones_rb.html">named_timezones</a>: Allows you to use named timezones instead of just :local and :utc (requires <a href="http://tzinfo.github.io/">TZInfo</a>).</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/pg_array_ops_rb.html">pg_array_ops</a>: Adds DSL support for calling PostgreSQL array operators and functions.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/pg_hstore_ops_rb.html">pg_hstore_ops</a>: Adds DSL support for calling PostgreSQL hstore operators and functions.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/pg_json_ops_rb.html">pg_json_ops</a>: Adds DSL support for calling PostgreSQL json operators and functions.</li>
@@ -185,6 +189,7 @@
 <h3>External Extensions</h3>
 
 <ul>
+<li><a href='http://github.com/kennym/annotate-sequel'>annotate-sequel</a>: Generates model annotations for Sequel models.</li>
 <li><a href='http://github.com/jeremyevans/fixture_dependencies'>fixture_dependencies</a>: YAML fixture loader that handles dependencies/associated objects, respecting foreign key constraints.</li>
 <li><a href='http://github.com/gucki/i18n_backend_sequel'>i18n_backend_sequel</a>: Allows Sequel to be a backend for i18n translations.</li>
 <li><a href='http://github.com/pk/merb_sequel'>merb_sequel</a>: Merb plugin that provides support for Sequel models.</li>
diff --git a/www/pages/press b/www/pages/press.html.erb
similarity index 100%
rename from www/pages/press
rename to www/pages/press.html.erb

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-ruby-extras/ruby-sequel.git


