[DRE-commits] [ruby-sequel] 01/03: Imported Upstream version 4.13.0

Dmitry Borodaenko angdraug at moszumanska.debian.org
Sat Aug 23 21:47:38 UTC 2014


This is an automated email from the git hooks/post-receive script.

angdraug pushed a commit to branch master
in repository ruby-sequel.

commit 76cf9fbbe6ecb9fb5652e2c00076855d2b1c4324
Author: Dmitry Borodaenko <angdraug at gmail.com>
Date:   Sat Aug 23 13:16:04 2014 -0700

    Imported Upstream version 4.13.0
---
 CHANGELOG                                          |  96 +++++++++
 Rakefile                                           |  10 +-
 bin/sequel                                         |  18 +-
 doc/opening_databases.rdoc                         |   6 +-
 doc/release_notes/4.12.0.txt                       | 105 ++++++++++
 doc/release_notes/4.13.0.txt                       | 169 ++++++++++++++++
 doc/sql.rdoc                                       |   6 +-
 lib/sequel/adapters/do.rb                          |  34 ++--
 lib/sequel/adapters/do/mysql.rb                    |   8 +
 lib/sequel/adapters/do/postgres.rb                 |   8 +
 lib/sequel/adapters/do/{sqlite.rb => sqlite3.rb}   |   9 +
 lib/sequel/adapters/jdbc.rb                        | 156 ++-------------
 lib/sequel/adapters/jdbc/as400.rb                  |   9 +
 lib/sequel/adapters/jdbc/cubrid.rb                 |   9 +
 lib/sequel/adapters/jdbc/db2.rb                    |   9 +
 lib/sequel/adapters/jdbc/derby.rb                  |   9 +
 .../adapters/jdbc/{firebird.rb => firebirdsql.rb}  |   9 +
 lib/sequel/adapters/jdbc/h2.rb                     |  10 +
 lib/sequel/adapters/jdbc/hsqldb.rb                 |   9 +
 .../jdbc/{informix.rb => informix-sqli.rb}         |   9 +
 .../adapters/jdbc/{progress.rb => jdbcprogress.rb} |   9 +
 lib/sequel/adapters/jdbc/jtds.rb                   |  10 +
 lib/sequel/adapters/jdbc/mysql.rb                  |  14 ++
 lib/sequel/adapters/jdbc/oracle.rb                 |   9 +
 lib/sequel/adapters/jdbc/postgresql.rb             |   9 +
 lib/sequel/adapters/jdbc/sqlanywhere.rb            |  23 +++
 lib/sequel/adapters/jdbc/sqlite.rb                 |  10 +
 lib/sequel/adapters/jdbc/sqlserver.rb              |  10 +
 lib/sequel/adapters/odbc.rb                        |  20 +-
 lib/sequel/adapters/odbc/db2.rb                    |   9 +
 lib/sequel/adapters/odbc/mssql.rb                  |   8 +
 lib/sequel/adapters/odbc/progress.rb               |   8 +
 lib/sequel/adapters/oracle.rb                      |   3 +-
 lib/sequel/adapters/postgres.rb                    |  27 ++-
 lib/sequel/adapters/shared/cubrid.rb               |   3 +-
 lib/sequel/adapters/shared/db2.rb                  |   1 +
 lib/sequel/adapters/shared/firebird.rb             |   9 +-
 lib/sequel/adapters/shared/mssql.rb                |  96 ++++++---
 lib/sequel/adapters/shared/mysql.rb                |   8 +-
 lib/sequel/adapters/shared/oracle.rb               |  20 +-
 lib/sequel/adapters/shared/postgres.rb             |  30 ++-
 lib/sequel/adapters/shared/sqlanywhere.rb          |  13 +-
 lib/sequel/adapters/sqlite.rb                      |  16 +-
 lib/sequel/database/connecting.rb                  |  55 ++++--
 lib/sequel/database/query.rb                       |  15 +-
 lib/sequel/dataset/actions.rb                      |   8 +-
 lib/sequel/dataset/graph.rb                        |  38 ++--
 lib/sequel/dataset/misc.rb                         |  37 ++++
 lib/sequel/dataset/prepared_statements.rb          |   2 +-
 lib/sequel/dataset/query.rb                        |  86 +++++----
 lib/sequel/dataset/sql.rb                          |  66 ++-----
 lib/sequel/extensions/dataset_source_alias.rb      |  90 +++++++++
 lib/sequel/extensions/pg_array.rb                  |  24 ++-
 lib/sequel/extensions/pg_enum.rb                   | 135 +++++++++++++
 lib/sequel/extensions/pg_hstore.rb                 |  10 +-
 lib/sequel/extensions/pg_inet.rb                   |   9 +-
 lib/sequel/extensions/pg_interval.rb               |   8 +-
 lib/sequel/extensions/pg_json.rb                   |  28 +--
 lib/sequel/extensions/pg_range.rb                  |   8 +-
 lib/sequel/extensions/pg_row.rb                    |   4 +-
 lib/sequel/extensions/pg_static_cache_updater.rb   |  16 +-
 lib/sequel/extensions/round_timestamps.rb          |  52 +++++
 lib/sequel/model.rb                                |   7 +-
 lib/sequel/model/associations.rb                   |  34 +++-
 lib/sequel/model/base.rb                           |  97 +++++++---
 lib/sequel/plugins/auto_validations.rb             |  20 +-
 lib/sequel/plugins/class_table_inheritance.rb      |  41 ++--
 lib/sequel/plugins/column_select.rb                |  57 ++++++
 lib/sequel/plugins/composition.rb                  |  30 ++-
 lib/sequel/plugins/dirty.rb                        |  20 +-
 lib/sequel/plugins/hook_class_methods.rb           |   2 +-
 lib/sequel/plugins/insert_returning_select.rb      |  70 +++++++
 lib/sequel/plugins/instance_filters.rb             |  16 +-
 lib/sequel/plugins/lazy_attributes.rb              |  20 +-
 lib/sequel/plugins/list.rb                         |   9 +
 lib/sequel/plugins/modification_detection.rb       |  90 +++++++++
 lib/sequel/plugins/nested_attributes.rb            | 131 +++++++------
 lib/sequel/plugins/prepared_statements.rb          |  21 +-
 .../plugins/prepared_statements_associations.rb    |  14 ++
 lib/sequel/plugins/serialization.rb                |  28 ++-
 .../serialization_modification_detection.rb        |  18 +-
 lib/sequel/plugins/single_table_inheritance.rb     |   4 +-
 lib/sequel/plugins/timestamps.rb                   |  12 +-
 lib/sequel/sql.rb                                  |  45 +----
 lib/sequel/version.rb                              |   2 +-
 spec/adapters/mysql_spec.rb                        |   7 +
 spec/adapters/postgres_spec.rb                     | 110 +++++++++--
 spec/bin_spec.rb                                   |   9 +-
 spec/core/database_spec.rb                         |   6 +
 spec/core/dataset_spec.rb                          | 214 ++++++++++++---------
 spec/core/object_graph_spec.rb                     |   5 +
 spec/core/schema_spec.rb                           |   3 +-
 spec/extensions/auto_validations_spec.rb           |  25 ++-
 spec/extensions/class_table_inheritance_spec.rb    |  31 +--
 spec/extensions/column_select_spec.rb              | 108 +++++++++++
 spec/extensions/composition_spec.rb                |  20 ++
 spec/extensions/dataset_source_alias_spec.rb       |  51 +++++
 spec/extensions/insert_returning_select_spec.rb    |  46 +++++
 spec/extensions/lazy_attributes_spec.rb            |  44 +++--
 spec/extensions/list_spec.rb                       |   5 +
 spec/extensions/modification_detection_spec.rb     |  80 ++++++++
 spec/extensions/nested_attributes_spec.rb          |  33 +++-
 spec/extensions/pg_enum_spec.rb                    |  64 ++++++
 spec/extensions/pg_json_spec.rb                    |  20 +-
 spec/extensions/pg_static_cache_updater_spec.rb    |  12 ++
 .../prepared_statements_associations_spec.rb       |  34 ++--
 spec/extensions/prepared_statements_spec.rb        |  23 ++-
 spec/extensions/round_timestamps_spec.rb           |  43 +++++
 .../serialization_modification_detection_spec.rb   |  11 +-
 spec/extensions/serialization_spec.rb              |  18 ++
 spec/extensions/single_table_inheritance_spec.rb   |   5 +
 spec/extensions/timestamps_spec.rb                 |   6 +
 spec/integration/plugin_test.rb                    |  65 ++++++-
 spec/integration/prepared_statement_test.rb        |  12 ++
 spec/integration/schema_test.rb                    |   7 +
 spec/model/associations_spec.rb                    |  24 +++
 spec/model/eager_loading_spec.rb                   |   9 +
 spec/model/model_spec.rb                           |  16 +-
 spec/model/record_spec.rb                          |  25 ++-
 www/pages/documentation.html.erb                   |   2 +
 www/pages/plugins.html.erb                         |  10 +-
 121 files changed, 2876 insertions(+), 869 deletions(-)

diff --git a/CHANGELOG b/CHANGELOG
index b92a16b..e18c75f 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,3 +1,99 @@
+=== 4.13.0 (2014-08-01)
+
+* Use copy constructors instead of overriding Model#dup and #clone (ged, jeremyevans) (#852)
+
+* Fix handling of MySQL create_table foreign_key calls using :key option (mimperatore, jeremyevans) (#850)
+
+* Handle another disconnection error in the postgres adapter (lbosque) (#848)
+
+* Make list plugin update remaining positions after destroying an instance (ehq, jeremyevans) (#847)
+
+* Unalias aliased tables in Dataset#insert (jeremyevans)
+
+* Add insert_returning_select plugin, for setting up RETURNING for inserts for models selecting explicit columns (jeremyevans)
+
+* Make Model#save use insert_select if the dataset used for inserting already uses returning (jeremyevans)
+
+* Add Dataset#unqualified_column_for helper method, returning unqualified version of possibly qualified column (jeremyevans)
+
+* Calling Dataset#returning when the Database does not support or emulate RETURNING now raises an Error (jeremyevans)
+
+* Emulate RETURNING on Microsoft SQL Server using OUTPUT, as long as only simple column references are used (jeremyevans)
+
+* Switch class_table_inheritance plugin to use JOIN ON instead of JOIN USING (jeremyevans)
+
+* Qualify primary keys for models with joined datasets when looking up model instances by primary key (jeremyevans)
+
+* Fix qualification of columns when Dataset#graph automatically wraps the initially graphed dataset in a subselect (jeremyevans)
+
+* Make Dataset#joined_dataset? a public method (jeremyevans)
+
+* Allow external jdbc, odbc, and do subadapters to be loaded automatically (jeremyevans)
+
+* Recognize another disconnect error in the jdbc/mysql adapter (jeremyevans)
+
+* Set primary keys correctly for models even if datasets select specific columns (jeremyevans)
+
+* Add dataset_source_alias extension, for automatically aliasing datasets to their first source (jeremyevans)
+
+* Use qualified columns in the lazy_attributes plugin (jeremyevans)
+
+* Add column_select plugin, for using explicit column selections in model datasets (jeremyevans)
+
+* Use associated model's existing selection for join associations if it consists solely of explicitly qualified columns (jeremyevans)
+
+* Add round_timestamps extension for automatically rounding timestamp values to database precision before literalizing (jeremyevans)
+
+* Make rake default task run plugin specs as well as core/model specs (jeremyevans)
+
+* Use all_tables and all_views for Database#tables and #views on Oracle (jeremyevans)
+
+* Use all_tab_cols instead of user_tab_cols for defaults parsing in the oracle adapter (jeremyevans)
+
+* Fix recursive mutex locking issue on JRuby when using Sequel::Model(dataset) (jeremyevans) (#841)
+
+* Make composition and serialization plugins support validations on underlying columns (jeremyevans)
+
+* Fix regression in timestamps and table inheritance plugins where column values would not be saved when skipping validations (jeremyevans) (#839)
+
+* Add pg_enum extension, for dealing with PostgreSQL enums (jeremyevans)
+
+* Add modification_detection plugin, for automatic detection of in-place column value modifications (jeremyevans)
+
+* Speed up using plain strings, numbers, true, false, and nil in json columns if underlying json library supports them (jeremyevans) (#834)
+
+=== 4.12.0 (2014-07-01)
+
+* Support :readonly Database option in sqlite adapter (ippeiukai, jeremyevans) (#832)
+
+* Automatically setup max_length validations for string columns in the auto_validations plugin (jeremyevans)
+
+* Add :max_length entry to column schema hashes for string types (jeremyevans)
+
+* Add :before_thread_exit option to Database#listen_for_static_cache_updates in pg_static_cache_updater extension (jeremyevans)
+
+* Add Database#values on PostgreSQL to create a dataset that uses VALUES instead of SELECT (jeremyevans)
+
+* Add Model#set_nested_attributes to nested_attributes, allowing setting nested attributes options per-call (jeremyevans)
+
+* Use explicit columns when using automatically prepared SELECT statements in the prepared statement plugins (jeremyevans)
+
+* Make Dataset#insert_select on PostgreSQL respect existing RETURNING clause (jeremyevans)
+
+* Fix eager loading limited associations via a UNION when an association block is used (jeremyevans)
+
+* Associate reciprocal object before saving associated object when creating new objects in nested_attributes (chanks, jeremyevans) (#831)
+
+* Handle intervals containing more than 100 hours in the pg_interval extension's parser (will) (#827)
+
+* Remove methods/class deprecated in 4.11.0 (jeremyevans)
+
+* Allow Dataset#natural_join/cross_join and related methods to take a options hash passed to join_table (jeremyevans)
+
+* Add :reset_implicit_qualifier option to Dataset#join_table, to set false to not reset the implicit qualifier (jeremyevans)
+
+* Support :notice_receiver Database option when postgres adapter is used with pg driver (jeltz, jeremyevans) (#825)
+
 === 4.11.0 (2014-06-03)
 
 * Add :model_map option to class_table_inheritance plugin so class names don't need to be stored in the database (jeremyevans)
diff --git a/Rakefile b/Rakefile
index d001f84..e07e7f7 100644
--- a/Rakefile
+++ b/Rakefile
@@ -33,10 +33,6 @@ RDOC_DEFAULT_OPTS = ["--line-numbers", "--inline-source", '--title', 'Sequel: Th
 
 begin
   # Sequel uses hanna-nouveau for the website RDoc.
-  # Due to bugs in older versions of RDoc, and the
-  # fact that hanna-nouveau does not support RDoc 4,
-  # a specific version of rdoc is required.
-  gem 'rdoc', '= 3.12.2'
   gem 'hanna-nouveau'
   RDOC_DEFAULT_OPTS.concat(['-f', 'hanna'])
 rescue Gem::LoadError
@@ -68,7 +64,7 @@ if rdoc_task_class
   rdoc_task_class.new(:website_rdoc_main) do |rdoc|
     rdoc.rdoc_dir = "www/public/rdoc"
     rdoc.options += RDOC_OPTS + %w'--no-ignore-invalid'
-    rdoc.rdoc_files.add %w"README.rdoc CHANGELOG MIT-LICENSE lib/*.rb lib/sequel/*.rb lib/sequel/{connection_pool,dataset,database,model}/*.rb doc/*.rdoc doc/release_notes/*.txt lib/sequel/extensions/migration.rb lib/sequel/extensions/core_extensions.rb"
+    rdoc.rdoc_files.add %w"README.rdoc CHANGELOG MIT-LICENSE lib/*.rb lib/sequel/*.rb lib/sequel/{connection_pool,dataset,database,model}/*.rb doc/*.rdoc doc/release_notes/*.txt lib/sequel/extensions/migration.rb"
   end
 
   rdoc_task_class.new(:website_rdoc_adapters) do |rdoc|
@@ -136,7 +132,9 @@ begin
     t
   end
 
-  task :default => [:spec]
+  desc "Run the core, model, and extension/plugin specs"
+  task :default => [:spec, :spec_plugin]
+
   spec_with_cov.call("spec", Dir["spec/{core,model}/*_spec.rb"], "Run core and model specs"){|t| t.rcov_opts.concat(%w'--exclude "lib/sequel/(adapters/([a-ln-z]|m[a-np-z])|extensions/core_extensions)"')}
   spec.call("spec_bin", ["spec/bin_spec.rb"], "Run bin/sequel specs")
   spec.call("spec_core", Dir["spec/core/*_spec.rb"], "Run core specs")
diff --git a/bin/sequel b/bin/sequel
index 665856b..da37a06 100755
--- a/bin/sequel
+++ b/bin/sequel
@@ -2,8 +2,6 @@
 
 require 'rubygems'
 require 'optparse'
-$: << File.join(File.dirname(__FILE__), '..', 'lib')
-require 'sequel'
 
 code = nil
 copy_databases = nil
@@ -13,6 +11,7 @@ env = nil
 migrate_dir = nil
 migrate_ver = nil
 backtrace = nil
+show_version = false
 test = true
 load_dirs = []
 exclusive_options = []
@@ -43,6 +42,7 @@ options = OptionParser.new do |opts|
   
   opts.on("-C", "--copy-databases", "copy one database to another") do 
     copy_databases = true
+    exclusive_options << :C
   end
   
   opts.on("-d", "--dump-migration", "print database migration to STDOUT") do 
@@ -104,8 +104,7 @@ options = OptionParser.new do |opts|
   end
   
   opts.on_tail("-v", "--version", "Show version") do
-    puts "sequel #{Sequel.version}"
-    exit
+    show_version = true
   end
 end
 opts = options
@@ -140,6 +139,15 @@ connect_proc = lambda do |database|
 end
 
 begin
+  $:.unshift(File.expand_path(File.join(File.dirname(__FILE__), '..', 'lib')))
+  require 'sequel'
+  if show_version
+    puts "sequel #{Sequel.version}"
+    unless db || code
+      exit
+    end
+  end
+
   DB = connect_proc[db]
   load_dirs.each{|d| d.is_a?(Array) ? require(d.first) : Dir["#{d}/**/*.rb"].each{|f| load(f)}}
   if migrate_dir
@@ -222,7 +230,7 @@ begin
   end
 rescue => e
   raise e if backtrace
-  error_proc["Error: #{e.class}: #{e.message}#{e.backtrace.first}"]
+  error_proc["Error: #{e.class}: #{e.message}\n#{e.backtrace.first}"]
 end
 
 if !ARGV.empty? 
diff --git a/doc/opening_databases.rdoc b/doc/opening_databases.rdoc
index 665c9ff..2b69412 100644
--- a/doc/opening_databases.rdoc
+++ b/doc/opening_databases.rdoc
@@ -373,9 +373,12 @@ The following additional options are supported:
                                 conversion is done, so an error is raised if you attempt to retrieve an infinite
                                 timestamp/date.  You can set this to :nil to convert to nil, :string to leave
                                 as a string, or :float to convert to an infinite float.
-:encoding :: Set the client_encoding to the given string
 :connect_timeout :: Set the number of seconds to wait for a connection (default 20, only respected
                     if using the pg library).
+:encoding :: Set the client_encoding to the given string
+:notice_receiver :: A proc that will be called with the PGresult objects that have notice or warning messages.
+                    The default notice receiver just prints the messages to stderr, but this option can be
+                    used to handle notice/warning messages differently.  Only respected if using the pg library.
 :sslmode :: Set to 'disable', 'allow', 'prefer', 'require' to choose how to treat SSL (only
             respected if using the pg library)
 :use_iso_date_format :: This can be set to false to not force the ISO date format.  Sequel forces
@@ -417,6 +420,7 @@ Examples:
 
 The following additional options are supported:
 
+:readonly :: open database in read-only mode
 :timeout :: the busy timeout to use in milliseconds (default: 5000).
 
 === swift
diff --git a/doc/release_notes/4.12.0.txt b/doc/release_notes/4.12.0.txt
new file mode 100644
index 0000000..59988db
--- /dev/null
+++ b/doc/release_notes/4.12.0.txt
@@ -0,0 +1,105 @@
+= New Features
+
+* Database#schema now includes :max_length entries for string
+  columns, specifying the size of the string field.  The
+  auto_validations plugin now uses this information to
+  automatically set up max_length validations on those fields.
+
+* The Dataset join methods now support a :reset_implicit_qualifier
+  option.  If set to false, this makes the join not reset the
+  implicit qualifier, so that the next join will not consider this
+  table as the last table joined.  Example:
+
+    DB[:a].join(:b, :c=>:d).
+      join(:e, :f=>:g)
+    # SELECT * FROM a
+    # INNER JOIN b ON (b.c = a.d)
+    # INNER JOIN e ON (e.f = b.g)
+
+    DB[:a].join(:b, {:c=>:d}, :reset_implicit_qualifier=>false).
+      join(:e, :f=>:g)
+    # SELECT * FROM a
+    # INNER JOIN b ON (b.c = a.d)
+    # INNER JOIN e ON (e.f = a.g)
+
+* The Dataset cross and natural join methods now accept an options
+  hash. Example:
+
+    DB[:a].cross_join(:b, :table_alias=>:c)
+    # SELECT * FROM a CROSS JOIN b AS c
+
+* Model#set_nested_attributes has been added to the nested_attributes
+  plugin, which allows you to set the nested_attributes options to
+  use per-call.  This is very helpful if you have multiple forms that
+  handle associated objects, but with different input fields used
+  for the associated objects depending on the form.  Example:
+
+    album.set_nested_attributes(:tracks,
+      params[:track_attributes],
+      :fields=>[:a, :b, :c])
+
+* Database#values has been added on PostgreSQL, which creates a
+  dataset that uses VALUES instead of SELECT.  Just as PostgreSQL
+  allows, you can also use orders, limits, and offsets with this
+  dataset.
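+
+  For example, a minimal sketch (the row values here are illustrative):
+
+    DB.values([[1, 2], [3, 4]])
+    # VALUES (1, 2), (3, 4)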
+
+* A :notice_receiver option is now supported in the postgres adapter
+  if the pg driver is used.  This should be a proc, which will be
+  passed to the pg connection's set_notice_receiver method.
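+
+  For example (the connection URL and handler body are illustrative):
+
+    Sequel.connect('postgres://localhost/mydb',
+      :notice_receiver=>proc{|res| $stderr.puts res.error_message})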
+
+* A Database :readonly option is now supported in the sqlite adapter,
+  which opens the database in a read-only mode, causing an error
+  if a query is issued that would modify the database.
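+
+  For example (the database file and table names are illustrative):
+
+    DB = Sequel.sqlite('app.db', :readonly=>true)
+    DB[:items].insert(:name=>'a') # raises an error, database is read-only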
+
+* A :before_thread_exit option has been added to
+  Database#listen_for_static_cache_updates in the
+  pg_static_cache_updater extension, allowing you to run code before
+  the created thread exits.
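+
+  For example (the models and block body are illustrative):
+
+    DB.listen_for_static_cache_updates([Album, Artist],
+      :before_thread_exit=>proc{puts "listener thread exiting"})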
+
+= Other Improvements
+
+* Eager loading limited associations using a UNION now works
+  correctly when an association block is used.  This fixes a
+  regression that first occurred in 4.10.0, when the union
+  eager loader became the default eager loader.
+
+* When creating a new associated object in the nested_attributes
+  plugin, where the reciprocal association is a many_to_one
+  association, set the cached reciprocal object in the new
+  associated object before saving it.
+
+  This fixes issues when validations in the associated object
+  require access to the current object, which may not yet be
+  saved in the database.
+
+* The prepared_statements and prepared_statements_associations
+  plugins now automatically use explicit column references when
+  preparing statements.  This fixes issues on PostgreSQL when a
+  column is added to a table while a prepared statement exists
+  that selects * from the table.  Previously, all further attempts
+  to use the prepared statement would fail.
+
+  This allows you to run migrations that add columns to tables
+  while concurrently running an application that uses the
+  prepared statements plugins.  Note that many other schema
+  modifications can cause issues when running migrations
+  while concurrently running an application, but most of those
+  are not specific to usage of prepared statements.
+
+* Dataset#insert_select on PostgreSQL now respects an existing
+  RETURNING clause, and won't override it to use RETURNING *.
+
+  A similar fix was applied to the generalized prepared statements
+  support as well.
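+
+  For example (table and column names are illustrative):
+
+    DB[:albums].returning(:id).insert_select(:name=>'RF')
+    # INSERT INTO albums (name) VALUES ('RF') RETURNING id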
+
+* The interval parser in the pg_interval extension now supports
+  intervals with 2-10 digits for hours.  Previously, it only
+  supported using 2 digits for hours.
+
+= Backwards Compatibility
+
+* The methods and classes deprecated in 4.11.0 have been removed.
+
+* The nested_attributes internal API has changed significantly. If
+  you were calling any private nested_attributes methods, you'll
+  probably need to update your code.
diff --git a/doc/release_notes/4.13.0.txt b/doc/release_notes/4.13.0.txt
new file mode 100644
index 0000000..4db0b19
--- /dev/null
+++ b/doc/release_notes/4.13.0.txt
@@ -0,0 +1,169 @@
+= New Features
+
+* A modification_detection plugin has been added, for automatic
+  detection of in-place column value modifications.  This makes
+  it so you don't have to call Model#modified! manually when
+  changing a value in place.
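+
+  A minimal usage sketch (the Album model and column are illustrative):
+
+    Sequel::Model.plugin :modification_detection
+    album = Album[1]
+    album.name << ' (remastered)' # in-place mutation, no setter used
+    album.changed_columns         # => [:name]
+    album.save_changes            # the in-place change is detected and saved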
+
+* A column_select plugin has been added, for automatically
+  selecting explicitly qualified columns in model datasets.
+  Example:
+
+    Sequel::Model.plugin :column_select
+    class Album < Sequel::Model
+    end
+    Album.dataset.sql
+    # SELECT albums.id, albums.name, albums.artist_id
+    # FROM albums
+
+* An insert_returning_select plugin has been added, for automatically
+  setting up RETURNING clauses for models that select explicit
+  columns.  This is useful when using the column_select or
+  lazy_attributes plugins.
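+
+  A minimal sketch (the Album model is illustrative):
+
+    Album.plugin :column_select
+    Album.plugin :insert_returning_select
+    Album.create(:name=>'RF')
+    # The INSERT uses a RETURNING clause based on the model's
+    # explicitly selected columns.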
+
+* A pg_enum extension has been added, for easier handling of
+  PostgreSQL enum types.  The possible values for the type
+  are then returned in the schema hashes under the :enum_values
+  key.  It also adds create_enum, drop_enum, and add_enum_value
+  Database methods for migration support.
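+
+  A migration-style sketch (the type, value, table, and column names
+  here are illustrative):
+
+    DB.extension :pg_enum
+    DB.create_enum(:mood, %w'happy sad neutral')
+    DB.create_table(:people) do
+      column :current_mood, :mood
+    end
+    DB.add_enum_value(:mood, 'excited')
+    # DB.schema(:people) now lists the values under :enum_values for
+    # the current_mood column.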
+
+* A round_timestamps extension has been added, for automatically
+  rounding timestamps to the database's supported precision when
+  literalizing.
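+
+  For example (the dataset and column are illustrative):
+
+    DB.extension :round_timestamps
+    DB[:events].where(:at=>Time.now).sql
+    # Fractional seconds in the literalized timestamp are rounded to
+    # the precision the database supports.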
+
+* A dataset_source_alias extension has been added, for automatically
+  aliasing datasets to their first source, instead of using t1, t2.
+  Example:
+
+    DB.from(:a, DB[:b]).sql
+    # SELECT * FROM a, (SELECT * FROM b) AS t1
+
+    DB.extension(:dataset_source_alias)
+    DB.from(:a, DB[:b]).sql
+    # SELECT * FROM a, (SELECT * FROM b) AS b
+
+* On Microsoft SQL Server, Sequel now emulates RETURNING support
+  using the OUTPUT clause, as long as only simple column references
+  are used.
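+
+  For example (table and column names are illustrative):
+
+    DB[:albums].returning(:id).insert(:name=>'RF')
+    # INSERT INTO albums (name) OUTPUT inserted.id VALUES ('RF')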
+
+= Other Improvements
+
+* A regression has been fixed in the timestamps and table
+  inheritance plugins, where column values would not be
+  saved when skipping validations.  This was first broken in
+  4.11.0.
+
+* A regression has been fixed on JRuby and Rubinius when using
+  Sequel::Model(dataset) if the dataset needs to literalize a
+  symbol (and most do).  This was first broken in 4.10.0.
+
+* Primary keys are now automatically setup for models even if
+  the models select specific columns.
+
+* The lazy_attributes plugin now uses qualified columns in its
+  selection, instead of unqualified columns.
+
+* When looking up model instances by primary key, Sequel now uses a
+  qualified primary key if the model uses a joined dataset.
+
+* For associations that require joins, Sequel will now use the
+  associated model's selection directly (instead of
+  associated_table.*) if the associated model's selection consists
+  solely of qualified columns.
+
+  Among other things, this means that a many_to_many association to 
+  a model that uses lazy attributes will not eagerly load the lazy
+  attributes by default.
+
+* Model#save now uses insert_select if there is an existing
+  RETURNING clause used by the underlying dataset, even if the model
+  selects specific columns.
+
+* In Dataset#insert, aliased tables are now automatically unaliased.
+  This allows you to use a dataset with an aliased table and have
+  full SELECT/INSERT/UPDATE/DELETE support, assuming the database
+  supports aliased tables in UPDATE and DELETE.
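+
+  For example (the table and alias names are illustrative):
+
+    ds = DB.from(Sequel.as(:albums, :a))
+    ds.insert(:name=>'RF') # INSERT targets albums, not the alias a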
+
+* Dataset#graph now qualifies columns correctly if the current
+  dataset is a joined dataset and it moves the current dataset to
+  a subselect.
+
+* Dataset#joined_dataset? is now a public method, and can be used to
+  determine whether the dataset uses a join, either explicitly via
+  JOIN or implicitly via multiple FROM tables.
+
+* The Dataset#unqualified_column_for helper method has been added,
+  returning the unqualified version of a possibly qualified column.
+
+* The composition and serialization plugins now support validations
+  on the underlying columns.  Previously, they didn't update the
+  underlying columns until after validations were performed.  This
+  works better when using the auto_validations plugin.
+
+* The class_table_inheritance plugin now uses JOIN ON instead of
+  JOIN USING, which makes it work on all databases that Sequel
+  supports.  Additionally, the plugin now explicitly selects
+  qualified columns from all of the tables.
+
+* The list plugin now adds an after_destroy hook that will renumber
+  rows after the current row, similar to how moving existing values
+  in the list works.
+
+* The pg_json extension is now faster when a json column value is a
+  plain string, number, true, false, or nil, if the underlying json
+  library handles such values natively.
+
+* External jdbc, odbc, and do subadapters can now be loaded
+  automatically without requiring them first, assuming proper
+  support in the external subadapter.
+
+* When using create_table on MySQL, correctly handle the :key
+  option when calling foreign_key with a column reference.
+
+* On Oracle, use all_tab_cols instead of user_tab_cols for getting
+  default values when parsing the schema.  This makes it work if the
+  user does not own the table.
+
+* On Oracle, use all_tables and all_views for Database#tables and
+  Database#views.  This works better for users with limited rights.
+
+* Additional disconnect errors are now recognized in the postgres and
+  jdbc/mysql adapters.
+
+* Sequel::Model now uses copy constructors (e.g. initialize_copy)
+  instead of overriding #dup and #clone.
+
+* The rake default task now runs plugin specs in addition to
+  core and model specs.
+
+= bin/sequel Improvements
+
+* Add the sequel lib directory to the front of the load path
+  instead of the end, fixing cases where you end up requiring an
+  old version of the sequel gem (e.g. by using sequel_pg).
+
+* Add the sequel lib directory as an absolute path, fixing cases
+  where you later change the current directory.
+
+* Require sequel later in the code, so that bin/sequel -h doesn't
+  need to require sequel, and a full backtrace is not printed if
+  requiring sequel raises an error (unless -t is used).
+
+* If an exception is raised, put a newline between the exception
+  message and backtrace.
+
+* Don't allow usage of -C with any of -cdDmS.
+
+* If sequel -v is given along with a database or code string to
+  execute, print the Sequel version but also continue, similar
+  to how ruby -v works.
+
+= Backwards Compatibility
+
+* The switch from JOIN USING to JOIN ON in the
+  class_table_inheritance plugin can break certain usage, such as
+  querying using an unqualified primary key.  Users should switch to
+  using a qualified primary key instead.
+
+* Calling Dataset#returning when the underlying database does not
+  support it now raises an Error.
diff --git a/doc/sql.rdoc b/doc/sql.rdoc
index dfac6c9..0929d6b 100644
--- a/doc/sql.rdoc
+++ b/doc/sql.rdoc
@@ -287,9 +287,9 @@ Sequel also supports the SQL EXISTS operator using <tt>Dataset#exists</tt>:
 
 Hashes in Sequel use IS if the value is true, false, or nil:
 
-  {:column=>nil) # ("column" IS NULL)
-  {:column=>true) # ("column" IS TRUE)
-  {:column=>false) # ("column" IS FALSE)
+  {:column=>nil} # ("column" IS NULL)
+  {:column=>true} # ("column" IS TRUE)
+  {:column=>false} # ("column" IS FALSE)
 
 Negation works the same way as it does for equality and inclusion:
 
diff --git a/lib/sequel/adapters/do.rb b/lib/sequel/adapters/do.rb
index 7bb910a..0132e1b 100644
--- a/lib/sequel/adapters/do.rb
+++ b/lib/sequel/adapters/do.rb
@@ -11,29 +11,17 @@ module Sequel
   # *  Sequel.connect('do:postgres://user:password@host/database')
   # *  Sequel.connect('do:mysql://user:password@host/database')
   module DataObjects
-    # Contains procs keyed on sub adapter type that extend the
+    # Contains procs keyed on subadapter type that extend the
     # given database object so it supports the correct database type.
-    DATABASE_SETUP = {:postgres=>proc do |db|
-        require 'do_postgres'
-        Sequel.require 'adapters/do/postgres'
-        db.extend(Sequel::DataObjects::Postgres::DatabaseMethods)
-        db.extend_datasets Sequel::Postgres::DatasetMethods
-      end,
-      :mysql=>proc do |db|
-        require 'do_mysql'
-        Sequel.require 'adapters/do/mysql'
-        db.extend(Sequel::DataObjects::MySQL::DatabaseMethods)
-        db.dataset_class = Sequel::DataObjects::MySQL::Dataset
-      end,
-      :sqlite3=>proc do |db|
-        require 'do_sqlite3'
-        Sequel.require 'adapters/do/sqlite'
-        db.extend(Sequel::DataObjects::SQLite::DatabaseMethods)
-        db.extend_datasets Sequel::SQLite::DatasetMethods
-        db.set_integer_booleans
-      end
-    }
+    DATABASE_SETUP = {}
       
+    # Wrapper for require that raises AdapterNotFound if driver could not be loaded
+    def self.load_driver(path)
+      require path
+    rescue LoadError => e
+      raise AdapterNotFound, e.message
+    end
+        
     # DataObjects uses it's own internal connection pooling in addition to the
     # pooling that Sequel uses.  You should make sure that you don't set
     # the connection pool size to more than 8 for a
@@ -108,12 +96,12 @@ module Sequel
       private
       
       # Call the DATABASE_SETUP proc directly after initialization,
-      # so the object always uses sub adapter specific code.  Also,
+      # so the object always uses subadapter specific code.  Also,
       # raise an error immediately if the connection doesn't have a
       # uri, since DataObjects requires one.
       def adapter_initialize
         raise(Error, "No connection string specified") unless uri
-        if prok = DATABASE_SETUP[subadapter.to_sym]
+        if prok = Sequel::Database.load_adapter(subadapter.to_sym, :map=>DATABASE_SETUP, :subdir=>'do')
           prok.call(self)
         end
       end
diff --git a/lib/sequel/adapters/do/mysql.rb b/lib/sequel/adapters/do/mysql.rb
index 58c15c0..25dd8c1 100644
--- a/lib/sequel/adapters/do/mysql.rb
+++ b/lib/sequel/adapters/do/mysql.rb
@@ -1,7 +1,15 @@
+Sequel::DataObjects.load_driver 'do_mysql'
 Sequel.require 'adapters/shared/mysql'
 
 module Sequel
   module DataObjects
+    Sequel.synchronize do
+      DATABASE_SETUP[:mysql] = proc do |db|
+        db.extend(Sequel::DataObjects::MySQL::DatabaseMethods)
+        db.dataset_class = Sequel::DataObjects::MySQL::Dataset
+      end
+    end
+
     # Database and Dataset instance methods for MySQL specific
     # support via DataObjects.
     module MySQL
diff --git a/lib/sequel/adapters/do/postgres.rb b/lib/sequel/adapters/do/postgres.rb
index 358c006..ce334e3 100644
--- a/lib/sequel/adapters/do/postgres.rb
+++ b/lib/sequel/adapters/do/postgres.rb
@@ -1,9 +1,17 @@
+Sequel::DataObjects.load_driver 'do_postgres'
 Sequel.require 'adapters/shared/postgres'
 
 module Sequel
   Postgres::CONVERTED_EXCEPTIONS << ::DataObjects::Error
   
   module DataObjects
+    Sequel.synchronize do
+      DATABASE_SETUP[:postgres] = proc do |db|
+        db.extend(Sequel::DataObjects::Postgres::DatabaseMethods)
+        db.extend_datasets Sequel::Postgres::DatasetMethods
+      end
+    end
+
     # Adapter, Database, and Dataset support for accessing a PostgreSQL
     # database via DataObjects.
     module Postgres
diff --git a/lib/sequel/adapters/do/sqlite.rb b/lib/sequel/adapters/do/sqlite3.rb
similarity index 76%
rename from lib/sequel/adapters/do/sqlite.rb
rename to lib/sequel/adapters/do/sqlite3.rb
index 377a1fe..40e1dd5 100644
--- a/lib/sequel/adapters/do/sqlite.rb
+++ b/lib/sequel/adapters/do/sqlite3.rb
@@ -1,7 +1,16 @@
+Sequel::DataObjects.load_driver 'do_sqlite3'
 Sequel.require 'adapters/shared/sqlite'
 
 module Sequel
   module DataObjects
+    Sequel.synchronize do
+      DATABASE_SETUP[:sqlite3] = proc do |db|
+        db.extend(Sequel::DataObjects::SQLite::DatabaseMethods)
+        db.extend_datasets Sequel::SQLite::DatasetMethods
+        db.set_integer_booleans
+      end
+    end
+
     # Database and Dataset support for SQLite databases accessed via DataObjects.
     module SQLite
       # Instance methods for SQLite Database objects accessed via DataObjects.
diff --git a/lib/sequel/adapters/jdbc.rb b/lib/sequel/adapters/jdbc.rb
index bc02c34..5234ac9 100644
--- a/lib/sequel/adapters/jdbc.rb
+++ b/lib/sequel/adapters/jdbc.rb
@@ -27,146 +27,11 @@ module Sequel
     # to :integer.
     DECIMAL_TYPE_RE = /number|numeric|decimal/io
     
-    # Contains procs keyed on sub adapter type that extend the
+    # Contains procs keyed on subadapter type that extend the
     # given database object so it supports the correct database type.
-    DATABASE_SETUP = {:postgresql=>proc do |db|
-        JDBC.load_gem(:Postgres)
-        org.postgresql.Driver
-        Sequel.require 'adapters/jdbc/postgresql'
-        db.extend(Sequel::JDBC::Postgres::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::Postgres::Dataset
-        org.postgresql.Driver
-      end,
-      :mysql=>proc do |db|
-        JDBC.load_gem(:MySQL)
-        com.mysql.jdbc.Driver
-        Sequel.require 'adapters/jdbc/mysql'
-        db.extend(Sequel::JDBC::MySQL::DatabaseMethods)
-        db.extend_datasets Sequel::MySQL::DatasetMethods
-        com.mysql.jdbc.Driver
-      end,
-      :sqlite=>proc do |db|
-        JDBC.load_gem(:SQLite3)
-        org.sqlite.JDBC
-        Sequel.require 'adapters/jdbc/sqlite'
-        db.extend(Sequel::JDBC::SQLite::DatabaseMethods)
-        db.extend_datasets Sequel::SQLite::DatasetMethods
-        db.set_integer_booleans
-        org.sqlite.JDBC
-      end,
-      :oracle=>proc do |db|
-        Java::oracle.jdbc.driver.OracleDriver
-        Sequel.require 'adapters/jdbc/oracle'
-        db.extend(Sequel::JDBC::Oracle::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::Oracle::Dataset
-        Java::oracle.jdbc.driver.OracleDriver
-      end,
-      :sqlserver=>proc do |db|
-        com.microsoft.sqlserver.jdbc.SQLServerDriver
-        Sequel.require 'adapters/jdbc/sqlserver'
-        db.extend(Sequel::JDBC::SQLServer::DatabaseMethods)
-        db.extend_datasets Sequel::MSSQL::DatasetMethods
-        db.send(:set_mssql_unicode_strings)
-        com.microsoft.sqlserver.jdbc.SQLServerDriver
-      end,
-      :jtds=>proc do |db|
-        JDBC.load_gem(:JTDS)
-        Java::net.sourceforge.jtds.jdbc.Driver
-        Sequel.require 'adapters/jdbc/jtds'
-        db.extend(Sequel::JDBC::JTDS::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::JTDS::Dataset
-        db.send(:set_mssql_unicode_strings)
-        Java::net.sourceforge.jtds.jdbc.Driver
-      end,
-      :h2=>proc do |db|
-        JDBC.load_gem(:H2)
-        org.h2.Driver
-        Sequel.require 'adapters/jdbc/h2'
-        db.extend(Sequel::JDBC::H2::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::H2::Dataset
-        org.h2.Driver
-      end,
-      :hsqldb=>proc do |db|
-        JDBC.load_gem(:HSQLDB)
-        org.hsqldb.jdbcDriver
-        Sequel.require 'adapters/jdbc/hsqldb'
-        db.extend(Sequel::JDBC::HSQLDB::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::HSQLDB::Dataset
-        org.hsqldb.jdbcDriver
-      end,
-      :derby=>proc do |db|
-        JDBC.load_gem(:Derby)
-        org.apache.derby.jdbc.EmbeddedDriver
-        Sequel.require 'adapters/jdbc/derby'
-        db.extend(Sequel::JDBC::Derby::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::Derby::Dataset
-        org.apache.derby.jdbc.EmbeddedDriver
-      end,
-      :as400=>proc do |db|
-        com.ibm.as400.access.AS400JDBCDriver
-        Sequel.require 'adapters/jdbc/as400'
-        db.extend(Sequel::JDBC::AS400::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::AS400::Dataset
-        com.ibm.as400.access.AS400JDBCDriver
-      end,
-      :"informix-sqli"=>proc do |db|
-        com.informix.jdbc.IfxDriver
-        Sequel.require 'adapters/jdbc/informix'
-        db.extend(Sequel::JDBC::Informix::DatabaseMethods)
-        db.extend_datasets Sequel::Informix::DatasetMethods
-        com.informix.jdbc.IfxDriver
-      end,
-      :db2=>proc do |db|
-        com.ibm.db2.jcc.DB2Driver
-        Sequel.require 'adapters/jdbc/db2'
-        db.extend(Sequel::JDBC::DB2::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::DB2::Dataset
-        com.ibm.db2.jcc.DB2Driver
-      end,
-      :firebirdsql=>proc do |db|
-        org.firebirdsql.jdbc.FBDriver
-        Sequel.require 'adapters/jdbc/firebird'
-        db.extend(Sequel::JDBC::Firebird::DatabaseMethods)
-        db.extend_datasets Sequel::Firebird::DatasetMethods
-        org.firebirdsql.jdbc.FBDriver
-      end,
-      :jdbcprogress=>proc do |db|
-        com.progress.sql.jdbc.JdbcProgressDriver
-        Sequel.require 'adapters/jdbc/progress'
-        db.extend(Sequel::JDBC::Progress::DatabaseMethods)
-        db.extend_datasets Sequel::Progress::DatasetMethods
-        com.progress.sql.jdbc.JdbcProgressDriver
-      end,
-      :cubrid=>proc do |db|
-        Java::cubrid.jdbc.driver.CUBRIDDriver
-        Sequel.require 'adapters/jdbc/cubrid'
-        db.extend(Sequel::JDBC::Cubrid::DatabaseMethods)
-        db.extend_datasets Sequel::Cubrid::DatasetMethods
-        Java::cubrid.jdbc.driver.CUBRIDDriver
-      end,
-      :sqlanywhere=>proc do |db|
-        drv = [
-          lambda{Java::sybase.jdbc4.sqlanywhere.IDriver},
-          lambda{Java::ianywhere.ml.jdbcodbc.jdbc4.IDriver},
-          lambda{Java::sybase.jdbc.sqlanywhere.IDriver},
-          lambda{Java::ianywhere.ml.jdbcodbc.jdbc.IDriver},
-          lambda{Java::com.sybase.jdbc4.jdbc.Sybdriver},
-          lambda{Java::com.sybase.jdbc3.jdbc.Sybdriver}
-        ].each do |class_proc|
-          begin
-            break class_proc.call
-          rescue NameError
-          end
-        end
-        Sequel.require 'adapters/jdbc/sqlanywhere'
-        db.extend(Sequel::JDBC::SqlAnywhere::DatabaseMethods)
-        db.dataset_class = Sequel::JDBC::SqlAnywhere::Dataset
-        drv
-      end
-    }
+    DATABASE_SETUP = {}
     
-    # Allowing loading the necessary JDBC support via a gem, which
-    # works for PostgreSQL, MySQL, and SQLite.
+    # Allow loading the necessary JDBC support via a gem.
     def self.load_gem(name)
       begin
         require "jdbc/#{name.to_s.downcase}"
@@ -180,6 +45,18 @@ module Sequel
       end
     end
 
+    # Attempt to load the JDBC driver class, which should be specified as a string
+    # containing the driver class name (which JRuby should autoload).
+    # Note that the string is evaled, so this method is not safe to call with
+    # untrusted input.
+    # Raise a Sequel::AdapterNotFound if evaluating the class name raises a NameError.
+    def self.load_driver(drv, gem=nil)
+      load_gem(gem) if gem
+      eval drv
+    rescue NameError
+      raise Sequel::AdapterNotFound, "#{drv} not loaded#{", try installing jdbc-#{gem.to_s.downcase} gem" if gem}"
+    end
+
     class TypeConvertor
       %w'Boolean Float Double Int Long Short'.each do |meth|
         class_eval("def #{meth}(r, i) v = r.get#{meth}(i); v unless r.wasNull end", __FILE__, __LINE__)
@@ -473,7 +350,7 @@ module Sequel
         
         resolved_uri = jndi? ? get_uri_from_jndi : uri
 
-        @driver = if (match = /\Ajdbc:([^:]+)/.match(resolved_uri)) && (prok = DATABASE_SETUP[match[1].to_sym])
+        @driver = if (match = /\Ajdbc:([^:]+)/.match(resolved_uri)) && (prok = Sequel::Database.load_adapter(match[1].to_sym, :map=>DATABASE_SETUP, :subdir=>'jdbc'))
           prok.call(self)
         else
           @opts[:driver]
@@ -729,6 +606,7 @@ module Sequel
         metadata(:getColumns, nil, schema, table, nil) do |h|
           next if schema_parse_table_skip?(h, schema)
           s = {:type=>schema_column_type(h[:type_name]), :db_type=>h[:type_name], :default=>(h[:column_def] == '' ? nil : h[:column_def]), :allow_null=>(h[:nullable] != 0), :primary_key=>pks.include?(h[:column_name]), :column_size=>h[:column_size], :scale=>h[:decimal_digits]}
+          s[:max_length] = s[:column_size] if s[:type] == :string
           if s[:db_type] =~ DECIMAL_TYPE_RE && s[:scale] == 0
             s[:type] = :integer
           end
diff --git a/lib/sequel/adapters/jdbc/as400.rb b/lib/sequel/adapters/jdbc/as400.rb
index 84bfe54..f847b2d 100644
--- a/lib/sequel/adapters/jdbc/as400.rb
+++ b/lib/sequel/adapters/jdbc/as400.rb
@@ -1,8 +1,17 @@
+Sequel::JDBC.load_driver('com.ibm.as400.access.AS400JDBCDriver')
 Sequel.require 'adapters/jdbc/transactions'
 Sequel.require 'adapters/utils/emulate_offset_with_row_number'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:as400] = proc do |db|
+        db.extend(Sequel::JDBC::AS400::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::AS400::Dataset
+        com.ibm.as400.access.AS400JDBCDriver
+      end
+    end
+
     # Database and Dataset support for AS400 databases accessed via JDBC.
     module AS400
       # Instance methods for AS400 Database objects accessed via JDBC.
diff --git a/lib/sequel/adapters/jdbc/cubrid.rb b/lib/sequel/adapters/jdbc/cubrid.rb
index 9ff05ca..98cb5ed 100644
--- a/lib/sequel/adapters/jdbc/cubrid.rb
+++ b/lib/sequel/adapters/jdbc/cubrid.rb
@@ -1,8 +1,17 @@
+Sequel::JDBC.load_driver('Java::cubrid.jdbc.driver.CUBRIDDriver')
 Sequel.require 'adapters/shared/cubrid'
 Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:cubrid] = proc do |db|
+        db.extend(Sequel::JDBC::Cubrid::DatabaseMethods)
+        db.extend_datasets Sequel::Cubrid::DatasetMethods
+        Java::cubrid.jdbc.driver.CUBRIDDriver
+      end
+    end
+
     module Cubrid
       module DatabaseMethods
         extend Sequel::Database::ResetIdentifierMangling
diff --git a/lib/sequel/adapters/jdbc/db2.rb b/lib/sequel/adapters/jdbc/db2.rb
index 68ba6a1..7c40ba8 100644
--- a/lib/sequel/adapters/jdbc/db2.rb
+++ b/lib/sequel/adapters/jdbc/db2.rb
@@ -1,8 +1,17 @@
+Sequel::JDBC.load_driver('com.ibm.db2.jcc.DB2Driver')
 Sequel.require 'adapters/shared/db2'
 Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:db2] = proc do |db|
+        db.extend(Sequel::JDBC::DB2::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::DB2::Dataset
+        com.ibm.db2.jcc.DB2Driver
+      end
+    end
+
     class TypeConvertor
       def DB2Clob(r, i)
         if v = r.getClob(i)
diff --git a/lib/sequel/adapters/jdbc/derby.rb b/lib/sequel/adapters/jdbc/derby.rb
index 3e5562c..87affb9 100644
--- a/lib/sequel/adapters/jdbc/derby.rb
+++ b/lib/sequel/adapters/jdbc/derby.rb
@@ -1,7 +1,16 @@
+Sequel::JDBC.load_driver('org.apache.derby.jdbc.EmbeddedDriver', :Derby)
 Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:derby] = proc do |db|
+        db.extend(Sequel::JDBC::Derby::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::Derby::Dataset
+        org.apache.derby.jdbc.EmbeddedDriver
+      end
+    end
+
     # Database and Dataset support for Derby databases accessed via JDBC.
     module Derby
       # Instance methods for Derby Database objects accessed via JDBC.
diff --git a/lib/sequel/adapters/jdbc/firebird.rb b/lib/sequel/adapters/jdbc/firebirdsql.rb
similarity index 71%
rename from lib/sequel/adapters/jdbc/firebird.rb
rename to lib/sequel/adapters/jdbc/firebirdsql.rb
index fea08e9..cd4e55f 100644
--- a/lib/sequel/adapters/jdbc/firebird.rb
+++ b/lib/sequel/adapters/jdbc/firebirdsql.rb
@@ -1,8 +1,17 @@
+Sequel::JDBC.load_driver('org.firebirdsql.jdbc.FBDriver')
 Sequel.require 'adapters/shared/firebird'
 Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:firebirdsql] = proc do |db|
+        db.extend(Sequel::JDBC::Firebird::DatabaseMethods)
+        db.extend_datasets Sequel::Firebird::DatasetMethods
+        org.firebirdsql.jdbc.FBDriver
+      end
+    end
+
     # Database and Dataset instance methods for Firebird specific
     # support via JDBC.
     module Firebird
diff --git a/lib/sequel/adapters/jdbc/h2.rb b/lib/sequel/adapters/jdbc/h2.rb
index 4503654..a102355 100644
--- a/lib/sequel/adapters/jdbc/h2.rb
+++ b/lib/sequel/adapters/jdbc/h2.rb
@@ -1,5 +1,15 @@
+Sequel::JDBC.load_driver('org.h2.Driver', :H2)
+
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:h2] = proc do |db|
+        db.extend(Sequel::JDBC::H2::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::H2::Dataset
+        org.h2.Driver
+      end
+    end
+
     # Database and Dataset support for H2 databases accessed via JDBC.
     module H2
       # Instance methods for H2 Database objects accessed via JDBC.
diff --git a/lib/sequel/adapters/jdbc/hsqldb.rb b/lib/sequel/adapters/jdbc/hsqldb.rb
index e814440..c5cf5c2 100644
--- a/lib/sequel/adapters/jdbc/hsqldb.rb
+++ b/lib/sequel/adapters/jdbc/hsqldb.rb
@@ -1,7 +1,16 @@
+Sequel::JDBC.load_driver('org.hsqldb.jdbcDriver', :HSQLDB)
 Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:hsqldb] = proc do |db|
+        db.extend(Sequel::JDBC::HSQLDB::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::HSQLDB::Dataset
+        org.hsqldb.jdbcDriver
+      end
+    end
+
     # Database and Dataset support for HSQLDB databases accessed via JDBC.
     module HSQLDB
       # Instance methods for HSQLDB Database objects accessed via JDBC.
diff --git a/lib/sequel/adapters/jdbc/informix.rb b/lib/sequel/adapters/jdbc/informix-sqli.rb
similarity index 63%
rename from lib/sequel/adapters/jdbc/informix.rb
rename to lib/sequel/adapters/jdbc/informix-sqli.rb
index b7d5be2..05b4a7e 100644
--- a/lib/sequel/adapters/jdbc/informix.rb
+++ b/lib/sequel/adapters/jdbc/informix-sqli.rb
@@ -1,7 +1,16 @@
+Sequel::JDBC.load_driver('com.informix.jdbc.IfxDriver')
 Sequel.require 'adapters/shared/informix'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:"informix-sqli"] = proc do |db|
+        db.extend(Sequel::JDBC::Informix::DatabaseMethods)
+        db.extend_datasets Sequel::Informix::DatasetMethods
+        com.informix.jdbc.IfxDriver
+      end
+    end
+
     # Database and Dataset instance methods for Informix specific
     # support via JDBC.
     module Informix
diff --git a/lib/sequel/adapters/jdbc/progress.rb b/lib/sequel/adapters/jdbc/jdbcprogress.rb
similarity index 66%
rename from lib/sequel/adapters/jdbc/progress.rb
rename to lib/sequel/adapters/jdbc/jdbcprogress.rb
index ac5ea06..dd11319 100644
--- a/lib/sequel/adapters/jdbc/progress.rb
+++ b/lib/sequel/adapters/jdbc/jdbcprogress.rb
@@ -1,8 +1,17 @@
+Sequel::JDBC.load_driver('com.progress.sql.jdbc.JdbcProgressDriver')
 Sequel.require 'adapters/shared/progress'
 Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:jdbcprogress] = proc do |db|
+        db.extend(Sequel::JDBC::Progress::DatabaseMethods)
+        db.extend_datasets Sequel::Progress::DatasetMethods
+        com.progress.sql.jdbc.JdbcProgressDriver
+      end
+    end
+
     # Database and Dataset instance methods for Progress v9 specific
     # support via JDBC.
     module Progress
diff --git a/lib/sequel/adapters/jdbc/jtds.rb b/lib/sequel/adapters/jdbc/jtds.rb
index 23122ae..65963b4 100644
--- a/lib/sequel/adapters/jdbc/jtds.rb
+++ b/lib/sequel/adapters/jdbc/jtds.rb
@@ -1,7 +1,17 @@
+Sequel::JDBC.load_driver('Java::net.sourceforge.jtds.jdbc.Driver', :JTDS)
 Sequel.require 'adapters/jdbc/mssql'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:jtds] = proc do |db|
+        db.extend(Sequel::JDBC::JTDS::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::JTDS::Dataset
+        db.send(:set_mssql_unicode_strings)
+        Java::net.sourceforge.jtds.jdbc.Driver
+      end
+    end
+
     # Database and Dataset instance methods for JTDS specific
     # support via JDBC.
     module JTDS
diff --git a/lib/sequel/adapters/jdbc/mysql.rb b/lib/sequel/adapters/jdbc/mysql.rb
index 976a09b..ceb6481 100644
--- a/lib/sequel/adapters/jdbc/mysql.rb
+++ b/lib/sequel/adapters/jdbc/mysql.rb
@@ -1,7 +1,16 @@
+Sequel::JDBC.load_driver('com.mysql.jdbc.Driver', :MySQL)
 Sequel.require 'adapters/shared/mysql'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:mysql] = proc do |db|
+        db.extend(Sequel::JDBC::MySQL::DatabaseMethods)
+        db.extend_datasets Sequel::MySQL::DatasetMethods
+        com.mysql.jdbc.Driver
+      end
+    end
+
     # Database and Dataset instance methods for MySQL specific
     # support via JDBC.
     module MySQL
@@ -26,6 +35,11 @@ module Sequel
           false
         end
 
+        # Also treat exceptions whose message indicates a communications link failure as disconnect errors.
+        def disconnect_error?(exception, opts)
+          exception.message =~ /\ACommunications link failure/ || super
+        end
+
         # Get the last inserted id using LAST_INSERT_ID().
         def last_insert_id(conn, opts=OPTS)
           if stmt = opts[:stmt]
diff --git a/lib/sequel/adapters/jdbc/oracle.rb b/lib/sequel/adapters/jdbc/oracle.rb
index 6d3eeeb..9f8f68b 100644
--- a/lib/sequel/adapters/jdbc/oracle.rb
+++ b/lib/sequel/adapters/jdbc/oracle.rb
@@ -1,8 +1,17 @@
+Sequel::JDBC.load_driver('Java::oracle.jdbc.driver.OracleDriver')
 Sequel.require 'adapters/shared/oracle'
 Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:oracle] = proc do |db|
+        db.extend(Sequel::JDBC::Oracle::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::Oracle::Dataset
+        Java::oracle.jdbc.driver.OracleDriver
+      end
+    end
+
     class TypeConvertor
       JAVA_BIG_DECIMAL_CONSTRUCTOR = java.math.BigDecimal.java_class.constructor(Java::long).method(:new_instance)
 
diff --git a/lib/sequel/adapters/jdbc/postgresql.rb b/lib/sequel/adapters/jdbc/postgresql.rb
index a609991..5a297eb 100644
--- a/lib/sequel/adapters/jdbc/postgresql.rb
+++ b/lib/sequel/adapters/jdbc/postgresql.rb
@@ -1,9 +1,18 @@
+Sequel::JDBC.load_driver('org.postgresql.Driver', :Postgres)
 Sequel.require 'adapters/shared/postgres'
 
 module Sequel
   Postgres::CONVERTED_EXCEPTIONS << NativeException
   
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:postgresql] = proc do |db|
+        db.extend(Sequel::JDBC::Postgres::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::Postgres::Dataset
+        org.postgresql.Driver
+      end
+    end
+
     class TypeConvertor
       # Return PostgreSQL array types as ruby Arrays instead of
       # JDBC PostgreSQL driver-specific array type. Only used if the
diff --git a/lib/sequel/adapters/jdbc/sqlanywhere.rb b/lib/sequel/adapters/jdbc/sqlanywhere.rb
index f5ecf04..158365b 100644
--- a/lib/sequel/adapters/jdbc/sqlanywhere.rb
+++ b/lib/sequel/adapters/jdbc/sqlanywhere.rb
@@ -3,6 +3,29 @@ Sequel.require 'adapters/jdbc/transactions'
 
 module Sequel
   module JDBC
+    drv = [
+      lambda{Java::sybase.jdbc4.sqlanywhere.IDriver},
+      lambda{Java::ianywhere.ml.jdbcodbc.jdbc4.IDriver},
+      lambda{Java::sybase.jdbc.sqlanywhere.IDriver},
+      lambda{Java::ianywhere.ml.jdbcodbc.jdbc.IDriver},
+      lambda{Java::com.sybase.jdbc4.jdbc.Sybdriver},
+      lambda{Java::com.sybase.jdbc3.jdbc.Sybdriver}
+    ].each do |class_proc|
+      begin
+        break class_proc.call
+      rescue NameError
+      end
+    end
+    raise(Sequel::AdapterNotFound, "no suitable SQLAnywhere JDBC driver found") unless drv
+
+    Sequel.synchronize do
+      DATABASE_SETUP[:sqlanywhere] = proc do |db|
+        db.extend(Sequel::JDBC::SqlAnywhere::DatabaseMethods)
+        db.dataset_class = Sequel::JDBC::SqlAnywhere::Dataset
+        drv
+      end
+    end
+
     class TypeConvertor
       def SqlAnywhereBoolean(r, i)
         if v = Short(r, i)
diff --git a/lib/sequel/adapters/jdbc/sqlite.rb b/lib/sequel/adapters/jdbc/sqlite.rb
index 52266bd..d885409 100644
--- a/lib/sequel/adapters/jdbc/sqlite.rb
+++ b/lib/sequel/adapters/jdbc/sqlite.rb
@@ -1,7 +1,17 @@
+Sequel::JDBC.load_driver('org.sqlite.JDBC', :SQLite3)
 Sequel.require 'adapters/shared/sqlite'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:sqlite] = proc do |db|
+        db.extend(Sequel::JDBC::SQLite::DatabaseMethods)
+        db.extend_datasets Sequel::SQLite::DatasetMethods
+        db.set_integer_booleans
+        org.sqlite.JDBC
+      end
+    end
+
     # Database and Dataset support for SQLite databases accessed via JDBC.
     module SQLite
       # Instance methods for SQLite Database objects accessed via JDBC.
diff --git a/lib/sequel/adapters/jdbc/sqlserver.rb b/lib/sequel/adapters/jdbc/sqlserver.rb
index cc83c2e..cd8efe1 100644
--- a/lib/sequel/adapters/jdbc/sqlserver.rb
+++ b/lib/sequel/adapters/jdbc/sqlserver.rb
@@ -1,7 +1,17 @@
+Sequel::JDBC.load_driver('com.microsoft.sqlserver.jdbc.SQLServerDriver')
 Sequel.require 'adapters/jdbc/mssql'
 
 module Sequel
   module JDBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:sqlserver] = proc do |db|
+        db.extend(Sequel::JDBC::SQLServer::DatabaseMethods)
+        db.extend_datasets Sequel::MSSQL::DatasetMethods
+        db.send(:set_mssql_unicode_strings)
+        com.microsoft.sqlserver.jdbc.SQLServerDriver
+      end
+    end
+
     # Database and Dataset instance methods for SQLServer specific
     # support via JDBC.
     module SQLServer
diff --git a/lib/sequel/adapters/odbc.rb b/lib/sequel/adapters/odbc.rb
index 0076d8c..b252a3f 100644
--- a/lib/sequel/adapters/odbc.rb
+++ b/lib/sequel/adapters/odbc.rb
@@ -2,6 +2,10 @@ require 'odbc'
 
 module Sequel
   module ODBC
+    # Contains procs keyed on subadapter type that extend the
+    # given database object so it supports the correct database type.
+    DATABASE_SETUP = {}
+      
     class Database < Sequel::Database
       set_adapter_scheme :odbc
 
@@ -61,20 +65,8 @@ module Sequel
       private
       
       def adapter_initialize
-        case @opts[:db_type]
-        when 'mssql'
-          Sequel.require 'adapters/odbc/mssql'
-          extend Sequel::ODBC::MSSQL::DatabaseMethods
-          self.dataset_class = Sequel::ODBC::MSSQL::Dataset
-          set_mssql_unicode_strings
-        when 'progress'
-          Sequel.require 'adapters/shared/progress'
-          extend Sequel::Progress::DatabaseMethods
-          extend_datasets(Sequel::Progress::DatasetMethods)
-        when 'db2'
-          Sequel.require 'adapters/shared/db2'
-          extend ::Sequel::DB2::DatabaseMethods
-          extend_datasets ::Sequel::DB2::DatasetMethods
+        if (db_type = @opts[:db_type]) && (prok = Sequel::Database.load_adapter(db_type.to_sym, :map=>DATABASE_SETUP, :subdir=>'odbc'))
+          prok.call(self)
         end
       end
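
A hedged sketch of how the reworked ODBC adapter is now driven from the
connection options: :db_type is looked up in Sequel::ODBC::DATABASE_SETUP via
Database.load_adapter(..., :subdir=>'odbc').  The DSN name and credentials
below are placeholders.

    DB = Sequel.connect(:adapter=>'odbc', :database=>'my_dsn',
                        :db_type=>'mssql', :user=>'sa', :password=>'secret')
    # loads sequel/adapters/odbc/mssql and applies its DATABASE_SETUP proc
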
 
diff --git a/lib/sequel/adapters/odbc/db2.rb b/lib/sequel/adapters/odbc/db2.rb
new file mode 100644
index 0000000..537f02d
--- /dev/null
+++ b/lib/sequel/adapters/odbc/db2.rb
@@ -0,0 +1,9 @@
+Sequel.require 'adapters/shared/db2'
+
+Sequel.synchronize do
+  Sequel::ODBC::DATABASE_SETUP[:db2] = proc do |db|
+    db.extend ::Sequel::DB2::DatabaseMethods
+    db.extend_datasets ::Sequel::DB2::DatasetMethods
+  end
+end
+
diff --git a/lib/sequel/adapters/odbc/mssql.rb b/lib/sequel/adapters/odbc/mssql.rb
index 51a93b6..976a3f7 100644
--- a/lib/sequel/adapters/odbc/mssql.rb
+++ b/lib/sequel/adapters/odbc/mssql.rb
@@ -2,6 +2,14 @@ Sequel.require 'adapters/shared/mssql'
 
 module Sequel
   module ODBC
+    Sequel.synchronize do
+      DATABASE_SETUP[:mssql] = proc do |db|
+        db.extend Sequel::ODBC::MSSQL::DatabaseMethods
+        db.dataset_class = Sequel::ODBC::MSSQL::Dataset
+        db.send(:set_mssql_unicode_strings)
+      end
+    end
+
     # Database and Dataset instance methods for MSSQL specific
     # support via ODBC.
     module MSSQL
diff --git a/lib/sequel/adapters/odbc/progress.rb b/lib/sequel/adapters/odbc/progress.rb
new file mode 100644
index 0000000..0800161
--- /dev/null
+++ b/lib/sequel/adapters/odbc/progress.rb
@@ -0,0 +1,8 @@
+Sequel.require 'adapters/shared/progress'
+
+Sequel.synchronize do
+  Sequel::ODBC::DATABASE_SETUP[:progress] = proc do |db|
+    db.extend Sequel::Progress::DatabaseMethods
+    db.extend_datasets(Sequel::Progress::DatasetMethods)
+  end
+end
diff --git a/lib/sequel/adapters/oracle.rb b/lib/sequel/adapters/oracle.rb
index f9d9747..7ee8819 100644
--- a/lib/sequel/adapters/oracle.rb
+++ b/lib/sequel/adapters/oracle.rb
@@ -273,7 +273,7 @@ module Sequel
 
         # Default values
         defaults = begin
-          metadata_dataset.from(:user_tab_cols).
+          metadata_dataset.from(:all_tab_cols).
             where(:table_name=>im.call(table)).
             to_hash(:column_name, :data_default)
         rescue DatabaseError
@@ -305,6 +305,7 @@ module Sequel
               :allow_null => column.nullable?
           }
           h[:type] = oracle_column_type(h)
+          h[:max_length] = h[:char_size] if h[:type] == :string
           table_schema << [m.call(column.name), h]
         end
         table_schema
diff --git a/lib/sequel/adapters/postgres.rb b/lib/sequel/adapters/postgres.rb
index 797ed7b..e3d299a 100644
--- a/lib/sequel/adapters/postgres.rb
+++ b/lib/sequel/adapters/postgres.rb
@@ -108,7 +108,7 @@ module Sequel
     # PGconn subclass for connection specific methods used with the
     # pg, postgres, or postgres-pr driver.
     class Adapter < ::PGconn
-      DISCONNECT_ERROR_RE = /\A(?:could not receive data from server|no connection to the server|connection not open)/
+      DISCONNECT_ERROR_RE = /\A(?:could not receive data from server|no connection to the server|connection not open|terminating connection due to administrator command)/
       
       self.translate_results = false if respond_to?(:translate_results=)
       
@@ -198,12 +198,13 @@ module Sequel
       # Connects to the database.  In addition to the standard database
       # options, using the :encoding or :charset option changes the
       # client encoding for the connection, :connect_timeout is a
-      # connection timeout in seconds, and :sslmode sets whether postgres's
-      # sslmode.  :connect_timeout and :ssl_mode are only supported if the pg
-      # driver is used.
+      # connection timeout in seconds, :sslmode sets the postgres sslmode,
+      # and :notice_receiver is a proc that handles server notices.
+      # :connect_timeout, :sslmode, and :notice_receiver are only supported
+      # if the pg driver is used.
       def connect(server)
         opts = server_opts(server)
-        conn = if SEQUEL_POSTGRES_USES_PG
+        if SEQUEL_POSTGRES_USES_PG
           connection_params = {
             :host => opts[:host],
             :port => opts[:port] || 5432,
@@ -213,9 +214,15 @@ module Sequel
             :connect_timeout => opts[:connect_timeout] || 20,
             :sslmode => opts[:sslmode]
           }.delete_if { |key, value| blank_object?(value) }
-          Adapter.connect(connection_params)
+          conn = Adapter.connect(connection_params)
+
+          conn.instance_variable_set(:@prepared_statements, {})
+
+          if receiver = opts[:notice_receiver]
+            conn.set_notice_receiver(&receiver)
+          end
         else
-          Adapter.connect(
+          conn = Adapter.connect(
             (opts[:host] unless blank_object?(opts[:host])),
             opts[:port] || 5432,
             nil, '',
@@ -224,6 +231,9 @@ module Sequel
             opts[:password]
           )
         end
+
+        conn.instance_variable_set(:@db, self)
+
         if encoding = opts[:encoding] || opts[:charset]
           if conn.respond_to?(:set_client_encoding)
             conn.set_client_encoding(encoding)
@@ -231,8 +241,7 @@ module Sequel
             conn.async_exec("set client_encoding to '#{encoding}'")
           end
         end
-        conn.instance_variable_set(:@db, self)
-        conn.instance_variable_set(:@prepared_statements, {}) if SEQUEL_POSTGRES_USES_PG
+
         connection_configuration_sqls.each{|sql| conn.execute(sql)}
         conn
       end
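
A minimal sketch of the new :notice_receiver option (pg driver only); the
database name and proc body are illustrative.  The proc is passed to
PGconn#set_notice_receiver and receives the result object for each notice
sent by the server.

    DB = Sequel.connect(:adapter=>'postgres', :database=>'mydb',
      :notice_receiver=>proc{|res| puts res.result_error_message})
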
diff --git a/lib/sequel/adapters/shared/cubrid.rb b/lib/sequel/adapters/shared/cubrid.rb
index a8648c4..539d740 100644
--- a/lib/sequel/adapters/shared/cubrid.rb
+++ b/lib/sequel/adapters/shared/cubrid.rb
@@ -51,12 +51,13 @@ module Sequel
           from(:db_attribute).
           where(:class_name=>m2.call(table_name)).
           order(:def_order).
-          select(:attr_name, :data_type___db_type, :default_value___default, :is_nullable___allow_null).
+          select(:attr_name, :data_type___db_type, :default_value___default, :is_nullable___allow_null, :prec).
           map do |row|
             name = m.call(row.delete(:attr_name))
             row[:allow_null] = row[:allow_null] == 'YES'
             row[:primary_key] = pks.include?(name)
             row[:type] = schema_column_type(row[:db_type])
+            row[:max_length] = row[:prec] if row[:type] == :string
             [name, row]
           end
       end
diff --git a/lib/sequel/adapters/shared/db2.rb b/lib/sequel/adapters/shared/db2.rb
index 9249ee3..9a4ff0e 100644
--- a/lib/sequel/adapters/shared/db2.rb
+++ b/lib/sequel/adapters/shared/db2.rb
@@ -42,6 +42,7 @@ module Sequel
             column[:allow_null]  = column.delete(:nulls) == 'Y'
             column[:primary_key] = column.delete(:identity) == 'Y' || !column[:keyseq].nil?
             column[:type]        = schema_column_type(column[:db_type])
+            column[:max_length]  = column[:longlength] if column[:type] == :string
             [ m.call(column.delete(:name)), column]
           end
       end
diff --git a/lib/sequel/adapters/shared/firebird.rb b/lib/sequel/adapters/shared/firebird.rb
index 06fbddd..8fd688c 100644
--- a/lib/sequel/adapters/shared/firebird.rb
+++ b/lib/sequel/adapters/shared/firebird.rb
@@ -175,7 +175,14 @@ module Sequel
 
       # Insert a record returning the record inserted
       def insert_select(*values)
-        returning.insert(*values){|r| return r}
+        with_sql_first(insert_select_sql(*values))
+      end
+
+      # The SQL to use for an insert_select: adds a RETURNING clause to the insert
+      # unless a RETURNING clause is already present.
+      def insert_select_sql(*values)
+        ds = opts[:returning] ? self : returning
+        ds.insert_sql(*values)
       end
 
       def requires_sql_standard_datetimes?
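
A hedged illustration of the refactored insert_select: the SQL now comes from
insert_select_sql, so an explicit returning clause on the dataset is respected
instead of always appending RETURNING *.  Table and column names are
hypothetical, and the SQL shown is approximate.

    DB[:items].insert_select(:name=>'abc')
    # INSERT INTO items (name) VALUES ('abc') RETURNING *
    DB[:items].returning(:id).insert_select(:name=>'abc')
    # INSERT INTO items (name) VALUES ('abc') RETURNING id
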
diff --git a/lib/sequel/adapters/shared/mssql.rb b/lib/sequel/adapters/shared/mssql.rb
index 069b64a..f6e0550 100644
--- a/lib/sequel/adapters/shared/mssql.rb
+++ b/lib/sequel/adapters/shared/mssql.rb
@@ -457,6 +457,7 @@ module Sequel
           else
             schema_column_type(row[:db_type])
           end
+          row[:max_length] = row[:max_chars] if row[:type] == :string
           [m.call(row.delete(:column)), row]
         end
       end
@@ -524,7 +525,6 @@ module Sequel
       DATEPART_SECOND_MIDDLE = ') + datepart(ns, '.freeze
       DATEPART_SECOND_CLOSE = ")/1000000000.0) AS double precision)".freeze
       DATEPART_OPEN = "datepart(".freeze
-      TIMESTAMP_USEC_FORMAT = ".%03d".freeze
       OUTPUT_INSERTED = " OUTPUT INSERTED.*".freeze
       HEX_START = '0x'.freeze
       UNICODE_STRING_START = "N'".freeze
@@ -623,7 +623,14 @@ module Sequel
       # Use the OUTPUT clause to get the value of all columns for the newly inserted record.
       def insert_select(*values)
         return unless supports_insert_select?
-        naked.clone(default_server_opts(:sql=>output(nil, [SQL::ColumnAll.new(:inserted)]).insert_sql(*values))).single_record
+        with_sql_first(insert_select_sql(*values))
+      end
+
+      # Add an OUTPUT clause unless one is already present, then return
+      # the SQL for the insert.
+      def insert_select_sql(*values)
+        ds = (opts[:output] || opts[:returning]) ? self : output(nil, [SQL::ColumnAll.new(:inserted)])
+        ds.insert_sql(*values)
       end
 
       # Specify a table for a SELECT ... INTO query.
@@ -657,20 +664,31 @@ module Sequel
         raise(Error, "SQL Server versions 2000 and earlier do not support the OUTPUT clause") unless supports_output_clause?
         output = {}
         case values
-          when Hash
-            output[:column_list], output[:select_list] = values.keys, values.values
-          when Array
-            output[:select_list] = values
+        when Hash
+          output[:column_list], output[:select_list] = values.keys, values.values
+        when Array
+          output[:select_list] = values
         end
         output[:into] = into
-        clone({:output => output})
+        clone(:output => output)
       end
 
       # MSSQL uses [] to quote identifiers.
       def quoted_identifier_append(sql, name)
         sql << BRACKET_OPEN << name.to_s.gsub(/\]/, DOUBLE_BRACKET_CLOSE) << BRACKET_CLOSE
       end
-      
+
+      # Emulate RETURNING using the output clause.  This only handles values that are simple column references.
+      def returning(*values)
+        values = values.map do |v|
+          unless r = unqualified_column_for(v)
+            raise(Error, "cannot emulate RETURNING via OUTPUT for value: #{v.inspect}")
+          end
+          r
+        end
+        clone(:returning=>values)
+      end
+
       # On MSSQL 2012+ add a default order to the current dataset if an offset is used.
       # The default offset emulation using a subquery would be used in the unordered
       # case by default, and that also adds a default order, so it's better to just
@@ -737,11 +755,16 @@ module Sequel
         is_2012_or_later?
       end
 
-      # MSSQL 2005+ supports the output clause.
+      # MSSQL 2005+ supports the OUTPUT clause.
       def supports_output_clause?
         is_2005_or_later?
       end
 
+      # MSSQL 2005+ can emulate RETURNING via the OUTPUT clause.
+      def supports_returning?(type)
+        supports_insert_select?
+      end
+
       # MSSQL 2005+ supports window functions
       def supports_window_functions?
         true
@@ -815,6 +838,10 @@ module Sequel
       end
       alias update_from_sql delete_from2_sql
 
+      def delete_output_sql(sql)
+        output_sql(sql, :DELETED)
+      end
+
       # There is no function on Microsoft SQL Server that does character length
       # and respects trailing spaces (datalength respects trailing spaces, but
       # counts bytes instead of characters).  Use a hack to work around the
@@ -843,22 +870,10 @@ module Sequel
         @db.schema(self).map{|k, v| k if v[:primary_key] == true}.compact.first
       end
 
-      # MSSQL raises an error if you try to provide more than 3 decimal places
-      # for a fractional timestamp.  This probably doesn't work for smalldatetime
-      # fields.
-      def format_timestamp_usec(usec)
-        sprintf(TIMESTAMP_USEC_FORMAT, usec/1000)
-      end
-
-      # Use OUTPUT INSERTED.* to return all columns of the inserted row,
-      # for use with the prepared statement code.
       def insert_output_sql(sql)
-        if @opts.has_key?(:returning)
-          sql << OUTPUT_INSERTED
-        else
-          output_sql(sql)
-        end
+        output_sql(sql, :INSERTED)
       end
+      alias update_output_sql insert_output_sql
 
       # Handle CROSS APPLY and OUTER APPLY JOIN types
       def join_type_sql(join_type)
@@ -963,9 +978,16 @@ module Sequel
       end
 
       # SQL fragment for MSSQL's OUTPUT clause.
-      def output_sql(sql)
+      def output_sql(sql, type)
         return unless supports_output_clause?
-        return unless output = @opts[:output]
+        if output = @opts[:output]
+          output_list_sql(sql, output)
+        elsif values = @opts[:returning]
+          output_returning_sql(sql, type, values)
+        end
+      end
+
+      def output_list_sql(sql, output)
         sql << OUTPUT
         column_list_append(sql, output[:select_list])
         if into = output[:into]
@@ -978,8 +1000,28 @@ module Sequel
           end
         end
       end
-      alias delete_output_sql output_sql
-      alias update_output_sql output_sql
+
+      def output_returning_sql(sql, type, values)
+        sql << OUTPUT
+        if values.empty?
+          literal_append(sql, SQL::ColumnAll.new(type))
+        else
+          values = values.map do |v|
+            case v
+            when SQL::AliasedExpression
+              Sequel.qualify(type, v.expression).as(v.alias)
+            else
+              Sequel.qualify(type, v)
+            end
+          end
+          column_list_append(sql, values)
+        end
+      end
+
+      # MSSQL supports millisecond timestamp precision.
+      def timestamp_precision
+        3
+      end
 
       # Only include the primary table in the main update clause
       def update_table_sql(sql)
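
A hedged sketch of the new RETURNING emulation on MSSQL 2005+, where simple
column references are rewritten into an OUTPUT clause qualified with INSERTED
or DELETED.  Table and column names are hypothetical, and the SQL is
abbreviated.

    DB[:items].returning(:id).insert(:name=>'abc')
    # INSERT INTO items (name) OUTPUT INSERTED.id VALUES ('abc')
    DB[:items].where(:id=>1).returning.delete
    # DELETE FROM items OUTPUT DELETED.* WHERE (id = 1)
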
diff --git a/lib/sequel/adapters/shared/mysql.rb b/lib/sequel/adapters/shared/mysql.rb
index 1245dcb..e0e3464 100644
--- a/lib/sequel/adapters/shared/mysql.rb
+++ b/lib/sequel/adapters/shared/mysql.rb
@@ -368,12 +368,10 @@ module Sequel
         generator.columns.each do |c|
           if t = c.delete(:table)
             same_table = t == name
-            k = c[:key]
+            key = c[:key] || key_proc.call(t)
 
-            key ||= key_proc.call(t)
-
-            if same_table && !k.nil?
-              generator.constraints.unshift(:type=>:unique, :columns=>Array(k))
+            if same_table && !key.nil?
+              generator.constraints.unshift(:type=>:unique, :columns=>Array(key))
             end
 
             generator.foreign_key([c[:name]], t, c.merge(:name=>c[:foreign_key_constraint_name], :type=>:foreign_key, :key=>key))
diff --git a/lib/sequel/adapters/shared/oracle.rb b/lib/sequel/adapters/shared/oracle.rb
index e8bfbec..d8b7b6b 100644
--- a/lib/sequel/adapters/shared/oracle.rb
+++ b/lib/sequel/adapters/shared/oracle.rb
@@ -59,19 +59,33 @@ module Sequel
         false
       end
 
+      IGNORE_OWNERS = %w'APEX_040000 CTXSYS EXFSYS MDSYS OLAPSYS ORDDATA ORDSYS SYS SYSTEM XDB XDBMETADATA XDBPM XFILES WMSYS'
+
       def tables(opts=OPTS)
         m = output_identifier_meth
-        metadata_dataset.from(:tabs).server(opts[:server]).select(:table_name).map{|r| m.call(r[:table_name])}
+        metadata_dataset.from(:all_tables).
+          server(opts[:server]).
+          where(:dropped=>'NO').
+          exclude(:owner=>IGNORE_OWNERS).
+          select(:table_name).
+          map{|r| m.call(r[:table_name])}
       end
 
       def views(opts=OPTS) 
         m = output_identifier_meth
-        metadata_dataset.from(:tab).server(opts[:server]).select(:tname).filter(:tabtype => 'VIEW').map{|r| m.call(r[:tname])}
+        metadata_dataset.from(:all_views).
+          server(opts[:server]).
+          exclude(:owner=>IGNORE_OWNERS).
+          select(:view_name).
+          map{|r| m.call(r[:view_name])}
       end 
  
       def view_exists?(name) 
         m = input_identifier_meth
-        metadata_dataset.from(:tab).filter(:tname =>m.call(name), :tabtype => 'VIEW').count > 0 
+        metadata_dataset.from(:all_views).
+          exclude(:owner=>IGNORE_OWNERS).
+          where(:view_name=>m.call(name)).
+          count > 0
       end 
 
       # Oracle supports deferrable constraints.
diff --git a/lib/sequel/adapters/shared/postgres.rb b/lib/sequel/adapters/shared/postgres.rb
index 91da4d1..63ef3f3 100644
--- a/lib/sequel/adapters/shared/postgres.rb
+++ b/lib/sequel/adapters/shared/postgres.rb
@@ -524,6 +524,17 @@ module Sequel
         @supported_types.fetch(type){@supported_types[type] = (from(:pg_type).filter(:typtype=>'b', :typname=>type.to_s).count > 0)}
       end
 
+      # Creates a dataset that uses the VALUES clause:
+      #
+      #   DB.values([[1, 2], [3, 4]])
+      #   VALUES ((1, 2), (3, 4))
+      #
+      #   DB.values([[1, 2], [3, 4]]).order(:column2).limit(1, 1)
+      #   VALUES ((1, 2), (3, 4)) ORDER BY column2 LIMIT 1 OFFSET 1
+      def values(v)
+        @default_dataset.clone(:values=>v)
+      end
+
       # Array of symbols specifying view names in the current database.
       #
       # Options:
@@ -1147,12 +1158,13 @@ module Sequel
       CRLF = "\r\n".freeze
       BLOB_RE = /[\000-\037\047\134\177-\377]/n.freeze
       WINDOW = " WINDOW ".freeze
+      SELECT_VALUES = "VALUES ".freeze
       EMPTY_STRING = ''.freeze
       LOCK_MODES = ['ACCESS SHARE', 'ROW SHARE', 'ROW EXCLUSIVE', 'SHARE UPDATE EXCLUSIVE', 'SHARE', 'SHARE ROW EXCLUSIVE', 'EXCLUSIVE', 'ACCESS EXCLUSIVE'].each{|s| s.freeze}
 
       Dataset.def_sql_method(self, :delete, [['if server_version >= 90100', %w'with delete from using where returning'], ['else', %w'delete from using where returning']])
       Dataset.def_sql_method(self, :insert, [['if server_version >= 90100', %w'with insert into columns values returning'], ['else', %w'insert into columns values returning']])
-      Dataset.def_sql_method(self, :select, [['if server_version >= 80400', %w'with select distinct columns from join where group having window compounds order limit lock'], ['else', %w'select distinct columns from join where group having compounds order limit lock']])
+      Dataset.def_sql_method(self, :select, [['if opts[:values]', %w'values order limit'], ['elsif server_version >= 80400', %w'with select distinct columns from join where group having window compounds order limit lock'], ['else', %w'select distinct columns from join where group having compounds order limit lock']])
       Dataset.def_sql_method(self, :update, [['if server_version >= 90100', %w'with update table set from where returning'], ['else', %w'update table set from where returning']])
 
       # Shared methods for prepared statements when used with PostgreSQL databases.
@@ -1285,7 +1297,15 @@ module Sequel
       # Insert a record returning the record inserted.  Always returns nil without
       # running a query if disable_insert_returning is used.
       def insert_select(*values)
-        returning.insert(*values){|r| return r} unless @opts[:disable_insert_returning]
+        return unless supports_insert_select?
+        with_sql_first(insert_select_sql(*values))
+      end
+
+      # The SQL to use for an insert_select: adds a RETURNING clause to the insert
+      # unless a RETURNING clause is already present.
+      def insert_select_sql(*values)
+        ds = opts[:returning] ? self : returning
+        ds.insert_sql(*values)
       end
 
       # Locks all tables in the dataset's FROM clause (but not in JOINs) with
@@ -1510,6 +1530,12 @@ module Sequel
         @opts[:lock] == :share ? (sql << FOR_SHARE) : super
       end
 
+      # Support VALUES clause instead of the SELECT clause to return rows.
+      def select_values_sql(sql)
+        sql << SELECT_VALUES
+        expression_list_append(sql, opts[:values])
+      end
+
       # SQL fragment for named window specifications
       def select_window_sql(sql)
         if ws = @opts[:window]
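
An illustrative use of the new Database#values defined above; on PostgreSQL
the rows come back with the server's default column names for VALUES
(column1, column2, ...).

    DB.values([[1, 'a'], [2, 'b']]).all
    # => [{:column1=>1, :column2=>'a'}, {:column1=>2, :column2=>'b'}]
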
diff --git a/lib/sequel/adapters/shared/sqlanywhere.rb b/lib/sequel/adapters/shared/sqlanywhere.rb
index 54ac907..e6be6ff 100644
--- a/lib/sequel/adapters/shared/sqlanywhere.rb
+++ b/lib/sequel/adapters/shared/sqlanywhere.rb
@@ -66,6 +66,7 @@ module Sequel
           else
             schema_column_type(row[:db_type])
           end
+          row[:max_length] = row[:width] if row[:type] == :string
           [m.call(row.delete(:name)), row]
         end
       end
@@ -105,7 +106,7 @@ module Sequel
                :columns=>r[:columns].split(',').map{|v| m.call(v.split(' ').first)},
                :table=>m.call(r[:table_name]),
                :key=>r[:column_map].split(',').map{|v| m.call(v.split(' IS ').last)}}
-           end
+          end
         end
         fk_indexes.values
       end
@@ -254,7 +255,6 @@ module Sequel
       DATEPART = 'datepart'.freeze
       REGEXP = 'REGEXP'.freeze
       NOT_REGEXP = 'NOT REGEXP'.freeze
-      TIMESTAMP_USEC_FORMAT = ".%03d".freeze
       APOS = Dataset::APOS
       APOS_RE = Dataset::APOS_RE
       DOUBLE_APOS = Dataset::DOUBLE_APOS
@@ -424,10 +424,6 @@ module Sequel
         end
       end
 
-      def format_timestamp_usec(usec)
-        sprintf(TIMESTAMP_USEC_FORMAT, usec/1000)
-      end
-
       # Sybase uses TOP N for limit.  For Sybase TOP (N) is used
       # to allow the limit to be a bound variable.
       def select_limit_sql(sql)
@@ -464,6 +460,11 @@ module Sequel
           super
         end
       end
+
+      # SQLAnywhere supports millisecond timestamp precision.
+      def timestamp_precision
+        3
+      end
     end
   end
 end
diff --git a/lib/sequel/adapters/sqlite.rb b/lib/sequel/adapters/sqlite.rb
index ca33502..c86ea07 100644
--- a/lib/sequel/adapters/sqlite.rb
+++ b/lib/sequel/adapters/sqlite.rb
@@ -91,14 +91,20 @@ module Sequel
       # The conversion procs to use for this database
       attr_reader :conversion_procs
 
-      # Connect to the database.  Since SQLite is a file based database,
-      # the only options available are :database (to specify the database
-      # name), and :timeout, to specify how long to wait for the database to
-      # be available if it is locked, given in milliseconds (default is 5000).
+      # Connect to the database. Since SQLite is a file based database,
+      # available options are limited:
+      #
+      # :database :: database name (filename or ':memory:' or file: URI)
+      # :readonly :: open database in read-only mode; useful for reading
+      #              static data that you do not want to modify
+      # :timeout :: how long to wait for the database to be available if it
+      #             is locked, given in milliseconds (default is 5000)
       def connect(server)
         opts = server_opts(server)
         opts[:database] = ':memory:' if blank_object?(opts[:database])
-        db = ::SQLite3::Database.new(opts[:database])
+        sqlite3_opts = {}
+        sqlite3_opts[:readonly] = typecast_value_boolean(opts[:readonly]) if opts.has_key?(:readonly)
+        db = ::SQLite3::Database.new(opts[:database].to_s, sqlite3_opts)
         db.busy_timeout(opts.fetch(:timeout, 5000))
         
         connection_pragmas.each{|s| log_yield(s){db.execute_batch(s)}}
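
A short sketch of the new :readonly option, which is passed through to
SQLite3::Database.new; the file path and table name are placeholders.

    DB = Sequel.sqlite('/path/to/static.db', :readonly=>true)
    DB[:lookup_table].count          # reads work as usual
    DB[:lookup_table].insert(:v=>1)  # raises, the handle is read-only
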
diff --git a/lib/sequel/database/connecting.rb b/lib/sequel/database/connecting.rb
index bab1b34..fa427e3 100644
--- a/lib/sequel/database/connecting.rb
+++ b/lib/sequel/database/connecting.rb
@@ -22,23 +22,10 @@ module Sequel
       return scheme if scheme.is_a?(Class)
 
       scheme = scheme.to_s.gsub('-', '_').to_sym
-      
-      unless klass = ADAPTER_MAP[scheme]
-        # attempt to load the adapter file
-        begin
-          require "sequel/adapters/#{scheme}"
-        rescue LoadError => e
-          raise Sequel.convert_exception_class(e, AdapterNotFound)
-        end
-        
-        # make sure we actually loaded the adapter
-        unless klass = ADAPTER_MAP[scheme]
-          raise AdapterNotFound, "Could not load #{scheme} adapter: adapter class not registered in ADAPTER_MAP"
-        end
-      end
-      klass
+
+      load_adapter(scheme)
     end
-        
+
     # Returns the scheme symbol for the Database class.
     def self.adapter_scheme
       @scheme
@@ -90,6 +77,40 @@ module Sequel
       db
     end
     
+    # Load the adapter from the file system.  Raises Sequel::AdapterNotFound
+    # if the adapter cannot be loaded, or if the adapter isn't registered
+    # correctly after being loaded. Options:
+    # :map :: The Hash in which to look for an already loaded adapter (defaults to ADAPTER_MAP).
+    # :subdir :: The subdirectory of sequel/adapters to look in, only to be used for loading
+    #            subadapters.
+    def self.load_adapter(scheme, opts=OPTS)
+      map = opts[:map] || ADAPTER_MAP
+      if subdir = opts[:subdir]
+        file = "#{subdir}/#{scheme}"
+      else
+        file = scheme
+      end
+      
+      unless obj = Sequel.synchronize{map[scheme]}
+        # attempt to load the adapter file
+        begin
+          require "sequel/adapters/#{file}"
+        rescue LoadError => e
+          # If subadapter file doesn't exist, just return, 
+          # using the main adapter class without database customizations.
+          return if subdir
+          raise Sequel.convert_exception_class(e, AdapterNotFound)
+        end
+        
+        # make sure we actually loaded the adapter
+        unless obj = Sequel.synchronize{map[scheme]}
+          raise AdapterNotFound, "Could not load #{file} adapter: adapter class not registered in ADAPTER_MAP"
+        end
+      end
+
+      obj
+    end
+
     # Sets the adapter scheme for the Database class. Call this method in
     # descendants of Database to allow connection using a URL. For example the
     # following:
@@ -104,7 +125,7 @@ module Sequel
     #   Sequel.connect('mydb://user:password@dbserver/mydb')
     def self.set_adapter_scheme(scheme) # :nodoc:
       @scheme = scheme
-      ADAPTER_MAP[scheme] = self
+      Sequel.synchronize{ADAPTER_MAP[scheme] = self}
     end
     private_class_method :set_adapter_scheme
     
diff --git a/lib/sequel/database/query.rb b/lib/sequel/database/query.rb
index 8da4d1e..18960c6 100644
--- a/lib/sequel/database/query.rb
+++ b/lib/sequel/database/query.rb
@@ -157,7 +157,12 @@ module Sequel
 
       cols = schema_parse_table(table_name, opts)
       raise(Error, 'schema parsing returned no columns, table probably doesn\'t exist') if cols.nil? || cols.empty?
-      cols.each{|_,c| c[:ruby_default] = column_schema_to_ruby_default(c[:default], c[:type])}
+      cols.each do |_,c|
+        c[:ruby_default] = column_schema_to_ruby_default(c[:default], c[:type])
+        if !c[:max_length] && c[:type] == :string && (max_length = column_schema_max_length(c[:db_type]))
+          c[:max_length] = max_length
+        end
+      end
       Sequel.synchronize{@schemas[quoted_name] = cols} if cache_schema
       cols
     end
@@ -251,6 +256,14 @@ module Sequel
       column_schema_default_to_ruby_value(default, type) rescue nil
     end
 
+    # Look at the db_type and guess the maximum length of the column.
+    # This assumes types such as varchar(255).
+    def column_schema_max_length(db_type)
+      if db_type =~ /\((\d+)\)/
+        $1.to_i
+      end
+    end
+
     # Return a Method object for the dataset's output_identifier_method.
     # Used in metadata parsing to make sure the returned information is in the
     # correct format.
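
A hedged example of the new :max_length schema entry, which is guessed from
the db_type (e.g. varchar(255)) when the adapter does not supply it directly;
the table and column names are hypothetical and the hash is abbreviated.

    DB.schema(:items).assoc(:name)
    # => [:name, {:type=>:string, :db_type=>"varchar(255)", :max_length=>255, ...}]
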
diff --git a/lib/sequel/dataset/actions.rb b/lib/sequel/dataset/actions.rb
index 6e55c7b..997b54f 100644
--- a/lib/sequel/dataset/actions.rb
+++ b/lib/sequel/dataset/actions.rb
@@ -1022,8 +1022,12 @@ module Sequel
     def unaliased_identifier(c)
       case c
       when Symbol
-        c_table, column, _ = split_symbol(c)
-        c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym
+        table, column, aliaz = split_symbol(c)
+        if aliaz
+          table ? SQL::QualifiedIdentifier.new(table, column) : Sequel.identifier(column)
+        else
+          c
+        end
       when SQL::AliasedExpression
         c.expression
       when SQL::OrderedExpression
diff --git a/lib/sequel/dataset/graph.rb b/lib/sequel/dataset/graph.rb
index 5495f31..948e4a6 100644
--- a/lib/sequel/dataset/graph.rb
+++ b/lib/sequel/dataset/graph.rb
@@ -95,7 +95,8 @@ module Sequel
       ds = self
 
       # Use a from_self if this is already a joined table (or from_self specifically disabled for graphs)
-      if (@opts[:graph_from_self] != false && !@opts[:graph] && (@opts[:from].length > 1 || @opts[:join]))
+      if (@opts[:graph_from_self] != false && !@opts[:graph] && joined_dataset?)
+        from_selfed = true
         implicit_qualifier = options[:from_self_alias] || first_source
         ds = ds.from_self(:alias=>implicit_qualifier)
       end
@@ -109,49 +110,46 @@ module Sequel
       # Whether to add the columns to the list of column aliases
       add_columns = !ds.opts.include?(:graph_aliases)
 
-      # Setup the initial graph data structure if it doesn't exist
       if graph = opts[:graph]
         opts[:graph] = graph = graph.dup
         select = opts[:select].dup
         [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k] = graph[k].dup}
       else
+        # Setup the initial graph data structure if it doesn't exist
         qualifier = ds.first_source_alias
         master = alias_symbol(qualifier)
         raise_alias_error.call if master == table_alias
+
         # Master hash storing all .graph related information
         graph = opts[:graph] = {}
+
         # Associates column aliases back to tables and columns
         column_aliases = graph[:column_aliases] = {}
+
         # Associates table alias (the master is never aliased)
         table_aliases = graph[:table_aliases] = {master=>self}
+
         # Keep track of the alias numbers used
         ca_num = graph[:column_alias_num] = Hash.new(0)
+
         # All columns in the master table are never
         # aliased, but are not included if set_graph_aliases
         # has been used.
         if add_columns
           if (select = @opts[:select]) && !select.empty? && !(select.length == 1 && (select.first.is_a?(SQL::ColumnAll)))
-            select = select.each do |sel|
-              column = case sel
-              when Symbol
-                _, c, a = split_symbol(sel)
-                (a || c).to_sym
-              when SQL::Identifier
-                sel.value.to_sym
-              when SQL::QualifiedIdentifier
-                column = sel.column
-                column = column.value if column.is_a?(SQL::Identifier)
-                column.to_sym
-              when SQL::AliasedExpression
-                column = sel.alias
-                column = column.value if column.is_a?(SQL::Identifier)
-                column.to_sym
+            select = select.map do |sel|
+              raise Error, "can't figure out alias to use for graphing for #{sel.inspect}" unless column = _hash_key_symbol(sel)
+              column_aliases[column] = [master, column]
+              if from_selfed
+                # Initial dataset was wrapped in subselect, selected all
+                # columns in the subselect, qualified by the subselect alias.
+                Sequel.qualify(qualifier, Sequel.identifier(column))
               else
-                raise Error, "can't figure out alias to use for graphing for #{sel.inspect}"
+                # Initial dataset not wrapped in a subselect, just make
+                # sure columns are qualified in some way.
+                qualified_expression(sel, qualifier)
               end
-              column_aliases[column] = [master, column]
             end
-            select = qualified_expression(select, qualifier)
           else
             select = columns.map do |column|
               column_aliases[column] = [master, column]
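
For context, an illustrative example (assumed table and column names) of the
user-facing graph behavior the rewritten aliasing code supports: each result
row is split into per-table sub-hashes using the column aliases built above.

    DB[:artists].graph(:albums, :artist_id=>:id).all
    # => [{:artists=>{:id=>1, :name=>'YJM'},
    #      :albums=>{:id=>1, :artist_id=>1, :name=>'RF'}}]
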
diff --git a/lib/sequel/dataset/misc.rb b/lib/sequel/dataset/misc.rb
index 7589b64..8d21083 100644
--- a/lib/sequel/dataset/misc.rb
+++ b/lib/sequel/dataset/misc.rb
@@ -169,6 +169,11 @@ module Sequel
       "#<#{visible_class_name}: #{sql.inspect}>"
     end
     
+    # Whether this dataset is a joined dataset (multiple FROM tables or any JOINs).
+    def joined_dataset?
+      !!((opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join])
+    end
+
     # The alias to use for the row_number column, used when emulating OFFSET
     # support and for eager limit strategies
     def row_number_column
@@ -192,6 +197,17 @@ module Sequel
       end
     end
 
+    # This returns an SQL::Identifier or SQL::AliasedExpression containing an
+    # SQL identifier that represents the unqualified column for the given value.
+    # The given value should be a Symbol, SQL::Identifier, SQL::QualifiedIdentifier,
+    # or SQL::AliasedExpression containing one of those.  In other cases, this
+    # returns nil.
+    def unqualified_column_for(v)
+      unless v.is_a?(String)
+        _unqualified_column_for(v)
+      end
+    end
+
     # Creates a unique table alias that hasn't already been used in the dataset.
     # table_alias can be any type of object accepted by alias_symbol.
     # The symbol returned will be the implicit alias in the argument,
@@ -232,6 +248,27 @@ module Sequel
 
     private
 
+    # Internal recursive version of unqualified_column_for, handling Strings inside
+    # of other objects.
+    def _unqualified_column_for(v)
+      case v
+      when Symbol
+        _, c, a = Sequel.split_symbol(v)
+        c = Sequel.identifier(c)
+        a ? c.as(a) : c
+      when String
+        Sequel.identifier(v)
+      when SQL::Identifier
+        v
+      when SQL::QualifiedIdentifier
+        _unqualified_column_for(v.column)
+      when SQL::AliasedExpression
+        if expr = unqualified_column_for(v.expression)
+          SQL::AliasedExpression.new(expr, v.alias)
+        end
+      end
+    end
+
     # Return the class name for this dataset, but skip anonymous classes
     def visible_class_name
       c = self.class
diff --git a/lib/sequel/dataset/prepared_statements.rb b/lib/sequel/dataset/prepared_statements.rb
index 136e56e..f6f4e67 100644
--- a/lib/sequel/dataset/prepared_statements.rb
+++ b/lib/sequel/dataset/prepared_statements.rb
@@ -86,7 +86,7 @@ module Sequel
         when :first
           clone(:limit=>1).select_sql
         when :insert_select
-          returning.insert_sql(*@prepared_modify_values)
+          insert_select_sql(*@prepared_modify_values)
         when :insert
           insert_sql(*@prepared_modify_values)
         when :update
diff --git a/lib/sequel/dataset/query.rb b/lib/sequel/dataset/query.rb
index 38f16a1..2a09503 100644
--- a/lib/sequel/dataset/query.rb
+++ b/lib/sequel/dataset/query.rb
@@ -23,10 +23,9 @@ module Sequel
     # block from the method call.
     CONDITIONED_JOIN_TYPES = [:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left]
 
-    # These symbols have _join methods created (e.g. natural_join) that
-    # call join_table with the symbol.  They only accept a single table
-    # argument which is passed to join_table, and they raise an error
-    # if called with a block.
+    # These symbols have _join methods created (e.g. natural_join).
+    # They accept a table argument and an options hash, which are passed to
+    # join_table, and they raise an error if called with a block.
     UNCONDITIONED_JOIN_TYPES = [:natural, :natural_left, :natural_right, :natural_full, :cross]
     
     # All methods that return modified datasets with a joined table added.
@@ -387,37 +386,42 @@ module Sequel
     #
     # Takes the following arguments:
     #
-    # * type - The type of join to do (e.g. :inner)
-    # * table - Depends on type:
-    #   * Dataset - a subselect is performed with an alias of tN for some value of N
-    #   * String, Symbol: table
-    # * expr - specifies conditions, depends on type:
-    #   * Hash, Array of two element arrays - Assumes key (1st arg) is column of joined table (unless already
-    #     qualified), and value (2nd arg) is column of the last joined or primary table (or the
-    #     :implicit_qualifier option).
-    #     To specify multiple conditions on a single joined table column, you must use an array.
-    #     Uses a JOIN with an ON clause.
-    #   * Array - If all members of the array are symbols, considers them as columns and 
-    #     uses a JOIN with a USING clause.  Most databases will remove duplicate columns from
-    #     the result set if this is used.
-    #   * nil - If a block is not given, doesn't use ON or USING, so the JOIN should be a NATURAL
-    #     or CROSS join. If a block is given, uses an ON clause based on the block, see below.
-    #   * Everything else - pretty much the same as a using the argument in a call to where,
-    #     so strings are considered literal, symbols specify boolean columns, and Sequel
-    #     expressions can be used. Uses a JOIN with an ON clause.
-    # * options - a hash of options, with any of the following keys:
-    #   * :table_alias - the name of the table's alias when joining, necessary for joining
-    #     to the same table more than once.  No alias is used by default.
-    #   * :implicit_qualifier - The name to use for qualifying implicit conditions.  By default,
-    #     the last joined or primary table is used.
-    #   * :qualify - Can be set to false to not do any implicit qualification.  Can be set
-    #     to :deep to use the Qualifier AST Transformer, which will attempt to qualify
-    #     subexpressions of the expression tree.  Can be set to :symbol to only qualify
-    #     symbols. Defaults to the value of default_join_table_qualification.
-    # * block - The block argument should only be given if a JOIN with an ON clause is used,
-    #   in which case it yields the table alias/name for the table currently being joined,
-    #   the table alias/name for the last joined (or first table), and an array of previous
-    #   SQL::JoinClause. Unlike +where+, this block is not treated as a virtual row block.
+    # type :: The type of join to do (e.g. :inner)
+    # table :: table to join into the current dataset.  Generally one of the following types:
+    #          String, Symbol :: identifier used as table or view name
+    #          Dataset :: a subselect is performed with an alias of tN for some value of N
+    #          SQL::Function :: set returning function
+    #          SQL::AliasedExpression :: already aliased expression.  Uses given alias unless
+    #                                    overridden by the :table_alias option.
+    # expr :: conditions used when joining, depends on type:
+    #         Hash, Array of pairs :: Assumes key (1st arg) is column of joined table (unless already
+    #                                 qualified), and value (2nd arg) is column of the last joined or
+    #                                 primary table (or the :implicit_qualifier option).
+    #                                 To specify multiple conditions on a single joined table column,
+    #                                 you must use an array.  Uses a JOIN with an ON clause.
+    #         Array :: If all members of the array are symbols, considers them as columns and 
+    #                  uses a JOIN with a USING clause.  Most databases will remove duplicate columns from
+    #                  the result set if this is used.
+    #         nil :: If a block is not given, doesn't use ON or USING, so the JOIN should be a NATURAL
+    #                or CROSS join. If a block is given, uses an ON clause based on the block, see below.
+    #         otherwise :: Treats the argument as a filter expression, so strings are considered literal, symbols
+    #                      specify boolean columns, and Sequel expressions can be used. Uses a JOIN with an ON clause.
+    # options :: a hash of options, with the following keys supported:
+    #            :table_alias :: Override the table alias used when joining.  In general you shouldn't use this
+    #                            option, you should provide the appropriate SQL::AliasedExpression as the table
+    #                            argument.
+    #            :implicit_qualifier :: The name to use for qualifying implicit conditions.  By default,
+    #                                   the last joined or primary table is used.
+    #            :reset_implicit_qualifier :: Can be set to false to ignore this join when future joins determine the
+    #                                         qualifier for implicit conditions.
+    #            :qualify :: Can be set to false to not do any implicit qualification.  Can be set
+    #                        to :deep to use the Qualifier AST Transformer, which will attempt to qualify
+    #                        subexpressions of the expression tree.  Can be set to :symbol to only qualify
+    #                        symbols. Defaults to the value of default_join_table_qualification.
+    # block :: The block argument should only be given if a JOIN with an ON clause is used,
+    #          in which case it yields the table alias/name for the table currently being joined,
+    #          the table alias/name for the last joined (or first table), and an array of previous
+    #          SQL::JoinClause. Unlike +where+, this block is not treated as a virtual row block.
     #
     # Examples:
     #
@@ -505,7 +509,8 @@ module Sequel
         SQL::JoinOnClause.new(expr, type, table_expr)
       end
 
-      opts = {:join => (@opts[:join] || []) + [join], :last_joined_table => table_name}
+      opts = {:join => (@opts[:join] || []) + [join]}
+      opts[:last_joined_table] = table_name unless options[:reset_implicit_qualifier] == false
       opts[:num_dataset_sources] = table_alias_num if table_alias_num
       clone(opts)
     end
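
A hedged example (assumed table names) of the new :reset_implicit_qualifier
option: with it set to false, a later join's implicit condition is still
qualified with the earlier table rather than with the join it was passed to.

    DB[:a].join(:b, {:a_id=>:id}, :reset_implicit_qualifier=>false).
           join(:c, :a_id=>:id)
    # ... INNER JOIN b ON (b.a_id = a.id) INNER JOIN c ON (c.a_id = a.id)
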
@@ -514,7 +519,13 @@ module Sequel
       class_eval("def #{jtype}_join(*args, &block); join_table(:#{jtype}, *args, &block) end", __FILE__, __LINE__)
     end
     UNCONDITIONED_JOIN_TYPES.each do |jtype|
-      class_eval("def #{jtype}_join(table); raise(Sequel::Error, '#{jtype}_join does not accept join table blocks') if block_given?; join_table(:#{jtype}, table) end", __FILE__, __LINE__)
+      class_eval(<<-END, __FILE__, __LINE__+1)
+        def #{jtype}_join(table, opts=Sequel::OPTS)
+          raise(Sequel::Error, '#{jtype}_join does not accept join table blocks') if block_given?
+          raise(Sequel::Error, '#{jtype}_join 2nd argument should be an options hash, not conditions') unless opts.is_a?(Hash)
+          join_table(:#{jtype}, table, nil, opts)
+        end
+      END
     end
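
A short example of the expanded signatures generated above for the
unconditioned join types, which now accept an options hash (but still no
conditions or blocks); table names are assumed.

    DB[:items].cross_join(:categories, :table_alias=>:c)
    # SELECT * FROM items CROSS JOIN categories AS c
    DB[:items].natural_join(:categories)
    # SELECT * FROM items NATURAL JOIN categories
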
 
     # Marks this dataset as a lateral dataset.  If used in another dataset's FROM
@@ -683,6 +694,7 @@ module Sequel
     #   DB[:items].returning(nil) # RETURNING NULL
     #   DB[:items].returning(:id, :name) # RETURNING id, name
     def returning(*values)
+      raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert)
       clone(:returning=>values)
     end
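
A brief sketch of the new guard in #returning: on a database whose dataset
does not support RETURNING (for example the sqlite adapter), the error is now
raised when the dataset is built rather than when the query runs.

    DB[:items].returning(:id)
    # Sequel::Error: RETURNING is not supported on sqlite
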
 
diff --git a/lib/sequel/dataset/sql.rb b/lib/sequel/dataset/sql.rb
index 89596ef..d341732 100644
--- a/lib/sequel/dataset/sql.rb
+++ b/lib/sequel/dataset/sql.rb
@@ -280,7 +280,6 @@ module Sequel
     FORMAT_DATE_STANDARD = "DATE '%Y-%m-%d'".freeze
     FORMAT_OFFSET = "%+03i%02i".freeze
     FORMAT_TIMESTAMP_RE = /%[Nz]/.freeze
-    FORMAT_TIMESTAMP_USEC = ".%06d".freeze
     FORMAT_USEC = '%N'.freeze
     FRAME_ALL = "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING".freeze
     FRAME_ROWS = "ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW".freeze
@@ -331,11 +330,12 @@ module Sequel
     USING = ' USING ('.freeze
     UNION_ALL_SELECT = ' UNION ALL SELECT '.freeze
     VALUES = " VALUES ".freeze
-    V190 = '1.9.0'.freeze
     WHERE = " WHERE ".freeze
     WITH_ORDINALITY = " WITH ORDINALITY".freeze
     WITHIN_GROUP = " WITHIN GROUP (ORDER BY ".freeze
 
+    DATETIME_SECFRACTION_ARG = RUBY_VERSION >= '1.9.0' ? 1000000 : 86400000000
+
     [:literal, :quote_identifier, :quote_schema_table].each do |meth|
       class_eval(<<-END, __FILE__, __LINE__ + 1)
         def #{meth}(*args, &block)
@@ -525,11 +525,6 @@ module Sequel
       end
     end
 
-    # REMOVE411
-    def emulated_function_sql_append(sql, f)
-      _function_sql_append(sql, native_function_name(f.f), f.args)
-    end
-
     # Append literalization of function call to SQL string.
     def function_sql_append(sql, f)
       name = f.name
@@ -827,14 +822,6 @@ module Sequel
       sql << PAREN_CLOSE
     end
 
-    # REMOVE411
-    def window_function_sql_append(sql, function, window)
-      Deprecation.deprecate("Dataset#window_function_sql_append", "Please use Sequel::SQL::Function.new(name, *args).over(...) to create an SQL window function")
-      literal_append(sql, function)
-      sql << OVER
-      literal_append(sql, window)
-    end
-
     protected
 
     # Return a from_self dataset if an order or limit is specified, so it works as expected
@@ -845,32 +832,6 @@ module Sequel
     
     private
 
-    # REMOVE411
-    def _function_sql_append(sql, name, args)
-      Deprecation.deprecate("Dataset#emulated_function_sql_append and #_function_sql_append", "Please use Sequel::SQL::Function.new!(name, args, :emulate=>true) to create an emulated SQL function")
-      case name
-      when SQL::Identifier
-        if supports_quoted_function_names?
-          literal_append(sql, name)
-        else
-          sql << name.value.to_s
-        end
-      when SQL::QualifiedIdentifier
-        if supports_quoted_function_names?
-          literal_append(sql, name)
-        else
-          sql << split_qualifiers(name).join(DOT)
-        end
-      else
-        sql << name.to_s
-      end
-      if args.empty?
-        sql << FUNCTION_EMPTY
-      else
-        literal_append(sql, args)
-      end
-    end
-
     # Formats the truncate statement.  Assumes the table given has already been
     # literalized.
     def _truncate_sql(table)
@@ -1061,7 +1022,7 @@ module Sequel
       v2 = db.from_application_timestamp(v)
       fmt = default_timestamp_format.gsub(FORMAT_TIMESTAMP_RE) do |m|
         if m == FORMAT_USEC
-          format_timestamp_usec(v.is_a?(DateTime) ? v.sec_fraction*(RUBY_VERSION < V190 ? 86400000000 : 1000000) : v.usec) if supports_timestamp_usecs?
+          format_timestamp_usec(v.is_a?(DateTime) ? v.sec_fraction*(DATETIME_SECFRACTION_ARG) : v.usec) if supports_timestamp_usecs?
         else
           if supports_timestamp_timezones?
             # Would like to just use %z format, but it doesn't appear to work on Windows
@@ -1082,7 +1043,10 @@ module Sequel
     # Return the SQL timestamp fragment to use for the fractional time part.
     # Should start with the decimal point.  Uses 6 decimal places by default.
     def format_timestamp_usec(usec)
-      sprintf(FORMAT_TIMESTAMP_USEC, usec)
+      unless (ts = timestamp_precision) == 6
+        usec = usec/(10 ** (6 - ts))
+      end
+      sprintf(".%0#{ts}d", usec)
     end
 
     # Append literalization of identifier to SQL string, considering regular strings
@@ -1121,7 +1085,11 @@ module Sequel
 
     def insert_into_sql(sql)
       sql << INTO
-      source_list_append(sql, @opts[:from])
+      if (f = @opts[:from]) && f.length == 1
+        identifier_append(sql, unaliased_identifier(f.first))
+      else
+        source_list_append(sql, f)
+      end
     end
 
     def insert_columns_sql(sql)
@@ -1171,11 +1139,6 @@ module Sequel
       "#{join_type.to_s.gsub(UNDERSCORE, SPACE).upcase} JOIN"
     end
 
-    # Whether this dataset is a joined dataset
-    def joined_dataset?
-     (opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join]
-    end
-
     # Append a literalization of the array to SQL string.
     # Treats as an expression if an array of all two pairs, or as a SQL array otherwise.
     def literal_array_append(sql, v)
@@ -1552,6 +1515,11 @@ module Sequel
       ds.clone(:append_sql=>sql).sql
     end
 
+    # The number of decimal digits of precision to use in timestamps.
+    def timestamp_precision
+      supports_timestamp_usecs? ? 6 : 0
+    end
+
     def update_table_sql(sql)
       sql << SPACE
       source_list_append(sql, @opts[:from])
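
Illustrative arithmetic mirroring format_timestamp_usec above: with a
timestamp_precision of 3 (as now returned by the MSSQL and SQL Anywhere
datasets), 123456 microseconds become a ".123" fragment.

    usec, precision = 123456, 3
    sprintf(".%0#{precision}d", usec / (10 ** (6 - precision)))
    # => ".123"
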
diff --git a/lib/sequel/extensions/dataset_source_alias.rb b/lib/sequel/extensions/dataset_source_alias.rb
new file mode 100644
index 0000000..c7e5d7a
--- /dev/null
+++ b/lib/sequel/extensions/dataset_source_alias.rb
@@ -0,0 +1,90 @@
+# The dataset_source_alias extension changes Sequel's
+# default behavior of automatically aliasing datasets
+# from using t1, t2, etc. to using an alias based on
+# the source of the dataset.  Example:
+#
+#   DB.from(DB.from(:a))
+#   # default: SELECT * FROM (SELECT * FROM a) AS t1
+#   # with extension: SELECT * FROM (SELECT * FROM a) AS a
+#
+# This also works when joining:
+#
+#   DB[:a].join(DB[:b], [:id])
+#   # SELECT * FROM a INNER JOIN (SELECT * FROM b) AS b USING (id)
+#
+# To avoid conflicting aliases, this attempts to alias tables
+# uniquely if it detects a conflict:
+#
+#   DB.from(:a, DB.from(:a))
+#   # SELECT * FROM a, (SELECT * FROM a) AS a_0
+#
+# Note that not all conflicts are correctly detected and handled.
+# You are encouraged to alias your datasets manually instead of
+# relying on the auto-aliasing if there would be a conflict.
+#
+# In the places where Sequel cannot determine the
+# appropriate alias to use for the dataset, it will fall back to
+# the standard t1, t2, etc. aliasing.
+#
+# You can load this extension into specific datasets:
+#
+#   ds = DB[:table]
+#   ds = ds.extension(:dataset_source_alias)
+#
+# Or you can load it into all of a database's datasets, which
+# is probably the desired behavior if you are using this extension:
+#
+#   DB.extension(:dataset_source_alias)
+
+module Sequel
+  class Dataset
+    module DatasetSourceAlias
+      # Preprocess the list of sources and attempt to alias any
+      # datasets in the sources to the first source of the respective
+      # dataset.
+      def from(*source, &block)
+        virtual_row_columns(source, block)
+        table_aliases = []
+        source = source.map do |s|
+          case s
+          when Dataset
+            s = dataset_source_alias_expression(s, table_aliases)
+          when Symbol, String, SQL::AliasedExpression, SQL::Identifier, SQL::QualifiedIdentifier
+            table_aliases << alias_symbol(s)
+          end
+          s
+        end
+        super(*source, &nil)
+      end
+
+      # If a Dataset is given as the table argument, attempt to alias
+      # it to its source.
+      def join_table(type, table, expr=nil, options=OPTS)
+        if table.is_a?(Dataset) && !options[:table_alias]
+          table = dataset_source_alias_expression(table)
+        end
+        super
+      end
+
+      private
+
+      # Attempt to automatically alias the given dataset to its source.
+      # If the dataset cannot be automatically aliased to its source,
+      # return it unchanged.  The table_aliases argument is a list of
+      # already used alias symbols, which will not be used as the alias.
+      def dataset_source_alias_expression(ds, table_aliases=[])
+        base = ds.first_source if ds.opts[:from]
+        case base
+        when Symbol, String, SQL::AliasedExpression, SQL::Identifier, SQL::QualifiedIdentifier
+          aliaz = unused_table_alias(base, table_aliases)
+          table_aliases << aliaz
+          ds.as(aliaz)
+        else
+          ds
+        end
+      end
+    end
+
+    register_extension(:dataset_source_alias, DatasetSourceAlias)
+  end
+end
diff --git a/lib/sequel/extensions/pg_array.rb b/lib/sequel/extensions/pg_array.rb
index dcacf6c..d393ac2 100644
--- a/lib/sequel/extensions/pg_array.rb
+++ b/lib/sequel/extensions/pg_array.rb
@@ -1,10 +1,10 @@
 # The pg_array extension adds support for Sequel to handle
 # PostgreSQL's array types.
 #
-# This extension integrates with Sequel's native postgres adapter, so
-# that when array fields are retrieved, they are parsed and returned
-# as instances of Sequel::Postgres::PGArray.  PGArray is
-# a DelegateClass of Array, so it mostly acts like an array, but not
+# This extension integrates with Sequel's native postgres adapter and
+# the jdbc/postgresql adapter, so that when array fields are retrieved,
+# they are parsed and returned as instances of Sequel::Postgres::PGArray.
+# PGArray is a DelegateClass of Array, so it mostly acts like an array, but not
 # completely (is_a?(Array) is false).  If you want the actual array,
 # you can call PGArray#to_a.  This is done so that Sequel does not
 # treat a PGArray like an Array by default, which would cause issues.
@@ -39,18 +39,21 @@
 # See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
 # for details on using postgres array columns in CREATE/ALTER TABLE statements.
 #
-# If you are not using the native postgres adapter and are using array
-# types as model column values you probably should use the
-# typecast_on_load plugin if the column values are returned as a
-# regular array, and the pg_typecast_on_load plugin if the column
-# values are returned as a string.
+# If you are not using the native postgres or jdbc/postgresql adapter and are using array
+# types as model column values, you probably should use the pg_typecast_on_load plugin
+# if the column values are returned as a string.
 #
 # This extension by default includes handlers for array types for
 # all scalar types that the native postgres adapter handles. It
 # also makes it easy to add support for other array types.  In
 # general, you just need to make sure that the scalar type is
 # handled and has the appropriate converter installed in
-# Sequel::Postgres::PG_TYPES under the appropriate type OID.
+# Sequel::Postgres::PG_TYPES or the Database instance's
+# conversion_procs using the appropriate type OID.  For user-defined
+# types, you can do this via:
+#
+#   DB.conversion_procs[scalar_type_oid] = lambda{|string| ...}
+#
 # Then you can call
 # Sequel::Postgres::PGArray::DatabaseMethods#register_array_type
 # to automatically set up a handler for the array type.  So if you
@@ -63,6 +66,7 @@
 # Sequel::Postgres::PGArray.register.  In this case, you'll have
 # to specify the type oids:
 #
+#   Sequel::Postgres::PG_TYPES[1234] = lambda{|string| ...}
 #   Sequel::Postgres::PGArray.register('foo', :oid=>4321, :scalar_oid=>1234)
 #
 # Both Sequel::Postgres::PGArray::DatabaseMethods#register_array_type
diff --git a/lib/sequel/extensions/pg_enum.rb b/lib/sequel/extensions/pg_enum.rb
new file mode 100644
index 0000000..9aa95b4
--- /dev/null
+++ b/lib/sequel/extensions/pg_enum.rb
@@ -0,0 +1,135 @@
+# The pg_enum extension adds support for Sequel to handle PostgreSQL's enum
+# types.  To use this extension, first load it into your Database instance:
+#
+#   DB.extension :pg_enum
+#
+# It allows creation of enum types using create_enum:
+#
+#   DB.create_enum(:type_name, %w'value1 value2 value3')
+#
+# You can also add values to existing enums via add_enum_value:
+#
+#   DB.add_enum_value(:enum_type_name, 'value4')
+#
+# If you want to drop an enum type, you can use drop_enum:
+#
+#   DB.drop_enum(:type_name)
+#
+# Just like any user-created type, after creating the type, you
+# can create tables that have a column of that type:
+#
+#   DB.create_table(:table_name) do
+#     enum_type_name :column_name
+#   end
+#
+# When parsing the schema, enum types are recognized, and the available
+# values are returned in the schema hash:
+#
+#   DB.schema(:table_name)
+#   [[:column_name, {:type=>:enum, :enum_values=>['value1', 'value2']}]]
+#
+# If the pg_array extension is used, arrays of enums are returned as a
+# PGArray:
+#
+#   DB.create_table(:table_name) do
+#     column :column_name, 'enum_type_name[]'
+#   end
+#   DB[:table_name].get(:column_name)
+#   # ['value1', 'value2']
+#
+# Finally, typecasting for enums is set up to cast to strings, which
+# allows you to use symbols in your model code.  Similarly, you can provide
+# the enum values as symbols when creating enums using create_enum or
+# add_enum_value.
+
+module Sequel
+  module Postgres
+    # Methods enabling Database object integration with enum types.
+    module EnumDatabaseMethods
+      # Parse the available enum values when loading this extension into
+      # your database.
+      def self.extended(db)
+        db.send(:parse_enum_labels)
+      end
+
+      # Run the SQL to add the given value to the existing enum type.
+      # Options:
+      # :after :: Add the new value after this existing value.
+      # :before :: Add the new value before this existing value.
+      # :if_not_exists :: Do not raise an error if the value already exists in the enum.
+      def add_enum_value(enum, value, opts=OPTS)
+        sql = "ALTER TYPE #{quote_schema_table(enum)} ADD VALUE#{' IF NOT EXISTS' if opts[:if_not_exists]} #{literal(value.to_s)}"
+        if v = opts[:before]
+          sql << " BEFORE #{literal(v.to_s)}"
+        elsif v = opts[:after]
+          sql << " AFTER #{literal(v.to_s)}"
+        end
+        run sql
+        parse_enum_labels
+        nil
+      end
+
+      # Run the SQL to create an enum type with the given name and values.
+      def create_enum(enum, values)
+        sql = "CREATE TYPE #{quote_schema_table(enum)} AS ENUM (#{values.map{|v| literal(v.to_s)}.join(', ')})"
+        run sql
+        parse_enum_labels
+        nil
+      end
+
+      # Run the SQL to drop the enum type with the given name.
+      # Options:
+      # :if_exists :: Do not raise an error if the enum type does not exist
+      # :cascade :: Also drop other objects that depend on the enum type
+      def drop_enum(enum, opts=OPTS)
+        sql = "DROP TYPE#{' IF EXISTS' if opts[:if_exists]} #{quote_schema_table(enum)}#{' CASCADE' if opts[:cascade]}"
+        run sql
+        parse_enum_labels
+        nil
+      end
+
+      private
+
+      # Parse the pg_enum table to get enum values, and
+      # the pg_type table to get names and array oids for
+      # enums.
+      def parse_enum_labels
+        @enum_labels = metadata_dataset.from(:pg_enum).
+          order(:enumtypid, :enumsortorder).
+          select_hash_groups(Sequel.cast(:enumtypid, Integer).as(:v), :enumlabel)
+
+        if respond_to?(:register_array_type)
+          array_types = metadata_dataset.
+            from(:pg_type).
+            where(:oid=>@enum_labels.keys).
+            exclude(:typarray=>0).
+            select_map([:typname, Sequel.cast(:typarray, Integer).as(:v)])
+
+          existing_oids = conversion_procs.keys
+          array_types.each do |name, oid|
+            next if existing_oids.include?(oid)
+            register_array_type(name, :oid=>oid)
+          end
+        end
+      end
+
+      # For schema entries that are enums, set the type to
+      # :enum and add a :enum_values entry with the enum values.
+      def schema_parse_table(*)
+        super.each do |_, s|
+          if values = @enum_labels[s[:oid]]
+            s[:type] = :enum
+            s[:enum_values] = values
+          end
+        end
+      end
+
+      # Typecast the given value to a string.
+      def typecast_value_enum(value)
+        value.to_s
+      end
+    end
+  end
+
+  Database.register_extension(:pg_enum, Postgres::EnumDatabaseMethods)
+end
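
Putting the methods above together, a short usage sketch (the enum, table,
column, and model names are hypothetical):

    DB.extension :pg_enum
    DB.create_enum(:mood, %w'sad ok happy')
    DB.add_enum_value(:mood, 'ecstatic', :after=>'happy', :if_not_exists=>true)
    DB.create_table(:people) do
      primary_key :id
      mood :current_mood
    end
    # model typecasting converts symbols to strings for enum columns
    Person = Sequel::Model(:people)
    Person.create(:current_mood=>:happy)
    DB.drop_enum(:mood, :cascade=>true)   # :cascade also drops dependent columns
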
diff --git a/lib/sequel/extensions/pg_hstore.rb b/lib/sequel/extensions/pg_hstore.rb
index 92269a4..b7050a9 100644
--- a/lib/sequel/extensions/pg_hstore.rb
+++ b/lib/sequel/extensions/pg_hstore.rb
@@ -3,8 +3,8 @@
 # the hstore type stores an arbitrary key-value table, where the keys
 # are strings and the values are strings or NULL.
 #
-# This extension integrates with Sequel's native postgres adapter, so
-# that when hstore fields are retrieved, they are parsed and returned
+# This extension integrates with Sequel's native postgres and jdbc/postgresql
+# adapters, so that when hstore fields are retrieved, they are parsed and returned
 # as instances of Sequel::Postgres::HStore.  HStore is
 # a DelegateClass of Hash, so it mostly acts like a hash, but not
 # completely (is_a?(Hash) is false).  If you want the actual hash,
@@ -78,11 +78,9 @@
 # See the {schema modification guide}[rdoc-ref:doc/schema_modification.rdoc]
 # for details on using hstore columns in CREATE/ALTER TABLE statements.
 #
-# If you are not using the native postgres adapter and are using hstore
+# If you are not using the native postgres or jdbc/postgresql adapters and are using hstore
 # types as model column values you probably should use the
-# typecast_on_load plugin if the column values are returned as a
-# hash, and the pg_typecast_on_load plugin if the column
-# values are returned as a string.
+# pg_typecast_on_load plugin if the column values are returned as a string.
 #
 # This extension requires the delegate and strscan libraries.
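
As a small sketch (hypothetical posts table with an hstore tags column), a
plain Hash can be wrapped with Sequel.hstore for storage, and retrieved values
come back as HStore instances:

    DB.extension :pg_hstore
    DB[:posts].insert(:tags=>Sequel.hstore('genre'=>'rock', 'year'=>'1979'))
    tags = DB[:posts].get(:tags)   # => Sequel::Postgres::HStore instance
    tags.to_hash                   # => {'genre'=>'rock', 'year'=>'1979'}
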
 
diff --git a/lib/sequel/extensions/pg_inet.rb b/lib/sequel/extensions/pg_inet.rb
index cf0cc99..8e372c7 100644
--- a/lib/sequel/extensions/pg_inet.rb
+++ b/lib/sequel/extensions/pg_inet.rb
@@ -1,16 +1,15 @@
 # The pg_inet extension adds support for Sequel to handle
 # PostgreSQL's inet and cidr types using ruby's IPAddr class.
 #
-# This extension integrates with Sequel's native postgres adapter, so
-# that when inet/cidr fields are retrieved, they are returned as
+# This extension integrates with Sequel's native postgres and jdbc/postgresql
+# adapters, so that when inet/cidr fields are retrieved, they are returned as
 # IPAddr instances
 #
-# After loading the extension, you should extend your dataset
-# with a module so that it correctly handles the inet/cidr type:
+# To use this extension, load it into your database:
 #
 #   DB.extension :pg_inet
 #
-# If you are not using the native postgres adapter and are using inet/cidr
+# If you are not using the native postgres or jdbc/postgresql adapters and are using inet/cidr
 # types as model column values you probably should use the
 # pg_typecast_on_load plugin if the column values are returned as a string.
 #
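
A brief sketch (hypothetical hosts table with an inet addr column) of
round-tripping an IPAddr value:

    require 'ipaddr'
    DB.extension :pg_inet
    DB[:hosts].insert(:addr=>IPAddr.new('10.0.0.0/8'))
    DB[:hosts].get(:addr)   # => #<IPAddr: IPv4:10.0.0.0/255.0.0.0>
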
diff --git a/lib/sequel/extensions/pg_interval.rb b/lib/sequel/extensions/pg_interval.rb
index 155ee11..8e2d1fb 100644
--- a/lib/sequel/extensions/pg_interval.rb
+++ b/lib/sequel/extensions/pg_interval.rb
@@ -1,7 +1,7 @@
 # The pg_interval extension adds support for PostgreSQL's interval type.
 #
-# This extension integrates with Sequel's native postgres adapter, so
-# that when interval type values are retrieved, they are parsed and returned
+# This extension integrates with Sequel's native postgres and jdbc/postgresql
+# adapters, so that when interval type values are retrieved, they are parsed and returned
 # as instances of ActiveSupport::Duration.
 #
 # In addition to the parser, this extension adds literalizers for
@@ -15,7 +15,7 @@
 #
 #   DB.extension :pg_interval
 #
-# If you are not using the native postgres adapter and are using interval
+# If you are not using the native postgres or jdbc/postgresql adapters and are using interval
 # types as model column values you probably should use the
 # pg_typecast_on_load plugin if the column values are returned as a string.
 #
@@ -67,7 +67,7 @@ module Sequel
       # Creates callable objects that convert strings into ActiveSupport::Duration instances.
       class Parser
         # Regexp that parses the full range of PostgreSQL interval type output.
-        PARSER = /\A([+-]?\d+ years?\s?)?([+-]?\d+ mons?\s?)?([+-]?\d+ days?\s?)?(?:(?:([+-])?(\d\d):(\d\d):(\d\d(\.\d+)?))|([+-]?\d+ hours?\s?)?([+-]?\d+ mins?\s?)?([+-]?\d+(\.\d+)? secs?\s?)?)?\z/o
+        PARSER = /\A([+-]?\d+ years?\s?)?([+-]?\d+ mons?\s?)?([+-]?\d+ days?\s?)?(?:(?:([+-])?(\d{2,10}):(\d\d):(\d\d(\.\d+)?))|([+-]?\d+ hours?\s?)?([+-]?\d+ mins?\s?)?([+-]?\d+(\.\d+)? secs?\s?)?)?\z/o
 
         # Parse the interval input string into an ActiveSupport::Duration instance.
         def call(string)
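
The regexp change above widens the hours capture from \d\d to \d{2,10}, so
output for intervals of 100 hours or more now parses; a standalone sketch of
just that component:

    hours_part = /([+-])?(\d{2,10}):(\d\d):(\d\d(\.\d+)?)/
    '120:30:00'.match(hours_part)[2]   # => "120"
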
diff --git a/lib/sequel/extensions/pg_json.rb b/lib/sequel/extensions/pg_json.rb
index ad052b7..eea6f05 100644
--- a/lib/sequel/extensions/pg_json.rb
+++ b/lib/sequel/extensions/pg_json.rb
@@ -1,15 +1,16 @@
 # The pg_json extension adds support for Sequel to handle
-# PostgreSQL's json type.  It is slightly more strict than the
-# PostgreSQL json type in that the object returned must be an
+# PostgreSQL's json and jsonb types.  It is slightly more strict than the
+# PostgreSQL json types in that the object returned should be an
 # array or object (PostgreSQL's json type considers plain numbers
-# and strings as valid).  This is because Sequel relies completely
-# on the ruby JSON library for parsing, and ruby's JSON library
-# does not accept the values.
+# as well as strings, true, false, and null as valid).  Sequel will work with
+# PostgreSQL json values that are not arrays or objects, but support
+# is fairly limited and the values do not roundtrip.
 #
 # This extension integrates with Sequel's native postgres adapter, so
 # that when json fields are retrieved, they are parsed and returned
 # as instances of Sequel::Postgres::JSONArray or
-# Sequel::Postgres::JSONHash.  JSONArray and JSONHash are
+# Sequel::Postgres::JSONHash (or JSONBArray or JSONBHash for jsonb
+# columns).  JSONArray and JSONHash are
 # DelegateClasses of Array and Hash, so they mostly act the same, but
 # not completely (json_array.is_a?(Array) is false).  If you want
 # the actual array for a JSONArray, call JSONArray#to_a.  If you want
@@ -20,15 +21,15 @@
 # To turn an existing Array or Hash into a JSONArray or JSONHash,
 # use Sequel.pg_json:
 #
-#   Sequel.pg_json(array)
-#   Sequel.pg_json(hash)
+#   Sequel.pg_json(array) # or Sequel.pg_jsonb(array) for jsonb type
+#   Sequel.pg_json(hash)  # or Sequel.pg_jsonb(hash) for jsonb type
 #
 # If you have loaded the {core_extensions extension}[rdoc-ref:doc/core_extensions.rdoc],
 # or you have loaded the core_refinements extension
 # and have activated refinements for the file, you can also use Array#pg_json and Hash#pg_json:
 #
-#   array.pg_json
-#   hash.pg_json
+#   array.pg_json # or array.pg_jsonb for jsonb type
+#   hash.pg_json  # or hash.pg_jsonb for jsonb type
 #
 # So if you want to insert an array or hash into an json database column:
 #
@@ -154,8 +155,9 @@ module Sequel
       end
 
       # Parse the given string as json, returning either a JSONArray
-      # or JSONHash instance, and raising an error if the JSON
-      # parsing does not yield an array or hash.
+      # or JSONHash instance (or JSONBArray or JSONBHash instance if jsonb
+      # argument is true), or a String, Numeric, true, false, or nil
+      # if the json library used supports that.
       def self.parse_json(s, jsonb=false)
         begin
           value = Sequel.parse_json(s)
@@ -168,6 +170,8 @@ module Sequel
           (jsonb ? JSONBArray : JSONArray).new(value)
         when Hash 
           (jsonb ? JSONBHash : JSONHash).new(value)
+        when String, Numeric, true, false, nil
+          value
         else
           raise Sequel::InvalidValue, "unhandled json value: #{value.inspect} (from #{s.inspect})"
         end
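
A sketch of the relaxed behavior (module name per the pg_json extension;
whether scalar JSON parses depends on the json library in use, as noted above):

    m = Sequel::Postgres::JSONDatabaseMethods
    m.parse_json('[1, 2, 3]')        # => Sequel::Postgres::JSONArray instance
    m.parse_json('{"a": 1}', true)   # => Sequel::Postgres::JSONBHash instance
    m.parse_json('1')                # => 1, if the json library accepts scalars
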
diff --git a/lib/sequel/extensions/pg_range.rb b/lib/sequel/extensions/pg_range.rb
index 34a763a..d049be3 100644
--- a/lib/sequel/extensions/pg_range.rb
+++ b/lib/sequel/extensions/pg_range.rb
@@ -6,7 +6,7 @@
 # unbounded beginnings and endings (which ruby's range does not
 # support).
 #
-# This extension integrates with Sequel's native postgres adapter, so
+# This extension integrates with Sequel's native postgres and jdbc/postgresql adapters, so
 # that when range type values are retrieved, they are parsed and returned
 # as instances of Sequel::Postgres::PGRange.  PGRange mostly acts
 # like a Range, but it's not a Range as not all PostgreSQL range
@@ -45,7 +45,7 @@
 #
 #   DB.extension :pg_range
 #
-# If you are not using the native postgres adapter and are using range
+# If you are not using the native postgres or jdbc/postgresql adapters and are using range
 # types as model column values you probably should use the
 # pg_typecast_on_load plugin if the column values are returned as a string.
 #
@@ -395,7 +395,9 @@ module Sequel
         end
       end
 
-      # Whether this range is empty (has no points).
+      # Whether this range is empty (has no points).  Note that for manually created ranges
+      # (ones not retrieved from the database), this will only be true if the range
+      # was created using the :empty option.
       def empty?
         @empty
       end
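
So, as a quick sketch (assuming the constructor options described by the
extension, :empty and :exclude_end):

    # created with the :empty option, so empty? is true
    Sequel::Postgres::PGRange.new(nil, nil, :empty=>true).empty?     # => true
    # contains no points, but not created with :empty, so empty? is false
    Sequel::Postgres::PGRange.new(1, 1, :exclude_end=>true).empty?   # => false
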
diff --git a/lib/sequel/extensions/pg_row.rb b/lib/sequel/extensions/pg_row.rb
index 94daf54..23f37c8 100644
--- a/lib/sequel/extensions/pg_row.rb
+++ b/lib/sequel/extensions/pg_row.rb
@@ -1,7 +1,7 @@
 # The pg_row extension adds support for Sequel to handle
 # PostgreSQL's row-valued/composite types.
 #
-# This extension integrates with Sequel's native postgres adapter, so
+# This extension integrates with Sequel's native postgres and jdbc/postgresql adapters, so
 # that when composite fields are retrieved, they are parsed and returned
 # as instances of Sequel::Postgres::PGRow::(HashRow|ArrayRow), or
 # optionally a custom type.  HashRow and ArrayRow are DelegateClasses of
@@ -74,7 +74,7 @@
 #   DB.conversion_procs.select{|k,v| v.is_a?(Sequel::Postgres::PGRow::Parser) && \
 #     v.converter && (v.converter.name.nil? || v.converter.name == '') }.map{|k,v| v}
 # 
-# If you are not using the native postgres adapter and are using composite types
+# If you are not using the native postgres or jdbc/postgresql adapters and are using composite types
 # types as model column values you probably should use the
 # pg_typecast_on_load plugin if the column values are returned as a string.
 #
diff --git a/lib/sequel/extensions/pg_static_cache_updater.rb b/lib/sequel/extensions/pg_static_cache_updater.rb
index 1a0a244..fee77ba 100644
--- a/lib/sequel/extensions/pg_static_cache_updater.rb
+++ b/lib/sequel/extensions/pg_static_cache_updater.rb
@@ -98,15 +98,17 @@ SQL
       end
 
       # Listen on the notification channel for changes to any of tables for
-      # the models given. If notified about a change to one of the tables,
+      # the models given in a new thread. If notified about a change to one of the tables,
       # reload the cache for the related model.  Options given are also
       # passed to Database#listen.
       #
-      # Note that this implementation does not currently support model
+      # Note that this implementation does not currently support multiple
       # models that use the same underlying table.
       #
       # Options:
       # :channel_name :: Override the channel name to use.
+      # :before_thread_exit :: An object that responds to +call+ that is called before the 
+      #                        created thread exits.
       def listen_for_static_cache_updates(models, opts=OPTS)
         raise Error, "this database object does not respond to listen, use the postgres adapter with the pg driver" unless respond_to?(:listen)
         models = [models] unless models.is_a?(Array)
@@ -119,10 +121,14 @@ SQL
         end
 
         Thread.new do
-          listen(opts[:channel_name]||default_static_cache_update_name, {:loop=>true}.merge(opts)) do |_, _, oid|
-            if model = oid_map[oid.to_i]
-              model.send(:load_cache)
+          begin
+            listen(opts[:channel_name]||default_static_cache_update_name, {:loop=>true}.merge(opts)) do |_, _, oid|
+              if model = oid_map[oid.to_i]
+                model.send(:load_cache)
+              end
             end
+          ensure
+            opts[:before_thread_exit].call if opts[:before_thread_exit]
           end
         end
       end
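
A usage sketch with the new option (model names hypothetical; requires the
postgres adapter with the pg driver, as the error message above notes):

    DB.extension :pg_static_cache_updater
    DB.listen_for_static_cache_updates([Color, Size],
      :channel_name=>:static_cache_updates,
      :before_thread_exit=>lambda{puts 'static cache listener exiting'})
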
diff --git a/lib/sequel/extensions/round_timestamps.rb b/lib/sequel/extensions/round_timestamps.rb
new file mode 100644
index 0000000..fcd2240
--- /dev/null
+++ b/lib/sequel/extensions/round_timestamps.rb
@@ -0,0 +1,52 @@
+# The round_timestamps extension will automatically round timestamp
+# values to the database's supported level of precision before literalizing
+# them.
+#
+# For example, if the database supports microsecond precision, and you give
+# it a Time value with greater than microsecond precision, it will round it
+# appropriately:
+#
+#   Time.at(1405341161.917999982833862)
+#   # default: 2014-07-14 14:32:41.917999
+#   # with extension: 2014-07-14 14:32:41.918000
+#
+# The round_timestamps extension correctly deals with databases that support
+# millisecond or second precision.  In addition to handling Time values, it
+# also handles DateTime values and Sequel::SQLTime values (for the TIME type).
+#
+# To round timestamps for a single dataset:
+#
+#   ds = ds.extension(:round_timestamps)
+#
+# To round timestamps for all datasets on a single database:
+#
+#   DB.extension(:round_timestamps)
+
+unless RUBY_VERSION >= '1.9'
+  # :nocov:
+  raise LoadError, 'the round_timestamps extension only works on ruby 1.9+'
+  # :nocov:
+end
+
+module Sequel
+  class Dataset
+    module RoundTimestamps
+      # Round DateTime values before literalizing
+      def literal_datetime(v)
+        super(v + Rational(5, 10**timestamp_precision)/864000)
+      end
+
+      # Round Sequel::SQLTime values before literalizing
+      def literal_sqltime(v)
+        super(v.round(timestamp_precision))
+      end
+
+      # Round Time values before literalizing
+      def literal_time(v)
+        super(v.round(timestamp_precision))
+      end
+    end
+
+    register_extension(:round_timestamps, RoundTimestamps)
+  end
+end
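
The rounding itself is ordinary Ruby rounding at the database's precision; for
example, assuming microsecond precision (a timestamp_precision of 6):

    t = Time.at(1405341161.917999982833862)
    t.round(6)   # => ... 14:32:41.918000 (microsecond precision)
    t.round(3)   # => ... 14:32:41.918    (millisecond precision)
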
diff --git a/lib/sequel/model.rb b/lib/sequel/model.rb
index 6797c8c..0731ceb 100644
--- a/lib/sequel/model.rb
+++ b/lib/sequel/model.rb
@@ -35,7 +35,7 @@ module Sequel
   #     dataset # => DB1[:comments]
   #   end
   def self.Model(source)
-    if cache_anonymous_models && (klass = Sequel.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source]})
+    if cache_anonymous_models && (klass = Model::ANONYMOUS_MODEL_CLASSES_MUTEX.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source]})
       return klass
     end
     klass = if source.is_a?(Database)
@@ -45,7 +45,7 @@ module Sequel
     else
       Class.new(Model).set_dataset(source)
     end
-    Sequel.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source] = klass} if cache_anonymous_models
+    Model::ANONYMOUS_MODEL_CLASSES_MUTEX.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source] = klass} if cache_anonymous_models
     klass
   end
 
@@ -78,6 +78,9 @@ module Sequel
     # of classes when dealing with code reloading.
     ANONYMOUS_MODEL_CLASSES = {}
 
+    # Mutex protecting access to ANONYMOUS_MODEL_CLASSES
+    ANONYMOUS_MODEL_CLASSES_MUTEX = Mutex.new
+
     # Class methods added to model that call the method of the same name on the dataset
     DATASET_METHODS = (Dataset::ACTION_METHODS + Dataset::QUERY_METHODS +
       [:each_server]) - [:and, :or, :[], :columns, :columns!, :delete, :update, :add_graph_aliases, :first, :first!]
diff --git a/lib/sequel/model/associations.rb b/lib/sequel/model/associations.rb
index cc92c42..2b6e5d1 100644
--- a/lib/sequel/model/associations.rb
+++ b/lib/sequel/model/associations.rb
@@ -248,7 +248,6 @@ module Sequel
           elsif strategy == :union
             objects = []
             ds = associated_dataset
-            ds = self[:eager_block].call(ds) if self[:eager_block]
             loader = union_eager_loader
             joiner = " UNION ALL "
             eo[:id_map].keys.each_slice(subqueries_per_union).each do |slice|
@@ -695,6 +694,7 @@ module Sequel
         def union_eager_loader
           cached_fetch(:union_eager_loader) do
             Sequel::Dataset::PlaceholderLiteralizer.loader(associated_dataset) do |pl, ds|
+              ds = self[:eager_block].call(ds) if self[:eager_block]
               keys = predicate_keys
               ds = ds.where(keys.map{pl.arg}.zip(keys))
               if eager_loading_use_associated_key?
@@ -1206,7 +1206,7 @@ module Sequel
         
         # The columns to select when loading the association, associated_class.table_name.* by default.
         def select
-         cached_fetch(:select){Sequel::SQL::ColumnAll.new(associated_class.table_name)}
+          cached_fetch(:select){default_select}
         end
 
         private
@@ -1215,6 +1215,17 @@ module Sequel
           super.inner_join(self[:join_table], self[:right_keys].zip(right_primary_keys), :qualify=>:deep)
         end
 
+        # The default selection for associations that require joins.  These do not use the default
+        # model selection unless all entries in the select are explicitly qualified identifiers, as
+        # otherwise it can include unqualified columns which would be made ambiguous by joining.
+        def default_select
+          if (sel = associated_class.dataset.opts[:select]) && sel.all?{|c| selection_is_qualified?(c)}
+            sel
+          else
+            Sequel::SQL::ColumnAll.new(associated_class.table_name)
+          end
+        end
+
         def filter_by_associations_conditions_associated_keys
           qualify(join_table_alias, self[:left_keys])
         end
@@ -1252,6 +1263,21 @@ module Sequel
           :many_to_many
         end
 
+        # Whether the given expression represents a qualified identifier.  Used to determine if it is
+        # OK to use directly when joining.
+        def selection_is_qualified?(c)
+          case c
+          when Symbol
+            Sequel.split_symbol(c)[0]
+          when Sequel::SQL::QualifiedIdentifier
+            true
+          when Sequel::SQL::AliasedExpression
+            selection_is_qualified?(c.expression)
+          else
+            false
+          end
+        end
+
         # Split the join table into source and alias parts.
         def split_join_table_alias
           associated_class.dataset.split_alias(self[:join_table])
@@ -1478,8 +1504,8 @@ module Sequel
         #                the current association's key(s).  Set to nil to not use a reciprocal.
         # :remover :: Proc used to define the private _remove_* method for doing the database work
         #             to remove the association between the given object and the current object (*_to_many assocations).
-        # :select :: the columns to select.  Defaults to the associated class's
-        #            table_name.* in a many_to_many association, which means it doesn't include the attributes from the
+        # :select :: the columns to select.  Defaults to the associated class's table_name.* in an association
+        #            that uses joins, which means it doesn't include the attributes from the
         #            join table.  If you want to include the join table attributes, you can
         #            use this option, but beware that the join table attributes can clash with
         #            attributes from the model table, so you should alias any attributes that have
diff --git a/lib/sequel/model/base.rb b/lib/sequel/model/base.rb
index c3a038d..5b85181 100644
--- a/lib/sequel/model/base.rb
+++ b/lib/sequel/model/base.rb
@@ -900,6 +900,14 @@ module Sequel
         schema_array = check_non_connection_error{db.schema(dataset, :reload=>reload)} if db.supports_schema_parsing?
         if schema_array
           schema_array.each{|k,v| schema_hash[k] = v}
+
+          # Set the primary key(s) based on the schema information,
+          # if the schema information includes primary key information
+          if schema_array.all?{|k,v| v.has_key?(:primary_key)}
+            pks = schema_array.collect{|k,v| k if v[:primary_key]}.compact
+            pks.length > 0 ? set_primary_key(pks) : no_primary_key
+          end
+
           if (select = ds_opts[:select]) && !(select.length == 1 && select.first.is_a?(SQL::ColumnAll))
             # We don't remove the columns from the schema_hash,
             # as it's possible they will be used for typecasting
@@ -913,12 +921,6 @@ module Sequel
             # returned by the schema.
             cols = schema_array.collect{|k,v| k}
             set_columns(cols)
-            # Set the primary key(s) based on the schema information,
-            # if the schema information includes primary key information
-            if schema_array.all?{|k,v| v.has_key?(:primary_key)}
-              pks = schema_array.collect{|k,v| k if v[:primary_key]}.compact
-              pks.length > 0 ? set_primary_key(pks) : no_primary_key
-            end
             # Also set the columns for the dataset, so the dataset
             # doesn't have to do a query to get them.
             dataset.instance_variable_set(:@columns, cols)
@@ -1036,6 +1038,8 @@ module Sequel
           ds.literal_append(sql, pk)
           ds.fetch_rows(sql){|r| return ds.row_proc.call(r)}
           nil
+        elsif dataset.joined_dataset?
+          first_where(qualified_primary_key_hash(pk))
         else
           first_where(primary_key_hash(pk))
         end
@@ -1147,7 +1151,7 @@ module Sequel
         changed_columns.clear 
         yield self if block_given?
       end
-      
+
       # Returns value of the column's attribute.
       #
       #   Artist[1][:id] #=> 1
@@ -1216,14 +1220,6 @@ module Sequel
         @changed_columns ||= []
       end
   
-      # Similar to Model#dup, but copies frozen status to returned object
-      # if current object is frozen.
-      def clone
-        o = dup
-        o.freeze if frozen?
-        o
-      end
-
       # Deletes and returns +self+.  Does not run destroy hooks.
       # Look into using +destroy+ instead.
       #
@@ -1249,18 +1245,6 @@ module Sequel
         checked_save_failure(opts){checked_transaction(opts){_destroy(opts)}}
       end
 
-      # Produce a shallow copy of the object, similar to Object#dup.
-      def dup
-        s = self
-        super.instance_eval do
-          @values = s.values.dup
-          @changed_columns = s.changed_columns.dup
-          @errors = s.errors.dup
-          @this = s.this.dup if !new? && model.primary_key
-          self
-        end
-      end
-  
       # Iterates through all of the current values using each.
       #
       #  Album[1].each{|k, v| puts "#{k} => #{v}"}
@@ -1511,6 +1495,7 @@ module Sequel
       def save(opts=OPTS)
         raise Sequel::Error, "can't save frozen object" if frozen?
         set_server(opts[:server]) if opts[:server] 
+        _before_validation
         if opts[:validate] != false
           unless checked_save_failure(opts){_valid?(true, opts)}
             raise(ValidationFailed.new(self)) if raise_on_failure?(opts)
@@ -1642,7 +1627,16 @@ module Sequel
       #   Artist[1].this
       #   # SELECT * FROM artists WHERE (id = 1) LIMIT 1
       def this
-        @this ||= use_server(model.instance_dataset.filter(pk_hash))
+        return @this if @this
+        raise Error, "No dataset for model #{model}" unless ds = model.instance_dataset
+
+        cond = if ds.joined_dataset?
+          model.qualified_primary_key_hash(pk)
+        else
+          pk_hash
+        end
+
+        @this = use_server(ds.where(cond))
       end
       
       # Runs #set with the passed hash and then runs save_changes.
@@ -1701,11 +1695,21 @@ module Sequel
       #   artist(:name=>'Invalid').valid? # => false
       #   artist.errors.full_messages # => ['name cannot be Invalid']
       def valid?(opts = OPTS)
+        _before_validation
         _valid?(false, opts)
       end
 
       private
       
+      # Run code before any validation is done, but also run it before saving
+      # even if validation is skipped.  This is a private hook.  It exists so that
+      # plugins can set values automatically before validation (as the values
+      # need to be validated), but that should be set even if validation is skipped.
+      # Unlike the regular before_validation hook, we do not skip the save/validation
+      # if this returns false.
+      def _before_validation
+      end
+
       # Do the deletion of the object's dataset, and check that the row
       # was actually deleted.
       def _delete
@@ -1761,7 +1765,7 @@ module Sequel
       # the record should be refreshed from the database.
       def _insert
         ds = _insert_dataset
-        if !ds.opts[:select] and ds.supports_insert_select? and h = _insert_select_raw(ds)
+        if _use_insert_select?(ds) && (h = _insert_select_raw(ds))
           _save_set_values(h)
           nil
         else
@@ -1924,6 +1928,11 @@ module Sequel
         _update_dataset.update(columns)
       end
 
+      # Whether to use insert_select when inserting a new row.
+      def _use_insert_select?(ds)
+        (!ds.opts[:select] || ds.opts[:returning]) && ds.supports_insert_select? 
+      end
+
       # Internal validation method.  If +raise_errors+ is +true+, hook
       # failures will be raised as HookFailure exceptions.  If it is
       # +false+, +false+ will be returned instead.
@@ -1990,6 +1999,36 @@ module Sequel
         Errors
       end
 
+      if RUBY_VERSION >= '1.9'
+        # Clone constructor -- freeze internal data structures if the original's
+        # are frozen.
+        def initialize_clone(other)
+          super
+          freeze if other.frozen?
+          self
+        end
+      else
+        # :nocov:
+        # Ruby 1.8 doesn't support initialize_clone, so override clone to dup and freeze. 
+        def clone
+          o = dup
+          o.freeze if frozen?
+          o
+        end
+        public :clone
+        # :nocov:
+      end
+
+      # Copy constructor -- Duplicate internal data structures.
+      def initialize_copy(other)
+        super
+        @values = @values.dup
+        @changed_columns = @changed_columns.dup if @changed_columns
+        @errors = @errors.dup if @errors
+        @this = @this.dup if @this
+        self
+      end
+
       # Set the columns with the given hash.  By default, the same as +set+, but
       # exists so it can be overridden.  This is called only for new records, before
       # changed_columns is cleared.
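
A sketch of how model code or a plugin might use the new private hook (the
column here is hypothetical); the class_table_inheritance change below follows
the same pattern:

    class Invoice < Sequel::Model
      private

      # Runs before validation, and also before saves that skip validation,
      # unlike the public before_validation hook.
      def _before_validation
        self.status ||= 'draft'
        super
      end
    end
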
diff --git a/lib/sequel/plugins/auto_validations.rb b/lib/sequel/plugins/auto_validations.rb
index d40dc9d..0de89bd 100644
--- a/lib/sequel/plugins/auto_validations.rb
+++ b/lib/sequel/plugins/auto_validations.rb
@@ -1,13 +1,14 @@
 module Sequel
   module Plugins
-    # The auto_validations plugin automatically sets up three types of validations
+    # The auto_validations plugin automatically sets up the following types of validations
     # for your model columns:
     #
     # 1. type validations for all columns
     # 2. not_null validations on NOT NULL columns (optionally, presence validations)
     # 3. unique validations on columns or sets of columns with unique indexes
+    # 4. max length validations on string columns
     #
-    # To determine the columns to use for the not_null validations and the types for the type validations,
+    # To determine the columns to use for the type/not_null/max_length validations,
     # the plugin looks at the database schema for the model's table.  To determine
     # the unique validations, Sequel looks at the indexes on the table.  In order
     # for this plugin to be fully functional, the underlying database adapter needs
@@ -49,6 +50,7 @@ module Sequel
           @auto_validate_presence = false
           @auto_validate_not_null_columns = []
           @auto_validate_explicit_not_null_columns = []
+          @auto_validate_max_length_columns = []
           @auto_validate_unique_columns = []
           @auto_validate_types = true
         end
@@ -71,10 +73,14 @@ module Sequel
         # The columns with automatic not_null validations for columns present in the values.
         attr_reader :auto_validate_explicit_not_null_columns
 
+        # The columns or sets of columns with automatic max_length validations, as an array of
+        # pairs, with the first entry being the column name and second entry being the maximum length.
+        attr_reader :auto_validate_max_length_columns
+
         # The columns or sets of columns with automatic unique validations
         attr_reader :auto_validate_unique_columns
 
-        Plugins.inherited_instance_variables(self, :@auto_validate_presence=>nil, :@auto_validate_types=>nil, :@auto_validate_not_null_columns=>:dup, :@auto_validate_explicit_not_null_columns=>:dup, :@auto_validate_unique_columns=>:dup)
+        Plugins.inherited_instance_variables(self, :@auto_validate_presence=>nil, :@auto_validate_types=>nil, :@auto_validate_not_null_columns=>:dup, :@auto_validate_explicit_not_null_columns=>:dup, :@auto_validate_max_length_columns=>:dup, :@auto_validate_unique_columns=>:dup)
         Plugins.after_set_dataset(self, :setup_auto_validations)
 
         # Whether to use a presence validation for not null columns
@@ -91,7 +97,7 @@ module Sequel
         # If :all is given as the type, skip all auto validations.
         def skip_auto_validations(type)
           if type == :all
-            [:not_null, :types, :unique].each{|v| skip_auto_validations(v)}
+            [:not_null, :types, :unique, :max_length].each{|v| skip_auto_validations(v)}
           elsif type == :types
             @auto_validate_types = false
           else
@@ -107,6 +113,7 @@ module Sequel
           @auto_validate_not_null_columns = not_null_cols - Array(primary_key)
           explicit_not_null_cols += Array(primary_key)
           @auto_validate_explicit_not_null_columns = explicit_not_null_cols.uniq
+          @auto_validate_max_length_columns = db_schema.select{|col, sch| sch[:type] == :string && sch[:max_length].is_a?(Integer)}.map{|col, sch| [col, sch[:max_length]]}
           table = dataset.first_source_table
           @auto_validate_unique_columns = if db.supports_index_parsing? && [Symbol, SQL::QualifiedIdentifier, SQL::Identifier, String].any?{|c| table.is_a?(c)}
             db.indexes(table).select{|name, idx| idx[:unique] == true}.map{|name, idx| idx[:columns]}
@@ -134,6 +141,11 @@ module Sequel
               validates_not_null(not_null_columns, :allow_missing=>true)
             end
           end
+          unless (max_length_columns = model.auto_validate_max_length_columns).empty?
+            max_length_columns.each do |col, len|
+              validates_max_length(len, col, :allow_nil=>true)
+            end
+          end
 
           validates_schema_types if model.auto_validate_types?
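
So, as a sketch, a varchar(255) column now picks up a max length validation
automatically, and it can be skipped like the other validation types (model
and column names hypothetical):

    class Post < Sequel::Model
      plugin :auto_validations
    end
    Post.auto_validate_max_length_columns     # => e.g. [[:title, 255]]
    Post.skip_auto_validations(:max_length)   # opt out of just these validations
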
 
diff --git a/lib/sequel/plugins/class_table_inheritance.rb b/lib/sequel/plugins/class_table_inheritance.rb
index 39cd4ce..36bc3a0 100644
--- a/lib/sequel/plugins/class_table_inheritance.rb
+++ b/lib/sequel/plugins/class_table_inheritance.rb
@@ -31,12 +31,20 @@ module Sequel
     # When using the class_table_inheritance plugin, subclasses use joined 
     # datasets:
     #
-    #   Employee.dataset.sql  # SELECT * FROM employees
-    #   Manager.dataset.sql   # SELECT * FROM employees
-    #                         # INNER JOIN managers USING (id)
-    #   Executive.dataset.sql # SELECT * FROM employees 
-    #                         # INNER JOIN managers USING (id)
-    #                         # INNER JOIN executives USING (id)
+    #   Employee.dataset.sql
+    #   # SELECT employees.id, employees.name, employees.kind
+    #   # FROM employees
+    #
+    #   Manager.dataset.sql
+    #   # SELECT employees.id, employees.name, employees.kind, managers.num_staff
+    #   # FROM employees
+    #   # JOIN managers ON (managers.id = employees.id)
+    #
+    #   Executive.dataset.sql
+    #   # SELECT employees.id, employees.name, employees.kind, managers.num_staff, executives.num_managers
+    #   # FROM employees
+    #   # JOIN managers ON (managers.id = employees.id)
+    #   # JOIN executives ON (executives.id = managers.id)
     #
     # This allows Executive.all to return instances with all attributes
     # loaded.  The plugin overrides the deleting, inserting, and updating
@@ -99,12 +107,13 @@ module Sequel
       def self.configure(model, opts=OPTS)
         model.instance_eval do
           @cti_base_model = self
-          @cti_key = key = opts[:key] 
+          @cti_key = opts[:key] 
           @cti_tables = [table_name]
           @cti_columns = {table_name=>columns}
           @cti_table_map = opts[:table_map] || {}
           @cti_model_map = opts[:model_map]
           set_dataset_cti_row_proc
+          set_dataset(dataset.select(*columns.map{|c| Sequel.qualify(table_name, Sequel.identifier(c))}))
         end
       end
 
@@ -163,7 +172,7 @@ module Sequel
             # Need to set dataset and columns before calling super so that
             # the main column accessor module is included in the class before any
             # plugin accessor modules (such as the lazy attributes accessor module).
-            set_dataset(ds.join(table, [pk]))
+            set_dataset(ds.join(table, pk=>pk).select_append(*(columns - [primary_key]).map{|c| Sequel.qualify(table, Sequel.identifier(c))}))
             set_columns(self.columns)
           end
           super
@@ -221,14 +230,6 @@ module Sequel
       end
 
       module InstanceMethods
-        # Set the cti_key column to the name of the model.
-        def before_validation
-          if new? && model.cti_key && !model.cti_model_map
-            send("#{model.cti_key}=", model.name.to_s)
-          end
-          super
-        end
-        
         # Delete the row from all backing tables, starting from the
         # most recent table and going through all superclasses.
         def delete
@@ -242,6 +243,14 @@ module Sequel
         
         private
         
+        # Set the cti_key column to the name of the model.
+        def _before_validation
+          if new? && model.cti_key && !model.cti_model_map
+            send("#{model.cti_key}=", model.name.to_s)
+          end
+          super
+        end
+        
         # Insert rows into all backing tables, using the columns
         # in each table.  
         def _insert
diff --git a/lib/sequel/plugins/column_select.rb b/lib/sequel/plugins/column_select.rb
new file mode 100644
index 0000000..52f19c5
--- /dev/null
+++ b/lib/sequel/plugins/column_select.rb
@@ -0,0 +1,57 @@
+module Sequel
+  module Plugins
+    # The column_select plugin changes the default selection for a
+    # model dataset to explicitly select all columns from the table:
+    # <tt>table.column1, table.column2, table.column3, ...</tt>.
+    # This makes it simpler to add columns to the model's table
+    # in a migration while the application is running, without
+    # affecting the operation of the application.
+    #
+    # Note that by default on databases that support RETURNING,
+    # using explicit column selections will cause instance creations
+    # to use two queries (insert and refresh) instead of a single
+    # query using RETURNING.  You can use the insert_returning_select
+    # plugin to automatically use RETURNING for instance creations
+    # for models where the column_select plugin automatically sets up
+    # an explicit column selection.
+    #
+    # Usage:
+    #
+    #   # Make all model subclasses explicitly select qualified columns
+    #   Sequel::Model.plugin :column_select
+    #
+    #   # Make the Album class select qualified columns
+    #   Album.plugin :column_select
+    module ColumnSelect
+      # Modify the current model's dataset selection, if the model
+      # has a dataset.
+      def self.configure(model)
+        model.instance_eval do
+          self.dataset = dataset if @dataset
+        end
+      end
+
+      module ClassMethods
+        private
+
+        # If the underlying dataset selects from a single table and
+        # has no explicit selection, explicitly select all columns from that table,
+        # qualifying them with the table's name.
+        def convert_input_dataset(ds)
+          ds = super
+          if !ds.opts[:select] && (from = ds.opts[:from]) && from.length == 1 && !ds.opts[:join]
+            if db.supports_schema_parsing?
+              cols = check_non_connection_error{db.schema(ds)}
+              if cols
+                cols = cols.map{|c, _| c}
+              end
+            end
+            cols ||= check_non_connection_error{ds.columns}
+            ds = ds.select(*cols.map{|c| Sequel.qualify(ds.first_source, Sequel.identifier(c))})
+          end
+          ds
+        end
+      end
+    end
+  end
+end
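
A sketch of the resulting selection (column names hypothetical):

    Album.plugin :column_select
    Album.dataset.sql
    # => "SELECT albums.id, albums.name, albums.artist_id FROM albums"
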
diff --git a/lib/sequel/plugins/composition.rb b/lib/sequel/plugins/composition.rb
index bc60c80..170a387 100644
--- a/lib/sequel/plugins/composition.rb
+++ b/lib/sequel/plugins/composition.rb
@@ -149,27 +149,11 @@ module Sequel
       end
 
       module InstanceMethods
-        # For each composition, set the columns in the model class based
-        # on the composition object.
-        def before_save
-          @compositions.keys.each{|n| instance_eval(&model.compositions[n][:decomposer])} if @compositions
-          super
-        end
-        
         # Cache of composition objects for this class.
         def compositions
           @compositions ||= {}
         end
 
-        # Duplicate compositions hash when duplicating model instance.
-        def dup
-          s = self
-          super.instance_eval do
-            @compositions = s.compositions.dup
-            self
-          end
-        end
-
         # Freeze compositions hash when freezing model instance.
         def freeze
           compositions.freeze
@@ -178,11 +162,25 @@ module Sequel
 
         private
 
+        # For each composition, set the columns in the model class based
+        # on the composition object.
+        def _before_validation
+          @compositions.keys.each{|n| instance_eval(&model.compositions[n][:decomposer])} if @compositions
+          super
+        end
+        
         # Clear the cached compositions when manually refreshing.
         def _refresh_set_values(hash)
           @compositions.clear if @compositions
           super
         end
+
+        # Duplicate compositions hash when duplicating model instance.
+        def initialize_copy(other)
+          super
+          @compositions = other.compositions.dup
+          self
+        end
       end
     end
   end
diff --git a/lib/sequel/plugins/dirty.rb b/lib/sequel/plugins/dirty.rb
index c52f582..19959be 100644
--- a/lib/sequel/plugins/dirty.rb
+++ b/lib/sequel/plugins/dirty.rb
@@ -85,17 +85,6 @@ module Sequel
           initial_values.has_key?(column)
         end
 
-        # Duplicate internal data structures
-        def dup 
-          s = self
-          super.instance_eval do
-            @initial_values = s.initial_values.dup
-            @missing_initial_values = s.send(:missing_initial_values).dup
-            @previous_changes = s.previous_changes.dup if s.previous_changes
-            self
-          end
-        end
-
         # Freeze internal data structures
         def freeze
           initial_values.freeze
@@ -209,6 +198,15 @@ module Sequel
           end
         end
 
+        # Duplicate internal data structures
+        def initialize_copy(other)
+          super
+          @initial_values = other.initial_values.dup
+          @missing_initial_values = other.send(:missing_initial_values).dup
+          @previous_changes = other.previous_changes.dup if other.previous_changes
+          self
+        end
+
         # Reset the initial values when initializing.
         def initialize_set(h)
           super
diff --git a/lib/sequel/plugins/hook_class_methods.rb b/lib/sequel/plugins/hook_class_methods.rb
index 9bd0b71..42c2c5e 100644
--- a/lib/sequel/plugins/hook_class_methods.rb
+++ b/lib/sequel/plugins/hook_class_methods.rb
@@ -62,7 +62,7 @@ module Sequel
         # Example of usage:
         #
         #  class MyModel
-        #   define_hook :before_move_to
+        #   add_hook_type :before_move_to
         #   before_move_to(:check_move_allowed){|o| o.allow_move?}
         #   def move_to(there)
         #     return if before_move_to == false
diff --git a/lib/sequel/plugins/insert_returning_select.rb b/lib/sequel/plugins/insert_returning_select.rb
new file mode 100644
index 0000000..ee60565
--- /dev/null
+++ b/lib/sequel/plugins/insert_returning_select.rb
@@ -0,0 +1,70 @@
+module Sequel
+  module Plugins
+    # If the model's dataset selects explicit columns and the
+    # database supports it, the insert_returning_select plugin will
+    # automatically set the RETURNING clause on the dataset used to
+    # insert rows to the columns selected, which allows the default model
+    # support to run the insert and refresh of the data in a single
+    # query, instead of two separate queries.  This is Sequel's default
+    # behavior when the model does not select explicit columns.
+    #
+    # Usage:
+    #
+    #   # Make all model subclasses automatically setup insert returning clauses
+    #   Sequel::Model.plugin :insert_returning_select
+    #
+    #   # Make the Album class automatically setup insert returning clauses
+    #   Album.plugin :insert_returning_select
+    module InsertReturningSelect
+      # Modify the current model's dataset selection, if the model
+      # has a dataset.
+      def self.configure(model)
+        model.instance_eval do
+          self.dataset = dataset if @dataset && @dataset.opts[:select]
+        end
+      end
+
+      module ClassMethods
+        # The dataset to use to insert new rows.  For internal use only.
+        attr_reader :instance_insert_dataset
+
+        private
+
+        # When resetting the instance dataset, also reset the instance_insert_dataset.
+        def reset_instance_dataset
+          ret = super
+          ds = @instance_dataset
+
+          if columns = insert_returning_columns(ds)
+            ds = ds.returning(*columns)
+          end
+          @instance_insert_dataset = ds
+
+          ret
+        end
+
+        # Determine the columns to use for the returning clause, or return nil
+        # if they can't be determined and a returning clause should not be
+        # added automatically.
+        def insert_returning_columns(ds)
+          return unless ds.supports_returning?(:insert)
+          return unless values = ds.opts[:select]
+
+          values = values.map{|v| ds.unqualified_column_for(v)}
+          if values.all?
+            values
+          end
+        end
+      end
+      
+      module InstanceMethods
+        private
+
+        # Use the instance_insert_dataset as the base dataset for the insert.
+        def _insert_dataset
+          use_server(model.instance_insert_dataset)
+        end
+      end
+    end
+  end
+end
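
A sketch of combining this with an explicit column selection such as the one
set up by the column_select plugin above, on a database that supports
RETURNING (column names hypothetical):

    Album.plugin :column_select
    Album.plugin :insert_returning_select
    Album.create(:name=>'RF')
    # INSERT INTO albums (name) VALUES ('RF') RETURNING id, name, artist_id
    # (RETURNING columns are unqualified, per insert_returning_columns above)
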
diff --git a/lib/sequel/plugins/instance_filters.rb b/lib/sequel/plugins/instance_filters.rb
index ba58ade..edc7545 100644
--- a/lib/sequel/plugins/instance_filters.rb
+++ b/lib/sequel/plugins/instance_filters.rb
@@ -59,15 +59,6 @@ module Sequel
           clear_instance_filters
         end
 
-        # Duplicate internal structures when duplicating model instance.
-        def dup
-          ifs = instance_filters.dup
-          super.instance_eval do
-            @instance_filters = ifs
-            self
-          end
-        end
-      
         # Freeze the instance filters when freezing the object
         def freeze
           instance_filters.freeze
@@ -93,6 +84,13 @@ module Sequel
           end
         end
 
+        # Duplicate internal structures when duplicating model instance.
+        def initialize_copy(other)
+          super
+          @instance_filters = other.send(:instance_filters).dup
+          self
+        end
+      
         # Lazily initialize the instance filter array.
         def instance_filters
           @instance_filters ||= []
diff --git a/lib/sequel/plugins/lazy_attributes.rb b/lib/sequel/plugins/lazy_attributes.rb
index 5536425..4e2308b 100644
--- a/lib/sequel/plugins/lazy_attributes.rb
+++ b/lib/sequel/plugins/lazy_attributes.rb
@@ -17,6 +17,13 @@ module Sequel
     #
     #   # You can specify multiple columns to lazily load:
     #   Album.plugin :lazy_attributes, :review, :tracklist
+    #
+    # Note that by default on databases that support RETURNING,
+    # using explicit column selections will cause instance creations
+    # to use two queries (insert and refresh) instead of a single
+    # query using RETURNING.  You can use the insert_returning_select
+    # plugin to automatically use RETURNING for instance creations
+    # for models using the lazy_attributes plugin.
     module LazyAttributes
       # Lazy attributes requires the tactical_eager_loading plugin
       def self.apply(model, *attrs)
@@ -37,7 +44,10 @@ module Sequel
         # For each attribute given, create an accessor method that allows a lazy
         # lookup of the attribute.  Each attribute should be given as a symbol.
         def lazy_attributes(*attrs)
-          set_dataset(dataset.select(*(columns - attrs)))
+          unless select = dataset.opts[:select]
+            select = dataset.columns.map{|c| Sequel.qualify(dataset.first_source, c)}
+          end
+          set_dataset(dataset.select(*select.reject{|c| attrs.include?(dataset.send(:_hash_key_symbol, c))}))
           attrs.each{|a| define_lazy_attribute_getter(a)}
         end
         
@@ -66,8 +76,9 @@ module Sequel
         # the attribute for just the current object.  Return the value of
         # the attribute for the current object.
         def lazy_attribute_lookup(a)
+          selection = Sequel.qualify(model.table_name, a)
           if frozen?
-            return this.dup.select(a).get(a)
+            return this.dup.get(selection)
           end
 
           if retrieved_with
@@ -75,14 +86,15 @@ module Sequel
             composite_pk = true if primary_key.is_a?(Array)
             id_map = {}
             retrieved_with.each{|o| id_map[o.pk] = o unless o.values.has_key?(a) || o.frozen?}
-            model.select(*(Array(primary_key) + [a])).filter(primary_key=>id_map.keys).naked.each do |row|
+            predicate_key = composite_pk ? primary_key.map{|k| Sequel.qualify(model.table_name, k)} : Sequel.qualify(model.table_name, primary_key)
+            model.select(*(Array(primary_key).map{|k| Sequel.qualify(model.table_name, k)} + [selection])).where(predicate_key=>id_map.keys).naked.each do |row|
               obj = id_map[composite_pk ? row.values_at(*primary_key) : row[primary_key]]
               if obj && !obj.values.has_key?(a)
                 obj.values[a] = row[a]
               end
             end
           end
-          values[a] = this.select(a).get(a) unless values.has_key?(a)
+          values[a] = this.get(selection) unless values.has_key?(a)
           values[a]
         end
       end
diff --git a/lib/sequel/plugins/list.rb b/lib/sequel/plugins/list.rb
index c0f331d..67117ee 100644
--- a/lib/sequel/plugins/list.rb
+++ b/lib/sequel/plugins/list.rb
@@ -96,6 +96,15 @@ module Sequel
           super
         end
 
+        # When destroying an instance, move all entries after the instance down
+        # one position, so that there aren't any gaps
+        def after_destroy
+          super
+
+          f = Sequel.expr(position_field)
+          list_dataset.where(f > position_value).update(f => f - 1)
+        end
+
         # Find the last position in the list containing this instance.
         def last_position
           list_dataset.max(position_field).to_i
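
For example (hypothetical items table using the default position field),
destroying the item at position 3 now also closes the gap, issuing roughly:

    item = Item.first(:position=>3)
    item.destroy
    # DELETE FROM items WHERE (id = ...)
    # UPDATE items SET position = (position - 1) WHERE (position > 3)
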
diff --git a/lib/sequel/plugins/modification_detection.rb b/lib/sequel/plugins/modification_detection.rb
new file mode 100644
index 0000000..92a3690
--- /dev/null
+++ b/lib/sequel/plugins/modification_detection.rb
@@ -0,0 +1,90 @@
+module Sequel
+  module Plugins
+    # This plugin automatically detects in-place modifications to
+    # columns as well as direct modifications of the values hash.
+    #
+    #   class User < Sequel::Model
+    #     plugin :modification_detection
+    #   end
+    #   user = User[1]
+    #   user.a # => 'a'
+    #   user.a << 'b'
+    #   user.save_changes
+    #   # UPDATE users SET a = 'ab' WHERE (id = 1)
+    #
+    # Note that for this plugin to work correctly, the column values must
+    # correctly implement the #hash method, returning the same value for
+    # objects that are equal, and a different value for objects that are not equal.
+    #
+    # Note that this plugin causes a performance hit for all retrieved
+    # objects, so it shouldn't be used in cases where performance is a
+    # primary concern.
+    #
+    # Usage:
+    #
+    #   # Make all model subclasses automatically detect column modifications
+    #   Sequel::Model.plugin :modification_detection
+    #
+    #   # Make the Album class automatically detect column modifications
+    #   Album.plugin :modification_detection
+    module ModificationDetection
+      module ClassMethods
+        # Calculate the hashes for all of the column values, so that they
+        # can be compared later to determine if the column value has changed.
+        def call(_)
+          v = super
+          v.calculate_values_hashes
+          v
+        end
+      end
+
+      module InstanceMethods
+        # Recalculate the column value hashes after updating.
+        def after_update
+          super
+          recalculate_values_hashes
+        end
+
+        # Calculate the column hash values if they haven't been already calculated.
+        def calculate_values_hashes
+          @values_hashes || recalculate_values_hashes
+        end
+
+        # Detect which columns have been modified by comparing the cached hash
+        # value to the hash of the current value.
+        def changed_columns
+          cc = super
+          changed = []
+          v = @values
+          if vh = @values_hashes
+            (vh.keys - cc).each{|c| changed << c unless v.has_key?(c) && vh[c] == v[c].hash}
+          end
+          cc + changed
+        end
+
+        private
+
+        # Recalculate the column value hashes after manually refreshing.
+        def _refresh(dataset)
+          super
+          recalculate_values_hashes
+        end
+
+        # Recalculate the column value hashes after refreshing after saving a new object.
+        def _save_refresh
+          super
+          recalculate_values_hashes
+        end
+
+        # Recalculate the column value hashes, caching them for later use.
+        def recalculate_values_hashes
+          vh = {}
+          @values.each do |k,v|
+            vh[k] = v.hash
+          end
+          @values_hashes = vh.freeze
+        end
+      end
+    end
+  end
+end
diff --git a/lib/sequel/plugins/nested_attributes.rb b/lib/sequel/plugins/nested_attributes.rb
index 87e7c5e..6db1045 100644
--- a/lib/sequel/plugins/nested_attributes.rb
+++ b/lib/sequel/plugins/nested_attributes.rb
@@ -82,29 +82,27 @@ module Sequel
         attr_accessor :nested_attributes_module
         
         # Allow nested attributes to be set for the given associations.  Options:
-        # * :destroy - Allow destruction of nested records.
-        # * :fields - If provided, should be an Array or proc. If it is an array,
-        #   restricts the fields allowed to be modified through the
-        #   association_attributes= method to the specific fields given. If it is
-        #   a proc, it will be called with the associated object and should return an
-        #   array of the allowable fields.
-        # * :limit - For *_to_many associations, a limit on the number of records
-        #   that will be processed, to prevent denial of service attacks.
-        # * :reject_if - A proc that is given each attribute hash before it is
-        #   passed to its associated object. If the proc returns a truthy
-        #   value, the attribute hash is ignored.
-        # * :remove - Allow disassociation of nested records (can remove the associated
-        #   object from the parent object, but not destroy the associated object).
-        # * :strict - Kept for backward compatibility. Setting it to false is
-        #   equivalent to setting :unmatched_pk to :ignore.
-        # * :transform - A proc to transform attribute hashes before they are
-        #   passed to associated object. Takes two arguments, the parent object and
-        #   the attribute hash. Uses the return value as the new attribute hash.
-        # * :unmatched_pk - Specify the action to be taken if a primary key is
-        #   provided in a record, but it doesn't match an existing associated
-        #   object. Set to :create to create a new object with that primary
-        #   key, :ignore to ignore the record, or :raise to raise an error.
-        #   The default is :raise.
+        # :destroy :: Allow destruction of nested records.
+        # :fields :: If provided, should be an Array or proc. If it is an array,
+        #            restricts the fields allowed to be modified through the
+        #            association_attributes= method to the specific fields given. If it is
+        #            a proc, it will be called with the associated object and should return an
+        #            array of the allowable fields.
+        # :limit :: For *_to_many associations, a limit on the number of records
+        #           that will be processed, to prevent denial of service attacks.
+        # :reject_if :: A proc that is given each attribute hash before it is
+        #               passed to its associated object. If the proc returns a truthy
+        #               value, the attribute hash is ignored.
+        # :remove :: Allow disassociation of nested records (can remove the associated
+        #            object from the parent object, but not destroy the associated object).
+        # :transform :: A proc to transform attribute hashes before they are
+        #               passed to associated object. Takes two arguments, the parent object and
+        #               the attribute hash. Uses the return value as the new attribute hash.
+        # :unmatched_pk :: Specify the action to be taken if a primary key is
+        #                  provided in a record, but it doesn't match an existing associated
+        #                  object. Set to :create to create a new object with that primary
+        #                  key, :ignore to ignore the record, or :raise to raise an error.
+        #                  The default is :raise.
         #
         # If a block is provided, it is used to set the :reject_if option.
         def nested_attributes(*associations, &block)
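A short sketch of how the options documented in the hunk above are typically passed at the class level (the Artist/Album models and the one_to_many association are assumptions for illustration):

    class Artist < Sequel::Model
      plugin :nested_attributes
      one_to_many :albums
      nested_attributes :albums, :destroy=>true, :limit=>10,
        :fields=>[:name], :unmatched_pk=>:ignore
    end

    artist = Artist[1]
    artist.albums_attributes = [{:name=>'New Album'}, {:id=>2, :_delete=>'t'}]
    artist.save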
@@ -125,25 +123,35 @@ module Sequel
         # class.
         def def_nested_attribute_method(reflection)
           nested_attributes_module.class_eval do
-            if reflection.returns_array?
-              define_method("#{reflection[:name]}_attributes=") do |array|
-                nested_attributes_list_setter(reflection, array)
-              end
-            else
-             define_method("#{reflection[:name]}_attributes=") do |h|
-                nested_attributes_setter(reflection, h)
-              end
+            define_method("#{reflection[:name]}_attributes=") do |v|
+              set_nested_attributes(reflection[:name], v)
             end
           end
         end
       end
       
       module InstanceMethods
+        # Set the nested attributes for the given association.  obj should be an enumerable of multiple objects
+        # for plural associations.  The opts hash can be used to override any of the default options set by
+        # the class-level nested_attributes call.
+        def set_nested_attributes(assoc, obj, opts=OPTS)
+          raise(Error, "no association named #{assoc} for #{model.inspect}") unless ref = model.association_reflection(assoc)
+          raise(Error, "nested attributes are not enabled for association #{assoc} for #{model.inspect}") unless meta = ref[:nested_attributes]
+          meta = meta.merge(opts)
+          meta[:reflection] = ref
+          if ref.returns_array?
+            nested_attributes_list_setter(meta, obj)
+          else
+            nested_attributes_setter(meta, obj)
+          end
+        end
+
         private
         
         # Check that the keys related to the association are not modified inside the block.  Does
         # not use an ensure block, so callers should be careful.
-        def nested_attributes_check_key_modifications(reflection, obj)
+        def nested_attributes_check_key_modifications(meta, obj)
+          reflection = meta[:reflection]
           keys = reflection.associated_object_keys.map{|x| obj.send(x)}
           yield
           unless keys == reflection.associated_object_keys.map{|x| obj.send(x)}
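The new set_nested_attributes instance method defined in this hunk lets those class-level options be overridden per call; a hedged sketch reusing the hypothetical Artist model from above:

    artist = Artist[1]
    artist.set_nested_attributes(:albums, [{:name=>'B-Sides'}], :fields=>[:name])
    artist.save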
@@ -154,10 +162,11 @@ module Sequel
         # Create a new associated object with the given attributes, validate
         # it when the parent is validated, and save it when the object is saved.
         # Returns the object created.
-        def nested_attributes_create(reflection, attributes)
+        def nested_attributes_create(meta, attributes)
+          reflection = meta[:reflection]
           obj = reflection.associated_class.new
-          nested_attributes_set_attributes(reflection, obj, attributes)
-          after_validation_hook{validate_associated_object(reflection, obj)}
+          nested_attributes_set_attributes(meta, obj, attributes)
+          after_validation_hook{validate_associated_object(meta, obj)}
           if reflection.returns_array?
             send(reflection[:name]) << obj
             after_save_hook{send(reflection.add_method, obj)}
@@ -177,6 +186,7 @@ module Sequel
               after_save_hook{send(reflection.setter_method, obj)}
             end
           end
+          add_reciprocal_object(reflection, obj)
           obj
         end
         
@@ -184,19 +194,20 @@ module Sequel
         # If a hash is provided, sort it by key and then use the values.
         # If there is a limit on the nested attributes for this association,
         # make sure the length of the attributes_list is not greater than the limit.
-        def nested_attributes_list_setter(reflection, attributes_list)
+        def nested_attributes_list_setter(meta, attributes_list)
           attributes_list = attributes_list.sort_by{|x| x.to_s}.map{|k,v| v} if attributes_list.is_a?(Hash)
-          if (limit = reflection[:nested_attributes][:limit]) && attributes_list.length > limit
+          if (limit = meta[:limit]) && attributes_list.length > limit
             raise(Error, "number of nested attributes (#{attributes_list.length}) exceeds the limit (#{limit})")
           end
-          attributes_list.each{|a| nested_attributes_setter(reflection, a)}
+          attributes_list.each{|a| nested_attributes_setter(meta, a)}
         end
         
         # Remove the given associated object from the current object. If the
         # :destroy option is given, destroy the object after disassociating it
         # (unless destroying the object would automatically disassociate it).
         # Returns the object removed.
-        def nested_attributes_remove(reflection, obj, opts=OPTS)
+        def nested_attributes_remove(meta, obj, opts=OPTS)
+          reflection = meta[:reflection]
           if !opts[:destroy] || reflection.remove_before_destroy?
             before_save_hook do
               if reflection.returns_array?
@@ -212,8 +223,8 @@ module Sequel
         
         # Set the fields in the obj based on the association, only allowing
         # specific :fields if configured.
-        def nested_attributes_set_attributes(reflection, obj, attributes)
-          if fields = reflection[:nested_attributes][:fields]
+        def nested_attributes_set_attributes(meta, obj, attributes)
+          if fields = meta[:fields]
             fields = fields.call(obj) if fields.respond_to?(:call)
             obj.set_only(attributes, fields)
           else
@@ -231,12 +242,13 @@ module Sequel
         # * If a primary key exists in the attributes hash but it does not match an associated object,
         #   either raise an error, create a new object or ignore the hash, depending on the :unmatched_pk option.
         # * If no primary key exists in the attributes hash, create a new object.
-        def nested_attributes_setter(reflection, attributes)
-          if a = reflection[:nested_attributes][:transform]
+        def nested_attributes_setter(meta, attributes)
+          if a = meta[:transform]
             attributes = a.call(self, attributes)
           end
-          return if (b = reflection[:nested_attributes][:reject_if]) && b.call(attributes)
+          return if (b = meta[:reject_if]) && b.call(attributes)
           modified!
+          reflection = meta[:reflection]
           klass = reflection.associated_class
           sym_keys = Array(klass.primary_key)
           str_keys = sym_keys.map{|k| k.to_s}
@@ -246,28 +258,28 @@ module Sequel
           end
           if obj
             attributes = attributes.dup.delete_if{|k,v| str_keys.include? k.to_s}
-            if reflection[:nested_attributes][:destroy] && klass.db.send(:typecast_value_boolean, attributes.delete(:_delete) || attributes.delete('_delete'))
-              nested_attributes_remove(reflection, obj, :destroy=>true)
-            elsif reflection[:nested_attributes][:remove] && klass.db.send(:typecast_value_boolean, attributes.delete(:_remove) || attributes.delete('_remove'))
-              nested_attributes_remove(reflection, obj)
+            if meta[:destroy] && klass.db.send(:typecast_value_boolean, attributes.delete(:_delete) || attributes.delete('_delete'))
+              nested_attributes_remove(meta, obj, :destroy=>true)
+            elsif meta[:remove] && klass.db.send(:typecast_value_boolean, attributes.delete(:_remove) || attributes.delete('_remove'))
+              nested_attributes_remove(meta, obj)
             else
-              nested_attributes_update(reflection, obj, attributes)
+              nested_attributes_update(meta, obj, attributes)
             end
-          elsif pk.all? && reflection[:nested_attributes][:unmatched_pk] != :create
-            if reflection[:nested_attributes][:unmatched_pk] == :raise
+          elsif pk.all? && meta[:unmatched_pk] != :create
+            if meta[:unmatched_pk] == :raise
               raise(Error, "no matching associated object with given primary key (association: #{reflection[:name]}, pk: #{pk})")
             end
           else
-            nested_attributes_create(reflection, attributes)
+            nested_attributes_create(meta, attributes)
           end
         end
         
         # Update the given object with the attributes, validating it when the
         # parent object is validated and saving it when the parent is saved.
         # Returns the object updated.
-        def nested_attributes_update(reflection, obj, attributes)
-          nested_attributes_update_attributes(reflection, obj, attributes)
-          after_validation_hook{validate_associated_object(reflection, obj)}
+        def nested_attributes_update(meta, obj, attributes)
+          nested_attributes_update_attributes(meta, obj, attributes)
+          after_validation_hook{validate_associated_object(meta, obj)}
           # Don't need to validate the object twice if :validate association option is not false
           # and don't want to validate it at all if it is false.
           after_save_hook{obj.save_changes(:validate=>false)}
@@ -275,15 +287,16 @@ module Sequel
         end
 
         # Update the attributes for the given object related to the current object through the association.
-        def nested_attributes_update_attributes(reflection, obj, attributes)
-          nested_attributes_check_key_modifications(reflection, obj) do
-            nested_attributes_set_attributes(reflection, obj, attributes)
+        def nested_attributes_update_attributes(meta, obj, attributes)
+          nested_attributes_check_key_modifications(meta, obj) do
+            nested_attributes_set_attributes(meta, obj, attributes)
           end
         end
 
         # Validate the given associated object, adding any validation error messages from the
         # given object to the parent object.
-        def validate_associated_object(reflection, obj)
+        def validate_associated_object(meta, obj)
+          reflection = meta[:reflection]
           return if reflection[:validate] == false
           association = reflection[:name]
           if (reflection[:type] == :one_to_many || reflection[:type] == :one_to_one) && (key = reflection[:key]).is_a?(Symbol) && !(pk_val = obj.values[key])
diff --git a/lib/sequel/plugins/prepared_statements.rb b/lib/sequel/plugins/prepared_statements.rb
index 88f6a8a..ab18a33 100644
--- a/lib/sequel/plugins/prepared_statements.rb
+++ b/lib/sequel/plugins/prepared_statements.rb
@@ -49,6 +49,19 @@ module Sequel
 
         private
 
+        # Create a prepared statement, but modify the SQL used so that the model's columns are explicitly
+        # selected instead of using *, assuming that the dataset selects from a single table.
+        def prepare_explicit_statement(ds, type, vals=OPTS)
+          f = ds.opts[:from]
+          meth = type == :insert_select ? :returning : :select
+          s = ds.opts[meth]
+          if f && f.length == 1 && !ds.opts[:join] && (!s || s.empty?)
+            ds = ds.send(meth, *columns.map{|c| Sequel.identifier(c)})
+          end 
+          
+          prepare_statement(ds, type, vals)
+        end
+
         # Create a prepared statement based on the given dataset with a unique name for the given
         # type of query and values.
         def prepare_statement(ds, type, vals=OPTS)
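The effect of prepare_explicit_statement is roughly the following dataset transformation before the statement is prepared (the Album model with columns [:id, :name] is an assumption for illustration):

    ds = Album.dataset                                    # SELECT * FROM albums
    ds = ds.select(*Album.columns.map{|c| Sequel.identifier(c)})
    ds.sql                                                # => "SELECT id, name FROM albums"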
@@ -76,18 +89,18 @@ module Sequel
         # and return that column values for the row created.
         def prepared_insert_select(cols)
           if dataset.supports_insert_select?
-            cached_prepared_statement(:insert_select, prepared_columns(cols)){prepare_statement(naked.clone(:server=>dataset.opts.fetch(:server, :default)), :insert_select, prepared_statement_key_hash(cols))}
+            cached_prepared_statement(:insert_select, prepared_columns(cols)){prepare_explicit_statement(naked.clone(:server=>dataset.opts.fetch(:server, :default)), :insert_select, prepared_statement_key_hash(cols))}
           end
         end
 
         # Return a prepared statement that can be used to lookup a row solely based on the primary key.
         def prepared_lookup
-          cached_prepared_statement(:fixed, :lookup){prepare_statement(filter(prepared_statement_key_array(primary_key)), :first)}
+          cached_prepared_statement(:fixed, :lookup){prepare_explicit_statement(filter(prepared_statement_key_array(primary_key)), :first)}
         end
 
         # Return a prepared statement that can be used to refresh a row to get new column values after insertion.
         def prepared_refresh
-          cached_prepared_statement(:fixed, :refresh){prepare_statement(naked.clone(:server=>dataset.opts.fetch(:server, :default)).filter(prepared_statement_key_array(primary_key)), :first)}
+          cached_prepared_statement(:fixed, :refresh){prepare_explicit_statement(naked.clone(:server=>dataset.opts.fetch(:server, :default)).filter(prepared_statement_key_array(primary_key)), :first)}
         end
 
         # Return an array of two element arrays with the column symbol as the first entry and the
@@ -122,8 +135,6 @@ module Sequel
           prepared_lookup.call(primary_key_hash(pk))
         end
 
-        private
-
         # If a prepared statement has already been cached for the given type and subtype,
         # return it.  Otherwise, yield to the block to get the prepared statement, and cache it.
         def cached_prepared_statement(type, subtype)
diff --git a/lib/sequel/plugins/prepared_statements_associations.rb b/lib/sequel/plugins/prepared_statements_associations.rb
index 26ab798..ba2db61 100644
--- a/lib/sequel/plugins/prepared_statements_associations.rb
+++ b/lib/sequel/plugins/prepared_statements_associations.rb
@@ -53,6 +53,20 @@ module Sequel
           opts.send(:cached_fetch, :prepared_statement) do
             unless opts[:instance_specific]
               ds, bv = _associated_dataset(opts, {}).unbind
+
+              f = ds.opts[:from]
+              if f && f.length == 1
+                s = ds.opts[:select]
+                if ds.opts[:join]
+                  if opts.eager_loading_use_associated_key? && s && s.length == 1 && s.first.is_a?(SQL::ColumnAll)
+                    table = s.first.table
+                    ds = ds.select(*opts.associated_class.columns.map{|c| Sequel.identifier(c).qualify(table)})
+                  end
+                elsif !s || s.empty?
+                  ds = ds.select(*opts.associated_class.columns.map{|c| Sequel.identifier(c)})
+                end
+              end 
+          
               if bv.length != assoc_bv.length
                 h = {}
                 bv.each do |k,v|
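When the association dataset contains a JOIN, the lines added above replace a table.* selection with explicitly qualified columns; the building block is a qualified identifier, e.g. (quoting aside):

    Sequel.identifier(:name).qualify(:albums)   # literalizes as albums.name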
diff --git a/lib/sequel/plugins/serialization.rb b/lib/sequel/plugins/serialization.rb
index 1ea103c..80f81c9 100644
--- a/lib/sequel/plugins/serialization.rb
+++ b/lib/sequel/plugins/serialization.rb
@@ -175,27 +175,12 @@ module Sequel
       end
 
       module InstanceMethods
-        # Serialize deserialized values before saving
-        def before_save
-          serialize_deserialized_values
-          super
-        end
-        
         # Hash of deserialized values, used as a cache.
         def deserialized_values
           @deserialized_values ||= {}
         end
 
         # Freeze the deserialized values
-        def dup
-          dv = deserialized_values.dup
-          super.instance_eval do
-            @deserialized_values = dv
-            self
-          end
-        end
-
-        # Freeze the deserialized values
         def freeze
           deserialized_values.freeze
           super
@@ -203,6 +188,12 @@ module Sequel
 
         private
 
+        # Serialize deserialized values before saving
+        def _before_validation
+          serialize_deserialized_values
+          super
+        end
+        
         # Clear any cached deserialized values when doing a manual refresh.
         def _refresh_set_values(hash)
           @deserialized_values.clear if @deserialized_values
@@ -218,6 +209,13 @@ module Sequel
           end
         end
 
+        # Dup the deserialized values when duping model instance.
+        def initialize_copy(other)
+          super
+          @deserialized_values = other.deserialized_values.dup
+          self
+        end
+
         # Serialize all deserialized values
         def serialize_deserialized_values
           deserialized_values.each{|k,v| @values[k] = serialize_value(k, v)}
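The serialization hunk above swaps a public dup override for the standard Ruby initialize_copy hook, which is invoked by both dup and clone on the freshly allocated copy. A generic sketch of the idiom, using an unrelated Cache class purely for illustration:

    class Cache
      def initialize
        @values = {}
      end

      private

      # Called by both #dup and #clone with the source object as +other+.
      def initialize_copy(other)
        super
        @values = other.instance_variable_get(:@values).dup
        self
      end
    end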
diff --git a/lib/sequel/plugins/serialization_modification_detection.rb b/lib/sequel/plugins/serialization_modification_detection.rb
index d22fa72..1b58eb4 100644
--- a/lib/sequel/plugins/serialization_modification_detection.rb
+++ b/lib/sequel/plugins/serialization_modification_detection.rb
@@ -45,15 +45,6 @@ module Sequel
           cc
         end
 
-        # Duplicate the original deserialized values when duplicating instance.
-        def dup
-          o = @original_deserialized_values
-          super.instance_eval do
-            @original_deserialized_values = o.dup if o
-            self
-          end
-        end
-
         # Freeze the original deserialized values when freezing the instance.
         def freeze
           @original_deserialized_values ||= {}
@@ -63,6 +54,15 @@ module Sequel
 
         private
 
+        # Duplicate the original deserialized values when duplicating instance.
+        def initialize_copy(other)
+          super
+          if o = other.instance_variable_get(:@original_deserialized_values)
+            @original_deserialized_values = o.dup
+          end
+          self
+        end
+
         # For new objects, serialize any existing deserialized values so that changes can
         # be detected.
         def initialize_set(values)
diff --git a/lib/sequel/plugins/single_table_inheritance.rb b/lib/sequel/plugins/single_table_inheritance.rb
index 7ca270a..a552001 100644
--- a/lib/sequel/plugins/single_table_inheritance.rb
+++ b/lib/sequel/plugins/single_table_inheritance.rb
@@ -215,8 +215,10 @@ module Sequel
       end
 
       module InstanceMethods
+        private
+
         # Set the sti_key column based on the sti_key_map.
-        def before_validation
+        def _before_validation
           if new? && !self[model.sti_key]
             send("#{model.sti_key}=", model.sti_key_chooser.call(self))
           end
diff --git a/lib/sequel/plugins/timestamps.rb b/lib/sequel/plugins/timestamps.rb
index eafa14d..4c852aa 100644
--- a/lib/sequel/plugins/timestamps.rb
+++ b/lib/sequel/plugins/timestamps.rb
@@ -57,12 +57,6 @@ module Sequel
       end
 
       module InstanceMethods
-        # Set the create timestamp when creating
-        def before_validation
-          set_create_timestamp if new?
-          super
-        end
-        
         # Set the update timestamp when updating
         def before_update
           set_update_timestamp
@@ -71,6 +65,12 @@ module Sequel
         
         private
         
+        # Set the create timestamp when creating
+        def _before_validation
+          set_create_timestamp if new?
+          super
+        end
+        
         # If the object has accessor methods for the create timestamp field, and
         # the create timestamp value is nil or overwriting it is allowed, set the
         # create timestamp field to the time given or the current time.  If setting
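Several plugins in this patch (serialization, single_table_inheritance, and timestamps above) move their logic from the public before_validation/before_save hooks into a private _before_validation method, leaving the public hook free for application code. A sketch under that assumption, with a hypothetical Item model and status column:

    class Item < Sequel::Model
      plugin :timestamps

      # Application-level hook; the plugin's create-timestamp logic now runs
      # in the private _before_validation chain shown in the hunk above.
      def before_validation
        self.status ||= 'draft'
        super
      end
    end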
diff --git a/lib/sequel/sql.rb b/lib/sequel/sql.rb
index 891d207..4b2f2a4 100644
--- a/lib/sequel/sql.rb
+++ b/lib/sequel/sql.rb
@@ -1339,16 +1339,6 @@ module Sequel
       end
     end
 
-    # REMOVE411
-    class EmulatedFunction < Function
-      def self.new(name, *args)
-        Deprecation.deprecate("Sequel::SQL::EmulatedFunction", "Please use Sequel::SQL::Function.new!(name, args, :emulate=>true) to create an emulated SQL function")
-        Function.new!(name, args, :emulate=>true)
-      end
-
-      to_s_method :emulated_function_sql
-    end
-    
     class GenericExpression
       include AliasMethods
       include BooleanMethods
@@ -1394,18 +1384,9 @@ module Sequel
       attr_reader :table_expr
 
       # Create an object with the given join_type and table expression.
-      def initialize(join_type, table, table_alias = nil)
+      def initialize(join_type, table_expr)
         @join_type = join_type
-
-        @table_expr = if table.is_a?(AliasedExpression)
-          table
-        # REMOVE411
-        elsif table_alias
-          Deprecation.deprecate("The table_alias argument to Sequel::SQL::JoinClause#initialize", "Please use a Sequel::SQL::AliasedExpression as the table argument instead.")
-          AliasedExpression.new(table, table_alias)
-        else
-          table
-        end
+        @table_expr = table_expr
       end
 
       # The table/set related to the JOIN, without any alias.
@@ -1827,28 +1808,6 @@ module Sequel
       to_s_method :window_sql, '@opts'
     end
 
-    # REMOVE411
-    class WindowFunction < GenericExpression
-      # The function to use, should be an <tt>SQL::Function</tt>.
-      attr_reader :function
-
-      # The window to use, should be an <tt>SQL::Window</tt>.
-      attr_reader :window
-
-      def self.new(function, window)
-        Deprecation.deprecate("Sequel::SQL::WindowFunction", "Please use Sequel::SQL::Function.new(name, *args).over(...) to create an SQL window function")
-        function.over(window)
-      end
-
-      # Set the function and window.
-      def initialize(function, window)
-        Deprecation.deprecate("Sequel::SQL::WindowFunction", "Please use Sequel::SQL::Function.new(name, *args).over(...) to create an SQL window function")
-        @function, @window = function, window
-      end
-
-      to_s_method :window_function_sql, '@function, @window'
-    end
-
     # A +Wrapper+ is a simple way to wrap an existing object so that it supports
     # the Sequel DSL.
     class Wrapper < GenericExpression
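The REMOVE411 classes deleted above carried their replacements in their own deprecation messages; the equivalent expressions are now built directly from Sequel::SQL::Function (the window options and function names below are illustrative):

    # Window function, replacing SQL::WindowFunction:
    Sequel.function(:row_number).over(:partition=>:group_id, :order=>:id)

    # Emulated function, replacing SQL::EmulatedFunction:
    Sequel::SQL::Function.new!(:char_length, [:name], :emulate=>true)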
diff --git a/lib/sequel/version.rb b/lib/sequel/version.rb
index e042b43..9d67db5 100644
--- a/lib/sequel/version.rb
+++ b/lib/sequel/version.rb
@@ -3,7 +3,7 @@ module Sequel
   MAJOR = 4
   # The minor version of Sequel.  Bumped for every non-patch level
   # release, generally around once a month.
-  MINOR = 11
+  MINOR = 13
   # The tiny version of Sequel.  Usually 0, only bumped for bugfix
   # releases that fix regressions from previous versions.
   TINY  = 0
diff --git a/spec/adapters/mysql_spec.rb b/spec/adapters/mysql_spec.rb
index b586b2c..e400b3d 100644
--- a/spec/adapters/mysql_spec.rb
+++ b/spec/adapters/mysql_spec.rb
@@ -509,6 +509,13 @@ describe "A MySQL database" do
     end
   end
 
+  specify "should correctly format CREATE TABLE statements with foreign keys, when :key != the default (:id)" do
+    @db.create_table(:items){primary_key :id; Integer :other_than_id; foreign_key :p_id, :items, :key => :other_than_id, :null => false, :on_delete => :cascade}
+    check_sqls do
+      @db.sqls.should == ["CREATE TABLE `items` (`id` integer PRIMARY KEY AUTO_INCREMENT, `other_than_id` integer, `p_id` integer NOT NULL, UNIQUE (`other_than_id`), FOREIGN KEY (`p_id`) REFERENCES `items`(`other_than_id`) ON DELETE CASCADE)"]
+    end
+  end
+
   specify "should correctly format ALTER TABLE statements with foreign keys" do
     @db.create_table(:items){Integer :id}
     @db.create_table(:users){primary_key :id}
diff --git a/spec/adapters/postgres_spec.rb b/spec/adapters/postgres_spec.rb
index bd05282..b3cf1ac 100644
--- a/spec/adapters/postgres_spec.rb
+++ b/spec/adapters/postgres_spec.rb
@@ -179,6 +179,19 @@ describe "A PostgreSQL database" do
     @db.server_version.should > 70000
   end
 
+  specify "should create a dataset using the VALUES clause via #values" do
+    @db.values([[1, 2], [3, 4]]).map([:column1, :column2]).should == [[1, 2], [3, 4]]
+  end
+
+  specify "should support ordering and limiting with #values" do
+    @db.values([[1, 2], [3, 4]]).reverse(:column2, :column1).limit(1).map([:column1, :column2]).should == [[3, 4]]
+    @db.values([[1, 2], [3, 4]]).reverse(:column2, :column1).offset(1).map([:column1, :column2]).should == [[1, 2]]
+  end
+
+  specify "should support subqueries with #values" do
+    @db.values([[1, 2]]).from_self.cross_join(@db.values([[3, 4]]).as(:x, [:c1, :c2])).map([:column1, :column2, :c1, :c2]).should == [[1, 2, 3, 4]]
+  end
+
   specify "should respect the :read_only option per-savepoint" do
     proc{@db.transaction{@db.transaction(:savepoint=>true, :read_only=>true){@db[:public__testfk].insert}}}.should raise_error(Sequel::DatabaseError)
     proc{@db.transaction(:auto_savepoint=>true, :read_only=>true){@db.transaction(:read_only=>false){@db[:public__testfk].insert}}}.should raise_error(Sequel::DatabaseError)
@@ -257,6 +270,12 @@ describe "A PostgreSQL database" do
     ds.insert(:id=>1)
     ds.select_map(:id).should == [1]
   end
+
+  specify "should have notice receiver receive notices" do
+    a = nil
+    Sequel.connect(DB.opts.merge(:notice_receiver=>proc{|r| a = r.result_error_message})){|db| db.do("BEGIN\nRAISE WARNING 'foo';\nEND;")}
+    a.should == "WARNING:  foo\n"
+  end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_version >= 90000
 end
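The new spec above exercises the :notice_receiver Database option; in application code it would look roughly like this (the connection URL is a placeholder):

    DB = Sequel.connect('postgres://localhost/mydb',
      :notice_receiver=>proc{|result| $stderr.puts result.result_error_message})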
 
 describe "A PostgreSQL database with domain types" do
@@ -1035,6 +1054,18 @@ describe "Postgres::Dataset#insert" do
     @ds.first(:xid=>h[:xid])[:value].should == 10
   end
 
+  specify "should have insert_select respect existing returning clause" do
+    h = @ds.returning(:value___v, :xid___x).insert_select(:value=>10)
+    h[:v].should == 10
+    @ds.first(:xid=>h[:x])[:value].should == 10
+  end
+
+  specify "should have prepared insert_select respect existing returning clause" do
+    h = @ds.returning(:value___v, :xid___x).prepare(:insert_select, :insert_select, :value=>10).call
+    h[:v].should == 10
+    @ds.first(:xid=>h[:x])[:value].should == 10
+  end
+
   specify "should correctly return the inserted record's primary key value" do
     value1 = 10
     id1 = @ds.insert(:value=>value1)
@@ -2466,14 +2497,11 @@ describe 'PostgreSQL hstore handling' do
     @ds.get(h2.merge(h3).keys.length).should == 2
     @ds.get(h1.merge(h3).keys.length).should == 3
 
-    unless [:do].include?(@db.adapter_scheme)
-      # Broken DataObjects thinks operators with ? represent placeholders
-      @ds.get(h1.contain_all(%w'a c')).should == true
-      @ds.get(h1.contain_all(%w'a d')).should == false
+    @ds.get(h1.contain_all(%w'a c')).should == true
+    @ds.get(h1.contain_all(%w'a d')).should == false
 
-      @ds.get(h1.contain_any(%w'a d')).should == true
-      @ds.get(h1.contain_any(%w'e d')).should == false
-    end
+    @ds.get(h1.contain_any(%w'a d')).should == true
+    @ds.get(h1.contain_any(%w'e d')).should == false
 
     @ds.get(h1.contains(h2)).should == true
     @ds.get(h1.contains(h3)).should == false
@@ -2491,18 +2519,16 @@ describe 'PostgreSQL hstore handling' do
 
     @ds.from(Sequel.hstore('a'=>'b', 'c'=>nil).op.each).order(:key).all.should == [{:key=>'a', :value=>'b'}, {:key=>'c', :value=>nil}]
 
-    unless [:do].include?(@db.adapter_scheme)
-      @ds.get(h1.has_key?('c')).should == true
-      @ds.get(h1.include?('c')).should == true
-      @ds.get(h1.key?('c')).should == true
-      @ds.get(h1.member?('c')).should == true
-      @ds.get(h1.exist?('c')).should == true
-      @ds.get(h1.has_key?('d')).should == false
-      @ds.get(h1.include?('d')).should == false
-      @ds.get(h1.key?('d')).should == false
-      @ds.get(h1.member?('d')).should == false
-      @ds.get(h1.exist?('d')).should == false
-    end
+    @ds.get(h1.has_key?('c')).should == true
+    @ds.get(h1.include?('c')).should == true
+    @ds.get(h1.key?('c')).should == true
+    @ds.get(h1.member?('c')).should == true
+    @ds.get(h1.exist?('c')).should == true
+    @ds.get(h1.has_key?('d')).should == false
+    @ds.get(h1.include?('d')).should == false
+    @ds.get(h1.key?('d')).should == false
+    @ds.get(h1.member?('d')).should == false
+    @ds.get(h1.exist?('d')).should == false
 
     @ds.get(h1.hstore.hstore.hstore.keys.length).should == 2
     @ds.get(h1.keys.length).should == 2
@@ -3022,6 +3048,7 @@ describe 'PostgreSQL interval types' do
       ['1 second', '00:00:01', 1, [[:seconds, 1]]],
       ['1 minute', '00:01:00', 60, [[:seconds, 60]]],
       ['1 hour', '01:00:00', 3600, [[:seconds, 3600]]],
+      ['123000 hours', '123000:00:00', 442800000, [[:seconds, 442800000]]],
       ['1 day', '1 day', 86400, [[:days, 1]]],
       ['1 week', '7 days', 86400*7, [[:days, 7]]],
       ['1 month', '1 mon', 86400*30, [[:months, 1]]],
@@ -3390,7 +3417,8 @@ describe 'pg_static_cache_updater extension' do
     q = Queue.new
     q1 = Queue.new
 
-    @db.listen_for_static_cache_updates(@Thing, :timeout=>0, :loop=>proc{q.push(nil); q1.pop.call})
+    @db.listen_for_static_cache_updates(@Thing, :timeout=>0, :loop=>proc{q.push(nil); q1.pop.call}, :before_thread_exit=>proc{q.push(nil)})
+
     q.pop
     q1.push(proc{@db[:things].insert(1, 'A')})
     q.pop
@@ -3405,5 +3433,47 @@ describe 'pg_static_cache_updater extension' do
     @Thing.all.should == []
 
     q1.push(proc{throw :stop})
+    q.pop
   end
 end if DB.adapter_scheme == :postgres && SEQUEL_POSTGRES_USES_PG && DB.server_version >= 90000
+
+describe 'PostgreSQL enum types' do
+  before(:all) do
+    @db = DB
+    @db.extension :pg_array, :pg_enum
+    @db.create_enum(:test_enum, %w'a b c d')
+
+    @db.create_table!(:test_enumt) do
+      test_enum  :t
+    end
+  end
+  after(:all) do
+    @db.drop_table?(:test_enumt)
+    @db.drop_enum(:test_enum)
+  end
+
+  specify "should return correct entries in the schema" do
+    s = @db.schema(:test_enumt)
+    s.first.last[:type].should == :enum
+    s.first.last[:enum_values].should == %w'a b c d'
+  end
+
+  it "should add array parsers for enum values" do
+    @db.get(Sequel.pg_array(%w'a b', :test_enum)).should == %w'a b'
+  end if DB.adapter_scheme == :postgres || DB.adapter_scheme == :jdbc
+
+  it "should set up model typecasting correctly" do
+    c = Class.new(Sequel::Model(:test_enumt))
+    o = c.new
+    o.t = :a
+    o.t.should == 'a'
+  end
+
+  it "should add values to existing enum" do
+    @db.add_enum_value(:test_enum, 'e')
+    @db.add_enum_value(:test_enum, 'f', :after=>'a')
+    @db.add_enum_value(:test_enum, 'g', :before=>'b')
+    @db.add_enum_value(:test_enum, 'a', :if_not_exists=>true) if @db.server_version >= 90300
+    @db.schema(:test_enumt, :reload=>true).first.last[:enum_values].should == %w'a f g b c d e'
+  end if DB.server_version >= 90100
+end
diff --git a/spec/bin_spec.rb b/spec/bin_spec.rb
index 7a2c1d1..20f077b 100644
--- a/spec/bin_spec.rb
+++ b/spec/bin_spec.rb
@@ -56,6 +56,7 @@ describe "bin/sequel" do
     bin(:args=>'-c "print DB.tables.inspect"').should == '[]'
     DB.create_table(:a){Integer :a}
     bin(:args=>'-c "print DB.tables.inspect"').should == '[:a]'
+    bin(:args=>'-v -c "print DB.tables.inspect"').should == "sequel #{Sequel.version}\n[:a]"
   end
 
   it "-C should copy databases" do
@@ -88,8 +89,8 @@ END
     DB2.tables.sort_by{|t| t.to_s}.should == [:a, :b]
     DB[:a].all.should == [{:a=>1, :name=>'foo'}]
     DB[:b].all.should == [{:a=>1}]
-    DB2.schema(:a).should == [[:a, {:allow_null=>false, :default=>nil, :primary_key=>true, :db_type=>"integer", :type=>:integer, :ruby_default=>nil}], [:name, {:allow_null=>true, :default=>nil, :primary_key=>false, :db_type=>"varchar(255)", :type=>:string, :ruby_default=>nil}]]
-    DB2.schema(:b).should == [[:a, {:allow_null=>true, :default=>nil, :primary_key=>false, :db_type=>"integer", :type=>:integer, :ruby_default=>nil}]]
+    DB2.schema(:a).map{|col, sch| [col, *sch.values_at(:allow_null, :default, :primary_key, :db_type, :type, :ruby_default)]}.should == [[:a, false, nil, true, "integer", :integer, nil], [:name, true, nil, false, "varchar(255)", :string, nil]]
+    DB2.schema(:b).map{|col, sch| [col, *sch.values_at(:allow_null, :default, :primary_key, :db_type, :type, :ruby_default)]}.should == [[:a, true, nil, false, "integer", :integer, nil]]
     DB2.indexes(:a).should == {}
     DB2.indexes(:b).should == {:b_a_index=>{:unique=>false, :columns=>[:a]}}
     DB2.foreign_key_list(:a).should == []
@@ -188,7 +189,7 @@ END
     bin(:args=>'-t -c "lambda{lambda{lambda{raise \'foo\'}.call}.call}.call"', :stderr=>true).count("\n").should > 3
   end
 
-  it "-v should output the Sequel version" do
+  it "-v should output the Sequel version and exit if database is not given" do
     bin(:args=>"-v", :no_conn=>true).should == "sequel #{Sequel.version}\n"
   end
 
@@ -201,6 +202,7 @@ END
     bin(:args=>'-D -d', :stderr=>true).should == "Error: Cannot specify -D and -d together\n"
     bin(:args=>'-m foo -d', :stderr=>true).should == "Error: Cannot specify -m and -d together\n"
     bin(:args=>'-S foo -d', :stderr=>true).should == "Error: Cannot specify -S and -d together\n"
+    bin(:args=>'-S foo -C', :stderr=>true).should == "Error: Cannot specify -S and -C together\n"
   end
 
   it "should use a mock database if no database is given" do
@@ -243,6 +245,7 @@ END
     bin(:post=>TMP_FILE).should == '[]'
     DB.create_table(:a){Integer :a}
     bin(:post=>TMP_FILE).should == '[:a]'
+    bin(:post=>TMP_FILE, :args=>'-v').should == "sequel #{Sequel.version}\n[:a]"
   end
 
   it "should run code provided on stdin" do
diff --git a/spec/core/database_spec.rb b/spec/core/database_spec.rb
index b827d3b..c6cff40 100644
--- a/spec/core/database_spec.rb
+++ b/spec/core/database_spec.rb
@@ -1290,6 +1290,12 @@ describe "A broken adapter (lib is there but the class is not)" do
   end
 end
 
+describe "Sequel::Database.load_adapter" do
+  specify "should not raise an error if subadapter does not exist" do
+    Sequel::Database.load_adapter(:foo, :subdir=>'bar').should == nil
+  end
+end
+
 describe "A single threaded database" do
   after do
     Sequel::Database.single_threaded = false
diff --git a/spec/core/dataset_spec.rb b/spec/core/dataset_spec.rb
index 0f18a02..5dd648b 100644
--- a/spec/core/dataset_spec.rb
+++ b/spec/core/dataset_spec.rb
@@ -982,22 +982,17 @@ describe "Dataset#literal" do
     d.literal(d).should == "(#{d.sql})"
   end
   
-  specify "should literalize Sequel::SQLTime properly" do
-    t = Sequel::SQLTime.now
-    s = t.strftime("'%H:%M:%S")
-    @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.usec)}'"
+  specify "should literalize times properly" do
+    @dataset.literal(Sequel::SQLTime.create(1, 2, 3, 500000)).should == "'01:02:03.500000'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5, 500000)).should == "'2010-01-02 03:04:05.500000'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(55, 10))).should == "'2010-01-02 03:04:05.500000'"
   end
   
-  specify "should literalize Time properly" do
-    t = Time.now
-    s = t.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.usec)}'"
-  end
-  
-  specify "should literalize DateTime properly" do
-    t = DateTime.now
-    s = t.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.sec_fraction * (RUBY_VERSION < '1.9.0' ? 86400000000 : 1000000))}'"
+  specify "should literalize times properly for databases supporting millisecond precision" do
+    meta_def(@dataset, :timestamp_precision){3}
+    @dataset.literal(Sequel::SQLTime.create(1, 2, 3, 500000)).should == "'01:02:03.500'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5, 500000)).should == "'2010-01-02 03:04:05.500'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(55, 10))).should == "'2010-01-02 03:04:05.500'"
   end
   
   specify "should literalize Date properly" do
@@ -1015,52 +1010,19 @@ describe "Dataset#literal" do
 
   specify "should literalize Time, DateTime, Date properly if SQL standard format is required" do
     meta_def(@dataset, :requires_sql_standard_datetimes?){true}
-
-    t = Time.now
-    s = t.strftime("TIMESTAMP '%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.usec)}'"
-
-    t = DateTime.now
-    s = t.strftime("TIMESTAMP '%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.sec_fraction* (RUBY_VERSION < '1.9.0' ? 86400000000 : 1000000))}'"
-
-    d = Date.today
-    s = d.strftime("DATE '%Y-%m-%d'")
-    @dataset.literal(d).should == s
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5, 500000)).should == "TIMESTAMP '2010-01-02 03:04:05.500000'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(55, 10))).should == "TIMESTAMP '2010-01-02 03:04:05.500000'"
+    @dataset.literal(Date.new(2010, 1, 2)).should == "DATE '2010-01-02'"
   end
   
   specify "should literalize Time and DateTime properly if the database support timezones in timestamps" do
     meta_def(@dataset, :supports_timestamp_timezones?){true}
+    @dataset.literal(Time.utc(2010, 1, 2, 3, 4, 5, 500000)).should == "'2010-01-02 03:04:05.500000+0000'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(55, 10))).should == "'2010-01-02 03:04:05.500000+0000'"
 
-    t = Time.now.utc
-    s = t.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.usec)}+0000'"
-
-    t = DateTime.now.new_offset(0)
-    s = t.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}.#{sprintf('%06i', t.sec_fraction* (RUBY_VERSION < '1.9.0' ? 86400000000 : 1000000))}+0000'"
-  end
-  
-  specify "should literalize Time and DateTime properly if the database doesn't support usecs in timestamps" do
     meta_def(@dataset, :supports_timestamp_usecs?){false}
-    
-    t = Time.now.utc
-    s = t.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}'"
-
-    t = DateTime.now.new_offset(0)
-    s = t.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}'"
-    
-    meta_def(@dataset, :supports_timestamp_timezones?){true}
-    
-    t = Time.now.utc
-    s = t.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}+0000'"
-
-    t = DateTime.now.new_offset(0)
-    s = t.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}+0000'"
+    @dataset.literal(Time.utc(2010, 1, 2, 3, 4, 5)).should == "'2010-01-02 03:04:05+0000'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, 5)).should == "'2010-01-02 03:04:05+0000'"
   end
   
   specify "should not modify literal strings" do
@@ -2127,12 +2089,20 @@ describe "Dataset#join_table" do
     @d.cross_join(:categories).sql.should == 'SELECT * FROM "items" CROSS JOIN "categories"'
   end
   
-  specify "should raise an error if additional arguments are provided to join methods that don't take conditions" do
-    proc{@d.natural_join(:categories, :id=>:id)}.should raise_error(ArgumentError)
-    proc{@d.natural_left_join(:categories, :id=>:id)}.should raise_error(ArgumentError)
-    proc{@d.natural_right_join(:categories, :id=>:id)}.should raise_error(ArgumentError)
-    proc{@d.natural_full_join(:categories, :id=>:id)}.should raise_error(ArgumentError)
-    proc{@d.cross_join(:categories, :id=>:id)}.should raise_error(ArgumentError)
+  specify "should support options hashes for join methods that don't take conditions" do
+    @d.natural_join(:categories, :table_alias=>:a).sql.should == 'SELECT * FROM "items" NATURAL JOIN "categories" AS "a"'
+    @d.natural_left_join(:categories, :table_alias=>:a).sql.should == 'SELECT * FROM "items" NATURAL LEFT JOIN "categories" AS "a"'
+    @d.natural_right_join(:categories, :table_alias=>:a).sql.should == 'SELECT * FROM "items" NATURAL RIGHT JOIN "categories" AS "a"'
+    @d.natural_full_join(:categories, :table_alias=>:a).sql.should == 'SELECT * FROM "items" NATURAL FULL JOIN "categories" AS "a"'
+    @d.cross_join(:categories, :table_alias=>:a).sql.should == 'SELECT * FROM "items" CROSS JOIN "categories" AS "a"'
+  end
+
+  specify "should raise an error if non-hash arguments are provided to join methods that don't take conditions" do
+    proc{@d.natural_join(:categories, nil)}.should raise_error(Sequel::Error)
+    proc{@d.natural_left_join(:categories, nil)}.should raise_error(Sequel::Error)
+    proc{@d.natural_right_join(:categories, nil)}.should raise_error(Sequel::Error)
+    proc{@d.natural_full_join(:categories, nil)}.should raise_error(Sequel::Error)
+    proc{@d.cross_join(:categories, nil)}.should raise_error(Sequel::Error)
   end
 
   specify "should raise an error if blocks are provided to join methods that don't pass them" do
@@ -2184,6 +2154,10 @@ describe "Dataset#join_table" do
     @d.from('stats').join('players', {:id => :player_id}, :implicit_qualifier=>:p).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" ON ("players"."id" = "p"."player_id")'
   end
   
+  specify "should support the :reset_implicit_qualifier option" do
+    @d.from(:stats).join(:a, [:b], :reset_implicit_qualifier=>false).join(:players, {:id => :player_id}).sql.should == 'SELECT * FROM "stats" INNER JOIN "a" USING ("b") INNER JOIN "players" ON ("players"."id" = "stats"."player_id")'
+  end
+  
   specify "should default :qualify option to default_join_table_qualification" do
     def @d.default_join_table_qualification; false; end
     @d.from('stats').join(:players, :id => :player_id).sql.should == 'SELECT * FROM "stats" INNER JOIN "players" ON ("id" = "player_id")'
@@ -3237,6 +3211,11 @@ describe "Dataset#insert_sql" do
   specify "should accept an array of columns and an LiteralString" do
     @ds.insert_sql([:a, :b, :c], Sequel.lit('VALUES (1, 2, 3)')).should == "INSERT INTO items (a, b, c) VALUES (1, 2, 3)"
   end
+
+  specify "should use unaliased table name" do
+    @ds.from(:items___i).insert_sql(1).should == "INSERT INTO items VALUES (1)"
+    @ds.from(Sequel.as(:items, :i)).insert_sql(1).should == "INSERT INTO items VALUES (1)"
+  end
 end
 
 describe "Dataset#inspect" do
@@ -3445,7 +3424,7 @@ describe "Dataset prepared statements and bound variables " do
   before do
     @db = Sequel.mock
     @ds = @db[:items]
-    meta_def(@ds, :insert_sql){|*v| "#{super(*v)}#{' RETURNING *' if opts.has_key?(:returning)}" }
+    meta_def(@ds, :insert_select_sql){|*v| "#{insert_sql(*v)} RETURNING *" }
   end
   
   specify "#call should take a type and bind hash and interpolate it" do
@@ -3923,7 +3902,12 @@ describe "Sequel timezone support" do
     @dataset = @db.dataset
     meta_def(@dataset, :supports_timestamp_timezones?){true}
     meta_def(@dataset, :supports_timestamp_usecs?){false}
-    @offset = sprintf("%+03i%02i", *(Time.now.utc_offset/60).divmod(60))
+    @utc_time = Time.utc(2010, 1, 2, 3, 4, 5)
+    @local_time = Time.local(2010, 1, 2, 3, 4, 5)
+    @offset = sprintf("%+03i%02i", *(@local_time.utc_offset/60).divmod(60))
+    @dt_offset = @local_time.utc_offset/Rational(86400, 1)
+    @utc_datetime = DateTime.new(2010, 1, 2, 3, 4, 5)
+    @local_datetime = DateTime.new(2010, 1, 2, 3, 4, 5, @dt_offset)
   end
   after do
     Sequel.default_timezone = nil
@@ -3932,50 +3916,26 @@ describe "Sequel timezone support" do
   
   specify "should handle an database timezone of :utc when literalizing values" do
     Sequel.database_timezone = :utc
-
-    t = Time.now
-    s = t.getutc.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}+0000'"
-
-    t = DateTime.now
-    s = t.new_offset(0).strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}+0000'"
+    @dataset.literal(Time.utc(2010, 1, 2, 3, 4, 5)).should == "'2010-01-02 03:04:05+0000'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, 5)).should == "'2010-01-02 03:04:05+0000'"
   end
   
   specify "should handle an database timezone of :local when literalizing values" do
     Sequel.database_timezone = :local
-
-    t = Time.now.utc
-    s = t.getlocal.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}#{@offset}'"
-
-    t = DateTime.now.new_offset(0)
-    s = t.new_offset(DateTime.now.offset).strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}#{@offset}'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5)).should == "'2010-01-02 03:04:05#{@offset}'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, 5, @dt_offset)).should == "'2010-01-02 03:04:05#{@offset}'"
   end
   
   specify "should have Database#timezone override Sequel.database_timezone" do
     Sequel.database_timezone = :local
     @db.timezone = :utc
-
-    t = Time.now
-    s = t.getutc.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}+0000'"
-
-    t = DateTime.now
-    s = t.new_offset(0).strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}+0000'"
+    @dataset.literal(Time.utc(2010, 1, 2, 3, 4, 5)).should == "'2010-01-02 03:04:05+0000'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, 5)).should == "'2010-01-02 03:04:05+0000'"
 
     Sequel.database_timezone = :utc
     @db.timezone = :local
-
-    t = Time.now.utc
-    s = t.getlocal.strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}#{@offset}'"
-
-    t = DateTime.now.new_offset(0)
-    s = t.new_offset(DateTime.now.offset).strftime("'%Y-%m-%d %H:%M:%S")
-    @dataset.literal(t).should == "#{s}#{@offset}'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5)).should == "'2010-01-02 03:04:05#{@offset}'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, 5, @dt_offset)).should == "'2010-01-02 03:04:05#{@offset}'"
   end
   
   specify "should handle converting database timestamps into application timestamps" do
@@ -4406,9 +4366,10 @@ end
 
 describe "Dataset#returning" do
   before do
-    @ds = Sequel.mock(:fetch=>proc{|s| {:foo=>s}})[:t].returning(:foo)
+    @db = Sequel.mock(:fetch=>proc{|s| {:foo=>s}})
+    @db.extend_datasets{def supports_returning?(type) true end}
+    @ds = @db[:t].returning(:foo)
     @pr = proc do
-      def @ds.supports_returning?(*) true end
       sc = class << @ds; self; end
       Sequel::Dataset.def_sql_method(sc, :delete, %w'delete from where returning')
       Sequel::Dataset.def_sql_method(sc, :insert, %w'insert into columns values returning')
@@ -4446,6 +4407,11 @@ describe "Dataset#returning" do
     @ds.insert(1).should == [{:foo=>"INSERT INTO t VALUES (1) RETURNING foo"}]
     @ds.update(:foo=>1).should == [{:foo=>"UPDATE t SET foo = 1 RETURNING foo"}]
   end
+
+  specify "should raise an error if RETURNING is not supported" do
+    @db.extend_datasets{def supports_returning?(type) false end}
+    proc{@db[:t].returning}.should raise_error(Sequel::Error)
+  end
 end
 
 describe "Dataset emulating bitwise operator support" do
@@ -4931,3 +4897,59 @@ describe "Dataset emulated complex expression operators" do
     @ds.literal(~@n).should == "((0 - x) - 1)"
   end
 end
+
+describe "#joined_dataset?" do
+  before do
+    @ds = Sequel.mock.dataset
+  end
+
+  it "should be false if the dataset has 0 or 1 from table" do
+    @ds.joined_dataset?.should == false
+    @ds.from(:a).joined_dataset?.should == false
+  end
+
+  it "should be true if the dataset has 2 or more from tables" do
+    @ds.from(:a, :b).joined_dataset?.should == true
+    @ds.from(:a, :b, :c).joined_dataset?.should == true
+  end
+
+  it "should be true if the dataset has any join tables" do
+    @ds.from(:a).cross_join(:b).joined_dataset?.should == true
+  end
+end
+
+describe "#unqualified_column_for" do
+  before do
+    @ds = Sequel.mock.dataset
+  end
+
+  it "should handle Symbols" do
+    @ds.unqualified_column_for(:a).should == Sequel.identifier('a')
+    @ds.unqualified_column_for(:b__a).should == Sequel.identifier('a')
+    @ds.unqualified_column_for(:a___c).should == Sequel.identifier('a').as('c')
+    @ds.unqualified_column_for(:b__a___c).should == Sequel.identifier('a').as('c')
+  end
+
+  it "should handle SQL::Identifiers" do
+    @ds.unqualified_column_for(Sequel.identifier(:a)).should == Sequel.identifier(:a)
+  end
+
+  it "should handle SQL::QualifiedIdentifiers" do
+    @ds.unqualified_column_for(Sequel.qualify(:b, :a)).should == Sequel.identifier('a')
+    @ds.unqualified_column_for(Sequel.qualify(:b, 'a')).should == Sequel.identifier('a')
+  end
+
+  it "should handle SQL::AliasedExpressions" do
+    @ds.unqualified_column_for(Sequel.qualify(:b, :a).as(:c)).should == Sequel.identifier('a').as(:c)
+  end
+
+  it "should return nil for other objects" do
+    @ds.unqualified_column_for(Object.new).should == nil
+    @ds.unqualified_column_for('a').should == nil
+  end
+
+  it "should return nil for other objects inside SQL::AliasedExpressions" do
+    @ds.unqualified_column_for(Sequel.as(Object.new, 'a')).should == nil
+    @ds.unqualified_column_for(Sequel.as('a', 'b')).should == nil
+  end
+end
diff --git a/spec/core/object_graph_spec.rb b/spec/core/object_graph_spec.rb
index 621ac49..6832ce8 100644
--- a/spec/core/object_graph_spec.rb
+++ b/spec/core/object_graph_spec.rb
@@ -58,6 +58,11 @@ describe Sequel::Dataset, "graphing" do
       ds.sql.should == 'SELECT points.id, points.x AS y, lines.id AS lines_id, lines.x, lines.y AS lines_y, lines.graph_id FROM points LEFT OUTER JOIN lines ON (lines.x = points.id)'
     end
 
+    it "should requalify currently selected columns in new graph if current dataset joins tables" do
+      ds = @ds1.cross_join(:lines).select(:points__id, :lines__id___lid, :lines__x, :lines__y).graph(@ds3, :x=>:id)
+      ds.sql.should == 'SELECT points.id, points.lid, points.x, points.y, graphs.id AS graphs_id, graphs.name, graphs.x AS graphs_x, graphs.y AS graphs_y, graphs.lines_x FROM (SELECT points.id, lines.id AS lid, lines.x, lines.y FROM points CROSS JOIN lines) AS points LEFT OUTER JOIN graphs ON (graphs.x = points.id)'
+    end
+
     it "should raise error if currently selected expressions cannot be handled" do
       proc{@ds1.select(1).graph(@ds2, :x=>:id)}.should raise_error(Sequel::Error)
     end
diff --git a/spec/core/schema_spec.rb b/spec/core/schema_spec.rb
index 20b4535..fa64ce1 100644
--- a/spec/core/schema_spec.rb
+++ b/spec/core/schema_spec.rb
@@ -1550,7 +1550,7 @@ describe "Schema Parser" do
   specify "should correctly parse all supported data types" do
     sm = Module.new do
       def schema_parse_table(t, opts)
-        [[:x, {:type=>schema_column_type(t.to_s)}]]
+        [[:x, {:db_type=>t.to_s, :type=>schema_column_type(t.to_s)}]]
       end
     end
     @db.extend(sm)
@@ -1563,6 +1563,7 @@ describe "Schema Parser" do
     @db.schema(:"character varying").first.last[:type].should == :string
     @db.schema(:varchar).first.last[:type].should == :string
     @db.schema(:"varchar(255)").first.last[:type].should == :string
+    @db.schema(:"varchar(255)").first.last[:max_length].should == 255
     @db.schema(:text).first.last[:type].should == :string
     @db.schema(:date).first.last[:type].should == :date
     @db.schema(:datetime).first.last[:type].should == :datetime
diff --git a/spec/extensions/auto_validations_spec.rb b/spec/extensions/auto_validations_spec.rb
index 7cec975..958ab9d 100644
--- a/spec/extensions/auto_validations_spec.rb
+++ b/spec/extensions/auto_validations_spec.rb
@@ -2,13 +2,13 @@ require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
 
 describe "Sequel::Plugins::AutoValidations" do
   before do
-    db = Sequel.mock(:fetch=>{:v=>1})
+    db = Sequel.mock(:fetch=>proc{|sql| sql =~ /a{51}/ ? {:v=>0} : {:v=>1}})
     def db.schema_parse_table(*) true; end
     def db.schema(t, *)
       t = t.first_source if t.is_a?(Sequel::Dataset)
       return [] if t != :test
       [[:id, {:primary_key=>true, :type=>:integer, :allow_null=>false}],
-       [:name, {:primary_key=>false, :type=>:string, :allow_null=>false}],
+       [:name, {:primary_key=>false, :type=>:string, :allow_null=>false, :max_length=>50}],
        [:num, {:primary_key=>false, :type=>:integer, :allow_null=>true}],
        [:d, {:primary_key=>false, :type=>:date, :allow_null=>false}],
        [:nnd, {:primary_key=>false, :type=>:string, :allow_null=>false, :ruby_default=>'nnd'}]]
@@ -42,6 +42,10 @@ describe "Sequel::Plugins::AutoValidations" do
     @m.set(:d=>Date.today, :num=>1)
     @m.valid?.should == false
     @m.errors.should == {[:name, :num]=>["is already taken"]}
+
+    @m.set(:name=>'a'*51)
+    @m.valid?.should == false
+    @m.errors.should == {:name=>["is longer than 50 characters"]}
   end
 
   it "should handle databases that don't support index parsing" do
@@ -86,6 +90,13 @@ describe "Sequel::Plugins::AutoValidations" do
 
     @c.skip_auto_validations(:unique)
     @m.valid?.should == true
+
+    @m.set(:name=>'a'*51)
+    @m.valid?.should == false
+    @m.errors.should == {:name=>["is longer than 50 characters"]}
+
+    @c.skip_auto_validations(:max_length)
+    @m.valid?.should == true
   end
 
   it "should allow skipping all auto validations" do
@@ -95,6 +106,8 @@ describe "Sequel::Plugins::AutoValidations" do
     @m.valid?.should == true
     @m.set(:d=>'/', :num=>'a', :name=>'1')
     @m.valid?.should == true
+    @m.set(:name=>'a'*51)
+    @m.valid?.should == true
   end
 
   it "should work correctly in subclasses" do
@@ -110,6 +123,10 @@ describe "Sequel::Plugins::AutoValidations" do
     @m.set(:d=>Date.today, :num=>1)
     @m.valid?.should == false
     @m.errors.should == {[:name, :num]=>["is already taken"]}
+
+    @m.set(:name=>'a'*51)
+    @m.valid?.should == false
+    @m.errors.should == {:name=>["is longer than 50 characters"]}
   end
 
   it "should work correctly in STI subclasses" do
@@ -128,6 +145,10 @@ describe "Sequel::Plugins::AutoValidations" do
     @m.valid?.should == false
     @m.errors.should == {[:name, :num]=>["is already taken"]}
     @m.db.sqls.should == ["SELECT count(*) AS count FROM test WHERE ((name = '1') AND (num = 1)) LIMIT 1"]
+
+    @m.set(:name=>'a'*51)
+    @m.valid?.should == false
+    @m.errors.should == {:name=>["is longer than 50 characters"]}
   end
 
   it "should work correctly when changing the dataset" do
diff --git a/spec/extensions/class_table_inheritance_spec.rb b/spec/extensions/class_table_inheritance_spec.rb
index 028fd2a..ac4fad6 100644
--- a/spec/extensions/class_table_inheritance_spec.rb
+++ b/spec/extensions/class_table_inheritance_spec.rb
@@ -48,8 +48,8 @@ describe "class_table_inheritance plugin" do
     Object.send(:remove_const, :Employee)
   end
 
-  specify "should have simple_table = nil for subclasses" do
-    Employee.simple_table.should == "employees"
+  specify "should have simple_table = nil for all classes" do
+    Employee.simple_table.should == nil
     Manager.simple_table.should == nil
     Executive.simple_table.should == nil
     Staff.simple_table.should == nil
@@ -62,10 +62,10 @@ describe "class_table_inheritance plugin" do
     end
 
   specify "should use a joined dataset in subclasses" do
-    Employee.dataset.sql.should == 'SELECT * FROM employees'
-    Manager.dataset.sql.should == 'SELECT * FROM employees INNER JOIN managers USING (id)'
-    Executive.dataset.sql.should == 'SELECT * FROM employees INNER JOIN managers USING (id) INNER JOIN executives USING (id)'
-    Staff.dataset.sql.should == 'SELECT * FROM employees INNER JOIN staff USING (id)'
+    Employee.dataset.sql.should == 'SELECT employees.id, employees.name, employees.kind FROM employees'
+    Manager.dataset.sql.should == 'SELECT employees.id, employees.name, employees.kind, managers.num_staff FROM employees INNER JOIN managers ON (managers.id = employees.id)'
+    Executive.dataset.sql.should == 'SELECT employees.id, employees.name, employees.kind, managers.num_staff, executives.num_managers FROM employees INNER JOIN managers ON (managers.id = employees.id) INNER JOIN executives ON (executives.id = managers.id)'
+    Staff.dataset.sql.should == 'SELECT employees.id, employees.name, employees.kind, staff.manager_id FROM employees INNER JOIN staff ON (staff.id = employees.id)'
   end
   
   it "should return rows with the correct class based on the polymorphic_key value" do
@@ -80,7 +80,7 @@ describe "class_table_inheritance plugin" do
   
   it "should return rows with the current class if cti_key is nil" do
     Employee.plugin(:class_table_inheritance)
-    @ds._fetch = [{:kind=>'Employee'}, {:kind=>'Manager'}, {:kind=>'Executive'}, {:kind=>'Staff'}]
+    Employee.dataset._fetch = [{:kind=>'Employee'}, {:kind=>'Manager'}, {:kind=>'Executive'}, {:kind=>'Staff'}]
     Employee.all.collect{|x| x.class}.should == [Employee, Employee, Employee, Employee]
   end
   
@@ -147,22 +147,27 @@ describe "class_table_inheritance plugin" do
     o.valid?.should == true
   end
 
+  it "should set the type column field even when not validating" do
+    Employee.new.save(:validate=>false)
+    @db.sqls.should == ["INSERT INTO employees (kind) VALUES ('Employee')"]
+  end
+
   it "should raise an error if attempting to create an anonymous subclass" do
     proc{Class.new(Manager)}.should raise_error(Sequel::Error)
   end
 
   it "should allow specifying a map of names to tables to override implicit mapping" do
-    Manager.dataset.sql.should == 'SELECT * FROM employees INNER JOIN managers USING (id)'
-    Staff.dataset.sql.should == 'SELECT * FROM employees INNER JOIN staff USING (id)'
+    Manager.dataset.sql.should == 'SELECT employees.id, employees.name, employees.kind, managers.num_staff FROM employees INNER JOIN managers ON (managers.id = employees.id)'
+    Staff.dataset.sql.should == 'SELECT employees.id, employees.name, employees.kind, staff.manager_id FROM employees INNER JOIN staff ON (staff.id = employees.id)'
   end
 
   it "should lazily load attributes for columns in subclass tables" do
     Manager.instance_dataset._fetch = Manager.dataset._fetch = {:id=>1, :name=>'J', :kind=>'Executive', :num_staff=>2}
     m = Manager[1]
-    @db.sqls.should == ['SELECT * FROM employees INNER JOIN managers USING (id) WHERE (id = 1) LIMIT 1']
+    @db.sqls.should == ['SELECT employees.id, employees.name, employees.kind, managers.num_staff FROM employees INNER JOIN managers ON (managers.id = employees.id) WHERE (managers.id = 1) LIMIT 1']
     Executive.instance_dataset._fetch = Executive.dataset._fetch = {:num_managers=>3}
     m.num_managers.should == 3
-    @db.sqls.should == ['SELECT num_managers FROM employees INNER JOIN managers USING (id) INNER JOIN executives USING (id) WHERE (id = 1) LIMIT 1']
+    @db.sqls.should == ['SELECT executives.num_managers FROM employees INNER JOIN managers ON (managers.id = employees.id) INNER JOIN executives ON (executives.id = managers.id) WHERE (executives.id = 1) LIMIT 1']
     m.values.should == {:id=>1, :name=>'J', :kind=>'Executive', :num_staff=>2, :num_managers=>3}
   end
 
@@ -224,12 +229,12 @@ describe "class_table_inheritance plugin" do
   it "should handle many_to_one relationships correctly" do
     Manager.dataset._fetch = {:id=>3, :name=>'E', :kind=>'Executive', :num_managers=>3}
     Staff.load(:manager_id=>3).manager.should == Executive.load(:id=>3, :name=>'E', :kind=>'Executive', :num_managers=>3)
-    @db.sqls.should == ['SELECT * FROM employees INNER JOIN managers USING (id) WHERE (id = 3) LIMIT 1']
+    @db.sqls.should == ['SELECT employees.id, employees.name, employees.kind, managers.num_staff FROM employees INNER JOIN managers ON (managers.id = employees.id) WHERE (managers.id = 3) LIMIT 1']
   end
   
   it "should handle one_to_many relationships correctly" do
     Staff.dataset._fetch = {:id=>1, :name=>'S', :kind=>'Staff', :manager_id=>3}
     Executive.load(:id=>3).staff_members.should == [Staff.load(:id=>1, :name=>'S', :kind=>'Staff', :manager_id=>3)]
-    @db.sqls.should == ['SELECT * FROM employees INNER JOIN staff USING (id) WHERE (staff.manager_id = 3)']
+    @db.sqls.should == ['SELECT employees.id, employees.name, employees.kind, staff.manager_id FROM employees INNER JOIN staff ON (staff.id = employees.id) WHERE (staff.manager_id = 3)']
   end
 end
diff --git a/spec/extensions/column_select_spec.rb b/spec/extensions/column_select_spec.rb
new file mode 100644
index 0000000..d8ea956
--- /dev/null
+++ b/spec/extensions/column_select_spec.rb
@@ -0,0 +1,108 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+
+describe "Sequel::Plugins::ColumnSelect" do
+  def set_cols(*cols)
+    @cols.replace(cols)
+  end
+
+  before do
+    cols = @cols = []
+    @db = Sequel.mock
+    @db.extend_datasets(Module.new{define_method(:columns){cols}})
+    set_cols :id, :a, :b, :c
+    @Album = Class.new(Sequel::Model(@db[:albums]))
+  end
+
+  it "should add a explicit column selections to existing dataset without explicit selection" do
+    @Album.plugin :column_select
+    @Album.dataset.sql.should == 'SELECT albums.id, albums.a, albums.b, albums.c FROM albums'
+
+    @Album.dataset = :albs
+    @Album.dataset.sql.should == 'SELECT albs.id, albs.a, albs.b, albs.c FROM albs'
+
+    @Album.dataset = Sequel.identifier(:albs)
+    @Album.dataset.sql.should == 'SELECT albs.id, albs.a, albs.b, albs.c FROM albs'
+  end
+
+  it "should handle qualified tables" do
+    @Album.dataset = :s__albums
+    @Album.plugin :column_select
+    @Album.dataset.sql.should == 'SELECT s.albums.id, s.albums.a, s.albums.b, s.albums.c FROM s.albums'
+
+    @Album.dataset = Sequel.qualify(:s2, :albums)
+    @Album.dataset.sql.should == 'SELECT s2.albums.id, s2.albums.a, s2.albums.b, s2.albums.c FROM s2.albums'
+  end
+
+  it "should handle aliases" do
+    @Album.dataset = :albums___a
+    @Album.plugin :column_select
+    @Album.dataset.sql.should == 'SELECT a.id, a.a, a.b, a.c FROM albums AS a'
+
+    @Album.dataset = Sequel.as(:albums, :b)
+    @Album.dataset.sql.should == 'SELECT b.id, b.a, b.b, b.c FROM albums AS b'
+
+    @Album.dataset = :s__albums___a
+    @Album.dataset.sql.should == 'SELECT a.id, a.a, a.b, a.c FROM s.albums AS a'
+
+    @Album.dataset = @Album.db[:albums].from_self
+    @Album.dataset.sql.should == 'SELECT t1.id, t1.a, t1.b, t1.c FROM (SELECT * FROM albums) AS t1'
+
+    @Album.dataset = Sequel.as(@Album.db[:albums], :b)
+    @Album.dataset.sql.should == 'SELECT b.id, b.a, b.b, b.c FROM (SELECT * FROM albums) AS b'
+  end
+
+  it "should not add a explicit column selection selection on existing dataset with explicit selection" do
+    @Album.dataset = @Album.dataset.select(:name)
+    @Album.plugin :column_select
+    @Album.dataset.sql.should == 'SELECT name FROM albums'
+
+    @Album.dataset = @Album.dataset.select(:name, :artist)
+    @Album.dataset.sql.should == 'SELECT name, artist FROM albums'
+  end
+
+  it "should not add a explicit column selection on existing dataset with multiple tables" do
+    @Album.dataset = @Album.db.from(:a1, :a2)
+    @Album.plugin :column_select
+    @Album.dataset.sql.should == 'SELECT * FROM a1, a2'
+
+    @Album.dataset = @Album.db.from(:a1).cross_join(:a2)
+    @Album.dataset.sql.should == 'SELECT * FROM a1 CROSS JOIN a2'
+  end
+
+  it "should use explicit column selection for many_to_many associations" do
+    @Album.plugin :column_select
+    @Album.many_to_many :albums, :class=>@Album, :left_key=>:l, :right_key=>:r, :join_table=>:j
+    @Album.load(:id=>1).albums_dataset.sql.should == 'SELECT albums.id, albums.a, albums.b, albums.c FROM albums INNER JOIN j ON (j.r = albums.id) WHERE (j.l = 1)'
+  end
+
+  it "should set not explicit column selection for many_to_many associations when overriding select" do
+    @Album.plugin :column_select
+    @Album.dataset = @Album.dataset.select(:a)
+    @Album.many_to_many :albums, :class=>@Album, :left_key=>:l, :right_key=>:r, :join_table=>:j
+    @Album.load(:id=>1).albums_dataset.sql.should == 'SELECT albums.* FROM albums INNER JOIN j ON (j.r = albums.id) WHERE (j.l = 1)'
+  end
+
+  it "should use the schema to get columns if available" do
+    def @db.supports_schema_parsing?() true end
+    def @db.schema(t, *)
+      [[:t, {}], [:d, {}]]
+    end
+    @Album.plugin :column_select
+    @Album.dataset.sql.should == 'SELECT albums.t, albums.d FROM albums'
+  end
+
+  it "should handle case where schema parsing does not produce results" do
+    def @db.supports_schema_parsing?() true end
+    def @db.schema_parse_table(t, *) [] end
+    @Album.plugin :column_select
+    @Album.dataset.sql.should == 'SELECT albums.id, albums.a, albums.b, albums.c FROM albums'
+  end
+
+  it "works correctly when loaded on model without a dataset" do
+    c = Class.new(Sequel::Model)
+    c.plugin :column_select
+    sc = Class.new(c)
+    sc.dataset = @db[:a]
+    sc.dataset.sql.should == "SELECT a.id, a.a, a.b, a.c FROM a"
+  end
+end
diff --git a/spec/extensions/composition_spec.rb b/spec/extensions/composition_spec.rb
index e591f1f..36a0fc2 100644
--- a/spec/extensions/composition_spec.rb
+++ b/spec/extensions/composition_spec.rb
@@ -27,6 +27,26 @@ describe "Composition plugin" do
     proc{@c.composition :date, :mapping=>[]}.should_not raise_error
   end
 
+  it "should handle validations of underlying columns" do
+    @c.composition :date, :mapping=>[:year, :month, :day]
+    o = @c.new
+    def o.validate
+      [:year, :month, :day].each{|c| errors.add(c, "not present") unless send(c)}
+    end
+    o.valid?.should == false
+    o.date = Date.new(1, 2, 3)
+    o.valid?.should == true
+  end
+
+  it "should set column values even when not validating" do
+    @c.composition :date, :mapping=>[:year, :month, :day]
+    @c.load({}).set(:date=>Date.new(4, 8, 12)).save(:validate=>false)
+    sql = DB.sqls.last
+    sql.should include("year = 4")
+    sql.should include("month = 8")
+    sql.should include("day = 12")
+  end
+
   it ".compositions should return the reflection hash of compositions" do
     @c.compositions.should == {}
     @c.composition :date, :mapping=>[:year, :month, :day]
diff --git a/spec/extensions/dataset_source_alias_spec.rb b/spec/extensions/dataset_source_alias_spec.rb
new file mode 100644
index 0000000..688469c
--- /dev/null
+++ b/spec/extensions/dataset_source_alias_spec.rb
@@ -0,0 +1,51 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+
+describe "dataset_source_alias extension" do
+  before do
+    @db = Sequel.mock
+    @db.extension(:dataset_source_alias)
+  end
+
+  it "should automatically alias datasets to their first source in #from" do
+    @db[@db[:a]].sql.should == 'SELECT * FROM (SELECT * FROM a) AS a'
+    @db[:a, @db[:b]].sql.should == 'SELECT * FROM a, (SELECT * FROM b) AS b'
+  end
+
+  it "should handle virtual row blocks in #from" do
+    @db.dataset.from{|_| @db[:a]}.sql.should == 'SELECT * FROM (SELECT * FROM a) AS a'
+    @db.dataset.from(:a){|_| @db[:b]}.sql.should == 'SELECT * FROM a, (SELECT * FROM b) AS b'
+  end
+
+  it "should automatically alias datasets to their first source in #join" do
+    @db[:a].cross_join(@db[:b]).sql.should == 'SELECT * FROM a CROSS JOIN (SELECT * FROM b) AS b'
+  end
+
+  it "should handle :table_alias option when joining" do
+    @db[:a].cross_join(@db[:b], :table_alias=>:c).sql.should == 'SELECT * FROM a CROSS JOIN (SELECT * FROM b) AS c'
+  end
+
+  it "should handle aliasing issues automatically" do
+    @db[:a, @db[:a]].sql.should == 'SELECT * FROM a, (SELECT * FROM a) AS a_0'
+    @db.dataset.from(:a, @db[:a]){|_| @db[:a]}.sql.should == 'SELECT * FROM a, (SELECT * FROM a) AS a_0, (SELECT * FROM a) AS a_1'
+    @db.dataset.from(:a, @db[:a]){|_| @db[:a]}.cross_join(@db[:a]).sql.should == 'SELECT * FROM a, (SELECT * FROM a) AS a_0, (SELECT * FROM a) AS a_1 CROSS JOIN (SELECT * FROM a) AS a_2'
+  end
+
+  it "should handle from_self" do
+    @db[:a].from_self.sql.should == 'SELECT * FROM (SELECT * FROM a) AS a'
+    @db[:a].from_self.from_self.sql.should == 'SELECT * FROM (SELECT * FROM (SELECT * FROM a) AS a) AS a'
+  end
+
+  it "should handle datasets without sources" do
+    @db[@db.select(1)].sql.should == 'SELECT * FROM (SELECT 1) AS t1'
+    @db[:t, @db.select(1)].sql.should == 'SELECT * FROM t, (SELECT 1) AS t1'
+    @db[:a].cross_join(@db.select(1)).sql.should == 'SELECT * FROM a CROSS JOIN (SELECT 1) AS t1'
+  end
+
+  it "should handle datasets selecting from functions" do
+    @db.dataset.from{|o| @db[o.f(:a)]}.sql.should == 'SELECT * FROM (SELECT * FROM f(a)) AS t1'
+  end
+
+  it "should handle datasets with literal SQL" do
+    @db.from(@db['SELECT c FROM d']).sql.should == 'SELECT * FROM (SELECT c FROM d) AS t1'
+  end
+end
diff --git a/spec/extensions/insert_returning_select_spec.rb b/spec/extensions/insert_returning_select_spec.rb
new file mode 100644
index 0000000..fad7bb9
--- /dev/null
+++ b/spec/extensions/insert_returning_select_spec.rb
@@ -0,0 +1,46 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+
+describe "Sequel::Plugins::InsertReturningSelect" do
+  before do
+    @db = Sequel.mock(:fetch=>{:id=>1, :x=>2}, :autoid=>1)
+    @db.extend_datasets do
+      def supports_returning?(_) true end
+      def insert_select(*v) with_sql_first("#{insert_sql(*v)} RETURNING #{opts[:returning].map{|x| literal(x)}.join(', ')}") end
+    end
+    @Album = Class.new(Sequel::Model(@db[:albums].select(:id, :x)))
+    @Album.columns :id, :x
+    @db.sqls
+  end
+
+  it "should add a returning clause when inserting using selected columns" do
+    @Album.plugin :insert_returning_select
+    @Album.create(:x=>2).should == @Album.load(:id=>1, :x=>2)
+    @db.sqls.should == ['INSERT INTO albums (x) VALUES (2) RETURNING id, x']
+  end
+
+  it "should not add a returning clause if selection does not consist of just columns" do
+    @Album.dataset = @Album.dataset.select_append(Sequel.as(1, :b))
+    @Album.plugin :insert_returning_select
+    @db.sqls.clear
+    @Album.create(:x=>2).should == @Album.load(:id=>1, :x=>2)
+    @db.sqls.should == ['INSERT INTO albums (x) VALUES (2)', 'SELECT id, x, 1 AS b FROM albums WHERE (id = 1) LIMIT 1']
+  end
+
+  it "should not add a returning clause if database doesn't support it" do
+    @db.extend_datasets{def supports_returning?(_) false end}
+    @Album.plugin :insert_returning_select
+    @Album.create(:x=>2).should == @Album.load(:id=>1, :x=>2)
+    @db.sqls.should == ['INSERT INTO albums (x) VALUES (2)', 'SELECT id, x FROM albums WHERE (id = 1) LIMIT 1']
+  end
+
+  it "should work correctly with subclasses" do
+    c = Class.new(Sequel::Model)
+    c.plugin :insert_returning_select
+    b = Class.new(c)
+    b.columns :id, :x
+    b.dataset = @db[:albums].select(:id, :x)
+    @db.sqls.clear
+    b.create(:x=>2).should == b.load(:id=>1, :x=>2)
+    @db.sqls.should == ['INSERT INTO albums (x) VALUES (2) RETURNING id, x']
+  end
+end
diff --git a/spec/extensions/lazy_attributes_spec.rb b/spec/extensions/lazy_attributes_spec.rb
index ccd3fb2..1af089e 100644
--- a/spec/extensions/lazy_attributes_spec.rb
+++ b/spec/extensions/lazy_attributes_spec.rb
@@ -25,7 +25,7 @@ describe "Sequel::Plugins::LazyAttributes" do
           elsif sql =~ /id = (\d)/
             [$1]
           end.map do |x|
-            if sql =~ /SELECT name FROM/
+            if sql =~ /SELECT (la.)?name FROM/
               {:name=>x.to_s}
             else
               {:id=>x.to_i, :name=>x.to_s}
@@ -46,7 +46,6 @@ describe "Sequel::Plugins::LazyAttributes" do
     @c.set_dataset(@ds.select(:id, :blah))
     @c.dataset.sql.should == 'SELECT id, blah FROM la'
     @c.plugin :lazy_attributes, :blah
-    @c.dataset.opts[:select].should == [:id]
     @c.dataset.sql.should == 'SELECT id FROM la'
   end
   
@@ -54,13 +53,18 @@ describe "Sequel::Plugins::LazyAttributes" do
     @c.set_dataset(@ds.select(:id, :blah))
     @c.dataset.sql.should == 'SELECT id, blah FROM la'
     @c.lazy_attributes :blah
-    @c.dataset.opts[:select].should == [:id]
     @c.dataset.sql.should == 'SELECT id FROM la'
   end
 
+  it "should handle lazy attributes that are qualified in the selection" do
+    @c.set_dataset(@ds.select(:la__id, :la__blah))
+    @c.dataset.sql.should == 'SELECT la.id, la.blah FROM la'
+    @c.plugin :lazy_attributes, :blah
+    @c.dataset.sql.should == 'SELECT la.id FROM la'
+  end
+  
   it "should remove the attributes given from the SELECT columns of the model's dataset" do
-    @ds.opts[:select].should == [:id]
-    @ds.sql.should == 'SELECT id FROM la'
+    @ds.sql.should == 'SELECT la.id FROM la'
   end
 
   it "should still typecast correctly in lazy loaded column setters" do
@@ -80,16 +84,16 @@ describe "Sequel::Plugins::LazyAttributes" do
     m.values.should == {:id=>1}
     m.name.should == '1'
     m.values.should == {:id=>1, :name=>'1'}
-    @db.sqls.should == ['SELECT id FROM la LIMIT 1', 'SELECT name FROM la WHERE (id = 1) LIMIT 1']
+    @db.sqls.should == ['SELECT la.id FROM la LIMIT 1', 'SELECT la.name FROM la WHERE (id = 1) LIMIT 1']
   end
 
   it "should lazily load the attribute for a frozen model object" do
     m = @c.first
     m.freeze
     m.name.should == '1'
-    @db.sqls.should == ['SELECT id FROM la LIMIT 1', 'SELECT name FROM la WHERE (id = 1) LIMIT 1']
+    @db.sqls.should == ['SELECT la.id FROM la LIMIT 1', 'SELECT la.name FROM la WHERE (id = 1) LIMIT 1']
     m.name.should == '1'
-    @db.sqls.should == ['SELECT name FROM la WHERE (id = 1) LIMIT 1']
+    @db.sqls.should == ['SELECT la.name FROM la WHERE (id = 1) LIMIT 1']
   end
 
   it "should not lazily load the attribute for a single model object if the value already exists" do
@@ -98,7 +102,7 @@ describe "Sequel::Plugins::LazyAttributes" do
     m[:name] = '1'
     m.name.should == '1'
     m.values.should == {:id=>1, :name=>'1'}
-    @db.sqls.should == ['SELECT id FROM la LIMIT 1']
+    @db.sqls.should == ['SELECT la.id FROM la LIMIT 1']
   end
 
   it "should not lazily load the attribute for a single model object if it is a new record" do
@@ -114,16 +118,16 @@ describe "Sequel::Plugins::LazyAttributes" do
     ms.map{|m| m.name}.should == %w'1 2'
     ms.map{|m| m.values}.should == [{:id=>1, :name=>'1'}, {:id=>2, :name=>'2'}]
     sqls = @db.sqls
-    ['SELECT id, name FROM la WHERE (id IN (1, 2))',
-     'SELECT id, name FROM la WHERE (id IN (2, 1))'].should include(sqls.pop)
-    sqls.should == ['SELECT id FROM la']
+    ['SELECT la.id, la.name FROM la WHERE (la.id IN (1, 2))',
+     'SELECT la.id, la.name FROM la WHERE (la.id IN (2, 1))'].should include(sqls.pop)
+    sqls.should == ['SELECT la.id FROM la']
   end
 
   it "should not eagerly load the attribute if model instance is frozen, and deal with other frozen instances if not frozen" do
     ms = @c.all
     ms.first.freeze
     ms.map{|m| m.name}.should == %w'1 2'
-    @db.sqls.should == ['SELECT id FROM la', 'SELECT name FROM la WHERE (id = 1) LIMIT 1', 'SELECT id, name FROM la WHERE (id IN (2))']
+    @db.sqls.should == ['SELECT la.id FROM la', 'SELECT la.name FROM la WHERE (id = 1) LIMIT 1', 'SELECT la.id, la.name FROM la WHERE (la.id IN (2))']
   end
 
   it "should add the accessors to a module included in the class, so they can be easily overridden" do
@@ -137,9 +141,9 @@ describe "Sequel::Plugins::LazyAttributes" do
     ms.map{|m| m.name}.should == %w'1-blah 2-blah'
     ms.map{|m| m.values}.should == [{:id=>1, :name=>'1'}, {:id=>2, :name=>'2'}]
     sqls = @db.sqls
-    ['SELECT id, name FROM la WHERE (id IN (1, 2))',
-     'SELECT id, name FROM la WHERE (id IN (2, 1))'].should include(sqls.pop)
-    sqls.should == ['SELECT id FROM la']
+    ['SELECT la.id, la.name FROM la WHERE (la.id IN (1, 2))',
+     'SELECT la.id, la.name FROM la WHERE (la.id IN (2, 1))'].should include(sqls.pop)
+    sqls.should == ['SELECT la.id FROM la']
   end
 
   it "should work with the serialization plugin" do
@@ -152,15 +156,15 @@ describe "Sequel::Plugins::LazyAttributes" do
     ms.map{|m| m.deserialized_values}.should == [{:name=>3}, {:name=>6}]
     ms.map{|m| m.name}.should == [3,6]
     sqls = @db.sqls
-    ['SELECT id, name FROM la WHERE (id IN (1, 2))',
-     'SELECT id, name FROM la WHERE (id IN (2, 1))'].should include(sqls.pop)
-    sqls.should == ['SELECT id FROM la']
+    ['SELECT la.id, la.name FROM la WHERE (la.id IN (1, 2))',
+     'SELECT la.id, la.name FROM la WHERE (la.id IN (2, 1))'].should include(sqls.pop)
+    sqls.should == ['SELECT la.id FROM la']
     m = @ds.first
     m.values.should == {:id=>1}
     m.name.should == 3
     m.values.should == {:id=>1, :name=>"--- 3\n"}
     m.deserialized_values.should == {:name=>3}
     m.name.should == 3
-    @db.sqls.should == ["SELECT id FROM la LIMIT 1", "SELECT name FROM la WHERE (id = 1) LIMIT 1"]
+    @db.sqls.should == ["SELECT la.id FROM la LIMIT 1", "SELECT la.name FROM la WHERE (id = 1) LIMIT 1"]
   end
 end
diff --git a/spec/extensions/list_spec.rb b/spec/extensions/list_spec.rb
index 03a0379..a1a876a 100644
--- a/spec/extensions/list_spec.rb
+++ b/spec/extensions/list_spec.rb
@@ -106,6 +106,11 @@ describe "List plugin" do
       "SELECT * FROM items WHERE (id = 3) ORDER BY scope_id, position LIMIT 1"]
   end
 
+  it "should update positions automatically on deletion" do
+    @o.destroy
+    @db.sqls.should == ["DELETE FROM items WHERE (id = 7)", "UPDATE items SET position = (position - 1) WHERE (position > 3)"]
+  end
+
   it "should have last_position return the last position in the list" do
     @c.dataset._fetch  = {:max=>10}
     @o.last_position.should == 10
diff --git a/spec/extensions/modification_detection_spec.rb b/spec/extensions/modification_detection_spec.rb
new file mode 100644
index 0000000..a2107d8
--- /dev/null
+++ b/spec/extensions/modification_detection_spec.rb
@@ -0,0 +1,80 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+require 'yaml'
+
+describe "serialization_modification_detection plugin" do
+  before do
+    @ds = Sequel.mock(:fetch=>{:id=>1, :a=>'a', :b=>1, :c=>['a'], :d=>{'b'=>'c'}}, :numrows=>1, :autoid=>1)[:items]
+    @c = Class.new(Sequel::Model(@ds))
+    @c.plugin :modification_detection
+    @c.columns :a, :b, :c, :d
+    @o = @c.first
+    @ds.db.sqls
+  end
+  
+  it "should only detect columns that have been changed" do
+    @o.changed_columns.should == []
+    @o.a << 'b'
+    @o.changed_columns.should == [:a]
+    @o.a.replace('a') 
+    @o.changed_columns.should == []
+
+    @o.values[:b] = 2
+    @o.changed_columns.should == [:b]
+    @o.values[:b] = 1
+    @o.changed_columns.should == []
+
+    @o.c[0] << 'b'
+    @o.d['b'] << 'b'
+    @o.changed_columns.sort_by{|c| c.to_s}.should == [:c, :d]
+    @o.c[0] = 'a'
+    @o.changed_columns.should == [:d]
+    @o.d['b'] = 'c'
+    @o.changed_columns.should == []
+  end
+  
+  it "should not list a column twice" do
+    @o.a = 'b'
+    @o.a << 'a'
+    @o.changed_columns.should == [:a]
+  end
+  
+  it "should report correct changed_columns after updating" do
+    @o.a << 'a'
+    @o.save_changes
+    @o.changed_columns.should == []
+
+    @o.values[:b] = 2
+    @o.save_changes
+    @o.changed_columns.should == []
+
+    @o.c[0] << 'b'
+    @o.save_changes
+    @o.changed_columns.should == []
+
+    @o.d['b'] << 'a'
+    @o.save_changes
+    @o.changed_columns.should == []
+
+    @ds.db.sqls.should == ["UPDATE items SET a = 'aa' WHERE (id = 1)",
+                       "UPDATE items SET b = 2 WHERE (id = 1)",
+                       "UPDATE items SET c = ('ab') WHERE (id = 1)",
+                       "UPDATE items SET d = ('b' = 'ca') WHERE (id = 1)"]
+  end
+
+  it "should report correct changed_columns after creating new object" do
+    o = @c.create
+    o.changed_columns.should == []
+    o.a << 'a'
+    o.changed_columns.should == [:a]
+    @ds.db.sqls.should == ["INSERT INTO items DEFAULT VALUES", "SELECT * FROM items WHERE (id = 1) LIMIT 1"]
+  end
+
+  it "should report correct changed_columns after refreshing existing object" do
+    @o.a << 'a'
+    @o.changed_columns.should == [:a]
+    @o.refresh
+    @o.changed_columns.should == []
+    @o.a << 'a'
+    @o.changed_columns.should == [:a]
+  end
+end
diff --git a/spec/extensions/nested_attributes_spec.rb b/spec/extensions/nested_attributes_spec.rb
index 4832ba8..040abcb 100644
--- a/spec/extensions/nested_attributes_spec.rb
+++ b/spec/extensions/nested_attributes_spec.rb
@@ -38,6 +38,7 @@ describe "NestedAttributes plugin" do
     @Artist.one_to_one :first_album, :class=>@Album, :key=>:artist_id
     @Album.many_to_one :artist, :class=>@Artist, :reciprocal=>:albums
     @Album.many_to_many :tags, :class=>@Tag, :left_key=>:album_id, :right_key=>:tag_id, :join_table=>:at
+    @Tag.many_to_many :albums, :class=>@Album, :left_key=>:tag_id, :right_key=>:album_id, :join_table=>:at
     @Artist.nested_attributes :albums, :first_album, :destroy=>true, :remove=>true
     @Artist.nested_attributes :concerts, :destroy=>true, :remove=>true
     @Album.nested_attributes :artist, :tags, :destroy=>true, :remove=>true
@@ -117,9 +118,12 @@ describe "NestedAttributes plugin" do
   end
   
   it "should add new objects to the cached association array as soon as the *_attributes= method is called" do
-    a = @Artist.new({:name=>'Ar', :albums_attributes=>[{:name=>'Al', :tags_attributes=>[{:name=>'T'}]}]})
+    a = @Artist.new({:name=>'Ar', :first_album_attributes=>{:name=>'B'}, :albums_attributes=>[{:name=>'Al', :tags_attributes=>[{:name=>'T'}]}]})
     a.albums.should == [@Album.new(:name=>'Al')]
+    a.albums.first.artist.should == a
     a.albums.first.tags.should == [@Tag.new(:name=>'T')]
+    a.first_album.should == @Album.new(:name=>'B')
+    a.first_album.artist.should == a
   end
   
   it "should support creating new objects with composite primary keys" do
@@ -608,4 +612,31 @@ describe "NestedAttributes plugin" do
     proc{al.set(:tags_attributes=>[{:id=>30, :name=>'T2', :number=>3}])}.should raise_error(Sequel::Error)
     proc{al.set(:tags_attributes=>[{:name=>'T2', :number=>3}])}.should raise_error(Sequel::Error)
   end
+
+  it "should allow per-call options via the set_nested_attributes method" do
+    @Tag.columns :id, :name, :number
+    @Album.nested_attributes :tags
+
+    al = @Album.load(:id=>10, :name=>'Al')
+    t = @Tag.load(:id=>30, :name=>'T', :number=>10)
+    al.associations[:tags] = [t]
+    al.set_nested_attributes(:tags, [{:id=>30, :name=>'T2'}, {:name=>'T3'}], :fields=>[:name])
+    @db.sqls.should == []
+    al.save
+    check_sql_array("UPDATE albums SET name = 'Al' WHERE (id = 10)",
+      "UPDATE tags SET name = 'T2' WHERE (id = 30)",
+      "INSERT INTO tags (name) VALUES ('T3')",
+      ["INSERT INTO at (album_id, tag_id) VALUES (10, 1)", "INSERT INTO at (tag_id, album_id) VALUES (1, 10)"])
+    proc{al.set_nested_attributes(:tags, [{:id=>30, :name=>'T2', :number=>3}], :fields=>[:name])}.should raise_error(Sequel::Error)
+    proc{al.set_nested_attributes(:tags, [{:name=>'T2', :number=>3}], :fields=>[:name])}.should raise_error(Sequel::Error)
+  end
+
+  it "should have set_nested_attributes method raise error if called with a bad association" do
+    proc{@Album.load(:id=>10, :name=>'Al').set_nested_attributes(:tags2, [{:id=>30, :name=>'T2', :number=>3}], :fields=>[:name])}.should raise_error(Sequel::Error)
+  end
+
+  it "should have set_nested_attributes method raise error if called with an association that doesn't support nested attributes" do
+    @Tag.columns :id, :name, :number
+    proc{@Album.load(:id=>10, :name=>'Al').set_nested_attributes(:tags, [{:id=>30, :name=>'T2', :number=>3}], :fields=>[:name])}.should raise_error(Sequel::Error)
+  end
 end
diff --git a/spec/extensions/pg_enum_spec.rb b/spec/extensions/pg_enum_spec.rb
new file mode 100644
index 0000000..98eecd0
--- /dev/null
+++ b/spec/extensions/pg_enum_spec.rb
@@ -0,0 +1,64 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+
+describe "pg_inet extension" do
+  before do
+    @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
+    @db.extend(Module.new do
+      def schema_parse_table(*)
+        [[:a, {:oid=>1}]]
+      end
+    end)
+    @db.send(:metadata_dataset)._fetch = [[{:v=>1, :enumlabel=>'a'}, {:v=>1, :enumlabel=>'b'}, {:v=>1, :enumlabel=>'c'}],
+      [{:typname=>'enum1', :v=>212389}]]
+    @db.extension(:pg_array, :pg_enum)
+    @db.sqls
+  end
+
+  it "should include enum information in the schema entry" do
+    @db.schema(:a).should == [[:a, {:oid=>1, :ruby_default=>nil, :type=>:enum, :enum_values=>%w'a b c'}]]
+  end
+
+  it "should typecast objects to string" do
+    @db.typecast_value(:enum, :a).should == 'a'
+  end
+
+  it "should add array parsers for enum values" do
+    @db.conversion_procs[212389].call('{a,b,c}').should == %w'a b c'
+  end
+
+  it "should support #create_enum method for adding a new enum" do
+    @db.create_enum(:foo, [:a, :b, :c])
+    @db.sqls.first.should == "CREATE TYPE foo AS ENUM ('a', 'b', 'c')"
+    @db.create_enum(:sch__foo, %w'a b c')
+    @db.sqls.first.should == "CREATE TYPE sch.foo AS ENUM ('a', 'b', 'c')"
+  end
+
+  it "should support #drop_enum method for dropping an enum" do
+    @db.drop_enum(:foo)
+    @db.sqls.first.should == "DROP TYPE foo"
+    @db.drop_enum(:sch__foo, :if_exists=>true)
+    @db.sqls.first.should == "DROP TYPE IF EXISTS sch.foo"
+    @db.drop_enum('foo', :cascade=>true)
+    @db.sqls.first.should == "DROP TYPE foo CASCADE"
+  end
+
+  it "should support #add_enum_value method for adding value to an existing enum" do
+    @db.add_enum_value(:foo, :a)
+    @db.sqls.first.should == "ALTER TYPE foo ADD VALUE 'a'"
+  end
+
+  it "should support :before option for #add_enum_value method for adding value before an existing enum value" do
+    @db.add_enum_value('foo', :a, :before=>:b)
+    @db.sqls.first.should == "ALTER TYPE foo ADD VALUE 'a' BEFORE 'b'"
+  end
+
+  it "should support :after option for #add_enum_value method for adding value after an existing enum value" do
+    @db.add_enum_value(:sch__foo, :a, :after=>:b)
+    @db.sqls.first.should == "ALTER TYPE sch.foo ADD VALUE 'a' AFTER 'b'"
+  end
+
+  it "should support :if_not_exists option for #add_enum_value method for not adding the value if it exists" do
+    @db.add_enum_value(:foo, :a, :if_not_exists=>true)
+    @db.sqls.first.should == "ALTER TYPE foo ADD VALUE IF NOT EXISTS 'a'"
+  end
+end
diff --git a/spec/extensions/pg_json_spec.rb b/spec/extensions/pg_json_spec.rb
index adf8f7e..7f8f853 100644
--- a/spec/extensions/pg_json_spec.rb
+++ b/spec/extensions/pg_json_spec.rb
@@ -10,17 +10,6 @@ describe "pg_json extension" do
     @ac = m::JSONArray
     @bhc = m::JSONBHash
     @bac = m::JSONBArray
-
-    # Create subclass in correct namespace for easily overriding methods
-    j = m::JSON = JSON.dup
-    j.instance_eval do
-      Parser = JSON::Parser
-      alias old_parse parse
-      def parse(s)
-        return 1 if s == '1'
-        old_parse(s) 
-      end
-    end
   end
   before do
     @db = Sequel.connect('mock://postgres', :quote_identifiers=>false)
@@ -64,10 +53,15 @@ describe "pg_json extension" do
       Sequel.instance_eval do
         alias pj parse_json
         def parse_json(v)
-          v
+          {'1'=>1, "'a'"=>'a', 'true'=>true, 'false'=>false, 'null'=>nil, 'o'=>Object.new}.fetch(v){pj(v)}
         end
       end
-      proc{@m.parse_json('1')}.should raise_error(Sequel::InvalidValue)
+      @m.parse_json('1').should == 1
+      @m.parse_json("'a'").should == 'a'
+      @m.parse_json('true').should == true
+      @m.parse_json('false').should == false
+      @m.parse_json('null').should == nil
+      proc{@m.parse_json('o')}.should raise_error(Sequel::InvalidValue)
     ensure
       Sequel.instance_eval do
         alias parse_json pj
diff --git a/spec/extensions/pg_static_cache_updater_spec.rb b/spec/extensions/pg_static_cache_updater_spec.rb
index 2cb2d49..e27f051 100644
--- a/spec/extensions/pg_static_cache_updater_spec.rb
+++ b/spec/extensions/pg_static_cache_updater_spec.rb
@@ -77,4 +77,16 @@ describe "pg_static_cache_updater extension" do
     @db.fetch = {:v=>1234}
     proc{@db.listen_for_static_cache_updates(Class.new(Sequel::Model(@db[:table])))}.should raise_error(Sequel::Error)
   end
+
+  specify "#listen_for_static_cache_updates should handle a :before_thread_exit option" do
+    a = []
+    @db.listen_for_static_cache_updates([@model], :yield=>[nil, nil, 12345], :before_thread_exit=>proc{a << 1}).join
+    a.should == [1]
+  end
+
+  specify "#listen_for_static_cache_updates should call :before_thread_exit option even if listen raises an exception" do
+    a = []
+    @db.listen_for_static_cache_updates([@model], :yield=>[nil, nil, 12345], :after_listen=>proc{raise ArgumentError}, :before_thread_exit=>proc{a << 1}).join
+    a.should == [1]
+  end
 end
diff --git a/spec/extensions/prepared_statements_associations_spec.rb b/spec/extensions/prepared_statements_associations_spec.rb
index 964136a..8837295 100644
--- a/spec/extensions/prepared_statements_associations_spec.rb
+++ b/spec/extensions/prepared_statements_associations_spec.rb
@@ -31,25 +31,25 @@ describe "Sequel::Plugins::PreparedStatementsAssociations" do
 
   specify "should run correct SQL for associations" do
     @Artist.load(:id=>1).albums
-    @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1) -- prepared"]
+    @db.sqls.should == ["SELECT id, artist_id, id2, artist_id2 FROM albums WHERE (albums.artist_id = 1) -- prepared"]
 
     @Artist.load(:id=>1).album
-    @db.sqls.should == ["SELECT * FROM albums WHERE (albums.artist_id = 1) LIMIT 1 -- prepared"]
+    @db.sqls.should == ["SELECT id, artist_id, id2, artist_id2 FROM albums WHERE (albums.artist_id = 1) LIMIT 1 -- prepared"]
 
     @Album.load(:id=>1, :artist_id=>2).artist
-    @db.sqls.should == ["SELECT * FROM artists WHERE (artists.id = 2) LIMIT 1 -- prepared"]
+    @db.sqls.should == ["SELECT id, id2 FROM artists WHERE (artists.id = 2) LIMIT 1 -- prepared"]
 
     @Album.load(:id=>1, :artist_id=>2).tags
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (albums_tags.album_id = 1) -- prepared"]
+    @db.sqls.should == ["SELECT tags.id, tags.id2 FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (albums_tags.album_id = 1) -- prepared"]
 
     @Album.load(:id=>1, :artist_id=>2).tag
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (albums_tags.album_id = 1) LIMIT 1 -- prepared"]
+    @db.sqls.should == ["SELECT tags.id, tags.id2 FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) WHERE (albums_tags.album_id = 1) LIMIT 1 -- prepared"]
 
     @Artist.load(:id=>1).tags
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) WHERE (albums.artist_id = 1) -- prepared"]
+    @db.sqls.should == ["SELECT tags.id, tags.id2 FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) WHERE (albums.artist_id = 1) -- prepared"]
 
     @Artist.load(:id=>1).tag
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) WHERE (albums.artist_id = 1) LIMIT 1 -- prepared"]
+    @db.sqls.should == ["SELECT tags.id, tags.id2 FROM tags INNER JOIN albums_tags ON (albums_tags.tag_id = tags.id) INNER JOIN albums ON (albums.id = albums_tags.album_id) WHERE (albums.artist_id = 1) LIMIT 1 -- prepared"]
   end
 
   specify "should run correct SQL for composite key associations" do
@@ -63,25 +63,25 @@ describe "Sequel::Plugins::PreparedStatementsAssociations" do
     @Artist.one_through_many :tag, :clone=>:tags
 
     @Artist.load(:id=>1, :id2=>2).albums
-    @db.sqls.should == ["SELECT * FROM albums WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"]
+    @db.sqls.should == ["SELECT id, artist_id, id2, artist_id2 FROM albums WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"]
 
     @Artist.load(:id=>1, :id2=>2).album
-    @db.sqls.should == ["SELECT * FROM albums WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) LIMIT 1 -- prepared"]
+    @db.sqls.should == ["SELECT id, artist_id, id2, artist_id2 FROM albums WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) LIMIT 1 -- prepared"]
 
     @Album.load(:id=>1, :artist_id=>2, :artist_id2=>3).artist
-    @db.sqls.should == ["SELECT * FROM artists WHERE ((artists.id = 2) AND (artists.id2 = 3)) LIMIT 1 -- prepared"]
+    @db.sqls.should == ["SELECT id, id2 FROM artists WHERE ((artists.id = 2) AND (artists.id2 = 3)) LIMIT 1 -- prepared"]
 
     @Album.load(:id=>1, :artist_id=>2, :id2=>3).tags
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id = 1) AND (albums_tags.album_id2 = 3)) -- prepared"]
+    @db.sqls.should == ["SELECT tags.id, tags.id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id = 1) AND (albums_tags.album_id2 = 3)) -- prepared"]
 
     @Album.load(:id=>1, :artist_id=>2, :id2=>3).tag
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id = 1) AND (albums_tags.album_id2 = 3)) LIMIT 1 -- prepared"]
+    @db.sqls.should == ["SELECT tags.id, tags.id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) WHERE ((albums_tags.album_id = 1) AND (albums_tags.album_id2 = 3)) LIMIT 1 -- prepared"]
 
     @Artist.load(:id=>1, :id2=>2).tags
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.id2 = albums_tags.album_id2)) WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"]
+    @db.sqls.should == ["SELECT tags.id, tags.id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.id2 = albums_tags.album_id2)) WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) -- prepared"]
 
     @Artist.load(:id=>1, :id2=>2).tag
-    @db.sqls.should == ["SELECT tags.* FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.id2 = albums_tags.album_id2)) WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) LIMIT 1 -- prepared"]
+    @db.sqls.should == ["SELECT tags.id, tags.id2 FROM tags INNER JOIN albums_tags ON ((albums_tags.tag_id = tags.id) AND (albums_tags.tag_id2 = tags.id2)) INNER JOIN albums ON ((albums.id = albums_tags.album_id) AND (albums.id2 = albums_tags.album_id2)) WHERE ((albums.artist_id = 1) AND (albums.artist_id2 = 2)) LIMIT 1 -- prepared"]
   end
 
   specify "should not run query if no objects can be associated" do
@@ -120,19 +120,19 @@ describe "Sequel::Plugins::PreparedStatementsAssociations" do
     @Album.dataset = @Album.dataset.where(:a=>2)
     @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id
     @Artist.load(:id=>1).albums
-    @db.sqls.should == ["SELECT * FROM albums WHERE ((a = 2) AND (albums.artist_id = 1)) -- prepared"]
+    @db.sqls.should == ["SELECT id, artist_id, id2, artist_id2 FROM albums WHERE ((a = 2) AND (albums.artist_id = 1)) -- prepared"]
   end
 
   specify "should use a prepared statement if the :conditions association option" do
     @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id, :conditions=>{:a=>2} 
     @Artist.load(:id=>1).albums
-    @db.sqls.should == ["SELECT * FROM albums WHERE ((a = 2) AND (albums.artist_id = 1)) -- prepared"]
+    @db.sqls.should == ["SELECT id, artist_id, id2, artist_id2 FROM albums WHERE ((a = 2) AND (albums.artist_id = 1)) -- prepared"]
   end
 
   specify "should not use a prepared statement if :conditions association option uses an identifier" do
     @Artist.one_to_many :albums, :class=>@Album, :key=>:artist_id, :conditions=>{Sequel.identifier('a')=>2}
     @Artist.load(:id=>1).albums
-    @db.sqls.should == ["SELECT * FROM albums WHERE ((a = 2) AND (albums.artist_id = 1)) -- prepared"]
+    @db.sqls.should == ["SELECT id, artist_id, id2, artist_id2 FROM albums WHERE ((a = 2) AND (albums.artist_id = 1)) -- prepared"]
   end
 
   specify "should run a regular query if :dataset option is used when defining the association" do
diff --git a/spec/extensions/prepared_statements_spec.rb b/spec/extensions/prepared_statements_spec.rb
index 42aafcb..6887fde 100644
--- a/spec/extensions/prepared_statements_spec.rb
+++ b/spec/extensions/prepared_statements_spec.rb
@@ -5,6 +5,7 @@ describe "prepared_statements plugin" do
     @db = Sequel.mock(:fetch=>{:id=>1, :name=>'foo', :i=>2}, :autoid=>proc{|sql| 1}, :numrows=>1, :servers=>{:read_only=>{}})
     @c = Class.new(Sequel::Model(@db[:people]))
     @c.columns :id, :name, :i
+    @columns = "id, name, i"
     @c.plugin :prepared_statements
     @p = @c.load(:id=>1, :name=>'foo', :i=>2)
     @ds = @c.dataset
@@ -13,7 +14,7 @@ describe "prepared_statements plugin" do
 
   specify "should correctly lookup by primary key" do
     @c[1].should == @p
-    @db.sqls.should == ["SELECT * FROM people WHERE (id = 1) LIMIT 1 -- read_only"]
+    @db.sqls.should == ["SELECT id, name, i FROM people WHERE (id = 1) LIMIT 1 -- read_only"]
   end 
 
   shared_examples_for "prepared_statements plugin" do
@@ -29,7 +30,7 @@ describe "prepared_statements plugin" do
 
     specify "should correctly create instance" do
       @c.create(:name=>'foo').should == @c.load(:id=>1, :name=>'foo', :i => 2)
-      @db.sqls.should == ["INSERT INTO people (name) VALUES ('foo')", "SELECT * FROM people WHERE (id = 1) LIMIT 1"]
+      @db.sqls.should == ["INSERT INTO people (name) VALUES ('foo')", "SELECT #{@columns} FROM people WHERE (id = 1) LIMIT 1"]
     end
 
     specify "should correctly create instance if dataset supports insert_select" do
@@ -37,16 +38,19 @@ describe "prepared_statements plugin" do
         def supports_insert_select?
           true
         end
+        def supports_returning?(type)
+          true
+        end
         def insert_select(h)
           self._fetch = {:id=>1, :name=>'foo', :i => 2}
-          returning.server(:default).with_sql(:insert_sql, h).first
+          server(:default).with_sql_first(insert_select_sql(h))
         end
-        def insert_sql(*)
-          "#{super}#{' RETURNING *' if opts.has_key?(:returning)}"
+        def insert_select_sql(*v)
+          "#{insert_sql(*v)} RETURNING #{(opts[:returning] && !opts[:returning].empty?) ? opts[:returning].map{|c| literal(c)}.join(', ') : '*'}"
         end
       end
       @c.create(:name=>'foo').should == @c.load(:id=>1, :name=>'foo', :i => 2)
-      @db.sqls.should == ["INSERT INTO people (name) VALUES ('foo') RETURNING *"]
+      @db.sqls.should == ["INSERT INTO people (name) VALUES ('foo') RETURNING #{@columns}"]
     end
   end
 
@@ -54,6 +58,7 @@ describe "prepared_statements plugin" do
 
   describe "when #use_prepared_statements_for? returns false" do
     before do
+      @columns = "*"
       @c.class_eval{def use_prepared_statements_for?(type) false end}
     end
 
@@ -63,7 +68,7 @@ describe "prepared_statements plugin" do
   specify "should work correctly when subclassing" do
     c = Class.new(@c)
     c[1].should == c.load(:id=>1, :name=>'foo', :i=>2)
-    @db.sqls.should == ["SELECT * FROM people WHERE (id = 1) LIMIT 1 -- read_only"]
+    @db.sqls.should == ["SELECT id, name, i FROM people WHERE (id = 1) LIMIT 1 -- read_only"]
   end 
 
   describe " with placeholder type specifiers" do 
@@ -73,7 +78,7 @@ describe "prepared_statements plugin" do
 
     specify "should correctly handle without schema type" do
       @c[1].should == @p
-      @db.sqls.should == ["SELECT * FROM people WHERE (id = 1) LIMIT 1 -- read_only"]
+      @db.sqls.should == ["SELECT id, name, i FROM people WHERE (id = 1) LIMIT 1 -- read_only"]
     end
 
     specify "should correctly handle with schema type" do
@@ -92,7 +97,7 @@ describe "prepared_statements plugin" do
         end
       end
       @c[1].should == @p
-      @db.sqls.should == ["SELECT * FROM people WHERE (id = 1) LIMIT 1 -- read_only"]
+      @db.sqls.should == ["SELECT id, name, i FROM people WHERE (id = 1) LIMIT 1 -- read_only"]
     end 
   end
 end
diff --git a/spec/extensions/round_timestamps_spec.rb b/spec/extensions/round_timestamps_spec.rb
new file mode 100644
index 0000000..8d661e8
--- /dev/null
+++ b/spec/extensions/round_timestamps_spec.rb
@@ -0,0 +1,43 @@
+require File.join(File.dirname(File.expand_path(__FILE__)), "spec_helper")
+
+if RUBY_VERSION >= '1.9.0'
+describe "Sequel::Dataset::RoundTimestamps" do
+  before do
+    @dataset = Sequel.mock.dataset.extension(:round_timestamps)
+  end
+
+  specify "should round times properly for databases supporting microsecond precision" do
+    @dataset.literal(Sequel::SQLTime.create(1, 2, 3, 499999.5)).should == "'01:02:03.500000'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5.4999995)).should == "'2010-01-02 03:04:05.500000'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(54999995, 10000000))).should == "'2010-01-02 03:04:05.500000'"
+
+    @dataset.literal(Sequel::SQLTime.create(1, 2, 3, 499999.4)).should == "'01:02:03.499999'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5.4999994)).should == "'2010-01-02 03:04:05.499999'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(54999994, 10000000))).should == "'2010-01-02 03:04:05.499999'"
+  end
+  
+  specify "should round times properly for databases supporting millisecond precision" do
+    def @dataset.timestamp_precision() 3 end
+    @dataset.literal(Sequel::SQLTime.create(1, 2, 3, 499500)).should == "'01:02:03.500'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5.4995)).should == "'2010-01-02 03:04:05.500'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(54995, 10000))).should == "'2010-01-02 03:04:05.500'"
+
+    @dataset.literal(Sequel::SQLTime.create(1, 2, 3, 499499)).should == "'01:02:03.499'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5.4994)).should == "'2010-01-02 03:04:05.499'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(54994, 10000))).should == "'2010-01-02 03:04:05.499'"
+  end
+  
+  specify "should round times properly for databases supporting second precision" do
+    def @dataset.supports_timestamp_usecs?() false end
+    @dataset.literal(Sequel::SQLTime.create(1, 2, 3, 500000)).should == "'01:02:04'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5.5)).should == "'2010-01-02 03:04:06'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(55, 10))).should == "'2010-01-02 03:04:06'"
+
+    @dataset.literal(Sequel::SQLTime.create(1, 2, 3, 499999)).should == "'01:02:03'"
+    @dataset.literal(Time.local(2010, 1, 2, 3, 4, 5.4999999)).should == "'2010-01-02 03:04:05'"
+    @dataset.literal(DateTime.new(2010, 1, 2, 3, 4, Rational(54999999, 10000000))).should == "'2010-01-02 03:04:05'"
+  end
+end
+else
+  skip_warn "round_timestamps extension: only works on ruby 1.9+"
+end
diff --git a/spec/extensions/serialization_modification_detection_spec.rb b/spec/extensions/serialization_modification_detection_spec.rb
index c43bab8..a26fa8c 100644
--- a/spec/extensions/serialization_modification_detection_spec.rb
+++ b/spec/extensions/serialization_modification_detection_spec.rb
@@ -78,7 +78,7 @@ describe "serialization_modification_detection plugin" do
     @o1.changed_columns.should == [:h]
   end
 
-  it "should work with frozen objects" do
+  it "should work with duplicating objects" do
     @o2.changed_columns.should == []
     o = @o2.dup
     @o2.h.should == {}
@@ -86,4 +86,13 @@ describe "serialization_modification_detection plugin" do
     @o2.changed_columns.should == [:h]
     o.changed_columns.should == []
   end
+
+  it "should work with duplicating objects after modifying them" do
+    @o2.changed_columns.should == []
+    @o2.h.should == {}
+    @o2.h[1] = 2
+    @o2.changed_columns.should == [:h]
+    o = @o2.dup
+    o.changed_columns.should == [:h]
+  end
 end
diff --git a/spec/extensions/serialization_spec.rb b/spec/extensions/serialization_spec.rb
index faf84ee..63ec6b3 100644
--- a/spec/extensions/serialization_spec.rb
+++ b/spec/extensions/serialization_spec.rb
@@ -26,6 +26,24 @@ describe "Serialization plugin" do
     DB.sqls.last.should =~ /INSERT INTO items \((ghi)\) VALUES \('\[123\]'\)/
   end
 
+  it "should handle validations of underlying column" do
+    @c.plugin :serialization, :yaml, :abc
+    o = @c.new
+    def o.validate
+      errors.add(:abc, "not present") unless self[:abc]
+    end
+    o.valid?.should == false
+    o.abc = {}
+    o.valid?.should == true
+  end
+
+  it "should set column values even when not validating" do
+    @c.set_primary_key :id
+    @c.plugin :serialization, :yaml, :abc
+    @c.load({:id=>1}).set(:abc=>{}).save(:validate=>false)
+    DB.sqls.last.gsub("\n", '').should == "UPDATE items SET abc = '--- {}' WHERE (id = 1)"
+  end
+
   it "should allow serializing attributes to yaml" do
     @c.plugin :serialization, :yaml, :abc
     @c.create(:abc => 1)
diff --git a/spec/extensions/single_table_inheritance_spec.rb b/spec/extensions/single_table_inheritance_spec.rb
index 197fa9d..78f1c28 100644
--- a/spec/extensions/single_table_inheritance_spec.rb
+++ b/spec/extensions/single_table_inheritance_spec.rb
@@ -85,6 +85,11 @@ describe Sequel::Model, "single table inheritance plugin" do
     o.valid?.should == true
   end
 
+  it "should set type column field even if validations are skipped" do
+    StiTestSub1.new.save(:validate=>false)
+    DB.sqls.should == ["INSERT INTO sti_tests (kind) VALUES ('StiTestSub1')", "SELECT * FROM sti_tests WHERE ((sti_tests.kind IN ('StiTestSub1')) AND (id = 10)) LIMIT 1"]
+  end
+
   it "should override an existing value in the class name field" do
     StiTest.create(:kind=>'StiTestSub1')
     DB.sqls.should == ["INSERT INTO sti_tests (kind) VALUES ('StiTestSub1')", "SELECT * FROM sti_tests WHERE (id = 10) LIMIT 1"]
diff --git a/spec/extensions/timestamps_spec.rb b/spec/extensions/timestamps_spec.rb
index 0b505d4..175672d 100644
--- a/spec/extensions/timestamps_spec.rb
+++ b/spec/extensions/timestamps_spec.rb
@@ -32,6 +32,12 @@ describe "Sequel::Plugins::Timestamps" do
     o.valid?.should == true
   end
 
+  it "should set timestamp fields when skipping validations" do
+    @c.plugin :timestamps
+    @c.new.save(:validate=>false)
+    @c.db.sqls.should == ["INSERT INTO t (created_at) VALUES ('2009-08-01')"]
+  end
+
   it "should set the create timestamp field on creation" do
     o = @c.create
     @c.db.sqls.should == ["INSERT INTO t (created_at) VALUES ('2009-08-01')"]
diff --git a/spec/integration/plugin_test.rb b/spec/integration/plugin_test.rb
index 8ea2d06..c26215e 100644
--- a/spec/integration/plugin_test.rb
+++ b/spec/integration/plugin_test.rb
@@ -1,8 +1,5 @@
 require File.join(File.dirname(File.expand_path(__FILE__)), 'spec_helper.rb')
 
-# DB2 does not seem to support USING joins in every version; it seems to be
-# valid expression in DB2 iSeries UDB though.
-unless !DB.dataset.supports_join_using? || Sequel.guarded?(:db2)
 describe "Class Table Inheritance Plugin" do
   before(:all) do
     @db = DB
@@ -37,7 +34,7 @@ describe "Class Table Inheritance Plugin" do
     class ::Executive < Manager
     end 
     class ::Staff < Employee
-      many_to_one :manager, :qualify=>false
+      many_to_one :manager
     end 
     
     @i1 =@db[:employees].insert(:name=>'E', :kind=>'Employee')
@@ -96,10 +93,10 @@ describe "Class Table Inheritance Plugin" do
   end
   
   specify "should handle associations only defined in subclasses" do
-    Employee.filter(:id=>@i2).all.first.manager.id.should == @i4
+    Employee.filter(:employees__id=>@i2).all.first.manager.id.should == @i4
   end
 
-  cspecify "should insert rows into all tables", [proc{|db| db.sqlite_version < 30709}, :sqlite] do
+  specify "should insert rows into all tables" do
     e = Executive.create(:name=>'Ex2', :num_managers=>8, :num_staff=>9)
     i = e.id
     @db[:employees][:id=>i].should == {:id=>i, :name=>'Ex2', :kind=>'Executive'}
@@ -138,13 +135,12 @@ describe "Class Table Inheritance Plugin" do
     Executive.limit(1).eager(:staff_members).first.staff_members.should == [Staff[@i2]]
   end
   
-  cspecify "should handle eagerly graphing one_to_many relationships", [proc{|db| db.sqlite_version < 30709}, :sqlite] do
+  specify "should handle eagerly graphing one_to_many relationships" do
     es = Executive.limit(1).eager_graph(:staff_members).all
     es.should == [Executive[@i4]]
     es.map{|x| x.staff_members}.should == [[Staff[@i2]]]
   end
 end
-end
 
 describe "Many Through Many Plugin" do
   before(:all) do
@@ -1494,6 +1490,11 @@ describe "List plugin without a scope" do
     proc { @c[:name => "def"].move_up(10) }.should raise_error(Sequel::Error)
     proc { @c[:name => "def"].move_down(10) }.should raise_error(Sequel::Error)
   end
+
+  it "should update positions on destroy" do
+    @c[:name => "def"].destroy
+    @c.select_map([:position, :name]).should == [[1, 'abc'], [2, 'hig']]
+  end
 end
 
 describe "List plugin with a scope" do
@@ -1572,6 +1573,11 @@ describe "List plugin with a scope" do
     proc { @c[:name => "P1"].move_up(10) }.should raise_error(Sequel::Error)
     proc { @c[:name => "P1"].move_down(10) }.should raise_error(Sequel::Error)
   end
+
+  it "should update positions on destroy" do
+    @c[:name => "P2"].destroy
+    @c.select_order_map([:pos, :name]).should == [[1, "Hm"], [1, "P1"], [1, "Ps"], [2, "Au"], [2, "P3"]]
+  end
 end
 
 describe "Sequel::Plugins::Tree" do
@@ -1761,6 +1767,49 @@ describe "Sequel::Plugins::PreparedStatements" do
   end
 end
 
+describe "Sequel::Plugins::PreparedStatements with schema changes" do
+  before do
+    @db = DB
+    @db.create_table!(:ps_test) do
+      primary_key :id
+      String :name
+    end
+    @c = Class.new(Sequel::Model(@db[:ps_test]))
+    @c.many_to_one :ps_test, :key=>:id, :class=>@c
+    @c.one_to_many :ps_tests, :key=>:id, :class=>@c
+    @c.many_to_many :mps_tests, :left_key=>:id, :right_key=>:id, :class=>@c, :join_table=>:ps_test___x
+    @c.plugin :prepared_statements
+    @c.plugin :prepared_statements_associations
+  end
+  after do
+    @db.drop_table?(:ps_test)
+  end
+
+  it "should handle added columns" do 
+    foo = @c.create(:name=>'foo')
+    @c[foo.id].name.should == 'foo'
+    foo.ps_test.name.should == 'foo'
+    foo.ps_tests.map{|x| x.name}.should == %w'foo'
+    foo.mps_tests.map{|x| x.name}.should == %w'foo'
+    foo.update(:name=>'foo2')
+    @c[foo.id].name.should == 'foo2'
+    foo.delete
+    foo.exists?.should == false
+
+    @db.alter_table(:ps_test){add_column :i, Integer}
+
+    foo = @c.create(:name=>'foo')
+    @c[foo.id].name.should == 'foo'
+    foo.ps_test.name.should == 'foo'
+    foo.ps_tests.map{|x| x.name}.should == %w'foo'
+    foo.mps_tests.map{|x| x.name}.should == %w'foo'
+    foo.update(:name=>'foo2')
+    @c[foo.id].name.should == 'foo2'
+    foo.delete
+    foo.exists?.should == false
+  end
+end
+
 describe "Caching plugins" do
   before(:all) do
     @db = DB
diff --git a/spec/integration/prepared_statement_test.rb b/spec/integration/prepared_statement_test.rb
index 513fc97..316f01f 100644
--- a/spec/integration/prepared_statement_test.rb
+++ b/spec/integration/prepared_statement_test.rb
@@ -117,6 +117,12 @@ describe "Prepared Statements and Bound Arguments" do
     @ds.filter(:id=>2).first[:numb].should == 20
   end
 
+  specify "should support bound variables with insert_select" do
+    @ds.call(:insert_select, {:n=>20}, :numb=>:$n).should == {:id=>2, :numb=>20}
+    @ds.count.should == 2
+    @ds.order(:id).map(:numb).should == [10, 20]
+  end if DB.dataset.supports_insert_select?
+
   specify "should support bound variables with delete" do
     @ds.filter(:numb=>:$n).call(:delete, :n=>10).should == 1
     @ds.count.should == 0
@@ -228,6 +234,12 @@ describe "Prepared Statements and Bound Arguments" do
     @ds.filter(:id=>2).first[:numb].should == 20
   end
 
+  specify "should support prepared_statements with insert_select" do
+    @ds.prepare(:insert_select, :insert_select_n, :numb=>:$n).call(:n=>20).should == {:id=>2, :numb=>20}
+    @ds.count.should == 2
+    @ds.order(:id).map(:numb).should == [10, 20]
+  end if DB.dataset.supports_insert_select?
+
   specify "should support prepared statements with delete" do
     @ds.filter(:numb=>:$n).prepare(:delete, :delete_n)
     @db.call(:delete_n, :n=>10).should == 1
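Both new insert_select specs use call styles that already exist on datasets. A short sketch of the two, assuming an adapter where supports_insert_select? is true (e.g. PostgreSQL with RETURNING); the table, placeholder, and statement names are illustrative:

    ds = DB[:items]

    # Bound variables: values are supplied with the call itself.
    ds.call(:insert_select, {:n=>20}, :numb=>:$n)   # => e.g. {:id=>2, :numb=>20}

    # Prepared statement: name it once, then call it by name with values.
    ds.prepare(:insert_select, :insert_select_n, :numb=>:$n)
    DB.call(:insert_select_n, :n=>20)               # => e.g. {:id=>2, :numb=>20}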
diff --git a/spec/integration/schema_test.rb b/spec/integration/schema_test.rb
index 4522242..550a9e0 100644
--- a/spec/integration/schema_test.rb
+++ b/spec/integration/schema_test.rb
@@ -142,6 +142,13 @@ describe "Database schema parser" do
     DB.create_table!(:items){FalseClass :number}
     DB.schema(:items).first.last[:type].should == :boolean
   end
+
+  specify "should parse maximum length for string columns" do
+    DB.create_table!(:items){String :a, :size=>4}
+    DB.schema(:items).first.last[:max_length].should == 4
+    DB.create_table!(:items){String :a, :fixed=>true, :size=>3}
+    DB.schema(:items).first.last[:max_length].should == 3
+  end
 end if DB.supports_schema_parsing?
 
 describe "Database index parsing" do
diff --git a/spec/model/associations_spec.rb b/spec/model/associations_spec.rb
index 002147c..d3e780b 100644
--- a/spec/model/associations_spec.rb
+++ b/spec/model/associations_spec.rb
@@ -1890,6 +1890,30 @@ describe Sequel::Model, "many_to_many" do
     end
   end
   
+  it "should not override a selection consisting completely of qualified columns using Sequel::SQL::QualifiedIdentifier" do
+    @c1.dataset = @c1.dataset.select(Sequel.qualify(:attributes, :id), Sequel.qualify(:attributes, :b))
+    @c2.many_to_many :attributes, :class => @c1
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.id, attributes.b FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)'
+  end
+  
+  it "should not override a selection consisting completely of qualified columns using symbols" do
+    @c1.dataset = @c1.dataset.select(:attributes__id, :attributes__b)
+    @c2.many_to_many :attributes, :class => @c1
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.id, attributes.b FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)'
+  end
+  
+  it "should not override a selection consisting completely of qualified columns using Sequel::SQL::AliasedExpression" do
+    @c1.dataset = @c1.dataset.select(Sequel.qualify(:attributes, :id).as(:a), Sequel.as(:attributes__b, :c))
+    @c2.many_to_many :attributes, :class => @c1
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.id AS a, attributes.b AS c FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)'
+  end
+  
+  it "should override a selection consisting of non qualified columns" do
+    @c1.dataset = @c1.dataset.select{foo(:bar)}
+    @c2.many_to_many :attributes, :class => @c1
+    @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id = 1234)'
+  end
+  
   it "should respect :eager_loader_predicate_key when lazily loading" do
     @c2.many_to_many :attributes, :class => @c1, :eager_loading_predicate_key=>Sequel.subscript(:attributes_nodes__node_id, 0)
     @c2.new(:id => 1234).attributes_dataset.sql.should == 'SELECT attributes.* FROM attributes INNER JOIN attributes_nodes ON (attributes_nodes.attribute_id = attributes.id) WHERE (attributes_nodes.node_id[0] = 1234)'
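The four new many_to_many specs pin down when a custom selection on the associated class survives. A condensed sketch of the rule they encode, with illustrative model names (Attribute/Node) standing in for the spec's @c1/@c2:

    # Fully qualified selection: kept as-is in the association dataset.
    Attribute.dataset = Attribute.dataset.select(Sequel.qualify(:attributes, :id),
                                                 Sequel.qualify(:attributes, :b))
    Node.many_to_many :attributes, :class=>Attribute
    Node.new(:id=>1234).attributes_dataset.sql
    # => "SELECT attributes.id, attributes.b FROM attributes INNER JOIN ..."

    # Unqualified selection: replaced with attributes.* as before.
    Attribute.dataset = Attribute.dataset.select{foo(:bar)}
    Node.many_to_many :attributes, :class=>Attribute
    Node.new(:id=>1234).attributes_dataset.sql
    # => "SELECT attributes.* FROM attributes INNER JOIN ..."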
diff --git a/spec/model/eager_loading_spec.rb b/spec/model/eager_loading_spec.rb
index 84867ea..3275afd 100644
--- a/spec/model/eager_loading_spec.rb
+++ b/spec/model/eager_loading_spec.rb
@@ -762,6 +762,15 @@ describe Sequel::Model, "#eager" do
     a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
     DB.sqls.should == []
   end
+
+  it "should respect the :limit option on a one_to_many association with an association block" do
+    EagerAlbum.one_to_many :tracks, :class=>'EagerTrack', :key=>:album_id, :order=>:name, :limit=>2 do |ds| ds.where(:a=>1) end
+    a = EagerAlbum.eager(:tracks).all
+    a.should == [EagerAlbum.load(:id => 1, :band_id => 2)]
+    DB.sqls.should == ['SELECT * FROM albums', 'SELECT * FROM (SELECT * FROM tracks WHERE ((a = 1) AND (1 = tracks.album_id)) ORDER BY name LIMIT 2) AS t1']
+    a.first.tracks.should == [EagerTrack.load(:id => 3, :album_id=>1)]
+    DB.sqls.should == []
+  end
   
   it "should respect the :limit option on a one_to_many association using the :window_function strategy" do
     def (EagerTrack.dataset).supports_window_functions?() true end
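The new eager-loading spec checks that a :limit option and an association block compose: the block's filter ends up inside the limited eager-load query, as the expected SQL above shows. A short usage sketch with illustrative names:

    Album.one_to_many :recent_tracks, :class=>:Track, :key=>:album_id,
                      :order=>:name, :limit=>2 do |ds|
      ds.where(:a=>1)      # applied inside the limited subquery, per the spec's SQL
    end

    albums = Album.eager(:recent_tracks).all
    albums.first.recent_tracks   # at most two tracks per album, already loaded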
diff --git a/spec/model/model_spec.rb b/spec/model/model_spec.rb
index 8cfefb0..d0708ab 100644
--- a/spec/model/model_spec.rb
+++ b/spec/model/model_spec.rb
@@ -824,10 +824,10 @@ describe Sequel::Model, ".[]" do
     DB.sqls.should == ["SELECT * FROM items WHERE name = 'sharon'"]
   end
 
-  it "should return the first record for the given pk for a filtered dataset" do
-    @c.dataset = @c.dataset.filter(:active=>true)
+  it "should use a qualified primary key if the dataset is joined" do
+    @c.dataset = @c.dataset.cross_join(:a)
     @c[1].should == @c.load(:name => 'sharon', :id => 1)
-    DB.sqls.should == ["SELECT * FROM items WHERE ((active IS TRUE) AND (id = 1)) LIMIT 1"]
+    DB.sqls.should == ["SELECT * FROM items CROSS JOIN a WHERE (items.id = 1) LIMIT 1"]
   end
 
   it "should work correctly for composite primary key specified as array" do
@@ -905,6 +905,16 @@ describe "Model.db_schema" do
     @c.primary_key.should == :x
   end
   
+  specify "should automatically set a singular primary key even if there are specific columns selected" do
+    ds = @dataset.select(:a, :b, :x)
+    d = ds.db
+    def d.schema(table, *opts) [[:a, {:primary_key=>false}], [:b, {:primary_key=>false}], [:x, {:primary_key=>true}]] end
+    @c.primary_key.should == :id
+    @c.dataset = ds
+    @c.db_schema.should == {:a=>{:primary_key=>false}, :b=>{:primary_key=>false}, :x=>{:primary_key=>true}}
+    @c.primary_key.should == :x
+  end
+  
   specify "should automatically set the composite primary key based on the schema" do
     ds = @dataset
     d = ds.db
diff --git a/spec/model/record_spec.rb b/spec/model/record_spec.rb
index 2f383e4..a1d6e58 100644
--- a/spec/model/record_spec.rb
+++ b/spec/model/record_spec.rb
@@ -57,7 +57,7 @@ describe "Model#save" do
     ds._fetch = {:y=>2}
     def ds.supports_insert_select?() true end
     def ds.insert_select(hash)
-      execute("INSERT INTO items (y) VALUES (2) RETURNING *"){|r| return r}
+      with_sql_first("INSERT INTO items (y) VALUES (2) RETURNING *")
     end
     o = @c.new(:x => 1)
     o.save
@@ -72,6 +72,23 @@ describe "Model#save" do
     @c.new(:x => 1).save
   end
 
+  it "should use dataset's insert_select method if the dataset uses returning, even if specific columns are selected" do
+    def (@c.dataset).supports_returning?(_) true end
+    ds = @c.dataset = @c.dataset.select(:y).returning(:y)
+    DB.reset
+    ds = @c.instance_dataset
+    ds._fetch = {:y=>2}
+    def ds.supports_insert_select?() true end
+    def ds.insert_select(hash)
+      with_sql_first("INSERT INTO items (y) VALUES (2) RETURNING y")
+    end
+    o = @c.new(:x => 1)
+    o.save
+    
+    o.values.should == {:y=>2}
+    DB.sqls.should == ["INSERT INTO items (y) VALUES (2) RETURNING y"]
+  end
+
   it "should use value returned by insert as the primary key and refresh the object" do
     o = @c.new(:x => 11)
     o.save
@@ -783,6 +800,12 @@ describe Sequel::Model, "#this" do
     instance.this.sql.should == "SELECT * FROM examples WHERE (a = 3) LIMIT 1"
   end
 
+  it "should use a qualified primary key if the dataset is joined" do
+    @example.dataset = @example.dataset.cross_join(:a)
+    instance = @example.load(:id => 3)
+    instance.this.sql.should == "SELECT * FROM examples CROSS JOIN a WHERE (examples.id = 3) LIMIT 1"
+  end
+
   it "should support composite primary keys" do
     @example.set_primary_key [:x, :y]
     instance = @example.load(:x => 4, :y => 5)
diff --git a/www/pages/documentation.html.erb b/www/pages/documentation.html.erb
index f3bd2af..f1ffd06 100644
--- a/www/pages/documentation.html.erb
+++ b/www/pages/documentation.html.erb
@@ -98,6 +98,8 @@
 <h3>Presentations</h3>
 
 <ul>
+<li><a href="http://code.jeremyevans.net/presentations/rubyc2014-2/index.html">"Give-and-Go with PostgreSQL and Sequel" Presentation at RubyC 2014</a> (<a href="http://www.youtube.com/watch?v=toAcnwqlU1Q">Video</a>)</li>
+<li><a href="http://code.jeremyevans.net/presentations/rubyc2014-1/index.html">"Deep Dive Into Eager Loading Limited Associations" Presentation at RubyC 2014</a> (<a href="http://www.youtube.com/watch?v=KbF7RMk_2Qo">Video</a>)</li>
 <li><a href="http://code.jeremyevans.net/presentations/railsclub2013/index.html?trans=no">"Give-and-Go with PostgreSQL and Sequel" Presentation at RailsClub 2013</a> (<a href="http://code.jeremyevans.net/presentations/railsclub2013/index.html">In Russian</a>) (<a href="http://live.digicast.ru/ru/view/2116">Video</a>, starts about 6:30)</li>
 <li><a href="http://code.jeremyevans.net/presentations/heroku201205/index.html">"The Development of Sequel" Presentation in May 2012 at Heroku</a></li>
 <li><a href="http://code.jeremyevans.net/presentations/pgwest2011/index.html">"Sequel: The Database Toolkit for Ruby" Presentation at PostgreSQL Conference West 2011</a></li>
diff --git a/www/pages/plugins.html.erb b/www/pages/plugins.html.erb
index ab2bf97..be72534 100644
--- a/www/pages/plugins.html.erb
+++ b/www/pages/plugins.html.erb
@@ -26,6 +26,7 @@
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/ForceEncoding.html">force_encoding</a>: Forces the all model column string values to a given encoding.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/InputTransformer.html">input_transformer</a>: Automatically transform input to model column setters.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/LazyAttributes.html">lazy_attributes</a>: Allows you to set some attributes that should not be loaded by default, but only loaded when an object requests them.</li>
+<li><a href="rdoc-plugins/classes/Sequel/Plugins/ModificationDetection.html">modification_detection</a>: Automatically detect in-place changes to column values.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/StringStripper.html">string_stripper</a>: Strips strings assigned to model attributes.</li>
 </ul></li>
 <li>Caching:<ul>
@@ -59,6 +60,11 @@
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/UpdateOrCreate.html">update_or_create</a>: Adds helper methods for updating an object if it exists, or creating such an object if it does not.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/UpdatePrimaryKey.html">update_primary_key</a>: Allows you to safely update the primary key of a model object.</li>
 </ul></li>
+<li>Selection:<ul>
+<li><a href="rdoc-plugins/classes/Sequel/Plugins/ColumnSelect.html">column_select</a>: Selects explicitly qualified columns (table.column1, table.column2, ...) instead of just * for model datasets.</li>
+<li><a href="rdoc-plugins/classes/Sequel/Plugins/InsertReturningSelect.html">insert_returning_select</a>: Automatically sets RETURNING clause for INSERT queries for models that use an explicit column selection.</li>
+<li><a href="rdoc-plugins/classes/Sequel/Plugins/TableSelect.html">table_select</a>: Selects table.* instead of just * for model datasets.</li>
+</ul></li>
 <li>Serialization:<ul>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/Composition.html">composition</a>: Supports defining getters/setters for objects with data backed by the model's columns.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/JsonSerializer.html">json_serializer</a>: Allows you to serialize/deserialize model objects to/from JSON.</li>
@@ -81,7 +87,6 @@
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/Schema.html">schema</a>: Adds backwards compatibility for Model.set_schema and Model.create_table.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/Scissors.html">scissors</a>: Adds class methods for delete, destroy, and update.</li>
 <li><a href="rdoc-plugins/classes/Sequel/Plugins/Subclasses.html">subclasses</a>: Allows easy access all model subclasses and descendent classes, without using ObjectSpace.</li>
-<li><a href="rdoc-plugins/classes/Sequel/Plugins/TableSelect.html">table_select</a>: Selects table.* instead of just * for model datasets.</li>
 <li><a href='rdoc-plugins/classes/Sequel/Plugins/TypecastOnLoad.html'>typecast_on_load</a>: Fixes bad database typecasting when loading model objects.</li>
 </ul></li>
 </ul>
@@ -124,6 +129,7 @@
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/from_block_rb.html">from_block</a>: Makes blocks passed to Database#from affect FROM clause instead of WHERE clause.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/looser_typecasting_rb.html">looser_typecasting</a>: Uses .to_f and .to_i instead of Kernel.Float and Kernel.Integer when typecasting floats and integers.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/pg_array_rb.html">pg_array</a>: Adds support for PostgreSQL arrays.</li>
+<li><a href="rdoc-plugins/files/lib/sequel/extensions/pg_enum_rb.html">pg_enum</a>: Adds support for PostgreSQL enums.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/pg_hstore_rb.html">pg_hstore</a>: Adds support for the PostgreSQL hstore type.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/pg_inet_rb.html">pg_inet</a>: Adds support for the PostgreSQL inet and cidr types.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/pg_interval_rb.html">pg_interval</a>: Adds support for the PostgreSQL interval type.</li>
@@ -144,6 +150,7 @@
 <ul>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/columns_introspection_rb.html">columns_introspection</a>: Attemps to skip database queries by introspecting the selected columns if possible.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/current_datetime_timestamp_rb.html">current_datetime_timestamp</a>: Creates current Time/DateTime objects that are literalized as CURRENT_TIMESTAMP.</li>
+<li><a href="rdoc-plugins/files/lib/sequel/extensions/dataset_source_alias_rb.html">dataset_source_alias</a>: Automatically aliases datasets to their source instead of using t1, t2, etc..</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/date_arithmetic_rb.html">date_arithmetic</a>: Allows for database-independent date calculations (adding/subtracting an interval to/from a date/timestamp).</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/empty_array_ignore_nulls_rb.html">empty_array_ignore_nulls</a>: Makes Sequel's handling of IN/NOT IN with an empty ignore correct NULL handling.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/filter_having_rb.html">filter_having</a>: Makes Dataset#filter, #and, #or, and #having operate on HAVING clause if HAVING clause is already present.</li>
@@ -155,6 +162,7 @@
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/pretty_table_rb.html">pretty_table</a>: Adds Dataset#print for printing a dataset as a simple plain-text table.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/query_rb.html">query</a>: Adds Dataset#query for a different interface to creating queries that doesn't use method chaining.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/query_literals_rb.html">query_literals</a>: Automatically uses literal strings for regular strings in select, group, and order methods (similar to filter methods).</li>
+<li><a href="rdoc-plugins/files/lib/sequel/extensions/round_timestamps_rb.html">round_timestamps</a>: Automatically round timestamp values to database supported precision before literalizing them.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/select_remove_rb.html">select_remove</a>: Adds Dataset#select_remove to remove selected columns from a dataset.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/sequel_3_dataset_methods_rb.html">sequel_3_dataset_methods</a>: Adds Dataset#[]=, #insert_multiple, #qualify_to, #qualify_to_first_source, #set, #to_csv, #db=, and #opts= methods.</li>
 <li><a href="rdoc-plugins/files/lib/sequel/extensions/set_overrides_rb.html">set_overrides</a>: Adds Dataset#set_defaults and #set_overrides for setting default values in some INSERT/UPDATE statements.</li>

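The plugins page now groups the new selection plugins separately. A minimal sketch of enabling two of them, assuming ordinary models named Album and Artist backed by albums/artists tables:

    class Album  < Sequel::Model(:albums);  end
    class Artist < Sequel::Model(:artists); end

    Album.plugin :column_select     # explicit qualified column list instead of *
    Album.dataset.sql               # e.g. "SELECT albums.id, albums.name FROM albums"

    Artist.plugin :table_select     # qualified star, useful once the dataset is joined
    Artist.dataset.sql              # e.g. "SELECT artists.* FROM artists"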
-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-ruby-extras/ruby-sequel.git



More information about the Pkg-ruby-extras-commits mailing list