Sean Davis / mysql2postgres

Commits

Paul Gallagher  committed df1c70b

remove original mysql2psql script, rename README as rdoc

  • Parent commits 970e439
  • Branches master


Files changed (3)

File README

-Tool for converting a MySQL database to PostgreSQL.
-It can create a PostgreSQL dump from a MySQL database, or load data directly from MySQL into
-PostgreSQL (at about 100,000 records per minute).
-
-It can now translate most data types and indexes, but if you experience any problems, feel free
-to contact me and I'll help you.
-
-
-After editing config.yml and launching the tool, you will see something like:
-
-Creating table friendships...
-Created table friendships
-Loading friendships...
-620000 of 638779 rows loaded. [ETA: 2010/01/21 21:32 (00h:00m:01s)]
-638779 rows loaded in 1min 3s
-Indexing table friendships...
-Indexed table friendships
-Table creation 0 min, loading 1 min, indexing 0 min, total 1 min
-
-== Running tests
-
-
-Integration tests require suitable MySQL and PostgreSQL databases to be set up in advance:
-
-mysql on localhost:3306
-- a database named "mysql2psql_test"
-- a user "mysql2psql" with no password
-- e.g.
-  mysqladmin -uroot -p create mysql2psql_test
-  mysql -uroot -p -e "grant all on mysql2psql_test.* to 'mysql2psql'@'localhost';"
-  # verify connection:
-  mysql -umysql2psql -e "select database(),'yes' as connected" mysql2psql_test
-  
-postgres on localhost:5432
-- a database named "mysql2psql_test"
-- a role (user) "mysql2psql" with login and no password
-- e.g.
-  psql postgres -c "create role mysql2psql with login"
-  psql postgres -c "create database mysql2psql_test with owner mysql2psql encoding='UTF8'"
-  # verify connection:
-  psql mysql2psql_test -U mysql2psql -c "\c"
-
-==
-# note from mgkimsal
-I'm still having trouble with bit(1)/boolean fields.
-The workaround I've found is to write the output to a file,
-then in Vim search/replace the true/false binary fields with t/f.
-Specifically:
-
-(reversed on 3/23 - should be ^A gets f)
-:%s/^@/t/g
-:%s/^A/f/g
-The keystrokes are ctrl-v ctrl-shift-@ to enter the 'true' binary field,
-and ctrl-v ctrl-shift-A to enter the 'false' binary field.
-
-
-# This tool converts a mysql database to postgresql.
-# Main repository is http://github.com/maxlapshin/mysql2postgres 
-# Copyright 2009-2010 Max Lapshin <max@maxidoors.ru>
-#
-# Other contributors (in git log order):
-# Anton Ageev <anton@ageev.name>
-# Samuel Tribehou <cracoucax@gmail.com>
-# Marco Nenciarini <marco.nenciarini@devise.it>
-# James Nobis <jnobis@jnobis.controldocs.com>
-# quel <github@quelrod.net>
-# Holger Amann <keeney@fehu.org>
-# Maxim Dobriakov <closer.main@gmail.com>
-# Michael Kimsal <mgkimsal@gmail.com>
-# Jacob Coby <jcoby@portallabs.com>
-# Neszt Tibor <neszt@tvnetwork.hu>
-# Miroslav Kratochvil <exa.exa@gmail.com>
-#
-# config.yml.sample is reproduced below, in case you have only this file
-#
-# -----------------------
-# mysql:
-#  hostname: localhost
-#  port: 3306
-#  socket: /tmp/mysql.sock
-#  username: somename
-#  password: secretpassword
-#  database: somename 
-# 
-# destination:
-#  # if file is given, output goes to file, else postgres
-#  file:
-#  postgres:
-#   hostname: localhost
-#   port: 5432
-#   username: somename
-#   password: secretpassword
-#   database: somename
-# 
-# # if tables is given, only the listed tables will be converted.  leave empty to convert all tables.
-# #tables:
-# #- table1
-# #- table2
-# #- table3
-# #- table4
-# 
-# # if exclude_tables is given, exclude the listed tables from the conversion.
-# #exclude_tables:
-# #- table5
-# #- table6
-#
-# -----------------------
-#
-#
-# Copy the contents between the dashed lines into config.yml, edit it according to your settings,
-# and run mysql2psql. It will load the mysql database directly into postgres, autoconverting everything.
-
- 
-

File README.rdoc

+Tool for converting a MySQL database to PostgreSQL.
+It can create a PostgreSQL dump from a MySQL database, or load data directly from MySQL into
+PostgreSQL (at about 100,000 records per minute).
+
+It can now translate most data types and indexes, but if you experience any problems, feel free
+to contact me and I'll help you.
+
+
+After editing config.yml and launching the tool, you will see something like:
+
+Creating table friendships...
+Created table friendships
+Loading friendships...
+620000 of 638779 rows loaded. [ETA: 2010/01/21 21:32 (00h:00m:01s)]
+638779 rows loaded in 1min 3s
+Indexing table friendships...
+Indexed table friendships
+Table creation 0 min, loading 1 min, indexing 0 min, total 1 min
+
+== Running tests
+
+
+Integration tests require suitable MySQL and PostgreSQL databases to be set up in advance:
+
+mysql on localhost:3306
+- a database named "mysql2psql_test"
+- a user "mysql2psql" with no password
+- e.g.
+  mysqladmin -uroot -p create mysql2psql_test
+  mysql -uroot -p -e "grant all on mysql2psql_test.* to 'mysql2psql'@'localhost';"
+  # verify connection:
+  mysql -umysql2psql -e "select database(),'yes' as connected" mysql2psql_test
+  
+postgres on localhost:5432
+- a database named "mysql2psql_test"
+- a role (user) "mysql2psql" with login and no password
+- e.g.
+  psql postgres -c "create role mysql2psql with login"
+  psql postgres -c "create database mysql2psql_test with owner mysql2psql encoding='UTF8'"
+  # verify connection:
+  psql mysql2psql_test -U mysql2psql -c "\c"
+
+==
+# note from mgkimsal
+I'm still having trouble with bit(1)/boolean fields.
+The workaround I've found is to write the output to a file,
+then in Vim search/replace the true/false binary fields with t/f.
+Specifically:
+
+(reversed on 3/23 - should be ^A gets f)
+:%s/^@/t/g
+:%s/^A/f/g
+The keystrokes are ctrl-v ctrl-shift-@ to enter the 'true' binary field,
+and ctrl-v ctrl-shift-A to enter the 'false' binary field.
+
+
+# This tool converts a mysql database to postgresql.
+# Main repository is http://github.com/maxlapshin/mysql2postgres 
+# Copyright 2009-2010 Max Lapshin <max@maxidoors.ru>
+#
+# Other contributors (in git log order):
+# Anton Ageev <anton@ageev.name>
+# Samuel Tribehou <cracoucax@gmail.com>
+# Marco Nenciarini <marco.nenciarini@devise.it>
+# James Nobis <jnobis@jnobis.controldocs.com>
+# quel <github@quelrod.net>
+# Holger Amann <keeney@fehu.org>
+# Maxim Dobriakov <closer.main@gmail.com>
+# Michael Kimsal <mgkimsal@gmail.com>
+# Jacob Coby <jcoby@portallabs.com>
+# Neszt Tibor <neszt@tvnetwork.hu>
+# Miroslav Kratochvil <exa.exa@gmail.com>
+#
+# config.yml.sample is reproduced below, in case you have only this file
+#
+# -----------------------
+# mysql:
+#  hostname: localhost
+#  port: 3306
+#  socket: /tmp/mysql.sock
+#  username: somename
+#  password: secretpassword
+#  database: somename 
+# 
+# destination:
+#  # if file is given, output goes to file, else postgres
+#  file:
+#  postgres:
+#   hostname: localhost
+#   port: 5432
+#   username: somename
+#   password: secretpassword
+#   database: somename
+# 
+# # if tables is given, only the listed tables will be converted.  leave empty to convert all tables.
+# #tables:
+# #- table1
+# #- table2
+# #- table3
+# #- table4
+# 
+# # if exclude_tables is given, exclude the listed tables from the conversion.
+# #exclude_tables:
+# #- table5
+# #- table6
+#
+# -----------------------
+#
+#
+# Copy the contents between the dashed lines into config.yml, edit it according to your settings,
+# and run mysql2psql. It will load the mysql database directly into postgres, autoconverting everything.
+
+ 
+

File mysql2psql

-#!/usr/bin/env ruby
-#
-# This tool converts a mysql database to postgresql.
-# Main repository is http://github.com/maxlapshin/mysql2postgres 
-# Copyright 2009-2010 Max Lapshin <max@maxidoors.ru>
-#
-# Other contributors (in git log order):
-# Anton Ageev <anton@ageev.name>
-# Samuel Tribehou <cracoucax@gmail.com>
-# Marco Nenciarini <marco.nenciarini@devise.it>
-# James Nobis <jnobis@jnobis.controldocs.com>
-# quel <github@quelrod.net>
-# Holger Amann <keeney@fehu.org>
-# Maxim Dobriakov <closer.main@gmail.com>
-# Michael Kimsal <mgkimsal@gmail.com>
-# Jacob Coby <jcoby@portallabs.com>
-# Neszt Tibor <neszt@tvnetwork.hu>
-# Miroslav Kratochvil <exa.exa@gmail.com>
-#
-# config.yml.sample is reproduced below, in case you have only this file
-#
-# -----------------------
-# mysql:
-#  hostname: localhost
-#  port: 3306
-#  socket: /tmp/mysql.sock
-#  username: somename
-#  password: secretpassword
-#  database: somename 
-# 
-# destination:
-#  # if file is given, output goes to file, else postgres
-#  file:
-#  postgres:
-#   hostname: localhost
-#   port: 5432
-#   username: somename
-#   password: secretpassword
-#   database: somename
-# 
-# # if tables is given, only the listed tables will be converted.  leave empty to convert all tables.
-# #tables:
-# #- table1
-# #- table2
-# #- table3
-# #- table4
-# 
-# # if exclude_tables is given, exclude the listed tables from the conversion.
-# #exclude_tables:
-# #- table5
-# #- table6
-#
-# -----------------------
-#
-#
-# Copy the contents between the dashed lines into config.yml, edit it according to your settings,
-# and run mysql2psql. It will load the mysql database directly into postgres, autoconverting everything.
-
-
-
-require 'rubygems'
-require 'mysql'
-gem "pg"
-require 'pg'
-require 'yaml'
-
-class MysqlReader
-  class Field
-  end
-  
-  class Table
-    attr_reader :name
-    
-    def initialize(reader, name)
-      @reader = reader
-      @name = name
-    end
-    
-    @@types = %w(tiny enum decimal short long float double null timestamp longlong int24 date time datetime year set blob string var_string char).inject({}) do |list, type|
-      list[eval("Mysql::Field::TYPE_#{type.upcase}")] = type
-      list
-    end
-    
-    @@types[246] = "decimal"
-    
-    def columns
-      @columns ||= load_columns
-    end
-    
-    def convert_type(type)
-      case type
-      when /int.* unsigned/
-        "bigint"
-      when /bigint/
-        "bigint"
-      when "bit(1)"
-        "boolean"
-      when "tinyint(1)"
-        "boolean"
-      when /tinyint/
-        "tinyint"
-      when /int/
-        "integer"
-      when /varchar/
-        "varchar"
-      when /char/
-        "char"
-      when /(float|decimal)/
-        "decimal"
-      when /double/
-         "double precision"
-      else
-        type
-      end 
-    end
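For reference, the mapping above can be exercised on its own; this is a condensed standalone copy of convert_type (no Mysql or pg dependency), behaving exactly as the case statement reads:

```ruby
# Condensed standalone copy of Table#convert_type above: maps a MySQL
# column type string to its PostgreSQL counterpart, falling through to
# the original string for types it does not recognize.
def convert_type(type)
  case type
  when /int.* unsigned/       then "bigint"
  when /bigint/               then "bigint"
  when "bit(1)", "tinyint(1)" then "boolean"
  when /tinyint/              then "tinyint"
  when /int/                  then "integer"
  when /varchar/              then "varchar"
  when /char/                 then "char"
  when /(float|decimal)/      then "decimal"
  when /double/               then "double precision"
  else type
  end
end

puts convert_type("tinyint(1)")        # boolean
puts convert_type("int(11) unsigned")  # bigint
puts convert_type("datetime")          # datetime (passed through)
```

Note the ordering matters: `tinyint(1)` must be caught before the looser /tinyint/ and /int/ patterns.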
-    
-    def load_columns
-      @reader.reconnect
-      result = @reader.mysql.list_fields(name)
-      mysql_flags = Mysql::Field.constants.select {|c| c =~ /FLAG/}
-      fields = []
-      @reader.mysql.query("EXPLAIN `#{name}`") do |res|
-        while field = res.fetch_row do
-          length = field[1][/\((\d+)\)/, 1] if field[1] =~ /\((\d+)\)/
-          length = field[1][/\((\d+),(\d+)\)/, 1] if field[1] =~ /\((\d+),(\d+)\)/
-          desc = {
-            :name => field[0],
-            :table_name => name,
-            :type => convert_type(field[1]),
-            :length => length && length.to_i,
-            :decimals => field[1][/\((\d+),(\d+)\)/, 2],
-            :null => field[2] == "YES",
-            :primary_key => field[3] == "PRI",
-            :auto_increment => field[5] == "auto_increment"
-            }
-          desc[:default] = field[4] unless field[4].nil?
-          fields << desc
-        end
-      end
- 
-      fields.select {|field| field[:auto_increment]}.each do |field|
-        @reader.mysql.query("SELECT max(`#{field[:name]}`) FROM `#{name}`") do |res|
-          field[:maxval] = res.fetch_row[0].to_i
-        end
-      end
-      fields
-    end
-    
-    
-    def indexes
-      load_indexes unless @indexes
-      @indexes 
-    end
- 
-    def foreign_keys
-      load_indexes unless @foreign_keys
-      @foreign_keys
-    end
-    
-    def load_indexes
-      @indexes = []
-      @foreign_keys = []
-      
-      @reader.mysql.query("SHOW CREATE TABLE `#{name}`") do |result|
-        explain = result.fetch_row[1]
-        explain.split(/\n/).each do |line|
-          next unless line =~ / KEY /
-          index = {}
-          if match_data = /CONSTRAINT `(\w+)` FOREIGN KEY \(`(\w+)`\) REFERENCES `(\w+)` \(`(\w+)`\)/.match(line)
-            index[:name] = match_data[1]
-            index[:column] = match_data[2]
-            index[:ref_table] = match_data[3]
-            index[:ref_column] = match_data[4]
-            @foreign_keys << index
-          elsif match_data = /KEY `(\w+)` \((.*)\)/.match(line)
-            index[:name] = match_data[1]
-            index[:columns] = match_data[2].split(",").map {|col| col[/`(\w+)`/, 1]}
-            index[:unique] = true if line =~ /UNIQUE/
-            @indexes << index
-          elsif match_data = /PRIMARY KEY .*\((.*)\)/.match(line)
-            index[:primary] = true
-            index[:columns] = match_data[1].split(",").map {|col| col.strip.gsub(/`/, "")}
-            @indexes << index
-          end
-        end
-      end
-    end
-    
-    def count_rows
-      @reader.mysql.query("SELECT COUNT(*) FROM `#{name}`")  do |res|
-        return res.fetch_row[0].to_i
-      end
-    end
-    
-    def has_id?
-      !!columns.find {|col| col[:name] == "id"} 
-    end
-    
-    def count_for_pager
-      query = has_id? ? 'MAX(id)' : 'COUNT(*)'
-      @reader.mysql.query("SELECT #{query} FROM `#{name}`") do |res|
-        return res.fetch_row[0].to_i
-      end
-    end
- 
-    def query_for_pager
-      query = has_id? ? 'WHERE id >= ? AND id < ?' : 'LIMIT ?,?'
-      "SELECT #{columns.map{|c| "`"+c[:name]+"`"}.join(", ")} FROM `#{name}` #{query}"
-    end
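query_for_pager thus emits one of two SQL shapes: an id-range predicate when the table has an `id` column (pages are keyed rather than offset-scanned), or a `LIMIT offset,size` clause otherwise. A standalone sketch, with an invented table and column list:

```ruby
# Sketch of the SQL text built by Table#query_for_pager above;
# `name`, `columns`, and `has_id` stand in for the Table state.
def query_for_pager(name, columns, has_id)
  query = has_id ? 'WHERE id >= ? AND id < ?' : 'LIMIT ?,?'
  "SELECT #{columns.map { |c| "`#{c}`" }.join(", ")} FROM `#{name}` #{query}"
end

puts query_for_pager("friendships", %w[id user_id friend_id], true)
# SELECT `id`, `user_id`, `friend_id` FROM `friendships` WHERE id >= ? AND id < ?
```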
-  end
-  
-  def connect
-    @mysql = Mysql.connect(@host, @user, @passwd, @db, @port, @sock, @flag)
-    @mysql.query("SET NAMES utf8")
-    @mysql.query("SET SESSION query_cache_type = OFF")
-  end
-  
-  def reconnect
-    @mysql.close rescue false
-    connect
-  end
-  
-  def initialize(host = nil, user = nil, passwd = nil, db = nil, port = nil, sock = nil, flag = nil)
-    @host, @user, @passwd, @db, @port, @sock, @flag = host, user, passwd, db, port, sock, flag
-    connect
-  end
-  
-  attr_reader :mysql
-  
-  def tables
-    @tables ||= @mysql.list_tables.map {|table| Table.new(self, table)}
-  end
-  
-  def paginated_read(table, page_size)
-    count = table.count_for_pager
-    return if count < 1
-    statement = @mysql.prepare(table.query_for_pager)
-    counter = 0
-    0.upto((count + page_size)/page_size) do |i|
-      statement.execute(i*page_size, table.has_id? ? (i+1)*page_size : page_size)
-      while row = statement.fetch
-        counter += 1
-        yield(row, counter)
-      end
-    end
-    counter
-  end
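The window arguments paginated_read binds into that statement follow a simple rule: id-keyed tables get a [low, high) id range, everything else gets a LIMIT offset,size pair. Just the arithmetic, as a sketch (the helper name `page_args` is ours):

```ruby
# The two bind-argument shapes used by MysqlReader#paginated_read above,
# for page number i (0-based) and a fixed page_size.
def page_args(i, page_size, has_id)
  has_id ? [i * page_size, (i + 1) * page_size] : [i * page_size, page_size]
end

p page_args(0, 1000, true)   # [0, 1000]    -> id >= 0 AND id < 1000
p page_args(2, 1000, false)  # [2000, 1000] -> LIMIT 2000,1000
```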
-end
- 
-class Writer
-end
- 
- 
-class PostgresWriter < Writer
-  def column_description(column)
-    "#{PGconn.quote_ident(column[:name])} #{column_type_info(column)}"
-  end
-  
-  def column_type(column)
-    column_type_info(column).split(" ").first
-  end
-  
-  def column_type_info(column)
-    if column[:auto_increment]
-      return "integer DEFAULT nextval('#{column[:table_name]}_#{column[:name]}_seq'::regclass) NOT NULL"
-    end
-    
-    default = column[:default] ? " DEFAULT #{column[:default] == nil ? 'NULL' : "'"+PGconn.escape(column[:default])+"'"}" : nil
-    null = column[:null] ? "" : " NOT NULL"
-    type = 
-    case column[:type]
-    
-    # String types
-    when "char"
-      default = default + "::char" if default
-      "character(#{column[:length]})"
-    when "varchar"
-      default = default + "::character varying" if default
-#      puts "VARCHAR: #{column.inspect}"
-      "character varying(#{column[:length]})"
-      
-    # Integer and numeric types
-    when "integer"
-      default = " DEFAULT #{column[:default].nil? ? 'NULL' : column[:default].to_i}" if default
-      "integer"
-    when "bigint"
-      default = " DEFAULT #{column[:default].nil? ? 'NULL' : column[:default].to_i}" if default
-      "bigint"
-    when "tinyint"
-      default = " DEFAULT #{column[:default].nil? ? 'NULL' : column[:default].to_i}" if default
-      "smallint"
-    
-    when "boolean"
-      default = " DEFAULT #{column[:default].to_i == 1 ? 'true' : 'false'}" if default
-      "boolean"
-    when "float"
-      default = " DEFAULT #{column[:default].nil? ? 'NULL' : column[:default].to_f}" if default
-      "real"
-    when "float unsigned"
-      default = " DEFAULT #{column[:default].nil? ? 'NULL' : column[:default].to_f}" if default
-      "real"
-    when "decimal"
-      default = " DEFAULT #{column[:default].nil? ? 'NULL' : column[:default]}" if default
-      "numeric(#{column[:length] || 10}, #{column[:decimals] || 0})"
-
-    when "double precision"
-      default = " DEFAULT #{column[:default].nil? ? 'NULL' : column[:default]}" if default
-      "double precision"
-
-    # Mysql datetime fields
-    when "datetime"
-      default = nil
-      "timestamp without time zone"
-    when "date"
-      default = nil
-      "date"
-    when "timestamp"
-      default = " DEFAULT CURRENT_TIMESTAMP" if column[:default] == "CURRENT_TIMESTAMP"
-      default = " DEFAULT '1970-01-01 00:00'" if column[:default] == "0000-00-00 00:00"
-      default = " DEFAULT '1970-01-01 00:00:00'" if column[:default] == "0000-00-00 00:00:00"
-      "timestamp without time zone"
-    when "time"
-      default = " DEFAULT NOW()" if default
-      "time without time zone"
-
-    when "tinyblob"
-      "bytea"
-    when "mediumblob"
-      "bytea"
-    when "longblob"
-      "bytea"
-    when "blob"
-      "bytea"
-    when "varbinary"
-      "bytea"
-    when "tinytext"
-      "text"
-    when "mediumtext"
-      "text"
-    when "longtext"
-      "text"
-    when "text"
-      "text"
-    when /^enum/
-      default = default + "::character varying" if default
-      enum = column[:type].gsub(/enum|\(|\)/, '')
-      max_enum_size = enum.split(',').map{ |check| check.size() -2}.sort[-1]
-      "character varying(#{max_enum_size}) check( #{column[:name]} in (#{enum}))"
-    else
-      puts "Unknown #{column.inspect}"
-      column[:type].inspect
-      return ""
-    end
-    "#{type}#{default}#{null}"
-  end
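The enum branch is the least obvious of these: it sizes a varchar to the longest label (each quoted label's length minus its two quotes) and guards the column with a CHECK constraint. That branch as a standalone sketch (helper name ours):

```ruby
# Standalone copy of the /^enum/ branch above: an enum column becomes a
# varchar wide enough for its longest label, constrained to the labels.
def enum_to_varchar(name, type)
  enum = type.gsub(/enum|\(|\)/, '')   # "enum('a','b')" -> "'a','b'"
  max_enum_size = enum.split(',').map { |label| label.size - 2 }.max
  "character varying(#{max_enum_size}) check( #{name} in (#{enum}))"
end

puts enum_to_varchar("status", "enum('new','archived')")
# character varying(8) check( status in ('new','archived'))
```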
-  
-  def process_row(table, row)
-  	table.columns.each_with_index do |column, index|
-
-        if column[:type] == "time"
-          row[index] = "%02d:%02d:%02d" % [row[index].hour, row[index].minute, row[index].second]
-        end
-        
-        if row[index].is_a?(Mysql::Time)
-          row[index] = row[index].to_s.gsub('0000-00-00 00:00', '1970-01-01 00:00')
-          row[index] = row[index].to_s.gsub('0000-00-00 00:00:00', '1970-01-01 00:00:00')
-        end
-        
-        if column_type(column) == "boolean"
-          row[index] = row[index] == 1 ? 't' : row[index] == 0 ? 'f' : row[index]
-        end
-        
-        if row[index].is_a?(String)
-          if column_type(column) == "bytea"
-            row[index] = PGconn.escape_bytea(row[index])
-          else
-            row[index] = row[index].gsub(/\\/, '\\\\\\').gsub(/\n/,'\n').gsub(/\t/,'\t').gsub(/\r/,'\r').gsub(/\0/, '')
-          end
-        end
-        
-        row[index] = '\N' if !row[index]
-      end
-  end
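The string escaping in process_row targets PostgreSQL's COPY text format: backslashes are doubled first, then tab/newline/CR become the literal two-character sequences \t, \n, \r, and NUL bytes (which text COPY cannot carry) are dropped. The non-bytea branch as a pure function (helper name ours):

```ruby
# The non-bytea string escaping from process_row above, as a pure function.
def copy_escape(value)
  value.gsub(/\\/, '\\\\\\')  # double every backslash first
       .gsub(/\n/, '\n')      # newline -> the two characters \ n
       .gsub(/\t/, '\t')      # tab     -> \ t
       .gsub(/\r/, '\r')      # CR      -> \ r
       .gsub(/\0/, '')        # NUL bytes are dropped entirely
end

puts copy_escape("a\tb\nc")  # a\tb\nc  (now literal escapes, one line)
```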
-  
-  def truncate(table)
-  end
-  
-end
- 
-class PostgresFileWriter < PostgresWriter
-  def initialize(file)
-    @f = File.open(file, "w+")
-    @f << <<-EOF
--- MySQL 2 PostgreSQL dump\n
-SET client_encoding = 'UTF8';
-SET standard_conforming_strings = off;
-SET check_function_bodies = false;
-SET client_min_messages = warning;
- 
-EOF
-  end
-  
-  def truncate(table)
-    serial_key = nil
-    maxval = nil
-    
-    table.columns.map do |column|
-      if column[:auto_increment]
-        serial_key = column[:name]
-        maxval = column[:maxval].to_i < 1 ? 1 : column[:maxval] + 1
-      end
-    end
-
-    @f << <<-EOF
--- TRUNCATE #{table.name};
-TRUNCATE #{PGconn.quote_ident(table.name)} CASCADE;
-
-EOF
-    if serial_key
-    @f << <<-EOF
-SELECT pg_catalog.setval(pg_get_serial_sequence('#{table.name}', '#{serial_key}'), #{maxval}, true);
-EOF
-    end
-  end
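The setval target computed in truncate (and again in write_table) is simply one past the auto_increment column's current maximum, floored at 1 for empty tables; as a one-liner (helper name ours):

```ruby
# The sequence-restart value used with pg_catalog.setval above:
# one past the current MAX of the auto_increment column, minimum 1.
def next_serial_value(maxval)
  maxval.to_i < 1 ? 1 : maxval + 1
end

p next_serial_value(nil)      # 1   (empty table)
p next_serial_value(638_779)  # 638780
```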
-  
-  def write_table(table)
-    primary_keys = []
-    serial_key = nil
-    maxval = nil
-    
-    columns = table.columns.map do |column|
-      if column[:auto_increment]
-        serial_key = column[:name]
-        maxval = column[:maxval].to_i < 1 ? 1 : column[:maxval] + 1
-      end
-      if column[:primary_key]
-        primary_keys << column[:name]
-      end
-      "  " + column_description(column)
-    end.join(",\n")
-    
-    if serial_key
-      
-      @f << <<-EOF
---
--- Name: #{table.name}_#{serial_key}_seq; Type: SEQUENCE; Schema: public
---
- 
-DROP SEQUENCE IF EXISTS #{table.name}_#{serial_key}_seq CASCADE;
- 
-CREATE SEQUENCE #{table.name}_#{serial_key}_seq
-    INCREMENT BY 1
-    NO MAXVALUE
-    NO MINVALUE
-    CACHE 1;
-    
-    
-SELECT pg_catalog.setval('#{table.name}_#{serial_key}_seq', #{maxval}, true);
- 
-      EOF
-    end
-    
-    @f << <<-EOF
--- Table: #{table.name}
- 
--- DROP TABLE #{table.name};
-DROP TABLE IF EXISTS #{PGconn.quote_ident(table.name)} CASCADE;
- 
-CREATE TABLE #{PGconn.quote_ident(table.name)} (
-EOF
-  
-    @f << columns
- 
-    if primary_index = table.indexes.find {|index| index[:primary]}
-      @f << ",\n  CONSTRAINT #{table.name}_pkey PRIMARY KEY(#{primary_index[:columns].map {|col| PGconn.quote_ident(col)}.join(", ")})"
-    end
-    
-    @f << <<-EOF
-\n)
-WITHOUT OIDS;
-EOF
-  
-    table.indexes.each do |index|
-      next if index[:primary]
-      unique = index[:unique] ? "UNIQUE " : nil
-      @f << <<-EOF
-DROP INDEX IF EXISTS #{PGconn.quote_ident(index[:name])} CASCADE;
-CREATE #{unique}INDEX #{PGconn.quote_ident(index[:name])} ON #{PGconn.quote_ident(table.name)} (#{index[:columns].map {|col| PGconn.quote_ident(col)}.join(", ")});
-EOF
-    end
- 
-  end
-  
-  def write_indexes(table)
-  end
-  
-  def write_constraints(table)
-    table.foreign_keys.each do |key|
-      @f << "ALTER TABLE #{PGconn.quote_ident(table.name)} ADD FOREIGN KEY (#{PGconn.quote_ident(key[:column])}) REFERENCES #{PGconn.quote_ident(key[:ref_table])}(#{PGconn.quote_ident(key[:ref_column])});\n"
-    end
-  end
-  
-  
-  def write_contents(table, reader)
-    @f << <<-EOF
---
--- Data for Name: #{table.name}; Type: TABLE DATA; Schema: public
---
-
-COPY "#{table.name}" (#{table.columns.map {|column| PGconn.quote_ident(column[:name])}.join(", ")}) FROM stdin;
-EOF
-    
-    reader.paginated_read(table, 1000) do |row, counter|
-      line = []
-      process_row(table, row)
-      @f << row.join("\t") + "\n"
-    end
-    @f << "\\.\n\n"
-    #@f << "VACUUM FULL ANALYZE #{PGconn.quote_ident(table.name)};\n\n"
-  end
-  
-  def close
-    @f.close
-  end
-end
- 
-class PostgresDbWriter < PostgresWriter
-  def connection(hostname, login, password, database, port)
-    database, schema = database.split(":")
-    @conn = PGconn.new(hostname, port.to_s, '', '', database, login, password)
-    @conn.exec("SET search_path TO #{PGconn.quote_ident(schema)}") if schema
-  end
-  
-  def initialize(hostname, login, password, database, port = 5432)
-    connection(hostname, login, password, database, port)
-    @conn.exec("SET client_encoding = 'UTF8'")
-    @conn.exec("SET standard_conforming_strings = off") if @conn.server_version >= 80200
-    @conn.exec("SET check_function_bodies = false")
-    @conn.exec("SET client_min_messages = warning")
-  end
-
-  def exists?(relname)
-    rc = @conn.select_one("SELECT COUNT(*) FROM pg_class WHERE relname = #{PGconn.quote(relname)}")
-    (!rc.nil?) && (!rc.empty?) && (rc.first.to_i > 0)
-  end
-  
-  def write_table(table)
-    puts "Creating table #{table.name}..."
-    primary_keys = []
-    serial_key = nil
-    maxval = nil
-    
-    columns = table.columns.map do |column|
-      if column[:auto_increment]
-        serial_key = column[:name]
-        maxval = column[:maxval].to_i < 1 ? 1 : column[:maxval] + 1
-      end
-      if column[:primary_key]
-        primary_keys << column[:name]
-      end
-      "  " + column_description(column)
-    end.join(",\n")
-    
-    if serial_key
-      if @conn.server_version < 80200
-        serial_key_seq = "#{table.name}_#{serial_key}_seq"
-        @conn.exec("DROP SEQUENCE #{serial_key_seq} CASCADE") if exists?(serial_key_seq)
-      else
-        @conn.exec("DROP SEQUENCE IF EXISTS #{table.name}_#{serial_key}_seq CASCADE")
-      end
-      @conn.exec <<-EOF
-        CREATE SEQUENCE #{table.name}_#{serial_key}_seq
-        INCREMENT BY 1
-        NO MAXVALUE
-        NO MINVALUE
-        CACHE 1
-      EOF
-    
-      @conn.exec "SELECT pg_catalog.setval('#{table.name}_#{serial_key}_seq', #{maxval}, true)"
-    end
-    
-    if @conn.server_version < 80200
-      @conn.exec "DROP TABLE #{PGconn.quote_ident(table.name)} CASCADE;" if exists?(table.name)
-    else
-      @conn.exec "DROP TABLE IF EXISTS #{PGconn.quote_ident(table.name)} CASCADE;"
-    end
-    create_sql = "CREATE TABLE #{PGconn.quote_ident(table.name)} (\n" + columns + "\n)\nWITHOUT OIDS;"
-    begin
-      @conn.exec(create_sql)
-    rescue Exception => e
-      puts "Error: \n#{create_sql}"
-      raise
-    end
-    puts "Created table #{table.name}"
- 
-  end
-  
-  def write_indexes(table)
-    puts "Indexing table #{table.name}..."
-    if primary_index = table.indexes.find {|index| index[:primary]}
-      @conn.exec("ALTER TABLE #{PGconn.quote_ident(table.name)} ADD CONSTRAINT \"#{table.name}_pkey\" PRIMARY KEY(#{primary_index[:columns].map {|col| PGconn.quote_ident(col)}.join(", ")})")
-    end
-    
-    table.indexes.each do |index|
-      next if index[:primary]
-      unique = index[:unique] ? "UNIQUE " : nil
-      
-      # MySQL allows an index name equal to a table name; Postgres doesn't
-      indexname = index[:name]
-      if indexname.eql?(table.name)
-        indexnamenew = "#{indexname}_index"
-        puts "WARNING: index \"#{indexname}\" equals table name. This is not allowed by postgres and will be renamed to \"#{indexnamenew}\""
-        indexname = indexnamenew
-      end
-      
-      if @conn.server_version < 80200
-        @conn.exec("DROP INDEX #{PGconn.quote_ident(indexname)} CASCADE;") if exists?(indexname)
-      else
-        @conn.exec("DROP INDEX IF EXISTS #{PGconn.quote_ident(indexname)} CASCADE;")
-      end
-      @conn.exec("CREATE #{unique}INDEX #{PGconn.quote_ident(indexname)} ON #{PGconn.quote_ident(table.name)} (#{index[:columns].map {|col| PGconn.quote_ident(col)}.join(", ")});")
-    end
-    
-    
-    #@conn.exec("VACUUM FULL ANALYZE #{PGconn.quote_ident(table.name)}")
-    puts "Indexed table #{table.name}"
-  rescue Exception => e
-    puts "Couldn't create indexes on #{table} (#{table.indexes.inspect})"
-    puts e
-    puts e.backtrace[0,3].join("\n")
-  end
-  
-  def write_constraints(table)
-    table.foreign_keys.each do |key|
-      key_sql = "ALTER TABLE #{PGconn.quote_ident(table.name)} ADD FOREIGN KEY (#{PGconn.quote_ident(key[:column])}) REFERENCES #{PGconn.quote_ident(key[:ref_table])}(#{PGconn.quote_ident(key[:ref_column])})"
-      begin
-        @conn.exec(key_sql)
-      rescue Exception => e
-        puts "Error: \n#{key_sql}\n#{e}"
-      end
-    end
-  end
-  
-  def format_eta (t)
-    t = t.to_i
-    sec = t % 60
-    min = (t / 60) % 60
-    hour = t / 3600
-    sprintf("%02dh:%02dm:%02ds", hour, min, sec)
-  end
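write_contents below combines this with a linear extrapolation: if `elapsed` seconds have loaded `counter` of `rowcount` rows, the remaining time is elapsed * rowcount / counter - elapsed. Put together:

```ruby
# format_eta from above, plus the ETA extrapolation used in write_contents.
def format_eta(t)
  t = t.to_i
  sprintf("%02dh:%02dm:%02ds", t / 3600, (t / 60) % 60, t % 60)
end

def eta_seconds(elapsed, counter, rowcount)
  elapsed * rowcount / counter - elapsed
end

# 20,000 of 100,000 rows in 60s -> 240s (four minutes) to go
puts format_eta(eta_seconds(60.0, 20_000, 100_000))  # 00h:04m:00s
```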
-  
-  def write_contents(table, reader)
-    _time1 = Time.now
-    copy_line = "COPY #{PGconn.quote_ident(table.name)} (#{table.columns.map {|column| PGconn.quote_ident(column[:name])}.join(", ")}) FROM stdin;"
-    @conn.exec(copy_line)
-    puts "Counting rows of #{table.name}... "
-    STDOUT.flush
-    rowcount = table.count_rows
-    puts "Rows counted"
-    puts "Loading #{table.name}..."
-    STDOUT.flush
-    _counter = reader.paginated_read(table, 1000) do |row, counter|
-      line = []
-      process_row(table, row)
-      @conn.put_copy_data(row.join("\t") + "\n")
-       
-      if counter != 0 && counter % 20000 == 0
-        elapsedTime = Time.now - _time1
-        eta = elapsedTime * rowcount / counter - elapsedTime
-        etaf = self.format_eta(eta)
-        etatimef = (Time.now + eta).strftime("%Y/%m/%d %H:%M")
-        printf "\r#{counter} of #{rowcount} rows loaded. [ETA: #{etatimef} (#{etaf})]"
-        STDOUT.flush
-      end
-      
-      if counter % 5000 == 0
-        @conn.put_copy_end
-        @conn.exec(copy_line)
-      end
-       
-    end
-    _time2 = Time.now
-    puts "\n#{_counter} rows loaded in #{((_time2 - _time1) / 60).round}min #{((_time2 - _time1) % 60).round}s"
-#    @conn.putline(".\n")
-    @conn.put_copy_end
-  end
-  
-  def close
-    @conn.close
-  end
-
-end
-
- 
-class Converter
-  attr_reader :reader, :writer, :options
-  
-  def initialize(reader, writer, options = {})
-    @reader = reader
-    @writer = writer
-    @options = options
-    @exclude_tables = options.exclude_tables([])
-    @only_tables = options.only_tables(nil)
-    @supress_data = options[:supress_data]
-    @supress_ddl = options[:supress_ddl]
-    @force_truncate = options[:force_truncate]
-  end
-  
-  def convert
-    _time1 = Time.now
-
-    tables = reader.tables.
-      reject {|table| @exclude_tables.include?(table.name)}.
-      select {|table| @only_tables ? @only_tables.include?(table.name) : true}
-
-
-    tables.each do |table|
-      writer.write_table(table)
-    end unless @supress_ddl
- 
-    _time2 = Time.now
-    tables.each do |table|
-      writer.truncate(table) if @force_truncate
-      writer.write_contents(table, reader)
-    end unless @supress_data
- 
-    _time3 = Time.now
-    tables.each do |table|
-      writer.write_indexes(table)
-    end unless @supress_ddl
-    tables.each do |table|
-      writer.write_constraints(table)
-    end unless @supress_ddl
- 
- 
-    writer.close
-    _time4 = Time.now
-    puts "Table creation #{((_time2 - _time1) / 60).round} min, loading #{((_time3 - _time2) / 60).round} min, indexing #{((_time4 - _time3) / 60).round} min, total #{((_time4 - _time1) / 60).round} min"
-  end
-end
-
-class YamlConfig
-  attr_accessor :config
-  class UninitializedValueError < StandardError
-	end
-  def initialize(filepath)
-    @config = YAML::load(File.read(filepath))
-  end
-  def [](key)
-    self.send( key )
-  end
-  def method_missing(name, *args)
-    token=name.to_s
-    default = args.length>0 ? args[0] : ''
-    must_be_defined = default == :none
-    case token
-    when /mysql/i
-      key=token.sub( /^mysql/, '' )
-      value=@config["mysql"][key]
-    when /pg/i
-      key=token.sub( /^pg/, '' )
-      value=@config["destination"]["postgres"][key]
-    when /dest/i
-      key=token.sub( /^dest/, '' )
-      value=@config["destination"][key]
-    when /only_tables/i
-      value=@config["tables"] 
-    else
-      value=@config[token]
-    end
-    value.nil? ? ( must_be_defined ? (raise UninitializedValueError.new) : default ) : value
-  end
-  
-end
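YamlConfig's method_missing turns accessor names into config lookups by prefix: mysqlFOO reads mysql/FOO, pgFOO reads destination/postgres/FOO, and the first argument serves as the default when the key is absent. A trimmed sketch (class name and sample YAML are ours; the error-raising :none default and dest prefix are omitted):

```ruby
require 'yaml'

# Trimmed sketch of YamlConfig's prefix dispatch above.
class TinyConfig
  def initialize(yaml_text)
    @config = YAML.load(yaml_text)
  end

  def method_missing(name, *args)
    token   = name.to_s
    default = args.length > 0 ? args[0] : ''
    value =
      case token
      when /^mysql/ then @config["mysql"][token.sub(/^mysql/, '')]
      when /^pg/    then @config["destination"]["postgres"][token.sub(/^pg/, '')]
      else               @config[token]
      end
    value.nil? ? default : value
  end
end

cfg = TinyConfig.new(<<~YAML)
  mysql:
    hostname: db1
  destination:
    postgres:
      port: 15432
YAML
puts cfg.mysqlhostname('localhost')  # db1
puts cfg.pgport(5432)                # 15432
puts cfg.mysqlport(3306)             # 3306 (absent key -> default)
```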
-
-options = YamlConfig.new( ARGV[0] || "#{File.dirname(__FILE__)}/config.yml" )
-
-reader = MysqlReader.new(
-  options.mysqlhostname('localhost'), options.mysqlusername, options.mysqlpassword, 
-  options.mysqldatabase, options.mysqlport, options.mysqlsocket )
-
-if options.destfile(nil)
-  writer = PostgresFileWriter.new(options.destfile)
-else
-  writer = PostgresDbWriter.new(
-    options.pghostname('localhost'), options.pgusername, options.pgpassword, 
-    options.pgdatabase, options.pgport(5432).to_i )
-end
-
-Converter.new(reader, writer, options).convert