How to Import Massive Amounts of Data from Oracle into MongoDB

I. Background

Because of a business requirement, we need to move tens of millions of rows from Oracle into MongoDB. Exporting them with PL/SQL Developer would be slow and would also consume a lot of bandwidth. We found that sqluldr2 exports data very quickly, so the demonstration below uses sqluldr2.


Overall approach

Export the data from Oracle to CSV, then load it into the MongoDB database with the mongoimport tool.
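A minimal sketch of the whole pipeline (the user names, passwords, and table names below are placeholders; the real commands are shown later in the article):

sqluldr2 scott/tiger@orcl query="select * from emp" head=yes file=/tmp/emp.csv
mongoimport -u mongouser -p mongopass --db mydb --collection emp --type csv --headerline --ignoreBlanks --file /tmp/emp.csv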

Download

Official download: http://www.anysql.net/software/sqluldr.zip
Official download: http://www.onexsoft.com/zh/download

II. Installing the Tool

Put the program in the oracle user's home directory. The first time it runs it will fail, because it goes looking for the libclntsh.so library, which is not on the library search path. The file can be found under the Oracle installation directory, so a symbolic link is all that is needed:

ln -s /u01/oracle/11.0.2.4/lib/libclntsh.so /usr/lib64
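If you would rather not touch /usr/lib64, the library directory can instead be added to the loader path for the current session (a sketch assuming ORACLE_HOME points at /u01/oracle/11.0.2.4):

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH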

III. Tool Parameters

  • Switch to the oracle user and run the tool; with no arguments it prints the usage below:
SQL*UnLoader: Fast Oracle Text Unloader (GZIP, Parallel), Release 4.0.1
(@) Copyright Lou Fangxin (AnySQL.net) 2004 - 2010, all rights reserved.

License: Free for non-commercial useage, else 100 USD per server.

Usage: SQLULDR2 keyword=value [,keyword=value,...]

Valid Keywords:
  user    = username/password@tnsname  
  sql     = SQL file name  
  query   = select statement  
  field   = separator string between fields  
  record  = separator string between records  
  rows    = print progress for every given rows (default, 1000000)   
  file    = output file name(default: uldrdata.txt)  
  log     = log file name, prefix with + to append mode  
  fast    = auto tuning the session level parameters(YES)  
  text    = output type (MySQL, CSV, MYSQLINS, ORACLEINS, FORM, SEARCH).  
  charset = character set name of the target database.  
  ncharset= national character set name of the target database.  
  parfile = read command option from parameter file   
  read    = set DB_FILE_MULTIBLOCK_READ_COUNT at session level  
  sort    = set SORT_AREA_SIZE at session level (UNIT:MB)   
  hash    = set HASH_AREA_SIZE at session level (UNIT:MB)   
  array   = array fetch size  
  head    = print row header(Yes|No)  
  batch   = save to new file for every rows batch (Yes/No)  
  size    = maximum output file piece size (UNIB:MB)  
  serial  = set _serial_direct_read to TRUE at session level  
  trace   = set event 10046 to given level at session level  
  table   = table name in the sqlldr control file  
  control = sqlldr control file and path.  
  mode    = sqlldr option, INSERT or APPEND or REPLACE or TRUNCATE   
  buffer  = sqlldr READSIZE and BINDSIZE, default 16 (MB)  
  long    = maximum long field size  
  width   = customized max column width (w1:w2:...)   
  quote   = optional quote string   
  data    = disable real data unload (NO, OFF)   
  alter   = alter session SQLs to be execute before unload   
  safe    = use large buffer to avoid ORA-24345 error (Yes|No)   
  crypt   = encrypted user information only (Yes|No)   
  sedf/t  = enable character translation function   
  null    = replace null with given value   
  escape  = escape character for special characters  
  escf/t  = escape from/to characters list   
  format  = MYSQL: MySQL Insert SQLs, SQL: Insert SQLs.  
  exec    = the command to execute the SQLs.  
  prehead = column name prefix for head line.  
  rowpre  = row prefix string for each line.  
  rowsuf  = row sufix string for each line.  
  colsep  = separator string between column name and value.  
  presql  = SQL or scripts to be executed before data unload.  
  postsql = SQL or scripts to be executed after data unload.  
  lob     = extract lob values to single file (FILE).  
  lobdir  = subdirectory count to store lob files .  
  split   = table name for automatically parallelization.  
  degree  = parallelize data copy degree (2-128).  

1. The data to export is controlled by query

If you are exporting a whole table, the query parameter can simply be the table name. If you need expressions or a WHERE condition, use query="SQL text"; a complex SQL statement can also be written to a text file and referenced instead of being passed inline.
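For example (the columns, condition, and file paths here are made up for illustration; the sql keyword from the help above reads the statement from a file):

sqluldr2 testuser/testuser query="select id, name from chen.tt1 where id > 1000" file=tt1.csv
sqluldr2 testuser/testuser sql=/home/oracle/complex_query.sql file=tt1.csv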

2. Separator settings

The default field separator is a comma; use the field parameter to specify a different one.

sqluldr2 testuser/testuser query=chen.tt1 field=";"
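If the data itself can contain the separator, the quote parameter listed in the help above can be used to wrap each field; a sketch assuming sqluldr2 accepts hex codes such as 0x22 for the double-quote character:

sqluldr2 testuser/testuser query=chen.tt1 field=";" quote=0x22 file=tt1.csv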

3. Handling large data volumes

For a large table the output can be split across multiple files, either every given number of rows or by file size, for example (the %B in the file name is replaced with the piece number):

sqluldr2 testuser/testuser@orcl query="select * from test_table2" file=test_table2_%B.txt batch=yes rows=500000

IV. Running the Export

1. Local execution

The user keyword can be omitted, in the same way expdp accepts username/passwd directly:

export ORACLE_SID=orcl
sqluldr2 testuser/testuser  query="select * from test" file=test_table1.txt

2. TNS connection

sqluldr2 user=testuser/testuser@orcl  query="select * from test" file=test_table1.txt

3. Easy connect

sqluldr2 user=testuser/testuser@x.x.x.x:1521/orcl  query="select * from test" file=test_table1.txt

Follow the syntax strictly: there must be no spaces on either side of the equals sign.

V. Example

Once everything is ready, switch to the oracle user and run the following command.

[oracle@cookie ~]$ ./sqluldr2linux64.bin user=gather/gapass@orcl query="dmp_user_center" head=yes file=/home/oracle/dmp.csv       
           0 rows exported at 2018-10-09 14:40:27, size 0 MB.
     1000000 rows exported at 2018-10-09 14:40:36, size 80 MB.
     2000000 rows exported at 2018-10-09 14:40:43, size 144 MB.
     3000000 rows exported at 2018-10-09 14:40:50, size 212 MB.
     4000000 rows exported at 2018-10-09 14:40:57, size 276 MB.
     5000000 rows exported at 2018-10-09 14:41:04, size 340 MB.
     6000000 rows exported at 2018-10-09 14:41:11, size 404 MB.
     7000000 rows exported at 2018-10-09 14:41:18, size 460 MB.
     8000000 rows exported at 2018-10-09 14:41:25, size 504 MB.
     9000000 rows exported at 2018-10-09 14:41:31, size 548 MB.
     9403362 rows exported at 2018-10-09 14:41:34, size 568 MB.
         output file /home/oracle/dmp.csv closed at 9403362 rows, size 568 MB.

1. This was a full-table export, so query contains only the table name.
2. head=yes keeps the header row.
3. The speed is impressive: nearly ten million rows were exported in about a minute, and on a newer machine it would be faster still.

VI. Importing the Data into MongoDB

[root@mbasic ~]# mongoimport -udmp -p dmp --db dmp --collection dmp_user_center --type csv --headerline --ignoreBlanks --file dmp.csv    
2018-10-09T14:49:13.580+0800    connected to: localhost
2018-10-09T14:49:16.551+0800    [........................] dmp.dmp_user_center  5.9 MB/568.5 MB (1.0%)
2018-10-09T14:49:19.551+0800    [........................] dmp.dmp_user_center  11.7 MB/568.5 MB (2.1%)
2018-10-09T14:49:22.551+0800    [........................] dmp.dmp_user_center  17.7 MB/568.5 MB (3.1%)
2018-10-09T14:49:25.551+0800    [........................] dmp.dmp_user_center  23.4 MB/568.5 MB (4.1%)
2018-10-09T14:49:28.551+0800    [#.......................] dmp.dmp_user_center  29.1 MB/568.5 MB (5.1%)
2018-10-09T14:49:31.551+0800    [#.......................] dmp.dmp_user_center  35.0 MB/568.5 MB (6.2%)

2018-10-09T14:54:49.551+0800    [#######################.] dmp.dmp_user_center  563.0 MB/568.5 MB (99.0%)
2018-10-09T14:54:52.551+0800    [#######################.] dmp.dmp_user_center  567.4 MB/568.5 MB (99.8%)
2018-10-09T14:54:53.447+0800    [########################] dmp.dmp_user_center  568.5 MB/568.5 MB (100.0%)
2018-10-09T14:54:53.447+0800    imported 9403362 documents
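As a quick sanity check, the imported document count can be compared with the row count sqluldr2 reported; a sketch from the mongo shell (assuming the same credentials as above), which should return 9403362 if every row made it in:

[root@mbasic ~]# mongo dmp -u dmp -p dmp --eval "db.dmp_user_center.count()"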
