
Shell scripts for optimizing MySQL tables, and common home-made PostgreSQL shell scripts


    MySQL table defragmentation

    • 1. Calculating the fragmentation size
    • 2. Defragmenting
      • 2.1 Rebuild the table with the alter table table_name engine = innodb command.
      • 2.2 Use the pt-online-schema-change tool to rebuild a table online and reclaim fragmentation.
      • 2.3 Defragment with the optimize table command.
    • 3. Shell script for defragmenting tables

    Since the company's databases hold a large amount of data, the tables in the company's MySQL databases are optimized on a schedule (OPTIMIZE is described below). With more than 300 tables in the database, doing this by hand is clearly impractical, so running it from a script works quite well. The script is shown below:

    Maintaining MySQL tables in production: check, optimize, and analyze

    On Linux this uses crontab to add the scheduled job and mysqldump to perform a simple database backup; the steps are as follows:

    1) Lightweight monitoring:
    1. Deployed on the PG host; it runs every 5 minutes and inserts into my own test PG database.
    [root@mysqltest tina_shell]# cat jk_pg.sh
    #!/bin/bash
    #for the relay databases 192.168.12.8 and 12.2
    running_port=`netstat -nat|grep "LISTEN"|grep "5432"|sed -n 2p|awk -F : '{print $4}'`
    jk_host=`ifconfig |grep "inet addr:192.168"|awk '{print $2}'|awk -F : '{print $2}'`
    record_time=`date "+%Y-%m-%d %H:%M:%S"`
    waiting_count=`ps -ef|grep postgres|grep -v startup |grep waiting|wc -l`
    streaming=`ps -ef|grep wal|grep streaming |awk '{print $15}'`
    #tbjk=`ps -ef|grep postgres|grep startup|grep waiting|wc -l`
    cipan=`df -ah |grep % |grep -v tmpfs|grep -v boot`
    usersum=`ps -ef|grep postgres |grep -E "engine|fenxi|sqluser" |wc -l`

    1. Calculating the fragmentation size

    To defragment, you first need to know how the fragmentation is calculated.

    You can check it with the show table status [from|in db_name] like '%table_name%' command:

    mysql> show table status from employees like 't1'\G
    *************************** 1. row ***************************
               Name: t1
             Engine: InnoDB
            Version: 10
         Row_format: Dynamic
               Rows: 1176484
     Avg_row_length: 86
        Data_length: 101842944
    Max_data_length: 0
       Index_length: 0
          Data_free: 39845888
     Auto_increment: NULL
        Create_time: 2018-08-28 13:40:19
        Update_time: 2018-08-28 13:50:43
         Check_time: NULL
          Collation: utf8mb4_general_ci
           Checksum: NULL
     Create_options: 
            Comment: 
    1 row in set (0.00 sec)
    

     

    Fragmentation size = total data size - actual space used by the rows

    • Total data size = Data_length = 101842944

    • Actual space used by the rows = Rows * Avg_row_length = 1176484 * 86 = 101177624

    • Fragmentation size = (101842944 - 101177624) / 1024 / 1024 ≈ 0.63MB
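
    The same arithmetic can be scripted instead of worked out by hand. A minimal sketch, assuming the /tmp/mysql3306.sock socket used by the script later in this article and the employees.t1 table (TABLE_ROWS and AVG_ROW_LENGTH are estimates for InnoDB, so the result is approximate):

    #!/bin/sh
    # estimate fragmentation as DATA_LENGTH - TABLE_ROWS * AVG_ROW_LENGTH, in MB
    socket=/tmp/mysql3306.sock
    /usr/local/mysql/bin/mysql -S $socket -N -e "
      SELECT TABLE_NAME,
             ROUND((DATA_LENGTH - TABLE_ROWS * AVG_ROW_LENGTH) / 1024 / 1024, 2) AS frag_mb
      FROM information_schema.TABLES
      WHERE TABLE_SCHEMA = 'employees' AND TABLE_NAME = 't1';"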

    You can also check the DATA_FREE column of information_schema.tables to see whether a table has fragmentation:

    SELECT t.TABLE_SCHEMA,
           t.TABLE_NAME,
           t.TABLE_ROWS,
           t.DATA_LENGTH,
           t.INDEX_LENGTH,
           concat(round(t.DATA_FREE / 1024 / 1024, 2), 'M') AS datafree
    FROM information_schema.tables t
    WHERE t.TABLE_SCHEMA = 'employees'
    
    
    +--------------+--------------+------------+-------------+--------------+----------+
    | TABLE_SCHEMA | TABLE_NAME   | TABLE_ROWS | DATA_LENGTH | INDEX_LENGTH | datafree |
    +--------------+--------------+------------+-------------+--------------+----------+
    | employees    | departments  |          9 |       16384 |        16384 | 0.00M    |
    | employees    | dept_emp     |     331143 |    12075008 |     11567104 | 0.00M    |
    | employees    | dept_manager |         24 |       16384 |        32768 | 0.00M    |
    | employees    | employees    |     299335 |    15220736 |            0 | 0.00M    |
    | employees    | salaries     |    2838426 |   100270080 |     36241408 | 5.00M    |
    | employees    | t1           |    1191784 |    48824320 |     17317888 | 5.00M    |
    | employees    | titles       |     442902 |    20512768 |     11059200 | 0.00M    |
    | employees    | ttt          |          2 |       16384 |            0 | 0.00M    |
    +--------------+--------------+------------+-------------+--------------+----------+
    8 rows in set (0.00 sec)
    

     

    The MySQL manual's description of OPTIMIZE:

     

    1. Write a script: /serverBack/autobackmysql.sh

    #echo $jk_host $record_time $waiting_count $streaming $tbjk >>/tmp/pg_check_state.log
    psql -h 192.168.12.31 -U postgres -p 1922 -d tina -c "insert into jk_pg(jk_host,record_time,waiting_count,streaming,running_port,cipan,usersum) values('$jk_host','$record_time','$waiting_count','$streaming','$running_port','$cipan','$usersum');"

    2. Defragmenting

    OPTIMIZE [LOCAL | NO_WRITE_TO_BINLOG] TABLE tbl_name [, tbl_name] …

     ㈠ optimize

    Its contents are as follows:

    2. Deploy the crontab
    cat /etc/crontab
    0 20 * * * root sh /tina_shell/backup.sh
    4 * * * * root sh /tina_shell/pg_delete_archivelog.sh
    */5 * * * * root sh /tina_shell/jk_pg.sh

    2.1 Rebuild the table with the alter table table_name engine = innodb command.

     root@localhost [employees] 14:27:01> alter table t1   engine=innodb;
    
     Query OK, 0 rows affected (5.69 sec)
     Records: 0  Duplicates: 0  Warnings: 0
    
     root@localhost [employees] 14:27:15> show table status like 't1'\G
     *************************** 1. row ***************************
               Name: t1
             Engine: InnoDB
            Version: 10
         Row_format: Dynamic
               Rows: 1191062
     Avg_row_length: 48
        Data_length: 57229312
    Max_data_length: 0
       Index_length: 0
          Data_free: 2097152
     Auto_increment: NULL
        Create_time: 2018-08-28 14:27:15
        Update_time: NULL
         Check_time: NULL
          Collation: utf8mb4_general_ci
           Checksum: NULL
     Create_options: 
            Comment: 
     1 row in set (0.00 sec)
    

     

    If you have deleted a large part of a table, or if you have made many changes to a table with variable-length rows (tables that have VARCHAR, BLOB, or TEXT columns), you should use

            

    Method 1:

    3. Create the table
    CREATE TABLE jk_pg
    (
      id serial NOT NULL,
      jk_host character varying, -- IP address of the monitored host
      record_time timestamp without time zone, -- time of the monitoring sample
      waiting_count integer, -- number of waiting processes: ps -ef|grep postgres|grep -v startup |grep waiting|wc -l
      streaming character varying, -- WAL position currently being streamed: ps -ef|grep wal|grep streaming |awk '{print $13}'
      usersum integer, -- current number of connected users (sqluser, engine, fenxi)
      tbjk integer, -- ps -ef|grep postgres|grep startup|grep waiting|wc -l
      running_port integer, -- checks whether PG is up; if port 5432 is not listed, PG is down
      cipan character varying, -- disk usage
      locks character varying, -- table lock status
      beizhu character varying -- remarks for anything unusual
    )
    WITH (
      OIDS=FALSE
    );
    COMMENT ON TABLE jk_pg  IS 'homemade monitoring table - tina';
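
    Once rows are flowing in, the table can be spot-checked directly. A minimal sketch, assuming the same 192.168.12.31:1922 tina database that jk_pg.sh inserts into:

    # show monitoring rows from the last hour that recorded waiting backends
    psql -h 192.168.12.31 -U postgres -p 1922 -d tina -c "
      SELECT jk_host, record_time, waiting_count, running_port
      FROM jk_pg
      WHERE record_time > now() - interval '1 hour'
        AND waiting_count > 0
      ORDER BY record_time DESC;"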

    2.2 The pt-online-schema-change tool can also rebuild a table online and reclaim fragmentation.

     [root@mysqldb1 14:29:29 /root]
     # pt-online-schema-change --alter="ENGINE=innodb" D=employees,t=t1 --execute
     Cannot chunk the original table `employees`.`t1`: There is no good index and the table is oversized. at /opt/percona-toolkit-3.0.11/bin/pt-online-schema-change line 5852.
    

     

     The table must have a primary key or a unique index for the tool to run
    
     [root@mysqldb1 14:31:16 /root]
    # pt-online-schema-change --alter='engine=innodb' D=employees,t=salaries --execute
    No slaves found.  See --recursion-method if host mysqldb1 has slaves.
    Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
    Operation, tries, wait:
      analyze_table, 10, 1
      copy_rows, 10, 0.25
      create_triggers, 10, 1
      drop_triggers, 10, 1
      swap_tables, 10, 1
      update_foreign_keys, 10, 1
    Altering `employees`.`salaries`...
    Creating new table...
    Created new table employees._salaries_new OK.
    Altering new table...
    Altered `employees`.`_salaries_new` OK.
    2018-08-28T14:37:01 Creating triggers...
    2018-08-28T14:37:01 Created triggers OK.
    2018-08-28T14:37:01 Copying approximately 2838426 rows...
    Copying `employees`.`salaries`:  74% 00:10 remain
    2018-08-28T14:37:41 Copied rows OK.
    2018-08-28T14:37:41 Analyzing new table...
    2018-08-28T14:37:42 Swapping tables...
    2018-08-28T14:37:42 Swapped original and new tables OK.
    2018-08-28T14:37:42 Dropping old table...
    2018-08-28T14:37:42 Dropped old table `employees`.`_salaries_old` OK.
    2018-08-28T14:37:42 Dropping triggers...
    2018-08-28T14:37:42 Dropped triggers OK.
    Successfully altered `employees`.`salaries`.
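
    Before running with --execute, the same change can be validated with --dry-run, which creates and alters the shadow table but copies no rows and swaps nothing; a sketch against the same table:

    # sanity-check the online rebuild without touching the data
    pt-online-schema-change --alter="ENGINE=innodb" D=employees,t=salaries --dry-run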
    

     

    OPTIMIZE TABLE. Deleted records are kept in a linked list, and subsequent INSERT operations reuse the old record positions. You can use OPTIMIZE TABLE to reclaim

            optimize reclaims space, reduces fragmentation, and improves I/O

    ## Back up the erms database with mysqldump

    View the monitoring data
    tina=# select * from jk_pg order by record_time desc,jk_host desc limit 4;
      id  |    jk_host     |     record_time     | waiting_count |  streaming   | usersum | tbjk | running_port |                        cipan                         | locks | beizhu
    ------+----------------+---------------------+---------------+--------------+---------+------+--------------+------------------------------------------------------+-------+--------
    7654 | 192.168.12.2  | 2016-01-13 11:00:01 |             0 | F2B/CE5349B0 |     161 |      |         5432 | Filesystem      Size  Used Avail Use% Mounted on    |       |
          |                |                     |               |              |         |      |              | /dev/sda2       104G   21G   78G  22% /             |       |
          |                |                     |               |              |         |      |              | /dev/sdc1       917G  540G  331G  63% /opt/db_backup |       |
          |                |                     |               |              |         |      |              | /dev/sdb        939G  370G  522G  42% /home/pgsql    |       |
    7655 | 192.168.12.1  | 2016-01-13 11:00:01 |              0 | F2B/CEE173E8 |      26 |    0 |         5432 | Filesystem      Size  Used Avail Use% Mounted on    |       |
          |                |                     |               |              |         |      |              | /dev/sda3       103G  6.1G   92G   7% /             |       |
          |                |                     |               |              |         |      |              | /dev/sdb1       939G  285G  606G  32% /home/pgsql    |       |
    7653 | 192.168.12.8 | 2016-01-13 11:00:01 |               0 |              |      30 |      |         5432 | Filesystem      Size  Used Avail Use% Mounted on    |       |
          |                |                     |               |              |         |      |              | /dev/sda3        27G  1.9G   24G   8% /             |       |
          |                |                     |               |              |         |      |              | /dev/sda2        29G  4.1G   24G  15% /var          |       |
          |                |                     |               |              |         |      |              | /dev/sdb1       252G  118G  122G  50% /home          |       |

    2.3 Defragment with the optimize table command.

    When OPTIMIZE TABLE runs, InnoDB creates a new .ibd file with a temporary name that uses only the space required to store the actual data. When the optimization finishes, InnoDB deletes the old .ibd file and replaces it with the new one. If the old .ibd file had grown significantly while the actual data occupied only part of its size, running OPTIMIZE TABLE reclaims the unused space.

    mysql> optimize table account;
    +--------------+----------+----------+-------------------------------------------------------------------+
    | Table        | Op       | Msg_type | Msg_text                                                          |
    +--------------+----------+----------+-------------------------------------------------------------------+
    | test.account | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
    | test.account | optimize | status   | OK                                                                |
    +--------------+----------+----------+-------------------------------------------------------------------+
    2 rows in set (0.09 sec)
    

     

    the unused space and to defragment the data file.

            Currently supported storage engines: InnoDB, MyISAM, and ARCHIVE

    /usr/local/mysql/bin/mysqldump -uroot -ppwd erms >> /serverBack/mysql_back/erms_$(date "+%Y_%m_%d").sql

    2) Count the rows of all tables in a PG database
    [root@pg-ro tmp]# cat tinadb.sh
    #!/bin/bash
    #2015-11-3 tina
    date=`date "+%Y-%m-%d %H:%M:%S"`
    echo "begin time is: $date" >>/tmp/tongji.log

    3. Shell script for defragmenting tables

    # cat optimize_table.sh

    #!/bin/sh
    socket=/tmp/mysql3306.sock
    time=`date "+%Y-%m-%d"`
    SQL="select concat(d.TABLE_SCHEMA,'.',d.TABLE_NAME) from information_schema.TABLES d where d.TABLE_SCHEMA = 'employees'"

    optimize_table_name=$(/usr/local/mysql/bin/mysql -S $socket -e "$SQL"|grep -v "TABLE_NAME")

    echo "Begin Optimize Table at: "`date "%Y-%m-%d %H:%M:%S"`>/tmp/optimize_table_$time.log

    for table_list in $optimize_table_name
    do

    echo `date "+%Y-%m-%d %H:%M:%S"` "alter table $table_list engine=innodb ...">>/tmp/optimize_table_$time.log
    /usr/local/mysql/bin/mysql -S $socket -e "alter table $table_list engine=innoDB"

    done
    echo "End Optimize Table at: "`date "%Y-%m-%d %H:%M:%S"`>>/tmp/optimize_table_$time.log

    Output:
    

    # cat optimize_table_2018-08-30.log

    Begin Optimize Table at: 2018-08-30 08:43:21
    2018-08-30 08:43:21 alter table employees.departments engine=innodb ...
    2018-08-30 08:43:21 alter table employees.dept_emp engine=innodb ...
    2018-08-30 08:43:27 alter table employees.dept_manager engine=innodb ...
    2018-08-30 08:43:27 alter table employees.employees engine=innodb ...
    2018-08-30 08:43:32 alter table employees.salaries engine=innodb ...
    2018-08-30 08:44:02 alter table employees.t1 engine=innodb ...
    2018-08-30 08:44:17 alter table employees.titles engine=innodb ...
    2018-08-30 08:44:28 alter table employees.ttt engine=innodb ...
    End Optimize Table at: 2018-08-30 08:44:28
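
    The script can then be scheduled like the other cron jobs in this article; a sketch, assuming a low-traffic window at 03:00 on Sunday and that the script is saved as /tina_shell/optimize_table.sh:

    # /etc/crontab -- rebuild the employees tables every Sunday at 03:00
    0 3 * * 0 root sh /tina_shell/optimize_table.sh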

     

     

    Usage: sh optimize.sh word

            

    ## Find the files under /serverBack/mysql_back/ whose names start with erms_ and end with .sql and that were last modified more than 7 days ago, and delete them

    tables=$(psql -U postgres -d tinadb -c "select tablename from pg_tables where  schemaname='public' order by tablename;"|grep -v "tablename" |grep -v "rows"|grep -v "-")

    [[email protected] shell]#

            In a replication environment you can add NO_WRITE_TO_BINLOG (or LOCAL, which means exactly the same thing), for example:

    find /serverBack/mysql_back/ -mtime +7 -name "erms_*.sql" -exec rm -rf {} \;

    #echo $tables >>/tmp/tongji.log

    #!/bin/sh

            optimize local table table_name;

    Method 2:

    for table in $tables
        do
           echo $table >>/tmp/tongji.log
           psql -U postgres -d tinadb -c "select count(*) from $table;" |grep -v "count" |grep -v "row"|grep -v "-">>/tmp/tongji.log
        done
    #echo "ok!" >>/tmp/tongji.log

    time_log=/opt/optimize_time

            

    /usr/local/mysql/bin/mysqldump -uroot -ppwd dbname > dir/db_`date +%F`.sql

    View the results -- they can be pasted straight into an Excel sheet
    [root@pg-ro tmp]# cat /tmp/tongji.log  |awk 'NF==1{printf "%s ", $1;next}1'
    begin time is: 2015-11-03 14:12:12
    t1 11024
    t2 8267537
    t3 1684
    t4 2
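
    count(*) scans every table, which gets slow on large databases. A cheaper alternative is the statistics collector's estimate; a minimal sketch against the same tinadb (n_live_tup is approximate, not an exact count):

    #!/bin/bash
    # approximate per-table row counts, no full table scans
    psql -U postgres -d tinadb -c "
      SELECT relname, n_live_tup
      FROM pg_stat_user_tables
      ORDER BY relname;" >>/tmp/tongji.log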

     

            Here are a few simple tests:

    ## Keep the last week of backup files and delete anything older

    To run the count against another database, just replace the db name with vi's substitute command:
    Replace: :%s/tinadb/dbname/g

    sum=$#

     

    find /dir -mtime +7 -name "db_*.sql" -exec rm -rf {} \;

    3) Scheduled PG vacuum and reindex script
    [root@pg tina_shell]# cat pg_tinadb_vacuum.sh
    #!/bin/bash
    #2014-10-22 tina
    date=`date "+%Y-%m-%d %H:%M:%S"`
    echo "begin time is: $date" >>/tmp/pg_tinadb_vacuum.log

    if [ "$sum" -eq 0 ]


    Method 3:

    tables=$(psql -U postgres -d tinadb -c "select tablename from pg_tables where schemaname='public';" |grep -v "tablename" |grep -v "rows"|grep -v "-")
    echo $tables >>/tmp/pg_tinadb_vacuum.log

    then

    [[email protected] employees]$ ls -alh t.ibd  

    filename=`date +%y%m%d`

    indexes=$(psql -U postgres -d tinadb -c "select indexname from pg_indexes where schemaname='public' and indexname not like '%pkey';"|grep -v "indexname"|grep -v "-" |grep -v "row")

    echo "Error: no parameter chosed"

    -rw-rw---- 1 mysql dba 24M 05-22 16:48 t.ibd  

    /usr/local/mysql/bin/mysqldump -uroot -proot erms >>/serverBack/mysql/$filename.sql

    for table in $tables
    do
    psql -U postgres -d tinadb -c "vacuum full $table;">>/tmp/pg_tinadb_vacuum.log
    echo "table $table has finished vacuum.">>/tmp/pg_tinadb_vacuum.log
    done

    exit 1

      

    A more complete shell script is shown below:

    for index in $indexes
    do
    psql -U postgres -d tinadb -c "reindex index $index;">>/tmp/pg_tinadb_vacuum.log
    echo "index $index has finished reindex.">>/tmp/pg_tinadb_vacuum.log
    done

    fi

    Before optimize it is 24M

    echo "---------------------------------------------------" >> /serverBack/dbBack/dbBackLog.log 

    Check the log afterwards:
    [root@pg tmp]# tail -f pg_tinadb_vacuum.log
    begin time is: 2016-01-13 11:38:26
    VACUUM
    table t1 has finished vacuum.
    VACUUM
    table t2 has finished vacuum.
    VACUUM
    table t3 has finished vacuum.
    VACUUM
    table t4 has finished vacuum.
    REINDEX
    index t1_rin_idx has finished reindex.
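
    The same maintenance can also be driven by the bundled vacuumdb client instead of a psql loop; a sketch, assuming the postgres user and the tinadb database (VACUUM FULL still takes exclusive locks, so the same caveats apply):

    # full vacuum plus fresh statistics for every table in tinadb
    vacuumdb -U postgres -d tinadb --full --analyze >>/tmp/pg_tinadb_vacuum.log 2>&1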

     

      

    echo $(date "+%Y-%m-%d %H:%M:%S") "erms Database backup start"  >> /serverBack/dbBack/dbBackLog.log

    Recommendation: if the database contains large tables, handle them separately by hand; otherwise the run may hold table locks for a long time and affect other workloads.
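
    One way to follow that recommendation is to rebuild only tables whose free space exceeds a threshold and whose data size stays under a cap, and to handle the rest by hand. A minimal sketch, assuming the same socket as above and hypothetical limits of 100 MB of free space and 5 GB of data:

    #!/bin/sh
    socket=/tmp/mysql3306.sock
    # only tables worth rebuilding automatically: enough reclaimable space, not too big
    SQL="SELECT CONCAT(TABLE_SCHEMA, '.', TABLE_NAME)
         FROM information_schema.TABLES
         WHERE TABLE_SCHEMA = 'employees'
           AND DATA_FREE   > 100 * 1024 * 1024
           AND DATA_LENGTH < 5 * 1024 * 1024 * 1024"
    for t in $(/usr/local/mysql/bin/mysql -S $socket -N -e "$SQL")
    do
        /usr/local/mysql/bin/mysql -S $socket -e "ALTER TABLE $t ENGINE=InnoDB"
    done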

    for i in $*;do

    mysql> optimize table t;  

    /usr/local/mysql/bin/mysqldump -uroot -ppwd erms >> /serverBack/dbBack/erms_$(date "+%Y-%m-%d").sql

    4) Routine PG backup script
    [root@mysqltest tina_shell]# cat backup.sh
    #!/bin/bash
    #local backup directory
    bkdir=/home/bk_pg
    day=`date "+%Y%m%d"`

    echo "optimize database $i starting ..."

    +-------------+----------+----------+-------------------------------------------------------------------+

    if [ 0 -eq $? ];then 

    #specify the databases to back up directly; you could also query pg_database for all non-template, non-system DBs and back them up automatically
    DB="tinadb testdb"
    cd $bkdir
    #result=0

    tables=$(/usr/bin/mysql $i -udevuser -pdevuser -e "show tables" | grep -v "Tables" > /opt/$i)

    | Table       | Op       | Msg_type | Msg_text                                                          |  

    if [ -f "/serverBack/dbBack/erms_$(date "%Y-%m-%d").sql" ];then 

    if [ -f $bkdir/pg.md5 ]
    then
        rm -f $bkdir/pg.md5
    fi

    tablelist=$(cat /opt/$i)

    +-------------+----------+----------+-------------------------------------------------------------------+

    echo $(date "+%Y-%m-%d %H:%M:%S") "erms Database backup success!" >> /serverBack/dbBack/dbBackLog.log

    for db in $DB
    do
        pg_dump --host localhost --port 5432 --username "postgres" --format custom --blobs --encoding UTF8 --verbose $db --file $bkdir/$db.$day.backup &> $bkdir/bk.log
        pgret=$?
        if [ "$pgret" -ne "0" ]
        then
            echo "$pgtime $db backup fail" >> $bkdir/pg.md5
            exit 1
        else
            md5sum $bkdir/$db.$day.backup >> $bkdir/pg.md5
        fi
    done

     

    | employees.t | optimize | note     | Table does not support optimize, doing recreate + analyze instead |

    else 

    #upload to FTP to keep an off-site copy of the backup
    lftp backup.work <<END
    user username userpasswd
    lcd $bkdir
    cd 12.8_pg
    put tinadb.$day.backup
    put testdb.$day.backup
    put pg.md5
    exit
    END

    echo "optimize database $i starting ................" >> $time_log

    | employees.t | optimize | status   | OK                                                                |  

    echo $(date "+%Y-%m-%d %H:%M:%S") "erms Database backup fail!" >> /serverBack/dbBack/dbBackLog.log

    #delete backups older than 2 days
    find $bkdir/ -type f -mtime +2 -exec rm -f {} \;

    echo "$i start at $(date [%Y/%m/%d/%H:%M:%S])" >> $time_log

    +-------------+----------+----------+-------------------------------------------------------------------+

    fi 

    5) Simple PG master-slave replication check script 1
    [root@mysqltest tina_shell]# cat pg_check_sync.sh
    #!/bin/bash
    #check pg database whether is running
    pg_port=`netstat -nat|grep "LISTEN"|grep "5432"|sed -n 2p|awk -F : '{print $4}'|awk '{gsub(/ /,"")}1'`
    host_ip=`ifconfig |grep "inet addr:192.168"|awk '{print $2}'|awk -F : '{print $2}'`
    date=`date "+%Y-%m-%d %H:%M:%S"`

     

    2 rows in set (3.82 sec)  

    else 

    echo $date >>/tmp/pg_check_state.log
    if [ "$pg_port" = "5432" ]
       then
           echo "$host_ip postgresql is running" >> /tmp/pg_check_state.log
       else
           echo "Warnning -$host_ip postgresql is not running!" >>/tmp/pg_check_state.log
    fi

    for list in $tablelist

      

    echo $(date "+%Y-%m-%d %H:%M:%S") "erms Database backup error!" >> /serverBack/dbBack/dbBackLog.log

    #check the role of the host
    pg_role1=`ps -ef |grep wal| awk '{print $10}'|grep "sender"`
    pg_role2=`ps -ef |grep wal| awk '{print $10}'|grep "receiver"`
    pg_slave_ip=`ps -ef|grep wal|grep sender|awk '{print $13}'|awk -F "(" '{print $1}'`

    do

    --For InnoDB tables, the output above is not an error; MySQL maps the command to: alter table table_name engine='InnoDB';

    fi 

    if [ "$pg_role1" == "sender" -a "$pg_role2" == "" ]
       then
         echo "$host_ip is master host and $pg_slave_ip is slave host" >>/tmp/pg_check_state.log
           else if  [ "$pg_role1" == "" -a "$pg_role2" == "receiver" ]
           then echo "$host_ip is postgresql slave host.Please execute the shell in the master host!" >>/tmp/pg_check_state.log
        else
           echo "check whether the database has slave host" >>/tmp/pg_check_state.log
       fi
    fi

    echo $list

    --MyISAM does not behave this way

    echo "---------------------------------------------------" >> /serverBack/dbBack/dbBackLog.log 

    #check whether the slave is synchronous
    pg_sync_status=$(su - postgres -c "psql -c 'select state from pg_stat_replication;'|sed -n 3p")

    /usr/bin/mysql $i -utaobao -padmin -e "optimize table $list"

      

    find /serverBack/mysql_back/ -mtime +7 -name "erms_*.sql" -exec rm -rf {} \;

    if [ "$pg_sync_status" = " streaming" ]
       then echo "the slave is synchronous" >>/tmp/pg_check_state.log
       else
       echo "warnning - please check the sync status of slave database " >>/tmp/pg_check_state.log
    fi

    done

    [[email protected] employees]$ ls -alh t.ibd  

    Note: a. It is best to call mysqldump by its absolute path here; invoking mysqldump directly may produce an empty backup file.

    Results:
    1. Single node
    [root@mysqltest tina_shell]# cat /tmp/pg_check_state.log
    2016-01-13 15:04:53
    192.168.12.8 postgresql is running
    check whether the database has slave host            ---- check whether this PG instance has a slave

     

    -rw-rw---- 1 mysql dba 14M 05-22 16:49 t.ibd  

    b. To make sure the script is correct, you can run each command on its own first, e.g. the mysqldump command: /usr/local/mysql/bin/mysqldump -uroot -ppwd erms >> /serverBack/mysql_back/erms_$(date "+%Y_%m_%d").sql

    2. Master node
    [root@pg tina_shell]# cat /tmp/pg_check_state.log
    2016-01-13 15:03:31
    192.168.12.2 postgresql is running
    192.168.12.2 is master host and 192.168.12.1 is slave host
    the slave is synchronous                           ---- master and slave are in sync

    echo "$i end at $(date [%Y/%m/%d/%H:%M:%S])" >> $time_log

        

    c. In the find command, the trailing \; must not be omitted.
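
    If the trailing \; keeps getting lost, GNU find's -delete action avoids -exec entirely; a sketch with the same path and age:

    # same cleanup without -exec, so there is no \; to forget (GNU find only)
    find /serverBack/mysql_back/ -mtime +7 -name "erms_*.sql" -delete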

    3. Slave node
    [root@pg tina_shell]# cat /tmp/pg_check_state.log
    2016-01-13 15:00:44
    192.168.12.1 postgresql is running
    192.168.12.1 is postgresql slave host.Please execute the shell in the master host!   --- PG on this IP is a slave; run the script on the master

    echo >> $time_log

    After optimize, 14M remain

    2. Add a scheduled cron job

    6) Simple PG master-slave replication check script 2
    root@pg /usr/lib64/nagios/plugins]#cat check_pgsync.sh
    #!/bin/bash
    # nrpe command: check pg sql and sync state.

    done

     

    crontab -e ## edit the cron jobs

    # custom config
    pgport=
    pgdbname=
    pgdbuser=

    ...

         ㈡ check

    Add the cron entry:

    # default value.
    pgport=${pgport:-5432}
    pgdbname=${pgdbname:-postgres}
    pgdbuser=${pgdbuser:-postgres}

            

    00 15 * * * /serverBack/autobackmysql.sh  ## run /serverBack/autobackmysql.sh every day at 15:00

    if [ -z "$pgport" ]; then
        echo "error: pgport no defined"
        exit 4
    fi

            Checks a table or view for errors

    Commands:

    msg_ok="OK - pg is running and slave is synchronous."
    msg_warn="WARNING - pg is running but slave synchronous fail."
    msg_crit="CRITIAL - pg is not running on port: $pgport"

            Supported table engines: InnoDB and MyISAM

    crontab -e ## edit the cron jobs

    # check pg running
    if netstat -ntple | grep -q "[:]$pgport"; then
        # check slave db host.
        if ps -ef | grep -q "[w]al receiver process"; then
            echo "error: it seems you are running me in slave db host."
        fi
        # check slave synchronous
        if psql -d "$pgdbname" -U "$pgdbuser" \
            -c 'select state from pg_stat_replication;' \
            | grep -q "[s]treaming"
        then
            echo "$msg_ok"
            exit 0
        else
            echo "$msg_warn"
            exit 1
        fi
    else
        echo "$msg_crit"
        exit 2
    fi

            

    crontab -r  delete all cron jobs

    exit 5

            Below is a simple simulated test:

    crontab -l  list all cron jobs

    1. Single node
    [root@mysqltest tina_shell]# ./check_pgsync.sh
    WARNING - pg is running but slave synchronous fail.

     

    Permanent link to this article: http://www.linuxidc.com/Linux/2017-05/143604.htm

    2. Master node
    [root@pg tina_shell]# ./check_pgsync.sh
    OK - pg is running and slave is synchronous.



    3. Slave node
    [root@pg-ro tina_shell]# ./check_pgsync.sh
    error: it seems you are running me in slave db host.
    WARNING - pg is running but slave synchronous fail.

    mysql> check table t;  

    7) PG master-slave switchover shell script (written for fun; not recommended for production)
    Master: 192.168.10.232
    Slave: 192.168.10.233
    Scenario: master and slave are replicating, and the master suddenly goes down.
    Once the scripts are all in place, you only need to run the first script against the master; it triggers the remaining steps in one go.
    (Some parameters need to be configured in advance.)

    +-------------+-------+----------+----------+

    1. Check whether the master is running normally; if it is not, run the switchover script on the slave.
    [postgres@localhost tmp]$ cat pg_check_master.sh
    #!/bin/bash
    #check the master pg whether is running
    pg_port=`netstat -nat|grep "LISTEN"|grep "5432"|sed -n 2p|awk -F : '{print $4}'|awk '{gsub(/ /,"")}1'`
    host_ip=`ifconfig |grep "inet addr:192.168"|awk '{print $2}'|awk -F : '{print $2}'`
    date=`date "+%Y-%m-%d %H:%M:%S"`
    echo $date >>/tmp/pg_check_master.log

    | Table       | Op    | Msg_type | Msg_text |  

    if [ "$pg_port" = "5432" ]
       then
           echo "$host_ip postgresql is running" >> /tmp/pg_check_master.log
       
       else
           echo "Warnning -$host_ip postgresql is not running!" >>/tmp/pg_check_master.log
           echo "the slave is switching to the master ...please waiting" >>/tmp/pg_check_master.log
           ssh 192.168.10.233 "sh /tmp/pg_switch.sh"
    fi  

    +-------------+-------+----------+----------+

    2. Create the trigger file on the slave to promote it to master (it is best not to give the master's and slave's trigger files the same name, to avoid confusing them).
    [postgres@localhost tmp]$ cat pg_switch.sh
    #!/bin/bash
    #swtch slave to master
    date=`date "+%Y-%m-%d %H:%M:%S"`
    echo $date >>/tmp/pg_switch.log
    cd /pg/data
    rm -fr recovery.done
    touch /tmp/pg.trigger.456
    sleep 20s
    if [ -f '/pg/data/recovery.done' ]
          then echo "the slave has switched to the master successful!" >> /tmp/pg_switch.log
          echo "the old master is going to switch to the new slave!">>/tmp/pg_switch.log
          his_file=`ls -lt /pg/data/pg_xlog/0000000*.history |sed -n 1p|awk '{print $9}'`
          scp $his_file root@192.168.10.232:/pg/data/pg_xlog
          ssh 192.168.10.232  "sh /tmp/start_new_slave.sh"    
      else
          echo "warnning:the slave has switched fail!">>/tmp/pg_switch.log
    fi

    | employees.t | check | status   | OK       |

    3. Note that recovery.conf disappears as the roles change, so we can back up a pre-written copy one directory up.
    Its contents are as follows:
    vi /pg/recovery.conf.bak
    recovery_target_timeline = 'latest'
    standby_mode = 'on' 
    primary_conninfo = 'host=192.168.10.233 port=5432 user=postgres password=tina'
    trigger_file = '/tmp/pg.trigger.456' 

    ------------- ------- ---------- ----------  

    4. With the timeline history file and recovery.conf in place, check pg_hba.conf and you can start the new PG slave directly, then run a replication check.
    [root@localhost tmp]# cat start_new_slave.sh
    #!/bin/bash
    date=`date "+%Y-%m-%d %H:%M:%S"`
    echo $date >>/tmp/start_new_slave.log
    chown postgres.postgres /pg/data/pg_xlog/*.history

    1 row in set (0.63 sec)  

    cp /pg/recovery.conf.bak /pg/data/recovery.conf
    chown postgres.postgres /pg/data/recovery.conf
    su - postgres -c "pg_ctl -D /pg/data start" >>/tmp/start_new_slave.log 2>&1
    pg_slave_status=`ps -ef |grep wal| awk '{print $10}'|grep "receiver"`
    if [ "$pg_slave_status" = "receiver" ]
       then
         echo "the slave sync is ok!" >>/tmp/start_new_slave.log
       else
           echo "error:please check the slave whether is running or not!" >>/tmp/start_new_slave.log
    fi

      

    8) Delete PG archived logs
    [root@pg tina_shell]# cat pg_delete_archivedlog.sh
    #!/bin/bash
    find /home/pgsql/backup_new/archived_log/  -type f  -mtime +2 -exec rm {} \;
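
    An alternative, if the archive is only kept for point-in-time recovery, is PostgreSQL's own pg_archivecleanup, which removes every WAL segment older than a named one; a sketch where the segment name is just a placeholder taken from a recent base backup:

    # drop archived WAL older than the given segment (placeholder segment name)
    pg_archivecleanup /home/pgsql/backup_new/archived_log 000000010000000000000010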

    --This is what it looks like when there are no errors

    9) Commonly used SQL for generating statements
    select 'select count(*) from '||tablename||';' from pg_tables where schemaname='public';
    select 'alter table '||tablename||' add constraint u_'||tablename||' unique(sample_h);' from pg_tables where tablename like 't_wh20%';
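
    The generated statements can be piped straight back into the server instead of being copied by hand; a minimal sketch, assuming the same tinadb database and postgres user:

    # generate the count(*) statements and execute them in one pass
    psql -U postgres -d tinadb -At -c "select 'select count(*) from '||tablename||';' from pg_tables where schemaname='public';" \
        | psql -U postgres -d tinadb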

      

    --Open t.frm with vim and make a couple of random edits

      

    mysql> check table t\G

    *************************** 1. row ***************************  

       Table: employees.t  

          Op: check  

    Msg_type: Error  

    Msg_text: Incorrect information in file: './employees/t.frm'  

    *************************** 2. row ***************************  

       Table: employees.t  

          Op: check  

    Msg_type: error  

    Msg_text: Corrupt  

    2 rows in set (0.00 sec)  

      

    --Now it reports an error

     

         ㈢ analyze

            

            Used to collect optimizer statistics; related to tuning.

            This command works on tables using the MyISAM, BDB, and InnoDB storage engines.

            If you do not want it written to the binlog, you can also add the keyword LOCAL (or NO_WRITE_TO_BINLOG, which is the same).

     


    mysql> analyze table t\G

    *************************** 1. row ***************************  

       Table: employees.t  

          Op: analyze  

    Msg_type: Error  

    Msg_text: Incorrect information in file: './employees/t.frm'  

    *************************** 2. row ***************************  

       Table: employees.t  

          Op: analyze  

    Msg_type: error  

    Msg_text: Corrupt  

    2 rows in set (0.00 sec)  

