Saving this one for bragging rights:

The MySQL server's hardware and OS configuration (and no, FlashCache is not in use):

# Aspersa System Summary Report

Uptime | 84 days, 19:00, 3 users, load average: 0.00, 0.21, 0.16  
Platform | Linux  
Release | Red Hat Enterprise Linux Server release 5.4 (Tikanga)  
Kernel | 2.6.18-164.el5  
Architecture | CPU = 64-bit, OS = 64-bit  
Threading | NPTL 2.5  
Compiler | GNU CC version 4.1.2 20080704 (Red Hat 4.1.2-44).  
SELinux | Disabled  
Virtualized | No virtualization detected

# Processor

Processors | physical = 2, cores = 8, virtual = 16, hyperthreading = yes  
Speeds | 16x2261.058  
Models | 16xIntel(R) Xeon(R) CPU E5520 @ 2.27GHz  
Caches | 16x8192 KB

# Memory

Total | 23.53G  
Free | 2.16G  
Used | physical = 21.38G, swap = 240.00k, virtual = 21.38G  
Buffers | 1.03G  
Caches | 13.60G  
Dirty | 156 kB  
UsedRSS | 6.1G  
Swappiness | vm.swappiness = 60  
DirtyPolicy | vm.dirty_ratio = 40, vm.dirty_background_ratio = 10

# Mounted Filesystems

Filesystem Size Used Type Opts Mountpoint  
/dev/sda10 766G 11% ext3 rw /uxx  
/dev/sda1 122M 16% ext3 rw /boot  
/dev/sda2 15G 67% ext3 rw /  
/dev/sda3 15G 76% ext3 rw /usr  
/dev/sda5 8.6G 2% ext3 rw /tmp  
tmpfs 12G 0% tmpfs rw /dev/shm

# Disk Schedulers And Queue Size

sda | [cfq] 128

# Disk Partitioning

Device Type Start End Size  
============ ==== ========== ========== ==================

# Kernel Inode State

dentry-state | 297447 276749 45 0 0 0  
file-nr | 3570 0 2390094  
inode-nr | 220730 32

# LVM Volumes

WARNING: Running as a non-root user. Functionality may be unavailable.

# RAID Controller

Controller | LSI Logic MegaRAID SAS  
Model | , interface, ports  
Cache | Memory, BBU  
BBU | % Charged, Temperature C, isSOHGood=

VirtualDev Size RAID Level Disks SpnDpth Stripe Status Cache  
========== ========= ========== ===== ======= ====== ======= =========

PhysiclDev Type State Errors Vendor Model Size  
========== ==== ======= ====== ======= ============ ===========

# Network Config

Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)  
Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)  
Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)  
Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)  
Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)  
Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)  
Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)  
Controller | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)  
FIN Timeout | net.ipv4.tcp_fin_timeout = 60  
Port Range | net.ipv4.ip_local_port_range = 32768 61000

# Interface Statistics

interface rx_bytes rx_packets rx_errors tx_bytes tx_packets tx_errors  
========= ========= ========== ========== ========== ========== ==========  
lo 600000000 500000 0 600000000 500000 0  
eth0 0 0 0 0 0 0  
eth1 0 0 0 0 0 0  
eth2 0 0 0 0 0 0  
eth3 0 0 0 0 0 0  
eth4 1000000000 600000000 0 2000000000 450000000 0  
eth5 0 0 0 0 0 0  
eth6 1250000000 15000000 0 0 0 0  
eth7 0 0 0 0 0 0  
sit0 0 0 0 0 0 0  
bond0 2500000000 600000000 0 2000000000 450000000 0

# The End

MySQL is Percona Server 5.5.18.

The my.cnf configuration is as follows:

default-storage-engine = INNODB  
innodb_flush_method = O_DIRECT  
innodb_file_per_table = 1  
innodb_flush_log_at_trx_commit = 1  
innodb_lock_wait_timeout = 100  
innodb_additional_mem_pool_size = 20M  
innodb_buffer_pool_size = 5G  
innodb_log_buffer_size = 800M  
innodb_log_file_size = 200M  
innodb_log_files_in_group = 4  
innodb_file_io_threads = 4  
innodb_thread_concurrency = 32  
innodb_max_dirty_pages_pct = 90  
innodb_data_file_path = ibdata1:1G:autoextend  
innodb_sync_spin_loops = 0  
innodb_spin_wait_delay = 0

tdh_socket_thread_num = 8  
tdh_socket_slow_read_thread_num = 64  
tdh_socket_io_thread_num = 4  
tdh_socket_write_thread_num = 16  
tdh_socket_optimize_on = 1  
tdh_socket_cache_table_num_for_thd = 40  

The table the benchmark inserts into:

CREATE TABLE `test` (  
`id` int(20) unsigned NOT NULL AUTO_INCREMENT,  
`k` int(20) DEFAULT NULL,  
`i` int(20) NOT NULL,  
`c` char(120) DEFAULT NULL,  
`kc` int(20) DEFAULT '1',  
PRIMARY KEY (`id`)  
) ENGINE=InnoDB DEFAULT CHARSET=gbk; 
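
For reference, each insert the benchmark issues fills id, k, i and c (kc keeps its default of 1), so it is logically equivalent to `INSERT INTO test (id, k, i, c) VALUES (<id>, 10000, <id>, '<id>_abcdefghijklmnopqrstuvwxyz')`, only delivered through the TDH_SOCKET client rather than as a SQL statement.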

The stress-test script:

package benchmark.insert

import benchmark.StressTest  
import com.taobao.tdhs.client.TDHSClient  
import com.taobao.tdhs.client.TDHSClientImpl  
import com.taobao.tdhs.client.response.TDHSResponse  
import com.taobao.tdhs.client.response.TDHSResponseEnum

import java.net.InetSocketAddress  
import java.util.concurrent.atomic.AtomicLong

/**  
 * @author 文通  
 * @since 12-2-9 3:31 PM  
 */

int step = 100000000                                     // ids owned by one benchmark instance

int index = Integer.valueOf(args[0])                     // instance index, passed on the command line

AtomicLong id = new AtomicLong((index - 1) * step + 1)   // first id of this instance's range

TDHSClient client = new TDHSClientImpl(new InetSocketAddress("t-wentong", 9999), 2);

def s = new StressTest(count: step)  
s.add({  
def _id = id.incrementAndGet()  
String table = "test"  
TDHSResponse response = client.createStatement(index).insert().use("benchmark_insert").from(table)  
.value("id", _id.toString())  
.value("k", "10000")  
.value("i", _id.toString())  
.value("c", \_id.toString() + "\_abcdefghijklmnopqrstuvwxyz").insert()

if (response != null && response.getStatus() != TDHSResponseEnum.ClientStatus.OK) {  
System.out.println(response);  
}  
}, 100)

s.run()  
client.shutdown()
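
Each benchmark process takes a single instance index as its argument: instance N seeds its AtomicLong at (N - 1) * step + 1, so separately launched instances write disjoint 100-million-id ranges and never contend on the primary key.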

The keys to pushing insert TPS this high with TDH_SOCKET:

  • TDH_SOCKET provides group commit (see the conceptual sketch after this list):

    *   it cuts down the number of redo-log fsyncs, so IOPS never becomes the bottleneck.

  • TDH_SOCKET supports concurrent writes.
  • The auto-increment id generation strategy has to hand out ids in segments (see the allocator sketch after this list):

    *   with a fully sequential id strategy (i.e. InnoDB's default auto-increment), high-concurrency inserts run into a write lock on the index block.
    *   because the ids are sequential, concurrently inserted records all pile into the same page, and every thread writing that page takes rw_lock_x_lock_func(&(block->lock), 0, file, line); on it. That write lock becomes the bottleneck and keeps TPS from climbing, so all you need to change is the id-generation strategy:

        *   segmented auto-increment: one writer thread grabs the id range 1-10000 in one shot, the next writer grabs 10001-20000, and so on..
        *   each writer thread then increments within its own range, which still gives you largely sequential writes, yet stops the concurrent page write lock from dragging performance down!
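
Since the post keeps TDH_SOCKET's internals under wraps, here is only a minimal conceptual sketch of group commit, assuming a single flusher thread and a plain log file; GroupCommitLog and everything in it is invented for illustration, not TDH_SOCKET's actual code. Writer threads enqueue records and wait, while the flusher drains whatever has accumulated and pays for one fsync per batch instead of one per record:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

// Conceptual group-commit sketch: many writers block until their record is
// durable; a single flusher batches everything pending behind ONE fsync.
public class GroupCommitLog {
    private static final class Entry {
        final byte[] payload;
        final CompletableFuture<Void> done = new CompletableFuture<>();
        Entry(byte[] payload) { this.payload = payload; }
    }

    private final BlockingQueue<Entry> queue = new ArrayBlockingQueue<>(4096);
    private final FileChannel log;

    public GroupCommitLog(Path file) throws IOException {
        log = FileChannel.open(file, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        Thread flusher = new Thread(this::flushLoop, "group-commit-flusher");
        flusher.setDaemon(true);
        flusher.start();
    }

    // Called by writer threads; returns once the record is durable on disk.
    public void append(byte[] record) throws Exception {
        Entry e = new Entry(record);
        queue.put(e);
        e.done.get();
    }

    private void flushLoop() {
        List<Entry> batch = new ArrayList<>();
        while (true) {
            try {
                batch.clear();
                batch.add(queue.take());        // block until at least one record arrives
                queue.drainTo(batch);           // then grab everything else already queued
                for (Entry e : batch) log.write(ByteBuffer.wrap(e.payload));
                log.force(false);               // a single fsync covers the whole batch
                for (Entry e : batch) e.done.complete(null);
            } catch (Exception ex) {
                for (Entry e : batch) e.done.completeExceptionally(ex);
            }
        }
    }
}

With a hundred writers committing at once, the whole batch rides on a single fsync, which is exactly why IOPS stops being the limiting factor.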
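
And a minimal sketch of the segmented id allocation described above (class and names are hypothetical; the benchmark script gets the same effect more coarsely by giving every process index its own disjoint 100-million-id range):

import java.util.concurrent.atomic.AtomicLong;

// Segmented auto-increment sketch: each writer thread leases a contiguous id
// segment from a shared counter, then hands out ids from its private segment
// with no further synchronization.
public class SegmentedIdAllocator {
    private static final long SEGMENT_SIZE = 10_000;          // e.g. 1-10000, 10001-20000, ...
    private static final AtomicLong nextSegmentStart = new AtomicLong(1);

    // {next id to hand out, exclusive upper bound of the leased segment}
    private static final ThreadLocal<long[]> range =
            ThreadLocal.withInitial(() -> new long[]{0L, 0L});

    public static long nextId() {
        long[] r = range.get();
        if (r[0] >= r[1]) {                                   // segment exhausted: lease a new one
            long start = nextSegmentStart.getAndAdd(SEGMENT_SIZE);
            r[0] = start;
            r[1] = start + SEGMENT_SIZE;
        }
        return r[0]++;
    }
}

Each thread then lands on its own stretch of the primary-key B-tree, so writes stay mostly sequential per thread without every thread stacking up behind one page's write lock.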

OK, and once you're past 100k TPS, take a look at the CPU usage: it turns out CPU still isn't maxed out.. but by this point the bottleneck is the log_sys->mutex lock, because an insert ultimately has to write undo log records as well, and in InnoDB every mini-transaction copies its log records into the shared log buffer while holding log_sys->mutex.

As for what TDH_SOCKET actually is..... I'll keep that a mystery for now~
