Category archive: Genel

BIND slave DNS example conf

acl recurseallow { 172.28.16.0/24; 172.28.35.0/24; 10.10.10.0/24; };

options {
	listen-on port 53 { 127.0.0.1; 172.28.16.2; };
	listen-on-v6 port 53 { ::1; };
	directory	"/var/named";
	dump-file	"/var/named/data/cache_dump.db";
	statistics-file "/var/named/data/named_stats.txt";
	memstatistics-file "/var/named/data/named_mem_stats.txt";
	
	allow-query     { any; };
	allow-recursion { recurseallow; };
	forwarders {
		195.46.39.39;
		195.46.39.40;
		77.88.8.8;
		77.88.8.1;
		8.8.8.8;
		8.8.4.4;
		195.175.39.39;
		195.175.39.40;
	};

	dnssec-enable yes;
	dnssec-validation yes;
	// dnssec-lookaside and the DLV key below only apply to older BIND
	// releases; the ISC DLV registry has since been retired.
	dnssec-lookaside auto;

	/* Path to ISC DLV key */
	bindkeys-file "/etc/named.iscdlv.key";

	managed-keys-directory "/var/named/dynamic";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
	type hint;
	file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

zone "bb.local" IN {
	type slave;
	masters { 172.28.35.2; };
	file "slaves/bb.local.db";
};

Partitioning the WordPress Comments Table

WordPress sites can get big. Really big. When you’re looking at a site of Cheezburger, Engadget or Techcrunch proportions, you get hundreds of comments per post, on dozens of posts per day, which adds up to millions of comments per year.

In order to keep your site running in top condition, you don’t want to be running queries against tables with lots of rarely accessed rows, which is what happens with most comments – after the post drops off the front page, readership drops, so the comments are viewed much less frequently. So, what we want to do is remove these old comments from the primary comment table, but keep them handy, for when people read the archives.

Enter partitioning.

The idea of MySQL partitioning is that it splits tables up into multiple logical tablespaces, based on your criteria. Running a query on a single partition of a large table is much faster than running it across the entire table, even with appropriate indexes.

In the case of the WordPress comments table, splitting it up by `comment_post_ID` seems the most appropriate choice. This should keep the partitions to a reasonable size, and ensure that there’s minimal cross-over between partitions.

First off, we need to add the `comment_post_ID` column to the Primary Key. This can be a slow process if you already have a massive `wp_comments` table, so you may need to schedule some downtime to handle it. Alternatively, there are many methods for making schema changes with no downtime, such as judicious use of replication, Facebook’s Online Schema Change tool, or the currently-in-development mk-online-schema-change for Maatkit.

ALTER TABLE wp_comments DROP PRIMARY KEY, ADD PRIMARY KEY (comment_ID, comment_post_ID);

Now that we’ve altered this index, we can define the partitions. For this example, we’ll say we want the comments for 1000 posts per partition. This query can take a long time to run, if you already have many comments in your system.

ALTER TABLE wp_comments PARTITION BY RANGE(comment_post_ID) (
    PARTITION p0 VALUES LESS THAN (1000),
    PARTITION p1 VALUES LESS THAN (2000),
    PARTITION p2 VALUES LESS THAN (3000),
    PARTITION p3 VALUES LESS THAN (4000),
    PARTITION p4 VALUES LESS THAN (5000),
    PARTITION p5 VALUES LESS THAN (6000),
    PARTITION p6 VALUES LESS THAN MAXVALUE
);
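To make the pruning benefit concrete, RANGE partition selection can be modeled as a binary search over the partitions’ upper bounds. This is an illustrative Python sketch (the names are mine, not MySQL internals) using the boundaries defined above:

```python
import bisect

# Upper bounds of the RANGE partitions defined above; the final
# MAXVALUE partition is represented by infinity.
BOUNDS = [1000, 2000, 3000, 4000, 5000, 6000, float("inf")]
NAMES = ["p0", "p1", "p2", "p3", "p4", "p5", "p6"]

def partition_for(comment_post_id):
    """Return the partition whose range contains this post ID.

    RANGE partitioning places a row in the first partition whose
    VALUES LESS THAN bound is strictly greater than the value.
    """
    return NAMES[bisect.bisect_right(BOUNDS, comment_post_id)]
```

A query filtered on `comment_post_ID` only has to touch the single partition this lookup identifies, which is exactly the pruning win described above.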

When you’re approaching the next partition divider value, adding a new partition is simple. For example, you’d run this query around post 6000.

ALTER TABLE wp_comments REORGANIZE PARTITION p6 INTO (
    PARTITION p6 VALUES LESS THAN (7000),
    PARTITION p7 VALUES LESS THAN MAXVALUE
);
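Since this maintenance step is mechanical, it can be scripted. Below is a hypothetical Python helper (the function name and parameters are my own, not part of WordPress or MySQL) that builds the next REORGANIZE PARTITION statement:

```python
def next_reorganize_sql(top, new_bound):
    """Split the current MAXVALUE partition p<top> into a ranged
    partition capped at new_bound plus a fresh MAXVALUE partition."""
    return (
        f"ALTER TABLE wp_comments REORGANIZE PARTITION p{top} INTO (\n"
        f"    PARTITION p{top} VALUES LESS THAN ({new_bound}),\n"
        f"    PARTITION p{top + 1} VALUES LESS THAN MAXVALUE\n"
        ");"
    )
```

For example, `next_reorganize_sql(6, 7000)` reproduces the statement above; a scheduled job could emit and run it whenever the newest post ID approaches the current top boundary.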

Naturally, this process is most useful for very large WordPress sites. If you’re starting a new site with big plans, however, you may just want to factor this into your architecture.

UPDATE: Changed the partition definition to better reflect how WordPress uses the wp_comments table, per Giuseppe’s comments.

source: http://pento.net/2011/04/28/partitioning-the-wordpress-comments-table/

Grace Hopper

Rear Admiral Grace Murray Hopper (December 9, 1906 – January 1, 1992) was an American computer scientist and a United States Navy officer. One of the first programmers of the Harvard Mark I computer, Hopper developed the first compiler for a computer programming language. She was also one of the developers of COBOL, one of the first modern programming languages.[1] She was among the first to use the concept of cleaning a program of errors, known in computing as “debugging”. The US Navy warship USS Hopper (DDG-70) is named after her.

http://www.bursadabugun.com/haber/google-dan-grace-hopper-a-ozel-doodle-331752.html

http://www.bursadabugun.com/haber/grace-hopper-kimdir-331753.html

Siterobot.com, the fastest and safest way to buy and sell websites

Siterobot.com is a secure buying and selling platform for websites and domains, built to meet the project and trading needs of webmasters, developers, designers, and agencies. Its aim is to provide people trading websites, domains, Facebook fan pages, sponsored posts, backlinks, Facebook member acquisition, and similar SEO and webmaster services with a safe, fast, results-oriented marketplace.

With Siterobot.com’s free listing campaigns you can publish your listing right away and start buying and selling websites, domains, and similar services. With commission rates as low as 3.99%, this website trading service is drawing interest from technology enthusiasts, webmasters, and project developers alike.

With Siterobot’s secure systems you can transfer money to your Siterobot balance by 3D-secure credit card, PayPal, or EFT/wire transfer, and quickly withdraw it to your own accounts.
The fastest and safest buying and selling platform of the future for websites, domains, SEO projects, and more.

VPS MySQL conf example

[mysqld]
# disable LOAD DATA LOCAL INFILE for security
local-infile=0

max_connections = 500
key_buffer_size = 256M
myisam_sort_buffer_size = 64M
join_buffer_size = 1M
read_buffer_size = 1M
sort_buffer_size = 2M
table_cache = 8000
thread_cache_size = 256
wait_timeout = 20
connect_timeout = 30
tmp_table_size = 128M
max_heap_table_size = 64M
max_allowed_packet = 32M
net_buffer_length = 16384
table_lock_wait_timeout = 30
query_cache_limit = 10M
query_cache_size = 96M
query_cache_type = 1
thread_concurrency = 8

[isamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 4M
write_buffer = 4M

[myisamchk]
tmpdir=/tmp
key_buffer = 64M
sort_buffer = 64M
read_buffer = 16M
write_buffer = 16M