News

Posted over 11 years ago by Esen Sagynov
I am very happy to announce that the Node.js driver for CUBRID Database (Github) goes stable today. As Milestone 4 is now complete, the node-cubrid module can be installed directly from NPM:

```
npm install node-cubrid
```

Or install it globally to access it from all your applications:

```
npm install -g node-cubrid
```

1.0 stable release highlights

node-cubrid is a 100% JavaScript Node.js module which can be used to connect to and query CUBRID databases. This stable release features:

- Rich database support: Connect, Query, Fetch, Execute, Commit, Rollback, DB Schema, etc.
- Out-of-the-box driver events model
- 10,000+ LOC, including the driver test code and demos
- 50+ test cases
- HTML documentation
- User demos: E2E scenarios, web sites
- User tutorial
- ... and many more!

The driver release contains many test cases and demos which show you how to use the driver. To view the examples and source code, visit the Github project page at https://github.com/CUBRID/node-cubrid; a minimal usage sketch also follows at the end of this post.

Thus, M4 is now complete:

- Milestone 1: Basic driver interfaces: connect, queries support + result set - 3rd week of August. Released on August 18, 2012!
- Milestone 2: Technology preview: 80% functionality ready - 2nd week of September. Released on September 17, 2012!
- Milestone 3: Beta release: > 95% functionality ready - 1st week of October. Released on October 4, 2012!
- Milestone 4: Stable release: 100% functionality ready + NPM package - end of October. Released on October 29, 2012!

What's next

- Additional database functionality (enhanced LOB support, more DB schemas, etc.)
- New functionality: integrated connection pool, queries queue, better caching, etc.
- Code improvements and optimizations
- More examples

Demo video

In the previous M2 announcement we posted a demo video which demonstrates the functionality of the CUBRID Node.js driver. To see the video, visit CUBRID Node.js driver - M2 is completed!

If you have questions, ask at our CUBRID Q&A site. We will be glad to answer you!
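To give a flavor of the driver, here is a minimal connect-and-query sketch in the callback style. It is written from memory of the project README, so treat the exact method and helper names (createCUBRIDConnection, Result2Array, closeQuery) as assumptions and check the Github demos for the authoritative usage:

```javascript
var CUBRID = require('node-cubrid');

// Connection parameters are illustrative; 33000 is CUBRID's default broker port.
var client = CUBRID.createCUBRIDConnection('localhost', 33000, 'public', '', 'demodb');

client.connect(function (err) {
    if (err) { return console.error(err); }

    client.query('SELECT * FROM nation', function (err, result, queryHandle) {
        if (err) { return console.error(err); }

        // Result2Array is a helper shipped with the driver (name assumed here)
        // that converts the raw result set into plain JavaScript arrays.
        console.log('Rows: ' + CUBRID.Result2Array.TotalRowsCount(result));
        console.log(CUBRID.Result2Array.RowsArray(result));

        client.closeQuery(queryHandle, function () {
            client.close(function () {});
        });
    });
});
```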
Posted over 11 years ago by Esen Sagynov
Last week my colleague and I attended the Russian HighLoad++ 2012 Developers Conference in Moscow. The event is organized annually by Ontico, the Russian company which also runs two other conferences: RIT++ (Russian Internet Technologies) and Whale Rider.

Earlier this year, in April 2012, we also attended the RIT++ Conference, where we introduced CUBRID ("Growing in the Wild. The story by CUBRID Database Developers.") to the Russian developer community. At the time the feedback from the audience was incredible: after the presentation we kept receiving questions for two consecutive days. You can watch the video recording from the RIT++ Conference online at http://profyclub.ru/docs/439 (registration is required) or view the slides at SlideShare. I promise you will learn lots of new facts about CUBRID!

This year HighLoad++ was as awesome as RIT++, and even better. There were exactly 899 attendees on the opening day. This time we presented "Database Sharding the Right Way: Easy, Reliable, and Open source", where I explained the native Database Sharding feature in CUBRID 9.0, which we released three weeks ago. While the video recordings are still on the way, you can view the slides below.

Over 100 people listened to my talk, and at the end we received many great questions from the audience. When our session was over, many developers followed us outside the conference hall, where we continued the conversation about CUBRID SHARD. It was very exciting to talk with these Russian developers and to work together on the practical database sharding issues they had with their current third-party sharding solutions. Some of the questions were so interesting, and at the same time so challenging to answer, that I decided to highlight and answer them in separate blog posts. So do not forget to check our blog again.

When the conference was over, we gathered together with the remaining speakers and had our photo taken. There were fellows from MariaDB, MySQL, Pythian, Tokutek, Etsy, and VMware (this is just a very small group of the remaining speakers; the full list can be found at the HighLoad++ site).

From left: Colin Charles (Monty Program Ab, MariaDB), Alvaro Videla (VMware), Danil Zburivsky (Pythian), Gerardo Narvaja (Tokutek), Lars Thalmann (Oracle, MySQL), Chris Bohn (Etsy), Esen Sagynov (NHN, CUBRID), Sergei Golubchik (Monty Program Ab, MariaDB).

So, the conference went very well! I would like to thank the conference organizers again for inviting us to share our [NHN] experience of managing Big Data. Take your time to view the presentation slides, familiarize yourself with the native Sharding feature in CUBRID, and if you end up having questions, feel free to ask in the comments below. I will be glad to answer all of them thoroughly. To try CUBRID, download it from http://www.cubrid.org/downloads.
Posted over 11 years ago by
One of the first things I stumbled upon when I started my first Node.js project was how to handle the communication between the browser (the client) and my middleware (the middleware being a Node.js application using the CUBRID Node.js driver (node-cubrid) to exchange information with a CUBRID 8.4.1 database).

I was already familiar with AJAX (btw, thank God for jQuery!!), but while studying Node.js I found out about the Socket.IO module and even found some pretty nice code examples on the internet... examples which were very, very easy to (re)use. So this quickly became a dilemma: what to choose, AJAX or Socket.IO?

Obviously, as my experience was quite limited, I first needed more information... In other words, it was time to do some quality Google search :) There's a lot of information available and, obviously, one needs to filter out all the "noise" and keep what is really useful. Let me share with you some of the good links I found on the topic:

- http://stackoverflow.com/questions/7193033/nodejs-ajax-vs-socket-io-pros-and-cons
- http://podefr.tumblr.com/post/22553968711/an-innovative-way-to-replace-ajax-and-jsonp-using
- http://stackoverflow.com/questions/4848642/what-is-the-disadvantage-of-using-websocket-socket-io-where-ajax-will-do?rq=1
- http://howtonode.org/websockets-socketio

To summarize, here's what I quickly found:

- Socket.IO (usually) uses a persistent connection between the client and the server (the middleware), so you can reach a maximum limit of concurrent connections depending on the resources you have on the server side, while more AJAX async requests can be served with the same resources.
- With AJAX you can do RESTful requests. This means that you can take advantage of existing HTTP infrastructure, e.g. proxies, to cache requests and use conditional GET requests.
- There is more (communication) data overhead in AJAX compared to Socket.IO (HTTP headers, cookies, etc.).
- AJAX is usually faster than Socket.IO to "code"...
- When using Socket.IO, it is possible to have two-way communication where each side - client or server - can initiate a request. In AJAX, only the client can initiate a request!
- Socket.IO has more transport options, including Adobe Flash.

Now, for my own application, what I was most interested in was the speed of making requests and getting data from the (Node.js) server! Regarding the middleware's data communication with the CUBRID database, as ~90% of my data access was read-only, a good data caching mechanism is obviously a great way to go. But about this I'll talk next time.

So I decided to put their (AJAX and Socket.IO) speed to the test, to see which one is faster, at least in my hardware and software environment! My middleware was set up to run on an i5 processor, 8 GB of RAM and an Intel X25 SSD drive. But seriously, every speed test and, generally speaking, any performance test depends so much(!) on your hardware and software configuration that it is always a great idea to try things in your own environment, and to rely less on various information you find on the internet and more on your own findings!

The tests I decided to do had to meet the following requirements:

- Test:
  - AJAX
  - Socket.IO persistent connection
  - Socket.IO non-persistent connections
- Test 10, 100, 250 and 500 data exchanges between the client and the server
- Each data exchange between the middleware SERVER (a Node.js web server) and the client (a browser) is a 4 KB random data string
- Run the server in release (not debug) mode
- Use Firefox as the client
- Minimize the console messages output, for both server and client
- Do each test after a full client page reload
- Repeat each test at least 3 times, to make sure the results are consistent

Testing Socket.IO, using a persistent connection

I've created a small Node.js server, which was handling the client requests:

```javascript
io.sockets.on('connection', function (client) {
    client.on('send_me_data', function (idx) {
        client.emit('you_have_data', idx, random_string(4096));
    });
});
```

And this is the JS client script I used for the test:

```javascript
var socket = io.connect(document.location.href);
socket.on('you_have_data', function (idx, data) {
    var end_time = new Date();
    total_time += end_time - start_time;
    logMsg(total_time + '(ms.) [' + idx + '] - Received ' + data.length + ' bytes.');
    if (idx++ < countMax) {
        setTimeout(function () {
            start_time = new Date();
            socket.emit('send_me_data', idx);
        }, 500);
    }
});
```

Testing Socket.IO, using NON-persistent connections

This time, for each data exchange, I opened a new Socket.IO connection. The Node.js server code was similar to the previous one, but I decided to send the data back to the client immediately after connect, as a new connection was initiated every time, for each data exchange:

```javascript
io.sockets.on('connection', function (client) {
    client.emit('you_have_data', random_string(4096));
});
```

The client test code was:

```javascript
function exchange(idx) {
    var start_time = new Date();
    var socket = io.connect(document.location.href, {'force new connection' : true});
    socket.on('you_have_data', function (data) {
        var end_time = new Date();
        total_time += end_time - start_time;
        socket.removeAllListeners();
        socket.disconnect();
        logMsg(total_time + '(ms.) [' + idx + '] - Received ' + data.length + ' bytes.');
        if (idx++ < countMax) {
            setTimeout(function () {
                exchange(idx);
            }, 500);
        }
    });
}
```

Testing AJAX

Finally, I put AJAX to the test... The Node.js server code was, again, not that different from the previous ones:

```javascript
res.writeHead(200, {'Content-Type' : 'text/plain'});
res.end('_testcb(\'{"message": "' + random_string(4096) + '"}\')');
```

As for the client code, this is what I used to test:

```javascript
function exchange(idx) {
    var start_time = new Date();
    $.ajax({
        url : 'http://localhost:8080/',
        dataType : "jsonp",
        jsonpCallback : "_testcb",
        timeout : 300,
        success : function (data) {
            var end_time = new Date();
            total_time += end_time - start_time;
            logMsg(total_time + '(ms.) [' + idx + '] - Received ' + data.length + ' bytes.');
            if (idx++ < countMax) {
                setTimeout(function () {
                    exchange(idx);
                }, 500);
            }
        },
        error : function (jqXHR, textStatus, errorThrown) {
            alert('Error: ' + textStatus + " " + errorThrown);
        }
    });
}
```

Remember, when coding AJAX together with Node.js, you need to take into account that you might be doing cross-domain requests and violating the same-origin policy; therefore you should use the JSONP-based format!

Btw, as you can see, I quoted only the most significant parts of the test code, to save space (a sketch of the omitted random_string() helper follows at the end of this post). If anyone needs the full code, server and client, please let me know - I'll be happy to share it.

OK - it's time now to see what we got after all this work! I ran each test for 10, 100, 250 and 500 data exchanges, and this is what I got in the end:

| Data exchanges | Socket.IO NON-persistent (ms.) | AJAX (ms.) | Socket.IO persistent (ms.) |
|---|---|---|---|
| 10 | 90 | 40 | 32 |
| 100 | 900 | 320 | 340 |
| 250 | 2,400 | 800 | 830 |
| 500 | 4,900 | 1,500 | 1,600 |

Looking at the results, we can notice a few things right away:

- For each type of test, the results behave quite linearly; this is good - it shows that the results are consistent.
- The results clearly show that Socket.IO non-persistent connections perform significantly worse than the others.
- There doesn't seem to be a big difference between AJAX and the persistent Socket.IO connection - we are talking about differences of only some milliseconds. This means that if you can live with less than 10,000 data exchanges per day, for example, chances are high that the user won't notice a speed difference...

The graph below illustrates the numbers I obtained in the test.

...So what's next? ...Well, I have to figure out what kind of traffic I need to support, and then I will re-run the tests for those numbers, but this time excluding Socket.IO non-persistent connections; it is obvious by now that I need to choose between AJAX and persistent Socket.IO connections. I also learned that, most probably, the difference in speed will not be as big as one would expect, at least not for a "small-traffic" web site, so I need to start looking into the other advantages and disadvantages of each approach/technology when choosing my solution!

That's pretty much it for this post - see you next time with a post about Node.js and caching!

P.S. Here are a few more nice resources where you can find interesting stuff about Node.js, Socket.IO and AJAX:

- http://socket.io/#how-to-use
- http://www.hacksparrow.com/jquery-with-node-js.html
- http://www.slideshare.net/toddeichel/nodejs-talk-at-jquery-pittsburgh
- http://tech.burningbird.net/article/node-references-and-resources
- http://davidwalsh.name/websocket
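For completeness: the server snippets above call a random_string() helper that the post does not show. A plausible implementation (my own sketch, not the author's code, which was offered on request) could look like this:

```javascript
// Hypothetical helper: generates a random alphanumeric string of the
// requested length (the tests above use random_string(4096)).
function random_string(length) {
    var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
    var result = '';
    for (var i = 0; i < length; i++) {
        result += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return result;
}
```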
Posted over 11 years ago by Esen Sagynov
We are very excited to announce that we have completed the 3rd Milestone of the CUBRID Node.js driver project, meaning the driver has reached its beta stage! As usual, the M3 code has been pushed to Github at https://github.com/CUBRID/node-cubrid. Please go ahead and download the driver. You will also find the installation instructions in the project README file.

Some 1.0 Beta release highlights:

- Connect, Query, Fetch, Execute, Commit, Rollback, DB Schema, etc. - all have been implemented!
- Events model
- 9,000+ LOC, including the test code
- 50+ test cases
- nodeunit support (a generic nodeunit example follows at the end of this post)
- Documentation is now available
- E2E scenarios
- 5 demo websites
- ... and many more additions and improvements!

Thus, the third milestone is now complete. Next time we will release the stable version of the driver.

- Milestone 1: Basic driver interfaces: connect, queries support + result set - 3rd week of August. Released on August 18, 2012!
- Milestone 2: Technology preview: 80% functionality ready - 2nd week of September. Released on September 17, 2012!
- Milestone 3: Beta release: > 95% functionality ready - 1st week of October. Released on October 4, 2012!
- Milestone 4: Stable release: 100% functionality ready - end of October. In progress.
- Milestone 5: NPM Package and tutorials - beginning of November.

What's coming next in the 1.0 Stable release? (October 2012)

- An NPM installer
- Additional functionality
- Code improvements, optimizations and refactoring
- More testing
- Additional tutorials

Demo video

In the previous M2 announcement we posted a demo video which demonstrates the functionality of the CUBRID Node.js driver. To see the video, visit CUBRID Node.js driver - M2 is completed!

If you have questions, ask at our CUBRID Q&A site. We will be glad to answer you!
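Since the driver's test suite runs on nodeunit, here is the general shape of a nodeunit test case. This is a generic sketch of the framework's API, not an excerpt from the driver's actual suite:

```javascript
// Generic nodeunit example: each exported function receives a `test`
// object and must call test.done() when the (possibly async) work ends.
exports.testStringBasics = function (test) {
    test.expect(2);                    // number of assertions we plan to run
    var s = 'CUBRID';
    test.equal(s.length, 6, 'length should be 6');
    test.ok(s.indexOf('DB') !== -1, 'should contain "DB"');
    test.done();
};
```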
Posted over 11 years ago by Esen Sagynov
Every year since 2008, NHN, the company behind the CUBRID open source RDBMS, has held the DEVIEW conference in Seoul, the capital of Korea, to share the knowledge it has accumulated over the past 13 years. This year DEVIEW was held on September 17 at the COEX Grand Ballroom, where over 2,000 software developers and IT specialists gathered. Many influential Korean and foreign tech companies joined hands with NHN to share their ideas and insights into the new technologies they are deploying at their services.

The number of international participants in DEVIEW 2012 surpassed that of last year. There were 42 speakers from companies like Amazon, Couchbase, Heroku, Intel, LinkedIn, Nvidia, Twitter, Facebook, Google and many others. DEVIEW 2012 showcased seven tracks (A through G), each with six sessions covering the web, mobile, performance, databases, large-capacity data processing, NoSQL, cloud, CPU computing and other advanced technologies used in the IT industry.

At the conference, Lee Donghyun presented "Is NoSQL the only answer for processing large data? Consider using CUBRID." on behalf of our CUBRID team. In his presentation Donghyun talked about various use cases practiced at NHN which leverage advantageous RDBMS features such as transactions and data stability while still being able to process large data. In particular, the presentation explained how to use various powerful features in CUBRID to handle large amounts of data.

Kingsley Wood from Amazon presented "Leveraging Cloud Computing for global scale online game success", where he explained how successful game developers take advantage of the latest innovations in cloud computing to handle large volumes of traffic.

Raffi Krikorian from Twitter talked about "Real-time large data at Twitter": how Twitter delivers thousands of tweets per second to disks, in-memory timelines, emails, and mobile devices. View his presentation below. We have previously posted a great article about Decomposing Twitter from a Database Perspective; refer to it to learn more about Twitter's database infrastructure.

Richard Park from LinkedIn talked about a new Apache Incubator project, Kafka, LinkedIn's distributed publish/subscribe messaging system. In the presentation Richard highlighted the core design principles of this system, operational aspects of running Kafka in production, performance metrics, and how this system fits into LinkedIn's data ecosystem, as well as some of the products and monitoring applications that are supported by Apache Kafka. You can find his presentation at Prezi.

Perry Krug from Couchbase introduced Couchbase Server, a document-oriented NoSQL database, in "Speed and Scale with Interactive Applications". Perry explained the various big data management challenges encountered by large-scale Web and mobile applications, and how to address them. See his presentation below. If you are interested in NoSQL, NoSQL benchmarking, or other platforms for Big Data, we have published a set of great articles about NoSQL.

Harold Giménez from Heroku introduced "Heroku PostgreSQL: The Tale of Conceiving and Building a Leading Cloud Database Service". Heroku Postgres is a Database as a Service (DaaS) provider, and this presentation is all about the story of conceiving Heroku Postgres, its initial buildout, and its evolution up to where it is today. Here is Harold's presentation.

Kim Giyoung from Twitter explained "Creating an attractive platform" based on the experience of Facebook. Below is his presentation.

The full program of DEVIEW 2012 is available at http://deview.kr/2012/xe/index.php?module=timetable&act=dispTimetableTimetable.

During the conference, some of the speakers drew quite a crowd around themselves. Aaron T. Myers, a senior developer at Cloudera, Twitter's Raffi Krikorian, and Intel's Rajiv Kapoor were greeted very warmly by DEVIEW attendees. This year's DEVIEW was especially successful. For summaries of previous years' DEVIEW, see the 2011 and 2010 recaps.
Posted over 11 years ago by Sangmin Lee
This is the third article in the series "Become a Java GC Expert". In the first article, Understanding Java Garbage Collection, we learned about the processes for the different GC algorithms: how GC works, what the Young and Old Generations are, what you should know about the 5 types of GC in the new JDK 7, and what the performance implications are for each of these GC types. In the second article, How to Monitor Java Garbage Collection, I explained how the JVM actually runs Garbage Collection in real time, how we can monitor GC, and which tools we can use to make this process faster and more effective.

In this third article, I will show some of the best options you can use for GC tuning, based on real cases as examples. I have written this article under the assumption that you have already understood the previous articles in this series. Therefore, if you haven't already read the two previous articles, please do so before reading this one.

Is GC Tuning Required?

Or, more precisely, is GC tuning required for Java-based services? I should say GC tuning is not always required for all Java-based services. It is not needed for a Java-based system in operation when:

- The memory size has been specified using the -Xms and -Xmx options.
- The -server option is included.
- Logs such as Timeout logs are not left in the system.

In other words, if you have not set the memory size and too many Timeout logs are printed, you need to perform GC tuning on your system.

But there is one thing to keep in mind: GC tuning is the last task to be done. Think about the fundamental cause of GC tuning. The Garbage Collector clears objects created in Java. The number of objects the garbage collector needs to clear, and the number of GCs to be executed, depend on the number of objects which have been created. Therefore, to control the GC performed by your system, you should first decrease the number of objects created. There is a saying, "many a little makes a mickle." We need to take care of small things, or they will add up and become something big which is difficult to manage.

- We need to make using StringBuilder or StringBuffer instead of String a way of life.
- It is better to accumulate as few logs as possible.

However, we know that there are some cases we cannot help. We have seen that XML and JSON parsing use the most memory. Even though we use String as little as possible and process logs as well as we can, a huge temporary memory, some 10-100 MB, is used for parsing XML or JSON. However, it is difficult not to use XML and JSON. Just understand that it takes a lot of memory.

If application memory usage improves after repeated tunings, you can start GC tuning. I classify the purposes of GC tuning into two: one is to minimize the number of objects passed to the Old area, and the other is to decrease Full GC execution time.

Minimizing Number of Objects Passed to Old Area

Generational GC is the GC provided by the Oracle JVM, excluding the G1 GC which can be used from JDK 7 and higher versions. An object is created in the Eden area and then transferred back and forth between the Survivor areas. The objects left after that are sent to the Old area. Some objects are created in the Eden area and passed directly to the Old area because of their large size. GC in the Old area takes relatively longer than GC in the New area. Therefore, decreasing the number of objects passed to the Old area can decrease the frequency of Full GC. Decreasing the number of objects passed to the Old area may be misunderstood as choosing to leave the objects in the New area. However, this is impossible. Instead, you can adjust the size of the New area.

Decreasing Full GC Time

The execution time of Full GC is relatively longer than that of Minor GC. Therefore, if it takes too much time to execute Full GC (1 second or more), timeouts may occur in several connected parts.

- If you try to decrease Full GC execution time by decreasing the Old area size, OutOfMemoryError may occur or the number of Full GCs may increase.
- Alternatively, if you try to decrease the number of Full GCs by increasing the Old area size, the execution time will increase.

Therefore, you need to set the Old area size to a "proper" value.

Options Affecting the GC Performance

As I mentioned at the end of Understanding Java Garbage Collection, do not think, "Somebody got great performance when he used those GC options. Why don't we use them as he did?" The reason is that the size of objects created and their lifetimes differ from one Web service to another. Simply consider: if a task is performed under conditions A, B, C, D and E, and the same task is performed under conditions A and B only, which one will be done quicker? From a common-sense standpoint, the answer is the task performed under conditions A and B. Java GC options are the same. Setting several options does not enhance the speed of executing GC. Rather, it may make it slower. The basic principle of GC tuning is to apply different GC options to two or more servers, compare them, and then add the options under which a server has demonstrated enhanced performance or better GC time to all servers. Keep this in mind.

The following table shows the options related to memory size among the GC options that can affect performance.

Table 1: JVM Options to Be Checked for GC Tuning.

| Classification | Option | Description |
|---|---|---|
| Heap area size | -Xms | Heap area size when starting JVM |
| | -Xmx | Maximum heap area size |
| New area size | -XX:NewRatio | Ratio of New area and Old area |
| | -XX:NewSize | New area size |
| | -XX:SurvivorRatio | Ratio of Eden area and Survivor area |

I frequently use the -Xms, -Xmx, and -XX:NewRatio options for GC tuning. The -Xms and -Xmx options are particularly required. How you set the NewRatio option makes a significant difference to GC performance. Some people ask how to set the Perm area size. You can set it with the -XX:PermSize and -XX:MaxPermSize options, but only when OutOfMemoryError occurs and the cause is the Perm area size.

Another option that may affect GC performance is the GC type. The following table shows the available options by GC type (based on JDK 6.0).

Table 2: Available Options by GC Type.

| Classification | Option | Remarks |
|---|---|---|
| Serial GC | -XX:+UseSerialGC | |
| Parallel GC | -XX:+UseParallelGC -XX:ParallelGCThreads=value | |
| Parallel Compacting GC | -XX:+UseParallelOldGC | |
| CMS GC | -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=value -XX:+UseCMSInitiatingOccupancyOnly | |
| G1 | -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC | In JDK 6, these two options must be used together. |

Except for G1 GC, the GC type is changed by setting the option in the first line of each GC type. The most general GC type that does not intrude is Serial GC. It is optimized for client systems. There are a lot of options that affect GC performance, but you can get a significant effect by setting just the options mentioned above; a combined example follows below.
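To make the tables concrete, here is what a launch line might look like when combining heap sizing, a GC type, and GC logging. The flags are standard HotSpot options from the tables above plus the usual GC-log flags (the article's "-verbosegc" corresponds to HotSpot's -verbose:gc spelling); the sizes, log path, and jar name are illustrative assumptions, not recommendations, since only measurement on your own servers can tell you the right values:

```
java -server -Xms2g -Xmx2g -XX:NewRatio=3 \
     -XX:+UseParallelOldGC \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -Xloggc:/var/log/myapp-gc.log \
     -jar myapp.jar
```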
Remember that setting too many options does not promise enhanced GC execution time.

Procedure of GC Tuning

The procedure of GC tuning is similar to the general performance improvement procedure. The following is the GC tuning procedure that I use.

1. Monitoring GC status. You need to monitor the GC status to check the GC status of the system in operation. Please see the various GC monitoring methods in How to Monitor Java Garbage Collection.

2. Deciding whether to tune GC after analyzing the monitoring result. After checking the GC status, you should analyze the monitoring result and decide whether to tune GC or not. If the analysis shows that the time taken to execute GC is just 0.1-0.3 seconds, you don't need to waste your time on tuning GC. However, if the GC execution time is 1-3 seconds, or more than 10 seconds, GC tuning is necessary. But if you have allocated about 10 GB of Java memory, and it is impossible to decrease the memory size, there is no way to tune GC; before tuning, you need to think about why you need to allocate such a large memory size. If you have allocated 1 GB or 2 GB and OutOfMemoryError occurs, you should execute a heap dump to verify and remove the cause.

Note: A heap dump is a file capturing the memory, used to check the objects and data in the Java memory. This file can be created by using the jmap command included in the JDK. While creating the file, the Java process stops; therefore, do not create this file while the system is operating. You can search the Internet for a detailed description of heap dumps. For Korean readers, see the book I published last year: The story of troubleshooting for Java developers and system operators (Sangmin Lee, Hanbit Media, 2011, 416 pages).

3. Setting the GC type/memory size. If you have decided on GC tuning, select the GC type and set the memory size. At this time, if you have several servers, it is important to check the difference between GC options by setting different options for each server.

4. Analyzing the results. Start analyzing the results after collecting data for at least 24 hours after setting the GC options. If you are lucky, you will find the most suitable GC options for the system. If not, you should analyze the logs and check how the memory has been allocated. Then you need to find the optimal options for the system by changing the GC type/memory size.

5. If the result is satisfactory, apply the options to all servers and terminate GC tuning.

In the following sections, you will see the tasks to be done in each stage.

Monitoring GC Status and Analyzing Results

The best way to check the GC status of a Web Application Server (WAS) in operation is to use the jstat command. I explained the jstat command in How To Monitor Java Garbage Collection, so here I will describe the data to check. The following example shows a JVM for which GC tuning has not been done (it is not the operation server, though).

```
$ jstat -gcutil 21719 1s
S0     S1    E     O     P     YGC  YGCT    FGC  FGCT    GCT
48.66  0.00  48.10 49.70 77.45 3428 172.623 3    59.050  231.673
48.66  0.00  48.10 49.70 77.45 3428 172.623 3    59.050  231.673
```

Here, check the values of YGC and YGCT. Divide YGCT by YGC: you get 0.050 seconds (50 ms). It means that it takes 50 ms on average to execute GC in the Young area. With that result, you don't need to care about GC for the Young area.

Now check the values of FGCT and FGC. Divide FGCT by FGC: you get 19.68 seconds. It means that it takes 19.68 seconds on average to execute a Full GC. It may have taken 19.68 seconds each of the three times; otherwise, it may have taken 1 second each for two executions and 58 seconds for one. In either case, GC tuning is required.

You can easily check the GC status by using the jstat command; however, the best way to analyze GC is to generate logs with the -verbosegc option. I explained how to generate such logs, and the tools to analyze them, in the previous article. HPJmeter is my favorite among the tools used to analyze -verbosegc logs. It is easy to use and analyze. With HPJmeter you can easily check the distribution of GC execution times and the frequency of GC occurrence.

If the GC execution time meets all of the following conditions, GC tuning is not required:

- Minor GC is processed quickly (within 50 ms).
- Minor GC is not frequently executed (about once every 10 seconds).
- Full GC is processed quickly (within 1 second).
- Full GC is not frequently executed (once per 10 minutes).

The values in parentheses are not absolute; they vary according to the service status. Some services may be satisfied with a 0.9-second Full GC processing speed, but some may not. Therefore, check the values and decide whether to execute GC tuning or not by considering each service.

There is one thing you should be careful of when you check the GC status: do not check the times of Minor GC and Full GC only. You must check the number of GC executions as well. If the New area size is too small, Minor GC will be executed too frequently (sometimes once or more per second). In addition, the number of objects passed to the Old area increases, causing more frequent Full GC executions. Therefore, apply the -gccapacity option to the jstat command to check how much of each area is occupied.

Setting GC Type/Memory Size

Setting the GC Type

There are five GC types for the Oracle JVM. However, if you are not on JDK 7, you should select one among Parallel GC, Parallel Compacting GC and CMS GC. There is no principle or rule for deciding which one to select. If so, how can we select one? The most recommended way is to apply all three. However, one thing is clear: CMS GC is faster than the other Parallel GCs. If that were always the case, you would just apply CMS GC. However, CMS GC is not always faster. Generally, the Full GC of CMS GC is fast; however, when concurrent mode failure occurs, it is slower than the other Parallel GCs.

Concurrent mode failure

Let's take a deeper look into concurrent mode failure. The biggest difference between Parallel GC and CMS GC is compaction. The compaction task removes memory fragmentation by compacting memory in order to remove the empty space between allocated memory areas. In the Parallel GC type, compaction is executed whenever Full GC is executed, which takes much time; but after executing Full GC, memory can be allocated faster, since the next memory can be allocated sequentially. On the contrary, CMS GC is not accompanied by compaction, which is why CMS GC executes faster. However, when compaction is not executed, empty spaces are left in memory, as before running Disk Defragmenter, so there may be no room for large objects. For example, 300 MB may be left in the Old area, yet a 10 MB object cannot be stored sequentially in the area. In that case, a "concurrent mode failure" warning occurs and compaction is executed. But when this happens under CMS GC, the compaction takes longer than that of the other Parallel GCs, and it may cause another problem. For a more detailed description of concurrent mode failure, see Understanding CMS GC Logs, written by Oracle engineers.

In conclusion, you should find the best GC type for your system. Each system has its proper GC type, so you need to find it. If you are running six servers, I recommend you set the same options for each pair of two servers, add the -verbosegc option, and then analyze the results.

Setting the Memory Size

The following shows the relationship between the memory size, the number of GC executions, and the GC execution time:

- Large memory size
  - decreases the number of GC executions;
  - increases the GC execution time.
- Small memory size
  - decreases the GC execution time;
  - increases the number of GC executions.

There is no "right" answer for setting the memory size small or large. 10 GB is OK if the server resources are good and Full GC can be completed within 1 second even with the memory set to 10 GB. But most servers are not in that situation; when the memory is set to 10 GB, it takes about 10-30 seconds to execute Full GC. Of course, the time may vary according to the object sizes.

If so, how should we set the memory size? Generally, I recommend 500 MB. But note that this does not mean you should set the WAS memory with the -Xms500m and -Xmx500m options. Based on the status before GC tuning, check the memory size left after Full GC. If there is about 300 MB left after Full GC, it is good to set the memory to 1 GB (300 MB (for default usage) + 500 MB (minimum for the Old area) + 200 MB (for free memory)). That means you should set a memory space of more than 500 MB for the Old area. Therefore, if you have three operation servers, set one server to 1 GB, one to 1.5 GB, and one to 2 GB, and then check the results.

Theoretically, GC will be done fastest in the order 1 GB > 1.5 GB > 2 GB, so 1 GB will be the fastest to execute GC. However, it cannot be guaranteed that it takes 1 second to execute Full GC with 1 GB and 2 seconds with 2 GB; the time depends on the server performance and the object sizes. Therefore, the best way to create the measurement data is to set up as many candidates as possible and monitor them.

You should set one more thing for the memory size: NewRatio. NewRatio is the ratio of the New area to the Old area. If -XX:NewRatio=1, New area:Old area is 1:1; for 1 GB, that is 500 MB:500 MB. If NewRatio is 2, New area:Old area is 1:2. Therefore, as the value gets larger, the Old area gets larger and the New area gets smaller. It may not look important, but the NewRatio value significantly affects the entire GC performance. If the New area size is small, more memory is passed to the Old area, causing frequent Full GCs which take a long time to handle. You may simply think that NewRatio=1 would be the best; however, it may not be so. When NewRatio is set to 2 or 3, the entire GC status may be better, and I have seen such cases.

What is the fastest way to complete GC tuning? Comparing the results of performance tests is the fastest way. Since setting different options for each server and monitoring their status requires checking the data after at least one or two days, performance tests are tempting. But when you execute GC tuning through a performance test, you must prepare to generate the same load as in production, and the request ratio, such as the URLs that generate the load, must be identical to that of production.
However, generating an accurate load is difficult even for a professional performance tester and takes a long time to prepare. Therefore, it is more convenient and easier to apply the options in production and wait for the results, even though it takes longer.

Analyzing GC Tuning Results

After applying the GC options and setting the -verbosegc option, check with the tail command whether the logs are accumulating as desired. If the option is not set exactly and no log is accumulated, you will be wasting your time. If logs are accumulating as desired, check the results after collecting data for one or two days. The easiest way is to move the logs to a local PC and analyze the data using HPJmeter.

In the analysis, focus on the following items. This priority order is my own; the most important item for deciding on a GC option is Full GC execution time.

1. Full GC execution time
2. Minor GC execution time
3. Full GC execution interval
4. Minor GC execution interval
5. Entire Full GC execution time
6. Entire Minor GC execution time
7. Entire GC execution time
8. Full GC execution count
9. Minor GC execution count

It is a very lucky case to find the most appropriate GC options right away, and in most cases you won't. Be careful when executing GC tuning, because OutOfMemoryError may occur if you try to complete GC tuning all at once.

Examples of Tuning

So far we have discussed GC tuning theoretically, without any examples. Now let's take a look at some examples of GC tuning.

Example 1

The following example is GC tuning for Service S. For the newly developed Service S, it took too much time to execute Full GC. See the result of jstat -gcutil:

```
S0    S1   E    O     P     YGC YGCT  FGC FGCT  GCT
12.16 0.00 5.18 63.78 20.32 54  2.047 5   6.946 8.993
```

The information to the left, up to the Perm area, is not important for initial GC tuning; the values from YGC onward are what matter here. The average times taken to execute Minor GC and Full GC once are calculated below.

Table 3: Average Time Taken to Execute Minor GC and Full GC for Service S.

| GC Type | GC Execution Count | GC Execution Time | Average |
|---|---|---|---|
| Minor GC | 54 | 2.047 s | 37 ms |
| Full GC | 5 | 6.946 s | 1,389 ms |

37 ms is not bad for Minor GC. However, 1.389 seconds for Full GC means that timeouts may frequently occur when GC runs on a system whose DB timeout is set to 1 second. In this case, the system requires GC tuning.

First, you should check how the memory is used before starting GC tuning. Use the jstat -gccapacity option to check the memory usage. The result on this server was as follows:

```
NGCMN    NGCMX    NGC      S0C     S1C     EC       OGCMN     OGCMX     OGC       OC        PGCMN    PGCMX    PGC      PC       YGC FGC
212992.0 212992.0 212992.0 21248.0 21248.0 170496.0 1884160.0 1884160.0 1884160.0 1884160.0 262144.0 262144.0 262144.0 262144.0 54  5
```

The key values are:

- New area usage size: 212,992 KB
- Old area usage size: 1,884,160 KB

Therefore, the total allocated memory size is 2 GB, excluding the Perm area, and New area:Old area is 1:9. To check the status in more detail than jstat allows, the -verbosegc log was added, and three options were set for the three instances, as shown below. No other options were added.

- NewRatio=2
- NewRatio=3
- NewRatio=4

After one day, the GC logs of the system were checked. Fortunately, no Full GC occurred on this system after NewRatio was set. Why? The reason is that most of the objects created by the system are destroyed soon after creation, so they are not passed to the Old area but are destroyed in the New area. In this situation, it is not necessary to change other options; just select the best value for NewRatio. So, how can we determine the best value? To get it, analyze the average response time of Minor GC for each NewRatio. The average response time of Minor GC for each option was:

- NewRatio=2: 45 ms
- NewRatio=3: 34 ms
- NewRatio=4: 30 ms

We concluded that NewRatio=4 was the best option, since the GC time was the shortest even though the New area size was the smallest. After applying this GC option, the server had no Full GC. For your information, the following is the result of executing jstat -gcutil some days after the JVM of the service had started:

```
S0   S1   E     O     P     YGC  YGCT   FGC FGCT  GCT
8.61 0.00 30.67 24.62 22.38 2424 30.219 0   0.000 30.219
```

You may think that GC has not occurred frequently because the server receives few requests. However, while Minor GC was executed 2,424 times, Full GC was not executed at all.

Example 2

This example is for Service A. We found in the company's Application Performance Manager (APM) that the JVM periodically stopped operating for a long time (8 seconds or more). We were searching for the reason and found that it took a long time to execute Full GC, so we decided to execute GC tuning. As the starting stage of GC tuning, we added the -verbosegc option, and the result is as follows.

Figure 1: Duration Graph before GC Tuning.

The above duration graph is one of the graphs that HPJmeter automatically provides after analysis. The X-axis shows the time after the JVM started, and the Y-axis shows the response time of each GC. The green dots (CMS) indicate the Full GC results, and the blue dots (Parallel Scavenge) indicate the Minor GC results.

Previously I said that CMS GC would be the fastest. But the above result shows that there were some cases which took up to 15 seconds. What caused such a result? Please remember what I said before: CMS gets slower when compaction is executed. In addition, the memory of the service had been set with -Xms1g and -Xmx4g, and the allocated memory was 4 GB. So I changed the GC type from CMS GC to Parallel GC, changed the memory size to 2 GB, and then set NewRatio to 3. The result of jstat -gcutil after a few hours was as follows:

```
S0   S1    E    O     P     YGC YGCT   FGC FGCT   GCT
0.00 30.48 3.31 26.54 37.01 226 11.131 4   11.758 22.890
```

The Full GC time became faster: about 3 seconds per execution, compared to 15 seconds with 4 GB. However, 3 seconds is still not fast enough. So I created the following six cases:

- Case 1: -XX:+UseParallelGC -Xms1536m -Xmx1536m -XX:NewRatio=2
- Case 2: -XX:+UseParallelGC -Xms1536m -Xmx1536m -XX:NewRatio=3
- Case 3: -XX:+UseParallelGC -Xms1g -Xmx1g -XX:NewRatio=3
- Case 4: -XX:+UseParallelOldGC -Xms1536m -Xmx1536m -XX:NewRatio=2
- Case 5: -XX:+UseParallelOldGC -Xms1536m -Xmx1536m -XX:NewRatio=3
- Case 6: -XX:+UseParallelOldGC -Xms1g -Xmx1g -XX:NewRatio=3

Which one was the fastest? The result showed that the smaller the memory size, the better the result. The following figure shows the duration graph of Case 6, which showed the greatest GC improvement. The slowest response time was 1.7 seconds, and the average had dropped to within 1 second.

Figure 2: Duration Graph after Applying Case 6.

Based on this result, I changed all GC options of the service to Case 6. However, this change caused OutOfMemoryError each night. It is difficult to detail the reason here, but in short, nightly batch data processing caused a shortage of JVM memory. The related problems are being resolved now.

It is very dangerous to analyze GC logs accumulated over a short time and apply the result to all servers as the outcome of GC tuning. Keep in mind that GC tuning can be executed without failure only when you analyze the service's operation as well as the GC logs. We have reviewed two GC tuning examples to see how GC tuning is executed. As I mentioned, the GC options set in the examples can be identically set only for a server which has the same CPU, OS version, and JDK version as the service, and which executes the same functions. Do not apply the options I used to your services in production, since they may not work for you.

Conclusion

I execute GC tuning based on my experience, without executing heap dumps and analyzing the memory in detail. A precise analysis of the memory status may produce better GC tuning results; however, that kind of analysis is helpful mainly when memory is used in a constant, routine pattern. If the service is heavily used and there are many memory usage patterns, GC tuning based on reliable previous experience is recommendable instead.

I have run performance tests with the G1 GC option set on some servers, but have not applied it to any production server yet. The G1 GC option shows faster results than any other GC type. However, it requires upgrading to JDK 7, and stability is still not guaranteed: nobody knows whether there is a critical bug or not. So the time is not yet ripe for applying it. After JDK 7 is stabilized (this does not mean that it is not stable now) and WAS is optimized for JDK 7, stable application of G1 GC may finally become possible, and some day we may not need GC tuning at all.

For more details on GC tuning, search Slideshare.com for related materials. The most recommendable material is Everything I Ever Learned About JVM Performance Tuning @Twitter, written by Attila Szegedi, a Twitter engineer. Please take the time to read it.

By Sangmin Lee, NHN Performance Engineering Lab.

About the author: Sangmin Lee joined NHN in 2009 and works on fault diagnosis support, in-house lectures, and APM technical support, and operates the websites tuning-java.com and GodOfJava.com. He has written several books on Java. At his previous company he wrote, while commuting by bus, "The story of custom coding which affects the Java performance and tuning", "The story of testing that Java developers can learn easily with fun", and "The story of troubleshooting for Java developers and system operators". Now he is revising "Java standard".
Posted over 11 years ago by Esen Sagynov
I am very glad to announce the immediate availability of the M2 version of the CUBRID Database driver for Node.js! This release is a full-featured technology preview that provides over 80% of the functionality of the final stable NPM package.

Some M2 highlights:

- The https://github.com/CUBRID/node-cubrid project now contains 3,000+ LOC.
- Connect/Close connection, Query/Close query, Fetch, Batch Execute, Set auto-commit, Commit, Rollback, Implicit Connect, etc. are fully implemented.
- More data types support implemented.
- Complete driver events model implemented.
- 30+ functional test cases
- 30+ unit tests
- 3 E2E demos
- 4 full Web site demos
- ... and many more additions and improvements

There are no more function prototype changes from now on. The programming model (events and callbacks) is finalized, and we provide end-to-end test scenarios, including Web site demos. You can download the driver from https://github.com/CUBRID/node-cubrid. We have published several examples on the Github project page which illustrate how to use the driver in different coding styles: event driven, callback driven, or using the async Node module. Also, we have prepared the demo video below, which showcases this M2 release.

Thus, the second milestone is now complete. Next time we will release the beta version of the driver.

- Milestone 1: Basic driver interfaces: connect, queries support + result set - 3rd week of August. Released on August 18, 2012!
- Milestone 2: Technology preview: 80% functionality ready - 2nd week of September. Released on September 17, 2012!
- Milestone 3: Beta release: > 95% functionality ready - 1st week of October. In progress.
- Milestone 4: Stable release: 100% functionality ready - end of October.
- Milestone 5: NPM Package and tutorials - beginning of November.

What comes next in Beta?

- Schema support
- Documentation release & publishing
- More testing
- Additional demos
- More code improvements, optimizations and refactoring
- An NPM installer will be available with the stable release.

Demo video

This silent video has been created to illustrate the functionality of the CUBRID driver for Node.js after the completion of Milestone 2. In this video you can see how an employee management system can be run on CUBRID through the Node.js driver. See more CUBRID video tutorials on Vimeo.
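As a companion to the callback-style sketch in the stable release announcement above, here is roughly what the event-driven style mentioned in this post looks like. The event constant names below (EVENT_CONNECTED, EVENT_QUERY_DATA_AVAILABLE, EVENT_ERROR) are written from memory and should be treated as assumptions; the demos in the Github repository show the exact names:

```javascript
var CUBRID = require('node-cubrid');
var client = CUBRID.createCUBRIDConnection('localhost', 33000, 'public', '', 'demodb');

// Event-driven style: subscribe to driver events instead of nesting callbacks.
// Event names are assumptions; check the repository demos for the real constants.
client.on(client.EVENT_ERROR, function (err) {
    console.error('Driver error:', err);
});

client.on(client.EVENT_CONNECTED, function () {
    client.query('SELECT COUNT(*) FROM game');
});

client.on(client.EVENT_QUERY_DATA_AVAILABLE, function (result, queryHandle) {
    console.log('Query returned:', result);
    client.close();
});

client.connect();
```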
Posted over 11 years ago by Esen Sagynov
We are very happy to announce the general availability of the next-generation Web-based SQL client with powerful system and database monitoring features: CUBRID Web Manager. It is an open source database administration and monitoring tool designed specifically to provide a Web interface for CUBRID, an open source RDBMS highly optimized for Web applications. In this article I will explain how this project started, what it looks like, its major features, what open source software was used to develop CWM, what difficulties we encountered during and after the development process, and other internal details.

Background

For the last year or two we have been receiving many requests from users and hosting companies to bring CUBRID Manager, our desktop GUI-based database administration tool, to the Web. So far it has not been possible to access CUBRID Servers from the Web. In March 2012 we started researching what such a Web tool should look like, how it should function, what features it should provide, and finally how we should implement all of this.

Goal

We started by analyzing existing open source solutions such as phpMyAdmin for MySQL. We found that all the features it provides come down to executing a SQL statement, either by the application itself or by a user. However, we came up with a list of value-added functionalities that we wanted to provide to our users but which cannot be accomplished by simply querying the database server. Besides being able to execute valid SQL statements through an intuitive UI, there are two key features natively supported in CUBRID that we wanted to bring to the Web:

- Monitoring host system and database resources, such as CPU, memory and disk space, in real time.
- Providing information about slow queries being executed, again in real time.

Besides these functional differences, we wanted to make the new CUBRID Web Manager as administration- and configuration-free as possible. To explain what I mean by this, consider this example: to install phpMyAdmin on your server, you have to install prerequisite software. You need a running Web server like Apache or nginx, and a PHP engine, either as a Web server module or via other libraries like FastCGI. Also, you need to make sure you use PHP version 5.2.0 or higher. Briefly, there are many variables users have to take care of before using phpMyAdmin. In CUBRID Web Manager we wanted to remove all this hassle. So we decided to provide a solution that is very closely integrated with the CUBRID Server and requires no configuration to get started, although it is configurable if necessary. Just install CUBRID Database Server, and everything is already preconfigured for you. This is what we wanted to achieve.

We also researched a lot about which Internet browsers we should support. We thought about dropping IE6 & 7 support and supporting only modern browsers. But we found that IE6 still holds 21% of market share in China, where we have many users, and this number is also high in other Southeast Asian countries. So, despite the effort it would require, we decided CWM would support everything from IE6 and Firefox 3.5 up; modern browsers such as Chrome 10+, Safari 4+, and Opera 11+ are also supported. Thus, we started to design CWM to work for everyone with a desktop browser. Here we also remembered our DBAs and developers who want or, actually, need to access their database servers on their mobile phones. This is especially important and useful in urgent situations.

Team

For the last 5 months it has been our mission to create a powerful Web-based database administration tool with unique monitoring features which can run in almost any Web browser on PCs and mobile phones. To accomplish this mission we brought together a team of five software engineers majoring in C/C++/Java, and a QA expert, from our NHN China branch:

- Kevin Qian - CWM lead developer and engineer, majoring in Java/J2EE.
- Martin Lee - CUBRID Manager Server (CMS) lead developer and engineer, majoring in C/C++.
- Steve Xu - CWM developer and engineer, majoring in C/C++.
- Santiago Wang - CWM developer and engineer, majoring in Java/J2EE.
- Frank Wu - CWM/CMS tester, QA engineer.

Thanks to this team of dedicated developers, today we have a solid and really powerful Web-based database administration tool with lots of great features.

Free MacBook Air and Amazon Kindle Fire during our CWM Bug Bash event

Last month, in August 2012, we launched a 30-day CWM Bug Bash developer event dedicated to bringing CUBRID Web Manager out of beta status. In the event we asked users to find and report the remaining bugs in our new tool, to make it stable before we officially announced general availability. Participants could earn points by requesting new features and reporting various bugs. Each reported issue was rewarded depending on its type: new feature (0.5 points), trivial issue (0.5p), minor issue (1p), major issue (2p), critical issue (3p), and blocker issue (4p). The top 4 users would receive the latest model of the 13'' MacBook Air or an Amazon Kindle Fire tablet. Others would receive $50 worth of Amazon or iTunes gift cards, which they could use to purchase apps or books, or to upgrade their OS X from Lion to Mountain Lion.

As a result of this CWM Bug Bash event, 148 issues were reported by 8 participants. Among them, 37 issues were rejected as unreproducible, invalid, or duplicates of already reported issues. Thus, our Dev team accepted 111 issues, and so far we have already fixed 63 of them, thanks to our great beta testers. The stable version of CWM is the result of this work.

System architecture

We have designed CUBRID Web Manager as a server-side plugin for CUBRID Database Server. It consists of two components (not counting the components of the original CUBRID Server):

- Server side
- Client side

Server side

The server side of CWM is actually developed as part of CUBRID Manager Server, which is a part of CUBRID itself. This means the client side is totally isolated from server processes, which allows us to update the server side while making no changes to the client side. CWM is simply a client for CM Server.

The server side includes a set of APIs (CMAPI) to communicate with CUBRID Manager Server; CWM uses these APIs to perform database administration and monitoring. CCI API, the C driver for CUBRID, is used on the server side by CWM to execute all SQL statements. The last and major component on the server side of CWM is the nginx Web server, which:

- listens on a particular port which is set in the cm_httpd.conf configuration file;
- receives HTTP requests from the CWM client side (i.e. the browser) to its RESTful API;
- relays the request to CUBRID Manager Server through either the CM Server or CCI APIs;
- receives the response from CM Server in the form of a JSON object;
- finally sends this JSON object back to the browser.

I will write more about this CUBRID HTTP API in a separate blog post.

Client side

CWM is implemented in pure JavaScript, which populates the DOM with dynamically generated DOM objects depending on user commands. To accelerate the development of the client side, we decided to use a JavaScript framework. The major requirements for such a framework were to:

- have a rich set of predefined UI components, including the advanced charting and graphing tools necessary to display the system monitoring status;
- be able to reuse these UI components;
- provide detailed documentation for each UI component and its usage;
- follow MVC;
- provide support for legacy browsers (IE6+).

This is what we wanted: to develop fast, produce a stable product, and avoid spending time on tweaking the UI. Eventually, among the various JavaScript frameworks, we chose Ext JS by Sencha.

Thus, the entire client side of CWM is implemented in JavaScript with Ext JS. When a user performs a particular action (e.g. clicks on a table, queries a table, drops a database, etc.), an HTTP request is sent to the RESTful API of our httpd server, which listens on the secure port 8282 by default. For every response, the client receives a JSON object with the retrieved data or error messages. Briefly speaking, CWM is simply a Web application which communicates with CUBRID Server via CUBRID Manager's HTTP Interface; a request/response sketch follows below.
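To illustrate the request/response cycle described above, here is a small browser-side sketch, written with jQuery for brevity (CWM itself is built on Ext JS). The endpoint path and the payload fields are purely hypothetical placeholders, since this post defers the real CM HTTP API to a separate article; only the host/port and the JSON-over-HTTP pattern come from the description above:

```javascript
// Hypothetical illustration of the CWM client/server exchange.
// '/cm_api' and the 'task' field are invented names, not the real API.
$.ajax({
    url  : 'https://localhost:8282/cm_api',   // built-in nginx, default port 8282
    type : 'POST',
    contentType : 'application/json',
    data : JSON.stringify({ task : 'getdbinfo', dbname : 'demodb' }),
    dataType : 'json',
    success : function (response) {
        // CM Server replies with a JSON object: data on success,
        // or an error message on failure.
        console.log(response);
    },
    error : function (jqXHR, textStatus) {
        console.error('Request failed: ' + textStatus);
    }
});
```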
Download and install

The current CUBRID Web Manager version 8.4.1 build 0004 is now stable and can be used safely with any CUBRID v8.4.0+. You can download CWM from http://www.cubrid.org/wiki_tools/entry/cubrid-web-manager-installation-instructions, where you will also find the installation instructions. Starting from CUBRID 8.4.2 (the upcoming release scheduled for the end of September 2012), CWM will be integrated into the main server binary, so users will no longer need to install CWM separately.

Until then, when you install CWM, you will find the default cm_httpd.conf configuration file under the /conf directory. This configuration file follows the nginx configuration specification. If you do not change anything, the default configuration will instruct the built-in nginx Web server to listen on the secure SSL port 8282 on the same machine. Therefore, to access CWM in the browser, navigate to http://localhost:8282, where localhost can be any remote IP address of your CUBRID host.

Host login

The first thing you will see when you open http://localhost:8282 is the Host Login form. CUBRID Web Manager provides the same authentication service that is available in the CUBRID Manager client. In order to use CWM, users have to log in to CUBRID Manager Server. Note that host authentication is different from database user authentication. As an administrator of a CUBRID Server, you can log in to a host server and administer multiple databases. There can be multiple DBAs who create their own databases, which you may or may not have access to. In order to access the databases you do have access to, you need to log in to them after you are logged in as a host server administrator. When you log in to CWM or CM for the first time ever, the default username and password are admin and admin. Right after the first successful login, you will be prompted to change your password.
System and database monitoring

The first thing you may want to do is check the health of your database servers. CUBRID Web Manager provides an awesome, real-time monitoring dashboard styled like the tachometer of a sports car :). Really cool! On the dashboard you will see three graphs: CPU usage, Memory usage, and Disk space usage.

CPU usage

If you hover over the CPU graph, you will see a popover which displays the percentage of CPU used by all system processes and by CUBRID Database Server. This data is refreshed every second, so you can watch the graph change in real time.

The green piece of this half-pie indicates the overall system CPU usage, while the red piece shows how much of this usage is accounted for by CUBRID Server managing all databases.

Memory usage

The memory usage graph displays how much RAM is used by the system, how much of it is used by CUBRID, and how much memory is available in total.

Remember that in CUBRID, physical memory is used only by running databases. In other words, if some of your databases have not been started, they do not use any memory. For more info see Important Facts to Know about CUBRID.

Disk space usage

The disk space usage graph displays how much space is available on the server in total, how much of it has already been used, and how much of this usage is accounted for by CUBRID databases.

At this moment this graph displays the total disk space available to the system, including all available partitions, not just the one where CUBRID Server is installed.

In the next version of CUBRID Web Manager, we will add one more graph which will display information about Slow Queries, which I explained above. So stay tuned! The next version will be even cooler!

Broker management

Just like in CUBRID Manager, in the Brokers tab you can start and stop Brokers, the middleware of CUBRID. If you enable the Auto Refresh feature, the information about Brokers will change in real time. You can see various information such as the port a particular broker is listening on, the number of application servers running, the number of jobs queued, the number of transactions and queries per second being processed by the broker, and the number of requests.

Configuration variables

In the second "Variables" tab, you can see the various configuration values used by CUBRID Server, Brokers, and CUBRID Manager. Right now all these values are static.

Databases

When you click on the "Databases" tab, you can see a list of the available databases on the current host. Here you can perform various actions on each database: check user privileges, start the database, stop it, or drop it. The same list of databases is displayed on the left panel with an icon representing the current status of each database. For example, on the screenshot above you can notice that the demodb database has been started, while the hibernate database is not running.

Tables

When you click on any database, a list of its tables is displayed on the left panel. Various operations can be performed on each table from the main panel. You can hover over any action icon to see a tooltip.

Executing SQL statements

In the SQL tab you can execute any SQL, even multiple statements at once. When multiple SQL statements are executed together, all of them will run, but CWM will display only the results of the last SELECT query.

Auto backup plan

One of the great features of CUBRID Web Manager is the Auto backup plan, which allows you to schedule a backup of an entire database.
To schedule an automatic backup, open the "Operations" tab and add a backup plan.

In the next version we will enhance the auto backup feature. CWM will let you see a list of previously added backup plans as well as detailed information about how many times they have been executed, for how long, and other status information.

Export data

Exporting in CUBRID Web Manager works seamlessly. You can export the entire database or a list of selected tables to an SQL file, to CSV, or in loaddb format, which can later be loaded into another database using the cubrid loaddb utility.

Import data

Importing in CWM works similarly. In the current stable version, you can import data from SQL and loaddb files. Later we will add more formats.

Conclusion

There are many super great features we have implemented in CUBRID Web Manager. It is really nice, and it is open source! Give it a try and let us know your feedback in the comments below, on Twitter, or on Facebook. If you have a particular feature request, feel free to file it on our JIRA Issue Tracker or our forum, or ask questions at our Q&A.
Posted over 11 years ago by Esen Sagynov
The winners of the 5th CUBRID Developers Event are listed below! Congratulations to these winners! They won a 13'' MacBook Air, three Amazon Kindle Fire tablets, and $50 Amazon/iTunes Gift Cards.

About the event

To remind you, last month, in August 2012, we started a month-long event dedicated primarily to finding bugs in our new CUBRID Web Manager, a powerful Web-based database administration tool. Participants could earn points by requesting new features and reporting various bugs. Each reported issue was rewarded depending on its type: new feature (0.5 points), trivial issue (0.5p), minor issue (1p), major issue (2p), critical issue (3p), and blocker issue (4p). For more information about the event, see Smashing the Bugs in CUBRID Web Manager.

Results

As a result of this CWM Bug Bash event, 148 issues were reported by 8 participants. Among them, 37 issues were rejected as unreproducible, invalid, or duplicates of already reported issues. Thus, 111 issues were accepted by our Dev team. We have already fixed 63 of them thanks to the following list of great contributors.

Winners

Finally, we are ready to announce the list of winners of the past CWM Bug Bash event based on the final standings!

The MacBook Air 13'' goes to Stefan Papusoi (ipstefan), who reported 25 bugs and requested 34 new features, earning 49 points.

The three Amazon Kindle Fires go to:

- Yeon Woong Cho (caoy), who reported 8 bugs and requested 25 new features, earning 25.5 points.
- Daniel Bronshtein (dani-br), who reported 5 bugs and requested 1 new feature, earning 12.5 points.
- Emanuel Bronshtein (e3b), who reported 4 bugs and requested 2 new features, earning 6 points.

Catea Paulina (pololina), who requested 9 new features, earning 4.5 points, receives a $50 Amazon/iTunes Gift Card.

Congratulations to all of you! We will contact each winner by email.

We are very thankful for your great help in making CUBRID Web Manager a better tool that everybody can use for free! We will be very glad to ship these valuable products to your doorsteps! We sincerely believe you will enjoy your new gadgets or buy great developer books!

Social contribution

As we promised, we are giving out $50 worth of Amazon and iTunes Gift Cards to those who shared this Bug Bash news on their social networking services. Congratulations to the following three users!

- John Z (Twitter)
- Cho Hyun Jong (Twitter, Facebook)
- Stéphane (Twitter)

Thank you very much for sharing our event with your followers!

With respect,
CUBRID Dev Team!
Posted over 11 years ago by Changhun Oh
Lately, most Internet services are developed as Software as a Service (SaaS). This means that users can communicate with a variety of services and use their resources. One of the reasons Facebook and Twitter are so widespread is that each one can use the functionality and data of the other, boosting the usage of both services.

However, in order to use the functions of Facebook or Twitter through external services, one does not necessarily need to log in to either of them. A simple authentication process, like OAuth, allows users to leverage the data from popular social networking services. This helps to build an ecosystem that is good for users as well as for the many Internet service providers.

In this article I will explain what OAuth is, how it came to life, which companies rely on it, and how you can use it in your own applications. I will also explain the difference between OAuth and the no less popular OpenID.

Birth and Usage of OAuth

OAuth is an open standard protocol for authentication that allows a user to use Internet service functions, such as those provided by Facebook or Twitter, within other applications (desktop, web, mobile, etc.).

According to its official site:

OAuth is an authorization framework that enables a third-party application to obtain limited access to an HTTP service.

Before OAuth was created, there were other authentication methods that protected the ID and password of users from third-party applications while enabling API Access Delegation. Google, Yahoo!, AOL, and Amazon each created and used their own authentication methods.

OAuth 1.0 was released in 2007, and OAuth 1.0 Revision A, a revised version with better security, was released in 2008.

OAuth started when developers from Twitter and Gnolia, a social bookmarking service, met in 2006 and discussed how to authenticate Internet services. They realized there was no standard for API Access Delegation at that time. They founded an OAuth committee on the Internet in April 2007 and prepared and shared a draft of the OAuth proposal. The initiative soon gained supporters. During the 73rd meeting of the Internet Engineering Task Force (IETF), held in Minnesota in 2008, a discussion took place on whether OAuth should be managed as an IETF standard. In 2010, the OAuth 1.0 protocol was released as the RFC 5849 standard.

As of now, there is also OAuth 2.0, created by the IETF OAuth working group, though it is still in the draft stage. OAuth 2.0 is incompatible with OAuth 1.0; however, its authentication process is much simpler. Because of this simplicity, a variety of Internet services have opted to implement one of the latest drafts of OAuth 2.0. The following table summarizes the versions of OAuth used by popular Internet service companies.

Table 1. Versions of OAuth Used by Internet Service Companies.

- Facebook: 2.0 draft 12
- Foursquare: 2.0
- Google: 2.0
- Microsoft (Hotmail, Messenger, Xbox): 2.0
- LinkedIn: 2.0
- Daum (Tistory): 2.0
- NHN (Open API): 1.0a
- Daum (Yozm, Open API): 1.0a
- MySpace: 1.0a
- Dropbox: 1.0
- Twitter: 1.0a
- Vimeo: 1.0a
- Yahoo!: 1.0a

OAuth and Login

OAuth and login must be understood separately. I will give a real-life example to explain the difference between them.

Assume we have a company where employees gain access to the building using their employee ID cards. Now assume that an external guest needs to visit the company.
If login stands for an employee accessing the building, OAuth stands for a guest receiving a visitor card and accessing the building. See the following example.

1. External Guest A tells the reception desk that he wants to meet Employee B for business purposes.
2. The reception desk notifies Employee B that Guest A has come to visit him.
3. Employee B comes to the reception desk and identifies Guest A.
4. Employee B records the business purpose and identity of Guest A at the reception desk.
5. The reception desk issues a visitor card to Guest A.
6. Employee B and Guest A go to the specified room to discuss their business.

I gave this example to help you understand the procedure of OAuth authorization. The visitor card allows visitors to access pre-determined places only, which means that a person with a "visitor card" cannot access all the places that a person with an "employee ID card" can. In the same way, a user who has directly logged into the service can do more than a user who has been authorized through OAuth.

In OAuth, "Auth" means "Authorization" as well as "Authentication". Therefore, when authentication is performed through OAuth, the service provider (the reception desk) asks whether the user (the employee) wants to authorize the request of the third-party application (the guest). The following figure shows how Twitter asks whether you would like to grant access to a third-party application.

Figure 1. OAuth Access Right Request on Twitter.

OpenID vs. OAuth

OpenID is a standard protocol for authentication which also uses HTTP, just like OAuth. However, the purpose of OpenID is different from that of OAuth.

The main purpose of OpenID is authentication, while for OAuth it is authorization. Therefore, using OpenID is fundamentally identical to logging in. With OpenID, the OpenID Provider handles the user authentication process, and the many parties that rely on OpenID delegate authentication to it.

OAuth also includes an authentication step in addition to authorization. For example, when Facebook OAuth is used, the Facebook Service Provider authenticates the Facebook user. However, the essential purpose of OAuth is to determine whether the user has the right to call a given API, such as the API to write on the user's wall or the API to get the friends list.

You can use OAuth for user authentication; however, note that its fundamental purpose is to authorize users. This is different from what OpenID aims to achieve.

OAuth Dance, the OAuth 1.0 Authentication Process

The OAuth Dance is the authentication process that identifies users using OAuth. It is named to illustrate two parties sending and receiving information in precise steps, as if they were dancing.

Understanding OAuth requires knowing its terminology in advance. The following table summarizes the key terms.

Table 2. Key OAuth 1.0 Terminology.

- User: A person who has an account with the Service Provider and tries to use the Consumer. (The employee in our example.)
- Service Provider: A service that provides an Open API that uses OAuth. (The reception desk in our example.)
- Consumer: An application or web service that wants to use the functions of the Service Provider through OAuth authentication. (The guest in our example.)
- Request Token: A value that a Consumer uses to be authorized by the Service Provider. After authorization is completed, it is exchanged for an Access Token. (The identity of the guest in our example.)
- Access Token: A value that contains a key with which the Consumer can access the resources of the Service Provider. (The visitor card in our example.)
The following figure shows the OAuth authentication process.

Figure 3. OAuth 1.0a Authentication Process.

The following table maps the OAuth authentication process onto the company visit process described before.

Table 3. Company Visit Process and OAuth Authentication Process.

1. Guest A (an external guest) tells the reception desk that he wants to meet Employee B (an employee) for business purposes. (OAuth: requesting and issuing a Request Token.)
2. The reception desk notifies Employee B that Guest A has come to visit him. (OAuth: calling the user authentication page.)
3. Employee B comes to the reception desk and identifies Guest A. (OAuth: user login completed.)
4. Employee B records the business purpose and identity of Guest A at the reception desk. (OAuth: requesting user authority and accepting the request.)
5. The reception desk issues a visitor card for Guest A. (OAuth: issuing the Access Token.)
6. Employee B and Guest A go to the specified room for their business. (OAuth: requesting service information by using the Access Token.)

From this table you can understand the Access Token as a visitor card. With the visitor card, a visitor can access the permitted spaces. Likewise, a Consumer with an Access Token can call the permitted Open APIs of the Service Provider.

Request Token

The step in which the Consumer requests a Request Token and the Service Provider issues one corresponds to the guest saying, "I am Guest A. Can I meet Employee B?".

See the following code used to request a Request Token. This is an example of requesting a Request Token from the NAVER OAuth API.

GET /naver.oauth?mode=req_req_token&oauth_callback=http://example.com/OAuthRequestToken.do&oauth_consumer_key=WEhGuJZWUasHg&oauth_nonce=zSs4RFI7lakpADpSsv&oauth_signature=wz9+ZO5OLUnTors7HlyaKat1Mo0=&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1330442419&oauth_version=1.0 HTTP/1.1
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Host: nid.naver.com

For easy reading, the request above has been rearranged below, one parameter per line.

GET http://nid.naver.com/naver.oauth?mode=req_req_token&
    oauth_callback=http://example.com/OAuthRequestToken.do&
    oauth_consumer_key=WEhGuJZWUasHg&
    oauth_nonce=zSs4RFI7lakpADpSsv&
    oauth_signature=wz9+ZO5OLUnTors7HlyaKat1Mo0=&
    oauth_signature_method=HMAC-SHA1&
    oauth_timestamp=1330442419&
    oauth_version=1.0 HTTP/1.1

The following table describes the parameters used to request a Request Token.

Table 4. Parameters Used to Request the Issuing of a Request Token.

- oauth_callback: The Web address of the Consumer to which the User is redirected after the Service Provider completes authentication. If the Consumer is not a web application and has no address to redirect to, the lower-case string "oob" (Out Of Band) is used as the value.
- oauth_consumer_key: A key value used by the Service Provider to identify the Consumer.
- oauth_nonce: A random string generated by the Consumer for each request. The combination of oauth_nonce and oauth_timestamp must be unique per request; this prevents malicious replays of a request.
- oauth_signature: A signature value covering the OAuth authentication information, which is a concatenated string of all the other parameters and the HTTP request method. The signing method is defined in oauth_signature_method.
- oauth_signature_method: The method used to generate oauth_signature. HMAC-SHA1 and HMAC-MD5 are available.
- oauth_timestamp: The time when the request was generated, expressed as the number of seconds elapsed since 00:00:00 on January 1, 1970.
- oauth_version: The OAuth version. 1.0a is written as 1.0.
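As a small illustration of the oauth_nonce and oauth_timestamp parameters described above, here is a hedged Node.js sketch; the helper names are my own and not part of any OAuth library.

var crypto = require('crypto');

// A random, single-use string; combined with the timestamp it must be
// unique per request to prevent replay attacks.
function makeNonce() {
    return crypto.randomBytes(16).toString('hex');
}

// Seconds elapsed since 00:00:00 on January 1, 1970.
function makeTimestamp() {
    return Math.floor(Date.now() / 1000);
}

console.log(makeNonce(), makeTimestamp());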
Generating oauth_signature

In OAuth 1.0, the most difficult part is generating oauth_signature. The Consumer and the Service Provider must use exactly the same signing algorithm to generate it. oauth_signature is generated through the following four stages (a sketch follows at the end of this section).

1. Collect all request parameters. All OAuth-related parameters, which start with oauth_, except oauth_signature itself, should be collected. If parameters are sent in the POST body, they should be collected as well.

2. Normalize the parameters. Sort all parameters in alphabetical order and apply URL encoding (RFC 3986) to each key and value. List the encoded results in key=value form and insert "&" between each pair. Finally, apply URL encoding to the entire resulting string again.

3. Create the Signature Base String. Join the HTTP method name (GET or POST), the HTTP URL called by the Consumer (without parameters), and the normalized parameter string with "&". The combination becomes [GET|POST] + "&" + [URL string without parameters] + "&" + [normalized parameters]. In this example, http://nid.naver.com/naver.oauth is the URL, and URL encoding is applied to it as well.

4. Generate the signature. Sign the string generated at stage 3 with the Consumer Secret Key, which the Consumer obtained when registering with the Service Provider. Using the method declared in oauth_signature_method, such as HMAC-SHA1, generate the final oauth_signature.

Calling the User Authentication Page

The stage of calling the user authentication page corresponds to "The reception desk notifies Employee B that Guest A has come to visit him and requests identification". In response to the Request Token request, the Service Provider sends the Consumer an oauth_token and an oauth_token_secret, which together form the Request Token. The oauth_token_secret is used later when requesting the Access Token, so if the Consumer is a web application, it should be saved in the HTTP session, in cookies, or in the DBMS.

Then the Consumer displays to the User the authentication page specified by the Service Provider, using oauth_token. For NAVER, the address of the user authentication page for OAuth is:

https://nid.naver.com/naver.oauth?mode=auth_req_token

Pass the oauth_token returned in the Request Token response as a parameter to this address. For example, the following URL is created; it leads to the user authentication page.

https://nid.naver.com/naver.oauth?mode=auth_req_token&oauth_token=wpsCb0Mcpf9dDDC2

The stage up to calling the login page corresponds to "the reception desk calls Employee B". Next, Employee B comes to the reception desk and identifies Guest A. This identification step in OAuth is the Service Provider authenticating the User.

After authentication is completed, authorization is requested, as mentioned before: "Guest A has come for a business meeting. Please issue a visitor card to him."

Once authentication and authorization are complete, the User is redirected to the URL specified in oauth_callback. At this time, the Service Provider passes a new oauth_token and an oauth_verifier to the Consumer, and these values are used to request the Access Token.
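Before moving on to the Access Token request, here is a minimal Node.js sketch of the four signing stages described above, assuming HMAC-SHA1. The helper names and the secret value are illustrative assumptions, not part of an official library.

var crypto = require('crypto');

// RFC 3986 percent-encoding (stricter than plain encodeURIComponent).
function rfc3986(str) {
    return encodeURIComponent(str).replace(/[!'()*]/g, function (c) {
        return '%' + c.charCodeAt(0).toString(16).toUpperCase();
    });
}

// Stages 1-2: collect and normalize the parameters.
function normalize(params) {
    return Object.keys(params).sort().map(function (key) {
        return rfc3986(key) + '=' + rfc3986(params[key]);
    }).join('&');
}

// Stage 3: build the Signature Base String.
// Stage 4: sign it; the key is "consumerSecret&tokenSecret", where the
// token secret part is empty when requesting a Request Token.
function sign(method, url, params, consumerSecret, tokenSecret) {
    var baseString = [method, rfc3986(url), rfc3986(normalize(params))].join('&');
    var key = rfc3986(consumerSecret) + '&' + rfc3986(tokenSecret || '');
    return crypto.createHmac('sha1', key).update(baseString).digest('base64');
}

// Example with the parameters from the request above and a made-up secret:
var oauthSignature = sign('GET', 'http://nid.naver.com/naver.oauth', {
    oauth_callback: 'http://example.com/OAuthRequestToken.do',
    oauth_consumer_key: 'WEhGuJZWUasHg',
    oauth_nonce: 'zSs4RFI7lakpADpSsv',
    oauth_signature_method: 'HMAC-SHA1',
    oauth_timestamp: '1330442419',
    oauth_version: '1.0'
}, 'consumerSecretHere');
console.log(oauthSignature);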
Requesting the Access Token

In OAuth, the Access Token is like the visitor card issued to Guest A.

Requesting an Access Token is similar to requesting a Request Token. However, the parameters and the key used to generate oauth_signature are different. When requesting the Access Token, the oauth_token and oauth_verifier issued in the previous authentication step are used, and there is no need for the oauth_callback parameter.

When requesting a Request Token, we used only the Consumer Secret Key to generate oauth_signature. When requesting an Access Token, the signature must be generated with the value obtained by appending oauth_token_secret to the Consumer Secret Key (Consumer Secret Key + "&" + oauth_token_secret). Changing the signing key in this way strengthens security.

The following table shows the parameters used to request an Access Token.

Table 5. OAuth Parameters to Request the Issuing of an Access Token.

- oauth_consumer_key: A key value used by the Service Provider to identify the Consumer.
- oauth_nonce: A random string generated by the Consumer for each request. The combination of oauth_nonce and oauth_timestamp must be unique per request; this prevents malicious replays of a request.
- oauth_signature: A signature value covering the OAuth authentication information, which is a concatenated string of all the other parameters and the HTTP request method. The signing method is defined in oauth_signature_method.
- oauth_signature_method: The method used to generate oauth_signature. HMAC-SHA1 and HMAC-MD5 are available.
- oauth_timestamp: The time when the request was generated, expressed as the number of seconds elapsed since 00:00:00 on January 1, 1970.
- oauth_version: The OAuth version.
- oauth_verifier: The oauth_verifier value passed back through oauth_callback after user authentication.
- oauth_token: The oauth_token value passed back through oauth_callback after user authentication.

After setting the parameters in the above table, request the Access Token. Then oauth_token and oauth_token_secret are returned. Depending on the Service Provider, the user ID or profile may also be returned.

Using the Access Token

Finally, we have the visitor card. Now we can enter the building. Entering the building with the visitor card corresponds to using the Service Provider's functions with the User's authority. In other words, the Consumer can call the Open APIs that require that authority.

For example, to get the list of boards of a NAVER cafe, the following URL should be called:

http://openapi.naver.com/cafe/getMenuList.xml

To get the list of boards of the cafes a particular user has joined, the Consumer must request that URL with the user's authority, and the OAuth parameters should be passed along with the call.
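As a sketch of how a Consumer might assemble these parameters into the Authorization header shown in the next example, here is a hedged Node.js helper; the function name and the values are illustrative assumptions.

// Build an OAuth 1.0 Authorization header value from signed parameters.
// A minimal sketch; production libraries handle edge cases more carefully.
function buildAuthHeader(params) {
    var parts = Object.keys(params).map(function (key) {
        return key + '="' + encodeURIComponent(params[key]) + '"';
    });
    return 'OAuth ' + parts.join(', ');
}

// Hypothetical values, echoing the example below:
var header = buildAuthHeader({
    oauth_consumer_key: 'dpf43f3p2l4k3l03',
    oauth_token: 'nSDFh734d00sl2jdk',
    oauth_signature_method: 'HMAC-SHA1',
    oauth_timestamp: '1379123202',
    oauth_nonce: 'chapoH',
    oauth_signature: 'MdpQcU8iPSUjWoN/UDMsK2sui9I='
});
// header: 'OAuth oauth_consumer_key="dpf43f3p2l4k3l03", oauth_token="...", ...'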
The following is an example of requesting an Open API using the Access Token. Note that the OAuth parameters are carried in the value of the Authorization HTTP header field, not as GET or POST parameters.

POST /cafe/getMenuList.xml HTTP/1.1
Authorization: OAuth oauth_consumer_key="dpf43f3p2l4k3l03",oauth_token="nSDFh734d00sl2jdk",oauth_signature_method="HMAC-SHA1",oauth_timestamp="1379123202",oauth_nonce="chapoH",oauth_signature="MdpQcU8iPSUjWoN%2FUDMsK2sui9I%3D"
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Host: openapi.naver.com

For easy reading, the Authorization field above has been rearranged below, one parameter per line.

Authorization: OAuth oauth_consumer_key="dpf43f3p2l4k3l03",
               oauth_token="nSDFh734d00sl2jdk",
               oauth_signature_method="HMAC-SHA1",
               oauth_timestamp="1379123202",
               oauth_nonce="chapoH",
               oauth_signature="MdpQcU8iPSUjWoN%2FUDMsK2sui9I%3D"

The following table shows the parameters used to call Open APIs using the Access Token.

Table 6. OAuth Parameters Required to Call Open APIs Using the Access Token.

- oauth_consumer_key: A key value used by the Service Provider to identify the Consumer.
- oauth_nonce: A random string generated by the Consumer for each request. The combination of oauth_nonce and oauth_timestamp must be unique per request; this prevents malicious replays of a request.
- oauth_signature: A signature value covering the OAuth authentication information, which is a concatenated string of all the other parameters and the HTTP request method. The signing method is defined in oauth_signature_method.
- oauth_signature_method: The method used to generate oauth_signature. HMAC-SHA1 and HMAC-MD5 are available.
- oauth_timestamp: The time when the request was generated, expressed as the number of seconds elapsed since 00:00:00 on January 1, 1970.
- oauth_version: The OAuth version.
- oauth_token: The oauth_token received when the Access Token was issued.

Caution

When a request is made using the Access Token, some Service Providers require the realm parameter. The realm is an optional parameter used with the WWW-Authenticate HTTP header field.

OAuth 2.0

OAuth 1.0 is difficult to use in applications that are not web applications. In addition, the procedure is complicated enough to make implementing an OAuth library hard, and this complexity also places an operational burden on the Service Provider.

OAuth 2.0 improves on these weak points. It is not compatible with OAuth 1.0 and still has no final draft; nevertheless, many Internet service companies have already adopted it. The main features of OAuth 2.0 are (see the sketch after this list):

- Enhanced support for applications that are not web applications
- No cryptographic signing needed: it relies on HTTPS instead of HMAC
- Simplified signatures: no sorting and no URL encoding required
- Access Token renewal

When an Access Token was issued in OAuth 1.0, it remained valid indefinitely; for Twitter, the Access Token never expires. For higher security, OAuth 2.0 allows specifying a lifetime for the Access Token.

The terminology used by OAuth 2.0 is totally different from that of OAuth 1.0. It is best to understand them as two protocols with the same purpose but entirely different designs. Since the final draft has not yet been published, it is enough for now to understand the general characteristics of OAuth 2.0.
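To show how much simpler a protected call becomes under OAuth 2.0, here is a hedged Node.js sketch of a bearer-style request over HTTPS: no signature, no sorting, no parameter encoding. The host, path, and token are illustrative assumptions, and the exact header format varies across the 2.0 drafts.

var https = require('https');

// With OAuth 2.0 there is nothing to sign: the access token is simply
// presented over HTTPS, which protects it in transit.
var options = {
    hostname: 'api.example.com',            // hypothetical Service Provider
    path: '/v1/me/friends',                 // hypothetical protected resource
    headers: { 'Authorization': 'Bearer ACCESS_TOKEN_HERE' }
};

https.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { console.log(body); });
});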
Even though OAuth 2.0 is an improved version and is becoming one of the key elements of the current Internet ecosystem, the former lead author and editor of the OAuth specifications does not think so. I recommend that you familiarize yourself with his explanations and recommendations about which version of OAuth it is best to stick with.

Nevertheless, newcomers to the industry tend to use the authentication service of Facebook or Twitter rather than implementing their own authentication method, because it reduces development and operational costs. In addition, it helps them promote their services through Facebook or Twitter, while the Service Providers get their core functions popularized even further.

Imagine... just one standard authentication technology... and it is changing the entire Internet industry.

By Changhun Oh, Senior Developer at Social Apps Service Team, NHN Corporation.

About the author: In 2000, I started programming as a webmaster and it became my vocation... and it will be for the rest of my life. It makes me happy! I want to know everything about development, so I read Hello World (the NHN Developers blog). I like creating things. My hobbies are DIY, plastic figure model kits, and camping. I have worked as a technical lead, introducing new technologies to the front line and optimizing services. Now I work as an evangelist for the open social platform at NHN.