Planet MySQL - https://planet.mysql.com

  • Percona Server for MySQL 5.7.18-15 is Now Available
    Percona announces the GA release of Percona Server for MySQL 5.7.18-15 on May 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository. Based on MySQL 5.7.18, including all the bug fixes in it, Percona Server for MySQL 5.7.18-15 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.18-15 milestone at Launchpad. Bugs fixed: the server would crash when querying a partitioned table with a single partition (bug #1657941, upstream #76418); running a query on an InnoDB table with the ngram full-text parser and a LIMIT clause could lead to a server crash (bug #1679025, upstream #85835). The release notes for Percona Server for MySQL 5.7.18-15 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

  • Monitoring with Artificial Intelligence and Machine Learning
    Artificial intelligence and machine learning (AI and ML) are so over-hyped today that I usually don’t talk about them. But there are real and valid uses for these technologies in monitoring and performance management. Some companies have already been employing ML and AI with good results for a long time. VividCortex’s own adaptive fault detection uses ML, a fact we don’t generally publicize. AI and ML aren’t magic, and I think we need a broader understanding of this. And understanding that there are a few types of ML use cases, especially for monitoring, could be useful to a lot of people. I generally think about AI and ML in terms of three high-level results they can produce, rather than classifying them in terms of how they achieve those results.

    1. Predictive Machine Learning

    Predictive machine learning is the most familiar use case in monitoring and performance management today. When used in this fashion, a data scientist creates algorithms that can learn how systems normally behave. The result is a model of normal behavior that can predict a range of outcomes for the next data point to be observed. If the next observation falls outside the bounds, it’s typically considered an anomaly. This is the basis of many types of anomaly detection. Preetam Jinka and I wrote the book on using anomaly detection for monitoring. Although we didn’t write extensively about machine learning, machine learning is just a better way (in some cases) to apply the same techniques. It isn’t a fundamentally different activity. Who’s using machine learning to predict how systems should behave? There’s a long list of vendors and monitoring projects: Netuitive, DataDog, Netflix, Facebook, Twitter, and many more. Anomaly detection through machine learning is par for the course these days.

    2. Descriptive Machine Learning

    Descriptive machine learning examines data and determines what it means, then describes that in ways that humans or other machines can use.
    Good examples of this are fairly widespread. Image recognition, for example, uses descriptive machine learning and AI to decide what’s in a picture and then express it in a sentence. You can look at captionbot.ai to see this in action. What would descriptive ML and AI look like in monitoring? Imagine diagnosing a crash: “I think MySQL got OOM-killed because the InnoDB buffer pool grew larger than memory.” Are any vendors doing this today? I’m not aware of any. I think it’s a hard problem, perhaps not easier than captioning images.

    3. Generative Machine Learning

    Generative machine learning is descriptive in reverse. Google’s software famously performs this technique, the results of which you can see on their inceptionism gallery. I can think of a very good use for generative machine learning: creating realistic load tests. Current best practices for evaluating system performance when we can’t observe the systems in production are to run artificial benchmarks and load tests. These clean-room, sterile tests leave a lot to be desired. Generating realistic load to test applications might be commercially useful. Even generating realistic performance data is hard and might be useful.
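    The predictive use case described above boils down to learning a band of normal behavior and flagging observations that fall outside it. As a rough illustration (plain rolling statistics standing in for a trained model, not any vendor's actual algorithm):

```python
from statistics import mean, stdev

def detect_anomalies(series, window=20, sigmas=3.0):
    """Predict a band of normal behavior from a rolling window of recent
    observations; anything outside the band is flagged as an anomaly."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sd = mean(history), stdev(history)
        low, high = mu - sigmas * sd, mu + sigmas * sd
        if not (low <= series[i] <= high):
            anomalies.append(i)  # observation fell outside the predicted range
    return anomalies

# A steady metric (cycling between 100 and 104) with one obvious spike:
metric = [100.0 + (i % 5) for i in range(40)] + [500.0] + [100.0] * 10
print(detect_anomalies(metric))  # [40] -- only the spike is flagged
```

    Real systems replace the rolling mean and standard deviation with a learned model (seasonal decomposition, regression, neural networks), but the shape of the technique is the same: predict a range, then compare the next observation against it.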

  • What About ProxySQL and Mirroring?
    In this blog post, we’ll look at how ProxySQL and mirroring go together. Overview Let me be clear: I love ProxySQL, and I think it is a great component for expanding architecture flexibility and high availability. But not all that shines is gold! In this post, I want to correctly set some expectations, and avoid selling carbon for gold (carbon has its own uses, while gold has others). First of all, we need to cover the basics of how ProxySQL manages traffic dispatch (I don’t want to call it mirroring, and I’ll explain further below). ProxySQL receives a connection from the application, and through it we can have a simple SELECT or a more complex transaction. ProxySQL gets each query, passes it to the Query Processor, processes it, identifies if the query is mirrored, duplicates the whole MySQL session ProxySQL internal object and associates it to a mirror queue (which refers to a mirror thread pool). If the pool is free (has an available active slot in the set of concurrent active threads), then the query is processed right away. If not, it stays in the queue. If the queue is full, the query is lost. Whatever is returned from the query goes to /dev/null, and as such no result set is passed back to the client. The whole process is not free for a server. If you check the CPU utilization, you will see that the “mirroring” in ProxySQL actually doubles the CPU utilization. This means that the traffic on server A is impacted because of resource contention. Summarizing, ProxySQL will:
      • Send the queries for execution in a different order
      • Completely ignore any transaction isolation
      • Execute a different number of queries on B with respect to A
      • Add significant load on the server resources
    Given these points, coupled with the expectations I mention in the reasoning at the end of this article, it is quite clear to me that at the moment we cannot consider ProxySQL a valid mechanism to duplicate a consistent load from server A to server B.
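    The dispatch behavior described above (free slot: run now; otherwise queue; queue full: drop; results to /dev/null) can be modeled with a toy simulation to see why data loss appears under burst load. This is an illustration of the described behavior only, not ProxySQL's actual implementation:

```python
from collections import deque

class MirrorDispatcher:
    """Toy model of the mirror dispatch behavior described above: a fixed
    pool of active mirror slots, a bounded queue behind it, and silent
    dropping once the queue is full."""

    def __init__(self, max_concurrency=16, max_queue_length=32768):
        self.max_concurrency = max_concurrency
        self.max_queue_length = max_queue_length
        self.queue = deque()
        self.active = 0
        self.executed = 0
        self.dropped = 0

    def mirror(self, query):
        if self.active < self.max_concurrency:
            self.active += 1            # free slot: process right away
            self._execute(query)
        elif len(self.queue) < self.max_queue_length:
            self.queue.append(query)    # no free slot: park in the mirror queue
        else:
            self.dropped += 1           # queue full: the query is lost

    def _execute(self, query):
        self.executed += 1              # result set goes to /dev/null

    def finish_one(self):
        self.active -= 1                # a mirror slot frees up...
        if self.queue:
            self.active += 1            # ...and immediately picks up queued work
            self._execute(self.queue.popleft())

# A burst of 100 queries against 4 slots and a queue of 10:
d = MirrorDispatcher(max_concurrency=4, max_queue_length=10)
for q in range(100):
    d.mirror(q)
while d.active:
    d.finish_one()
print(d.executed, d.dropped)  # 14 86 -- most of the burst is silently dropped
```

    Raising max_concurrency or max_queue_length changes how much of a burst survives, which is exactly the effect of tuning mysql-mirror_max_concurrency and mysql-mirror_max_queue_length in the tests that follow.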
    Personally, I don’t think that the ProxySQL development team (Rene :D) should waste time on fixing this issue, as there are so many other things to cover and improve on in ProxySQL. After working extensively with ProxySQL, and doing a deep QA on mirroring, I think that we either keep it as a basic blind traffic dispatcher, or a full re-conceptualization is required. But once we have clarified that, ProxySQL “traffic dispatch” (I still don’t want to call it mirroring) remains a very interesting feature that can have useful applications – especially since it is easy to set up. The following test results should help set the correct expectations. The tests were simple: load data in a Percona XtraDB Cluster and use ProxySQL to replicate the load on a MySQL master-slave environment. Machines for MySQL/Percona XtraDB Cluster: VM with CentOS 7, 4 CPU, 3 GB RAM, attached storage. Machine for ProxySQL: VM with CentOS 7, 8 CPU, 8 GB RAM. Why did I choose to give ProxySQL a higher volume of resources? I knew in advance I might need to play a bit with a couple of settings that require more memory and CPU cycles, and I wanted to be sure I didn’t get any problems from ProxySQL in relation to CPU and memory. The application that I was using to add load is a Java application I developed to perform my tests. The app is at https://github.com/Tusamarco/blogs/blob/master/stresstool_base_app.tar.gz, and the whole set I used to do the tests is here: https://github.com/Tusamarco/blogs/tree/master/proxymirror. I used four different tables:
    +------------------+
    | Tables_in_mirror |
    +------------------+
    | mirtabAUTOINC    |
    | mirtabMID        |
    | mirtabMIDPart    |
    | mirtabMIDUUID    |
    +------------------+
    OK, so let’s start. Note that the meaningful tests are the ones below; for the whole set, refer to the whole test package.
    First, set up ProxySQL:
    delete from mysql_servers where hostgroup_id in (500,501,700,701);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',500,3306,60000,400);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',501,3306,100,400);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.21',501,3306,20000,400);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.231',501,3306,20000,400);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',700,3306,1,400);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',701,3306,1,400);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.25',701,3306,1,400);
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.43',701,3306,1,400);
    LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
    delete from mysql_users where username='load_RW';
    insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) values ('load_RW','test',1,500,'test',1);
    LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
    delete from mysql_query_rules where rule_id=202;
    insert into mysql_query_rules (rule_id,username,destination_hostgroup,mirror_hostgroup,active,retries,apply) values(202,'load_RW',500,700,1,3,1);
    LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
    Test 1: The first test is mainly a simple functional test, during which I insert records using one single thread in Percona XtraDB Cluster and MySQL. No surprise: here I have 3000 loops, and at the end of the test I have 3000 records on both platforms.
    To have a baseline, we can see that the ProxySQL CPU utilization is quite low. At the same time, the number of “questions” against Percona XtraDB Cluster and MySQL is very similar. The other two metrics we want to keep an eye on are Mirror_concurrency and Mirror_queue_length. These refer respectively to mysql-mirror_max_concurrency and mysql-mirror_max_queue_length, two new variables and metrics introduced in ProxySQL 1.4.0 with the intent to control and manage the load ProxySQL generates internally for the mirroring feature. In this case, you can see we have a maximum of three concurrent connections and zero queue entries (all good). Now that we have a baseline, and we know that at a functional level “it works,” let’s see what happens when increasing the load. Test 2: The scope of this test was identifying how ProxySQL behaves with a standard configuration and increasing load. It turns out that as soon as ProxySQL has a little bit more load, it starts to lose some queries along the way. Executing 3000 loops for 40 threads results in 120,000 rows inserted in all four tables in Percona XtraDB Cluster, but the tables on the secondary (mirrored) platform only have a variable number of inserted rows, between 101,359 and 104,072. This demonstrates consistent loss of data. After reviewing and comparing the connections running in Percona XtraDB Cluster and the secondary, we can see that (as expected) Percona XtraDB Cluster’s number of connections is scaling and serving the number of incoming requests, while the connections on the secondary are limited by the default value of mysql-mirror_max_concurrency=16. It is also interesting to note that the ProxySQL transaction process queue maintains its connection to the secondary longer than the connection to Percona XtraDB Cluster. The queue is an evident bell curve that reaches 6K entries, which is well below the mysql-mirror_max_queue_length limit (32K).
    Yet queries were dropped by ProxySQL, which indicates the queue is not really enough to accommodate the pending work. CPU-wise, ProxySQL (as expected) takes a few more cycles, but nothing crazy. The overhead for the simple mirroring queue processing can be seen when the main load stops, around 12:47. Another interesting graph to keep an eye on is the one describing the executed commands inside Percona XtraDB Cluster and the secondary. As you can see, the traffic on the secondary was significantly less (669 on average, compared to Percona XtraDB Cluster’s 1.17K), and then it spikes when the main load on the Percona XtraDB Cluster node terminates. In short, it is quite clear that ProxySQL is not able to scale to follow the traffic existing in Percona XtraDB Cluster, and actually loses a significant amount of data on the secondary. Doubling the load in Test 3 shows the same behavior, with ProxySQL reaching its limit for traffic duplication. But can this be optimized? The answer is, of course, yes! This is what mysql-mirror_max_concurrency is for, so let’s see what happens if we increase the value from 16 to 100 (just to make it crazy high). Test 4 (two app nodes writing): The first thing that comes to attention is that both Percona XtraDB Cluster and the secondary report the same number of rows in the tables (240,000). That is a good first win. Second, note the number of running connections: the graphs are now much closer, and the queue drops to just a few entries. Average execution reports the same value for commands executed in Percona XtraDB Cluster and in the secondary, with very similar trends. Finally, what was the CPU cost and effect? As expected, some difference in the CPU usage distribution exists, but the trend is consistent between the two nodes and the operations.
    The ProxySQL CPU utilization is definitely higher than before, but it’s absolutely manageable, and still reflects the initial distribution. What about CRUD? So far I’ve only tested the insert operation, but what happens if we run a full CRUD set of tests? Test 7 (CRUD): Reviewing the executed commands in Percona XtraDB Cluster and the secondary, we have what appear to be very similar workloads, but selects aside, the behavior significantly diverges. This is because on the secondary the different operations are not encapsulated by the transaction: they are executed as they are received. We can see a significant difference in update and delete operations between the two. The threads in execution also show a different picture between the two platforms: it is quite clear that Percona XtraDB Cluster is constantly running more threads and more connections. Nevertheless, both platforms process a similar total number of questions, with an average of around 1.17K questions per second. This is another indication of how much concurrent operation impacts behavior, with no respect to isolation or execution order. Reviewing the CPU utilization, we can clearly see different behavior between the two platforms. Conclusions To close this article, I want to go back to the start. We cannot consider the mirror function in ProxySQL real mirroring, but rather traffic redirection (check here for more reasoning on mirroring from my side). Using ProxySQL with this approach is still partially effective in testing the load and the effect it has on a secondary platform. As we know, data consistency is not guaranteed in this scenario, and selects, updates and deletes are affected (given the different data sets and result sets they manage). The server behavior changes between the original and the mirror, if not in quantity then in quality.
    I am convinced that when we need a tool able to test our production load on a different or new platform, we would do better to look at something else. Possibly Query Playback, recently reviewed and significantly patched by Dropbox (https://github.com/Percona-Lab/query-playback). In the end, ProxySQL is already a cool tool. If it doesn’t cover mirroring well, I can live with that. I am interested in having it work as it should (and it does in many other functionalities). Acknowledgments As usual, thanks to Rene, who worked on fixing and introducing new functionalities associated with mirroring, like queue and concurrency control, and to the Percona team who developed Percona Monitoring and Management (PMM): all the graphs here (except three) come from PMM (some of them customized).

  • Using MariaDB MaxScale 2.1 Regex Filter for Migrations
    Using MariaDB MaxScale 2.1 Regex Filter for Migrations anderskarlsson4 Thu, 05/25/2017 - 13:13 Migrating applications from one database system to another is sometimes easy and sometimes not, but it is hardly ever effortless. Among the obvious issues are schema and data: migrating from one datatype to another, with slightly different behavior and semantics, is one thing; migrating the actual data is another. Is it UTF8, and if so how many bytes? What is the collation? What is the required accuracy of numeric types? And on top of this are things such as triggers, stored procedures and so on, not to mention performance tuning and the optimal way to construct SQL statements. Speaking of SQL statements, we have application code also. Yes, most databases have some kind of application running on them, often more than one, and these access the database using SQL over some kind of API such as JDBC, ODBC or some proprietary driver. And application code, even simple SQL, tends to have one or two database-specific constructs in it, and that is what this blog is about. Before moving on to that though, a few words on MariaDB Server 10.2.6, which has been GA since May 23. MariaDB Server 10.2 contains more than a few things that make migration from other database systems to MariaDB a lot easier. Among these features are:
      • CHECK constraints. The syntax for these was supported before, but in MariaDB Server 10.2 these actually implement proper constraints.
      • DEFAULT values. In MariaDB Server before version 10.2, there were several restrictions around what DEFAULT values could be used, and how, but in 10.2 these are lifted.
      • Multiple triggers per event. In MariaDB Server before 10.2 you could only have one trigger per DML event, i.e. you could not have several BEFORE INSERT triggers. This has two advantages: the obvious one is that the database you are migrating from might support multiple triggers per event.
    Another is that sometimes you want to add a trigger to implement some compatibility with the database system you are migrating from, and this feature makes doing that a lot easier. With that said, let’s say you have migrated the schema, the procedures and whatnot, and also the data. Then you have replaced the ODBC driver from the existing database system with one from MariaDB, which means we are all set to try the application. And the application falls over on the first SQL statement, because it uses some proprietary feature of the database we are migrating from. There are some ways of getting around that; with MariaDB, two approaches have been used in the past:
      • Use MariaDB compatibility features. As stated above, there are many new compatibility features in MariaDB Server 10.2.6 GA. In addition there are some settings for the SQL_MODE parameter for compatibility, such as PIPES_AS_CONCAT, which ensures that the ANSI SQL concatenation operator, two pipes (||), is interpreted as a MariaDB CONCAT.
      • Develop procedures, functions and user-defined functions that mimic procedures and functions in other database systems.
    There is nothing wrong with the above means of migration, but they don’t cover all aspects of a migration. One more tool that is available now is the new MariaDB MaxScale 2.1.3 GA, and there is a plugin that is particularly useful: the Regex one. What this allows us to do is to replace text in the SQL statement so that it matches something that MariaDB Server can work with, and a good example is the Oracle DECODE() function. This function is rather special, in a few ways:
      • It takes a variable number of arguments, from 3 and up.
      • The type of the return value depends on the type of the arguments.
    The SQL Standard construct for this is the CASE statement, which has the same attributes as above. We cannot solve the use of the DECODE function by adding a STORED FUNCTION.
    A UDF (User Defined Function) is possible, as this can take any number and type of arguments. Even though a UDF can only return a predefined type, this is not a big issue, as MariaDB is loosely typed, so for numeric results we can always return a numeric string. A bigger issue though is that MariaDB already has a DECODE function that does something else. Also, we would really like to use the CASE construct, and a way to deal with all of this is to use the MariaDB MaxScale Regex filter. Let me show you how. To begin with, we need to set up the Regex filter itself, and the way I do it here, I will use multiple filters, one for each number of arguments I pass to DECODE. I guess there is some way of doing this in a smarter way, but here I am just showing the principle. Also note that the Regex filter uses PCRE2 regular expressions, not the Posix ones. Let’s start with a couple of filter specifications for a DECODE with 3 and 4 arguments, and define them in our MariaDB MaxScale configuration file:
    [DecodeFilter3]
    type=filter
    module=regexfilter
    options=ignorecase
    match=DECODE\(([^,)]*),([^,)]*),([^,)]*)\)
    replace=CASE $1 WHEN $2 THEN $3 END
    [DecodeFilter4]
    type=filter
    module=regexfilter
    options=ignorecase
    match=DECODE\(([^,)]*),([^,)]*),([^,)]*),([^,)]*)\)
    replace=CASE $1 WHEN $2 THEN $3 ELSE $4 END
    As anyone can see, the above really isn’t perfect: things like strings with embedded commas will not work. But in the general case this should work reasonably well, which is not to say you would want to use it in production; for a test or a proof-of-concept it is good enough. For DECODE with 5, 6 or more arguments, you add filters following the pattern above. Before we show this in action, let me add one more useful filter, for the Oracle SYSDATE pseudocolumn.
    In Oracle SQL, SYSDATE is the same as NOW() in MariaDB, so this is a simple replacement. But as SYSDATE is a pseudocolumn and not a function like NOW(), we cannot write a simple Stored Function to handle it; using a MariaDB MaxScale filter should do the trick, like this:
    [sysdate]
    type=filter
    module=regexfilter
    options=ignorecase
    match=([^[:alpha:]])SYSDATE
    replace=$1NOW()
    With this, it is now time to enable these filters, and that is done by adding them to the Service in MariaDB MaxScale which we will use:
    [Read-Write Service]
    type=service
    router=readwritesplit
    servers=srv1
    user=rwuser
    passwd=rwpwd
    max_slave_connections=100%
    filters=DecodeFilter3|DecodeFilter4|sysdate
    Assuming you have MariaDB MaxScale correctly configured in every other respect, let’s see if this works as expected. First we have to restart MariaDB MaxScale, and then we connect to MariaDB, call DECODE the way it looks in Oracle, and see what is returned:
    $ mysql -h moe -P 4008 -u theuser -pthepassword
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MySQL connection id is 21205
    Server version: 10.0.0 2.1.3-maxscale MariaDB Server
    Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    MariaDB> SELECT DECODE(1, 2, 3, 4) FROM dual;
    +------------------------------------+
    | CASE 1 WHEN  2 THEN  3 ELSE  4 END |
    +------------------------------------+
    |                                  4 |
    +------------------------------------+
    MariaDB> SELECT DECODE('Str1', 'Str1', 'Was str1', 'Not str1') FROM dual;
    +----------------------------------------------------------------+
    | CASE 'Str1' WHEN  'Str1' THEN  'Was str1' ELSE  'Not str1' END |
    +----------------------------------------------------------------+
    | Was str1                                                       |
    +----------------------------------------------------------------+
    MariaDB> SELECT DECODE('Str1', 'Str2', 'Was str1') FROM dual;
    +-----------------------------------------------+
    | CASE 'Str1' WHEN  'Str2' THEN  'Was str1' END |
    +-----------------------------------------------+
    | NULL                                          |
    +-----------------------------------------------+
    As can be seen, the translation from DECODE to a CASE statement seems to work as expected. Let’s also try with SYSDATE:
    MariaDB> SELECT DECODE('Today', 'Today', SYSDATE, 'Some other day') FROM dual;
    +-------------------------------------------------------------------+
    | CASE 'Today' WHEN  'Today' THEN  NOW() ELSE  'Some other day' END |
    +-------------------------------------------------------------------+
    | 2017-05-19 18:51:22                                               |
    +-------------------------------------------------------------------+
    As we see here, not only does SYSDATE work as expected, we can handle both DECODE and SYSDATE conversions, as the filters are piped to each other. Using MariaDB MaxScale with the Regex filter is yet another tool for migrating applications.
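    For readers who want to sanity-check the patterns before putting them into a MaxScale configuration, the same rewrites can be exercised with Python's re module (close enough to PCRE2 for these patterns; the POSIX class [[:alpha:]] is written as a-zA-Z here):

```python
import re

# The filter patterns from the MaxScale configuration above, written in
# Python re syntax (the POSIX class [[:alpha:]] becomes [a-zA-Z]).
FILTERS = [
    (r'DECODE\(([^,)]*),([^,)]*),([^,)]*),([^,)]*)\)',
     r'CASE \1 WHEN \2 THEN \3 ELSE \4 END'),
    (r'DECODE\(([^,)]*),([^,)]*),([^,)]*)\)',
     r'CASE \1 WHEN \2 THEN \3 END'),
    (r'([^a-zA-Z])SYSDATE', r'\1NOW()'),
]

def rewrite(sql):
    """Apply each filter in turn, like filters piped in a MaxScale service.
    Note the trailing close-paren means the 3-argument pattern cannot
    match a prefix of a 4-argument DECODE call, so order is not critical."""
    for pattern, replacement in FILTERS:
        sql = re.sub(pattern, replacement, sql, flags=re.IGNORECASE)
    return sql

print(rewrite("SELECT DECODE(1, 2, 3, 4) FROM dual"))
# SELECT CASE 1 WHEN  2 THEN  3 ELSE  4 END FROM dual
print(rewrite("SELECT DECODE('Today', 'Today', SYSDATE, 'Some other day') FROM dual"))
# SELECT CASE 'Today' WHEN  'Today' THEN  NOW() ELSE  'Some other day' END FROM dual
```

    The doubled spaces in the output come from the capture groups keeping the whitespace that followed each comma, exactly as in the MariaDB results shown above.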
    Happy SQL’ing /Karlsson

  • MySQL for Excel 1.3.7 has been released
    Dear MySQL users, The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.3.7. This is a maintenance release for 1.3.x, and it can be used for production environments. MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to work directly with a MySQL database from within Microsoft Excel, so you can easily do tasks such as: importing MySQL data into Excel; exporting Excel data directly into MySQL, to a new or existing table; and editing MySQL data directly within Excel. MySQL for Excel is installed using the MySQL Installer for Windows. The MySQL Installer comes in two versions: Full, which includes a complete set of MySQL products with their binaries included in the download, and Web (network install), which will just pull MySQL for Excel over the web and install it when run. You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/ The MySQL for Excel product can also be downloaded using the standalone product installer found at this link: http://dev.mysql.com/downloads/windows/excel/ Changes in MySQL for Excel 1.3.7 (2017-05-24) Functionality Added or Changed The way MySQL for Excel shares data-editing sessions among users and between computers was improved. (Bug #25509085, Bug #73314) The Append Excel Data to Table operation was updated with new advanced options to manage the behavior of rows containing unique key values that are duplicates of those in the database. (Bug #25479653, Bug #83801) Added a new global option that specifies how to format spatial data as text: Well-Known Text, Keyhole Markup Language, Geography Markup Language, or GeoJSON. (Bug #22081263) Enhanced the logic that migrates stored MySQL connections to the MySQL Workbench connections.xml file.
In previous versions, the migration was offered and if not done at that moment, the dialog to migrate was shown every time MySQL for Excel was launched. There was no way to choose to continue storing MySQL connections in the MySQL for Excel configuration folder, instead of merging them with MySQL Workbench. Now, the Connections Migration dialog offers a way to postpone the migration by one hour, one day, one week, one month, or indefinitely. If the migration is postponed, the dialog is shown again after that time elapses. If the migration is postponed indefinitely, then an option is added to the Options dialog that permits the migration of connections to be done manually, as long as MySQL Workbench is installed. Support for MySQL Fabric was removed. Bugs Fixed SSL connections when created with MySQL Workbench should be inactive within MySQL for Excel, which does not support SSL connections. (Bug #25962564) Selecting a schema containing at least one stored procedure for a MySQL 8.0 or 8.1 connection emitted an error. (Bug #25962347) Empty string values within Excel column data that were used in an export or append-data operation caused the generated SQL queries to have no value, instead of an empty value corresponding to the data type of the target column (for example: 0 for Integer; false for Bool if the column does not allow NULL values, or NULL otherwise). (Bug #25509312, Bug #84851) MySQL data could not be refreshed or edited directly in an Excel worksheet by different users or from different computers, which reduced the ability to share data-editing sessions among users or between computers. This fix alters the way connection information is stored by migrating the connection details for related import and edit-data operations from the user settings file to the XML parts of a workbook when the workbook is opened, and if the workbook supports XML parts and the connection information related to that workbook is found in the user settings file. 
(Bug #25509085, Bug #73314) User-selected data types that replaced the detected values of a column were lost when the First Row Contains Column Names check box was selected or deselected in preparation for an export-data operation. This fix retains the selected value when the data type is set manually to override the automatically detected type and the check box is selected or deselected. It further adds a new action to reset the column back to automatic detection. (Bug #25492772, Bug #84802) A portion of the preview area that should be displayed during import, export, and append data operations was concealed by other fields. (Bug #25325457, Bug #84351) Attempting to refresh MySQL data in an Excel worksheet while the MySQL for Excel task pane was closed generated an error. (Bug #25301136, Bug #84291) Edit-data operations in which the SQL query used optimistic updates, and the data contained empty strings, produced errors during the commit to MySQL. Enhanced mapping of character sets and clearer error-message text were added to identify the use of client character sets that are unsupported in MySQL. (Bug #25236221, Bug #76287) A mismatch between the current schema and the current connection caused the refresh action to fail when a worksheet with imported data was created as an Excel table, saved, closed, and then reopened. (Bug #25233309, Bug #84154) Inactive connections and unsaved passwords caused the refresh action to generate errors for worksheets with imported MySQL data in Excel tables. (Bug #25227226, Bug #84136) Excel worksheets that had currency values with comma separators produced errors when the data was exported to a MySQL table. (Bug #25214984, Bug #84049) MySQL connection passwords were not saved to the secure password vault and produced a password request every time an existing connection was opened. 
(Bug #25108196, Bug #83855) Excel cells containing empty strings, which are not actually blank, generated errors with export, append, or edit data operations. With this fix, an empty string is now equivalent to a blank cell. (Bug #24431935, Bug #82497) Although the Refresh All action in the Data ribbon refreshed all MySQL connections, it did not refresh the other connections associated with a workbook when the MySQL for Excel add-in was enabled. (Bug #23605635, Bug #81901) Clearing numeric parameter values within a stored procedure, or setting any of the initial values to NULL, during an Import MySQL Data operation emitted an error. (Bug #23281495, Bug #81417) Type TinyInt was mapped as type Bool when data was imported to Excel from MySQL. (Bug #23022665, Bug #80880) MySQL columns of type DATE and DATETIME produced errors during import-data operations. This fix improves the way MySQL for Excel handles these types for all operations: import data, export data, append data, and edit data. (Bug #22627266, Bug #80139) Excel data of type Date could not be exported to a MySQL table. (Bug #22585674, Bug #80079) Tables and views imported to Excel without the Include Column Names as Headers option first being selected omitted the expected default column names (Column1, Column2, and so on). (Bug #22373140) Creating a new schema with the binary – binary collation produced an error. (Bug #22371690) Saved edit-data sessions could not be restored after a workbook was closed and then reopened. (Bug #22138958) Connection sharing between MySQL for Excel and MySQL Workbench resulted in some incorrect connection information being passed to the MySQL Server Connection dialog. (Bug #22079779) The default schema of the current MySQL connection changed unexpectedly when a table in a different schema was edited. 
    (Bug #22074426) With a cursor positioned at the bottom of a worksheet and with the Add Summary Fields check box selected, the import-data operation failed despite having enough space to fill the cells. (Bug #19652840) Quick links:
      • MySQL for Excel documentation: http://dev.mysql.com/doc/en/mysql-for-excel.html
      • Inside MySQL blog (NEW blog home): http://insidemysql.com/
      • MySQL on Windows blog (OLD blog home): http://blogs.oracle.com/MySQLOnWindows
      • MySQL for Excel forum: http://forums.mysql.com/list.php?172
      • MySQL YouTube channel: http://www.youtube.com/user/MySQLChannel
    Enjoy and thanks for the support!
