Planet MySQL
Planet MySQL - https://planet.mysql.com

  • Protecting Your MySQL Servers From Ransomware
    Author: Robert Agar

    A constant in the computing world is that it is always evolving, offering new challenges and opportunities. Software solutions come and go, with some becoming staples in the business community while others barely cause a ripple as they disappear into the ether. Take MySQL as an example. From its humble beginnings in 1994, the platform has grown to become the most popular SQL database in 2019. If you are a database professional, chances are very good that you work with MySQL regularly.

    The popularity of the database platform has not gone unnoticed by the unscrupulous entities that engage in cybersecurity attacks with nefarious intentions. Whether acting as individuals or combining forces into rogue teams, cybercriminals are always looking for new and ingenious ways to wreak havoc on your IT environment. Their intrusions can take many forms, from implanting malware in an attempt to steal login credentials to randomly deleting data on your systems.

    A particularly nasty type of cyberattack is carried out by ransomware. This is a specific form of malware whose goal is to encrypt the data on an infected computer. This makes the data inaccessible to users and can cripple an organization. The criminals behind the attack claim they will decrypt the data if their financial demands are met. Paying the ransom may or may not get your data back. Remember, you are dealing with criminals, and their word is not to be trusted.

    Targeted MySQL Ransomware Attacks

    In recent years, MySQL databases have become a target for cybercriminals wielding ransomware. The large installed base of the software provides many potential victims for financial blackmail. Even if only a fraction of the attacks are successful, the criminals stand to take down a lot of systems and possibly make some serious money. Recently, MySQL servers began being hit with attacks trying to implant a ransomware weapon known as GandCrab.
    The perpetrators behind the ransomware have been targeting specific environments in attempts to thwart defensive actions. As of March 2018, over 50,000 machines had been infected, with the majority of targets being systems located in the US and UK.

    Security experts at Sophos have researched the GandCrab malware and have made some interesting discoveries. For one, though the IP address of the server hosting the code sample under study is in Arizona, the user interface of the HFS installation is in simplified Chinese. This suggests that there may be an international cybercriminal team behind these attacks who have compromised a US server. The security firm used a honeypot, designed to lure hackers so their tools can be studied and appropriate defenses developed, listening on TCP port 3306, the default port for MySQL servers.

    The attack was executed in stages, with the first step verifying that the database server in question was running MySQL. Once that was determined, the set command was used to upload the bytes to construct a helper DLL. The DLL was used to add three malicious functions to the database. These functions were employed to download the GandCrab payload from a remote machine, place it in the root of the C: drive with the name isetup.exe, and then execute the program. At this point, your system has been infected and your files will be encrypted. Hopefully, you have a robust backup and recovery policy and can recover your system without acceding to the ransom demands.

    Hackers are searching for MySQL logins that are not properly protected. This may be due to a weak password or, in some egregious cases, no password at all. Failure to protect your MySQL database may allow hackers to turn it into a launching pad for malware. Some suggestions for protecting your MySQL servers from ransomware are to:

    • Insist on strong passwords.
    • Eliminate the ability to directly access your MySQL servers from the Internet.
    • Monitor your MySQL control settings.

    Keeping Tabs on Your Systems is a Crucial Defensive Tactic

    Possessing insight into the operation of your MySQL servers provides a baseline from which you can discover discrepancies and unusual behavior. Monitoring supplies the perfect vehicle for this practice and can be useful in many areas of database administration. It can identify effective optimization initiatives and help you to increase user satisfaction with your systems. It can also be instrumental in alerting you to any suspicious activity which may indicate you are being attacked.

    SQL Diagnostic Manager for MySQL provides a comprehensive monitoring application that can address all aspects of your MySQL environment. It includes over 600 pre-built monitors that return information on security, excessive privileges, and connection history, among many other important details pertaining to your MySQL instance. Set alerts which trigger when thresholds are met that warrant your attention and may help you keep your systems safe from cybercriminals. Rest assured that GandCrab is not the last attempt to exploit vulnerabilities that are bound to be discovered in MySQL. Start monitoring your systems today to track changes in activity and access that might keep the bad guys at bay.

    The post Protecting Your MySQL Servers From Ransomware appeared first on Monyog Blog.
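    The hardening suggestions above translate into a handful of server settings. The following my.cnf fragment is a minimal sketch, not a complete hardening guide: the address and password thresholds are illustrative assumptions, and the validate_password.* syntax shown is the MySQL 8.0 component form (the 5.7 plugin spells the variables validate_password_policy and validate_password_length).

```ini
[mysqld]
# Do not listen on a public interface; bind to a private address
# (10.0.0.12 is a placeholder) so the server is not directly
# reachable from the Internet.
bind-address = 10.0.0.12

# Enforce strong passwords via the password-validation component.
validate_password.policy = MEDIUM
validate_password.length = 12
```

    Combine this with firewall rules that restrict TCP port 3306 to known application hosts, since that is the port the GandCrab attackers were scanning for.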

  • Percona XtraDB Cluster 5.7.27-31.39 Is Now Available
    Percona is happy to announce the release of Percona XtraDB Cluster 5.7.27-31.39 on September 18, 2019. Binaries are available from the downloads section or from our software repositories. Percona XtraDB Cluster 5.7.27-31.39 is now the current release, based on the following:

    • Percona Server for MySQL 5.7.27-30
    • Codership WSREP API release 5.7.27
    • Codership Galera library 3.28

    All Percona software is open-source and free.

    Bugs Fixed

    • PXC-2432: PXC was not updating the information_schema user/client statistics properly.
    • PXC-2555: SST initialization delay: fixed a bug where the SST process took too long to detect if a child process was running.
    • PXC-2557: Fixed a crash when a node goes NON-PRIMARY and SHOW STATUS is executed.
    • PXC-2592: PXC restarting automatically on data inconsistency.
    • PXC-2605: PXC could crash when log_slow_verbosity included InnoDB. Fixed upstream PS-5820.
    • PXC-2639: Fixed an issue where a SQL admin command (like OPTIMIZE) could cause a deadlock.

    Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

  • How to Install Bolt CMS with Nginx and Let's Encrypt on FreeBSD 12
    Bolt is a sophisticated, lightweight and simple CMS built with PHP. This tutorial shows the installation of Bolt CMS on FreeBSD 12 with the Nginx web server, the MySQL database server, and a Let's Encrypt SSL certificate.

  • Hyperconverging and Galera cluster
    What is hyperconverging?

    Hyperconverging is the latest hype: do things more efficiently with the resources that you have by cramming as many virtual machines as possible onto the same hypervisor. In theory this should allow you to mix and match various workloads to make optimum use of your hypervisor (e.g. all cores used 100% of the time, overbooking your memory up to 200%, moving virtual machines around like there is no tomorrow). Every cloud provider is hyperconverging their infrastructure, and this has pros and cons. The pro is that it's much cheaper to run many different workloads, while the con clearly shows when you encounter noisy neighbors. As Jeremy Cole said: "We are utilizing our virtual machines to the max. If you are on the same hypervisor as us, sorry!" Apart from cloud providers, you could hyperconverge your infrastructure yourself. There are a few hardware/software vendors out there that will help you with that, and at one of my previous employers we got a helping hand from one such vendor!

    DIY hyperconverging

    In our case the entire infrastructure was migrated to a new hyperconverged infrastructure where we would have multiple infrastructure clusters (read: four hypervisors in one chassis) in multiple data centers. Infra marked one of these DCs as suitable for our customer-facing projects, as the peering was performed in that DC. The idea behind this new infrastructure is that a VM can basically run anywhere in your infrastructure and be copied in real time to another hypervisor within the same cluster (read: chassis). This copy process (including memory) obviously required some (short) locking, but it worked amazingly well. We even had some software running that would move VMs around to optimize the workloads and still retain some spare capacity. Magic! Now there was an additional benefit to choosing this vendor: if a hypervisor went down, the same VM could be spun up immediately on another hypervisor, albeit without copying the memory contents.
    To be able to do this, the disks are synced to at least one other hypervisor. This means some cluster magic detects one of the hypervisors being down and automagically spins up the same VMs on another (available) hypervisor that contains the latest data of those VMs. To spread the load among various hypervisors, the replication factor of the disks is advised to be set to 2, where 2 means the data is copied to (at least) two other hypervisors.

    Hyperconverging Galera

    Our Galera cluster consisted of three Galera nodes with three asynchronous read replicas attached (see image below). [Image: Galera cluster with read slaves] In this picture every Galera node stores every transaction in the GCache, InnoDB flushes the transaction to disk (ibdata*), and asynchronous replication dictates another write to the binlogs. That means that every transaction in our Galera node will already be stored three times on disk. The hyperconverged cluster where we hosted Galera had the replication factor set to 2. That means every byte written to disk will be written to at least two other storage controllers (VMs), as shown in the image below. This write operation over the network is synchronous, so the filesystem has to wait until both controllers have acknowledged the write. The latency of an individual write is negligible, as the write is super fast and performed over a low-latency network. The magic behind this synchronous disk replication is out of scope for this blog post, but I can hint that a certain NoSQL database (named after some Greek mythology) is managing the storage layer. [Image: Hyperconverged write amplification: every write to disk will be written three times!] This means that every write to disk in our Galera node will also be synced to an additional two hypervisors. To make matters worse, due to semi-synchronous replication, all three Galera nodes perform the exact same operations at (almost) the exact same time!
    1 transaction = 3 nodes × (3 writes locally + 6 writes over the network) = 27 writes

    As you can guess from the simple formula above: 9 writes are performed locally and 18 writes are performed over the network. As every write to disk is performed synchronously over the network, this adds more than negligible latency when a single transaction spawns 18 network writes at the same time. If 1 transaction to Galera can cause 18 synchronous writes over the network, imagine what latency you will encounter with a baseline of 200 transactions per second! And we're not even counting the asynchronous replicas performing similar write operations again mere (milli)seconds later!

    Galera managed to cope, but instability happened at set intervals. We could trace these back to our so-called stock updates and pricing updates: every half an hour stock levels were pushed from the warehouse database, and every few hours new pricing information was pushed via the enterprise service bus. With more than a million products in the database, these torrents of writes quickly caused disk latency in the entire hyperconverged cluster, and we have seen the disk latency shoot up well beyond 80ms. This affected not only the Galera cluster, but caused cluster-wide issues on the distributed storage layer as well. And to make matters even worse: latency on the entire network was also shooting up.

    Benchmarking semi-synchronously replicated hyperconverged clusters

    At first nobody believed us, even when we showed the graphs to the vendor. This new infrastructure was so much more expensive than our old one that it simply couldn't be true. Only after conducting benchmarks, reproducing the latency on an empty test cluster, were we taken seriously. The benchmarks revealed that the write amplification saturated the network interfaces of the cluster, and we worked with the vendor on seeking a solution to the problem.
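    The write amplification above is simple arithmetic and can be sketched as a quick calculation. This is an illustration only, using the numbers from this post: 3 Galera nodes, 3 local disk writes per transaction per node (GCache, InnoDB data files, binlog), and a storage replication factor of 2.

```python
# Write amplification for a synchronously replicated Galera cluster
# on hyperconverged storage, using the numbers from this post.

def writes_per_transaction(nodes=3, writes_per_node=3, storage_copies=2):
    """Disk writes caused by one transaction.

    writes_per_node: GCache + InnoDB data files + binlog = 3
    storage_copies:  extra hypervisors each write is synced to
                     (disk replication factor 2)
    """
    local = nodes * writes_per_node                     # 3 * 3 = 9
    network = nodes * writes_per_node * storage_copies  # 9 * 2 = 18
    return local, network, local + network

local, network, total = writes_per_transaction()
print(local, network, total)  # 9 18 27

# At the post's baseline of 200 transactions per second:
tps = 200
print(tps * network, "synchronous network writes per second")  # 3600
```

    Note how the storage-layer copies dominate: two thirds of all writes cross the network, which is why the cluster's network interfaces saturated first.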
    Even after upgrading the network (10G interface bonding, enabling jumbo frames, hypervisor tuning) we still found latency issues. The issue with our hyperconverged cluster was that there was no (separate) internal network handling the inter-hypervisor traffic. We could now achieve double the number of transactions, but that didn't solve the underlying issue of causing latency on other VMs and on the ingress and egress network of our applications.

    Conclusion

    We came to the conclusion that (semi-)synchronously replicated databases and hyperconverged infrastructures with high replication factors don't match. Unfortunately this replication factor could only be set at the cluster level and not per individual VM. Also, the reasoning behind the synchronous disk replication did not make sense (see also my previous blog post), as Galera would wipe the disk contents anyway, and in general it would take quite some time for the database to recover, so a quick failover would not happen anyway. That's why we ran Galera+ProxySQL in the first place: to allow a failover to happen within seconds! We also ran other (semi-)synchronously replicated databases (MongoDB, SOLR and Elasticsearch, for example), and each and every one of them had basically the same lack of need for disk replication. The only option left was to migrate the Galera cluster back to our old hardware that, luckily/sadly, was still switched on. At the same time we started working on a migration to a real cloud vendor, as they could offer us better performance without the risk of a single point of failure (e.g. a single data center).

    So what difference would a benchmark up front have made? This only happened due to bad requirements, drawn up without analyzing the workload that was supposed to be converged. We would have seen these issues before migrating to the new hyperconverged infrastructure if we had benchmarked beforehand.
    It would have saved us from many instabilities, outages and post-mortems. We might even have chosen a totally different setup, or have chosen to split our workloads over multiple (smaller) hyperconverged clusters.

    This is one of the background stories of my talk Benchmarking Should Never Be Optional on Wednesday the 2nd of October 2019 at Percona Live Europe in Amsterdam. In my talk I will feature a few cases of why you should always benchmark your systems up front. It's not only about database benchmarking; in some cases the entire system requires benchmarking.

  • Create MySQL Test Instance with Oracle Cloud Free Tier
    Oracle announced this week at Oracle OpenWorld that it has introduced a new cloud offer called Oracle Cloud Free Tier. As the name suggests, it allows you to run a few limited instances in Oracle Cloud for free. In this blog I will show how you can use the free tier to set up a MySQL test instance.

    Tip: If you want to read more about Oracle Cloud Free Tier, see https://www.oracle.com/cloud/free/ and the FAQ at https://www.oracle.com/cloud/free/faq.html.

    The first step is to sign up for the cloud service, which you do by opening https://www.oracle.com/cloud/free/ and clicking on the Start for free button near the top of the page. This takes you through a series of pages where you create your Oracle Cloud account. The steps are straightforward. You will need to provide a valid mobile number and credit card (but no fees are charged provided you stick to the Always Free services). At the end you are directed to the Oracle Cloud login page. Enter the email address and password from the registration process, and you are ready to create Oracle Cloud services.

    You will need a compute instance, which you create by choosing the Compute > Create a VM Instance quick action. Notice the label Always Free Eligible, which tells you that you can create instances in the free tier. On the next screen, you can fill in the details for the instance. You can choose all the default values, which will create a VM.Standard.E2.1.Micro virtual machine, one of the shapes included in the free tier. The shape includes 1 OCPU (1 CPU with hyperthreading, so two virtual CPUs) and 1 GiB of memory. It will also set everything up for you, including the network with ssh access. To be able to ssh to the instance, you need to add the public key of your ssh key pair.
    If you do not already have an ssh key, then https://docs.oracle.com/en/cloud/paas/event-hub-cloud/admin-guide/generate-ssh-key-pair-using-puttygen.html has an example of creating one on Microsoft Windows.

    Once you click Create, a workflow is created and launched. While the workflow is running, the icon in the top left corner is yellow/orange to indicate that the instance is being worked on. Once the workflow has completed, the instance is available and the icon turns green. You will need the Public IP Address, which you can find in the Primary VNIC Information section when viewing the instance details. With that and your ssh key, you can connect to the instance using the opc user, for example (this assumes you have the private key in OpenSSH format):

    shell$ ssh -i ~/.ssh/id_rsa opc@<ip address of vm>

    The first step to install MySQL is to install the MySQL yum repository:

    [opc@mysql ~]$ sudo yum install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
    Loaded plugins: langpacks, ulninfo
    mysql80-community-release-el7-3.noarch.rpm                 |  25 kB  00:00:00
    Examining /var/tmp/yum-root-4Yk8Ev/mysql80-community-release-el7-3.noarch.rpm: mysql80-community-release-el7-3.noarch
    Marking /var/tmp/yum-root-4Yk8Ev/mysql80-community-release-el7-3.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package mysql80-community-release.noarch 0:el7-3 will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ============================================================================
     Package                    Arch    Version  Repository                Size
    ============================================================================
    Installing:
     mysql80-community-release  noarch  el7-3    /mysql80-community-release-el7-3.noarch  31 k

    Transaction Summary
    ============================================================================
    Install  1 Package

    Total size: 31 k
    Installed size: 31 k
    Is this ok [y/d/N]: y
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : mysql80-community-release-el7-3.noarch                   1/1
      Verifying  : mysql80-community-release-el7-3.noarch                   1/1

    Installed:
      mysql80-community-release.noarch 0:el7-3

    Complete!

    Now you can install any of the MySQL products using the yum command. For example, to install MySQL Server, the MySQL client programs, and MySQL Shell:

    [opc@mysql ~]$ sudo yum install mysql-community-server mysql-community-client mysql-shell
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package mysql-community-client.x86_64 0:8.0.17-1.el7 will be installed
    --> Processing Dependency: mysql-community-libs(x86-64) >= 8.0.11 for package: mysql-community-client-8.0.17-1.el7.x86_64
    ---> Package mysql-community-server.x86_64 0:8.0.17-1.el7 will be installed
    --> Processing Dependency: mysql-community-common(x86-64) = 8.0.17-1.el7 for package: mysql-community-server-8.0.17-1.el7.x86_64
    ---> Package mysql-shell.x86_64 0:8.0.17-1.el7 will be installed
    --> Running transaction check
    ---> Package mariadb-libs.x86_64 1:5.5.64-1.el7 will be obsoleted
    --> Processing Dependency: libmysqlclient.so.18()(64bit) for package: 2:postfix-2.10.1-7.el7.x86_64
    --> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: 2:postfix-2.10.1-7.el7.x86_64
    ---> Package mysql-community-common.x86_64 0:8.0.17-1.el7 will be installed
    ---> Package mysql-community-libs.x86_64 0:8.0.17-1.el7 will be obsoleting
    --> Running transaction check
    ---> Package mysql-community-libs-compat.x86_64 0:8.0.17-1.el7 will be obsoleting
    --> Finished Dependency Resolution

    Dependencies Resolved

    ============================================================================
     Package                      Arch    Version       Repository        Size
    ============================================================================
    Installing:
     mysql-community-client       x86_64  8.0.17-1.el7  mysql80-community      32 M
     mysql-community-libs         x86_64  8.0.17-1.el7  mysql80-community     3.0 M
         replacing  mariadb-libs.x86_64 1:5.5.64-1.el7
     mysql-community-libs-compat  x86_64  8.0.17-1.el7  mysql80-community     2.1 M
         replacing  mariadb-libs.x86_64 1:5.5.64-1.el7
     mysql-community-server       x86_64  8.0.17-1.el7  mysql80-community     415 M
     mysql-shell                  x86_64  8.0.17-1.el7  mysql-tools-community  15 M
    Installing for dependencies:
     mysql-community-common       x86_64  8.0.17-1.el7  mysql80-community     589 k

    Transaction Summary
    ============================================================================
    Install  5 Packages (+1 Dependent package)

    Total download size: 468 M
    Is this ok [y/d/N]: y
    Downloading packages:
    warning: /var/cache/yum/x86_64/7Server/mysql80-community/packages/mysql-community-common-8.0.17-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
    Public key for mysql-community-common-8.0.17-1.el7.x86_64.rpm is not installed
    (1/6): mysql-community-common-8.0.17-1.el7.x86_64.rpm      | 589 kB  00:00:00
    (2/6): mysql-community-libs-8.0.17-1.el7.x86_64.rpm        | 3.0 MB  00:00:01
    (3/6): mysql-community-libs-compat-8.0.17-1.el7.x86_64.rpm | 2.1 MB  00:00:00
    Public key for mysql-shell-8.0.17-1.el7.x86_64.rpm is not installed
    (4/6): mysql-shell-8.0.17-1.el7.x86_64.rpm                 |  15 MB  00:00:06
    (5/6): mysql-community-client-8.0.17-1.el7.x86_64.rpm      |  32 MB  00:00:13
    (6/6): mysql-community-server-8.0.17-1.el7.x86_64.rpm      | 415 MB  00:01:29
    ----------------------------------------------------------------------------
    Total                                           5.1 MB/s | 468 MB  00:01:31
    Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
    Importing GPG key 0x5072E1F5:
     Userid     : "MySQL Release Engineering <mysql-build@oss.oracle.com>"
     Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5
     Package    : mysql80-community-release-el7-3.noarch (@/mysql80-community-release-el7-3.noarch)
     From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
    Is this ok [y/N]: y
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : mysql-community-common-8.0.17-1.el7.x86_64               1/7
      Installing : mysql-community-libs-8.0.17-1.el7.x86_64                 2/7
      Installing : mysql-community-client-8.0.17-1.el7.x86_64               3/7
      Installing : mysql-community-server-8.0.17-1.el7.x86_64               4/7
      Installing : mysql-community-libs-compat-8.0.17-1.el7.x86_64          5/7
      Installing : mysql-shell-8.0.17-1.el7.x86_64                          6/7
      Erasing    : 1:mariadb-libs-5.5.64-1.el7.x86_64                       7/7
      Verifying  : mysql-community-libs-8.0.17-1.el7.x86_64                 1/7
      Verifying  : mysql-community-server-8.0.17-1.el7.x86_64               2/7
      Verifying  : mysql-community-common-8.0.17-1.el7.x86_64               3/7
      Verifying  : mysql-community-client-8.0.17-1.el7.x86_64               4/7
      Verifying  : mysql-shell-8.0.17-1.el7.x86_64                          5/7
      Verifying  : mysql-community-libs-compat-8.0.17-1.el7.x86_64          6/7
      Verifying  : 1:mariadb-libs-5.5.64-1.el7.x86_64                       7/7

    Installed:
      mysql-community-client.x86_64 0:8.0.17-1.el7
      mysql-community-libs.x86_64 0:8.0.17-1.el7
      mysql-community-libs-compat.x86_64 0:8.0.17-1.el7
      mysql-community-server.x86_64 0:8.0.17-1.el7
      mysql-shell.x86_64 0:8.0.17-1.el7

    Dependency Installed:
      mysql-community-common.x86_64 0:8.0.17-1.el7

    Replaced:
      mariadb-libs.x86_64 1:5.5.64-1.el7

    Complete!

    There are some dependencies that are pulled in and existing libraries are upgraded. That is it. All that remains is to start MySQL and set the root password. You start MySQL through systemd like:

    [opc@mysql ~]$ sudo systemctl start mysqld

    Since it is the first time MySQL is started, the data directory (/var/lib/mysql) is initialized and the root account is created with a random password. You can find the random password in the error log:

    [opc@mysql ~]$ sudo grep password /var/log/mysqld.log
    2019-09-18T09:59:55.552745Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: i(Y5Nkko>S.t

    The password you see will of course be different.
    Use the password the first time you authenticate, then use the ALTER USER statement to set a new password. For example, using MySQL Shell:

    [opc@mysql ~]$ mysqlsh --user=root --sql
    Please provide the password for 'root@localhost': ************
    Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): No
    MySQL Shell 8.0.17

    Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
    Other names may be trademarks of their respective owners.

    Type '\help' or '\?' for help; '\quit' to exit.
    Creating a session to 'root@localhost'
    Fetching schema names for autocompletion... Press ^C to stop.
    Your MySQL connection id is 10 (X protocol)
    Server version: 8.0.17 MySQL Community Server - GPL
    No default schema selected; type \use <schema> to set one.
     MySQL  localhost:33060+ ssl  SQL > ALTER USER CURRENT_USER() IDENTIFIED BY 'New$secureP@ssw0rd';
    Query OK, 0 rows affected (0.0061 sec)

    Information: The password validation component is installed by default when installing MySQL using RPMs. This means that the password must be at least eight characters long and include at least one lower case, one upper case, one digit, and one special character.

    You are now ready to use MySQL. Have fun.
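    As a quick illustration of the password policy just described, the sketch below checks a candidate password against those four requirements (length of at least eight, plus a lower-case letter, an upper-case letter, a digit, and a special character). This is a client-side approximation for illustration only; the authoritative check is performed server-side by MySQL's validate_password component.

```python
import string

def meets_policy(password: str) -> bool:
    """Approximate the default RPM-install password policy:
    >= 8 chars with lower case, upper case, digit and special char."""
    return (
        len(password) >= 8
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_policy("New$secureP@ssw0rd"))  # True  (the example password above)
print(meets_policy("password"))            # False (no upper case, digit or special char)
```

    Checking a password locally like this before running ALTER USER saves a round trip when the server would reject it anyway.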
