Ubuntu 16.10 Server MariaDB (MySQL) open_files_limit won’t go higher than 16384

In Ubuntu 16.10 and other systemd-enabled distributions such as Debian, CentOS and Linux Mint, with MariaDB Server version 10.x+, open_files_limit won’t go higher than 16384. According to MariaDB, this is simply because systemd prefers its own way of setting limits, along with a few other notable changes.

systemd is an init replacement that MariaDB uses on releases since 10.1.8. Distribution packages before this version may use a different configuration, so consult their documentation if required.

systemd services are packaged on RPM and Debian based Linux distributions. When systemd is used, mysqld_safe is not used and settings in the mysqld_safe section of configuration files will not be read or applied.

systemd overview

systemd service files are included in the MariaDB-server package. The service definition is installed in /usr/lib/systemd/system/mariadb.service. The service name is mariadb.service; however, aliases to mysql.service and mysqld.service are included for convenience.

Unlike with previous init scripts, the mysqld process is executed directly, running as the mysql user. This places a couple of limitations on situations that were previously possible:

  • open-files-limit cannot be raised beyond the operating system limit (usually 1K), hence the systemd configuration for MariaDB has LimitNOFILE set to 16K by default;
  • memlock can be used from version 10.1.10;
  • the start timeout from init scripts was quite large, and special configuration may be required if there is a slow startup time (MDEV-9202).

A mapping of common mysqld_safe options to systemd options is provided below.

| mysqld_safe option         | systemd option                                                                                        | Comments                                                                              |
| no option                  | ProtectHome=false                                                                                     | If any MariaDB files are in /home/                                                    |
| no option                  | PrivateDevices=false                                                                                  | If any MariaDB storage references raw block devices                                   |
| no option                  | ProtectSystem=                                                                                        | If MariaDB writes any files anywhere under /boot, /usr or /etc                        |
| no option                  | TimeoutStartSec={time}                                                                                | Set if systemd reports a failure to start because of a timeout; 0 disables any timeout |
| no option (ref MDEV-9264)  | OOMScoreAdjust={priority}                                                                             | e.g. -600 to lower the OOM-killer priority of mysqld                                  |
| open_files                 | LimitNOFILE={limit}                                                                                   |                                                                                       |
| core_file_size             | LimitCORE={size}                                                                                      |                                                                                       |
|                            | LimitMEMLOCK={size} or unlimited                                                                      | When large-pages or memlock is used                                                   |
| nice                       | Nice={nice value}                                                                                     |                                                                                       |
| syslog                     | StandardOutput=syslog                                                                                 |                                                                                       |
| syslog-tag                 | SyslogIdentifier                                                                                      |                                                                                       |
| flush-caches               | ExecStartPre=/usr/bin/sync                                                                            |                                                                                       |
|                            | ExecStartPre=/usr/sbin/sysctl -q -w vm.drop_caches=3                                                  |                                                                                       |
| numa-interleave            | ExecStart=/usr/bin/numactl --interleave=all /usr/sbin/mysqld ${MYSQLD_OPTS} ${_WSREP_NEW_CLUSTER}     |                                                                                       |

Note: systemd.service contains the official meanings for these systemd settings.

There are other options, and the mariadb-service-convert script will attempt to convert these as accurately as possible.

In addition to the set of options previously provided by mysqld_safe, systemd.service has considerably more options.

When all or some of these settings have been altered, don’t forget to run

systemctl daemon-reload
systemctl restart mysql.service

to make the new values active.
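For the systemd route, the setting goes in a drop-in file rather than my.cnf. A minimal sketch (the file name limits.conf is hypothetical; systemd reads any *.conf file under mariadb.service.d, such as the migrated-from-my.cnf-settings.conf file mentioned below):

```ini
# /etc/systemd/system/mariadb.service.d/limits.conf  (hypothetical file name)
[Service]
LimitNOFILE=1024000
```

Run systemctl daemon-reload and restart the service afterwards for it to take effect.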

To verify your changes, open a MySQL client in a terminal (or your preferred MySQL GUI) and type:

show global variables like 'open%';

If you can see the changes, you’re done. As an example, I’ve set LimitNOFILE=1024000 in /etc/systemd/system/mariadb.service.d/migrated-from-my.cnf-settings.conf and the output looks like:

MariaDB [(none)]> show global variables like 'open%';
| Variable_name    | Value   |
| open_files_limit | 1024000 |
1 row in set (0.00 sec)

Posted on

InnoDB table optimization w/o locking the table

I’ve noticed a significant performance gain if I repack a table (ALTER TABLE foo ENGINE = INNODB) after some period of time, or after a heavy volume of INSERT/UPDATE/DELETEs. I don’t know whether this is because indices are rebuilt, the table space is compacted, or something else.

It strikes me that something like ALTER TABLE foo ENGINE = INNODB should be a part of routine table maintenance; however, using OPTIMIZE or ALTER locks the table, which is unacceptable. Is there a good way to do this with one database server (meaning no failing over to another instance) without locking the entire table?
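One option worth testing: on MySQL 5.6+ / MariaDB 10.0+, online DDL can rebuild the table while still permitting concurrent reads and writes. This is not available on Percona Server 5.5 as used here, where Percona Toolkit’s pt-online-schema-change is the usual workaround. A sketch:

```sql
-- Rebuild the table in place while allowing concurrent DML;
-- with LOCK=NONE the statement errors out rather than silently
-- taking a table lock if the server cannot honour it.
ALTER TABLE foo ENGINE=InnoDB, ALGORITHM=INPLACE, LOCK=NONE;
```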

Update: Using Percona 5.5.17-55

Update: SHOW VARIABLES LIKE 'innodb%';

| Variable_name                          | Value                  |
| innodb_adaptive_checkpoint             | estimate               |
| innodb_adaptive_flushing               | OFF                    |
| innodb_adaptive_hash_index             | ON                     |
| innodb_additional_mem_pool_size        | 8388608                |
| innodb_auto_lru_dump                   | 120                    |
| innodb_autoextend_increment            | 8                      |
| innodb_autoinc_lock_mode               | 1                      |
| innodb_buffer_pool_shm_checksum        | ON                     |
| innodb_buffer_pool_shm_key             | 0                      |
| innodb_buffer_pool_size                | 30064771072            |
| innodb_change_buffering                | inserts                |
| innodb_checkpoint_age_target           | 0                      |
| innodb_checksums                       | ON                     |
| innodb_commit_concurrency              | 0                      |
| innodb_concurrency_tickets             | 500                    |
| innodb_data_file_path                  | ibdata1:10M:autoextend |
| innodb_data_home_dir                   |                        |
| innodb_dict_size_limit                 | 0                      |
| innodb_doublewrite                     | ON                     |
| innodb_doublewrite_file                |                        |
| innodb_enable_unsafe_group_commit      | 0                      |
| innodb_expand_import                   | 0                      |
| innodb_extra_rsegments                 | 0                      |
| innodb_extra_undoslots                 | OFF                    |
| innodb_fast_checksum                   | OFF                    |
| innodb_fast_recovery                   | OFF                    |
| innodb_fast_shutdown                   | 1                      |
| innodb_file_format                     | Antelope               |
| innodb_file_format_check               | Barracuda              |
| innodb_file_per_table                  | ON                     |
| innodb_flush_log_at_trx_commit         | 0                      |
| innodb_flush_log_at_trx_commit_session | 3                      |
| innodb_flush_method                    | O_DIRECT               |
| innodb_flush_neighbor_pages            | 1                      |
| innodb_force_recovery                  | 0                      |
| innodb_ibuf_accel_rate                 | 100                    |
| innodb_ibuf_active_contract            | 1                      |
| innodb_ibuf_max_size                   | 15032369152            |
| innodb_io_capacity                     | 200                    |
| innodb_lazy_drop_table                 | 0                      |
| innodb_lock_wait_timeout               | 50                     |
| innodb_locks_unsafe_for_binlog         | OFF                    |
| innodb_log_block_size                  | 512                    |
| innodb_log_buffer_size                 | 67108864               |
| innodb_log_file_size                   | 402653184              |
| innodb_log_files_in_group              | 2                      |
| innodb_log_group_home_dir              | ./                     |
| innodb_max_dirty_pages_pct             | 75                     |
| innodb_max_purge_lag                   | 0                      |
| innodb_mirrored_log_groups             | 1                      |
| innodb_old_blocks_pct                  | 37                     |
| innodb_old_blocks_time                 | 0                      |
| innodb_open_files                      | 300                    |
| innodb_overwrite_relay_log_info        | OFF                    |
| innodb_page_size                       | 16384                  |
| innodb_pass_corrupt_table              | 0                      |
| innodb_read_ahead                      | linear                 |
| innodb_read_ahead_threshold            | 56                     |
| innodb_read_io_threads                 | 4                      |
| innodb_recovery_stats                  | OFF                    |
| innodb_replication_delay               | 0                      |
| innodb_rollback_on_timeout             | OFF                    |
| innodb_show_locks_held                 | 10                     |
| innodb_show_verbose_locks              | 0                      |
| innodb_spin_wait_delay                 | 6                      |
| innodb_stats_auto_update               | 1                      |
| innodb_stats_method                    | nulls_equal            |
| innodb_stats_on_metadata               | ON                     |
| innodb_stats_sample_pages              | 8                      |
| innodb_stats_update_need_lock          | 1                      |
| innodb_strict_mode                     | OFF                    |
| innodb_support_xa                      | ON                     |
| innodb_sync_spin_loops                 | 30                     |
| innodb_table_locks                     | ON                     |
| innodb_thread_concurrency              | 8                      |
| innodb_thread_concurrency_timer_based  | OFF                    |
| innodb_thread_sleep_delay              | 10000                  |
| innodb_use_purge_thread                | 1                      |
| innodb_use_sys_malloc                  | ON                     |
| innodb_use_sys_stats_table             | OFF                    |
| innodb_version                         | 1.0.16-12.8            |
| innodb_write_io_threads                | 4                      |

Posted on

How to copy data from one table to another new table in MySQL?

If you want to copy data from one table to another in MySQL, you don’t need to list the fields as long as the structure of the tables is the same:

INSERT INTO `table2` SELECT * FROM `table1`;

or if you want to create a new table with the same structure:

CREATE TABLE new_tbl [AS] SELECT * FROM orig_tbl;
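If only some fields match, list them explicitly on both sides (the column names here are hypothetical):

```sql
-- Copy a subset of columns; the two lists must line up positionally
INSERT INTO table2 (id, name) SELECT id, name FROM table1;
```

Note that CREATE TABLE … SELECT copies the data and column definitions but not the indexes; CREATE TABLE new_tbl LIKE orig_tbl followed by INSERT INTO new_tbl SELECT * FROM orig_tbl preserves the index structure as well.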

Posted on

MySQL / MariaDB: run a DB in a ramdisk

Something tells me that you have one of these two situations:

  • disk-based temp tables created by your SELECTs are competing for space with your regular data
  • temp tables are landing in /tmp on the root partition, which is running out of space


Suggestion #1: Map tmpdir to another disk

tmpdir = /another/disk/besides/root/partition

Suggestion #2: Create a RAM Disk

Run this script to set up a RAM disk that is re-mounted automatically after a reboot (note that tmpfs contents themselves are lost on reboot; the copy left in /var/lib/mysql_old is your persistent fallback).

RAMDISK_SIZE=2G   # example size; choose one that fits your data plus headroom
service mysql stop
mkdir /var/tmpfs
echo "none   /var/tmpfs  tmpfs  defaults,size=${RAMDISK_SIZE} 1 2" >> /etc/fstab
mount -t tmpfs -o size=${RAMDISK_SIZE} none /var/tmpfs
cp -R /var/lib/mysql/* /var/tmpfs
mv /var/lib/mysql /var/lib/mysql_old
ln -s /var/tmpfs /var/lib/mysql
chown -R mysql:mysql /var/tmpfs
chown -R mysql:mysql /var/lib/mysql
service mysql start

Then, map tmpdir to /var/tmpfs
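A minimal my.cnf sketch for that last step:

```ini
[mysqld]
tmpdir = /var/tmpfs
```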


Posted on

Setting up MySQL (MariaDB) master/slave replication without the downtime

I clearly don’t need to expound on the benefits of master-slave replication for your MySQL database. It’s simply a good idea; one nicety I looked forward to was the ability to run backups from the slave without impacting the performance of our production database. But the benefits abound.

Most tutorials on master-slave replication use a read lock to accomplish a consistent copy during initial setup. Barbaric! With our users sending thousands of cards and gifts at all hours of the night, I wanted to find a way to accomplish the migration without any downtime.

@pQd via ServerFault suggests enabling bin-logging and taking a non-locking dump with the binlog position included. In effect, you’re creating a copy of the db marked with a timestamp, which allows the slave to catch up once you’ve migrated the data over. This seems like the best way to set up a MySQL slave with no downtime, so I figured I’d document the step-by-step here, in case it proves helpful for others.

First, you’ll need to configure the master’s /etc/mysql/my.cnf by adding these lines in the [mysqld] section (replication requires a unique server-id and binary logging on the master):

server-id       = 1
log_bin         = mysql-bin
binlog-format   = mixed

Restart the master mysql server and create a replication user that your slave server will use to connect to the master:

CREATE USER 'replicant'@'<<slave-server-ip>>';
GRANT REPLICATION SLAVE ON *.* TO 'replicant'@'<<slave-server-ip>>' IDENTIFIED BY '<<choose-a-good-password>>';

Note: MySQL only allows passwords of up to 32 characters for replication users.

Next, create the backup file with the binlog position. It will affect the performance of your database server, but won’t lock your tables:

mysqldump -u root -p --opt --comments --dump-date --no-autocommit --all-databases --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 > ~/mysql_dump.sql

Now, examine the head of the file and jot down the values for MASTER_LOG_FILE and MASTER_LOG_POS. You will need them later:

head mysql_dump.sql -n80 | grep "MASTER_LOG_POS"
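You can also pull both values out with sed. A small sketch; the CHANGE MASTER line below is a made-up sample of the commented line that --master-data=2 writes near the top of the dump, with illustrative values:

```shell
# Sample of the commented CHANGE MASTER line near the top of the dump
line="-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=154;"

# Extract the file name between the single quotes, and the numeric position
log_file=$(printf '%s\n' "$line" | sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p")
log_pos=$(printf '%s\n' "$line" | sed -n "s/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p")

echo "MASTER_LOG_FILE=$log_file MASTER_LOG_POS=$log_pos"
# prints: MASTER_LOG_FILE=mysql-bin.000042 MASTER_LOG_POS=154
```

Against a real dump you would set line=$(head -n80 ~/mysql_dump.sql | grep "MASTER_LOG_POS") instead of the sample string.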

Because this file for me was huge, I gzip’ed it before transferring it to the slave, but that’s optional:

gzip ~/mysql_dump.sql

To unzip

gzip -d ~/mysql_dump.sql.gz

Or you can pack it with tar and gzip:

tar -czvf ~/mysql_dump.sql.tar.gz ~/mysql_dump.sql

To unzip

tar zxf ~/mysql_dump.sql.tar.gz

Now we need to transfer the dump file to our slave server (if you didn’t gzip first, remove the .gz bit):

scp ~/mysql_dump.sql.gz <<user>>@<<slave-server-ip>>:~/

While that’s running, you should log into your slave server, and edit your /etc/mysql/my.cnf file to add the following lines:

server-id               = 101
binlog-format       = mixed
log_bin                 = mysql-bin
relay-log               = mysql-relay-bin
log-slave-updates = 1
read-only               = 1

Restart the mysql slave, and then import your dump file:

gunzip ~/mysql_dump.sql.gz
mysql -u root -p < ~/mysql_dump.sql

Log into your mysql console on your slave server and run the following commands to set up and start replication:

CHANGE MASTER TO MASTER_HOST='<<master-server-ip>>',
MASTER_USER='replicant',
MASTER_PASSWORD='<<choose-a-good-password>>',
MASTER_LOG_FILE='<<value from above>>',
MASTER_LOG_POS=<<value from above>>;

START SLAVE;

To check the progress of your slave:

SHOW SLAVE STATUS\G
If all is well, Last_Error will be blank, and Slave_IO_State will report “Waiting for master to send event”. Look for Seconds_Behind_Master which indicates how far behind it is. It took me a few hours to accomplish all of the above, but the slave caught up in a matter of minutes. YMMV.

And now you have a newly minted mysql slave server without experiencing any downtime!

A parting tip: sometimes errors occur in replication, for example if you accidentally change a row of data on your slave. If this happens, fix the data, then run:

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
Update: In following my own post when setting up another slave, I ran into an issue with authentication. The slave status showed error 1045 (a credential error) even though I could connect directly using the replicant credentials. It turns out that MySQL only supports passwords of up to 32 characters for master-slave replication; my longer password worked for a direct login but not for the replication thread.

Posted on

Why does MariaDB not start

The reason for the failure will almost certainly be written in the Error Log and, if you are starting MariaDB manually, to the console. By default, the error log is named host-name.err and is written to the data directory.

Common Locations:

  • /var/log/
  • /var/log/mysql
  • On CentOS: /var/lib/mysql/server-name.err
  • C:\ProgramData\MySQL
  • C:\Program Files\MySQL\MySQL Server x.x\data (x.x refers to the version number)

It’s also possible that the error log has been explicitly written to another location, either by changing the datadir system variable or by setting log-error=filename – see my.cnf below.

In most cases you should be able to find the location of the error log by running:

mysqld --help --verbose | grep 'log-error' | tail -1
mysqld --help --verbose | grep 'datadir' | tail -1

Source: https://mariadb.com/kb/en/mariadb/what-to-do-if-mariadb-doesnt-start/

Posted on

How to get the size of a MySQL database?

Simple 🙂

SELECT table_schema                                        "DB Name", 
   ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB" 
FROM   information_schema.tables 
GROUP  BY table_schema; 
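If you want a per-table breakdown within a single database, the same information_schema data can be sliced per table ('mydb' below is a placeholder for your database name):

```sql
SELECT table_name                                            "Table",
       ROUND((data_length + index_length) / 1024 / 1024, 1)  "Size in MB"
FROM   information_schema.tables
WHERE  table_schema = 'mydb'
ORDER  BY (data_length + index_length) DESC;
```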

Posted on