This usually happens when /opt/splunk/var/lib/splunk/defaultdb fills up and the free space on the disk drops below 5GB, Splunk's default minimum free disk space.
Adding more disk space is the proper fix, but if that is not immediately possible there are a couple of tricks you can use to buy more time.
1. Change the minimum free disk space in $SPLUNK_HOME/etc/system/local/server.conf
This will allow the disk to fill up to the point where only 2GB is free; once this new limit is hit, Splunk's indexers will stop indexing data until more space is available (see the sketch after this step).
NB: NOT recommended unless you really need to keep your Splunk instance indexing until the disk space can be increased.
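A minimal sketch of the relevant server.conf change, assuming the standard [diskUsage] stanza (the value is in megabytes):

[diskUsage]
# Pause indexing only when free space drops below 2GB (2000 MB); Splunk's default is 5000
minFreeSpace = 2000

Raise this back to the default once you have added disk space.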
2. If you have another disk or partition with free space on your Splunk instance, you can archive older index data to that disk/partition.
Splunk's indexers store indexed data in directories called buckets, which move through a series of stages (hot, warm, cold, frozen and, optionally, thawed) as they age.
More here: https://docs.splunk.com/Documentation/Splunk/7.3.0/Indexer/Setaretirementandarchivingpolicy
We are going to limit the maximum total size of our index and have Splunk archive frozen data to /var/splunk/archives.
To do this we first create indexes.conf in $SPLUNK_HOME/etc/system/local/ if it does not exist.
Next we add the following:
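A sketch of what the stanza might look like, assuming the index is named "main" and using the maxTotalDataSizeMB and coldToFrozenDir settings (maxTotalDataSizeMB is specified in megabytes):

[main]
# Cap the total size of the index at roughly 100GB (value is in MB)
maxTotalDataSizeMB = 100000
# When the cap is reached, copy the oldest (frozen) buckets here instead of deleting them
coldToFrozenDir = /var/splunk/archives

Make sure the archive directory exists and is writable by the user Splunk runs as.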
In this case "main" is the name of our index and we are limiting the growth to 100GB, once this limit is reached, splunk will freeze the oldest data in the archive to our directory above.
Restart Splunk for the changes to take effect. Splunk will almost immediately start moving frozen index data to the archive directory, and your instance will be healthy again.
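For example, assuming a default install location, the restart and a quick check that buckets are landing in the archive might look like this:

$SPLUNK_HOME/bin/splunk restart
# frozen buckets should start appearing here shortly after the restart
ls -lh /var/splunk/archives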