I am not an expert in these things, but I am not sure how you could ever do what you want.
The reason storing the hash and knowing the salt works for end users is that the user enters the password manually when they log in; the system hashes it with the salt and compares the result with the stored hash value. That way the system never holds the actual password, which is the point of the scheme: it protects the user. But the real password still has to be known and entered somewhere.
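To make that concrete, here is roughly what the server side looks like with Search Guard; the tool path and file name below are from a typical Search Guard 6 install and may differ in yours:

    # Generate a BCrypt hash of the real password (path is an assumption; adjust to your install)
    plugins/search-guard-6/tools/hash.sh -p 'the-real-password'

    # The hash, not the password, goes into sg_internal_users.yml on the Elasticsearch side:
    # kibanaserver:
    #   hash: $2a$12$<generated hash>
    #
    # Elasticsearch/Search Guard stores and compares the hash; Kibana still has to
    # send the real password when it authenticates.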
Even with the idea of a MySQL user database, you would still have to store the username/password for that MySQL database somewhere in order to read the credentials you want to supply to Kibana, which results in the same problem.
If an application is started manually, it would be possible to keep this password "offline" and enter it only at that point, but that would not work for any service or background startup.
Otherwise, I have used SSL certificates to secure it, so at least the password sent over the network isn't in plaintext, since it requires Kibana to use the HTTPS URL.
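For reference, the relevant kibana.yml settings look roughly like this (Kibana 6.x setting names; the hostname and CA path are placeholders):

    elasticsearch.url: "https://elasticsearch.example.com:9200"
    elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/ca.pem" ]
    elasticsearch.ssl.verificationMode: full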
If you are worried about access to the system itself, that is a larger problem: even following the Search Guard examples for securing with TLS/SSL, the keystore passwords end up in the Elasticsearch configuration in plaintext.
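One thing that helps a little on the Elasticsearch side is that elasticsearch.yml supports ${...} environment-variable substitution, so the literal keystore password doesn't have to sit in the file; it only moves the secret into the process environment, though. A sketch, assuming the Search Guard 6 TLS setting name:

    # elasticsearch.yml – reference an environment variable instead of a literal password
    searchguard.ssl.transport.keystore_password: ${TRANSPORT_KEYSTORE_PASS}

    # set before starting Elasticsearch (still visible to anyone who can read the
    # process environment, so this only shifts the problem):
    # export TRANSPORT_KEYSTORE_PASS='...'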
I have just successfully installed Search Guard 6.3.0-22.3 (in a self-configured docker-elk stack running version 6.3.0, no other plugins installed; honestly the exact versions don't matter much) and have finally reached the point of preparing for production. There is one serious concern that, as far as my research shows, isn't addressed anywhere: where can we store the elasticsearch.username/elasticsearch.password credentials (e.g. "kibanaserver") instead of keeping them in plaintext inside the kibana.yml config? Is there a setting that lets us store the BCrypt hash in kibana.yml instead? I have tried supplying just the hashed value, but then authentication fails; I also fear the server would then be vulnerable to pass-the-hash style exploits or rainbow tables.
Ideally I would like to have an internal-users database on the Kibana side of the stack to store the hash and salt, but I haven't been able to figure that out for myself yet (e.g. reading credentials from a MySQL database into the kibana.yml config file? Seems like overkill…).
Please let me know what people are doing in production to avoid these security issues.
How to deal with passwords on production systems is a common question, and there is no one-size-fits-all answer.
First, you should always use a separate user to run the Elasticsearch/Kibana processes on your production system, and limit access to the configuration files (including the passwords in them) to this user. That shields the actual contents of your config files from unauthorized users.
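For example (user/group names and paths are just illustrative):

    # run each service as its own dedicated user and make the config unreadable to others
    chown root:kibana /etc/kibana/kibana.yml
    chmod 640 /etc/kibana/kibana.yml

    chown root:elasticsearch /etc/elasticsearch/elasticsearch.yml
    chmod 640 /etc/elasticsearch/elasticsearch.yml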
The next step would be to use environment variables. This is a common approach with many systems, but it seems Kibana does not support reading its settings from them.
You could pass the values as command line arguments when starting Kibana instead, but that is not really scriptable.
And then you can use the Kibana keystore to remove the Elasticsearch username and password from kibana.yml entirely.
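The usual steps look like this (the kibana-keystore tool ships with Kibana 6.2 and later; run it as the same user that runs Kibana):

    bin/kibana-keystore create
    bin/kibana-keystore add elasticsearch.username
    bin/kibana-keystore add elasticsearch.password

    # after that, remove elasticsearch.username / elasticsearch.password from kibana.yml;
    # Kibana picks up the secure settings from the keystore at startup.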
I'm sorry but I think I'm struggling to see how these keystores help for this specifically.
If a user has access to the config file with the password, they would seemingly also have access to the keystore and to the environment variable that holds the keystore password.
It may stop a casual wanderer, but it doesn't really seem to secure anything against someone who actually wants in.
Well, the initial question was how to get rid of plaintext passwords in the kibana.yml file, and these would be the options.
Let’s put it this way: the process running Kibana needs to provide a username/password to Elasticsearch, so the process needs to have the password in cleartext at some point. This is because Kibana uses HTTP Basic auth (only) for authenticating against Elasticsearch.
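To make that concrete: the Basic auth header is only base64 encoding, not encryption, so whatever sends it must hold the real password (placeholder credentials below):

    printf 'kibanaserver:password' | base64
    # -> a2liYW5hc2VydmVyOnBhc3N3b3Jk
    printf 'a2liYW5hc2VydmVyOnBhc3N3b3Jk' | base64 -d
    # -> kibanaserver:password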
So the Kibana process either gets the password in plaintext (from kibana.yml) or encrypted (from a keystore), but in the latter case it needs to be able to decrypt it as well, which means it needs access to the key/salt/environment variables/whatever is used for decryption.
So my main point is: if a casual wanderer has the same permissions on your OS as the user running the Kibana process, you are screwed anyway, because they would have access to kibana.yml, the keystore, any salt, or anything else the Kibana process can access. They could even dump the process memory and look for passwords there. In short: if a passer-by can impersonate the Kibana process, your system is compromised.