While I have always wanted such a feature (an automatically updated /etc/hosts for all running containers), I understand that Docker does not provide it natively (just yet - or at least AFAIK). I also understand some of the security issues that might come with such a feature (not all running containers want other containers to connect to them).
Anyway, about 2 weeks ago, while I was dockerizing a system automation application that requires at least 2 running nodes (containers), I found that the feature was silently available*.
* I went through the Release Notes of almost all recent releases and could not find this feature mentioned. If I got it wrong, please point me to the proper Release Notes. Thanks!
Before I forget, let me share with you the reason I have always wanted such a feature. My reason is simple - I need all my running containers to know about the "existence" of other related containers and to have a way to communicate with them (in this case, through /etc/hosts).
My test environment was CentOS 7.1 running Docker 1.8.2.
(1) Firstly, I started a container without specifying a hostname or a container name (a sketch of the command follows this step). You can see that the /etc/hosts file was updated with:
(a) the container ID as the hostname
(b) the (auto-generated) container name as the hostname
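In case it helps anyone reproduce this, here is a minimal sketch of the kind of command I mean for this step (the centos:7 image is just an example; your image, container ID and auto-generated name will differ):

    $ docker run -itd centos:7 /bin/bash           # no --hostname, no --name given
    $ docker ps                                    # note the container ID and the auto-generated name
    $ docker exec <container-id> cat /etc/hosts    # <container-id> is a placeholder for the ID from docker ps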
(2) Next, I started another container with an assigned hostname of "node1" (sketched below). You can see that this time the /etc/hosts file was updated with:
(a) the assigned hostname
(b) the container name as the hostname
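Again, roughly the kind of command used for this step (the image name is only an assumption; -h is the short form of --hostname):

    $ docker run -itd -h node1 centos:7 /bin/bash   # assign the hostname "node1", but no --name
    $ docker exec <container-id> cat /etc/hosts     # check the entries that were written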
(3) To spice it up a little bit more, I started another container with an assigned hostname and also gave the container a name (sketched below). You can see that the /etc/hosts file was updated with:
(a) the assigned hostname
(b) the given container name
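A sketch for this step as well (the actual hostname and name I used are not important; here both are set explicitly):

    $ docker run -itd -h myhost --name mycontainer centos:7 /bin/bash
    $ docker exec mycontainer cat /etc/hosts        # both the assigned hostname and the given name show up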
(4) The next test was to start up 3 containers, each with a different hostname and container name, and leave all of them running (a consolidated sketch of this test follows step (8)). You can see that "node1" was started and its /etc/hosts was updated accordingly.
(5) Next, I started "node2" and its /etc/hosts was updated with the details of "node1" too.
(6) What happened to the /etc/hosts of "node1" at this moment? Surprise, surprise...you can see that its /etc/hosts was updated with the details of "node2" too.
(7) Just to make sure it was not déjà vu, I started the third container ("node1" and "node2" were still running). This time I was not surprised to see that the /etc/hosts of "node3" was updated with the details of "node1" and "node2".
(8) Lastly, let's check the /etc/hosts of "node1" and "node2". Voilà, they were updated too!
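For reference, here is a consolidated sketch of this multi-container test; the image name is an assumption, and the actual IP addresses written into /etc/hosts will of course depend on your setup:

    $ docker run -itd -h node1 --name node1 centos:7 /bin/bash
    $ docker exec node1 cat /etc/hosts              # step (4): only node1's own entries so far
    $ docker run -itd -h node2 --name node2 centos:7 /bin/bash
    $ docker exec node2 cat /etc/hosts              # step (5): contains node1 as well
    $ docker exec node1 cat /etc/hosts              # step (6): now contains node2 too
    $ docker run -itd -h node3 --name node3 centos:7 /bin/bash
    $ docker exec node3 cat /etc/hosts              # step (7): contains node1 and node2
    $ docker exec node1 cat /etc/hosts              # step (8): node1 and node2 both list node3 now
    $ docker exec node2 cat /etc/hosts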
Seriously, I am not sure whether this feature has been available for some time or whether it is an experimental feature. Anyway, I like it...for the things I do! So, I am not speaking for you :)
Sunday, October 4, 2015
Docker: Sharing Kernel Module
This is more like a request-for-help entry :)
I was working on a small project to dockerize a system automation application that supports clustering. Everything was working fine until the very last minute - during system startup!!!
The database was working fine in a primary-standby setup. The system automation application was running fine on both of the nodes in the cluster. Configuration went fine too.
However, when I started the automation monitoring, KABOOOOMMM, a problem happened!
What happened was that the first node came up fine, but the second node seemed to be reporting a weird status. A quick read of the log file told me that the second node was unable to load the "softdog" kernel module and would not be able to initialize. The reason was very likely that the first node had already gotten hold of the kernel module.
Ok, I have to admit that this was a silly mistake on my part and an oversight before I started the project.
Anyway, I would like to share what I learned from this experience:
(1) It is possible for a container to access kernel modules if (see the sketch after this list):
(a) the host's /lib/modules directory is mounted read-only into the container (-v /lib/modules:/lib/modules:ro);
(b) the kernel version of the host machine is the same as the kernel version expected by the image/container;
(c) the container is run with the --privileged option.
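Putting the three points above together, the docker run command looks roughly like this (centos:7 and the container name are placeholders; softdog happens to be the module my application needed):

    $ uname -r                                      # host kernel version - the image must expect the same one
    $ docker run -itd --privileged \
        -v /lib/modules:/lib/modules:ro \
        --name node1 centos:7 /bin/bash
    $ docker exec node1 modprobe softdog            # modprobe must exist in the image (install kmod if it does not)
    $ lsmod | grep softdog                          # on the host, the module now shows up as loaded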
For those of you out there who are trying to dockerize applications that require kernel modules, please be reminded that all containers on the same host share the same underlying (host) kernel. Hence, you have to make sure that the application would still work under such a condition.
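A quick way to convince yourself of this is to compare the kernel version reported on the host and inside a container (again, centos:7 is just an example image):

    $ uname -r                                      # kernel version on the host
    $ docker run --rm centos:7 uname -r             # the same version is reported inside the container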
I am not a kernel expert and have not ventured into this area in Docker previously, so any help is appreciated - SOS!!!!!