Nginx+Flask+Postgres multi-container setup with Docker Compose

12/20/2023

In my previous post, I wrote about how I migrated my app to use user-defined networks. As I mentioned in that post, I preferred to start with just the basic docker commands to avoid "magic" as much as possible. However, running multiple docker commands by hand or in a shell script is far too brittle; docker-compose is the recommended tool to manage multi-container deployments. In this blog post, I'll detail how I redid my setup using docker-compose. For illustrative purposes, I've extracted a subset of my code and posted a fully functional example on GitHub.

My 3-container setup, as explained in the previous post, consists of an nginx reverse proxy, a Flask app, and a Postgres database. docker-compose allows you to define all your containers, networks and volumes in a nice declarative yaml file. My first attempt at docker-compose.yml looked something like this:
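The original file was lost in extraction, but based on the description, a first attempt might look something like the sketch below. The service names db, flaskapp and nginx match the ones used in the post; the image tags, build paths and ports are assumptions, and the environment file uses the standard variables understood by the official postgres image:

```yaml
# docker-compose.yml -- first-attempt sketch (not the author's actual file)
version: "3"

services:
  db:
    image: postgres:latest
    env_file: .env        # default user/password/database for the postgres image

  flaskapp:
    build: ./flaskapp
    env_file: .env        # the same variables are read by database.py

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
```

```shell
# .env -- values are made up for illustration
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret
POSTGRES_DB=flaskapp_db
```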
This looks like a fairly self-explanatory translation of the three-container setup, but it doesn't work out-of-the-box yet. The first time we bring up the cluster with a docker-compose up, the database will not have the required tables created yet. Fortunately, the postgres image allows you to specify a default user, password and database through environment variables. We'll create an environment file with the required variables and specify it in the db section.

This still doesn't solve the issue of creating our database schema. Our flask app declares SQLAlchemy models, so we'll create a database.py with an init_db function to create the schema:

```python
# database.py
import os

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base

# These env variables are the same ones used for the DB container
user = os.environ['POSTGRES_USER']
pwd = os.environ['POSTGRES_PASSWORD']
db = os.environ['POSTGRES_DB']
port = '5432'
host = 'db'  # docker-compose creates a hostname alias with the service name

engine = create_engine('postgresql://%s:%s@%s:%s/%s' % (user, pwd, host, port, db))
Base = declarative_base()


def init_db():
    # import all modules here that might define models so that
    # they will be registered properly on the metadata.
    # you will have to import them first before calling init_db()
    Base.metadata.create_all(bind=engine)
```

Remember, though, that our flask app lives in a separate container, so we'll have to figure out a way to connect it to the DB container and run a one-off command to create the schema. Let's first bring up only the DB container:

```shell
$ docker-compose up -d db
```

Now, we'll run a one-off flask container that will create the schema:

```shell
$ docker-compose run --rm flaskapp /bin/bash -c "cd /opt/services/flaskapp/src && python -c 'import database; database.init_db()'"
```

Notice the refreshing lack of --env-file or --network flags we would have had to pass to a plain docker run command: we've already specified them in the docker-compose.yml file. Notice also the --rm flag, which indicates that the container should be deleted immediately after completion.
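As a quick sanity check of the connection-string logic in database.py, the URL construction can be exercised on its own. This is SQLAlchemy's standard Postgres URL form; the function name and the credential values below are made up for illustration:

```python
def database_url(environ, host="db", port=5432):
    """Build a SQLAlchemy-style Postgres URL from the container's env vars.

    `host` defaults to "db" because docker-compose creates a hostname
    alias named after the service.
    """
    return "postgresql://%s:%s@%s:%s/%s" % (
        environ["POSTGRES_USER"],
        environ["POSTGRES_PASSWORD"],
        host,
        port,
        environ["POSTGRES_DB"],
    )


# Made-up values mirroring the .env file
env = {
    "POSTGRES_USER": "postgres",
    "POSTGRES_PASSWORD": "secret",
    "POSTGRES_DB": "flaskapp_db",
}

print(database_url(env))
# -> postgresql://postgres:secret@db:5432/flaskapp_db
```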
A note about volumes

The documentation for docker-compose states that volume data is preserved across runs: when docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you've created in volumes isn't lost.

So imagine my surprise when I did a docker-compose down && docker-compose up -d and found out that the database schema I created earlier was lost! It turns out that volume data is copied across runs only for named volumes. Since we didn't specify a named volume mount in the compose yaml, docker-compose creates a new anonymous volume each time the db container is started. To remedy this, we'll mount a named volume. Note that you need to declare the volume at the top level to reference it in the db service section. Now we see that the database volume data is preserved across runs. If you do actually want to delete the volume, pass the --volumes option to docker-compose down.

Container startup order

Our docker-compose.yml is still not quite ready yet: docker-compose up attempts to start containers in parallel, meaning that our flask container could come up before the db container, and die because it can't find the db. Tell docker-compose to start the db container before the flaskapp container by specifying a dependency with depends_on. In theory, this doesn't actually solve the "race condition", because the postgres process could take a while to start up and docker has no way of knowing when it's "done" starting up. However, in my experience, it works well enough in practice for this example.

The final piece is to hook up our nginx reverse proxy to the flask container. We'll use this very simple config to proxy requests, passing the original client address along in the X-Forwarded-For header:

```nginx
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
```
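Putting the pieces together, the named volume, the startup dependency, and the nginx service can be sketched as follows. Everything here except the X-Forwarded-For line is an assumption: the image tags, ports, mount paths and the proxy_pass target are illustrative, not the author's actual files:

```yaml
# docker-compose.yml -- final sketch
version: "3"

services:
  db:
    image: postgres:latest
    env_file: .env
    volumes:
      - db_volume:/var/lib/postgresql/data   # named volume: survives down/up

  flaskapp:
    build: ./flaskapp
    env_file: .env
    depends_on:
      - db          # start db first (does not wait for postgres to be *ready*)

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./conf.d:/etc/nginx/conf.d
    depends_on:
      - flaskapp

# named volumes must be declared at the top level to be referenced above
volumes:
  db_volume:
```

```nginx
# conf.d/flaskapp.conf -- sketch; the upstream port is Flask's default, an assumption
server {
    listen 80;

    location / {
        proxy_pass http://flaskapp:5000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```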