69 Commits

Author SHA1 Message Date
Sebastián Ramírez
490c554e23 📝 Update release notes 2020-06-05 19:12:34 +02:00
Sebastián Ramírez
aab833bae2 ⬆️ Update issue-manager (#211) 2020-06-05 19:11:45 +02:00
Sebastián Ramírez
45fdd880ce 📝 Update release notes 2020-05-25 08:44:17 +02:00
Sebastián Ramírez
27fabe01ea Add GitHub Sponsors button (#201) 2020-05-25 08:43:24 +02:00
Sebastián Ramírez
6967ce1b0c 📝 Update release notes 2020-05-25 08:34:31 +02:00
Sebastián Ramírez
20fa4ce8fb 🔊 Add consistent errors for env vars not set (#200) 2020-05-25 08:33:36 +02:00
Sebastián Ramírez
1a64656267 📝 Update release notes 2020-05-24 23:37:01 +02:00
Sebastián Ramírez
e4c668d7cd Upgrade Traefik to version 2 (#199)
* 🔧 Add STACK_NAME to .env for Traefik labels

* Upgrade Docker Compose to use Traefik v2

* Enable Traefik v2 in Docker Compose override for local development

* 🐛 Use internal HTTPS redirect in case the deployment is not through DockerSwarm.rocks
2020-05-24 23:35:49 +02:00
Sebastián Ramírez
bdc40a17f6 📝 Update release notes 2020-04-20 20:46:13 +02:00
Sebastián Ramírez
b88a0fc5fa 📝 Add docs about reporting coverage in HTML (#161) 2020-04-20 20:45:32 +02:00
Sebastián Ramírez
b02e4633dc 📝 Update release notes 2020-04-20 20:32:26 +02:00
Sebastián Ramírez
43d5b49bd1 Test using the TestClient (#160) 2020-04-20 20:31:29 +02:00
Sebastián Ramírez
41a2f15d8f 🎨 Format fixes (#159)
* 📝 Update release notes

* 🎨 Update format and structure
2020-04-20 19:27:45 +02:00
Sebastián Ramírez
7b768879f5 📝 Update release notes 2020-04-20 19:15:01 +02:00
Sebastián Ramírez
eed33d276d ♻️ Refactor backend, settings, DB sessions, types, configs, plugins (#158)
* ♻️ Refactor backend, update DB session handling

* Add mypy config and plugins

* Use Python-jose instead of PyJWT

as it has some extra functionalities and features

* Add/update scripts for test, lint, format

* 🔧 Update lint and format configs

* 🎨 Update import format, comments, and types

* 🎨 Add types to config

* Add types for all the code, and small fixes

* 🎨 Use global imports to simplify exploring with Jupyter

* ♻️ Import schemas and models, instead of each class

* 🚚 Rename db_session to db for simplicity

* 📌 Update dependencies installation for testing
2020-04-20 19:03:13 +02:00
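PR #158 above replaces PyJWT with python-jose and reworks DB sessions into dependencies with `yield`. A minimal sketch of both patterns follows; the `SessionLocal` factory, the database URL, and the token helper's shape are assumptions based on the commit description, not the PR's exact code.

```python
# Minimal sketch, not the PR's exact code: SessionLocal, the DB URL, and the
# token helper are assumptions based on the commit description above.
from datetime import datetime, timedelta
from typing import Generator

from jose import jwt  # python-jose, replacing PyJWT
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

SECRET_KEY = "changethis"  # would come from settings in the real project
ALGORITHM = "HS256"

engine = create_engine("postgresql://postgres:changethis@db/app")
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)


def get_db() -> Generator:
    # Dependency with yield: one session per request, always closed afterwards,
    # even if the endpoint raises.
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


def create_access_token(subject: str, expires_delta: timedelta) -> str:
    # Encode the user ID/email as the `sub` claim, as the PR describes.
    to_encode = {"exp": datetime.utcnow() + expires_delta, "sub": str(subject)}
    return jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
```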
Sebastián Ramírez
4b80bdfdce 📝 Update release notes 2020-04-19 21:01:11 +02:00
Sebastián Ramírez
7d57876bda 📝 Add docs about removing the frontend (#156) 2020-04-19 21:00:31 +02:00
Sebastián Ramírez
3678eb9d57 📝 Update release notes 2020-04-19 20:20:32 +02:00
Sebastián Ramírez
f09fb854bf 🔧 Simplify scripts and development (#155)
* 🔧 Update scripts and configs

* 🔧 Add shebang to script

* 🔥 Remove test and dev configs, pass inline

* Add local development dev-link setup

* 📝 Update generated docs with refactor

* 📝 Add Contributing guide
2020-04-19 20:19:25 +02:00
Sebastián Ramírez
a7fd258e18 📝 Update release notes 2020-04-19 16:45:30 +02:00
Sebastián Ramírez
2afe4159ab ♻️ Simplify Docker Compose files and deployment (#153)
* ♻️ Simplify Docker Compose files and deployment

* 🔧 Remove TRAEFIK_PUBLIC_NETWORK_IS_EXTERNAL from .env
2020-04-19 16:44:12 +02:00
Sebastián Ramírez
283bc7c95b ♻️ Simplify tests, run in same backend service (#152)
remove backend-tests
2020-04-19 12:34:03 +02:00
Sebastián Ramírez
8f9c2bac42 📝 Update release notes 2020-04-19 09:18:19 +02:00
Sebastián Ramírez
a7d3671a72 ♻️ Simplify env files, merge to one .env (#151) 2020-04-19 09:17:14 +02:00
Sebastián Ramírez
894b0a5587 📝 Update Travis badge 2020-04-19 08:54:23 +02:00
Sebastián Ramírez
a25f2c14e4 🔖 Release version 0.5.0 2020-04-19 08:52:11 +02:00
Sebastián Ramírez
122d983415 📝 Update release notes 2020-04-19 08:51:36 +02:00
Sebastián Ramírez
d08d9314ce 📌 Make the public Traefik network a fixed default (#150)
to simplify development
2020-04-19 08:50:00 +02:00
Sebastián Ramírez
ff55b778ba 📝 Update release notes 2020-04-19 07:57:18 +02:00
Ruslan Samoylov
8812ca6635 ⬆️ Upgrade to Postgres 12 (#148) 2020-04-19 07:56:05 +02:00
Sebastián Ramírez
2e8da3a590 📝 Update release notes 2020-04-18 23:30:01 +02:00
Sebastián Ramírez
00297f974f 🙈 Update gitignore with Poetry 2020-04-18 23:29:44 +02:00
Ruslan Samoylov
c8bcc0ba0a Use Poetry for package management (#144)
* use poetry instead of Pipfile

* fix python black version

* set prepare.sh as executable

* revert postgres 11

* use multi-build stage in docker

* fix poetry path

* 🔥 Remove unneeded changes

* 🔧 Move and update Poetry file

* 🙈 Update gitignore

* 🐳 Update Dockerfiles to use Poetry

* 🐳 Update Dockerfiles with Poetry

* 🔧 Add SERVER_NAME required by Celery worker

* 🐳 Update Poetry install to avoid env conflicts

* Add Pytest to Poetry dependencies

Co-authored-by: Sebastián Ramírez <tiangolo@gmail.com>
2020-04-18 23:27:48 +02:00
Sebastián Ramírez
0a194b3b00 📝 Update README, sync with FastAPI docs 2020-04-18 17:48:48 +02:00
Sebastián Ramírez
94b2474438 📝 Update release notes 2020-04-18 10:16:16 +02:00
Sebastián Ramírez
af4e0cfe10 🐛 Fix Windows line endings for shell scripts after generation (#149) 2020-04-18 10:15:00 +02:00
Sebastián Ramírez
001dbda103 📝 Update release notes 2020-04-17 16:35:15 +02:00
Brendon Smith
34f6f9ae54 ⬆️ Upgrade to Vue CLI 4 (#120)
* Upgrade to Vue CLI 4

https://cli.vuejs.org/migrating-from-v3

* 🔥 Remove package-lock.json that varies by system

Co-authored-by: Sebastián Ramírez <tiangolo@gmail.com>
2020-04-17 16:33:51 +02:00
Sebastián Ramírez
0c8e682a90 📝 Update release notes 2020-04-17 14:56:05 +02:00
Matthew Shu
67b384f308 🔥 Remove duplicate 'login' tag (#135)
Co-authored-by: Sebastián Ramírez <tiangolo@gmail.com>
2020-04-17 14:54:47 +02:00
Sebastián Ramírez
4bd791c11d 📝 Update release notes 2020-04-17 14:34:37 +02:00
Radek Lonka
697b4da6b0 🐛 Fix welcome message to show email if full name does not exist (#129) 2020-04-17 14:31:30 +02:00
Sebastián Ramírez
854cc709d1 📝 Update release notes 2020-04-17 14:22:02 +02:00
Brendon Smith
21c4d11659 🎨 Bring Python code into compliance with Black and Flake8 (#121)
* Ignore Flake8 unused import error F401

https://flake8.readthedocs.io/en/latest/user/error-codes.html

The apparently unused imports may be needed for SQLAlchemy.

As the code comment says:

make sure all SQL Alchemy models are imported before initializing DB
otherwise, SQL Alchemy might fail to initialize properly relationships

See GitHub 28 and 29

* Ignore Flake8 unused variable error F841

https://flake8.readthedocs.io/en/latest/user/error-codes.html

The apparently unused variables may be needed for tests.

* Bring line length into compliance with Black

Should be 88 characters.

* Format alembic code with Black
2020-04-17 14:20:48 +02:00
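For context on the F401 ignore above, this is roughly the import pattern it protects; the module paths are assumed from the project layout of that era.

```python
# Sketch of backend/app/app/db/base.py as described above (paths assumed).
# All SQLAlchemy models must be imported before initializing the DB,
# otherwise SQLAlchemy might fail to resolve relationships between them.
from app.db.base_class import Base  # noqa: F401
from app.db_models.item import Item  # noqa: F401
from app.db_models.user import User  # noqa: F401
```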
Sebastián Ramírez
bcee2427b9 📝 Update release notes 2020-04-17 09:41:34 +02:00
Albert Iribarne
8a2252f654 ♻️ Simplify DB base class declaration (#117)
* Simplify DB base class declaration

* ♻️ Remove object inheritance

Co-authored-by: Sebastián Ramírez <tiangolo@gmail.com>
2020-04-17 09:39:25 +02:00
Sebastián Ramírez
8ff61e813e 📝 Update release notes 2020-04-17 09:21:32 +02:00
Mocsár Kálmán
fb874fea35 Update CRUD utils for users handling password hashing (#106)
* Add some information on how to run backend tests for local backend development

* Bug fixes in backend app

* 🎨 Update format

* Use random_email for test_update_user

Co-authored-by: Mocsar Kalman <mocsar.kalman@gravityrd.com>
Co-authored-by: Sebastián Ramírez <tiangolo@gmail.com>
2020-04-17 09:20:00 +02:00
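A hedged sketch of CRUD-level password hashing with Passlib; the helper names are assumptions, not PR #106's exact code.

```python
# Assumed helper names; the real project keeps these in its security module.
from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")


def get_password_hash(password: str) -> str:
    # Store only the hash, never the plain-text password.
    return pwd_context.hash(password)


def verify_password(plain_password: str, hashed_password: str) -> bool:
    return pwd_context.verify(plain_password, hashed_password)
```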
Sebastián Ramírez
2b9ed9333a 📝 Update release notes 2020-04-17 08:07:39 +02:00
gcharbon
45510b4f80 ♻️ Use . instead of source in build-push.sh (#98) 2020-04-17 08:04:42 +02:00
Sebastián Ramírez
5a79f4e427 📝 Update release notes 2020-04-17 08:01:35 +02:00
Stephen Brown II
79631c7619 Use Pydantic BaseSettings for config settings (#87)
* Use Pydantic BaseSettings for config settings

* Update fastapi dep to >=0.47.0 and email_validator to email-validator

* Fix deprecation warning for Pydantic >=1.0

* Properly support old-format comma separated strings for BACKEND_CORS_ORIGINS

Co-authored-by: Sebastián Ramírez <tiangolo@gmail.com>
2020-04-17 07:56:10 +02:00
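A minimal sketch of the `BaseSettings` approach from PR #87, including the backward-compatible parsing of comma-separated `BACKEND_CORS_ORIGINS` mentioned above. Field names match the `.env` shown later in this diff; the validator logic is an approximation, not the PR's exact code.

```python
# Approximate sketch of the settings class, assuming Pydantic v1 (< 2.0).
from typing import List, Union

from pydantic import AnyHttpUrl, BaseSettings, validator


class Settings(BaseSettings):
    PROJECT_NAME: str
    SECRET_KEY: str
    BACKEND_CORS_ORIGINS: List[AnyHttpUrl] = []

    @validator("BACKEND_CORS_ORIGINS", pre=True)
    def assemble_cors_origins(cls, v: Union[str, List[str]]) -> Union[List[str], str]:
        if isinstance(v, str) and not v.startswith("["):
            # Old-format comma-separated string: "http://a.com, http://b.com"
            return [i.strip() for i in v.split(",")]
        return v  # already a JSON-style list


settings = Settings()  # values are read from environment variables / .env
```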
Sebastián Ramírez
cd875e5bef 👷 Add GitHub action issue-manager 2020-04-12 13:10:38 +02:00
Sebastián Ramírez
1a92a0a6f1 🔥 Remove package-lock.json 2020-04-06 17:28:53 +02:00
Sebastián Ramírez
2eb5b030bd 📝 Update release notes 2020-04-06 12:39:26 +02:00
Sebastián Ramírez
7c2c2276d9 ♻️ Simplify Traefik labels in services (#139) 2020-04-06 12:38:28 +02:00
Sebastián Ramírez
baf584a6cd 📝 Update release notes 2020-04-06 11:37:52 +02:00
Teomor Szczurek
970a182ec8 Add email validation (#40)
* modify tests

* Add email-validator to Dockerfiles

* ♻️ Update random email generation

* ♻️ Re-apply email validation after rebase

Co-authored-by: Sebastián Ramírez <tiangolo@gmail.com>
2020-04-06 11:36:29 +02:00
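The email validation added in PR #40 boils down to Pydantic's `EmailStr`, which requires the `email-validator` package the PR adds to the Dockerfiles. A minimal sketch (the model name is an assumption):

```python
# EmailStr needs the `email-validator` package installed.
from pydantic import BaseModel, EmailStr


class UserCreate(BaseModel):
    email: EmailStr  # malformed addresses are rejected at request-parsing time
    password: str
```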
Sebastián Ramírez
1d8678235d 📝 Update release notes 2020-02-07 23:17:38 +01:00
Ashton Shears
71f430616c ✏️ Fix typo (#83) 2020-02-07 21:46:09 +01:00
Abhishek S
43e508239c ✏️ Fix typo (#80) 2020-02-07 21:44:37 +01:00
Cristobal Aguirre
dc712ac4ec 🐛 Fix typo in read_item GET view (#74) 2020-02-07 21:28:45 +01:00
Sebastián Ramírez
141f6cdb6e 📝 Update release notes 2020-02-07 21:17:28 +01:00
Daniel Butler
fc403c9bc1 ✏️ Correct grammar (#70) 2020-02-07 21:15:10 +01:00
David Montague
4b93dc709f 🐛 Fix docker configuration for flower (#37) 2020-02-07 21:04:02 +01:00
Sebastián Ramírez
2db416d3c1 📝 Update release notes 2020-01-19 22:49:17 +01:00
Manu
ab46165387 Add base class to simplify CRUD (#23) 2020-01-19 22:40:50 +01:00
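A hedged sketch of a generic CRUD base class in the spirit of PR #23; names and signatures are assumptions, and an integer `id` column is assumed on every model.

```python
# Illustrative only; not the exact class from PR #23.
from typing import Generic, List, Optional, Type, TypeVar

from sqlalchemy.orm import Session

ModelType = TypeVar("ModelType")


class CRUDBase(Generic[ModelType]):
    def __init__(self, model: Type[ModelType]):
        self.model = model  # the SQLAlchemy model class, e.g. User or Item

    def get(self, db: Session, id: int) -> Optional[ModelType]:
        # Assumes every model has an integer primary key column named `id`.
        return db.query(self.model).filter(self.model.id == id).first()

    def get_multi(self, db: Session, skip: int = 0, limit: int = 100) -> List[ModelType]:
        return db.query(self.model).offset(skip).limit(limit).all()
```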
Sebastián Ramírez
1c975c7f2d 📝 Update release notes 2020-01-19 13:27:07 +01:00
Manu
248ea56c6e Add normal-user fixture for testing (#20) 2020-01-19 13:25:17 +01:00
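A hypothetical sketch of such a normal-user fixture; the endpoint path and credentials are assumptions, and the real fixture would live in the tests' `conftest.py`.

```python
import pytest
from fastapi.testclient import TestClient


@pytest.fixture
def normal_user_token_headers(client: TestClient) -> dict:
    # Log in as a regular (non-superuser) account and return auth headers
    # that tests can pass to protected endpoints.
    login_data = {"username": "user@example.com", "password": "changethis"}
    r = client.post("/api/v1/login/access-token", data=login_data)
    token = r.json()["access_token"]
    return {"Authorization": f"Bearer {token}"}
```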
119 changed files with 1878 additions and 15414 deletions

1
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1 @@
github: [tiangolo]

28
.github/workflows/issue-manager.yml vendored Normal file

@@ -0,0 +1,28 @@
name: Issue Manager

on:
  schedule:
    - cron: "0 0 * * *"
  issue_comment:
    types:
      - created
      - edited
  issues:
    types:
      - labeled

jobs:
  issue-manager:
    runs-on: ubuntu-latest
    steps:
      - uses: tiangolo/issue-manager@0.2.0
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          config: >
            {
              "answered": {
                "users": ["tiangolo"],
                "delay": 864000,
                "message": "Assuming the original issue was solved, it will be automatically closed now. But feel free to add more comments or create new issues."
              }
            }

2
.gitignore vendored

@@ -1,3 +1,5 @@
.vscode
testing-project
.mypy_cache
poetry.lock
dev-link/


@@ -9,4 +9,4 @@ services:
- docker
script:
- bash ./test.sh
- bash ./scripts/test.sh

83
CONTRIBUTING.md Normal file

@@ -0,0 +1,83 @@
# Contributing
Here are some short guidelines in case you want to contribute to the development of the Full Stack FastAPI PostgreSQL project generator itself.
After you clone the project, there are several scripts that can help during development.
* `./scripts/dev-fsfp.sh`:
Generate a new default project `dev-fsfp`.
Call it from one level above the project directory. So, if the project is at `~/code/full-stack-fastapi-postgresql/`, call it from `~/code/`, like:
```console
$ cd ~/code/
$ bash ./full-stack-fastapi-postgresql/scripts/dev-fsfp.sh
```
It will generate a new project with all the defaults at `~/code/dev-fsfp/`.
You can go to that directory with a full new project, edit files and test things, for example:
```console
$ cd ./dev-fsfp/
$ docker-compose up -d
```
It is outside of the project generator directory to let you add Git to it and compare versions and changes.
* `./scripts/dev-fsfp-back.sh`:
Move the changes from a project `dev-fsfp` back to the project generator.
You would call it after calling `./scripts/dev-fsfp.sh` and adding some modifications to `dev-fsfp`.
Call it from one level above the project directory. So, if the project is at `~/code/full-stack-fastapi-postgresql/`, call it from `~/code/`, like:
```console
$ cd ~/code/
$ bash ./full-stack-fastapi-postgresql/scripts/dev-fsfp-back.sh
```
That will also contain all the generated files with the generated variables, but it will let you compare the changes in `dev-fsfp` and the source in the project generator with git, and see what to commit.
* `./scripts/discard-dev-files.sh`:
After using `./scripts/dev-fsfp-back.sh`, there will be a bunch of generated files with the variables for the generated project that you don't want to commit, like `README.md` and `.gitlab-ci.yml`.
To discard all those changes at once, run `discard-dev-files.sh` from the root of the project, e.g.:
```console
$ cd ~/code/full-stack-fastapi-postgresql/
$ bash ./scripts/discard-dev-files.sh
```
* `./scripts/test.sh`:
Run the tests. It creates a project `testing-project` *inside* of the project generator and runs its tests.
Call it from the root of the project, e.g.:
```console
$ cd ~/code/full-stack-fastapi-postgresql/
$ bash ./scripts/test.sh
```
* `./scripts/dev-link.sh`:
Set up a local directory with links to the files for live development with the source files.
This script generates a project `dev-link` *inside* the project generator, just to generate the `.env` and `./frontend/.env` files.
Then it removes everything except those 2 files.
Then it creates links for each of the source files, and adds those 2 files back.
The end result is that you can go into the `dev-link` directory and develop locally with it as if it was a generated project, with all the variables set. But all the changes are actually done directly in the source files.
This is probably a lot faster to iterate with than using `./scripts/dev-fsfp.sh`. But it's only tested on Linux; it might not work on other systems.


@@ -1,6 +1,6 @@
# Full Stack FastAPI and PostgreSQL - Base Project Generator
[![Build Status](https://travis-ci.org/tiangolo/full-stack-fastapi-postgresql.svg?branch=master)](https://travis-ci.org/tiangolo/full-stack-fastapi-postgresql)
[![Build Status](https://travis-ci.com/tiangolo/full-stack-fastapi-postgresql.svg?branch=master)](https://travis-ci.com/tiangolo/full-stack-fastapi-postgresql)
Generate a backend and frontend stack using Python, including interactive API documentation.
@@ -8,17 +8,14 @@ Generate a backend and frontend stack using Python, including interactive API do
[![API docs](img/docs.png)](https://github.com/tiangolo/full-stack-fastapi-postgresql)
### Alternative API documentation
[![API docs](img/redoc.png)](https://github.com/tiangolo/full-stack-fastapi-postgresql)
### Dashboard Login
[![API docs](img/login.png)](https://github.com/tiangolo/full-stack-fastapi-postgresql)
### Dashboard - Create User
[![API docs](img/dashboard.png)](https://github.com/tiangolo/full-stack-fastapi-postgresql)
@@ -27,23 +24,23 @@ Generate a backend and frontend stack using Python, including interactive API do
* Full **Docker** integration (Docker based).
* Docker Swarm Mode deployment.
* **Docker Compose** integration and optimization for local development
* **Docker Compose** integration and optimization for local development.
* **Production ready** Python web server using Uvicorn and Gunicorn.
* Python **[FastAPI](https://github.com/tiangolo/fastapi)** backend:
* Python <a href="https://github.com/tiangolo/fastapi" class="external-link" target="_blank">**FastAPI**</a> backend:
* **Fast**: Very high performance, on par with **NodeJS** and **Go** (thanks to Starlette and Pydantic).
* **Intuitive**: Great editor support. <abbr title="also known as auto-complete, autocompletion, IntelliSense">Completion</abbr> everywhere. Less time debugging.
* **Easy**: Designed to be easy to use and learn. Less time reading docs.
* **Short**: Minimize code duplication. Multiple features from each parameter declaration.
* **Robust**: Get production-ready code. With automatic interactive documentation.
* **Standards-based**: Based on (and fully compatible with) the open standards for APIs: <a href="https://github.com/OAI/OpenAPI-Specification" target="_blank">OpenAPI</a> and <a href="http://json-schema.org/" target="_blank">JSON Schema</a>.
* [**Many other features**](https://github.com/tiangolo/fastapi) including automatic validation, serialization, interactive documentation, authentication with OAuth2 JWT tokens, etc.
* **Standards-based**: Based on (and fully compatible with) the open standards for APIs: <a href="https://github.com/OAI/OpenAPI-Specification" class="external-link" target="_blank">OpenAPI</a> and <a href="http://json-schema.org/" class="external-link" target="_blank">JSON Schema</a>.
* <a href="https://fastapi.tiangolo.com/features/" class="external-link" target="_blank">**Many other features**</a> including automatic validation, serialization, interactive documentation, authentication with OAuth2 JWT tokens, etc.
* **Secure password** hashing by default.
* **JWT token** authentication.
* **SQLAlchemy** models (independent of Flask extensions, so they can be used with Celery workers directly).
* Basic starting models for users (modify and remove as you need).
* **Alembic** migrations.
* **CORS** (Cross Origin Resource Sharing).
* **Celery** worker that can import and use models and code from the rest of the backend selectively (you don't have to install the complete app in each worker).
* **Celery** worker that can import and use models and code from the rest of the backend selectively.
* REST backend tests based on **Pytest**, integrated with Docker, so you can test the full API interaction, independent of the database. As it runs in Docker, it can build a new data store from scratch each time (so you can use ElasticSearch, MongoDB, CouchDB, or whatever you want, and just test that the API works).
* Easy Python integration with **Jupyter Kernels** for remote or in-Docker development with extensions like Atom Hydrogen or Visual Studio Code Jupyter.
* **Vue** frontend:
@@ -61,6 +58,7 @@ Generate a backend and frontend stack using Python, including interactive API do
* Docker multi-stage building, so you don't need to save or commit compiled code.
* Frontend tests ran at build time (can be disabled too).
* Made as modular as possible, so it works out of the box, but you can re-generate with Vue CLI or create it as you need, and re-use what you want.
* It's also easy to remove it if you have an API-only app, check the instructions in the generated `README.md`.
* **PGAdmin** for PostgreSQL database, you can modify it to use PHPMyAdmin and MySQL easily.
* **Flower** for Celery jobs monitoring.
* Load balancing between frontend and backend with **Traefik**, so you can have both under the same domain, separated by path, but served by different containers.
@@ -69,7 +67,7 @@ Generate a backend and frontend stack using Python, including interactive API do
## How to use it
Go to the directoy where you want to create your project and run:
Go to the directory where you want to create your project and run:
```bash
pip install cookiecutter
@@ -105,7 +103,7 @@ The input variables, with their default values (some auto generated) are:
* `secret_key`: Backend server secret key. Use the method above to generate it.
* `first_superuser`: The first superuser generated, with it you will be able to create more users, etc. By default, based on the domain.
* `first_superuser_password`: First superuser password. Use the method above to generate it.
* `backend_cors_origins`: Origins (domains, more or less) that are enabled for CORS (Cross Origin Resource Sharing). This allows a frontend in one domain (e.g. `https://dashboard.example.com`) to communicate with this backend, that could be living in another domain (e.g. `https://api.example.com`). It can also be used to allow your local frontend (with a custom `hosts` domain mapping, as described in the project's `README.md`) that could be living in `http://dev.example.com:8080` to cummunicate with the backend at `https://stag.example.com`. Notice the `http` vs `https` and the `dev.` prefix for local development vs the "staging" `stag.` prefix. By default, it includes origins for production, staging and development, with ports commonly used during local development by several popular frontend frameworks (Vue with `:8080`, React, Angular).
* `backend_cors_origins`: Origins (domains, more or less) that are enabled for CORS (Cross Origin Resource Sharing). This allows a frontend in one domain (e.g. `https://dashboard.example.com`) to communicate with this backend, that could be living in another domain (e.g. `https://api.example.com`). It can also be used to allow your local frontend (with a custom `hosts` domain mapping, as described in the project's `README.md`) that could be living in `http://dev.example.com:8080` to communicate with the backend at `https://stag.example.com`. Notice the `http` vs `https` and the `dev.` prefix for local development vs the "staging" `stag.` prefix. By default, it includes origins for production, staging and development, with ports commonly used during local development by several popular frontend frameworks (Vue with `:8080`, React, Angular).
* `smtp_port`: Port to use to send emails via SMTP. By default `587`.
* `smtp_host`: Host to use to send emails, it would be given by your email provider, like Mailgun, Sparkpost, etc.
* `smtp_user`: The user to use in the SMTP connection. The value will be given by your email provider.
@@ -117,13 +115,12 @@ The input variables, with their default values (some auto generated) are:
* `pgadmin_default_user_password`: PGAdmin default user password. Generate it with the method above.
* `traefik_constraint_tag`: The tag to be used by the internal Traefik load balancer (for example, to divide requests between backend and frontend) for production. Used to separate this stack from any other stack you might have. This should identify each stack in each environment (production, staging, etc).
* `traefik_constraint_tag_staging`: The Traefik tag to be used while on staging.
* `traefik_public_network`: This assumes you have another separate publicly facing Traefik at the server / cluster level. This is the network that main Traefik lives in.
* `traefik_constraint_tag_staging`: The Traefik tag to be used while on staging.
* `traefik_public_constraint_tag`: The tag that should be used by stack services that should communicate with the public.
* `flower_auth`: Basic HTTP authentication for flower, in the form `user:password`. By default: "`root:changethis`".
* `flower_auth`: Basic HTTP authentication for flower, in the form `user:password`. By default: "`admin:changethis`".
* `sentry_dsn`: Key URL (DSN) of Sentry, for live error reporting. If you are not using it yet, you should, is open source. E.g.: `https://1234abcd:5678ef@sentry.example.com/30`.
* `sentry_dsn`: Key URL (DSN) of Sentry, for live error reporting. You can use the open source version or a free account. E.g.: `https://1234abcd:5678ef@sentry.example.com/30`.
* `docker_image_prefix`: Prefix to use for Docker image names. If you are using GitLab Docker registry it would be based on your code repository. E.g.: `git.example.com/development-team/my-awesome-project/`.
* `docker_image_backend`: Docker image name for the backend. By default, it will be based on your Docker image prefix, e.g.: `git.example.com/development-team/my-awesome-project/backend`. And depending on your environment, a different tag will be appended ( `prod`, `stag`, `branch` ). So, the final image names used will be like: `git.example.com/development-team/my-awesome-project/backend:prod`.
@@ -142,11 +139,58 @@ After using this generator, your new project (the directory created) will contai
## Sibling project generators
* Based on Couchbase: [https://github.com/tiangolo/full-stack-fastapi-couchbase](https://github.com/tiangolo/full-stack-fastapi-couchbase).
* Full Stack FastAPI Couchbase: [https://github.com/tiangolo/full-stack-fastapi-couchbase](https://github.com/tiangolo/full-stack-fastapi-couchbase).
## Release Notes
### Next release
### Latest Changes
* Update issue-manager. PR [#211](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/211).
* Add [GitHub Sponsors](https://github.com/sponsors/tiangolo) button. PR [#201](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/201).
* Add consistent errors for env vars not set. PR [#200](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/200).
* Upgrade Traefik to version 2, keeping in sync with DockerSwarm.rocks. PR [#199](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/199).
* Add docs about reporting test coverage in HTML. PR [#161](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/161).
* Run tests with `TestClient`. PR [#160](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/160).
* Refactor backend:
* Simplify configs for tools and format to better support editor integration.
* Add mypy configurations and plugins.
* Add types to all the codebase.
* Update types for SQLAlchemy models with plugin.
* Update and refactor CRUD utils.
* Refactor DB sessions to use dependencies with `yield`.
* Refactor dependencies, security, CRUD, models, schemas, etc. To simplify code and improve autocompletion.
* Change from PyJWT to Python-JOSE as it supports additional use cases.
* Fix JWT tokens using user email/ID as the subject in `sub`.
* PR [#158](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/158).
* Add docs about removing the frontend, for an API-only app. PR [#156](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/156).
* Simplify scripts and development, update docs and configs. PR [#155](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/155).
* Simplify `docker-compose.*.yml` files, refactor deployment to reduce config files. PR [#153](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/153).
* Simplify env var files, merge to a single `.env` file. PR [#151](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/151).
### 0.5.0
* Make the Traefik public network a fixed default of `traefik-public` as done in DockerSwarm.rocks, to simplify development and iteration of the project generator. PR [#150](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/150).
* Update to PostgreSQL 12. PR [#148](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/148) by [@RCheese](https://github.com/RCheese).
* Use Poetry for package management. Initial PR [#144](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/144) by [@RCheese](https://github.com/RCheese).
* Fix Windows line endings for shell scripts after project generation with Cookiecutter hooks. PR [#149](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/149).
* Upgrade Vue CLI to version 4. PR [#120](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/120) by [@br3ndonland](https://github.com/br3ndonland).
* Remove duplicate `login` tag. PR [#135](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/135) by [@Nonameentered](https://github.com/Nonameentered).
* Fix showing email in dashboard when there's no user's full name. PR [#129](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/129) by [@rlonka](https://github.com/rlonka).
* Format code with Black and Flake8. PR [#121](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/121) by [@br3ndonland](https://github.com/br3ndonland).
* Simplify SQLAlchemy Base class. PR [#117](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/117) by [@airibarne](https://github.com/airibarne).
* Update CRUD utils for users, handling password hashing. PR [#106](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/106) by [@mocsar](https://github.com/mocsar).
* Use `.` instead of `source` for interoperability. PR [#98](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/98) by [@gucharbon](https://github.com/gucharbon).
* Use Pydantic's `BaseSettings` for settings/configs and env vars. PR [#87](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/87) by [@StephenBrown2](https://github.com/StephenBrown2).
* Remove `package-lock.json` to let everyone lock their own versions (depending on OS, etc).
* Simplify Traefik service labels. PR [#139](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/139).
* Add email validation. PR [#40](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/40) by [@kedod](https://github.com/kedod).
* Fix typo in README. PR [#83](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/83) by [@ashears](https://github.com/ashears).
* Fix typo in README. PR [#80](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/80) by [@abjoker](https://github.com/abjoker).
* Fix function name `read_item` and response code. PR [#74](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/74) by [@jcaguirre89](https://github.com/jcaguirre89).
* Fix typo in comment. PR [#70](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/70) by [@daniel-butler](https://github.com/daniel-butler).
* Fix Flower Docker configuration. PR [#37](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/37) by [@dmontagu](https://github.com/dmontagu).
* Add new CRUD utils based on DB and Pydantic models. Initial PR [#23](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/23) by [@ebreton](https://github.com/ebreton).
* Add normal user testing Pytest fixture. PR [#20](https://github.com/tiangolo/full-stack-fastapi-postgresql/pull/20) by [@ebreton](https://github.com/ebreton).
### 0.4.0


@@ -10,7 +10,7 @@
"secret_key": "changethis",
"first_superuser": "admin@{{cookiecutter.domain_main}}",
"first_superuser_password": "changethis",
"backend_cors_origins": "http://localhost, http://localhost:4200, http://localhost:3000, http://localhost:8080, https://localhost, https://localhost:4200, https://localhost:3000, https://localhost:8080, http://dev.{{cookiecutter.domain_main}}, https://{{cookiecutter.domain_staging}}, https://{{cookiecutter.domain_main}}, http://local.dockertoolbox.tiangolo.com, http://localhost.tiangolo.com",
"backend_cors_origins": "[\"http://localhost\", \"http://localhost:4200\", \"http://localhost:3000\", \"http://localhost:8080\", \"https://localhost\", \"https://localhost:4200\", \"https://localhost:3000\", \"https://localhost:8080\", \"http://dev.{{cookiecutter.domain_main}}\", \"https://{{cookiecutter.domain_staging}}\", \"https://{{cookiecutter.domain_main}}\", \"http://local.dockertoolbox.tiangolo.com\", \"http://localhost.tiangolo.com\"]",
"smtp_port": "587",
"smtp_host": "",
"smtp_user": "",
@@ -23,7 +23,6 @@
"traefik_constraint_tag": "{{cookiecutter.domain_main}}",
"traefik_constraint_tag_staging": "{{cookiecutter.domain_staging}}",
"traefik_public_network": "traefik-public",
"traefik_public_constraint_tag": "traefik-public",
"flower_auth": "admin:{{cookiecutter.first_superuser_password}}",


@@ -1,2 +0,0 @@
default_context:
"project_name": "Dev FSFP"


@@ -1,10 +0,0 @@
#! /usr/bin/env bash
# Run this script from outside the project, to generate a dev-fsfp project
# Exit in case of error
set -e
rm -rf ./dev-fsfp
cookiecutter --config-file ./full-stack-fastapi-postgresql/dev-fsfp-config.yml --no-input -f ./full-stack-fastapi-postgresql


@@ -0,0 +1,8 @@
from pathlib import Path
path: Path
for path in Path(".").glob("**/*.sh"):
    data = path.read_bytes()
    lf_data = data.replace(b"\r\n", b"\n")
    path.write_bytes(lf_data)


@@ -5,6 +5,11 @@
# Exit in case of error
set -e
if [ ! -d ./full-stack-fastapi-postgresql ] ; then
    echo "Run this script from outside the project, to integrate a sibling dev-fsfp project with changes and review modifications"
    exit 1
fi
if [ $(uname -s) = "Linux" ]; then
    echo "Remove __pycache__ files"
    sudo find ./dev-fsfp/ -type d -name __pycache__ -exec rm -r {} \+

13
scripts/dev-fsfp.sh Normal file

@@ -0,0 +1,13 @@
#! /usr/bin/env bash
# Exit in case of error
set -e
if [ ! -d ./full-stack-fastapi-postgresql ] ; then
    echo "Run this script from outside the project, to generate a sibling dev-fsfp project with independent git"
    exit 1
fi
rm -rf ./dev-fsfp
cookiecutter --no-input -f ./full-stack-fastapi-postgresql project_name="Dev FSFP"

34
scripts/dev-link.sh Normal file

@@ -0,0 +1,34 @@
#! /usr/bin/env bash
# Exit in case of error
set -e
# Run this from the root of the project to generate a dev-link project
# It will contain a link to each of the files of the generator, except for
# .env and frontend/.env, that will be the generated ones
# This allows developing with a live stack while keeping the same source code
# Without having to generate dev-fsfp and integrating back all the files
rm -rf dev-link
mkdir -p tmp-dev-link/frontend
cookiecutter --no-input -f ./ project_name="Dev Link"
mv ./dev-link/.env ./tmp-dev-link/
mv ./dev-link/frontend/.env ./tmp-dev-link/frontend/
rm -rf ./dev-link/
mkdir -p ./dev-link/
cd ./dev-link/
for f in ../\{\{cookiecutter.project_slug\}\}/* ; do
    ln -s "$f" ./
done
cd ..
mv ./tmp-dev-link/.env ./dev-link/
mv ./tmp-dev-link/frontend/.env ./dev-link/frontend/
rm -rf ./tmp-dev-link


@@ -1,14 +1,13 @@
#! /usr/bin/env bash
set -e
rm -rf \{\{cookiecutter.project_slug\}\}/.git
rm -rf \{\{cookiecutter.project_slug\}\}/backend/app/Pipfile.lock
rm -rf \{\{cookiecutter.project_slug\}\}/backend/app/poetry.lock
rm -rf \{\{cookiecutter.project_slug\}\}/frontend/node_modules
rm -rf \{\{cookiecutter.project_slug\}\}/frontend/dist
git checkout \{\{cookiecutter.project_slug\}\}/README.md
git checkout \{\{cookiecutter.project_slug\}\}/.gitlab-ci.yml
git checkout \{\{cookiecutter.project_slug\}\}/cookiecutter-config-file.yml
git checkout \{\{cookiecutter.project_slug\}\}/docker-compose.deploy.networks.yml
git checkout \{\{cookiecutter.project_slug\}\}/env-backend.env
git checkout \{\{cookiecutter.project_slug\}\}/env-flower.env
git checkout \{\{cookiecutter.project_slug\}\}/.env
git checkout \{\{cookiecutter.project_slug\}\}/frontend/.env
git checkout \{\{cookiecutter.project_slug\}\}/env-pgadmin.env
git checkout \{\{cookiecutter.project_slug\}\}/env-postgres.env

16
scripts/test.sh Normal file

@@ -0,0 +1,16 @@
#! /usr/bin/env bash
# Exit in case of error
set -e
# Run this from the root of the project
rm -rf ./testing-project
cookiecutter --no-input -f ./ project_name="Testing Project"
cd ./testing-project
bash ./scripts/test.sh "$@"
cd ../

14
test.sh

@@ -1,14 +0,0 @@
#! /usr/bin/env bash
# Exit in case of error
set -e
rm -rf ./testing-project
cookiecutter --config-file ./testing-config.yml --no-input -f ./
cd ./testing-project
bash ./scripts/test.sh
cd ../


@@ -1,2 +0,0 @@
default_context:
"project_name": "Testing Project"


@@ -1,15 +1,45 @@
COMPOSE_PATH_SEPARATOR=:
COMPOSE_FILE=docker-compose.test.yml:docker-compose.shared.admin.yml:docker-compose.shared.base-images.yml:docker-compose.shared.depends.yml:docker-compose.shared.env.yml:docker-compose.dev.build.yml:docker-compose.dev.command.yml:docker-compose.dev.env.yml:docker-compose.dev.labels.yml:docker-compose.dev.networks.yml:docker-compose.dev.ports.yml:docker-compose.dev.volumes.yml
DOMAIN=localhost
# DOMAIN=local.dockertoolbox.tiangolo.com
# DOMAIN=localhost.tiangolo.com
# DOMAIN=dev.{{cookiecutter.domain_main}}
STACK_NAME={{cookiecutter.docker_swarm_stack_name_main}}
TRAEFIK_PUBLIC_NETWORK=traefik-public
TRAEFIK_TAG={{cookiecutter.traefik_constraint_tag}}
TRAEFIK_PUBLIC_NETWORK={{cookiecutter.traefik_public_network}}
TRAEFIK_PUBLIC_TAG={{cookiecutter.traefik_public_constraint_tag}}
DOCKER_IMAGE_BACKEND={{cookiecutter.docker_image_backend}}
DOCKER_IMAGE_CELERYWORKER={{cookiecutter.docker_image_celeryworker}}
DOCKER_IMAGE_FRONTEND={{cookiecutter.docker_image_frontend}}
# Backend
BACKEND_CORS_ORIGINS={{cookiecutter.backend_cors_origins}}
PROJECT_NAME={{cookiecutter.project_name}}
SECRET_KEY={{cookiecutter.secret_key}}
FIRST_SUPERUSER={{cookiecutter.first_superuser}}
FIRST_SUPERUSER_PASSWORD={{cookiecutter.first_superuser_password}}
SMTP_TLS=True
SMTP_PORT={{cookiecutter.smtp_port}}
SMTP_HOST={{cookiecutter.smtp_host}}
SMTP_USER={{cookiecutter.smtp_user}}
SMTP_PASSWORD={{cookiecutter.smtp_password}}
EMAILS_FROM_EMAIL={{cookiecutter.smtp_emails_from_email}}
USERS_OPEN_REGISTRATION=False
SENTRY_DSN={{cookiecutter.sentry_dsn}}
# Flower
FLOWER_BASIC_AUTH={{cookiecutter.flower_auth}}
# Postgres
POSTGRES_SERVER=db
POSTGRES_USER=postgres
POSTGRES_PASSWORD={{cookiecutter.postgres_password}}
POSTGRES_DB=app
# PgAdmin
PGADMIN_LISTEN_PORT=5050
PGADMIN_DEFAULT_EMAIL={{cookiecutter.pgadmin_default_user}}
PGADMIN_DEFAULT_PASSWORD={{cookiecutter.pgadmin_default_user_password}}


@@ -2,12 +2,13 @@
## Backend Requirements
* Docker
* Docker Compose
* [Docker](https://www.docker.com/).
* [Docker Compose](https://docs.docker.com/compose/install/).
* [Poetry](https://python-poetry.org/) for Python package and environment management.
## Frontend Requirements
* Node.js (with `npm`)
* Node.js (with `npm`).
## Backend local development
@@ -53,61 +54,73 @@ If your Docker is not running in `localhost` (the URLs above wouldn't work) chec
### General workflow
Open your editor at `./backend/app/` (instead of the project root: `./`), so that you see an `./app/` directory with your code inside. That way, your editor will be able to find all the imports, etc.
By default, the dependencies are managed with [Poetry](https://python-poetry.org/); go there and install it.
Modify or add SQLAlchemy models in `./backend/app/app/db_models/`, Pydantic models in `./backend/app/app/models/`, API endpoints in `./backend/app/app/api/`, CRUD (Create, Read, Update, Delete) utils in `./backend/app/app/crud/`. The easiest might be to copy the ones for Items (models, endpoints, and CRUD utils) and update them to your needs.
From `./backend/app/` you can install all the dependencies with:
Add and modify tasks to the Celery worker in `./backend/app/app/worker.py`.
```console
$ poetry install
```
Then you can start a shell session with the new environment with:
```console
$ poetry shell
```
Next, open your editor at `./backend/app/` (instead of the project root: `./`), so that you see an `./app/` directory with your code inside. That way, your editor will be able to find all the imports, etc. Make sure your editor uses the environment you just created with Poetry.
Modify or add SQLAlchemy models in `./backend/app/app/models/`, Pydantic schemas in `./backend/app/app/schemas/`, API endpoints in `./backend/app/app/api/`, CRUD (Create, Read, Update, Delete) utils in `./backend/app/app/crud/`. The easiest might be to copy the ones for Items (models, endpoints, and CRUD utils) and update them to your needs.
Add and modify tasks to the Celery worker in `./backend/app/app/worker.py`.
If you need to install any additional package to the worker, add it to the file `./backend/app/celeryworker.dockerfile`.
There is an `.env` file that has some Docker Compose default values that allow you to just run `docker-compose up -d` and start working, while still being able to use and share the same Docker Compose files for deployment, avoiding repetition of code and configuration as much as possible.
### Docker Compose Override
During development, you can change Docker Compose settings that will only affect the local development environment, in the files `docker-compose.dev.*.yml`.
During development, you can change Docker Compose settings that will only affect the local development environment, in the file `docker-compose.override.yml`.
The changes to those files only affect the local development environment, not the production environment. So, you can add "temporal" changes that help the development workflow.
The changes to that file only affect the local development environment, not the production environment. So, you can add "temporary" changes that help the development workflow.
For example, the directory with the backend code is mounted as a Docker "host volume" (in the file `docker-compose.dev.volumes.yml`), mapping the code you change live to the directory inside the container. That allows you to test your changes right away, without having to build the Docker image again. It should only be done during development, for production, you should build the Docker image with a recent version of the backend code. But during development, it allows you to iterate very fast.
For example, the directory with the backend code is mounted as a Docker "host volume", mapping the code you change live to the directory inside the container. That allows you to test your changes right away, without having to build the Docker image again. It should only be done during development, for production, you should build the Docker image with a recent version of the backend code. But during development, it allows you to iterate very fast.
There is a command override in the file `docker-compose.dev.command.yml` that runs `/start-reload.sh` (included in the base image) instead of the default `/start.sh` (also included in the base image). It starts a single server process (instead of multiple, as would be for production) and reloads the process whenever the code changes. As it is in `docker-compose.dev.command.yml`, it only applies to local development. Have in mind that if you have a syntax error and save the Python file, it will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again:
There is also a command override that runs `/start-reload.sh` (included in the base image) instead of the default `/start.sh` (also included in the base image). It starts a single server process (instead of multiple, as would be for production) and reloads the process whenever the code changes. Have in mind that if you have a syntax error and save the Python file, it will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again:
```bash
docker-compose up -d
```console
$ docker-compose up -d
```
There is also a commented out `command` override (in the file `docker-compose.dev.command.yml`), you can uncomment it and comment the default one. It makes the backend container run a process that does "nothing", but keeps the process running. That allows you to get inside your living container and run commands inside, for example a Python interpreter to test installed dependencies, or start the development server that reloads when it detects changes, or start a Jupyter Notebook session.
There is also a commented out `command` override, you can uncomment it and comment the default one. It makes the backend container run a process that does "nothing", but keeps the container alive. That allows you to get inside your running container and execute commands inside, for example a Python interpreter to test installed dependencies, or start the development server that reloads when it detects changes, or start a Jupyter Notebook session.
To get inside the container with a `bash` session you can start the stack with:
```bash
docker-compose up -d
```console
$ docker-compose up -d
```
and then `exec` inside the running container:
```bash
docker-compose exec backend bash
```console
$ docker-compose exec backend bash
```
You should see an output like:
```
```console
root@7f2607af31c3:/app#
```
that means that you are in a `bash` session inside your container, as a `root` user, under the `/app` directory.
There you use the script `/start-reload.sh` to run the debug live reloading server. You can run that script from inside the container with:
There you can use the script `/start-reload.sh` to run the debug live reloading server. You can run that script from inside the container with:
```bash
bash /start-reload.sh
```console
$ bash /start-reload.sh
```
...it will look like:
```bash
```console
root@7f2607af31c3:/app# bash /start-reload.sh
```
@@ -117,46 +130,73 @@ Nevertheless, if it doesn't detect a change but a syntax error, it will just sto
...this previous detail is what makes it useful to have the container alive doing nothing and then, in a Bash session, make it run the live reload server.
### Backend tests
To test the backend run:
```bash
DOMAIN=backend sh ./scripts/test.sh
```console
$ DOMAIN=backend sh ./scripts/test.sh
```
The file `./scripts/test.sh` has the commands to generate a testing `docker-stack.yml` file from the needed Docker Compose files, start the stack and test it.
The file `./scripts/test.sh` has the commands to generate a testing `docker-stack.yml` file, start the stack and test it.
The tests run with Pytest, modify and add tests to `./backend/app/app/tests/`.
If you need to install any additional package for the tests, add it to the file `./backend/app/tests.dockerfile`.
If you use GitLab CI the tests will run automatically.
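As a concrete illustration (not part of the diff above), a minimal `TestClient`-based test in the style introduced by PR #160 might look like this; the `app.main` module path is an assumption:

```python
from fastapi.testclient import TestClient

from app.main import app  # assumed location of the FastAPI app

client = TestClient(app)


def test_docs_available() -> None:
    # The interactive docs are served by FastAPI itself, so this needs no auth.
    response = client.get("/docs")
    assert response.status_code == 200
```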
#### Local tests
Start the stack with this command:
```Bash
DOMAIN=backend sh ./scripts/test-local.sh
```
The `./backend/app` directory is mounted as a "host volume" inside the docker container (set in the file `docker-compose.dev.volumes.yml`).
You can rerun the test on live code:
```Bash
docker-compose exec backend /app/tests-start.sh
```
#### Test running stack
If your stack is already up and you just want to run the tests, you can use:
```bash
docker-compose exec backend-tests /tests-start.sh
docker-compose exec backend /app/tests-start.sh
```
That `/tests-start.sh` script inside the `backend-tests` container calls `pytest`. If you need to pass extra arguments to `pytest`, you can pass them to that command and they will be forwarded.
That `/app/tests-start.sh` script just calls `pytest` after making sure that the rest of the stack is running. If you need to pass extra arguments to `pytest`, you can pass them to that command and they will be forwarded.
For example, to stop on first error:
```bash
docker-compose exec backend-tests /tests-start.sh -x
docker-compose exec backend bash /app/tests-start.sh -x
```
#### Test Coverage
Because the test scripts forward arguments to `pytest`, you can enable test coverage HTML report generation by passing `--cov-report=html`.
To run the local tests with coverage HTML reports:
```Bash
DOMAIN=backend sh ./scripts/test-local.sh --cov-report=html
```
To run the tests in a running stack with coverage HTML reports:
```bash
docker-compose exec backend bash /app/tests-start.sh --cov-report=html
```
### Live development with Python Jupyter Notebooks
If you know about Python [Jupyter Notebooks](http://jupyter.org/), you can take advantage of them during local development.
The `docker-compose.dev.build.yml` file sends a variable `env` with a value `dev` to the build process of the Docker image (during local development) and the `Dockerfile` has steps to then install and configure Jupyter inside your Docker container.
The `docker-compose.override.yml` file sends a variable `env` with a value `dev` to the build process of the Docker image (during local development) and the `Dockerfile` has steps to then install and configure Jupyter inside your Docker container.
So, you can enter into the Docker running container:
So, you can enter into the running Docker container:
```bash
docker-compose exec backend bash
@@ -166,7 +206,7 @@ And use the environment variable `$JUPYTER` to run a Jupyter Notebook with every
It will output something like:
```
```console
root@73e0ec1f1ae6:/app# $JUPYTER
[I 12:02:09.975 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 12:02:10.317 NotebookApp] Serving notebooks from local directory: /app
@@ -189,36 +229,34 @@ http://localhost:8888/token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397
and then open it in your browser.
You will have a full Jupyter Notebook running inside your container, that has direct access to your database by the name container name, etc. So, you can just copy your backend code and run it directly, without needing to modify it.
If you use tools like [Hydrogen](https://github.com/nteract/hydrogen) or [Visual Studio Code Jupyter](https://donjayamanne.github.io/pythonVSCodeDocs/docs/jupyter/), you can use that same modified URL.
You will have a full Jupyter Notebook running inside your container that has direct access to your database by the container name (`db`), etc. So, you can just run sections of your backend code directly, for example with [VS Code Python Jupyter Interactive Window](https://code.visualstudio.com/docs/python/jupyter-support-py) or [Hydrogen](https://github.com/nteract/hydrogen).
### Migrations
As during local development your app directory is mounted as a volume inside the container (set in the file `docker-compose.dev.volumes.yml`), you can also run the migrations with `alembic` commands inside the container and the migration code will be in your app directory (instead of being only inside the container). So you can add it to your git repository.
As during local development your app directory is mounted as a volume inside the container, you can also run the migrations with `alembic` commands inside the container and the migration code will be in your app directory (instead of being only inside the container). So you can add it to your git repository.
Make sure you create a "revision" of your models and that you "upgrade" your database with that revision every time you change them, as this is what will update the tables in your database. Otherwise, your application will have errors.
* Start an interactive session in the backend container:
```bash
docker-compose exec backend bash
```console
$ docker-compose exec backend bash
```
* If you created a new model in `./backend/app/app/db_models/`, make sure to import it in `./backend/app/app/db/base.py`, that Python module (`base.py`) that imports all the models will be used by Alembic.
* If you created a new model in `./backend/app/app/models/`, make sure to import it in `./backend/app/app/db/base.py`, that Python module (`base.py`) that imports all the models will be used by Alembic.
* After changing a model (for example, adding a column), inside the container, create a revision, e.g.:
```bash
alembic revision --autogenerate -m "Add column last_name to User model"
```console
$ alembic revision --autogenerate -m "Add column last_name to User model"
```
* Commit to the git repository the files generated in the alembic directory.
* After creating the revision, run the migration in the database (this is what will actually change the database):
```bash
alembic upgrade head
```console
$ alembic upgrade head
```
If you don't want to use migrations at all, uncomment the line in the file at `./backend/app/app/db/init_db.py` with:
@@ -229,8 +267,8 @@ Base.metadata.create_all(bind=engine)
and comment the line in the file `prestart.sh` that contains:
```bash
alembic upgrade head
```console
$ alembic upgrade head
```
If you don't want to start with the default models and want to remove them / modify them, from the beginning, without having any previous revision, you can remove the revision files (`.py` Python files) under `./backend/app/alembic/versions/`. And then create a first migration as described above.
@@ -251,9 +289,9 @@ After performing those steps you should be able to open: http://local.dockertool
Check all the corresponding available URLs in the section at the end.
### Develpment in `localhost` with a custom domain
### Development in `localhost` with a custom domain
You might want to use something different than `localhost` as the domain. For example, if you are having problems with cookies that need a subdomain, and Chrome is not allowing you to use `localhost`.
In that case, you have two options: you could use the instructions to modify your system `hosts` file with the instructions below in **Development with a custom IP** or you can just use `localhost.tiangolo.com`, it is set up to point to `localhost` (to the IP `127.0.0.1`) and all its subdomains too. And as it is an actual domain, the browsers will store the cookies you set during development, etc.
@@ -300,7 +338,7 @@ Check all the corresponding available URLs in the section at the end.
If you need to use your local stack with a different domain than `localhost`, you need to make sure the domain you use points to the IP where your stack is set up. See the different ways to achieve that in the sections above (i.e. using Docker Toolbox with `local.dockertoolbox.tiangolo.com`, using `localhost.tiangolo.com` or using `dev.{{cookiecutter.domain_main}}`).
To simplify your Docker Compose setup, for example, so that the API explorer, Swagger UI, knows where is your API, you should let it know you are using that domain for development. You will need to edit 1 line in 2 files.
To simplify your Docker Compose setup, for example, so that the API docs (Swagger UI) knows where your API is, you should let it know you are using that domain for development. You will need to edit 1 line in 2 files.
* Open the file located at `./.env`. It would have a line like:
@@ -314,7 +352,7 @@ DOMAIN=localhost
DOMAIN=localhost.tiangolo.com
```
That variable will be used by some of the local development `docker-compose.dev.*.yml` files, for example, to tell Swagger UI to use that domain for the API.
That variable will be used by the Docker Compose files.
* Now open the file located at `./frontend/.env`. It would have a line like:
@@ -350,11 +388,11 @@ npm run serve
Then open your browser at http://localhost:8080
Notice that this live server is not running inside Docker, it is for local development, and that is the recommended workflow. Once you are happy with your frontend, you can build the frontend Docker image and start it, to test it in a production-like environment. But compiling the image at every change will not be as productive as running the local development server.
Notice that this live server is not running inside Docker, it is for local development, and that is the recommended workflow. Once you are happy with your frontend, you can build the frontend Docker image and start it, to test it in a production-like environment. But compiling the image at every change will not be as productive as running the local development server with live reload.
Check the file `package.json` to see other available options.
If you have Vue CLI installed, you can also run `vue ui` to control, configure, serve and analyse your application using a nice local web user interface.
If you have Vue CLI installed, you can also run `vue ui` to control, configure, serve, and analyze your application using a nice local web user interface.
If you are only developing the frontend (e.g. other team members are developing the backend) and there is a staging environment already deployed, you can make your local development code use that staging API instead of a full local Docker Compose stack.
@@ -372,6 +410,26 @@ VUE_APP_ENV=development
VUE_APP_ENV=staging
```
### Removing the frontend
If you are developing an API-only app and want to remove the frontend, you can do it easily:
* Remove the `./frontend` directory.
* In the `docker-compose.yml` file, remove the whole service / section `frontend`.
* In the `docker-compose.override.yml` file, remove the whole service / section `frontend`.
Done, you have a frontend-less (api-only) app. 🔥 🚀
---
If you want, you can also remove the `FRONTEND` environment variables from:
* `.env`
* `.gitlab-ci.yml`
* `./scripts/*.sh`
But that would only be to clean them up; leaving them won't really have any effect either way.
## Deployment
You can deploy the stack to a Docker Swarm mode cluster with a main Traefik proxy, set up using the ideas from <a href="https://dockerswarm.rocks" target="_blank">DockerSwarm.rocks</a>, to get automatic HTTPS certificates, etc.
@@ -380,6 +438,24 @@ And you can use CI (continuous integration) systems to do it automatically.
But you have to configure a couple things first.
### Traefik network
This stack expects the public Traefik network to be named `traefik-public`, just as in the tutorials in <a href="https://dockerswarm.rocks" class="external-link" target="_blank">DockerSwarm.rocks</a>.
If you need to use a different Traefik public network name, update it in the `docker-compose.yml` files, in the section:
```YAML
networks:
  traefik-public:
    external: true
```
Change `traefik-public` to the name of the used Traefik network. And then update it in the file `.env`:
```bash
TRAEFIK_PUBLIC_NETWORK=traefik-public
```
### Persisting Docker named volumes
You need to make sure that each service (Docker container) that uses a volume is always deployed to the same Docker "node" in the cluster, that way it will preserve the data. Otherwise, it could be deployed to a different node each time, and each time the volume would be created in that new node before starting the service. As a result, it would look like your service was starting from scratch every time, losing all the previous data.
@@ -388,14 +464,13 @@ That's specially important for a service running a database. But the same proble
To solve that, you can put constraints on the services that use one or more data volumes (like databases), so that they are deployed to a Docker node with a specific label. And of course, you need to have that label assigned to one (and only one) of your nodes.
#### Adding services with volumes
For each service that uses a volume (databases, services with uploaded files, etc) you should have a label constraint in your `docker-compose.deploy.volumes-placement.yml` file.
For each service that uses a volume (databases, services with uploaded files, etc) you should have a label constraint in your `docker-compose.yml` file.
To make sure that your labels are unique per volume per stack (for examlpe, that they are not the same for `prod` and `stag`) you should prefix them with the name of your stack and then use the same name of the volume.
To make sure that your labels are unique per volume per stack (for example, that they are not the same for `prod` and `stag`) you should prefix them with the name of your stack and then use the same name of the volume.
Then you need to have those constraints in your deployment Docker Compose file for the services that need to be fixed with each volume.
Then you need to have those constraints in your `docker-compose.yml` file for the services that need to be fixed with each volume.
To be able to use different environments, like `prod` and `stag`, you should pass the name of the stack as an environment variable. Like:
@@ -403,7 +478,7 @@ To be able to use different environments, like `prod` and `stag`, you should pas
```bash
STACK_NAME={{cookiecutter.docker_swarm_stack_name_staging}} sh ./scripts/deploy.sh
```
To use and expand that environment variable inside the `docker-compose.deploy.volumes-placement.yml` files you can add the constraints to the services like:
To use and expand that environment variable inside the `docker-compose.yml` files you can add the constraints to the services like:
```yaml
version: '3'
@@ -414,10 +489,10 @@ services:
deploy:
placement:
constraints:
- node.labels.${STACK_NAME}.app-db-data == true
- node.labels.${STACK_NAME?Variable not set}.app-db-data == true
```
note the `${STACK_NAME}`. In the script `./scripts/deploy.sh`, that `docker-compose.deploy.volumes-placement.yml` would be converted, and saved to a file `docker-stack.yml` containing:
Note the `${STACK_NAME?Variable not set}`. In the script `./scripts/deploy.sh`, the `docker-compose.yml` would be converted, and saved to a file `docker-stack.yml` containing:
```yaml
version: '3'
@@ -431,11 +506,12 @@ services:
- node.labels.{{cookiecutter.docker_swarm_stack_name_main}}.app-db-data == true
```
**Note**: The `${STACK_NAME?Variable not set}` means "use the environment variable `STACK_NAME`, but if it is not set, show an error `Variable not set`".
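You can check that behavior directly in any Bash shell, for example:

```bash
unset STACK_NAME
# This aborts with an error like: STACK_NAME: Variable not set
echo "label prefix: ${STACK_NAME?Variable not set}"

STACK_NAME=my-stack
# Now it expands normally and prints: label prefix: my-stack
echo "label prefix: ${STACK_NAME?Variable not set}"
```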
If you add more volumes to your stack, you need to make sure you add the corresponding constraints to the services that use that named volume.
Then you have to create those labels in some nodes in your Docker Swarm mode cluster. You can use `docker-auto-labels` to do it automatically.
#### `docker-auto-labels`
You can use [`docker-auto-labels`](https://github.com/tiangolo/docker-auto-labels) to automatically read the placement constraint labels in your Docker stack (Docker Compose file) and assign them to a random Docker node in your Swarm mode cluster if those labels don't exist yet.
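For example, assuming it is installed from PyPI under that same name and you already have a generated `docker-stack.yml`:

```bash
# Assumption: docker-auto-labels is installable with pip under this name
pip install docker-auto-labels

# Read the placement constraint labels from the stack file and make sure
# each one exists on some node of the current Swarm cluster
docker-auto-labels docker-stack.yml
```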
@@ -462,13 +538,12 @@ If you don't want to use `docker-auto-labels` or for any reason you want to manu
* Then check the available nodes with:
```bash
docker node ls
```

you would see an output like:

```console
$ docker node ls

// you would see an output like:

ID                            HOSTNAME          STATUS    AVAILABILITY   MANAGER STATUS
nfa3d4df2df34as2fd34230rm *   dog.example.com   Ready     Active         Reachable
2c2sd2342asdfasd42342304e     cat.example.com   Ready     Active         Leader
```
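Then you could attach the label to one of those nodes using its ID; a sketch with illustrative values for the node ID and stack name:

```bash
# Attach the placement label to a single chosen node (values are illustrative)
docker node update --label-add my-project-stag.app-db-data=true nfa3d4df2df34as2fd34230rm
```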
@@ -503,7 +578,7 @@ Here are the steps in detail:
1. **Build your app images**
* Set these environment variables, prepended to the next command:
* Set these environment variables, right before the next command:
* `TAG=prod`
* `FRONTEND_ENV=production`
* Use the provided `scripts/build.sh` file with those environment variables:
@@ -550,30 +625,37 @@ If you change your mind and, for example, want to deploy everything to a differe
#### Deployment Technical Details
For the 3 steps (build, push, deploy) you need a generated `docker-stack.yml`, it is generated using the `docker-compose` command with some of the `docker-compose.*.yml` files. As each of these steps uses different `docker-compose.*.yml` files, the generated `docker-stack.yml` file is slightly different. But it's all generated by the scripts.
Building and pushing is done with the `docker-compose.yml` file, using the `docker-compose` command. The file `docker-compose.yml` uses the file `.env` with default environment variables. And the scripts set some additional environment variables as well.
You can do the process by hand based on those same scripts if you wanted. The general structure of the scripts is like this:
The deployment requires using `docker stack` instead of `docker-compose`, and `docker stack` can't read environment variables or `.env` files. Because of that, the `deploy.sh` script generates a file `docker-stack.yml` with the configurations from `docker-compose.yml`, injecting the environment variables into it, and then uses it to deploy the stack.
You can do the process by hand based on those same scripts if you want. The general structure is like this:
```bash
# Use the environment variables passed to this script, as TAG and FRONTEND_ENV
# And re-create those variables as environment variables for the next command
TAG=${TAG} \
TAG=${TAG?Variable not set} \
# Set the environment variable FRONTEND_ENV to the same value passed to this script with
# a default value of "production" if nothing else was passed
FRONTEND_ENV=${FRONTEND_ENV-production} \
FRONTEND_ENV=${FRONTEND_ENV-production?Variable not set} \
# The actual command that does the work: docker-compose
docker-compose \
# Pass the files that should be used at this stage, the set of files changes in each script / each stage
-f docker-compose.deploy.build.yml \
-f docker-compose.deploy.images.yml \
# Use the docker-compose sub command named "config", it just uses the docker-compose.*.yml files passed
# to it and prints their combined contents
# Pass the file that should be used, setting explicitly docker-compose.yml avoids the
# default of also using docker-compose.override.yml
-f docker-compose.yml \
# Use the docker-compose sub command named "config", it just uses the docker-compose.yml
# file passed to it and prints the resulting contents
# Put those contents in a file "docker-stack.yml", with ">"
config > docker-stack.yml
# The previous only generated a docker-stack.yml file, but didn't do anything with it
# Now this command uses that same file and does some operation with it, in this case, build it
docker-compose -f docker-stack.yml build
# The previous only generated a docker-stack.yml file,
# but didn't do anything with it yet
# docker-auto-labels makes sure the labels used for constraints exist in the cluster
docker-auto-labels docker-stack.yml
# Now this command uses that same file to deploy it
docker stack deploy -c docker-stack.yml --with-registry-auth "${STACK_NAME?Variable not set}"
```
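Putting the variables and the script together, a by-hand deployment could look something like this; the domain, tag, and stack name are illustrative:

```bash
# Illustrative values; adjust them to your project and environment
TAG=stag \
FRONTEND_ENV=staging \
DOMAIN=stag.example.com \
STACK_NAME=my-project-stag \
bash ./scripts/deploy.sh
```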
### Continuous Integration / Continuous Delivery
@@ -587,67 +669,37 @@ GitLab CI is configured assuming 2 environments following GitLab flow:
* `prod` (production) from the `production` branch.
* `stag` (staging) from the `master` branch.
If you need to add more environments, for example, you could imagine using a client-approved `preprod` branch, you can just copy the configurations in `.gitlab-ci.yml` for `stag` and rename the corresponding variables. All the Docker Compose files are configured to support as many environments as you need, so that you only need to modify `.gitlab-ci.yml` (or whichever CI system configuration you are using).
If you need to add more environments (for example, you could imagine using a client-approved `preprod` branch), you can just copy the configurations in `.gitlab-ci.yml` for `stag` and rename the corresponding variables. The Docker Compose file and environment variables are configured to support as many environments as you need, so that you only need to modify `.gitlab-ci.yml` (or whichever CI system configuration you are using).
## Docker Compose files and env vars
## Docker Compose files
There is a main `docker-compose.yml` file with all the configurations that apply to the whole stack; it is used automatically by `docker-compose`.
There are several Docker Compose files, each with a specific purpose.
And there's also a `docker-compose.override.yml` with overrides for development, for example to mount the source code as a volume. It is used automatically by `docker-compose` to apply overrides on top of `docker-compose.yml`.
They are designed to support several "stages", like development, building, testing, and deployment. Also, allowing the deployment to different environments like staging and production (and you can add more environments very easily).
These Docker Compose files use the `.env` file containing configurations to be injected as environment variables in the containers.
They are designed to have the minimum repetition of code and configurations, so that if you need to change something, you have to change it in the minimum amount of places. That's why several of the files use environment variables that get auto-expanded. That way, if for example, you want to use a different domain, you can call the `docker-compose` command with a different `DOMAIN` environment variable instead of having to change the domain in several places inside the Docker Compose files.
They also use some additional configurations taken from environment variables set in the scripts before calling the `docker-compose` command.
It is all designed to support several "stages", like development, building, testing, and deployment. Also, allowing the deployment to different environments like staging and production (and you can add more environments very easily).
They are designed to have the minimum repetition of code and configurations, so that if you need to change something, you have to change it in the minimum amount of places. That's why the files use environment variables that get auto-expanded. That way, if for example you want to use a different domain, you can call the `docker-compose` command with a different `DOMAIN` environment variable instead of having to change the domain in several places inside the Docker Compose files.
Also, if you want to have another deployment environment, say `preprod`, you just have to change environment variables, but you can keep using the same Docker Compose files.
Because of that, for each "stage" (development, building, testing, deployment) you would use a different set of Docker Compose files.
### The .env file
But you probably don't have to worry about the different files, for building, testing and deployment, you would probably use a CI system (like GitLab CI) and the different configured files would be already set there.
The `.env` file is the one that contains all your configurations, generated keys and passwords, etc.
And for development, there's a `.env` file that will be automatically used by `docker-compose` locally, with the default configurations already set for local development, including environment variables. So, for local development you can just run:
Depending on your workflow, you could want to exclude it from Git, for example if your project is public. In that case, you would have to make sure to set up a way for your CI tools to obtain it while building or deploying your project.
```bash
docker-compose up -d
```
and it will do the right thing.
They are also separated by the common tasks and functionalities they solve, and they are named accordingly. So, although there are many Docker Compose files, each one has a name that shows what should be in there, and the contents tend to be small and specific. That makes it easier to modify or add configurations, as you can go directly to the relevant file.
The `docker-compose.deploy.*.yml` files are only used at deployment, be it to production or any other environment. They build the images in production mode (not installing debugging packages), set configurations for Docker Swarm mode, etc.
The `docker-compose.dev.*.yml` files are only used during development. They have overrides and tools for development, like mounting app volumes directly inside the container to iterate fast, mapping ports directly to your machine, installing debugging packages, etc.
The `docker-compose.test.yml` file is used for testing, during development and in a CI environment running tests, but not used in deployment to production (or staging or any other deployment environment of the final code).
The `docker-compose.shared.*.yml` files are used at several stages and contain stuff shared by several stages: development, testing, deployment. They have things like the databases or the environment variables that are used by all the main services / containers, during development, testing and deployment. There's also a file for `admin`, with utils needed for development and production, like the Swagger UI interactive API documentation system. But that file is not used during testing (in CI environments), as it is not needed or used at that stage.
The purpose of each Docker Compose file is:
* `docker-compose.deploy.build.yml`: build directories and `Dockerfile`s, for deployment (the building process for development has a little difference).
* `docker-compose.deploy.command.yml`: command overrides for images only during deployment. Initially only for the main Traefik proxy, making it run in a Docker Swarm mode cluster.
* `docker-compose.deploy.images.yml`: image names to be created, with environment variables for the specific tag.
* `docker-compose.deploy.labels.yml`: labels for deployment, the configurations to make the internal Traefik proxy serve some services on specific URLs, some with basic HTTP auth, etc. Also labels used in the internal Traefik proxy container to make it talk to the public Traefik proxy (outside of this stack) and make it send requests for this domain, generate HTTPS certificates, etc.
* `docker-compose.deploy.networks.yml`: networks that have to be used and shared by containers that need to be able to talk to the public Traefik proxy (when a service requires a domain for itself).
* `docker-compose.deploy.volumes-placement.yml`: volume declarations, volumes used by stateful services (as databases) and volume placement constraints, to make those services always run on the node that has their volumes, even after stack updates.
* `docker-compose.dev.build.yml`: build directories and `Dockerfile`s, for local development; sets a build-time argument that is then used in the `Dockerfile`s to install and configure helper tools exclusively for development.
* `docker-compose.dev.command.yml`: command overrides for local development. To tell the internal Traefik proxy to work with a local Docker in the host instead of a Docker Swarm mode cluster. And (commented out but ready to be used) overrides to make the containers run an infinite loop that keeps them alive, so you can run the development server manually or do any other interactive work.
* `docker-compose.dev.env.yml`: development environment variable overrides.
* `docker-compose.dev.labels.yml`: local development labels, to be used by the local development Traefik proxy. They have to be declared in a different place than for deployment.
* `docker-compose.dev.networks.yml`: local development networks, to enable interactively talking to the backend.
* `docker-compose.dev.ports.yml`: local development port mappings.
* `docker-compose.dev.volumes.yml`: local development mounted volumes, mainly to map the development code directory inside the container, for fast development without needing to re-build the images.
* `docker-compose.shared.admin.yml`: additional services for administration or utilities with their configurations, like PGAdmin and Swagger, that are not needed during testing and use external images (they don't need to be built and don't create new images).
* `docker-compose.shared.base-images.yml`: base Docker images used without modification for shared services, as databases. Used in deployment, development, testing, etc.
* `docker-compose.shared.depends.yml`: dependencies between main services with `depends_on`, used in deployment, development, testing, etc.
* `docker-compose.shared.env.yml`: environment variables used by services, as database passwords, secret keys, etc.
* `docker-compose.test.yml`: specific additional container to be used only during testing, mainly the container that tests the backend and the APIs.
One way to do it could be to add each environment variable to your CI/CD system, and to update the `docker-compose.yml` file to read those specific env vars instead of reading the `.env` file.
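As a sketch of that approach, a CI job could export the variables from its secret store and generate the stack file without any committed `.env` file; the variable values here are placeholders:

```bash
# Illustrative CI step: secrets come from the CI secret store,
# not from a committed .env file
export SECRET_KEY="<from-ci-secrets>"
export POSTGRES_PASSWORD="<from-ci-secrets>"
export FIRST_SUPERUSER_PASSWORD="<from-ci-secrets>"

# Generate the expanded stack file from the main Compose file
docker-compose -f docker-compose.yml config > docker-stack.yml
```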
## URLs
These are the URLs that will be used and generated by the project.
### Production
### Production URLs
Production URLs, from the branch `production`.
@@ -663,7 +715,7 @@ PGAdmin: https://pgadmin.{{cookiecutter.domain_main}}
Flower: https://flower.{{cookiecutter.domain_main}}
### Staging
### Staging URLs
Staging URLs, from the branch `master`.
@@ -678,8 +730,8 @@ Automatic Alternative Docs (ReDoc): https://{{cookiecutter.domain_staging}}/redo
PGAdmin: https://pgadmin.{{cookiecutter.domain_staging}}
Flower: https://flower.{{cookiecutter.domain_staging}}
### Development
### Development URLs
Development URLs, for local development.
@@ -697,7 +749,7 @@ Flower: http://localhost:5555
Traefik UI: http://localhost:8090
### Development with Docker Toolbox
### Development with Docker Toolbox URLs
Development URLs, for local development.
@@ -715,7 +767,7 @@ Flower: http://local.dockertoolbox.tiangolo.com:5555
Traefik UI: http://local.dockertoolbox.tiangolo.com:8090
### Development with a custom IP
### Development with a custom IP URLs
Development URLs, for local development.
@@ -733,7 +785,7 @@ Flower: http://dev.{{cookiecutter.domain_main}}:5555
Traefik UI: http://dev.{{cookiecutter.domain_main}}:8090
### Development in localhost with a custom domain
### Development in localhost with a custom domain URLs
Development URLs, for local development.
@@ -764,7 +816,7 @@ You can check the variables used during generation in the file `cookiecutter-con
You can generate the project again with the same configurations used the first time.
That would be useful if, for example, the project generator (`tiangolo/full-stack-fastapi-postgresql`) was updated and you want to integrate or review the changes.
That would be useful if, for example, the project generator (`tiangolo/full-stack-fastapi-postgresql`) was updated and you wanted to integrate or review the changes.
You could generate a new project with the same configurations as this one in a parallel directory. And compare the differences between the two, without having to overwrite your current code but being able to use the same variables used for your current project.
@@ -774,8 +826,8 @@ You can use that file while generating a new project to reuse all those variable
For example, run:
```bash
cookiecutter --config-file ./cookiecutter-config-file.yml --output-dir ../project-copy https://github.com/tiangolo/full-stack-fastapi-postgresql
```

```console
$ cookiecutter --config-file ./cookiecutter-config-file.yml --output-dir ../project-copy https://github.com/tiangolo/full-stack-fastapi-postgresql
```
That will use the file `cookiecutter-config-file.yml` in the current directory (in this project) to generate a new project inside a sibling directory `project-copy`.

View File

@@ -1 +1,2 @@
__pycache__
app.egg-info

View File

@@ -0,0 +1,3 @@
[flake8]
max-line-length = 88
exclude = .git,__pycache__,__init__.py,.mypy_cache,.pytest_cache

View File

@@ -0,0 +1,3 @@
.mypy_cache
.coverage
htmlcov

View File

@@ -1,39 +0,0 @@
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
mypy = "*"
black = "*"
jupyter = "*"
isort = "*"
autoflake = "*"
flake8 = "*"
pytest = "*"
vulture = "*"
[packages]
fastapi = "*"
uvicorn = "*"
pyjwt = "*"
python-multipart = "*"
email-validator = "*"
requests = "*"
celery = "*"
passlib = {extras = ["bcrypt"],version = "*"}
tenacity = "*"
pydantic = "*"
emails = "*"
raven = "*"
gunicorn = "*"
jinja2 = "*"
psycopg2-binary = "*"
alembic = "*"
sqlalchemy = "*"
[requires]
python_version = "3.6"
[pipenv]
allow_prereleases = true

View File

@@ -67,11 +67,9 @@ def run_migrations_online():
"""
configuration = config.get_section(config.config_ini_section)
configuration['sqlalchemy.url'] = get_url()
configuration["sqlalchemy.url"] = get_url()
connectable = engine_from_config(
configuration,
prefix="sqlalchemy.",
poolclass=pool.NullPool,
configuration, prefix="sqlalchemy.", poolclass=pool.NullPool,
)
with connectable.connect() as connection:

View File

@@ -1,7 +1,7 @@
"""First revision
Revision ID: d4867f3a4c0a
Revises:
Revises:
Create Date: 2019-04-17 13:53:32.978401
"""
@@ -10,7 +10,7 @@ import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'd4867f3a4c0a'
revision = "d4867f3a4c0a"
down_revision = None
branch_labels = None
depends_on = None
@@ -18,40 +18,42 @@ depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('user',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('full_name', sa.String(), nullable=True),
sa.Column('email', sa.String(), nullable=True),
sa.Column('hashed_password', sa.String(), nullable=True),
sa.Column('is_active', sa.Boolean(), nullable=True),
sa.Column('is_superuser', sa.Boolean(), nullable=True),
sa.PrimaryKeyConstraint('id')
op.create_table(
"user",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("full_name", sa.String(), nullable=True),
sa.Column("email", sa.String(), nullable=True),
sa.Column("hashed_password", sa.String(), nullable=True),
sa.Column("is_active", sa.Boolean(), nullable=True),
sa.Column("is_superuser", sa.Boolean(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_index(op.f('ix_user_email'), 'user', ['email'], unique=True)
op.create_index(op.f('ix_user_full_name'), 'user', ['full_name'], unique=False)
op.create_index(op.f('ix_user_id'), 'user', ['id'], unique=False)
op.create_table('item',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('title', sa.String(), nullable=True),
sa.Column('description', sa.String(), nullable=True),
sa.Column('owner_id', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['owner_id'], ['user.id'], ),
sa.PrimaryKeyConstraint('id')
op.create_index(op.f("ix_user_email"), "user", ["email"], unique=True)
op.create_index(op.f("ix_user_full_name"), "user", ["full_name"], unique=False)
op.create_index(op.f("ix_user_id"), "user", ["id"], unique=False)
op.create_table(
"item",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("title", sa.String(), nullable=True),
sa.Column("description", sa.String(), nullable=True),
sa.Column("owner_id", sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(["owner_id"], ["user.id"],),
sa.PrimaryKeyConstraint("id"),
)
op.create_index(op.f('ix_item_description'), 'item', ['description'], unique=False)
op.create_index(op.f('ix_item_id'), 'item', ['id'], unique=False)
op.create_index(op.f('ix_item_title'), 'item', ['title'], unique=False)
op.create_index(op.f("ix_item_description"), "item", ["description"], unique=False)
op.create_index(op.f("ix_item_id"), "item", ["id"], unique=False)
op.create_index(op.f("ix_item_title"), "item", ["title"], unique=False)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index(op.f('ix_item_title'), table_name='item')
op.drop_index(op.f('ix_item_id'), table_name='item')
op.drop_index(op.f('ix_item_description'), table_name='item')
op.drop_table('item')
op.drop_index(op.f('ix_user_id'), table_name='user')
op.drop_index(op.f('ix_user_full_name'), table_name='user')
op.drop_index(op.f('ix_user_email'), table_name='user')
op.drop_table('user')
op.drop_index(op.f("ix_item_title"), table_name="item")
op.drop_index(op.f("ix_item_id"), table_name="item")
op.drop_index(op.f("ix_item_description"), table_name="item")
op.drop_table("item")
op.drop_index(op.f("ix_user_id"), table_name="user")
op.drop_index(op.f("ix_user_full_name"), table_name="user")
op.drop_index(op.f("ix_user_email"), table_name="user")
op.drop_table("user")
# ### end Alembic commands ###

View File

@@ -1,24 +1,21 @@
from typing import List
from typing import Any, List
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from app import crud
from app.api.utils.db import get_db
from app.api.utils.security import get_current_active_user
from app.db_models.user import User as DBUser
from app.models.item import Item, ItemCreate, ItemUpdate
from app import crud, models, schemas
from app.api import deps
router = APIRouter()
@router.get("/", response_model=List[Item])
@router.get("/", response_model=List[schemas.Item])
def read_items(
db: Session = Depends(get_db),
db: Session = Depends(deps.get_db),
skip: int = 0,
limit: int = 100,
current_user: DBUser = Depends(get_current_active_user),
):
current_user: models.User = Depends(deps.get_current_active_user),
) -> Any:
"""
Retrieve items.
"""
@@ -26,77 +23,77 @@ def read_items(
items = crud.item.get_multi(db, skip=skip, limit=limit)
else:
items = crud.item.get_multi_by_owner(
db_session=db, owner_id=current_user.id, skip=skip, limit=limit
db=db, owner_id=current_user.id, skip=skip, limit=limit
)
return items
@router.post("/", response_model=Item)
@router.post("/", response_model=schemas.Item)
def create_item(
*,
db: Session = Depends(get_db),
item_in: ItemCreate,
current_user: DBUser = Depends(get_current_active_user),
):
db: Session = Depends(deps.get_db),
item_in: schemas.ItemCreate,
current_user: models.User = Depends(deps.get_current_active_user),
) -> Any:
"""
Create new item.
"""
item = crud.item.create(db_session=db, item_in=item_in, owner_id=current_user.id)
item = crud.item.create_with_owner(db=db, obj_in=item_in, owner_id=current_user.id)
return item
@router.put("/{id}", response_model=Item)
@router.put("/{id}", response_model=schemas.Item)
def update_item(
*,
db: Session = Depends(get_db),
db: Session = Depends(deps.get_db),
id: int,
item_in: ItemUpdate,
current_user: DBUser = Depends(get_current_active_user),
):
item_in: schemas.ItemUpdate,
current_user: models.User = Depends(deps.get_current_active_user),
) -> Any:
"""
Update an item.
"""
item = crud.item.get(db_session=db, id=id)
item = crud.item.get(db=db, id=id)
if not item:
raise HTTPException(status_code=404, detail="Item not found")
if not crud.user.is_superuser(current_user) and (item.owner_id != current_user.id):
raise HTTPException(status_code=400, detail="Not enough permissions")
item = crud.item.update(db_session=db, item=item, item_in=item_in)
item = crud.item.update(db=db, db_obj=item, obj_in=item_in)
return item
@router.get("/{id}", response_model=Item)
def read_user_me(
@router.get("/{id}", response_model=schemas.Item)
def read_item(
*,
db: Session = Depends(get_db),
db: Session = Depends(deps.get_db),
id: int,
current_user: DBUser = Depends(get_current_active_user),
):
current_user: models.User = Depends(deps.get_current_active_user),
) -> Any:
"""
Get item by ID.
"""
item = crud.item.get(db_session=db, id=id)
if not item:
raise HTTPException(status_code=400, detail="Item not found")
if not crud.user.is_superuser(current_user) and (item.owner_id != current_user.id):
raise HTTPException(status_code=400, detail="Not enough permissions")
return item
@router.delete("/{id}", response_model=Item)
def delete_item(
*,
db: Session = Depends(get_db),
id: int,
current_user: DBUser = Depends(get_current_active_user),
):
"""
Delete an item.
"""
item = crud.item.get(db_session=db, id=id)
item = crud.item.get(db=db, id=id)
if not item:
raise HTTPException(status_code=404, detail="Item not found")
if not crud.user.is_superuser(current_user) and (item.owner_id != current_user.id):
raise HTTPException(status_code=400, detail="Not enough permissions")
item = crud.item.remove(db_session=db, id=id)
return item
@router.delete("/{id}", response_model=schemas.Item)
def delete_item(
*,
db: Session = Depends(deps.get_db),
id: int,
current_user: models.User = Depends(deps.get_current_active_user),
) -> Any:
"""
Delete an item.
"""
item = crud.item.get(db=db, id=id)
if not item:
raise HTTPException(status_code=404, detail="Item not found")
if not crud.user.is_superuser(current_user) and (item.owner_id != current_user.id):
raise HTTPException(status_code=400, detail="Not enough permissions")
item = crud.item.remove(db=db, id=id)
return item

View File

@@ -1,19 +1,15 @@
from datetime import timedelta
from typing import Any
from fastapi import APIRouter, Body, Depends, HTTPException
from fastapi.security import OAuth2PasswordRequestForm
from sqlalchemy.orm import Session
from app import crud
from app.api.utils.db import get_db
from app.api.utils.security import get_current_user
from app.core import config
from app.core.jwt import create_access_token
from app import crud, models, schemas
from app.api import deps
from app.core import security
from app.core.config import settings
from app.core.security import get_password_hash
from app.db_models.user import User as DBUser
from app.models.msg import Msg
from app.models.token import Token
from app.models.user import User
from app.utils import (
generate_password_reset_token,
send_reset_password_email,
@@ -23,10 +19,10 @@ from app.utils import (
router = APIRouter()
@router.post("/login/access-token", response_model=Token, tags=["login"])
@router.post("/login/access-token", response_model=schemas.Token)
def login_access_token(
db: Session = Depends(get_db), form_data: OAuth2PasswordRequestForm = Depends()
):
db: Session = Depends(deps.get_db), form_data: OAuth2PasswordRequestForm = Depends()
) -> Any:
"""
OAuth2 compatible token login, get an access token for future requests
"""
@@ -37,25 +33,25 @@ def login_access_token(
raise HTTPException(status_code=400, detail="Incorrect email or password")
elif not crud.user.is_active(user):
raise HTTPException(status_code=400, detail="Inactive user")
access_token_expires = timedelta(minutes=config.ACCESS_TOKEN_EXPIRE_MINUTES)
access_token_expires = timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES)
return {
"access_token": create_access_token(
data={"user_id": user.id}, expires_delta=access_token_expires
"access_token": security.create_access_token(
user.id, expires_delta=access_token_expires
),
"token_type": "bearer",
}
@router.post("/login/test-token", tags=["login"], response_model=User)
def test_token(current_user: DBUser = Depends(get_current_user)):
@router.post("/login/test-token", response_model=schemas.User)
def test_token(current_user: models.User = Depends(deps.get_current_user)) -> Any:
"""
Test access token
"""
return current_user
@router.post("/password-recovery/{email}", tags=["login"], response_model=Msg)
def recover_password(email: str, db: Session = Depends(get_db)):
@router.post("/password-recovery/{email}", response_model=schemas.Msg)
def recover_password(email: str, db: Session = Depends(deps.get_db)) -> Any:
"""
Password Recovery
"""
@@ -73,8 +69,12 @@ def recover_password(email: str, db: Session = Depends(get_db)):
return {"msg": "Password recovery email sent"}
@router.post("/reset-password/", tags=["login"], response_model=Msg)
def reset_password(token: str = Body(...), new_password: str = Body(...), db: Session = Depends(get_db)):
@router.post("/reset-password/", response_model=schemas.Msg)
def reset_password(
token: str = Body(...),
new_password: str = Body(...),
db: Session = Depends(deps.get_db),
) -> Any:
"""
Reset password
"""

View File

@@ -1,28 +1,25 @@
from typing import List
from typing import Any, List
from fastapi import APIRouter, Body, Depends, HTTPException
from fastapi.encoders import jsonable_encoder
from pydantic.types import EmailStr
from pydantic.networks import EmailStr
from sqlalchemy.orm import Session
from app import crud
from app.api.utils.db import get_db
from app.api.utils.security import get_current_active_superuser, get_current_active_user
from app.core import config
from app.db_models.user import User as DBUser
from app.models.user import User, UserCreate, UserInDB, UserUpdate
from app import crud, models, schemas
from app.api import deps
from app.core.config import settings
from app.utils import send_new_account_email
router = APIRouter()
@router.get("/", response_model=List[User])
@router.get("/", response_model=List[schemas.User])
def read_users(
db: Session = Depends(get_db),
db: Session = Depends(deps.get_db),
skip: int = 0,
limit: int = 100,
current_user: DBUser = Depends(get_current_active_superuser),
):
current_user: models.User = Depends(deps.get_current_active_superuser),
) -> Any:
"""
Retrieve users.
"""
@@ -30,13 +27,13 @@ def read_users(
return users
@router.post("/", response_model=User)
@router.post("/", response_model=schemas.User)
def create_user(
*,
db: Session = Depends(get_db),
user_in: UserCreate,
current_user: DBUser = Depends(get_current_active_superuser),
):
db: Session = Depends(deps.get_db),
user_in: schemas.UserCreate,
current_user: models.User = Depends(deps.get_current_active_superuser),
) -> Any:
"""
Create new user.
"""
@@ -46,64 +43,64 @@ def create_user(
status_code=400,
detail="The user with this username already exists in the system.",
)
user = crud.user.create(db, user_in=user_in)
if config.EMAILS_ENABLED and user_in.email:
user = crud.user.create(db, obj_in=user_in)
if settings.EMAILS_ENABLED and user_in.email:
send_new_account_email(
email_to=user_in.email, username=user_in.email, password=user_in.password
)
return user
@router.put("/me", response_model=User)
@router.put("/me", response_model=schemas.User)
def update_user_me(
*,
db: Session = Depends(get_db),
db: Session = Depends(deps.get_db),
password: str = Body(None),
full_name: str = Body(None),
email: EmailStr = Body(None),
current_user: DBUser = Depends(get_current_active_user),
):
current_user: models.User = Depends(deps.get_current_active_user),
) -> Any:
"""
Update own user.
"""
current_user_data = jsonable_encoder(current_user)
user_in = UserUpdate(**current_user_data)
user_in = schemas.UserUpdate(**current_user_data)
if password is not None:
user_in.password = password
if full_name is not None:
user_in.full_name = full_name
if email is not None:
user_in.email = email
user = crud.user.update(db, user=current_user, user_in=user_in)
user = crud.user.update(db, db_obj=current_user, obj_in=user_in)
return user
@router.get("/me", response_model=User)
@router.get("/me", response_model=schemas.User)
def read_user_me(
db: Session = Depends(get_db),
current_user: DBUser = Depends(get_current_active_user),
):
db: Session = Depends(deps.get_db),
current_user: models.User = Depends(deps.get_current_active_user),
) -> Any:
"""
Get current user.
"""
return current_user
@router.post("/open", response_model=User)
@router.post("/open", response_model=schemas.User)
def create_user_open(
*,
db: Session = Depends(get_db),
db: Session = Depends(deps.get_db),
password: str = Body(...),
email: EmailStr = Body(...),
full_name: str = Body(None),
):
) -> Any:
"""
Create new user without the need to be logged in.
"""
if not config.USERS_OPEN_REGISTRATION:
if not settings.USERS_OPEN_REGISTRATION:
raise HTTPException(
status_code=403,
detail="Open user resgistration is forbidden on this server",
detail="Open user registration is forbidden on this server",
)
user = crud.user.get_by_email(db, email=email)
if user:
@@ -111,21 +108,21 @@ def create_user_open(
status_code=400,
detail="The user with this username already exists in the system",
)
user_in = UserCreate(password=password, email=email, full_name=full_name)
user = crud.user.create(db, user_in=user_in)
user_in = schemas.UserCreate(password=password, email=email, full_name=full_name)
user = crud.user.create(db, obj_in=user_in)
return user
@router.get("/{user_id}", response_model=User)
@router.get("/{user_id}", response_model=schemas.User)
def read_user_by_id(
user_id: int,
current_user: DBUser = Depends(get_current_active_user),
db: Session = Depends(get_db),
):
current_user: models.User = Depends(deps.get_current_active_user),
db: Session = Depends(deps.get_db),
) -> Any:
"""
Get a specific user by id.
"""
user = crud.user.get(db, user_id=user_id)
user = crud.user.get(db, id=user_id)
if user == current_user:
return user
if not crud.user.is_superuser(current_user):
@@ -135,22 +132,22 @@ def read_user_by_id(
return user
@router.put("/{user_id}", response_model=User)
@router.put("/{user_id}", response_model=schemas.User)
def update_user(
*,
db: Session = Depends(get_db),
db: Session = Depends(deps.get_db),
user_id: int,
user_in: UserUpdate,
current_user: UserInDB = Depends(get_current_active_superuser),
):
user_in: schemas.UserUpdate,
current_user: models.User = Depends(deps.get_current_active_superuser),
) -> Any:
"""
Update a user.
"""
user = crud.user.get(db, user_id=user_id)
user = crud.user.get(db, id=user_id)
if not user:
raise HTTPException(
status_code=404,
detail="The user with this username does not exist in the system",
)
user = crud.user.update(db, user=user, user_in=user_in)
user = crud.user.update(db, db_obj=user, obj_in=user_in)
return user

View File

@@ -1,19 +1,21 @@
from fastapi import APIRouter, Depends
from pydantic.types import EmailStr
from typing import Any
from app.api.utils.security import get_current_active_superuser
from fastapi import APIRouter, Depends
from pydantic.networks import EmailStr
from app import models, schemas
from app.api import deps
from app.core.celery_app import celery_app
from app.models.msg import Msg
from app.models.user import UserInDB
from app.utils import send_test_email
router = APIRouter()
@router.post("/test-celery/", response_model=Msg, status_code=201)
@router.post("/test-celery/", response_model=schemas.Msg, status_code=201)
def test_celery(
msg: Msg, current_user: UserInDB = Depends(get_current_active_superuser)
):
msg: schemas.Msg,
current_user: models.User = Depends(deps.get_current_active_superuser),
) -> Any:
"""
Test Celery worker.
"""
@@ -21,10 +23,11 @@ def test_celery(
return {"msg": "Word received"}
@router.post("/test-email/", response_model=Msg, status_code=201)
@router.post("/test-email/", response_model=schemas.Msg, status_code=201)
def test_email(
email_to: EmailStr, current_user: UserInDB = Depends(get_current_active_superuser)
):
email_to: EmailStr,
current_user: models.User = Depends(deps.get_current_active_superuser),
) -> Any:
"""
Test emails.
"""

View File

@@ -0,0 +1,61 @@
from typing import Generator
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import jwt
from pydantic import ValidationError
from sqlalchemy.orm import Session
from app import crud, models, schemas
from app.core import security
from app.core.config import settings
from app.db.session import SessionLocal
reusable_oauth2 = OAuth2PasswordBearer(
tokenUrl=f"{settings.API_V1_STR}/login/access-token"
)
def get_db() -> Generator:
try:
db = SessionLocal()
yield db
finally:
db.close()
def get_current_user(
db: Session = Depends(get_db), token: str = Depends(reusable_oauth2)
) -> models.User:
try:
payload = jwt.decode(
token, settings.SECRET_KEY, algorithms=[security.ALGORITHM]
)
token_data = schemas.TokenPayload(**payload)
except (jwt.JWTError, ValidationError):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Could not validate credentials",
)
user = crud.user.get(db, id=token_data.sub)
if not user:
raise HTTPException(status_code=404, detail="User not found")
return user
def get_current_active_user(
current_user: models.User = Depends(get_current_user),
) -> models.User:
if not crud.user.is_active(current_user):
raise HTTPException(status_code=400, detail="Inactive user")
return current_user
def get_current_active_superuser(
current_user: models.User = Depends(get_current_user),
) -> models.User:
if not crud.user.is_superuser(current_user):
raise HTTPException(
status_code=400, detail="The user doesn't have enough privileges"
)
return current_user

View File

@@ -1,5 +0,0 @@
from starlette.requests import Request
def get_db(request: Request):
return request.state.db

View File

@@ -1,45 +0,0 @@
import jwt
from fastapi import Depends, HTTPException, Security
from fastapi.security import OAuth2PasswordBearer
from jwt import PyJWTError
from sqlalchemy.orm import Session
from starlette.status import HTTP_403_FORBIDDEN
from app import crud
from app.api.utils.db import get_db
from app.core import config
from app.core.jwt import ALGORITHM
from app.db_models.user import User
from app.models.token import TokenPayload
reusable_oauth2 = OAuth2PasswordBearer(tokenUrl="/api/v1/login/access-token")
def get_current_user(
db: Session = Depends(get_db), token: str = Security(reusable_oauth2)
):
try:
payload = jwt.decode(token, config.SECRET_KEY, algorithms=[ALGORITHM])
token_data = TokenPayload(**payload)
except PyJWTError:
raise HTTPException(
status_code=HTTP_403_FORBIDDEN, detail="Could not validate credentials"
)
user = crud.user.get(db, user_id=token_data.user_id)
if not user:
raise HTTPException(status_code=404, detail="User not found")
return user
def get_current_active_user(current_user: User = Security(get_current_user)):
if not crud.user.is_active(current_user):
raise HTTPException(status_code=400, detail="Inactive user")
return current_user
def get_current_active_superuser(current_user: User = Security(get_current_user)):
if not crud.user.is_superuser(current_user):
raise HTTPException(
status_code=400, detail="The user doesn't have enough privileges"
)
return current_user

View File

@@ -2,7 +2,7 @@ import logging
from tenacity import after_log, before_log, retry, stop_after_attempt, wait_fixed
from app.db.session import db_session
from app.db.session import SessionLocal
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
@@ -17,16 +17,17 @@ wait_seconds = 1
before=before_log(logger, logging.INFO),
after=after_log(logger, logging.WARN),
)
def init():
def init() -> None:
try:
db = SessionLocal()
# Try to create session to check if DB is awake
db_session.execute("SELECT 1")
db.execute("SELECT 1")
except Exception as e:
logger.error(e)
raise e
def main():
def main() -> None:
logger.info("Initializing service")
init()
logger.info("Service finished initializing")

View File

@@ -2,7 +2,7 @@ import logging
from tenacity import after_log, before_log, retry, stop_after_attempt, wait_fixed
from app.db.session import db_session
from app.db.session import SessionLocal
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
@@ -17,16 +17,17 @@ wait_seconds = 1
before=before_log(logger, logging.INFO),
after=after_log(logger, logging.WARN),
)
def init():
def init() -> None:
try:
# Try to create session to check if DB is awake
db_session.execute("SELECT 1")
db = SessionLocal()
db.execute("SELECT 1")
except Exception as e:
logger.error(e)
raise e
def main():
def main() -> None:
logger.info("Initializing service")
init()
logger.info("Service finished initializing")

View File

@@ -1,53 +1,89 @@
import os
import secrets
from typing import Any, Dict, List, Optional, Union
from pydantic import AnyHttpUrl, BaseSettings, EmailStr, HttpUrl, PostgresDsn, validator
def getenv_boolean(var_name, default_value=False):
result = default_value
env_value = os.getenv(var_name)
if env_value is not None:
result = env_value.upper() in ("TRUE", "1")
return result
class Settings(BaseSettings):
API_V1_STR: str = "/api/v1"
SECRET_KEY: str = secrets.token_urlsafe(32)
# 60 minutes * 24 hours * 8 days = 8 days
ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24 * 8
SERVER_NAME: str
SERVER_HOST: AnyHttpUrl
# BACKEND_CORS_ORIGINS is a JSON-formatted list of origins
# e.g: '["http://localhost", "http://localhost:4200", "http://localhost:3000", \
# "http://localhost:8080", "http://local.dockertoolbox.tiangolo.com"]'
BACKEND_CORS_ORIGINS: List[AnyHttpUrl] = []
@validator("BACKEND_CORS_ORIGINS", pre=True)
def assemble_cors_origins(cls, v: Union[str, List[str]]) -> Union[List[str], str]:
if isinstance(v, str) and not v.startswith("["):
return [i.strip() for i in v.split(",")]
elif isinstance(v, (list, str)):
return v
raise ValueError(v)
PROJECT_NAME: str
SENTRY_DSN: Optional[HttpUrl] = None
@validator("SENTRY_DSN", pre=True)
def sentry_dsn_can_be_blank(cls, v: str) -> Optional[str]:
if len(v) == 0:
return None
return v
POSTGRES_SERVER: str
POSTGRES_USER: str
POSTGRES_PASSWORD: str
POSTGRES_DB: str
SQLALCHEMY_DATABASE_URI: Optional[PostgresDsn] = None
@validator("SQLALCHEMY_DATABASE_URI", pre=True)
def assemble_db_connection(cls, v: Optional[str], values: Dict[str, Any]) -> Any:
if isinstance(v, str):
return v
return PostgresDsn.build(
scheme="postgresql",
user=values.get("POSTGRES_USER"),
password=values.get("POSTGRES_PASSWORD"),
host=values.get("POSTGRES_SERVER"),
path=f"/{values.get('POSTGRES_DB') or ''}",
)
SMTP_TLS: bool = True
SMTP_PORT: Optional[int] = None
SMTP_HOST: Optional[str] = None
SMTP_USER: Optional[str] = None
SMTP_PASSWORD: Optional[str] = None
EMAILS_FROM_EMAIL: Optional[EmailStr] = None
EMAILS_FROM_NAME: Optional[str] = None
@validator("EMAILS_FROM_NAME")
def get_project_name(cls, v: Optional[str], values: Dict[str, Any]) -> str:
if not v:
return values["PROJECT_NAME"]
return v
EMAIL_RESET_TOKEN_EXPIRE_HOURS: int = 48
EMAIL_TEMPLATES_DIR: str = "/app/app/email-templates/build"
EMAILS_ENABLED: bool = False
@validator("EMAILS_ENABLED", pre=True)
def get_emails_enabled(cls, v: bool, values: Dict[str, Any]) -> bool:
return bool(
values.get("SMTP_HOST")
and values.get("SMTP_PORT")
and values.get("EMAILS_FROM_EMAIL")
)
EMAIL_TEST_USER: EmailStr = "test@example.com" # type: ignore
FIRST_SUPERUSER: EmailStr
FIRST_SUPERUSER_PASSWORD: str
USERS_OPEN_REGISTRATION: bool = False
class Config:
case_sensitive = True
API_V1_STR = "/api/v1"
SECRET_KEY = os.getenvb(b"SECRET_KEY")
if not SECRET_KEY:
SECRET_KEY = os.urandom(32)
ACCESS_TOKEN_EXPIRE_MINUTES = 60 * 24 * 8 # 60 minutes * 24 hours * 8 days = 8 days
SERVER_NAME = os.getenv("SERVER_NAME")
SERVER_HOST = os.getenv("SERVER_HOST")
BACKEND_CORS_ORIGINS = os.getenv(
"BACKEND_CORS_ORIGINS"
) # a string of origins separated by commas, e.g: "http://localhost, http://localhost:4200, http://localhost:3000, http://localhost:8080, http://local.dockertoolbox.tiangolo.com"
PROJECT_NAME = os.getenv("PROJECT_NAME")
SENTRY_DSN = os.getenv("SENTRY_DSN")
POSTGRES_SERVER = os.getenv("POSTGRES_SERVER")
POSTGRES_USER = os.getenv("POSTGRES_USER")
POSTGRES_PASSWORD = os.getenv("POSTGRES_PASSWORD")
POSTGRES_DB = os.getenv("POSTGRES_DB")
SQLALCHEMY_DATABASE_URI = (
f"postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{POSTGRES_SERVER}/{POSTGRES_DB}"
)
SMTP_TLS = getenv_boolean("SMTP_TLS", True)
SMTP_PORT = None
_SMTP_PORT = os.getenv("SMTP_PORT")
if _SMTP_PORT is not None:
SMTP_PORT = int(_SMTP_PORT)
SMTP_HOST = os.getenv("SMTP_HOST")
SMTP_USER = os.getenv("SMTP_USER")
SMTP_PASSWORD = os.getenv("SMTP_PASSWORD")
EMAILS_FROM_EMAIL = os.getenv("EMAILS_FROM_EMAIL")
EMAILS_FROM_NAME = PROJECT_NAME
EMAIL_RESET_TOKEN_EXPIRE_HOURS = 48
EMAIL_TEMPLATES_DIR = "/app/app/email-templates/build"
EMAILS_ENABLED = SMTP_HOST and SMTP_PORT and EMAILS_FROM_EMAIL
FIRST_SUPERUSER = os.getenv("FIRST_SUPERUSER")
FIRST_SUPERUSER_PASSWORD = os.getenv("FIRST_SUPERUSER_PASSWORD")
USERS_OPEN_REGISTRATION = getenv_boolean("USERS_OPEN_REGISTRATION")
settings = Settings()

View File

@@ -1,19 +0,0 @@
from datetime import datetime, timedelta
import jwt
from app.core import config
ALGORITHM = "HS256"
access_token_jwt_subject = "access"
def create_access_token(*, data: dict, expires_delta: timedelta = None):
to_encode = data.copy()
if expires_delta:
expire = datetime.utcnow() + expires_delta
else:
expire = datetime.utcnow() + timedelta(minutes=15)
to_encode.update({"exp": expire, "sub": access_token_jwt_subject})
encoded_jwt = jwt.encode(to_encode, config.SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt

View File

@@ -1,11 +1,34 @@
from datetime import datetime, timedelta
from typing import Any, Union
from jose import jwt
from passlib.context import CryptContext
from app.core.config import settings
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
def verify_password(plain_password: str, hashed_password: str):
ALGORITHM = "HS256"
def create_access_token(
subject: Union[str, Any], expires_delta: timedelta = None
) -> str:
if expires_delta:
expire = datetime.utcnow() + expires_delta
else:
expire = datetime.utcnow() + timedelta(
minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES
)
to_encode = {"exp": expire, "sub": str(subject)}
encoded_jwt = jwt.encode(to_encode, settings.SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt
def verify_password(plain_password: str, hashed_password: str) -> bool:
return pwd_context.verify(plain_password, hashed_password)
def get_password_hash(password: str):
def get_password_hash(password: str) -> str:
return pwd_context.hash(password)

View File

@@ -1 +1,10 @@
from . import item, user
from .crud_item import item
from .crud_user import user
# For a new basic set of CRUD operations you could just do
# from .base import CRUDBase
# from app.models.item import Item
# from app.schemas.item import ItemCreate, ItemUpdate
# item = CRUDBase[Item, ItemCreate, ItemUpdate](Item)

View File

@@ -0,0 +1,66 @@
from typing import Any, Dict, Generic, List, Optional, Type, TypeVar, Union
from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel
from sqlalchemy.orm import Session
from app.db.base_class import Base
ModelType = TypeVar("ModelType", bound=Base)
CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel)
UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel)
class CRUDBase(Generic[ModelType, CreateSchemaType, UpdateSchemaType]):
def __init__(self, model: Type[ModelType]):
"""
CRUD object with default methods to Create, Read, Update, Delete (CRUD).
**Parameters**
* `model`: A SQLAlchemy model class
* `schema`: A Pydantic model (schema) class
"""
self.model = model
def get(self, db: Session, id: Any) -> Optional[ModelType]:
return db.query(self.model).filter(self.model.id == id).first()
def get_multi(
self, db: Session, *, skip: int = 0, limit: int = 100
) -> List[ModelType]:
return db.query(self.model).offset(skip).limit(limit).all()
def create(self, db: Session, *, obj_in: CreateSchemaType) -> ModelType:
obj_in_data = jsonable_encoder(obj_in)
db_obj = self.model(**obj_in_data) # type: ignore
db.add(db_obj)
db.commit()
db.refresh(db_obj)
return db_obj
def update(
self,
db: Session,
*,
db_obj: ModelType,
obj_in: Union[UpdateSchemaType, Dict[str, Any]]
) -> ModelType:
obj_data = jsonable_encoder(db_obj)
if isinstance(obj_in, dict):
update_data = obj_in
else:
update_data = obj_in.dict(exclude_unset=True)
for field in obj_data:
if field in update_data:
setattr(db_obj, field, update_data[field])
db.add(db_obj)
db.commit()
db.refresh(db_obj)
return db_obj
def remove(self, db: Session, *, id: int) -> ModelType:
obj = db.query(self.model).get(id)
db.delete(obj)
db.commit()
return obj

View File

@@ -0,0 +1,34 @@
from typing import List
from fastapi.encoders import jsonable_encoder
from sqlalchemy.orm import Session
from app.crud.base import CRUDBase
from app.models.item import Item
from app.schemas.item import ItemCreate, ItemUpdate
class CRUDItem(CRUDBase[Item, ItemCreate, ItemUpdate]):
def create_with_owner(
self, db: Session, *, obj_in: ItemCreate, owner_id: int
) -> Item:
obj_in_data = jsonable_encoder(obj_in)
db_obj = self.model(**obj_in_data, owner_id=owner_id)
db.add(db_obj)
db.commit()
db.refresh(db_obj)
return db_obj
def get_multi_by_owner(
self, db: Session, *, owner_id: int, skip: int = 0, limit: int = 100
) -> List[Item]:
return (
db.query(self.model)
.filter(Item.owner_id == owner_id)
.offset(skip)
.limit(limit)
.all()
)
item = CRUDItem(Item)

View File

@@ -0,0 +1,55 @@
from typing import Any, Dict, Optional, Union
from sqlalchemy.orm import Session
from app.core.security import get_password_hash, verify_password
from app.crud.base import CRUDBase
from app.models.user import User
from app.schemas.user import UserCreate, UserUpdate
class CRUDUser(CRUDBase[User, UserCreate, UserUpdate]):
def get_by_email(self, db: Session, *, email: str) -> Optional[User]:
return db.query(User).filter(User.email == email).first()
def create(self, db: Session, *, obj_in: UserCreate) -> User:
db_obj = User(
email=obj_in.email,
hashed_password=get_password_hash(obj_in.password),
full_name=obj_in.full_name,
is_superuser=obj_in.is_superuser,
)
db.add(db_obj)
db.commit()
db.refresh(db_obj)
return db_obj
def update(
self, db: Session, *, db_obj: User, obj_in: Union[UserUpdate, Dict[str, Any]]
) -> User:
if isinstance(obj_in, dict):
update_data = obj_in
else:
update_data = obj_in.dict(exclude_unset=True)
if update_data["password"]:
hashed_password = get_password_hash(update_data["password"])
del update_data["password"]
update_data["hashed_password"] = hashed_password
return super().update(db, db_obj=db_obj, obj_in=update_data)
def authenticate(self, db: Session, *, email: str, password: str) -> Optional[User]:
user = self.get_by_email(db, email=email)
if not user:
return None
if not verify_password(password, user.hashed_password):
return None
return user
def is_active(self, user: User) -> bool:
return user.is_active
def is_superuser(self, user: User) -> bool:
return user.is_superuser
user = CRUDUser(User)

View File

@@ -1,55 +0,0 @@
from typing import List, Optional
from fastapi.encoders import jsonable_encoder
from sqlalchemy.orm import Session
from app.db_models.item import Item
from app.models.item import ItemCreate, ItemUpdate
def get(db_session: Session, *, id: int) -> Optional[Item]:
return db_session.query(Item).filter(Item.id == id).first()
def get_multi(db_session: Session, *, skip=0, limit=100) -> List[Optional[Item]]:
return db_session.query(Item).offset(skip).limit(limit).all()
def get_multi_by_owner(
db_session: Session, *, owner_id: int, skip=0, limit=100
) -> List[Optional[Item]]:
return (
db_session.query(Item)
.filter(Item.owner_id == owner_id)
.offset(skip)
.limit(limit)
.all()
)
def create(db_session: Session, *, item_in: ItemCreate, owner_id: int) -> Item:
item_in_data = jsonable_encoder(item_in)
item = Item(**item_in_data, owner_id=owner_id)
db_session.add(item)
db_session.commit()
db_session.refresh(item)
return item
def update(db_session: Session, *, item: Item, item_in: ItemUpdate) -> Item:
item_data = jsonable_encoder(item)
update_data = item_in.dict(skip_defaults=True)
for field in item_data:
if field in update_data:
setattr(item, field, update_data[field])
db_session.add(item)
db_session.commit()
db_session.refresh(item)
return item
def remove(db_session: Session, *, id: int):
item = db_session.query(Item).filter(Item.id == id).first()
db_session.delete(item)
db_session.commit()
return item

View File

@@ -1,65 +0,0 @@
from typing import List, Optional
from fastapi.encoders import jsonable_encoder
from sqlalchemy.orm import Session
from app.core.security import get_password_hash, verify_password
from app.db_models.user import User
from app.models.user import UserCreate, UserUpdate
def get(db_session: Session, *, user_id: int) -> Optional[User]:
return db_session.query(User).filter(User.id == user_id).first()
def get_by_email(db_session: Session, *, email: str) -> Optional[User]:
return db_session.query(User).filter(User.email == email).first()
def authenticate(db_session: Session, *, email: str, password: str) -> Optional[User]:
user = get_by_email(db_session, email=email)
if not user:
return None
if not verify_password(password, user.hashed_password):
return None
return user
def is_active(user) -> bool:
return user.is_active
def is_superuser(user) -> bool:
return user.is_superuser
def get_multi(db_session: Session, *, skip=0, limit=100) -> List[Optional[User]]:
return db_session.query(User).offset(skip).limit(limit).all()
def create(db_session: Session, *, user_in: UserCreate) -> User:
user = User(
email=user_in.email,
hashed_password=get_password_hash(user_in.password),
full_name=user_in.full_name,
is_superuser=user_in.is_superuser,
)
db_session.add(user)
db_session.commit()
db_session.refresh(user)
return user
def update(db_session: Session, *, user: User, user_in: UserUpdate) -> User:
user_data = jsonable_encoder(user)
update_data = user_in.dict(skip_defaults=True)
for field in user_data:
if field in update_data:
setattr(user, field, update_data[field])
if user_in.password:
passwordhash = get_password_hash(user_in.password)
user.hashed_password = passwordhash
db_session.add(user)
db_session.commit()
db_session.refresh(user)
return user

View File

@@ -1,5 +1,5 @@
# Import all the models, so that Base has them before being
# imported by Alembic
from app.db.base_class import Base # noqa
from app.db_models.user import User # noqa
from app.db_models.item import Item # noqa
from app.models.item import Item # noqa
from app.models.user import User # noqa

View File

@@ -1,11 +1,13 @@
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from typing import Any
from sqlalchemy.ext.declarative import as_declarative, declared_attr
class CustomBase(object):
@as_declarative()
class Base:
id: Any
__name__: str
# Generate __tablename__ automatically
@declared_attr
def __tablename__(cls):
def __tablename__(cls) -> str:
return cls.__name__.lower()
Base = declarative_base(cls=CustomBase)
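
The rewritten base class derives __tablename__ from the lowercased class name, so models no longer declare it themselves. A minimal sketch of the behavior (Widget is a hypothetical model, for illustration only):

from sqlalchemy import Column, Integer, String
from app.db.base_class import Base

class Widget(Base):
    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, index=True)

# __tablename__ is generated by the declared_attr above
assert Widget.__tablename__ == "widget"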

View File

@@ -1,24 +1,25 @@
from app import crud
from app.core import config
from app.models.user import UserCreate
from sqlalchemy.orm import Session
# make sure all SQL Alchemy models are imported before initializing DB
# otherwise, SQL Alchemy might fail to initialize properly relationships
from app import crud, schemas
from app.core.config import settings
from app.db import base # noqa: F401
# make sure all SQL Alchemy models are imported (app.db.base) before initializing DB
# otherwise, SQL Alchemy might fail to initialize relationships properly
# for more details: https://github.com/tiangolo/full-stack-fastapi-postgresql/issues/28
from app.db import base
def init_db(db_session):
def init_db(db: Session) -> None:
# Tables should be created with Alembic migrations
# But if you don't want to use migrations, create
# the tables by un-commenting the next line
# Base.metadata.create_all(bind=engine)
user = crud.user.get_by_email(db_session, email=config.FIRST_SUPERUSER)
user = crud.user.get_by_email(db, email=settings.FIRST_SUPERUSER)
if not user:
user_in = UserCreate(
email=config.FIRST_SUPERUSER,
password=config.FIRST_SUPERUSER_PASSWORD,
user_in = schemas.UserCreate(
email=settings.FIRST_SUPERUSER,
password=settings.FIRST_SUPERUSER_PASSWORD,
is_superuser=True,
)
user = crud.user.create(db_session, user_in=user_in)
user = crud.user.create(db, obj_in=user_in) # noqa: F841

View File

@@ -1,10 +1,7 @@
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.orm import sessionmaker
from app.core import config
from app.core.config import settings
engine = create_engine(config.SQLALCHEMY_DATABASE_URI, pool_pre_ping=True)
db_session = scoped_session(
sessionmaker(autocommit=False, autoflush=False, bind=engine)
)
Session = sessionmaker(autocommit=False, autoflush=False, bind=engine)
engine = create_engine(settings.SQLALCHEMY_DATABASE_URI, pool_pre_ping=True)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

View File

@@ -1,12 +0,0 @@
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import relationship
from app.db.base_class import Base
class Item(Base):
id = Column(Integer, primary_key=True, index=True)
title = Column(String, index=True)
description = Column(String, index=True)
owner_id = Column(Integer, ForeignKey("user.id"))
owner = relationship("User", back_populates="items")

View File

@@ -1,14 +0,0 @@
from sqlalchemy import Boolean, Column, Integer, String
from sqlalchemy.orm import relationship
from app.db.base_class import Base
class User(Base):
id = Column(Integer, primary_key=True, index=True)
full_name = Column(String, index=True)
email = Column(String, unique=True, index=True)
hashed_password = Column(String)
is_active = Column(Boolean(), default=True)
is_superuser = Column(Boolean(), default=False)
items = relationship("Item", back_populates="owner")

View File

@@ -1,17 +1,18 @@
import logging
from app.db.init_db import init_db
from app.db.session import db_session
from app.db.session import SessionLocal
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def init():
init_db(db_session)
def init() -> None:
db = SessionLocal()
init_db(db)
def main():
def main() -> None:
logger.info("Creating initial data")
init()
logger.info("Initial data created")

View File

@@ -1,36 +1,21 @@
from fastapi import FastAPI
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from app.api.api_v1.api import api_router
from app.core import config
from app.db.session import Session
from app.core.config import settings
app = FastAPI(title=config.PROJECT_NAME, openapi_url="/api/v1/openapi.json")
# CORS
origins = []
app = FastAPI(
title=settings.PROJECT_NAME, openapi_url=f"{settings.API_V1_STR}/openapi.json"
)
# Set all CORS enabled origins
if config.BACKEND_CORS_ORIGINS:
origins_raw = config.BACKEND_CORS_ORIGINS.split(",")
for origin in origins_raw:
use_origin = origin.strip()
origins.append(use_origin)
if settings.BACKEND_CORS_ORIGINS:
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
),
)
app.include_router(api_router, prefix=config.API_V1_STR)
@app.middleware("http")
async def db_session_middleware(request: Request, call_next):
request.state.db = Session()
response = await call_next(request)
request.state.db.close()
return response
app.include_router(api_router, prefix=settings.API_V1_STR)
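
The comma-splitting of BACKEND_CORS_ORIGINS that the old main.py did inline presumably moved into the Settings class, since the new code iterates it as a ready-made list. A sketch of that approach with pydantic v1 (the real config.py is not shown in this diff, so the names here are assumptions):

from typing import List, Union
from pydantic import AnyHttpUrl, BaseSettings, validator

class Settings(BaseSettings):
    BACKEND_CORS_ORIGINS: List[AnyHttpUrl] = []

    @validator("BACKEND_CORS_ORIGINS", pre=True)
    def assemble_cors_origins(cls, v: Union[str, List[str]]) -> Union[List[str], str]:
        if isinstance(v, str):
            # "http://a.com,http://b.com" -> ["http://a.com", "http://b.com"]
            return [origin.strip() for origin in v.split(",")]
        return v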

View File

@@ -0,0 +1,2 @@
from .item import Item
from .user import User

View File

@@ -1,34 +1,17 @@
from pydantic import BaseModel
from typing import TYPE_CHECKING
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import relationship
from app.db.base_class import Base
if TYPE_CHECKING:
from .user import User # noqa: F401
# Shared properties
class ItemBase(BaseModel):
title: str = None
description: str = None
# Properties to receive on item creation
class ItemCreate(ItemBase):
title: str
# Properties to receive on item update
class ItemUpdate(ItemBase):
pass
# Properties shared by models stored in DB
class ItemInDBBase(ItemBase):
id: int
title: str
owner_id: int
# Properties to return to client
class Item(ItemInDBBase):
pass
# Properties stored in DB
class ItemInDB(ItemInDBBase):
pass
class Item(Base):
id = Column(Integer, primary_key=True, index=True)
title = Column(String, index=True)
description = Column(String, index=True)
owner_id = Column(Integer, ForeignKey("user.id"))
owner = relationship("User", back_populates="items")

View File

@@ -1,36 +1,19 @@
from typing import Optional
from typing import TYPE_CHECKING
from pydantic import BaseModel
from sqlalchemy import Boolean, Column, Integer, String
from sqlalchemy.orm import relationship
from app.db.base_class import Base
if TYPE_CHECKING:
from .item import Item # noqa: F401
# Shared properties
class UserBase(BaseModel):
email: Optional[str] = None
is_active: Optional[bool] = True
is_superuser: Optional[bool] = False
full_name: Optional[str] = None
class UserBaseInDB(UserBase):
id: int = None
# Properties to receive via API on creation
class UserCreate(UserBaseInDB):
email: str
password: str
# Properties to receive via API on update
class UserUpdate(UserBaseInDB):
password: Optional[str] = None
# Additional properties to return via API
class User(UserBaseInDB):
pass
# Additional properties stored in DB
class UserInDB(UserBaseInDB):
hashed_password: str
class User(Base):
id = Column(Integer, primary_key=True, index=True)
full_name = Column(String, index=True)
email = Column(String, unique=True, index=True, nullable=False)
hashed_password = Column(String, nullable=False)
is_active = Column(Boolean(), default=True)
is_superuser = Column(Boolean(), default=False)
items = relationship("Item", back_populates="owner")

View File

@@ -0,0 +1,4 @@
from .item import Item, ItemCreate, ItemInDB, ItemUpdate
from .msg import Msg
from .token import Token, TokenPayload
from .user import User, UserCreate, UserInDB, UserUpdate

View File

@@ -0,0 +1,39 @@
from typing import Optional
from pydantic import BaseModel
# Shared properties
class ItemBase(BaseModel):
title: Optional[str] = None
description: Optional[str] = None
# Properties to receive on item creation
class ItemCreate(ItemBase):
title: str
# Properties to receive on item update
class ItemUpdate(ItemBase):
pass
# Properties shared by models stored in DB
class ItemInDBBase(ItemBase):
id: int
title: str
owner_id: int
class Config:
orm_mode = True
# Properties to return to client
class Item(ItemInDBBase):
pass
# Properties stored in DB
class ItemInDB(ItemInDBBase):
pass
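
The new Config.orm_mode is what lets these schemas be filled straight from SQLAlchemy objects (attribute access instead of dict lookup), e.g. via response_model in FastAPI. A minimal sketch:

from app import models, schemas

# plain in-memory instance, for illustration only (not persisted)
db_item = models.Item(id=1, title="Foo", description="Bar", owner_id=1)
item = schemas.Item.from_orm(db_item)  # requires orm_mode = True
assert item.title == "Foo"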

View File

@@ -1,3 +1,5 @@
from typing import Optional
from pydantic import BaseModel
@@ -7,4 +9,4 @@ class Token(BaseModel):
class TokenPayload(BaseModel):
user_id: int = None
sub: Optional[int] = None

View File

@@ -0,0 +1,39 @@
from typing import Optional
from pydantic import BaseModel, EmailStr
# Shared properties
class UserBase(BaseModel):
email: Optional[EmailStr] = None
is_active: Optional[bool] = True
is_superuser: bool = False
full_name: Optional[str] = None
# Properties to receive via API on creation
class UserCreate(UserBase):
email: EmailStr
password: str
# Properties to receive via API on update
class UserUpdate(UserBase):
password: Optional[str] = None
class UserInDBBase(UserBase):
id: Optional[int] = None
class Config:
orm_mode = True
# Additional properties to return via API
class User(UserInDBBase):
pass
# Additional properties stored in DB
class UserInDB(UserInDBBase):
hashed_password: str

View File

@@ -1,14 +1,16 @@
import requests
from typing import Dict
from app.core import config
from app.tests.utils.utils import get_server_api
from fastapi.testclient import TestClient
from app.core.config import settings
def test_celery_worker_test(superuser_token_headers):
server_api = get_server_api()
def test_celery_worker_test(
client: TestClient, superuser_token_headers: Dict[str, str]
) -> None:
data = {"msg": "test"}
r = requests.post(
f"{server_api}{config.API_V1_STR}/utils/test-celery/",
r = client.post(
f"{settings.API_V1_STR}/utils/test-celery/",
json=data,
headers=superuser_token_headers,
)

View File

@@ -1,18 +1,18 @@
import requests
from fastapi.testclient import TestClient
from sqlalchemy.orm import Session
from app.core import config
from app.core.config import settings
from app.tests.utils.item import create_random_item
from app.tests.utils.utils import get_server_api
def test_create_item(superuser_token_headers):
server_api = get_server_api()
def test_create_item(
client: TestClient, superuser_token_headers: dict, db: Session
) -> None:
data = {"title": "Foo", "description": "Fighters"}
response = requests.post(
f"{server_api}{config.API_V1_STR}/items/",
headers=superuser_token_headers,
json=data,
response = client.post(
f"{settings.API_V1_STR}/items/", headers=superuser_token_headers, json=data,
)
assert response.status_code == 200
content = response.json()
assert content["title"] == data["title"]
assert content["description"] == data["description"]
@@ -20,13 +20,14 @@ def test_create_item(superuser_token_headers):
assert "owner_id" in content
def test_read_item(superuser_token_headers):
item = create_random_item()
server_api = get_server_api()
response = requests.get(
f"{server_api}{config.API_V1_STR}/items/{item.id}",
headers=superuser_token_headers,
def test_read_item(
client: TestClient, superuser_token_headers: dict, db: Session
) -> None:
item = create_random_item(db)
response = client.get(
f"{settings.API_V1_STR}/items/{item.id}", headers=superuser_token_headers,
)
assert response.status_code == 200
content = response.json()
assert content["title"] == item.title
assert content["description"] == item.description

View File

@@ -1,29 +1,27 @@
import requests
from typing import Dict
from app.core import config
from app.tests.utils.utils import get_server_api
from fastapi.testclient import TestClient
from app.core.config import settings
def test_get_access_token():
server_api = get_server_api()
def test_get_access_token(client: TestClient) -> None:
login_data = {
"username": config.FIRST_SUPERUSER,
"password": config.FIRST_SUPERUSER_PASSWORD,
"username": settings.FIRST_SUPERUSER,
"password": settings.FIRST_SUPERUSER_PASSWORD,
}
r = requests.post(
f"{server_api}{config.API_V1_STR}/login/access-token", data=login_data
)
r = client.post(f"{settings.API_V1_STR}/login/access-token", data=login_data)
tokens = r.json()
assert r.status_code == 200
assert "access_token" in tokens
assert tokens["access_token"]
def test_use_access_token(superuser_token_headers):
server_api = get_server_api()
r = requests.post(
f"{server_api}{config.API_V1_STR}/login/test-token",
headers=superuser_token_headers,
def test_use_access_token(
client: TestClient, superuser_token_headers: Dict[str, str]
) -> None:
r = client.post(
f"{settings.API_V1_STR}/login/test-token", headers=superuser_token_headers,
)
result = r.json()
assert r.status_code == 200

View File

@@ -1,107 +1,115 @@
import requests
from typing import Dict
from fastapi.testclient import TestClient
from sqlalchemy.orm import Session
from app import crud
from app.core import config
from app.db.session import db_session
from app.models.user import UserCreate
from app.tests.utils.user import user_authentication_headers
from app.tests.utils.utils import get_server_api, random_lower_string
from app.core.config import settings
from app.schemas.user import UserCreate
from app.tests.utils.utils import random_email, random_lower_string
def test_get_users_superuser_me(superuser_token_headers):
server_api = get_server_api()
r = requests.get(
f"{server_api}{config.API_V1_STR}/users/me", headers=superuser_token_headers
)
def test_get_users_superuser_me(
client: TestClient, superuser_token_headers: Dict[str, str]
) -> None:
r = client.get(f"{settings.API_V1_STR}/users/me", headers=superuser_token_headers)
current_user = r.json()
assert current_user
assert current_user["is_active"] is True
assert current_user["is_superuser"]
assert current_user["email"] == config.FIRST_SUPERUSER
assert current_user["email"] == settings.FIRST_SUPERUSER
def test_create_user_new_email(superuser_token_headers):
server_api = get_server_api()
username = random_lower_string()
def test_get_users_normal_user_me(
client: TestClient, normal_user_token_headers: Dict[str, str]
) -> None:
r = client.get(f"{settings.API_V1_STR}/users/me", headers=normal_user_token_headers)
current_user = r.json()
assert current_user
assert current_user["is_active"] is True
assert current_user["is_superuser"] is False
assert current_user["email"] == settings.EMAIL_TEST_USER
def test_create_user_new_email(
client: TestClient, superuser_token_headers: dict, db: Session
) -> None:
username = random_email()
password = random_lower_string()
data = {"email": username, "password": password}
r = requests.post(
f"{server_api}{config.API_V1_STR}/users/",
headers=superuser_token_headers,
json=data,
r = client.post(
f"{settings.API_V1_STR}/users/", headers=superuser_token_headers, json=data,
)
assert 200 <= r.status_code < 300
created_user = r.json()
user = crud.user.get_by_email(db_session, email=username)
user = crud.user.get_by_email(db, email=username)
assert user
assert user.email == created_user["email"]
def test_get_existing_user(superuser_token_headers):
server_api = get_server_api()
username = random_lower_string()
def test_get_existing_user(
client: TestClient, superuser_token_headers: dict, db: Session
) -> None:
username = random_email()
password = random_lower_string()
user_in = UserCreate(email=username, password=password)
user = crud.user.create(db_session, user_in=user_in)
user = crud.user.create(db, obj_in=user_in)
user_id = user.id
r = requests.get(
f"{server_api}{config.API_V1_STR}/users/{user_id}",
headers=superuser_token_headers,
r = client.get(
f"{settings.API_V1_STR}/users/{user_id}", headers=superuser_token_headers,
)
assert 200 <= r.status_code < 300
api_user = r.json()
user = crud.user.get_by_email(db_session, email=username)
assert user.email == api_user["email"]
existing_user = crud.user.get_by_email(db, email=username)
assert existing_user
assert existing_user.email == api_user["email"]
def test_create_user_existing_username(superuser_token_headers):
server_api = get_server_api()
username = random_lower_string()
def test_create_user_existing_username(
client: TestClient, superuser_token_headers: dict, db: Session
) -> None:
username = random_email()
# username = email
password = random_lower_string()
user_in = UserCreate(email=username, password=password)
user = crud.user.create(db_session, user_in=user_in)
crud.user.create(db, obj_in=user_in)
data = {"email": username, "password": password}
r = requests.post(
f"{server_api}{config.API_V1_STR}/users/",
headers=superuser_token_headers,
json=data,
r = client.post(
f"{settings.API_V1_STR}/users/", headers=superuser_token_headers, json=data,
)
created_user = r.json()
assert r.status_code == 400
assert "_id" not in created_user
def test_create_user_by_normal_user():
server_api = get_server_api()
username = random_lower_string()
def test_create_user_by_normal_user(
client: TestClient, normal_user_token_headers: Dict[str, str]
) -> None:
username = random_email()
password = random_lower_string()
user_in = UserCreate(email=username, password=password)
user = crud.user.create(db_session, user_in=user_in)
user_token_headers = user_authentication_headers(server_api, username, password)
data = {"email": username, "password": password}
r = requests.post(
f"{server_api}{config.API_V1_STR}/users/", headers=user_token_headers, json=data
r = client.post(
f"{settings.API_V1_STR}/users/", headers=normal_user_token_headers, json=data,
)
assert r.status_code == 400
def test_retrieve_users(superuser_token_headers):
server_api = get_server_api()
username = random_lower_string()
def test_retrieve_users(
client: TestClient, superuser_token_headers: dict, db: Session
) -> None:
username = random_email()
password = random_lower_string()
user_in = UserCreate(email=username, password=password)
user = crud.user.create(db_session, user_in=user_in)
crud.user.create(db, obj_in=user_in)
username2 = random_lower_string()
username2 = random_email()
password2 = random_lower_string()
user_in2 = UserCreate(email=username2, password=password2)
user2 = crud.user.create(db_session, user_in=user_in2)
crud.user.create(db, obj_in=user_in2)
r = requests.get(
f"{server_api}{config.API_V1_STR}/users/", headers=superuser_token_headers
)
r = client.get(f"{settings.API_V1_STR}/users/", headers=superuser_token_headers)
all_users = r.json()
assert len(all_users) > 1
for user in all_users:
assert "email" in user
for item in all_users:
assert "email" in item

View File

@@ -1,13 +1,34 @@
from typing import Dict, Generator
import pytest
from fastapi.testclient import TestClient
from sqlalchemy.orm import Session
from app.tests.utils.utils import get_server_api, get_superuser_token_headers
from app.core.config import settings
from app.db.session import SessionLocal
from app.main import app
from app.tests.utils.user import authentication_token_from_email
from app.tests.utils.utils import get_superuser_token_headers
@pytest.fixture(scope="session")
def db() -> Generator:
yield SessionLocal()
@pytest.fixture(scope="module")
def server_api():
return get_server_api()
def client() -> Generator:
with TestClient(app) as c:
yield c
@pytest.fixture(scope="module")
def superuser_token_headers():
return get_superuser_token_headers()
def superuser_token_headers(client: TestClient) -> Dict[str, str]:
return get_superuser_token_headers(client)
@pytest.fixture(scope="module")
def normal_user_token_headers(client: TestClient, db: Session) -> Dict[str, str]:
return authentication_token_from_email(
client=client, email=settings.EMAIL_TEST_USER, db=db
)

View File

@@ -1,59 +1,59 @@
from sqlalchemy.orm import Session
from app import crud
from app.models.item import ItemCreate, ItemUpdate
from app.schemas.item import ItemCreate, ItemUpdate
from app.tests.utils.user import create_random_user
from app.tests.utils.utils import random_lower_string
from app.db.session import db_session
def test_create_item():
def test_create_item(db: Session) -> None:
title = random_lower_string()
description = random_lower_string()
item_in = ItemCreate(title=title, description=description)
user = create_random_user()
item = crud.item.create(db_session=db_session, item_in=item_in, owner_id=user.id)
user = create_random_user(db)
item = crud.item.create_with_owner(db=db, obj_in=item_in, owner_id=user.id)
assert item.title == title
assert item.description == description
assert item.owner_id == user.id
def test_get_item():
def test_get_item(db: Session) -> None:
title = random_lower_string()
description = random_lower_string()
item_in = ItemCreate(title=title, description=description)
user = create_random_user()
item = crud.item.create(db_session=db_session, item_in=item_in, owner_id=user.id)
stored_item = crud.item.get(db_session=db_session, id=item.id)
user = create_random_user(db)
item = crud.item.create_with_owner(db=db, obj_in=item_in, owner_id=user.id)
stored_item = crud.item.get(db=db, id=item.id)
assert stored_item
assert item.id == stored_item.id
assert item.title == stored_item.title
assert item.description == stored_item.description
assert item.owner_id == stored_item.owner_id
def test_update_item():
def test_update_item(db: Session) -> None:
title = random_lower_string()
description = random_lower_string()
item_in = ItemCreate(title=title, description=description)
user = create_random_user()
item = crud.item.create(db_session=db_session, item_in=item_in, owner_id=user.id)
user = create_random_user(db)
item = crud.item.create_with_owner(db=db, obj_in=item_in, owner_id=user.id)
description2 = random_lower_string()
item_update = ItemUpdate(description=description2)
item2 = crud.item.update(
db_session=db_session, item=item, item_in=item_update
)
item2 = crud.item.update(db=db, db_obj=item, obj_in=item_update)
assert item.id == item2.id
assert item.title == item2.title
assert item2.description == description2
assert item.owner_id == item2.owner_id
def test_delete_item():
def test_delete_item(db: Session) -> None:
title = random_lower_string()
description = random_lower_string()
item_in = ItemCreate(title=title, description=description)
user = create_random_user()
item = crud.item.create(db_session=db_session, item_in=item_in, owner_id=user.id)
item2 = crud.item.remove(db_session=db_session, id=item.id)
item3 = crud.item.get(db_session=db_session, id=item.id)
user = create_random_user(db)
item = crud.item.create_with_owner(db=db, obj_in=item_in, owner_id=user.id)
item2 = crud.item.remove(db=db, id=item.id)
item3 = crud.item.get(db=db, id=item.id)
assert item3 is None
assert item2.id == item.id
assert item2.title == title

View File

@@ -1,83 +1,94 @@
from fastapi.encoders import jsonable_encoder
from sqlalchemy.orm import Session
from app import crud
from app.db.session import db_session
from app.models.user import UserCreate
from app.tests.utils.utils import random_lower_string
from app.core.security import verify_password
from app.schemas.user import UserCreate, UserUpdate
from app.tests.utils.utils import random_email, random_lower_string
def test_create_user():
email = random_lower_string()
def test_create_user(db: Session) -> None:
email = random_email()
password = random_lower_string()
user_in = UserCreate(email=email, password=password)
user = crud.user.create(db_session, user_in=user_in)
user = crud.user.create(db, obj_in=user_in)
assert user.email == email
assert hasattr(user, "hashed_password")
def test_authenticate_user():
email = random_lower_string()
def test_authenticate_user(db: Session) -> None:
email = random_email()
password = random_lower_string()
user_in = UserCreate(email=email, password=password)
user = crud.user.create(db_session, user_in=user_in)
authenticated_user = crud.user.authenticate(
db_session, email=email, password=password
)
user = crud.user.create(db, obj_in=user_in)
authenticated_user = crud.user.authenticate(db, email=email, password=password)
assert authenticated_user
assert user.email == authenticated_user.email
def test_not_authenticate_user():
email = random_lower_string()
def test_not_authenticate_user(db: Session) -> None:
email = random_email()
password = random_lower_string()
user = crud.user.authenticate(db_session, email=email, password=password)
user = crud.user.authenticate(db, email=email, password=password)
assert user is None
def test_check_if_user_is_active():
email = random_lower_string()
def test_check_if_user_is_active(db: Session) -> None:
email = random_email()
password = random_lower_string()
user_in = UserCreate(email=email, password=password)
user = crud.user.create(db_session, user_in=user_in)
user = crud.user.create(db, obj_in=user_in)
is_active = crud.user.is_active(user)
assert is_active is True
def test_check_if_user_is_active_inactive():
email = random_lower_string()
def test_check_if_user_is_active_inactive(db: Session) -> None:
email = random_email()
password = random_lower_string()
user_in = UserCreate(email=email, password=password, disabled=True)
print(user_in)
user = crud.user.create(db_session, user_in=user_in)
print(user)
user = crud.user.create(db, obj_in=user_in)
is_active = crud.user.is_active(user)
print(is_active)
assert is_active
def test_check_if_user_is_superuser():
email = random_lower_string()
def test_check_if_user_is_superuser(db: Session) -> None:
email = random_email()
password = random_lower_string()
user_in = UserCreate(email=email, password=password, is_superuser=True)
user = crud.user.create(db_session, user_in=user_in)
user = crud.user.create(db, obj_in=user_in)
is_superuser = crud.user.is_superuser(user)
assert is_superuser is True
def test_check_if_user_is_superuser_normal_user():
username = random_lower_string()
def test_check_if_user_is_superuser_normal_user(db: Session) -> None:
username = random_email()
password = random_lower_string()
user_in = UserCreate(email=username, password=password)
user = crud.user.create(db_session, user_in=user_in)
user = crud.user.create(db, obj_in=user_in)
is_superuser = crud.user.is_superuser(user)
assert is_superuser is False
def test_get_user():
def test_get_user(db: Session) -> None:
password = random_lower_string()
username = random_lower_string()
username = random_email()
user_in = UserCreate(email=username, password=password, is_superuser=True)
user = crud.user.create(db_session, user_in=user_in)
user_2 = crud.user.get(db_session, user_id=user.id)
user = crud.user.create(db, obj_in=user_in)
user_2 = crud.user.get(db, id=user.id)
assert user_2
assert user.email == user_2.email
assert jsonable_encoder(user) == jsonable_encoder(user_2)
def test_update_user(db: Session) -> None:
password = random_lower_string()
email = random_email()
user_in = UserCreate(email=email, password=password, is_superuser=True)
user = crud.user.create(db, obj_in=user_in)
new_password = random_lower_string()
user_in_update = UserUpdate(password=new_password, is_superuser=True)
crud.user.update(db, db_obj=user, obj_in=user_in_update)
user_2 = crud.user.get(db, id=user.id)
assert user_2
assert user.email == user_2.email
assert verify_password(new_password, user_2.hashed_password)

View File

@@ -1,17 +1,18 @@
from app import crud
from app.db.session import db_session
from app.models.item import ItemCreate
from typing import Optional
from sqlalchemy.orm import Session
from app import crud, models
from app.schemas.item import ItemCreate
from app.tests.utils.user import create_random_user
from app.tests.utils.utils import random_lower_string
def create_random_item(owner_id: int = None):
def create_random_item(db: Session, *, owner_id: Optional[int] = None) -> models.Item:
if owner_id is None:
user = create_random_user()
user = create_random_user(db)
owner_id = user.id
title = random_lower_string()
description = random_lower_string()
item_in = ItemCreate(title=title, description=description)
return crud.item.create(
db_session=db_session, item_in=item_in, owner_id=owner_id
)
return crud.item.create_with_owner(db=db, obj_in=item_in, owner_id=owner_id)

View File

@@ -1,25 +1,50 @@
import requests
from typing import Dict
from fastapi.testclient import TestClient
from sqlalchemy.orm import Session
from app import crud
from app.core import config
from app.db.session import db_session
from app.models.user import UserCreate
from app.tests.utils.utils import random_lower_string
from app.core.config import settings
from app.models.user import User
from app.schemas.user import UserCreate, UserUpdate
from app.tests.utils.utils import random_email, random_lower_string
def user_authentication_headers(server_api, email, password):
def user_authentication_headers(
*, client: TestClient, email: str, password: str
) -> Dict[str, str]:
data = {"username": email, "password": password}
r = requests.post(f"{server_api}{config.API_V1_STR}/login/access-token", data=data)
r = client.post(f"{settings.API_V1_STR}/login/access-token", data=data)
response = r.json()
auth_token = response["access_token"]
headers = {"Authorization": f"Bearer {auth_token}"}
return headers
def create_random_user():
email = random_lower_string()
def create_random_user(db: Session) -> User:
email = random_email()
password = random_lower_string()
user_in = UserCreate(username=email, email=email, password=password)
user = crud.user.create(db_session=db_session, user_in=user_in)
user = crud.user.create(db=db, obj_in=user_in)
return user
def authentication_token_from_email(
*, client: TestClient, email: str, db: Session
) -> Dict[str, str]:
"""
Return a valid token for the user with given email.
If the user doesn't exist it is created first.
"""
password = random_lower_string()
user = crud.user.get_by_email(db, email=email)
if not user:
user_in_create = UserCreate(username=email, email=email, password=password)
user = crud.user.create(db, obj_in=user_in_create)
else:
user_in_update = UserUpdate(password=password)
user = crud.user.update(db, db_obj=user, obj_in=user_in_update)
return user_authentication_headers(client=client, email=email, password=password)

View File

@@ -1,31 +1,27 @@
import random
import string
from typing import Dict
import requests
from fastapi.testclient import TestClient
from app.core import config
from app.core.config import settings
def random_lower_string():
def random_lower_string() -> str:
return "".join(random.choices(string.ascii_lowercase, k=32))
def get_server_api():
server_name = f"http://{config.SERVER_NAME}"
return server_name
def random_email() -> str:
return f"{random_lower_string()}@{random_lower_string()}.com"
def get_superuser_token_headers():
server_api = get_server_api()
def get_superuser_token_headers(client: TestClient) -> Dict[str, str]:
login_data = {
"username": config.FIRST_SUPERUSER,
"password": config.FIRST_SUPERUSER_PASSWORD,
"username": settings.FIRST_SUPERUSER,
"password": settings.FIRST_SUPERUSER_PASSWORD,
}
r = requests.post(
f"{server_api}{config.API_V1_STR}/login/access-token", data=login_data
)
r = client.post(f"{settings.API_V1_STR}/login/access-token", data=login_data)
tokens = r.json()
a_token = tokens["access_token"]
headers = {"Authorization": f"Bearer {a_token}"}
# superuser_token_headers = headers
return headers

View File

@@ -2,8 +2,7 @@ import logging
from tenacity import after_log, before_log, retry, stop_after_attempt, wait_fixed
from app.db.session import db_session
from app.tests.api.api_v1.test_login import test_get_access_token
from app.db.session import SessionLocal
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
@@ -18,18 +17,17 @@ wait_seconds = 1
before=before_log(logger, logging.INFO),
after=after_log(logger, logging.WARN),
)
def init():
def init() -> None:
try:
# Try to create session to check if DB is awake
db_session.execute("SELECT 1")
# Wait for API to be awake, run one simple test to authenticate
test_get_access_token()
db = SessionLocal()
db.execute("SELECT 1")
except Exception as e:
logger.error(e)
raise e
def main():
def main() -> None:
logger.info("Initializing service")
init()
logger.info("Service finished initializing")

View File

@@ -1,86 +1,84 @@
import logging
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional
from typing import Any, Dict, Optional
import emails
import jwt
from emails.template import JinjaTemplate
from jwt.exceptions import InvalidTokenError
from jose import jwt
from app.core import config
password_reset_jwt_subject = "preset"
from app.core.config import settings
def send_email(email_to: str, subject_template="", html_template="", environment={}):
assert config.EMAILS_ENABLED, "no provided configuration for email variables"
def send_email(
email_to: str,
subject_template: str = "",
html_template: str = "",
environment: Dict[str, Any] = {},
) -> None:
assert settings.EMAILS_ENABLED, "no provided configuration for email variables"
message = emails.Message(
subject=JinjaTemplate(subject_template),
html=JinjaTemplate(html_template),
mail_from=(config.EMAILS_FROM_NAME, config.EMAILS_FROM_EMAIL),
mail_from=(settings.EMAILS_FROM_NAME, settings.EMAILS_FROM_EMAIL),
)
smtp_options = {"host": config.SMTP_HOST, "port": config.SMTP_PORT}
if config.SMTP_TLS:
smtp_options = {"host": settings.SMTP_HOST, "port": settings.SMTP_PORT}
if settings.SMTP_TLS:
smtp_options["tls"] = True
if config.SMTP_USER:
smtp_options["user"] = config.SMTP_USER
if config.SMTP_PASSWORD:
smtp_options["password"] = config.SMTP_PASSWORD
if settings.SMTP_USER:
smtp_options["user"] = settings.SMTP_USER
if settings.SMTP_PASSWORD:
smtp_options["password"] = settings.SMTP_PASSWORD
response = message.send(to=email_to, render=environment, smtp=smtp_options)
logging.info(f"send email result: {response}")
def send_test_email(email_to: str):
project_name = config.PROJECT_NAME
def send_test_email(email_to: str) -> None:
project_name = settings.PROJECT_NAME
subject = f"{project_name} - Test email"
with open(Path(config.EMAIL_TEMPLATES_DIR) / "test_email.html") as f:
with open(Path(settings.EMAIL_TEMPLATES_DIR) / "test_email.html") as f:
template_str = f.read()
send_email(
email_to=email_to,
subject_template=subject,
html_template=template_str,
environment={"project_name": config.PROJECT_NAME, "email": email_to},
environment={"project_name": settings.PROJECT_NAME, "email": email_to},
)
def send_reset_password_email(email_to: str, email: str, token: str):
project_name = config.PROJECT_NAME
def send_reset_password_email(email_to: str, email: str, token: str) -> None:
project_name = settings.PROJECT_NAME
subject = f"{project_name} - Password recovery for user {email}"
with open(Path(config.EMAIL_TEMPLATES_DIR) / "reset_password.html") as f:
with open(Path(settings.EMAIL_TEMPLATES_DIR) / "reset_password.html") as f:
template_str = f.read()
if hasattr(token, "decode"):
use_token = token.decode()
else:
use_token = token
server_host = config.SERVER_HOST
link = f"{server_host}/reset-password?token={use_token}"
server_host = settings.SERVER_HOST
link = f"{server_host}/reset-password?token={token}"
send_email(
email_to=email_to,
subject_template=subject,
html_template=template_str,
environment={
"project_name": config.PROJECT_NAME,
"project_name": settings.PROJECT_NAME,
"username": email,
"email": email_to,
"valid_hours": config.EMAIL_RESET_TOKEN_EXPIRE_HOURS,
"valid_hours": settings.EMAIL_RESET_TOKEN_EXPIRE_HOURS,
"link": link,
},
)
def send_new_account_email(email_to: str, username: str, password: str):
project_name = config.PROJECT_NAME
def send_new_account_email(email_to: str, username: str, password: str) -> None:
project_name = settings.PROJECT_NAME
subject = f"{project_name} - New account for user {username}"
with open(Path(config.EMAIL_TEMPLATES_DIR) / "new_account.html") as f:
with open(Path(settings.EMAIL_TEMPLATES_DIR) / "new_account.html") as f:
template_str = f.read()
link = config.SERVER_HOST
link = settings.SERVER_HOST
send_email(
email_to=email_to,
subject_template=subject,
html_template=template_str,
environment={
"project_name": config.PROJECT_NAME,
"project_name": settings.PROJECT_NAME,
"username": username,
"password": password,
"email": email_to,
@@ -89,23 +87,20 @@ def send_new_account_email(email_to: str, username: str, password: str):
)
def generate_password_reset_token(email):
delta = timedelta(hours=config.EMAIL_RESET_TOKEN_EXPIRE_HOURS)
def generate_password_reset_token(email: str) -> str:
delta = timedelta(hours=settings.EMAIL_RESET_TOKEN_EXPIRE_HOURS)
now = datetime.utcnow()
expires = now + delta
exp = expires.timestamp()
encoded_jwt = jwt.encode(
{"exp": exp, "nbf": now, "sub": password_reset_jwt_subject, "email": email},
config.SECRET_KEY,
algorithm="HS256",
{"exp": exp, "nbf": now, "sub": email}, settings.SECRET_KEY, algorithm="HS256",
)
return encoded_jwt
def verify_password_reset_token(token) -> Optional[str]:
def verify_password_reset_token(token: str) -> Optional[str]:
try:
decoded_token = jwt.decode(token, config.SECRET_KEY, algorithms=["HS256"])
assert decoded_token["sub"] == password_reset_jwt_subject
decoded_token = jwt.decode(token, settings.SECRET_KEY, algorithms=["HS256"])
return decoded_token["sub"]
except InvalidTokenError:
except jwt.JWTError:
return None
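
A sketch of the simplified token round-trip with python-jose, which this refactor adopts in place of PyJWT; the key below is a stand-in for settings.SECRET_KEY:

from datetime import datetime, timedelta
from jose import jwt

SECRET_KEY = "changeme"  # stand-in, illustration only

token = jwt.encode(
    {
        "exp": (datetime.utcnow() + timedelta(hours=48)).timestamp(),
        "nbf": datetime.utcnow(),
        "sub": "user@example.com",
    },
    SECRET_KEY,
    algorithm="HS256",
)
claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
assert claims["sub"] == "user@example.com"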

View File

@@ -1,11 +1,11 @@
from raven import Client
from app.core import config
from app.core.celery_app import celery_app
from app.core.config import settings
client_sentry = Client(config.SENTRY_DSN)
client_sentry = Client(settings.SENTRY_DSN)
@celery_app.task(acks_late=True)
def test_celery(word: str):
def test_celery(word: str) -> str:
return f"test task return {word}"

View File

@@ -0,0 +1,4 @@
[mypy]
plugins = pydantic.mypy, sqlmypy
ignore_missing_imports = True
disallow_untyped_defs = True
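
disallow_untyped_defs makes mypy reject any function without annotations, which is why the diffs above add -> None and typed parameters throughout, while the two plugins teach mypy about pydantic models and SQLAlchemy columns. A tiny sketch of what the flag enforces:

def add(a, b):  # mypy: error, function is missing type annotations
    return a + b

def add_typed(a: int, b: int) -> int:  # accepted
    return a + b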

View File

@@ -0,0 +1,46 @@
[tool.poetry]
name = "app"
version = "0.1.0"
description = ""
authors = ["Admin <admin@example.com>"]
[tool.poetry.dependencies]
python = "^3.7"
uvicorn = "^0.11.3"
fastapi = "^0.54.1"
python-multipart = "^0.0.5"
email-validator = "^1.0.5"
requests = "^2.23.0"
celery = "^4.4.2"
passlib = {extras = ["bcrypt"], version = "^1.7.2"}
tenacity = "^6.1.0"
pydantic = "^1.4"
emails = "^0.5.15"
raven = "^6.10.0"
gunicorn = "^20.0.4"
jinja2 = "^2.11.2"
psycopg2-binary = "^2.8.5"
alembic = "^1.4.2"
sqlalchemy = "^1.3.16"
pytest = "^5.4.1"
python-jose = {extras = ["cryptography"], version = "^3.1.0"}
[tool.poetry.dev-dependencies]
mypy = "^0.770"
black = "^19.10b0"
isort = "^4.3.21"
autoflake = "^1.3.1"
flake8 = "^3.7.9"
pytest = "^5.4.1"
sqlalchemy-stubs = "^0.3"
pytest-cov = "^2.8.1"
[tool.isort]
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
line_length = 88
[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

View File

@@ -0,0 +1,6 @@
#!/bin/sh -e
set -x
# Sort imports one per line, so autoflake can remove unused imports
isort --recursive --force-single-line-imports --apply app
sh ./scripts/format.sh

View File

@@ -0,0 +1,6 @@
#!/bin/sh -e
set -x
autoflake --remove-all-unused-imports --recursive --remove-unused-variables --in-place app --exclude=__init__.py
black app
isort --recursive --apply app

View File

@@ -2,7 +2,7 @@
set -x
autoflake --remove-all-unused-imports --recursive --remove-unused-variables --in-place app --exclude=__init__.py
isort --multi-line=3 --trailing-comma --force-grid-wrap=0 --combine-as --line-width 88 --recursive --apply app
black app
vulture app --min-confidence 70
mypy app
black app --check
isort --recursive --check-only app
flake8

View File

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
set -e
set -x
bash scripts/test.sh --cov-report=html "${@}"
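
With --cov-report=html, pytest-cov additionally writes its report to an htmlcov/ directory (the tool's default output location), which can be opened in a browser; the term-missing report from scripts/test.sh still prints to the terminal.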

View File

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
set -e
set -x
pytest --cov=app --cov-report=term-missing app/tests "${@}"

View File

@@ -3,4 +3,4 @@ set -e
python /app/app/tests_pre_start.py
pytest $* /app/app/tests/
bash ./scripts/test.sh "$@"

View File

@@ -1,17 +1,25 @@
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
RUN pip install celery~=4.3 passlib[bcrypt] tenacity requests emails "fastapi>=0.16.0" uvicorn gunicorn pyjwt python-multipart email_validator jinja2 psycopg2-binary alembic SQLAlchemy
WORKDIR /app/
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
# Copy poetry.lock* in case it doesn't exist in the repo
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
# Allow installing dev dependencies to run tests
ARG INSTALL_DEV=false
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
# For development, Jupyter remote kernel, Hydrogen
# Using inside the container:
# jupyter lab --ip=0.0.0.0 --allow-root --NotebookApp.custom_display_url=http://127.0.0.1:8888
ARG env=prod
RUN bash -c "if [ $env == 'dev' ] ; then pip install jupyterlab ; fi"
EXPOSE 8888
ARG INSTALL_JUPYTER=false
RUN bash -c "if [ $INSTALL_JUPYTER == 'true' ] ; then pip install jupyterlab ; fi"
COPY ./app /app
WORKDIR /app/
ENV PYTHONPATH=/app
EXPOSE 80
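
Both build args default to false, so production images skip the dev dependencies and JupyterLab; a development build opts in per arg, along the lines of (command illustrative): docker build --build-arg INSTALL_DEV=true --build-arg INSTALL_JUPYTER=true -f backend.dockerfile ./backend. The Docker Compose override later in this diff defaults both to true.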

View File

@@ -1,13 +1,25 @@
FROM python:3.7
RUN pip install raven celery~=4.3 passlib[bcrypt] tenacity requests "fastapi>=0.16.0" emails pyjwt email_validator jinja2 psycopg2-binary alembic SQLAlchemy
WORKDIR /app/
# Install Poetry
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | POETRY_HOME=/opt/poetry python && \
cd /usr/local/bin && \
ln -s /opt/poetry/bin/poetry && \
poetry config virtualenvs.create false
# Copy poetry.lock* in case it doesn't exist in the repo
COPY ./app/pyproject.toml ./app/poetry.lock* /app/
# Allow installing dev dependencies to run tests
ARG INSTALL_DEV=false
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
# For development, Jupyter remote kernel, Hydrogen
# Using inside the container:
# jupyter lab --ip=0.0.0.0 --allow-root --NotebookApp.custom_display_url=http://127.0.0.1:8888
ARG env=prod
RUN bash -c "if [ $env == 'dev' ] ; then pip install jupyterlab ; fi"
EXPOSE 8888
ARG INSTALL_JUPYTER=false
RUN bash -c "if [ $INSTALL_JUPYTER == 'true' ] ; then pip install jupyterlab ; fi"
ENV C_FORCE_ROOT=1

View File

@@ -1,24 +0,0 @@
FROM python:3.7
RUN pip install requests pytest tenacity passlib[bcrypt] "fastapi>=0.16.0" psycopg2-binary SQLAlchemy
# For development, Jupyter remote kernel, Hydrogen
# Using inside the container:
# jupyter lab --ip=0.0.0.0 --allow-root --NotebookApp.custom_display_url=http://127.0.0.1:8888
ARG env=prod
RUN bash -c "if [ $env == 'dev' ] ; then pip install jupyterlab ; fi"
EXPOSE 8888
COPY ./app /app
ENV PYTHONPATH=/app
COPY ./app/tests-start.sh /tests-start.sh
RUN chmod +x /tests-start.sh
# This keeps the container waiting, doing nothing, but alive
CMD ["bash", "-c", "while true; do sleep 1; done"]
# Afterwards you can exec a command /tests-start.sh in the live container, like:
# docker exec -it backend-tests /tests-start.sh

View File

@@ -19,7 +19,6 @@ default_context:
pgadmin_default_user_password: '{{ cookiecutter.pgadmin_default_user_password }}'
traefik_constraint_tag: '{{ cookiecutter.traefik_constraint_tag }}'
traefik_constraint_tag_staging: '{{ cookiecutter.traefik_constraint_tag_staging }}'
traefik_public_network: '{{ cookiecutter.traefik_public_network }}'
traefik_public_constraint_tag: '{{ cookiecutter.traefik_public_constraint_tag }}'
flower_auth: '{{ cookiecutter.flower_auth }}'
sentry_dsn: '{{ cookiecutter.sentry_dsn }}'

View File

@@ -1,15 +0,0 @@
version: '3.3'
services:
backend:
build:
context: ./backend
dockerfile: backend.dockerfile
celeryworker:
build:
context: ./backend
dockerfile: celeryworker.dockerfile
frontend:
build:
context: ./frontend
args:
FRONTEND_ENV: ${FRONTEND_ENV-production}

View File

@@ -1,11 +0,0 @@
version: '3.3'
services:
proxy:
command: --docker \
--docker.swarmmode \
--docker.watch \
--docker.exposedbydefault=false \
--constraints=tag==${TRAEFIK_TAG} \
--logLevel=INFO \
--accessLog \
--web

View File

@@ -1,8 +0,0 @@
version: '3.3'
services:
backend:
image: '${DOCKER_IMAGE_BACKEND}:${TAG-latest}'
celeryworker:
image: '${DOCKER_IMAGE_CELERYWORKER}:${TAG-latest}'
frontend:
image: '${DOCKER_IMAGE_FRONTEND}:${TAG-latest}'

View File

@@ -1,66 +0,0 @@
version: '3.3'
services:
pgadmin:
deploy:
labels:
- traefik.frontend.rule=Host:pgadmin.${DOMAIN}
- traefik.enable=true
- traefik.port=5050
- traefik.tags=${TRAEFIK_PUBLIC_TAG}
- traefik.docker.network=${TRAEFIK_PUBLIC_NETWORK}
# Traefik service that listens to HTTP
- traefik.redirectorservice.frontend.entryPoints=http
- traefik.redirectorservice.frontend.redirect.entryPoint=https
# Traefik service that listens to HTTPS
- traefik.webservice.frontend.entryPoints=https
proxy:
deploy:
labels:
# For the configured domain
- traefik.frontend.rule=Host:${DOMAIN}
# For a domain with and without 'www'
# Comment the previous line above and un-comment the line below
# - "traefik.frontend.rule=Host:www.${DOMAIN},${DOMAIN}"
- traefik.enable=true
- traefik.port=80
- traefik.tags=${TRAEFIK_PUBLIC_TAG}
- traefik.docker.network=${TRAEFIK_PUBLIC_NETWORK}
# Traefik service that listens to HTTP
- traefik.servicehttp.frontend.entryPoints=http
- traefik.servicehttp.frontend.redirect.entryPoint=https
# Traefik service that listens to HTTPS
- traefik.servicehttps.frontend.entryPoints=https
# Uncomment the config line below to detect and redirect www to non-www (or the contrary)
# The lines above for traefik.frontend.rule are needed too
# - "traefik.servicehttps.frontend.redirect.regex=^https?://(www.)?(${DOMAIN})/(.*)"
# To redirect from non-www to www un-comment the line below
# - "traefik.servicehttps.frontend.redirect.replacement=https://www.${DOMAIN}/$$3"
# To redirect from www to non-www un-comment the line below
# - "traefik.servicehttps.frontend.redirect.replacement=https://${DOMAIN}/$$3"
flower:
deploy:
labels:
- traefik.frontend.rule=Host:flower.${DOMAIN}
- traefik.enable=true
- traefik.port=5555
- traefik.tags=${TRAEFIK_PUBLIC_TAG}
- traefik.docker.network=${TRAEFIK_PUBLIC_NETWORK}
# Traefik service that listens to HTTP
- traefik.redirectorservice.frontend.entryPoints=http
- traefik.redirectorservice.frontend.redirect.entryPoint=https
# Traefik service that listens to HTTPS
- traefik.webservice.frontend.entryPoints=https
backend:
deploy:
labels:
- traefik.frontend.rule=PathPrefix:/api,/docs,/redoc
- traefik.enable=true
- traefik.port=80
- traefik.tags=${TRAEFIK_TAG}
frontend:
deploy:
labels:
- traefik.frontend.rule=PathPrefix:/
- traefik.enable=true
- traefik.port=80
- traefik.tags=${TRAEFIK_TAG}

View File

@@ -1,18 +0,0 @@
version: '3.3'
services:
pgadmin:
networks:
- ${TRAEFIK_PUBLIC_NETWORK}
- default
proxy:
networks:
- ${TRAEFIK_PUBLIC_NETWORK}
- default
flower:
networks:
- ${TRAEFIK_PUBLIC_NETWORK}
- default
networks:
{{cookiecutter.traefik_public_network}}:
external: true

View File

@@ -1,17 +0,0 @@
version: '3.3'
services:
db:
volumes:
- app-db-data:/var/lib/postgresql/data/pgdata
deploy:
placement:
constraints:
- node.labels.${STACK_NAME}.app-db-data == true
proxy:
deploy:
placement:
constraints:
- node.role == manager
volumes:
app-db-data:

View File

@@ -1,25 +0,0 @@
version: '3.3'
services:
backend:
build:
context: ./backend
dockerfile: backend.dockerfile
args:
env: dev
celeryworker:
build:
context: ./backend
dockerfile: celeryworker.dockerfile
args:
env: dev
backend-tests:
build:
context: ./backend
dockerfile: tests.dockerfile
args:
env: dev
frontend:
build:
context: ./frontend
args:
FRONTEND_ENV: dev

View File

@@ -1,14 +0,0 @@
version: '3.3'
services:
proxy:
command: --docker \
--docker.watch \
--docker.exposedbydefault=false \
--constraints=tag==${TRAEFIK_TAG} \
--logLevel=DEBUG \
--accessLog \
--web
# backend:
# command: bash -c "while true; do sleep 1; done" # Infinite loop to keep the container alive, doing nothing
backend:
command: /start-reload.sh

View File

@@ -1,14 +0,0 @@
version: '3.3'
services:
backend:
environment:
- JUPYTER=jupyter lab --ip=0.0.0.0 --allow-root --NotebookApp.custom_display_url=http://127.0.0.1:8888
- SERVER_HOST=http://${DOMAIN}
celeryworker:
environment:
- RUN=celery worker -A app.worker -l info -Q main-queue -c 1
- JUPYTER=jupyter lab --ip=0.0.0.0 --allow-root --NotebookApp.custom_display_url=http://127.0.0.1:8888
- SERVER_HOST=http://${DOMAIN}
backend-tests:
environment:
- JUPYTER=jupyter lab --ip=0.0.0.0 --allow-root --NotebookApp.custom_display_url=http://127.0.0.1:8888

View File

@@ -1,19 +0,0 @@
version: '3.3'
services:
proxy:
labels:
- traefik.frontend.rule=Host:${DOMAIN}
- traefik.enable=true
- traefik.port=80
backend:
labels:
- traefik.frontend.rule=PathPrefix:/api,/docs,/redoc
- traefik.enable=true
- traefik.port=80
- traefik.tags=${TRAEFIK_TAG}
frontend:
labels:
- traefik.frontend.rule=PathPrefix:/
- traefik.enable=true
- traefik.port=80
- traefik.tags=${TRAEFIK_TAG}

View File

@@ -1,7 +0,0 @@
version: '3.3'
services:
backend:
networks:
default:
aliases:
- ${DOMAIN}

View File

@@ -1,15 +0,0 @@
version: '3.3'
services:
pgadmin:
ports:
- '5050:5050'
proxy:
ports:
- '80:80'
- '8090:8080'
flower:
ports:
- '5555:5555'
backend:
ports:
- '8888:8888'

View File

@@ -1,11 +0,0 @@
version: '3.3'
services:
backend:
volumes:
- ./backend/app:/app
celeryworker:
volumes:
- ./backend/app:/app
backend-tests:
volumes:
- ./backend/app:/app

View File

@@ -0,0 +1,89 @@
version: "3.3"
services:
proxy:
ports:
- "80:80"
- "8090:8080"
command:
# Enable Docker in Traefik, so that it reads labels from Docker services
- --providers.docker
# Add a constraint to only use services with the label for this stack
# from the env var TRAEFIK_TAG
- --providers.docker.constraints=Label(`traefik.constraint-label-stack`, `${TRAEFIK_TAG?Variable not set}`)
# Do not expose all Docker services, only the ones explicitly exposed
- --providers.docker.exposedbydefault=false
# Disable Docker Swarm mode for local development
# - --providers.docker.swarmmode
# Enable the access log, with HTTP requests
- --accesslog
# Enable the Traefik log, for configurations and errors
- --log
# Enable the Dashboard and API
- --api
# Enable the Dashboard and API in insecure mode for local development
- --api.insecure=true
labels:
- traefik.enable=true
- traefik.http.routers.${STACK_NAME?Variable not set}-traefik-public-http.rule=Host(`${DOMAIN?Variable not set}`)
- traefik.http.services.${STACK_NAME?Variable not set}-traefik-public.loadbalancer.server.port=80
pgadmin:
ports:
- "5050:5050"
flower:
ports:
- "5555:5555"
backend:
ports:
- "8888:8888"
volumes:
- ./backend/app:/app
environment:
- JUPYTER=jupyter lab --ip=0.0.0.0 --allow-root --NotebookApp.custom_display_url=http://127.0.0.1:8888
- SERVER_HOST=http://${DOMAIN?Variable not set}
build:
context: ./backend
dockerfile: backend.dockerfile
args:
INSTALL_DEV: ${INSTALL_DEV-true}
INSTALL_JUPYTER: ${INSTALL_JUPYTER-true}
# command: bash -c "while true; do sleep 1; done" # Infinite loop to keep the container alive, doing nothing
command: /start-reload.sh
labels:
- traefik.enable=true
- traefik.constraint-label-stack=${TRAEFIK_TAG?Variable not set}
- traefik.http.routers.${STACK_NAME?Variable not set}-backend-http.rule=PathPrefix(`/api`) || PathPrefix(`/docs`) || PathPrefix(`/redoc`)
- traefik.http.services.${STACK_NAME?Variable not set}-backend.loadbalancer.server.port=80
celeryworker:
volumes:
- ./backend/app:/app
environment:
- RUN=celery worker -A app.worker -l info -Q main-queue -c 1
- JUPYTER=jupyter lab --ip=0.0.0.0 --allow-root --NotebookApp.custom_display_url=http://127.0.0.1:8888
- SERVER_HOST=http://${DOMAIN?Variable not set}
build:
context: ./backend
dockerfile: celeryworker.dockerfile
args:
INSTALL_DEV: ${INSTALL_DEV-true}
INSTALL_JUPYTER: ${INSTALL_JUPYTER-true}
frontend:
build:
context: ./frontend
args:
FRONTEND_ENV: dev
labels:
- traefik.enable=true
- traefik.constraint-label-stack=${TRAEFIK_TAG?Variable not set}
- traefik.http.routers.${STACK_NAME?Variable not set}-frontend-http.rule=PathPrefix(`/`)
- traefik.http.services.${STACK_NAME?Variable not set}-frontend.loadbalancer.server.port=80
networks:
traefik-public:
# For local dev, don't expect an external Traefik network
external: false
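
Routing in this override is label-driven: Traefik only picks up containers labeled traefik.enable=true whose traefik.constraint-label-stack matches ${TRAEFIK_TAG}, and each http.routers rule (Host or PathPrefix) forwards to the port declared on the matching loadbalancer service.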

Some files were not shown because too many files have changed in this diff