mirror of https://github.com/hoernschen/dendrite.git (synced 2024-12-27 07:28:27 +00:00)

Vendor github.com/mattes/migrate

parent 3c543bba54
commit a387b77e0d

161 changed files with 9586 additions and 0 deletions
vendor/manifest (vendored, 6 lines changed)
@@ -150,6 +150,12 @@
			"revision": "8b1c8ab81986c1ce7f06a52fce48f4a1156b66ee",
			"branch": "master"
		},
		{
			"importpath": "github.com/mattes/migrate",
			"repository": "https://github.com/mattes/migrate",
			"revision": "69472d5f5cdca0fb2766d8d86f63cb2e78e1d869",
			"branch": "master"
		},
		{
			"importpath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
			"repository": "https://github.com/matttproud/golang_protobuf_extensions",
vendor/src/github.com/mattes/migrate/CONTRIBUTING.md (vendored, new file, 22 lines)
@@ -0,0 +1,22 @@
# Development, Testing and Contributing

1. Make sure you have a running Docker daemon
   (Install for [MacOS](https://docs.docker.com/docker-for-mac/))
2. Fork this repo and `git clone` it to `$GOPATH/src/github.com/%you%/migrate`
3. `make rewrite-import-paths` to update imports to your local fork
4. Confirm tests are working: `make test-short`
5. Write awesome code ...
6. `make test` to run all tests against all database versions
7. `make restore-import-paths` to restore import paths
8. Push your code and open a Pull Request

Some more helpful commands:

* You can specify which database/source tests to run:
  `make test-short SOURCE='file go-bindata' DATABASE='postgres cassandra'`
* After `make test`, run `make html-coverage`, which opens a shiny test coverage overview.
* Missing imports? `make deps`
* `make build-cli` builds the CLI in directory `cli/build/`.
* `make list-external-deps` lists all external dependencies for each package.
* `make docs && make open-docs` opens godoc in your browser, `make kill-docs` kills the godoc server.
  Repeatedly call `make docs` to refresh the server.
vendor/src/github.com/mattes/migrate/FAQ.md (vendored, new file, 67 lines)
@@ -0,0 +1,67 @@
# FAQ

#### How is the code base structured?
```
/          package migrate (the heart of everything)
/cli       the CLI wrapper
/database  database driver; subdirectories have the actual driver implementations
/source    source driver; subdirectories have the actual driver implementations
```

#### Why is there no `source/driver.go:Last()`?
It's not needed. And unless the source has a "native" way to read a directory in reversed order,
it might be expensive to do a full directory scan in order to get the last element.

#### What is a NilMigration? NilVersion?
NilMigration defines a migration without a body. NilVersion is defined as const -1.

#### What is the difference between uint(version) and int(targetVersion)?
version refers to an existing migration version coming from a source and therefore can never be negative.
targetVersion can either be a version OR represent a NilVersion, which equals -1.

#### What's the difference between Next/Previous and Up/Down?
```
1_first_migration.up.extension           next ->  2_second_migration.up.extension ...
1_first_migration.down.extension  <- previous     2_second_migration.down.extension ...
```

#### Why two separate files (up and down) for a migration?
It makes all of our lives easier. No new markup/syntax to learn for users,
and existing database utility tools continue to work as expected.

#### How many migrations can migrate handle?
Whatever the maximum positive signed integer value is for your platform.
For 32 bit it would be 2,147,483,647 migrations. Migrate only keeps references to
the currently run and pre-fetched migrations in memory. Please note that some
source drivers need to build a full "directory" tree first, which puts some
heat on the memory consumption.

#### Are the table tests in migrate_test.go bloated?
Yes and no. There are duplicate test cases for sure, but they don't hurt here. In fact
the tests are very visual now and might help new users understand expected behaviors quickly.
Migrating from version x to y, where y is the last migration? Just check out the test for
that particular case and know what's going on instantly.

#### What is Docker being used for?
Only for testing. See [testing/docker.go](testing/docker.go)

#### Why not just use docker-compose?
It doesn't give us enough runtime control for testing. We want to be able to bring up containers fast
and whenever we want, not just once at the beginning of all tests.

#### Can I maintain my driver in my own repository?
Yes, technically that's possible. We want to encourage you to contribute your driver to this repository though.
The driver's functionality is dictated by migrate's interfaces. That means there should really
just be one driver for a database/source. We want to prevent a future where several drivers doing the exact same thing,
just implemented a bit differently, co-exist somewhere on GitHub. If users have to do research first to find the
"best" available driver for a database in order to get started, we would have failed as an open source community.

#### Can I mix multiple sources during a batch of migrations?
No.

#### What does "dirty" database mean?
Before a migration runs, each database sets a dirty flag. Execution stops if a migration fails and the dirty state persists,
which prevents attempts to run more migrations on top of a failed migration. You need to manually fix the error
and then "force" the expected version.
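For reference, the same recovery the CLI's `force` command performs can be done from Go. A minimal sketch, assuming a local Postgres URL, migrations in `./migrations`, and that the failed schema change has already been repaired by hand, of clearing the dirty flag with `Force`:

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Placeholder URLs: adjust the source and database for your setup.
	m, err := migrate.New(
		"file://migrations",
		"postgres://localhost:5432/database?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	v, dirty, err := m.Version()
	if err != nil {
		log.Fatal(err)
	}

	if dirty {
		// The schema was fixed manually; force the expected version so the
		// next run starts from a clean, non-dirty state.
		if err := m.Force(int(v)); err != nil {
			log.Fatal(err)
		}
	}
}
```

Forcing only rewrites the recorded version ("Set version V but don't run migration"); it never replays the failed migration, which is why the manual fix has to happen first.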
vendor/src/github.com/mattes/migrate/LICENSE (vendored, new file, 23 lines)
@@ -0,0 +1,23 @@
The MIT License (MIT)

Copyright (c) 2016 Matthias Kadenbach

https://github.com/mattes/migrate

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
vendor/src/github.com/mattes/migrate/MIGRATIONS.md (vendored, new file, 81 lines)
@@ -0,0 +1,81 @@
# Migrations

## Migration Filename Format

A single logical migration is represented as two separate migration files, one
to migrate "up" to the specified version from the previous version, and a second
to migrate back "down" to the previous version. These migrations can be provided
by any one of the supported [migration sources](./README.md#migration-sources).

The ordering and direction of the migration files is determined by the filenames
used for them. `migrate` expects the filenames of migrations to have the format:

    {version}_{title}.up.{extension}
    {version}_{title}.down.{extension}

The `title` of each migration is unused, and is only for readability. Similarly,
the `extension` of the migration files is not checked by the library, and should
be an appropriate format for the database in use (`.sql` for SQL variants, for
instance).

Versions of migrations may be represented as any 64 bit unsigned integer.
All migrations are applied upward in order of increasing version number, and
downward by decreasing version number.

Common versioning schemes include incrementing integers:

    1_initialize_schema.down.sql
    1_initialize_schema.up.sql
    2_add_table.down.sql
    2_add_table.up.sql
    ...

Or timestamps at an appropriate resolution:

    1500360784_initialize_schema.down.sql
    1500360784_initialize_schema.up.sql
    1500445949_add_table.down.sql
    1500445949_add_table.up.sql
    ...

But any scheme resulting in distinct, incrementing integers as versions is valid.

It is suggested that the version number of corresponding `up` and `down` migration
files be equivalent for clarity, but they are allowed to differ so long as the
relative ordering of the migrations is preserved.

The migration files are permitted to be empty, so in the event that a migration
is a no-op or is irreversible, it is recommended to still include both migration
files, and either leave them empty or add a comment as appropriate.

## Migration Content Format

The format of the migration files themselves varies between database systems.
Different databases have different semantics around schema changes and when and
how they are allowed to occur (for instance, if schema changes can occur within
a transaction).

As such, the `migrate` library has little to no checking around the format of
migration sources. The migration files are generally processed directly by the
drivers as raw operations.

## Reversibility of Migrations

Best practice for writing schema migrations is that all migrations should be
reversible. It should in theory be possible to run migrations down and back up
through any and all versions with the state being fully cleaned and recreated
by doing so.

By adhering to this recommended practice, development and deployment of new code
is cleaner and easier (cleaning database state for a new feature should be as
easy as migrating down to a prior version, and back up to the latest).

As opposed to some other migration libraries, `migrate` represents up and down
migrations as separate files. This prevents any non-standard file syntax from
being introduced which may result in unintended behavior or errors, depending
on what database is processing the file.

While it is technically possible for an up or down migration to exist on its own
without an equivalently versioned counterpart, it is strongly recommended to
always include a down migration which cleans up the state of the corresponding
up migration.
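To make the naming rule concrete, here is a small, self-contained sketch (an illustration only; the real source drivers ship their own parsers) that matches the `{version}_{title}.up/.down.{extension}` pattern described above:

```go
package main

import (
	"fmt"
	"regexp"
)

// migrationFilename is an illustrative pattern for the documented
// {version}_{title}.{up|down}.{extension} convention.
var migrationFilename = regexp.MustCompile(`^([0-9]+)_(.+)\.(up|down)\.(.+)$`)

func main() {
	for _, name := range []string{
		"1481574547_create_users_table.up.sql",
		"1481574547_create_users_table.down.sql",
		"not_a_migration.sql",
	} {
		if m := migrationFilename.FindStringSubmatch(name); m != nil {
			fmt.Printf("version=%s title=%s direction=%s ext=%s\n", m[1], m[2], m[3], m[4])
		} else {
			fmt.Printf("%s: does not match the expected format\n", name)
		}
	}
}
```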
vendor/src/github.com/mattes/migrate/Makefile (vendored, new file, 123 lines)
@@ -0,0 +1,123 @@
SOURCE ?= file go-bindata github aws-s3 google-cloud-storage
|
||||
DATABASE ?= postgres mysql redshift cassandra sqlite3 spanner cockroachdb clickhouse
|
||||
VERSION ?= $(shell git describe --tags 2>/dev/null | cut -c 2-)
|
||||
TEST_FLAGS ?=
|
||||
REPO_OWNER ?= $(shell cd .. && basename "$$(pwd)")
|
||||
|
||||
|
||||
build-cli: clean
|
||||
-mkdir ./cli/build
|
||||
cd ./cli && CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -o build/migrate.linux-amd64 -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
|
||||
cd ./cli && CGO_ENABLED=1 GOOS=darwin GOARCH=amd64 go build -a -o build/migrate.darwin-amd64 -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
|
||||
cd ./cli && CGO_ENABLED=1 GOOS=windows GOARCH=amd64 go build -a -o build/migrate.windows-amd64.exe -ldflags='-X main.Version=$(VERSION)' -tags '$(DATABASE) $(SOURCE)' .
|
||||
cd ./cli/build && find . -name 'migrate*' | xargs -I{} tar czf {}.tar.gz {}
|
||||
cd ./cli/build && shasum -a 256 * > sha256sum.txt
|
||||
cat ./cli/build/sha256sum.txt
|
||||
|
||||
|
||||
clean:
|
||||
-rm -r ./cli/build
|
||||
|
||||
|
||||
test-short:
|
||||
make test-with-flags --ignore-errors TEST_FLAGS='-short'
|
||||
|
||||
|
||||
test:
|
||||
@-rm -r .coverage
|
||||
@mkdir .coverage
|
||||
make test-with-flags TEST_FLAGS='-v -race -covermode atomic -coverprofile .coverage/_$$(RAND).txt -bench=. -benchmem'
|
||||
@echo 'mode: atomic' > .coverage/combined.txt
|
||||
@cat .coverage/*.txt | grep -v 'mode: atomic' >> .coverage/combined.txt
|
||||
|
||||
|
||||
test-with-flags:
|
||||
@echo SOURCE: $(SOURCE)
|
||||
@echo DATABASE: $(DATABASE)
|
||||
|
||||
@go test $(TEST_FLAGS) .
|
||||
@go test $(TEST_FLAGS) ./cli/...
|
||||
@go test $(TEST_FLAGS) ./testing/...
|
||||
|
||||
@echo -n '$(SOURCE)' | tr -s ' ' '\n' | xargs -I{} go test $(TEST_FLAGS) ./source/{}
|
||||
@go test $(TEST_FLAGS) ./source/testing/...
|
||||
@go test $(TEST_FLAGS) ./source/stub/...
|
||||
|
||||
@echo -n '$(DATABASE)' | tr -s ' ' '\n' | xargs -I{} go test $(TEST_FLAGS) ./database/{}
|
||||
@go test $(TEST_FLAGS) ./database/testing/...
|
||||
@go test $(TEST_FLAGS) ./database/stub/...
|
||||
|
||||
|
||||
kill-orphaned-docker-containers:
|
||||
docker rm -f $(shell docker ps -aq --filter label=migrate_test)
|
||||
|
||||
|
||||
html-coverage:
|
||||
go tool cover -html=.coverage/combined.txt
|
||||
|
||||
|
||||
deps:
|
||||
-go get -v -u ./...
|
||||
-go test -v -i ./...
|
||||
# TODO: why is this not being fetched with the command above?
|
||||
-go get -u github.com/fsouza/fake-gcs-server/fakestorage
|
||||
|
||||
|
||||
list-external-deps:
|
||||
$(call external_deps,'.')
|
||||
$(call external_deps,'./cli/...')
|
||||
$(call external_deps,'./testing/...')
|
||||
|
||||
$(foreach v, $(SOURCE), $(call external_deps,'./source/$(v)/...'))
|
||||
$(call external_deps,'./source/testing/...')
|
||||
$(call external_deps,'./source/stub/...')
|
||||
|
||||
$(foreach v, $(DATABASE), $(call external_deps,'./database/$(v)/...'))
|
||||
$(call external_deps,'./database/testing/...')
|
||||
$(call external_deps,'./database/stub/...')
|
||||
|
||||
|
||||
restore-import-paths:
|
||||
find . -name '*.go' -type f -execdir sed -i '' s%\"github.com/$(REPO_OWNER)/migrate%\"github.com/mattes/migrate%g '{}' \;
|
||||
|
||||
|
||||
rewrite-import-paths:
|
||||
find . -name '*.go' -type f -execdir sed -i '' s%\"github.com/mattes/migrate%\"github.com/$(REPO_OWNER)/migrate%g '{}' \;
|
||||
|
||||
|
||||
# example: fswatch -0 --exclude .godoc.pid --event Updated . | xargs -0 -n1 -I{} make docs
|
||||
docs:
|
||||
-make kill-docs
|
||||
nohup godoc -play -http=127.0.0.1:6064 </dev/null >/dev/null 2>&1 & echo $$! > .godoc.pid
|
||||
cat .godoc.pid
|
||||
|
||||
|
||||
kill-docs:
|
||||
@cat .godoc.pid
|
||||
kill -9 $$(cat .godoc.pid)
|
||||
rm .godoc.pid
|
||||
|
||||
|
||||
open-docs:
|
||||
open http://localhost:6064/pkg/github.com/$(REPO_OWNER)/migrate
|
||||
|
||||
|
||||
# example: make release V=0.0.0
|
||||
release:
|
||||
git tag v$(V)
|
||||
@read -p "Press enter to confirm and push to origin ..." && git push origin v$(V)
|
||||
|
||||
|
||||
define external_deps
|
||||
@echo '-- $(1)'; go list -f '{{join .Deps "\n"}}' $(1) | grep -v github.com/$(REPO_OWNER)/migrate | xargs go list -f '{{if not .Standard}}{{.ImportPath}}{{end}}'
|
||||
|
||||
endef
|
||||
|
||||
|
||||
.PHONY: build-cli clean test-short test test-with-flags deps html-coverage \
|
||||
restore-import-paths rewrite-import-paths list-external-deps release \
|
||||
docs kill-docs open-docs kill-orphaned-docker-containers
|
||||
|
||||
SHELL = /bin/bash
|
||||
RAND = $(shell echo $$RANDOM)
|
||||
|
vendor/src/github.com/mattes/migrate/README.md (vendored, new file, 140 lines)
@@ -0,0 +1,140 @@
[![Build Status](https://travis-ci.org/mattes/migrate.svg?branch=master)](https://travis-ci.org/mattes/migrate)
[![GoDoc](https://godoc.org/github.com/mattes/migrate?status.svg)](https://godoc.org/github.com/mattes/migrate)
[![Coverage Status](https://coveralls.io/repos/github/mattes/migrate/badge.svg?branch=v3.0-prev)](https://coveralls.io/github/mattes/migrate?branch=v3.0-prev)
[![packagecloud.io](https://img.shields.io/badge/deb-packagecloud.io-844fec.svg)](https://packagecloud.io/mattes/migrate?filter=debs)

# migrate

__Database migrations written in Go. Use as [CLI](#cli-usage) or import as [library](#use-in-your-go-project).__

* Migrate reads migrations from [sources](#migration-sources)
  and applies them in the correct order to a [database](#databases).
* Drivers are "dumb", migrate glues everything together and makes sure the logic is bulletproof.
  (Keeps the drivers lightweight, too.)
* Database drivers don't assume things or try to correct user input. When in doubt, fail.

Looking for [v1](https://github.com/mattes/migrate/tree/v1)?

## Databases

Database drivers run migrations. [Add a new database?](database/driver.go)

* [PostgreSQL](database/postgres)
* [Redshift](database/redshift)
* [Ql](database/ql)
* [Cassandra](database/cassandra)
* [SQLite](database/sqlite3)
* [MySQL/ MariaDB](database/mysql)
* [Neo4j](database/neo4j) ([todo #167](https://github.com/mattes/migrate/issues/167))
* [MongoDB](database/mongodb) ([todo #169](https://github.com/mattes/migrate/issues/169))
* [CrateDB](database/crate) ([todo #170](https://github.com/mattes/migrate/issues/170))
* [Shell](database/shell) ([todo #171](https://github.com/mattes/migrate/issues/171))
* [Google Cloud Spanner](database/spanner)
* [CockroachDB](database/cockroachdb)
* [ClickHouse](database/clickhouse)

## Migration Sources

Source drivers read migrations from local or remote sources. [Add a new source?](source/driver.go)

* [Filesystem](source/file) - read from filesystem
* [Go-Bindata](source/go-bindata) - read from embedded binary data ([jteeuwen/go-bindata](https://github.com/jteeuwen/go-bindata))
* [GitHub](source/github) - read from remote GitHub repositories
* [AWS S3](source/aws-s3) - read from Amazon Web Services S3
* [Google Cloud Storage](source/google-cloud-storage) - read from Google Cloud Platform Storage

## CLI usage

* Simple wrapper around this library.
* Handles ctrl+c (SIGINT) gracefully.
* No config search paths, no config files, no magic ENV var injections.

__[CLI Documentation](cli)__

([brew todo #156](https://github.com/mattes/migrate/issues/156))

```
$ brew install migrate --with-postgres
$ migrate -database postgres://localhost:5432/database up 2
```

## Use in your Go project

* API is stable and frozen for this release (v3.x).
* Package migrate has no external dependencies.
* Only import the drivers you need.
  (check [dependency_tree.txt](https://github.com/mattes/migrate/releases) for each driver)
* To help prevent database corruptions, it supports graceful stops via `GracefulStop chan bool`.
* Bring your own logger.
* Uses `io.Reader` streams internally for low memory overhead.
* Thread-safe and no goroutine leaks.

__[Go Documentation](https://godoc.org/github.com/mattes/migrate)__

```go
import (
    "github.com/mattes/migrate"
    _ "github.com/mattes/migrate/database/postgres"
    _ "github.com/mattes/migrate/source/github"
)

func main() {
    m, err := migrate.New(
        "github://mattes:personal-access-token@mattes/migrate_test",
        "postgres://localhost:5432/database?sslmode=enable")
    m.Steps(2)
}
```

Want to use an existing database client?

```go
import (
    "database/sql"
    _ "github.com/lib/pq"
    "github.com/mattes/migrate"
    "github.com/mattes/migrate/database/postgres"
    _ "github.com/mattes/migrate/source/file"
)

func main() {
    db, err := sql.Open("postgres", "postgres://localhost:5432/database?sslmode=enable")
    driver, err := postgres.WithInstance(db, &postgres.Config{})
    m, err := migrate.NewWithDatabaseInstance(
        "file:///migrations",
        "postgres", driver)
    m.Steps(2)
}
```

## Migration files

Each migration has an up and down migration. [Why?](FAQ.md#why-two-separate-files-up-and-down-for-a-migration)

```
1481574547_create_users_table.up.sql
1481574547_create_users_table.down.sql
```

[Best practices: How to write migrations.](MIGRATIONS.md)

## Development and Contributing

Yes, please! [`Makefile`](Makefile) is your friend,
read the [development guide](CONTRIBUTING.md).

Also have a look at the [FAQ](FAQ.md).

---

Looking for alternatives? [https://awesome-go.com/#database](https://awesome-go.com/#database).
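The list above mentions bringing your own logger. Judging by `cli/log.go` further down in this commit, `migrate.Logger` needs a `Printf` and a `Verbose` method; a minimal sketch under that assumption, with placeholder connection strings, of plugging in the standard library logger:

```go
package main

import (
	stdlog "log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/file"
)

// stdLogger adapts the standard library logger to migrate's Logger
// interface (Printf plus Verbose, mirroring cli/log.go).
type stdLogger struct{ verbose bool }

func (l stdLogger) Printf(format string, v ...interface{}) { stdlog.Printf(format, v...) }
func (l stdLogger) Verbose() bool                          { return l.verbose }

func main() {
	// Placeholder source and database URLs.
	m, err := migrate.New("file://migrations", "postgres://localhost:5432/database?sslmode=disable")
	if err != nil {
		stdlog.Fatal(err)
	}
	m.Log = stdLogger{verbose: true}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		stdlog.Fatal(err)
	}
}
```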
vendor/src/github.com/mattes/migrate/cli/README.md (vendored, new file, 113 lines)
@@ -0,0 +1,113 @@
# migrate CLI

## Installation

#### With Go toolchain

```
$ go get -u -d github.com/mattes/migrate/cli github.com/lib/pq
$ go build -tags 'postgres' -o /usr/local/bin/migrate github.com/mattes/migrate/cli
```

Note: This example builds the CLI, which will only work with postgres. In order
to build the CLI for use with other databases, replace the `postgres` build tag
with the appropriate database tag(s) for the databases desired. The tags
correspond to the names of the sub-packages underneath the
[`database`](../database) package.

#### MacOS

([todo #156](https://github.com/mattes/migrate/issues/156))

```
$ brew install migrate --with-postgres
```

#### Linux (*.deb package)

```
$ curl -L https://packagecloud.io/mattes/migrate/gpgkey | apt-key add -
$ echo "deb https://packagecloud.io/mattes/migrate/ubuntu/ xenial main" > /etc/apt/sources.list.d/migrate.list
$ apt-get update
$ apt-get install -y migrate
```

#### Download pre-built binary (Windows, MacOS, or Linux)

[Release Downloads](https://github.com/mattes/migrate/releases)

```
$ curl -L https://github.com/mattes/migrate/releases/download/$version/migrate.$platform-amd64.tar.gz | tar xvz
```

## Usage

```
$ migrate -help
Usage: migrate OPTIONS COMMAND [arg...]
       migrate [ -version | -help ]

Options:
  -source          Location of the migrations (driver://url)
  -path            Shorthand for -source=file://path
  -database        Run migrations against this database (driver://url)
  -prefetch N      Number of migrations to load in advance before executing (default 10)
  -lock-timeout N  Allow N seconds to acquire database lock (default 15)
  -verbose         Print verbose logging
  -version         Print version
  -help            Print usage

Commands:
  create [-ext E] [-dir D] NAME
            Create a set of timestamped up/down migrations titled NAME, in directory D with extension E
  goto V    Migrate to version V
  up [N]    Apply all or N up migrations
  down [N]  Apply all or N down migrations
  drop      Drop everything inside database
  force V   Set version V but don't run migration (ignores dirty state)
  version   Print current migration version
```

So let's say you want to run the first two migrations

```
$ migrate -database postgres://localhost:5432/database up 2
```

If your migrations are hosted on GitHub

```
$ migrate -source github://mattes:personal-access-token@mattes/migrate_test \
    -database postgres://localhost:5432/database down 2
```

The CLI will gracefully stop at a safe point when SIGINT (ctrl+c) is received.
Send SIGKILL for immediate halt.

## Reading CLI arguments from somewhere else

##### ENV variables

```
$ migrate -database "$MY_MIGRATE_DATABASE"
```

##### JSON files

Check out https://stedolan.github.io/jq/

```
$ migrate -database "$(cat config.json | jq '.database')"
```

##### YAML files

```
$ migrate -database "$(cat config/database.yml | ruby -ryaml -e "print YAML.load(STDIN.read)['database']")"
$ migrate -database "$(cat config/database.yml | python -c 'import yaml,sys;print yaml.safe_load(sys.stdin)["database"]')"
```
vendor/src/github.com/mattes/migrate/cli/build_aws-s3.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build aws-s3

package main

import (
	_ "github.com/mattes/migrate/source/aws-s3"
)
vendor/src/github.com/mattes/migrate/cli/build_cassandra.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build cassandra

package main

import (
	_ "github.com/mattes/migrate/database/cassandra"
)
vendor/src/github.com/mattes/migrate/cli/build_clickhouse.go (vendored, new file, 8 lines)
@@ -0,0 +1,8 @@
// +build clickhouse

package main

import (
	_ "github.com/kshvakov/clickhouse"
	_ "github.com/mattes/migrate/database/clickhouse"
)
vendor/src/github.com/mattes/migrate/cli/build_cockroachdb.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build cockroachdb

package main

import (
	_ "github.com/mattes/migrate/database/cockroachdb"
)
vendor/src/github.com/mattes/migrate/cli/build_github.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build github

package main

import (
	_ "github.com/mattes/migrate/source/github"
)
vendor/src/github.com/mattes/migrate/cli/build_go-bindata.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build go-bindata

package main

import (
	_ "github.com/mattes/migrate/source/go-bindata"
)
vendor/src/github.com/mattes/migrate/cli/build_google-cloud-storage.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build google-cloud-storage

package main

import (
	_ "github.com/mattes/migrate/source/google-cloud-storage"
)
vendor/src/github.com/mattes/migrate/cli/build_mysql.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build mysql

package main

import (
	_ "github.com/mattes/migrate/database/mysql"
)
vendor/src/github.com/mattes/migrate/cli/build_postgres.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build postgres

package main

import (
	_ "github.com/mattes/migrate/database/postgres"
)
vendor/src/github.com/mattes/migrate/cli/build_ql.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build ql

package main

import (
	_ "github.com/mattes/migrate/database/ql"
)
vendor/src/github.com/mattes/migrate/cli/build_redshift.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build redshift

package main

import (
	_ "github.com/mattes/migrate/database/redshift"
)
vendor/src/github.com/mattes/migrate/cli/build_spanner.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build spanner

package main

import (
	_ "github.com/mattes/migrate/database/spanner"
)
vendor/src/github.com/mattes/migrate/cli/build_sqlite3.go (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
// +build sqlite3

package main

import (
	_ "github.com/mattes/migrate/database/sqlite3"
)
vendor/src/github.com/mattes/migrate/cli/commands.go (vendored, new file, 96 lines)
@@ -0,0 +1,96 @@
package main
|
||||
|
||||
import (
|
||||
"github.com/mattes/migrate"
|
||||
_ "github.com/mattes/migrate/database/stub" // TODO remove again
|
||||
_ "github.com/mattes/migrate/source/file"
|
||||
"os"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
func createCmd(dir string, timestamp int64, name string, ext string) {
|
||||
base := fmt.Sprintf("%v%v_%v.", dir, timestamp, name)
|
||||
os.MkdirAll(dir, os.ModePerm)
|
||||
createFile(base + "up" + ext)
|
||||
createFile(base + "down" + ext)
|
||||
}
|
||||
|
||||
func createFile(fname string) {
|
||||
if _, err := os.Create(fname); err != nil {
|
||||
log.fatalErr(err)
|
||||
}
|
||||
}
|
||||
|
||||
func gotoCmd(m *migrate.Migrate, v uint) {
|
||||
if err := m.Migrate(v); err != nil {
|
||||
if err != migrate.ErrNoChange {
|
||||
log.fatalErr(err)
|
||||
} else {
|
||||
log.Println(err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func upCmd(m *migrate.Migrate, limit int) {
|
||||
if limit >= 0 {
|
||||
if err := m.Steps(limit); err != nil {
|
||||
if err != migrate.ErrNoChange {
|
||||
log.fatalErr(err)
|
||||
} else {
|
||||
log.Println(err)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if err := m.Up(); err != nil {
|
||||
if err != migrate.ErrNoChange {
|
||||
log.fatalErr(err)
|
||||
} else {
|
||||
log.Println(err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func downCmd(m *migrate.Migrate, limit int) {
|
||||
if limit >= 0 {
|
||||
if err := m.Steps(-limit); err != nil {
|
||||
if err != migrate.ErrNoChange {
|
||||
log.fatalErr(err)
|
||||
} else {
|
||||
log.Println(err)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if err := m.Down(); err != nil {
|
||||
if err != migrate.ErrNoChange {
|
||||
log.fatalErr(err)
|
||||
} else {
|
||||
log.Println(err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func dropCmd(m *migrate.Migrate) {
|
||||
if err := m.Drop(); err != nil {
|
||||
log.fatalErr(err)
|
||||
}
|
||||
}
|
||||
|
||||
func forceCmd(m *migrate.Migrate, v int) {
|
||||
if err := m.Force(v); err != nil {
|
||||
log.fatalErr(err)
|
||||
}
|
||||
}
|
||||
|
||||
func versionCmd(m *migrate.Migrate) {
|
||||
v, dirty, err := m.Version()
|
||||
if err != nil {
|
||||
log.fatalErr(err)
|
||||
}
|
||||
if dirty {
|
||||
log.Printf("%v (dirty)\n", v)
|
||||
} else {
|
||||
log.Println(v)
|
||||
}
|
||||
}
|
vendor/src/github.com/mattes/migrate/cli/examples/Dockerfile (vendored, new file, 12 lines)
@@ -0,0 +1,12 @@
FROM ubuntu:xenial

RUN apt-get update && \
    apt-get install -y curl apt-transport-https

RUN curl -L https://packagecloud.io/mattes/migrate/gpgkey | apt-key add - && \
    echo "deb https://packagecloud.io/mattes/migrate/ubuntu/ xenial main" > /etc/apt/sources.list.d/migrate.list && \
    apt-get update && \
    apt-get install -y migrate

RUN migrate -version
vendor/src/github.com/mattes/migrate/cli/log.go (vendored, new file, 45 lines)
@@ -0,0 +1,45 @@
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
logpkg "log"
|
||||
"os"
|
||||
)
|
||||
|
||||
type Log struct {
|
||||
verbose bool
|
||||
}
|
||||
|
||||
func (l *Log) Printf(format string, v ...interface{}) {
|
||||
if l.verbose {
|
||||
logpkg.Printf(format, v...)
|
||||
} else {
|
||||
fmt.Fprintf(os.Stderr, format, v...)
|
||||
}
|
||||
}
|
||||
|
||||
func (l *Log) Println(args ...interface{}) {
|
||||
if l.verbose {
|
||||
logpkg.Println(args...)
|
||||
} else {
|
||||
fmt.Fprintln(os.Stderr, args...)
|
||||
}
|
||||
}
|
||||
|
||||
func (l *Log) Verbose() bool {
|
||||
return l.verbose
|
||||
}
|
||||
|
||||
func (l *Log) fatalf(format string, v ...interface{}) {
|
||||
l.Printf(format, v...)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
func (l *Log) fatal(args ...interface{}) {
|
||||
l.Println(args...)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
func (l *Log) fatalErr(err error) {
|
||||
l.fatal("error:", err)
|
||||
}
|
vendor/src/github.com/mattes/migrate/cli/main.go (vendored, new file, 237 lines)
@@ -0,0 +1,237 @@
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"fmt"
|
||||
"os"
|
||||
"os/signal"
|
||||
"strconv"
|
||||
"strings"
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
"github.com/mattes/migrate"
|
||||
)
|
||||
|
||||
// set main log
|
||||
var log = &Log{}
|
||||
|
||||
func main() {
|
||||
helpPtr := flag.Bool("help", false, "")
|
||||
versionPtr := flag.Bool("version", false, "")
|
||||
verbosePtr := flag.Bool("verbose", false, "")
|
||||
prefetchPtr := flag.Uint("prefetch", 10, "")
|
||||
lockTimeoutPtr := flag.Uint("lock-timeout", 15, "")
|
||||
pathPtr := flag.String("path", "", "")
|
||||
databasePtr := flag.String("database", "", "")
|
||||
sourcePtr := flag.String("source", "", "")
|
||||
|
||||
flag.Usage = func() {
|
||||
fmt.Fprint(os.Stderr,
|
||||
`Usage: migrate OPTIONS COMMAND [arg...]
|
||||
migrate [ -version | -help ]
|
||||
|
||||
Options:
|
||||
-source Location of the migrations (driver://url)
|
||||
-path Shorthand for -source=file://path
|
||||
-database Run migrations against this database (driver://url)
|
||||
-prefetch N Number of migrations to load in advance before executing (default 10)
|
||||
-lock-timeout N Allow N seconds to acquire database lock (default 15)
|
||||
-verbose Print verbose logging
|
||||
-version Print version
|
||||
-help Print usage
|
||||
|
||||
Commands:
|
||||
create [-ext E] [-dir D] NAME
|
||||
Create a set of timestamped up/down migrations titled NAME, in directory D with extension E
|
||||
goto V Migrate to version V
|
||||
up [N] Apply all or N up migrations
|
||||
down [N] Apply all or N down migrations
|
||||
drop      Drop everything inside database
|
||||
force V Set version V but don't run migration (ignores dirty state)
|
||||
version Print current migration version
|
||||
`)
|
||||
}
|
||||
|
||||
flag.Parse()
|
||||
|
||||
// initialize logger
|
||||
log.verbose = *verbosePtr
|
||||
|
||||
// show cli version
|
||||
if *versionPtr {
|
||||
fmt.Fprintln(os.Stderr, Version)
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
// show help
|
||||
if *helpPtr {
|
||||
flag.Usage()
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
// translate -path into -source if given
|
||||
if *sourcePtr == "" && *pathPtr != "" {
|
||||
*sourcePtr = fmt.Sprintf("file://%v", *pathPtr)
|
||||
}
|
||||
|
||||
// initialize migrate
|
||||
// don't catch migraterErr here and let each command decide
|
||||
// how it wants to handle the error
|
||||
migrater, migraterErr := migrate.New(*sourcePtr, *databasePtr)
|
||||
defer func() {
|
||||
if migraterErr == nil {
|
||||
migrater.Close()
|
||||
}
|
||||
}()
|
||||
if migraterErr == nil {
|
||||
migrater.Log = log
|
||||
migrater.PrefetchMigrations = *prefetchPtr
|
||||
migrater.LockTimeout = time.Duration(int64(*lockTimeoutPtr)) * time.Second
|
||||
|
||||
// handle Ctrl+c
|
||||
signals := make(chan os.Signal, 1)
|
||||
signal.Notify(signals, syscall.SIGINT)
|
||||
go func() {
|
||||
for range signals {
|
||||
log.Println("Stopping after this running migration ...")
|
||||
migrater.GracefulStop <- true
|
||||
return
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
startTime := time.Now()
|
||||
|
||||
switch flag.Arg(0) {
|
||||
case "create":
|
||||
args := flag.Args()[1:]
|
||||
|
||||
createFlagSet := flag.NewFlagSet("create", flag.ExitOnError)
|
||||
extPtr := createFlagSet.String("ext", "", "File extension")
|
||||
dirPtr := createFlagSet.String("dir", "", "Directory to place file in (default: current working directory)")
|
||||
createFlagSet.Parse(args)
|
||||
|
||||
if createFlagSet.NArg() == 0 {
|
||||
log.fatal("error: please specify name")
|
||||
}
|
||||
name := createFlagSet.Arg(0)
|
||||
|
||||
if *extPtr != "" {
|
||||
*extPtr = "." + strings.TrimPrefix(*extPtr, ".")
|
||||
}
|
||||
if *dirPtr != "" {
|
||||
*dirPtr = strings.Trim(*dirPtr, "/") + "/"
|
||||
}
|
||||
|
||||
timestamp := startTime.Unix()
|
||||
|
||||
createCmd(*dirPtr, timestamp, name, *extPtr)
|
||||
|
||||
case "goto":
|
||||
if migraterErr != nil {
|
||||
log.fatalErr(migraterErr)
|
||||
}
|
||||
|
||||
if flag.Arg(1) == "" {
|
||||
log.fatal("error: please specify version argument V")
|
||||
}
|
||||
|
||||
v, err := strconv.ParseUint(flag.Arg(1), 10, 64)
|
||||
if err != nil {
|
||||
log.fatal("error: can't read version argument V")
|
||||
}
|
||||
|
||||
gotoCmd(migrater, uint(v))
|
||||
|
||||
if log.verbose {
|
||||
log.Println("Finished after", time.Now().Sub(startTime))
|
||||
}
|
||||
|
||||
case "up":
|
||||
if migraterErr != nil {
|
||||
log.fatalErr(migraterErr)
|
||||
}
|
||||
|
||||
limit := -1
|
||||
if flag.Arg(1) != "" {
|
||||
n, err := strconv.ParseUint(flag.Arg(1), 10, 64)
|
||||
if err != nil {
|
||||
log.fatal("error: can't read limit argument N")
|
||||
}
|
||||
limit = int(n)
|
||||
}
|
||||
|
||||
upCmd(migrater, limit)
|
||||
|
||||
if log.verbose {
|
||||
log.Println("Finished after", time.Now().Sub(startTime))
|
||||
}
|
||||
|
||||
case "down":
|
||||
if migraterErr != nil {
|
||||
log.fatalErr(migraterErr)
|
||||
}
|
||||
|
||||
limit := -1
|
||||
if flag.Arg(1) != "" {
|
||||
n, err := strconv.ParseUint(flag.Arg(1), 10, 64)
|
||||
if err != nil {
|
||||
log.fatal("error: can't read limit argument N")
|
||||
}
|
||||
limit = int(n)
|
||||
}
|
||||
|
||||
downCmd(migrater, limit)
|
||||
|
||||
if log.verbose {
|
||||
log.Println("Finished after", time.Now().Sub(startTime))
|
||||
}
|
||||
|
||||
case "drop":
|
||||
if migraterErr != nil {
|
||||
log.fatalErr(migraterErr)
|
||||
}
|
||||
|
||||
dropCmd(migrater)
|
||||
|
||||
if log.verbose {
|
||||
log.Println("Finished after", time.Now().Sub(startTime))
|
||||
}
|
||||
|
||||
case "force":
|
||||
if migraterErr != nil {
|
||||
log.fatalErr(migraterErr)
|
||||
}
|
||||
|
||||
if flag.Arg(1) == "" {
|
||||
log.fatal("error: please specify version argument V")
|
||||
}
|
||||
|
||||
v, err := strconv.ParseInt(flag.Arg(1), 10, 64)
|
||||
if err != nil {
|
||||
log.fatal("error: can't read version argument V")
|
||||
}
|
||||
|
||||
if v < -1 {
|
||||
log.fatal("error: argument V must be >= -1")
|
||||
}
|
||||
|
||||
forceCmd(migrater, int(v))
|
||||
|
||||
if log.verbose {
|
||||
log.Println("Finished after", time.Now().Sub(startTime))
|
||||
}
|
||||
|
||||
case "version":
|
||||
if migraterErr != nil {
|
||||
log.fatalErr(migraterErr)
|
||||
}
|
||||
|
||||
versionCmd(migrater)
|
||||
|
||||
default:
|
||||
flag.Usage()
|
||||
os.Exit(0)
|
||||
}
|
||||
}
|
vendor/src/github.com/mattes/migrate/cli/version.go (vendored, new file, 4 lines)
@@ -0,0 +1,4 @@
package main

// Version is set in Makefile with build flags
var Version = "dev"
vendor/src/github.com/mattes/migrate/database/cassandra/README.md (vendored, new file, 31 lines)
@@ -0,0 +1,31 @@
# Cassandra

* The drop command will not work on Cassandra 2.X because it relies on the
  system_schema table, which only ships with 3.X
* Other commands should work properly but are **not tested**


## Usage
`cassandra://host:port/keyspace?param1=value&param2=value2`


| URL Query | Default value | Description |
|------------|-------------|-----------|
| `x-migrations-table` | schema_migrations | Name of the migrations table |
| `port` | 9042 | The port to bind to |
| `consistency` | ALL | Migration consistency |
| `protocol` | | Cassandra protocol version (3 or 4) |
| `timeout` | 1 minute | Migration timeout |
| `username` | nil | Username to use when authenticating. |
| `password` | nil | Password to use when authenticating. |


`timeout` is parsed using [time.ParseDuration(s string)](https://golang.org/pkg/time/#ParseDuration)


## Upgrading from v1

1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
3. Download and install the latest migrate version.
4. Force the current migration version with `migrate force <current_version>`.
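For reference, a minimal sketch of wiring this driver up from Go, assuming a single-node cluster on the default port 9042, a `testks` keyspace, and migrations in `./migrations`; the query options follow the table above:

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/cassandra"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Host, keyspace and options are placeholders; adjust for your cluster.
	// consistency and timeout are parsed as documented above.
	m, err := migrate.New(
		"file://migrations",
		"cassandra://localhost:9042/testks?consistency=QUORUM&timeout=2m")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```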
vendor/src/github.com/mattes/migrate/database/cassandra/cassandra.go (vendored, new file, 228 lines)
@@ -0,0 +1,228 @@
package cassandra
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
nurl "net/url"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"github.com/gocql/gocql"
|
||||
"github.com/mattes/migrate/database"
|
||||
)
|
||||
|
||||
func init() {
|
||||
db := new(Cassandra)
|
||||
database.Register("cassandra", db)
|
||||
}
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
var dbLocked = false
|
||||
|
||||
var (
|
||||
ErrNilConfig = fmt.Errorf("no config")
|
||||
ErrNoKeyspace = fmt.Errorf("no keyspace provided")
|
||||
ErrDatabaseDirty = fmt.Errorf("database is dirty")
|
||||
)
|
||||
|
||||
type Config struct {
|
||||
MigrationsTable string
|
||||
KeyspaceName string
|
||||
}
|
||||
|
||||
type Cassandra struct {
|
||||
session *gocql.Session
|
||||
isLocked bool
|
||||
|
||||
// Open and WithInstance need to guarantee that config is never nil
|
||||
config *Config
|
||||
}
|
||||
|
||||
func (p *Cassandra) Open(url string) (database.Driver, error) {
|
||||
u, err := nurl.Parse(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Check for missing mandatory attributes
|
||||
if len(u.Path) == 0 {
|
||||
return nil, ErrNoKeyspace
|
||||
}
|
||||
|
||||
migrationsTable := u.Query().Get("x-migrations-table")
|
||||
if len(migrationsTable) == 0 {
|
||||
migrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
p.config = &Config{
|
||||
KeyspaceName: u.Path,
|
||||
MigrationsTable: migrationsTable,
|
||||
}
|
||||
|
||||
cluster := gocql.NewCluster(u.Host)
|
||||
cluster.Keyspace = u.Path[1:len(u.Path)]
|
||||
cluster.Consistency = gocql.All
|
||||
cluster.Timeout = 1 * time.Minute
|
||||
|
||||
if len(u.Query().Get("username")) > 0 && len(u.Query().Get("password")) > 0 {
|
||||
authenticator := gocql.PasswordAuthenticator{
|
||||
Username: u.Query().Get("username"),
|
||||
Password: u.Query().Get("password"),
|
||||
}
|
||||
cluster.Authenticator = authenticator
|
||||
}
|
||||
|
||||
// Retrieve query string configuration
|
||||
if len(u.Query().Get("consistency")) > 0 {
|
||||
var consistency gocql.Consistency
|
||||
consistency, err = parseConsistency(u.Query().Get("consistency"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
cluster.Consistency = consistency
|
||||
}
|
||||
if len(u.Query().Get("protocol")) > 0 {
|
||||
var protoversion int
|
||||
protoversion, err = strconv.Atoi(u.Query().Get("protocol"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cluster.ProtoVersion = protoversion
|
||||
}
|
||||
if len(u.Query().Get("timeout")) > 0 {
|
||||
var timeout time.Duration
|
||||
timeout, err = time.ParseDuration(u.Query().Get("timeout"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cluster.Timeout = timeout
|
||||
}
|
||||
|
||||
p.session, err = cluster.CreateSession()
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := p.ensureVersionTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return p, nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) Close() error {
|
||||
p.session.Close()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) Lock() error {
|
||||
if dbLocked {
|
||||
return database.ErrLocked
|
||||
}
|
||||
dbLocked = true
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) Unlock() error {
|
||||
dbLocked = false
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) Run(migration io.Reader) error {
|
||||
migr, err := ioutil.ReadAll(migration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// run migration
|
||||
query := string(migr[:])
|
||||
if err := p.session.Query(query).Exec(); err != nil {
|
||||
// TODO: cast to Cassandra error and get line number
|
||||
return database.Error{OrigErr: err, Err: "migration failed", Query: migr}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Cassandra) SetVersion(version int, dirty bool) error {
|
||||
query := `TRUNCATE "` + p.config.MigrationsTable + `"`
|
||||
if err := p.session.Query(query).Exec(); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
if version >= 0 {
|
||||
query = `INSERT INTO "` + p.config.MigrationsTable + `" (version, dirty) VALUES (?, ?)`
|
||||
if err := p.session.Query(query, version, dirty).Exec(); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Return current keyspace version
|
||||
func (p *Cassandra) Version() (version int, dirty bool, err error) {
|
||||
query := `SELECT version, dirty FROM "` + p.config.MigrationsTable + `" LIMIT 1`
|
||||
err = p.session.Query(query).Scan(&version, &dirty)
|
||||
switch {
|
||||
case err == gocql.ErrNotFound:
|
||||
return database.NilVersion, false, nil
|
||||
|
||||
case err != nil:
|
||||
if _, ok := err.(*gocql.Error); ok {
|
||||
return database.NilVersion, false, nil
|
||||
}
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
|
||||
default:
|
||||
return version, dirty, nil
|
||||
}
|
||||
}
|
||||
|
||||
func (p *Cassandra) Drop() error {
|
||||
// select all tables in current schema
|
||||
query := fmt.Sprintf(`SELECT table_name from system_schema.tables WHERE keyspace_name='%s'`, p.config.KeyspaceName[1:]) // Skip '/' character
|
||||
iter := p.session.Query(query).Iter()
|
||||
var tableName string
|
||||
for iter.Scan(&tableName) {
|
||||
err := p.session.Query(fmt.Sprintf(`DROP TABLE %s`, tableName)).Exec()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// Re-create the version table
|
||||
if err := p.ensureVersionTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Ensure version table exists
|
||||
func (p *Cassandra) ensureVersionTable() error {
|
||||
err := p.session.Query(fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s (version bigint, dirty boolean, PRIMARY KEY(version))", p.config.MigrationsTable)).Exec()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, _, err = p.Version(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// parseConsistency wraps gocql.ParseConsistency
// to return an error instead of panicking.
|
||||
func parseConsistency(consistencyStr string) (consistency gocql.Consistency, err error) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
var ok bool
|
||||
err, ok = r.(error)
|
||||
if !ok {
|
||||
err = fmt.Errorf("Failed to parse consistency \"%s\": %v", consistencyStr, r)
|
||||
}
|
||||
}
|
||||
}()
|
||||
consistency = gocql.ParseConsistency(consistencyStr)
|
||||
|
||||
return consistency, nil
|
||||
}
|
vendor/src/github.com/mattes/migrate/database/cassandra/cassandra_test.go (vendored, new file, 53 lines)
@@ -0,0 +1,53 @@
package cassandra
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
dt "github.com/mattes/migrate/database/testing"
|
||||
mt "github.com/mattes/migrate/testing"
|
||||
"github.com/gocql/gocql"
|
||||
"time"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
var versions = []mt.Version{
|
||||
{Image: "cassandra:3.0.10"},
|
||||
{Image: "cassandra:3.0"},
|
||||
}
|
||||
|
||||
func isReady(i mt.Instance) bool {
|
||||
// Cassandra exposes 5 ports (7000, 7001, 7199, 9042 & 9160)
|
||||
// We only need the port bound to 9042, but we can only access the first one
// through 'i.Port()' (which calls DockerContainer.firstPortMapping()),
// so we need the port mapping to retrieve the correct port number bound to 9042
|
||||
portMap := i.NetworkSettings().Ports
|
||||
port, _ := strconv.Atoi(portMap["9042/tcp"][0].HostPort)
|
||||
|
||||
cluster := gocql.NewCluster(i.Host())
|
||||
cluster.Port = port
|
||||
//cluster.ProtoVersion = 4
|
||||
cluster.Consistency = gocql.All
|
||||
cluster.Timeout = 1 * time.Minute
|
||||
p, err := cluster.CreateSession()
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
// Create keyspace for tests
|
||||
p.Query("CREATE KEYSPACE testks WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor':1}").Exec()
|
||||
return true
|
||||
}
|
||||
|
||||
func Test(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
p := &Cassandra{}
|
||||
portMap := i.NetworkSettings().Ports
|
||||
port, _ := strconv.Atoi(portMap["9042/tcp"][0].HostPort)
|
||||
addr := fmt.Sprintf("cassandra://%v:%v/testks", i.Host(), port)
|
||||
d, err := p.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
dt.Test(t, d, []byte("SELECT table_name from system_schema.tables"))
|
||||
})
|
||||
}
|
vendor/src/github.com/mattes/migrate/database/clickhouse/README.md (vendored, new file, 12 lines)
@@ -0,0 +1,12 @@
# ClickHouse

`clickhouse://host:port?username=user&password=qwerty&database=clicks`

| URL Query | Description |
|------------|-------------|
| `x-migrations-table` | Name of the migrations table |
| `database` | The name of the database to connect to |
| `username` | The user to sign in as |
| `password` | The user's password |
| `host` | The host to connect to. |
| `port` | The port to bind to. |
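A minimal sketch of using this driver with an existing `database/sql` connection, assuming the `kshvakov/clickhouse` SQL driver (the one imported by `cli/build_clickhouse.go`), a placeholder DSN, and migrations in `./migrations`:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/kshvakov/clickhouse" // registers the "clickhouse" database/sql driver
	"github.com/mattes/migrate"
	"github.com/mattes/migrate/database/clickhouse"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Placeholder connection details; credentials mirror the URL shown above.
	db, err := sql.Open("clickhouse", "tcp://localhost:9000?username=user&password=qwerty&database=clicks")
	if err != nil {
		log.Fatal(err)
	}
	driver, err := clickhouse.WithInstance(db, &clickhouse.Config{DatabaseName: "clicks"})
	if err != nil {
		log.Fatal(err)
	}
	m, err := migrate.NewWithDatabaseInstance("file://migrations", "clickhouse", driver)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```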
vendor/src/github.com/mattes/migrate/database/clickhouse/clickhouse.go (vendored, new file, 196 lines)
@@ -0,0 +1,196 @@
package clickhouse
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"net/url"
|
||||
"time"
|
||||
|
||||
"github.com/mattes/migrate"
|
||||
"github.com/mattes/migrate/database"
|
||||
)
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
|
||||
var ErrNilConfig = fmt.Errorf("no config")
|
||||
|
||||
type Config struct {
|
||||
DatabaseName string
|
||||
MigrationsTable string
|
||||
}
func init() {
	database.Register("clickhouse", &ClickHouse{})
}

func WithInstance(conn *sql.DB, config *Config) (database.Driver, error) {
	if config == nil {
		return nil, ErrNilConfig
	}

	if err := conn.Ping(); err != nil {
		return nil, err
	}

	ch := &ClickHouse{
		conn:   conn,
		config: config,
	}

	if err := ch.init(); err != nil {
		return nil, err
	}

	return ch, nil
}

type ClickHouse struct {
	conn   *sql.DB
	config *Config
}

func (ch *ClickHouse) Open(dsn string) (database.Driver, error) {
	purl, err := url.Parse(dsn)
	if err != nil {
		return nil, err
	}
	q := migrate.FilterCustomQuery(purl)
	q.Scheme = "tcp"
	conn, err := sql.Open("clickhouse", q.String())
	if err != nil {
		return nil, err
	}

	ch = &ClickHouse{
		conn: conn,
		config: &Config{
			MigrationsTable: purl.Query().Get("x-migrations-table"),
			DatabaseName:    purl.Query().Get("database"),
		},
	}

	if err := ch.init(); err != nil {
		return nil, err
	}

	return ch, nil
}

func (ch *ClickHouse) init() error {
	if len(ch.config.DatabaseName) == 0 {
		if err := ch.conn.QueryRow("SELECT currentDatabase()").Scan(&ch.config.DatabaseName); err != nil {
			return err
		}
	}

	if len(ch.config.MigrationsTable) == 0 {
		ch.config.MigrationsTable = DefaultMigrationsTable
	}

	return ch.ensureVersionTable()
}

func (ch *ClickHouse) Run(r io.Reader) error {
	migration, err := ioutil.ReadAll(r)
	if err != nil {
		return err
	}
	if _, err := ch.conn.Exec(string(migration)); err != nil {
		return database.Error{OrigErr: err, Err: "migration failed", Query: migration}
	}

	return nil
}

func (ch *ClickHouse) Version() (int, bool, error) {
	var (
		version int
		dirty   uint8
		query   = "SELECT version, dirty FROM `" + ch.config.MigrationsTable + "` ORDER BY sequence DESC LIMIT 1"
	)
	if err := ch.conn.QueryRow(query).Scan(&version, &dirty); err != nil {
		if err == sql.ErrNoRows {
			return database.NilVersion, false, nil
		}
		return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
	}
	return version, dirty == 1, nil
}

func (ch *ClickHouse) SetVersion(version int, dirty bool) error {
	var (
		bool = func(v bool) uint8 {
			if v {
				return 1
			}
			return 0
		}
		tx, err = ch.conn.Begin()
	)
	if err != nil {
		return err
	}

	query := "INSERT INTO " + ch.config.MigrationsTable + " (version, dirty, sequence) VALUES (?, ?, ?)"
	if _, err := tx.Exec(query, version, bool(dirty), time.Now().UnixNano()); err != nil {
		return &database.Error{OrigErr: err, Query: []byte(query)}
	}

	return tx.Commit()
}

func (ch *ClickHouse) ensureVersionTable() error {
	var (
		table string
		query = "SHOW TABLES FROM " + ch.config.DatabaseName + " LIKE '" + ch.config.MigrationsTable + "'"
	)
	// check if migration table exists
	if err := ch.conn.QueryRow(query).Scan(&table); err != nil {
		if err != sql.ErrNoRows {
			return &database.Error{OrigErr: err, Query: []byte(query)}
		}
	} else {
		return nil
	}
	// if not, create the empty migration table
	query = `
	CREATE TABLE ` + ch.config.MigrationsTable + ` (
		version  UInt32,
		dirty    UInt8,
		sequence UInt64
	) Engine=TinyLog
	`
	if _, err := ch.conn.Exec(query); err != nil {
		return &database.Error{OrigErr: err, Query: []byte(query)}
	}
	return nil
}

func (ch *ClickHouse) Drop() error {
	var (
		query       = "SHOW TABLES FROM " + ch.config.DatabaseName
		tables, err = ch.conn.Query(query)
	)
	if err != nil {
		return &database.Error{OrigErr: err, Query: []byte(query)}
	}
	defer tables.Close()
	for tables.Next() {
		var table string
		if err := tables.Scan(&table); err != nil {
			return err
		}

		query = "DROP TABLE IF EXISTS " + ch.config.DatabaseName + "." + table

		if _, err := ch.conn.Exec(query); err != nil {
			return &database.Error{OrigErr: err, Query: []byte(query)}
		}
	}
	return ch.ensureVersionTable()
}

func (ch *ClickHouse) Lock() error   { return nil }
func (ch *ClickHouse) Unlock() error { return nil }
func (ch *ClickHouse) Close() error  { return ch.conn.Close() }

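Not part of the upstream diff: for orientation only, a minimal sketch of how the ClickHouse driver above is typically wired up through migrate. The DSN values are placeholders; `database` and `x-migrations-table` are the query parameters parsed in `Open()` above.

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/clickhouse" // init() registers the "clickhouse" scheme
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Placeholder DSN; Open() rewrites the scheme to tcp before handing it to the ClickHouse SQL driver.
	m, err := migrate.New(
		"file://./migrations",
		"clickhouse://localhost:9000?database=default&x-migrations-table=schema_migrations")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```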
@@ -0,0 +1 @@
DROP TABLE IF EXISTS test_1;

3
vendor/src/github.com/mattes/migrate/database/clickhouse/examples/migrations/001_init.up.sql
vendored
Normal file
@@ -0,0 +1,3 @@
CREATE TABLE test_1 (
	Date Date
) Engine=Memory;

@@ -0,0 +1 @@
DROP TABLE IF EXISTS test_2;

@@ -0,0 +1,3 @@
CREATE TABLE test_2 (
	Date Date
) Engine=Memory;

19
vendor/src/github.com/mattes/migrate/database/cockroachdb/README.md
vendored
Normal file
@@ -0,0 +1,19 @@
# cockroachdb

`cockroachdb://user:password@host:port/dbname?query` (`cockroach://`, and `crdb-postgres://` work, too)

| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `x-lock-table` | `LockTable` | Name of the table which maintains the migration lock |
| `x-force-lock` | `ForceLock` | Force lock acquisition to fix faulty migrations which may not have released the schema lock (Boolean, default is `false`) |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. Values that start with / are for unix domain sockets. (default is localhost) |
| `port` | | The port to bind to. (default is 5432) |
| `connect_timeout` | | Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) |
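Not in the upstream README: a hedged sketch of using the URL form above from Go. Host, port, credentials and the lock-table name are placeholders; the blank import registers the schemes listed in `init()` of the driver below.

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/cockroachdb" // registers cockroach://, cockroachdb://, crdb-postgres://
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Placeholder connection values; x-lock-table and x-force-lock map to the options in the table above.
	m, err := migrate.New(
		"file://./migrations",
		"cockroachdb://root@localhost:26257/mydb?sslmode=disable&x-lock-table=schema_lock")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```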
338
vendor/src/github.com/mattes/migrate/database/cockroachdb/cockroachdb.go
vendored
Normal file
|
@ -0,0 +1,338 @@
|
|||
package cockroachdb
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
nurl "net/url"
|
||||
|
||||
"github.com/cockroachdb/cockroach-go/crdb"
|
||||
"github.com/lib/pq"
|
||||
"github.com/mattes/migrate"
|
||||
"github.com/mattes/migrate/database"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"context"
|
||||
)
|
||||
|
||||
func init() {
|
||||
db := CockroachDb{}
|
||||
database.Register("cockroach", &db)
|
||||
database.Register("cockroachdb", &db)
|
||||
database.Register("crdb-postgres", &db)
|
||||
}
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
var DefaultLockTable = "schema_lock"
|
||||
|
||||
var (
|
||||
ErrNilConfig = fmt.Errorf("no config")
|
||||
ErrNoDatabaseName = fmt.Errorf("no database name")
|
||||
)
|
||||
|
||||
type Config struct {
|
||||
MigrationsTable string
|
||||
LockTable string
|
||||
ForceLock bool
|
||||
DatabaseName string
|
||||
}
|
||||
|
||||
type CockroachDb struct {
|
||||
db *sql.DB
|
||||
isLocked bool
|
||||
|
||||
// Open and WithInstance need to guarantee that config is never nil
|
||||
config *Config
|
||||
}
|
||||
|
||||
func WithInstance(instance *sql.DB, config *Config) (database.Driver, error) {
|
||||
if config == nil {
|
||||
return nil, ErrNilConfig
|
||||
}
|
||||
|
||||
if err := instance.Ping(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
query := `SELECT current_database()`
|
||||
var databaseName string
|
||||
if err := instance.QueryRow(query).Scan(&databaseName); err != nil {
|
||||
return nil, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
if len(databaseName) == 0 {
|
||||
return nil, ErrNoDatabaseName
|
||||
}
|
||||
|
||||
config.DatabaseName = databaseName
|
||||
|
||||
if len(config.MigrationsTable) == 0 {
|
||||
config.MigrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
if len(config.LockTable) == 0 {
|
||||
config.LockTable = DefaultLockTable
|
||||
}
|
||||
|
||||
px := &CockroachDb{
|
||||
db: instance,
|
||||
config: config,
|
||||
}
|
||||
|
||||
if err := px.ensureVersionTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := px.ensureLockTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return px, nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Open(url string) (database.Driver, error) {
|
||||
purl, err := nurl.Parse(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// As Cockroach uses the postgres protocol, and 'postgres' is already a registered database, we need to replace the
|
||||
// connect prefix, with the actual protocol, so that the library can differentiate between the implementations
|
||||
re := regexp.MustCompile("^(cockroach(db)?|crdb-postgres)")
|
||||
connectString := re.ReplaceAllString(migrate.FilterCustomQuery(purl).String(), "postgres")
|
||||
|
||||
db, err := sql.Open("postgres", connectString)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
migrationsTable := purl.Query().Get("x-migrations-table")
|
||||
if len(migrationsTable) == 0 {
|
||||
migrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
lockTable := purl.Query().Get("x-lock-table")
|
||||
if len(lockTable) == 0 {
|
||||
lockTable = DefaultLockTable
|
||||
}
|
||||
|
||||
forceLockQuery := purl.Query().Get("x-force-lock")
|
||||
forceLock, err := strconv.ParseBool(forceLockQuery)
|
||||
if err != nil {
|
||||
forceLock = false
|
||||
}
|
||||
|
||||
px, err := WithInstance(db, &Config{
|
||||
DatabaseName: purl.Path,
|
||||
MigrationsTable: migrationsTable,
|
||||
LockTable: lockTable,
|
||||
ForceLock: forceLock,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return px, nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Close() error {
|
||||
return c.db.Close()
|
||||
}
|
||||
|
||||
// Locking is done manually with a separate lock table. Implementing advisory locks in CRDB is being discussed
|
||||
// See: https://github.com/cockroachdb/cockroach/issues/13546
|
||||
func (c *CockroachDb) Lock() error {
|
||||
err := crdb.ExecuteTx(context.Background(), c.db, nil, func(tx *sql.Tx) error {
|
||||
aid, err := database.GenerateAdvisoryLockId(c.config.DatabaseName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
query := "SELECT * FROM " + c.config.LockTable + " WHERE lock_id = $1"
|
||||
rows, err := tx.Query(query, aid)
|
||||
if err != nil {
|
||||
return database.Error{OrigErr: err, Err: "failed to fetch migration lock", Query: []byte(query)}
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
// If row exists at all, lock is present
|
||||
locked := rows.Next()
|
||||
if locked && !c.config.ForceLock {
|
||||
return database.Error{Err: "lock could not be acquired; already locked", Query: []byte(query)}
|
||||
}
|
||||
|
||||
query = "INSERT INTO " + c.config.LockTable + " (lock_id) VALUES ($1)"
|
||||
if _, err := tx.Exec(query, aid) ; err != nil {
|
||||
return database.Error{OrigErr: err, Err: "failed to set migration lock", Query: []byte(query)}
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
} else {
|
||||
c.isLocked = true
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// Locking is done manually with a separate lock table. Implementing advisory locks in CRDB is being discussed
|
||||
// See: https://github.com/cockroachdb/cockroach/issues/13546
|
||||
func (c *CockroachDb) Unlock() error {
|
||||
aid, err := database.GenerateAdvisoryLockId(c.config.DatabaseName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// In the event of an implementation (non-migration) error, it is possible for the lock to not be released. Until
|
||||
// a better locking mechanism is added, a manual purging of the lock table may be required in such circumstances
|
||||
query := "DELETE FROM " + c.config.LockTable + " WHERE lock_id = $1"
|
||||
if _, err := c.db.Exec(query, aid); err != nil {
|
||||
if e, ok := err.(*pq.Error); ok {
|
||||
// 42P01 is "UndefinedTableError" in CockroachDB
|
||||
// https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/pgwire/pgerror/codes.go
|
||||
if e.Code == "42P01" {
|
||||
// On drops, the lock table is fully removed; This is fine, and is a valid "unlocked" state for the schema
|
||||
c.isLocked = false
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return database.Error{OrigErr: err, Err: "failed to release migration lock", Query: []byte(query)}
|
||||
}
|
||||
|
||||
c.isLocked = false
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Run(migration io.Reader) error {
|
||||
migr, err := ioutil.ReadAll(migration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// run migration
|
||||
query := string(migr[:])
|
||||
if _, err := c.db.Exec(query); err != nil {
|
||||
return database.Error{OrigErr: err, Err: "migration failed", Query: migr}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) SetVersion(version int, dirty bool) error {
|
||||
return crdb.ExecuteTx(context.Background(), c.db, nil, func(tx *sql.Tx) error {
|
||||
if _, err := tx.Exec( `TRUNCATE "` + c.config.MigrationsTable + `"`); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if version >= 0 {
|
||||
if _, err := tx.Exec(`INSERT INTO "` + c.config.MigrationsTable + `" (version, dirty) VALUES ($1, $2)`, version, dirty); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Version() (version int, dirty bool, err error) {
|
||||
query := `SELECT version, dirty FROM "` + c.config.MigrationsTable + `" LIMIT 1`
|
||||
err = c.db.QueryRow(query).Scan(&version, &dirty)
|
||||
|
||||
switch {
|
||||
case err == sql.ErrNoRows:
|
||||
return database.NilVersion, false, nil
|
||||
|
||||
case err != nil:
|
||||
if e, ok := err.(*pq.Error); ok {
|
||||
// 42P01 is "UndefinedTableError" in CockroachDB
|
||||
// https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/pgwire/pgerror/codes.go
|
||||
if e.Code == "42P01" {
|
||||
return database.NilVersion, false, nil
|
||||
}
|
||||
}
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
|
||||
default:
|
||||
return version, dirty, nil
|
||||
}
|
||||
}
|
||||
|
||||
func (c *CockroachDb) Drop() error {
|
||||
// select all tables in current schema
|
||||
query := `SELECT table_name FROM information_schema.tables WHERE table_schema=(SELECT current_schema())`
|
||||
tables, err := c.db.Query(query)
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
defer tables.Close()
|
||||
|
||||
// delete one table after another
|
||||
tableNames := make([]string, 0)
|
||||
for tables.Next() {
|
||||
var tableName string
|
||||
if err := tables.Scan(&tableName); err != nil {
|
||||
return err
|
||||
}
|
||||
if len(tableName) > 0 {
|
||||
tableNames = append(tableNames, tableName)
|
||||
}
|
||||
}
|
||||
|
||||
if len(tableNames) > 0 {
|
||||
// delete one by one ...
|
||||
for _, t := range tableNames {
|
||||
query = `DROP TABLE IF EXISTS ` + t + ` CASCADE`
|
||||
if _, err := c.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
if err := c.ensureVersionTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *CockroachDb) ensureVersionTable() error {
|
||||
// check if migration table exists
|
||||
var count int
|
||||
query := `SELECT COUNT(1) FROM information_schema.tables WHERE table_name = $1 AND table_schema = (SELECT current_schema()) LIMIT 1`
|
||||
if err := c.db.QueryRow(query, c.config.MigrationsTable).Scan(&count); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
if count == 1 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// if not, create the empty migration table
|
||||
query = `CREATE TABLE "` + c.config.MigrationsTable + `" (version INT NOT NULL PRIMARY KEY, dirty BOOL NOT NULL)`
|
||||
if _, err := c.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
func (c *CockroachDb) ensureLockTable() error {
|
||||
// check if lock table exists
|
||||
var count int
|
||||
query := `SELECT COUNT(1) FROM information_schema.tables WHERE table_name = $1 AND table_schema = (SELECT current_schema()) LIMIT 1`
|
||||
if err := c.db.QueryRow(query, c.config.LockTable).Scan(&count); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
if count == 1 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// if not, create the empty lock table
|
||||
query = `CREATE TABLE "` + c.config.LockTable + `" (lock_id INT NOT NULL PRIMARY KEY)`
|
||||
if _, err := c.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
91
vendor/src/github.com/mattes/migrate/database/cockroachdb/cockroachdb_test.go
vendored
Normal file
|
@ -0,0 +1,91 @@
|
|||
package cockroachdb
|
||||
|
||||
// error codes https://github.com/lib/pq/blob/master/error.go
|
||||
|
||||
import (
|
||||
//"bytes"
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/lib/pq"
|
||||
dt "github.com/mattes/migrate/database/testing"
|
||||
mt "github.com/mattes/migrate/testing"
|
||||
"bytes"
|
||||
)
|
||||
|
||||
var versions = []mt.Version{
|
||||
{Image: "cockroachdb/cockroach:v1.0.2", Cmd: []string{"start", "--insecure"}},
|
||||
}
|
||||
|
||||
func isReady(i mt.Instance) bool {
	db, err := sql.Open("postgres", fmt.Sprintf("postgres://root@%v:%v?sslmode=disable", i.Host(), i.PortFor(26257)))
	if err != nil {
		return false
	}
	defer db.Close()
	err = db.Ping()
	if err == io.EOF {
		_, err = db.Exec("CREATE DATABASE migrate")
		return err == nil
	} else if e, ok := err.(*pq.Error); ok {
		if e.Code.Name() == "cannot_connect_now" {
			return false
		}
	}

	_, err = db.Exec("CREATE DATABASE migrate")
	return err == nil
}
|
||||
|
||||
func Test(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
c := &CockroachDb{}
|
||||
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable", i.Host(), i.PortFor(26257))
|
||||
d, err := c.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
dt.Test(t, d, []byte("SELECT 1"))
|
||||
})
|
||||
}
|
||||
|
||||
func TestMultiStatement(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
c := &CockroachDb{}
|
||||
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable", i.Host(), i.Port())
|
||||
d, err := c.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
if err := d.Run(bytes.NewReader([]byte("CREATE TABLE foo (foo text); CREATE TABLE bar (bar text);"))); err != nil {
|
||||
t.Fatalf("expected err to be nil, got %v", err)
|
||||
}
|
||||
|
||||
// make sure second table exists
|
||||
var exists bool
|
||||
if err := d.(*CockroachDb).db.QueryRow("SELECT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name = 'bar' AND table_schema = (SELECT current_schema()))").Scan(&exists); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if !exists {
|
||||
t.Fatalf("expected table bar to exist")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestFilterCustomQuery(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
c := &CockroachDb{}
|
||||
addr := fmt.Sprintf("cockroach://root@%v:%v/migrate?sslmode=disable&x-custom=foobar", i.Host(), i.PortFor(26257))
|
||||
_, err := c.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
})
|
||||
}
|
@@ -0,0 +1 @@
DROP TABLE IF EXISTS users;

@@ -0,0 +1,5 @@
CREATE TABLE users (
	user_id INT UNIQUE,
	name STRING(40),
	email STRING(40)
);

@@ -0,0 +1 @@
ALTER TABLE users DROP COLUMN IF EXISTS city;

@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN city TEXT;

@@ -0,0 +1 @@
DROP INDEX IF EXISTS users_email_index;

@@ -0,0 +1,3 @@
CREATE UNIQUE INDEX IF NOT EXISTS users_email_index ON users (email);

-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

@@ -0,0 +1 @@
DROP TABLE IF EXISTS books;

@@ -0,0 +1,5 @@
CREATE TABLE books (
	user_id INT,
	name STRING(40),
	author STRING(40)
);

@@ -0,0 +1 @@
DROP TABLE IF EXISTS movies;

@@ -0,0 +1,5 @@
CREATE TABLE movies (
	user_id INT,
	name STRING(40),
	director STRING(40)
);

@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

0
vendor/src/github.com/mattes/migrate/database/crate/README.md
vendored
Normal file
112
vendor/src/github.com/mattes/migrate/database/driver.go
vendored
Normal file
@@ -0,0 +1,112 @@
// Package database provides the Database interface.
// All database drivers must implement this interface, register themselves,
// optionally provide a `WithInstance` function and pass the tests
// in package database/testing.
package database

import (
	"fmt"
	"io"
	nurl "net/url"
	"sync"
)

var (
	ErrLocked = fmt.Errorf("can't acquire lock")
)

const NilVersion int = -1

var driversMu sync.RWMutex
var drivers = make(map[string]Driver)

// Driver is the interface every database driver must implement.
//
// How to implement a database driver?
// 1. Implement this interface.
// 2. Optionally, add a function named `WithInstance`.
//    This function should accept an existing DB instance and a Config{} struct
//    and return a driver instance.
// 3. Add a test that calls database/testing.go:Test()
// 4. Add own tests for Open(), WithInstance() (when provided) and Close().
//    All other functions are tested by tests in database/testing.
//    Saves you some time and makes sure all database drivers behave the same way.
// 5. Call Register in init().
// 6. Create a migrate/cli/build_<driver-name>.go file
// 7. Add driver name in 'DATABASE' variable in Makefile
//
// Guidelines:
// * Don't try to correct user input. Don't assume things.
//   When in doubt, return an error and explain the situation to the user.
// * All configuration input must come from the URL string in func Open()
//   or the Config{} struct in WithInstance. Don't os.Getenv().
type Driver interface {
	// Open returns a new driver instance configured with parameters
	// coming from the URL string. Migrate will call this function
	// only once per instance.
	Open(url string) (Driver, error)

	// Close closes the underlying database instance managed by the driver.
	// Migrate will call this function only once per instance.
	Close() error

	// Lock should acquire a database lock so that only one migration process
	// can run at a time. Migrate will call this function before Run is called.
	// If the implementation can't provide this functionality, return nil.
	// Return database.ErrLocked if database is already locked.
	Lock() error

	// Unlock should release the lock. Migrate will call this function after
	// all migrations have been run.
	Unlock() error

	// Run applies a migration to the database. migration is guaranteed to be not nil.
	Run(migration io.Reader) error

	// SetVersion saves version and dirty state.
	// Migrate will call this function before and after each call to Run.
	// version must be >= -1. -1 means NilVersion.
	SetVersion(version int, dirty bool) error

	// Version returns the currently active version and if the database is dirty.
	// When no migration has been applied, it must return version -1.
	// Dirty means, a previous migration failed and user interaction is required.
	Version() (version int, dirty bool, err error)

	// Drop deletes everything in the database.
	Drop() error
}

// Open returns a new driver instance.
func Open(url string) (Driver, error) {
	u, err := nurl.Parse(url)
	if err != nil {
		return nil, err
	}

	if u.Scheme == "" {
		return nil, fmt.Errorf("database driver: invalid URL scheme")
	}

	driversMu.RLock()
	d, ok := drivers[u.Scheme]
	driversMu.RUnlock()
	if !ok {
		return nil, fmt.Errorf("database driver: unknown driver %v (forgotten import?)", u.Scheme)
	}

	return d.Open(url)
}

// Register globally registers a driver.
func Register(name string, driver Driver) {
	driversMu.Lock()
	defer driversMu.Unlock()
	if driver == nil {
		panic("Register driver is nil")
	}
	if _, dup := drivers[name]; dup {
		panic("Register called twice for driver " + name)
	}
	drivers[name] = driver
}
8
vendor/src/github.com/mattes/migrate/database/driver_test.go
vendored
Normal file
@@ -0,0 +1,8 @@
package database

func ExampleDriver() {
	// see database/stub for an example

	// database/stub/stub.go has the driver implementation
	// database/stub/stub_test.go runs database/testing/test.go:Test
}
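The stub driver referenced above is not included in this diff. Purely as orientation for the "How to implement a database driver?" checklist in driver.go, here is a minimal in-memory sketch of the Driver interface; all names are illustrative and the real implementation lives in database/stub.

```go
package stub

import (
	"io"
	"io/ioutil"

	"github.com/mattes/migrate/database"
)

func init() {
	// Step 5 of the checklist: register the driver under its URL scheme.
	database.Register("stub", &Stub{version: database.NilVersion})
}

// Stub keeps everything in memory; useful for tests, not for real databases.
type Stub struct {
	migrations [][]byte
	version    int
	dirty      bool
	locked     bool
}

func (s *Stub) Open(url string) (database.Driver, error) {
	return &Stub{version: database.NilVersion}, nil
}

func (s *Stub) Close() error { return nil }

func (s *Stub) Lock() error {
	if s.locked {
		return database.ErrLocked
	}
	s.locked = true
	return nil
}

func (s *Stub) Unlock() error {
	s.locked = false
	return nil
}

func (s *Stub) Run(migration io.Reader) error {
	body, err := ioutil.ReadAll(migration)
	if err != nil {
		return err
	}
	s.migrations = append(s.migrations, body)
	return nil
}

func (s *Stub) SetVersion(version int, dirty bool) error {
	s.version = version
	s.dirty = dirty
	return nil
}

func (s *Stub) Version() (int, bool, error) { return s.version, s.dirty, nil }

func (s *Stub) Drop() error {
	s.migrations = nil
	return nil
}
```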
27
vendor/src/github.com/mattes/migrate/database/error.go
vendored
Normal file
@@ -0,0 +1,27 @@
package database

import (
	"fmt"
)

// Error should be used for errors involving queries run against the database
type Error struct {
	// Optional: the line number
	Line uint

	// Query is a query excerpt
	Query []byte

	// Err is a useful/helping error message for humans
	Err string

	// OrigErr is the underlying error
	OrigErr error
}

func (e Error) Error() string {
	if len(e.Err) == 0 {
		return fmt.Sprintf("%v in line %v: %s", e.OrigErr, e.Line, e.Query)
	}
	return fmt.Sprintf("%v in line %v: %s (details: %v)", e.Err, e.Line, e.Query, e.OrigErr)
}
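A short illustration (not part of the diff) of how the drivers in this package use database.Error to wrap a failing query; the error text and query are made up:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/mattes/migrate/database"
)

func main() {
	// Illustrative only: wrap a low-level error the way the vendored drivers do.
	origErr := errors.New(`syntax error at or near "TABL"`)
	err := database.Error{
		OrigErr: origErr,
		Err:     "migration failed",
		Query:   []byte("CREATE TABL users ()"),
	}
	fmt.Println(err.Error())
}
```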
0
vendor/src/github.com/mattes/migrate/database/mongodb/README.md
vendored
Normal file
53
vendor/src/github.com/mattes/migrate/database/mysql/README.md
vendored
Normal file
@@ -0,0 +1,53 @@
# MySQL

`mysql://user:password@tcp(host:port)/dbname?query`

| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. |
| `port` | | The port to bind to. |
| `x-tls-ca` | | The location of the root certificate file. |
| `x-tls-cert` | | Cert file location. |
| `x-tls-key` | | Key file location. |
| `x-tls-insecure-skip-verify` | | Whether or not to use SSL (true\|false) |

## Use with existing client

If you use the MySQL driver with an existing database client, you must create the client with the parameter `multiStatements=true`:

```go
package main

import (
	"database/sql"

	_ "github.com/go-sql-driver/mysql"
	"github.com/mattes/migrate"
	"github.com/mattes/migrate/database/mysql"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	db, _ := sql.Open("mysql", "user:password@tcp(host:port)/dbname?multiStatements=true")
	driver, _ := mysql.WithInstance(db, &mysql.Config{})
	m, _ := migrate.NewWithDatabaseInstance(
		"file:///migrations",
		"mysql",
		driver,
	)

	m.Steps(2)
}
```

## Upgrading from v1

1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
3. Wrap your existing migrations in transactions ([BEGIN/COMMIT](https://dev.mysql.com/doc/refman/5.7/en/commit.html)) if you use multiple statements within one migration.
4. Download and install the latest migrate version.
5. Force the current migration version with `migrate force <current_version>`.
329
vendor/src/github.com/mattes/migrate/database/mysql/mysql.go
vendored
Normal file
|
@ -0,0 +1,329 @@
|
|||
package mysql
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
nurl "net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/go-sql-driver/mysql"
|
||||
"github.com/mattes/migrate"
|
||||
"github.com/mattes/migrate/database"
|
||||
)
|
||||
|
||||
func init() {
|
||||
database.Register("mysql", &Mysql{})
|
||||
}
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
|
||||
var (
|
||||
ErrDatabaseDirty = fmt.Errorf("database is dirty")
|
||||
ErrNilConfig = fmt.Errorf("no config")
|
||||
ErrNoDatabaseName = fmt.Errorf("no database name")
|
||||
ErrAppendPEM = fmt.Errorf("failed to append PEM")
|
||||
)
|
||||
|
||||
type Config struct {
|
||||
MigrationsTable string
|
||||
DatabaseName string
|
||||
}
|
||||
|
||||
type Mysql struct {
|
||||
db *sql.DB
|
||||
isLocked bool
|
||||
|
||||
config *Config
|
||||
}
|
||||
|
||||
// instance must have `multiStatements` set to true
|
||||
func WithInstance(instance *sql.DB, config *Config) (database.Driver, error) {
|
||||
if config == nil {
|
||||
return nil, ErrNilConfig
|
||||
}
|
||||
|
||||
if err := instance.Ping(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
query := `SELECT DATABASE()`
|
||||
var databaseName sql.NullString
|
||||
if err := instance.QueryRow(query).Scan(&databaseName); err != nil {
|
||||
return nil, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
if len(databaseName.String) == 0 {
|
||||
return nil, ErrNoDatabaseName
|
||||
}
|
||||
|
||||
config.DatabaseName = databaseName.String
|
||||
|
||||
if len(config.MigrationsTable) == 0 {
|
||||
config.MigrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
mx := &Mysql{
|
||||
db: instance,
|
||||
config: config,
|
||||
}
|
||||
|
||||
if err := mx.ensureVersionTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return mx, nil
|
||||
}
|
||||
|
||||
func (m *Mysql) Open(url string) (database.Driver, error) {
|
||||
purl, err := nurl.Parse(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
q := purl.Query()
|
||||
q.Set("multiStatements", "true")
|
||||
purl.RawQuery = q.Encode()
|
||||
|
||||
db, err := sql.Open("mysql", strings.Replace(
|
||||
migrate.FilterCustomQuery(purl).String(), "mysql://", "", 1))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
migrationsTable := purl.Query().Get("x-migrations-table")
|
||||
if len(migrationsTable) == 0 {
|
||||
migrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
// use custom TLS?
|
||||
ctls := purl.Query().Get("tls")
|
||||
if len(ctls) > 0 {
|
||||
if _, isBool := readBool(ctls); !isBool && strings.ToLower(ctls) != "skip-verify" {
|
||||
rootCertPool := x509.NewCertPool()
|
||||
pem, err := ioutil.ReadFile(purl.Query().Get("x-tls-ca"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if ok := rootCertPool.AppendCertsFromPEM(pem); !ok {
|
||||
return nil, ErrAppendPEM
|
||||
}
|
||||
|
||||
certs, err := tls.LoadX509KeyPair(purl.Query().Get("x-tls-cert"), purl.Query().Get("x-tls-key"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
insecureSkipVerify := false
|
||||
if len(purl.Query().Get("x-tls-insecure-skip-verify")) > 0 {
|
||||
x, err := strconv.ParseBool(purl.Query().Get("x-tls-insecure-skip-verify"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
insecureSkipVerify = x
|
||||
}
|
||||
|
||||
mysql.RegisterTLSConfig(ctls, &tls.Config{
|
||||
RootCAs: rootCertPool,
|
||||
Certificates: []tls.Certificate{certs},
|
||||
InsecureSkipVerify: insecureSkipVerify,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
mx, err := WithInstance(db, &Config{
|
||||
DatabaseName: purl.Path,
|
||||
MigrationsTable: migrationsTable,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return mx, nil
|
||||
}
|
||||
|
||||
func (m *Mysql) Close() error {
|
||||
return m.db.Close()
|
||||
}
|
||||
|
||||
func (m *Mysql) Lock() error {
|
||||
if m.isLocked {
|
||||
return database.ErrLocked
|
||||
}
|
||||
|
||||
aid, err := database.GenerateAdvisoryLockId(
|
||||
fmt.Sprintf("%s:%s", m.config.DatabaseName, m.config.MigrationsTable))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
query := "SELECT GET_LOCK(?, 1)"
|
||||
var success bool
|
||||
if err := m.db.QueryRow(query, aid).Scan(&success); err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "try lock failed", Query: []byte(query)}
|
||||
}
|
||||
|
||||
if success {
|
||||
m.isLocked = true
|
||||
return nil
|
||||
}
|
||||
|
||||
return database.ErrLocked
|
||||
}
|
||||
|
||||
func (m *Mysql) Unlock() error {
|
||||
if !m.isLocked {
|
||||
return nil
|
||||
}
|
||||
|
||||
aid, err := database.GenerateAdvisoryLockId(
|
||||
fmt.Sprintf("%s:%s", m.config.DatabaseName, m.config.MigrationsTable))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
query := `SELECT RELEASE_LOCK(?)`
|
||||
if _, err := m.db.Exec(query, aid); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
m.isLocked = false
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Mysql) Run(migration io.Reader) error {
|
||||
migr, err := ioutil.ReadAll(migration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
query := string(migr[:])
|
||||
if _, err := m.db.Exec(query); err != nil {
|
||||
return database.Error{OrigErr: err, Err: "migration failed", Query: migr}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Mysql) SetVersion(version int, dirty bool) error {
|
||||
tx, err := m.db.Begin()
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "transaction start failed"}
|
||||
}
|
||||
|
||||
query := "TRUNCATE `" + m.config.MigrationsTable + "`"
|
||||
if _, err := m.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
if version >= 0 {
|
||||
query := "INSERT INTO `" + m.config.MigrationsTable + "` (version, dirty) VALUES (?, ?)"
|
||||
if _, err := m.db.Exec(query, version, dirty); err != nil {
|
||||
tx.Rollback()
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
|
||||
if err := tx.Commit(); err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "transaction commit failed"}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Mysql) Version() (version int, dirty bool, err error) {
|
||||
query := "SELECT version, dirty FROM `" + m.config.MigrationsTable + "` LIMIT 1"
|
||||
err = m.db.QueryRow(query).Scan(&version, &dirty)
|
||||
switch {
|
||||
case err == sql.ErrNoRows:
|
||||
return database.NilVersion, false, nil
|
||||
|
||||
case err != nil:
|
||||
if e, ok := err.(*mysql.MySQLError); ok {
|
||||
if e.Number == 0 {
|
||||
return database.NilVersion, false, nil
|
||||
}
|
||||
}
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
|
||||
default:
|
||||
return version, dirty, nil
|
||||
}
|
||||
}
|
||||
|
||||
func (m *Mysql) Drop() error {
|
||||
// select all tables
|
||||
query := `SHOW TABLES LIKE '%'`
|
||||
tables, err := m.db.Query(query)
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
defer tables.Close()
|
||||
|
||||
// delete one table after another
|
||||
tableNames := make([]string, 0)
|
||||
for tables.Next() {
|
||||
var tableName string
|
||||
if err := tables.Scan(&tableName); err != nil {
|
||||
return err
|
||||
}
|
||||
if len(tableName) > 0 {
|
||||
tableNames = append(tableNames, tableName)
|
||||
}
|
||||
}
|
||||
|
||||
if len(tableNames) > 0 {
|
||||
// delete one by one ...
|
||||
for _, t := range tableNames {
|
||||
query = "DROP TABLE IF EXISTS `" + t + "` CASCADE"
|
||||
if _, err := m.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
if err := m.ensureVersionTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Mysql) ensureVersionTable() error {
|
||||
// check if migration table exists
|
||||
var result string
|
||||
query := `SHOW TABLES LIKE "` + m.config.MigrationsTable + `"`
|
||||
if err := m.db.QueryRow(query).Scan(&result); err != nil {
|
||||
if err != sql.ErrNoRows {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
} else {
|
||||
return nil
|
||||
}
|
||||
|
||||
// if not, create the empty migration table
|
||||
query = "CREATE TABLE `" + m.config.MigrationsTable + "` (version bigint not null primary key, dirty boolean not null)"
|
||||
if _, err := m.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Returns the bool value of the input.
|
||||
// The 2nd return value indicates if the input was a valid bool value
|
||||
// See https://github.com/go-sql-driver/mysql/blob/a059889267dc7170331388008528b3b44479bffb/utils.go#L71
|
||||
func readBool(input string) (value bool, valid bool) {
|
||||
switch input {
|
||||
case "1", "true", "TRUE", "True":
|
||||
return true, true
|
||||
case "0", "false", "FALSE", "False":
|
||||
return false, true
|
||||
}
|
||||
|
||||
// Not a valid bool value
|
||||
return
|
||||
}
|
60
vendor/src/github.com/mattes/migrate/database/mysql/mysql_test.go
vendored
Normal file
|
@ -0,0 +1,60 @@
|
|||
package mysql
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
sqldriver "database/sql/driver"
|
||||
"fmt"
|
||||
// "io/ioutil"
|
||||
// "log"
|
||||
"testing"
|
||||
|
||||
// "github.com/go-sql-driver/mysql"
|
||||
dt "github.com/mattes/migrate/database/testing"
|
||||
mt "github.com/mattes/migrate/testing"
|
||||
)
|
||||
|
||||
var versions = []mt.Version{
|
||||
{Image: "mysql:8", ENV: []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
|
||||
{Image: "mysql:5.7", ENV: []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
|
||||
{Image: "mysql:5.6", ENV: []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
|
||||
{Image: "mysql:5.5", ENV: []string{"MYSQL_ROOT_PASSWORD=root", "MYSQL_DATABASE=public"}},
|
||||
}
|
||||
|
||||
func isReady(i mt.Instance) bool {
|
||||
db, err := sql.Open("mysql", fmt.Sprintf("root:root@tcp(%v:%v)/public", i.Host(), i.Port()))
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
defer db.Close()
|
||||
err = db.Ping()
|
||||
|
||||
if err == sqldriver.ErrBadConn {
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
func Test(t *testing.T) {
|
||||
// mysql.SetLogger(mysql.Logger(log.New(ioutil.Discard, "", log.Ltime)))
|
||||
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
p := &Mysql{}
|
||||
addr := fmt.Sprintf("mysql://root:root@tcp(%v:%v)/public", i.Host(), i.Port())
|
||||
d, err := p.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
dt.Test(t, d, []byte("SELECT 1"))
|
||||
|
||||
// check ensureVersionTable
|
||||
if err := d.(*Mysql).ensureVersionTable(); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
// check again
|
||||
if err := d.(*Mysql).ensureVersionTable(); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
})
|
||||
}
|
0
vendor/src/github.com/mattes/migrate/database/neo4j/README.md
vendored
Normal file
28
vendor/src/github.com/mattes/migrate/database/postgres/README.md
vendored
Normal file
@@ -0,0 +1,28 @@
# postgres

`postgres://user:password@host:port/dbname?query` (`postgresql://` works, too)

| URL Query | WithInstance Config | Description |
|------------|---------------------|-------------|
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `dbname` | `DatabaseName` | The name of the database to connect to |
| `search_path` | | This variable specifies the order in which schemas are searched when an object is referenced by a simple name with no schema specified. |
| `user` | | The user to sign in as |
| `password` | | The user's password |
| `host` | | The host to connect to. Values that start with / are for unix domain sockets. (default is localhost) |
| `port` | | The port to bind to. (default is 5432) |
| `fallback_application_name` | | An application_name to fall back to if one isn't provided. |
| `connect_timeout` | | Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely. |
| `sslcert` | | Cert file location. The file must contain PEM encoded data. |
| `sslkey` | | Key file location. The file must contain PEM encoded data. |
| `sslrootcert` | | The location of the root certificate file. The file must contain PEM encoded data. |
| `sslmode` | | Whether or not to use SSL (disable\|require\|verify-ca\|verify-full) |

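Not in the upstream README: a hedged sketch mirroring the MySQL example elsewhere in this diff, showing the driver used with an existing `*sql.DB`. Connection values are placeholders; the URL query options above can equally be passed via `migrate.New`.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
	"github.com/mattes/migrate"
	"github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Placeholder DSN.
	db, err := sql.Open("postgres", "postgres://user:password@localhost:5432/dbname?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	driver, err := postgres.WithInstance(db, &postgres.Config{})
	if err != nil {
		log.Fatal(err)
	}
	m, err := migrate.NewWithDatabaseInstance("file:///migrations", "postgres", driver)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```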
## Upgrading from v1

1. Write down the current migration version from schema_migrations
2. `DROP TABLE schema_migrations`
3. Wrap your existing migrations in transactions ([BEGIN/COMMIT](https://www.postgresql.org/docs/current/static/transaction-iso.html)) if you use multiple statements within one migration.
4. Download and install the latest migrate version.
5. Force the current migration version with `migrate force <current_version>`.
@@ -0,0 +1 @@
DROP TABLE IF EXISTS users;

@@ -0,0 +1,5 @@
CREATE TABLE users (
	user_id integer unique,
	name varchar(40),
	email varchar(40)
);

@@ -0,0 +1 @@
ALTER TABLE users DROP COLUMN IF EXISTS city;

@@ -0,0 +1,3 @@
ALTER TABLE users ADD COLUMN city varchar(100);

@@ -0,0 +1 @@
DROP INDEX IF EXISTS users_email_index;

@@ -0,0 +1,3 @@
CREATE UNIQUE INDEX CONCURRENTLY users_email_index ON users (email);

-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

@@ -0,0 +1 @@
DROP TABLE IF EXISTS books;

@@ -0,0 +1,5 @@
CREATE TABLE books (
	user_id integer,
	name varchar(40),
	author varchar(40)
);

@@ -0,0 +1 @@
DROP TABLE IF EXISTS movies;

@@ -0,0 +1,5 @@
CREATE TABLE movies (
	user_id integer,
	name varchar(40),
	director varchar(40)
);

@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

@@ -0,0 +1 @@
-- Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean sed interdum velit, tristique iaculis justo. Pellentesque ut porttitor dolor. Donec sit amet pharetra elit. Cras vel ligula ex. Phasellus posuere.

273
vendor/src/github.com/mattes/migrate/database/postgres/postgres.go
vendored
Normal file
|
@ -0,0 +1,273 @@
|
|||
package postgres
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
nurl "net/url"
|
||||
|
||||
"github.com/lib/pq"
|
||||
"github.com/mattes/migrate"
|
||||
"github.com/mattes/migrate/database"
|
||||
)
|
||||
|
||||
func init() {
|
||||
db := Postgres{}
|
||||
database.Register("postgres", &db)
|
||||
database.Register("postgresql", &db)
|
||||
}
|
||||
|
||||
var DefaultMigrationsTable = "schema_migrations"
|
||||
|
||||
var (
|
||||
ErrNilConfig = fmt.Errorf("no config")
|
||||
ErrNoDatabaseName = fmt.Errorf("no database name")
|
||||
ErrNoSchema = fmt.Errorf("no schema")
|
||||
ErrDatabaseDirty = fmt.Errorf("database is dirty")
|
||||
)
|
||||
|
||||
type Config struct {
|
||||
MigrationsTable string
|
||||
DatabaseName string
|
||||
}
|
||||
|
||||
type Postgres struct {
|
||||
db *sql.DB
|
||||
isLocked bool
|
||||
|
||||
// Open and WithInstance need to guarantee that config is never nil
|
||||
config *Config
|
||||
}
|
||||
|
||||
func WithInstance(instance *sql.DB, config *Config) (database.Driver, error) {
|
||||
if config == nil {
|
||||
return nil, ErrNilConfig
|
||||
}
|
||||
|
||||
if err := instance.Ping(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
query := `SELECT CURRENT_DATABASE()`
|
||||
var databaseName string
|
||||
if err := instance.QueryRow(query).Scan(&databaseName); err != nil {
|
||||
return nil, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
if len(databaseName) == 0 {
|
||||
return nil, ErrNoDatabaseName
|
||||
}
|
||||
|
||||
config.DatabaseName = databaseName
|
||||
|
||||
if len(config.MigrationsTable) == 0 {
|
||||
config.MigrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
px := &Postgres{
|
||||
db: instance,
|
||||
config: config,
|
||||
}
|
||||
|
||||
if err := px.ensureVersionTable(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return px, nil
|
||||
}
|
||||
|
||||
func (p *Postgres) Open(url string) (database.Driver, error) {
|
||||
purl, err := nurl.Parse(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
db, err := sql.Open("postgres", migrate.FilterCustomQuery(purl).String())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
migrationsTable := purl.Query().Get("x-migrations-table")
|
||||
if len(migrationsTable) == 0 {
|
||||
migrationsTable = DefaultMigrationsTable
|
||||
}
|
||||
|
||||
px, err := WithInstance(db, &Config{
|
||||
DatabaseName: purl.Path,
|
||||
MigrationsTable: migrationsTable,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return px, nil
|
||||
}
|
||||
|
||||
func (p *Postgres) Close() error {
|
||||
return p.db.Close()
|
||||
}
|
||||
|
||||
// https://www.postgresql.org/docs/9.6/static/explicit-locking.html#ADVISORY-LOCKS
|
||||
func (p *Postgres) Lock() error {
|
||||
if p.isLocked {
|
||||
return database.ErrLocked
|
||||
}
|
||||
|
||||
aid, err := database.GenerateAdvisoryLockId(p.config.DatabaseName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// This will either obtain the lock immediately and return true,
|
||||
// or return false if the lock cannot be acquired immediately.
|
||||
query := `SELECT pg_try_advisory_lock($1)`
|
||||
var success bool
|
||||
if err := p.db.QueryRow(query, aid).Scan(&success); err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "try lock failed", Query: []byte(query)}
|
||||
}
|
||||
|
||||
if success {
|
||||
p.isLocked = true
|
||||
return nil
|
||||
}
|
||||
|
||||
return database.ErrLocked
|
||||
}
|
||||
|
||||
func (p *Postgres) Unlock() error {
|
||||
if !p.isLocked {
|
||||
return nil
|
||||
}
|
||||
|
||||
aid, err := database.GenerateAdvisoryLockId(p.config.DatabaseName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
query := `SELECT pg_advisory_unlock($1)`
|
||||
if _, err := p.db.Exec(query, aid); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
p.isLocked = false
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Postgres) Run(migration io.Reader) error {
|
||||
migr, err := ioutil.ReadAll(migration)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// run migration
|
||||
query := string(migr[:])
|
||||
if _, err := p.db.Exec(query); err != nil {
|
||||
// TODO: cast to postgres error and get line number
|
||||
return database.Error{OrigErr: err, Err: "migration failed", Query: migr}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Postgres) SetVersion(version int, dirty bool) error {
|
||||
tx, err := p.db.Begin()
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "transaction start failed"}
|
||||
}
|
||||
|
||||
query := `TRUNCATE "` + p.config.MigrationsTable + `"`
|
||||
if _, err := tx.Exec(query); err != nil {
|
||||
tx.Rollback()
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
|
||||
if version >= 0 {
|
||||
query = `INSERT INTO "` + p.config.MigrationsTable + `" (version, dirty) VALUES ($1, $2)`
|
||||
if _, err := tx.Exec(query, version, dirty); err != nil {
|
||||
tx.Rollback()
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
|
||||
if err := tx.Commit(); err != nil {
|
||||
return &database.Error{OrigErr: err, Err: "transaction commit failed"}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Postgres) Version() (version int, dirty bool, err error) {
|
||||
query := `SELECT version, dirty FROM "` + p.config.MigrationsTable + `" LIMIT 1`
|
||||
err = p.db.QueryRow(query).Scan(&version, &dirty)
|
||||
switch {
|
||||
case err == sql.ErrNoRows:
|
||||
return database.NilVersion, false, nil
|
||||
|
||||
case err != nil:
|
||||
if e, ok := err.(*pq.Error); ok {
|
||||
if e.Code.Name() == "undefined_table" {
|
||||
return database.NilVersion, false, nil
|
||||
}
|
||||
}
|
||||
return 0, false, &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
|
||||
default:
|
||||
return version, dirty, nil
|
||||
}
|
||||
}
|
||||
|
||||
func (p *Postgres) Drop() error {
|
||||
// select all tables in current schema
|
||||
query := `SELECT table_name FROM information_schema.tables WHERE table_schema=(SELECT current_schema())`
|
||||
tables, err := p.db.Query(query)
|
||||
if err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
defer tables.Close()
|
||||
|
||||
// delete one table after another
|
||||
tableNames := make([]string, 0)
|
||||
for tables.Next() {
|
||||
var tableName string
|
||||
if err := tables.Scan(&tableName); err != nil {
|
||||
return err
|
||||
}
|
||||
if len(tableName) > 0 {
|
||||
tableNames = append(tableNames, tableName)
|
||||
}
|
||||
}
|
||||
|
||||
if len(tableNames) > 0 {
|
||||
// delete one by one ...
|
||||
for _, t := range tableNames {
|
||||
query = `DROP TABLE IF EXISTS ` + t + ` CASCADE`
|
||||
if _, err := p.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
}
|
||||
if err := p.ensureVersionTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Postgres) ensureVersionTable() error {
|
||||
// check if migration table exists
|
||||
var count int
|
||||
query := `SELECT COUNT(1) FROM information_schema.tables WHERE table_name = $1 AND table_schema = (SELECT current_schema()) LIMIT 1`
|
||||
if err := p.db.QueryRow(query, p.config.MigrationsTable).Scan(&count); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
if count == 1 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// if not, create the empty migration table
|
||||
query = `CREATE TABLE "` + p.config.MigrationsTable + `" (version bigint not null primary key, dirty boolean not null)`
|
||||
if _, err := p.db.Exec(query); err != nil {
|
||||
return &database.Error{OrigErr: err, Query: []byte(query)}
|
||||
}
|
||||
return nil
|
||||
}
|
150
vendor/src/github.com/mattes/migrate/database/postgres/postgres_test.go
vendored
Normal file
|
@ -0,0 +1,150 @@
|
|||
package postgres
|
||||
|
||||
// error codes https://github.com/lib/pq/blob/master/error.go
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/lib/pq"
|
||||
dt "github.com/mattes/migrate/database/testing"
|
||||
mt "github.com/mattes/migrate/testing"
|
||||
)
|
||||
|
||||
var versions = []mt.Version{
|
||||
{Image: "postgres:9.6"},
|
||||
{Image: "postgres:9.5"},
|
||||
{Image: "postgres:9.4"},
|
||||
{Image: "postgres:9.3"},
|
||||
{Image: "postgres:9.2"},
|
||||
}
|
||||
|
||||
func isReady(i mt.Instance) bool {
|
||||
db, err := sql.Open("postgres", fmt.Sprintf("postgres://postgres@%v:%v/postgres?sslmode=disable", i.Host(), i.Port()))
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
defer db.Close()
|
||||
err = db.Ping()
|
||||
if err == io.EOF {
|
||||
return false
|
||||
|
||||
} else if e, ok := err.(*pq.Error); ok {
|
||||
if e.Code.Name() == "cannot_connect_now" {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
func Test(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
p := &Postgres{}
|
||||
addr := fmt.Sprintf("postgres://postgres@%v:%v/postgres?sslmode=disable", i.Host(), i.Port())
|
||||
d, err := p.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
dt.Test(t, d, []byte("SELECT 1"))
|
||||
})
|
||||
}
|
||||
|
||||
func TestMultiStatement(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
p := &Postgres{}
|
||||
addr := fmt.Sprintf("postgres://postgres@%v:%v/postgres?sslmode=disable", i.Host(), i.Port())
|
||||
d, err := p.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
if err := d.Run(bytes.NewReader([]byte("CREATE TABLE foo (foo text); CREATE TABLE bar (bar text);"))); err != nil {
|
||||
t.Fatalf("expected err to be nil, got %v", err)
|
||||
}
|
||||
|
||||
// make sure second table exists
|
||||
var exists bool
|
||||
if err := d.(*Postgres).db.QueryRow("SELECT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name = 'bar' AND table_schema = (SELECT current_schema()))").Scan(&exists); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if !exists {
|
||||
t.Fatalf("expected table bar to exist")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestFilterCustomQuery(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
p := &Postgres{}
|
||||
addr := fmt.Sprintf("postgres://postgres@%v:%v/postgres?sslmode=disable&x-custom=foobar", i.Host(), i.Port())
|
||||
_, err := p.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestWithSchema(t *testing.T) {
|
||||
mt.ParallelTest(t, versions, isReady,
|
||||
func(t *testing.T, i mt.Instance) {
|
||||
p := &Postgres{}
|
||||
addr := fmt.Sprintf("postgres://postgres@%v:%v/postgres?sslmode=disable", i.Host(), i.Port())
|
||||
d, err := p.Open(addr)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
|
||||
// create foobar schema
|
||||
if err := d.Run(bytes.NewReader([]byte("CREATE SCHEMA foobar AUTHORIZATION postgres"))); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if err := d.SetVersion(1, false); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// re-connect using that schema
|
||||
d2, err := p.Open(fmt.Sprintf("postgres://postgres@%v:%v/postgres?sslmode=disable&search_path=foobar", i.Host(), i.Port()))
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
|
||||
version, _, err := d2.Version()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if version != -1 {
|
||||
t.Fatal("expected NilVersion")
|
||||
}
|
||||
|
||||
// now update version and compare
|
||||
if err := d2.SetVersion(2, false); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
version, _, err = d2.Version()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if version != 2 {
|
||||
t.Fatal("expected version 2")
|
||||
}
|
||||
|
||||
// meanwhile, the public schema still has the other version
|
||||
version, _, err = d.Version()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if version != 1 {
|
||||
t.Fatal("expected version 2")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestWithInstance(t *testing.T) {
|
||||
|
||||
}
|
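The TestWithSchema case above shows that the Postgres driver scopes its version table to the schema selected via `search_path`, so two connections to the same database can carry independent migration state. A minimal sketch of how a caller might rely on this, assuming a hypothetical local Postgres instance and `./migrations` directory (neither is part of this diff):

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/postgres"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Hypothetical DSN: adding search_path scopes both the migration
	// statements and the schema_migrations table to the "tenant_a" schema,
	// mirroring what TestWithSchema exercises with the "foobar" schema.
	m, err := migrate.New(
		"file://./migrations",
		"postgres://postgres@localhost:5432/postgres?sslmode=disable&search_path=tenant_a")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```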
0 vendor/src/github.com/mattes/migrate/database/ql/README.md vendored Normal file

1 vendor/src/github.com/mattes/migrate/database/ql/migration/33_create_table.down.sql vendored Normal file
@@ -0,0 +1 @@
DROP TABLE IF EXISTS pets;

3 vendor/src/github.com/mattes/migrate/database/ql/migration/33_create_table.up.sql vendored Normal file
@@ -0,0 +1,3 @@
CREATE TABLE pets (
    name string
);

1 vendor/src/github.com/mattes/migrate/database/ql/migration/44_alter_table.down.sql vendored Normal file
@@ -0,0 +1 @@
DROP TABLE IF EXISTS pets;

1 vendor/src/github.com/mattes/migrate/database/ql/migration/44_alter_table.up.sql vendored Normal file
@@ -0,0 +1 @@
ALTER TABLE pets ADD predator bool;;

212 vendor/src/github.com/mattes/migrate/database/ql/ql.go vendored Normal file
@@ -0,0 +1,212 @@
package ql

import (
	"database/sql"
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	nurl "net/url"

	_ "github.com/cznic/ql/driver"
	"github.com/mattes/migrate"
	"github.com/mattes/migrate/database"
)

func init() {
	database.Register("ql", &Ql{})
}

var DefaultMigrationsTable = "schema_migrations"
var (
	ErrDatabaseDirty  = fmt.Errorf("database is dirty")
	ErrNilConfig      = fmt.Errorf("no config")
	ErrNoDatabaseName = fmt.Errorf("no database name")
	ErrAppendPEM      = fmt.Errorf("failed to append PEM")
)

type Config struct {
	MigrationsTable string
	DatabaseName    string
}

type Ql struct {
	db       *sql.DB
	isLocked bool

	config *Config
}

func WithInstance(instance *sql.DB, config *Config) (database.Driver, error) {
	if config == nil {
		return nil, ErrNilConfig
	}

	if err := instance.Ping(); err != nil {
		return nil, err
	}
	if len(config.MigrationsTable) == 0 {
		config.MigrationsTable = DefaultMigrationsTable
	}

	mx := &Ql{
		db:     instance,
		config: config,
	}
	if err := mx.ensureVersionTable(); err != nil {
		return nil, err
	}
	return mx, nil
}

func (m *Ql) ensureVersionTable() error {
	tx, err := m.db.Begin()
	if err != nil {
		return err
	}
	if _, err := tx.Exec(fmt.Sprintf(`
	CREATE TABLE IF NOT EXISTS %s (version uint64, dirty bool);
	CREATE UNIQUE INDEX IF NOT EXISTS version_unique ON %s (version);
`, m.config.MigrationsTable, m.config.MigrationsTable)); err != nil {
		if err := tx.Rollback(); err != nil {
			return err
		}
		return err
	}
	if err := tx.Commit(); err != nil {
		return err
	}
	return nil
}

func (m *Ql) Open(url string) (database.Driver, error) {
	purl, err := nurl.Parse(url)
	if err != nil {
		return nil, err
	}
	dbfile := strings.Replace(migrate.FilterCustomQuery(purl).String(), "ql://", "", 1)
	db, err := sql.Open("ql", dbfile)
	if err != nil {
		return nil, err
	}

	migrationsTable := purl.Query().Get("x-migrations-table")
	if len(migrationsTable) == 0 {
		migrationsTable = DefaultMigrationsTable
	}

	mx, err := WithInstance(db, &Config{
		DatabaseName:    purl.Path,
		MigrationsTable: migrationsTable,
	})
	if err != nil {
		return nil, err
	}
	return mx, nil
}

func (m *Ql) Close() error {
	return m.db.Close()
}

func (m *Ql) Drop() error {
	query := `SELECT Name FROM __Table`
	tables, err := m.db.Query(query)
	if err != nil {
		return &database.Error{OrigErr: err, Query: []byte(query)}
	}
	defer tables.Close()

	tableNames := make([]string, 0)
	for tables.Next() {
		var tableName string
		if err := tables.Scan(&tableName); err != nil {
			return err
		}
		if len(tableName) > 0 {
			if !strings.HasPrefix(tableName, "__") {
				tableNames = append(tableNames, tableName)
			}
		}
	}

	if len(tableNames) > 0 {
		for _, t := range tableNames {
			query := "DROP TABLE " + t
			err = m.executeQuery(query)
			if err != nil {
				return &database.Error{OrigErr: err, Query: []byte(query)}
			}
		}
		if err := m.ensureVersionTable(); err != nil {
			return err
		}
	}

	return nil
}

func (m *Ql) Lock() error {
	if m.isLocked {
		return database.ErrLocked
	}
	m.isLocked = true
	return nil
}

func (m *Ql) Unlock() error {
	if !m.isLocked {
		return nil
	}
	m.isLocked = false
	return nil
}

func (m *Ql) Run(migration io.Reader) error {
	migr, err := ioutil.ReadAll(migration)
	if err != nil {
		return err
	}
	query := string(migr[:])

	return m.executeQuery(query)
}

func (m *Ql) executeQuery(query string) error {
	tx, err := m.db.Begin()
	if err != nil {
		return &database.Error{OrigErr: err, Err: "transaction start failed"}
	}
	if _, err := tx.Exec(query); err != nil {
		tx.Rollback()
		return &database.Error{OrigErr: err, Query: []byte(query)}
	}
	if err := tx.Commit(); err != nil {
		return &database.Error{OrigErr: err, Err: "transaction commit failed"}
	}
	return nil
}

func (m *Ql) SetVersion(version int, dirty bool) error {
	tx, err := m.db.Begin()
	if err != nil {
		return &database.Error{OrigErr: err, Err: "transaction start failed"}
	}

	query := "TRUNCATE TABLE " + m.config.MigrationsTable
	if _, err := tx.Exec(query); err != nil {
		return &database.Error{OrigErr: err, Query: []byte(query)}
	}

	if version >= 0 {
		query := fmt.Sprintf(`INSERT INTO %s (version, dirty) VALUES (%d, %t)`, m.config.MigrationsTable, version, dirty)
		if _, err := tx.Exec(query); err != nil {
			tx.Rollback()
			return &database.Error{OrigErr: err, Query: []byte(query)}
		}
	}

	if err := tx.Commit(); err != nil {
		return &database.Error{OrigErr: err, Err: "transaction commit failed"}
	}

	return nil
}

func (m *Ql) Version() (version int, dirty bool, err error) {
	query := "SELECT version, dirty FROM " + m.config.MigrationsTable + " LIMIT 1"
	err = m.db.QueryRow(query).Scan(&version, &dirty)
	if err != nil {
		return database.NilVersion, false, nil
	}
	return version, dirty, nil
}
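As `Open` above shows, the ql driver treats everything after the `ql://` prefix as the path of the database file and reads an optional `x-migrations-table` query parameter before falling back to `DefaultMigrationsTable`. A usage sketch, assuming a hypothetical `/tmp/data.ql` file and `./migrations` directory:

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/ql"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Hypothetical paths; x-migrations-table overrides the default
	// "schema_migrations" table name.
	m, err := migrate.New(
		"file://./migrations",
		"ql:///tmp/data.ql?x-migrations-table=my_migrations")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```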
62 vendor/src/github.com/mattes/migrate/database/ql/ql_test.go vendored Normal file
@@ -0,0 +1,62 @@
package ql

import (
	"database/sql"
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"

	_ "github.com/cznic/ql/driver"
	"github.com/mattes/migrate"
	dt "github.com/mattes/migrate/database/testing"
	_ "github.com/mattes/migrate/source/file"
)

func Test(t *testing.T) {
	dir, err := ioutil.TempDir("", "ql-driver-test")
	if err != nil {
		return
	}
	defer func() {
		os.RemoveAll(dir)
	}()
	fmt.Printf("DB path : %s\n", filepath.Join(dir, "ql.db"))
	p := &Ql{}
	addr := fmt.Sprintf("ql://%s", filepath.Join(dir, "ql.db"))
	d, err := p.Open(addr)
	if err != nil {
		t.Fatalf("%v", err)
	}

	db, err := sql.Open("ql", filepath.Join(dir, "ql.db"))
	if err != nil {
		return
	}
	defer func() {
		if err := db.Close(); err != nil {
			return
		}
	}()
	dt.Test(t, d, []byte("CREATE TABLE t (Qty int, Name string);"))
	driver, err := WithInstance(db, &Config{})
	if err != nil {
		t.Fatalf("%v", err)
	}
	if err := d.Drop(); err != nil {
		t.Fatal(err)
	}

	m, err := migrate.NewWithDatabaseInstance(
		"file://./migration",
		"ql", driver)
	if err != nil {
		t.Fatalf("%v", err)
	}
	fmt.Println("UP")
	err = m.Up()
	if err != nil {
		t.Fatalf("%v", err)
	}
}
6 vendor/src/github.com/mattes/migrate/database/redshift/README.md vendored Normal file
@@ -0,0 +1,6 @@
Redshift
===

This provides a Redshift driver for migrations. It is used whenever the URL of the database starts with `redshift://`.

Redshift is PostgreSQL compatible but has some specific features (or lack thereof) that require slightly different behavior.
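Since the wrapper below only rewrites the URL scheme to `postgres` and disables locking, usage looks the same as with the Postgres driver apart from the `redshift://` scheme. A hedged sketch with a hypothetical cluster endpoint and credentials:

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/redshift"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Hypothetical endpoint; the driver swaps the scheme to "postgres",
	// delegates to the Postgres driver, and skips advisory locking.
	m, err := migrate.New(
		"file://./migrations",
		"redshift://user:password@example-cluster.example.com:5439/dev?sslmode=require")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```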
46 vendor/src/github.com/mattes/migrate/database/redshift/redshift.go vendored Normal file
@@ -0,0 +1,46 @@
package redshift

import (
	"net/url"

	"github.com/mattes/migrate/database"
	"github.com/mattes/migrate/database/postgres"
)

// init registers the driver under the name 'redshift'
func init() {
	db := new(Redshift)
	db.Driver = new(postgres.Postgres)

	database.Register("redshift", db)
}

// Redshift is a wrapper around the PostgreSQL driver which implements Redshift-specific behavior.
//
// Currently, the only different behaviour is the lack of locking in Redshift. The (Un)Lock() method(s) have been overridden from the PostgreSQL adapter to simply return nil.
type Redshift struct {
	// The wrapped PostgreSQL driver.
	database.Driver
}

// Open implements the database.Driver interface by parsing the URL, switching the scheme from "redshift" to "postgres", and delegating to the underlying PostgreSQL driver.
func (driver *Redshift) Open(dsn string) (database.Driver, error) {
	parsed, err := url.Parse(dsn)
	if err != nil {
		return nil, err
	}

	parsed.Scheme = "postgres"
	psql, err := driver.Driver.Open(parsed.String())
	if err != nil {
		return nil, err
	}

	return &Redshift{Driver: psql}, nil
}

// Lock implements the database.Driver interface by not locking and returning nil.
func (driver *Redshift) Lock() error { return nil }

// Unlock implements the database.Driver interface by not unlocking and returning nil.
func (driver *Redshift) Unlock() error { return nil }
0 vendor/src/github.com/mattes/migrate/database/shell/README.md vendored Normal file

35 vendor/src/github.com/mattes/migrate/database/spanner/README.md vendored Normal file
@@ -0,0 +1,35 @@
# Google Cloud Spanner

## Usage

The DSN must be given in the following format.

`spanner://projects/{projectId}/instances/{instanceId}/databases/{databaseName}`

See [Google Spanner Documentation](https://cloud.google.com/spanner/docs) for details.

| Param | WithInstance Config | Description |
| ----- | ------------------- | ----------- |
| `x-migrations-table` | `MigrationsTable` | Name of the migrations table |
| `url` | `DatabaseName` | The full path to the Spanner database resource. If provided as part of `Config` it must not contain a scheme or query string to match the format `projects/{projectId}/instances/{instanceId}/databases/{databaseName}` |
| `projectId` | | The Google Cloud Platform project id |
| `instanceId` | | The id of the instance running Spanner |
| `databaseName` | | The name of the Spanner database |

> **Note:** Google Cloud Spanner migrations can take a considerable amount of
> time. The migrations provided as part of the example take about 6 minutes to
> run on a small instance.
>
> ```log
> 1481574547/u create_users_table (21.354507597s)
> 1496539702/u add_city_to_users (41.647359754s)
> 1496601752/u add_index_on_user_emails (2m12.155787369s)
> 1496602638/u create_books_table (2m30.77299181s)
> ```

## Testing

To unit test the `spanner` driver, `SPANNER_DATABASE` needs to be set. You'll
need to sign up to Google Cloud Platform (GCP) and have a running Spanner
instance since it is not possible to run Google Spanner outside GCP.
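Putting the DSN format above together with the `file` source, a usage sketch in Go (project, instance, database, and migrations directory are all hypothetical; credentials are resolved by the Google Cloud client libraries from the environment):

```go
package main

import (
	"log"

	"github.com/mattes/migrate"
	_ "github.com/mattes/migrate/database/spanner"
	_ "github.com/mattes/migrate/source/file"
)

func main() {
	// Hypothetical Spanner resource path, matching the DSN format documented above.
	m, err := migrate.New(
		"file://./migrations",
		"spanner://projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```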
@@ -0,0 +1 @@
DROP TABLE Users

@@ -0,0 +1,5 @@
CREATE TABLE Users (
    UserId INT64,
    Name STRING(40),
    Email STRING(83)
) PRIMARY KEY(UserId)

@@ -0,0 +1 @@
ALTER TABLE Users DROP COLUMN city

@@ -0,0 +1 @@
ALTER TABLE Users ADD COLUMN city STRING(100)

@@ -0,0 +1 @@
DROP INDEX UsersEmailIndex

@@ -0,0 +1 @@
CREATE UNIQUE INDEX UsersEmailIndex ON Users (Email)

@@ -0,0 +1 @@
DROP TABLE Books

@@ -0,0 +1,6 @@
CREATE TABLE Books (
    UserId INT64,
    Name STRING(40),
    Author STRING(40)
) PRIMARY KEY(UserId, Name),
  INTERLEAVE IN PARENT Users ON DELETE CASCADE
294 vendor/src/github.com/mattes/migrate/database/spanner/spanner.go vendored Normal file
@@ -0,0 +1,294 @@
package spanner

import (
	"fmt"
	"io"
	"io/ioutil"
	"log"
	nurl "net/url"
	"regexp"
	"strings"

	"golang.org/x/net/context"

	"cloud.google.com/go/spanner"
	sdb "cloud.google.com/go/spanner/admin/database/apiv1"

	"github.com/mattes/migrate"
	"github.com/mattes/migrate/database"

	"google.golang.org/api/iterator"
	adminpb "google.golang.org/genproto/googleapis/spanner/admin/database/v1"
)

func init() {
	db := Spanner{}
	database.Register("spanner", &db)
}

// DefaultMigrationsTable is used if no custom table is specified
const DefaultMigrationsTable = "SchemaMigrations"

// Driver errors
var (
	ErrNilConfig      = fmt.Errorf("no config")
	ErrNoDatabaseName = fmt.Errorf("no database name")
	ErrNoSchema       = fmt.Errorf("no schema")
	ErrDatabaseDirty  = fmt.Errorf("database is dirty")
)

// Config used for a Spanner instance
type Config struct {
	MigrationsTable string
	DatabaseName    string
}

// Spanner implements database.Driver for Google Cloud Spanner
type Spanner struct {
	db *DB

	config *Config
}

type DB struct {
	admin *sdb.DatabaseAdminClient
	data  *spanner.Client
}

// WithInstance implements database.Driver
func WithInstance(instance *DB, config *Config) (database.Driver, error) {
	if config == nil {
		return nil, ErrNilConfig
	}

	if len(config.DatabaseName) == 0 {
		return nil, ErrNoDatabaseName
	}

	if len(config.MigrationsTable) == 0 {
		config.MigrationsTable = DefaultMigrationsTable
	}

	sx := &Spanner{
		db:     instance,
		config: config,
	}

	if err := sx.ensureVersionTable(); err != nil {
		return nil, err
	}

	return sx, nil
}

// Open implements database.Driver
func (s *Spanner) Open(url string) (database.Driver, error) {
	purl, err := nurl.Parse(url)
	if err != nil {
		return nil, err
	}

	ctx := context.Background()

	adminClient, err := sdb.NewDatabaseAdminClient(ctx)
	if err != nil {
		return nil, err
	}
	dbname := strings.Replace(migrate.FilterCustomQuery(purl).String(), "spanner://", "", 1)
	dataClient, err := spanner.NewClient(ctx, dbname)
	if err != nil {
		log.Fatal(err)
	}

	migrationsTable := purl.Query().Get("x-migrations-table")
	if len(migrationsTable) == 0 {
		migrationsTable = DefaultMigrationsTable
	}

	db := &DB{admin: adminClient, data: dataClient}
	return WithInstance(db, &Config{
		DatabaseName:    dbname,
		MigrationsTable: migrationsTable,
	})
}

// Close implements database.Driver
func (s *Spanner) Close() error {
	s.db.data.Close()
	return s.db.admin.Close()
}

// Lock implements database.Driver but doesn't do anything because Spanner only
// enqueues the UpdateDatabaseDdlRequest.
func (s *Spanner) Lock() error {
	return nil
}

// Unlock implements database.Driver but no action required, see Lock.
func (s *Spanner) Unlock() error {
	return nil
}

// Run implements database.Driver
func (s *Spanner) Run(migration io.Reader) error {
	migr, err := ioutil.ReadAll(migration)
	if err != nil {
		return err
	}

	// run migration
	stmts := migrationStatements(migr)
	ctx := context.Background()

	op, err := s.db.admin.UpdateDatabaseDdl(ctx, &adminpb.UpdateDatabaseDdlRequest{
		Database:   s.config.DatabaseName,
		Statements: stmts,
	})

	if err != nil {
		return &database.Error{OrigErr: err, Err: "migration failed", Query: migr}
	}

	if err := op.Wait(ctx); err != nil {
		return &database.Error{OrigErr: err, Err: "migration failed", Query: migr}
	}

	return nil
}

// SetVersion implements database.Driver
func (s *Spanner) SetVersion(version int, dirty bool) error {
	ctx := context.Background()

	_, err := s.db.data.ReadWriteTransaction(ctx,
		func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
			m := []*spanner.Mutation{
				spanner.Delete(s.config.MigrationsTable, spanner.AllKeys()),
				spanner.Insert(s.config.MigrationsTable,
					[]string{"Version", "Dirty"},
					[]interface{}{version, dirty},
				)}
			return txn.BufferWrite(m)
		})
	if err != nil {
		return &database.Error{OrigErr: err}
	}

	return nil
}

// Version implements database.Driver
func (s *Spanner) Version() (version int, dirty bool, err error) {
	ctx := context.Background()

	stmt := spanner.Statement{
		SQL: `SELECT Version, Dirty FROM ` + s.config.MigrationsTable + ` LIMIT 1`,
	}
	iter := s.db.data.Single().Query(ctx, stmt)
	defer iter.Stop()

	row, err := iter.Next()
	switch err {
	case iterator.Done:
		return database.NilVersion, false, nil
	case nil:
		var v int64
		if err = row.Columns(&v, &dirty); err != nil {
			return 0, false, &database.Error{OrigErr: err, Query: []byte(stmt.SQL)}
		}
		version = int(v)
	default:
		return 0, false, &database.Error{OrigErr: err, Query: []byte(stmt.SQL)}
	}

	return version, dirty, nil
}

// Drop implements database.Driver. Retrieves the database schema first and
// creates statements to drop the indexes and tables accordingly.
// Note: The drop statements are created in reverse order to how they're
// provided in the schema. Assuming the schema describes how the database can
// be "built up", it seems logical to "unbuild" the database simply by going in
// the opposite direction. More testing is needed to confirm this.
func (s *Spanner) Drop() error {
	ctx := context.Background()
	res, err := s.db.admin.GetDatabaseDdl(ctx, &adminpb.GetDatabaseDdlRequest{
		Database: s.config.DatabaseName,
	})
	if err != nil {
		return &database.Error{OrigErr: err, Err: "drop failed"}
	}
	if len(res.Statements) == 0 {
		return nil
	}

	r := regexp.MustCompile(`(CREATE TABLE\s(\S+)\s)|(CREATE.+INDEX\s(\S+)\s)`)
	stmts := make([]string, 0)
	for i := len(res.Statements) - 1; i >= 0; i-- {
		s := res.Statements[i]
		m := r.FindSubmatch([]byte(s))

		if len(m) == 0 {
			continue
		} else if tbl := m[2]; len(tbl) > 0 {
			stmts = append(stmts, fmt.Sprintf(`DROP TABLE %s`, tbl))
		} else if idx := m[4]; len(idx) > 0 {
			stmts = append(stmts, fmt.Sprintf(`DROP INDEX %s`, idx))
		}
	}

	op, err := s.db.admin.UpdateDatabaseDdl(ctx, &adminpb.UpdateDatabaseDdlRequest{
		Database:   s.config.DatabaseName,
		Statements: stmts,
	})
	if err != nil {
		return &database.Error{OrigErr: err, Query: []byte(strings.Join(stmts, "; "))}
	}
	if err := op.Wait(ctx); err != nil {
		return &database.Error{OrigErr: err, Query: []byte(strings.Join(stmts, "; "))}
	}

	if err := s.ensureVersionTable(); err != nil {
		return err
	}

	return nil
}

func (s *Spanner) ensureVersionTable() error {
	ctx := context.Background()
	tbl := s.config.MigrationsTable
	iter := s.db.data.Single().Read(ctx, tbl, spanner.AllKeys(), []string{"Version"})
	if err := iter.Do(func(r *spanner.Row) error { return nil }); err == nil {
		return nil
	}

	stmt := fmt.Sprintf(`CREATE TABLE %s (
    Version INT64 NOT NULL,
    Dirty   BOOL NOT NULL
) PRIMARY KEY(Version)`, tbl)

	op, err := s.db.admin.UpdateDatabaseDdl(ctx, &adminpb.UpdateDatabaseDdlRequest{
		Database:   s.config.DatabaseName,
		Statements: []string{stmt},
	})

	if err != nil {
		return &database.Error{OrigErr: err, Query: []byte(stmt)}
	}
	if err := op.Wait(ctx); err != nil {
		return &database.Error{OrigErr: err, Query: []byte(stmt)}
	}

	return nil
}

func migrationStatements(migration []byte) []string {
	regex := regexp.MustCompile(";$")
	migrationString := string(migration[:])
	migrationString = strings.TrimSpace(migrationString)
	migrationString = regex.ReplaceAllString(migrationString, "")

	statements := strings.Split(migrationString, ";")
	return statements
}
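`UpdateDatabaseDdl` expects one DDL statement per entry, which is why `Run` splits the migration body with `migrationStatements`: the text is trimmed, a single trailing semicolon is stripped, and the rest is split on `;`. A small sketch of that behaviour, written as a test that would sit next to `spanner.go` in the same package (the table and index names are made up):

```go
package spanner

import (
	"reflect"
	"testing"
)

// Sketch only: verifies that a two-statement migration is split into two
// DDL entries, including the leading newline kept on the second statement.
func TestMigrationStatementsSketch(t *testing.T) {
	ddl := []byte("CREATE TABLE A (Id INT64) PRIMARY KEY(Id);\nCREATE INDEX AIdx ON A (Id);")
	got := migrationStatements(ddl)
	want := []string{
		"CREATE TABLE A (Id INT64) PRIMARY KEY(Id)",
		"\nCREATE INDEX AIdx ON A (Id)",
	}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("got %q, want %q", got, want)
	}
}
```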
28 vendor/src/github.com/mattes/migrate/database/spanner/spanner_test.go vendored Normal file
@@ -0,0 +1,28 @@
package spanner

import (
	"fmt"
	"os"
	"testing"

	dt "github.com/mattes/migrate/database/testing"
)

func Test(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping test in short mode.")
	}

	db, ok := os.LookupEnv("SPANNER_DATABASE")
	if !ok {
		t.Skip("SPANNER_DATABASE not set, skipping test.")
	}

	s := &Spanner{}
	addr := fmt.Sprintf("spanner://%v", db)
	d, err := s.Open(addr)
	if err != nil {
		t.Fatalf("%v", err)
	}
	dt.Test(t, d, []byte("SELECT 1"))
}
0 vendor/src/github.com/mattes/migrate/database/sqlite3/README.md vendored Normal file
Some files were not shown because too many files have changed in this diff.