There are multiple microservices, each running on Spring Boot. Data is persisted in a common database, with each microservice owning its own schema. As a microservice's tables evolve over time, Liquibase is used to maintain them. On a brand-new install of the product, Liquibase creates the tables when the service starts.
For a variety of reasons we need to combine microservices. Say service P (primary) and service S (secondary) are to be combined, with all tables under ONE schema (P's in this case). One requirement is that the changesets in the combined microservice must be able to handle a back-level migration. Suppose both services are now at version 4 and a customer at version 2 wants to upgrade. In the old model of separate services, the first time version 4 of P and S ran, both services' schemas would be upgraded to V4 (assuming there were changes in V3 and V4).
The combined service needs to handle the same upgrade scenario. I believe I can do this if I can find a way to upgrade the secondary schema from a changelog. Upgrading the primary schema is what would normally happen anyway and is not the issue. Typically, a change log run is "driven" by the corresponding DATABASECHANGELOG table: the entry for a changeset in that table determines whether the changeset needs to be executed on a subsequent invocation of the service. The change log for the new combined service will need to be enhanced to upgrade any existing schema for S before its tables are copied over by a custom SQL task. I see the existing-schema upgrade as a separate custom task that sets up a new Liquibase environment and runs a change log for S against its schema. The code below is from the upgrade custom task's execute() method; in it, "originalSchema" is the schema of S that needs to be upgraded.
public void execute(Database database) throws CustomChangeException {
    LOGGER.debug("Updating Schema " + originalSchema);
    confirmationMessage = ""; // if there's an exception it'll be visible in the log
    validateSettings();
    JdbcConnection dbConn = (JdbcConnection) database.getConnection();
    try {
        Database db = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(dbConn);

        // Remember the current schema settings so they can be restored afterwards.
        String savedDefaultSchema = database.getDefaultSchemaName();
        String savedLiquibaseSchema = database.getLiquibaseSchemaName();
        database.setDefaultSchemaName(originalSchema);

        // Without this reset, the factory keeps handing back the history service cached
        // for the combined service's DATABASECHANGELOG (explained below).
        ChangeLogHistoryServiceFactory savedCLHSFactory = ChangeLogHistoryServiceFactory.getInstance();
        ChangeLogHistoryServiceFactory.reset();

        // Run S's change log against its original schema.
        Liquibase liquibase = new Liquibase(changeLogPath, new ClassLoaderResourceAccessor(), database);
        liquibase.update(new Contexts(), new LabelExpression());

        // Restore the outer Liquibase environment.
        ChangeLogHistoryServiceFactory.setInstance(savedCLHSFactory);
        database.setDefaultSchemaName(savedDefaultSchema);

        confirmationMessage = String.format("UpgradeSchema has successfully run on schema %s using change log %s.",
                originalSchema, changeLogPath);
    }
    catch (DatabaseException de) {
        throw new CustomChangeException(de);
    }
    catch (LiquibaseException le) {
        throw new CustomChangeException(le);
    }
}
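For context, here is a minimal sketch of how the surrounding class is shaped. This assumes the standard CustomTaskChange layout, where originalSchema and changeLogPath are injected by Liquibase via setters from the changeset's param entries; the execute() body is the method shown above, and the class name UpgradeSchema matches the confirmation message.

import liquibase.change.custom.CustomTaskChange;
import liquibase.database.Database;
import liquibase.exception.CustomChangeException;
import liquibase.exception.SetupException;
import liquibase.exception.ValidationErrors;
import liquibase.resource.ResourceAccessor;

public class UpgradeSchema implements CustomTaskChange {

    private String originalSchema;   // schema of S, supplied as a changeset parameter
    private String changeLogPath;    // change log for S, also supplied as a parameter
    private String confirmationMessage = "";

    // Liquibase populates the parameters through these setters.
    public void setOriginalSchema(String originalSchema) { this.originalSchema = originalSchema; }
    public void setChangeLogPath(String changeLogPath) { this.changeLogPath = changeLogPath; }

    @Override
    public void execute(Database database) throws CustomChangeException {
        // body as shown above
    }

    @Override
    public String getConfirmationMessage() { return confirmationMessage; }

    @Override
    public void setUp() throws SetupException { /* nothing to prepare */ }

    @Override
    public void setFileOpener(ResourceAccessor resourceAccessor) { /* not needed here */ }

    @Override
    public ValidationErrors validate(Database database) { return new ValidationErrors(); }
}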
I discovered that if I did not reset the ChangeLogHistoryServiceFactory, it would use the cached changelog table of the combined service, i.e. the table "driving" the outer changelog. ChangeLogHistoryServiceFactory maintains a "services" map keyed by the Database object, and since that object does not change, the same StandardChangeLogHistoryService is returned even in the new environment. Resetting the factory gets around this.
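To make that caching behaviour concrete, here is a small illustration (a sketch only; CacheIllustration and demonstrateCache are just illustrative names, not part of my code, though getInstance(), getChangeLogService() and reset() are the factory methods I am relying on):

import liquibase.changelog.ChangeLogHistoryService;
import liquibase.changelog.ChangeLogHistoryServiceFactory;
import liquibase.database.Database;

public class CacheIllustration {

    // Shows why the nested update needs ChangeLogHistoryServiceFactory.reset().
    static void demonstrateCache(Database database, String originalSchema) {
        ChangeLogHistoryServiceFactory factory = ChangeLogHistoryServiceFactory.getInstance();
        ChangeLogHistoryService before = factory.getChangeLogService(database);

        database.setDefaultSchemaName(originalSchema);

        // Same Database instance => same key in the factory's services map, so the
        // cached service (still bound to the combined service's DATABASECHANGELOG) comes back.
        ChangeLogHistoryService after = factory.getChangeLogService(database);
        assert before == after;

        // After a reset, the next lookup builds a fresh StandardChangeLogHistoryService
        // that resolves DATABASECHANGELOG relative to the new default schema.
        ChangeLogHistoryServiceFactory.reset();
        ChangeLogHistoryService fresh = ChangeLogHistoryServiceFactory.getInstance().getChangeLogService(database);
        assert fresh != before;
    }
}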
Being a relative newbie to Liquibase (I've written changesets using the built-in Change objects, but never attempted a custom one), I thought I'd ask the experts for feedback on whether the above conforms to best practices for running a Liquibase change log against a different schema from within a changeset.
I do recognize the above is not thread-safe, but I'd like to keep that out of the discussion unless it's relevant.
Thanks for taking the time to read this.