Steve Donie
Principal Software Engineer
Datical, Inc. http://www.datical.com/
When I use the --logLevel parameter on the command line, Liquibase prints the log output to the console. But when I additionally use the --logFile parameter, it writes the output only to the file, without printing it to the console. Is there any way to have Liquibase print the log output to the console AND to a log file?
I am using Liquibase 3.5.5, and this is what I give Liquibase as parameters:

Fortunately, an experienced programmer in any programming language (whatever it may be) can pick up Python very quickly. It is also easy for beginners to use and learn. So why wait? Let's get started.
Python is a dynamic, interpreted (bytecode-compiled), reliable, high-level programming language that allows you to achieve more in fewer lines of code. There are no type declarations of variables, parameters, functions, or methods in the source code. This makes the code short and flexible, but you lose the compile-time type checking of the source code. Python tracks the types of all values at runtime and flags code that does not make sense as it runs.
The programming model offered by Python allows you to write code in various styles, such as functional, procedural, or object-oriented. It offers a huge standard library, manages memory automatically, and has a dynamic type system. It can be installed on various operating systems.
An excellent way to see how Python code works is to run the Python interpreter and type code right into it. If you ever have a question like, "What happens if I add an int to a list?", typing it into the Python interpreter is a fast and likely the best way to see what happens.
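For example, the question above can be answered in a few seconds at the prompt (shown here as a small script; an interactive `>>>` session behaves the same way):

```python
# Appending an int to a list simply grows the list; Python checks types
# at runtime, so mixed-type lists are allowed.
values = [1, 2, 3]
values.append(4)        # fine: lists are heterogeneous containers
values.append("five")   # also fine: no compile-time type checking
print(values)           # [1, 2, 3, 4, 'five']

# But *adding* an int to a list with + raises a TypeError at runtime,
# which is exactly the runtime flagging described above:
try:
    values + 5
except TypeError as e:
    print("TypeError:", e)
```

Note that the error appears only when the offending line actually runs, which is the trade-off of losing compile-time checks.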
Another great feature of Python is dynamic name resolution, which binds method and variable names while the program is executing. It is a powerful feature, and you will appreciate it once you start working with it.
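A minimal sketch of what dynamic name resolution means in practice: a name inside a function is looked up each time the function runs, not when it is defined, so rebinding the name changes behavior without touching the caller.

```python
# greet() refers to whatever `greeting` names at the moment greet() runs.
def greet():
    return greeting()  # resolved dynamically on each call

def greeting():
    return "hello"

print(greet())  # hello

# Rebinding the name changes the behavior without modifying greet():
def greeting():
    return "bonjour"

print(greet())  # bonjour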
Here are some reasons why Python is preferred over other programming languages:
I have a directory where the packages are kept in the format below; both the package specification and the package body are kept in the same file, and the file name is abc.pkg.
create or replace package abc
IS
/* this package is utility to fetch employee related data*/
END abc;
/
create or replace package body abc
IS
END abc;
/
My changeset is
<changeSet author="curious" id="PKG_1" logicalFilePath="package/abc.pkg" runOnChange="true">
<sqlFile path="package/abc.pkg"
relativeToChangelogFile="true"
endDelimiter="/"
splitStatements="true"
encoding="utf8"
/>
</changeSet>
The build is successful, but the package has errors because of the /. Can anyone help me with a fix? This must be the most common way of checking the code into version control systems like Git and SVN.
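One commonly suggested approach (a sketch, not verified against this exact setup): `endDelimiter` accepts a regular expression, so it can be made to match only a `/` that starts its own line, rather than every `/` character in the file (which would also split inside comments or code):

```xml
<changeSet author="curious" id="PKG_1" logicalFilePath="package/abc.pkg" runOnChange="true">
    <sqlFile path="package/abc.pkg"
             relativeToChangelogFile="true"
             endDelimiter="\n/"
             splitStatements="true"
             encoding="utf8"/>
</changeSet>
```

The exact regex may need tuning (some threads suggest variants that also anchor trailing whitespace or end-of-file), so test it against a representative package file first.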
I tested this out: I created 2 databases from the original changelog file and applied the diff changelog file to both databases. I dropped the view in one database and left it in place in the other. I then executed the generateChangeLog command against both databases. I was able to generate a changelog file as expected from the database where the view was dropped. With the database where the old version of the view was still in place, the generated changelog file had createTable tags that were missing the column tags.
I created a new changeset to add a new column to an existing table, as shown below. I also added the key-property to the hbm.xml file of the usergroup table. According to the logs, the changeset does get picked up and runs successfully, but it does not show up in the databasechangelog table, and the changes are not reflected in the usergroup table.
<changeSet author="changeTest" id="2018-10-25">
<preConditions onFail="MARK_RAN">
<not>
<columnExists tableName="USERGROUP" columnName="USERROLE"/>
</not>
</preConditions>
<addColumn tableName="USERGROUP">
<column name="USERROLE" type="VARCHAR(50 ${varcharUnit})" />
</addColumn>
<update tableName="USERGROUP">
<column name="USERROLE" type="VARCHAR(50 ${varcharUnit})" value="user"/>
<where>USERNAME = 'user1'</where>
</update>
</changeSet>
The key-property is under a composite-id element, with 2 keys before it.
<key-property name="userRoleInGroup" type="string" >
<column name="userRole" default="user"/>
</key-property>
Am I missing some update process for this?
SQL Server 2008:
exec sp_addrolemember db_datareader, MYUSER
SQL Server 2012:
ALTER ROLE db_datawriter ADD MEMBER MYUSER
The following SQL, when run via Liquibase against SQL Server 2012, works correctly, but when run against SQL Server 2008 it fails with:
Incorrect syntax near the keyword 'ADD'
Code:
DECLARE @ver nvarchar(50),
@intVer int,
@ver2008 int = 10
SET @ver = CAST(serverproperty('ProductVersion') AS nvarchar)
SET @intVer = CAST(SUBSTRING(@ver, 1, CHARINDEX('.', @ver) - 1) AS int)
IF (@intVer = @ver2008)
exec sp_addrolemember db_datawriter, MYUSER
ELSE
ALTER ROLE db_datawriter ADD MEMBER MYUSER;
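A hedged sketch of a workaround that is commonly used for this (not from the original post): SQL Server parses the entire batch before executing any of it, so the 2008 parser rejects the `ALTER ROLE ... ADD MEMBER` syntax even though that branch would never run there. Hiding the newer syntax inside `EXEC()` defers parsing of that statement to runtime:

```sql
DECLARE @ver nvarchar(50),
        @intVer int,
        @ver2008 int = 10
SET @ver = CAST(serverproperty('ProductVersion') AS nvarchar)
SET @intVer = CAST(SUBSTRING(@ver, 1, CHARINDEX('.', @ver) - 1) AS int)
IF (@intVer = @ver2008)
    exec sp_addrolemember db_datawriter, MYUSER
ELSE
    -- dynamic SQL: the 2008 parser sees only a string literal here
    EXEC('ALTER ROLE db_datawriter ADD MEMBER MYUSER');
```

Verify this against both server versions before relying on it, since the version comparison logic is unchanged from the original.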
According to the description of DatabaseMetaData.getColumns, the content of the COLUMN_SIZE column is pretty worthless for datetime/timestamp columns: "For datetime datatypes, this is the length in characters of the String representation (assuming the maximum allowed precision of the fractional seconds component)."
java -jar liquibase.jar \
--driver=oracle.jdbc.OracleDriver \
--classpath=website.war \
--changeLogFile=com/example/db.changelog.xml \
--url=jdbc:oracle:thin:@localhost:1521:oracle \
--username=scott \
--password=tiger \
update
and indeed all these tmp files are created, but not removed (?):

INFO 11/21/18 12:54 PM: liquibase: adding 'jar:file:/tmp/liquibase.tmp7700727174965420685.tmp!/' to classpath
INFO 11/21/18 12:54 PM: liquibase: adding 'jar:file:/tmp/liquibase.tmp6452496865871492534.tmp!/' to classpath
INFO 11/21/18 12:54 PM: liquibase: adding 'jar:file:/tmp/liquibase.tmp4023308518781064752.tmp!/' to classpath
INFO 11/21/18 12:54 PM: liquibase: adding 'jar:file:/tmp/liquibase.tmp8499956868201722408.tmp!/' to classpath
INFO 11/21/18 12:54 PM: liquibase: adding 'jar:file:/tmp/liquibase.tmp7185863074002000436.tmp!/' to classpath
I am trying to upgrade Hadoop from 1.1.2 to 2.9.0
I executed the command start-dfs.sh -upgrade on the Hadoop 2.9.0 cluster.
It shows that the upgrade is in progress and not yet finalized. I have been waiting for a long time, but it still shows the same status.
How to track the status of upgrade process?
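Not from the original thread, but a sketch of the commands commonly used to inspect and finish an HDFS upgrade (verify against the Hadoop 2.9.0 documentation before running them on a live cluster):

```shell
# A classic (non-rolling) upgrade stays "not finalized" until it is finalized
# explicitly; the NameNode web UI also reports the upgrade status.
# Finalize only after verifying the upgraded cluster, since finalizing
# discards the pre-upgrade state needed for a rollback:
hdfs dfsadmin -finalizeUpgrade

# For rolling upgrades, the current status can be queried with:
hdfs dfsadmin -rollingUpgrade query
```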