DbSchemaCLI — Universal Command Line Client

DbSchemaCLI is a command-line client that serves two purposes: it hosts the Floating License Server for team licensing, and it acts as a multi-database SQL client that can query several databases simultaneously, transfer data between them, and automate database tasks. DbSchemaCLI is included in the standard DbSchema installation package.

Floating License Server

DbSchemaCLI can host the Floating License Server for teams sharing licenses in environments without direct internet access. See Licensing & Registration — Floating License Server for the full setup guide.

Connect to the Database

Download the JDBC driver for your database before creating a connection:

download driver {RDBMS}

Replace {RDBMS} with a name from the supported databases list, exactly as listed (case-sensitive). Examples: download driver PostgreSQL, download driver MySql. Drivers are saved to ~/.DbSchema/drivers/.

Alternatively, download the .jar files manually and place them in ~/.DbSchema/drivers/<rdbms>/.

Defining Connections in init.sql

On startup, DbSchemaCLI executes ~/.DbSchema/cli/init.sql. Define your connections there so they are available in every session:

register driver PostgreSql org.postgresql.Driver jdbc:postgresql://{HOST}:{PORT}/{DB} "port=5432"
connection pos1 PostgreSql "user=postgres password=secret host=localhost db=sakila"

register driver Oracle oracle.jdbc.OracleDriver jdbc:oracle:thin:@{HOST}:{PORT}:{DB} "port=1521"
connection ora1 Oracle "user='sys as sysdba' password=secret host=188.21.11.1 db=prod"
  • register driver defines the JDBC driver class, URL template (with {TAG} placeholders), and default values for those tags.
  • connection maps a named connection to a registered driver by supplying real values for each URL tag.
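To make the mechanism concrete, here is an illustrative sketch in Python (not DbSchemaCLI internals): tag substitution can be pictured as merging the driver's default values with the connection's values, then filling each {TAG} in the URL template. The function name build_url is hypothetical.

```python
# Illustrative sketch, NOT DbSchemaCLI code: how {TAG} placeholders in a
# registered URL template combine with driver defaults and connection values.

def build_url(template: str, defaults: dict, values: dict) -> str:
    """Fill each {TAG} from the connection values, falling back to driver defaults."""
    merged = {**defaults, **values}          # connection values override defaults
    for tag, value in merged.items():
        template = template.replace("{" + tag.upper() + "}", str(value))
    return template

url = build_url(
    "jdbc:postgresql://{HOST}:{PORT}/{DB}",
    {"port": "5432"},                        # defaults from `register driver`
    {"host": "localhost", "db": "sakila"},   # values from `connection`
)
print(url)  # jdbc:postgresql://localhost:5432/sakila
```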

Execute Queries on Multiple Databases

Connect to several databases at once and run the same query against all of them:

connect pos1,pos2
SELECT * FROM sakila.city;

Group connections for convenience:

connection group production pos1,pos2
connect production
SELECT * FROM sakila.city;

Spool results to a file (use exclude to omit specific connections from the group):

connect production exclude pos2
spool /tmp/result.csv
SELECT * FROM sakila.city;
spool off

Use Variables

Define variables with vset. Set cli.settings.replace_variables=true to substitute them into SQL queries. Run help to list all system variables; run show variables to see currently defined values.

vset cli.spool.separator=|
vset format.date=YYYY-MM-dd'T'hh:mm:ss'Z'
vset format.timestamp=YYYY-MM-dd'T'hh:mm:ss'Z'
vset cli.settings.replace_variables=true

vset myid=6230
connect db4
execute export.sql

Where export.sql contains:

spool /tmp/export_&myid.csv
select * from some_table where id=&myid;
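Conceptually, &name substitution replaces each &variable token with its vset value before the statement runs. The sketch below is illustrative Python, not DbSchemaCLI code, and the function name substitute is hypothetical:

```python
# Illustrative sketch of &name variable substitution (enabled via
# cli.settings.replace_variables=true). Unknown variables are left untouched.
import re

def substitute(statement: str, variables: dict) -> str:
    """Replace each &name token with its value from the variables dict."""
    return re.sub(r"&(\w+)", lambda m: variables.get(m.group(1), m.group(0)), statement)

print(substitute("select * from some_table where id=&myid", {"myid": "6230"}))
# select * from some_table where id=6230
```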

Transfer Data Between Databases

Transfer rows from one or more source databases into the currently connected target database. Column names must match between source and target:

connect pos1
transfer into_table from pos2,pos3 using
select id, firstname, lastname from persons;

The transfer runs on multiple parallel threads (configurable):

vset cli.settings.transfer_threads=4

There is one reader thread and multiple writer threads. Each writer commits in batches ranging from 500 rows (for wide tables with 50+ columns) to 100,000 rows (for narrow single-column tables).
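The exact batch-sizing rule is not documented; the heuristic below is a hypothetical Python sketch that merely reproduces the two documented endpoints (500 rows at 50+ columns, 100,000 rows at one column) by shrinking the batch as the table gets wider:

```python
# HYPOTHETICAL sketch: the manual only states the endpoints of the batch range;
# the actual DbSchemaCLI rule may differ.

def batch_size(column_count: int) -> int:
    """Scale the commit batch inversely with table width, clamped to 500..100000."""
    size = 100_000 // max(column_count, 1) ** 2   # shrink quickly for wide tables
    return min(100_000, max(500, size))

print(batch_size(1))   # 100000  (narrow single-column table)
print(batch_size(50))  # 500     (wide table, 50+ columns)
```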

Write Groovy Scripts

For tasks requiring more logic than SQL alone, use Groovy, a JVM scripting language that extends Java syntax with features such as closures. Start a Groovy block with the keyword groovy and end it with a line containing only /:

connect db1

groovy
int days = 5
sql.eachRow( """
    SELECT personid, firstname, lastname FROM persons p WHERE p.created < sysdate-${days} AND
    EXISTS ( SELECT 1 FROM sport_club s WHERE s.personid=p.personid )
    ORDER BY personid ASC
""".toString() ) { r ->
    println "${r.personid} ${r.firstname} ${r.lastname}"
}
/

Two objects are available inside Groovy scripts:

  • sql — the database connection as a Groovy SQL object
  • connector — the DbSchemaCLI connector object for a named connection

Groovy features useful for database work:

  • Multiline strings delimited with triple quotes """..."""
  • ${...} expression interpolation inside strings
  • Closures like sql.eachRow(...) { r -> ... } (similar to Java lambdas)

Chunk Database Updates and Deletes

Updating or deleting large amounts of data in a single transaction can cause disk, memory, or locking issues. Split the work into chunks and commit periodically.

Update example (keeping the result-set cursor open across commits via setResultSetHoldability, which PostgreSQL supports):

import java.sql.ResultSet
sql.setResultSetHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT)
def i = 0
sql.eachRow("SELECT personid FROM persons".toString()) { r ->
    sql.execute("UPDATE persons SET occupation='t' WHERE personid=?".toString(), [r.personid])
    i++
    if (i > 1000) {
        sql.commit()
        i = 0
        print "."
    }
}
sql.commit()

Delete example (load IDs first, then delete in batches):

List ids = new ArrayList()
sql.eachRow('SELECT id FROM large_table') { m ->
    ids.add(m.id)
}
int cnt = 0
for (int id : ids) {
    sql.execute('DELETE FROM large_table WHERE id=?', [id])
    cnt++
    if (cnt > 1000) {
        print '.'
        sql.commit()
        cnt = 0
    }
}

Create Custom Commands

Save any Groovy script as a file and register it as a named command. Example — a Postgres disk-usage command saved as diskUsage.groovy:

def cli = new CliBuilder(usage: 'disk usage')
cli.h(longOpt: 'help', 'Show usage information and quit')
def options = cli.parse(args)

if (options.'help') {
    cli.usage()
    return
}

sql.execute("DROP TABLE IF EXISTS dbschema_disk_usage")
sql.execute("CREATE TABLE dbschema_disk_usage(filesystem text, blocks bigint, used bigint, free bigint, percent text, mount_point text)")
sql.execute("COPY dbschema_disk_usage FROM PROGRAM 'df -k | sed \"s/ */,/g\"' WITH (FORMAT CSV, HEADER)")
sql.execute("UPDATE dbschema_disk_usage SET used=used/(1024*1024), free=free/(1024*1024), blocks=blocks/(1024*1024)")

String dbid = sql.firstRow("SELECT current_database() as dbid").dbid
sql.execute("COPY dbschema_disk_usage FROM PROGRAM 'du -m -s /data/${dbid}/pgsystem/pg_log | sed \"s/\\s\\s*/,/g\"' WITH (FORMAT CSV)".toString())

int logBlocks = sql.firstRow("SELECT sum(blocks) as usage FROM dbschema_disk_usage").usage

sql.eachRow("SELECT * FROM dbschema_disk_usage") { r ->
    if (r.filesystem.startsWith('/dev/mapper')) {
        int freePercent = Math.round(r.free * 100 / r.blocks)
        String status = (r.free < 30 && r.blocks > 1500) ? 'Very Low' :
                        ((r.free < 50 && r.blocks > 1000) || r.free < 30) ? 'Low' : 'Ok'
        println String.format('%10s Free %3d%% ( %dG out of %dG )', status, freePercent, r.free, r.blocks)
    }
}

sql.execute("DROP TABLE dbschema_disk_usage")

Register and invoke the command:

register command diskUsage.groovy as disk usage
disk usage

Create Connections Using Suffixes

For large clusters with primary and standby databases (e.g. pos1.db / pos1.sbydb), use a suffix factory to auto-derive standby connections from the primary ones:

groovy
import com.wisecoders.dbs.cli.connection.*

for (int i = 1; i < 130; i++) {
    ConnectionFactory.createConnector("pos${i}", "postgres", "user=postgres host=pos${i}.db db=sakila${i}")
}

ConnectionFactory.addSuffix(new Suffix("sby") {
    public Connector create(Connector con) {
        String host = con.getHost()
        if (host.endsWith(".db")) return duplicate(con, "host=" + host.replaceFirst(/\.db/, /.sbydb/))
        return null
    }
})
/

The suffix factory receives an existing connector and returns a derived connector with the hostname modified. Connections pos1sby, pos2sby, etc. become available automatically.

Write Cronjob Scripts

DbSchemaCLI can execute scheduled scripts on a recurring basis. If a script fails, an alert email is sent to configured addresses.

Configure the mail server and cron folder:

vset cli.mail.server.host=internal.mailserver
vset cli.mail.server.user=admin
vset cli.mail.server.password=secret
vset cli.mail.from=Admin
vset cli.mail.to=admin@example.com
vset cli.cron.folder=/dbschemacli/cronscripts

Add an entry to /etc/crontab on the server running DbSchemaCLI:

01 * * * * /usr/local/bin/dbschemacli -cron

Name script files using the schedule pattern — DbSchemaCLI parses the filename to determine when to run each script:

sales_report.schedule_hour(13)_dayOfWeek(1).sql
usage_report.schedule_minute(each5).sql
load_report.schedule_dayOfMonth(1).sql

Supported schedule tokens after the schedule_ keyword: minute, hour, dayOfWeek, dayOfMonth, dayOfYear, month, year. Each token takes a value in parentheses:

  • Single number: (13)
  • List: (1,2,5)
  • Range: (1-10)
  • Every N: (each5)

Files with a .groovy extension are executed as Groovy scripts instead of SQL.

Deliver Data to an FTP / SFTP Server

Upload CSV spool files to any FTP or SFTP server using Apache Commons VFS:

upload sftp://user:password@server.com/intoFolder report.csv

Add -d to delete the local file after a successful upload:

upload -d sftp://user:password@server.com/intoFolder report.csv

Store Passwords in a Separate File

To keep passwords out of init.sql (e.g. for safe Git storage), store them in ~/.DbSchema/dbshell/pass.txt or ~/.pgpass. The format follows the PostgreSQL pgpass standard:

hostname:port:database:username:password

Wildcards are supported:

*:1521:sakila:postgres:secret

In your connection definition, set password=<pgpass> to tell DbSchemaCLI to look up the password from the file.
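A minimal sketch of such a lookup, in illustrative Python (the real client may differ, and this ignores pgpass escaping of ':' and '\' in fields):

```python
# Illustrative sketch of a pgpass-style lookup: the file format is
# hostname:port:database:username:password, '*' matches any value,
# and the first matching entry wins (as in PostgreSQL).

def pgpass_lookup(lines, host, port, db, user):
    for line in lines:
        fields = line.strip().split(":")
        if len(fields) != 5:
            continue                                  # skip malformed lines
        if all(f in ("*", v) for f, v in zip(fields[:4], (host, port, db, user))):
            return fields[4]
    return None

entries = ["*:1521:sakila:postgres:secret"]
print(pgpass_lookup(entries, "localhost", "1521", "sakila", "postgres"))  # secret
```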

DbSchemaCLI Start Parameters

DbSchemaCLI accepts the following command-line arguments:

  • -cron — start in crontab mode (see Cronjob Scripts)
  • <script-path> — execute the given .sql or .groovy script file and exit