dSql Documentation

Register Database Connections

Before creating a database connection you need the JDBC drivers for the databases you are working with. dSql already includes JDBC drivers for MySql, MariaDb, PostgreSQL, Oracle, SqlServer, Sqlite, SAP, Teradata, Vertica, Derby, Exasol, Firebird, H2, HSql and Ingres. For other databases, execute in dSql the command

download driver <rdbms>
where <rdbms> is one database name from this list (case sensitive).

Example: download driver PostgreSQL or download driver MySql. The drivers are downloaded into C:\Users\YourUser\.dSql\drivers. Alternatively, download the JDBC driver .jar files yourself and place them in the .dSql/drivers/<rdbms> folder in your user home directory.

In the user home directory (C:\Users\YourUser or /home/YourUser) dSql creates a folder .dSql with a file init.sql. All commands from this file are executed when dSql starts. Connections are registered there with two commands:

  • register driver <rdbms> <driverClass> <URL> <"default_parameters">
  • connection <name> <rdbms> <"parameters">


register driver PostgreSql org.postgresql.Driver jdbc:postgresql://<host>:<port>/<db> "port=5432"

connection pos1 PostgreSql "user=postgres password=secret host=localhost db=sakila"

For each RDBMS in use (MySql, Postgres, Oracle...) the register driver command defines the Java driver class and the JDBC URL, with parameters written as <...> tags. In the example above the default port value is passed at the end. This way of defining connections gives more flexibility when managing a large number of connections. In the advanced sections you will see the benefits.
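To illustrate the mechanism, here is a minimal sketch of the <tag> substitution (the function build_jdbc_url is hypothetical, not part of dSql):

```python
import re

def build_jdbc_url(url_template, default_params, params):
    """Fill <tag> placeholders in a JDBC URL template from key=value strings.

    Values from `params` (the connection command) override the defaults
    given with `register driver`.
    """
    values = {}
    for part in (default_params + " " + params).split():
        key, _, value = part.partition("=")
        values[key] = value
    # Replace every <tag> with its value from the merged parameters.
    return re.sub(r"<(\w+)>", lambda m: values[m.group(1)], url_template)

url = build_jdbc_url(
    "jdbc:postgresql://<host>:<port>/<db>",
    "port=5432",
    "user=postgres password=secret host=localhost db=sakila",
)
print(url)  # jdbc:postgresql://localhost:5432/sakila
```

Note how the default port from register driver is used unless the connection command overrides it.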

The connection command defines a concrete connection by setting the real values for each URL parameter. The rdbms name in the connection command must match the one from the register driver command.

The complete list of known drivers is the same as for DbSchema.

Query Multiple Databases

This operation assumes that the same table exists in each database you connect to, and that the connections pos1 and pos2 are already defined.

connect pos1,pos2
SELECT * FROM sakila.city;

Connections can be grouped together:

connection group production pos1,pos2
connect production
SELECT * FROM sakila.city;
Spool the result to a file. The exclude clause skips the listed connections from the group:
connect production exclude my3
spool /tmp/result.csv
SELECT * FROM sakila.city;
spool off

Use Variables

Variables can be defined using the command vset. A few of them are system variables used by dSql itself. Any other user-defined variable is replaced in the SQL text if the system variable replace_variables=true. Use help to list all system variables, and execute show variables to list the variables already defined.
vset spool.separator=|
vset format.date=YYYY-MM-dd'T'hh:mm:ss'Z'
vset format.timestamp=YYYY-MM-dd'T'hh:mm:ss'Z'
vset replace_variables=true

vset myid=6230
connect db4
execute export.sql
where the export.sql script is:
spool /tmp/export_&myid.csv
select * from some_table where id=&myid;
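A minimal sketch of the &variable substitution (the function name is hypothetical and dSql's exact tokenization may differ):

```python
import re

def replace_variables(text, variables):
    """Replace &name references with their vset values, as when
    replace_variables=true. Unknown variables are left untouched."""
    return re.sub(r"&(\w+)",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  text)

variables = {"myid": "6230"}
print(replace_variables("spool /tmp/export_&myid.csv", variables))
print(replace_variables("select * from some_table where id=&myid;", variables))
```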

Transfer Data Between Databases

The next command transfers data into the currently connected database from the specified databases, using a reader query executed on the source (from) databases. The column names must match between source and target. There can be multiple source databases.
connect pos1
transfer into_table from pos2,pos3 using
select id, firstname, lastname from persons;

The transfer runs on a configurable number of threads.

vset transfer.threads=4

Execute 'help' to see all configurable parameters.

There is one reader thread and multiple writer threads. Each writer thread writes a batch of rows, then COMMITs the transaction. The writer threads write between 500 and 100,000 records per batch, depending on the number of transferred columns (500 for more than 50 columns, 100,000 for 1-2 columns).
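The reader/writer pattern can be sketched as follows (a simplified illustration with plain Python threads, not dSql's implementation; the write_batch callback stands in for the INSERT + COMMIT step):

```python
import queue
import threading

def transfer(rows, batch_size, writer_count, write_batch):
    """One reader thread batches source rows onto a queue; several
    writer threads each take a batch, write it, and commit."""
    q = queue.Queue(maxsize=writer_count * 2)

    def reader():
        batch = []
        for row in rows:
            batch.append(row)
            if len(batch) == batch_size:
                q.put(batch)
                batch = []
        if batch:
            q.put(batch)
        for _ in range(writer_count):
            q.put(None)          # poison pill: stop each writer

    def writer():
        while True:
            batch = q.get()
            if batch is None:
                break
            write_batch(batch)   # stands in for INSERT rows + COMMIT

    threads = [threading.Thread(target=reader)]
    threads += [threading.Thread(target=writer) for _ in range(writer_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

written = []
lock = threading.Lock()
def write_batch(batch):
    with lock:
        written.extend(batch)

transfer(range(10), batch_size=3, writer_count=4, write_batch=write_batch)
print(sorted(written))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The bounded queue keeps the reader from running far ahead of the writers, which limits memory use during large transfers.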

Write Python Scripts

Many database tasks require more logic than plain SQL. dSql supports Python scripting through Jython. Python code starts with the keyword python and ends with the first line containing only a slash '/'.

Everything that works in Java also works in Jython. Here is an example:

connect db1

statement = sql.createStatement()
res = statement.executeQuery("select * from some_table")
meta_data = res.getMetaData()
col_count = meta_data.getColumnCount()
col_names = [str(meta_data.getColumnName(i + 1)) for i in range(col_count)]
print "Column count:", col_count, "names:", col_names
while res.next():
    values = [str(res.getObject(i + 1)) for i in range(col_count)]
    print dict(zip(col_names, values))

Python scripts receive two objects:

  • sql is the database connection.
  • connector is the dSql object for a defined connection.

Write Groovy Scripts

Many database tasks require more logic than plain SQL. We do this using Groovy, a Java-based scripting language with closures. Groovy code starts with the keyword groovy and ends with the first line containing only a slash '/'.

Everything that works in Java also works in Groovy, with these three major improvements:

  • Multiline strings (also called GStrings), starting and ending with """
  • Variables in GStrings: ${...} is replaced with its value
  • Closures, sql.eachRow(...){ r -> ... }, similar to Java lambdas
Here is an example with Groovy:
connect db1

int days = 5
sql.eachRow( """
    SELECT personid, firstname, lastname FROM persons p WHERE p.created < sysdate-${days} AND
      EXISTS ( SELECT 1 FROM sport_club s WHERE s.personid=p.personid )
      ORDER BY personid ASC
    """.toString() ) { r ->
    println "${r.personid} ${r.firstname} ${r.lastname}"
}

Groovy scripts receive two objects:

  • sql is the database connection, as a Groovy Sql object.
  • connector is the dSql object for a defined connection.

Chunk Database Update or Deletion

Updating or deleting large amounts of data may require splitting the operation into smaller chunks, because a large operation may cause disk or memory issues and may lead to database locks.

  • The update example uses a nested loop and sets the result-set holdability so the cursor survives the intermediate COMMITs.
  • The delete example first loads the id values into an ArrayList.
import java.sql.ResultSet
sql.setResultSetHoldability( ResultSet.HOLD_CURSORS_OVER_COMMIT )
def i = 0
sql.eachRow( "SELECT personid FROM persons" ){ r ->
  sql.execute("UPDATE persons SET ocupation='t' WHERE personid=?", [ r.personid ])
  if ( i++ > 1000 ){
    sql.commit()   // commit the current chunk
    i = 0
    print "."
  }
}
Deletion example:
  List ids = new ArrayList()
  sql.eachRow( 'SELECT id FROM large_table' ) { m ->
    ids.add( m.id )
  }
  int cnt = 0
  for ( int id : ids ) {
    sql.execute('DELETE FROM large_table WHERE id=?', [id] )
    if ( cnt++ > 1000 ) {
      sql.commit()   // commit the current chunk
      print '.'
      cnt = 0
    }
  }
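The same chunked-commit idea, written as a standalone Python sketch against an in-memory SQLite database (illustration only, not dSql code):

```python
import sqlite3

def delete_in_chunks(conn, ids, chunk=1000):
    """Delete rows one by one but COMMIT every `chunk` deletions,
    so locks and transaction logs stay small."""
    cnt = 0
    for row_id in ids:
        conn.execute("DELETE FROM large_table WHERE id=?", (row_id,))
        cnt += 1
        if cnt >= chunk:
            conn.commit()
            cnt = 0
    conn.commit()  # commit the final partial chunk

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large_table(id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO large_table(id) VALUES (?)",
                 [(i,) for i in range(2500)])
conn.commit()
# Materialize the ids first, as the Groovy example does with its ArrayList.
ids = [r[0] for r in conn.execute("SELECT id FROM large_table")]
delete_in_chunks(conn, ids)
remaining = conn.execute("SELECT count(*) FROM large_table").fetchone()[0]
print(remaining)  # 0
```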

Create Custom Command

We create a Postgres command to compute the disk usage on the database host. The command can be executed on multiple databases. It uses Groovy's CliBuilder. The command has to be registered in dSql using the register command.
def cli = new CliBuilder(usage:'disk usage ')
cli.h(longOpt:'help', 'Show usage information and quit')
def options = cli.parse(args)

if (options.'help') {
    cli.usage()
    return
}

sql.execute("DROP TABLE IF EXISTS dsql_disk_usage")
sql.execute("DROP TABLE IF EXISTS dsql_disk_usage_logs")
sql.execute("CREATE TABLE dsql_disk_usage( filesystem text, blocks bigint, used bigint, free bigint, percent text, mount_point text )")
sql.execute("CREATE TABLE dsql_disk_usage_logs( blocks bigint, folder text )")

sql.execute("COPY dsql_disk_usage FROM PROGRAM 'df -k | sed \"s/  */,/g\"' WITH ( FORMAT CSV, HEADER ) ")
sql.execute("UPDATE dsql_disk_usage SET used=used/(1024*1024), free=free/(1024*1024), blocks=blocks/(1024*1024)")

String dbid = sql.firstRow( "SELECT current_database() as dbid").dbid
sql.execute("COPY dsql_disk_usage_logs FROM PROGRAM 'du -m -s /data/${dbid}/pgsystem/pg_log | sed \"s/\\s\\s*/,/g\"' WITH ( FORMAT CSV ) ".toString())
sql.execute("COPY dsql_disk_usage_logs FROM PROGRAM 'du -m -s /data/${dbid}/jobsystem/pg_log | sed \"s/\\s\\s*/,/g\"' WITH ( FORMAT CSV ) ".toString())

int logBlocks = sql.firstRow( "SELECT sum(blocks) as usage FROM dsql_disk_usage_logs").usage

sql.eachRow( "SELECT * FROM dsql_disk_usage"){ r->
  if ( r.filesystem.startsWith( '/dev/mapper') ){
    int freePercent = Math.round(r.free*100/(r.blocks))
    String status = ( r.free < 30 && r.blocks > 1500 ) ? 'Very Low' : (( r.free < 50 && r.blocks > 1000) || r.free < 30 ) ? 'Low' : "Ok"
    String logStatus = logBlocks > 1024 ? String.format( 'pg_logs %dM', logBlocks ) : ''
    println String.format( '%10s   Free %3d%% ( %dG out of %dG ) %s', status, freePercent, r.free, r.blocks, logStatus )
  }
}

sql.execute("DROP TABLE dsql_disk_usage_logs")
sql.execute("DROP TABLE dsql_disk_usage")

Create Connections Using Suffixes

There are cases when you have to manage large database clusters with primary and standby databases. I used to manage a cluster where the primary database host names were like pos1.db, pos2.db, ... and the standbys were called pos1.sbydb, pos2.sbydb, ...

For such cases I have created the database connections like below.

import com.ecircle.dsql.connection.*

for ( int i=1;i <130; i++){
  ConnectionFactory.createConnector( "pos${i}", "postgres", "user=postgres host=pos${i}.db db=sakila${i}" )
}

ConnectionFactory.addSuffix( new Suffix("sby"){
  public Connector create( Connector con ){
   String host = con.getHost();
   if ( host.endsWith(".db") ) return duplicate( con, "host=" + host.replaceFirst(/\.db/, /.sbydb/))
   return null
  }
})
The Suffix class receives a connection. If the host name ends with .db, it duplicates the Connector, creating a new parameter string where .db is replaced with .sbydb.

Write Cronjob Scripts

Using dSql you can execute database scripts on a regular basis with a minimal resolution of one hour. If a script fails, an email is sent to the configured addresses.
vset email.server.host=internal.mailserver
vset email.server.user=admin
vset email.server.password=secret
vset email.from=Admin
vset email.alert=admin@databaseconsole.com
vset cron.folder=/dsql/cronscripts
On the machine where dSql is running, add this line to /etc/crontab, first fixing the path to dSql:
01 * * * * /usr/local/bin/dsql -cron
Then save script files in /dsql/cronscripts/. dSql checks the end of each file name for one of these patterns:
  • .<daily|weekly|monthly|yearly>[<number_or_range>hour].
  • .<number_or_range><day_of_week|day_of_month|day_of_year|month_of_year>[<number_or_range>hour]
If a script fails with an error, an email is sent via the configured email server. Files can also have the .groovy extension; in that case the file content is interpreted as a Groovy script.
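Assuming file names such as cleanup.daily.sql or report.weekly.8hour.sql, the pattern check could look like this sketch (the regex and the example names are my interpretation of the pattern description above, not dSql's actual matcher):

```python
import re

# Matches e.g. ".daily.sql", ".weekly.8hour.sql", ".monthly.0-6hour.groovy"
PATTERN = re.compile(
    r"\.(daily|weekly|monthly|yearly)"   # schedule keyword
    r"(?:\.(\d+(?:-\d+)?)hour)?"         # optional hour or hour range
    r"\.(sql|groovy)$"                   # script extension
)

for name in ["cleanup.daily.sql",
             "report.weekly.8hour.sql",
             "archive.monthly.0-6hour.groovy",
             "notes.txt"]:
    m = PATTERN.search(name)
    print(name, "->", m.groups() if m else "not a cron script")
```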

Upload Reports on [s]ftp Server

Produced .csv report files can be uploaded to ftp or sftp servers. The command is upload [-d] ftpURL fileName, where -d deletes the local file after upload. dSql uses Apache Commons VFS, so all protocols supported there are supported. Example:
upload sftp://ftpuser@ftp.ftphost.com/intoFolder report.csv

Save Passwords in Separate File

In some cases you may want to keep passwords outside the init.sql file where you define the database connections. This allows saving the init.sql file in Git without making passwords public. Therefore dSql allows saving passwords in a separate file, ~/.dsql/passwords.txt or ~/.pgpass (for compatibility with the PostgreSql standard). The file format is the same as the Postgres pgpass format: hostname:port:database:username:password.

For example, this line sets the password 'secret' for all hosts and databases, port 1521, user postgres:

*:1521:*:postgres:secret

The connection should specify password=<pgpass> in the connection parameters to make use of the password from the passwords.txt or .pgpass file.
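A lookup against such a file can be sketched as follows (standard .pgpass matching semantics, where '*' matches any value; the function is illustrative and ignores the \: escaping the real format allows):

```python
def pgpass_lookup(lines, host, port, db, user):
    """Return the password for the first matching pgpass line:
    host:port:database:username:password, '*' being a wildcard."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        h, p, d, u, password = line.split(":", 4)
        if all(pat in ("*", val) for pat, val in
               [(h, host), (p, port), (d, db), (u, user)]):
            return password
    return None

lines = ["*:1521:*:postgres:secret"]
print(pgpass_lookup(lines, "pos1.db", "1521", "sakila", "postgres"))  # secret
```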

dSql Start Parameters

dSql can be started with the following parameters:

  • -cron to start in cron mode, details in cronjob chapter.
  • A script file name or path to a script file. dSql will execute the given script.