DataQL

A powerful CLI tool for querying and transforming data across multiple formats

DataQL is a CLI tool written in Go that lets you query and manipulate data files using SQL. It loads data into an SQLite database (in-memory or file-based), enabling powerful SQL operations on your data.

Features

Current:

  • Multi-format support: CSV, JSON, JSONL/NDJSON with auto-detection
  • Execute SQL queries using SQLite syntax
  • Export results to .csv or .jsonl formats
  • Interactive REPL mode with command history
  • Progress bar for large file operations
  • Parallel file processing for multiple inputs
  • Automatic flattening of nested JSON objects

Roadmap:

  • Additional formats: XML, Parquet, Excel
  • Database connectors: PostgreSQL, MySQL, MongoDB
  • Cloud storage: S3, GCS, Azure Blob
  • HTTP/HTTPS URLs as data sources
  • REPL with autocomplete and syntax highlighting
  • Interactive datasource configuration

Installation

Linux / macOS:

curl -fsSL https://raw.githubusercontent.com/adrianolaselva/dataql/main/scripts/install.sh | bash

Windows (PowerShell):

irm https://raw.githubusercontent.com/adrianolaselva/dataql/main/scripts/install.ps1 | iex
Install Options

Specific version:

# Linux/macOS
curl -fsSL https://raw.githubusercontent.com/adrianolaselva/dataql/main/scripts/install.sh | bash -s -- --version v1.0.0

# Windows
$env:DATAQL_VERSION="v1.0.0"; irm https://raw.githubusercontent.com/adrianolaselva/dataql/main/scripts/install.ps1 | iex

User installation (no sudo/admin required):

# Linux/macOS
curl -fsSL https://raw.githubusercontent.com/adrianolaselva/dataql/main/scripts/install.sh | bash -s -- --local

# Windows
$env:DATAQL_USER_INSTALL="true"; irm https://raw.githubusercontent.com/adrianolaselva/dataql/main/scripts/install.ps1 | iex
From Source
# Clone the repository
git clone https://github.com/adrianolaselva/dataql.git
cd dataql

# Build and install
make build
make install       # requires sudo
# or
make install-local # installs to ~/.local/bin
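
If you use make install-local and ~/.local/bin is not already on your PATH, add it to your shell profile (bash/zsh shown; adjust for your shell):

# Make user-local installs discoverable (e.g. in ~/.bashrc or ~/.zshrc)
export PATH="$HOME/.local/bin:$PATH"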
Verify Installation
dataql --version
Uninstall

Linux / macOS:

curl -fsSL https://raw.githubusercontent.com/adrianolaselva/dataql/main/scripts/uninstall.sh | bash

Windows (PowerShell):

irm https://raw.githubusercontent.com/adrianolaselva/dataql/main/scripts/uninstall.ps1 | iex

Usage

Basic Usage

Load a data file and start interactive mode (format is auto-detected):

# CSV file
dataql run -f data.csv -d ","

# JSON file (array or single object)
dataql run -f data.json

# JSONL/NDJSON file (one JSON per line)
dataql run -f data.jsonl
Supported Input Formats
Format    Extensions        Description
CSV       .csv              Comma-separated values with configurable delimiter
JSON      .json             JSON arrays or single objects
JSONL     .jsonl, .ndjson   Newline-delimited JSON (streaming)
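
For reference, minimal files in each format might look like the following (contents are purely illustrative):

# data.csv
id,name,amount
1,Widget,19.99
2,Gadget,5.00

# data.json
[{"id": 1, "name": "Widget", "amount": 19.99}]

# data.jsonl
{"id": 1, "name": "Widget", "amount": 19.99}
{"id": 2, "name": "Gadget", "amount": 5.00}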
Command Line Options
Flag          Short   Description                              Default
--file        -f      Input file(s) (CSV, JSON, JSONL)         Required
--delimiter   -d      CSV delimiter (only for CSV files)       ,
--query       -q      SQL query to execute                     -
--export      -e      Export path                              -
--type        -t      Export format (csv, jsonl)               -
--storage     -s      SQLite file path (for persistence)       In-memory
--lines       -l      Limit number of lines/records to read    All
--collection  -c      Custom table name                        Filename
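
Most of these flags appear in the examples below; --lines is the exception, so here is a hypothetical use of it for sampling a large file (file and column names are illustrative):

# Load only the first 1000 records of a large file for a quick look
dataql run -f big.jsonl -l 1000 -q "SELECT * FROM big LIMIT 5"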
Examples

Interactive Mode:

dataql run -f sales.csv -d ";"
dataql> SELECT product, SUM(amount) as total FROM sales GROUP BY product ORDER BY total DESC LIMIT 10;
product      total
Widget Pro   125430.50
Gadget Plus   98210.00
...

Execute Query and Display Results:

dataql run -f data.csv -d "," -q "SELECT * FROM data WHERE amount > 100 LIMIT 10"

Export to JSONL:

dataql run -f input.csv -d "," \
  -q "SELECT id, name, value FROM input WHERE status = 'active'" \
  -e output.jsonl -t jsonl

Export to CSV:

dataql run -f input.csv -d "," \
  -q "SELECT * FROM input" \
  -e output.csv -t csv

Multiple Input Files:

dataql run -f file1.csv -f file2.csv -d "," \
  -q "SELECT a.*, b.extra FROM file1 a JOIN file2 b ON a.id = b.id"

Query JSON Files:

# JSON array
dataql run -f users.json -q "SELECT name, email FROM users WHERE status = 'active'"

# JSON with nested objects (automatically flattened)
# {"user": {"name": "John", "address": {"city": "NYC"}}}
# becomes columns: user_name, user_address_city
dataql run -f data.json -q "SELECT user_name, user_address_city FROM data"

Query JSONL/NDJSON Files:

# JSONL is ideal for large datasets (streaming, low memory)
dataql run -f logs.jsonl -q "SELECT level, message, timestamp FROM logs WHERE level = 'ERROR'"

# Works with .ndjson extension too
dataql run -f events.ndjson -q "SELECT COUNT(*) as total FROM events"

Custom Table Name:

# Use --collection to specify a custom table name
dataql run -f data.json -c my_table -q "SELECT * FROM my_table"

Persist to SQLite File:

dataql run -f data.csv -d "," -s ./database.db
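
Because --storage writes an ordinary SQLite database file, the persisted data should also be readable with the standard sqlite3 client, if you have it installed; the table name follows the filename rule described under SQL Reference:

# Re-open the persisted database with the stock sqlite3 CLI
sqlite3 ./database.db "SELECT COUNT(*) FROM data;"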
Real-World Example
# Download sample data
wget https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2021-financial-year-provisional/Download-data/annual-enterprise-survey-2021-financial-year-provisional-csv.csv -O survey.csv

# Query and export
dataql run -f survey.csv -d "," \
  -q "SELECT Year, Industry_aggregation_NZSIOC as industry, Variable_name as metric, Value as amount FROM survey WHERE Value > 1000" \
  -e analysis.jsonl -t jsonl

SQL Reference

DataQL uses SQLite under the hood, supporting standard SQL syntax:

-- Basic SELECT
SELECT column1, column2 FROM tablename;

-- Filtering
SELECT * FROM data WHERE amount > 100 AND status = 'active';

-- Aggregation
SELECT category, COUNT(*), SUM(value) FROM data GROUP BY category;

-- Joins (multiple files)
SELECT a.*, b.extra FROM file1 a JOIN file2 b ON a.id = b.id;

-- Ordering and Limiting
SELECT * FROM data ORDER BY created_at DESC LIMIT 100;
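
Because queries run on SQLite, more advanced SQLite syntax such as CTEs and window functions should also work, provided the bundled SQLite version supports them (table and column names below are illustrative):

-- Top 3 rows per category using a CTE and a window function
WITH ranked AS (
  SELECT category, value,
         ROW_NUMBER() OVER (PARTITION BY category ORDER BY value DESC) AS rn
  FROM data
)
SELECT category, value FROM ranked WHERE rn <= 3;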

Note: Table names are derived from filenames (without extension). For sales.csv, sales.json, or sales.jsonl, use SELECT * FROM sales. Use the --collection flag to specify a custom table name.

Development

Prerequisites
  • Go 1.22 or higher
  • GCC (for SQLite compilation)
Building
make build
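
If you prefer to build with the Go toolchain directly rather than make, keep cgo enabled, since the GCC prerequisite implies a cgo-based SQLite driver (the main-package location below is an assumption):

# Build with cgo enabled; assumes the main package sits at the repository root
CGO_ENABLED=1 go build -o dataql .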
Testing
make test
Linting
make lint

Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.
