## Introduction

sanger-tol/blobtoolkit is a bioinformatics pipeline for identifying and analysing non-target DNA in eukaryotic genome assemblies. It takes a samplesheet of BAM/CRAM/FASTQ/FASTA files as input, calculates genome statistics, coverage and completeness information, combines these by window size into a TSV file, and uses the result to build a BlobDir dataset and static plots.

1. Calculate genome statistics in windows (`fastawindows`)
2. Calculate coverage (`blobtk/depth`)
3. Determine the appropriate BUSCO lineages from the taxonomy
4. Run BUSCO (`busco`)
5. Extract BUSCO genes (`blobtoolkit/extractbuscos`)
6. Run Diamond BLASTp against extracted BUSCO genes (`diamond/blastp`)
7. Run BLASTx against sequences with no hit (`diamond/blastx`)
8. Run BLASTn against sequences still with no hit (`blast/blastn`)
9. Count BUSCO genes (`blobtoolkit/countbuscos`)
10. Generate combined sequence stats across various window sizes (`blobtoolkit/windowstats`)
11. Import analysis results into a BlobDir dataset (`blobtoolkit/blobdir`)
12. Create static plot images (`blobtk/images`)

## Usage

> [!NOTE]
> If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with `-profile test` before running the workflow on actual data.
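For example, a quick test run might look like this (a sketch assuming Docker is available; any other container profile works the same way):

```bash
# Run the built-in test profile to verify that Nextflow, the containers and the pipeline all work
nextflow run sanger-tol/blobtoolkit -profile test,docker --outdir test-results
```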

First, prepare a samplesheet with your input data that looks as follows:

`samplesheet.csv`:

```csv
sample,datatype,datafile,library_layout
mMelMel3_hic,hic,GCA_922984935.2.hic.mMelMel3.cram,PAIRED
mMelMel1,illumina,GCA_922984935.2.illumina.mMelMel1.cram,PAIRED
mMelMel3_ont,ont,GCA_922984935.2.ont.mMelMel3.cram,SINGLE
```

Each row represents a read set (aligned or not). The first column (`sample`) must be unique: if you have multiple read sets from the same physical sample, edit the sample names to make them unique. The `datatype` column refers to the sequencing technology used to generate the underlying raw data and follows a controlled vocabulary (`ont`, `hic`, `pacbio`, `pacbio_clr`, `illumina`). The `library_layout` column indicates whether the reads are `PAIRED` or `SINGLE`. Aligned read files can be generated with the sanger-tol/readmapping pipeline.

Now, you can run the pipeline using:

```bash
nextflow run sanger-tol/blobtoolkit \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR> \
   --fasta genome.fasta \
   --accession GCA_XXXXXXXXX.X \
   --taxon XXXX \
   --taxdump /path/to/taxdump/database \
   --blastp /path/to/diamond/database \
   --blastn /path/to/blastn/nt.nal \
   --blastx /path/to/blastx/database
```

> [!WARNING]
> Please provide pipeline parameters via the CLI or the Nextflow `-params-file` option. Custom config files, including those provided by the `-c` Nextflow option, can be used to provide any configuration except for parameters; see docs.

For more details, please refer to the usage documentation and the parameter documentation.
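The `--taxdump` parameter should point at a local copy of the NCBI taxonomy dump. As a minimal sketch (assuming the standard NCBI FTP layout and that the `new_taxdump` flavour is the one required; see the usage documentation for the exact expectations), it could be fetched with:

```bash
# Download and unpack the NCBI taxonomy dump into a local directory (illustrative path)
mkdir -p /path/to/taxdump
curl -L https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/new_taxdump/new_taxdump.tar.gz \
  | tar -xzf - -C /path/to/taxdump
```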

## BLAST Database Configuration

### BLASTn Database Requirements

The `--blastn` parameter requires a direct path to a BLAST database file: either a `.nal` (alias) file or a `.nin` (index) file. The pipeline validates that all required companion files are present.

**Supported File Types:**

1. `.nal` file (preferred) - BLAST alias file:

   ```
   --blastn /path/to/databases/nt.nal
   ```

2. `.nin` file (fallback) - BLAST index file, used when no `.nal` is available:

   ```
   --blastn /path/to/databases/nt.nin
   ```

3. Compressed archive (for CI/testing):

   ```
   --blastn https://example.com/path/to/nt_database.tar.gz
   ```

**Database Structure Requirements:**

When using a `.nal` file, the directory must contain all companion files with the same prefix:

- `db_name.nal` (alias file - the file you point to)
- `db_name.nin` or `db_name.##.nin` (index file(s))
- `db_name.nhr` or `db_name.##.nhr` (header file(s))
- `db_name.nsq` or `db_name.##.nsq` (sequence file(s))

When using a `.nin` file, the directory must contain companion files with the same prefix:

- `db_name.nin` or `db_name.##.nin` (index file - the file you point to)
- `db_name.nhr` or `db_name.##.nhr` (header file(s))
- `db_name.nsq` or `db_name.##.nsq` (sequence file(s))

**Note:** `##` represents numbers like `00`, `01`, `02`, etc. for large databases split into multiple files.
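A quick way to confirm that the companion files are in place (a sketch, assuming the database prefix is `nt` under `/data/blast/nt` as in the examples below) is to list everything sharing the prefix and count the volumes for each required extension:

```bash
# List every file sharing the database prefix; expect .nal and/or .nin, plus .nhr and .nsq entries
ls -l /data/blast/nt/nt.*

# Count the volumes for each required extension (matches both nt.nin and nt.00.nin naming)
for ext in nin nhr nsq; do
  printf '%s volumes: ' "$ext"
  ls /data/blast/nt/nt.*"$ext" 2>/dev/null | wc -l
done
```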

**Example Directory Structures:**

Single File Pattern:

```
/data/blast/nt/
├── nt.nal                   # Point --blastn here
├── nt.nin                   # Required companion files
├── nt.nhr
├── nt.nsq
├── taxdb.btd                # Optional taxonomy files
└── taxonomy4blast.sqlite3
```

Numbered File Pattern (Large Databases):

```
/data/blast/nt/
├── nt.nal                   # Point --blastn here
├── nt.00.nin                # Required numbered companion files
├── nt.00.nhr
├── nt.00.nsq
├── nt.01.nin
├── nt.01.nhr
├── nt.01.nsq
├── nt.02.nin
├── nt.02.nhr
├── nt.02.nsq
├── taxdb.btd
├── taxdb.bti
└── taxonomy4blast.sqlite3
```

Using a `.nin` file (when `.nal` is not available):

```
/data/blast/nt/
├── nt.nin                   # Point --blastn here (no .nal file)
├── nt.nhr                   # Required companion files
├── nt.nsq
├── taxdb.btd
├── taxdb.bti
└── taxonomy4blast.sqlite3
```
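Optionally, the BLAST+ `blastdbcmd` utility can confirm that the database is readable before the pipeline is launched. Note that `blastdbcmd` expects the database prefix rather than the `.nal` or `.nin` file itself (this assumes BLAST+ is installed locally):

```bash
# Print the database title, sequence count and build date for the nt database
blastdbcmd -db /data/blast/nt/nt -info
```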

**Troubleshooting:**

- Error: "Invalid BLAST database path" - ensure you are pointing to either a `.nal` or `.nin` file, not a directory
- Error: "Missing required files" - verify that all companion files (`.nin`, `.nhr`, `.nsq`) exist with the same prefix
- Error: "BLAST database appears incomplete" - check that all required BLAST database components are present
- Error: "File not found" - verify that the file path is correct and the file exists
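If a database turns out to be incomplete, one way to obtain a full copy of the NCBI `nt` database is the `update_blastdb.pl` helper distributed with BLAST+ (a sketch only; the complete download is very large):

```bash
# Download all nt volumes into the target directory and decompress them
cd /data/blast/nt
update_blastdb.pl --decompress nt
```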

**Migration from Previous Versions:**

If you were previously using `--blastn /path/to/taxonomy4blast.sqlite3`, you now need to do one of the following (see the example after this list):

1. Use `--blastn /path/to/nt.nal` (if available), or
2. Use `--blastn /path/to/nt.nin` (if `.nal` is not available)
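For example, reusing the placeholders from the Usage command above, only the `--blastn` value changes:

```bash
# Same command as before, but --blastn now points at the .nal alias file
# instead of taxonomy4blast.sqlite3
nextflow run sanger-tol/blobtoolkit \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR> \
   --fasta genome.fasta \
   --accession GCA_XXXXXXXXX.X \
   --taxon XXXX \
   --taxdump /path/to/taxdump/database \
   --blastp /path/to/diamond/database \
   --blastn /path/to/nt.nal \
   --blastx /path/to/blastx/database
```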

## Pipeline output

For more details about the output files and reports, please refer to the output documentation.

## Credits

sanger-tol/blobtoolkit was written in Nextflow by:

The original design and coding of the BlobToolKit software and Snakemake pipeline were done by Richard Challis and Sujai Kumar.

We thank the following people for their extensive assistance in the development of this pipeline:

## Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

## Citations

If you use sanger-tol/blobtoolkit for your analysis, please cite it using the following DOI: 10.5281/zenodo.7949058

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

This pipeline uses code and infrastructure developed and maintained by the nf-core community, reused here under the MIT license.

> **The nf-core framework for community-curated bioinformatics pipelines.**
>
> Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
>
> _Nat Biotechnol._ 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.
