Starting out with your raw reads

You have your data – now what?

Your data will usually come to you from the sequencing facility as masses of short reads in zipped files. The files will be labelled something like this:

AP0_CGATGT_L001_R1_001.fastq.gz

AP0_CGATGT_L002_R1_001.fastq.gz

AP0_CGATGT_L001_R2_001.fastq.gz

AP0_CGATGT_L002_R2_001.fastq.gz

So for this sample, AP0 (plant labelled AP, sampled pre-inoculation, hence ‘0’), I have paired-end data. R1 is the forward read and R2 is the reverse read. The Illumina run used two lanes, so I have two R1 and two R2 datasets (L001 and L002).

These zipped files are .fastq files. Fasta files are the basic text files that gene sequences are formatted in, with headers beginning with a ‘>’ followed by the sequence on the next line. If they are not too massive they can be opened in a text editor such as Notepad++. Fastq files carry additional information about the quality of each read, which cannot be easily understood by humans but allows software to sort through and discard poor reads. More information here:

https://en.wikipedia.org/wiki/FASTQ_format.
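To make the format concrete, each FASTQ record spans exactly four lines: the read ID (starting with ‘@’), the sequence, a ‘+’ separator, and the per-base quality string. The sketch below writes one illustrative record to a demo file and peeks at it (the read ID and quality characters are made up for illustration; for a real gzipped file you would use something like ‘zcat file.fastq.gz | head -n 4’):

```shell
# Write one example FASTQ record (four lines per read) to a demo file
cat > demo.fastq <<'EOF'
@SEQ_ID_1 1:N:0:CGATGT
GATTTGGGGTTCAAAGCAGTATCGATC
+
!''*((((***+))%%%++)(%%%%).
EOF

# Line 1: read ID ('@'), line 2: bases, line 3: '+' separator, line 4: quality scores
head -n 4 demo.fastq

# Each record is exactly 4 lines, so read count = line count / 4
echo $(( $(wc -l < demo.fastq) / 4 ))   # -> 1
```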

You want all the sample AP0 R1 reads from the two sequencing lanes to be in one file. The same goes for the R2 reads. So you will need to unzip the files and then combine them, as explained below (in Back to the zipped raw data).

Unix and high performance computing

Most of the work involved in RNAseq analysis requires a Unix environment (command line) rather than Windows or a graphical user interface. To run assembly and alignment software you also really need high performance computing (hpc) access. There are a number of ways to get access, such as:

http://nci.org.au/access/getting-access-to-the-national-facility/allocation-schemes/

https://aws.amazon.com/hpc/resources/

https://nectar.org.au/

but as I have access to the university cluster I will just discuss the methods I have used for Artemis, the University of Sydney hpc.

http://sydney.edu.au/research_support/hpc/access/

On my local Windows operating system I have installed PuTTY software (http://www.putty.org/) which allows me to log in to the hpc remotely using a VPN and operate the software with my data in a project directory allocated to me. I use Filezilla (https://filezilla-project.org/) for transferring much of my data from my local data storage location to the hpc and back.

PBS scripts

When using hpc clusters all jobs need to be scheduled and queued so that resources are allocated appropriately. To run software on your data you submit a PBS (Portable Batch System) script which specifies the software you need to use, the resources you need and the time expected to run the job (walltime). It is often a bit of guesswork to know how long a job will take, but the manuals for the software, always available online, will give some guidance. Here is an example of the information required in a PBS script for Artemis:

# Some useful brief comment on what this script does
#PBS -P RDS-FAC-PROJECT-RW – make sure to replace FAC and PROJECT to suit you
#PBS -N Job_101 – Job name; this is what you will see when you run commands to check the status of your job
#PBS -l nodes=1:ppn=2 – This requests 1 node and 2 processors per node. Only request more if your job multi-threads. 1 node has a max of 24 processors, so if you wanted 40 cores you would specify ‘nodes=2:ppn=20’
#PBS -l walltime=02:00:00 – Maximum run time of your job, in hours:minutes:seconds. Example requests 2 hours. Job is killed when it reaches wall time so make sure you request enough.
#PBS -l pmem=4gb – RAM per processor
#PBS -j oe – Send the ‘standard error’ and ‘standard output’ log files to one file (delete this line to keep them separate)
#PBS -M your.email@sydney.edu.au – email job reports to this email address
#PBS -m abe – along with the above directive, this asks for an email when the job aborts (a), begins (b), and ends (e). ‘e’ is the most important as it will give you a report of resources used which can help you decide how much resources to request for the next run.

# Load modules
module load trinity – job will use the software trinity
module load trimmomatic – job will use the software trimmomatic

# Working directory – if your input and output files are all in one directory, it is useful to include a ‘cd’ command to that directory here, so you can run the script from anywhere without specifying full pathnames for the input and output files.

# Run trinity: – insert commands to run the job…
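Putting those directives together, a complete script might look like the sketch below. The project code, job name, email address and directory path are placeholders to replace with your own, and the resource requests are only examples:

```shell
#!/bin/bash
# Sketch of an Artemis PBS script: load software, move to the data, run the job
#PBS -P RDS-FAC-PROJECT-RW
#PBS -N Trinity_AP0
#PBS -l nodes=1:ppn=2
#PBS -l walltime=02:00:00
#PBS -l pmem=4gb
#PBS -j oe
#PBS -M your.email@sydney.edu.au
#PBS -m abe

# Load modules
module load trimmomatic
module load trinity

# Working directory (placeholder path)
cd /project/RDS-FAC-PROJECT-RW/my_data

# ... commands to run the job go here ...
```

You would save this as, say, myjob.pbs and submit it to the queue with ‘qsub myjob.pbs’.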

I will provide some of the actual PBS scripts I have used as I go.

Back to the zipped raw data

Once you have your data on the hpc you can unzip it using the unix command ‘gunzip’, as these are gzipped (.gz) files:

gunzip ‘filename’

e.g. gunzip AP0_CGATGT_L001_R1_001.fastq.gz

Your original .gz file will be gone from the directory and replaced with the unzipped version.
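If you want to keep the compressed original as a backup, recent versions of gunzip accept a -k (‘keep’) flag; alternatively, ‘gunzip -c’ (or zcat) writes the decompressed data to standard output without touching the .gz file. A small self-contained demonstration (the demo file name is made up):

```shell
# Make a small gzipped demo file
echo "some reads" > demo.fastq
gzip demo.fastq                  # produces demo.fastq.gz and removes demo.fastq

# -k keeps the .gz file alongside the unzipped copy
gunzip -k demo.fastq.gz

ls demo.fastq demo.fastq.gz      # both files now exist
```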

With your files unzipped you can now join them with the unix command cat and, in my example, rename the new combined file AP0_R1.fastq:

cat AP0_CGATGT_L001_R1_001.fastq AP0_CGATGT_L002_R1_001.fastq > AP0_R1.fastq
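If you have many samples, a small loop saves typing. Here is a sketch assuming your files follow the naming pattern above; it first creates two tiny stand-in lane files so the example is self-contained, and the sample list and barcode are illustrative:

```shell
# Create two tiny demo lane files for one sample (stand-ins for real reads)
printf '@read1\nAAAA\n+\nIIII\n' > AP0_CGATGT_L001_R1_001.fastq
printf '@read2\nCCCC\n+\nIIII\n' > AP0_CGATGT_L002_R1_001.fastq

# Combine both lanes for each read direction of each sample
for sample in AP0; do
  for r in R1; do
    # Globs sort alphabetically, so the L001 file comes before L002
    cat ${sample}_*_L00?_${r}_001.fastq > ${sample}_${r}.fastq
  done
done

wc -l AP0_R1.fastq   # 8 lines = two 4-line records
```

In a real run you would list all your samples (e.g. ‘for sample in AP0 AP1 AP2’) and both directions (‘for r in R1 R2’).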

When you have all your data combined like this you can zip them again and begin processing them. Much of the software for assembly can process zipped files. To zip them:

gzip ‘filename’
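As an aside, the gzip format permits concatenated compressed streams, so you can skip the unzip and re-zip steps entirely: catting two .gz files produces a valid .gz file whose decompressed content is the two inputs joined in order. A sketch with small made-up demo files:

```shell
# Build two small gzipped demo files
printf '@read1\nAAAA\n+\nIIII\n' | gzip > lane1.fastq.gz
printf '@read2\nCCCC\n+\nIIII\n' | gzip > lane2.fastq.gz

# Concatenated gzip streams are themselves a valid gzip file
cat lane1.fastq.gz lane2.fastq.gz > combined.fastq.gz

# Decompresses to both records, in order
zcat combined.fastq.gz
```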

Next post – trimming the reads
