Last week's data wrangling produced eight FASTQ files containing the sequencing reads for my genome (8 = 4 × 2, four lanes' worth of paired-end reads). The next step in making sense of these 1.3 billion reads is to map their positions of origin in the human reference genome assembly. This post will continue somewhat down in the weeds technically, but we'll end up in position to look at some interesting genetics next time.
If you're not interested in the technical minutiae - and that would be fair enough - you could skip to the last section.
Read quality control with FastQC
Trust, but verify: since my data came from Illumina's CLIA lab, I wasn't terribly worried about gross quality problems - but some QC is always prudent before proceeding further. We have an app on DNAnexus to run reads through FastQC, which performs some essential sanity checks. I ran it on each of my four pairs of FASTQ files to get four FastQC reports. (It's worth examining the four lanes separately, because they could conceivably have independent quality issues.) Here's one representative plot:
This shows that, across all the 100bp reads in one pair of my FASTQ files, the Phred quality scores assigned by the sequencer tend to be high across most of the read, but degrade toward either end. That's typical with current technology. My other FastQC results also looked as expected.
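If you'd like to reproduce this kind of report at the command line rather than on the platform, here's a minimal sketch - the filenames are hypothetical stand-ins for the four lanes' read pairs:

```bash
# Run FastQC on each lane's read pair; each invocation writes an HTML
# report plus a zip archive of the underlying metrics.
# Filenames below are illustrative, not the actual ones.
mkdir -p fastqc_reports
for lane in L001 L002 L003 L004; do
  fastqc --outdir fastqc_reports \
    reads_${lane}_R1.fastq.gz reads_${lane}_R2.fastq.gz
done
```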
Mapping with BWA-MEM
Mapping is the process of taking each 100bp sequencing read and determining the most likely corresponding position in the human genome reference assembly, enabling us to locate variants throughout my genome. Mapping is challenging for many reasons, such as: (1) the reads contain outright sequencing errors, in addition to bona fide differences between my genome and the reference; (2) the human genome contains extensive repetition, making 100bp reads sometimes difficult to uniquely place; (3) the paired reads convey implicit proximity information that can be tricky to use to maximum effect; (4) with 1.3 billion reads to map, it has to be done efficiently.
Befitting such an important and challenging algorithmic problem, there's a diversity of mapping software available for read data like mine, utilizing a variety of cool string processing algorithms including Bloom filters, suffix arrays, and the Burrows-Wheeler transform. I'm going to use BWA-MEM, a modern and popular mapper we already have available on DNAnexus, to map my reads onto hs37d5, the reference assembly used in the 1000 Genomes Project. (A colleague wrote a great explanation of the several different reference assemblies in common use today, although we expect a transition to GRCh38 over the next year.)
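As a rough sketch of what the mapping step looks like outside the platform - assuming bwa and samtools are installed, with illustrative filenames and thread counts - mapping one lane's reads might go like this:

```bash
# Build the BWA index for the reference (one-time; slow for a whole genome).
bwa index hs37d5.fa

# Map one lane's paired-end reads, tagging them with a read group (-R),
# and pipe the alignments into a coordinate-sorted BAM file.
bwa mem -t 8 -R '@RG\tID:lane1\tSM:sample\tPL:ILLUMINA' hs37d5.fa \
  reads_L001_R1.fastq.gz reads_L001_R2.fastq.gz \
  | samtools sort -@ 4 -o lane1.bam -
```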
Aside: it would be great to produce a de novo assembly of my genome from the reads, rather than just mapping them onto a generic reference assembly. One day this will be standard, and I'll explore it a little bit in a future post - but with current technology, it's just far more difficult than mapping.
BAM finishing
BWA-MEM will take my FASTQ reads and produce a BAM file encoding the mappings, which we'll eventually feed to a variant calling algorithm. Best practices call for a few additional "finishing" steps on the mappings beforehand, however. One is deduplication, flagging or removing reads that are redundant in a certain sense - arising from PCR amplification prior to sequencing - which could otherwise bias downstream analyses. I'll perform deduplication using Picard MarkDuplicates, which is pretty standard.
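A minimal sketch of that step, assuming the classic Picard command-line syntax and carrying over the illustrative filenames:

```bash
# Mark (rather than remove) PCR duplicates; downstream tools respect the
# duplicate flag. M= names the required duplication-metrics report.
java -jar picard.jar MarkDuplicates \
  I=merged.bam \
  O=merged.dedup.bam \
  M=dedup_metrics.txt
```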
There are additional finishing steps in common use today, including disposal of low-quality reads or mappings, recalibration of base quality scores, and local realignment around indels. I'll save myself the trouble because it seems the very latest generation of variant callers don't benefit much from these steps.
The mapping workflow
Here's a screenshot of the DNAnexus workflow I created for my mapping and BAM finishing:
Bear with me - there are a few additional things going on here:
- I'm running BWA-MEM separately on each of my four pairs of FASTQ files. It's possible to run it on all of the FASTQ files at once, but it'll be useful later on to preserve the information of which lane each read came from, and mapping the four "read groups" separately is the easiest way to do so (see the sketch after this list).
- Then I'm using bamtools to merge the four intermediate BAM files into one, preserving the read group information, prior to deduplication.
- I'm creating a BAM index, which is useful for various downstream programs.
- I wrote an applet to report various statistics about the BAM files, useful for sanity checking the results.
- Lastly, I'm importing the mappings into a format suitable for our genome browser.
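For the merge and indexing steps, a rough command-line equivalent - assuming each per-lane BAM was tagged with its read group at mapping time, as in the earlier sketch:

```bash
# Merge the four per-lane BAMs; the @RG header lines set during mapping
# are preserved in the merged header, so each read keeps its lane of origin.
bamtools merge -in lane1.bam -in lane2.bam -in lane3.bam -in lane4.bam \
  -out merged.bam

# Build a BAM index to allow fast random access by genomic position.
samtools index merged.bam
```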
This workflow, as well as those from previous and future episodes, is available in this DNAnexus public project (free account required).
Did it work?
My workflow produced a final, deduplicated BAM file about 63 GiB in size. Here are a few of the mappings, displayed using samtools view, which converts the BAM data into an equivalent text format:
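A command along these lines prints a few such records, using the illustrative filename from the sketches above:

```bash
# Decode the binary BAM into SAM text and show the first few alignment
# records (the header is omitted unless you pass -h).
samtools view merged.dedup.bam | head -n 5
```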
You can find the definition of each column in the SAM format specification. Briefly, each row includes a read identifier, its sequence and quality scores, a position in the reference genome assembly, and a "CIGAR string", a compact encoding of how the read differs from the reference at the specified position.
Looks good so far, but as with the original FASTQ reads, it's important to do some overall sanity checks before proceeding further. The workflow logs and reports show that deduplication removed less than 2% of my 1.3 billion reads; and of my 1,286,633,504 remaining reads, 98% mapped onto the hs37d5 reference, 94% with a positive mapping quality score. This is great, and I'll take a look at the unmapped leftovers in a future post. We can also look at how the mappings cover the reference genome; I collected some statistics on this using bedtools (code):
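The underlying computation is essentially a one-liner - bedtools genomecov emits a histogram of how many reference bases are covered at each depth (illustrative filenames again):

```bash
# For each depth d, count the reference bases covered by exactly d reads.
# Output rows are: chromosome (or "genome"), depth, bases at that depth,
# total bases, and the fraction at that depth.
bedtools genomecov -ibam merged.dedup.bam > coverage_hist.txt
```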
This plot shows a (complementary) cumulative distribution function for the basewise coverage of the mappings, or how many mapped reads cover each nucleotide position in the reference assembly. The black curve represents the complete hs37d5 reference assembly, which is about 3.2 Gbp, while the blue curve represents only positions within Consensus CDS (CCDS), a conservative annotation of protein-coding exons, totaling about 32 Mbp.
According to this plot, the typical (median) position is covered by about 40 mapped reads, a depth sufficient to enable confident variant detection despite occasional errors and noise in the sequencing and mapping. About 7% of the hs37d5 reference has no coverage at all - largely accounted for by centromeres and telomeres, but including other difficult regions as well - while almost all CCDS exon positions are covered by at least several reads. At the other extreme, very few positions have more than 70-fold coverage.
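If you want to derive the genome-wide curve yourself, here's a small awk sketch over the histogram from the previous step - in the "genome" rows, column 5 is the fraction of bases at each exact depth, so the complementary CDF falls out by running subtraction:

```bash
# Print, for each depth d, the fraction of reference positions with
# coverage >= d (i.e., the complementary cumulative distribution).
awk '$1 == "genome" { frac[$2] = $5; if ($2 > max) max = $2 }
     END { cum = 1.0
           for (d = 0; d <= max; d++) { print d, cum; cum -= frac[d] } }' \
  coverage_hist.txt
```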
It's interesting that the curves cross, and the typical CCDS position has slightly lower coverage than the genome-wide median. This could have to do with the relative GC-richness of protein-coding regions, which might make sequencing or mapping to them a little less efficient. In any case, the difference is slight and not too concerning.
So, after a long slog through some technical minutiae, where are we? Here's a visualization of some of my mappings in our genome browser:
We're looking at a 150bp stretch of the reference chromosome 12. Each of the light blue and green bars represents the mapping of one of my 100bp reads. (The color reflects the strand orientation of the read with respect to the reference, which is basically random and not very interesting in WGS, except to account for the lower reliability of the 3' ends of the reads.) We can see that my reads cover the genome many times over, which is typical, as shown previously. The reference DNA sequence is displayed just above the mappings.
Each black box indicates a mismatch between the read from my genome and the corresponding position in the reference. For the most part, these mismatches are infrequent and isolated to individual reads - consistent with errors and noise. That shows my genome has largely the same sequence as the reference, as expected. But we can see one position where all of my reads contain an A instead of the reference assembly's G. Based on this unanimous evidence, we can conclude that I'm homozygous for the variant A allele at this position. (If I were heterozygous, we would instead see some mixture of reads with A and G at that position.)
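For a terminal-based look at the pileup over any locus, samtools tview shows roughly what the browser screenshot does - the position below is a placeholder, not the actual variant site, which I'm saving for next time:

```bash
# Interactive text-mode pileup viewer; the coordinate is a stand-in.
samtools tview -p 12:1000000 merged.dedup.bam hs37d5.fa
```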
The next algorithmic analysis step will use the mappings to detect variants across my whole genome - similar, to a first approximation, to the logic we just applied to this one locus. First, however, I'll have much more to say about this particular homozygous variant; it wasn't mentioned in Illumina's clinical interpretation of my genome, but it's actually had a significant effect on my adult life.