Increase Productivity & Reproducibility by Automating DNA Library Prep for Ion Torrent® Sequencing

Learn how Biomek automation of NEBNext DNA library preparation for Ion Torrent enables high-throughput, reproducible production of libraries with high yield and complexity. In this webinar, Joanna Hamilton from the Dartmouth Genomics & Molecular Biology Shared Resource, Lynne Apone from New England Biolabs, and Zach Smith from Beckman Coulter describe the NEBNext Fast DNA Fragmentation & Library Prep Set for Ion Torrent, automation of this method, and its implementation in multiple workflows in a sequencing core lab.

Transcript

Zach Smith:
Hello. My name is Zach Smith. I'd like to welcome you to our online webinar about increasing productivity and reproducibility by automating DNA library prep for Ion Torrent sequencing. I'm joined today by Dr. Joanna Hamilton from Dartmouth and Dr. Lynne Apone from New England Biolabs, and we're very pleased to have them here. Thank you for joining us. With that, I would like to begin this talk by passing the presentation off to Dr. Apone.

Lynne Apone, PhD:
Thank you very much, Zach. I want to thank everyone for attending the seminar. I'm going to begin by ... I'm having a little problem moving the slides. There we go; I apologize. I'm going to talk a little bit about one of the kits we developed at NEB for Ion Torrent library construction, but I'd like to begin with an overview of the workflow and describe the steps involved in generating a library for Ion Torrent sequencing.

The first step involves fragmentation of the input DNA. Those fragments are then end repaired to generate 5′-phosphorylated, blunt-ended molecules, and onto those molecules the P1 and A adaptors are ligated. In this system, the adaptors are not phosphorylated, which helps reduce the amount of adaptor dimer produced, but it results in an incomplete ligation: nicks remain in the DNA after the ligation that have to be repaired, and those are repaired by a nick translation reaction.

Following the nick translation, the library is enriched by PCR amplification. There are a number of steps that occur during library construction, and after each of these steps there is typically a clean-up; after the ligation and nick repair, there is also a size selection in addition to the clean-up. When we were developing a kit at NEB, we set a number of goals for it. We wanted to develop a fast and simple protocol that would minimize hands-on time as well as the possibility of errors. We also wanted to eliminate the need to fragment DNA before going into library prep, so we wanted intact DNA as our input, and of course, we wanted to create a low-cost solution for our customers.

The kit we developed is called the NEBNext Fast DNA Fragmentation & Library Prep Set, and what we've done with this kit is combine the fragmentation and end repair into one step. This allows us to use intact DNA as input. The enzyme mix in this step can also be inactivated with heat, which removes the need to clean up the sample after this step. We've also combined the ligation and the nick translation steps, which allows for the construction of PCR-free libraries. After the ligation and nick translation step, the sample is cleaned up and size selected, and then it can either be sequenced directly or enriched by PCR amplification.

We feel we reached our goals with this kit. We have a simple protocol with a streamlined workflow. It's fast; it takes about two hours to make a library, and there's very little hands-on time involved. The kit is compatible with a broad range of inputs, from 10 nanograms to one microgram, and it's compatible with multiplexing and, as I said, PCR-free library construction. The kit also works across a broad range of sample types: I'm showing you libraries that were constructed from DNA isolated from several different organisms, whose GC content ranges from 38% to 65%. I'm also showing you that the kit can be used to construct libraries of different sizes. In this particular library construction, a single prep was made for each library, and then it was run on an E-Gel to size select: the 200 base pair library was eluted first, the gel was run a little longer, and then the 400 base pair library was eluted. There's versatility there.

In addition, the libraries that are generated with the kit produce very high quality sequencing data. I'm showing you a snapshot of one of our run reports for one of our libraries. This was a 200 base pair library that we size selected on an E-Gel, so you can see that the read length distribution is quite tight, and the data produced are very high quality, showing very high matching to the genome and good quality scores. In our kit, we also have a protocol for bead-based size selection of libraries, which gives a much broader size distribution. So we think we have a kit with a streamlined workflow; it's fast, it's easy, and it produces very high quality Ion Torrent libraries. Thank you. I am going to pass control back to Zach.

Zach Smith:
Thank you, Lynne. I am going to talk about some of the work that we've performed here at Beckman Coulter to automate this protocol. The protocol in question was automated on our Biomek 4000 liquid handler, which is the smallest of our Biomek platforms, largely by Sarah Simons, one of our very capable field application scientists. What I'm showing here is the deck layout that's required by the method. We have the standard Biomek 4000 deck setup, except that we've added an orbital shaker at this position, and we're using the gripper and a variety of tools.

The method itself has an HTML-driven user interface that's very straightforward to use. We have the ability to set the number of samples from one sample up to 48 samples, and any number in between; if you wanted to do 23 samples, you could do 23 samples. Then, what you'll do next is select which workflow you want to pursue: fragmentation and ligation, size selection, PCR setup, or post-PCR clean-up. Based on the information you feed the user interface ... there we go, there are our two arrows pointing out the two parts of the HTML user interface ... the method will then pop up an HTML-driven reagent calculator that shows you what to put on, for example, the 24-position tube rack, and how to make the things you're putting on the tube rack. These are complete master mix recipes that take into account the number of samples being processed and the required dead volumes for the pipetting and labware being used, as well as how much of our AMPure XP reagent, ethanol, and resuspension buffer to load in the reservoirs. It gives you a very detailed overview of the consumables that will be required on the deck.
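As a rough illustration of the arithmetic such a reagent calculator performs, here is a minimal Python sketch; the component names, per-sample volumes, dead volume, and overage factor are hypothetical placeholders, not the actual NEBNext or Biomek values.

DEAD_VOLUME_UL = 5.0   # assumed dead volume left in each master mix tube
OVERAGE = 1.10         # assumed 10% pipetting overage

# Hypothetical per-sample recipe (component -> uL per sample)
MIX_PER_SAMPLE_UL = {
    "reaction buffer": 2.0,
    "enzyme mix": 1.0,
    "nuclease-free water": 7.0,
}

def master_mix(n_samples):
    """Scale the per-sample recipe to n samples, adding overage and dead volume."""
    return {name: round(vol * n_samples * OVERAGE + DEAD_VOLUME_UL, 1)
            for name, vol in MIX_PER_SAMPLE_UL.items()}

for name, vol in master_mix(23).items():
    print(f"{name}: {vol} uL")  # e.g. reaction buffer: 55.6 uL for 23 samples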

Then also, depending on which workflow you've selected, the method will pop up the required deck layout for any of the individual workflows. This is the workflow for fragmentation and ligation; then we have, on the upper right, the one for size selection, followed by the bottom left, which is PCR setup, and then finally, the post-PCR clean-up. It doesn't require a whole lot of labware. The method has been written to use partial plate transfers, so if we're doing up to 48 samples, it will run, for example, one clean-up in columns one through six, then transfer those columns six positions over, and use columns seven through 12 of the same plate for the second clean-up, as sketched below. This allows the user to cut down on the amount of required labware and makes for some pretty slick code.
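Here is a minimal sketch of that column arithmetic, assuming each source column in columns 1 through 6 maps to the column six positions to its right; this is an illustration, not the actual Biomek method code.

COLUMN_OFFSET = 6  # columns 1-6 host the first clean-up, columns 7-12 the second

def second_cleanup_well(well):
    """Map a well in columns 1-6 (e.g. 'A3') to the well six columns over ('A9')."""
    row, col = well[0], int(well[1:])
    if not 1 <= col <= COLUMN_OFFSET:
        raise ValueError("first clean-up occupies columns 1-6 only")
    return f"{row}{col + COLUMN_OFFSET}"

# Plan the transfers for 48 samples filling columns 1-6 of a 96-well plate
transfers = [(f"{r}{c}", second_cleanup_well(f"{r}{c}"))
             for c in range(1, COLUMN_OFFSET + 1) for r in "ABCDEFGH"]
print(transfers[0], transfers[-1])  # ('A1', 'A7') ('H6', 'H12')

With that, I am going to pass the presentation over to Dr. Hamilton of Dartmouth.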

Joanna Hamilton, PhD:
Okay. Thank you, Zach and Lynne, for that introduction, and thank you, everybody, for listening to our presentation today. I'm going to talk to you about how we have used these methods in our core facility. We're located in Lebanon, New Hampshire. We have a fairly small genomics core facility, and we also have a sister facility that does Sanger sequencing, but within my core, we are doing the DNA sequencing and microarrays. There are really three of us working in that capacity. At Dartmouth, we recently purchased the Ion Torrent sequencing platform; we have both the PGM and the Proton Sequencer. The PGM is a lower-capacity sequencer for predominantly targeted sequencing applications, and the Proton is a high-capacity sequencer that we can use for whole exome sequencing and some RNA-seq. The data that I'm going to show you today are from the Proton Sequencer.

Here, I'm just showing the Ion Torrent workflow. We start off with DNA that we need to fragment and then make into a library. Once we have the library prepared, we template it onto ISP beads, which is part of the Ion Torrent platform; we do that on the OneTouch instrument. Once we have the DNA on the beads, and ideally we have one unique piece of DNA on each bead so that we are able to distinguish the sequences once they come off the sequencer, the beads are loaded into individual wells on the semiconductor chip. Then, we perform the sequencing, which takes anywhere from two to four hours depending on the read length, and we use the Ion Torrent browser and a number of downstream analysis tools to run the analysis.

The major bottleneck that we have at Dartmouth is really in the library preparation step. For each biological sample, a unique library needs to be prepared, and doing that by hand is time-consuming. We can only do perhaps eight or 10 library preps at a time manually, and we wanted to increase productivity at this step, so we went ahead and purchased the Biomek 4000. We were hopeful that this would help us with many of our protocols as well as the NGS sequencing. We worked with Sarah Simons very closely to write the methods for a number of NGS applications, and she came on site and visited with us and really helped to set up the methods, so it was very easy for us to be able to do this.

Sarah helped us water test all the methods before she left us with the instrument, so we were capable of running the instrument fairly quickly after the methods were installed. Another nice thing that Beckman did for us was to create a Word document, which provided detailed explanations of exactly how we were to run the protocol and what was needed at which steps, so that it was easy for us to use.

Within our core, we offer a number of different sequencing services. Before a project begins, we meet with the investigator and the Bioinformatics Core so that we can really get a good understanding of what the project is, and then, when we receive the samples, we do some quality control. We NanoDrop and Qubit everything that comes into the lab, and we also Fragment Analyze all the DNA so that we know we're starting with high quality samples. We have an internal billing system where customers can place their orders. We can also work with external customers who may want some of the services that we offer; we can set that system up for them as well.

Then, once we are satisfied that the sample is of high enough quality to go into the process, we'll go ahead and do the entire workflow and then provide the results to the investigator, often in association with the Bioinformatics Core here as well. Among the services that we offer is RNA transcription profiling: we still do many microarrays, but we're also doing a lot of RNA-seq as well as some microRNA work. Today, though, what I'm going to predominantly talk about is the DNA sequencing that we can perform. We can do whole genome sequencing for small species, and I'm going to talk a little bit about that. We can also do custom panels, which is another thing that we have optimized for our sequencing and also on the Biomek 4000.

First of all, I'm going to talk to you about the NEB workflow that we automated. Several months ago, an investigator approached us with what was, for us, a fairly large DNA sequencing project: 170 biological samples. These were bacterial samples, and they wanted to do whole genome sequencing. Because the genome of the bacterium of interest is fairly small, we were able to multiplex fairly highly, and we took advantage of the full range of 96 barcodes available from Ion Torrent.

We had used the NEBNext workflow in the core for projects before this, and we were happy with the workflow; we had generated good libraries, so we decided that this was a good opportunity to try to automate it. Of course, as with all investigators, they wanted the data as soon as possible, and with a grant deadline approaching, we actually ended up starting a lot of the libraries manually while Sarah was working on the methods. Because of this fortuitous timeframe, we have a very nice dataset with a large n that we are able to compare.

I'm going to go on and show you how we did this. Lynne already described the NEB library prep, so I'll just go into a little more detail on setting up the reaction. We do not have a Covaris instrument here, so we needed a prep where we could fragment the DNA as well, and we liked this kit because we were able to do the fragmentation reaction at the same time. We do the fragmentation and end repair step within the PCR instruments themselves, so there's not too much hands-on time for that, although the pipetting, if you have a lot of samples, does add to the time.

Then, the Ion Torrent adaptors are added, and in this instance, we're using the AMPure XP bead clean-ups and not the E-Gels. We're doing 200 base pair sequencing, since that's what's available to us on the Ion Proton Sequencer, and we did limited PCR cycling for our samples. The idea behind automating this was really to improve our turnaround time for library preparation, to be able to produce more libraries in one go rather than just the eight or 10 that we would prepare manually, and to try to avoid any human error being introduced. We were also hoping that the automation would increase the reproducibility of our preparations and result in fewer errors.

For the first batch of manual preparations, at most we prepared 10 libraries at a time, so here I have depicted the time course of how long it would take to prepare 80 libraries. If each of these bars is a day, it would take us eight or nine days, and of course, that's if we're working on only this project. Here's the automated timeline that we used for this project and how we optimized the protocol: the first time, we did just eight samples, and then we increased that to 24 preparations in one sitting. You can see we're already halving the time it takes to make the same number of libraries. I propose that you could do 48 at a time, and then you're going to really reduce the time it takes to make, for example, 96 libraries to sequence, as the sketch below illustrates.
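A back-of-the-envelope sketch of that throughput argument; the batch sizes come from the talk, but the one-batch-per-working-day rate is an assumption for illustration.

import math

def days_to_prep(total_libraries, batch_size, batches_per_day=1):
    """Working days needed, assuming batches_per_day batches can be run per day."""
    batches = math.ceil(total_libraries / batch_size)
    return math.ceil(batches / batches_per_day)

for label, batch in (("manual, 10 per batch", 10),
                     ("automated, 24 per run", 24),
                     ("automated, 48 per run", 48)):
    print(f"{label}: {days_to_prep(96, batch)} days for 96 libraries")
# manual: 10 days; 24 per run: 4 days; 48 per run: 2 days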

As Zach mentioned, the protocol was set up to be modular, so that we could run anywhere from one to 48 samples, and you could select each step depending on where you were in the protocol. We used all 96 of the Ion Xpress DNA barcodes from Life Technologies' Ion Torrent line, and on the left here, I'm just showing how complicated the protocol is if you were to try to write it yourself; luckily, we had Sarah helping us. One nice feature we were able to include: our instrument is set up in a location that's not right next to the lab, so we even incorporated some custom music at the end of the run so that we could hear that the run was complete and efficiently move on to the next step.

The first step of the protocol is basically setting up the fragmentation reaction, and so the DNA is set up here in a sample plate. This step is set up manually, with up to 48 samples in the plate, and then the fragmentation and ligation reagents are added from the reagent tubes, which are kept on chilled plates. Another nice feature is that when the machine has completed all of the pipetting steps, a message pops up that tells you exactly what to do next; you don't need to have the paper manual or protocol in front of you, and can rely on the software telling you what to do. Here, the next step was to put the plate in a thermocycler. It tells you the temperature settings you need, and it also tells you what to do once you put the plates back on the machine, including where to load them.

The second step is the adaptor ligation. You end up replacing the plate on the orbital shaker. All of these figures I've taken from a document that Sarah put together for us, which outlines the deck layout, so again, it's set up so that it's not very easy to make a mistake, which is great. Then, we have a plate in which the 96 adaptors, or 48 adaptors in this case, are uniquely loaded, and the reaction proceeds from there in that plate with the new adaptors. Again, at the end of that, we have a prompt telling us exactly what to do with the sample and which stage we're at in the protocol.

We use the XP beads for cleaning up the reaction, and we did this in a reservoir. There is a little bit of overage that you need to account for when you're using the reservoirs, but the reagents were reusable: we made sure that everything was sterile before we put it in, so that we could reuse any unused AMPure beads or ethanol in the reservoirs. Then, we also have the clean-up plate set up here, and we have the magnet, which we set up right on the deck so that the gripper is able to take the plate from the orbital shaker and put it directly on the magnet; we can do all of that on the deck.

After the clean-up step, we go on to set up the PCR, and the master mix is added to a chilled plate; again, this is all set up within 96-well plates on the instrument, and it tells us exactly what we need to do to put it in the PCR machine afterwards. There is a second clean-up step after the PCR, using the same setup as before with the AMPure XP beads and the magnet, and once the protocol is complete, there's the music, of course, and then a note telling us we've completed the protocol. It also tells us that there are dirty tips and to make sure we discard them. Then, these libraries are ready for QC and sequencing. The QC that we do is usually Qubit and Fragment Analyzer, to determine how we're going to sequence the libraries.

I'll go on and show you a little bit about one of the projects that we've done. Again, it was a large project with about 170 samples of patient-derived bacterial DNA. We started the preps with 100 nanograms, but we have gone as low as 10 nanograms and seen very good results as well. As I mentioned, we incorporated the Ion Xpress DNA barcodes. We are using the Ion Proton to do the sequencing here, with the version 3 chemistry, the most recent, for 200 base pair sequencing, so we really wanted a fairly tight library distribution right around 200 base pairs. We amplified with eight cycles during the preparation.

Here, I'm showing a comparison of two libraries generated with the NEBNext protocol: the top panel is a manually prepared library, and the bottom panel is a library prepared on the Biomek 4000. What you can see is that the Fragment Analyzer software is actually different in these two cases, which is why they look slightly different, but I think you can see that both preparations have a very nice distribution between about 100 and 300 base pairs. The distribution is a little tighter for the automated libraries, which is great, since the distribution of the library is important for the emulsion PCR amplification on the beads for Ion Torrent sequencing.

Here, I'm just showing the results of some of our library preps. We prepared 83 libraries manually and 84 libraries on the Biomek, and these are the results. The manual preps typically give a higher library yield, but it is more variable; you can see that the error bars here are a little broader than those for the red, Biomek-prepared libraries. Even with the yields being a little lower for the Biomek libraries, there's plenty of library to sequence, so I don't really think that's too much of an issue.

The size distribution of the Biomek libraries was much tighter, so we really didn't see the variation with the automated library preps, and the size of the libraries themselves was a little larger than the manually prepared libraries. I'm guessing this happens during the XP clean-up step, maybe because the ethanol sits out in a reservoir; perhaps that affects some of the size selection. But we're still within a good range of library size for Ion Torrent sequencing.

We went on and prepared multiplexed pools of anywhere from 83 to 96 libraries for sequencing on the Ion Proton, and here, I'm just showing the results of that. The previous data that Lynne showed were for sequencing on the Ion PGM, so this kit is compatible with both PGM and Proton sequencing. On the left is the bead loading, and on the right is the read length that we're generating: with the libraries that we've prepared, we're getting about 145 base pairs of sequencing on average, though some reads extend out past 250.

I also wanted to look at exactly what the sequencing readout was, so what I put together is how variable the read counts were for the manually prepared and the Biomek-prepared libraries. Independent of how we prepared the libraries, the same technicians set up the multiplexing dilutions for the sequencing reaction. What this figure shows is that the number of reads generated for any given sample is more broadly distributed in the manual preps than in the Biomek preps. I think this speaks to the fact that if you have a tighter library prep and more consistent parameters for each of your libraries, the sequencing itself benefits from that as well.

In the manual preps, we did have several samples with very few reads and some with a large number of reads, based on the way we multiplexed these together. I think this also increases our confidence that the Biomek libraries really are helping us in this regard, and we don't need to repeat any of the sequencing because of the kinds of QC problems we saw with some of the manual preps.

Just to summarize the NEB part of my talk, here's a summary of how many biological samples we prepared for this project. For the manual preps, we had 83 biological samples; however, we needed to repeat 14 samples. We had 14 failed libraries when we made these by hand, mostly due to human error at some step: either an entire batch failed, or one or two libraries within a manual batch failed. We did, in fact, need to make 97 libraries to get the 83 libraries to sequence. The cost of reagents for the repeats was around $2,500, so we automatically had an extra cost just from having to repeat libraries. From start to finish, bearing in mind that we were of course doing other projects at the same time, it took us 28 days to make the 83, or really 97, libraries manually, with two technicians working on it together.

In comparison, for the automated libraries, we had zero failures, which was great. Every single library that we made on the Biomek worked the first time, so the cost of reagents for repeats was nothing. We were able to make the 84 libraries over an 11-day period with just one technician working on it. I did not calculate the labor costs associated with the extra time, but I think you can see that automating this method has really increased our productivity.

I also wanted to talk about another project that we've automated on the Biomek 4000. At the Norris Cotton Cancer Center, we have several investigators who were interested in DNA sequencing for their favorite genes of interest, for example, so we created a custom cancer gene panel consisting of 541 genes identified by our investigators; this doesn't encompass the entire comprehensive cancer panel that's offered by Life Technologies. The two options really available to us at the time we set this up, which was over a year ago now, were the Agilent HaloPlex panel or the Ion Torrent AmpliSeq chemistry.

However, with the AmpliSeq chemistry, we were committed to purchasing 7,000 reactions, and for a small core facility, that was a little too many reactions for us to justify the upfront cost, so we opted for the Agilent HaloPlex technology. Again, we did this on the Ion Proton with 200 base pair sequencing. The plan was to try to get around 300x coverage to be able to detect rare variants, and for that, we needed to use the Proton.

Here is a list, though I don't know that you can necessarily read it, of the 541 genes that we wanted to look at, encompassing many of the known oncogenes and tumor suppressor genes that many cancer research investigators are interested in. The design itself covered 541 target genes, which resolved to 9,800 regions when taking into account the different sites and variants that we could look at; this came to 186,000 amplicons. The design was paired-end, so that we could look in both directions for each of the genes on the panel. Paired-end sequencing is one thing that Ion Torrent doesn't have the ability to do, so this was attractive to us as well.

The way that the protocol works is that the first step is to digest and denature the DNA; then there is a circularization and hybridization reaction that allows us to hybridize the regions of interest that we want in our capture, and following that, there is PCR amplification to generate the library for sequencing. This reaction is very labor-intensive. The first step is setting up eight restriction digests for each biological sample, so you can only do 12 biological samples, and that fills an entire 96-well plate, as sketched below; of course, there's room for error as you add the restriction enzymes and also the master mix. Our rationale for automating this was very different from our rationale for automating the NEB method. With the NEB method, we wanted to improve our turnaround times and do more samples than we could do manually; here, our rationale was really just to try to remove the potential for human error and to generate a more consistent library prep.
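A minimal sketch of that plate arithmetic, assuming one biological sample per column and one of its eight digests per row; the 8 by 12 numbers come from the talk, but the layout convention is an assumption for illustration.

ROWS = "ABCDEFGH"                    # one of the eight restriction digests per row
SAMPLES_PER_PLATE = 96 // len(ROWS)  # 12 samples, one per column

def halo_layout(sample_ids):
    """Assign each sample a column and each of its eight digests a row."""
    if len(sample_ids) > SAMPLES_PER_PLATE:
        raise ValueError(f"at most {SAMPLES_PER_PLATE} samples fit one plate")
    return {f"{row}{col}": (sample, digest)
            for col, sample in enumerate(sample_ids, start=1)
            for digest, row in enumerate(ROWS, start=1)}

layout = halo_layout([f"S{i}" for i in range(1, 13)])
print(layout["A1"], layout["H12"])  # ('S1', 1) ('S12', 8)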

Again, working with Sarah Simons from Beckman, we automated this method. We created a modular protocol here as well, so that we were able to run it over a number of days; one of the steps is an overnight hybridization, and of course we didn't want to have the protocol tying up the instrument whilst that was going on, so we set it up to be modular. At first, we set it up for one to eight samples. Here, I'm just showing the deck layout for this protocol, which is very similar.

To eliminate human error, as Zach already touched on, there are reagent calculators that tell us exactly what to add at each step and which tube it goes into on the machine. This is modified depending on the number of samples we put in, so if we're doing two libraries, the reaction volumes will be different compared to doing eight libraries, for example, and that also helps eliminate human error during the processing. Here's another little message telling us exactly what we need to do next with our sample at a certain step in the protocol.

I'm just showing here the different deck layouts and the different tools and instruments that we've been using to set up these protocols. We have a chilled plate and a magnet here, which are important for some of the steps in the HaloPlex protocol as well, and I'm also showing that we can use a reservoir for the different clean-up steps and for reagents that are used at larger volumes.

After the restriction digest, there is a confirmation step on the Fragment Analyzer. Here, I'm just showing that, independent of whether we prepared this step manually or on the Biomek 4000, the restriction digest pattern is the same across the eight restriction digest reaction mixes. After this first step, we always run the samples on the Fragment Analyzer to ensure that the libraries we're going to prepare are going to work, and we run a positive control, provided in the kit, with every batch we prepare so that we know things are working.

For this experiment, we compared the exact same biological sample prepared either manually or on the Biomek 4000. Here, I'm just showing the library prep profiles. The profile of the HaloPlex libraries is a little broader than that of the NEB libraries, ranging anywhere from about 150 to about 600 base pairs. At first, we were a little concerned that the 600 base pair material might impair the sequencing, since Ion Torrent's recommendation for the emulsion PCR is that the library be distributed more in the 200 to 300 base pair size range, but we didn't have any problems with the sequencing. I'll show you some of the sequencing results in a moment, but we got very good sequencing from both the Biomek and the manual libraries.

When comparing these, the library size was equivalent between our preparation methods. However, the yield for the Biomek 4000 was, in some cases, half of the amount that we got manually. Again, this is plenty to sequence. I think it just comes down to the fact that when you're preparing libraries manually, you're more conscious of mixing the beads and making sure you get the very last drop of reaction out of the well, whereas on the Beckman, there may be a little bit left over, one or two microliters, at each of the reaction steps. Over four or five different parts of the protocol, you may lose a little bit of material, but again, it's plenty of material to sequence.

Here are some of the sequencing results that we got on the Ion Proton. This was with the V2 chemistry, the earlier version. We were very happy, because we were getting pretty good polyclonal numbers, around 17%; per Ion Torrent, anything less than about 30% is really good. Even though these libraries had a broader distribution, that didn't really seem to affect the sequencing. We were able to multiplex five samples on the sequencing run and get about 400x coverage for each sample across the 541 genes on our targeted panel.
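The coverage arithmetic behind a multiplexing decision like this can be sketched as follows; the read count, on-target fraction, and design size below are hypothetical placeholders chosen only to show the calculation, and only the 145 base pair mean read length comes from the talk.

def mean_coverage(total_reads, read_len_bp, on_target_frac, target_bp, n_samples):
    """Approximate mean on-target coverage per multiplexed sample."""
    return total_reads * read_len_bp * on_target_frac / (target_bp * n_samples)

# Hypothetical run: 60 M reads, 145 bp mean read length, 70% on target,
# a 2.5 Mb capture design, five multiplexed samples.
print(round(mean_coverage(60e6, 145, 0.70, 2.5e6, 5)))  # ~487x, the same order as 400x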

Here, I'm just showing the on-target percentages and the uniformity percentages for the libraries prepared each way. You can see they're equivalent, so we were very happy with these results. There were some things we did, working with Beckman, to modify the protocol after we had set it up: we increased it from eight to 12 samples to be able to fill the entire 96-well plate; we consistently do a second wash at the end of the HaloPlex protocol to remove adaptor dimers, and we incorporated that robotically as well; and we modified the timing to allow for a little more ethanol evaporation during the library prep, which we suspected may have been having a small effect on the yield. I didn't mention this before, but we actually didn't have to modify the NEB protocol at all; the one that Sarah produced for us worked very well, so there were no modifications there, though it was a more straightforward protocol to automate.

Just to summarize for the HaloPlex: we definitely increased the accuracy of our library preps. Again, we had library preps produced manually that we had to repeat because of human error, and with this, we have very few repeated library preps, which is great. The lower yield doesn't seem to affect the sequencing ability at all. We've also optimized this with FFPE DNA: we did a project with about 25 FFPE samples, and that worked very nicely for the HaloPlex too.

To summarize my talk today: for us, automating these two protocols has resulted in less human error and more reliable library preps. In addition to the time saving for the NEB method, it has been cost effective, since we're no longer having to repeat libraries, and it allows us to have one technician working on this and doing more library preps while another technician is doing something else.

Working with Beckman has been great, especially working with Sarah. She has come up to Dartmouth many times and really helped us get started on this instrument, since there are only three of us in the lab; that was really, really appreciated. As for other applications we're working on, and areas where we feel this instrument can help us streamline our processes: we're doing the AmpliSeq Exome library prep from Ion Torrent, and we're working on automating that. We have the Fragment Analyzer that I mentioned, with which we QC all our samples, and we would like to automate the setup for that; it's set up in a 96-well plate with a standard mix for each sample and then the samples on top, and we think that could be automated as well, as could setting up qPCR in a 384-well plate, which we're also working on.

With that, I thank you for listening to our presentation. I would just like to acknowledge all the people at Dartmouth who have worked with me and really done all the work, especially Heidi Trask and Christian Lytle, who worked with Sarah on the robot, and the whole Bioinformatics team, including Xiangjun Xiao, who does most of our analysis, and then both NEB and Beckman for working with me and inviting me to present today. I think we have some time for questions, so I'll pass it back to Zach, who can facilitate. Thank you.

Zach Smith:
There we go. For the question and answer session, there is a question and answer section on the site where you can submit your questions to us. We have already received a few questions, so I will begin getting those answered. Our first question is: how much starting DNA is needed for the NEB Fast Frag and library prep kit and for the HaloPlex protocol? I can answer that one straight off the bat. For the NEB Fast Frag and library prep set, the method supports anywhere from 10 nanograms to one microgram of starting material. For the HaloPlex kit we're using, the kit supported 225 nanograms per sample, so that pretty much takes care of that one.

Another question we have received asks how random the DNA fragmentation achieved by the Fast Frag and library prep kit is. For that one, I'm going to pass the presentation over to Dr. Apone.

Lynne Apone, PhD:
Hi. Thank you. In the fragmentation mix, we have two enzymes: one that randomly nicks DNA, and one that cuts across from that random nick. It produces random fragments. When we compare sequence coverage of targets using the fragmentation mix to mechanical shearing, we don't see any bias in coverage, so we believe that it is in fact random. There have been a few independent publications looking at this issue, and we're happy to send you citations if you send us an email. I'm going to send this back to Zach.

Zach Smith:
Thank you very much, Lynne. Another question we have received is: is it possible to use Covaris with this library prep method rather than the enzyme-based fragmentation? I can answer that one. Currently, the method is designed to work with the fragmentation solution in the kit, but it could be modified fairly easily to accommodate Covaris-sheared samples. Okay, another question we have received is: is there an automation method for the Fast Frag and library prep set on other Biomek platforms? I can answer that one as well. We do have a solution for the Fast Frag and library prep set on the Biomek FXP; it is currently installed at a site and is available for customers to use. The method for the Biomek 4000, as well as the HaloPlex method, is also available right now, and we can install them at customer sites. The next question we have received is: can you get copy number information from HaloPlex? With that, I am going to turn the presentation over to Dr. Hamilton.

Joanna Hamilton, PhD:
Thank you. I'm not 100% sure about the copy number information. I think you would need to run a number of normal controls, as for any of the CNV analysis that we do with Ion Torrent. It may also be that, because we're looking for deeper coverage here, the copy number analysis tools may not be available for this, but again, I'm not 100% sure; we would need to check with Agilent on that. Thank you.

Zach Smith:
Thank you, Joanna. Another question we have received is: what was the total turnaround time to prep 48 samples using the NEBNext kit on the Biomek? The total machine time is about four hours for 48 samples; however, that doesn't include all the prep work that's necessary to get those 48 samples onto the instrument. I'd like to go back to Dr. Hamilton and get her input on how this performed in her lab.

Joanna Hamilton, PhD:
Yeah. Thank you, Zach. We actually didn't do 48 samples at one time; the most that we did was 24, and we reckoned that took about three hours. Really, the only manual setup is the first step, where we put the DNA into the 96-well plate to start with; everything else is done on the deck, and the whole thing takes about three hours for 24 samples. I imagine the robot's not going to take too much longer for 48, and the manual step wouldn't be much more, so I think four hours sounds very reasonable. I think you could very feasibly do two preps a day.

Zach Smith:
Thank you very much, Joanna. With that, I would like to revisit a previous question about using Covaris-sheared samples, and give Dr. Apone a chance to weigh in on that one.

Lynne Apone, PhD:
Thank you very much. I just want to mention that we do offer a kit in which the input is DNA that has already been fragmented by whatever method is preferred. Okay. Thank you.

Zach Smith:
Okay, well, if there are no other questions, I think that concludes our presentation here today. I'd like to thank each and every one of you for showing up and listening to our talk, and I'd also like to thank Dr. Joanna Hamilton from Dartmouth and Dr. Lynne Apone from New England Biolabs for presenting. I'm Zach Smith from Beckman Coulter, and thank you very much.