This is a great topic! Thank you for hosting an AMA. And I’m so glad you mentioned long-read data too.
- What are some ways to optimize yield and quality for both short-read and long-read preps?
- Any suggestions for standardization methods that help reduce batch effects across isolation methods and sites, so that downstream data can be compared more readily with other cohorts, etc.? Thinking of a prior discussion about what makes useful metadata.
- Any suggestions for reducing human error in general? This is more of a meta/open-ended question, thinking about some prior discussions here about detecting sample swaps - if we can prevent them, even better! I also wanted to highlight your QC comment, as I think it’s worth keeping an ongoing discussion going about QC and sample-swap prevention (a rough sketch of the kind of swap check I mean is below).
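
To make the sample-swap bit concrete, here’s a toy Python sketch of the kind of genotype-concordance check I’m picturing. Tools like somalier and NGSCheckMate do this properly; the site IDs, genotype encoding, and the 0.9 threshold here are all made up purely for illustration:

```python
# Hypothetical sketch: compare genotype calls at a panel of common SNPs
# between two datasets that are supposed to be the same sample.
from typing import Dict

# Genotypes encoded as alt-allele counts (0, 1, or 2) at each fingerprint SNP.
Fingerprint = Dict[str, int]  # site ID -> alt allele count

def concordance(a: Fingerprint, b: Fingerprint) -> float:
    """Fraction of shared fingerprint sites with identical genotypes."""
    shared = set(a) & set(b)
    if not shared:
        raise ValueError("no fingerprint sites in common")
    matches = sum(1 for site in shared if a[site] == b[site])
    return matches / len(shared)

# Toy fingerprints; a real panel would use hundreds of common SNPs.
lab_aliquot = {"rs1001": 0, "rs1002": 1, "rs1003": 2, "rs1004": 1}
sequenced   = {"rs1001": 0, "rs1002": 1, "rs1003": 0, "rs1004": 2}

score = concordance(lab_aliquot, sequenced)
# The same individual typically scores near 1.0; unrelated samples much lower.
if score < 0.9:  # threshold is arbitrary here, for illustration only
    print(f"Possible sample swap: concordance = {score:.2f}")
else:
    print(f"Samples look concordant: concordance = {score:.2f}")
```

Catching a swap after sequencing is useful, but it only tells you something went wrong; I’d love to hear about process-level safeguards that keep the mix-up from happening in the first place.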