Called Baseline Study, the project will collect anonymous genetic and molecular information from 175 people—and later thousands more—to create what the company hopes will be the fullest picture of what a healthy human being should be.

The early-stage project is run by Andrew Conrad, a 50-year-old molecular biologist who pioneered cheap, high-volume tests for HIV in blood-plasma donations.

Dr. Conrad joined Google X, the company’s research arm, in March 2013, and he has built a team of 70 to 100 experts from fields including physiology, biochemistry, optics, imaging and molecular biology.

Other mass medical and genomics studies exist. But Baseline will amass a much larger and broader set of new data. The hope is that this will help researchers detect killers such as heart disease and cancer far earlier, pushing medicine toward prevention rather than treatment of illness.

"With any complex system, the notion has always been there to proactively address problems," Dr. Conrad said. "That’s not revolutionary. We are just asking the question: If we really wanted to be proactive, what would we need to know? You need to know what the fixed, well-running thing should look like."

The project won’t be restricted to specific diseases, and it will collect hundreds of different samples using a wide variety of new diagnostic tools. Then Google will use its massive computing power to find patterns, or “biomarkers,” buried in the information. The hope is that medical researchers can use these biomarkers to detect disease far earlier.
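
To make the biomarker-hunting idea concrete, here is a minimal sketch of the underlying statistics on synthetic data; the feature counts, group sizes, and choice of test are illustrative assumptions, not a description of Google's actual pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical measurements: rows are participants, columns are molecular
# features (e.g., metabolite or protein levels). One group stayed healthy,
# the other later developed a disease. All data here is synthetic.
n_healthy, n_disease, n_features = 120, 55, 200
healthy = rng.normal(0.0, 1.0, size=(n_healthy, n_features))
disease = rng.normal(0.0, 1.0, size=(n_disease, n_features))
disease[:, 7] += 1.5  # plant one real signal in feature 7

candidates = []
for j in range(n_features):
    # Welch's t-test: does this feature differ between the two groups?
    t, p = stats.ttest_ind(healthy[:, j], disease[:, j], equal_var=False)
    candidates.append((j, p))

# Bonferroni correction: testing 200 features inflates false positives,
# so divide the significance threshold by the number of tests.
alpha = 0.05 / n_features
biomarkers = [j for j, p in candidates if p < alpha]
print("candidate biomarker features:", biomarkers)  # should flag feature 7
```

A real study would have to handle confounders, longitudinal measurements, and far subtler effects, but the core pattern is the same: compare a healthy baseline population against outcomes and flag the features that separate them.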

Premise Data Corporation sits right at the intersection of several interesting trends and provides an extremely valuable service to some very hungry customers with deep pockets.

In many ways, this is the same play as Onavo.

I have long been a strong advocate of using data to drive product teams. Rather than handing the team an old-style roadmap listing someone’s best guess as to which features may or may not work, I strongly prefer to give the product team a prioritized set of KPIs, and then the team makes the calls on the best ways to achieve those goals. It’s part of a larger trend in product to focus on outcomes, not output.
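
As a toy illustration of what that artifact can look like, here is a minimal sketch; the metric names, numbers, and the gap heuristic are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    current: float
    target: float
    priority: int  # 1 = most important

    def gap(self) -> float:
        # Fraction of the target still unmet; 0 means the goal is hit.
        return max(0.0, (self.target - self.current) / self.target)

# Instead of a feature roadmap, the team gets goals like these
# (all numbers hypothetical):
kpis = [
    KPI("weekly_active_users", current=42_000, target=60_000, priority=1),
    KPI("signup_conversion_rate", current=0.031, target=0.05, priority=2),
    KPI("median_session_minutes", current=6.5, target=8.0, priority=3),
]

# The team decides *how* to move these numbers; the ordering just tells
# them which gap matters most right now.
for k in sorted(kpis, key=lambda k: k.priority):
    print(f"{k.priority}. {k.name}: {k.gap():.0%} of target remaining")
```

The point is that the artifact handed to the team is a set of goals with an ordering, not a list of features; the team owns the solution space.
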
In the future, it should not matter to pharmaceutical companies, payers, physicians or patients whether an intervention for a specific disease is chemical or technological. The only thing that matters is whether the data package generated on that intervention proves it is efficacious, safe and cost-effective over the long term (meaning patients have to use it long-term).

"One provision of federal health reforms ties hospitals’ reimbursement for treatment more closely to patient outcomes than to the volume of patients treated.

Feeling more scrutiny, health-care providers now have an immediate need for the types of software and big-data products that can help them track treatment efficacy and patient progress over large populations of people, Dr. Yeshwant said.”

Some solid tips for navigating decisions under uncertainty from Twitter’s Director of Product Management, Ameet Ranadive. I’ve found similar ideas to be super helpful in navigating product development when there are many unknowns. Here’s my take on his three key points:

  1. Day one hypothesis: I find it’s worth spending some time forming an initial hypothesis and recording it. Your views, expectations, and opinions will change rapidly as you see new data and continue the dialog with your team, so it helps to know what your starting point was, what changed, and how you got there. Being able to reconstruct your decisions is one of the best ways to improve your decision-making process.
  2. Directionally correct + order of magnitude > the perfect estimate: It’s easy to suffer from analysis paralysis when chasing the perfect decision or estimate, and most analytical people fall into this trap. We flesh out our models with as much detail as possible in an attempt to be precise, but that often misses the purpose of the exercise: figuring out what to do next. By refocusing on being directionally correct and getting the order of magnitude right, you’ll have a much easier time making decisions and estimates you’re comfortable relying on.
  3. What do you have to believe: Force yourself to figure out which data points are required to make a given decision, and then come up with a notion of confidence in your estimates for each one. Inevitably, there will be some shaky estimates in there. Focus on what those need to be in order to achieve different scenarios and then vet each case. Essentially, you isolate the most variable component and drill into it as much as possible (the sketch after this list walks through a toy example of points 2 and 3).
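
To make points 2 and 3 concrete, here is a minimal sketch of the “what do you have to believe” exercise on a toy revenue model; every input range is a made-up assumption:

```python
# Toy model: projected annual revenue = reachable users x conversion x ARPU.
# All inputs are hypothetical estimates, each with a rough uncertainty range.
estimates = {
    "reachable_users": (1e6, 5e6),   # (low, high)
    "conversion_rate": (0.01, 0.04),
    "arpu_dollars": (20.0, 40.0),
}

def revenue(users: float, conversion: float, arpu: float) -> float:
    return users * conversion * arpu

# Baseline: the midpoint of every range. We only care about the order of
# magnitude, not a precise figure.
mid = {k: (lo + hi) / 2 for k, (lo, hi) in estimates.items()}
base = revenue(mid["reachable_users"], mid["conversion_rate"], mid["arpu_dollars"])
print(f"baseline revenue: ~${base:,.0f}")

# One-at-a-time sensitivity: swing each input across its range while holding
# the others at their midpoints. The biggest swing marks the belief to vet
# hardest before committing to a decision.
for key, (lo, hi) in estimates.items():
    args_lo = {**mid, key: lo}
    args_hi = {**mid, key: hi}
    r_lo = revenue(args_lo["reachable_users"], args_lo["conversion_rate"], args_lo["arpu_dollars"])
    r_hi = revenue(args_hi["reachable_users"], args_hi["conversion_rate"], args_hi["arpu_dollars"])
    print(f"{key}: ${r_lo:,.0f} to ${r_hi:,.0f}")
```

With these invented numbers, the audience-size and conversion assumptions each swing the outcome by roughly 4x to 5x while ARPU swings it by only 2x, so those are the beliefs to pressure-test first. A more detailed model wouldn’t change that conclusion, which is exactly the order-of-magnitude point.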

I hope you find this helpful!