Wednesday 6 February 2013

Takeaways from FutureMed (Day 2)


Here are a few of my takeaways (and thoughts) from yesterday at FutureMed 2013.

We are full of sh*t
Just as gene sequencing took off these past couple of years, we will soon see analysis of microbes break new ground.  Companies like uBiome are mapping the microbiome (the sum total of our microbes, their genes and their environment).  I learnt during Larry Smarr's lecture that 90% of our cells and 99% of our genes aren't human (whatever human is) but microbial!  Our gut alone harbors 100 trillion bacteria.  Which species are most dominant inside us shapes our physiology.  So we are indeed what we put into our mouths!  Read this fascinating article on how Larry quantified himself to the extreme to take control of his health.

Watson is learning about cancer at Kettering and will go live by the end of the year
IBM's Watson is at Memorial Sloan-Kettering devouring knowledge from millions of the hospital's patient encounters.  Senior oncologists there are spending 50% of their time teaching Watson.  By the end of this year, it'll start assisting doctors with therapeutic options for cancer.  Marty Kohn said that IBM hasn't yet decided on a business model for Watson.  Will it be one big cloud or several mini-clouds?  We don't know yet.

An inflection every 50 years
Back in 1870 came germ theory.  Around 1920, advances driven by medications (penicillin, for example).  By 1970, medicine had evolved into an evidence-based science.  Fast forward to 2020 and we could well be working in a healthcare world driven primarily by data.  These thoughts came from Daniel Riskin, MD, whose company extracts data from EHRs and creates meaning out of it.

A machine is possibly better than 50% of MDs (who are below average)
Vinod Khosla reflected on insights that stare us in the face.  The most uncomfortable of them: by definition, half of the doctors out there are below average, and they could provide much better medicine with assistance from a machine.

Elegant visualization and avatars
Among several other things, John Mattison, MD from Kaiser talked about elegant visualization in the world of big data.  As a case in point, a recent study generated 5 terabytes of data for every patient.  To put this in context, most of the world's data has been created in the past two years and is unstructured (Marty Kohn).  A better way to make sense of such data is to visualize it, for example with heat maps that show what's going on at a glance.  What if you combined such visualization with actionable outcomes delivered through avatars?  If your grandma heard from your avatar in your voice, would she be more willing to get active every day?
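As a toy illustration of what a heat map over patient data might look like, here's a minimal Python sketch using numpy and matplotlib; the metrics and dimensions are entirely made up.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical matrix: 20 patients x 12 monthly lab values (invented data).
data = rng.normal(loc=100, scale=15, size=(20, 12))

fig, ax = plt.subplots()
im = ax.imshow(data, cmap="viridis", aspect="auto")  # the heat map itself
ax.set_xlabel("Month")
ax.set_ylabel("Patient")
fig.colorbar(im, ax=ax, label="Lab value")
plt.show()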

4 predictions for health IT
Christopher Longhurst, MD made the following interesting predictions:
1) EMR vendor consolidation and increasing federal regulation will pose challenges to rapid innovation
2) Health IT focus will shift over the next 3-5 years from EMR implementation to analytics-enabled clinical decision support
3) Over the next 5-7 years, we will see a shift from "evidence-based practice" to "practice-based evidence" through the creation of national learning healthcare systems (could HIEs be those hubs?)
4) Personal health records will evolve into personal health advisors

Trusted recommendations at the right time change behavior
Julia Hu from Lark showed the new Lark.  Their approach to development involved capturing and automating expert coaching and delivering behavioral nudges to users at the right time.  For example, if Lark notices that you didn't sleep well last night, have been running around all day, are still in the office at 7PM and have been sitting for two straight hours, it knows you probably need a push to de-stress for a few minutes.
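To show the flavor of that kind of rule, here's a hypothetical Python sketch; the signal names and thresholds are invented and have nothing to do with Lark's actual logic or API.

from dataclasses import dataclass

@dataclass
class DayContext:
    slept_hours: float       # last night's sleep
    steps_today: int         # how much running around so far
    hour_now: int            # 24-hour clock
    minutes_sitting: int     # current sedentary streak

def should_nudge_to_destress(ctx: DayContext) -> bool:
    """True when the combined signals suggest a short de-stress break."""
    poor_sleep = ctx.slept_hours < 6
    busy_day = ctx.steps_today > 8000
    late_at_office = ctx.hour_now >= 19
    long_sit = ctx.minutes_sitting >= 120
    return poor_sleep and busy_day and late_at_office and long_sit

print(should_nudge_to_destress(DayContext(5.0, 9000, 19, 130)))  # True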

New age of innovation
Here's 16-year-old Jack Andraka's video, where he screams joyously on being named grand prize winner at the Intel Science Fair last year.  He developed a new way to detect pancreatic cancer and is talking to Quest and other labs about commercializing his method.  He closed his presentation by noting that the Internet has no notion of who you are, your age, your gender and so on - it mainly cares about what you have to say.

More later.

Sunday 3 February 2013

Amazon must invest in bio-data centers


For all the scramble we've had over cloud computing these past few years, a gentle evolution could be underway.  Last August, George Church's Regenesis became the first book to be stored in DNA, and it was replicated 40 billion times (which was fun, I'm sure).  It cost about $1,000 to print onto the first DNA chip and then another $50 to replicate via polymerase chain reaction (PCR).

How it's done
A book consists of words and images that fit into an HTML file (Church's book has 53,246 words and 11 JPEG images).  The file itself occupies a few hundred KB in traditional information-storage terms.  Broken down, the file is a series of 0s and 1s that can be encoded into ATCG equivalents (the primary nucleobases found in DNA).  Church's team encoded each 0 as an A or a C and each 1 as a G or a T.  Doing this yields a few million bases spread across a few thousand short DNA segments.  Once the ATCG equivalent of the book is available on a computer, it can be taken to a lab to synthesize.  The result is an extremely compact storage medium with massive capacity - an oligonucleotide chip.  Direct binary encoding into bases is error-prone (long runs of the same base confuse sequencers), so newer experiments use a ternary system (0, 1 and 2 rather than binary) designed not to confuse DNA readers.  However, we must just remember that the book in our hands will now be all sticky and gooey.
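To make the idea concrete, here's a minimal Python sketch of that simple binary scheme (0 becomes an A or a C, 1 becomes a G or a T).  It's an illustration only, not Church's actual pipeline; real designs add addressing, redundancy and error correction.

import random

BASES_FOR_0 = "AC"
BASES_FOR_1 = "GT"

def bytes_to_bits(data: bytes) -> str:
    return "".join(f"{byte:08b}" for byte in data)

def bits_to_dna(bits: str) -> str:
    # Pick A/C for each 0 and G/T for each 1 at random; varying the choice
    # helps avoid long runs of one base, which are hard to synthesize and read.
    return "".join(random.choice(BASES_FOR_0 if b == "0" else BASES_FOR_1)
                   for b in bits)

html = b"<html><body>Regenesis...</body></html>"
strand = bits_to_dna(bytes_to_bits(html))
print(strand[:40], "...", len(strand), "bases")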

A DNA Kindle
Just as we need a Kindle for books (basically a hardware device with software capable of rendering 0s and 1s in human-readable form), we will now need rapid DNA readers (sequencers).  The cost of sequencing a genome is expected to drop below $1,000, but scientists are still working with humans in mind, not books and other data that we may wish to store and read.

A device that can take the gooey stuff, sequence it to get the ATCGs and then present it in human-readable form could soon be a much-needed Christmas toy.  Without stretching my imagination too far, I see a possibility where we have bio-printers and bio-readers at home.  We'd get our goo in a dark box in the mail (it has to be protected from sunlight), pour it in, sequence it and print it (into tissue, a book, whatever!).
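Here's the reverse direction under the same toy scheme (A or C reads as 0, G or T as 1) - roughly what the software in such a home reader would do after the sequencer hands it the bases.  Again just a sketch; a real device would also have to cope with sequencing errors, fragment ordering and redundancy.

def dna_to_bits(strand: str) -> str:
    return "".join("0" if base in "AC" else "1" for base in strand)

def bits_to_bytes(bits: str) -> bytes:
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = "AGAAGAAAAGGAGAAG"  # toy input: these 16 bases encode the text "Hi"
print(bits_to_bytes(dna_to_bits(strand)).decode("ascii"))  # -> Hi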

Data centers with a 10,000 year shelf-life
Our current storage systems have poor shelf lives.  A book wears and tears.  A CD breaks and scratches.  Data servers heat up, die and require expensive backups.  And DNA?  DNA stays.  Consider that every DNA base pair in our cells has an unbroken lineage tying us right back to the very beginning of life (some 3,900 million years ago, when single-celled organisms first discovered that sex was a good idea).  It has always been there - not in its current diversity, but in its base forms.  I learnt that we now have the complete DNA code of our Neanderthal brethren - a genome consisting of 4 billion nucleotides.  Read this fascinating scientific study that also explores whether we (humans) messed around with them.

It's possible, therefore, to foresee the advent of an entirely new industry that stores humanity's data in biological form.  Instead of large physical data centers housing hundreds of thousands of servers, coolers and other goodies, we would have micro-scale biological storage systems kept in cold, dark places.  It's so micro that a single gram of DNA holds 2.2 million gigabytes of data.  The Economist says you could fit the world's data in a lorry (truck for some).
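A quick back-of-envelope check of that lorry claim in Python (the ~3 zettabyte figure for the world's data is my own rough assumption, not a number from the article):

GB_PER_GRAM = 2.2e6                  # 2.2 million gigabytes per gram of DNA
WORLD_DATA_GB = 3e12                 # ~3 zettabytes, expressed in gigabytes

grams = WORLD_DATA_GB / GB_PER_GRAM  # grams of DNA needed
print(f"{grams / 1000:.0f} kg")      # ~1,364 kg - about one lorry load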

*
Amazon Web Services started with a small group of people in South Africa who imagined a brave new world running storage and computing off the Internet (AWS today is an est. $3.8B business).  Amazon hopefully knows it's time to let its engineers mess with biology.  Hopefully Facebook doesn't.