Layers Upon Layers

We briefed the Secretary of Homeland Security, Janet Napolitano, yesterday. She asked a number of very cogent questions about how we could detect threats to health early and bring that information to someone in her position as quickly and clearly as possible. In a very polite way, she was challenging us to demonstrate our capabilities. I think we did well in our response, and it got me thinking that I needed to express some thoughts in this venue to lay out our approach in a longer form.

All incidents relating to health threats begin somewhere on the planet, but may not stay there. Both naturally-occurring zoonotic agents and human-engineered pathogens may start in one place and migrate with human and other kinds of traffic. I separate them into two major categories:
  • “Static” data that does not change rapidly or is not augmented with new data very often.
    • These data include streets, infrastructure, fixed resources such as hospitals and fire stations.
    • These data form a set of foundation layers to allow us to build upon them with the other type of data.
  •  “Dynamic” data that does change rapidly or grows as we move through time.
    • These data can be meteorological, biological, clinical (human and non-human), business data, and social media.
    • Some have specific location components (like our EMS data with lat and long elements).
    • Some have far less granular location data because of the nature of where they are created. Hospital locations are all known, but the data relating to patients may refer to billing addresses rather than residences. Those can be useful, but less so than EMS data that most often relates to the point of illness or injury.
    • Some are specific as to character and location, but are only useful as background or for situational awareness, such as the SMARTT data on hospital resources.
    • And some have no geographic specificity such as most online social media.
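To make the taxonomy concrete, here is a hypothetical sketch of how these layers might be represented in code. The names and fields are my illustration, not our actual schema; the point is that static and dynamic data share a shape but differ in location granularity:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LayerKind(Enum):
    STATIC = "static"    # streets, infrastructure, hospitals, fire stations
    DYNAMIC = "dynamic"  # meteorological, clinical, business, social media

@dataclass
class Record:
    source: str
    kind: LayerKind
    # Location granularity varies: EMS records carry lat/long, hospital
    # data may only have a billing address, and most social media has
    # no geography at all.
    lat: Optional[float] = None
    lon: Optional[float] = None

    @property
    def has_point_location(self) -> bool:
        return self.lat is not None and self.lon is not None

ems = Record("EMS run", LayerKind.DYNAMIC, lat=35.78, lon=-78.64)
tweet = Record("social media post", LayerKind.DYNAMIC)
print(ems.has_point_location, tweet.has_point_location)  # → True False
```

In practice, the static layers would be loaded once and refreshed rarely, while dynamic records stream in continuously and are joined against them.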

Assuming that we have the above types of data, the focus turns to the middle layer of aggregation, integration, and analysis to identify threats to health as early and with as much specificity as possible. (The Secretary was quite pointed in her need to understand a complex situation quickly—asking us to show more detail about how we detect and characterize incidents.) We have made great strides in the past year in developing methods to incorporate new data sets and in building analytical tools that identify anomalies and characterize them as potential health threats.
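Our actual models are more involved than anything I can show here, but a minimal sketch of what that kind of anomaly detection looks like is a trailing-window test on daily syndromic counts. The window and threshold below are arbitrary choices for illustration only:

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, window=7, z_threshold=3.0):
    """Flag days whose count exceeds the trailing-window mean by more
    than z_threshold standard deviations. Window and threshold are
    arbitrary choices for illustration."""
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Ten quiet days, then a spike on day index 9.
counts = [12, 14, 11, 13, 12, 15, 13, 14, 12, 40, 13]
print(flag_anomalies(counts))  # → [9]
```

A real system layers seasonality adjustment, spatial clustering, and characterization on top of detection, but the principle is the same: establish a baseline, then flag departures from it.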

That brings us to the engagement layer where everything comes together. I see several ways to express information online:
  • Static reports—providing a regular update on the status of health threats for professionals. The report shows activity relating to health threats as alert levels from normal to highest alert, based on what our analytics models determine. Ideally, it’s a single page that features trends, maps, and other indicators. The goal is at-a-glance comprehension of the situation leading up to that day.
    • We will tailor these by role (local public health official, emergency department manager, school nurse, physicians in private practice, fire chiefs, emergency management, etc.) by jurisdiction or service area (catchment area, school district, municipality, county, state, market area, etc.) and type of organizational setting (public, private, governmental, NGO, for-profit, not-for-profit).
    • Part of the value proposition is to cast the information in terms that relate to the responsibility of the person receiving them. For example, public officials have general responsibility for their jurisdictions to safeguard their fellow citizens. Private firms have responsibility to their companies and shareholders to manage risks to their businesses—including risks of tainted or contaminated products that pose threats to health. We believe that both ends of this continuum are actually mutually reinforcing—if we can get everyone’s data. With comprehensive data we can provide the best available information to guide understanding and inform responses.
  • Dynamic reports—providing the above but with extra tools to allow engagement with the alerts and other information the system develops. These are essentially dashboards. The capabilities needed include choosing the data sources, health issues, types of vectors of the agent, type of place where the incident began, disposition of the case, etc. We also want to support changing the time frame of the report from the present or other designated date to look at trends over a week, a month, a quarter, or a year. We want to show alerts over time and place with the best method to speed understanding. These would involve maps, graphs, or line listings. We see these leveraging the output of our system in the form of alerts and characterizing information as to time, place, issue, severity, and the like. Finally, we want these reports to serve as virtual rooms where people can discuss the situation and achieve shared understanding of what is happening.
  • Expert level—essentially access to the tools for developing and using our data. While all levels will have role-based security, access to this system and its attendant data will be even more tightly controlled because it includes access to the raw data. We will reserve it for those with the legal right to view such data and with the permission of the data provider. Yet it will need all the functions listed above and many more to support ongoing investigations and the development of analytics models for alert detection and characterization.
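The tailoring described above can be sketched in miniature. This is a toy illustration, not our product's design; the roles, jurisdictions, and alert levels are invented names:

```python
from dataclasses import dataclass

@dataclass
class Recipient:
    role: str          # e.g. "school nurse", "fire chief" (illustrative)
    jurisdiction: str  # e.g. a county or school district

# Alert levels, lowest to highest, as described above.
LEVELS = ["normal", "elevated", "high", "highest"]

def daily_report(alerts, recipient):
    """Return only the alerts relevant to this recipient's
    jurisdiction, ordered most severe first."""
    relevant = [a for a in alerts if a["jurisdiction"] == recipient.jurisdiction]
    return sorted(relevant, key=lambda a: LEVELS.index(a["level"]), reverse=True)

alerts = [
    {"jurisdiction": "Wake County", "level": "high", "issue": "GI illness cluster"},
    {"jurisdiction": "Durham County", "level": "elevated", "issue": "ILI uptick"},
    {"jurisdiction": "Wake County", "level": "normal", "issue": "routine summary"},
]
nurse = Recipient("school nurse", "Wake County")
print([a["issue"] for a in daily_report(alerts, nurse)])
# → ['GI illness cluster', 'routine summary']
```

The real value proposition adds the role dimension as well—filtering not just by where the recipient is, but by what they are responsible for.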

The hardest question the Secretary posed was "Can you tell me what's going to happen next?" More on that later. 


Ten Years On: From Anthrax to Food Safety

It is an axiom in military history that generals are always refighting the last war. In American history, the initial efforts in our wars are often unsuccessful because we have to unlearn those lessons and replace the people who cannot. On the civilian side, we re-fight political battles as well. We cite the Founding Fathers or Abraham Lincoln to defend our respective political viewpoints as though the issues they grappled with are identical with those of our day. The risk of engaging in this practice is that sometimes we reason from past events and trends and miss the novel developments in the present and the future.  

We are still assessing the impact of 9/11 and have not had the time and distance to consider how much it marks a change in thinking and perspective. One of my professors (of medieval history) once said that he preferred studying the Middle Ages because it was quieter—the dust had settled. My interests, however, were in modern history with all the attendant noise. I believe that our responses to 9/11 were in line with larger trends. The idea that we can imagine events that have not yet occurred and quantify their probabilities is a novel development. While there have been futures markets in commodities for many years (as well as their routine collapses), we now have a large number of people who profit from examining the information available to them and betting for or against future developments. Biosurveillance (and food defense) can benefit from those approaches. It comes down to weighing the likelihood of future events or threats.

We failed to imagine the use of airplanes as bombs or mailing anthrax in envelopes as ways to attack our country. Those failures do not give us license to spend our scarce resources defending against every imaginable threat, however. We need to assess the risks against the investment necessary to guard against them. In some ways we can think about the next war (if I can stretch my initial point here) rather than refight the last. The means we use to do that require quantifying the probability of the threat. To spend billions on threats that have little likelihood of happening is not prudent, but spending millions may be a wise investment—particularly if those means also guard against ordinary threats that hurt many every day.

We have seen ordinary threats emerge on a routine basis--such as the current listeria outbreak we are responding to as of this writing. Yet we still respond after the fact to such threats, which sicken, hospitalize, and kill many of our citizens. The secondary use of data and the application of analytics that we have developed for other purposes seem to offer just such a response. These approaches are still early, but this work within the larger trend of using data and quantitative tools to look at the present offers us a way to peer into the future. If we address the issues of food safety in this manner, I believe we address the issues of food defense.


A New (Virtual) Immune System?

I've written and rewritten this post several times. I worried that I was not a medical expert or did not know enough about computer viruses. Well, dear reader, heck with it. Here are some ideas to chew on. If you see any issues, let me know. 

Our bodies work every day to respond to threats to our health. Our immune systems work all the time to respond to threats from pathogens that surround us. These can be quite ordinary bacteria and viruses that enter our bodies because we inhale, drink, or eat them. They also come in through the skin, our eyes, and through cuts. The immune system is a marvel of evolution with both localized response and systemic response--all without us thinking about it.

One worry we have is when we encounter a pathogen that we as a species have never seen before. Some zoonotic agents, diseases that move from non-humans to us, can attack us when our immune systems have no antibodies to use in our defense. More worrisome are agents that suppress our immune system's ability to respond, leaving us less defended against ordinary threats. When I stop to think about these concerns, both natural and human-engineered, I literally lose sleep.

Zoonotic agents are on the rise as we move economic activity into formerly remote areas for mineral extraction or lumber harvesting. The opening of the higher areas of the Amazon Basin or the Congo to economic activity opens pathways for pathogens to move from these areas to the rest of the world. The US Agency for International Development has tracked the rate of these zoonoses for several years and has determined that they will continue to appear with greater frequency as we advance the global economy.

In the information technology realm, there is a similar situation where humans create viruses and other types of threats. The IT folks have created immune systems that involve firewalls, gateway detectors, anti-virus software, and lots of education to convince people that the threats are real. Properly managed, these form a defense in depth that allows our computers to keep running without being adversely affected--grinding to a halt as an errant bot takes over the computer's functions, or violating our privacy by revealing information to others.

Given these threats and many others that I have not mentioned (don't want to affect your sleep), what we really need is a societal immune system--a way to detect, characterize, and prompt a response to threats to human health, one that informs us of local outbreaks as well as the big pandemic ones. The tricky part is to get enough data early enough and apply robust analytics to see threats locally. Like our bodies' immune systems, we need to see and act locally before the problem becomes systemic.


Are We Selling Soda?

Our team is having an interesting discussion about how we provide value to our customers. Views are all over the place and sometimes conflict with each other. As Elwood P. Dowd once remarked, "Well, an element of conflict in any discussion's a very good thing. It means everybody is taking part and nobody left out." Believe me, no one is left out.

When complete and operational, our system will support biosurveillance as defined by the federal government in Homeland Security Presidential Directive 21, as "the process of active data-gathering with appropriate analysis and interpretation of biosphere data that might relate to disease activity and threats to human or animal health – whether infectious, toxic, metabolic, or otherwise, and regardless of intentional or natural origin – in order to achieve early warning of health threats, early detection of health events, and overall situational awareness of disease activity." Okaaay, how do we do that? How do we deliver it? How do people engage with those analyses and interpretations?

One metaphor that may be helpful is making and selling soda. It's a liquid that comes in bottles made of glass, bottles made of plastic, and aluminum cans--at least when you think of the finished product. It also comes in containers of syrup used by soda fountains to blend their own soft drink by adding carbonated water at the moment of mixing. That lowers the cost of transportation and delivery radically, provides a fresh product, and avoids all the disposal issues of the cans or bottles. (I began my professional career as a busboy in a restaurant where I picked up such arcane knowledge.) But putting the soda in a glass can only be part of the path to the customer. Depending on the place, the soda can be added to ice cream for a fun, if fattening, experience. It can also have rum as its other active ingredient--equally fun although more immediately risky. Some people have even used it in cooking to sweeten dishes or add flavors they hope will delight the palate. So, the ubiquitous soda may be delivered in a wide variety of ways, mixed in with other things, and enjoyed by a person throughout a lifetime as their tastes evolve.

Our commodity is information derived from many sources across the biosphere. We divide it into the following data domains:
  • Physical
  • Biological
  • Clinical (non-human)
  • Clinical (human)
  • Organizational
  • Social media

These are meant to represent the variety of sources of data we use. These can all tell us about the potential for threats to our health. The physical domain is the most basic, since it involves data about such things as the weather, air quality, and water safety. The biological domain covers all living things that are not human beings or non-human animals--such as the bacteria and viruses that may infect us and make us sick. It also includes all foods that are plants or derived from plants. The clinical (non-human) domain includes veterinary data on companion animals who live with us, other animals raised for food production, and wildlife. The clinical (human) domain is what most of us think of when we mention data relating to human health, but it is often not seen as part of this kind of spectrum. The organizational data domain brings in government agencies, businesses, and other groups that may have data of interest. These can touch the other domains--none of them is meant to be exclusive--and may include data coming from economic activity such as purchases of prescription drugs or over-the-counter medications. Finally, everyone's current favorite: social media. These sources include household names like Facebook and Twitter, but also more esoteric sources such as MPHISE.

This construct is only meant to suggest the variety of data that are useful for biosurveillance. Our approach is to be inclusive in the variety of data we use, but to recognize that not all data are of equal utility. Therefore, we evaluate a data source on what it can tell us about emergent health threats. Some sources are only background to other data that help us develop our information. Some are significant on their own in their ability to inform us of these threats.

The data are only the raw materials, however. We use them with a set of analytics tools to derive information from the data. That information, itself, is still only a part of the full value we can provide. Information needs a context for interpretation and meaning. Taken all together we can decide what is going on and act--or not--as the situation requires.

I think that we can look at soda as a metaphorical product. Like our information, it's delivered in a variety of ways, sometimes served by itself, sometimes mixed in with other things. It can stand on its own in its own container or be put into another person's container that suits their purposes. Getting the formula right is a necessary prerequisite for satisfaction but may not stop there. We should also enable people's use of our "soda" in their own containers and concoctions. Therefore, we need to develop our product so it can be used by everyone in every way possible. Easily said, not so easily done. More later.


History Is Speeding Up?

I'm an information technologist. There, I said it, and I'm glad. It's such a relief to get that off my chest.

I have worked in IT since the 1980s, which makes me a very old guy in that space. Back then, there were not nearly as many of us as there are now. I guess you could say that, defined by the use of IT tools and enhancing them, there are now many millions of information technologists world-wide. The spread of skills, knowledge, and abilities in IT across the globe has accelerated the pace of development and even accelerated the rate of change. (A nod to the literature spanning Thomas Kuhn to Ray Kurzweil and thousands of other authors who have charted the transformations.) My experience in this space is more applied and personal.

I began with IBM punch cards, usually waiting hours for my job to run--often at 3 AM because that's when the computer science students finally went home and mere students in the humanities could get time on the mainframe. My frustration level bounced from irritated to furious during these sessions. Irritated because it was 3 AM and furious with myself when a misplaced comma in line 13 caused a syntax error, stopping the job. The fury was because it usually took 45 minutes to get back the cards and a green-bar printout showing me the error of my ways. Then another 45 minutes to submit the cards and see the result.

I took several such courses across my undergraduate and graduate studies, never really mastering any of the arcane science. I vowed not to touch a computer again for the rest of my working life as a historian. I was confident that my oath would stand since I used pen and paper, or at most a typewriter. Life then took a turn and things did not follow my plan.

Within three years of finishing my dissertation, I owned a microcomputer and was using it to develop a fund accounting system for the historical society where I was working. I backed into IT by working at a non-profit that only had six employees, so everyone had to wear multiple hats. Because I had some coursework in programming languages (and was the only one in the office who had), I began helping out with the membership database. That turned into creating the accounting system. That turned into developing an archival materials database. That turned into creating a database index to photographic archives. I found that I had become conversant in two languages not spoken by human beings but very useful for computers. In fact, I had come to see the fun and the potential of these technologies.

How does a Luddite find fun in technology? When I could see it work. The microcomputers and then the PCs that I worked on allowed me to find the issues in my programs within seconds (I've never been a great coder) and correct them. Each loop of coding, compiling, and execution supported my learning curve because I got better with each cycle. The difference between 45 minutes then 45 seconds and then 4.5 seconds from conception to execution helped me build my expertise.

Another phenomenon was taking place as I moved from the life of a scholar to that of a computer programmer. I shifted from machines that recorded data on floppy disks to magnetic hard disks to optical discs. All within a few years. The change in storage size and speed of response was exponential. I also moved from paper punch cards, to green monitors, to amber monitors, to color monitors over a few years. At one point, I wondered why anyone would want a color monitor. I saw the reason in a magazine advertisement that filled the page with random text but showed the message in red letters. Color let you de-clutter the data and pick out the message.

It seemed that I needed to replace my perfectly functional set of equipment with a new generation every three years or so. I was learning the paradox that information technology is a strategic asset that one throws away on a regular basis. That seems wasteful, but is not when properly considered. The changes in processing power (famously expressed by Moore's Law), size of storage, and resolution of display--just to name three--have kept growing at rates from fast to very fast. The first magnetic hard drive I saw was the size of a modern washing machine, cost $250,000, and held 10 MB. Today, I can buy a 64 GB SD card for $120. That's a 6,400-fold increase in capacity--and a cost per megabyte more than ten million times lower--in a package that can fit under a postage stamp. (Remember letters?) On another plane, my smartphone has more than 300 pixels per inch--twice that of previous versions. Without a microscope, I cannot see the pixels. The increase in power, capacity, and resolution has created a new set of technologies every 2 to 3 years. I had to upgrade just to stay in place.
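Using the figures above, the arithmetic works out as follows:

```python
# Figures from the text: a 10 MB drive for $250,000 versus a
# 64 GB (64,000 MB) SD card for $120.
old_mb, old_cost = 10, 250_000
new_mb, new_cost = 64_000, 120

capacity_ratio = new_mb / old_mb                             # 6,400x more storage
cost_per_mb_ratio = (old_cost / old_mb) / (new_cost / new_mb)
print(capacity_ratio, round(cost_per_mb_ratio))  # → 6400.0 13333333
```

So the capacity grew 6,400-fold while the cost per megabyte fell by roughly seven orders of magnitude.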

The issue in play was not a technological arms race. It was remaining conversant with the rest of the field. If my version X code was to run on version Y, I had to move to the latest platform. Hence I became adept at relearning a language that had changed, or even learning a new-to-me language. Everything I knew about IT changed every five years or so, requiring me to constantly study, adapt, and transform my approach. Finally, the new capabilities of the current version always outstripped the earlier ones by at least an order of magnitude. But there is more to say.

The changes transformed our approaches when we networked computers into LANs and WANs, then again when we connected to the Internet. The capabilities of the Internet are still rapidly developing as we come to understand crowd computing (for good and ill) that brings millions of people together to contribute to a shared experience. The rise of these social spaces challenges the old notions of relationships and the definition of society. They seem to spring at us from nothing and become household terms faster than a baby learns to walk.

What does this mean to those who have invested in previous efforts in our field of biosurveillance? It means that old systems must be replaced by new. Sunk cost is an anchor dooming unchanging approaches to irrelevance. History is speeding up. One must change faster just to keep pace. Lewis Carroll's Red Queen was a prophet.


Seeing Around Corners

When I was in high school, we did a classic experiment in biology class. We were given test tubes, sugar, and yeast. We put water, the sugar, and the yeast in each tube and mixed them up. We capped each tube and let them rest, returning each day to take a small drop of the liquid. We then recapped the tube and returned it to the rack. We put the drop on a microscope slide and counted the number of yeast cells across the microscope's field of vision. The idea was to take a consistent sample each time and count the cells in that sample. We kept records of the number of yeast cells each day over a week.

Of course, the yeast cell population zoomed for the next few days. As we graphed the counts (on paper in such primitive times), we charted the increase in the number of cells in the sample and were able to infer the growth of the whole yeasty population. At some point, I don't remember exactly when, the number of cells leveled off and began a stair-step decline. The path on our chart resembled a ball bouncing down a flight of stairs, with a rise in number followed by a greater fall before another "bounce" followed by another fall. The yeast population was clearly headed for extinction as it used up the resources of food and oxygen in the closed system. Needless to say, the yeast was also producing carbon dioxide that contributed to its decline. The population had grown to the carrying capacity of the environment, but because it was a closed environment with no replenishment of food or oxygen, the yeast ran out of resources and its decline was accelerated by its own waste products.
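The shape of that curve can be reproduced with a toy model: logistic growth toward a carrying capacity, plus an accumulating waste term that eventually drives the decline. All the parameter values here are invented for illustration; this is not the original experiment's data:

```python
def yeast_population(days=20, p0=10.0, growth=1.8,
                     capacity=10_000.0, waste_toxicity=0.3):
    """Toy model of yeast in a sealed tube: logistic growth plus an
    accumulating waste term that eventually drives the decline.
    All parameter values are invented for illustration."""
    pop, waste, history = p0, 0.0, []
    for _ in range(days):
        pop += growth * pop * (1 - pop / capacity)  # logistic growth
        pop -= waste_toxicity * waste               # waste kills cells
        pop = max(pop, 0.0)
        waste += 0.1 * pop                          # waste accumulates
        history.append(pop)
    return history

h = yeast_population()
print(f"peak {max(h):.0f} on day {h.index(max(h)) + 1}, final {h[-1]:.0f}")
```

Run it and the population booms, overshoots, and then bumps downward as waste builds up--the same bouncing-ball descent we drew on graph paper.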

As you can tell, this lesson has stayed with me for more than 40 years. At that time and since, I have viewed it as a way to consider our potential path as a species. When I was even younger, in grade school, we learned that there were 3 billion people on our planet. We are now closing in on 7 billion as I write this with predictions of up to 10 billion by the middle of this century. The question we face is, what is the carrying capacity of our planet for our species? It's not a simple question to answer as we have pluses and minuses. For example, on the plus side, our technologies have enabled us to grow more food than ever on the same acreage. On the minus side, we have limits on some resources needed to sustain us--such as fresh water. In some ways we pit our technologies against the finite resources of our earth to see if we can provide for our needs.

The above is a roundabout way to say that we are now at a point in our journey where we need to start looking around corners to see threats to our health. Some may be visible but disputed, such as global climate change. Others may be invisible and undetected, such as zoonoses, where infectious diseases move into the human population from non-human animal populations. As we extract resources from tropical forests and bring those areas into the global economy, we sometimes transport infectious diseases along with the plants, animals, and minerals we extract. Because most of our species has never seen those pathogens before, we have no immunity to them, and they pose significant threats to our health.

In the natural order, we would develop those immunities only after great pain and loss on a societal scale. The Bubonic Plague of the 14th century decimated Europe, but left a residual genetic trait that persisted in those who survived the pestilence and passed it to their children. It seems to confer greater immunity to plague on those who possess it. Scientists have found that trait across European descendants, but not in modern non-European populations. However, the cost in suffering was immense. It took 150 years for Europe's population to recover.

As a member of the human family, I'd prefer to avoid such suffering. What we need is a societal immune system for our species.


Animals as Sentinels

Can animals help us detect threats to human health? Dr. Larry Glickman has convinced me that they can be part of the system to detect and understand threats to humans. (If I make any errors or misrepresentations with the following, the fault is mine.)

Everyone has heard of the practice of taking a canary down into a coal mine to check for toxic gases. Larry referred to that practice in a presentation sponsored by the Triangle Global Health Consortium One Health Collaborative. He noted that canaries have been used as sentinels for detecting harmful levels of carbon monoxide and methane in coal mines since 1918. As late as 1995, canaries were used in Japan to detect harmful gases in environmental disasters.

Why canaries? It has to do with their greater sensitivity to noxious gases compared to human beings. Why other animals? That has as much to do with our lifestyles as the animals' sensitivity to health threats. We live with lots of animals as companions. More than a third of American households have dogs and almost that proportion have cats according to the AVMA. With over 150 million dogs and cats literally living with us, we should be looking to that population as health sentinels.

Larry mentioned several instances where animal populations are useful in understanding threats to human health. Ticks bring a number of diseases into the human sphere including Rocky Mountain Spotted Fever and Lyme Disease. Larry described techniques for using dogs to assess the tick population to develop advance warning of Lyme Disease.

One of his most compelling explorations was using dogs and cats as sentinels in the aftermath of an unintentional release of propyl mercaptan from a waste-processing facility in Fairburn, Georgia. The concern was that human beings might ascribe unrelated medical issues to the disaster. Larry led a team that sought to find a better way to seek the truth by looking at the dogs and cats in the area of the disaster. He used veterinary medical information from a chain of clinics that used the same electronic medical records to look for symptoms that would be produced by the chemical released. In fact, he and his colleagues found that the release "showed no conclusive and consistent evidence of adverse health effects." No smoking gun, but at least we had an independent and verifiable way to get to the truth.

Can we use animal health data to look for ongoing threats to human health? You bet. More on that topic to come.


Where Should Our Brains Be? Final Chapter: Electrons

This is the final installment in this thread. (Whew!)

In his presentation, Christensen explored the domain of expertise. He used the example of the development of fiber optic cable materials. A few decades ago fiber optic materials science was not well understood, and the technology to fashion new molecules into new fiber strands that could carry light signals was held by only the most capital-intensive companies in that domain. Today, the molecular science is better and more widely understood, with thousands of people involved in the development of new types of fiber optic cabling using desktop-sized equipment. The science in that domain moved through three phases: from an intuitive process requiring high levels of expertise, experience, and very expensive capital equipment; to an intermediate phase based on evidence about how materials would react to the various operations involved in synthesis; to a final state of rules that guide both our understanding of the underlying physics and mathematics and their application by sophisticated systems that apply those rules in fabrication. Quite a journey, and one that helps explain the accelerating pace of change in technology.

I would add that another force is at play as well. It's the pathway that successful technology takes--successful in the fact of its widespread adoption. All new technologies--whether interesting molecules or new machines--at first don't work well, require significant expertise to use, and cost a lot of money. If they are successful, they become reliable and effective, can be used by non-experts, and get cheaper. If they are digital, they can even become free. Chris Anderson explored that in Free: The Future of a Radical Price: The Economics of Abundance and Why Zero Pricing Is Changing the Face of Business (a book that was offered for free in digital form. Really!).

In my mind the real transition we are making is to use electrons to assist our work rather than molecules. If a process can be based on unique arrangements of electrons, then it has lower costs and higher effectiveness than its older molecule-based method. In English: medical records are potentially cheaper and more powerful if they are in electronic systems rather than paper files. Setting aside the transition question (a deft dodge), electronic health records will allow the physician treating you to see your medical history without asking you about it. Every treatment, medication, and reading--potentially back to birth. But there's more.... Being able to process millions of records of people with similar characteristics or medical issues will allow physicians to become much better at their jobs. Rather than drawing on the few thousand patients an individual may treat in a career, analyzing millions of patients' records across millions of procedures will enhance that provider's expertise, since she or he will be using the accumulated experience of an entire profession. It's sort of like going into the hospital for surgery. Do you want the physician who has done 10 such procedures or one who has done 4,000? (Duh!) Imagine drawing on the experience of 4,000,000 such procedures, and you get the impact of scale.


Where Should Our Brains Be? Continued...

Let's resume from our earlier post about the sociology of data visualization. 

So, are we leaving out the human element in our approach? The response is, "Of course not!" Yet, I should explain where we are in the development of our innovation. To explore that, one more digression is in order.

Monday and Tuesday last week, NC State University hosted the Emerging Issues Forum, an annual gathering of state leaders and experts on important issues of the day. This year's forum was on health care and featured Clayton Christensen. He has written a series of very influential books on innovation, leading off with The Innovator's Dilemma. His talk was an update to his thesis that innovation comes in two varieties--sustaining and disruptive. Sustaining in the sense that an organization is changing its product and service to better serve its current customers and hold or increase market share. Think of Sears during most of the 20th Century as the dominant retailer through its catalogs and retail stores in cities and towns. The other variety is disruptive innovation, which can come in several forms. One is essentially an intruder that offers a better value proposition to an underserved part of the market. Think here of WalMart and its rise in the late 20th Century as it moved into the towns and cities from rural areas, featuring substantially lower prices. Eventually WalMart eclipsed Sears as the dominant retailer in the United States. But that is not the only type of disruptive innovation. There is another variant in roughly the same domain--eBay. It is also a reseller, but one made up of people not previously engaged in such commerce at a global scale. eBay took the swap meet and put it online, bringing retailing to the masses and creating a global market where none existed before.

Christensen suggested that health care is ripe for disruptive innovation, particularly since its costs keep rising faster than economic growth. He suggested that there were two ways to look at that. The first is to recognize that systems tend to centralize services over time. In his narrative, we once had the solution come to us when a physician made a house call. (I still remember Dr. Ball coming to my house when we all had chicken pox.) The advent of diagnostic and treatment technology started forcing the patient to go to the physician's office or even the pinnacle of medical infrastructure--the hospital. No wonder healthcare spending rose as the costs of infrastructure (both equipment and expertise) grew in scope. Despite these changes and the centralization of care, he noted the evolving character of emerging technologies, such as handheld ultrasound machines that primary care providers can use and that do well enough for everyday medical issues. Rather than pay for the truck-sized MRI and PET scanners, these devices can do some of that job in the physician's office. For the equipment manufacturers, that opened a new market that the big scanning companies had overlooked. Therefore, he thought that such developments might foster a countercyclical decentralization of healthcare, with the big caveat that all other things remain equal.

(More to come!)


Where Should Our Brains Be?

A colleague sent me an interesting and thought-provoking message a few days ago. It challenged a characteristic of the paradigm we have been following in developing the system of NCB-Prepared. Our initial approach is to access data across a number of domains (clinical, emergency, food safety, weather, etc.) and look for patterns that may be emerging threats to human health. The emphasis is on the analytics that can find the signal in all the data noise. It is necessarily a data- and technology-heavy approach.
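To make the data-heavy approach concrete, here is a minimal sketch of the kind of statistical screening such a system might run over a single stream, say daily EMS call counts for one syndrome. Everything here is illustrative: the function name, the rolling-window baseline, and the three-sigma threshold are my assumptions, not NCB-Prepared's actual analytics.

```python
from statistics import mean, stdev

def detect_anomalies(daily_counts, window=7, threshold=3.0):
    """Flag days whose count exceeds a rolling baseline by more than
    `threshold` standard deviations. Returns the indices of flagged days."""
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu = mean(baseline)
        sigma = stdev(baseline) or 1.0  # guard against a zero-variance window
        if (daily_counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A stable series with one sudden spike on day 10
counts = [12, 11, 13, 12, 14, 12, 13, 11, 12, 13, 40, 12, 13]
print(detect_anomalies(counts))  # [10]
```

The real work, of course, is in everything this toy omits: multiple data domains, geography, free text, and the judgment about whether a statistical blip means anything for human health.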

The alternative he suggested involves creating and leveraging social networks of experts to understand phenomena. Although he did not reference it, he seemed to move in the same pathway as James Surowiecki explored in The Wisdom of Crowds. The notion he was proposing was connecting a crowd of experts in an online community to allow them to observe, comment, and learn from each other to address issues of cognition (essentially figure out what's going on) together. The connections across organizational borders would reduce the effects of siloed information and expertise that tend to crop up. Across any effective organization there is an interplay between the formal structures where authority and responsibility reside and the informal network that often facilitates getting the work done. As he put it, developing such an approach "implies a careful consideration of how you would like to sociologically engineer your network."

That statement resonated with me as I recalled a rich conversation I had about visual analytics with a researcher at Purdue University. He made the point that presenting data as visual information only expresses one end of the model. The other involves the psychology and sociology of the person engaging with that visualization to understand it and potentially take action. He advocated taking a holistic approach to the experience of data visualization.

Together, these ideas prompted a line of thought that I'll continue in future posts. Stay tuned.


Quality, Data and Otherwise

As our enterprise incorporates new data sets to help us detect emerging threats to human health, we encounter a set of issues in accessing an organization's data. One of them is usually an unspoken reservation that an organization holds when it first works with us. Put simply, everyone's data is a mess--missing values, entry errors, "idiosyncratic" spelling, and cryptic abbreviations, among other issues.

It's sort of like inviting someone to view your bedroom closet. If it's like mine, there are shoes piled in a corner, a top shelf with hats and boots that are rarely used, sweaters and sweatshirts mixed up on other shelves, a rack of ties of ascending ugliness, and shirts, slacks, and jackets in no particular order. Everything is there and I can find it, but a stranger would need time to figure out where everything is and more time to fit things together into useful outfits. Finally, there are vestigial items. (I have never been willing to get rid of the suit I wore on my wedding day even though it has not fit in decades and is wildly out of fashion. What lapels!)

We have been exploring a massive data set that encompasses our entire state with millions of records and includes both normalized data and free text. An expert team from our SAS partnership has investigated the quality issues across the data and is helping develop categorical extractions from the free text so we can apply our analytics model to the data to look for anomalies. Those anomalies may be evidence of issues of concern to human health, but we apply both analytics and subject matter expertise to make that determination. The results are signals that can guide public officials' actions and understanding of an incident.
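The first step in that quality investigation is simply profiling each column to see how messy it is. As a rough illustration (the function name and the checks are hypothetical, not our partners' actual tooling), here is how one might surface missing values and "idiosyncratic" spellings--raw entries that collide once case and whitespace are folded away:

```python
def profile_column(values):
    """Summarize a column's mess: missing values and likely spelling variants.
    Raw entries that collide after case/whitespace folding are probably the
    same value typed differently."""
    missing = 0
    raw_forms = {}  # folded form -> set of raw spellings seen
    for v in values:
        text = "" if v is None else str(v).strip()
        if not text:
            missing += 1
            continue
        raw_forms.setdefault(text.lower(), set()).add(text)
    variants = {k: sorted(forms) for k, forms in raw_forms.items()
                if len(forms) > 1}
    return {"missing": missing, "spelling_variants": variants}

col = ["Raleigh", "raleigh ", None, "RALEIGH", "Durham", ""]
report = profile_column(col)
print(report["missing"])            # 2
print(report["spelling_variants"])  # {'raleigh': ['RALEIGH', 'Raleigh', 'raleigh']}
```

Checks like these are the closet-cleaning step: they don't fix anything by themselves, but they tell you which shelves need attention before any analytics can run.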

The process of developing the methods to automatically identify those signals involves sophisticated mathematics and domain expertise to develop rules to apply to the data. The rules allow the system to infer meaning from the data, but meaning tempered by a number of factors, such as the characteristics of the data, the number of data points, and connections to similar data that may add meaning and promote an anomaly to a signal. (Sort of like enlisting expert help in choosing the right tie, shirt, and jacket combination for those of us with aesthetic challenges.)
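The shape of such a rule can be sketched in a few lines. To be clear, the thresholds and field names below are invented for illustration; the actual rules involve far more mathematics and domain judgment than this:

```python
def promote_to_signal(anomaly, corroborating_sources,
                      min_points=10, min_sources=2):
    """Decide whether an anomaly becomes a signal, tempering it by data
    volume and by corroboration from related data streams.
    All thresholds here are illustrative, not a real system's rules."""
    if anomaly["n_points"] < min_points:
        return False  # too few data points to trust the pattern
    if len(corroborating_sources) < min_sources:
        return False  # no supporting evidence from related data
    return True

ems_spike = {"n_points": 37, "region": "Wake County", "syndrome": "GI"}
print(promote_to_signal(ems_spike, ["ED visits", "school absenteeism"]))  # True
print(promote_to_signal(ems_spike, []))                                   # False
```

The point of the sketch is the structure: an anomaly is necessary but not sufficient, and it is the corroborating context that earns it promotion to a signal worth a public official's attention.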

What has impressed me in the process of understanding the data is the variety of experts we have involved. Their focus on the task of developing these rules from the data available exemplifies a major form of creativity in our project. While grounded in a variety of sciences, it is also an art form, and these are its masters.


Food, Safety, and Moral Behavior

A colleague of mine sent me a link to an article in Foreign Affairs by Evan Fraser and Andrew Rimas titled, "The Psychology of Food Riots: When Do Price Spikes Lead to Unrest?" In citing the article, he challenged us to try to understand the rapid emergence of such popular movements because their causes are predictable. The article's authors look at not just the physical and economic realities of food security, but the moral dimension as well. The perception of merchants' efforts to take advantage of fluctuations in market price and availability of commodities poses a moral issue from the perspective of those who are going hungry. It's not just the fact of the shortage of food that encourages people to take to the streets, but the perception that they have been morally wronged by these merchants in the domain of food. The authors drew from a variety of historical examples to illustrate their thesis and made a strong case about the moral imperative that we all share.

The question my friend posed concerns the development of our technological system and its use. The point he made is that just understanding the physical properties underlying an alert in our context is not sufficient for understanding the fuller meaning of what is going on. Just as a sense of moral outrage provides the catalyst for food riots and civil unrest, we need to be aware of the context of the signals we detect that are outside the frame of our data. Put in a more folksy way, when considering information technology, the most important distance is the space between the screen and the back of the chair. That's the place where we need to focus our thinking and consider how to help bridge that gap. If we can do that, then we will be successful.


Food Safety

I had a great encounter with the North Carolina Food Safety and Defense Task Force in Raleigh today. They are an eclectic group of people who work to monitor the food supply chain and try to spot threats to human health from what is in the food we eat. They come from government, private companies, universities, and other interested groups. The wide range of people involved mirrors the number of links in the food chain operating globally every day. With 12,000-mile food supply lines, there are myriad opportunities for food to pick up contaminants or other agents that can make us sick.

The group was gracious in allowing me to brief them on our project. I mentioned that our system is now live and has started looking at emergency medical services data with our analytics. They peppered me with questions about the security of the data feeds, how we combine different types of data, whom we serve with our information, how we are funded, whether we plan on other business models, and so on. They were a very receptive audience both during and after the meeting. As I headed back to my office, I even got a call from a state agency that had heard about our project from the meeting and wanted to talk about participating. Such encounters are an emotional boost since these skilled professionals see the value we bring in biosurveillance.