Providing Healthcare Intelligence Across Massive Data Sets

Roughly eight months ago, we performed a comprehensive upgrade on our systems and infrastructure, consolidating servers, adding more horsepower, and significantly expanding our storage infrastructure to keep up with our fast-growing demands.

How much data are we talking about? When you have millions of patients, thousands of providers, and millions of transactions flowing through your infrastructure, it can add up pretty quickly. Currently, we employ roughly 20TB of storage for each of our redundant production environments, and our data storage needs are growing at a rate of about 50GB per week.
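To put those numbers in perspective, here's a back-of-the-envelope projection using the figures above. It assumes growth stays linear at the current rate, which is our simplification for illustration; the helper names are ours as well.

```python
# Storage projection from the figures in the post:
# ~20 TB per redundant production environment, growing ~50 GB per week.
# Assumes linear growth and 1 TB = 1000 GB (illustrative only).

CURRENT_TB = 20.0        # current storage per production environment
WEEKLY_GROWTH_GB = 50.0  # observed weekly growth rate

def projected_tb(weeks: float) -> float:
    """Projected storage (TB) after a given number of weeks at the current rate."""
    return CURRENT_TB + (WEEKLY_GROWTH_GB * weeks) / 1000.0

def weeks_to_reach(target_tb: float) -> float:
    """Weeks until storage reaches target_tb, extrapolating linearly."""
    return (target_tb - CURRENT_TB) * 1000.0 / WEEKLY_GROWTH_GB

annual_growth_tb = WEEKLY_GROWTH_GB * 52 / 1000.0  # 2.6 TB/year, ~13% of today's footprint
weeks_to_double = weeks_to_reach(2 * CURRENT_TB)   # 400 weeks, roughly 7.7 years
```

At that rate, each environment adds about 2.6TB per year, which is why planning for growth headroom mattered as much as raw capacity in the upgrade.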

Space isn’t the primary concern here at Sentry, however; speed and adherence to a “real-time” philosophy of data availability are what complicate matters. We value storage solutions that spread our processing load over as many drives as possible, and we employ several tiers of storage for performance reasons. Today, our typical production cluster incorporates over 90 physical hard drives, and IBM was instrumental in helping us map out a strategy that elegantly handles our current challenges while remaining flexible enough to grow with our needs for tomorrow.

Providing real-time access to extremely large data sets requires diligence, planning, and a whole lot of machines and hard drives, and here at Sentry, we’re pleased to partner with IBM to deliver our products to our customers in ways that help them better understand their data quickly and easily. If you’d like to read more about how IBM worked with us, you can read the case study here.