Fans of the Hitchhiker’s Guide to the Galaxy will recall that the “Answer to the Ultimate Question of Life, the Universe, and Everything” is 42. That’s it. No context. No meaning. 42.
As the Big Data enterprise begins to produce more “answers” it’s important that we not accept the sort of context-free results we recall from Hitchhiker. Instead, we need to compare one set of results to another to see where we stand, whether it’s loss prevention, incident rates, video analytics, or any other dimension of security. Below, we’ll explore both the sources and uses of normative data in Big Security Data.
In its simplest form, normative data consists of statistical samples of large data sets that provide answers to questions like:
- How many times does X happen per week at a commercial office building?
- How many times does X happen in retail stores vs. commercial property?
- What percentage of employees or visitors exhibit behavior Y nationwide?
- Has the percentage changed since last year?
- Are incident rates worse at certain kinds of properties?
- Are my incident rates worse than national or regional norms?
- Is there anything out of the ordinary in this week’s data?
If Big Data could provide this sort of information, wouldn’t that be a huge improvement in the way we practice security?
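As a concrete sketch of that last question, here is how "out of the ordinary" might be computed against a weekly norm. The numbers are invented for illustration, and a real system would use a richer model than a simple z-score:

```python
from statistics import mean, stdev

# Hypothetical weekly incident counts for one facility (illustrative data).
history = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13, 12, 14]
this_week = 24

# Compare this week's count to the historical norm using a z-score.
mu = mean(history)
sigma = stdev(history)
z = (this_week - mu) / sigma

# Flag anything more than three standard deviations from the norm.
if abs(z) > 3:
    print(f"Out of the ordinary: {this_week} incidents (z = {z:.1f})")
```

The same comparison works against regional or national norms once an aggregated baseline exists to supply the mean and spread.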
Earlier in my career, I worked in the healthcare informatics business. Unlike the security industry, healthcare is rich in normative data sources. They are collected by doctors and hospitals and public agencies, reported to states and quality boards, and analyzed extensively by for-profit companies trying to give their clients an edge.
For almost any situation, a consumer, provider, or insurance company can compare performance and cost against known averages that are sliced and diced in myriad ways. This allows all stakeholders to have more productive conversations about the “facts on the ground” and how they compare to current best practices, historical performance, comparable stakeholders, or any other measure deemed relevant.
Other real-market examples of norms that help improve overall industry performance include: airline on-time performance statistics; automobile quality ratings; manufacturing defect rates; consumer product safety ratings; financial services performance; and the list goes on.
Unfortunately, data in the security industry is largely fragmented and not available for analysis outside of a single enterprise. This makes any attempt at standardized norms or comparative evaluation a rather parochial exercise. Such compartmentalization of data is largely a byproduct of the stovepipe system architectures that dominate our software vendors, and the absence of any regulatory reporting requirement to draw the data out.
Cloud computing is beginning to surmount the challenge of stovepipes, now that SaaS vendors are able to anonymously aggregate data for the benefit of their entire customer base. Look in the fine print of almost any SaaS agreement and you’ll likely find one or more terms indicating your consent to anonymous data aggregation. And that’s the starting point for deriving valuable information for the industry as a whole.
No one vendor will ever hold all the data, but that doesn’t mean individual SaaS services can’t still provide enormous benefit through Big Data offerings. In the healthcare industry, it was often “valuable enough” for a hospital to compare itself to just a subset of other hospitals. That’s because a random sample of part of a group tends to exhibit the same statistical properties as the whole group, or at least comes close enough, in many cases, to be useful for performance improvement.
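That statistical claim is easy to demonstrate. Here’s a quick sketch with synthetic data (the numbers below are made up, not drawn from any real deployment):

```python
import random
from statistics import mean

random.seed(42)

# Synthetic "whole group": daily door-use counts for 10,000 doors.
population = [random.gauss(48, 10) for _ in range(10_000)]

# A random sample of just 5% of the group...
sample = random.sample(population, 500)

# ...tends to exhibit nearly the same average as the whole group.
print(f"population mean: {mean(population):.1f}")
print(f"sample mean:     {mean(sample):.1f}")
```

The two means land within a fraction of a point of each other, which is why a subset of customers can still yield a useful benchmark.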
And the answer you’ve been waiting for? That answer is actually 48, not 42. At least that’s what 3 million of our anonymous users tell us about how often the typical door is used each day.
-Steve Van Till
Assuming your organization has formulated a strategy and set goals for what you want to achieve with Big Data, there are several paths toward implementation.
To my knowledge, there are no on-premises physical security systems with a Big Data solution built into their core deployments. That’s because Big Data is a relatively new technology, and no one has yet included it in their feature set. It’s also because the technology platforms used for Big Data and those used for security are very different. Finally, few security systems have been set up to marshal all the necessary data into one place, where it can be usefully analyzed with Big Data techniques. In any case, there are three broad approaches worth considering for your Big Data project.
One way to evaluate options is to consider where they fall on the build vs. buy continuum. If your company has deep IT resources, perhaps with other Big Data projects already in place, it may be more attractive to graft your security data strategy onto the tools and talents already at work in other departments. If you have few IT resources and always look to outside vendors for solutions, you will approach this more as a “shopping” exercise than a build-out.
The “roll your own” option begins with exporting data from your current systems and aggregating it onto a Big Data platform where you can perform subsequent analysis. This is known as ETL, or Extract, Transform, and Load. You’ll need to do this because the typical security database platform will not support Big Data operations.
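As a toy illustration of the Extract and Transform steps, suppose two stovepipe systems export CSV logs with different schemas. All field names and formats here are invented for the example:

```python
import csv
import io
from datetime import datetime

# Extract: hypothetical exports from two stovepipe systems with different schemas.
access_log = io.StringIO("door,badge,time\nLobby,1001,2013-05-01 09:02\n")
video_log = io.StringIO("camera;event;timestamp\nCam7;motion;2013-05-01T09:03:00\n")

def transform_access(row):
    # Transform: map each source schema onto one common event record.
    return {"source": "access", "where": row["door"],
            "when": datetime.strptime(row["time"], "%Y-%m-%d %H:%M")}

def transform_video(row):
    return {"source": "video", "where": row["camera"],
            "when": datetime.fromisoformat(row["timestamp"])}

events = [transform_access(r) for r in csv.DictReader(access_log)]
events += [transform_video(r) for r in csv.DictReader(video_log, delimiter=";")]

# Load: in practice the unified records would go to a Big Data store;
# here we simply hold them in memory.
print(len(events), "normalized events")
```

The hard part in real deployments is rarely the code; it’s getting every source system to export at all, and agreeing on the common record.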
After ETL, the difficult task of programming one of the many Big Data technologies to perform your particular analysis will begin. For this, you’ll want to have access to someone called a “data scientist” in addition to your software developers.
If building IT solutions from the ground up isn’t for you, a second option is to transfer your data to an online Big Data solutions provider and work with them to extract the value you seek. Alongside the industry stalwarts, there are now many start-ups operating in this arena.
This approach enables you to avoid both ramp-up and capital expenses. The learning curve for Big Data technologies can be significant, depending on your goals, so you may not want to be burdened with the expense or time for that process to play out. Similarly, you may not wish to invest in the technology up front, and cloud solutions offer the same flattened expense profile as traditional SaaS business applications.
One disadvantage, given that this is a new field, is vendor longevity. Pick the wrong vendor and you may find yourself migrating to a new provider, and that’s tough given there aren’t really any standards for data portability in this domain.
The last option is to wait and see which security vendors emerge with built-in solutions. This will likely occur first among enterprise systems providers, with an advantage toward cloud offerings. Cloud vendors can distribute Big Data solution costs across everyone in their customer base who chooses to use it, rather than asking you to buy a whole Big Data stack to put in next to your other servers.
The trick here is to recognize that every industry vertical will have different Big Data strategies, seeking to extract different types of value. That means there will be no one-size-fits-all offering, and you’ll do best to choose the vendor who can extensively customize your solution.
-Steve Van Till
In this blog entry, we drill down on what types of security data make up a Big Data strategy and what types of analytics might help extract value from that data.
Let’s go back to two inextricably linked phrases: “Big Data strategy” and “extract value”. In other words: no strategy, no value. Whatever your company hopes to achieve with Big Data, you need to have a strategy, or you won’t be capturing and storing the data that delivers results. More on this later.
We’re all pretty familiar with the types of data generated by physical security systems…or are we? Truth is we ignore most of it because it is too tedious or outright impossible to analyze with traditional tools.
Examples include endless access control activity logs, or millions of motion detection events from even a modest-sized video surveillance deployment. What about sensors deployed to monitor doors, windows, cabinets, or anything else that can be opened or traversed? In an enterprise of any size, we very quickly get a mountain of routine events that we never analyze for larger patterns.
Here’s where the familiar structured query approach of most report generation tools fails to reveal the bigger patterns that emerge from large data sets and better analytics. That’s because structured queries only reduce large sets of records to smaller sets of records, possibly with a few calculations thrown in. They’re not designed to detect global patterns and long-term trends. Enter Big Data tools.
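The difference is easy to see in miniature. A structured query just filters records into fewer records, while a trend analysis summarizes the whole set. The daily counts below are fabricated for illustration:

```python
# Illustrative: a year of daily event counts with a slow upward drift.
daily_counts = [100 + day // 10 for day in range(365)]

# A structured query just reduces records to fewer records:
busy_days = [c for c in daily_counts if c > 120]   # still a list of records

# A trend analysis looks at the data set as a whole:
first_half = sum(daily_counts[:180]) / 180
second_half = sum(daily_counts[180:]) / 185
drift = second_half - first_half

print(f"{len(busy_days)} busy days; average drifted by {drift:.1f} events/day")
```

The filter tells you which days were busy; only the whole-set view reveals that the baseline itself is rising.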
My enterprise has 500 locations, and I want to understand how all these facilities compare over the past five years. Our first problem is probably that the local security systems are an archipelago of pre-cloud information islands. Assuming it’s possible to aggregate that data, or work with a cloud provider who can, there are many new questions we can answer:
- How do daily security patterns compare across my facilities?
- Is traffic flow the same this year as last year?
- What signals precede an actionable security event?
- What’s the year-over-year change in visitor-to-employee ratios?
- Which facilities exhibit the most off-hours video events?
- Where are administrative privileges changed most often?
- How do the preventive maintenance signals compare?
- Which locations stand out this week? This month? Every month?
- What is the seasonal variation in security events?
- How do my large facilities differ from my small facilities?
- How do my physical security data correlate with other data sources?
The list could go on, all using discrete data types, without even discussing how video analytics at this scale can alter the value extracted from an enterprise security data repository. At this level, we simply regard video analytics as another input to other Big Data tools.
Mixing in other data sources enhances what we learn from security data. This should be part of a Big Data strategy to gain maximum business benefit from the investment. Possible sources: HR, compliance, network, certifications, sales, maintenance, shrinkage, safety, weather, and many others depending on your business and what you want Big Data to do for you.
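As a sketch of that kind of mixing, here is how one might test whether off-hours video events track monthly shrinkage at a single location. The figures are invented purely for illustration:

```python
from statistics import mean

# Hypothetical monthly figures for one retail location (illustrative only).
after_hours_events = [5, 8, 6, 12, 15, 11, 9, 14, 18, 16, 13, 20]
monthly_shrinkage = [1.1, 1.4, 1.2, 2.0, 2.6, 1.9, 1.5, 2.3, 3.0, 2.7, 2.1, 3.3]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed by hand.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(after_hours_events, monthly_shrinkage)
print(f"correlation between off-hours events and shrinkage: r = {r:.2f}")
```

A strong correlation like this one wouldn’t prove causation, but it would tell a security director exactly where to look next.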
The first thing to do is find out whether your security systems are collecting the types of data you need for your Big Data strategy. If not, work with your vendors to augment existing systems, or possibly switch vendors to establish a suitable source of data. The key question: can you marshal all relevant data into one (virtual) place so you can extract the value?
Here’s where cloud storage and cloud providers of Big Data services come in handy. They allow you to move data into the right environment and store as much as you want, without a huge initial investment in servers or software.
It’s as if the cloud and Big Data were made for each other.
-Steve Van Till
Big Data is the newest Big Buzzword, and it’s rolling across the IT landscape like the fast-paced “cloud” buzz-storm that preceded it. Given the vast amounts of unanalyzed data we collect in our industry, the tools and analytic techniques of Big Data hold the promise of extracting more business value from our security systems than ever before.
We’ve been using data for a long time, usually as relational databases that allow us to make queries and see the results in our desired visual form. Isn’t Big Data just a really large version of the database tools we’ve known for decades? There are a number of competing definitions of “true” Big Data.
There’s the “3 Vs” camp: volume, velocity, and variety. Enough of these three in any combination and you’ve got Big Data. There’s no real description of the technology, and it ends up sounding a lot like Business Intelligence—advanced analytical and visualization techniques, but on a much larger scale.
Next, there’s the viewpoint that Big Data brings us the ability to understand large volumes of data at high velocity, but doing it with so-called “unstructured” data in real time. Think of gathering unfolding intelligence from random texts on social media in order to predict stock market direction. Traditional tools won’t do this, so you’ll inevitably be shopping for some new software technologies.
This second camp comes close to our third way of defining Big Data, which focuses on the technologies needed to handle extremely large data sets “too large to process using traditional techniques.” That’s vague, I know, but it drives the question to where practitioners are talking about petabytes and exabytes of data as the threshold for true Big Data.
Big Data techniques are now being used to provide insight into many scientific and business questions. For years, science has generated massive data sets in particle physics, meteorology, genomics, and many other fields. Scientists had to develop custom analytical techniques for each of these domains. Now, they can exploit the power of Big Data tools forged by cloud computing and made widely available through the open source software movement.
Businesses with access to rich data sets can use these new techniques to understand complex customer behavior patterns to improve market share. A recent MIT study found that companies using Big Data to their advantage were 5% more productive and 6% more profitable than their peers—those are impressive advantages.
Closer to home, the security industry is part of a growing trend toward adoption of Cyber Physical Systems (CPS) with widespread sensor networks that produce exponentially more data every year. This vast quantity of data is useless unless it can be meaningfully analyzed to produce business value.
And there’s the rub: you need to know which problems you want to solve. And for that, as one McKinsey columnist quips, the most important tool is…the pencil. What security problems could we solve for our customers if we had all the right data? If we start from the outcomes we want to achieve, we’ll avoid the trap of making cool charts just for the sake of cool charts.
Would understanding variations in employee behavior help identify potential insider threats? Would large enterprises allocate security dollars more efficiently if they could compare one facility against another? Could security departments improve their own system performance if they could benchmark it against their peers?
Time to sharpen our pencils.
-Steve Van Till
Last week’s entry identified three of the five things that companies stumble on when implementing a Bring Your Own Device policy – there are misconceptions about what a BYOD policy actually means, and then about how to control it once it’s introduced. Today we cover the remaining two:
4) I don’t need to worry about BYOD, we don’t allow it.
This is nearly as dangerous as assuming a BYOD policy is all you need. You must have a BYOD policy. As with internet usage or social media, even a very simple policy is better than nothing; at the very least it prevents confusion when a problem arises. Odds are someone in your organization is using a personal device to access corporate data whether it is allowed or not. You might already issue employees laptops or even smartphones, but very few people take a company laptop on vacation, or carry two phones. Often, the biggest offenders are senior executives or IT staff who forward their company phone to a personal one. There are thousands of BYOD policy templates available on the internet; if you can’t write your own, find one that fits your company and modify it.
5) I have a policy and a Mobile Device Management application, my data is secure.
In a word, no. Assuming that a well-written policy and popular MDM software will secure your data is a common misunderstanding by IT professionals and represents a serious risk to the data. Don’t get me wrong, MDM packages are a must for managing BYOD and the vendors are moving in the right direction, but it is important to recognize that our personal devices are not yet as secure as our corporate workstations. Minimum criteria we found helpful were:
• Strong Identity—Authenticate users via two-factor authentication using smart cards or equivalent
• Configure devices with a corporate profile that has remote management capability
• Explicit blacklist and whitelist capability for applications and websites
• Malicious site and malware protection
• Enforce encryption when connecting to corporate data
• Compartments for corporate and personal data
If you allow personal devices on your network, you will need to implement some kind of management software. What works best for your business is up to you. There are a few providers who are at least close to the criteria above but you will need to do your homework.
- Charles Wheeler
Bring Your Own Device, or BYOD: it’s in the news and very likely in your company. Over the past year I had to learn quite a lot about BYOD, mobility, and security. It hasn’t always been easy, and there are several common mistakes that many of us make when attempting to implement, or at least control, personal devices on our corporate networks. Based on a combination of my (somewhat limited) experience and listening to some pretty smart people, here are the five things that most of us get wrong when introducing BYOD. These are broad topics that require far more detail than I can get into here; think of this as a basic list to get you started.
1) I can’t implement BYOD, we can’t afford to buy everyone a phone!
This is probably the most common misconception about BYOD, and the question comes up during every BYOD discussion. Remember: the Y in BYOD means it’s your (personal) device and not provided by the company. But for many people, mobile devices equal BYOD, and so a “BYOD policy” is the same as a “mobile device policy”. It is a problem of definition and perception, so try this on for size when the CFO objects to your BYOD plan on the basis of device cost: “Bring Your Own Device (BYOD) is the practice of allowing personally owned devices (phones, tablets, laptops, PCs) to access privileged corporate information, networks and applications.” It might not cover every aspect of BYOD, but it should be enough to get the conversation started in the right direction.
2) I can’t implement BYOD, it will cause chaos!
This is the network admin lament and, without a good policy, it’s absolutely true. If every employee is allowed to access the network with the device du jour it will be chaos, not to mention a serious security risk. But allowing some devices doesn’t mean we have to allow every device. It is absolutely within the right of the company to put restrictions around how corporate data is accessed. This doesn’t mean an exhaustive list covering all 4,000 or so Android devices, perhaps just some basic criteria such as:
Any device accessing corporate information must comply with the following security measures:
- The device will require a username and password for access.
- The device will run an operating system and applications approved by the IT department.
- The device will not be jailbroken, rooted, hacked or otherwise illegally modified.
- The device may be remotely wiped by the IT staff if required. (More on this below.)
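In an MDM tool, criteria like these become automated checks. Here’s a hypothetical sketch of how such a policy might be evaluated in code; the field names and rules are invented for the example, not taken from any real product:

```python
# Hypothetical device profile as an MDM agent might report it.
device = {
    "has_passcode": True,
    "os_approved": True,
    "jailbroken": False,
    "remote_wipe_enrolled": True,
}

# One automated check per policy bullet above.
RULES = [
    ("passcode required", lambda d: d["has_passcode"]),
    ("approved OS", lambda d: d["os_approved"]),
    ("not jailbroken", lambda d: not d["jailbroken"]),
    ("remote wipe enrolled", lambda d: d["remote_wipe_enrolled"]),
]

failures = [name for name, check in RULES if not check(device)]
print("compliant" if not failures else f"blocked: {failures}")
```

The point is that a written policy only becomes enforceable once each clause maps to something a machine can verify.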
I encourage you to work with your employees on this issue, get a feel for what they really want and let them help with the approved devices.
3) BYOD will let employees keep data after they leave!
True, unless you are careful. The insider threat is absolutely increased by allowing employees to use their own devices, especially since these devices are often used outside of the office or corporate network. There are tools to assist with device management but these can lead down some expensive rabbit holes, eliminating any cost savings that drove the implementation of BYOD in the first place. If the goal of BYOD is a platform agnostic, mobile workforce then we cannot get bogged down in managing individual devices. Instead, we need to manage the identity of the user and the method of transmission. I am not terribly concerned if an employee accesses data from a laptop, PC or a phone, provided I can ensure that the person is identified and authorized and that the transmission method is reasonably free from interception.
- Charles Wheeler
Recently, I conducted a personal computing assessment and determined that it was time for me to refresh my Windows PC. I have several computing environments including a PC, Mac, tablet, and smart phone. I like dabbling across these different environments to keep my pulse on the various approaches to the human/machine interface.
Having moved up the Windows line to Windows 7, I was intrigued to check out what Windows 8 had to offer. My home PC was about five years old, so I could easily justify the expense for my techno-sampling and with the notion that I may have some future requirements for Windows applications.
So, I dove in and purchased a new Windows 8 machine. From the time I booted up and started navigating around the interface, I was struck by a few things. The first was that this latest OS was not tremendously different from my existing Windows experience. Sure there was a new dashboard of tiles, but behind the tiles lurked the same old familiar Windows software.
What struck me next was the realization that I was using Microsoft’s state-of-the-art OS, and assuming a five-year lifecycle like my last Wintel box, this was probably my last PC. I hope to have many years of computing ahead of me, but with the rapid pace of technology advancement, I cannot conceive of ever needing another device like this. Rather than dedicating all of my computing power and interaction to one device, my computing world is already spread among several devices and the cloud. In five years, my phone and tablet will be much more capable than the machine sitting on the desk in front of me. Yes, this would be my last PC.
That afternoon with the new laptop clearly demonstrated for me that Personal Computing, which allows people to aggregate the computing power of multiple systems and apply them in a myriad of wonderful ways to fit their specific needs, has superseded the Personal Computer. This trend certainly accounts for plunging PC sales and the troubling strategic positions of former PC giants like Intel, Microsoft, Lenovo, Hewlett Packard, and Dell.
At this point, I’m not sure what to make of my new PC. I strongly doubt that it represents a link in a chain of future PCs. It appears more to be the end of the road and it looks like Apple, Google, and Samsung will be fighting for the future market with tablets, while the legacy guys cling to shrinking sales of PCs.
This bodes well for providers of alternate computing platforms like smart phones and tablets. There is simply no doubt that more people will consume more types of applications on those platforms. It also means that cloud platforms and SaaS providers can expect to attract more users like me who prefer cloudware over software. If Windows 8 represents the state of the art in the Windows PC world, it looks to me like a dead man walking.
- John Szczygiel
Quite frequently we encounter skeptics in the security industry who have a hard time imagining delivering enterprise physical security via the cloud. These may be the same folks who say that cloud computing is not part of their life in any way.
A study by Wakefield Research, commissioned by Citrix, was highlighted in a recent CE Pro article and showed that half of the people who actually use the cloud claim never to have used it at all.
What accounts for the disparity? I believe some people simply have not yet connected the terminology, “cloud,” with the services they routinely use. Who knew that Twitter is a cloud app? Why would I have thought that my Nike+ GPS was using hosted (cloud) services? And online banking: is that a cloud thing?
Of course, all of these applications are related to cloud computing and to an entirely new way of consuming software and services over the internet. This trend is increasingly pervasive and it is unstoppable. Software capabilities and their underlying systems and data are moving to where people need them, when they need them.
Lately, I often find myself verifying that the services I’m considering for purchase are available across all my main computing domains: computer, tablet and phone. I don’t want software, I want cloudware!
My expectations have already changed, and I now completely rule out the concept of installing software, synchronizing data, and doing back-ups. I want to use my software to solve problems, not solve problems to use my software. For me, software has truly become a service along with all of the underlying infrastructure that enables it to be reliable, secure, and in a constant state of evolution.
I believe these views represent today’s cloud service consumer, and I’m not really worried about the 54% of Americans who claim they’ve never used the cloud. They don’t need to know the terminology to enjoy the cloud’s many benefits.
- John Szczygiel
In theory, low cost wins out over high cost, other things being equal, or at least that’s the predicted economic behavior of the classic rational consumer. It’s also the basis of how many of us are rewarded in the corporate world; that is, for doing things that save our companies money. Why, then, does Total Cost of Ownership (TCO) so often fall by the wayside when buyers are making choices? More pointedly, why do we see this pattern in the security industry?
For an illustrative case study, let’s look at how this works with cloud solutions, specifically Software as a Service (SaaS). I’ve been presenting security buyers with the TCO merits of the cloud for many years. Some audiences find it compelling. Some don’t. By now I’ve witnessed enough reactions to see that these responses fall into two distinct camps, which we’ll explore in a moment.
First, here are the generally recognized, core TCO benefits of cloud computing:
- Decreased energy consumption levels
- Reduced data center expenses
- Lower IT staffing costs
These benefits are universal and appear across all industries and application types. They are all virtues that result from the common infrastructure, multi-tenancy, and economies of scale at the heart of SaaS.
So, what are those buyer reactions?
The first is what I would call the “Not My Problem” point of view—one that still prevails in many corporate security departments. When presented with the TCO case, this camp’s responses are:
- Energy consumption is not part of my budget. That's corporate, it doesn't affect me.
- The data center is not my concern. They can always find room for one more computer.
- I don't pay for IT on my budget. They will just have to figure out how to make their budget work.
- My boss is used to software maintenance bills, but recurring subscription fees are something new, and I don't want to have to justify them.
In contrast, there’s what I call the “Team Player” view. This is the security group that sees the wisdom of being aligned with the company’s mission and profitability. You’ll find these groups led by the security director who aspires to be a CSO, or a CSO who wants to demonstrate strategic value to their peers. This camp’s response to the very same TCO analysis and benefits goes something like this:
- Energy savings fall to the bottom line, and that's good for the whole company. We're also trying to be green, and I can see how this aligns.
- The data center is already nearly full, and I know we're looking for ways to avoid further expansion and operating expenses. We need to support that goal.
- I know it's a corporate priority to keep IT focused on our strategic "front office" initiatives that drive sales and profitability. That's a good reason to use the cloud for my "back office" apps.
- Software maintenance agreements and upgrades are killing me. I'd love to get rid of them.
In the end, the difference between these two camps is not about whether they agree with TCO studies or not. It’s about how they view their role in the company. It’s about whether they are looking inward at the limited needs of their own department, or whether they see themselves as part of a bigger corporate team with common goals. In short, it’s about what motivates them.
So, back to that rational consumer we talked about. What’s his choice in this situation?
-Steve Van Till
The Cloud Comes to Security
The cloud computing market is exploding: worth over $100 billion in 2012, it is expected to double within the next four years. This trend is here to stay, and the physical security market is no exception. With a growing range of “Security as a Service” offerings from access control to video surveillance, recurring revenue business models have grown far beyond their origins in alarm monitoring and now pervade every aspect of our industry.
Vendors responded to this gold rush with product offerings designed to put integrators into the RMR business. They’re also offering to save end users big up-front expenses while reducing total cost of ownership.
Unfortunately, with every gold rush there are opportunists selling false, over-hyped claims. This is especially worrisome for the security industry because inaccurate cloud claims endanger the safety of our customers and their property. In this article we’ll examine a number of these claims and discuss their impact on the quality of security systems.

Real Data Center or the Integrator’s Network Closet?
You’d think that anything advertised as a “cloud” solution would be hosted at a real data center. Today, a number of security systems vying for cloud legitimacy are actually server-based products that integrators install at their own facilities and operate on behalf of customers.
This sounds fine in theory, except that few security integrators have the IT chops to pull it off. Many a “data center” in our industry turns out to be nothing more than an unsecured network closet in an industrial park office, with electrical backup from a 5 HP generator. Contrast this with a real data center that has a hardened perimeter, riot glass, 24/7 guards and staffing, biometric access control, redundant ISP connections, redundant power supplies and cooling equipment, fire suppression, earthquake proofing, network monitoring, and enough batteries and diesel generators to keep the whole thing running for days, if not weeks, in the event of a major emergency.

Why this matters:
Nothing less than a real data center can provide the high availability and data security required for security services; anything else places end users at high risk.

Substandard Hosting: Just a Fancier Network Closet?
Substandard hosting is a close cousin to the network closet. It has become so easy to “spin up a virtual machine” with an Infrastructure as a Service company that everyone is doing it and declaring “game over” for meeting the hosting requirements of a security application. Unfortunately, many end users suffer brand-name blindness in this scenario and can’t see past vendor assurances that their data is hosted at Amazon, Azure, or some other Big Company. Don’t get me wrong: these are great companies with great services. But the quality of an application service has as much to do with the way the software is written and managed as it does with where it is hosted.
For example, it does no good to be hosted at a Big Company unless you’re diversified across at least two of their data centers. A lot of Amazon Web Services customers learned that lesson the hard way during recent outages in one of their primary availability zones. Ditto for Microsoft’s European cloud customers.
Taking it a step further, it does no good to be diversified across multiple data centers unless you have real-time database replication built into your product architecture. And it does no good to have replicated data unless you also have global traffic management to immediately switch between facilities. The lesson here is that it’s the whole
solution that matters. A brand-name hosting service doesn’t guarantee quality unless the application provider has taken all the proper architectural and operational steps.

Why this matters:
High availability and top-tier hosting matter because no one knows when security violations will occur, and you cannot afford to be down when they do.

What, No Multi-tenancy?
Software multi-tenancy is the fundamental design approach that allows Software as a Service systems to operate securely and efficiently. Yet many of the systems touted as “cloud” solutions don’t have it. They are simply the same single-tenant designs as before, with a web browser tacked onto the front end. Yes, the web browser is a welcome improvement over thick clients, but customers and vendors should care about what’s behind the browser.
So, what is multi-tenancy? To quote Salesforce.com, a leading authority on the subject: “Whereas a traditional single-tenant application requires a dedicated set of resources to fulfill the needs of just one organization, a multi-tenant application can satisfy the needs of multiple tenants … using the hardware resources and staff needed to manage just a single software instance.”
Multi-tenancy matters to security integrators and end users for two reasons: economics and security.
In terms of economics, multi-tenancy allows major SaaS providers like Salesforce, Google, Netsuite, and many other familiar names to operate their services at massive scale and low cost. It does this by using a software design that enables thousands, even millions, of unrelated customers to safely share the same underlying hardware resources. While the cost savings on hardware are obvious, there are equally considerable savings in energy, maintenance, and staffing expenses. Without multi-tenancy, the expense of running applications is virtually the same as traditional IT, and the whole cost-benefit argument for cloud services collapses.
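The isolation that makes this sharing safe can be sketched in a few lines. This is a hypothetical Python illustration (the event data and tenant names are invented, not any particular vendor’s schema): every record carries a tenant identifier, and every query is scoped to exactly one tenant.

```python
# Hypothetical sketch of multi-tenant data access: every row carries a
# tenant identifier, and every query is scoped to one tenant, so many
# unrelated customers can safely share one database and one instance.

EVENTS = [
    {"tenant": "acme",   "event": "door_forced"},
    {"tenant": "acme",   "event": "badge_denied"},
    {"tenant": "globex", "event": "door_forced"},
]

def events_for(tenant):
    """Return only the rows belonging to a single tenant.

    Filtering on the tenant key at the data-access layer is the isolation
    boundary that lets unrelated customers share the same hardware.
    """
    return [row for row in EVENTS if row["tenant"] == tenant]

print(len(events_for("acme")))    # 2
print(len(events_for("globex")))  # 1
```

Because the tenant key is enforced in the data-access layer itself, unrelated customers share one application instance without ever seeing each other’s records.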
By the same token, supporting millions of customers on a single highly scalable instance can only be accomplished if the security provisions were designed into the software from the start. Here a real estate analogy is illustrative. Ever lived in a single-family home that was subdivided to support multiple renters? It doesn’t work so well. Not nearly as well as an apartment building designed from the outset to support multiple tenants. The same holds true for software.

Why this matters:
Customers must look for multi-tenancy if they expect to achieve the promised cloud savings over the long term. Without it, repurposed legacy applications also lack an adequate data security model.

The “Private Cloud Ready” Deception
Read a stack of recent sales literature and you’ll come across the terms “private cloud ready” or “suitable for private cloud deployment”. Vendors often apply such terms to security appliances and server architectures, both real and virtualized. Sounds good, but what does it actually mean? Not much.
According to the oft-cited NIST definition, a private cloud is an architecture where “the infrastructure is provisioned for exclusive use by a single organization”. This means dedicated servers, storage, network connections, and staff to take care of the whole thing. Sound familiar? It should: it’s exactly the same as the traditional security software delivery model. It might have been moved offsite to another data center, and it might have been virtualized for a little hardware efficiency, but at its core it offers the same features as the dedicated client-server model of the past several decades.
That’s why I favor the more descriptive definition that a private cloud “is a marketing term for a proprietary computing architecture that provides hosted services to a limited number of people behind a firewall”. That’s probably not what you thought you were buying when the vendor told you it’s a “private cloud ready” product.

Why this matters:
When security buyers are trying to free themselves of all the hassles of dedicated server equipment, the single-user “private cloud” fiction leaves them right where they’ve always been.

The “Hide the Server” Deception
The private cloud claim is closely related to a practice we call “Hide the Server”. This amounts to taking the end user’s current security applications and moving them offsite, where the customer no longer sees them running on their own computers.
Does moving an old application architecture to a new server 1,000 miles away give it any of the characteristics of cloud computing? Of course not. It won’t magically support thousands of end user organizations or suddenly be any faster for new users to provision. The truth is that the service provider will need to move every new customer’s computers to a new location, just like they did with yours. What do you think that will cost them? What will it cost you?
Why this matters:
As a technology that promises to lower total cost of ownership, real cloud computing must deliver savings. “Hide the server” can never do that.

The “Cloud-Based Protocols” Deception
Many old-line software systems vendors are desperate to shoehorn “cloud” into their marketing literature. You can’t blame them. If it were 100 years ago and I had to sell wagons against automobiles, you can be sure I’d find a way to use the term “horseless carriage” in my pitch.
In one of the most egregious abuses of the term, there are systems vendors who are covered by the media as cloud companies because they claim to use “cloud-based protocols”. You might as well claim to be an electric company because your products use electricity.
I applaud their PR agency for working “cloud” into their press release, but it turns out this is just a case of old-fashioned remote access.

Why this matters:
Citing “cloud-based protocols” leads users to a situation that sums up everything we’ve outlined so far: single-tenant applications that are usually hidden remotely as “private clouds” in a data center that has not been qualified or audited.

How to Recognize a Real Cloud
So, how do you recognize the real thing? Let’s go back to the impartial definitions NIST wrote several years ago:

On-Demand Self Service.
You can obtain services, or expand existing services, without talking to a human and going through a big provisioning process. This is a good litmus test for determining if an application uses multi-tenancy and a real data center. On-Demand Self Service is usually only possible with multi-tenant applications designed to serve large populations efficiently.

Measured Service.
You pay only for what you are using; say, per camera, per door, or per alarm point in the case of traditional physical security. This functions as good protection against “hide the server” and “private cloud ready” types of claims because there’s no way you can buy services from those architectures “by the drink”. Instead, you’ll see charges for the server, or a virtual machine, or storage, or a license—not something you can directly relate to the actual business of security.

Resource Pooling.
Sharing a common infrastructure across all customers for maximum economic and computing efficiency. You’ll be able to recognize resource pooling if you are logging into the same system (web address) as everyone else who uses the service. This indicates the vendor is using a true multi-tenant architecture and that you’ll get the benefits of a real cloud design.

Conclusion:
The cloud is here to stay, and it offers security buyers numerous advantages over traditional solutions—but only when it’s the real cloud.
-Steve Van Till