Tuesday, October 28, 2014

The Data Security Challenge



The field of Information Security is a broad one, encompassing fourteen domains according to the ISO/IEC 27002:2013 standard.  But the focus of Information Security these days has become cyber security, and understandably so, given the number and frequency of cyber-attacks being experienced.  However, we shouldn’t forget that information is created from the aggregation and analysis of items of data.  So, ultimately, the core objective of the ISO standards is to secure data. 

There are basically two types of data within the scope of the standards:  business data and system data.  Business data includes customer data, product data, product and marketing strategies, and intellectual property.  But it also includes data regarding employees, legal matters (e.g. contracts or lawsuits), and compliance with any applicable regulations.  The loss of confidentiality of this data would seriously impact the ability of a business to operate and jeopardize its competitive standing. 

In contrast, system data describes the computer technology, both applications and infrastructure, that enables a business.  These technology components collect, process and store business data.  But they also provide operating capabilities that deliver products to customers and enable collaboration with business partners.  The loss of confidentiality of system data would enable malicious parties to shut down business operations, and/or locate and steal critical business data.  Such events would be disastrous and would threaten the ability of a business to continue operating.  So, we can see that securing both types of data is essential.

In the midst of ever-increasing cyber-attacks, there is a growing concern for data privacy.  I attended the Strata + Hadoop World conference (http://strataconf.com/stratany2014) in New York City earlier this month, and much of the Security Track was devoted to discussion of data privacy.  Government regulations such as the Gramm-Leach-Bliley Act of 1999 (GLBA) and the Health Insurance Portability and Accountability Act of 1996 (HIPAA) mandate that entities holding personally identifiable information (in the case of GLBA) or personal health information (in the case of HIPAA) ensure protection of that data.
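To make the kind of protection these mandates contemplate a bit more concrete, here is a minimal sketch of field-level pseudonymization: sensitive identifiers are replaced with salted one-way digests so records can still be joined and analyzed without exposing the raw values.  This is illustrative only, not a compliance implementation; the field names and the salt value are hypothetical.

```python
# Illustrative sketch: pseudonymize sensitive fields with a salted hash
# so analytics can proceed without exposing raw identifiers.
# Field names and salt are hypothetical, not drawn from any standard.
import hashlib

def pseudonymize(record, sensitive_fields, salt):
    """Return a copy of `record` with each sensitive field replaced
    by a truncated, salted SHA-256 digest (a stable token)."""
    out = dict(record)
    for field in sensitive_fields:
        if field in out and out[field] is not None:
            digest = hashlib.sha256((salt + str(out[field])).encode("utf-8"))
            out[field] = digest.hexdigest()[:16]  # truncated token
    return out

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "flu"}
safe = pseudonymize(patient, ["name", "ssn"], salt="example-salt")
```

Because the same input and salt always produce the same token, pseudonymized records from different sources can still be linked; a real deployment would manage the salt as a protected secret, since anyone holding it could re-derive tokens from guessed identifiers.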
 
But most people today view the list of regulated data items as a slim representation of the data they consider personal.  We live in a world where large amounts of data about us are collected daily: the products we buy and from whom, the foods we like, the restaurants we patronize, the political party we support, the list of our friends and family and their contact information; and on and on.  The exact location of where an individual is now, and where he or she has been all day, is information that many people consider private and not to be available to anyone with whom they have not explicitly shared it.  And when we place our photos in “The Cloud”, our expectation is that no one will see them other than those specifically given permission to do so.  And, by the way, software that can recognize my face in other people’s photos and create a link between us is not a particularly desirable thing. 


As you can see, individuals’ expectations of privacy greatly complicate the data privacy challenge.  Over the next several months I will explore the data privacy challenge and discuss approaches to risk mitigation and control.

Thursday, October 9, 2014

Information Security:  What’s Old Is New Again

Cyber-attacks with accompanying data breaches have become headline news with increasing frequency.  These reports have produced a pervasive state of panic among corporations, especially those in financial services, as well as across the general public.  The attacks are occurring more frequently and becoming more costly, in dollars and in reputation, for the targeted companies.  Indications are that the situation will only worsen over time.

Why, one might ask, have the attackers been so successful of late?  Have the attackers developed some advanced methods requiring defensive capabilities that have yet to be discovered?  Are we doomed to have our valuable information pillaged and plundered by criminals?  I have read about “Heartbleed” and the Target breach, as well as the JP Morgan and Home Depot breaches, and discovered that no new security tools or methods would have been required to prevent them.  The methods to prevent and defend against cyber-attacks are known, and the tools are available and in many cases already installed.  So, what is the problem?  Jamie Dimon of JP Morgan Chase hit the nail on the head when he indicated that JP Morgan plans to spend $250 million on digital security annually, but had been losing many of its security staff to other banks over the last year, with others expected to leave soon (as reported in the New York Times DealBook, October 2, 2014).  Although his statement pertains to JP Morgan in particular, it is indicative of a general problem with how companies provide information security.  Information Security within corporations has typically been understaffed and underfunded. 

Historically it has been extremely difficult for Information Security departments to demonstrate the need and urgency of allocating funds and hiring sufficient staff to fully support the function.  It’s been a bit like convincing someone to buy an insurance policy to protect against some potential event that may never occur.  Businesses will opt to put their money toward the things they know they can accomplish, rather than on things that are doubtful.  So, when it’s time to tighten the belt, the Information Security budget is a prime target, and when it’s deemed time to down-size or right-size, Information Security staff are likely the first to go.

Two events in recent years have begun to affect this corporate stance.  First, regulations now require corporations to notify customers when they’ve experienced a data breach.  Second, the volume and frequency of successful attacks is on the rise.  Together, these events have forced corporations into an unwanted limelight, clouding their reputations, costing them customers and inviting the scrutiny of government regulators.  I believe this situation will drive a change in perspective toward the need and urgency of information security.

We have come to an unfortunate state of affairs, but it may be the beginning of a new day for information security; one in which adequate staff and funding are provided to enable development of repeatable processes and controls, as well as effective use of tools to build the strong defenses needed to combat the cyber attackers.

Monday, August 25, 2014

So, you want to build a new system... Why?

There’s an old saying that goes “To a carpenter with a hammer, every problem looks like a nail”.    Similarly, if you’re experiencing symptoms of illness and you decide to consult a doctor who is also a surgeon, it will come as no surprise that the doctor’s recommended remedy will likely require surgery.  It’s human nature that our approach to the problems we encounter lies within the context of the tools with which we are equipped, i.e. within the scope of our competency.  Addressing problems within that scope provides a level of comfort that we will be successful in solving the problem by exercising the skills that have been reliably successful in the past.  Indeed, we enjoy applying those skills.

Now, consider how this behavioral pattern is manifested by IT organizations. When a business organization consults its IT team for a perspective on a functional business problem, what solution is most likely to be recommended?  Because IT consists of professionals whose proficiency lies in delivering systems, their likely response is that a new system is required.  Similar to our consultation with the surgeon, the business executives will be reluctant to accept the recommendation of new system acquisition, as they’ve likely been down that road before.  They know that it will be costly in time and money, and will likely be problematic.   But, despite the fact that non-IT people have become increasingly computer savvy, their system fluency lies in the utilization of systems and not in system design.  The understanding of how and why systems behave the way they do is the domain of IT professionals.  So, the business leaders will eventually be persuaded by their IT organization to invest in a new system.  

Of course, IT will provide good reasons for why a new system is in order.  

  • The current system is antiquated.  The legitimacy of this claim is usually corroborated by the existence of a future date after which some vendor/supplier of a system component will no longer support that version of the component.  However, the truth and implications of the claim are rarely explored, as it would likely reveal less urgency and greater flexibility regarding whether or not to replace the existing system.
  • The current system can no longer be enhanced to deliver the evolving functional requirements of the business due to platform limitations.  I know of a large system, delivered by a two-year project employing a team of ten developers, which ten years after its initial production date was still fully supporting its business line and required only one developer to maintain and enhance.   In contrast, there was another system that, only one year after implementation of its initial project phase, required the scrap and re-design of its large and poorly designed database, which had made maintenance of the system untenable.  The longevity of a system, in terms of its ability to meet business functional requirements, is most often due to the quality of the design rather than to any inherent limitations of the platform or tool capabilities. 
  • A new system is needed in order to meet speed-to-market demands of the business.   Most IT projects continue to be delivered late and over budget, despite the fact that technology innovations have reduced the time required to accomplish many system development tasks.  That is because IT organizations wrongly place responsibility for speed of delivery on platforms and tools  (i.e. hardware and software), rather than where it rightfully belongs, on process.  Well-designed, efficient, repeatable and reliable process is what will assure speed-to-market.  No system can substitute for that.
  • New technology is inherently better than old technology.  Time and again I’ve heard the statement that “we’ve got to get the users off the old and onto new technology”.  The greatest expense of bringing new business products to market is the cost of IT.  That cost continues to rise astronomically, eating into product profitability.  IT does a huge disservice to the businesses it supports when it insists that they invest in new technology that delivers no gains in business value.  We would do well to consider this.  Arguably, one of the oldest and most significant tools in business is the spreadsheet.  It existed before computers, and with their advent became one of the first business tools to be automated.  Despite automation, the functionality of the spreadsheet remains fundamentally the same as ever, and it continues to be a mainstay of business productivity.  


I suggest that IT leaders become valued members of their business teams, and as such contribute ways for businesses to get the most from their IT investments rather than needlessly propagating technology.  This philosophy should be promoted throughout the IT organization so that IT may become partners in profitability.

Monday, July 28, 2014

The Productization of the Internet

The Internet should be like electricity when it comes to consumption of its services:  simple, safe, reliable, ubiquitous, inexpensive and standardized.  After all,  the Internet, like electricity, is a commodity and should be treated as such by service providers, the government and consumers.  This is the logical progression for this technology, given its role and prevalence in our society and economy.

Let me first tackle the premise that the Internet is a commodity.  Once upon a time, the Internet, like electricity, belonged to the realm of science.  The observation, manipulation and utilization of it was limited to scientific researchers who were studying its capabilities and potential.  But unlike electricity, which is a naturally occurring phenomenon, the Internet was man-made.  Scientists had to consider the properties of electricity and devise potential uses for those properties.  In contrast, from the very beginning, the Internet was conceived as a tool, the purpose of which was to facilitate dialogue and information sharing.  Yet, despite the come-from-behind position of having to find uses for it, electricity has become generally acknowledged as a commodity, whereas the Internet has not.  But, why not?

What does it take for something to become a commodity?  An essential requirement is productization.   The Internet, like electricity, has been made into a product.  A further qualification of the Internet as a commodity, according to Merriam-Webster, is that it is bought and sold, but more importantly that its “wide availability typically leads to smaller profit margins and diminishes the importance of factors (as brand name) other than price”.   Indeed, the price of Internet services has dramatically declined over the past two decades while the general availability of services has increased just as dramatically.  Increasingly, businesses including hotels, restaurants, airlines, railways and public transit systems provide Internet access free of charge to customers on their premises.  In addition, the ongoing wars between the major Internet service providers as they compete for consumers of residential service continue to exert downward pressure on prices.

There now exists the expectation that accessibility to the Internet is ubiquitous; that regardless of where we are or what we are doing, we can connect to the Internet.  In fact, pervasive use of computing devices to access services via the Internet has driven a modern paradigm that assumes such capability as a fundamental operating principle.   This model has been instantiated by government, businesses and institutions through the implementation of processes that rely upon the general accessibility of Internet services by all.  In other words, the assumption is that the Internet is a commodity. 

But an assumption of accessibility without the acknowledgement of the Internet as a commodity has left our society in a strange limbo: Internet service providers continue to compete for the dollars of those who can afford to pay for the service, while consumers who are expected to have access to the Internet but cannot afford it become disenfranchised, excluded from the new reality.  If in our new paradigm the Internet is a commodity, then we must acknowledge it and treat it as such.  Everyone must be guaranteed access to the Internet.

Another feature of a commodity is a uniform quality of service.  Back to our electricity metaphor: consumers do not experience differences in quality of service.  Everyone who has electricity has power for all devices on their property, on demand.  The wealthy do not get a better quality of electrical power than the poor.  Businesses do not compete based on whether or not they have electricity, or on the quality of the electricity in their physical plants.  There is a single quality standard that everyone receives.  Returning to our discussion of the Internet, there should be no distinction such as “Fast Internet”.  Everyone should receive the same standard of service, and it should be the fastest that the providers can produce. 

In order to guarantee universal accessibility and standard quality of service, government must take the responsibility to address delivery of Internet services as it does other commodities.  It must ensure accessibility, quality and affordability for all consumers.